Once, we might have studied patterns and processes that relate to a single gene or gene family, and made inferences from these about the whole genome. High-throughput genomic technologies now allow us to generate genome-scale data sets directly. But even in this brave new world of genomics, basic genetic tools still have their place. As two papers discussed in this month's highlights show (page 243), improved versions of such tools will be vital for generating resources for global functional analyses.
In future, improvements in the way we analyse these large data sets will be as important as improvements in the way we generate them. Perhaps the most important trend at this end of the genetic research 'pipeline' has been the revolution in the way genetic data is analysed. As Mark Beaumont and Bruce Rannala discuss on page 251, a fundamental change is taking place: Bayesian statistics are starting to replace traditional methods for a wide range of purposes. Although computational complexity previously constrained their use, increases in computing power have made these approaches a convenient way to deal with data that is influenced by many interdependent variables, as is frequently the case in genetics.
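To make the contrast concrete, here is a minimal sketch in Python of the kind of inference a Bayesian analysis performs: estimating an allele frequency from a sample of chromosomes, using a uniform prior and a simple grid approximation to the posterior. The counts, the prior and the grid resolution are illustrative assumptions for this toy example, not details taken from Beaumont and Rannala's review.

```python
# Toy Bayesian inference: posterior over an allele frequency p,
# given a binomial sample of chromosomes. All numbers below are
# invented for illustration.

def posterior_grid(successes, trials, grid_size=1000):
    """Grid approximation of the posterior over p, with a uniform
    Beta(1, 1) prior and a binomial likelihood."""
    grid = [(i + 0.5) / grid_size for i in range(grid_size)]
    # Unnormalised posterior: constant prior times the likelihood.
    weights = [p ** successes * (1 - p) ** (trials - successes) for p in grid]
    total = sum(weights)
    return grid, [w / total for w in weights]

def summarise(grid, post):
    """Posterior mean and a 95% central credible interval."""
    mean = sum(p * w for p, w in zip(grid, post))
    cum, low, high = 0.0, None, None
    for p, w in zip(grid, post):
        cum += w
        if low is None and cum >= 0.025:
            low = p
        if high is None and cum >= 0.975:
            high = p
    return mean, low, high

# Toy data: 12 copies of the allele among 40 sampled chromosomes.
mean, low, high = summarise(*posterior_grid(successes=12, trials=40))
print(f"posterior mean {mean:.3f}, 95% credible interval ({low:.3f}, {high:.3f})")
```

The grid approximation here stands in for the Markov chain Monte Carlo machinery that makes realistic Bayesian analyses, with many interdependent parameters, computationally tractable.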