Tuesday 28 September 2010

Science journal editors: a taxonomy

After many years of publishing papers, I have come to recognise wide diversity among journal editors. This variation has major consequences for authors, and it is important that they recognise the creature they are dealing with, if they want their work to be published in as timely and painless a way as possible. I have therefore developed a tripartite system of classification to guide authors.



Taxonomy of the Genus Editoris



Class 1
Species in this class should be avoided at all costs.

The Returning Officer
This humble creature has a very limited brain and is unable to make decisions. It can, however, count, and it therefore uses a strategy of accumulating reviewer reports until a consensus is reached. Typically, it is risk averse, and a single negative report will lead to rejection of a paper, even if other reports are glowing. If you aren’t rejected, an initial communication from a Returning Officer will say “Please address all the comments of the reviewers in your revision”, giving no guidance about how to deal with contradictory recommendations. When you submit your revision, the Returning Officer will send it back to all the reviewers, even if only minor changes were made, leading to unnecessary delay in publication and more toil for overworked reviewers. Since the Returning Officer cannot make a decision unless there is convergence of reviewer opinions, most papers are doomed to a long process with an ultimately negative outcome.

The Automaton

This is a subspecies of Returning Officer which has no human characteristics at all.  It evolved relatively recently with the advent of web-based journal submission systems. It generates letters written in computerese and does not read communications from authors. My most recent experience of an Automaton was with Journal of Neuroscience. The letter from the editor gave a rather ambiguous message, stating that the paper was potentially acceptable, but that major revision was required, and it would need to go back to reviewers. It also included the statement: 
   Violations: -The gender of the species should be mentioned in the methods
The dictionary definition of ‘violation’ includes such phrases as “the act of violating, treating with violence, or injuring; the state of being violated. Ravishment; rape; outrage.” I decided it might be unwise to point this out to the editor, but I did explain that the “gender of species” was actually given in a table in the Methods section. A further round of review took place, and the reviews (which were very useful) were accompanied by another letter from the Automaton. It was identical to the previous letter, gave no indication that the editor had read the paper or my response to reviewers, and simply upped the ante on the violation front, as it now stated:

   Violations: -The species is not mentioned in the abstract;   
   -The gender of the species should be mentioned in the methods
In what, thankfully, proved to be the final round of revision, I put the word “gender” in the text of the Methods. I explained, though, that I was reluctant to put “human children” in the Abstract, as this would be a tautology.

The Vacillator

This is a slightly more evolved form of Returning Officer, which is capable of decision-making, but prone to fits of paralysis when confronted by conflicting information.  The hallmark of a Vacillator is that, rather than waiting for consensus between reviewers, it responds to conflicting opinions by seeking yet more opinions, so that a paper may accumulate as many as four or five reviewers.

A variant known as Vacillator statistica sometimes inhabits the environment of medical journals, where the assumption is made that neither the editor nor the researchers understand statistics, so you are asked at submission whether a statistician was consulted. My experience suggests that if you say no, then after an initial round of review, the paper goes to a statistician if it looks promising.  It would be fine if the journal employed statisticians who could give a rapid response, but in my case, a brief paper sent to Archives of Disease in Childhood sat for months with a statistical reviewer, who eventually concluded that we did indeed know how to compute an odds ratio.

The Sloth
The Sloth has powers of judgement but finds journal editing tedious, so engages with the process only intermittently. The motivation of the Sloth is often mysterious; it may have become an editor to embellish its curriculum vitae, and is then bewildered when it realises that work is involved. It is important to distinguish the true Sloth, who just can’t summon up the energy to edit a paper, from Crypto-sloths, who may have genuine reasons for tardiness; editors, after all, are beset by life events and health problems just like the rest of us. Vacillators may also be mistaken for Sloths, because of the slowness of their responses, but their level of activity in soliciting reviews is a key distinguishing feature. Even Paragons (see below) may get unfairly categorised as Sloths, as they are dependent on reviewers, who can delay the editorial process significantly. A true Paragon, however, will be pro-active in informing an author if there are unusual reasons for delays, whereas the distinguishing feature of a Sloth is that it is unresponsive to communications and blithely unconcerned about the impact of delays on authors.

Class 2
Species in class 2 pose less of a threat to an author’s career, but can nevertheless be dangerous to mental health.

The Talent Scout
This species is found in the rarefied habitats of the top high-impact journals, although it is starting to spread and may now be found in medium-impact journals that have introduced a triage process. The Talent Scout’s principal concern is whether a research finding has star quality. The species is distinguished from the others by including individuals who are not active researchers: many have a doctorate in science but have moved into science journalism. Although it can be depressing to have one’s work judged as too unsexy for publication by someone with no expertise in your field, the decision-making process is usually mercifully quick, making it possible to regroup and resubmit elsewhere. Although this means that the impact on the author is less severe than for Class 1 editors, it does have potentially worrying implications for science as a whole, because it introduces bias. For instance, it is all too easy to see why Science published a study of a computer-based intervention for language problems in children: the study was headed by a top neuroscientist, the method was innovative, and it demonstrated potential to help children with a common neurodevelopmental disorder. A study like this presses all the buttons for the Talent Scout. However, the methodology was weak and subsequent randomised controlled trials (RCTs) have been disappointing (see review). I don’t know if authors of those RCTs would have tried to publish them in Science, but my guess is that if they did, their papers would have been rejected, because it is simply much less interesting to show that something doesn’t work than to provide evidence that it does (see blog).

The Deity

The Deity is the opposite of the Vacillator: the Deity makes decisions which may strike authors as unfair or subjective, but which are absolute and irreversible. Deities do not engage in correspondence with authors, but delegate this to office staff, as I found on the one occasion when I tried to engage in debate with a Deity from PNAS. I was incensed by a reviewer report that maintained that a postdoc and I had been ‘cherrypicking’ results because we’d used an automated artefact removal procedure to remove noisy trials from a study using event-related potentials (ERPs). The reviewer clearly had no expertise in ERP methods and so did not realise that we were following standard practice, and that the idea of cherrypicking was just silly – it would be quicker to re-run the experiment than to go through the thousands of individual trials removing data we didn’t like the look of. In my letter to the editor, I explained that I did not want the paper reconsidered, but I did want an acknowledgement of the fact that I had not been fudging the data. What ensued was a tedious correspondence with a member of editorial staff, whose response was to send the paper back to the reviewer as part of an ‘appeal’ process, and then to inform me that the reviewer still didn’t like the paper. Nowhere in this process did the Deity descend from the heights to offer any comment. Indeed, I still wonder whether this Deity was really an Automaton. It showed no signs of having any sense of morality.

Another encounter with a Deity was when I sent a paper to New England Journal of Medicine. Since I thought this should have at least warranted review (novel study with important clinical and theoretical implications), I wrote to ask what the reason was for rejecting it without review. The response from editorial staff was classic Deity: they could not give me any feedback as the paper had not been sent out for review.


Class 3
Species in class 3 are typified by their attitude to the job of editor, which is neither as bean-counter, nor as gatekeeper, but as facilitator of the communication of high-quality research.

The Paragon

The Paragon reads manuscripts and treats reviewer reports as advisory rather than as votes. He or she aims to make decisions fairly, promptly and transparently. Confronted with conflicting reviewer reports, a Paragon makes an honest attempt to adjudicate between them, and explains clearly to the author what needs to be done – or why a paper has been rejected. The Paragon will listen to author complaints, but not be swayed by personal friendship or flattery. I’ve often heard authors complain about a Paragon who writes such a long decision letter that it is equivalent to a further reviewer report: I don’t see that as cause for complaint. I would sometimes do that myself when I was a journal editor (needless to say, I tried hard to be a Paragon), and I saw it as part of my job to pick up on important points that were missed by reviewers. Paragons write personal letters to authors, and send personal thanks to particularly helpful reviewers, rather than relying on computer-generated bureaucratese.

The Obsessive
The Obsessive is a Paragon that has gone over the top. Obsessives are not dangerous like class 1 and 2 editors: they typically damage themselves rather than the authors, to whom they are just irritating. They essentially take upon themselves the job of copy editor, requiring authors to make minor changes to formatting and punctuation, rather than restricting themselves to matters of content and substance. Journal publishers have got wise to the fact that they can save a lot of money by sacking all the copy editors and requiring the academic editor to do the work instead, and they realise they have hit gold if they can find a natural Obsessive to do this. Academics should be aware of this trap: their training equips them to judge the science, and they should not spend hours looking for extraneous full stops and missing italicisation. They should remember that they already do work, typically for no reward, for publishers who make a lot of money from journals, and they should demand that the publisher offers appropriate support to them and their authors. (Publishers should also employ people to assist authors with graphics – see blog.)

Summing up
The main problem for authors is that you often don’t know what species of editor you are dealing with until after you have submitted a paper. In my main field of psychology, I am impressed at how many journals do have Paragons. The APA journals are usually good, in my experience, though Obsessives do make an appearance, and I know of one case where a junior colleague’s career was seriously blighted by a mega-Sloth. It’s harder to generalise about small moderate-impact journals: many of them are overseen by a dedicated Paragon, but my impression is that you can only be a Paragon for 10 years at most. Editors who have served a longer term than this are liable to transmute into Sloths. I also publish in the fields of neuroscience and genetics, and here I’ve struck more variability, with Returning Officers, Automatons and Deities being fairly prevalent. If you want to publish in the really top journals, you have to grapple with Talent Scouts: my attempts to make my work exciting enough for them have been singularly unsuccessful, and I've given up on them, but it may encourage younger readers to know that I’ve had a happy and successful career all the same.

Note: The author was co-editor of the Journal of Child Psychology and Psychiatry from 1990 to 1993 and Chief Editor from 1994 to 1997. This year she signed up as an Academic Editor for PLOS One, in support of their Open Access publishing policy.

Wednesday 15 September 2010

Science and journalism: an uneasy alliance


“Fish oil helps schoolchildren to concentrate” shouted the headline in the Observer, “US academics discover high doses of omega-3 fish oil combat hyperactivity and attention deficit disorder”.  Previous research on this topic has been decidedly underwhelming (see slides for 7th BDA international conference), so I set off to track down the source article.

Well, here's a surprise: the study did not include any children with ADHD. It was an experiment with 33 typically-developing boys. And another surprise: on a test of sustained attention, there was no difference between boys who'd been given supplementation of an omega-3 fatty acid (DHA) for 8 weeks and those given placebo. Indeed, boys given low-dose supplementation made marginally more errors after treatment. So where on earth did this story come from? Well, in a brain scanner, children given DHA supplementation showed a different pattern of brain activity during a concentration task, with greater activation of certain frontal cortical regions than the placebo group. However, the placebo group showed greater activation in other brain regions. It was not possible to conclude that the brains of the treated group were working better, given the large number of brain regions being compared, and the lack of relationship between activation pattern and task performance.

A day or two later, another article was published, this time in the Guardian, with the headline Male involvement in pregnancy can weaken paternal bond. I tried to track down the research report. I couldn’t find it. I traced the researcher. He told me that the piece was not referring to published research, but rather to views he had expressed in an interview with a journalist. He told me he had not intended to recommend that fathers stay away from antenatal classes. He was also concerned that the article had described him as Director of his research institute - in fact he is a lecturer.

At this point, inspired by the example of the Ig Nobel prize, I announced the Orwellian Prize for Journalistic Misrepresentation, an award for the most inaccurate newspaper report of an academic piece of work, using strict and verifiable criteria. An article would get 3 points for an inaccuracy in the headline, 2 points for inaccuracy in the subtitle, and 1 point for inaccuracy in the body of the article. The fish oil piece totalled 16 points.
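
For anyone who wants to check the arithmetic, the rubric reduces to a few lines of code. A minimal sketch follows; the breakdown passed in is invented purely for illustration, since only the fish oil piece’s total of 16 points is given above.

    # Toy sketch of the Orwellian Prize rubric described above: 3 points per
    # headline inaccuracy, 2 per subtitle inaccuracy, 1 per inaccuracy in the
    # body of the article. The counts below are hypothetical; only the total
    # of 16 for the fish oil piece is reported in the text.
    def orwellian_score(headline_errors, subtitle_errors, body_errors):
        return 3 * headline_errors + 2 * subtitle_errors + 1 * body_errors

    print(orwellian_score(2, 2, 6))  # one possible way of reaching 16 points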

Comments on the prize were mostly supportive. I had thought I might attract hordes of journalistic trolls but they did not materialise. Indeed, several journalists responded positively, though they also noted some difficulties for my scoring system. They politely pointed out, for instance, that headlines, to which I gave particular weight in the scoring, are not written by the journalist. Also, it is not unknown for university press officers, who regard it as their job to get their institution mentioned in the media, to give misleading and over-hyped press releases, sometimes endorsed by attention-seeking researchers.

But over in the mainstream media, a fight was brewing. Ben Goldacre, whose Bad Science column in the Guardian I’ve long regarded as a model of science communication, independently picked up on the fish oil article and gave its author a thorough lambasting. Jeremy Laurance of the Independent retorted with a piece in which he attacked Goldacre. Laurance made three points: first, science journalism is generally good; second, reporters can’t be expected to check everything they are told (implying that the fault for inaccuracy lay with the researcher in this case); and third, journalists work under intense pressure and should not be castigated for sometimes making mistakes.

I would be the first to agree with Laurance’s initial point. During occasional trips to Australia and North America, I've found the printed media to be mostly written as if for readers with rather few neurons and no critical faculties. Only when deprived of British newspapers do you appreciate them. They employ many talented people who can write engagingly on a range of issues, including science. Regarding the second point, I am less certain. While I have some sympathy with the dilemma of a science reporter who has to report on a topic without the benefit of expertise, stories of hyped-up press releases and self-publicising but flawed researchers are numerous enough that I think any journalist worth their salt should at least read the abstract of the research paper, or ask a reputable expert for their opinion, rather than taking things on trust. This is particularly important when writing about topics such as developmental disorders that make people’s lives a misery. Many parents of children with ADHD would feed their child a diet of caviare if they felt it would improve their chances in life. If they read a piece in a reputable newspaper stating that fish oil will help with concentration, they will go out and buy fish oil. (I've no idea whether fish oil sales spiked in June, but if anyone knows how to check that out, I'd be interested in the answer). In short, reporting in this area has consequences – it can raise false hopes and make people spend unnecessarily.

On the third point, lack of time, Goldacre’s supporters pointed out that working as a doctor is not exactly a life of leisure, yet Ben manages to do a meticulously researched column every week. Other science bloggers write excellent pieces while holding down a full-time day-job.

It was unfortunate indeed that the following week, Laurance, whom I've always regarded as one of our better science journalists, produced a contender for the Orwellian in an Independent report on a treatment for people with Alzheimer’s disease. Under the title 'Magnets can improve Alzheimer’s symptoms', he described a small-scale trial of a treatment based on repetitive transcranial magnetic stimulation, a well-established method for activating or inhibiting neurons by using a rapidly changing strong magnetic field. In this case, the account of the research seemed accurate enough. The problem was the context in which Laurance placed the story, which was to draw parallels with ‘magnet therapy’ involving the use of bracelets and charms. Several commentators on the electronic version of the story went on the attack, with one stating “This is not worthy of print and it is absolutely shameful journalism.”

I was recently interviewed for the Radio 4 programme More or Less about the Orwellian Prize, together with a science journalist who clearly felt I was being unfair in not making allowances for the way journalists work – using arguments similar to those made by Jeremy Laurance. At one point when we were off the air, she said, “But don’t you make loads of mistakes?” I realised when I said no that I was simultaneously tempting fate and giving an impression of arrogance. Of course I do make mistakes all the time, but I go to immense lengths, checking and rechecking papers, computations and so on, to avoid making errors in published work. A degree of obsessionality is an essential attribute for a scientist. If our published papers contained ‘loads of’ mistakes we’d be despised by our peers, and probably out of a job.

But is the difference between journalists and scientists just one of accuracy?  My concern is that there is much more to it than that. I did a small experiment with Google to find out how long it would take to find an account of transcranial magnetic stimulation. Answer: less than a minute. Wikipedia gives a straightforward description that makes  it abundantly clear that this treatment has nothing whatever to do with 'magnet therapy'. Laurance may be a busy man, but this is no excuse for his failure to check this out.

So here we come to the nub of the matter, and the reason why scientists tend to get cross about misleading reporting: it is not just down to human error. The errors aren't random: they fall in a particular pattern suggesting that pressure to produce good stories leads to systematic distortion, in a distinctly Orwellian fashion. Dodgy reporting comes in three kinds:

1. Propaganda: the worst case of misleading information, when there is deliberate distortion or manipulation of facts to support the editor’s policy. I think and hope this is pretty rare, though some reporting of climate change science seems to fall in this category. For instance, the Australian, the biggest-selling national daily newspaper in Australia, seems much happier to report on science that queries climate change than on science that provides evidence for it. A similar pattern could be detected in the hysteria surrounding the MMR controversy, where some papers only covered stories that argued for a link between vaccination and autism. It is inconceivable that such bias is just the result of journalists being too inexpert or too busy to check their facts. Another clue to a story being propaganda is when it goes beyond reporting of science to become personal, querying the objectivity, political allegiances and honesty of the scientists. Because scientists are no more perfect than other human beings, it is important that journalists do scrutinise their motives, but the odd thing is that this happens only when scientists are providing inconvenient evidence against an editorial position. The Australian published 85 articles about the 'climategate' leaked emails, in which accusations of dishonesty by scientists were repeated, but they did not cover the report vindicating the scientists at all. 

2. Hype. This typically does not involve actual misrepresentation of the research, but a bending of its conclusions to fit journalistic interests, usually by focusing more on the future implications of a study than on its actual findings. Institutional press officers, and sometimes scientists themselves, may collude with this kind of reporting, because they want to get their story into the papers and realise it needs some kind of spin to be publishable. In my interview with More or Less, I explained how journalists always wanted to know how research could be immediately applied, and this often led to unrealistic claims (see my blog on screening, for examples). The journalist’s response was unequivocal. She was perfectly entitled to ask a scientist what the relevance of their work was, and if the answer was none, then why were they taking public money to do it? But this reveals a misunderstanding of how research works. Scientific discoveries proceed incrementally, and the goal of a study is often increased understanding of a phenomenon. This may take years: in terms of research questions, the low-hanging fruit was plucked decades ago, and we are left with the difficult problems. Of course, if one works on disorders, the ultimate goal is to use that understanding to improve diagnosis or treatment, but the path is a long and slow one. I discussed the conflict between the nature of scientific progress and the journalists’ need for a ‘breakthrough’ in another blog. So the typical researcher is, on the one hand, being encouraged by their institution to talk to the media, and on the other hand knows that their research will be dismissed as uninteresting (or even pointless) if it can’t be bundled into a juicy sound-bite with a message for the lay person. One of two reactions ensues: many scientists just give up attempting to talk to the media; others are prepared to mould an account of their research into what the journalists want. This means that the less scrupulous academics are more likely to monopolise media attention.

3. Omission: this is harder to pin down, but is nonetheless an aspect of science journalism that can be infuriating. What happens is that the papers go overboard for a story on a particular topic, but totally ignore other research in the same area. So, a few weeks before the fish-oil/ADHD paper was covered, a much larger and well-conducted trial of omega-3 supplementation in school-children was published but ignored by the media. Another striking example was when the salesman Wynford Dore was actively promoting his expensive exercise-based treatment for dyslexia, skilfully using press releases to get media coverage, including a headline item on the BBC News. The story came from a flawed small-scale study published in a specialist journal. While this was given prominence, excellent trials of other more standard interventions went unreported (for just one example, see this link). I guess it is inevitable: telling the world that you can cure dyslexia by balancing on a wobble board is newsworthy - it has both novelty and human interest. Telling the world that you can improve reading with a phonologically-based intervention has a bit of human interest but is less surprising and less newsworthy. Telling the world that balancing on a wobble board has no impact on dyslexia whatsoever is not at all surprising, and is only of interest to those who have paid £3000 for the intervention, so it's totally un-newsworthy. It's easy to see why this happens: it's just a more extreme form of the publication bias that also tarnishes academic journals whose editors favour 'interesting' research (see also Goldacre on similar issues). Problem is, it has consequences.

For an intelligent analysis of these issues, see Zoe Corbyn’s article in the Times Higher Education, and for some ideas about alternative approaches to science reporting, a blog by Alice Bell. I, meanwhile, am hoping that there won’t be any nominations for the Orwellian Prize that earn more points than the fish oil story, but I’m not all that confident.

P.S. I wanted to link to the original fish oil article, but it is no longer available on the web. The text is on my blog page describing the Orwellian prize.

P.P.S. Ah, I’ve just had a new nomination that gets 17 points, largely because it ignored wise advice tweeted recently by Noah Gray (@noahWG), Senior Editor at Nature: “Journalism Pro Tip: If your piece starts talking more about a study's untested implications rather than what the science showed, start over."

P.P.P.S. It has been gently pointed out to me that I erred in the original version of this blog, and said the Laurance magnet piece was in the Guardian, when in fact it was in the Independent. Deeply embarrassing, but now corrected.

Friday 10 September 2010

Genes for optimism, dyslexia and obesity and other mythical beasts

[Cartoon: copyright www.CartoonStock.com]

I recently received an email from a company called mygeneprofile: “By discovering your child's inborn talents & personality traits, it can surely provide a great head start to groom your child in the right way… our Inborn Talent Genetic Test has 99.8% accuracy.” I’d registered to receive information from the company having heard they were offering a genetic test for such diverse traits as optimism, composure, intelligence, and dancing (link).

Despite all the efforts of the Human Genome Project, I was not aware of any genetic test that could reliably predict a child’s personality or ability. I was not therefore surprised when my emails asking for evidence went unanswered, though I continue to receive messages that oscillate between carrots (free gifts! discounts!!) and sticks (without this test “your child will have MISERABLE life” (sic)).

The test company relies on a widespread assumption that people’s psychological attributes are predictable from their genes. So where does this belief come from, and why is it wrong? 

People’s understanding of genetic effects is heavily influenced by the way genetics is taught in schools. Mendel and his wrinkly and smooth peas make a nice introduction to genetic transmission, but the downside is that we go away with the idea that genes have an all-or-none effect on a binary trait.  Some characteristics are inherited this way (more or less), and they tend to be the ones that textbooks focus on: e.g., eye colour, colour-blindness, Huntington’s disease. But most genetic effects are far more subtle and complex than this. Take height, for instance. Genes are important in determining how tall you are, but this is not down to one gene: instead, there is a whole host of genes, each of which nudges height up or down by a small amount (see link).
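
For the technically minded, a toy simulation makes the point; the numbers of genes and the effect sizes below are invented, and the sketch is only meant to show how many small nudges add up to a smoothly varying trait rather than a binary one.

    # Toy polygenic model: many genes, each nudging the trait up or down a
    # little, plus environmental noise. All numbers are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    n_people, n_genes = 10_000, 400
    genotypes = rng.binomial(2, 0.5, size=(n_people, n_genes))  # 0, 1 or 2 copies at each gene
    effects = rng.normal(0, 0.1, size=n_genes)                  # each gene has a tiny effect
    height_cm = 170 + genotypes @ effects + rng.normal(0, 3, size=n_people)
    print(round(float(height_cm.mean()), 1), round(float(height_cm.std()), 1))  # a smooth bell curve, not categories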

The expression of a gene may also depend crucially on the environment; for instance, obesity relates both to calorie intake and genetic predisposition, but the effects are not just additive: some people can eat a great deal without gaining weight, whereas in others, body mass depends substantially on food intake (see link). Furthermore, a genetic predisposition to obesity can be counteracted by exercise (see link). Genetic influences may also interact with one another in complicated ways. For instance, coat colour in mice is affected by combinations of genes, so that one cannot predict whether a mouse is black, white or agouti (mouse coloured!) just from knowledge of the status of one gene.

This means that we get a very different impression of strength of genetic influences on a trait if we look at the impact of a person’s whole genome, compared to looking at individual genes in isolation. The twin study was the traditional method for estimating genetic influences before we had the technology to study genes directly, and it compares how far people’s similarity on a trait depends on their genetic relationship. Researchers measure a trait, such as sensation-seeking, in identical and fraternal twin pairs growing up in the same environment and consider whether the two twin types are equally similar. If both sets of twins resemble each other equally strongly, this indicates that the environment, rather than genes, is critical. And if twins don’t resemble one another at all, this could mean either that the trait is influenced by child-specific experiences, not shared by the co-twin, or that our measure of sensation-seeking is unreliable.  But if identical twins are more similar than fraternal twins, this means genes affect the trait, i.e. it is heritable. There are several niggly criticisms of the twin method; for instance, it can give misleading estimates if identical twins are treated more similarly than fraternal twins, or if twinning itself influences the trait in question. For most traits, however, these don’t seem sufficient to explain away the substantial heritability estimates that are found for traits such as height, reading ability, and sensation-seeking.  But these estimates don’t tell us about the individual genes that influence a trait – they rather indicate how important genes are relative to non-genetic influences.
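
For readers who like to see the arithmetic, here is a minimal sketch of the classic logic behind such estimates (often called Falconer’s formula): heritability is approximately twice the gap between the identical-twin and fraternal-twin correlations. The correlations used below are invented for illustration.

    # Falconer's formula: identical (MZ) twins share ~100% of segregating genes,
    # fraternal (DZ) twins ~50% on average, so the MZ-DZ correlation gap indexes
    # genetic influence. The correlations below are hypothetical.
    def falconer_heritability(r_mz, r_dz):
        return 2 * (r_mz - r_dz)

    r_mz, r_dz = 0.80, 0.45                 # hypothetical twin correlations for a trait
    h2 = falconer_heritability(r_mz, r_dz)  # genetic influence: 0.70
    c2 = r_mz - h2                          # shared environment: 0.10
    e2 = 1 - r_mz                           # child-specific environment and error: 0.20
    print(round(h2, 2), round(c2, 2), round(e2, 2))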

Interactive effects, either between multiple genes or between genes and environments, will not be detected in a conventional twin study analysis. If a gene is expressed only in a particular environment, twins who have the same version of the gene will usually also have the same environment, and so the expression of the gene will be the same for both. And for an effect that depends on having a particular combination of genes, identical twins will have the same constellation of genetic variants, whereas the likelihood of fraternal twins having an identical gene profile decreases with the number of genes involved.  Heritability estimates depend on comparing similarity of a trait for identical vs fraternal twins, and will be increased if gene-gene interactions are involved.
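
A back-of-the-envelope calculation shows why. Suppose (as a deliberate oversimplification) that a trait is only expressed when a particular combination of several gene variants is present, and that fraternal twins match at each relevant locus with a probability of about a half:

    # Identical twins always share the full combination of k variants; fraternal
    # twins, matching at each locus with probability ~0.5 (a simplification),
    # rarely do once several loci are involved.
    for k in (1, 2, 3, 5, 10):
        print(k, 1.0, round(0.5 ** k, 4))
    # With 5 loci, fraternal twins share the whole combination only ~3% of the
    # time, so identical twins look far more alike than fraternal twins, and the
    # heritability estimate is pushed up.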

In contrast, genome-wide association studies are designed to find individual genes that influence specific traits. They adopt the strategy of looking for associations between DNA variants (alleles) and the trait, either by categorising people, e.g. as dyslexic or not, and comparing the proportions with different alleles, or by seeing whether people who have zero, one or two copies of an allele differ in their average score on a trait such as reading ability. When these studies started out, many people assumed we would find gene variants that exerted a big effect, and so might reasonably be termed ‘the gene for’ dyslexia, optimism, and so on. However, this has not been the case.
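
As a concrete, and entirely simulated, illustration of the second strategy, one can regress a trait score on the number of copies of an allele. The sketch below uses standard Python scientific libraries and made-up data; it is only meant to show why a tiny effect can still yield an impressively small p-value when the sample is large.

    # Single-marker association test of the kind used in genome-wide association
    # studies: regress a quantitative trait on allele count (0, 1 or 2 copies).
    # All data are simulated; the small slope mimics the small effects typically found.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 5000
    allele_count = rng.binomial(2, 0.35, size=n)                    # 0, 1 or 2 copies
    reading_score = 0.08 * allele_count + rng.normal(0, 1, size=n)  # tiny effect plus noise

    result = stats.linregress(allele_count, reading_score)
    print(round(result.slope, 3), result.pvalue)  # a small slope can still give a very small p at n = 5000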

Take personality, for instance, one of the domains that mygeneprofile claims to test for. A few weeks ago, a major study was reported in which the genes of over 5000 people were investigated but no significant associations were found. Commentators on the research argued that the measurement of personality – typically on the basis of self-report questionnaires – may be the problem. But the self-same measures yield high estimates of heritability when used in twin studies. And a similar pattern has been found for other traits, including height, intelligence and obesity: a mismatch between evidence of genetic influence from twin studies (typically moderate to strong for these traits) and findings for individual genes associated with the trait (with effects that are very small at best).

This account may surprise readers who have read of recent discoveries of genes for conditions such as dyslexia, where the impression is sometimes given that there are strong effects. The reason is that reports of molecular genetic studies usually emphasise the p-value, a measure of how probable it is that a result at least as strong as the one observed could have arisen by chance. A low p-value indicates that a result is reliable, but it does not mean the effect is large. These studies typically use very large samples precisely because this allows them to detect even small effects. Consider one of the more reliable associations between genes and behaviour: a gene known as KIAA0319, which has been found to relate to reading ability in several different samples. In one study, an overall association was reported with p = .0001, indicating that an association this strong would be expected to arise by chance only once in 10,000 times. However, this reflected the fact that one gene variant was found in 39% of normal readers and only 25% of dyslexics, with a different variant being seen in 30% of controls and 35% of dyslexics. Some commentators have argued that such small effects are uninteresting. I disagree: findings like this can pave the way for studies into the neurobiological effects of the gene on brain development (see link), and for studies of gene-gene and gene-environment interactions. But it does mean that talk of a ‘gene for dyslexia’, or genetic screening for personality or ability, is seriously misguided.
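
Turning those percentages into an odds ratio makes the modest size of the effect concrete. This is rough arithmetic on the quoted figures only, ignoring sample sizes and the underlying genetic model:

    # Odds ratio from the proportions quoted above: one variant in 39% of typical
    # readers versus 25% of dyslexics. The result is a little under 2 – a real but
    # modest association, nothing like a textbook single-gene effect.
    p_variant_controls, p_variant_dyslexics = 0.39, 0.25
    odds_controls = p_variant_controls / (1 - p_variant_controls)
    odds_dyslexics = p_variant_dyslexics / (1 - p_variant_dyslexics)
    print(round(odds_controls / odds_dyslexics, 2))  # about 1.92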

The small effect size of individual genes, and interactions with environment or other genes,  are not the only explanations for “missing heritability”. A trait may be influenced by genetic variants that have a large effect but which are individually very rare in the population. These would be very hard to detect using current methods. The role of so-called copy number variants is also a focus of current interest: these are large chunks of DNA which are replicated or deleted and which are surprisingly common in all of us.  These lead to an increase or decrease in gene product, but won’t be found with standard methods that focus just on identifying the DNA sequence. Both mechanisms are thought to be important in the genetics of autism, which is increasingly looking like a highly heterogeneous condition – i.e. there are multiple genetic risk factors and different ones are important in different people. 

What are the implications of all of this for the stories we hear in the media about new genetic discoveries?  The main message is we need to be aware of the small effect of most individual genes on human traits. The idea that we can test for a single gene that causes musical talent, optimism or intelligence is just plain wrong. Even where reliable associations are found, they don’t correspond to the kind of major influences that we learned about in school biology. And we need to realise that twin studies, which consider the total effect of a person’s genetic makeup on a trait, can give different results from molecular studies of individual genes. What makes us individual can’t be reduced to the net effect of a few individual genes.

Background reading

Bishop, D. V. M. (2009). Genes, cognition and communication: insights from neurodevelopmental disorders. The Year in Cognitive Neuroscience: Annals of the New York Academy of Sciences, 1156, 1-18.

Maher, B. (2008). Personal genomes: The case of the missing heritability. Nature, 456, 18-21. doi:10.1038/456018a

Plomin, R., DeFries, J. C., McClearn, G. E., & McGuffin, P. (2008). Behavioral Genetics (5th edition). New York: Worth Publishers.

Rutter, M. (2006). Genes and Behavior: Nature-Nurture Interplay Explained. Oxford: Blackwell.


Note: this is a slightly extended version of a blog on Guardian Science Blog, 9/9/10