Sense about Science – equipping people to make sense of science and evidence
Posted by Volunteer on 18 October 2013
Jonathan Roberts is a genetic counsellor and blogs at www.thesarcasticowl.co.uk
Michael Gove’s special adviser Dominic Cummings is the latest figure to invoke the genetic basis for intelligence. Like many before him, he draws on measurements of heritability, but he has crucially misunderstood the role of the environment in this scientific concept.
Let us begin with what heritability is not:
It is not a measure of how much of a trait is down to genes (nature) and how much to environment (nurture). Saying “IQ shows 70% heritability” does not mean that 70% of IQ is because of our genes.
Instead, heritability is a measure of how much of the variation in a trait, in a given environment, can be explained by genes. What is important about a heritability figure is that it is environment-specific, and thus a relative measurement. We can demonstrate this with a thought experiment using one of the workhorses of behavioural genetics: twin studies.
Imagine the following. There are four children: Adam, Bob, Colin and Dean. Adam and Bob are identical twins; they share almost 100% of their DNA. Colin and Dean are non-identical twins; they share about 50% of their DNA. All four children share the same environment. We measure their verbal reasoning skills, and the identical twins show more similarity than the non-identical twins. This means the heritability of verbal reasoning skills is high – let’s say as high as 70%. “Bingo!” someone says. “Verbal ability is due to genetics. Now stop whining about your upbringing!”
But now imagine the same children in a different environment. This time Adam and Bob (the identical twins) are separated at birth. Adam has a good upbringing while Bob is severely mistreated. In fact Bob has little to no verbal contact for the first four years of his life, something scientists know leads to children never truly mastering language. The same thing happens to Colin and Dean, the non-identical twins, and we now have the following situation:
- Adam and Colin have had an average upbringing.
- Bob and Dean have never really been exposed to language.
We measure the children’s verbal reasoning skills again. Now Bob and Dean (the neglected twins) score significantly lower than Adam and Colin (the children from average households).
What is the heritability of verbal reasoning now? While the genetic make-up of the group is exactly the same, the heritability will have gone down, because genetics now explains less of the observed variance. The above is an extreme example, but it illustrates that heritability is a relative measurement.
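The thought experiment above can be sketched numerically. The following is a minimal simulation (not from the original post, and all names are illustrative) using Falconer's classic twin-study estimate, h² = 2 × (r_MZ − r_DZ). The simulated genes are drawn identically in both runs; only the spread of environments within each twin pair changes, yet the heritability estimate collapses:

```python
import random

def pearson(pairs):
    """Pearson correlation between the two members of each pair."""
    n = len(pairs)
    xs = [a for a, _ in pairs]
    ys = [b for _, b in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

def simulate(n_pairs, env_sd, rng):
    """Return (MZ pairs, DZ pairs) of trait scores.

    Genetic variance is 1 for every child in both twin types;
    env_sd controls how much the environments of the two twins
    in a pair are allowed to differ."""
    mz, dz = [], []
    for _ in range(n_pairs):
        g = rng.gauss(0, 1)                       # MZ twins share all genes
        mz.append((g + rng.gauss(0, env_sd),
                   g + rng.gauss(0, env_sd)))
        s = rng.gauss(0, 0.5 ** 0.5)              # DZ twins share ~half
        dz.append((s + rng.gauss(0, 0.5 ** 0.5) + rng.gauss(0, env_sd),
                   s + rng.gauss(0, 0.5 ** 0.5) + rng.gauss(0, env_sd)))
    return mz, dz

def heritability(n_pairs, env_sd, seed=1):
    """Falconer's estimate: h2 = 2 * (r_MZ - r_DZ)."""
    rng = random.Random(seed)
    mz, dz = simulate(n_pairs, env_sd, rng)
    return 2 * (pearson(mz) - pearson(dz))

# Identical genetic model in both runs; only the environment changes.
h2_uniform = heritability(20000, env_sd=0.1)   # everyone raised alike
h2_unequal = heritability(20000, env_sd=2.0)   # upbringings differ wildly
print(f"uniform environments: h2 = {h2_uniform:.2f}")
print(f"unequal environments: h2 = {h2_unequal:.2f}")
```

With near-uniform environments the estimate comes out close to 1; once environments within pairs diverge wildly it falls towards 0.2, exactly the pattern in the Adam-and-Bob story. (For simplicity the sketch uses a Pearson correlation across pairs as a stand-in for the intraclass correlation used in real twin studies.)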
Why is this important?
In his essay Cummings makes the mistake of assuming a high heritability score will hold true for all environments:
“Raising school performance of poorer children is an inherently worthwhile thing to try to do but it would not necessarily lower parent-offspring correlations (nor change heritability estimates)”
This is simply not true. Heritability scores will change as environments change. Cummings fundamentally misses the point that you can change the heritability of a trait by altering its environment.
High heritability scores, paradoxically, suggest that it is precisely the environment that matters. If you ignore this you can be misled into believing “it’s all in the genes.”
Posted by Prateek Buch on 11 October 2013
George Monbiot is concerned about UK government Ministers sidelining scientific evidence, citing the Canadian and Australian experience as a warning. Monbiot’s piece raises some valuable questions, but ignores the Principles for Scientific Advice to Government, based on those drafted by consensus amongst the scientific community in 2009 following the sacking of the government’s chief adviser on drugs, Professor David Nutt. These Principles are now part of the Ministerial Code, which every Minister signs up to. Prof Ian Boyd, Chief Scientific Adviser at DEFRA, also ignores these Principles in his intervention putting scientists back in their box.
Commentators, Ministers and their advisers need to be reminded of these Principles given the “chronically deep-seated mistrust of scientists” amongst government that Boyd complains of. More often than not it is government’s failure to stick to said Principles on the independence and integrity of scientific advice that leads to the breakdown of trust.
That was certainly the case when Professor Nutt was thrown out as Chairman of the Advisory Council on the Misuse of Drugs, with the same battles over evidence continuing to be fought – and frequently lost by scientists – on the classification of drugs or the science of climate change. There are as many, if not more, examples of dissonance between evidence and statistics on the one hand and policy as implemented on the other in areas of social policy such as education, crime, welfare and immigration. Policy that contradicts evidence isn’t necessarily the problem – of course elected policy-makers consider factors beyond scientific evidence – but policy-makers must at least be clear when and why they choose to set evidence aside.
It needn’t be this way. In 2006 a House of Commons Science and Technology Select Committee report set out how government should deal with scientific advice and risk in evidence-based policy making. Despite its authoritative recommendations, and notable improvements in how evidence is regarded by some in government, troubling practice persisted. Professor Nutt’s sacking was simply a symptom of the wider contempt with which evidence was, and sadly still is, held by many in Whitehall and Westminster in particular. It was to avoid such a clash between civil servants, elected representatives and the scientific community that Sense About Science and the Campaign for Science and Engineering – consulting directly with scientists – formulated the Principles that are now part of the Ministerial Code.
Recent abuses of evidence – and Professor Boyd’s remarkable admonition to scientists for daring to dissent from what is politically palatable – suggest that scientists need to defend this hard-fought territory and re-state how scientific advice should be handled by politicians. Ministers and their advisers need to be reminded that the Code and its Principles for Scientific Advice are there to be implemented in letter and spirit, to better equip our elected leaders to make informed decisions in line with reliable evidence.
Professor Boyd has responded to Monbiot’s article. The response leaves many questions about the relationship between science and government unanswered, and repeats an unhelpful theme from his op-ed in the journal eLife: that it is not scientists’ “job to make politicians' decisions for them – when scientists start providing opinions about whether policies are right or wrong they risk becoming politicised.” This is a straw man. Nobody is arguing that scientists should do politicians’ job, only that it is right and proper for scientists to explain evidence, and the implications of different policy choices in light of that evidence. To Boyd, scientists expressing any conclusive thoughts about policy go unacceptably beyond “sticking to the scientific evidence and clearly explaining the risk associated with different policy options.” He considers this an “adversarial politicisation of science”, whereas most in the scientific community, and I daresay in politics, consider it an essential component of the way science relates to public policy. Indeed, the Ministerial Code is explicit on this: “Scientific advisers are free to communicate publicly their advice to Government, subject to normal confidentiality restrictions, including when it appears to be inconsistent with Government policy.”
A key principle Professor Boyd ignores is that of transparency: that if politicians retain the right to override scientific evidence, they should tell us – the voters who empowered them to govern – why they chose to do so. Failing to be honest on that count, and subjugating science as a secondary concern in the policy-making process, will carry a heavy political price amongst an electorate that increasingly expects well-grounded policy and accountability.
Posted by Tracey Brown on 04 October 2013
Perhaps the findings of Science magazine’s investigation into the paucity of peer review aren’t that surprising, though it’s good that someone has made a fuss about it. But it’s not about open access; it’s about the author-pays model and vanity publishing.
Vanity publishing has always existed (for journals and books) and of course uses the author-pays model since it doesn’t sell much. Until recently the sector was limited by the small market of authors willing to pay and was usually nation-based, meaning vanity publishers were easily identified as such and rarely used for publication of important papers or anything of interest to a wider community than the author and a small group.
What has changed is that following the Open Access (OA) movement, the author-pays model is now funded in many institutions and by some grant-giving bodies, so authors have budgets to spend on submitting papers. It has also become a more normal and expected way to publish.
There’s a corresponding burst of activity and change in scholarly publishing right now. New open access journals have grown rapidly: over 2012, an average of four journals were being added to the Directory of Open Access Journals (DOAJ) every day. Some are existing subscription journals becoming open access; some are new titles or new collaborations. At the same time, though, the worldwide expansion of the author dollar appears to have given rise to a rapid increase in low-grade vanity publications with little quality control. They are seeking to attract those dollars, something that has attracted the attention of critics such as Jeffrey Beall, who are compiling lists of journals suspected of predatory and aggressive marketing to authors in return for little or no quality control.
I hate the term ‘perfect storm,’ but there are other circumstances that fuel this boom for vanity publishing and help it to weave itself through the weft of scholarly publishing. This flurry of new journals and expansion of author-pays activity comes at a time of great international expansion of English language scholarly publishing. Authors from non-English speaking countries have increasingly looked outward over the past decade to international journals for publication and recognition (often pressured to do so by their home institutions). Vanity publishing, previously contained by the limited opportunities of author payments, is now given wider opportunity - there’s more money to be made from more people and less is known by those spending it in a busy, changing, unfamiliar publishing scene.
The Science sting purports to be about Open Access but, as Phil Davis and Martin Eve point out, there are too many flaws in the investigation to teach us much about OA. Rather, Bohannon’s ‘sting’ is not a commentary on Open Access but on unscrupulous publications and vanity publishing being unshackled by the changing and fluid scholarly marketplace. And to a degree this is going to be the case for a while. Many new journals are backed by reputable names and publishers who are yet to see whether the journals will make a credible success of themselves or not (or, as in the case of the Journal of Natural Pharmaceuticals described by John Bohannon, leave it to office staff to do the work).
So the changes in scholarly publishing, OA among them, have created opportunities not just for those who want to develop it but for the vain, the chancers and the lazy. In the history of change it’s ever thus. More trusted titles will emerge among the newer journals. Some promising things will slip into obscurity or non-scholarly practices, and big scholar and publisher names will dissociate from them. We need to step up the critical watch, insist on standards of peer review and manuscript management where they are asserted, call out bad practice and, yes, perhaps have journals fearful of being hit by a sting. By doing all this we can speed up the process through which quality bubbles to the top, and it becomes easier to spot the stuff at the bottom for what it is. But we also need to recognise that this is a process, and that when things are embryonic, stings aren’t a reliable way of deciding what we’ll come to trust and admire.