
Who will take responsibility?

The Government has known that the official statistics are wrong since the Royal Statistical Society raised the issue in the wake of the 2009 H1N1 pandemic. Since then the problem has been bounced around departments, until we and the charities that use and fund the affected research had to write to the Prime Minister calling for action.

I have been trying to track down a response since we sent the letter a month ago. In that time, I’ve repeatedly called Number 10, the Department of Health, the Ministry of Justice and the Home Office, each of which has sent it to someone else for a response at least once. Number 10 has most recently tasked the Ministry of Justice with providing a response and told us Number 10 won’t be able to help us any more. Today, the Ministry of Justice told me that a draft response has just been sent to the Minister’s office for sign-off and that we could possibly expect a response by early next week.

Of course, I’ve now been told several times that we could expect a response very soon, but one has yet to materialise. I’ll believe it when I see it; in the meantime, the official statistics are still wrong.

The "accuracy" of screening tests

This is a guest post by Professor David Colquhoun, FRS

Anything about Alzheimer’s disease is front-page news in the media. No doubt that had not escaped the notice of King’s College London when they issued a press release about a recent study of a blood test for predicting the development of dementia. It was widely hailed in the media as a breakthrough in dementia research. One example is the far-from-accurate BBC report. The main reason for the inaccuracies is, as so often, the press release. It said:

"They identified a combination of 10 proteins capable of predicting whether individuals with MCI would develop Alzheimer’s disease within a year, with an accuracy of 87 percent"

The original paper says:

"Sixteen proteins correlated with disease severity and cognitive decline. Strongest associations were in the MCI group with a panel of 10 proteins predicting progression to AD (accuracy 87%, sensitivity 85% and specificity 88%)."

What matters to the patient is the probability that, if they come out positive when tested, they will actually get dementia. The Guardian quoted Dr James Pickett, head of research at the Alzheimer’s Society, as saying

"These 10 proteins can predict conversion to dementia with less than 90% accuracy, meaning one in 10 people would get an incorrect result."

That statement simply isn't right (or, at least, it's very misleading). The proper way to work out the relevant number has been explained in many places; I did it recently on my blog.

The easiest way to work it out is to make a tree diagram. The diagram is like that on my blog, but with a sensitivity of 85% and a specificity of 88%, as specified in the paper.

In order to work out the number we need, we have to specify the true prevalence of people who will develop dementia in the population being tested. In the tree diagram, this has been taken as 10%. The diagram shows that, out of 1000 people tested, there are 85 + 108 = 193 with a positive test result. Of these 193, rather more than half (108) are false positives, so if you test positive there is a 56% chance that it's a false alarm (108/193 = 0.56). A false discovery rate of 56% is far too high for a good test. This figure of 56% seems to be the basis for a rather good post by NHS Choices with the title “Blood test for Alzheimer’s ‘no better than coin toss’”.

If the prevalence were taken as 5% (a value that's been given for the over-60 age group) that fraction of false alarms would rise to a disastrous 73%.
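The tree-diagram arithmetic is easy to check for yourself. A minimal sketch (the function name is my own, not from the paper) that reproduces both the 56% and the 73% figures:

```python
def false_discovery_rate(sensitivity, specificity, prevalence, n=1000):
    """Fraction of positive test results that are false alarms,
    worked out exactly as in the tree diagram."""
    diseased = n * prevalence                    # people who will develop dementia
    healthy = n - diseased                       # people who will not
    true_positives = sensitivity * diseased      # 0.85 * 100 = 85 at 10% prevalence
    false_positives = (1 - specificity) * healthy  # 0.12 * 900 = 108 at 10% prevalence
    return false_positives / (true_positives + false_positives)

print(round(false_discovery_rate(0.85, 0.88, 0.10), 2))  # 0.56
print(round(false_discovery_rate(0.85, 0.88, 0.05), 2))  # 0.73
```

The same sensitivity and specificity give a very different false discovery rate at different prevalences, which is exactly why the single "accuracy" figure is so misleading.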

How are these numbers related to the claim that the test is "87% accurate"? That claim was parroted in most of the media reports, and it is why Dr Pickett said "one in 10 people would get an incorrect result".

The paper itself didn't define "accuracy" anywhere, and I wasn't familiar with the term in this context (though Stephen Senn pointed out that it is mentioned briefly in the Wikipedia entry for Sensitivity and Specificity). The senior author confirmed that "accuracy" means the total fraction of tests, positive or negative, that give the right result. We see from the tree diagram that, out of 1000 tests, there are 85 correct positive tests and 792 correct negative tests, so the accuracy (with a prevalence of 0.1) is (85 + 792)/1000 = 88%, close to the value that's cited in the paper.

Accuracy, defined in this way, seems to me not to be a useful measure at all. It conflates positive and negative results and they need to be kept separate to understand the problem. Inspection of the tree diagram shows that it can be expressed algebraically as

accuracy = (sensitivity × prevalence) + (specificity × (1 − prevalence))

It is therefore merely a weighted mean of sensitivity and specificity (weighted by the prevalence). With the numbers in this case, it varies from 0.88 (when prevalence = 0) to 0.85 (when prevalence = 1). Thus it will inevitably give a much more flattering view of the test than the false discovery rate.
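That weighted-mean expression can be checked numerically (again a sketch of my own, not code from the paper); it reproduces the paper's ~88% figure at 10% prevalence and shows how little accuracy moves between the two extremes:

```python
def accuracy(sensitivity, specificity, prevalence):
    # Accuracy as a prevalence-weighted mean of sensitivity and specificity
    return sensitivity * prevalence + specificity * (1 - prevalence)

print(round(accuracy(0.85, 0.88, 0.10), 3))  # 0.877, i.e. (85 + 792)/1000
print(accuracy(0.85, 0.88, 0.0))             # 0.88 (prevalence 0)
print(accuracy(0.85, 0.88, 1.0))             # 0.85 (prevalence 1)
```

Accuracy barely moves between 0.85 and 0.88 whatever the prevalence, while the false discovery rate swings from 56% to 73% over the same range of prevalences considered above, which is why accuracy flatters the test.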

No doubt it is too much to expect that a hard-pressed journalist would have time to figure this out, though they would surely have had time to contact someone who understands it. It is clear that it should have been explained in the press release. It wasn't.

In fact, reading the paper shows that the test was not being proposed as a screening test for dementia at all. It was proposed as a way to select patients for entry into clinical trials. The population being tested was very different from the general population of old people, being patients who come to memory clinics in trials centres (the potential trials population).

How best to select patients for entry into clinical trials is a matter of great interest to people who run trials. It is of very little interest to the public. So all this confusion could have been avoided if King’s had simply refrained from issuing a press release for a paper like this.

I guess universities think that PR is more important than accuracy. That's a bad mistake in an age when pretensions get quickly punctured on the web.

There is more discussion of this work in the follow-up on my blog.

Hightable with Tracey Brown: Asking for Evidence to make Sense About Science

Hephzi Tagoe, Science Writer at Oxbridge Biotech Roundtable, and past volunteer at Sense About Science, interviewed Tracey Brown about her job and changing public discussion about science.

You have been director of Sense About Science for over 10 years; what is the secret to your success?

A large part of it is probably that I am, and we are, hard on ourselves, always asking whether we’re really making a difference. I hate complacency. It means I tend to see the mountain in front of us rather than being pleased with what we’ve climbed, which probably drives the team crazy at times, but it gives us a self-critical culture that drives us on.

Continue reading > 

EU GM plant cultivation - how did we get to this weird situation?

The EU’s Environment Council (that’s the group of environment Ministers from the 28 member states) this morning voted in favour of a proposal that would allow member states to opt out of growing a genetically modified crop if one is approved for cultivation. The proposal was supported by 26 countries with two abstaining. So member states which will probably never allow GM crops to be grown and those which are keen to get started voted for this.

But a lot of people don’t like it. Anti-GM campaign groups are against it because they think it will be challenged under trade laws. Companies don’t like it because they think it will damage innovation and open them up to legal action for cross border contamination. People who support Europe as an idea don’t like it because it undermines the principle of European wide regulation.

How did we get to such a strange situation?

The Cultivation proposal represents an attempt by European politicians to find a ‘go around’, because European regulation of crops bred through one kind of genetic alteration (GM) differs, on no sound scientific basis, from regulation of new crops bred using another kind of genetic alteration (e.g. mutagenesis). The result has been years of approvals on scientific and safety grounds, which have then been thrown to the winds as different nations delay political approval because of their own national politics. These delays put the European Commission in the dock at the European Court of Justice, which ruled last year that a regulatory approval process could not be arbitrarily held up in that way.

The Cultivation proposal is the result of responding to that situation – a command to have a rational system confronted by the reality of politically motivated delays. By allowing countries to opt out at the beginning of a submission for cultivation approval, national governments that want to continue to be seen to oppose GM can do so (e.g. France, where the government noisily backs the German Greens’ anti-GM rhetoric in the hope that German reps won’t make too much fuss about its pro-nuclear position), without stopping others from approving and without technically breaking the fundamental terms of Europe and the common market.

The next step is a second reading at the European Parliament, and the Commissioner thinks everything will be wrapped up by Christmas.

PETA says milk is linked to autism

PETA is an organisation which campaigns against drinking milk, because it is part of the human use of animals for food and clothing that they object to. That’s fine and up to them. What’s not fine is this: “Got autism? Studies have found a link between cow’s milk and autism” plastered on billboards and promoted on their website.

It is an extraordinary claim. I can see why they like it - worried, even desperate parents are an attentive audience. Stuck with little time this morning, I read through Steven Novella’s excellent review of the claim (which originates in the US), which found there is no evidence to support it. I tweeted at PETA to take it down. No response on Twitter, but the retweets got very lively. I emailed them at 3 o’clock and got back a very long automatic response (they must be getting a lot of messages about this now).

I’ve just been able to get back to it. PETA's email says parents have told them they saw improvements in their child’s behaviour when they took milk out of their diet, and it mentions two studies into dairy products and autism – one of which actually concluded that the evidence is inconclusive, and the other, from 20 years ago, concluded only that researchers could “hypothesise a relationship between food allergy and infantile autism”. PETA’s email to me also said that “Research has linked dairy consumption to higher rates of ovarian cancer, prostate cancer, and diabetes. Dairy products have also been linked to juvenile diabetes, allergies, constipation, and obesity.” There’s no evidence given in support of any of these further claims. I’m going to have to phone them and ask them to meet me on Monday.

In the meantime, if you have a moment, whether you’re a vegan who doesn’t need misinformation to support that or are affected by autism and able to tell PETA you’re sick of people using it as a peg to hang all causes on, or if you just want to challenge people putting rubbish into circulation, please contact them yourselves at info@peta.org. By all means copy me your thoughts (slane@senseaboutscience.org) and I’ll update this in coming days.

UPDATE Monday 2nd June 2014 16:10: Ben Williamson from PETA’s UK office called me just now. He couldn't answer my questions, so I need to hear from him again when he can. He hadn’t looked at the studies PETA supplied as evidence to support its claim, so when I set out the limitations of those (see above) and said “so, no evidence, then” he had no answer. He did however tell me that they have heard stories from parents about how helpful they found it to eliminate dairy foods from children’s diets, and that PETA’s role is to provide parents with valuable information like that. This was his answer too when I told him that parents and families have told us it’s not OK to use extraordinary and unsupported claims about autism to get attention for PETA’s agenda. He didn't acknowledge at all that claims like PETA's add to the pressure people affected by autism already feel to wade through conflicting claims about the condition.

He was clear that the campaign started in the US and is not a PETA UK campaign but he didn’t know if PETA UK has a position on it – he’s going to find out if PETA UK supports the claims by the end of today. I said they should ask PETA US to retract the claims. I'll let you know his response.

UPDATE Monday 2nd June 17:20: Ben Williamson has emailed me following our phone call. He hasn't directly answered my questions but says that PETA UK believes that PETA US’s website “provides parents with potentially valuable information” and that “research has shown that a dairy-free diet may help kids who have autism.” I told him on the phone that the studies they cite don't show this.

His email also says that consumption of milk contributes to “asthma, constipation, recurrent ear infections, iron deficiency, anaemia and even some cancers” and that “cows' milk might be the perfect food for baby cows, but it might also be making kids sick.” I’ve asked him to show me some evidence for these claims – as far as I know dairy products have actually been shown to protect against some cancers.