Extraordinary claims need extraordinary evidence: The importance of skepticism

Guest post by Grace Gottlieb, Sense About Science volunteer (@Grace_Gottlieb)

Check any newspaper today and you will find advice on what to eat or drink to lose weight or look good. Advertisers exploit our fixation with diet to sell products – from superfoods and “detox” treatments to diet supplements and even this extraordinary weight loss string. Buying into these fads can affect not just your bank balance but, more importantly, your health. So make sure you Ask for Evidence behind claims before believing everything you read.

Trust me, I’m a celebrity… Celebrities and Science 2013

Many companies use celebrities to sell their products. You have probably seen Vitabiotics adverts where athletes feature with quotes like “Anyone competing or living a healthy lifestyle needs Wellman in their life. I'm a champion and I recommend it.” When Sparkle Ward asked Vitabiotics for evidence, they responded saying that the claims made in their adverts are based on individual testimonies only and not scientific evidence. That’s clearly not a great tagline for selling a product! Companies also promote their products as “natural” and “chemical-free”. Well, “natural” doesn’t necessarily mean safe, and everything is made of chemicals. Apples, for example, contain toxic chemicals which would kill you if taken in a large enough dose.

But it’s “scientifically proven”…

Not everyone buys into celebrity endorsements or the idea that “natural” is best. But advertisers have another tactic up their sleeve – scientific-sounding claims to make you think there’s evidence. Danone yoghurt brand Actimel, for example, was advertised as “scientifically proven to help support your kids’ defences” but the Advertising Standards Authority (ASA) ruled that its evidence “wasn’t good enough to prove the claim”. The ASA can’t stop all misleading claims though, so it’s a good idea to be skeptical when you hear claims like “clinically proven” or “dermatologically tested” – vague terms which tell you nothing about the reliability of the evidence.

Often there is an element of truth to scientific claims, but it is twisted or exaggerated. In a Daily Mirror article on beauty secrets, a co-founder of Lush declared “Getting more honey in your diet is great for the face. It’s good at helping the skin absorb moisture.” It’s true that honey itself absorbs moisture but does eating honey help the skin absorb moisture? I asked Lush for evidence and they replied to say that the Lush co-founder “doesn’t remember saying that” but added that honey is “much easier for our body to digest” than sugar and she “feels that it fuels the muscles for longer”. Would you buy a product if the evidence to back it up was based on someone’s feelings?

Exceptional ingredients – are they special or just trendy?

Lush also told me that “Honey has been used as a skin ointment for over 2000 years”, as if that means it must be good for your skin. (Lead, which is poisonous, has been used in cosmetics since Roman times, but they didn’t mention that.) A lot of fuss is made out of the fact that honey and other products are traditional. For example, the Director of the Honey Research Unit at the University of Waikato in New Zealand has said, “I’m a great believer that if anything is traditional then it works. There may be no rational explanation, but that’s because we haven’t found it”. Flawed reasoning like this is at the core of the craze over Manuka honey, which supposedly has “curative powers” superior to regular honey. This belief is so widespread that manufacturers get away with selling it in small jars priced at £60, and there are even fake Manuka honeys on the market – normal honeys falsely advertised as Manuka so sellers can up the price.

Want to lose weight? Ask for Evidence before buying into fad diet claims

Misrepresenting the science seems to be the basis of all fad diets. Some diets have nothing to do with sound science, such as the clay diet which is said to “remove negative isotopes, helping you detox and stay in shape”. Diets such as this one are so ridiculous that they might as well have been made up. In fact, some are indistinguishable from diets that have just been made up. If you don’t believe it, have a go at the Spoof Diets quiz, which challenges you to tell real fad diets from fictional ones – it’s harder than you might expect. So how are you supposed to tell what’s good for you and what to avoid? The answer is to Ask for Evidence. Scientists from the Voice of Young Science network helped us look at the evidence behind 13 diets – fad and fiction – and the results show just how important it is to Ask for Evidence behind claims, especially where your health is concerned.

Ask for Evidence

Who will take responsibility?

The Government has known that the official statistics are wrong since the Royal Statistical Society raised the issue in the wake of the 2009 H1N1 pandemic. Since then the issue has been bounced around departments, until we and the charities that use and fund the affected research had to write to the Prime Minister calling for action.

I have been trying to track down a response since we sent the letter a month ago. In that time, I’ve repeatedly called Number 10, the Department of Health, the Ministry of Justice and the Home Office, each of which has sent it to someone else for a response at least once. Number 10 has most recently tasked the Ministry of Justice with providing a response and told us it won’t be able to help us any more. Today, the Ministry of Justice told me that a draft response has just been sent to the Minister’s office for sign-off and we could possibly expect a response by early next week.

Of course, I’ve now been told several times that we could expect a response very soon, but one has yet to materialise. I’ll believe it when I see it; in the meantime, the official statistics are still wrong.

The "accuracy" of screening tests

This is a guest post by Professor David Colquhoun, FRS

Anything about Alzheimer’s disease is front-page news in the media. No doubt that had not escaped the notice of King’s College London when it issued a press release about a recent study of a blood test for predicting the development of dementia. It was widely hailed in the media as a breakthrough in dementia research – see, for example, the far-from-accurate BBC report. The main reason for the inaccuracies is, as so often, the press release. It said:

"They identified a combination of 10 proteins capable of predicting whether individuals with MCI would develop Alzheimer’s disease within a year, with an accuracy of 87 percent"

The original paper says

"Sixteen proteins correlated with disease severity and cognitive decline. Strongest associations were in the MCI group with a panel of 10 proteins predicting progression to AD (accuracy 87%, sensitivity 85% and specificity 88%)."

What matters to the patient is the probability that, if they come out positive when tested, they will actually get dementia. The Guardian quoted Dr James Pickett, head of research at the Alzheimer’s Society, as saying

"These 10 proteins can predict conversion to dementia with less than 90% accuracy, meaning one in 10 people would get an incorrect result."

That statement simply isn't right (or, at least, it's very misleading). The proper way to work out the relevant number has been explained in many places - I did it recently on my blog.

The easiest way to work it out is to make a tree diagram. The diagram is like that on my blog, but with a sensitivity of 85% and a specificity of 88%, as specified in the paper.

In order to work out the number we need, we have to specify the true prevalence of people who will develop dementia in the population being tested. In the tree diagram, this has been taken as 10%: out of 1000 people tested, 100 will develop dementia and 85 of them test positive (85%, the sensitivity); of the 900 who won’t, 108 also test positive (12%, one minus the specificity). That gives 85 + 108 = 193 people with a positive test result. Out of these 193, rather more than half (108) are false positives, so if you test positive there is a 56% chance that it’s a false alarm (108/193 = 0.56). A false discovery rate of 56% is far too high for a good test. This figure of 56% seems to be the basis for a rather good post by NHS Choices with the title “Blood test for Alzheimer’s ‘no better than coin toss’”.

If the prevalence were taken as 5% (a value that's been given for the over-60 age group) that fraction of false alarms would rise to a disastrous 73%.
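The tree-diagram arithmetic is easy to check with a short script. This is not from the original post, just a minimal sketch using the paper’s figures (sensitivity 85%, specificity 88%) and the two prevalence values discussed above:

```python
# False discovery rate: the fraction of positive test results that are
# false alarms, given sensitivity, specificity and prevalence.

def false_discovery_rate(sensitivity, specificity, prevalence):
    """Probability that a positive result is a false positive."""
    true_positives = sensitivity * prevalence            # e.g. 0.85 * 0.10 = 0.085
    false_positives = (1 - specificity) * (1 - prevalence)  # e.g. 0.12 * 0.90 = 0.108
    return false_positives / (true_positives + false_positives)

if __name__ == "__main__":
    for prev in (0.10, 0.05):
        fdr = false_discovery_rate(0.85, 0.88, prev)
        print(f"prevalence {prev:.0%}: false discovery rate {fdr:.0%}")
```

Running it reproduces the 56% figure at 10% prevalence and the 73% figure at 5%.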

How are these numbers related to the claim that the test is "87% accurate"? That claim was parroted in most of the media reports, and it is why Dr Pickett said "one in 10 people would get an incorrect result".

The paper itself didn't define "accuracy" anywhere, and I wasn't familiar with the term in this context (though Stephen Senn pointed out that it is mentioned briefly in the Wikipedia entry for Sensitivity and Specificity). The senior author confirmed that "accuracy" means the total fraction of tests, positive or negative, that give the right result. We see from the tree diagram that, out of 1000 tests, there are 85 correct positive tests and 792 correct negative tests, so the accuracy (with a prevalence of 0.1) is (85 + 792)/1000 = 87.7%, close to the value that's cited in the paper.

Accuracy, defined in this way, seems to me not to be a useful measure at all. It conflates positive and negative results and they need to be kept separate to understand the problem. Inspection of the tree diagram shows that it can be expressed algebraically as

accuracy = (sensitivity × prevalence) + (specificity × (1 − prevalence))

It is therefore merely a weighted mean of sensitivity and specificity (weighted by the prevalence). With the numbers in this case, it varies from 0.88 (when prevalence = 0) to 0.85 (when prevalence = 1). Thus it will inevitably give a much more flattering view of the test than the false discovery rate.
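That weighted-mean relationship can be verified numerically (again a sketch, not part of the original post; symbols as defined above, with the paper’s sensitivity and specificity):

```python
# Accuracy as a prevalence-weighted mean of sensitivity and specificity.

def accuracy(sensitivity, specificity, prevalence):
    """Total fraction of tests (positive or negative) giving the right result."""
    return sensitivity * prevalence + specificity * (1 - prevalence)

if __name__ == "__main__":
    # Sweeps from specificity (prevalence 0) to sensitivity (prevalence 1),
    # passing through 0.877 at the 10% prevalence used in the tree diagram.
    for prev in (0.0, 0.1, 1.0):
        print(f"prevalence {prev:.0%}: accuracy {accuracy(0.85, 0.88, prev):.3f}")
```

Because it never drops below the smaller of the two values, this number stays high even when, as here, more than half of the positive results are false alarms.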

No doubt it is too much to expect that a hard-pressed journalist would have time to figure this out, though they could at least have contacted someone who understands it. It is clear that it should have been explained in the press release. It wasn't.

In fact, reading the paper shows that the test was not being proposed as a screening test for dementia at all. It was proposed as a way to select patients for entry into clinical trials. The population being tested was very different from the general population of old people: it consisted of patients who come to memory clinics in trials centres (the potential trials population).

How best to select patients for entry into clinical trials is a matter of great interest to people who are running trials. It is of very little interest to the public. So all this confusion could have been avoided if King’s had simply refrained from issuing a press release for a paper like this.

I guess universities think that PR is more important than accuracy. That's a bad mistake in an age when pretensions get quickly punctured on the web.

There is more discussion of this work in the follow-up on my blog.

Hightable with Tracey Brown: Asking for Evidence to make Sense About Science

Hephzi Tagoe, Science Writer at Oxbridge Biotech Roundtable, and past volunteer at Sense About Science, interviewed Tracey Brown about her job and changing public discussion about science.

You have been director of Sense About Science for over 10 years; what is the secret to your success?

A large part of it is probably that I am, and we are, hard on ourselves, always asking whether we’re really making a difference. I hate complacency. It means I tend to see the mountain in front of us rather than being pleased with what we’ve climbed, which probably drives the team crazy at times, but it gives us a self-critical culture that drives us on.


EU GM plant cultivation - how did we get to this weird situation?

The EU’s Environment Council (that’s the group of environment Ministers from the 28 member states) this morning voted in favour of a proposal that would allow member states to opt out of growing a genetically modified crop if one is approved for cultivation. The proposal was supported by 26 countries with two abstaining. So member states which will probably never allow GM crops to be grown and those which are keen to get started voted for this.

But a lot of people don’t like it. Anti-GM campaign groups are against it because they think it will be challenged under trade laws. Companies don’t like it because they think it will damage innovation and open them up to legal action for cross border contamination. People who support Europe as an idea don’t like it because it undermines the principle of European wide regulation.

How did we get to such a strange situation?

The Cultivation proposal represents an attempt by European politicians to find a way around the fact that European regulation of crops bred through one kind of genetic alteration (GM) differs, on no sound scientific basis, from the regulation of new crops bred using another kind of genetic alteration (e.g. mutagenesis). The result has been years of approvals on a scientific and safety basis, which have then been thrown to the winds as different nations delay political approval because of their own national politics. These delays put the European Commission in the dock at the European Court of Justice, which ruled last year that a regulatory approval process could not be arbitrarily held up in that way.

The Cultivation proposal is the result of responding to that situation – a command to have a rational system confronted by the reality of politically motivated delays. By allowing countries to opt out at the beginning of a submission for cultivation approval, those national governments that want to continue to be seen to oppose GM can do so (e.g. France, where the government noisily backs the German greens’ anti-GM rhetoric in the hope that German reps won’t make too much fuss about its pro-nuclear position), without stopping others from approving and without technically breaking the fundamental terms of Europe and the common market.

The next step is a second reading at the European Parliament, and the Commissioner thinks everything will be wrapped up by Christmas.