Blog


 

Context is crucial

Guest post by Grace Gottlieb, Sense About Science volunteer (@Grace_Gottlieb)

“Lack of sleep linked with depression”

“Divorce linked to smoking” 

“Mother’s diet linked to childhood obesity”

Do these headlines look familiar? Newspapers, health websites and adverts for the latest ‘superfoods’ regularly claim that a new ‘link’ has been discovered. But it’s worth looking a little deeper when you see these sorts of headlines, because with ‘link’ stories, context is crucial.

The problem is that the word ‘link’ implies causation, and assuming a ‘link’ is causal can lead you to the wrong conclusion. For example, a ‘link’ between sleeping with the light on and shortsightedness in young children made people think that leaving a night light on makes you shortsighted. It turns out that the parents of shortsighted children are more likely to be shortsighted themselves (shortsightedness is partly genetic), and they are also more likely to leave the light on in their children’s rooms. So night lights don’t cause shortsightedness; it is shortsightedness in parents that leads both to shortsightedness in their children and to a night light being used.

It’s important to look at the context of a correlation to find out whether there really is a causal ‘link’. Try keeping in mind the principle that “correlation does not imply causation” to help weigh up whether a ‘link’ story is credible.

The dose makes the poison

It is becoming common knowledge that red wine and grapes contain a chemical called resveratrol, which is ‘linked’ to longevity and cancer prevention. So will drinking lots of red wine make you live into your 90s and never get cancer? Unfortunately, it probably won’t.

There’s some evidence from animal tests and experiments on cells grown in the lab that resveratrol may have an anti-cancer effect, but very little is known about its effects in humans. And many of the studies showing this anti-cancer effect have used doses of resveratrol that are far higher than the dose you could get in your diet. So drinking red wine will never be an effective way to prevent cancer.

Resveratrol is also marketed as an ‘anti-aging elixir’ in beauty products. Dr Cat Ball from the Biochemical Society decided to ask for evidence behind one of these products. She was sent a study which found that a resveratrol-containing solution increased lifespan – in mice, with the caveat that the researchers weren’t sure whether it was the resveratrol that was actually causing the effect.


Dose is just one part of the context behind a ‘link’ story. If you live in the UK and sit in the sun for just 10 minutes each day, your chances of developing skin cancer will be much lower than those of someone living in Australia who sunbathes daily for hours.

And surely it’s obvious that putting a product containing a specific ingredient on your skin won’t have the same effect as eating something with that ingredient? Unfortunately, this is an assumption we see all too often. Mathilde Thomas, founder of beauty brand Caudalie, advised in a Daily Mirror article that you should eat grapes because they are “full of antioxidants, which help to give you really beautiful skin”. I asked Caudalie for evidence, and they got back to me explaining how antioxidants act by “inhibiting the oxidation of other molecules”. I was then told that: “Grapeseeds are very rich in polyphenols, such as [those in] our cosmetic range. This is why you can prevent oxidation by eating grapes and/or applying our creams and serums”. But there’s no reason to assume that applying something to your face will have the same effect as eating it. Caudalie failed to provide evidence that polyphenols, when consumed in grapes, either end up in your skin or can in any way work to make your skin “beautiful”.

Asking for Evidence can help

Next time you see a ‘link’ story or come across extraordinary claims like the one made by Caudalie, you can Ask for Evidence to find out the context, and see how robust the ‘link’ actually is – extraordinary claims need extraordinary evidence. 



Extraordinary claims need extraordinary evidence: The importance of skepticism

Guest post by Grace Gottlieb, Sense About Science volunteer (@Grace_Gottlieb)

Check any newspaper today and you will find advice on what to eat or drink to lose weight or look good. Advertisers exploit our fixation with diet to sell products – from superfoods and “detox” treatments to diet supplements and even an extraordinary weight-loss string. Buying into these fads can affect not just your bank balance but, more importantly, your health. So make sure you Ask for Evidence behind claims rather than believing everything you read.

Trust me, I’m a celebrity… Celebrities and Science 2013

Many companies use celebrities to sell their products. You have probably seen Vitabiotics adverts featuring athletes with quotes like “Anyone competing or living a healthy lifestyle needs Wellman in their life. I'm a champion and I recommend it.” When Sparkle Ward asked Vitabiotics for evidence, they responded that the claims made in their adverts are based on individual testimonies only, not scientific evidence. That’s clearly not a great tagline for selling a product! Companies also promote their products as “natural” and “chemical-free”. Well, “natural” doesn’t necessarily mean safe, and everything is made of chemicals. Apples, for example, contain toxic chemicals that would kill you if taken in a large enough dose.

But it’s “scientifically proven”…

Not everyone buys into celebrity endorsements or the idea that “natural” is best. But advertisers have another tactic up their sleeve – scientific-sounding claims to make you think there’s evidence. Danone yoghurt brand Actimel, for example, was advertised as “scientifically proven to help support your kids’ defences”, but the Advertising Standards Authority (ASA) ruled that its evidence “wasn’t good enough to prove the claim”. The ASA can’t stop all misleading claims though, so it’s a good idea to be skeptical when you hear phrases like “clinically proven” or “dermatologically tested” – vague terms which tell you nothing about the reliability of the evidence.

Often there is an element of truth to scientific claims, but it is twisted or exaggerated. In a Daily Mirror article on beauty secrets, a co-founder of Lush declared: “Getting more honey in your diet is great for the face. It’s good at helping the skin absorb moisture.” It’s true that honey itself absorbs moisture, but does eating honey help the skin absorb moisture? I asked Lush for evidence and they replied to say that the Lush co-founder “doesn’t remember saying that”, but added that honey is “much easier for our body to digest” than sugar and that she “feels that it fuels the muscles for longer”. Would you buy a product if the evidence to back it up was based on someone’s feelings?

Exceptional ingredients – are they special or just trendy?

Lush also told me that “Honey has been used as a skin ointment for over 2000 years”, as if that means it must be good for your skin. (Lead, which is poisonous, has been used in cosmetics since Roman times, but they didn’t mention that.) A lot of fuss is made out of the fact that honey and other products are traditional. For example, the Director of the Honey Research Unit at the University of Waikato in New Zealand has said, “I’m a great believer that if anything is traditional then it works. There may be no rational explanation, but that’s because we haven’t found it”. Flawed reasoning like this is at the core of the craze over Manuka honey, which supposedly has “curative powers” superior to regular honey. This belief is so widespread that manufacturers get away with selling it in small jars priced at £60, and there are even fake Manuka honeys on the market – normal honeys falsely advertised as Manuka so sellers can up the price.

Want to lose weight? Ask for Evidence before buying into fad diet claims

Misrepresenting the science seems to be the basis of all fad diets. Some diets have nothing to do with sound science, such as the clay diet, which is said to “remove negative isotopes, helping you detox and stay in shape”. Diets like this are so ridiculous that they might as well have been made up. In fact, some are indistinguishable from diets that have just been made up. If you don’t believe it, have a go at the Spoof Diets quiz, which challenges you to tell real fad diets from fictional ones – it’s harder than you might expect. So how are you supposed to tell what’s good for you and what to avoid? The answer is to Ask for Evidence. Scientists from the Voice of Young Science network helped us look at the evidence behind 13 diets – fad and fiction – and the results show just how important it is to Ask for Evidence behind claims, especially where your health is concerned.



Who will take responsibility?

The Government has known that the official statistics are wrong since the Royal Statistical Society raised the issue in the wake of the 2009 H1N1 pandemic. Since then the issue has been bounced around departments, until we, along with charities that use and fund research affected by it, had to write to the Prime Minister calling for action.

I have been trying to track down a response since we sent the letter a month ago. In that time, I’ve repeatedly called Number 10, the Department of Health, the Ministry of Justice and the Home Office, each of which has passed it to someone else for a response at least once. Number 10 has most recently tasked the Ministry of Justice with providing a response and told us that Number 10 won’t be able to help us any more. Today, the Ministry of Justice told me that a draft response has just been sent to the Minister’s office for sign-off and that we could possibly expect a response by early next week.

Of course, I’ve now been told several times that we could expect a response very soon, but one has yet to materialise. I’ll believe it when I see it; in the meantime, the official statistics are still wrong.


The "accuracy" of screening tests

This is a guest post by Professor David Colquhoun, FRS

Anything about Alzheimer’s disease is front-line news in the media. No doubt that had not escaped the notice of King’s College London when they issued a press release about a recent study of a blood test for predicting the development of dementia. It was widely hailed in the media as a breakthrough in dementia research. The BBC report, for example, was far from accurate. The main reason for the inaccuracies is, as so often, the press release. It said:

"They identified a combination of 10 proteins capable of predicting whether individuals with MCI would develop Alzheimer’s disease within a year, with an accuracy of 87 percent"

The original paper says:

"Sixteen proteins correlated with disease severity and cognitive decline. Strongest associations were in the MCI group with a panel of 10 proteins predicting progression to AD (accuracy 87%, sensitivity 85% and specificity 88%)."

What matters to the patient is the probability that, if they come out positive when tested, they will actually get dementia. The Guardian quoted Dr James Pickett, head of research at the Alzheimer’s Society, as saying

"These 10 proteins can predict conversion to dementia with less than 90% accuracy, meaning one in 10 people would get an incorrect result."

That statement simply isn't right (or, at least, it's very misleading). The proper way to work out the relevant number has been explained in many places – I did it recently on my blog.

The easiest way to work it out is to make a tree diagram. The diagram is like that on my blog, but with a sensitivity of 85% and a specificity of 88%, as specified in the paper.

In order to work out the number we need, we have to specify the true prevalence of people who will develop dementia in the population being tested. In the tree diagram, this has been taken as 10%. The diagram shows that, out of 1000 people tested, there are 85 + 108 = 193 with a positive test result. Of these 193, rather more than half (108) are false positives, so if you test positive there is a 56% chance that it's a false alarm (108/193 = 0.56). A false discovery rate of 56% is far too high for a good test. This figure of 56% seems to be the basis for a rather good post by NHS Choices with the title “Blood test for Alzheimer’s ‘no better than coin toss’”.

If the prevalence were taken as 5% (a value that's been given for the over-60 age group), that fraction of false alarms would rise to a disastrous 73%.
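For anyone who wants to check the arithmetic, here is a minimal sketch of the tree-diagram calculation in Python, using the sensitivity (85%) and specificity (88%) reported in the paper. The function and variable names are illustrative, not taken from the study; the sketch simply reproduces the 56% and 73% figures quoted above.

```python
# Sketch of the tree-diagram arithmetic described above.
# Sensitivity and specificity are taken from the paper; names are illustrative.

def false_discovery_rate(sensitivity, specificity, prevalence, n=1000):
    """Fraction of positive tests that are false alarms."""
    affected = n * prevalence                      # people who will develop dementia
    unaffected = n - affected                      # people who will not
    true_positives = affected * sensitivity        # e.g. 100 * 0.85 = 85
    false_positives = unaffected * (1 - specificity)   # e.g. 900 * 0.12 = 108
    return false_positives / (true_positives + false_positives)

for prevalence in (0.10, 0.05):
    fdr = false_discovery_rate(0.85, 0.88, prevalence)
    print(f"prevalence {prevalence:.0%}: false discovery rate {fdr:.0%}")

# prevalence 10%: false discovery rate 56%  (108 of 193 positives are false alarms)
# prevalence 5%: false discovery rate 73%
```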

How are these numbers related to the claim that the test is "87% accurate"? That claim was parroted in most of the media reports, and it is why Dr Pickett said "one in 10 people would get an incorrect result".

The paper itself didn't define "accuracy" anywhere, and I wasn't familiar with the term in this context (though Stephen Senn pointed out that it is mentioned briefly in the Wikipedia entry for Sensitivity and Specificity). The senior author confirmed that "accuracy" means the total fraction of tests, positive or negative, that give the right result. We see from the tree diagram that, out of 1000 tests, there are 85 correct positive tests and 792 correct negative tests, so the accuracy (with a prevalence of 0.1) is (85 + 792)/1000 = 88%, close to the value that's cited in the paper.

Accuracy, defined in this way, seems to me not to be a useful measure at all. It conflates positive and negative results, and they need to be kept separate to understand the problem. Inspection of the tree diagram shows that it can be expressed algebraically as

accuracy = (sensitivity × prevalence) + (specificity × (1 − prevalence))

It is therefore merely a weighted mean of sensitivity and specificity (weighted by the prevalence). With the numbers in this case, it varies from 0.88 (when prevalence = 0) to 0.85 (when prevalence = 1). Thus it will inevitably give a much more flattering view of the test than the false discovery rate.
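As a quick check of that formula (again a sketch, with an illustrative helper name rather than anything from the paper), the weighted mean reproduces the figure obtained above by counting, (85 + 792)/1000 = 87.7%:

```python
# Sketch: "accuracy" as a prevalence-weighted mean of sensitivity and specificity.
# Numbers are those used in the post; the helper name is illustrative.

def accuracy(sensitivity, specificity, prevalence):
    return sensitivity * prevalence + specificity * (1 - prevalence)

print(f"{accuracy(0.85, 0.88, 0.10):.1%}")  # 87.7% -- matches (85 + 792)/1000,
                                            # close to the 87% quoted in the paper,
                                            # however high the false discovery rate
```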

No doubt it is too much to expect that a hard-pressed journalist would have time to figure this out, though they could presumably have found time to contact someone who understands it. It is clear that it should have been explained in the press release. It wasn't.

In fact, reading the paper shows that the test was not being proposed as a screening test for dementia at all. It was proposed as a way to select patients for entry into clinical trials. The population that was being tested was very different from the general population of old people, being patients who come to memory clinics in trials centres (the potential trials population).

How best to select patients for entry into clinical trials is a matter of great interest to people who are running trials. It is of very little interest to the public. So all this confusion could have been avoided if King's had simply refrained from issuing a press release for a paper like this.

I guess universities think that PR is more important than accuracy. That's a bad mistake in an age when pretensions get quickly punctured on the web.

There is more discussion of this work in the follow-up on my blog.


Hightable with Tracey Brown: Asking for Evidence to make Sense About Science

Hephzi Tagoe, Science Writer at Oxbridge Biotech Roundtable and past volunteer at Sense About Science, interviewed Tracey Brown about her job and about changing public discussion of science.

You have been director of Sense About Science for over 10 years – what is the secret to your success?

A large part of it is probably that I am, and we are, hard on ourselves, always asking whether we’re really making a difference. I hate complacency. It means I tend to see the mountain in front of us rather than being pleased with what we’ve climbed, which probably drives the team crazy at times, but it gives us a self-critical culture that drives us on.

Continue reading >