There is an understandable focus on the future in science policy discussions. We are often concerned with how investment in science and other research will contribute to future economic growth, health and well-being, and sustainable development. How should we invest now to bring about the future we want to see? What types of science should we support? How should that science be conducted? But the evidence that we draw upon is often about the past. What has been the result of previous investment? What impact did policies or the environment have previously? The science of science policy is largely a historical science.
Too often the analysis of the past that is used in discussions about science policy is flawed, based on anecdote or on partial and distorted narratives. These stories are modified to fit present prejudice and don’t provide the reliable representation of the past that evidence-based science policy requires.
The Haldane Principle is a classic example of a myth about science policy itself. It is held up as the great bastion of UK science policy, but often without any critical analysis of where it comes from or of its history. This thorough essay by David Edgerton should be compulsory reading for all researchers and science policy practitioners. The result of this lack of historical context is that debates hinge on adherence, or not, to this mythical principle, which casts scientific decision making into an us (scientists) versus them (politicians) framework. We need instead a more nuanced debate about decision making, one that recognises the many other voices that must be balanced in the question of who decides on science.
Poor understanding of the historical context also leads to inaccurate notions of how scientific discovery has happened in the past. There is a persistent narrative that science contributes most when scientists are left to pursue their curiosity, unencumbered by considerations of application. But is this really always true?
Maxwell’s magnificent work of the 1860s is an excellent example. Rather than a stately progression from abstract theory to solid application, it was the product of a web of markets, technologies, labs and calculators in the workshop of the world.
In sum, On Physical Lines of Force is an odd text to use as an example of the unyielding purity of physical science. Maxwell’s formulae did not appear in their most familiar form until almost 25 years after its publication. The four famous equations linking electromagnetic forces and fluxes owe their elegant and economical vector form to a brilliant London telegraphist, Oliver Heaviside, who published them in 1885 in The Electrician, a trade journal for electrical engineers and businessmen.
As Peter Medawar wrote in the 1960s, we need to be careful not to get carried away by an excessively romantic notion of the pursuit of science. His thinking was explained and amplified by Tom Webb recently.
Sound analysis is also important in understanding how innovation has worked in the past. For example, challenge prizes for innovation are often mentioned in the context of Harrison and the Longitude Prize. But as the Board of Longitude project shows, the story is rather more complicated than is often appreciated, and indeed “There was no such thing as the Longitude Prize”.
Historical evidence is important for the development of science policy, but we need to make sure it is the best evidence available. Experts in the history of science have a key role to play in the policy-making process of today.
April 12, 2012
What are the key outstanding questions in science policy? This challenging question has recently been addressed in a paper describing work led by the Cambridge Centre for Science and Policy (CSaP). The resulting list is interesting and comprehensive and will be a useful guide both to academics working on science policy and to science policy practitioners. There have been some criticisms that the work is insufficiently reflective, or that there are already, at least partial, answers to some of the questions in the literature, but I think the authors are to be congratulated on pulling together a comprehensive list.
Interestingly, a collaborative process was used to generate the list, building on earlier work to develop similar lists for other disciplines. The paper is an excellent example of what can be achieved by drawing on a range of experts in a systematic manner. I can’t help feeling, though, that an opportunity was missed. The process was entirely focussed on experts, but it could have really benefitted from including some public dialogue. What do people think about how research evidence is used in policy making? How do people want to see scientific evidence balanced with other factors? Exploring these and other issues with people whose expertise and experience lies outside of the science policy world could have refined the research questions, or indeed identified new areas for research.

Some may think that engaging the public with abstract policy issues is too difficult, but I don’t believe this is the case. Indeed we need to open up the processes of science policy more, and public engagement is an important aspect of that. Our recent experience at RCUK is that, while challenging, it is perfectly feasible to engage on research policy. Working with Sciencewise, we have recently carried out a public dialogue on data openness that is generating useful insight for policy development. How we can further improve the approaches we use to hold useful dialogues with members of the public on science policy matters is itself an interesting question for further research.
There is also real scope for a second phase to the CSaP project. The research questions in the paper are focussed largely on ‘science for policy’ – how scientific evidence can be effectively used in policy making – and this is clearly an area of strong public interest. But equally I think that a similar exercise would be really valuable to explore the key questions for ‘policy for science’. The Science of Science Policy initiative in the US has made an excellent start in this direction, especially with its research roadmap (pdf). This roadmap sets out some high level questions for research, framed largely from the perspective of policy makers in science and innovation policy. The roadmap could form the basis of further work, along the lines of the CSaP study, to further elaborate the questions and bring them into a European context. Of course this would only be valuable if it stimulates further research targeted at the questions that emerge, but strengthening the evidence base on which we make research policy decisions is likely to bring significant rewards. As was recently argued by Pierre Azoulay in Nature:
It is time to turn the scientific method on ourselves. In our attempts to reform the institutions of science, we should adhere to the same empirical standards that we insist on when evaluating research results.
October 29, 2009
Professor Nutt is in the news again. David Nutt is one of the UK’s leading neuroscientists, and chairs the Advisory Council on the Misuse of Drugs, which advises the UK Government on drugs classification. He first hit the headlines earlier this year, when he pointed out that the drug ecstasy was no more harmful than horse riding and got a dressing down from the then home secretary. Fiona Fox wrote a great piece about this on her blog.
Now, he has criticised a government decision on the classification of cannabis, pointing out that it compares favourably to the legal drugs alcohol and tobacco on many measures of harm. These debates around drug classification highlight two key issues that we need to face up to if we are serious about evidence-based policy making:
- Policy debates seldom start from a blank sheet of paper. So the consequences of taking an evidence-based approach can be wide-ranging. In the case of drugs policy, the big problem is the existence of two addictive and harmful drugs that are widely accepted in society – alcohol and tobacco. Whatever the evidence says about relative harm, the choices of either banning these substances or adding to the list of legal drugs are both politically fraught.
- Evidence often goes against the prejudices that people hold, or the so-called common sense view. The real value of taking an evidence-based approach is to challenge the accepted view. Evidence that confirms the accepted view is nice, but it is the challenge that should really make a difference. The irony is that when evidence, however rigorous, goes against the commonly held view, it is most likely to be ignored.
I firmly believe we need to face up to these issues, as evidence-based approaches will lead us to better outcomes. And I also hope that Professor Nutt will be treated a little better this time. We need to hear the evidence, however politically unpalatable it might be.
Update [30 October]:
If government decisions are to be robust, they need to be based on all relevant evidence. Science and engineering are key elements of this evidence base.
February 16, 2009
There is an interesting post on the subject of ‘scientific authoritarianism’ on the Prometheus science policy blog. The post is based on a recent article in the Guardian newspaper by James Hansen of NASA. The point of the post is that Hansen is taking too strong a line with Government, implying that the scientific evidence offers only one policy option (in this case banning the construction of new coal-fired power stations). There is obviously some debate about Hansen’s actual position – the comments are well worth a read – but I think the general point is well made. The way in which scientists present their evidence in the political arena matters, and research evidence needs to be seen in the context of other considerations. Taking too strong a line on ‘what the evidence says we must do’ can sometimes be counter-productive.