October 20, 2012
Systematically gathered and analysed evidence is important, because it challenges the myths and prejudices that we all have a tendency to develop. Sometimes challenging these myths matters for innovation or policy; other times the results are just plain interesting. In the latter class, a fascinating study was reported by the BBC this week. The study shows that, contrary to a widely held myth, flying ants do not all emerge on the same day each summer. In fact, emergence is spread over a more extended period.
As well as providing an interesting and counter-intuitive result, the study caught my eye because it is an example of citizen science. The data were recorded by members of the public, giving the researchers an extended capacity to observe the ant behaviour. The study would have been far too expensive and difficult to carry out without this contribution from volunteers. This type of engagement is a fantastic way to get interesting research done, and also to engage those outside the research community in the process.
Great as citizen science is, we do need to be careful not to start believing another myth: that citizen science is limited to volunteers acting as data collectors or sophisticated image processors. Citizen science also needs to encompass involving people from outside the research community in shaping the nature and direction of research. While there have been some moves to address this, another myth stands in the way: that allowing greater public involvement will reduce the quality or value of the research. Perhaps we need some experiments to put that myth to the test too.
October 18, 2012
There is an excellent opinion piece in the latest edition of Research Fortnight by Professor Doug Kell on text-mining and open access. As, for many, the article will be behind a paywall (the irony…), I thought I would summarize the argument and post a few quotes here.
The argument goes like this:
- New research findings are being added to the body of literature at a rate that means it is impossible for anyone to read it all, let alone assimilate and make sense of it all. The only solution is to use text-mining.
- There are clear benefits for researchers, business and policy-makers in using text-mining of the scientific literature. For example, a recent report from JISC concludes that “there is clear potential for significant productivity gains, with benefit both to the sector and to the wider economy”.
- But for text-mining to be effective, access is needed to the full text. Abstracts are not enough, and, for the rapid interpretation of new research, embargo periods are a problem.
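To make the idea concrete, here is a minimal sketch of the simplest kind of text-mining the argument refers to: counting term frequencies across a corpus of full-text documents. The tiny corpus, stopword list, and tokenizer below are invented purely for illustration, not drawn from the article or any real mining pipeline.

```python
import re
from collections import Counter

# Toy corpus standing in for full-text articles (invented for illustration).
documents = [
    "Text mining of full-text articles reveals gene-disease associations.",
    "Abstracts alone miss associations buried in methods and results sections.",
    "Open access to full text enables large-scale literature mining.",
]

def tokenize(text):
    """Lowercase the text and split it into word tokens, keeping hyphenated terms."""
    return re.findall(r"[a-z]+(?:-[a-z]+)*", text.lower())

# A minimal stopword list; real systems use much larger ones.
STOPWORDS = {"of", "to", "in", "and", "the", "alone"}

# Count how often each non-stopword term appears across the whole corpus.
counts = Counter(
    token
    for doc in documents
    for token in tokenize(doc)
    if token not in STOPWORDS
)

for term, n in counts.most_common(5):
    print(term, n)
```

Even this toy version shows why full text matters: a term buried in a methods section is invisible if the miner only sees the abstract. Production systems build on the same counting idea with entity recognition and co-occurrence statistics.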
And here are some key paragraphs from the article:
The PubMed database records two new peer-reviewed papers in the life sciences every minute. Across all the sciences, the number is five.
Such is the rate at which scholarly papers are produced that only computers can read them all. As a result, text-mining techniques are infiltrating every field of research, from genomics to the social sciences and humanities. Historians are using text mining to analyse court records from the Old Bailey. Business has been mining newswires since the 1980s to acquire competitive intelligence and today companies use text mining, including of social media, to discover what customers think of their products and services.
To get the most from text mining requires open access to the literature. And it requires it as soon after publication as possible. In the life sciences, six months—the maximum embargo allowed in Research Councils UK’s policy on ‘green’ open access—is a very long time.
This is one reason why the research councils’ policy on open access announced this July made the ‘gold’ model the preferred route. Pursuing gold open access will help the UK to get ahead of the curve in exploiting the opportunities, including text mining, that come from open access.
April 12, 2012
What are the key outstanding questions in science policy? This challenging question has recently been addressed in a paper describing work led by the Cambridge Centre for Science and Policy (CSaP). The resulting list is interesting and comprehensive, and will be a useful guide both to those working on science policy in academia and to science policy practitioners. There have been some criticisms that the work is insufficiently reflective, or that there are already at least partial answers to some of the questions in the literature, but I think the authors are to be congratulated on pulling together a comprehensive list.
Interestingly, a collaborative process was used to generate the list, building on earlier work to develop similar lists for other disciplines. The paper is an excellent example of what can be achieved by drawing on a range of experts in a systematic manner. I can’t help feeling, though, that an opportunity was missed. The process was entirely focussed on experts, but it could have really benefitted from including some public dialogue. What do people think about how research evidence is used in policy making? How do people want to see scientific evidence balanced with other factors? Exploring these and other issues with people whose expertise and experience lie outside the science policy world could have led to refinement of the research questions, or indeed to the identification of new areas for research.

Some may think that engaging the public with abstract policy issues is too difficult, but I don’t believe this is the case. Indeed, we need to open up the processes of science policy more, and public engagement is an important aspect of that. Our recent experience at RCUK is that, while challenging, it is perfectly feasible to engage the public on research policy. Working with Sciencewise, we have recently carried out a public dialogue on data openness that is generating useful insight for policy development. The question of how we can further improve the approaches we use to hold useful dialogues with members of the public on science policy matters is itself an interesting area for further research.
There is also real scope for a second phase to the CSaP project. The research questions in the paper are focussed largely on ‘science for policy’ – how scientific evidence can be effectively used in policy making – and this is clearly an area of strong public interest. But, equally, I think a similar exercise would be really valuable to explore the key questions for ‘policy for science’. The Science of Science Policy initiative in the US has made an excellent start in this direction, especially with its research roadmap (pdf). This roadmap sets out some high-level questions for research, framed largely from the perspective of policy makers in science and innovation policy. The roadmap could form the basis of further work, along the lines of the CSaP study, to further elaborate the questions and bring them into a European context. Of course, this would only be valuable if it stimulated further research targeted at the questions that emerge, but strengthening the evidence base on which we make research policy decisions is likely to bring significant rewards. As was recently argued by Pierre Azoulay in Nature:
It is time to turn the scientific method on ourselves. In our attempts to reform the institutions of science, we should adhere to the same empirical standards that we insist on when evaluating research results.