I came across this live map of wind patterns across the planet, via the BBC. Really interesting!
And it makes me realise how little I know about meteorological science: why is the wind blowing more over bodies of water than over continents?
I was reading this article (Edwards & Roy, 2016) on maintaining scientific integrity in the 21st century, published a couple of days ago in the journal Environmental Engineering Science. The facts discussed in the article, for example that scientific output (as measured in number of articles per year) is increasing, aren’t necessarily new; the authors’ focus is on discussing the impact of such trends on the overall scientific climate. The worry is that the ‘publish or perish’ culture leads to ‘perverse incentives’ that encourage the cutting of corners and straying into the territory of research misconduct.
The authors cite several studies that have found manipulation of impact factors by journals, p-hacking by researchers, and rigged peer review as examples of this ongoing and worrying trend in science. They reason that perverse incentives lead to reduced scientific productivity, since flooding the literature with fraudulent and misleading results causes a high error rate as findings fail to replicate. The right balance to strike between the two extremes of quantity and quality, the authors argue, is one that encourages the best of both worlds.
None of this is news to those of us who have been following the social world of science.
However, what really struck me was this part of the article:
While there is virtually no research exploring the impact of perverse incentives on scientific productivity, most in academia would acknowledge a collective shift in our behavior over the years (Table 1), emphasizing quantity at the expense of quality. This issue may be especially troubling for attracting and retaining altruistically minded students, particularly women and underrepresented minorities (WURM), in STEM research careers. Because modern scientific careers are perceived as focusing on “the individual scientist and individual achievement” rather than altruistic goals (Thoman et al., 2014), and WURM students tend to be attracted toward STEM fields for altruistic motives, including serving society and one’s community (Diekman et al., 2010, Thoman et al., 2014), many leave STEM to seek careers and work that is more in keeping with their values (e.g., Diekman et al., 2010; Gibbs and Griffin, 2013; Campbell, et al., 2014).
Thus, another danger of overemphasizing output versus outcomes and quantity versus quality is creating a system that is a “perversion of natural selection,” which selectively weeds out ethical and altruistic actors, while selecting for academics who are more comfortable and responsive to perverse incentives from the point of entry.
It is also telling that a new genre of articles termed “quit lit” by the Chronicle of Higher Education has emerged (Chronicle Vitae, 2013–2014), in which successful, altruistic, and public-minded professors give perfectly rational reasons for leaving a profession they once loved—such individuals are easily replaced with new hires who are more comfortable with the current climate. Reasons for leaving range from a saturated job market, lack of autonomy, concerns associated with the very structure of academe (CHE, 2013), and “a perverse incentive structure that maintains the status quo, rewards mediocrity, and discourages potential high-impact interdisciplinary work” (Dunn, 2013).
(Please see the full article, which is open-access, for the full references included above.)
What stood out to me about that paragraph was just how well it resonated with my own experiences of academia.
At this point, I have been in academia for seven years: four as an undergraduate student, one as a Master’s student, and now two as a PhD student. During this period, I have undergone a transition so drastic that it has surprised even me.
I grew up wanting to be a scientist. I loved everything about biology, being that girl who’d gross everyone out by having no fear of frogs or other crawling critters, and who’d need to test any truth claim that came her way, taking nothing at face value without proper investigation. In secondary school, I remember that we had a temporary teacher who, during a chemistry lesson, asked the classroom who had the career aim of winning the Nobel Prize. I was the only one to raise my hand, and I remember with what pride I did so. I wanted to be a scientist and to discover things that would change the world.
Starting my undergraduate degree, I soaked everything up like an intellectual sponge. I went above and beyond what was expected of me, and I remember one tutor noting (incredulously) on an essay I’d submitted that “40 references for a 1,500-word essay” was a “bit much”. I begged to differ, since I had read widely in the primary literature to synthesise the essay, and I wanted to give all the authors their due. Similarly, my emphasis on scientific rigour led a senior scientist to proclaim it “bordering on the fanatic”.
Nearing the end of my undergraduate degree, having completed five lab-based internships and one final-year research project, something had changed. I’d started to get frustrated with science, having developed an intense dislike for the culture where, just like everyone says, quality had been superseded by quantity. This is not because I had any bad practical experiences, far from it – all the labs I ever interned in were full of lovely people – but because of a strong sense of disappointment that science wasn’t as idealistic as I’d imagined and expected it to be.
I’ve always held scientists in an almost super-human regard. I expected them all to be maddeningly intelligent and resourceful, and, most of all, honest. To me, being a scientist meant something over and beyond everything else; to me, being a scientist meant that you had dedicated your life to the pursuit of truth. It was a bit of a reality check to realise that scientists are just people – people who don’t worry too much about cutting corners, mis-citing the literature, and discouraging the publication of negative results. Over time, these realities chipped away at my reverence, and now, another three years down the academic line, I feel unbearably disappointed by the whole enterprise.
As a friend very eloquently put it to me at one point, I am an “idealist about motives”. For scientists, who claim to be after the truth, the cutting of corners proved to be a reality-check that I simply could not bear.
So that paragraph above resonated with me. I am female (so I’m the ‘W’ in ‘WURM’) and I went into science (the ‘S’ in ‘STEM’) with altruistic motives (to serve society). But the reality of the scientific enterprise and its culture has disappointed me to the point where I simply cannot bear it any longer.
I’m tired of reading papers that cite an interesting finding, only for me to look the reference up and find that it says no such thing. I’m tired of reading papers that say they used line ‘X’, only to find that line ‘X’ is nowhere in the materials and methods, rendering the entire paper a methodological dead end. I’m tired of blasé principal investigators and lab heads who worry more about p < 0.05 than about the overall likelihood that the finding is genuine, or who encourage experiments to be re-done until you obtain the ‘correct’ result for the paper. I’m tired of model-driven research, where the model is developed before the results are obtained, and of the discussions that invariably follow about how the results can be ‘improved’. I’m tired of scientists willingly giving up the intellectual rights to their work for the ‘privilege’ of publishing in a respected journal. And I’m tired of people going ‘ga-ga’ over the prospect of publishing in Science or Nature, rather than over the feeling of accomplishment that comes with producing quality work.
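The gap between p < 0.05 and “the finding is probably genuine” can be made concrete with a back-of-the-envelope calculation. The numbers below are purely illustrative assumptions on my part (not estimates from any study), but they show why a significance threshold alone says little about how likely a result is to be real:

```python
# Illustrative sketch (hypothetical numbers): why p < 0.05 alone says little
# about whether a finding is genuine. Assume 10% of tested hypotheses are
# actually true, statistical power is 0.8, and significance is set at 0.05.
prior_true = 0.10   # assumed fraction of tested hypotheses that are true
power = 0.80        # P(significant result | hypothesis is true)
alpha = 0.05        # P(significant result | hypothesis is false)

# Out of 1000 tested hypotheses:
true_positives = 1000 * prior_true * power          # genuine findings: 80
false_positives = 1000 * (1 - prior_true) * alpha   # spurious findings: 45

# Probability that a given 'significant' finding is genuine:
ppv = true_positives / (true_positives + false_positives)
print(f"P(genuine | p < 0.05) = {ppv:.2f}")  # 0.64, nowhere near 0.95
```

Under these assumptions, roughly a third of ‘significant’ findings are false positives, and the fraction gets worse the rarer true hypotheses are or the lower the power of the studies testing them.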
More broadly, I’m tired of the ‘streamlining’ of scientific education in the interest of saving money. I’m tired of universities proud of being rated ‘top-10’ taking active steps to reduce the quality of the education they offer. I’m tired of institutions that paint themselves as ‘a great place to work’ while depriving you of the essentials needed to do that work well.
As Edwards & Roy (2016) say:
The [‘academic excellence’] rankings rely on subjective proprietary formula and algorithms, the original validity of which has since been undermined by Goodhart’s law—universities have attempted to game the system by redistributing resources or investing in areas that the ranking metrics emphasize.
I’m tired of all that – and so many other things. There’s something very rotten at the heart of academia, and of science itself. I’m tired of the fact that people get away with doing all of these things.
I’m currently in the third year of my PhD, and I’m so looking forward to finishing, so I can leave academia behind and, as the article says, “leave STEM to seek careers and work that is more in keeping with [my] values.”*
*Although this has put me in the awkward position of trying to change career path with a CV that says nothing but ‘biology-biology-biology’. I might write another blog post on this.
That’s why the article on maintaining scientific integrity in the 21st century resonated with me.
I’ve been reading a lot about peer-review lately.
I’ve previously been of the opinion that peer-review is far from a perfect process, and that many (researchers and lay-people alike) are putting undue trust in the system. Arguments in favour of peer review focus on the idea that peer review functions as some sort of ‘quality control’; as a ‘gatekeeper’ or ‘checkpoint’ process that separates the bad research from the good research – of which only the latter is allowed to be set free upon the scientific community to form part of the scientific literature and lay the groundwork for future research.
In a 2013 article on the Laboratory News website, the Vice President of Health and Medical Sciences at Elsevier, Peter Harrison, is quoted as saying:
Publishing in high-impact peer-reviewed journals helps researchers’ careers as it puts a ‘quality stamp’ on their research and helps them get visibility and recognition.
It’s not surprising that scientific publishers endorse the idea of ‘peer review as quality control’. Scientific publishing is, after all, the main provider of the service, and it is the industry that requires peer review the most. Since peer review endows research with a mark of approval of its perceived quality, scientific publishers who wish to appear reputable, and to be seen as taking seriously their role as guardians of scientific information, must ensure that the articles they publish are peer reviewed.
However, as efforts such as Retraction Watch have shown, peer review is far from a perfect filter, and even the most prestigious and well-respected journals have failed in their quality control, allowing both since-debunked and fraudulent research to bear their name. Still, it is possible that, although peer review is not perfect, it manages to keep a lot of sub-par research away from reputable journals.
Schroter et al. (2008), however, report that peer reviewers for the British Medical Journal found very few of the errors, both major and minor, in papers they had been sent to review – results suggesting that peer reviewers are, on the whole, unable to find all the errors in the manuscripts they review. This finding is not surprising, since peer reviewers are, as the name suggests, peers. Like the authors of manuscripts, reviewing peers are hard-working researchers, with all the responsibilities that this brings. It has been estimated that a good peer review can take anywhere from 5–8 hours of a reviewer’s time – the equivalent of a large part of a normal working day. It is therefore unlikely that most reviewers, however well-meaning, will be able to catch most errors in the papers they are sent; they simply do not have the time.
So peer review is not perfect, and that is not even going into all the other confounding issues, of author and reviewer bias and deliberate attempts to mislead. This is part of the reason why I have been – and still am – skeptical of peer review: the system is simply not as good as people seem to think it is.
What are the alternatives? Opinions differ. Some argue for a post-publication peer review system, where manuscripts are published first and reviewed second. I have traditionally been a fan of this system, since it is (under ideal conditions) meritocratic, where only the best papers will prosper. As initiatives like PubPeer have shown, it can also be very efficient at finding questionable conclusions and deliberate attempts to mislead in the scientific literature. Similarly, as physicists’ and mathematicians’ use of arXiv has shown, pre-publication peer review can also be a very constructive way of improving papers and disseminating scientific results prior to formal publication. These kinds of systems, however, require a culture that is open to them, and I worry that perhaps the life sciences are not.
Nature Publishing Group trialed a pre-publication system of peer review in 2006, but found that enthusiasm was low. While there was sufficient interest for Nature to trial the system, once it was up and running it failed to deliver: potential reviewers seemed reluctant to review papers or leave comments to improve the manuscripts, and the comments that were left were of ‘limited use’, according to the editors.
In some sense, the results Nature obtained from their experiment make perfect sense. As with the problems with peer review mentioned above, academics are busy. Peer review might work in its current form because manuscripts are sent by the editors to specific reviewers, who thus are ‘assigned’ the job of reviewing the paper. Being personally given the responsibility to review a paper, a reviewer with some time and/or altruism to spare may be more incentivised to perform a proper review, especially when the editor is capable of sending them reminders, nudging them to get on with it. Conversely, in an open system, these direct assignments no longer exist, and all of a sudden the job of reviewing a paper becomes ‘someone else’s job’, a problem that each potential reviewer can push aside. In an open review system, where anyone can be a potential reviewer, who is going to step up to the task? Very few, would have been my guess, and, indeed, this is what Nature found. When peer review is everyone’s responsibility, it becomes the responsibility of none.
Similarly, I think this is part of the reason why peer review is seen as a method of quality control. Even if there is little evidence of its success in acting in such a capacity, it is a comforting thought. It is easier to open your article of choice — and rest assured that its contents have been peer reviewed, and just assume that everything that it contains is reasonably close to the truth — than to read every article with an unerringly critical eye. It is draining to question everything, especially for long periods of time. In that sense, the idea of peer review as a gatekeeper of truth, may be more comforting than true.
Peer review is not perfect, and, so it seems, neither are the alternatives. So where does that leave us? Where does that leave me? As I mentioned at the beginning of this post, I am traditionally very skeptical of peer review, and I want to believe in the merits of pre- or post-publication efforts to source reviews from a larger pool of peers. It would be great if peer review was a community effort – to the same degree that publishing seems to be.
When I first went into science, this was the kind of idealised view I held; thinking that science was something that scientists did together, where everyone contributed and received their due. With more experience, I now realise how naïve such idealisations are. And I think ideas of peer review are similarly so. Peer review in its current form is not perfect, because scientists are busy, and crowd-sourcing efforts are going to be brought down for the same reason: scientists are busy. It’s easier for a busy person to deal with one paper being added to their to-do list than for a busy person to find the time to contribute to a never-ending pool of scientific results awaiting review.
I’m tempted to think that more honour is needed in science, but this, too, would be unrealistically idealistic. As much as I want to propose a new system of peer review – a perfect one – many such ideas have already been proposed, and they all fall foul of the same assumption: that a good idea is all it takes to change the culture and the peer review process. Instead, I will conclude with this insight: the more I read about peer review, and the more I know about the process, the more I realise that I don’t know how its problems should be solved.
But I think that a good place to start would be to educate the consumers of the scientific literature about the pitfalls and imperfections of peer review; to make them understand that peer review is not a mark of unerring quality. Second, peer review should worry less about impact and more about the methodological soundness of the work. This is what PLOS ONE does, but I am also aware that there are mixed feelings about this in the community, with some people worrying that without some kind of editorial prediction of impact and importance, the literature will be flooded with inconsequential papers.
However, there are two problems with this view. First, there is no information on how many ‘bad’ papers peer review is keeping away from the community; if you want to publish your work, you’ll always find a journal willing to publish it, so this argument doesn’t hold any water. Second, there seems to be some redundancy in the argument about what’s ‘important’: on one hand, peer review is supposed to identify important papers and bring them to attention; on the other hand, citations are supposed to do the same. And then there’s the additional consideration of what counts as important. Is everything new important? But what about replications? They are important too. And what one scientist finds ‘boring’ is the topic of choice for another.
In fact, I think my readings about peer review have made me realise that when we speak of peer review, its problems and how to fix these – that we’re actually asking the wrong questions. It’s not the peer review system that is flawed; it’s the publishing system itself. The problems with peer review are the symptom, not the cause. Instead of seeing charities and Research Councils spending money on new journals and new initiatives to improve peer review, what I would like to see is an initiative that’s trying to fix the problems at the bottom of it all. And, the way I see it, journals are part of the problem, since they obsess over ‘importance’ and ‘impact’ and worry more about impact factors than about negative results and attempts to replicate previous findings (or so it seems).
It’s important to keep in mind that scientific publishers are businesses, so we cannot expect them to prioritise the needs of the scientific community over their own monetary bottom line. It has been noted that high-profile journals are in the business of publishing ‘impactful’ papers — even though these are not necessarily the most scientifically rigorous. Flawed papers will also attract citations — and incidentally, increase the impact factor of the journal. For example, the ‘arsenic life’ paper has over 400 citations on Google Scholar, despite having been shown to be erroneous.
Indeed, what I would like to see is a system that gets rid of journals altogether. Imagine, and this is a very crude sketch, that there was a centralised system where manuscripts (results) were deposited, and from which they could be retrieved by other researchers, since this is what constituted ‘the scientific literature’. Here, there would be an emphasis on results: this is what these researchers found. When other researchers accessed these results, they could be prompted to share their views (do these results make sense? Have you replicated them yourself?) and comments, so that results become less a section header in stand-alone manuscripts and more building blocks in a larger community endeavour, where individual results are shared and used between researchers and research groups.
Because, naïve as I am, I like the idea that science is about sharing, so everyone can learn a little bit more about the world, rather than the idea that science is defined as a process of scientific career advancement — which it seems to be right now.
In a previous post I wrote that open access is about education. The argument I made in my previous post was tangential; it was the conclusion of a few thoughts I typed up in discussing the arguments made by people who are against open access. So I want to take this opportunity to elaborate a bit on my argument.
Scientists have an interesting role in society. On one hand, they’re seen as wise and intelligent; being called ‘an Einstein’ is a compliment. On the other hand, people without personal experience of science seem to subscribe, to varying degrees, to the popular stereotype of the ‘mad scientist’, and think that scientists are not to be trusted. There is a considerable anti-science movement in the world, with people who claim all sorts of outlandish things that betray a misunderstanding of how science works and what the scientific literature actually says. The anti-science movement is, in other words, based on a misconception of what science is and what it can do, and these misconceptions are giving rise to mistrust.
It is vital to the scientific enterprise that the results are trusted — by both scientists and non-scientists; experts and non-experts alike. But to be trusted, science needs to be communicated. Science is, traditionally, communicated through publication. It follows that good science communication involves easy access to the scientific literature.
Change happens gradually, and the world will not be educated overnight. But for every person who is spreading ideologically-motivated misinformation, there’s another person who wants to check if this is true. And it is for these people that the scientific literature must be accessible, because educating even a single person out of ignorance is, on the whole, a good thing, and exactly what the scientific enterprise is about.
In fact, I’d go so far as to say that science is about more than the scientists themselves. The purpose of the doing of science is not to facilitate scientists to make a career. No, rather, science is about expanding the body of human knowledge — and not just for science; we do this for humankind. And seen this way, it seems immoral to put whatever we find behind lock and key. To do so is to act selfishly; jealously guarding whatever we find because it can benefit not everyone, but ourselves.
Going back to my previous post, which came about after I read Mike Taylor’s post on the bizarre comments made by some people who don’t believe in open access, it seems like argument from personal gain is a recurrent theme: that it’s a ‘fallacy’ that ‘non-experts should read journals’. Indeed, this is even further elaborated by Robin Osborne at the Guardian, where he makes the argument that open access ‘makes no sense’ because access comes at a price. And that price, he argues, is the admission to the institutions where research is being carried out:
For those who wish to have access, there is an admission cost: they must invest in the education prerequisite to enable them to understand the language used.
But this kind of education can come in many forms. And one of those is free access to the literature. You don’t need to be an expert to make sense of things. It helps, but it’s not a prerequisite. Furthermore, this means there is a parallel argument to be made: that research should be freely accessible both intellectually and financially. You shouldn’t have to pay to access research, and that research should be well-written. Indeed, there is an argument for reducing the amount of jargon used in scientific papers, because what is jargon except another barrier used by those ‘in the know’ to keep those who are not in the dark?
And that brings me to my second observation, that a lot of the hostility that is directed against open access seems to stem from a sense of academic insecurity; that science is only worth something if it is ‘pure’. Osborne says:
There can be no such thing as free access to academic research. Academic research is not something to which free access is possible. Academic research is a process – a process which universities teach (at a fee). Like it or not, the primary beneficiary of research funding is the researcher, who has managed to deepen their understanding by working on a particular dataset. The publications that result from the research project are only trivially a result of the research funding, they come out of a whole history of human interactions that are not for sale.
It seems like Osborne is saying that open access is impossible because it would threaten the ivory tower of academia and that, as a result, it would dilute the academic enterprise. It’s exceedingly elitist to suggest that the uneducated don’t deserve access to the scientific literature because this would reduce its value, but, alas, it is also a very common argument.
It’s very interesting that this argument is common. It seems to imply that academics and researchers and scientists who are against open access oppose it on the grounds that they think their role will be lessened if the ivory tower was to open its gates; that the value of science is inversely proportional to the ease by which it can be accessed. This is not true. Rather, it is the opposite; the value of research increases the more it is accessed. Indeed, isn’t a high download rate and high citation count what every researcher yearns for — since these metrics measure the amount of times information has been disseminated and impact has been made?
I don’t want to read too much into this, but part of me wonders if at least some hostility towards the idea of open access comes from the misconception that the status of being a scientist comes not so much from being a facilitator of the spread of knowledge as much as a custodian of the unknown?
The Ashmolean Museum here in Oxford currently has a special exhibition on Andy Warhol. I’m not usually a fan of modern art, but the exhibit was free for students, and a friend therefore managed to coax me into spending a Saturday afternoon there.
The exhibit featured works from the Hall collection, and as far as I (in my Warhol-naïvety) was aware, there were none of the typically iconic pieces on display. However, I found the exhibit surprisingly enjoyable. There was a good selection of pieces, most of them colourful. The exhibit was divided into four sections: one showing some of his less well-known and more experimental pieces; another showing some of his films; a third showing the characteristic pop-art portraits; and a fourth showing some of his black-and-white work.
I’ve never understood modern art, other than that the artist is trying to provoke. Although the definition of ‘art’ is nebulous, I think it’s valid to differentiate between art that tries to portray something (which is what most people would call ‘art’), and art that tries to provoke (which I guess would be classified as ‘modern art’).
However, while I was at the exhibition, I saw this display case that contained a single Brillo box. The box wasn’t as much a cardboard Brillo box as a wooden crate that had been painted to, for all intents and purposes, resemble an authentic Brillo box.
The exhibition was quite busy, so the box in its case was quite busily looked upon. While watching people looking at the Brillo box and trying to make sense of it, something occurred to me: that maybe that was what the box was about. That maybe the Brillo box wasn’t meant to be looked at directly — but that it was meant to be looked at by some people, so other people can look at people looking at it! Once I realised this, the box made so much more sense.
Perhaps the provocation of modern art isn’t so much to provoke you personally as to create a provocation that you can then observe; and this is what the art is. Or perhaps not so much ‘art’ as the statement. Because most modern art is statement-driven, having been created to tell you something. Which is perfectly valid: because humans are empathic creatures, we must feel things ourselves in order to truly understand them. And maybe that’s modern art: a form of non-verbal communication that allows us to experience something so that the experience is properly communicated, with nothing lost in translation.
So perhaps the Brillo box isn’t so much about being a Brillo box or an artist’s impression of a Brillo box, but perhaps it’s meant to be an artist’s way of communicating the absurdity of culture; of perception; of existence. Because it is rather absurd to have a Brillo box that isn’t really a Brillo box, but just a painted wooden crate, set in glass. Even more absurd is observing people looking at this not-quite-a-Brillo-box, as if it was anything more than it is. Because I’ve noticed that people do have a tendency to read more into provocative pieces like that than is justified. Sometimes a box is just a box (or not even that). But since we’re social creatures, and ones terrified of not getting something, of not understanding, of looking stupid, we can look at a box and think to ourselves that it’s just a box, and then second-guess ourselves and think that we’re not being intelligent enough, not cultivated enough, not sophisticated enough, and pretend that we, too, can see the Emperor’s new clothes. And while all of this is going on, a separate observer, watching this person experiencing all the second-guessing and pretending that’s part of being a human in a social/cultural situation, is given a unique glimpse into the human psyche, allowing the inner world of someone else to be laid bare for us to experience – and therefore understand.
Once I’d had this revelation, modern art made so much more sense to me. And all thanks to a Brillo box that’s not actually a Brillo box.
As an aside, here’s a funny story: I was once standing right in front of a set of Warhol’s Marilyn Monroe-paintings, which are arguably his most iconic. I didn’t think twice about them, because they are so widely reproduced that they have become commonplace. It was only later that someone else pointed out to me that the pieces were authentic.
And that set me off thinking: why do we place such importance on authentic pieces of art and not reproductions? What’s so special about the original that we stop in our tracks to take it in, or even travel to a particular place to see it, in the metaphorical flesh? Because the experience of looking at a replica (at least a decent one) isn’t particularly different. It’s the same shapes, the same colours, the same everything. Except that it’s lacking the special ‘it’ of once having been in touch with the creator that once gave rise to it.
The answer to this discrepancy, I think, lies in our cultural obsession with talismans and similar objects: the belief that the original is imbued with something immaterial and special that replicas simply cannot reproduce. What do you think?
I read Mike Taylor’s post on how Sci-Hub is a ‘litmus test’ a few days ago. The post highlights some of the people who are sympathetic to Sci-Hub, as well as some of the people who are hostile towards the initiative. Reading the post, what stood out to me was the inanity of some of the hostile comments — some of them making the most bizarre claims.
Among the more bizarre claims are the ones to the effect that keeping the scientific literature locked away is a good thing. Taylor quotes David Wojick who writes:
I personally doubt that there are large numbers of people who (1) have the expert knowledge required to read and benefit from the scholarly literature but who (2) cannot find a way to access what they need. The arguments I have seen to this effect are completely unconvincing.
This is one of the fundamental fallacies of OA, namely that non-experts should read journals. […] Only a few people can understand the typical journal article.
That’s a very elitist argument, for two reasons.
First, these blessed people Wojick speaks of, the ones who supposedly don’t have problems with access, don’t exist. Anecdotal, but not irrelevant to the issue at hand: I’m currently doing a DPhil at the University of Oxford — supposedly one of the most prestigious universities in the UK. Yet it is a regular occurrence that I find a paper whose abstract looks interesting, only to discover, to my dismay, that I can’t access it because the Bodleian Library doesn’t have a subscription covering that particular journal. If there is no freely available copy easily accessible via Google Scholar, then what is a poor, suddenly-not-so-privileged DPhil student to do?
Second, to claim that it’s a “fallacy” that non-experts would want to read scientific journals is beyond ignorant. Not only is that a moot point that does nothing to support Wojick’s argument — in no way should the lack of desire for access mean that access should be denied — it’s also making a very unpalatable claim, namely, that people who aren’t experts have no business reading the expert literature. That’s like saying that people who don’t have an education don’t deserve one. That is just silly.
In addition, there is a third argument, and one that is very close to my own heart. Even if I don’t have access to all the papers that I need/want for my own research, I still have access to a decent amount of the scholarly literature. Because of this, I regularly download papers and articles on behalf of friends and family. I do so for friends who don’t have access via their own institutions, or for friends who recently finished their degrees, which leaves them without an institution, and therefore without access. In such scenarios, the system is preventing access to legitimate experts, which completely invalidates Wojick’s claim.
In addition, although I’m only half-way through my doctorate, I’m already thinking about life after I’ve completed my degree. The decision has been brewing for a long while, but I do not intend to stay on in academia as an active researcher. The reasons for this are many, and will form the topic of a future post, since they aren’t relevant to the current issue. What is relevant is the fact that there will be a day when I no longer belong to an institution; a day when I no longer have access to the scientific literature. And this saddens me immensely. Because over the almost seven years (and counting!) that I’ve spent as a student of science, I’ve become increasingly reliant on the scientific literature. If I want to educate myself on a topic, it is to the scientific literature that I turn. To be denied access because leaving science makes me ‘unworthy’ is immensely discouraging. And this brings me to an even more fundamental issue:
Just as I trust the scientific literature to educate me out of my ignorance, so do other people. Science is trusted by society. Experts and non-experts alike trust science to be a force of enlightenment. So we should not be surprised that they want to access the scientific literature itself. This is a very important consideration, because we cannot ask the public to trust in science and then deny them access because they aren’t ‘worthy’. That runs counter to the scientific enterprise as a catalyst of knowledge. So anyone who’s pro-science and pro-education should also be pro-access. It’s as simple as that.
Thinking back to the beginning of humankind, what a strange event it must have been to ‘awake’ as a conscious creature. Just imagine, what a strange thing for an organic being of tissue and bone to consider its own purpose — for the first time. At the beginning of culture, there must have been discussions of how best to do things — how do you even do things, when there is no historical precedent? Imagine the first funeral (what to do with the dead?); the first painting (why depict anything at all?); the first idea of a god (why was there such a need to seek explanation, when other animals need none?).