The pandemic sharply divided the American public - there were those who “believed science”, and those who didn’t. I consider this polarizing sentiment immensely damaging to science as a whole. Obviously, as someone who conducts research for a career, I believe in the process of science, but saying that you “believe science” as a whole implies that every single paper and research study ever conducted contains absolute truth and has no mistakes, and that science as an institution has no flaws. Neither of those things is true, and saying so just allows those who are already skeptical to dig in their heels. It’s important to discuss nuance and context instead of making sweeping, easily disproved statements when talking to those with different viewpoints. One area in which this is particularly applicable is the scientific publishing process.
Lots of people want to publish their research because they are excited about the results and feel it is beneficial to humanity as a whole. However, the “publish or perish” culture of academia and the US patent and copyright system mean that furthering careers and company profits are also significant motivating factors. If you are in graduate school, you may be required to publish original research in order to graduate. If you are a professor, you want to publish as much high-impact research as possible to increase your chances of getting tenure. If you work in industry, you are writing proposals to government agencies like the National Institutes of Health (NIH) or the Defense Advanced Research Projects Agency (DARPA) asking them to fund your work. You want your research to be easily found, and thus you want as many high-profile publications as possible. In addition, some of the grants you receive might have “number of papers published” or “number of conferences presented at” as a success metric.
Notice that I have used adjectives like “high-impact” and “high-profile” - it is not enough to just have a large number of publications; you also want other researchers to cite you in their work. This is quantified by your “h-index” (the largest number h such that h of your publications each have at least h citations) and your “i10-index” (the number of papers you authored with at least 10 citations). The order in which you appear in the list of authors also matters. If you are the first author, it means you did the bulk of the work. The further back you are in the list, the more hands-off you were. Usually, the earlier you are in your career, the more first-author papers you want, because they show that you can do research. This requires a lot of work on your part to conduct the research, collate the results, and write the paper. As you progress in your career, you will start advising younger colleagues and students. You will have more publications in a given year, but your name will appear further back since you were in an advisory role.
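To make these two metrics concrete, here is a minimal sketch of how they can be computed from a list of citation counts, one per paper. The function names and the citation numbers are my own illustrative choices, not a real dataset.

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for count in citations if count >= 10)

# Made-up citation counts for six papers
papers = [25, 12, 8, 5, 3, 0]
print(h_index(papers))    # 4: four papers each have at least 4 citations
print(i10_index(papers))  # 2: two papers have at least 10 citations
```

Note that the h-index rewards sustained citation across many papers: one blockbuster paper with 1,000 citations and nothing else still gives an h-index of only 1.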
There are also distinctions between publication venues - journals and conferences. A journal is a periodically published collection of work in a specific field, like The Journal of the American Medical Association (JAMA). Journal articles are more prestigious than conference articles and usually contain finished, polished work. As the name implies, conference publications are published in conjunction with a conference. These are less prestigious and can contain less-complete work. However, conferences are a great place to share and get feedback on your work. There are many metrics to quantify how prestigious a journal is, with one of the most popular being its “impact factor”, which measures how often the average article in the journal is cited in a given year. The higher the impact factor, the more prestigious the publication.
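The standard two-year impact factor is a simple ratio: citations received in a given year to articles the journal published in the previous two years, divided by the number of articles published in those two years. The numbers below are invented purely for illustration.

```python
def impact_factor(citations_to_prev_two_years, articles_prev_two_years):
    """Two-year impact factor: citations this year to articles from the
    previous two years, divided by the count of those articles."""
    return citations_to_prev_two_years / articles_prev_two_years

# Hypothetical journal: 480 citations in 2024 to the 120 articles
# it published across 2022 and 2023
print(impact_factor(480, 120))  # 4.0
```

So an impact factor of 4.0 means that, on average, each recent article in the journal was cited four times that year - though averages like this can be skewed by a handful of very highly cited papers.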
After you submit a paper, all reputable publications perform peer review: your name and affiliations are stripped from the paper, and it is sent to experts in the field for review. The experts then make a judgment on whether the paper contains research worthy of publication, and they may require that additional research or rewrites be completed. You can read more about the peer review process and potential intermediate steps in my previous article.
If the paper passes peer review, the authors then need to secure funding for publication. There is always a fee to publish in a journal or conference, and usually the research will remain behind a paywall - people will need to pay for the journal in order to read the article. There is a growing movement towards “open access” publication, in which the research is available to the general public free of charge, but the authors need to pay additional fees for that. If you are publishing results based on research conducted under a grant or company project, the grant or company will have funds for this built into the budget. This Guardian article goes into detail about how these fees came about. Once the fee is paid, the article is published on the journal or conference’s website and will be available to anyone with access.
Overall, this system makes a lot of sense - we want good research to be published after being checked by experts, and we want metrics to quantify how well someone is doing. However, the increasing competitiveness of academia and the proliferation of automation have exposed issues. The first is the quantification of research output and journal quality. Academics are under intense pressure to publish as much high-quality research as possible in order to gain tenure, a paradigm known as “publish or perish”. This leads researchers to focus only on high-profile research that is bound to generate lots of citations, and it means that research considered “boring” or “bad” is often not submitted for publication. However, boring and bad research is essential for moving science forward - we gain confidence in theories and technologies through “boring” replication trials, and we learn what not to do when reading about research with unsuccessful outcomes. This pressure also leads to rushed and potentially fraudulent research and has fueled the rise of predatory journals, which charge exorbitant fees to publish research with minimal review while marketing themselves as high-quality academic journals. It can be hard to identify whether a journal is predatory, but Predatory Journals and Beall’s List both keep updated lists.
Another issue with the current setup is the way the peer review process works. All reviewers work for free, on their own time. This means that many qualified experts don’t review any papers because they don’t have the time, and the ones who do may not spend as much time as needed investigating the work. This can lead to incorrect or fraudulent results being published. In addition, the rise of automation means that it is easy for a journal to spam thousands of authors with requests for review. It is tempting to just ignore all journal-related emails because it is so time-consuming to sift through the morass to find non-predatory journals that you are actually qualified to review for. As an example: I have three publications. One is a journal publication summarizing my master’s thesis work on machine learning and skin cancer; two are conference articles discussing work I’ve done on explainability in machine learning (one published, one pre-print). Combined, I have 18 citations. I am by no means an expert or remotely influential in any field, but the screenshot below shows what comes up when I search “journal” in the “Last Week” section of my email inbox. Imagine what tenured professors get.
Are there ways to fix these problems? The San Francisco Declaration on Research Assessment proposes alternative ways to measure research output and impact. Universities could take a holistic, transparent approach to tenure to reduce pressure on academics. Publishers like Elsevier could pay experts to conduct peer review. However, none of these solutions have caught on. I can theorize about why, but I am not an expert, and anything I can come up with would be speculative at best. For now, it is simply important to keep these issues in mind when reading about scientific research, especially if the results are controversial.