The Business of Science: How the Publication Industry Detached Research from Reality

The commodification of scientific output and its consequences for innovation ecosystems

Author: Monica Bianco, Ecosystems Cooperation Advisor, CRF Italy

Abstract

In recent decades, the scientific ecosystem has undergone a profound transformation.

What was once an intellectual endeavor aimed at advancing understanding and solving real-world problems has increasingly morphed into a highly commercialized system.

As Hicks et al. emphasize, “the systematic use of simplistic metrics to assess research threatens to displace judgment and to create perverse incentives” [1]. Rather than focusing on impact, experimentation, and societal relevance, research today is often organized around the production of publishable units optimized for journal metrics, rankings, and funding evaluations.

The publishing industry — historically intended as a means to disseminate knowledge — has evolved into a profitable economic sector with vested interests in maintaining and reinforcing this dynamic. Consequently, scientific communication has shifted from being a public good to a commodified output, profoundly influencing what research is done, how it is conducted, and which knowledge is prioritized.

The Rise of the Publication Industry and Its Economic Logic

The transformation of scientific publishing into a business was neither inevitable nor neutral.

As Larivière et al. observed, “a small number of publishers have succeeded in building monopolistic or oligopolistic positions, extracting high rents from public-funded research” [2]. This structural concentration of publishing power has had profound implications on the way research is produced, evaluated, and disseminated.

One of the critical mechanisms underpinning this economic model is the use of the Journal Impact Factor (JIF) as a universal currency. Originally developed merely as a tool for assisting librarians in journal selection, the JIF has progressively evolved into a proxy for research quality, despite its many distortions and limitations. As Seglen critically pointed out, “the impact factor is not a reliable indicator of the quality of individual articles” [3]. Nevertheless, researchers are strongly pressured to publish in high-JIF journals to secure grants, promotions, and institutional recognition, reinforcing a cycle where perceived prestige outweighs scientific relevance.
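
For reference, the two-year JIF that underlies much of this prestige economy reduces to a simple ratio (a simplified statement of the standard definition; the symbols below are introduced here purely for illustration):

\[
\mathrm{JIF}_{Y} \;=\; \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}}
\]

where C_Y(y) is the number of citations received in year Y by items the journal published in year y, and N_y is the number of citable items (research articles and reviews) the journal published in year y. Because citation distributions are highly skewed, this journal-level average says little about any individual article, which is precisely Seglen's point.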

Another important development is the spread of Article Processing Charges (APCs), especially with the rise of open access models. According to Björk and Solomon, “APCs have institutionalized a direct financial transaction between authors and publishers” [4], effectively transforming the act of publishing from a merit-based dissemination of knowledge into a pay-to-publish business.

Simultaneously, the number of academic journals has exploded, driven not by genuine scientific demand but by market segmentation strategies aimed at maximizing revenue streams. As Mabe and Amin noted, “the growth dynamics of journals are often explained more by publisher expansion strategies than by actual increases in research output” [5].

Finally, the widespread adoption of metric-driven evaluation systems has completed the transformation. University rankings, research funding decisions, and individual career advancements increasingly rely on bibliometric indicators such as publication counts and citation metrics. Moher et al. emphasize that “assessment practices heavily dependent on bibliometrics risk promoting perverse incentives that prioritize quantity over quality” [6].

As a cumulative effect of these interlocking mechanisms, the value of research today is determined less by its societal relevance, transformative potential, or experimental originality, and more by its visibility and performance within a commercially driven publishing ecosystem.

Distortions in Research Practices: From Knowledge Creation to Metric Optimization

The industrialization of scientific production has led to a series of profound systemic distortions, reshaping the very nature of research activities.

One major consequence is the fragmentation of scientific outputs into minimal units of publishable material. Researchers are increasingly incentivized to divide their results into multiple articles, a phenomenon known as “salami slicing”. As Salager-Meyer describes, “fragmented publication practices are driven more by career pressures than by genuine advances in knowledge” [7]. This leads to an inflation of publication numbers without a corresponding growth in substantive contributions.

Closely linked to this is the decline of experimental and long-term research. As Heuritsch notes, “projects requiring extensive experimentation and longitudinal studies are structurally disfavored because they do not yield rapid publishable results” [8]. The time-consuming nature of real experimentation is at odds with the need for continuous publication outputs, resulting in a research environment that systematically marginalizes slow but crucial forms of scientific inquiry.

The multiplication of authorships and the phenomenon of hyper-collaboration further distort research practices. Wuchty, Jones, and Uzzi found that “the increasing dominance of large teams reflects the reward structure favoring volume over individual accountability” [9]. This often leads to the dilution of scientific responsibility, where the relationship between individual researchers and the quality of published results becomes blurred.

Another significant distortion is the prioritization of fashionable topics over neglected but socially critical areas. Brembs highlights that “fields aligned with editorial trends and citation potential enjoy disproportionate visibility, whereas less glamorous but vital research remains sidelined” [10]. This results in a biased research agenda that follows the logic of market visibility rather than societal needs.

Finally, applied research and research addressing local or territorial issues are increasingly marginalized. Adams points out that “research focused on local needs struggles to find publication venues within high-prestige circuits, reinforcing global imbalances in knowledge production” [11]. This not only creates an intellectual asymmetry but also deepens territorial inequalities in the distribution of research attention and funding.

Thus, research activities are no longer primarily optimized for discovery, innovation, or problem-solving; instead, they are increasingly aligned with the imperatives of metric performance and market visibility.

Consequences for Territorial Development and Societal Resilience

The dominance of the publication industry has profound systemic consequences for the relationship between research and society, progressively eroding the foundations that historically linked scientific knowledge to social advancement.

One major effect is the alienation of research from the territories that fund and host it.

As Bornmann et al. observe, “societal needs are systematically underrepresented in metric-driven research portfolios” [12], highlighting how research agendas increasingly prioritize academic visibility over tangible community impact.

Rather than addressing the urgent challenges faced by local societies — such as environmental degradation, energy transition, or social inequalities — research tends to orbit around topics that guarantee high-impact publications, often remote from everyday realities.

Closely connected to this is the weakening of the experimental infrastructure necessary for transformative innovation. The focus on rapid and easily publishable outputs marginalizes long-term experimental setups, pilot plants, and real-world laboratories, which are critical for sectors such as water management, renewable energy, agriculture, and health. Johnstone and Schot emphasize that “without robust experimental infrastructures, the capacity for systemic transitions towards sustainability is severely compromised” [13]. The loss of experimental platforms deprives territories of vital tools for innovation and resilience building.

Another critical consequence is the erosion of the credibility and social legitimacy of science. Citizens, perceiving a growing disconnect between the pressing issues they face and the often abstract outputs of academia, become increasingly skeptical. As Merton already warned, “the credibility of science depends crucially on its perceived alignment with societal concerns” [14]. When science appears more focused on maintaining internal prestige than solving real problems, public trust inevitably deteriorates.

Finally, the commercialization of scientific publishing exacerbates territorial inequalities.
Regions already under-represented in high-impact publication circuits — particularly in the Global South and in peripheral areas of Europe and the Mediterranean — see their research efforts marginalized, creating a vicious cycle. As Chan et al. argue, “structural inequalities in research visibility reinforce funding disparities and entrench knowledge hierarchies” [15]. This dynamic not only deepens the scientific divide between regions but also undermines the potential for inclusive, bottom-up innovation essential for sustainable development. Thus, the commercialization of scientific publishing is not a neutral process; it actively reshapes the geography of knowledge production, amplifying imbalances and weakening the societal role of research. 

Towards a New Evaluation Culture: From Metrics to Meaning

To reclaim the transformative role of research, a profound cultural shift is required — one that not only modifies technical evaluation procedures but redefines the very purpose of scientific activity within society.

First, it is essential to restore peer judgment and contextual evaluation as the foundation of research assessment.
Research should be judged based on expert scrutiny of its methods, results, and relevance to societal needs, rather than simply on the prestige of the journal in which it is published.

As Hicks et al. affirm, “evaluation processes must support the quality, not merely the quantity, of research output” [1].

In parallel, funding and evaluation systems must evolve to support applied and mission-oriented research.

It is crucial to recognize and reward research that addresses concrete societal challenges, even when it does not conform to the traditional high-impact publication model.

Schot and Steinmueller underline that “transformative change requires re-aligning research agendas with societal missions rather than with narrow academic incentives” [16].

Furthermore, the indicators of success in research must be diversified.

Societal impact, technology transfer, policy influence, and territorial regeneration should become central criteria for evaluating research quality and relevance.

The European Commission explicitly states that “a broader set of impact pathways, including societal and policy contributions, must complement traditional bibliometric indicators” [17].

Finally, it is imperative to challenge the monopolistic structures that currently dominate scientific publishing.

Open science initiatives, public repositories, and new, community-based models of scientific communication must be promoted to reduce dependency on commercial publishers and to democratize access to scientific knowledge.
UNESCO’s Recommendation on Open Science emphasizes that “open access to scientific knowledge is a global public good and a fundamental pillar for building inclusive knowledge societies” [18].

This shift is not merely technical; it is strategic and cultural.

It involves rethinking the role of research in society — moving away from a system optimized for metric performance towards an infrastructure dedicated to resilience, innovation, inclusion, and real-world transformation.

Conclusion

The commercialization of scientific publishing has profoundly altered the priorities, practices, and outcomes of research, detaching it from its societal mission.
If research is to reclaim its role as a driver of territorial development and systemic resilience, it must break free from the logic of metric optimization and reconnect with real-world problems and transformative agendas.
Restoring meaning to research evaluation is not only a matter of academic reform — it is a prerequisite for building a more sustainable, inclusive, and resilient future.

References

  1. Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). “The Leiden Manifesto for research metrics.” Nature, 520(7548), 429–431.
  2. Larivière, V., Haustein, S., & Mongeon, P. (2015). “The oligopoly of academic publishers in the digital era.” PLOS ONE, 10(6), e0127502.
  3. Seglen, P. O. (1997). “Why the impact factor of journals should not be used for evaluating research.” BMJ, 314(7079), 498–502.
  4. Björk, B. C., & Solomon, D. (2015). “Article processing charges in OA journals: Relationship between price and quality.” Scientometrics, 103(2), 373–385.
  5. Mabe, M., & Amin, M. (2002). “Growth dynamics of scholarly and scientific journals.” Scientometrics, 51(1), 147–162.
  6. Moher, D., Naudet, F., Cristea, I. A., Miedema, F., Ioannidis, J. P., & Goodman, S. N. (2018). “Assessing scientists for hiring, promotion, and tenure.” PLOS Biology, 16(3), e2004089.
  7. Salager-Meyer, F. (2014). “Scientific publishing in developing countries: Challenges for the future.” Journal of English for Academic Purposes, 13, 87–95.
  8. Heuritsch, J. (2021). “Reflexive behaviour: How publication pressure affects research quality in astronomy.” arXiv preprint arXiv:2109.09375.
  9. Wuchty, S., Jones, B. F., & Uzzi, B. (2007). “The increasing dominance of teams in production of knowledge.” Science, 316(5827), 1036–1039.
  10. Brembs, B. (2013). “Prestigious science journals struggle to reach even average reliability.” Frontiers in Human Neuroscience, 7, 291.
  11. Adams, J. (2013). “The fourth age of research.” Nature, 497(7451), 557–560.
  12. Bornmann, L., Haunschild, R., & Adams, J. (2018). “Do altmetrics assess societal impact?” arXiv preprint arXiv:1807.03977.
  13. Johnstone, P., & Schot, J. (2023). “Shocks, institutional change, and sustainability transitions.” Research Policy, 52(1), 104–115.
  14. Merton, R. K. (1973). “The sociology of science: Theoretical and empirical investigations.” University of Chicago Press.
  15. Chan, L., Okune, A., Hillyer, R., & Posada, A. (2019). “Contextualizing openness: Situating open science.” Canadian Journal of Development Studies, 40(2), 170–183.
  16. Schot, J., & Steinmueller, W. E. (2018). “Three frames for innovation policy: R&D, systems of innovation and transformative change.” Research Policy, 47(9), 1554–1567.
  17. European Commission. (2020). “A new ERA for Research and Innovation.” Brussels: COM(2020)628 final.
  18. UNESCO. (2021). “Recommendation on Open Science.” Paris: UNESCO.