Open access (OA) and generative artificial intelligence (genAI) shape how researchers, practitioners, and policymakers discover and synthesize criminological knowledge. We examined how deep-research tools from three popular genAI platforms handle criminology literature reviews. The systems vary considerably in quality: some produce plausible reports with unreliable citations, while others generate structured reviews drawing predominantly on OA and other free-to-read full-text articles and reports. The findings reveal that genAI is making OA outputs central to criminology’s evidence base. Free materials are easier for both humans and machines to discover, scrutinize, and integrate into policy and practice. Paywalled research is more likely to be ignored. The takeaway is clear: criminologists who keep their work closed access are blocking its inclusion in the evidence base.
Open access (OA) to criminology outputs—articles, data, code, and other scholarly resources—is essential for building a solid evidence base. An output is OA if it is “digital, online, free of charge, and free of most copyright and licensing restrictions” (Suber, n.d.). Making criminology outputs OA maximizes their uptake and, in turn, supports better informed policy decisions, more effective crime-prevention strategies, reduced injustices in the legal system, and accelerated scientific progress.
The case for OA is even more compelling in light of generative artificial intelligence (genAI), especially large language models (LLMs)—systems that generate content from natural-language prompts (Stryker and Scapicchio, 2024; Yu and Guo, 2023). Although first developed just to guess the next word in a string of text, these models now perform a wide range of complex tasks (Kelleher, 2019; Raaijmakers, 2025; Wheeler, 2026).
GenAI has quickly become a key tool for research discovery, review, and synthesis. Information users—including researchers, policymakers, practitioners, students, and the general public—increasingly rely on this new form of intelligence to locate, evaluate, interpret, and apply scholarly knowledge.
The boom in genAI began in 2022. Until then, genAI tools were effectively limited to technically skilled users, such as those able to train models. With the release of ChatGPT (OpenAI, n.d.a) and the rapid emergence of competitors such as Gemini (Google, n.d.) and Perplexity (n.d.), genAI entered a new era of widespread accessibility and everyday use.
With chat-style interfaces, genAI no longer requires users to have technical expertise. With simple natural-language prompts, anyone with an internet-connected computer or smart device can use genAI to identify, digest, summarize, and brainstorm applications of scholarly literature.
As with humans, the ability of genAI tools to produce accurate, complete, and otherwise desirable outputs depends on their access to information. This is why OA outputs are essential for building a solid evidence base. By expanding what can be used, OA improves the discovery, review, synthesis, and application of scholarly knowledge.
Conversely, closed access constrains what both human minds and genAI can do. An output is closed access when it is unpublished, paywalled, or otherwise restricts who or what can use it and how (Jacques, 2023). These restrictions increase the likelihood that tools will offer lower-quality responses. If LLMs cannot see the material, it can neither be used in training future iterations of the model nor be referenced directly when the models search for additional materials.
Most criminology datasets, code libraries, and teaching materials remain locked away in private files. The bulk of our published articles require payment to access (Ashby, 2021). This is not out of necessity but choice (Jacques and Piza, 2022; Jacques and Wheeler, 2025; Piza and Jacques, 2020). Criminologists can make all of their outputs OA, whether by funding gold OA via article processing charges (APCs), choosing diamond OA venues with no APCs, or providing green OA by posting preprints and postprints on personal webpages and in repositories such as CrimRxiv (n.d.) (Jacques, 2025a).
Criminologists are slowly but steadily joining the OA movement. This shift advances social justice by making our outputs available to everyone and strengthens science by rendering them more transparent, readily accessible, and reusable. The result is greater impact and higher return-on-investment (ROI)—more citations, pageviews, and practical applications for equal or lower investments of time, effort, and money (Jacques, 2025b).
GenAI is accelerating the salience of OA for criminology. These systems decide which sources to treat as part of the evidence base and—as we show below—disproportionately draw on OA and other free-to-read materials. This is unsurprising, since openly available, crawlable texts are easier for genAI to ingest, index, and reuse than closed-access content. The implication for authors is clear: if you want genAI to treat your work as part of the evidence base, you must ensure it has full access to your publications.
Users of popular genAI platforms are often presented with two options: a basic “search” and a “deep research” mode. Deep research refers to a mode in which the model systematically scans the web, visits many sources, and assembles a structured, report-style answer to a prompt. In practice, this mode approximates what scholars would recognize as a literature review: it identifies relevant sources, extracts key findings, and organizes them into an integrated narrative.
We were curious how these tools would handle criminology literature reviews, so we ran a small informal study. We used three platforms—Google’s (n.d.) Gemini, OpenAI’s (n.d.a) ChatGPT, and Perplexity (n.d.)—and posed four criminology-related prompts: measuring stress in police officers (Gemini), the effectiveness of gunshot detection (ChatGPT), survey measures of attitudes toward police (Perplexity), and making convenience samples more representative (Perplexity). The full deep-research reports for each query are available as online supplements (linked above), and we draw on them below to examine what these systems actually cite.
Google’s Gemini and Perplexity produced usable reference lists by default, whereas OpenAI’s ChatGPT required repeated prompting to generate them. In other words, Gemini and Perplexity preemptively disclose where their information comes from, while ChatGPT must be explicitly instructed to reveal its sources, placing an additional burden on users who want to check those sources.
What did these tools actually cite? Below, we examine the types of sources and the kinds of access to them. Specifically, we classify each citation as real or hallucinated, academic or non-academic, linked to freely available full-text or only to a summary, and—when full-text is freely available—as either “really OA” or merely “bronze access.”
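To make the coding scheme concrete, here is a minimal sketch of how each citation could be recorded and summarized. The class and field names are our own invention for illustration; the actual coding in this study was done by hand.

```python
from dataclasses import dataclass

@dataclass
class CitationCoding:
    """Four-way coding of one citation from a deep-research report.

    Field names are hypothetical labels for the categories described
    in the text, not part of any platform's output.
    """
    real: bool            # does the cited work actually exist?
    academic: bool        # scholarly article/report vs. news, blog, etc.
    full_text_free: bool  # does the cited link reach the complete text?
    really_oa: bool       # CC-licensed or public domain (vs. bronze)

    def access_label(self) -> str:
        """Summarize the access status of this citation."""
        if not self.real:
            return "hallucinated"
        if not self.full_text_free:
            return "summary/paywalled"
        return "really OA" if self.really_oa else "bronze access"

# Example: a real, CC-licensed journal article with free full-text.
c = CitationCoding(real=True, academic=True,
                   full_text_free=True, really_oa=True)
print(c.access_label())  # → really OA
```

Tallying these labels across a report’s reference list reproduces the kind of breakdown we discuss below.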
Real vs. hallucinated
The good and bad news about genAI tools is that they almost always produce answers that sound plausible, especially to uncritical readers. Yet their outputs vary widely in accuracy, completeness, and other features—such as citation practices, structure, and tone—depending on the model’s training, the task and domain, the prompts used, and the size of the context window.
Like humans, genAI tools will sometimes make up an answer—“hallucinate”—rather than acknowledge that they do not know (Li et al., 2025; Lowe, 2025; Wu et al., 2025). They can also misinterpret data, rely on outdated information, or draw on flawed sources, including findings from retracted studies. For both human-created and genAI-created content, users have a duty to check the claims and adjust their decisions accordingly. GenAI has not made critical thinking more important, but it has made the need for it more visible.
Unlike the reports from Google and Perplexity, OpenAI’s reference list was essentially one large hallucination. It lists 12 works (shown in the above-linked PDF and in table 1), but every link fails. Link rot is a genuine problem for websites, yet it does not explain these results: these URLs never worked because the cited items were never real web pages (as the subsequent report shows). Instead, the references are portmanteaus of real works and fabricated details.
To help make sense of these hallucinated references—and to illustrate how much platforms’ reports can vary in quality—we asked Perplexity’s deep-research model to analyze the citations in ChatGPT’s bibliography. The full thread from this exercise is available as a linked PDF (linked above), which includes a summary table of Perplexity’s assessment. For illustration, table 1 presents Perplexity’s evaluation of the first five citations in ChatGPT’s list (which we manually reviewed and confirmed to be accurate).
The ChatGPT example underscores a central risk of relying on genAI for literature reviews: even when the narrative sounds plausible, the underlying references may be fabricated or corrupted, and users have no guarantee that the cited works actually exist. Users must therefore scrutinize the cited sources, which is also good practice when evaluating and building on human-generated literature reviews.
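A first, mechanical step in that scrutiny is checking whether a cited DOI is even well-formed and whether its link resolves. The sketch below shows one way to do this; note that a syntactically valid DOI can still be fabricated (the hallucinated Mares & Blackburn DOI passes the format check), so resolving it against doi.org remains necessary. The function names are our own.

```python
import re
import urllib.request

# A DOI is "10." + a 4-9 digit registrant prefix + "/" + a suffix.
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")

def plausible_doi(doi: str) -> bool:
    """Check that a string is syntactically a DOI.

    This is only a first filter: a well-formed DOI can still be
    hallucinated, so it must also be resolved (e.g., via
    https://doi.org/<doi>) to confirm the work exists.
    """
    return bool(DOI_RE.match(doi))

def resolves(url: str, timeout: float = 10.0) -> bool:
    """Try a HEAD request against a URL; False on any error.

    Requires network access; broken links and hallucinated
    URLs both return False.
    """
    try:
        req = urllib.request.Request(
            url, method="HEAD",
            headers={"User-Agent": "citation-check"})
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False

print(plausible_doi("10.1177/0887403421992907"))  # True — yet hallucinated
print(plausible_doi("not-a-doi"))                 # False
```

Running `resolves()` over a reference list automates the first pass of link-checking, but as the ChatGPT example shows, a working link still needs to be read to confirm it points to the claimed work.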
Table 1. Perplexity’s analysis of ChatGPT’s first five references
| Citation (as given) | Real? | What’s wrong? | Closest real source? |
|---|---|---|---|
| Mares, D., & Blackburn, E. (2021). Evaluating the Impact of ShotSpotter Technology on Firearm Homicides and Arrests. Criminal Justice Policy Review. https://doi.org/10.1177/0887403421992907 | Partially hallucinated | Wrong title, wrong journal, wrong DOI, but correct authors/topic/year. | Real article: Mares, D., & Blackburn, E. (2021). Acoustic gunshot detection systems: A quasi-experimental evaluation in St. Louis, MO. Journal of Experimental Criminology, 17(2), 193–215. https://doi.org/10.1007/s11292-019-09405-x |
| National Institute of Justice. (2022). Acoustic Gunshot Detection: A Review of the Evidence. https://nij.ojp.gov/library/publications/acoustic-gunshot-detection-review-evidence | Likely hallucinated / misattributed | URL does not resolve; no NIJ report with this exact title; NIJ/OJP has related GDT reports around 2019–2022 but not this one. | Closest: BJA/NIJ/OJP materials such as Gunshot Detection: Reducing Gunfire through Acoustic Technology (BJA, 2022) and Implementing Gunshot Detection Technology: Recommendations for Law Enforcement and Municipal Partners (Urban Institute/NIJ, 2019–2020). |
| Piza, E. L., et al. (2023). Gunshot Detection Technology and Police Response: A Quasi-Experimental Evaluation. Journal of Experimental Criminology. https://doi.org/10.1007/s11292-023-09553-z | Partially hallucinated | Wrong DOI and inexact title; authors/year/journal/topic roughly right but conflated. | Real JEC article: Piza, E. L., Arietti, R. A., Carter, J. G., & Mohler, G. O. (2023). The effect of gunshot detection technology on evidence collection and case clearance in Kansas City, Missouri. Journal of Experimental Criminology. DOI ≈ 10.1007/s11292-023-09594-6 (not 09553-z). |
| Dallas Police Department. (2015). ShotSpotter Evaluation Report. https://dallaspolice.net/Reports/shotspotter2015.pdf | Hallucinated | URL and title do not exist; no 2015 DPD ShotSpotter evaluation report online. | Closest: Mazerolle, L. G., Watkins, C., Rogan, D., & Frank, J. (1998). Using gunshot detection systems in police departments: The impact on police response times and officer workloads. Police Quarterly, 1(2), 21–49 – based in Dallas but not a DPD report. |
| Government Technology. (2022). ShotSpotter: Technology That Doesn’t Stop Crime? https://www.govtech.com/public-safety/shotspotter-technology-that-doesnt-stop-crime | Partially hallucinated | Wrong year and title; URL given does not match any article. | Closest: GovTech, 2024. Study: ShotSpotter Doesn’t Reduce Crime or Shootings (May 2, 2024) and Chicago Latest City to Rethink Gunshot Detection Technology (Feb 28, 2024). |
Academic vs. non-academic
Conventional academic literature reviews typically restrict their citations to scholarly articles, books, reports, and conference proceedings. By contrast, deep research defaults to something closer to Wikipedia’s citation practice (Reagle, 2010): a factual claim can be supported by almost any source—news articles, blogs, social media posts, and crowdsourced encyclopedias—so long as there is a clickable link to a specific item (e.g., OpenAI, n.d.b).
In our small study, however, the deep-research tools mostly drew on academic sources—or, in ChatGPT’s case, on sources that appeared academic but were in fact fabricated. To illustrate the emphasis on academic sources, we now focus on Gemini’s report on officer stress. Table 2 summarizes the first ten citations from that report, out of 78 total references.
All of the first ten sources are academic journal articles or reports. Across the full set of 78 citations, the few non-academic sources consist mainly of internal procedure documents, policy guidelines, and materials on how to conduct evaluations (e.g., the Maslach Burnout Inventory, PTSD Checklist for DSM-5). These non-academic sources are appropriate to include in a literature review because they define key constructs, operationalize measures, and shape how stress and related outcomes are assessed in practice.
This example suggests that deep-research tools can surface the kinds of peer-reviewed studies that anchor the evidence base, while also drawing in practice documents that are relevant. Our caution is that, in addition to checking whether sources are real, users should watch for cases where these tools drift toward lower-quality or advocacy-driven material (for examples, see Johnson and Johnson, 2026).
Table 2. Source-type and access to the first ten cited works in Gemini’s report on officer stress
Free full-text vs. summary
Academic literature reviews typically draw only on published sources, but they rarely restrict citation to works with freely available full-text—the complete article, not only an abstract or snippet. As a result, any nuance in paywalled sources remains out of sight, limiting readers’ ability to interrogate the evidence and undermining independent scrutiny.
Because most criminology articles are closed access (Ashby, 2021), it is reasonable to assume that the references in our literature are overwhelmingly to restricted content. By extension, one might expect deep-research citations to point mainly to paywalled sources as well. If that were the case, readers who rely on genAI for literature reviews would still be pushed toward sources they cannot inspect directly, reproducing the same barriers to scrutiny that characterize traditional, paywalled publishing.
Yet in the officer-stress report, deep research primarily draws on sources with full-text available at the cited URL. All of the first ten citations meet this standard. Across the 78 citations, roughly one in five URLs do not directly lead to full-text content. In some cases, the full-text is reachable with an extra click, while in others the page only links onward to paywalled versions. A few links are broken, likely reflecting the lag between when the reports were generated (summer 2025) and analyzed for this article (winter 2025/26).
Overall, the report still leans heavily toward directly accessible full-text sources. The tool appears to favor sources that can be read in their entirety. This orientation is good for knowledge because it lets readers and reviewers inspect the full arguments, methods, and results for themselves, rather than relying on secondhand summaries. It also makes it easier to detect errors, replicate analyses, and integrate findings across studies, strengthening criminology’s evidence base.
Really OA or bronze access
All OA sources are free to read, but not all free sources are OA. To be “really OA” (Jacques, 2025a), a work must be permanently and legally free of charge and free of most restrictions on reuse (Suber, n.d.). This is ultimately a matter of copyright and licensing (Suber, 2016; Willinsky, 2022). By default, the creator of a work owns the copyright and therefore controls whether it is made public and, if it is published, how it is distributed and under what conditions others may reuse it.
The oldest form of OA is the public domain (Boyle, 2010). If a work is in the public domain, no one owns the copyright and it can be used in any way: shared or sold, adapted or remixed. Any creator can place their work in the public domain, but in practice—at least for criminology—the law usually does the work. This happens in two main ways: the copyright term expires, or the work was created by the government. In the United States, for example, criminology sources produced by federal agencies are in the public domain, and older works enter the public domain on a rolling basis once their copyright has expired.
In contemporary scholarly publishing, most works become OA through a license (Suber, n.d., 2016). A license shifts a work from “all rights reserved” to “some rights reserved” (with public-domain works effectively having “no rights reserved”). For academic literature, Creative Commons (n.d.) licenses are the most common type. These licenses spell out how a work may be reshared and adapted, including conditions such as “non-commercial” use and “share alike” redistribution.
There are no takebacks with OA. Once a work has a Creative Commons license or enters the public domain, it cannot be converted to a more restrictive status (except by changes in the law, which are rare). The work is permanently and legally free to read, share, and—depending on the license—adapt. By contrast, when there is no such legal guarantee, content that is temporarily free to read can later be placed behind a paywall. This kind of ephemeral free-to-read status is referred to as “bronze access” (for details, see Jacques, 2023, 2025a).
In the report on officer stress, as shown in table 2, six of the first ten citations are to really OA sources: five have a Creative Commons license, and the SAMHSA source is in the public domain. The other four are bronze-access sources, with free access currently available but not protected by law. The breakdown is roughly similar across the report’s 78 citations.
Given that most criminology articles are closed access (Ashby, 2021), this pattern suggests that deep research is disproportionately drawing on sources that are really OA. For more than a decade, OA has had an established “citation advantage” (Suber, n.d., 2016; Willinsky, 2022; in criminology, see Worrall and Wilds, 2024). Now, this visibility and impact advantage is being multiplied by genAI.
In our exploratory study, we compared how the deep-research tools of three genAI models summarized the literature on a few criminology topics. ChatGPT produced a seemingly plausible report, but its accompanying reference list was almost entirely hallucinated. By contrast, Gemini and Perplexity generated structured reports that mostly cited academic journal articles and reports, supplemented by appropriate practice documents such as guidelines and measurement tools. Their citations largely pointed to freely readable full-text sources, many of which are really OA because they are in the public domain or carry a Creative Commons license.
Notwithstanding the obvious limitations of our little study, the findings point to a straightforward implication: genAI makes open-access (and other freely accessible) outputs central to criminology’s evidence base. They are easier for humans and genAI to ingest, interrogate, and reuse than closed-access content. Their lessons are more likely to be discovered, scrutinized, and incorporated into future research, policy, and practice. By contrast, when criminology outputs remain paywalled, people and machines are more likely to overlook relevant evidence or rely on incomplete, outdated, or lower-quality information.
It is more important than ever for authors to make their work OA. Criminologists have not done a good job of this to date (Ashby, 2021). The field and its stakeholders are held back by closed access, but it does not need to be this way. Authors who are willing and able can pay to make their versions-of-record gold OA. Personally, however, we have never done this because the ROI is better with diamond OA and green OA. These routes cost authors nothing yet work just as well (Jacques, 2025a; Jacques and Piza, 2022; Piza and Jacques, 2020). Criminologists who keep their work closed access are blocking its inclusion in the evidence base.
Anthropic. No date. Using Research on Claude. Retrieved January 8, 2026, from https://support.claude.com/en/articles/11088861-using-research-on-claude
Ashby, Matt P. 2021. The Open-Access Availability of Criminological Research to Practitioners and Policy Makers. Journal of Criminal Justice Education 32:1-21. https://doi.org/10.1080/10511253.2020.1838588 (OA postprint: https://osf.io/preprints/socarxiv/wnq7h)
Boyle, James. 2010. The Public Domain: Enclosing the Commons of the Mind. New Haven, CT: Yale University Press.
Creative Commons (CC). No date. About CC Licenses. https://creativecommons.org/share-your-work/cclicenses
CrimRxiv. No date. CrimRxiv — The Global Open Access Hub and Repository for Criminology. Retrieved January 15, 2026, from https://crimrxiv.com
Google. No date. Gemini Deep Research – Your Personal Research Assistant. Retrieved January 8, 2026, from https://gemini.google/gb/overview/deep-research
Jacques, Scott. 2023. Ranking the Openness of Criminology Units: An Attempt to Incentivize the Use of Librarians, Institutional Repositories, and Unit-Dedicated Collections to Increase Scholarly Impact and Justice. Journal of Contemporary Criminal Justice 39:371-386. https://doi.org/10.1177/10439862231172737 (OA postprint: https://doi.org/10.21428/cb6ab371.69930b9a)
Jacques, Scott. 2025a. Making Your Articles Open Access: Opportunities and Choices. The Criminologist, May/June issue. https://asc41.org/wp-content/uploads/ASC-Criminologist-2025-05.pdf
Jacques, Scott. 2025b. Creating Criminology Podcasts with Generative Artificial Intelligence, Storing Them on Web3, and Sharing Them Open Access: A Contribution to Utilitarian Digital Pedagogy (Part I – Conceptual and Theoretical Issues). CrimRxiv. https://doi.org/10.21428/cb6ab371.0753be51
Jacques, Scott, and Eric Piza. 2022. The Irony of Paywalled Articles Is They Can Be Made Open Access for Free: What It Means for ACJS Journals. ACJS Today. (OA postprint: https://doi.org/10.21428/cb6ab371.b21d7325)
Jacques, Scott, and Andrew Wheeler. 2025. A plea for open access to qualitative criminology: With a Python script for anonymizing data and illustrative analysis of error rates. Journal of Qualitative Criminal Justice & Criminology 15(2). https://doi.org/10.21428/88de04a1.29bd3039
Johnson, Thaddeus L., and Natasha N. Johnson. 2026. Academic Freedom, Manufactured Evidence, and the Integrity of Criminal Justice Policy. Evidence Base: Criminal Justice Research, Policy & Action, 2612200. https://doi.org/10.1080/30679125.2025.2612200
Kelleher, John D. 2019. Deep Learning. Cambridge, MA: MIT Press.
Li, Minghao, Ying Zeng, Zhihao Cheng, Cong Ma, and Kai Jia. 2025. ReportBench: Evaluating Deep Research Agents via Academic Survey Tasks. arXiv. https://doi.org/10.48550/ARXIV.2508.15804
Lowe, Derek. 2025. An Evaluation of “Deep Research” Performance. Science. https://www.science.org/content/blog-post/evaluation-deep-research-performance
OpenAI. No date, a. ChatGPT. Retrieved January 16, 2026, from https://chatgpt.com
OpenAI. No date, b. Deep Research. Retrieved January 16, 2026, from https://platform.openai.com
Perplexity. No date. Perplexity. Retrieved January 31, 2026, from https://www.perplexity.ai
Piza, Eric, and Scott Jacques. 2020. ASC Should Make It Legal for Their Journals’ Authors to Immediately, Publicly Share the Accepted Version of Their Manuscripts. CrimRxiv. https://doi.org/10.21428/cb6ab371.34729226
Raaijmakers, Stephan. 2025. Large Language Models. Cambridge, MA: MIT Press.
Reagle, Joseph. 2010. Good Faith Collaboration: The Culture of Wikipedia. Cambridge, MA: MIT Press.
Stryker, Cole, and Mark Scapicchio. 2024. What is generative AI? IBM. https://www.ibm.com/think/topics/generative-ai
Suber, Peter. No date. Open access (the book). https://cyber.harvard.edu/hoap/Open_Access_(the_book)
Suber, Peter. 2016. Knowledge Unbound: Selected Writings on Open Access, 2002-2011. Cambridge, MA: MIT Press.
Wheeler, Andrew P. 2025. Deep Research and Open Access. Andrew Wheeler. https://andrewpwheeler.com/2025/08/28/deep-research-and-open-access
Wheeler, Andrew P. 2026. Large Language Models for Mortals: A practical guide for analysts with python. Durham, NC: Crime De-Coder LLC.
Willinsky, John. 2022. Copyright’s Broken Promise: How to Restore the Law’s Ability to Promote the Progress of Science. Cambridge, MA: MIT Press.
Worrall, John L., and Katherine M. Wilds. 2024. Is Open Access Criminology Influential? Journal of Criminal Justice Education, 1–19. https://doi.org/10.1080/10511253.2024.2389096
Wu, Kevin, Eric Wu, Kevin Wei, Angela Zhang, Allison Casasola, Teresa Nguyen, Sith Riantawan, Patricia Shi, Daniel Ho, and James Zou. 2025. An Automated Framework for Assessing How Well LLMs Cite Relevant Medical References. Nature Communications 16:3615. https://doi.org/10.1038/s41467-025-58551-6
Yu, Hao, and Yunyun Guo. 2023. Generative artificial intelligence empowers educational reform: Current status, issues, and prospects. Frontiers in Education, 8, 1183162. https://doi.org/10.3389/feduc.2023.1183162