The rapid evolution of Artificial Intelligence (AI) and Machine Learning (ML) has significantly impacted numerous sectors, helping to tame information overload and reshaping how decisions are made. Nowhere is this potential more evident, or more contested, than within the Intelligence Community (IC), whose assessments can shape strategic choices with major consequences for national and international security. Security remains a foundational goal of every society: it is the condition that allows all other sectors to prosper. These technological trends are now transforming intelligence analysis, even as the persistent cognitive and organisational biases that impede objective reasoning endure. This essay examines the extent to which AI and ML technologies can effectively serve as force multipliers in intelligence analysis, improving the speed, quality, and ultimately the efficiency of security and strategic decision-making.
For decades, scholars and practitioners have warned of the dangers of overreliance on human judgment in environments saturated with information yet constrained by time and interpretation. Strategic surprise and intelligence failure were often caused less by an absence of information than by the improper synthesis of existing data, which led to errors of analytical judgement by individuals and groups across intelligence organisations. As Heuer famously observed, “Major intelligence failures are usually caused by failures of analysis, not failures of collection.”
A growing body of literature in intelligence studies now addresses whether the integration of AI/ML technologies can mitigate these failures. Optimists frame these tools as “force multipliers,” enhancing analysts’ capacity to detect patterns, generate hypotheses, and provide decision-makers with actionable intelligence faster and more reliably. However, important questions remain. Are machines capable of augmenting the analyst’s role, or do they merely amplify existing biases? Will more algorithmic assistance lead to better decision-making, or will it complicate accountability when mistakes occur? Moreover, given that organisations rarely adapt easily, a tendency exacerbated in governmental entities, will the IC trust these technologies, especially in ambiguous environments with incomplete data? By addressing these questions, this essay asserts that these emerging technologies will inevitably enhance intelligence analysis, thereby impacting the entire intelligence cycle and benefiting the IC as a whole.
In modern warfare, which spans the cyber, space, information, and traditional domains, these shifts in intelligence practice influence how conflicts are strategised, executed, and managed. AI- and ML-powered intelligence plays a crucial role in battlefield awareness, targeting decisions, escalation control, and crisis stability, fundamentally transforming the nature and risks of contemporary conflict.
To situate this work within the broader field, the essay draws on the extensive literature on cognitive bias in intelligence, particularly the effects of confirmation bias, serial position effects, and framing effects. Although Structured Analytic Techniques (SATs), such as the Analysis of Competing Hypotheses (ACH), are commonly used, experimental research shows they often fail to prevent belief entrenchment, owing to factors such as time pressure. Consequently, AI presents an attractive force-multiplier solution, serving not only as a means of significantly accelerating data processing but also, potentially, as a cognitive prosthetic that mimics or supplements human reasoning.
Moreover, these AI models could help address a long-standing issue in intelligence: the mistrust between decision-makers and intelligence providers. Decision-makers naturally distrust sources of power they do not control or cannot influence, and intelligence can unsettle, confuse, or disrupt existing strategies, which may heighten tensions between the IC and policymakers. Building trust between intelligence providers and decision-makers is crucial for attaining high performance. Suitable AI models could therefore offer a discreet and effective way to deliver intelligence products to decision-makers, allowing their use without concerns about loyalty or conflicting interests. This trust could grow if decision-makers feel in control of guiding the AI models to satisfy each specific requirement.
The landscape of intelligence analysis is undergoing a fundamental transformation driven by advances in AI and ML. As intelligence agencies around the world face rapidly growing data volumes, sophisticated adversaries, and tighter decision-making deadlines, the question of whether these technologies can genuinely act as force multipliers has become central to strategic planning. In this framing, AI does not replace analysts but expands their capacity to process information, detect patterns, and deliver timely assessments. At the same time, integrating AI into intelligence raises fundamental questions about the role of human judgment, tradecraft standards, and the reliability of machine-generated insights in high-stakes security decisions.
Theory of Intelligence Analysis
Since the early days of Intelligence Studies, intelligence analysis has usually been described as the application of cognitive techniques to evaluate data and test hypotheses within a covert socio-cultural environment. This definition captures the essence of the analytical process. While intelligence analysis is mainly cognitive, it is also shaped by organisational, historical, and social factors. Analysts often collaborate and rely on existing reports, fostering a risk-averse culture that can produce confirmation bias, favouring evidence that supports initial conclusions. Intelligence analysis thus combines cognitive reasoning with social and organisational influences. Scholars have also challenged the traditional, stepwise view of intelligence work, arguing that the classic Intelligence Cycle is outdated. In practice, AI-enhanced intelligence may enter the cycle at either the collection or processing stage; regardless of the entry point, it remains the task of the all-source analysis and assessment function to contextualise this AI-generated intelligence alongside all other relevant information tied to the same requirement.
Intelligence analysts also face biases that distort judgement, such as the influence of expectations, resistance to changing established views, an overreliance on consistency, and anchoring. Techniques such as brainstorming, devil’s advocacy, and scenario-building help expose and reduce these biases: diagnostic methods add transparency, contrarian approaches challenge prevailing views, and imaginative methods broaden perspectives. While not foolproof, these methods improve rigour, reduce bias-driven errors, and boost credibility, but they demand significant time and resources, increasing the analytical workload. AI can automate or support SATs, lessening the analyst’s burden and enabling more consistent, systematic analysis, as the sketch below illustrates.
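The sketch below shows one way the scoring of an ACH consistency matrix could be automated. It is a minimal Python illustration, not a description of any fielded tool: the hypotheses, evidence items, and ratings are invented, and an operational system would take them from analyst input.

```python
# Minimal sketch of automating an ACH consistency matrix (illustrative only).
# Ratings: "CC" very consistent, "C" consistent, "N" neutral,
# "I" inconsistent, "II" very inconsistent.
INCONSISTENCY = {"CC": 0.0, "C": 0.0, "N": 0.0, "I": 1.0, "II": 2.0}

hypotheses = ["H1: routine exercise", "H2: deception", "H3: imminent attack"]
evidence = {
    "troop movements near border": ["C", "C", "CC"],
    "no logistics build-up":       ["C", "N", "II"],
    "unusual signals silence":     ["I", "CC", "C"],
}

def ach_scores(hypotheses, evidence):
    """ACH seeks to refute: it ranks hypotheses by how much evidence
    contradicts them, rather than by how much supports them."""
    totals = [0.0] * len(hypotheses)
    for ratings in evidence.values():
        for i, rating in enumerate(ratings):
            totals[i] += INCONSISTENCY[rating]
    return dict(zip(hypotheses, totals))

# The hypothesis with the least inconsistent evidence survives scrutiny.
for h, score in sorted(ach_scores(hypotheses, evidence).items(), key=lambda kv: kv[1]):
    print(f"{score:4.1f}  {h}")
```

Because ACH works by elimination, automating the bookkeeping in this way frees the analyst to focus on rating the evidence itself, which remains a human judgement.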
AI/ML in Intelligence
To fully understand the impact of AI and ML on intelligence, one must first grasp their basic definitions. These technologies are often discussed in complex terms, but they are fundamentally about machines simulating human-like abilities such as reasoning, learning, and decision-making. AI involves computer systems mimicking human intelligence, enabling tasks such as language understanding, pattern recognition, and decision-making with minimal human input. ML, a branch of AI, focuses on algorithms that let machines learn from data, improving their performance and predictions over time without explicit programming. Starting with these fundamentals lays a solid base for understanding advanced intelligence applications. The aim of future AI is human-machine collaboration, in which AI supports rather than replaces humans: combining AI’s speed with human judgement and ethics improves decision-making, transparency, and trust, ensuring technology benefits society responsibly and unlocks its full potential.
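The following minimal sketch illustrates the distinction in practice: the classifier infers a decision rule from labelled examples rather than having that rule hand-coded. The feature names and data are invented for illustration.

```python
# Minimal illustration of "learning from data without explicit programming":
# the classifier infers a decision rule from labelled examples instead of
# having that rule hand-coded. Features and labels are invented toy values.
from sklearn.linear_model import LogisticRegression

# Each row: [message_length, foreign_keyword_count]; label 1 = flag for review.
X = [[120, 0], [450, 3], [90, 0], [600, 5], [200, 1], [550, 4]]
y = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X, y)      # the "learning" step
print(model.predict([[500, 4]]))            # applies the learned rule to new data
print(model.predict_proba([[500, 4]]))      # with a probability, not a hard rule
```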
AI shows strong capabilities in processing data, detecting patterns, and automating tasks quickly and accurately. ML helps systems interpret large, complex data (text, images, and metadata) at a scale beyond human capacity, making techniques such as anomaly detection, predictive modelling, and scenario simulation feasible where manual analysis would be prohibitively slow. Unsupervised learning and Large Language Models (LLMs) improve workflows by finding hidden patterns and providing capabilities such as summarisation, translation, and knowledge management. Rule-based AI ensures consistent, objective decisions, reducing human error and fatigue. AI has limits, however. Its performance depends on the quality of its training data; biased or poor data causes errors. Fixed logic limits adaptability, and AI lacks the intuition and judgement needed to read subtle cues. It also struggles to generalise, learning from history rather than reasoning about the future. While AI processes information well, it often misses what is implied or absent, so human oversight remains essential for strategic and accurate interpretation.
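As an illustration of the unsupervised techniques mentioned above, the sketch below flags statistical outliers in synthetic “activity log” features using scikit-learn’s IsolationForest. The data and threshold are invented, and a real deployment would require validated features and human review of every flag.

```python
# Sketch of unsupervised anomaly detection on synthetic "activity log"
# features: the model learns what routine data looks like and flags outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
routine = rng.normal(loc=[50, 5], scale=[10, 2], size=(500, 2))  # normal activity
outliers = np.array([[120, 30], [5, 25]])                        # planted anomalies
X = np.vstack([routine, outliers])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = detector.predict(X)        # -1 = anomaly, 1 = normal
print(X[flags == -1])              # candidates surfaced for analyst review
```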
Human-Machine Collaboration: Opportunities and Limitations
Exploring how the IC can succeed in the AI revolution is essential, as integrating AI and ML into security shifts practice from traditional human-tool models to human-machine teaming (HMT). This partnership combines human and machine strengths for better results; while it offers real operational opportunities, it also carries limitations and wider implications. Success depends not just on the AI technology itself but on creating a socio-technical system that fuses human judgement with machine capabilities. In future warfare, humans will be augmented by machines with access to curated data, acting as strategic centaurs. The most effective systems will mix human and machine intelligence into hybrid cognitive architectures.
Next-generation AI can improve synergistic teaming, but issues with alignment, oversight, and role clarity persist. Experimental findings are mixed: human-AI teams often lacked true synergy and sometimes performed worse than the best individuals, possibly due to over- or under-reliance on the machine, yet AI still boosted human performance overall, with teams outperforming unaided individuals, demonstrating augmentation even where synergy is absent. Achieving true synergy may require new interaction strategies or better research designs. Such teaming could transform the intelligence cycle by combining complementary strengths: AI quickly analyses large volumes of data and automates routine tasks, freeing human analysts for complex activities such as critical thinking and strategic planning. The growing volume of sensor data demands new capabilities, which HMT aims to provide for faster decision-making.
Overall, human-machine collaboration in intelligence analysis offers benefits such as improved reasoning support, with AI tools that help analysts organise their thoughts, automate validity assessments, and increase the transparency of reasoning and evidence. Challenges remain, however: aligning these tools with organisational standards, understanding and trusting AI processes, and the fact that many solutions are still prototypes not yet ready for widespread use. AI can also serve as an equal partner in SAT processes such as challenge analysis, working alongside human analysts in red teaming to question conventional wisdom established by experts (see the sketch below). Many barriers to SAT adoption in the global analyst community have fallen because of AI, enabling IC and business analysis units to use AI-supported tools. Pherson and Heuer forecast that by 2030 these advances will create a more rigorous, insightful analytical environment.
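A hedged sketch of what such an AI devil’s advocate might look like follows. The `complete` function is a hypothetical stand-in for whatever accredited LLM client an organisation runs; the structure of the challenge prompt, not any particular API, is the point here.

```python
# Hedged sketch of an LLM cast as a devil's advocate in challenge analysis.
# `complete` is a hypothetical stand-in for an accredited LLM client; the
# structure of the challenge prompt, not the API, is the point.
def red_team_prompt(assessment: str, key_assumptions: list[str]) -> str:
    assumptions = "\n".join(f"- {a}" for a in key_assumptions)
    return (
        "Act as a devil's advocate. For the draft assessment below, argue "
        "the strongest contrary case, identify the most fragile assumption, "
        "and state what evidence would disprove the assessment.\n\n"
        f"Draft assessment:\n{assessment}\n\nKey assumptions:\n{assumptions}"
    )

def complete(prompt: str) -> str:
    # Hypothetical call to an on-premise, accredited model.
    raise NotImplementedError("wire to the organisation's LLM endpoint")

critique_prompt = red_team_prompt(
    "Country X's mobilisation is a routine exercise.",
    ["No logistics build-up observed", "Signals activity appears normal"],
)
print(critique_prompt)
# A human analyst would review any model output before it informs a judgement.
```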
Decision-makers will adopt advanced methods to address complex policy issues, requesting analyses that identify key drivers, explore scenarios, and challenge assumptions. Success depends on designing collaboration models that leverage complementary strengths, and governance frameworks that manage human-AI interactions. Human-AI teaming requires rethinking traditional models, especially regarding shared goals and contributions. As AI shifts from tool to intelligent agent, it gains flexibility and autonomy, demanding new strategies to clarify roles and improve communication, coordination, and trust. The future of HMT lies in orchestrating partnerships that respect each entity’s capabilities, ensuring technology enhances rather than replaces human judgement. AI/ML developers must understand how analysts think, behave, and act in order to build analyst-centred models that benefit the IC. They may also need to redesign terminology so that machines can communicate confidence levels effectively, for instance via a probability scale, to increase human trust in machines and strengthen HMT.
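One simple way to operationalise such a probability scale is to map a model’s output probability onto standardised estimative language. The bands below loosely follow published probability yardsticks used by Western intelligence communities; the exact phrases and thresholds here are illustrative, not authoritative.

```python
# Sketch of translating a model's output probability into standardised
# estimative language. Bands loosely follow published probability yardsticks;
# exact phrases and thresholds are illustrative, not authoritative.
BANDS = [
    (0.05, "remote chance"),
    (0.20, "highly unlikely"),
    (0.45, "unlikely"),
    (0.55, "realistic possibility"),
    (0.80, "likely"),
    (0.95, "highly likely"),
    (1.00, "almost certain"),
]

def estimative_language(p: float) -> str:
    """Map a probability in [0, 1] to an estimative phrase."""
    for upper, phrase in BANDS:
        if p <= upper:
            return phrase
    return "almost certain"

print(estimative_language(0.83))   # -> "highly likely"
```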
Adoption of Technological Innovation in Intelligence
The adoption of technological innovations will accelerate as they become increasingly useful across sectors. In intelligence, as AI is rapidly integrated, humans must decide when to trust the technology. The IC must understand AI’s capabilities and limitations and adapt as these systems evolve. Qualified users are needed to maximise these systems’ potential and so extend their use in intelligence. Achieving trust-based outcomes is crucial: human-machine teaming ensures effective collaboration with AI both now and over the long term, while stronger AI simplifies HMT, boosts trust, and speeds adoption in the IC. Although technological change can promote progress, it may also cause fragmentation. Gaps often develop along pre-existing divides rooted in earlier technological waves, shaped by technology lock-in or disparities in resources and adaptive capacity, and as knowledge and technology accumulate during innovation, the divide between innovators and non-innovators tends to grow. The speed of technology adoption by government agencies, where most of the IC sits, depends on how quickly the relevant regulations are updated. As AI’s capabilities advance, policymakers are increasingly focused on establishing AI-related policy, a trend that reflects growing recognition of the need to regulate AI while harnessing its transformative power.
The IC must adapt quickly by expanding GenAI use across the intelligence cycle, transforming processes such as collection, analysis, and dissemination. It needs urgent changes in partnerships, communications, adoption, and access to stay competitive; by leveraging new data and technology, it can enhance decision-making despite rising competition. Accelerating GenAI adoption requires clear analytic standards: the IC should begin by integrating AI into current methods and should promote experimentation with LLMs through accessible, domain-specific systems for analysts, preserving standards while enabling rapid GenAI integration into daily work. AI systems are already effective at processing raw intelligence data and may handle even more sensitive information in the future. Data centres must therefore be fortified against threats from advanced nation-states, which calls for new standards for high-security AI data centres and the adoption of classified computing environments for secure AI workloads.
In any relationship, trust relies on communication, even between teammates where one is a machine. In that case, communication rests on Natural Language Processing (NLP). A major breakthrough in NLP is the LLM, which enables systems to understand and generate human-like language and to perform reasoning, a long-standing challenge for AI. These advances have ushered in a new era in NLP, enabling conversational systems that support fluent human-machine communication.
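As a concrete, unclassified illustration of this NLP capability, the sketch below performs local summarisation with an open model via the Hugging Face transformers library. The checkpoint named is a public, illustrative choice; an IC deployment would use an accredited, air-gapped equivalent.

```python
# Illustrative local summarisation with an open model via Hugging Face
# transformers. The checkpoint is a public example; an IC deployment would
# use an accredited, air-gapped equivalent.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

report = (
    "Multiple open sources report increased rail traffic toward the border "
    "over the past week, alongside a marked drop in military radio chatter. "
    "Commercial imagery shows new vehicle parks but no field hospitals or "
    "fuel depots, which normally precede sustained operations."
)
print(summarizer(report, max_length=40, min_length=15)[0]["summary_text"])
```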
Conclusion
In conclusion, this essay has argued that AI and ML are true strategic force multipliers in intelligence analysis and security decision-making. When integrated into analyst-focused, well-governed human-machine teams, they are already helping to reshape how intelligence supports modern warfare rather than functioning as standalone technical tools. It has become evident that AI offers improvements in speed and scale during collection and processing, and supports analytic reasoning through summarisation, pattern discovery, retrieval workflows, and hypothesis generation. On the other hand, a practical constraint on adoption is the requirement for substantial computational power and secure, specialised hardware, often with accelerators, typically housed in high-security, accredited settings. Procurement, energy needs, data mandates, and classification constraints hinder the spread of AI and create disparities, directly affecting who can access AI’s benefits and how quickly. Investments in secure AI data centres, edge computing for urgent tasks, and analyst training are therefore foundational, not supplementary, to harnessing AI and ML in the IC. Beyond these practical limits, AI adoption also faces theoretical constraints, primarily trust and ethical concerns. Trust is a fundamental concept in the social sciences, affecting everything from the stability of democracies to the effectiveness of policing.
These concerns can be addressed through human oversight, which helps prevent harm from inaccurate or biased AI outputs. The global community should establish an oversight treaty, drawing on the examples set by the EU and the US. Oversight can be implemented through human-in-the-loop (HITL), human-on-the-loop, or human-in-command mechanisms, with the method and scope varying by situation. In short, such mechanisms frame AI as a force multiplier in an analyst-centred system, boosting scale and speed without undermining tradecraft, accountability, or strategy. Looking ahead, strategic decision-making should improve as computing power, data infrastructure, and model architectures advance. In particular, more capable neurosymbolic systems could improve the modelling of causal relationships, uncertainty, and long-term effects, offering a more comprehensive perspective. For now, AI/ML is most reliable at the tactical and operational levels, where problems have higher signal-to-noise ratios, shorter timeframes, and more focused, current information. Understanding this progression helps set realistic expectations for deploying, evaluating, and overseeing AI-driven missions.
Lastly, the evidence supports a balanced view. AI and ML improve phases of the intelligence cycle by increasing speed, coverage, and consistency, but they also introduce new dependencies and risks that must be managed. Practically, this essay recommends the following: first, integrate AI with SAT-aligned tradecraft; second, build evaluation frameworks that prioritise mission effects over raw model accuracy, and develop systems against them; third, establish governance for AI applications to increase transparency and thereby aid adoption; and fourth, invest in both computational and human infrastructure for sustained use. Given the inevitable spread of AI across sectors, these measures can turn isolated pilot successes into lasting, system-wide capabilities, improving security and strategic decision-making while maintaining the accountability fundamental to the IC. This, however, requires collaboration with AI and technology specialists beyond the IC, whose importance in unlocking AI’s potential the community has long recognised; as a result, the IC is now deeply embedded in the latest research projects emerging from the tech sector. Yet working closely with STEM innovators is not straightforward, and the relationship between the community and the sector remains uneasy. Nevertheless, the community cannot acquire this force multiplier without taking this risk. “This is a team sport,” stressed Dawn Meyerriecks, the CIA’s top technologist, in a 2019 speech.
Bibliography
Agnihotri, Arpita, Carolyn M. Callahan, and Saurabh Bhattacharya. “Influence of Power Imbalance and Actual Vulnerability on Trust Formation.” International Journal of Organisational Analysis 32, no. 5 (2024): 861–882. https://doi.org/10.1108/IJOA-11-2022-3499.
Agrawal, Ajay, Joshua Gans, and Avi Goldfarb. “Prediction, Judgment, and Complexity: A Theory of Decision-Making and Artificial Intelligence.” In The Economics of Artificial Intelligence: An Agenda, edited by Ajay Agrawal, Joshua Gans, and Avi Goldfarb, 89–110. Chicago: University of Chicago Press, 2019.
Barnett, Jackson. “AI Is Breathing New Life into the Intelligence Community.” FedScoop. Accessed 20 September 2025. https://fedscoop.com/artificial-intelligence-in-the-spying/.
Barry, James A., Jack Davis, David D. Gries, and Joseph Sullivan. “Bridging the Intelligence-Policy Divide: A Progress Report.” Studies in Intelligence 37, no. 5 (1994): 1–13. Accessed 18 September 2025. https://www.cia.gov/resources/csi/studies-in-intelligence/1994-2/bridging-the-intelligence-policy-divide/.
Barry, W. J., C. Metcalf, and B. Wilcox. “Strategic Centaurs: Harnessing Hybrid Intelligence for the Speed of AI-Enabled War.” Modern War Institute at West Point. Accessed 18 September 2025. https://mwi.westpoint.edu/strategic-centaurs-harnessing-hybrid-intelligence-for-the-speed-of-ai-enabled-war/.
Bawden, David, and Lyn Robinson. “Information Overload: An Overview.” In Oxford Encyclopedia of Political Decision Making, edited by David P. Redlawsk. Oxford: Oxford University Press, 2020. https://doi.org/10.1093/acrefore/9780190228637.013.1360.
Britten, Shane. “Intelligence Failures Are Analytical Failures.” Counter Terrorist Trends and Analyses 10, no. 7 (2018): 3–15. https://doi.org/10.2307/26458486.
Cipan, Vibor. “Cognitive Biases in Intelligence Analysis and Their Mitigation (Debiasing).” Accessed 18 September 2025. https://viborc.com/cognitive-biases-intelligence-analysis-mitigation/.
Davis-Stober, Clintin P., Ido Erev, and Sudeep Bhatia. “The Interface between Machine Learning, Artificial Intelligence, and Decision Research.” Decision 11, no. 4 (2024): 435–451. https://doi.org/10.1037/dec0000252.
Davydoff, Daniel. Rethinking the Intelligence Cycle for the Private Sector. White Paper. Alexandria, VA: ASIS International, 2017.
Dictus, Christopher, Claire Bernish, Cortney Weinbaum, Alexa Bruce, and Trevor Johnston. Has Trust in the U.S. Intelligence Community Eroded? Examining the Relationship Between Policymakers and Intelligence Providers. Santa Monica, CA: RAND Corporation, 2024. Accessed 25 November 2025. https://www.rand.org/pubs/research_reports/RRA864-1.html.
Flusberg, Stephen J., Kevin J. Holmes, Paul H. Thibodeau, Robin L. Nabi, and Teenie Matlock. “The Psychology of Framing: How Everyday Language Shapes the Way We Think, Feel, and Act.” Psychological Science in the Public Interest 25, no. 3 (2024): 101–148. https://doi.org/10.1177/15291006241246966.
Garcea, Frank. “Serial Position and von Restorff Effect on Memory Recall.” The Review: A Journal of Undergraduate Student Research 10 (2009): 28–33. Accessed 25 November 2025. https://fisherpub.sjf.edu/ur/vol10/iss1/6.
Government Office for Science. Artificial Intelligence: Opportunities and Implications for the Future of Decision Making. London: Government Office for Science, 2016. Accessed 18 September 2025. https://assets.publishing.service.gov.uk/media/5a7f96e9ed915d74e622b62c/gs-16-19-artificial-intelligence-ai-report.pdf.
Harasimiuk, Dominika E., and Till Braun. Regulating Artificial Intelligence: Binary Ethics and the Law. London: Routledge, 2024.
Heuer, Richards J. Psychology of Intelligence Analysis. Washington, DC: Center for the Study of Intelligence, Central Intelligence Agency, 1999.
Heuer, Richards J., Jr., and Randolph H. Pherson. Structured Analytic Techniques for Intelligence Analysis. 3rd ed. Thousand Oaks, CA: CQ Press, 2020.
Henrique, Bruno Miranda, and Eugene Santos. “Trust in Artificial Intelligence: Literature Review and Main Path Analysis.” Computers in Human Behavior: Artificial Humans 2, no. 1 (2024): 100043. https://doi.org/10.1016/j.chbah.2024.100043.
Hughes, Michael, Rebecca Carter, Adam Harland, and Alexander Babuta. AI and Strategic Decision-Making: Communicating Trust and Uncertainty in AI-Enriched Intelligence. London: The Alan Turing Institute, Centre for Emerging Technology and Security, April 2024.
Hulnick, Arthur S. “What’s Wrong with the Intelligence Cycle.” Intelligence and National Security 21, no. 6 (2006): 959–979. https://doi.org/10.1080/02684520601046291.
Ivančík, Radoslav. “Security Theory: Security as a Multidimensional Phenomenon.” Vojenské reflexie 16, no. 3 (2021): 32–53. https://doi.org/10.52651/vr.a.2021.3.32-53.
Jamieson, Dash, Lt Gen, USAF (Ret.). Human Machine Teaming: The Intelligence Cycle Reimagined. Forum Paper No. 53. Arlington, VA: Mitchell Institute for Aerospace Studies, January 2024. Accessed 18 September 2025. https://www.mitchellaerospacepower.org/human-machine-teaming-the-intelligence-cycle-reimagined.
Johnston, Rob. Analytic Culture in the US Intelligence Community: An Ethnographic Study. Washington, DC: Center for the Study of Intelligence, Central Intelligence Agency, 2005.
Kahneman, Daniel, and Gary Klein. “Conditions for Intuitive Expertise: A Failure to Disagree.” American Psychologist 64, no. 6 (2009): 515–526. https://doi.org/10.1037/a0016755.
Knack, Anna, Richard J. Carter, and Alexander Babuta. Human–Machine Teaming in Intelligence Analysis: Requirements for Developing Trust in Machine Learning Systems. London: Centre for Emerging Technology and Security, December 2022. Accessed 18 September 2025. https://cetas.turing.ac.uk/sites/default/files/2022-12/cetas_research_report_-_hmt_and_intelligence_analysis_vfinal.pdf.
Miletić, Steven, and Leendert van Maanen. “Caution in Decision-Making under Time Pressure Is Mediated by Timing Ability.” Cognitive Psychology 110 (May 2019): 16–29. https://doi.org/10.1016/j.cogpsych.2019.01.002.
Moran, C. R., J. Burton, and G. Christou. “The US Intelligence Community, Global Security, and AI: From Secret Intelligence to Smart Spying.” Journal of Global Security Studies 8, no. 2 (2023): ogad005. https://doi.org/10.1093/jogss/ogad005.
Morris, R. “What Are the Shortcomings of the Intelligence Cycle and How Might They Be Mitigated?” Tac Talks no. 36 (2021): 2–4. Canberra: Australian Department of Defence.
Mukhamediev, Ravil I., Yelena Popova, Yan Kuchin, Elena Zaitseva, Almas Kalimoldayev, Adilkhan Symagulov, Vitaly Levashenko, et al. “Review of Artificial Intelligence and Machine Learning Technologies: Classification, Restrictions, Opportunities and Challenges.” Mathematics 10, no. 15 (2022): 1–38. https://doi.org/10.3390/math10152552.
Murdick, Dewey. “Building Trust in AI: A New Era of Human–Machine Teaming.” Center for Security and Emerging Technology. Accessed 18 September 2025. https://cset.georgetown.edu/article/building-trust-in-ai-a-new-era-of-human-machine-teaming/.
Office of the Director of National Intelligence. A Tradecraft Primer: Structured Analytic Techniques for Improving Intelligence Analysis. Washington, DC: Central Intelligence Agency, March 2009.
Seth, K. “Transforming Intelligence: Innovations Defining the AI and ML Revolution.” Analytics Insight. Accessed 24 July 2025. https://www.analyticsinsight.net/artificial-intelligence/transforming-intelligence-innovations-defining-the-ai-and-ml-revolution.
Siegel, Mark G., Michael J. Rossi, and Jon H. Lubowitz. “Artificial Intelligence and Machine Learning May Resolve Health Care Information Overload.” Arthroscopy: The Journal of Arthroscopic and Related Surgery 40, no. 6 (2024): 1721–1723. https://doi.org/10.1016/j.arthro.2024.01.007.
Simon, Herbert A. “A Behavioral Model of Rational Choice.” The Quarterly Journal of Economics 69, no. 1 (1955): 99–118. https://doi.org/10.2307/1884852.
Special Competitive Studies Project. Intelligence Innovation. Washington, DC: Special Competitive Studies Project, 2024. Accessed 18 September 2025. https://www.scsp.ai/wp-content/uploads/2024/04/Intelligence-Innovation.pdf.
Stanford Institute for Human-Centered Artificial Intelligence. AI Index Report 2025. Stanford, CA: Stanford University, 2025. Accessed 18 September 2025. https://hai.stanford.edu/assets/files/hai_ai_index_report_2025.pdf.
Treverton, Gregory F., and Wilhelm Agrell, eds. National Intelligence Systems: Current Research and Future Prospects. Cambridge: Cambridge University Press, 2009.
Toniolo, Alice, Federico Cerutti, Timothy J. Norman, Nir Oren, John A. Allen, Mukesh Srivastava, and Paul Sullivan. “Human–Machine Collaboration in Intelligence Analysis: An Expert Evaluation.” Intelligent Systems with Applications 17 (2023): 200151. https://doi.org/10.1016/j.iswa.2022.200151.
Tura, F., S. Pickering, M. E. Hansen, and J. Hunter. “Intersectional Inequalities in Trust in the Police in England.” Policing and Society (2025): 1–23. https://doi.org/10.1080/10439463.2025.2529300.
Urrea, V. “The International Community’s Need for Human Oversight in Artificial Intelligence.” Michigan Journal of International Law. Accessed 20 September 2025. https://www.mjilonline.org/the-international-communitys-need-for-human-oversight-in-artificial-intelligence/.
U.S. Department of Homeland Security. The Impact of Artificial Intelligence on Traditional Human Analysis. Washington, DC: U.S. Department of Homeland Security, 2024. Accessed 18 September 2025. https://www.dhs.gov/sites/default/files/2024-09/2024aepimpactofaiontraditionalhumananalysis.pdf.
White House, The. America’s AI Action Plan. Washington, DC: The White House, July 2025. Accessed 18 September 2025. https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf.
Whitesmith, Martha. Cognitive Bias in Intelligence Analysis: Testing the Analysis of Competing Hypotheses Method. Edinburgh: Edinburgh University Press, 2020.
Xiao, Tong, and Jingbo Zhu. “Foundations of Large Language Models.” arXiv preprint arXiv:2501.09223, version 2 (2025). https://doi.org/10.48550/arXiv.2501.09223.
Zegart, Amy B. Spies, Lies, and Algorithms: The History and Future of American Intelligence. Princeton: Princeton University Press, 2022.