This CETaS Research Report presents the findings of a project commissioned by the Joint Intelligence Organisation (JIO) and GCHQ on the topic of artificial intelligence (AI) and strategic decision-making. The report assesses how AI-enriched intelligence should be communicated to strategic decision-makers in government to ensure the principles of analytical rigour, transparency, and reliability of intelligence reporting and assessment are upheld. The findings are based on extensive primary research across UK assessment bodies, intelligence agencies, and other government departments.

Intelligence assessment functions face a significant challenge in identifying, processing, and analysing exponentially growing sources and quantities of information. The research found that AI is a valuable analytical tool for all-source intelligence analysts, and that failing to adopt AI tools could undermine the authority and value of all-source intelligence assessments to government. However, the use of AI could both exacerbate known risks in intelligence work, such as bias and uncertainty, and make it more difficult for analysts to evaluate and communicate the limitations of AI-enriched intelligence. A key challenge for the assessment community will be maximising the opportunities and benefits of AI while mitigating the risks.

To embed best practice when communicating AI-enriched intelligence to decision-makers, the report recommends the development of standardised terminology for communicating AI-related uncertainty; new training for intelligence analysts and strategic decision-makers; and an accreditation programme for AI systems used in intelligence analysis and assessment.
‘AI-enriched intelligence’ in this context refers to intelligence insights that have been derived in part or in whole from the use of machine learning analysis or generative AI systems such as large language models.
The research considered:
- Whether national security decision-makers are sufficiently equipped to assess the limitations and uncertainty inherent in assessments informed by AI-enriched intelligence.
- When and how the limitations of AI-enriched intelligence should be communicated to national security decision-makers to ensure a balance is struck between accessibility and technical detail.
- Whether further governance, guidelines, or upskilling may be required to enable national security decision-makers to make high-stakes decisions based on AI-enriched insights.
Key findings from the research are as follows:
- AI is a valuable analytical tool for all-source intelligence analysts. AI systems can process volumes of data far beyond the capacity of human analysts, identifying trends and anomalies that may otherwise go unnoticed. Choosing not to make use of AI for intelligence purposes therefore risks contravening the principle of comprehensive coverage in intelligence assessment, set out in the Professional Head of Intelligence Assessment Common Analytical Standards. Further, if key patterns and connections are missed, the failure to adopt AI tools could undermine the authority and value of all-source intelligence assessments to government.
- However, the use of AI exacerbates dimensions of uncertainty inherent in intelligence assessment and decision-making processes. The outputs of AI systems are probabilistic calculations (not certainties) and are currently prone to inaccuracies when presented with incomplete or skewed data. The opaque nature of many AI systems also makes it difficult to understand how AI-derived conclusions have been reached.
- There is a critical need for careful design, continuous monitoring, and regular adjustment of AI systems used in intelligence analysis and assessment to mitigate the risk of amplifying bias and errors.
- The intelligence function producing the assessment product remains ultimately responsible for evaluating relevant technical metrics (such as accuracy and error rates) in AI methods used for intelligence analysis and assessment, and all-source intelligence analysts must take into account any limitations and uncertainties when producing their conclusions and judgements.
- National security decision-makers currently require a high level of assurance relating to AI system performance and security to make decisions based on AI-enriched intelligence.
- In the absence of a robust assurance process for AI systems, national security decision-makers generally exhibited greater confidence in the ability of AI to identify events and occurrences than the ability of AI to determine causality. Decision-makers were more prepared to trust AI-enriched intelligence insights when they were corroborated by non-AI, interpretable intelligence sources.
- Technical knowledge of AI systems varied greatly among decision-makers. Research participants repeatedly suggested that a baseline understanding of AI fundamentals, current capabilities, and corresponding assurance processes would be necessary for decision-makers to make high-stakes decisions based on AI-enriched intelligence.
This study has reinforced existing research showing that AI is a valuable tool for the intelligence analysis and assessment community. AI could improve productivity and efficiency, both as a support function and by generating new insights beyond the capabilities of human analysts. Choosing not to make use of available AI tools risks missing key patterns across increasing volumes of data, thereby contravening the guiding principle of comprehensive coverage and potentially undermining the authority and value of all-source intelligence assessments to strategic decision-makers (SDMs).
However, the use of AI in intelligence analysis and assessment is not without risk. AI could exacerbate existing risks such as bias and uncertainty, and make it more challenging for intelligence analysts to evaluate and communicate the limitations of AI-enriched intelligence. The risks of using AI in intelligence analysis and assessment must be weighed against a) the risks inherent to all intelligence analysis work, and b) the perceived additional benefits of using AI. In addition, there is a critical need for careful design, continuous monitoring, and regular adjustment of AI systems to mitigate the risk of amplifying human biases and errors in intelligence assessment.
Guidance is needed to ensure intelligence analysts can effectively communicate the limitations of AI-enriched intelligence to SDMs in a way that upholds the levels of rigour, transparency, and reliability demanded by intelligence assessment standards. The intelligence analyst producing the assessment product remains ultimately responsible for evaluating relevant technical metrics in the underlying AI model, and taking any limitations and uncertainty into account when producing their conclusions and judgements.
Further upskilling across the assessment and SDM community will help to establish a baseline level of technical understanding of AI models and their limitations. Finally, standardised assurance processes for AI systems are required to build credibility and trust in assessments informed by AI-enriched intelligence.