Intelligence Agencies in the Age of AI: Balancing Opportunities and Risks

With the growing popularity of AI-powered commercial chatbots such as GPT-3, Claude, and Sage, artificial intelligence has taken public discourse by storm. Questions of public utility, productivity, and task automation on the one hand, and concerns over job losses or dystopian visions of a robotised future on the other, have divided public opinion across the world. It is thus no surprise that such a controversial, yet undoubtedly promising, technology has grabbed the attention of state services, notably intelligence agencies.

The term artificial intelligence refers to computer technology capable of performing tasks by applying skills otherwise considered unique to human intelligence, or the “computational part of the ability to achieve goals in the world.” These skills range from logical reasoning, language processing, and object recognition to the ability to learn and improve performance based on feedback, to name but a few. This capacity of machines to adapt and optimise their performance and computational abilities by learning from sample data is referred to as machine learning. An especially advanced type of machine learning is deep learning, which structures algorithms into layered networks inspired by the signal transmission of the human brain, known as neural networks.
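
To make the distinction concrete, the minimal sketch below (in Python, using the scikit-learn library and synthetic data, neither of which is mentioned above) shows a small neural network “learning” from sample data in exactly the sense described: its internal weights are adjusted iteratively to reduce prediction error, and its performance is then measured on data it has not seen before.

```python
# A minimal, illustrative sketch of machine learning in practice:
# a small neural network learns to classify examples from sample data
# and improves by adjusting its internal weights based on feedback (errors).
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Synthetic sample data standing in for any labelled training set
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A "deep" model is simply one with several hidden layers of artificial neurons
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)                       # learning: weights adjusted to reduce error
print("accuracy:", model.score(X_test, y_test))   # performance on previously unseen data
```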

However revolutionary, AI is only the latest in a long line of technologies that have transformed the way intelligence actors operate. Throughout history, each new means of communication, from the telegraph to the telephone and radio transmission, has brought with it new methods of intercepting, decrypting, and analysing messages. During the Cold War, for example, human intelligence (HUMINT) was complemented by the use of "bugs", or listening devices, planted in strategic places such as embassies. In recent decades, it was the growth of the Internet that completely shifted the intelligence communities’ focus. Today, signals intelligence has become an integral part of intelligence activities worldwide, with vast technical agencies collecting data from information systems across the globe. The US Department of Defense considers that “AI is poised to transform every industry and is expected to impact every corner of the Department,” and this emerging technology is likely to bring about a similarly significant paradigm shift in the intelligence community.

The work of intelligence agencies can be divided into multiple steps, typically represented as cogs in an intelligence cycle. According to Mark Lowenthal – the godfather of intelligence studies and a leading expert on the US intelligence system – this cycle comprises the collection of data, its processing and analysis, the production and dissemination of intelligence, and finally feedback. This article analyses both the opportunities and dangers of AI for the processing and analysis of intelligence, the stages likely to be most affected by this emerging technology. The first part considers the various potential uses of AI in the interpretation of relevant data. The second discusses the limits of AI’s analytical capabilities and the dangers of placing excessive trust in this technology. Lastly, three policy recommendations are proposed to mitigate these risks and shortcomings.

AI recognition and prediction capabilities

It is estimated that data stored in cloud services alone will reach 100 zettabytes, or 100 trillion gigabytes, by 2025. The digitisation of virtually every aspect of our social, political, and economic life creates the perfect opportunity for actors – criminals and intelligence agencies alike – to collect astronomical volumes of data with each infiltration of an information system. Manually sorting through these sources to identify pertinent elements is virtually impossible, as it would require enormous human capital and time. In the case of open-source intelligence (OSINT), too, the volume of extracted data can greatly surpass human capacity to filter out the irrelevant “noise”.

The integration of AI-powered technology can significantly enhance the processing and analysis phase of the intelligence cycle. This recognition is evident in the ongoing initiatives of the US Intelligence Advanced Research Projects Activity (IARPA) and its Office of Analysis. The Deep Intermodal Video Analytics (DIVA) project aims to develop video forensic analysis software that can process extensive CCTV footage at an extremely fast pace and detect specific events within it. The Babel program seeks to simplify the analysis of noisy phone conversations, while the MATERIAL program aims to enhance the analysis of foreign-language data in speech and writing. These programs are only some of the many examples of AI’s potential to boost the efficiency and accuracy of data processing and analysis.

The use of image recognition technology has already generated numerous controversies, as it has the potential to be employed for invasive surveillance and targeted discrimination. The facial recognition technology deployed in China shows how this technology can achieve near-perfect accuracy over time and subsequently be employed to oppress discriminated groups such as the Uyghur ethnic minority. Meanwhile, researchers at Stanford University have developed a facial recognition tool that can identify a person's sexual orientation with greater accuracy than humans. The potential of AI to enhance the analytical capacities of human analysts is virtually limitless and is already being employed both for domestic security purposes – for example by France in the context of the 2024 Olympic Games – and in armed conflicts, for example to differentiate militants from civilians when using armed drones.

Conflict prediction is yet another use of AI technology for intelligence and security services. Machine learning models are being developed to offer early warning of potential armed escalations. These programs compare data from previous conflicts with current crises in search of matching statistical variables. While such tools are unlikely ever to replace human judgement in decision-making, their value in preparing authorities for crisis outcomes that run counter to wishful thinking should not be underestimated.
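
As a simplified illustration of how such an early-warning model might work, the sketch below encodes past crises as a handful of hypothetical numeric indicators (troop build-up, prior clashes, mediation activity) with a known outcome, then scores a current crisis. The features, data, and classifier choice are placeholders, not a description of any actual system; real programs draw on far richer data.

```python
# Illustrative sketch of a conflict early-warning model (hypothetical features).
# Past crises are encoded as numeric indicators with a known outcome
# (escalated / not escalated); a classifier then scores a current crisis.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical indicators: troop build-up, prior clashes, mediation attempts
past_crises = np.array([
    [0.9, 1, 0],   # heavy build-up, prior clashes, no mediation -> escalated
    [0.2, 0, 2],   # low build-up, no clashes, active mediation  -> resolved
    [0.7, 1, 1],
    [0.1, 0, 3],
])
outcomes = np.array([1, 0, 1, 0])  # 1 = armed escalation, 0 = de-escalation

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(past_crises, outcomes)

current_crisis = np.array([[0.8, 1, 1]])
risk = model.predict_proba(current_crisis)[0, 1]
print(f"estimated escalation risk: {risk:.0%}")   # a warning signal, not a verdict
```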

Risk of manipulation and machine bias

Nevertheless, there are certain limitations to this seemingly endless pool of opportunities that AI-powered models may offer. These challenges include the proliferation of synthetic media, which calls into question the reliability of collected online content; other countermeasures that thwart AI-powered recognition tools; and concerns over machine bias.

Indeed, some applications of AI technology can pose significant challenges for intelligence work. This is the case with deep fakes and other footage that is digitally fabricated or altered, often using AI-powered deep learning models. Commercially available models such as DALL-E and Midjourney already generate images from a simple text prompt. This accessible technology has great potential for spreading disinformation and further blurring the line between fact and fabrication, as seen in fabricated images circulating during the ongoing French protests against pension reform or purporting to show Donald Trump's arrest by the FBI. The abundance of carefully crafted deep fakes on social media may make the identification of reliable sources much more difficult, especially if an intelligence community is analysing a region with which it is not well familiar. Synthetic media may soon become indistinguishable from legitimate sources, both to a computer and to the human eye. While agencies may find innovative uses for this technology, such as blackmailing agents into cooperation using fabricated incriminating footage, the risks and ethical concerns surely outweigh the potential benefits.

Image recognition technology is also not without flaws: it is particularly vulnerable to adversarial attacks, in which carefully crafted perturbations are added to an image to confuse machine learning models. These attacks often involve adding distorted image layers that appear to humans as mere noise but can cause the AI to misclassify the object being analysed. Such techniques are already in use – a notable example is a clothing brand producing garments with patterns that cause facial recognition software to misidentify the wearer. The increasing use of these techniques, which may become automated in the future, may one day call into question the legitimacy of AI analytical tools; and as AI tools become more sophisticated, their growing complexity may leave room for further evasion techniques.
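
One well-known way such perturbations can be generated is the fast gradient sign method, sketched below in PyTorch. The classifier, input image, and label here are placeholders, and the example is illustrative only; it is not a description of the attacks mentioned above, but it shows the core idea of nudging an image in the direction that most increases the model's error while staying visually imperceptible.

```python
# Minimal sketch of an FGSM-style adversarial perturbation (illustrative only).
# A tiny perturbation, built from the gradient of the model's loss, is added
# to the image to push the classifier towards a wrong label.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # placeholder classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # placeholder input image
true_label = torch.tensor([3])

loss = loss_fn(model(image), true_label)
loss.backward()                                   # gradient of the loss w.r.t. the image

epsilon = 0.03                                    # perturbation budget (barely visible)
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```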

Finally, relying on AI-based analysis of massive real-life data sets poses a risk of manipulations going undetected by humans. Even without adversarial attacks and other deception techniques, the analytical use of deep learning is susceptible to errors and manipulation. Achieving a fully objective model is impossible, as the biases of the developers and of the authors of the content it is trained on will inevitably propagate through the deep learning process. For instance, if the sample data is biased against certain ethnic minorities, facial recognition tools used by intelligence agencies may flag those groups as more dangerous. In the case of conflict prediction systems, the AI may be more prone to judging a pre-emptive use of force a viable response if it is mostly trained on crises that did escalate into armed conflict, rather than on false alarms or cases of successful diplomatic de-escalation. This in itself does not invalidate AI as a viable analytical tool – some scholars even argue that simple heuristics-based models incorporating cognitive biases can offer more accurate predictions in uncertain environments than complex machine learning models – but it becomes a serious problem when these limitations are not recognised and the results of AI-assisted analysis are taken for granted as impartial.
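
The training-data skew described above can at least be surfaced and partly offset before deployment. The hedged sketch below, using hypothetical placeholder data, shows the most basic version of this: inspecting the label balance of a training set and reweighting the under-represented class so it is not drowned out. It illustrates the principle rather than any specific agency practice.

```python
# Sketch: detecting and partly offsetting a skewed training set (illustrative).
# If almost all historical examples are crises that escalated, an unweighted
# model learns that "escalation" is the safe default answer.
import numpy as np
from collections import Counter
from sklearn.linear_model import LogisticRegression

X = np.random.rand(200, 5)                        # placeholder crisis indicators
y = np.array([1] * 180 + [0] * 20)                # 90% escalations, 10% de-escalations

print(Counter(y))                                 # first step: inspect label balance

# 'balanced' reweights the rare de-escalation cases so they are not drowned out
model = LogisticRegression(class_weight="balanced", max_iter=1000)
model.fit(X, y)
```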

Policy recommendations

History has shown that technological revolutions, while highly disruptive, eventually give birth to innovative solutions and adaptation strategies. Based on the current pace of artificial intelligence development and states’ growing interest in the technology, it is safe to assume that AI will very likely revolutionise the way national security and interstate competition are pursued by state institutions, including intelligence agencies. That said, in a field as critical for national security as intelligence, it is especially important to be mindful of the limitations and dangers of this technological paradigm shift and to adapt to them properly. Three policy recommendations for intelligence agencies can be identified.

Development of fake-content detection technology

With image-generating deep learning models available to the public, agencies need to prepare to expand their criteria of analysis to account for AI-fabricated content. If image recognition tools are to improve the analytical capabilities of the state, the state needs to stay ahead of rapidly evolving image-generation technology. R&D resources could be allocated to the development of advanced anti-deep-fake image and voice recognition tools capable of detecting abnormalities in images, videos, and sound recordings. Currently, no such advanced technology exists; instead, regulations are being developed that would require the AI industry to make its generated content “recognizable”. Microsoft has already pledged to add a cryptographic watermark to all synthetic media its apps generate, and the same topic was recently raised in talks between OpenAI CEO Sam Altman and French Finance Minister Bruno Le Maire.
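
The article does not specify how such a cryptographic watermark would work. As a purely hypothetical illustration, the sketch below verifies an HMAC-style provenance tag that a generator could attach to a file's metadata; the key, function names, and scheme are invented for this example, and real watermarking approaches (robust marks embedded imperceptibly in the media itself) are considerably more sophisticated.

```python
# Purely hypothetical sketch of verifying a provenance tag on generated media.
# A generator signs the file bytes with a secret key; a verifier holding the
# same key can confirm the file was produced, unaltered, by that generator.
import hmac
import hashlib

GENERATOR_KEY = b"shared-secret-key"               # hypothetical provider key

def sign_media(file_bytes: bytes) -> str:
    """Tag the generator would attach to its output (e.g. in metadata)."""
    return hmac.new(GENERATOR_KEY, file_bytes, hashlib.sha256).hexdigest()

def verify_media(file_bytes: bytes, tag: str) -> bool:
    """Check whether a file carries a valid tag from this generator."""
    return hmac.compare_digest(sign_media(file_bytes), tag)

synthetic_image = b"...image bytes..."
tag = sign_media(synthetic_image)
print(verify_media(synthetic_image, tag))          # True  -> declared as synthetic
print(verify_media(b"tampered bytes", tag))        # False -> tag does not match
```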

Strengthened supervision of AI-powered analysis 

Even with these and other measures in place, the intelligence community must ensure that human control is exercised over AI-generated data analysis. The presence of human analysts at the end of the loop is crucial to minimise the impact of biased AI analysis. These supervisors must be sensitised to the dangers of machine bias and verify at least a sample of the data analysed using artificial intelligence. Regular tests of the AI’s judging criteria, both before and after deployment, are necessary to observe whether the software’s analysis results become skewed over time. If any biases are spotted, the learning model needs to be retrained, for example using synthetic data, to offset this tendency.
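
In practice, such sample-based human verification could be as simple as the hedged sketch below: a supervisor re-checks a random sample of machine-labelled items each period and compares the machine's output to the human judgement, triggering a review when disagreement exceeds a threshold. The data, threshold, and function are placeholders chosen for illustration, not a prescribed procedure.

```python
# Illustrative sketch of a periodic audit of an AI-assisted analysis pipeline.
# A human supervisor re-checks a random sample of machine-labelled items and
# compares results; a rising disagreement rate signals skew and a need to retrain.
import random

def audit(machine_labels, human_labels, window_name, alert_threshold=0.10):
    """Compare machine output to human review on a sampled window."""
    disagreements = sum(m != h for m, h in zip(machine_labels, human_labels))
    rate = disagreements / len(machine_labels)
    status = "RETRAIN / REVIEW" if rate > alert_threshold else "ok"
    print(f"{window_name}: disagreement rate {rate:.1%} -> {status}")

# Placeholder data: 0 = not flagged, 1 = flagged as relevant/threatening
machine = [random.randint(0, 1) for _ in range(200)]
human = [m if random.random() > 0.15 else 1 - m for m in machine]  # ~15% disagreement
audit(machine, human, "Q1 sample")
```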

Ethical AI guidelines, transparency, and data integrity

Finally, intelligence agencies must take responsibility for the legality and morality of the missions that AI-powered models are tasked with. In democratic regimes, oversight of the agencies’ actions by supervisory bodies, including parliament, is a necessary safeguard against abuse of power. One way to ensure respect for the social and human rights of citizens and minority groups is through national regulations that would impose ethical guidelines on any AI models employed by intelligence agencies. If communicated to the public in an open and transparent manner, such constraints built into the models – similar to the limits that OpenAI’s and Anthropic’s chatbots place on harmful and illicit requests – would help build citizens’ trust in both intelligence agencies and the often controversial AI technology. But domestic misuse of personal information is not the only risk: the data sets themselves are a common target for external actors, private cyber criminals and foreign states alike. Governments must pay close attention to securing this data, lest it fall into the wrong hands.
