Tessa Foo, London Politica

Benchtop DNA Printers - Biosecurity and Proliferation Risks

DNA synthesis technology can print DNA from digital sequence representations, allowing researchers to study and engineer biological systems. Next-generation benchtop DNA synthesis devices offer greater accessibility and confidentiality, enabling a wider range of users to print DNA in home laboratories. However, this also creates risks, as malicious actors could produce dangerous pathogens without oversight. The convergence of AI with DNA synthesis technology has amplified concerns about AI-bio capabilities.

Read More

Retrospective Decryption: Harvest Now, Decrypt Later

With rapid advancements in quantum computing, cybersecurity experts and adversaries alike are beginning to explore and assess the weak points in current cryptographic systems. One of the most discussed challenges is popularly called “Harvest Now, Decrypt Later”, or Retrospective Decryption.

At its core, retrospective decryption embodies a surveillance-oriented approach: the acquisition and long-term retention of currently unreadable encrypted data, which could later be decrypted using a quantum computer.

Read More

Dual-Use Space Technologies - Part 1

This first report in our series on dual-use space technology focuses on Space Situational Awareness (SSA) technology and Space Communications technology. It begins by providing overviews of the international legal landscape regulating dual-use space technology and defining both Space Situational Awareness and Space Communications technologies.

This report focuses on the three major geopolitical actors in SSA and Space Communications technology: the US, China and Russia. In both technology categories, the US dominates and continues to foster partnerships and enhance its existing capabilities. However, China and Russia are also investing in SSA capabilities that can be used for military purposes, and the growing importance of information warfare in their military doctrines has led both to focus significantly on space communications technology.

Read More
Ethan Kawamara Mugire, London Politica

Algorithms on the Battlefield: Risk, Reward, Responsibility

Artificial Intelligence (AI) continues to be a pioneering force in advancements set to revolutionise fields of study and application. This unprecedented pace of innovation comes amidst escalating global conflict, prompting nations to reimagine AI within a sphere different from those previously popularised: defence and security. This reimagining of AI’s application on the battlefield promises to enhance precision, efficiency, and decision-making in warfare; however, it also opens debate on several ethical and geopolitical challenges.

Read More

Drone Proliferation in Modern Warfare: A Closer Look at Iran and Ukraine

The case studies of Iran and Ukraine underscore the far-reaching impact of drone proliferation on contemporary conflicts. The rapid growth and globalisation of the Unmanned Aerial System (UAS) industry have created numerous access points for parties looking to acquire drones. While the sector's growth has democratised the technology, the increasing number of states using drones highlights a troubling trend of proliferation.

Read More
Ridipt Singh, Marko Filijović, London Politica

AI-driven Power Concentration and the Need for an Inclusive Global AI Governance

AI is bound to transform the lives of people across borders and activities in all sectors. However, restricting the discussions around its regulation to a select few countries would lead to an AI-driven concentration of power. The resulting socio-political and economic effects would further aggravate the existing digital divide.

Read More
Written by Piotr Malachinski, London Politica

Intelligence Agencies in the Age of AI: Balancing Opportunities and Risks

With the growing popularity of AI-powered commercial chatbots such as GPT-3, Claude and Sage, artificial intelligence has taken the public discourse by storm. Questions of public utility, productivity, and task automation on the one hand and concerns over loss of jobs or dystopian visions of a robotized future on the other have divided public opinion across the world. It is thus no surprise that such controversial, yet undoubtedly extremely promising technology has grabbed the attention of state services, notably intelligence agencies. 

The term artificial intelligence refers to computer technology capable of performing tasks by applying skills otherwise considered unique to human intelligence, or the “computational part of the ability to achieve goals in the world.” These range from logical thinking, language processing and object recognition to the ability to learn and improve performance based on feedback, to name but a few. This capacity of machines to adapt and optimise their performance and computational abilities by learning from sample data is referred to as machine learning. An especially advanced type of machine learning is called deep learning, which structures algorithms in a way inspired by the signal transmission of the human brain: neural networks.
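
As a toy illustration of that learning loop (entirely invented numbers, not any agency's tooling), a minimal perceptron below adjusts its parameters from feedback on labelled samples:

```python
# Toy sketch of machine learning: a model improves its parameters from
# feedback on labelled sample data. All numbers are invented for illustration.
samples = [([1.0, 0.2], 1), ([0.1, 0.9], 0), ([0.9, 0.4], 1), ([0.2, 0.8], 0)]
weights, bias, learning_rate = [0.0, 0.0], 0.0, 0.1

for epoch in range(20):                      # repeated passes over the sample data
    for features, label in samples:
        activation = sum(w * x for w, x in zip(weights, features)) + bias
        prediction = 1 if activation > 0 else 0
        error = label - prediction           # feedback: how wrong was the model?
        weights = [w + learning_rate * error * x for w, x in zip(weights, features)]
        bias += learning_rate * error        # nudge parameters to reduce future error

print(weights, bias)                         # parameters "learned" from the data
```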

However revolutionary, AI is only the latest in a long line of technologies that have transformed the way intelligence actors operate. Throughout history, each new means of communication, from the telegraph to the telephone and radio transmission, has brought with it new methods of intercepting, decrypting, and analysing messages. During the Cold War, for example, human intelligence (HUMINT) was complemented by the use of "bugs", or listening devices, planted in strategic places such as embassies. In recent decades, it was the growth of the Internet that completely shifted the intelligence communities’ focus. Today, signals intelligence has become an integral part of intelligence activities worldwide, with vast technical agencies collecting data from information systems across the globe. As the US Department of Defense puts it, “AI is poised to transform every industry and is expected to impact every corner of the Department,” and this emerging technology is likely to bring about another significant paradigm shift in the intelligence community.

The work of intelligence agencies can be divided into multiple steps, typically represented as cogs in an intelligence cycle. According to Mark Lowenthal – the godfather of intelligence studies and an expert on the US intelligence system – this cycle comprises the collection of data, its processing and analysis, the production and dissemination of intelligence, and finally feedback. This article will analyse the opportunities and dangers of AI for the processing and analysis of intelligence, the stages likely to be most affected by this emerging technology. The first part will consider the various potential uses of AI in the interpretation of relevant data. Secondly, the limits of AI’s analytical capabilities and the dangers of excessive trust in this technology will be discussed. Lastly, three policy recommendations will be proposed to mitigate these risks and shortcomings.

AI recognition and prediction capabilities

It is estimated that data stored in cloud services alone will reach 100 zettabytes, or 100 trillion gigabytes, by 2025. The digitisation of virtually every aspect of our social, political, and economic life creates the perfect opportunity for actors – criminals and intelligence agencies alike – to collect data of astronomical size with each infiltration of an information system. Manually sorting through these sources to identify pertinent elements is virtually impossible, as it would require enormous human capital and time. In the case of open-source intelligence (OSINT), too, the volume of extracted data can greatly surpass human capabilities of filtering out the irrelevant “noise”.

The integration of AI-powered technology can significantly enhance the processing and analysis phase of the intelligence cycle. This recognition is evident in the US Intelligence Advanced Research Projects Activity (IARPA) and its Office of Analysis' ongoing initiatives. The Deep Intermodal Video Analytics (DIVA) project aims to develop video forensic analysis software that can analyse extensive CCTV footage at an extremely fast pace and detect specific events. Meanwhile, the Babel program seeks to simplify the analysis of noisy phone conversations, while the MATERIAL program aims to enhance the analysis of foreign-language data in speech and writing. These programs are only some of the many examples of AI’s potential to boost the efficiency and accuracy of data processing and analysis.

The use of image recognition technology has already generated numerous controversies, as it has the potential to be employed for invasive surveillance and targeted discrimination. The facial recognition technology utilised in China shows how this technology can achieve near-perfect accuracy over time and be subsequently employed to oppress discriminated groups such as the Uyghur ethnic minority. Meanwhile, researchers at Stanford University have developed a facial recognition tool that can identify a person's sexual orientation with greater accuracy than humans. The potential of AI to enhance the analytical capacities of human analysts is virtually limitless, and it is already being employed both for domestic security purposes – for example by France in the context of the 2024 Olympic Games – and in armed conflicts, for example to differentiate militants from civilians when using armed drones.

Conflict prediction is yet another use of AI technology for intelligence and security services. Machine learning models are being developed to offer early warning of potential armed escalations. These programs compare data from previous conflicts to current crises, in search of matching statistical variables. While such tools are unlikely to ever replace human judgement in decision-making, their value in preparing authorities for alternative crisis outcomes, and in counteracting wishful thinking, should not be underestimated.
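
A hedged sketch of what such a model could look like: a classifier fitted on statistical variables from past crises, then used to score a current one. The features and figures below are invented placeholders, not a real early-warning system:

```python
# Hedged sketch of a conflict early-warning model: fit a classifier on
# statistical variables from past crises, then score a current one.
# All features and values are synthetic placeholders.
from sklearn.linear_model import LogisticRegression

# Each row: [troop build-up, hostile rhetoric, economic stress, border incidents]
past_crises = [
    [0.9, 0.8, 0.6, 1.0],   # escalated into armed conflict
    [0.7, 0.9, 0.8, 0.0],   # escalated
    [0.2, 0.3, 0.4, 1.0],   # resolved diplomatically
    [0.1, 0.5, 0.3, 0.0],   # resolved
    [0.8, 0.6, 0.7, 1.0],   # escalated
    [0.3, 0.2, 0.5, 0.0],   # resolved
]
escalated = [1, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(past_crises, escalated)

current_crisis = [[0.6, 0.7, 0.5, 1.0]]
risk = model.predict_proba(current_crisis)[0][1]
print(f"Estimated escalation risk: {risk:.0%}")  # a prompt for analysts, not a verdict
```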

Risk of manipulation and machine bias

Nevertheless, there are certain limitations to this seemingly endless pool of opportunities AI-powered models may offer. These challenges include the proliferation of synthetic media, which calls into question the reliability of collected online content; countermeasures that thwart AI-powered recognition tools; and concerns over machine bias.

Indeed, some applications of AI technology can pose significant challenges for intelligence work. This is the case of deep fakes and other footage that is digitally fabricated or altered, often using AI-powered deep learning models. Commercially available models such as DALL-E and Midjourney already generate images from a simple text prompt. This accessible technology has great potential for spreading disinformation, further blurring the line between facts and fabrications, as with fabricated images circulating during the ongoing protests in France against pension reform, or those purporting to show Donald Trump's arrest by the FBI. The abundance of carefully crafted deep fakes on social media may make identification of reliable sources much more difficult, especially if a given intelligence community is analysing a region it is not well familiar with. It is likely that synthetic media may soon become indistinguishable from legitimate sources, both to a computer and to the human eye. While agencies may find innovative uses for this technology, such as blackmailing agents into cooperation using fabricated incriminating footage, the risks and ethical concerns surely outweigh the potential benefits.

Besides, image recognition technology is not without its flaws, as it is particularly vulnerable to adversarial attacks: manipulations that add perturbations to an image in order to confuse machine learning models. These attacks often involve adding distorted image layers that appear only as noise to humans but can cause the AI to misclassify the object being analysed. Such techniques are already in use – one notable example is a clothing brand producing clothes with patterns that cause AI facial recognition software to misidentify the wearer. The increasing use of these techniques, which may become automated in the future, may one day raise concerns about the legitimacy of AI analytical tools; and as AI tools become more sophisticated, their growing complexity may leave room for further evasion techniques.
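
A minimal sketch of the idea, in the style of the fast gradient sign method (FGSM); the model and "image" below are random stand-ins, but the gradient step is the standard recipe:

```python
# Hedged sketch of an adversarial perturbation (FGSM-style): a noise layer,
# imperceptible to humans, nudges a classifier away from the correct label.
# The model and "image" here are random stand-ins, not a real recognition system.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
image = torch.rand(1, 3, 32, 32, requires_grad=True)             # stand-in image
true_label = torch.tensor([3])

loss = nn.CrossEntropyLoss()(model(image), true_label)
loss.backward()                                                  # gradient w.r.t. pixels

epsilon = 0.03                                                   # small enough to look like noise
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)  # FGSM step

# The perturbed prediction drifts away from the true label as the loss rises.
print(model(image).argmax().item(), model(adversarial).argmax().item())
```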

Finally, relying on AI-based analysis of massive real-life data sets poses a risk of manipulations going undetected by humans. Even without adversarial attacks and other deception techniques, the analytical use of deep learning is susceptible to errors and manipulations. Achieving objectivity of a model is impossible, as the biases of the developers and of the authors of the content it trains on will inevitably replicate throughout the deep learning process. For instance, if the sample data is biased against certain ethnic minorities, facial recognition tools used by intelligence agencies may identify those groups as more dangerous. In the case of conflict prediction systems, the AI may be more prone to judging a pre-emptive use of force as a viable response if it is mostly trained on crises that did escalate into armed conflicts, rather than on false alarms or cases of successful diplomatic de-escalation. This in itself does not invalidate AI as a viable analytical tool, and some scholars argue that simple heuristics-based models incorporating cognitive biases can offer more accurate predictions in uncertain environments than complex machine learning models; it does become a serious problem, however, when these limitations are not recognised and the results of AI-assisted analysis are taken for granted as impartial.

Policy recommendations

History has shown that technological revolutions, while highly disruptive, eventually give birth to innovative solutions and adaptation strategies. Based on the current speed of artificial intelligence development and states’ growing interest in it, it is safe to assume AI will very likely revolutionise the way national security is pursued and interstate competition is waged by state institutions, including intelligence agencies. That said, in a field as critical for national security as intelligence, it is especially important to be mindful of the limitations and dangers of this technological paradigm shift and to properly adapt to them. Three policy recommendations for intelligence agencies can be identified.

Development of fake-content detection technology

With image-generating deep learning models available to the public, agencies need to prepare to expand their criteria of analysis to account for AI-fabricated content. If image recognition tools are to improve the analytical capabilities of the state, the state needs to stay ahead of rapidly evolving image-generation technology. R&D resources could be allocated to the development of advanced anti-deep-fake image and voice recognition tools, able to detect abnormalities in images, videos, and sound recordings. Currently, no such advanced technology exists; instead, regulations are being developed that would require the AI industry to make its generated content “recognizable”. Microsoft has already pledged to include a cryptographic watermark in all synthetic media its apps generate, and the same topic was recently brought up in talks between OpenAI CEO Sam Altman and French Finance Minister Bruno Le Maire.
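
One simple form such a watermark scheme could take is easy to sketch. The snippet below assumes an HMAC-style tag attached to generated media; real provenance standards (such as C2PA) are considerably more involved, and the key and function names here are hypothetical:

```python
# Hedged sketch of a cryptographic watermark: the generator signs a media
# file's bytes, and a verifier checks the tag. Illustrative only.
import hashlib
import hmac

SIGNING_KEY = b"generator-secret-key"  # hypothetical key held by the AI provider

def watermark(media_bytes: bytes) -> bytes:
    """Tag generated media so its synthetic origin can later be proven."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).digest()

def is_declared_synthetic(media_bytes: bytes, tag: bytes) -> bool:
    """Verifier-side check: does the tag match, marking the file as AI-generated?"""
    return hmac.compare_digest(watermark(media_bytes), tag)

generated = b"...synthetic image bytes..."
tag = watermark(generated)
print(is_declared_synthetic(generated, tag))          # True: provenance intact
print(is_declared_synthetic(generated + b"x", tag))   # False: altered or unmarked
```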

Strengthened supervision of AI-powered analysis 

Even with these and other measures in place, the intelligence community must make sure human control is exercised over AI-generated data analysis. The presence of human analysts at the end of the loop is crucial to minimise the impact of biased AI analysis. These supervisors must be sensitised to the dangers of machine bias and verify at least a sample of the data analysed using artificial intelligence. Regular tests of the AI’s judging criteria, both before and after deployment, are necessary to observe whether the software’s analysis results become skewed over time. If any biases are spotted, the learning model needs to be retrained, for instance using synthetic data, to offset this tendency.
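
What verifying such a sample could look like in practice: the toy audit below compares the AI's flag rates across groups to spot skew; group labels and the tolerance threshold are placeholders, not a deployed standard.

```python
# Hedged sketch of human supervision: sample the AI's outputs and compare
# flag rates across groups to spot skew over time. Placeholder data only.
from collections import defaultdict

# (group, flagged_by_ai) pairs drawn from a sample of reviewed decisions
sampled_decisions = [("group_a", True), ("group_a", False), ("group_b", True),
                     ("group_b", True), ("group_a", False), ("group_b", True)]

totals, flagged = defaultdict(int), defaultdict(int)
for group, was_flagged in sampled_decisions:
    totals[group] += 1
    flagged[group] += was_flagged

rates = {g: flagged[g] / totals[g] for g in totals}
disparity = max(rates.values()) - min(rates.values())
print(rates, disparity)
if disparity > 0.2:   # placeholder tolerance; a breach triggers retraining review
    print("Flag-rate skew detected: review sample data and retrain if biased")
```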

Ethical AI guidelines, transparency, and data integrity

Finally, intelligence agencies must take responsibility for the legality and morality of the missions that AI-powered models may be tasked with. In democratic regimes, control of the agencies’ actions by supervisory bodies, including parliament, is a necessary check against abuse of power. One way to ensure respect for the social and human rights of citizens and minority groups is through national regulations that would impose certain ethical guidelines on any AI models employed by intelligence agencies. If communicated to the public in an open and transparent manner, such constraints embedded in the models – similar to the limits chatbots from OpenAI or Anthropic place on harmful and illicit requests – would help build citizens’ trust in intelligence agencies and in the often-controversial AI technology. But domestic misuse of personal information is not the only risk: data sets are a common target for external actors, private cyber criminals and foreign states alike. Governments must pay great attention to the security of this data, lest it fall into the wrong hands.

Read More

London Politica & Warwick Think Tank: The Social Face of Spyware Report

Following the recent heated battle over a TikTok ban in the U.S. Congress earlier this month, the words ‘TikTok’, ‘China’ and ‘Spyware’ have taken the internet by storm. In this new report, titled The Social Face of Spyware, we examine the ways in which the revolutionary social interface of TikTok may contain deeper interwoven systems of data collection, user tracking and surveillance tools that make it a substantial political threat. Through our series of articles, analysts from both London Politica and Warwick Think Tank delve into the ways in which TikTok has influenced propaganda and misinformation, whether it can be used as a means of surveillance, how it compares to long-standing privacy regulations and what implications this has for the wider international landscape.

Read More
Oliver Tate, London Politica

What the UK’s Online Safety Bill could mean for tech companies

Britain’s Online Safety Bill, currently at the committee stage in the House of Lords, has been the subject of significant controversy. Having undergone numerous revisions and adaptations, the bill finally appears near completion and implementation, and it will likely have a large impact on tech companies operating in the UK. With the potential to alter the relationship between big tech and government, the bill, which aims to make the internet a safer place, will give OFCOM new powers to enforce how tech companies operate online. In particular, the bill aims to ensure user-to-user service providers tackle illegal content and protect children.


A significant element of the bill under consideration is the implementation of criminal liability for senior managers at social media companies who fail to uphold their duties in child safety. Michelle Donelan, Secretary of State for Digital, Culture, Media and Sport, stated that the measures will be largely modelled upon the Irish Online Safety and Media Regulation Act. The focus will be on individual responsibility for managers, meaning some managers could face fines and imprisonment if they fail to comply with warnings from OFCOM.


This amendment has come under fire from free speech campaign groups over concerns that tech companies may inadvertently remove perfectly legal content that OFCOM might deem harmful, simply to avoid potential repercussions. Article 19, a freedom of expression NGO, argues that the child protection responsibilities in the bill rest on vague concepts and definitions, making it challenging for tech companies to fully understand what content must be policed. Social media companies are already at the centre of the debate surrounding freedom of expression versus internet safety, and the Online Safety Bill will likely exacerbate this issue for social media firms.


This is likely to be most noticeable with Twitter, since one of the reasons Elon Musk acquired the company was to ‘preserve free speech’; the Online Safety Bill will therefore present a new set of challenges for how Twitter regulates content on its services, with polling suggesting 24% of people think freedom of speech is more important than freedom from abuse. Writing in a recent article for The Independent, a former Twitter employee suggested that “Twitter’s new management may find itself stuck between an obligation to the responsibilities imposed by the new laws it cannot keep; and a penalty (the fines or sentences imposed for not adhering to tech regulations) it cannot afford.” However, Elon Musk and Twitter have already shown that they are willing to cooperate with individual nations' internet regulations, as seen with Twitter blocking a BBC documentary in India that was critical of Prime Minister Narendra Modi.


Furthermore, the bill includes a category of ‘primary priority content that is harmful to children’. This category of content is to be entirely prevented from reaching children, with the bill stating that tech companies should “use proportionate measures to effectively mitigate and manage the risks of harm”. In this instance, the bill does not direct what specifically should be implemented, which arguably leaves a large amount of flexibility for tech companies to ensure the measures don’t hugely interfere with how their services are already run. Moreover, it is the Culture Secretary, not parliament, who can determine what content falls under this remit. A House of Lords committee has argued that this provision hands the Culture Secretary “needlessly expansive” power to direct what online content should be policed, so it is possible that this could be amended. The bill is currently focussed on protecting children from content that encourages activities such as self-harm and eating disorders.


Another key measure is that services publishing pornographic content will have to actively prevent minors from viewing it. However, the bill does not detail what specific measures should be taken, and the method of doing so appears to be largely up to the companies themselves. It is possible that tech companies may approach the issue by introducing age verification measures similar to those of gambling sites, whereby users must upload their ID to prove their age. 43 MPs have written to the Culture Secretary calling for stricter age verification measures to be written into the bill rather than left up to the tech companies, so it is possible that the age verification measures previously scrapped from the 2017 Digital Economy Act may make a comeback in the near future.

Overall, Britain’s Online Safety Bill will have a significant effect on the future of social media firms and the wider tech space. For the bill to work successfully and not be undermined, it is essential that its measures do not go beyond their original remit, and tech companies will face the challenge of being precise, accurate, and transparent about what content they police. The bill may also undergo further revisions, and it will be interesting to see how it is implemented upon completion. Watch this space!

Read More
Dylan Waste, London Politica

Cracking the Code: How Next-Generation Computing Could Upend Digital Security

Due to the unprecedented amounts of information being exchanged via new technologies, governments and corporations pushed for the development of new methods to secure these communications, particularly in the latter half of the 20th century. During this period, cryptographers established various algorithms to protect sensitive information, such as hash, symmetric-key, and asymmetric-key algorithms. Whilst symmetric encryption is simple and fast, the approach's traditional reliance on the transfer of keys causes scalability issues. Owing to these drawbacks of symmetric cryptography, digital communications mostly rely on public-key cryptography (PKC). Also known as asymmetric cryptosystems, PKC algorithms include RSA (Rivest–Shamir–Adleman), ECC (Elliptic Curve Cryptography), and the Diffie-Hellman approach. Based on hard mathematical problems such as prime factorization, these PKC algorithms have historically been the gold standard of scalable internet security.

In the Information Age, PKC is the bedrock of digital communications. From validating digital identities to protecting the exchange of sensitive information, asymmetric cryptosystems provide a variety of highly scalable and secure cryptographic solutions. Of these widely implemented PKC systems, RSA is a digital security standard. Whilst lower-bit RSA keys can be cracked using brute-force attacks, the National Institute of Standards and Technology (NIST) recommends that RSA keys be 2048 bits long. The length of a 2048-bit RSA key protects the algorithm from brute-force attacks by classical computers: for these digital devices, solving a 2048-bit prime factorization problem is an incredibly onerous process, forecast to take 300 trillion years.
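
To see why factoring is the crux, consider textbook RSA with deliberately tiny primes. This is an illustrative toy only; real keys are 2048 bits and use padding schemes:

```python
# Toy, textbook RSA with tiny primes to show the mathematics the article
# describes. Utterly insecure; for illustration only.
p, q = 61, 53                 # secret primes; a 2048-bit key hides ~617-digit ones
n = p * q                     # public modulus: easy to publish, hard to factor at scale
phi = (p - 1) * (q - 1)
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent: deriving it requires the factors of n

message = 42
ciphertext = pow(message, e, n)       # encrypt with the public key
recovered = pow(ciphertext, d, n)     # decrypt with the private key
assert recovered == message
# Security rests on factoring n: with p and q unknown, d cannot be computed.
```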

Quantum Computing and Classical Cryptography

As early as 1994, Peter Shor proved that quantum computers could decrypt RSA cryptosystems with a large number of bits in a key “with much less computational power” than classical computers. If powerful enough quantum computers were developed, Shor’s algorithm could dramatically reduce the time of factor decomposition.
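
The reduction at the heart of Shor's algorithm can be sketched classically: factoring N collapses to finding the period of modular exponentiation, which is the one step quantum hardware accelerates. In the toy below, the period is brute-forced; a quantum computer finds it efficiently:

```python
# Shor's insight, sketched classically: factoring reduces to finding the
# period r of a^x mod N. Here we brute-force r for a toy N; quantum
# hardware is what makes this step fast for huge moduli.
from math import gcd

def find_period(a: int, N: int) -> int:
    """Smallest r > 0 with a^r = 1 (mod N): the step quantum hardware speeds up."""
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    return r

N, a = 15, 7                 # toy modulus; RSA moduli are thousands of bits
r = find_period(a, N)        # r = 4 here
assert r % 2 == 0
factor1 = gcd(pow(a, r // 2) - 1, N)   # gcd(48, 15) = 3
factor2 = gcd(pow(a, r // 2) + 1, N)   # gcd(50, 15) = 5
print(factor1, factor2)      # 3 x 5 = 15: the modulus's secrets fall out
```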

Whilst Shor’s algorithm only theoretically established that large-bit RSA keys could be decrypted using a quantum computer, the algorithm's validity has been corroborated by further study. Over the last decade, the number of qubits in new quantum computing systems has been scaling exponentially. Unveiled in late 2022, IBM’s Osprey (433 qubits) is the most powerful quantum processor in the world. Previously, IBM’s most powerful quantum processor, Eagle, boasted 127 qubits. With the ever-increasing power of quantum computers, these systems could begin to practically threaten secure systems soon.

During the waning days of 2022, a group of Chinese cryptography researchers published a paper alleging that a 2048-bit RSA key could theoretically be broken with a 377-qubit device. Relying on an ensemble of prime factorization algorithms (e.g., Schnorr’s algorithm and QAOA), the authors rocked the cybersecurity world with their alleged results. Whilst the group was only able to decrypt a 48-bit RSA key with its quantum computer and methods, its projections claim that a quantum system slightly less powerful than IBM’s Osprey machine could crack RSA-2048. Even though the paper suggests that classical PKC methods could be rendered useless within the next year, the mixture of mathematical methods leveraged leaves the algorithm's quantum speedup unclear. Currently, further study on the scalability of these approaches suggests that the algorithm would not be able to crack RSA-2048, as no exponential quantum speedup for factoring integers has been demonstrated for it. Even though the paper claims that decrypting a 2048-bit RSA key could soon be possible, critical engagement with the research reveals that it is implausible that the algorithm scales as claimed.

Q-Day and the Race for Quantum Supremacy

Whilst the media coverage of this research project was overblown, the threat of quantum computing to classical digital security methods continues to become an increasingly pressing concern for cybersecurity experts and policymakers. In a report, NIST researchers claimed that the proliferation of large-scale quantum computers “would seriously compromise the confidentiality and integrity of digital communications on the Internet and elsewhere”. As advances in next-generation computing accelerate, handling the fallout from Q-Day, the day a quantum computer can break widely deployed encryption, becomes an increasingly important consideration. Since Shor’s cryptographic approach has already been established theoretically, once powerful enough quantum computing devices are developed, classical PKC systems will become efficiently solvable. For this reason, there has been a drive to create post-quantum cryptographic systems.

In late 2018, the U.S. government passed the National Quantum Initiative Act to supercharge domestic quantum computing investment and policy advisory. Additionally, a bipartisan consortium of U.S. Senators and House Members introduced a bill to audit and prepare government information systems for quantum cybersecurity risks in late 2022. Concerned about how competing powers could leverage quantum breakthroughs to compromise America’s national security, policymakers in D.C. are boosting the government’s cyber posture and public investment in quantum research. On top of government funding, the U.S.’s strong private quantum research sector places the country at the forefront of the next-generation computing space. Against the backdrop of wider geopolitical strife, the U.S. has found itself in an ever-intensifying race for emerging technologies. Primarily, quantum competition has accelerated as a result of tense Sino-American relations. After centralizing quantum research at the National Laboratory of Quantum Information Science in 2017, Chinese policymakers pledged to invest $14.76 billion from 2017 to 2022 in quantum R&D.

On top of the threats that powerful quantum devices pose to classical cryptography, American and Chinese policymakers acknowledge that these systems could be leveraged to improve warfighting capabilities. Whilst next-generation computing still has a way to go before upending classical digital security, global powers are increasingly aware of the benefits and risks of these innovative systems. Even if scalable quantum-resistant cryptographic approaches are discovered soon, a painstaking process of retrofitting legacy communication systems will have to occur to develop resilient digital security measures in the long term.


Read More
London Politica

AI Regulation: New Threats and Opportunities

Due to the rising ubiquity and possible exploitation of artificial intelligence across numerous sectors and settings, it is crucial to strengthen AI policy in 2023. Concerns regarding the potential for abuse and the ethical ramifications of AI have arisen with the development of technologies such as realistic chatbots and high-quality image generation. In addition, the use of artificial intelligence in combat has emphasised the need for rules and control in this domain. Moreover, the propensity of AI to perpetuate prejudices and discrimination underscores the significance of ethical issues in its development and deployment. As artificial intelligence continues to grow and become more ingrained in society, it is necessary to set guidelines and laws to guarantee its responsible usage and limit possible harmful effects on people and society.

Problems with the development and spread of AI

Research on the danger posed by unrestricted AI exports for surveillance shows the potential harmful effects of the expanding worldwide commerce in artificial intelligence technologies, notably face recognition AI. It investigates how China's leadership in developing and selling this technology may bolster autocracies and surveillance regimes across the globe. The objective of this research is to demonstrate China's comparative advantage in the export of face recognition AI and the possible political bias in imports of this technology, specifically in the context of autocracies and fragile democracies. The results would aid policymakers in comprehending the possible worldwide ramifications of the trade in AI technology and in adopting rules to limit these risks, particularly controls on items with global externalities.

Against the backdrop of China's rapidly expanding position in the global landscape of artificial intelligence research, development, and regulation, it is crucial to evaluate the circumstances and goals under which these technologies are developed. China's extensive engagement in worldwide networks of AI R&D is well documented by several papers: the country hosts corporate AI laboratories and is extending global AI research frontiers. The depth of Chinese engagement positions the country as a leader in AI.

Concerns have been raised in recent years regarding the implications of these ties between China and global networks for R&D, particularly in light of China's growing capabilities and ambitions in AI. The unethical use of AI for mass surveillance, the Chinese state's policies that strengthen these capabilities, and the knowledge transfer from abroad are of particular concern. The nature of these long-standing links has been subject to intensive examination and fresh inquiries as a result of these concerns.

The 2021 Forum for Cooperation on AI (FCAI) progress report identified the implications of China's development and use of AI for international cooperation, and touched on China in connection with several of its recommendations regarding regulatory alignment, standards development, trade agreements, and R&D projects. It also focused on Chinese policies and applications of AI that present a variety of challenges in China's broader geopolitical, economic, and international context.

It is essential to evaluate the ramifications of international cooperation on artificial intelligence, particularly the possibility of immoral AI usage and the strengthening of autocracies. It is also vital to address the economic, ethical, and strategic considerations that call into question the sustainability of such levels of cooperation on AI, as well as the difficulties and downsides of disconnecting the channels of collaboration.

AI in war

The use of AI in war may have a substantial influence on the regulation of its development and application. The employment of artificial intelligence in the Ukraine conflict, notably in military operations and psychological warfare techniques, has raised awareness of the possible advantages and drawbacks of using AI in warfare.

On the one hand, the employment of AI in warfare may be advantageous, since it enables troops to make judgements in real time and improves medical decisions during battle. In addition, the military's investigation of AI for the development of next-generation tanks that employ AI to locate targets has the potential to enhance combat operations, equipment maintenance, and supply chain management.

On the other hand, there are also worries over the possible abuse of AI in conflict, specifically the contentious use of face recognition technology to enable psychological warfare methods. In addition, there is a concern that AI models trained on combat data, such as that from the Ukraine conflict, may perpetuate prejudices and discrimination, which could have harmful effects on people in the war zone.

As a result, the use of artificial intelligence in warfare underscores the necessity for legislation and monitoring of its development and implementation. It is essential to guarantee that the use of artificial intelligence in war is ethical and responsible, and that it is not used in a manner that might harm civilians.


Read More

Emerging Disruptive Technologies

Emerging disruptive technologies have the potential to significantly impact and transform various industries, shaping the way we live, work, and interact with the world around us. In this report, we explore a range of such technologies and their potential applications, as well as the ethical and moral considerations that they raise.

These technologies include cultured meats, which are an alternative to traditional animal agriculture; artificial intelligence, which has the potential to revolutionize various industries; quantum computing and quantum cryptography, which leverage the principles of quantum mechanics to perform calculations and transmit information; nuclear fusion reactors, which could provide a virtually limitless and clean source of energy; solar geoengineering, which involves manipulating the Earth's climate to mitigate the effects of global warming; asteroid mining, which could provide access to rare and valuable materials; space-based solar power, which could provide a stable and reliable source of renewable energy; green hydrogen, which is produced through the electrolysis of water using renewable energy sources; lethal autonomous weapon systems, which are a type of military technology that can select and engage targets without human intervention; directed energy weapons, which use focused energy to disable or destroy targets; anti-satellite weapons, which are designed to disrupt, disable, or destroy satellites in orbit; regenerative medicine, which involves the repair and replacement of damaged or diseased tissues and organs; and finally genetic engineering, which involves the manipulation of an organism's genome using biotechnology.

Throughout this report, we delve into the characteristics, current development status, key actors, the technology’s supply chains, as well as the technologies’ potential applications. We also examine the potential impacts of these technologies on their respective industries and on society as a whole.

Read More
Kevin Fulgham, London Politica

Maryland Tax on Digital Advertisements Struck Down

What is important?

In October 2022, the first U.S. state law taxing digital advertisements was declared unconstitutional by a Maryland Circuit Court, as it violated the prohibition on state interference with interstate commerce. While the decision can be appealed up to the Supreme Court, it represents a major legal inflection point in the U.S. political conflict over taxing digital advertisements. Firms should be aware of U.S. state legislatures' growing interest and activity in taxing digital advertisements. It could be advisable to plan a financial contingency for a 2%–12% increase in the digital advertising budget in case of new successful legislation.

What happened?

On October 17, 2022, the Anne Arundel County Circuit Court of Maryland ruled against the State of Maryland and the Maryland Comptroller, declaring that America's first tax targeting digital advertisements was unconstitutional. Judge Alison Asti noted in her decision that the tax violated the US Constitution's prohibition on state interference with interstate commerce as well as the First Amendment, as certain sites were taxed but others were not.

The Digital Advertising Gross Revenues Tax would impose a 2.5%–10% tax on a firm depending on the firm's digital advertising annual gross revenue, with 10% applied to companies making at least $15 billion in annual revenue. The law covers “advertisement services on a digital interface, including advertisements in the form of banner advertising, search engine advertising, interstitial advertising, and other comparable advertising services.” It would also require any entity “that reasonably expects estimated digital advertising gross revenues tax for a calendar year to exceed $1,000,000 [within the State of Maryland] shall file a declaration of estimated digital advertising gross revenues.” The Act was passed in 2021, overriding the Governor’s veto.
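
Applied to a hypothetical firm, the tiered schedule works roughly as follows. The 2.5% floor, 10% ceiling and $15 billion top bracket come from the text above; the intermediate brackets reflect the enacted statute as we read it and should be checked against the law itself:

```python
# Sketch of the Act's tiered rate schedule. The rate is set by global annual
# gross revenue; the tax applies to Maryland-derived digital ad revenue.
# Intermediate brackets are our reading of the statute; verify independently.
def digital_ad_tax_rate(global_annual_revenue: float) -> float:
    if global_annual_revenue >= 15_000_000_000:
        return 0.10
    if global_annual_revenue >= 5_000_000_000:
        return 0.075
    if global_annual_revenue >= 1_000_000_000:
        return 0.05
    if global_annual_revenue >= 100_000_000:
        return 0.025   # statute's entry threshold: smaller firms owe nothing
    return 0.0

md_ad_revenue = 50_000_000                       # hypothetical Maryland ad revenue
rate = digital_ad_tax_rate(20_000_000_000)       # hypothetical $20bn global revenue
print(f"Tax owed: ${md_ad_revenue * rate:,.0f}") # 10% tier -> $5,000,000
```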

Why does it matter? 

The Maryland decision is a major inflection point in the conflict over taxing digital advertisements in the United States. Washington, Oregon, Arkansas, Indiana, New York and Connecticut have various bills taxing social media advertisements and personal data sales. Many of these bills are modeled on each other, so a legal decision on one could have significant impacts on the others. Currently, both the Democratic and Republican parties are growing increasingly concerned with large technology companies' media power and content standards. Additionally, the average American's view of large technology corporations has been declining over the last few years, especially since 2016, inspiring the name “techlash.” These shifting cultural attitudes could provide a fertile environment for successful legislation. If legislation taxing digital advertisements were to succeed, it could have significant impacts on the digital media ecosystem. There is a possibility that major technology companies would divide their assets into subsidiary companies in order to reduce their tax burden. Further, current research indicates that taxes on digital advertisements create not only the opportunity for double taxation but also consumer price increases.

How likely is it to affect you?

If the case reaches the U.S. Supreme Court, a decision could have significant impacts on other laws centered on taxing digital or social media advertisements. A ruling in favor of the State of Maryland could lead major internet firms to raise future digital advertising prices in response. In other countries, Google, Amazon and Facebook have increased prices, as a pass-through, in response to new taxes on digital advertisements: Google raised prices 2% in the United Kingdom and 5% in Turkey; Amazon raised prices 2% in the United Kingdom; Facebook raised prices 6% in Malaysia.

Given that the current Supreme Court is composed of a majority of justices appointed by Republican Presidents, it is more likely that it would uphold the pro-business decision. Businesses should begin establishing a financial contingency plan for a 2%–12% increase in case of new laws.

What’s next?

Maryland Attorney General Brian Frosh will appeal the court's decision striking down the Digital Advertising Gross Revenues Tax. However, State Comptroller Peter Franchot has expressed doubt that the appeal will succeed.

Read More
Ella Startt, London Politica

Emerging political risks of botnets, bot logs, and bot markets

On December 8, the Lithuanian company NordVPN, one of the world's largest VPN providers, disclosed that approximately 5 million people globally had been hacked and had their data stolen and sold on bot markets. According to NordVPN's report, India was the most affected country, with 600,000 citizens falling victim to such attacks. Around 26.6 million credentials have been leaked onto these markets, including 720,000 Google accounts, 654,000 Microsoft accounts and 647,000 Facebook accounts.

Bot markets are a new phenomenon that emerged around 2017, and they have been increasingly used by hackers because they allow larger amounts of data to be stored in one place. Once a hacker gains access to a victim's device, infostealer malware is installed on it, downloading personal data such as the user's registered cookies, digital fingerprints, logins, screenshots and autofill forms. Each victim's data is compiled into a separate folder called a bot log, which the hacker then prices and sells on a bot market. More expensive bot logs will usually include stolen credit card credentials that buyers can use to extract funds from bank accounts.

Understanding how infostealers work

While infostealers are composed of code that differs in execution, all of them are designed to steal victim information and save it into bot logs.

RedLine, the most used infostealer malware, is deployed to access a user's saved credentials, autofill data, and credit card information. It can also access session tokens, the credentials a browser or app stores after login, which can let an attacker bypass two- or multi-factor authentication. RedLine is particularly desirable for bot logs since it can continuously extract data from an infected device.

Vidar, the second most used infostealer, can access accounts by stealing passwords, cookies, search history and autofill data, along with cryptocurrency wallet and credit card details stored in a targeted user's web browser. Vidar is harder to detect: after the theft, the malware wipes all of its fingerprints off the victim's device.

The list of infostealers continues to develop with the creation of more sophisticated software. In March 2022, Meta Stealer—a new infostealer very similar to RedLine, but harder to detect by anti-virus software—made its appearance on the dark web. 

Another example is the Rhadamanthys infostealer, launched in August 2022, which poses a particular threat to businesses because of its amplified ability to access corporate networks. Like other infostealers, it can access information and logins from a variety of different platforms. It targets banking information as well as communication applications such as Outlook and Slack, amongst others. But the biggest concern is its very low anti-virus detection rate. Rhadamanthys has the capacity to hack into multi-factor-authentication (MFA) apps, such as Authenticator, EOS Authenticator and others, and gain access to the two-factor authentication (2FA) codes generated in these applications to log in to secure accounts. In addition, the Rhadamanthys software can avoid the need to acquire 2FA codes altogether by changing a hacked computer's settings in its control panel, allowing the transfer of cookies generated by these 2FA applications to a bespoke browser.

Botnets and bot markets

After a hacker has successfully installed infostealer malware onto a computer, the malware will start to extract account login details. All of this is then saved into a “bot”, which is a program that can autonomously gather data from an infected computer. 

Instead of having control over a single device, hackers will generally control a network of devices, called a “botnet”, which allows the hacker to infiltrate thousands of accounts at a time without detection. 

Data extracted from bots is then categorised into separate files called bot logs, which hackers price and sell on bot markets.

The three best-known bot markets are 2easy, Genesis and the Russian Market. Of these, the Russian Market is the biggest, selling more than 3,870,000 logs from 225 countries. The Russian Market is particularly dangerous, since its dark web version is widely used, making hacking activities harder to track. Bot markets operate on blockchain platforms and allow transactions exclusively in cryptocurrencies, which decentralises transactions and makes them harder to trace.

Risks for governments, businesses and NGOs 

Fundamentally, infostealers, botnets, bot logs and bot markets all provide hackers with means to bypass existing cybersecurity measures, such as MFA, Anti-money laundering (AML) activities, and more. 

Politically, the developments in infostealer software and botnets are a significant addition to existing spyware and cyberwarfare technology. According to the UK National Cyber Security Centre (NCSC), the Sandworm group (a group of hackers from Russia's military intelligence agency, the GRU, known alternatively as Unit 74455) has waged several high-profile cyber attacks in Eastern Europe. Notable attacks include the Ukraine power grid hack in 2015, in which more than 230,000 residents experienced a blackout, and the Georgia cyber attacks of 2019, involving large-scale hacks of websites across the country, including government, NGO, and media websites. The Sandworm group has also begun using infostealers and botnets to conduct cyber espionage and cyber attacks. In May 2018, the US brought down VPNFilter, a botnet attributed to the Sandworm group that mainly targeted Ukrainian hosts and was capable of intelligence collection and destructive cyber attack operations. In 2019, a successor to VPNFilter called Cyclops Blink emerged, which security researchers believe was capable of collecting intelligence, conducting espionage, and launching denial-of-service (DoS) attacks to make devices inoperable.

The theft of login details sold on bot markets also poses a huge national security and economic risk. The Colonial Pipeline hack in May 2021, which severely disrupted energy supply chains in Eastern US states, provides an example of the severe risk associated with the sale of logins on bot markets. DarkSide, the group behind the attack, used a login to the pipeline's VPN network, sold on a bot market, to steal 100 GB of data, compromising the company's billing and accounting system. To recover its data, Colonial Pipeline had to pay 75 bitcoin (approximately USD 5 million at the time of payment). Hackers tend to prefer Bitcoin payments, as it is the most stable cryptocurrency, and its decentralised nature makes it harder for financial authorities to trace.

While DarkSide declared it was solely after ransom and did not intend to cause social chaos, the incident illustrates existing capabilities that could be utilised in politically motivated cyberattacks. It is also worth noting that after the Colonial Pipeline attack, the FBI recovered the amount paid in ransom by tracking the company's payment to a cryptocurrency wallet used by DarkSide. The FBI was able to access the public ledger of the bitcoin traded, which stores all transaction history, to track down a wallet that had been used by the hacker group. Such ledger recoveries, which can be traced back to the Silk Road crackdown, have incited cybercriminals to use hyper-private coins such as Zcash and Monero, which hide all previous transaction details, making it more difficult for authorities to recover ransomware payments.
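
The mechanics of such ledger tracing are simple to sketch: because every transaction is public, investigators can walk the transaction graph outward from a known address. The toy below does exactly that; real chain analysis contends with mixers, exchanges and millions of transactions:

```python
# Hedged sketch of public-ledger tracing: follow outputs from a known
# ransom address across a toy transaction graph. Addresses are invented.
from collections import deque

# Toy ledger: sender address -> list of receiving addresses
ledger = {
    "ransom_addr": ["hop1", "hop2"],
    "hop1": ["hop3"],
    "hop2": ["cashout_wallet"],
    "hop3": ["cashout_wallet"],
}

def trace(start: str) -> set:
    """Breadth-first walk of the transaction graph from a known address."""
    seen, queue = {start}, deque([start])
    while queue:
        addr = queue.popleft()
        for nxt in ledger.get(addr, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(trace("ransom_addr"))  # every address the ransom funds touched
```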

Politically motivated cyber attacks are an increasing trend targeting governments, NGOs, universities and media outlets. RedAlpha, a group of hackers likely linked to the Chinese Communist Party (CCP), has conducted multi-year cyber-espionage operations against organisations deemed to clash with the CCP's political interests, such as Taiwan's ruling Democratic Progressive Party, Amnesty International, the International Federation for Human Rights, and Radio Free Asia. So far, RedAlpha has infiltrated its targets through software that mimics organisations' login pages, tricking users into providing their login details. The group has also benefited from organisations still not adopting MFA. But as governments, NGOs, and others move to more robust security measures, it is likely that the technologies described in this article (infostealers, botnets, bot logs and bot markets) will continue to facilitate politically motivated cyberattacks.

A final point of concern is specific to the propagation of bot markets. With increasing public legislation and self-regulation of social media giants following the Cambridge Analytica scandal, the acquisition of personal data for political means has become harder. With social media data becoming trickier to harvest for political usage, there is a risk that political campaign officers could move to bot markets to illegally acquire information on citizens' political preferences. Politicians wishing to sway public votes in their favour, or oppressive political regimes seeking greater surveillance over their population, may seek out such data.

At the time of writing, no known instances of such usage of bot markets have been reported. However, the world of political campaigning is no stranger to the use of bot technology for political manipulation. Political bots, automated social media accounts programmed to act like real people and post comments or share posts to influence public political opinion, have been used in the run-up to presidential elections in the UK, the US, Argentina, Iran, Bahrain, China and more. With the increasing usage of bots in political manipulation activities, one cannot completely rule out the future use of other bot technology, such as bot markets, for political ends.

Read More
Fernando Prats, London Politica

US export controls on semiconductors: implications for the global economy 

On October 7th, the United States Department of Commerce's Bureau of Industry and Security (BIS) announced a measure that is highly likely to reshape global value chains. The BIS has tightened its export controls to unprecedented levels with the goal of diminishing China's capabilities to produce and purchase advanced semiconductors.

The BIS document justifies this decision on national security grounds, stating that the People's Republic of China (PRC) is using American advanced semiconductor technologies to “produce advanced military systems including weapons of mass destruction”. Due to the strong connection between China's military and commercial industries, a complete restriction on advanced semiconductor exports is seen as the most effective means to contain China's military capabilities. China is the largest single-country semiconductor market, so this measure will have costly implications for US companies, hindering their access to a crucial market.


Why has the BIS taken such measures?

The reasoning behind the measures may be found in certain US official documents released in recent months. In September, National Security Advisor Jake Sullivan said that, regarding certain key technologies – including advanced logic and memory chips – the US “must maintain as large of a lead as possible”. In this regard, the National Security Advisor maintained that the US needed to “revisit the relative advantage premise”, an approach that “said we need to stay only a couple generations ahead”. Given the shift in the “strategic environment”, Sullivan claimed that the US must change its strategy by actively deploying resources such as export controls in order to keep its competitive advantage over China as large as possible.

Additionally, the Biden Administration's National Security Strategy (NSS), released just a week after the export controls, also explains the BIS measures. This document clearly states that the US perceives the PRC as the ‘most consequential geopolitical challenge’. As such, the NSS underpins the American strategy to outcompete China. Technology is one of the key areas considered in the document, and export controls are prescribed as a strategic policy to preserve the US's advantage in advanced technologies.

A shift in the global economy

“National security” is increasingly prevalent in government policies compared with the past, and this trend will become more important in the next decade. In this respect, Evan Feigenbaum talks about “a collision between economics and security”. His analysis underlines the move towards a world in which security concerns and geopolitical affairs will have a key role in economic decisions. The export controls on advanced semiconductors and the equipment to design them reflect this trend. In fact, this trend largely explains the tech competition between the US and China that we have seen throughout the last few years.

Since the late 1970s, and especially after the Cold War, global value chains have worked efficiently and helped increase innovation and mass production. This phenomenon is largely explained by the offshoring of US manufacturing to East Asia, and largely to China. Nevertheless, the situation in 2022 is much more complex, as the geopolitical rivalry between the US and China has spilled over into value chains and effectively disrupted the world economy.

The case of semiconductors is particularly relevant, since they are vital to electronic devices, ranging from those used in everyday life, such as mobile phones or laptops, to advanced military devices like drones. 

The semiconductor supply chain has gained considerable attention amid growing tensions in the Taiwan Strait. In the first quarter of 2022, Taiwan accounted for 53% of global chip fabrication. Given the growing tensions between the US and China over Taiwan, a considerable number of semiconductor companies are rethinking their strategy and revising business continuity plans on the island.

Overall, US-China competition and broader geopolitical tensions have reached a tipping point and have started to severely affect a number of business sectors, including the technology industry. This global economic trend underscores how vital it is for companies to analyse and evaluate geopolitical affairs in order to reduce risks and costs as well as identify new opportunities.


Wider implications of the export controls

First, it will be important to track the reactions and strategic shifts of American companies in response to the export controls. Second, it remains to be seen whether Washington will condition its support for traditional partners in Western Europe and East Asia on their trade ties to China's technology sector. Here, it is vital to consider the weight that companies from the Netherlands, South Korea and Taiwan carry in the chip supply chain, since their support for and participation in the measure is essential for the controls to succeed in the long run.

Two scenarios may result from the export controls with regard to China's technology capabilities. The Chinese Communist Party's ‘Made in China 2025’ plan may be set back by the BIS measure, which cuts China's capability to build advanced technologies that require advanced semiconductors. This is the scenario Washington seeks. Nonetheless, the export controls may have exactly the opposite effect: some specialists argue that, in the long run, the restrictions may help boost China’s domestic advanced chip design ecosystem.

These implications point to numerous developments that will flow from the export controls and that will be central to understanding the global economy in the coming years. The LP Tech and Cyber Watch will be following US-China competition in the technology sector and its impact on global supply chains, providing timely analysis on developments.

Read More
Joris Benjaminas Žilinskis London Politica

Big Tech Under Fire: Antitrust Crackdown Targets Industry Titans


In recent years, there has been a resurgence of antitrust actions against major technology companies in both Europe and the United States. In Europe, the European Commission, led by Commissioner for Competition Margrethe Vestager, has brought multiple high-profile cases against companies such as Alphabet, Meta, and Apple over their alleged anti-competitive behaviour. On the American side, the Biden administration has appointed several officials with the explicit goal of dismantling the monopolistic barriers erected by these tech conglomerates. The ramifications of these actions could span decades.

In Europe, the European Commission has in recent years launched numerous legal proceedings against major technology companies and introduced major legislation aimed at levelling the playing field. For its unlawful restrictions on Android phone manufacturers, Alphabet received the largest fine to date, more than 4 billion euros, which was upheld by the European General Court in 2022. The company had previously been fined nearly 2.5 billion euros for favouring its own services on the Google search engine. In 2023, Meta was fined almost 400 million euros over its data collection practices. Amazon, for its part, avoided large fines by reaching a settlement agreement with the European Commission at the end of 2022 to change its business practices in the 27-nation bloc; the changes will affect how Amazon displays rival products in customer searches and prohibit the company from using internal data on independent merchants to shape its own-brand product offerings. Several cases are currently pending against Apple, both for limiting access to mobile payments, with Apple Pay the only application allowed on Apple devices, and over App Store rules for music providers. Simultaneously, political agreements on the Digital Markets Act and Digital Services Act were reached in the first half of 2022; these laws, taking effect between 2023 and 2024, will shape how technology companies interact with consumers and act as gatekeepers of online platforms, with a particular emphasis on large technology conglomerates.

On the other side of the pond, American regulators have also become more active in limiting the powers of major technology companies. One example is the Department of Justice's antitrust lawsuit against Google, which accuses the company of stifling competition by abusing its dominance in online search and advertising. To maintain its dominance, Google allegedly entered into exclusivity agreements and linked its search engine to its other products. Another example is the FTC's antitrust lawsuit against Meta, which accuses the company of engaging in a "systematic strategy" to acquire potential competitors in order to maintain its dominance in the social media market. Regulators have also taken more aggressive preemptive action, such as the FTC suing to prevent Microsoft from acquiring Activision Blizzard, alleging that the company intended to use the game developer to further corner the gaming console market by lowering the quality of releases for other gaming platforms.

As antitrust actions against major technology companies gain traction, one possible outcome is increased regulation of these firms. This could take the form of even stricter antitrust enforcement, including the imposition of ever-increasing fines and penalties on companies that engage in anti-competitive behaviour. For example, the framework introduced in the Digital Markets Act would result in fines of up to 10% of the offender's global annual turnover. This increased regulatory scrutiny may change how these companies operate and conduct business, potentially creating a more level playing field for smaller businesses and startups.
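As a rough illustration of how that ceiling scales, consider the minimal Python sketch below; the function and the turnover figure are hypothetical, included for the arithmetic only, and are not drawn from the legislation itself.

# Illustrative sketch only: the Digital Markets Act caps fines at 10%
# of global annual turnover. The gatekeeper and figures are hypothetical.
def dma_fine_cap(global_annual_turnover_eur, rate=0.10):
    """Maximum fine under a 10%-of-turnover ceiling."""
    return global_annual_turnover_eur * rate

# A hypothetical gatekeeper with EUR 250 bn in annual turnover would
# face a ceiling of EUR 25 bn.
print(f"Fine cap: EUR {dma_fine_cap(250e9) / 1e9:.0f} bn")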

The dissolution of large technology companies, as seen in historical antitrust cases such as the breakups of Standard Oil and AT&T, is a less likely but still possible outcome of these actions. This could involve the forced sale of specific business units or the division of different parts of a company's operations, producing new competitors and increased innovation in the technology industry. A widely floated example is the proposed break-up of Facebook by spinning off Instagram or WhatsApp, as regulators argue that those acquisitions were approved in error. Many observers, however, consider such action too drastic and unnecessary.

It is also important to note that these actions may have unintended consequences. According to a National Bureau of Economic Research study, antitrust actions have historically slowed innovation and reduced the efficiency of the technology industry. Furthermore, the costs of complying with new regulations may be passed on to consumers, leading to higher prices for technology products and services. Additionally, some academics argue that the antitrust tools and assessments used may not apply to the technology industry and might actually stifle the emergence of new companies.

The growing trend of antitrust measures on both sides of the Atlantic is highly beneficial, as it mitigates the expansive power of large technology corporations and promotes a more equitable technological environment. By dismantling the barriers these companies have erected as gatekeepers of their platforms, a more level playing field emerges. However, it is essential to exercise caution, avoid the more radical calls for dissolution, and instead develop antitrust strategies better suited to the unique characteristics of technology companies in order to preserve value.


Read More
Grace Watson London Politica

Internet Shutdowns: Increasingly common and increasingly hard to detect

In 2021, at least 182 internet shutdowns took place around the world, an increase of just over seventeen percent from the 155 recorded in 2020. After the recent US-EU Trade and Technology Council meeting, a joint statement declared that “the European Union and the United States reiterate our alarm at the increasingly entrenched practice of government-imposed Internet shutdowns”.


India had the highest number of targeted shutdowns in 2021, with more than 100 across the country, followed by Myanmar, Sudan, and Iran. The Carnegie Endowment identifies five main triggers for shutdowns: “mass demonstrations, military operations and coups, elections, communal violence and religious holidays, and school exams”.


Internet shutdowns, besides cutting off avenues of communication, cause widespread disruption to essential services. Shutdowns in India have disrupted banking and education, among other services.


There are two key methods for implementing internet shutdowns:

The first, and currently the most common, is a complete shutdown of internet connectivity in a region or country.


This method has been seen most recently in Iran, following widespread protests after the death of Jhina (Mahsa) Amini. Several regions of the country lost all internet access for periods of time during heavy protests — in one case, for an entire day. In Myanmar, complete internet shutdowns throughout the country continued for days during the military coup d’état in 2021.


The second, and increasingly used, method is to block specific sections of internet services in a region or country.


In recent months, Russia has avoided complete internet shutdowns in favour of new tools that block dissident media networks and websites. Because these tools can block individual sections of the internet without being detected instantly, they give authorities a subtler way to restrict access. A similar incident was reported in Egypt in 2016, when authorities shut down Facebook’s Free Basics service in a highly targeted shutdown.
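To make the difference between the two methods concrete, the short Python sketch below is a minimal reachability probe of the kind censorship-measurement work builds on: it distinguishes targets that fail at the DNS layer from those whose connections never complete. The target list is a placeholder, and real platforms such as Censored Planet or OONI rely on far more sophisticated, multi-vantage-point techniques.

# A minimal sketch of a reachability probe. The target URLs are
# placeholders; real censorship measurement compares results from many
# vantage points inside and outside a country.
import socket
import urllib.error
import urllib.request

TARGETS = ["https://example.com", "https://example.org"]

def probe(url, timeout=5.0):
    """Roughly classify a URL as reachable, DNS-blocked, or unreachable."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        urllib.request.urlopen(req, timeout=timeout)
        return "reachable"
    except urllib.error.HTTPError:
        return "reachable"        # the server answered, so the path is open
    except urllib.error.URLError as exc:
        if isinstance(exc.reason, socket.gaierror):
            return "dns-failure"  # consistent with DNS-level blocking
        return "unreachable"      # consistent with IP/TCP-level blocking
    except socket.timeout:
        return "unreachable"      # consistent with traffic being dropped

for url in TARGETS:
    print(url, probe(url))

A blanket shutdown would leave every target unreachable at once, whereas the targeted blocking described above shows up as failures confined to specific services.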

Overarching concerns

As shutdowns are expected to increase and become more targeted, according to an OHCHR report, authorities and internet rights groups are working to entrench protective measures against them in law. A United Nations General Assembly report on internet shutdowns recommended that states and companies commit to ensuring internet access. However, commitments to internet freedom by the UN and other organisations will be of little use if governments worldwide continue to restrict access. Indeed, analysts at Censored Planet at the University of Michigan warn that the Russian government may export its internet shutdown methods to other governments.


Recent instances of internet shutdowns show these tactics becoming increasingly common in conflict-affected regions, and methods for implementing shutdowns are increasingly tailored to specific sections of the internet. These trends are likely to continue unless international partners strengthen public responses to shutdowns.

Read More
Lewis Chapman London Politica

China’s Race for AI Supremacy

In 2017, the Chinese Communist Party (CCP) set out its “New Generation Artificial Intelligence Development Plan”, with a deadline to become the world leader in AI by 2030, unseating the US from its long-held top position. This is more than a symbolic race with the US, though: AI is a key industry of the future that will redefine many aspects of the economic, social, and military spheres, and China is determined to lead. Unsurprisingly, the US is keen to defend its top spot, but China’s long-term trajectory, ambitions, and unique advantages make for an interesting battle in the years to come.


Currently, the US is the dominant player in AI investment, largely thanks to the thriving tech hub of Silicon Valley combined with the nation's strong and stable capital markets. US total private investment in AI is three times higher than China's, and the US has twice as many AI start-ups. A key difference in the two countries' approaches is the role of the state: much of Chinese AI investment is controlled by the central government, which has launched several tech-investment vehicles. Figures here are opaque, though the 2017 plan suggested an investment of 1 trillion RMB ($138 Bn USD) over the following few years. Although a significant sum, it would not draw level with US numbers. China is, however, rapidly emerging as a leader in AI research: measured by AI research publications, China produced 63.2% more than the US in 2021, though analysts have noted that these publications were, on average, of lower quality than US ones.


Many consider the race for AI supremacy almost synonymous with the race for semiconductor supremacy and, therefore, tied to the national capacity to build high-end computer chips. Indeed, advanced chips are crucial for cutting-edge AI research and development. China has lagged behind the US in this regard, producing only 6% of the chips it used in 2020, with an even smaller share of the most advanced chip technology.


The supply chains behind these advanced chips are complex and rely on numerous highly specialised suppliers. Currently, the United States and its allies hold a significant competitive advantage in most parts of the supply chain. For example, the US firm Nvidia dominates AI chip design, and three other US firms dominate the electronic design automation (EDA) software used to design chips.


Chinese firms are also far behind on AI chip design and rely heavily on US EDA software. ASML, a pioneering Dutch firm, claims 80% of the total market for the lithography machines used to make semiconductors, and it is the only company capable of building the cutting-edge extreme ultraviolet (EUV) machines required for the most advanced chips needed in AI research and development.


In its efforts to restrict China’s AI progress, the US has pushed ASML to stop selling its EUV machines to China. ASML can still sell less advanced deep ultraviolet (DUV) machines, but the US is now pushing the Dutch government to ban exports of these machines to China as well. The US has also moved to restrict exports of EDA software. China, for its part, is placing a huge focus on its semiconductor industry and has invested heavily, setting a target in 2015 for 70% of its chip supply to be met domestically by 2025. However, that figure is now expected not to surpass 20%.


The competition over AI is not confined to funding and hardware. High-performance AI also requires immense quantities of data: the race for AI supremacy cannot be won without procuring and compiling the large-scale datasets needed to train AI models. This is where the surveillance state plays into China’s hands, in contrast to the West’s focus on privacy. Combine this with the scale of China’s population (more than four times that of the US) and a booming digital economy, in which vibrant social networks and online commerce have almost entirely replaced cash, and China finds itself with a significant data advantage.


Further, China is investing heavily in the Belt and Road Initiative, which may extend its data advantage beyond its borders. Indeed, MI6 chief Richard Moore has warned of China’s “data-traps” abroad, whereby it uses its economic heft to “harvest data from around the world” and “get people on the hook”. Data is the fuel of the AI economy, and China is keen to extend its advantage here.


Abroad, there is much concern over China’s approach to ethics in AI, as well as over how China may use AI to boost its military capabilities. The lack of focus on ethics may partially explain China’s AI progress: whilst other countries are held up by regulation and lengthy deliberation on ethics, China has powered ahead. Nicolas Chaillan, the Pentagon’s first chief software officer, has criticised debates on AI ethics for slowing down development within the military, and has also directed blame at Google’s refusal to work with the military on AI. This is a stark contrast to China where, he said, companies are obliged to work with Beijing and are making “massive investment” in AI without regard for ethics.


Technological prowess and semiconductor advantage are keeping the West ahead of China for now. But China’s vision for a world order that places surveillance above privacy, and forgoes ethics in favour of state ambitions, may well give it the upper hand in the race for AI supremacy.

Read More