AI Regulation: New Threats and Opportunities

As artificial intelligence becomes ubiquitous across sectors and settings, and as its potential for exploitation grows, strengthening AI policy in 2023 is crucial. The development of AI technologies such as realistic chatbots and high-quality image generation has raised concerns about their potential for abuse and their ethical ramifications. The use of artificial intelligence in combat has further emphasised the need for rules and oversight in this domain, and AI's propensity to perpetuate prejudice and discrimination underscores the importance of ethical considerations in its development and deployment. As AI becomes more deeply ingrained in society, guidelines and laws are needed to guarantee its responsible use and limit potential harm to individuals and society.

Problems with the development and spread of AI

Research on the dangers posed by unrestricted AI exports for surveillance shows the potential harms of the expanding global trade in artificial intelligence technologies, notably facial recognition AI. The report investigates how China's leadership in developing and selling this technology may bolster autocracies and surveillance regimes across the globe. The objective of this research is to demonstrate China's comparative advantage in exporting facial recognition AI and the possible political bias in imports of this technology, specifically among autocracies and fragile democracies. Its findings would help policymakers understand the potential worldwide ramifications of trade in AI technology and adopt rules to limit these risks, particularly controls on goods with global externalities.

Against the backdrop of China's rapidly expanding role in the global landscape of artificial intelligence research, development, and regulation, it is crucial to evaluate the circumstances under which, and the goals for which, these technologies are developed. China's extensive engagement in worldwide AI R&D networks is well documented: it publishes numerous papers, hosts corporate AI laboratories, and extends the frontiers of global AI research. The depth of this engagement positions China as a leader in AI.

Concerns have been raised in recent years about the implications of these ties between China and global R&D networks, particularly in light of China's growing capabilities and ambitions in AI. Of particular concern are the unethical use of AI for mass surveillance, the Chinese state's policies that strengthen these capabilities, and knowledge transfer from abroad. These concerns have subjected long-standing links to intensive examination and fresh inquiry.

The 2021 Forum for Cooperation on AI (FCAI) progress report identified the implications of China's development and use of AI for international cooperation and touched on China in several of its recommendations on regulatory alignment, standards development, trade agreements, and R&D projects. It also highlighted Chinese policies and applications of AI that present a variety of challenges in China's broader geopolitical, economic, and international context.

It is essential to evaluate the ramifications of international cooperation on artificial intelligence, particularly the possibility of unethical AI use and the strengthening of autocracies. It is equally important to address the economic, ethical, and strategic considerations that call into question the sustainability of current levels of cooperation on AI, as well as the difficulties and downsides of severing these channels of collaboration.

AI in War

The use of AI in war may substantially influence how its development and application are regulated. The employment of artificial intelligence in the Ukraine conflict, notably in military operations and psychological warfare, has raised awareness of the potential advantages and drawbacks of using AI in warfare.

On the one hand, the employment of AI in warfare may be advantageous, enabling troops to make real-time judgements and improving medical decisions during battle. In addition, the military's investigation of AI for next-generation tanks that use AI to locate targets has the potential to enhance combat operations, equipment maintenance, and supply chain management.

On the other hand, there are worries about the potential abuse of AI in conflict, specifically the contentious use of facial recognition technology to enable psychological warfare. There is also a concern that AI models trained on combat data, such as data from the Ukraine conflict, may perpetuate prejudice and discrimination, with harmful consequences for people in the warzone.

The use of artificial intelligence in warfare therefore underscores the need for legislation and oversight of its development and deployment. It is essential to ensure that AI in war is used ethically and responsibly, and not in a manner that might harm civilians.

