G7 to Set Voluntary Code of Conduct for Advanced AI Development

The Group of Seven (G7) industrial countries is set to establish a voluntary code of conduct for organizations developing advanced artificial intelligence systems. This landmark code aims to promote the safe and secure use of AI by offering guidance to developers of advanced AI systems.

The move marks a significant step in AI governance and responds to growing concerns about the potential misuse and security risks associated with the technology.

The G7, which brings together seven major economies along with the European Union, launched this initiative through a ministerial forum known as the Hiroshima AI Process.

This voluntary code of conduct represents a pivotal moment in the governance of artificial intelligence, signifying a collective effort to address privacy and security issues in the AI landscape.

The 11-point code aims to promote the global adoption of safe, secure, and trustworthy AI. It offers voluntary guidance for organizations developing advanced AI systems, including foundation models and generative AI, with the goal of harnessing the technology's benefits while addressing its risks and challenges.

Key Points of the AI Code:

1. Risk Mitigation Across the AI Lifecycle:
The code calls on organizations to take appropriate measures to identify, evaluate, and mitigate risks throughout the AI development process, and to address incidents and patterns of misuse after AI products are deployed.

2. Transparency and Accountability:
Companies are encouraged to publish public reports detailing the capabilities, limitations, and responsible use of their AI systems. Transparency and accountability are vital to ensuring the ethical use of AI.

3. Robust Security Controls:
The code stresses the need for organizations to invest in robust security controls to protect AI systems from potential threats and breaches.

The European Union has taken a proactive role in AI regulation with its comprehensive AI Act, signaling a firm commitment to addressing the challenges posed by AI.

While the EU leads the way in AI regulation, other regions, including Japan, the United States, and Southeast Asian countries, have adopted differing approaches, some more hands-off, to stimulate economic growth through AI innovation.

European Commission digital chief Věra Jourová highlighted the code of conduct as a transitional measure to ensure safety until comprehensive AI regulation is firmly in place.

The introduction of a voluntary code of conduct by the G7 reflects a collaborative effort to govern advanced AI development and promote responsible AI usage. This landmark initiative underscores the global importance of addressing the challenges and opportunities that AI technology presents.

As AI continues to shape the future, the G7's code of conduct marks a notable step towards responsible AI development and use on a global scale.