Microsoft President Brad Smith has endorsed the creation of a new federal agency to license major AI systems. The endorsement reflects growing demand for regulatory oversight of an industry that is rapidly commercializing powerful AI tools such as ChatGPT. In an interview with The Wall Street Journal, Smith emphasized the importance of ensuring that AI benefits people while remaining under human control, signaling a shift toward safeguarding the development of AI technologies.
Smith's stance aligns with broader calls for responsible AI regulation and underscores Microsoft's stated commitment to fostering AI's positive impact. The proposal for a new federal agency echoes Sam Altman, CEO of OpenAI, the creator of ChatGPT, who recently advocated for such regulatory measures before a congressional panel.
The urgency for AI regulation stems from the rapid adoption of AI technologies like ChatGPT, which can emulate human conversation, produce creative writing, and generate code. Policymakers are discussing bipartisan legislation to establish safeguards for AI, while the Biden administration is seeking public input to shape a national AI strategy that may lead to comprehensive regulations.
Concerns about AI's potential harms, such as more capable hacking tools and the manipulation of voter behavior, have fueled calls for regulatory intervention. Microsoft's own widespread deployment of powerful AI tools, including ChatGPT, has made the case for comprehensive rules all the more pressing.
Microsoft's partnership with OpenAI, which involves a significant investment, has positioned it as a key player in the AI landscape. ChatGPT's integration with Microsoft's Azure cloud-computing platform and its incorporation into products like the Bing search engine illustrate Microsoft's determination to compete with tech giants like Google.
To address these concerns, Smith emphasized Microsoft's advocacy for AI safeguards, pointing to the company's support for legislation regulating facial-recognition technology in Washington state. The company aims to establish a regulatory framework that can be universally embraced by AI stakeholders.
Smith's proposed regulatory approach extends beyond AI creators to companies that provide AI-based applications. Microsoft argues that such companies should take responsibility for knowing their customers and identifying potential misuse of their technology. The company also recommends labeling AI-generated digital content to promote transparency.
While AI regulation is still in its early stages, Smith called for a focus on critical infrastructure AI systems, such as those used in power grids and city traffic systems. Additionally, Smith recommended that the administration require companies selling AI tools to the government to adhere to the AI risk-management framework outlined by the National Institute of Standards and Technology.
As discussions unfold, Smith acknowledged that the industry may adopt voluntary AI standards and best practices, while stressing the pivotal role government should play in shaping AI regulations. The call for comprehensive regulation reflects a shared responsibility among industry players, policymakers, and technology leaders to harness AI's potential while ensuring its responsible and ethical deployment.