In the ever-evolving landscape of artificial intelligence (AI), generative AI has emerged as a game-changer, unlocking new possibilities for innovation and efficiency across various industries. However, alongside the promises of this technology come significant challenges, especially for corporate leaders who must find a balance between harnessing its potential and managing the associated risks. Ethical risk expert Reid Blackman, CEO of Virtue and author of "Generative AI-nxiety," delves into the anxiety that corporate leaders are facing and offers insights into navigating the complex landscape of generative AI.
The convergence of AI technology and business strategy is driving corporate leaders to grapple with the dual challenge of staying competitive in an AI-driven world and ensuring responsible AI deployment. As Reid Blackman points out, the anxiety stems from two main sources: the fear of lagging behind in technological innovation and the concerns around ensuring safe and ethical use of generative AI. "A lot of leaders are feeling pressure to figure out, how do we use this new technology to innovate, increase efficiencies—make money or save money? The other concern is, how do we do this safely?" Blackman notes. This sentiment reflects the growing urgency among corporate decision-makers to tap into AI's potential while avoiding its pitfalls.
The launch of ChatGPT by OpenAI in November 2022 marked a significant milestone in the evolution of generative AI. While some organizations had been experimenting with large language models (LLMs) for data processing and text generation, ChatGPT's availability brought the power of generative AI to a wider audience within organizations. This democratization of generative AI presents a "double-edged sword," as Blackman describes it. On one hand, it empowers diverse teams across an organization to explore creative ways to drive business outcomes. On the other hand, this accessibility without proper guidance and constraints raises concerns about misuse and potential damage to brand reputation.
Blackman underscores the importance of establishing robust systems and structures to manage generative AI risks effectively. While risk management is not a new concept for enterprises, the unique challenges posed by generative AI necessitate a dedicated approach. The anxiety surrounding generative AI is justified when organizations lack the necessary mechanisms to account for these risks. A recent PwC survey of C-suite leaders reveals that 59% of respondents plan to invest in new technologies over the next 12 to 18 months, and that 52% of CFOs prioritize investments in generative AI and advanced analytics, indicating a strong push toward leveraging AI for strategic advantage.
Creating a responsible AI program and an AI ethical risk program emerges as a strategic imperative for organizations. Blackman emphasizes aligning governance structures, policies, procedures, workflows, and metrics so that these programs are both compliant and effective. Addressing the unique risks of generative AI requires a nuanced approach. Blackman identifies four cross-industry risks: the hallucination problem, the deliberation problem, the sleazy salesperson problem, and the problem of shared responsibility. Each of these risks demands a combination of due diligence, continuous monitoring, and human intervention to mitigate potential negative outcomes.
Metrics and key performance indicators (KPIs) play a pivotal role in evaluating the success of AI ethical risk programs. Blackman suggests tracking measures such as discriminatory outcomes produced by AI models, exposure to insider-trading concerns, and instances where short-term financial gains compromise long-term reputation. While an outright ban on generative AI may seem like a cautious approach, Blackman believes the real opportunity lies in enabling safe usage: rather than stifling innovation, organizations should prioritize educating their workforce on the responsible and ethical use of generative AI.
In essence, the convergence of innovation and ethical responsibility forms the core of managing generative AI risk. By fostering a culture of awareness, understanding, and continuous improvement, organizations can effectively navigate the complexities of generative AI. Balancing innovation with risk mitigation requires a comprehensive approach that includes policies, governance, training, and metrics. As Reid Blackman's insights illuminate, it is through this holistic approach that organizations can unlock the full potential of generative AI while mitigating the anxieties that come with its implementation.