5 Critical Factors To Mobilize Generative AI The Right Way

A comprehensive framework to build a competitive advantage
By Nidhi Agarwal, Prakhar Sureka, Ashwini Karandikar, and Kapil Sabharwal

The meteoric rise of generative artificial intelligence (AI) is transforming every aspect of doing business. While the technology has unlocked opportunities to drive greater efficiencies and spur innovation, it also comes with a unique set of challenges and risks, ranging from privacy and cybersecurity to hallucination and reliability. Additionally, governments across the world are ratcheting up scrutiny and regulation of generative AI, according to an Oliver Wyman Forum report.

Navigating this balancing act requires a comprehensive and enterprise-wide approach. Leaders need to ensure that generative AI is tied to use cases that will increase value for the organization and its customers. That requires a careful examination of how generative AI is deployed across functions, from customer service to risk and compliance, while ensuring that the organization can manage potential risks. 

Five key factors for successful generative AI deployment in organizations

Too often, we’ve found that organizations take a siloed approach to deploying advanced AI tools, partly to keep up with the technology arms race. But that results in inconsistent or suboptimal results. To achieve better and sustained results, we’ve identified five critical and interconnected success factors that executives must master, whether they were early adopters in 2022 or are taking a more conservative approach. 

1. Prioritize for value by measuring generative AI impact

Executives are under pressure to prove a return on their generative AI investments but are struggling to capture measurable financial impact from initiatives such as chatbots. Organizations need a robust value-measurement framework to ensure investments translate into true business, customer, and shareholder value.

We have developed a value measurement framework that leaders can use to build objective metrics, outlining four aspects of generative AI value — impact, workflows, shareholder value, and broad applicability.

Exhibit: Value measurement framework for generative AI (donut chart outlining four aspects of generative AI value: impact, workflows, shareholder value, and broad applicability)

Another area where companies struggle is integrating generative AI with traditional AI and analytics capabilities, then embedding the results into systems and processes. The aim is to match the right technical solution to each task and to build a workbench of reusable widgets and AI agents that automate repeatable tasks and improve process efficiency and consistency with supporting technology.

Consider financial institutions where AI can bolster decision support throughout the credit process. Managing non-retail credit is highly manual and complex. For example, writing and reviewing a credit memo requires specialized industry and credit knowledge. By employing agentic workflows, banks can leverage AI agents to reason, fulfill tasks, and automate manual and time-consuming steps, with humans stepping in as necessary.  

This not only saves time but also enables a more comprehensive analysis of creditworthiness across multiple tasks, such as document collection from various sources, data verification and validation, and AI-assisted risk assessments. Notably, this is not just workflow automation; AI can conduct complex reasoning tasks, such as identifying the key risk drivers a relationship manager should assess when extending credit to an automotive dealership.
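An agentic workflow of this kind can be pictured as a pipeline of task-specific agents passing shared case state, with an explicit flag for human escalation. The sketch below is illustrative only: the step names, stubbed outputs, and `CreditCase` fields are assumptions, not an actual bank implementation.

```python
from dataclasses import dataclass, field


@dataclass
class CreditCase:
    """Working state passed between steps of a hypothetical credit workflow."""
    borrower: str
    documents: list = field(default_factory=list)
    verified: bool = False
    risk_notes: list = field(default_factory=list)
    needs_human_review: bool = False


def collect_documents(case: CreditCase) -> CreditCase:
    # In practice an agent would pull filings, statements, and bureau data;
    # here the document list is stubbed for illustration.
    case.documents = ["financial_statements", "dealer_inventory_report"]
    return case


def verify_data(case: CreditCase) -> CreditCase:
    # Validation logic is simplified to a presence check.
    case.verified = len(case.documents) > 0
    return case


def assess_risk(case: CreditCase) -> CreditCase:
    # A reasoning model would surface drivers such as inventory-financing
    # exposure for an automotive dealership; the note below is a stub.
    case.risk_notes.append("floor-plan financing concentration")
    # Escalate to a relationship manager when judgment is required.
    case.needs_human_review = True
    return case


PIPELINE = [collect_documents, verify_data, assess_risk]


def run_credit_workflow(borrower: str) -> CreditCase:
    case = CreditCase(borrower)
    for step in PIPELINE:
        case = step(case)
    return case
```

The key design point is the human-in-the-loop flag: the agents automate the mechanical steps, while judgment calls are routed back to a person.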

A variety of applications are emerging across other industries. In healthcare, for instance, generative AI is showing promise in advancing personalized medicine and improving the patient experience. Insurance companies are tapping into generative AI to analyze large amounts of data, reduce overpayments, and help leaders make more informed decisions. And auto manufacturers are using it to accelerate research and development. 

2. Ensure governance that accelerates AI

Organizations must define internal governance structures that establish clear roles and responsibilities.

A centralized governance structure with an empowered council of senior stakeholders is one model for success. Such a senior management council or steering committee should cut across the entire enterprise and can play a vital role in prioritizing and approving use cases to guide the right investments. It can also oversee risk management processes, ensure alignment with organizational objectives, provide strategic direction, and focus decisions on true value potential, tracking it through agreed key performance indicators.

3. Implement appropriate risk guardrails and controls

Risk management guardrails are essential to mitigate operational, reputational, and regulatory challenges. Yet too many controls can paralyze innovation. Leading risk teams are implementing generative AI risk-tiering matrices to distinguish high-impact, customer-facing models from low-risk internal copilots.

A comprehensive risk management framework should encompass several critical elements, including how to assess risks, use case risk tiering, validation and testing guidelines, and risk-proportionate controls. Use case risk tiering, for instance, involves categorizing generative AI applications based on their potential impact and associated risks. High-risk use cases may require more stringent validation and testing protocols, while lower-risk applications can follow streamlined processes.
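In practice, risk tiering can be as simple as a mapping from a use case's attributes to a tier, and from each tier to a proportionate control set. The tier names, attributes, and controls below are illustrative assumptions, not a prescribed taxonomy.

```python
def risk_tier(customer_facing: bool, decision_impact: str) -> str:
    """Assign a risk tier; customer-facing, high-impact use cases
    receive the most scrutiny (tiers and rules are illustrative)."""
    if customer_facing and decision_impact == "high":
        return "tier-1"
    if customer_facing or decision_impact == "high":
        return "tier-2"
    return "tier-3"


# Risk-proportionate controls: stricter validation for higher tiers,
# streamlined processes for lower-risk internal applications.
CONTROLS = {
    "tier-1": ["independent validation", "pre-release stress testing", "human sign-off"],
    "tier-2": ["peer review", "sampled output monitoring"],
    "tier-3": ["self-assessment", "periodic spot checks"],
}
```

For example, a customer-facing credit decision assistant would land in tier-1 and require independent validation, while an internal drafting copilot would fall to tier-3 with lightweight checks.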

Clear validation and testing guidelines are crucial for ensuring the reliability and accuracy of generative AI models, which includes defining methodologies for evaluating model performance and conducting stress tests. 

Implementing risk-proportionate controls, guardrails, and filters is vital, too, but they shouldn’t strangle innovation — they must scale with risk and be designed to prevent misuse or unintended consequences such as bias in decision-making or breaches of customer privacy. By defining a robust risk management framework, organizations can not only enhance their governance model but build trust with stakeholders by demonstrating a commitment to responsible AI deployment.

4. Design a centralized operating model to break down silos

Implementing change management practices to prepare the organization for the cultural shift that accompanies generative AI adoption is essential. One of the most common and costly friction points we've observed is the tension between technology (CIO/CTO) teams and risk and data teams. CIOs want to move fast, deploy enterprise-wide platforms, and deliver generative AI at scale, while model risk and data teams, rightly, want to ensure compliance, explainability, and accountability.

Too often, this misalignment stems from an unclear operating model, where accountability for model risk, data ownership, or model monitoring is blurred. As a result, innovation slows, deployment stalls, and pilots never scale. Clarifying accountability across tech, risk, and data isn't just good governance; it's essential for agility in the operating model.

A centralized center of excellence (CoE), particularly in the nascent stages of adoption, offers several advantages and has proven successful with multiple clients. A well-designed CoE can ensure proper oversight is in place, assess projects for scalability, and reduce duplicative efforts across siloed units.

The CoE should work in tandem with the centralized governance framework and act as the execution arm of the senior management council. Successful CoEs have:

  • Sufficient resources, including skilled personnel. This may involve recruiting data scientists, AI specialists, and domain experts
  • A dedicated budget to drive innovation, support departments in their efforts, and develop playbooks, toolkits, and reusable assets
  • The ability to provide change management support to business units, such as promoting a culture of collaboration and knowledge sharing, providing training on new technologies, and fostering a mindset of innovation and adaptability
  • Clarity around roles and accountability, especially when it comes to scaling applications across technology, data, data science, and risk teams

Finally, establishing KPIs to monitor the effectiveness of the CoE and its initiatives can help track the number of successful generative AI projects launched, the time taken to deploy solutions, and the overall impact on business outcomes.
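The KPIs named above can be rolled up from a simple project log. The sketch below assumes a minimal record format (`status`, `days_to_deploy`, `impact_usd`); real CoE reporting would draw on richer portfolio data.

```python
from statistics import mean


def coe_kpis(projects: list) -> dict:
    """Summarize CoE effectiveness from a log of generative AI projects.
    The record fields are illustrative assumptions."""
    launched = [p for p in projects if p["status"] == "launched"]
    return {
        # Number of successful generative AI projects launched
        "projects_launched": len(launched),
        # Average time taken to deploy solutions
        "avg_days_to_deploy": mean(p["days_to_deploy"] for p in launched) if launched else 0.0,
        # Overall impact on business outcomes (estimated, in USD)
        "total_impact_usd": sum(p.get("impact_usd", 0) for p in launched),
    }
```

Even a lightweight roll-up like this gives the senior management council a consistent view of whether the CoE is converting pilots into deployed value.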

5. Invest in the right foundational technology stack

A modern technology stack is required to scale AI solutions. It should include strong data management that facilitates the collection, storage, and processing of large volumes of data, ensuring high-quality inputs as well as integration with workflows to ensure AI can scale. Moreover, organizations should invest in tools and frameworks that support the development and deployment of generative AI applications, such as cloud-based platforms that enable scalability.

Selecting the right large language model (LLM) is essential for ensuring generative AI delivers trusted, effective results in production-grade, mission-critical environments. Not all LLMs are ready for production-grade enterprise use, especially when the use case involves domain-specific knowledge, long context windows, or decision-making with risk or compliance implications. Using a general-purpose model where a fine-tuned or domain-adapted one is needed will hurt performance, and business outcomes will suffer with it. It is important for the technology stack and platform to allow for choice of model, along with capabilities to test and monitor the foundational models.
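Testing candidate models before committing can be sketched as scoring each one against a domain-specific evaluation set. Everything below is illustrative: the callables stand in for real LLM API wrappers, and the substring-match scoring is a deliberately crude proxy for proper evaluation metrics.

```python
def evaluate_models(models: dict, test_cases: list) -> dict:
    """Score each candidate model on a domain-specific test set.

    `models` maps a name to a callable prompt -> answer; in production
    these would wrap real LLM APIs, and scoring would use task-appropriate
    metrics rather than the substring check used here for illustration.
    """
    scores = {}
    for name, model in models.items():
        correct = sum(
            1
            for prompt, expected in test_cases
            if expected.lower() in model(prompt).lower()
        )
        scores[name] = correct / len(test_cases)
    return scores
```

A harness like this, run continuously, also supports the monitoring side: the same test set can flag regressions when a vendor updates a foundational model underneath you.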

Integration capabilities are also crucial, as generative AI solutions must seamlessly integrate with existing systems to enhance functionality and user experience. This integration is vital for maintaining operational efficiency and ensuring that AI can be part of normal workflow and unlock gains for the organization. Furthermore, security measures and compliance tools should be implemented to protect sensitive data and ensure adherence to regulatory requirements, including encryption, access controls, and monitoring tools to detect and respond to potential security incidents.

Why organizations need to scale generative AI now

Organizations must take a holistic approach that addresses both the strategic and operational challenges of generative AI. A piecemeal approach risks missed opportunities and accumulated technical debt. Those that integrate risk, value, and execution from day one will build an enduring competitive advantage. Integrating generative AI into business-as-usual operations not only positions organizations to thrive in the current environment but also ensures they remain competitive in an increasingly digital landscape.

Move from AI ambition to real-world value and strategic success by connecting with our Quotient — AI by Oliver Wyman team to lead the transformation.