Generative AI (GenAI) has moved rapidly from experimentation onto the enterprise agenda, yet many early pilots still fail to make it past proof of concept because expectations are misaligned with the practical realities of deploying the technology at scale. The IDC report, “Successfully Navigating the Complexities of GenAI,” shows that 29% of enterprise cloud buyers have already introduced GenAI workloads into their businesses. A further 60% are experimenting or working on deployments.
This means the shift from pilot to production is now a mainstream challenge rather than an edge case.
The good news is that most common failure modes are both predictable and avoidable. IDC’s findings point to a repeatable path for successful deployments: start with measurable business outcomes, select use cases that align with data and workflow realities, then build an ecosystem of technologies and services to scale delivery.
Why early pilots fail
Early pilots often begin with a generic ambition such as “boost productivity” or “transform customer service”, then stall when it becomes clear that GenAI is not a plug-in feature but a programme that touches the whole operating model: these projects are highly complex and frequently require changes to IT processes, infrastructure, data governance, security, culture and operational procedures.
Research shows that the most significant adoption challenge is an inability to identify use cases where generative AI can truly benefit the business, closely followed by an inability to determine the ROI of the pilot.
Other significant factors include immature technologies, regulatory considerations, differing infrastructure requirements, organisational inertia and concerns about how AI will affect employees. The result is pilot projects that demonstrate novelty but cannot prove value, cannot be scaled safely or cannot be embedded into real work.
Another frequent cause of failure is underestimating data location and access constraints. According to IDC, just 27% of an organisation’s data sits in the public cloud, leaving 73% in centralised data centres, remote offices or edge locations. This dispersal raises data protection and sovereignty issues, complicating model access and governance.
Identify generative AI use cases that deliver meaningful gains
A use case is viable when:
- It solves a real business problem
- Its impact can be measured
- It can be operated securely
- It can be integrated into day-to-day workflows.
IDC’s 3Q24 Cloud Pulse Survey showed enterprises’ leading near-term GenAI goals include improving customer experience and support (21%) and using AI to drive better analytics (19%).
Use case selection should begin by translating those ambitions into operational outcomes and clear KPIs. Organisations must relate GenAI benefits to business savings or key performance indicators. They can then communicate KPI success to raise the profile of projects, drive awareness across the business and help unlock future innovation and budget.
To ensure a deployment is genuinely useful, rather than simply novel, ask the following questions:
- Does the process rely on knowledge work, language, summarisation, drafting, classification, search or guided decision support?
- Can performance be measured using baseline metrics such as cycle time, first contact resolution, customer satisfaction, cost per ticket, time to insight or reduction in rework?
- Can the AI output be embedded into an existing system of record, workflow tool, service desk or customer channel so it becomes part of normal operations rather than a separate chatbot?
These questions will keep your pilot focused on delivering true business benefits.
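For instance, proving value against a baseline can be as simple as recording pre-pilot figures for the chosen KPIs and reporting the change once the pilot has run. The sketch below (in Python, with purely hypothetical metric names and numbers) shows the kind of before-and-after evidence that separates proven value from novelty.

```python
# Illustrative baseline comparison for a GenAI pilot (hypothetical figures).
# The point is to capture a pre-pilot baseline and report the change for
# each KPI, so the pilot can prove value rather than novelty.

baseline = {"cycle_time_mins": 42.0, "first_contact_resolution_pct": 61.0, "cost_per_ticket_gbp": 6.80}
pilot = {"cycle_time_mins": 31.0, "first_contact_resolution_pct": 69.0, "cost_per_ticket_gbp": 5.10}

for kpi, before in baseline.items():
    after = pilot[kpi]
    change_pct = (after - before) / before * 100
    print(f"{kpi}: {before} -> {after} ({change_pct:+.1f}%)")
```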
Build the foundations for deployment
Even the best use case will struggle without the right data architecture and integration approach. IDC notes that AI agents require a data architecture that supports all data types and data models for ingestion, and that organisations get more contextually relevant answers when they take a holistic approach to how GenAI interacts with their data.
The first step towards holistic data is preparing for retrieval-augmented generation (RAG). RAG retrieves relevant internal enterprise data at query time and supplies it to a publicly available large language model as context, so outputs reflect your business rather than generic training data. This often requires a hybrid approach with access to both external and internal data, and it depends on examining data architectures, avoiding data silos, ensuring data is clean and maintaining all compliance obligations.
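As a minimal sketch of that flow, the Python below retrieves relevant internal records, adds them to the prompt and calls a model. The in-memory document list, the keyword-overlap retriever and the call_llm stub are all illustrative placeholders; a production deployment would use embeddings, a governed vector store and whichever hosted or on-premises model your sovereignty and compliance requirements allow.

```python
# Minimal RAG sketch: retrieve internal documents relevant to a question,
# then pass them to a language model as grounding context.

from typing import List

# Illustrative internal knowledge base (in practice: cleaned, governed,
# access-controlled enterprise sources).
DOCUMENTS = [
    "Refund requests over 500 GBP require approval from a team lead.",
    "Standard delivery is 3-5 working days; express delivery is next day.",
    "Customers can update their billing address in the self-service portal.",
]

def retrieve(question: str, docs: List[str], top_k: int = 2) -> List[str]:
    """Rank documents by naive word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:top_k]

def build_prompt(question: str, context: List[str]) -> str:
    """Combine retrieved internal data with the user's question."""
    context_block = "\n".join(f"- {c}" for c in context)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {question}"
    )

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; replace with your provider's client."""
    return f"[model response grounded in a prompt of {len(prompt)} characters]"

def answer(question: str) -> str:
    context = retrieve(question, DOCUMENTS)
    return call_llm(build_prompt(question, context))

print(answer("How long does standard delivery take?"))
```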
Integration is equally important because the ‘last mile of AI’ is where adoption is won or lost. IDC stresses that organisations need to consider how GenAI offerings will be integrated within existing workflows and business applications, especially as AI capabilities are introduced across your technology stack.
Scale through ecosystems and services
One of IDC’s clearest messages is that enterprise GenAI adoption is not just a product decision; it is an ecosystem decision. The complexity of GenAI deployments is pushing vendors, professional services organisations and enterprises to partner on delivery.
Professional services accelerate results when moving from pilot to repeatable delivery. Some 72% of enterprise cloud buyers already use some form of professional services, citing benefits such as improved cloud management, better cost insights, increased flexibility and avoidance of vendor lock-in through access to an ecosystem of providers. The same approach applies directly to GenAI: the right partner can help your teams define KPIs, set baselines, design governance, industrialise integration and build change management into the rollout.
Infrastructure choices also need to reflect where data lives and how workloads will run. Requirements span dedicated and public cloud, and on-premises and off-premises deployments, which calls for highly interoperable environments and tech stacks.
To succeed, your business must ground every GenAI pilot in measurable outcomes, select use cases that align with data and workflow realities, and lean on an ecosystem of technology and professional services. This will turn GenAI from a series of experiments into a scalable capability that spans functions.
To learn more about successfully scaling your GenAI use cases and how WTL can help, please give us a call.