The generative AI revolution, spearheaded by the launch of ChatGPT (built on GPT-3.5) in November 2022, has propelled businesses into a race to harness this transformative technology. However, organizations face unique challenges in developing generative AI assistants, as the complexity and pace of advancement in this field differ significantly from traditional tech builds. Nearly two years after the first significant generative AI tools appeared, it is becoming evident that many firms run a considerable risk of "failing" in their efforts to create effective AI assistants. The likelihood of making poor choices across the many dimensions of AI development is high, often necessitating substantial redesigns or complete rebuilds from the ground up within just a few years.
To illustrate these challenges, consider two hypothetical projects undertaken by an airline, starting with the straightforward development of a mobile app for managing customer bookings. In this case, the airline would typically follow a familiar process to outline a business case, seek stakeholder approvals, and execute the mobile app build. This standard methodology generally yields satisfactory results for conventional software development initiatives. In contrast, envision the airline attempting to develop a generative AI-based customer service assistant. This approach is fraught with difficulties since the technological landscape is evolving so rapidly that the budget and approved models may soon become obsolete or inappropriate, requiring significant recalibration. Organizations cannot rely on the traditional “set it and forget it” strategy typical in standard tech projects when it comes to generative AI.
Three primary risk factors can derail such initiatives. First, the choice of a large language model (LLM) vendor is critical; the technology landscape is dynamic, and the leading models of today may be less relevant in the near future. The performance of LLMs is subject to rapid change as new models are released, making it essential for businesses to maintain agility and avoid becoming locked into potentially outdated technology. Second, organizations must weigh the pros and cons of open-source versus closed LLMs. While closed models can offer easier implementation and dedicated support, they often come with high costs and limitations on customization. Open-source alternatives may provide more flexibility at lower cost but can demand advanced in-house engineering capability; each path carries its own set of trade-offs.
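One common way to keep the vendor decision reversible is to isolate the model behind an internal interface, so the assistant's codebase depends on a stable contract rather than on any one vendor's SDK. The sketch below illustrates the idea in Python; the class and function names (`LLMClient`, `EchoStubClient`, `build_client`) are hypothetical, and the stub stands in for whatever closed-API or self-hosted open-source adapter a team would actually write.

```python
from abc import ABC, abstractmethod


class LLMClient(ABC):
    """Provider-agnostic contract; the rest of the assistant depends
    only on this interface, never on a specific vendor SDK."""

    @abstractmethod
    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        ...


class EchoStubClient(LLMClient):
    """Hypothetical stand-in adapter. A real one would wrap a vendor
    API or a self-hosted open-source model behind the same signature."""

    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        return f"[stub reply to: {prompt}]"


def build_client(provider: str) -> LLMClient:
    # Swapping vendors becomes a one-line registry change,
    # not a rewrite of the assistant.
    registry = {"stub": EchoStubClient}
    return registry[provider]()


client = build_client("stub")
print(client.complete("What is my baggage allowance?"))
```

The pattern does not remove the cost of switching models (prompts and evaluations still need re-tuning), but it confines the code changes to a single adapter rather than spreading vendor-specific calls through the whole application.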
Moreover, the fast-paced advancements in generative AI mean that fundamental technological breakthroughs could alter how AI assistants are built and maintained. Current best practices may soon be eclipsed by novel approaches that leverage multiple AI models working in concert or more sophisticated memory systems that enhance conversational capabilities. AI researchers caution that substantial shifts in the underlying technology could require a complete overhaul of existing infrastructure if older methods are rendered obsolete. This potential for change further complicates the landscape for organizations trying to navigate the generative AI space, demanding a willingness to pivot and adapt rapidly.
Given the significant uncertainties surrounding generative AI, organizations must adopt novel operational frameworks. Traditional project management approaches, characterized by a linear path from planning to execution, are too rigid for the volatile nature of AI development. Companies need to establish cross-functional teams that regularly engage with evolving AI technologies, allowing for ongoing decision-making and strategy adjustments. Setting up agile processes similar to those used for seasonal product launches or pricing models may provide the necessary infrastructure for quick responses to emerging challenges.
Financial planning must also depart from conventional models. Rather than earmarking a one-time budget for a project, organizations must embrace a flexible funding model that accounts for continuous evolution and potential redirection. A dedicated team focused on generative AI initiatives should be seen as a long-term investment, not a short-term project. Additionally, investing in modernizing existing tech stacks can facilitate smoother integration with generative AI systems, ensuring that firms do not fall behind competitors in leveraging AI capabilities. In doing so, companies can transform the complexities of generative AI builds into opportunities for infrastructural improvements and iterative enhancements that keep them competitive in an increasingly AI-driven landscape.