The Struggle is Real: Why Enterprise A.I. Fails to Deliver Results for Most Companies
According to experts, the real barrier to A.I. success is not the technology but the legacy systems and messy data that undermine it. A recent MIT report found that 95 percent of enterprise A.I. applications fail to deliver the revenue growth companies expect, while a Wharton study revealed that it is still “too early” for most large organizations to see measurable gains from A.I.
However, long-term optimism remains high, with 88 percent of the Wharton study respondents saying their organizations expect to increase A.I. spending next year. Adam Gabrault, CEO of Solvd, a software and digital infrastructure firm, believes the narrative that A.I. can’t deliver business impact is misleading. In a survey of 500 U.S. CIOs and CTOs from companies with annual revenues exceeding $500 million, nearly 60 percent reported business benefits from A.I. in specific areas, including predictive analytics, customer support, HR, and data management.
Aligning A.I. with Clear Goals and Digital Transformation Practices
Companies that use A.I. effectively tend to align it with clear goals and follow long-established digital transformation practices. These approaches have guided successful tech adoption since the rise of personal computers and the shift to cloud computing. Gabrault emphasizes the importance of tying A.I. to a specific objective, such as reducing customer churn, improving support, or lowering costs. The “think big, take small wins” mindset applies here as well, as companies that see returns on A.I. don’t try to use it to “solve all problems.”
Deploying A.I. on top of legacy systems and poor-quality data is often futile. An insurance company still relying on 30-year-old systems to write policies and manage claims, for example, will struggle to make any A.I. platform work. Before they can adopt A.I. at all, companies need to assess their data stack and determine how to make it A.I.-ready. Bakul Banthia, co-founder of Tessell, an A.I.-native enterprise data platform, notes that A.I. models run best on complete and consistent data.
Navigating Governance and Regulation
Governing A.I. is complex, and as a new technology, it lacks a well-established framework, leaving companies to navigate largely uncharted territory. Steven Pappadakes, founder and CEO of NE2NE, an automation and data integration company, emphasizes the importance of being thoughtful and ethical about how A.I. is used in business, and of continually monitoring and refining governance. Privacy and data protection should be top priorities, and building a strong relationship with an A.I. provider can help companies understand the technology and train internal teams.
Companies should also be aware that regulators like the SEC have lost patience with A.I.-washing—the practice of overstating a product’s A.I. capabilities. A.I.-washing can lead to legal consequences, fines, and lasting reputational damage. In the U.S., while federal regulators have been cautious about imposing broad A.I. rules, most states have already enacted or plan to enact some form of A.I. legislation. Companies operating in Europe face a more complex compliance landscape, with new laws such as the EU AI Act taking effect.
Highly regulated sectors such as finance, banking, and health care must involve strong compliance teams from the outset. These teams must vet A.I. projects, approve deployments, and track new rules across local jurisdictions. Companies that plan for compliance early will be better prepared as new A.I. regulations inevitably emerge. As Gabrault notes, those organizations that have a structure and framework ready will be in a much better position than those that do not.
Image Source: observer.com