Why Exaggerated AI Claims Threaten Sustainable Innovation

New research from LAU’s Adnan Kassar School of Business warns that inflated claims about artificial intelligence are eroding trust, distorting markets and undermining the sustainability of technological innovation.

Artificial intelligence (AI) has rapidly become one of the defining forces of contemporary business and society, promising efficiency, innovation and solutions to some of the world’s most complex challenges. 

Yet as AI narratives proliferate across industries, a quieter and more troubling phenomenon is gaining momentum: AI washing, the exaggeration and misrepresentation of AI capabilities to enhance corporate image, attract investment or secure legitimacy.

In “AI Washing: A Conceptual Exploration,” a study published in the Academy of Marketing Science Review in November 2025, Associate Professor Omar Itani, chair of the Department of Marketing at LAU’s Adnan Kassar School of Business (AKSOB), together with Professor of Marketing Samer Elhajjar of the National University of Singapore, offers one of the most comprehensive academic frameworks to date for understanding how and why this phenomenon occurs and why it poses serious ethical, economic and societal risks.

While most AI ethics scholarship has focused on algorithms, bias and governance, far less attention has been paid to how AI is marketed to consumers, investors and regulators. Dr. Itani explains that this leaves a critical blind spot. “Most discussions assume harm emerges only from how AI systems function,” he says. “But misrepresentation itself is a form of harm. When companies claim AI sophistication they do not actually possess, they distort expectations and decisions long before any algorithm is deployed.”

The timing of the research is deliberate. “AI is at a hype peak,” notes Dr. Itani, “and many stakeholders lack the technical expertise needed to verify corporate claims, creating fertile ground for vague language and inflated narratives.” 

“If these practices go unexamined,” he adds, “they risk becoming normalized.”

Rather than viewing AI washing as a series of isolated missteps, the study frames it as systematic market behavior driven by competition, investor pressure and regulatory gaps. Companies often amplify AI narratives to signal innovation, adopt AI language to appear credible, and tailor claims to reassure different audiences.

Over time, even references to ethical or responsible AI can become performative, creating the appearance of accountability while concealing gaps between promise and practice. Seen this way, AI washing is less about careless wording and more about how trust and perception are managed in innovation-driven markets.

A central contribution of the research is its classification of AI washing practices. Symbolic AI washing, according to the study, relies on buzzwords and branding with little substantive AI integration. Attention-deflection AI washing highlights narrow tools, such as chatbots, while masking largely manual operations. The most severe form, deceptive-manipulation AI washing, involves deliberate falsehoods about AI transparency or ethics, often intended to avoid scrutiny.

“Not all AI washing is equally deceptive,” says Dr. Itani, “but all of it carries consequences for trust and market integrity.”

Those consequences extend beyond individual firms. AI washing can mislead investors, misdirect capital and erode consumer confidence, thereby slowing the adoption of legitimate technologies. Over time, widespread skepticism may prompt regulatory overcorrection, constraining ethical and sustainable innovation. “AI washing may offer short-term visibility,” warns Dr. Itani, “but in the long run it damages markets and delays progress.”

From a sustainability perspective, the study highlights how performative innovation diverts resources away from technologies capable of delivering long-term social and economic value. In doing so, it underscores a broader insight: Sustainable digital transformation depends as much on honest communication as on technological advancement.

To address these pressing issues, the authors urge businesses to adopt greater transparency in AI communication, clearly distinguish between automation and AI, and disclose system limitations. Policymakers, meanwhile, are encouraged to clarify standards governing AI-related claims, and consumers to develop critical evaluation skills. Education, the study asserts, is crucial to restoring accountability.

As debates around AI governance intensify worldwide, the research contributes to ongoing discussions about how emerging technologies are communicated and evaluated. By examining AI through a marketing ethics lens, the study underscores the importance of addressing not only how technologies function but also how claims about them shape expectations and trust. This analytical focus is strengthened by the collaboration with Dr. Elhajjar, which adds an international and interdisciplinary dimension by bringing together expertise in AI systems and marketing ethics.

The research aligns with AKSOB’s broader focus on ethical marketing, corporate accountability and the responsible use of emerging technologies, areas that continue to gain relevance in business education and practice.

By emphasizing transparency and trust as essential conditions for long-term innovation, the study offers insights relevant to scholars, policymakers and practitioners alike. “AI has the potential to deliver transformative benefits,” concludes Dr. Itani. “But that potential will only be realized if innovation is matched by integrity.”