Rethinking Cybersecurity in Smart Cities: Where Technology Is Trusted
An AI-driven framework developed by LAU’s AKSOB scholars reframes cybersecurity as a shared, adaptive responsibility shaped by technology, trust and human readiness.
As cities increasingly rely on interconnected digital systems to manage transportation, energy and public services, cybersecurity has emerged as one of the defining challenges of modern urban life. In a study published in the Journal of Innovation & Knowledge, researchers from LAU’s Adnan Kassar School of Business (AKSOB) examine how artificial intelligence (AI) can strengthen cybersecurity in smart cities, arguing that technology alone is insufficient to secure complex urban systems.
Co-authored by Manal Yunis, Associate Professor and Chair of the Information Technology and Operations Management Department, and Ayman Khalil, Assistant Professor of Practice at AKSOB, alongside Heng Zeng of the Business School at Hubei University in Wuhan and Nawazish Mirza of the Excelia Business School, CERIIM, in France, the study proposes a conceptual framework that treats cybersecurity as a shared, adaptive responsibility.
By combining AI-driven anomaly detection with human behavior, organizational readiness and governance, the researchers offer a more holistic way to think about protecting smart city infrastructures.
“Our cities are getting smarter but also more vulnerable,” they explained, pointing out that every added sensor or connected device improves services while opening potential doors for cyberattacks. What concerned them most was not technological advancement, but the way cybersecurity was often discussed in isolation—treated either as a technical problem to be engineered away or a policy issue to be regulated, with little attention paid to the people expected to interact with these systems every day.
This forms the study’s central premise: Smart city cybersecurity should be understood as an evolving ecosystem rather than a fixed technical solution. Instead of positioning AI as a standalone fix, the framework integrates AI into a broader system that includes users, institutions and the social contexts in which technologies operate. In this view, the smart city behaves like a living system, one that adapts continuously as technologies, threats and human practices evolve.
Developing such a framework required grappling with the real-world complexity of smart cities. “Smart city environments involve thousands of heterogeneous Internet of Things devices, multiple computing layers and a wide range of stakeholders, from citizens and engineers to city officials and regulators. Integrating different theoretical perspectives into a single, coherent model meant striking a careful balance between academic rigor and practical relevance,” explained Dr. Yunis.
At the heart of the study are three interconnected insights. “First, AI-driven anomaly detection can significantly strengthen cybersecurity by identifying unusual patterns early, before they escalate into major disruptions,” noted Dr. Khalil. Second, technology alone is never enough, for “without user understanding and engagement, even the most sophisticated tools risk remaining underused or misunderstood,” he added. Third, technological readiness matters, as cities with strong infrastructure, sound governance and robust data capabilities are far better positioned to translate AI’s potential into meaningful, everyday protection.
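The anomaly detection the researchers describe can be illustrated with a minimal statistical sketch: flag any sensor reading that deviates sharply from its recent history. The function, data, window size and threshold below are illustrative assumptions for this article, not the study's actual implementation.

```python
# Minimal sketch of anomaly detection on a stream of smart-city
# sensor readings: flag values far from the rolling mean.
from statistics import mean, stdev

def detect_anomalies(readings, window=10, threshold=3.0):
    """Return indices of readings more than `threshold` standard
    deviations away from the mean of the preceding `window` values."""
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# A steady signal (e.g. a traffic sensor) with one sudden spike:
normal = [50.0, 51.0, 49.5, 50.5, 50.0, 49.0, 50.5, 51.0, 50.0, 49.5]
stream = normal + [50.0, 120.0, 50.5]
print(detect_anomalies(stream))  # → [11], the spike is flagged
```

Catching such a spike early, before it cascades into a service disruption, is the kind of early identification Dr. Khalil describes.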
A notable aspect of the study is its emphasis on explainable AI. In smart homes and cities, AI systems increasingly influence decisions related to safety, privacy and essential services. “When these systems operate as opaque black boxes, trust erodes quickly. If people cannot understand why something was flagged as risky, it becomes very hard to trust the system or challenge it when something goes wrong,” said Dr. Yunis. Explainable AI, by contrast, provides clear and human-understandable reasoning, supporting accountability and enabling users to learn from and improve the systems over time.
This focus on transparency also extends to how data is managed across smart environments. “Rather than funneling all information into centralized systems, the framework promotes a layered approach in which edge and fog devices perform early filtering and local detection,” said Dr. Yunis. This way, only relevant or suspicious data is escalated to more powerful cloud systems, she added, reducing technical overload while also addressing a persistent challenge in cybersecurity: false alarms. Too many alerts, the researchers warn, can lead to fatigue and disengagement, ultimately weakening security rather than strengthening it.
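The layered escalation Dr. Yunis describes can be sketched in a few lines: an edge node pre-filters its readings locally and forwards only the suspicious ones upward. The function names, expected band and scoring rule here are hypothetical illustrations, not taken from the paper.

```python
# Sketch of layered filtering: the edge tier screens readings locally
# and escalates only outliers to the cloud tier, instead of streaming
# every raw reading upward.

def edge_filter(readings, expected=50.0, tolerance=5.0):
    """Local pre-filter: keep only readings outside the expected band."""
    return [(i, r) for i, r in enumerate(readings)
            if abs(r - expected) > tolerance]

def escalate_to_cloud(suspects):
    """Stand-in for the cloud tier: deeper analysis runs only on the
    few escalated readings, not the full raw stream."""
    return [f"alert: reading {r} at index {i}" for i, r in suspects]

stream = [50.2, 49.8, 50.1, 72.0, 50.0, 49.9]
suspects = edge_filter(stream)       # one reading leaves the edge
print(escalate_to_cloud(suspects))   # one alert instead of six readings
```

Keeping the alert volume this low at the cloud tier is precisely what counters the alarm fatigue the researchers warn about.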
Some challenges, however, remain inherently difficult. Subtle cyberattacks that unfold slowly over time, unusual but legitimate human behavior during emergencies or public events and intermittent device malfunctions can all blur the line between normal variation and malicious activity. These cases highlight the limits of automation and reinforce the study’s central message: Effective cybersecurity depends on continuous learning and collaboration between intelligent systems and human judgment.
If the framework were applied in a real smart city today, the authors believe, its impact would be felt less through dramatic technological interventions and more through quiet reliability. Residents would experience fewer unexplained disruptions to transportation, utilities or public services, and city teams would spend less time responding to false alarms and more time addressing genuine risks.
“Over time, citizens would gain greater confidence that the invisible digital layer of their city is being monitored intelligently, transparently and ethically,” concluded Dr. Khalil.