News Security

No-Code AI Agents Open a New Front for Enterprise Fraud

— Keren Katz, Senior Group Manager of AI Security Product and Research, Tenable

New Tenable research shows how agentic AI built on no-code platforms can be hijacked to leak sensitive data and manipulate financial workflows.

The rapid adoption of no-code and low-code AI platforms is transforming how enterprises automate everyday workflows. However, new research from Tenable highlights a critical downside: these tools can also be exploited to enable fraud and data leakage when governance and security controls are insufficient.

Tenable’s latest research details the successful “jailbreak” of an AI agent built using Microsoft Copilot Studio, revealing how democratized AI development can introduce severe and often overlooked enterprise risks. As organizations empower non-developers to build AI agents for efficiency and scale, they may unknowingly expand their attack surface.

“Democratized AI tools can just as easily democratize financial fraud if governance is ignored.”

Keren Katz, Senior Group Manager of AI Security Product and Research, Tenable

To demonstrate the risk, Tenable Research created an AI-powered travel agent designed to autonomously manage customer reservations. The agent was given access to sensitive demo data, including customer names, contact details, and credit card information, and was instructed to verify user identities before sharing data or modifying bookings. Despite these safeguards, researchers were able to exploit the system using prompt injection techniques.

By hijacking the agent’s workflow, Tenable Research bypassed identity verification, extracted payment card information, and manipulated financial fields to reduce a trip’s price to zero—effectively granting free services without authorization. The findings highlight how AI agents with broad permissions can be coerced into executing fraudulent actions without any human involvement.

According to Keren Katz, platforms such as Microsoft Copilot Studio make it easier than ever to build powerful AI agents, but they also make it easier to misuse them. When agents are deployed without clear visibility into their permissions and behaviors, organizations face heightened risks of data breaches, regulatory exposure, and revenue loss.

The research underscores the urgent need for robust AI governance. Tenable recommends that organizations clearly map an agent’s access to systems and data before deployment, enforce least-privilege access controls, and continuously monitor agent activity for signs of data leakage or deviations from intended business logic.

As agentic AI becomes more deeply embedded in enterprise operations, Tenable’s findings serve as a timely warning: without strong oversight and security enforcement, no-code AI tools could quickly evolve from productivity boosters into catalysts for large-scale financial fraud.

Related posts

AI-Powered Cyberattacks Accelerate as NFC Fraud and Ransomware Surge, Finds ESET

Enterprise IT World MEA

ASUS Showcases AI-Driven Workplace Innovation at Expert Connect Dubai

Enterprise IT World MEA

UAE Set to Add Over 1 Million Jobs by 2030 as Tech Talent Demand Surges

Enterprise IT World MEA

Leave a Comment