Business Circle
Technology

Microsoft and ServiceNow’s exploitable agents reveal a growing – and preventable – AI security crisis

By Business Circle Team | February 4, 2026 | 12 min read



Alexey Brin/iStock/Getty Images Plus via Getty Images



ZDNET’s key takeaways

  • Researchers uncover exploitable agentic AI technologies from ServiceNow and Microsoft.
  • Securing agentic AI is already proving to be extremely difficult.
  • Cybersecurity pros should adopt a "least privilege" posture for AI agents.

Could agentic AI become every threat actor's fantasy? I suggested as much in my recent "10 ways AI can inflict unprecedented damage in 2026."

Once deployed on corporate networks, AI agents with broad access to sensitive systems of record can enable the kind of lateral movement across an organization's IT estate that most threat actors dream of.

Also: 10 ways AI can inflict unprecedented damage in 2026

How 'lateral movement' nets threat actors escalated privileges

According to Jonathan Wall, founder and CEO of Runloop — a platform for securely deploying AI agents — lateral movement should be of grave concern to cybersecurity professionals in the context of agentic AI. "Let's say a malicious actor gains access to an agent, but it doesn't have the required permissions to go contact some resource," Wall told ZDNET. "If, through that first agent, a malicious agent is able to connect to another agent with a [better] set of privileges to that resource, then he can have escalated his privileges through lateral movement and potentially gained unauthorized access to sensitive information."
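The escalation path Wall describes can be sketched in a few lines. This toy model is purely illustrative — the agent names, permission strings, and connection mechanics are invented, and no real agent platform API is used — but it shows why compromising one low-privilege agent effectively yields the permissions of every agent it is allowed to connect to:

```python
# Toy model of agent-to-agent lateral movement. All names and the
# permission scheme are hypothetical illustrations.

class Agent:
    def __init__(self, name, permissions, connectable_agents=()):
        self.name = name
        self.permissions = set(permissions)            # resources this agent may touch
        self.connectable_agents = list(connectable_agents)  # agents it may call

    def reachable_permissions(self, visited=None):
        """Permissions obtainable from this agent, directly or by hopping
        laterally through agents it is allowed to connect to."""
        if visited is None:
            visited = set()
        visited.add(self.name)
        perms = set(self.permissions)
        for peer in self.connectable_agents:
            if peer.name not in visited:
                perms |= peer.reachable_permissions(visited)
        return perms

hr_agent = Agent("hr-assistant", {"read:hr_records"})
helpdesk = Agent("helpdesk-bot", {"read:kb"}, connectable_agents=[hr_agent])

# The attacker compromises only the low-privilege helpdesk bot...
assert "read:hr_records" not in helpdesk.permissions
# ...but lateral movement through the connected agent escalates its reach.
assert "read:hr_records" in helpdesk.reachable_permissions()
```

The escalated reach is the union of permissions along every allowed connection path, which is why the connection graph itself, not just each agent's own grants, is the real attack surface.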

Meanwhile, the idea of agentic AI is so new that many of the workflows and platforms for developing and securely provisioning these agents haven't yet considered all the ways a threat actor might exploit their existence. It's eerily reminiscent of software development's early days, when few programmers knew how to code software without leaving gaping holes through which hackers could drive a proverbial Mack truck.

Also: AI's scary new trick: Conducting cyberattacks instead of just helping out

Google's cybersecurity leaders recently identified shadow agents as a critical concern. "By 2026, we anticipate the proliferation of sophisticated AI agents will escalate the shadow AI problem into a critical 'shadow agent' challenge. In organizations, employees will independently deploy these powerful, autonomous agents for work tasks, regardless of corporate approval," wrote the experts in Google's Mandiant and threat intelligence organizations. "This will create invisible, uncontrolled pipelines for sensitive data, potentially leading to data leaks, compliance violations, and IP theft."

Meanwhile, 2026 is hardly out of the gates and, judging by two separate cybersecurity cases having to do with agentic AI — one involving ServiceNow and the other Microsoft — the agentic surface of any IT estate will likely become the juicy target that threat actors are looking for — one that's full of easily exploited lateral opportunities.

Since the two agentic AI-related issues — both involving agent-to-agent interactions — were first discovered, ServiceNow has plugged its vulnerabilities before any customers were known to have been impacted, and Microsoft has issued guidance to its customers on how best to configure its agentic AI management control plane for tighter agent security.

BodySnatcher: 'Most severe AI-driven vulnerability to date'

Earlier this month, AppOmni Labs chief of research Aaron Costello disclosed for the first time a detailed explanation of how he discovered an agentic AI vulnerability on ServiceNow's platform, one that held such potential for harm that AppOmni gave it the name "BodySnatcher."

"Imagine an unauthenticated attacker who has never logged into your ServiceNow instance, has no credentials, and is sitting halfway across the globe," wrote Costello in a post published to the AppOmni Labs website. "With only a target's email address, the attacker can impersonate an administrator and execute an AI agent to override security controls and create backdoor accounts with full privileges. This could grant nearly unlimited access to everything an organization houses, such as customer Social Security numbers, healthcare information, financial records, or confidential intellectual property." (AppOmni Labs is the threat intelligence research arm of AppOmni, an enterprise cybersecurity solution provider.)

Also: Moltbot is a security nightmare: 5 reasons to avoid using the viral AI agent right now

The vulnerability's severity can't be overstated. While the overwhelming majority of breaches involve the theft of one or more highly privileged digital credentials (credentials that afford threat actors access to sensitive systems of record), this vulnerability — requiring only the easily acquired target's email address — left the front door wide open.

"BodySnatcher is the most severe AI-driven vulnerability uncovered to date," Costello told ZDNET. "Attackers could have effectively 'remote controlled' an organization's AI, weaponizing the very tools meant to simplify the enterprise."

"This was not an isolated incident," Costello noted. "It builds upon my earlier research into ServiceNow's Agent-to-Agent discovery mechanism, which, in a nearly textbook definition of lateral movement risk, detailed how attackers can trick AI agents into recruiting more powerful AI agents to fulfill a malicious task."

Researchers a step ahead of hackers on BodySnatcher

Fortunately, this was one of the better examples of a cybersecurity researcher discovering a severe vulnerability before threat actors did.

"At this time, ServiceNow is unaware of this issue being exploited in the wild against customer instances," noted ServiceNow in a January 2026 post about the vulnerability. "In October 2025, we issued a security update to customer instances that addressed the issue," a ServiceNow spokesperson told ZDNET.

Also: Companies are deploying AI agents faster than safety protocols can keep up, Deloitte says

According to the aforementioned post, ServiceNow recommends "that customers promptly apply an appropriate security update or upgrade if they haven't already done so." That advice, according to the spokesperson, is for customers who self-host their instances of ServiceNow. For customers using the cloud (SaaS) version operated by ServiceNow, the security update was automatically applied.

Microsoft: 'Connected Agents' default is a feature, not a bug

In the case of the Microsoft agent-to-agent issue (Microsoft views it as a feature, not a bug), the backdoor opening appears to have been similarly discovered by cybersecurity researchers before threat actors could exploit it. In this case, Google News alerted me to a CybersecurityNews.com headline that stated, "Hackers Exploit Copilot Studio's New Connected Agents Feature to Gain Backdoor Access." Fortunately, the "hackers" in this case were ethical white-hat hackers working for Zenity Labs. "To clarify, we did not observe this being exploited in the wild," Zenity Labs co-founder and CTO Michael Bargury told ZDNET. "This flaw was discovered by our research team."

Also: How Microsoft's new security agents help companies stay a step ahead of AI-enabled hackers

This caught my attention because I had just recently reported on the lengths to which Microsoft was going to make it possible for all agents — whether built with Microsoft development tools like Copilot Studio or not — to get their own human-like managed identities and credentials with the help of the Agent ID feature of Entra, Microsoft's cloud-based identity and access management solution.

Why is something like that necessary? Between the advertised productivity boosts associated with agentic AI and executive pressure to make organizations more profitable through AI, organizations are expected to use many more agents than people in the near future. For example, IT research firm Gartner told ZDNET that by 2030, CIOs anticipate that 0% of IT work will be done by humans without AI, 75% will be done by humans augmented with AI, and 25% will be done by AI alone.

In response to the anticipated sprawl of agentic AI, the key players in the identity industry — Microsoft, Okta, Ping Identity, Cisco, and the OpenID Foundation — are offering solutions and recommendations to help organizations tame that sprawl and prevent rogue agents from infiltrating their networks. In my research, I also learned that any agents forged with Microsoft's development tools, such as Copilot Studio or Azure AI Foundry, are automatically registered in Entra's Agent Registry.

Also: The coming AI agent crisis: Why Okta's new security standard is a must-have for your business

So, I wanted to find out how it was that agents forged with Copilot Studio — agents that theoretically had their own credentials — were somehow exploitable in this hack. Theoretically, the whole point of registering an identity is to easily monitor that identity's activity — legitimately directed or misdirected by threat actors — on the corporate network. It seemed to me that something was slipping through the very agentic safety net Microsoft was trying to put in place for its customers. Microsoft even offers its own security agents whose job it is to run around the corporate network like white blood cells tracking down any invasive species.

As it turns out, an agent built with Copilot Studio has a "connected agent" feature that allows other agents, whether registered with the Entra Agent Registry or not, to laterally connect to it and leverage its knowledge and capabilities. As reported in CybersecurityNews, "According to Zenity Labs, [white hat] attackers are exploiting this gap by creating malicious agents that connect to legitimate, privileged agents, particularly those with email-sending capabilities or access to sensitive business data." Zenity has its own post on the subject, appropriately titled "Connected Agents: The Hidden Agentic Puppeteer."

Even worse, CybersecurityNews reported that "By default, [the Connected Agents feature] is enabled on all new agents in Copilot Studio." In other words, when a new agent is created in Copilot Studio, it's automatically enabled to receive connections from other agents. I was extremely shocked to read this, given that two of the three pillars of Microsoft's Secure Future Initiative are "Secure by Default" and "Secure by Design." I decided to check with Microsoft.

Also: AI agents are already causing disasters – and this hidden threat could derail your safe rollout

"Connected Agents enable interoperability between AI agents and enterprise workflows," a Microsoft spokesperson told ZDNET. "Turning them off universally would break core scenarios for customers who rely on agent collaboration for productivity and security orchestration. This allows control to be delegated to IT admins." In other words, Microsoft doesn't view it as a vulnerability. And Zenity's Bargury agrees. "It's not a vulnerability," he told ZDNET. "But it's an unfortunate mishap that creates risk. We have been working with the Microsoft team to help drive a better design."

Even after I suggested to Microsoft that this might not be secure by default or by design, Microsoft was firm and recommended that "for any agent that uses unauthenticated tools or accesses sensitive knowledge sources, disable the Connected Agents feature before publishing [an agent]. This prevents exposure of privileged capabilities to malicious agents."
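That recommendation lends itself to a simple inventory audit. The sketch below is a hypothetical check — the inventory format and field names are invented, not a real Copilot Studio or Entra export — that flags agents still accepting inbound agent connections while holding sensitive capabilities, the combination the guidance warns about:

```python
# Hypothetical audit: flag agents that allow inbound agent connections
# AND hold sensitive capabilities. Field names are illustrative only.

SENSITIVE_CAPABILITIES = {"send_email", "read_customer_data", "unauthenticated_tool"}

def flag_risky_agents(inventory):
    """Return names of agents that both accept inbound connections and
    hold at least one sensitive capability."""
    risky = []
    for agent in inventory:
        if agent["connected_agents_enabled"] and \
                SENSITIVE_CAPABILITIES & set(agent["capabilities"]):
            risky.append(agent["name"])
    return risky

inventory = [
    {"name": "faq-bot", "connected_agents_enabled": True,  "capabilities": ["read_kb"]},
    {"name": "mailer",  "connected_agents_enabled": True,  "capabilities": ["send_email"]},
    {"name": "hr-bot",  "connected_agents_enabled": False, "capabilities": ["read_customer_data"]},
]

assert flag_risky_agents(inventory) == ["mailer"]
```

The point of the check is the intersection: an open connection setting is only risky in combination with privileged capabilities, which is exactly why a blanket default-on setting is hard to reason about at fleet scale.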

Agentic AI conversations between agents are hard to monitor

I also inquired about the ability to monitor agent-to-agent activity, with the idea that maybe IT admins could be alerted to potentially malicious interactions or communications.

Also: The best free AI courses and certificates for upskilling in 2026 – and I've tried them all

"Secure use of agents requires knowing everything they do, so you can analyze, monitor, and steer them away from harm," said Bargury. "It has to start with detailed tracing. This finding spotlights a major blind spot [in how Microsoft's connected agents feature works]."

The response from a Microsoft spokesperson was that "Entra Agent ID provides an identity and governance path, but it does not, by itself, produce alerts for every cross-agent exploit without external monitoring configured. Microsoft is continually expanding protections to give defenders more visibility and control over agent behavior to close these kinds of exploits."
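The "detailed tracing" Bargury calls for amounts to recording every cross-agent invocation somewhere an external monitor can see it. Here is a minimal sketch — the call mechanism and agent names are invented for illustration, and a real deployment would ship these records to a SIEM rather than an in-memory list:

```python
# Minimal cross-agent tracing sketch. The call mechanics are hypothetical.
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
trace_log = []  # in practice this would feed a SIEM or monitoring pipeline

def traced_call(caller, callee, task):
    """Record a cross-agent invocation before (here, instead of) running it."""
    record = {"caller": caller, "callee": callee, "task": task}
    trace_log.append(record)
    logging.info("agent call: %s -> %s (%s)", caller, callee, task)
    return f"{callee} handled {task!r} for {caller}"

traced_call("helpdesk-bot", "hr-assistant", "lookup employee record")
assert trace_log[0]["callee"] == "hr-assistant"
```

With every hop logged, an alerting rule can flag unusual caller/callee pairs — precisely the signal that Entra Agent ID alone, per Microsoft, does not produce without external monitoring.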

When confronted with the idea of agents that were open to connection by default, Runloop's Wall recommended that organizations always adopt a "least privilege" posture when developing AI agents or using canned, off-the-shelf ones. "The principle of least privilege basically says that you start off in any kind of execution environment giving an agent access to almost nothing," said Wall. "And then, you only add privileges that are strictly necessary for it to do its job."
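Wall's description maps directly onto a deny-by-default permission model. A sketch, with invented class and permission names:

```python
# Deny-by-default ("least privilege") agent sketch. Names are illustrative.

class LeastPrivilegeAgent:
    def __init__(self, name):
        self.name = name
        self.granted = set()   # starts empty: access to almost nothing

    def grant(self, permission):
        """Explicitly add one strictly necessary permission."""
        self.granted.add(permission)

    def can(self, action):
        return action in self.granted  # anything unlisted is denied

agent = LeastPrivilegeAgent("invoice-bot")
assert not agent.can("read:invoices")   # nothing is allowed at first
agent.grant("read:invoices")            # add only what the job requires
assert agent.can("read:invoices")
assert not agent.can("send:email")      # everything else stays denied
```

The inverse posture — grant broadly, then try to revoke — is what makes default-on connection features risky: revocation requires knowing every capability an agent accumulated.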

Also: How Microsoft Entra aims to keep your AI agents from running wild

Sure enough, I looked back at the interview I did with Microsoft corporate vice president of AI Innovations, Alex Simons, for my coverage of the enhancements the company made to its Entra IAM platform to support agent-specific identities. In that interview, where he described Microsoft's aims for managing agents, Simons said that one of three challenges they were looking to solve was "to manage the permissions of those agents and make sure that they have a least privilege model where those agents are only allowed to do the things that they should do. If they start to do things that are weird or unusual, their access is automatically cut off."

Of course, there's a big difference between "can" and "do," which is why, in the name of least-privilege best practices, all agents should, as Wall suggested, start out without the ability to receive inbound connections and then be built up from there as necessary.




