Imagine this: Sam from procurement left three months ago. Standard offboarding: laptop returned, email forwarded, badge deactivated. But the AI agent she'd built to automate equipment orders? Still running.
Last Tuesday it placed an order for 500 laptops. No approval, no human review. No one even knew the agent existed until the vendor called to confirm a $340,000 purchase order.
This is the ghost agent problem. An employee leaves, IT kills their account, but the agents they created keep running because they authenticate with their own credentials.
The result is an autonomous system with no accountability and no kill switch.
What are ghost agents?
Ghost agents are AI agents that keep functioning after the employee who created them has left the organization. IT deactivates the employee's account during offboarding, but the agent keeps running because it uses its own credentials (API keys, service account passwords, OAuth tokens) that authenticate independently.
Traditional offboarding doesn't catch this. When you kill an employee's account, you're revoking their personal credentials. The agent's credentials are separate, stored in a config file, a secrets manager, or hardcoded in a script, and they stay active after the human is gone.
The identity system may record that Sam created a procurement agent, but nothing prevents that agent from placing orders after Sam's account is deactivated.
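A minimal sketch of the pattern, with a hypothetical endpoint, key name, and SKU: the agent authenticates with its own API key pulled from its environment, so nothing in the request depends on Sam's user account still existing.

```python
import os
import requests

# Hypothetical procurement endpoint and environment variable; the point is that
# the agent authenticates with its own credential, not its creator's account.
PROCUREMENT_API = "https://procurement.example.com/api/orders"
AGENT_API_KEY = os.environ["PROCUREMENT_AGENT_API_KEY"]  # outlives Sam's employment

def place_order(sku: str, quantity: int) -> dict:
    """Place an order using the agent's own API key."""
    response = requests.post(
        PROCUREMENT_API,
        headers={"Authorization": f"Bearer {AGENT_API_KEY}"},
        json={"sku": sku, "quantity": quantity},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Deactivating Sam's user account changes nothing here: the agent's key is still valid.
place_order(sku="LAPTOP-15", quantity=500)
```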
The agent ownership fallacy
Most IT teams assume deactivating an employee's account revokes access for anything that employee created. This works for systems that authenticate using the employee's credentials, but AI agents break this model.
They carry their own credentials, creating a separate identity that stays active after the human is gone. IT completes offboarding and considers the matter closed, but the agent, authenticating independently via its own API key, keeps working.
How common are ghost agents in enterprise environments?
Right now, the average enterprise environment contains over 800 risky AI agents. Forty percent of them carry medium-to-critical risk factors, and only 15% of organizations have near-full or full ownership of their agents.
The problem compounds as agent adoption accelerates. I'm talking to enterprises deploying hundreds of agents across departments, and most of them lack governance processes to track ownership, review permissions, or decommission agents when they're no longer needed.
Each departing employee potentially leaves behind multiple agents, and they accumulate over time. Six months later, you've got dozens of orphaned agents making decisions with no human oversight.
What happens when ghost agents run unsupervised?
Ghost agents create four categories of risk that compound the longer they run unsupervised: financial damage from unauthorized spending, security exposure from unmonitored credentials, compliance failures from broken audit trails, and reputation damage from public mistakes.
Unauthorized purchases
When a ghost agent holds valid credentials to procurement systems, cloud infrastructure APIs, or subscription management platforms, unauthorized spending continues until someone notices.
Hypothetical case in point: A CISO at a financial services company got a call from finance asking why cloud spending hadn't dropped after they cancelled a major project. It took two weeks to trace it back to a ghost agent still provisioning development environments for code that no longer existed. $47,000 over eight months, caught only because someone in finance was watching the trend line.
Another one: A manufacturing company found a ghost agent ordering replacement parts for decommissioned equipment. The agent placed orders totaling $23,000 over five months before someone in receiving noticed parts were arriving for machines that no longer existed. The agent was following its programmed logic perfectly; that logic was just built on an equipment list six months out of date.
The purchases clear because the agent holds valid credentials and proper authorization, but no human reviews whether they still make sense.
Unexpected security risks
Ghost agents often have broad permissions because they were created to automate complex workflows. When the human owner leaves, those permissions remain in place, creating an unmonitored attack surface that nobody's watching.
If an attacker obtains the agent's credentials, they gain access without triggering the alerts that would fire if a human account were compromised. Security teams monitor for unusual human behavior: logins from new locations, access at odd hours. Agents are supposed to run autonomously, so their activity looks normal even when it's not.
Consider what a customer service agent typically touches: customer records across multiple databases, support ticket systems, CRM platforms, billing information. An infrastructure automation agent might have write access to production systems, the ability to provision cloud resources, and permissions to modify network configurations. A data pipeline agent could have access to financial reporting tools, analytics platforms, and sensitive business intelligence. All of that access stays active after the owner leaves, and nobody's monitoring what the agent does with it.
Each agent has credentials stored somewhere: a config file, a secrets manager, environment variables, or a key hardcoded in a script. Without an owner to maintain them, those credentials are never rotated, never reviewed, never decommissioned.
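To make that concrete, here's a deliberately simplified audit sketch (the inventory, owner field, and employee list are all hypothetical): flagging an orphaned or stale credential requires ownership metadata that, in most environments, was never recorded in the first place.

```python
from datetime import date, timedelta

# Hypothetical inventory of stored agent credentials. In reality these live in
# config files, environment variables, or a secrets manager, often with no
# ownership metadata attached at all.
stored_credentials = [
    {"name": "procurement-agent-key", "owner": "sam", "last_rotated": date(2024, 1, 10)},
    {"name": "infra-agent-key", "owner": None, "last_rotated": date(2023, 6, 2)},
]

active_employees = {"alice", "bob"}  # sam has been offboarded

def find_orphaned(credentials, employees, max_age_days=90):
    """Flag credentials with no active owner or that haven't been rotated recently."""
    today = date.today()
    for cred in credentials:
        no_owner = cred["owner"] not in employees
        stale = today - cred["last_rotated"] > timedelta(days=max_age_days)
        if no_owner or stale:
            yield cred["name"]

print(list(find_orphaned(stored_credentials, active_employees)))
# Without an owner recorded next to each credential, even this basic check can't run.
```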
How do ghost agents impact regulatory compliance?
Regulatory frameworks increasingly require organizations to demonstrate accountability for automated systems. When an AI agent takes an action, auditors want to know who authorized it, who reviewed it, who is responsible if something goes wrong.
Ghost agents break the audit trail. There's no responsible human to attest that the agent's actions are appropriate, no owner to review permissions, no one to answer questions about why the agent did what it did.
When auditors show up asking who authorized this agent's decisions, and the answer is "Sam, who left months ago," you've got a compliance problem.
GDPR Article 22 gives individuals the right not to be subject to solely automated decisions that produce legal or similarly significant effects, and requires safeguards such as human intervention and the ability to explain the decision. When a ghost agent makes such a decision, there's no human in the loop and no owner to provide that accountability.
SOC 2 requires that access to systems be reviewed and approved by responsible parties, but the audit trail for a ghost agent points to a service account with no connection to a current employee.
Ghost agents & company reputation
Some AI agents interact directly with customers, partners, or the public. Think customer service bots, social media agents, and automated email responders. When these agents are orphaned, their mistakes become public relations problems.
I'm seeing customer service agents continue responding to inquiries using outdated policies, telling customers about promotions that ended months ago or referencing products that have been discontinued. Social media agents keep posting about product lines the company has sunset. Email agents send automated responses that contradict current company messaging.
These mistakes damage customer trust and create support burdens for teams who have to clean up the incorrect information. The agent is doing exactly what it was programmed to do, but the programming is based on business logic that's no longer valid, and there's no human reviewing the output to catch the errors before they go public.
The IAM solution gap
Most enterprises assume their existing security controls will catch ghost agents, but traditional solutions don't control what agents do at runtime.
Identity management: Tracks ownership, not runtime actions
Identity management systems like Microsoft Entra, Okta, and ServiceNow establish who created an agent and who owns it. They provide visibility into the agent inventory and enable governance processes like access reviews and lifecycle management. This is useful for tracking what exists, but it doesn't control what the agent does after the owner leaves. That's the difference between registration-time identity and runtime authorization.
When an owner's account is deactivated, the identity management system does not automatically revoke the agent's credentials. The agent's API key is still valid, the purchasing system sees a legitimate request, and the order goes through.
Static credential rotation: Agent survives rotation
Credential rotation policies require API keys and service account passwords to be changed regularly, limiting credential lifespan and reducing the window of exposure if credentials are compromised.
This is good security hygiene for active systems, but it doesn't solve the ghost agent problem because the agent's credentials get rotated along with everything else.
Here's what happens: the rotation process runs on a schedule, updating credentials for all service accounts including the ghost agent's. The agent's config updates automatically, without human review, so no one notices the agent's owner has left.
So the agent gets fresh credentials, and keeps running.
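A minimal sketch of why, with an in-memory dictionary standing in for whatever secrets manager the rotation job actually talks to: the job refreshes every key on its schedule, and nothing in the loop asks whether the key's human owner is still employed.

```python
import secrets

# Hypothetical stand-in for a secrets store (Vault, AWS Secrets Manager, etc.).
secret_store = {
    "procurement-agent": "old-key-aaa",  # owned by Sam, who has left
    "reporting-agent": "old-key-bbb",    # owned by a current employee
}

def rotate_all(store: dict) -> None:
    """Rotate every service credential on a schedule."""
    for account in store:
        store[account] = secrets.token_hex(16)  # mint a fresh key
        # Nothing here checks whether the human who owns `account` still has an
        # active employee account, so the ghost agent's key is refreshed too.

rotate_all(secret_store)
print(secret_store)  # the procurement agent now holds a brand-new, fully valid key
```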
Post-facto governance: Reactive, not preventive
Access reviews require managers to attest that their team members' access is appropriate, catching access that's no longer needed. This works for human users, but it doesn't catch ghost agents because ghost agents don't appear in the review.
The access review lists human users and their permissions. Sam has access to the procurement system, her manager reviews it and confirms it's appropriate. But the review doesn't list the agents she created, so when Sam leaves and her manager removes her access, the agent isn't part of that review. By the time someone discovers the ghost agent, it's already caused damage.
Manual decommissioning: Human error and time gaps
Some organizations require employees to document the agents they've created and decommission them during offboarding. IT adds a checklist item: "List all AI agents you created and provide credentials so we can shut them down."
This process fails for predictable reasons. Employees can forget which agents they created, especially if they created them months or years ago. Even when employees provide a list, IT may not have the tools to decommission agents properly.
The agent might be running in a third-party platform IT doesn't manage, or embedded in a workflow that breaks if removed. Manual processes don't scale when you're offboarding dozens of employees per month and each has created multiple agents.
How runtime authorization eliminates ghost agents
Runtime authorization provides an automatic kill switch for ghost agents by tying agent credentials to the human owner's employment status. When the owner's account is deactivated, the agent stops working immediately, without manual intervention or process gaps.
The authorization layer sits between the agent and the systems it accesses, intercepting requests and checking whether the action is authorized based on current policy. Part of that check includes verifying that the agent's human owner still has an active account. If the owner's account is active, the authorization layer issues a time-bound credential allowing the agent to proceed. If the owner's account is deactivated, the authorization layer denies the request and flags the agent as orphaned.
This works regardless of how the agent was created, what framework it uses, or where its credentials are stored.
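Here's a minimal sketch of that check in Python. The employee directory, agent registry, and 15-minute lifetime are illustrative stand-ins for a real identity provider and credential service.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical lookups; in practice these are queries against the identity
# provider (Entra, Okta, etc.) and an agent inventory.
ACTIVE_EMPLOYEES = {"alice", "bob"}  # sam has been offboarded
AGENT_OWNERS = {"procurement-agent": "sam", "reporting-agent": "alice"}

def authorize(agent_id: str, action: str) -> dict:
    """Authorize a single agent action based on the owner's current status."""
    owner = AGENT_OWNERS.get(agent_id)
    if owner is None or owner not in ACTIVE_EMPLOYEES:
        # Owner unknown or deactivated: deny the request and flag the agent.
        return {"allowed": False, "reason": "orphaned_agent", "agent": agent_id}
    # Owner is active: issue a short-lived credential scoped to this action.
    expires = datetime.now(timezone.utc) + timedelta(minutes=15)
    return {
        "allowed": True,
        "agent": agent_id,
        "action": action,
        "owner": owner,
        "expires_at": expires.isoformat(),
    }
```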
What happens when an employee with AI agents leaves?
Here's what the lifecycle looks like in practice:
Normal operation: Sam creates a procurement agent to automate equipment orders. When the agent attempts to place an order, it sends a request to the procurement system. The authorization layer intercepts the request, verifies her account is active, issues a short-lived credential, and the order proceeds.
Employee departure: Sam gives notice. On her last day, IT deactivates her account following standard offboarding. Email stops working, laptop access is revoked, badge is deactivated. Nothing special happens with the agent because IT doesn't need to know the agent exists.
Automatic revocation: The next time Sam's procurement agent attempts to place an order, the authorization layer intercepts the request and checks whether her account is active. It's not, so the authorization layer denies the request and flags the agent as orphaned. The agent can't proceed without a valid credential.
Agent halts: The agent is effectively decommissioned the moment Sam's account is deactivated. No manual intervention, checklist, or process gap. The kill switch is automatic.
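Picking up the authorization sketch from the previous section, the same call produces opposite outcomes depending on the owner's status:

```python
# Owner still active: the agent receives a short-lived credential and proceeds.
print(authorize("reporting-agent", "generate_report"))
# {'allowed': True, 'agent': 'reporting-agent', 'owner': 'alice', ...}

# Owner offboarded: the request is denied and the agent is flagged as orphaned
# the first time it tries to act after Sam's account is deactivated.
print(authorize("procurement-agent", "place_order"))
# {'allowed': False, 'reason': 'orphaned_agent', 'agent': 'procurement-agent'}
```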
The kill switch works because credentials are cryptographically tied to the human owner's identity. When the authorization layer issues a credential, it includes information about who owns the agent, and the credential is signed using public-key cryptography so any system can verify its authenticity.
When the owner's account is deactivated, the authorization layer stops issuing new credentials for any agents owned by that person. Existing credentials expire quickly, typically within minutes or hours, so the agent loses access almost immediately.
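One way to implement that binding, sketched with the PyJWT and cryptography libraries (the claim names, key handling, and 15-minute lifetime are illustrative, not a description of any particular product): the credential names the agent's human owner, is signed so downstream systems can verify it, and expires within minutes.

```python
from datetime import datetime, timedelta, timezone

import jwt  # PyJWT
from cryptography.hazmat.primitives.asymmetric import rsa

# Hypothetical signing key for the authorization layer; relying systems would
# hold only the public half and verify signatures with it.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def issue_agent_credential(agent_id: str, owner: str, ttl_minutes: int = 15) -> str:
    """Issue a short-lived, signed credential tied to the agent's human owner."""
    now = datetime.now(timezone.utc)
    claims = {
        "sub": agent_id,                              # which agent this is for
        "owner": owner,                               # the human it is tied to
        "iat": now,
        "exp": now + timedelta(minutes=ttl_minutes),  # expires quickly
    }
    return jwt.encode(claims, private_key, algorithm="RS256")

token = issue_agent_credential("procurement-agent", owner="sam")
# Any downstream system can check the signature, the expiry, and the owner claim.
# Once Sam is offboarded, no new tokens are issued, and this one dies at `exp`.
print(jwt.decode(token, public_key, algorithms=["RS256"]))
```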
This approach is instant because there's no manual intervention required. It's comprehensive because it applies to all agents owned by the departing employee, regardless of what they do or where they're deployed. And it's auditable because every authorization decision is logged, so security teams can see exactly when each agent stopped working, why it stopped, and what it attempted after the owner left.
Conclusion
The ghost agent problem is going to get worse as agent adoption scales. Every company deploying agents at speed is creating this exposure: agents that survive their creators, accumulating in the environment with no oversight and no kill switch. The financial damage, security exposure, compliance failures, and reputation risk compound over time.
Traditional solutions like identity management, credential rotation, access reviews, and manual decommissioning don't control what agents do at runtime. They track ownership and manage credentials, but they don't prevent ghost agents from continuing to operate after their owners leave.
Runtime authorization solves this by tying agent credentials to the human owner's employment status. When the owner's account is deactivated, the agent stops working immediately. No manual intervention, no process gaps, no checklist for IT to follow. The kill switch is automatic, comprehensive, and auditable.
The question isn't whether you have ghost agents. The question is how many you have and what they're doing right now.
Learn more about how 1Kosmos stops unauthorized execution, eliminates ghost agents, and links every action back to a human, or reach out to our team if you have any questions.
FAQs
What happens to an AI agent when its creator leaves the company?
Without runtime authorization, the agent continues operating with the same credentials it had when the creator was employed, becoming a ghost agent with no oversight. With runtime authorization, the agent's credentials are automatically revoked when the creator's account is deactivated, immediately blocking all actions until the agent is reassigned or decommissioned.
How do ghost agents differ from regular service accounts?
Service accounts are typically managed by IT teams with documented ownership and regular access reviews. Ghost agents are created by individual employees and often go undocumented, using credentials stored in places IT doesn't track. When the employee leaves, no one knows the agent exists or has responsibility for managing it.
Can ghost agents be detected through security audits?
Ghost agents can be difficult to detect because they don't appear in standard user access reviews and their credentials may be stored in configuration files or secrets managers that aren't regularly audited. Organizations often discover ghost agents only when investigating incidents like unauthorized purchases or unusual API activity patterns.
How does runtime authorization prevent ghost agents?
Runtime authorization ties every agent credential to a verified human owner and checks the owner's employment status each time the agent attempts an action. When the owner's account is deactivated during offboarding, the authorization layer automatically stops issuing credentials to any agents owned by that person, effectively decommissioning them without manual intervention.
What is the financial impact of ghost agents?
Ghost agents can generate significant unauthorized expenses through autonomous purchases, cloud infrastructure provisioning, or subscription renewals for discontinued projects. Organizations have reported ghost agents causing tens of thousands of dollars in unnecessary spending over periods of months before discovery, plus additional costs for investigating and reversing unauthorized transactions.
Do ghost agents violate compliance regulations?
Yes. GDPR, SOC 2, and industry-specific frameworks often mandate that automated systems have accountable human owners and that access is regularly reviewed. Ghost agents break the audit trail by operating without a responsible human, making it impossible to demonstrate who authorized their actions or reviewed their permissions.
About the author

Mike Engle
Co-Founder and CSO
Mike is the CSO and a co-founder of 1Kosmos with deep expertise in information security, product development, and business development across Fortune 100 financial institutions and early-stage startups.





