AI Agents Built for Teams: Shared Context and Transparency in Enterprise AI

Team Asana
March 19, 2026

The accountability gap

Enterprise AI agents are AI systems that can take actions inside shared workflows across teams and projects. The landscape has grown quickly as platform after platform has shipped its own flavor of AI assistance that touches shared team content.

Most organizations still can't get past pilots. Camunda's 2026 State of Agentic Orchestration & Automation report found only 11% of agentic AI use cases have reached production, with 73% of organizations reporting a gap between their AI ambitions and reality. A separate Dynatrace study found half of all AI projects stuck at the concept or pilot stage. The top barrier in both: trust.

The models are remarkably capable, but the bottleneck is accountability. When an AI agent acts on shared work affecting multiple people, everyone involved needs to know what it did, why, and within what boundaries. Most AI tools today don't answer these questions: actions happen in private threads or behind interfaces where the AI's involvement is invisible. When things go wrong (and with probabilistic systems, things will go wrong), there's no trail to follow.

We built AI Teammates on a premise that came directly from our history. For over a decade, Asana has been developing the work graph: a structured representation of who is doing what, by when, toward what goals, and in coordination with whom. The principles behind it (shared visibility, clear ownership, permission-aware access, structured communication) were designed to make human collaboration effective. Those same principles turned out to be exactly what's needed to make AI collaboration trustworthy.

Building on what already worked

Most AI agent products start from the model and work outward: here's what the AI can do, now let's figure out how it fits into your workflow. We started from the opposite direction. Asana already had strong opinions, built into the product over years of iteration, about how to achieve structure, accountability, and effortless collaboration on shared work.

The work graph encodes all of this. Every task, project, goal, and conversation exists within a web of relationships: who owns it, who's collaborating, what it contributes to, what depends on it. When teams use Asana, they're continuously building and refining this structured picture of their work.
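
As a rough illustration (the class and field names here are hypothetical, not Asana's actual schema), the work graph can be pictured as objects linked by typed relationships:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of work-graph nodes; field names are
# illustrative, not Asana's real data model.
@dataclass
class Task:
    name: str
    owner: str
    collaborators: list = field(default_factory=list)
    contributes_to: list = field(default_factory=list)  # goals this task rolls up to
    depends_on: list = field(default_factory=list)      # blocking tasks

launch = Task("Ship landing page", owner="maya",
              collaborators=["devon"], contributes_to=["Q3 launch goal"])
copy = Task("Draft page copy", owner="devon", depends_on=[launch])
```

The point of the structure is that every object carries its relationships with it: from `copy`, you can reach the task that blocks it, who owns that task, and which goal it serves.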

When we started building AI agents, the question became: what if the AI operated within this same structure? What if, instead of inventing a separate context model for the AI, we let it participate in the collaboration model we'd already built?

This is our core conceptual model behind AI Teammates: wherever possible, their capabilities match those of a human user within an organization. They get assigned tasks. They read and write comments. They show up in the same activity feeds. Their access to content is scoped so that collaborating with a Teammate never escalates anyone's permissions beyond what they already have. The infrastructure that keeps human collaboration organized and transparent extends naturally to AI, because the hard problems of enterprise AI (context, access control, coordination, visibility) were already problems we'd been solving for people.

What shared context looks like in practice

Most AI tools scope an agent to whatever a single user can see when they invoke it. AI Teammates work differently: they live in the same workspace where the team's actual work happens, alongside the projects, tasks, goals, and conversations that define what an organization is trying to accomplish.

When a Teammate is assigned a task, it receives context about that task and all the work immediately connected to it: the parent project, related goals, dependencies, and collaborators. It can also search across the broader work graph, pulling in relevant tasks, projects, goals, and people based on what's useful for the work at hand.
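
A minimal sketch of that context expansion, assuming a toy stand-in for the work graph (none of these function names are Asana APIs):

```python
# Hypothetical sketch: expand an assigned task into the immediately
# connected context described above.

class FakeGraph:
    """A toy stand-in for the work graph, keyed by task name."""
    def __init__(self):
        self.parents = {"Write brief": "Spring campaign"}
        self.goals = {"Spring campaign": ["Grow signups 20%"]}
        self.deps = {"Write brief": ["Approve budget"]}
        self.people = {"Write brief": ["maya", "devon"]}

    def parent_project(self, task): return self.parents.get(task)
    def related_goals(self, task):
        return self.goals.get(self.parents.get(task), [])
    def dependencies(self, task): return self.deps.get(task, [])
    def collaborators(self, task): return self.people.get(task, [])

def gather_context(task, graph):
    # Pull in the parent project, related goals, dependencies,
    # and collaborators connected to the assigned task.
    return {
        "task": task,
        "project": graph.parent_project(task),
        "goals": graph.related_goals(task),
        "dependencies": graph.dependencies(task),
        "collaborators": graph.collaborators(task),
    }

ctx = gather_context("Write brief", FakeGraph())
```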

This means multiple people interact with the same Teammate on shared work. A project manager assigns a task. A designer comments with constraints. An engineer adds technical context. The Teammate builds on all of it, accumulating working knowledge that spans collaborators and initiatives.

Because the work graph already captures how teams coordinate, the Teammate can reason about work the way a collaborator does: what's blocking the launch, who owns the next step, and what we agreed on last week. That kind of reasoning is only possible when the AI operates within the same structure the team already uses to stay aligned.

[Image: Asana AI Teammates Campaign Brief Writer]

How AI Teammates respect privacy

Like human users, AI Teammates are subject to explicit access controls throughout the work graph. Some tasks and projects are visible to everyone (and every AI Teammate) in the organization. Others require project and portfolio memberships, explicitly granted to individuals or inherited through teams.

Only a specific set of people (often an Asana team) is permitted to trigger actions from a Teammate. They collectively manage the Teammate's knowledge, guidance, and access to work.

AI Teammates get an additional safeguard: a Teammate's effective access is always bounded by the permissions of the person who triggers it. This allows Teammates to be given broad access to content while preventing anyone from using a Teammate to reach information it learned in a separate, private context.
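
Modeling access as a set of object IDs (a simplification; real permission systems are richer), the bounding rule above amounts to an intersection:

```python
# Sketch of the "bounded by the triggering user" rule. Access is
# modeled as a set of object IDs for illustration only.

def effective_access(teammate_access: set, triggering_user_access: set) -> set:
    """A Teammate may only act on objects BOTH it and the person
    who triggered it can see: the intersection of the two scopes."""
    return teammate_access & triggering_user_access

teammate = {"proj-a", "proj-b", "proj-private"}
user = {"proj-a", "proj-c"}
# The private project the Teammate knows about stays out of the
# triggering user's reach.
```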

[Image: AI Respecting Privacy]

Nondeterminism over rigid workflows

Different organizations structure the same work in completely different ways. Some track dependencies through task relationships, others through subtasks, others through project sections. We could have built a rigid execution model that works perfectly for one workflow style. Instead, we give the Teammate context on how relationships look in a given workspace, plus the ability to search and learn from past work, so it builds org-specific expertise. Much like a newly hired team member, it learns your team's way of working rather than forcing you into ours.

You can't draw a single flowchart of how the AI processes a request. That's deliberate. A deterministic system would work well for the fraction of teams that match our assumptions and fail for everyone else. Nondeterminism is what allows the same agent to adapt to radically different ways of organizing work. And because Asana's work graph already captures how each team structures its collaboration, the Teammate has a rich, org-specific foundation to learn from on day one.

[Image: Asana's nondeterministic AI workflows]

Simple memory with tight feedback loops

The Teammate's memory system is, at its core, a list of text facts linked to the objects they're relevant to. We started here rather than with a vector database or knowledge graph because we wanted to understand what a tight feedback loop could achieve before investing in heavier infrastructure.

What we found is that the loop does most of the heavy lifting. The Teammate operates in a shared context where users naturally correct it, refine its instructions, and interact with its outputs in the course of doing their work. Providing any path for learned facts to re-enter the context window on future turns allows for all sorts of emergent intelligence and customization. So we focused on a quick and simple implementation to start.

The architecture will evolve as the product matures, but the insight that keeps holding up is that a good feedback loop matters at least as much as storage sophistication. And because the memory is just text today, it's fully inspectable. Users can see every memory their Teammate holds, right on its profile.

Memory access follows the same permission model as everything else in Asana. A memory is only visible to people who can view the task where the Teammate was working when it was created. When an AI has access to multiple projects with different access levels, this is the line that prevents information from leaking across permission boundaries.
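
A minimal sketch of both properties, plain-text facts tied to the task where they were learned, and visibility scoped by who can view that task (the class and method names are hypothetical):

```python
# Illustrative sketch: memory as a list of text facts linked to
# objects, with visibility following task-level permissions.

class Memory:
    def __init__(self):
        self.facts = []  # (fact_text, task_id) pairs

    def remember(self, fact: str, task_id: str):
        self.facts.append((fact, task_id))

    def visible_to(self, viewable_tasks: set) -> list:
        # A fact re-enters context only for users who can view
        # the task where the Teammate learned it.
        return [fact for fact, task in self.facts if task in viewable_tasks]

mem = Memory()
mem.remember("Design reviews happen on Thursdays", "task-public")
mem.remember("Budget cap is $50k", "task-private")
```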

Action-level accountability that mirrors team workflows

When a Teammate executes work, users see a real-time indicator of its activity and can open AI action logs: a full trail of every action, such as tasks created, comments posted, searches performed, and objects modified. The concrete results are visible in the work graph to anyone with access, including people who didn't trigger the Teammate. Before performing any privacy-sensitive action, Teammates must get explicit user approval, a hard constraint built into the system.
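
Sketched in code, the two mechanisms above are a log of every action plus a hard gate on sensitive ones (the action names and log format here are invented for illustration):

```python
# Illustrative sketch: log every action, and block privacy-sensitive
# actions unless explicit approval has been granted.

SENSITIVE = {"share_externally", "modify_permissions"}

def perform(action: str, log: list, approved: bool = False) -> bool:
    if action in SENSITIVE and not approved:
        # Hard constraint: record the attempt, but don't act.
        log.append(("blocked_pending_approval", action))
        return False
    log.append(("done", action))
    return True

log = []
perform("create_task", log)
perform("share_externally", log)                 # blocked: needs approval
perform("share_externally", log, approved=True)  # proceeds after approval
```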

This accountability model extends how teams already work in Asana. When a human collaborator completes a task or posts an update, it's visible to the team. AI Teammates follow the same pattern. You can see what they did. You can ask them why, and they'll explain from the same context they used to make the decision. They won't take sensitive actions without permission. When something goes wrong, there's a clear record. And because Teammates operate in shared team spaces, this auditability is built in for everyone, not just the person who triggered them. In a landscape where most organizations cite transparency as a deployment barrier, we think concrete accountability (visible actions, inspectable memory, approval gates) is more valuable than abstract promises about explainability.

Why these properties are inseparable

Shared context without accountability is a liability. An AI that accumulates knowledge across multiple people's work but offers no way to inspect what it knows or how it acts will lose trust fast. People will withhold context from an AI they can't audit, which defeats the purpose of team-scoped design.

Accountability without shared context solves a simpler and less interesting problem. If the AI operates in isolated threads, auditability is straightforward. The real challenge is making AI accountable when it operates on shared, cross-functional work affecting people with different access levels and different stakes.

The access model ties them together. The years Asana spent building a permission model for human collaboration provided most of the foundation. Extending it to AI meant going further, ensuring that no one's access is escalated by working with a Teammate. Without that layered approach, team-scoped AI would be a non-starter for most organizations.

Where we are

Teams use AI Teammates for project management, documentation, research, and cross-functional coordination. The most common pattern we see: AI making coordination manageable at a scale where human attention breaks down. A Teammate assigned to a launch project follows the work from goal-setting through execution, surfacing what needs attention without anyone manually compiling a status update.

The positive feedback we've collected consistently emphasizes three qualities. Teammates are assignable and collaborative, working alongside you on shared tasks. They operate with auditable access, so you see what they did, and they ask before doing anything sensitive. And they build adaptive knowledge, getting more useful with every interaction as feedback and simple memories accumulate.

The entire industry is working through the gap between AI demo capabilities and reliable production performance. The teams finding real value with AI Teammates are the ones treating them like a new hire who needs context, feedback, and clear boundaries to be effective.

Internally, we’ve used AI Teammates for sensitive work including creating status reports, triaging bugs, and sequencing launches. We’ve leveraged the access control model and human checkpoints to give us confidence that humans and AI can collaborate safely and productively, regardless of the task at hand.

The agents that last won't be the ones with the most impressive demos. They'll be the ones that work the way teams already work: in shared spaces, with visible actions, building on collaboration patterns that teams already trust. Asana spent years learning how to make human teamwork structured, visible, and accountable. AI Teammates are the evolution of those principles in the age of collaborative AI.

Get started with AI Teammates

Your new teammates are ready to help. Here's how teams and organizations of any size can get started today.


This article was written by Cory Desautels, Software Engineer. Cory Desautels is an engineer on the AI Teammates team, where he works to build and scale Asana's collaborative agentic AI product.
