This page explains what we actually do — not what we aspire to. We describe the security measures that are built and running today.
Integration credentials you connect are encrypted with AES-256-GCM before being stored. Keys are never stored alongside the data they protect. Agents never see your raw credentials — they receive only the access tokens needed to complete their task.
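The encrypt-then-store pattern described above can be sketched as follows. This is a minimal illustration, not 2Hands' actual implementation: it assumes the `cryptography` package, and the function names and record shape are hypothetical. The key point it demonstrates is that the key lives apart from the ciphertext record, so the stored data alone is useless.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_credential(plaintext: bytes, key: bytes) -> dict:
    """Encrypt a credential with AES-256-GCM. The 32-byte key is held in a
    separate key-management store and is never saved beside the ciphertext."""
    nonce = os.urandom(12)  # unique 96-bit nonce per encryption
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return {"nonce": nonce, "ciphertext": ciphertext}  # stored record

def decrypt_credential(record: dict, key: bytes) -> bytes:
    """Decryption also verifies the GCM authentication tag, so any
    tampering with the stored record raises an error."""
    return AESGCM(key).decrypt(record["nonce"], record["ciphertext"], None)
```

Because GCM is an authenticated mode, a modified ciphertext fails to decrypt rather than silently yielding corrupted data.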
Each agent is given access only to the specific integrations and tools you explicitly connect to it. An agent running a research task has no access to your email. An outreach agent has no access to your codebase. Scope is set by you, not inferred by the system.
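In effect, this is a default-deny allowlist per agent. A minimal sketch of the idea, with hypothetical agent and integration names (the real scope store and API are not shown in this document):

```python
# Hypothetical scope table: each agent maps to the integrations
# the user explicitly connected to it during setup.
AGENT_SCOPES = {
    "research-agent": {"web_search", "notion"},
    "outreach-agent": {"gmail", "slack"},
}

def can_use(agent_id: str, integration: str) -> bool:
    """Default deny: access exists only if it was explicitly granted."""
    return integration in AGENT_SCOPES.get(agent_id, set())
```

An unknown agent, or an integration the user never granted, simply fails the check.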
Agents that are about to send an email, modify a file, or take any action you've flagged as sensitive will pause and ask for your confirmation first. You approve or reject directly from Slack or the 2Hands app. Nothing happens without your sign-off.
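The pause-and-confirm flow amounts to a gate in front of every flagged action. A simplified sketch under stated assumptions (the action names and return strings are illustrative; the real system delivers the prompt via Slack or the 2Hands app):

```python
from typing import Optional

# Action types the user flagged as sensitive during agent setup.
SENSITIVE_ACTIONS = {"send_email", "modify_file"}

def run_action(action: str, user_approved: Optional[bool]) -> str:
    """Sensitive actions wait for an explicit decision; everything
    else proceeds. None means no response has arrived yet."""
    if action in SENSITIVE_ACTIONS:
        if user_approved is None:
            return "paused: awaiting confirmation"
        if not user_approved:
            return "rejected"
    return f"executed: {action}"
```

Until the user responds, the sensitive action stays parked; a rejection discards it entirely.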
Every action an agent takes is logged — what it did, when, and what the outcome was. You can review any agent's activity from the dashboard at any time. Logs are retained so you can trace exactly what happened on any given run.
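Each log entry captures the three facts named above: what the agent did, when, and the outcome. A minimal append-only sketch (field names are hypothetical, not the dashboard's actual schema):

```python
import time

def log_action(audit_log: list, agent_id: str, action: str, outcome: str) -> None:
    """Append one immutable audit record; entries are never edited,
    only added, so a run can always be replayed from the log."""
    audit_log.append({
        "agent": agent_id,
        "action": action,
        "outcome": outcome,
        "timestamp": time.time(),  # when the action completed
    })
```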
Your workspace data is fully isolated from other workspaces. There is no shared state or data between teams. Agents, missions, and results are scoped to your workspace only.
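Isolation of this kind usually means every stored record carries a workspace identifier and every read path filters on it. A toy in-memory sketch of that invariant, with made-up workspace IDs:

```python
# Hypothetical store: every record is tagged with its workspace,
# and reads are always scoped, so cross-workspace access can't occur.
RECORDS = [
    {"workspace_id": "ws-a", "mission": "research"},
    {"workspace_id": "ws-b", "mission": "outreach"},
]

def missions_for(workspace_id: str) -> list:
    """Return only the records belonging to this workspace."""
    return [r for r in RECORDS if r["workspace_id"] == workspace_id]
```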
We do not use your tasks, agent outputs, credentials, or any workspace data to train AI models — ours or anyone else's. Your data is used only to run the work you assign.
No. Each agent has access only to the integrations you explicitly assign to it during setup. Other integrations in your workspace are not visible to agents that weren't given access.
Any OAuth tokens or API keys you authorize are encrypted before they are stored. The raw credential is never logged or exposed to the agent directly — the agent uses it only at the moment it needs to take an action.
When setting up an agent you can flag specific action types — like sending emails or posting to Slack — as requiring confirmation. The agent will always pause before those actions and wait for your response.
No. Your workspace data — tasks, outputs, credentials, agent history — is never used to train any AI model.
Access to customer data is strictly limited to situations where it's required to diagnose a technical issue you've reported, and only with your consent. We don't browse customer workspaces.