AI Agent for Multi-Agent Research Pipeline
Three agents collaborate on research; none of them can do the others' jobs.
The problem
Serious research requires multiple steps: gathering sources, analyzing data, and writing coherent reports. A single agent doing all three needs broad access — internet for gathering, file access for analysis, and output capabilities for writing. That's a large attack surface.
Multi-agent frameworks split the work, but most share a common runtime, a shared memory space, or a message bus that any agent can read. The scraper that fetches URLs from the open internet runs in the same trust context as the writer that produces your final report. A prompt injection in a fetched web page could influence the final output — or worse, use the writer's access to modify files on your system.
How ConspiracyOS handles it
Three agents, three Linux users, three completely separate scopes:
- Scraper agent — has outbound HTTPS access via nftables. Can fetch web pages and save raw content to its outbox. Cannot read the analyzer's workspace, the writer's drafts, or any file outside its own directories.
- Analyzer agent — has no network access at all. Reads raw content from the scraper's outbox (granted by a targeted ACL). Processes, filters, and structures the data. Writes analysis to its own outbox. Cannot access the internet or see the writer's workspace.
- Writer agent — has no network access. Reads structured analysis from the analyzer's outbox. Writes the final report. Cannot fetch URLs or read the scraper's workspace.
Coordination happens through scoped inboxes with POSIX ACLs. The kernel enforces every boundary.
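The layout above can be sketched with standard Linux tooling. This is a minimal illustration, not the actual ConspiracyOS setup: the user names (scraper, analyzer, writer) and outbox paths are assumptions chosen for the example.

```shell
# One Linux user per agent; mode-700 homes keep each workspace private.
useradd --create-home --shell /usr/sbin/nologin scraper
useradd --create-home --shell /usr/sbin/nologin analyzer
useradd --create-home --shell /usr/sbin/nologin writer
chmod 700 /home/scraper /home/analyzer /home/writer

# Each producing agent gets its own outbox for handoff files.
install -d -o scraper  -g scraper  -m 700 /home/scraper/outbox
install -d -o analyzer -g analyzer -m 700 /home/analyzer/outbox

# Targeted ACLs open exactly one read path per handoff.
# The analyzer may traverse into the scraper's home (execute only,
# no listing) and read the scraper's outbox...
setfacl -m u:analyzer:x /home/scraper
setfacl -R -m u:analyzer:rX /home/scraper/outbox
setfacl -d -m u:analyzer:rX /home/scraper/outbox   # applies to new files too
# ...and the writer may read only the analyzer's outbox.
setfacl -m u:writer:x /home/analyzer
setfacl -R -m u:writer:rX /home/analyzer/outbox
setfacl -d -m u:writer:rX /home/analyzer/outbox
```

The execute-only ACL on the home directory lets the downstream agent reach the outbox without being able to list or read anything else in the upstream agent's workspace; the default (`-d`) ACL ensures files dropped into the outbox later are readable too.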
OS isolation controls what agents can do, not what they can say. A compromised agent can produce bad output to its outbox — but it cannot access other agents' data, escalate its permissions, or reach unauthorized services.
What this agent can't do
- The scraper can't read analyzed data or final reports — ACLs block access to other agents' directories
- The analyzer can't access the internet — nftables drops all outbound traffic from its UID
- The writer can't fetch URLs or access raw sources — no network and no read permission on the scraper's workspace
- No agent can read another agent's workspace — each runs as a separate Linux user with mode 700 home directories
- No agent can modify its own instructions, permissions, or scope — config files are root-owned and immutable
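The per-UID network rules and immutable configs described above could look roughly like this under nftables. Again a sketch under assumed names: the user names and the `/etc/agents/*.conf` path are hypothetical, and the real rule set may differ.

```shell
# Filter outbound traffic by the UID that owns the socket.
nft add table inet agents
nft add chain inet agents out '{ type filter hook output priority 0 ; policy accept ; }'

# Analyzer and writer: no network at all.
nft add rule inet agents out meta skuid analyzer drop
nft add rule inet agents out meta skuid writer drop

# Scraper: DNS and outbound HTTPS only, everything else dropped.
nft add rule inet agents out meta skuid scraper udp dport 53 accept
nft add rule inet agents out meta skuid scraper tcp dport 443 accept
nft add rule inet agents out meta skuid scraper drop

# Agent configs are root-owned and flagged immutable: even a process
# running as the agent's own user cannot rewrite its instructions.
chown root:root /etc/agents/scraper.conf
chmod 644 /etc/agents/scraper.conf
chattr +i /etc/agents/scraper.conf
```

Because `meta skuid` matches at the socket level, the rules hold no matter what code the agent runs; the `chattr +i` flag means even root must explicitly clear it before the config can change.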
If a scraped page contains a prompt injection, the scraper might produce bad data — but it still can't access the analyzer's workspace, reach unauthorized services, or modify its own permissions. The damage is limited to one agent's output, not your system.
What you get
- End-to-end research pipeline — from raw sources to polished briefing, hands-free
- Action containment — a compromised agent can produce bad output but can't take bad actions
- Auditable data flow — every handoff between agents is a file in a scoped inbox, logged and traceable
- Reusable pipeline — swap the writer for a different report style, or add a translator downstream, without changing any other agent's scope
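The auditable-data-flow point can be made concrete with the Linux audit framework. Assuming the hypothetical outbox path from the earlier sketch, a watch on a handoff directory records every access along with the acting UID:

```shell
# Log every read, write, and attribute change on the analyzer's outbox.
auditctl -w /home/analyzer/outbox -p rwa -k agent-handoff

# Later: list which user touched which handoff file, and when.
ausearch -k agent-handoff --interpret
```

Since every inter-agent handoff is just a file appearing in a scoped inbox, this one watch captures the entire data flow between two agents.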
Get started in 2 minutes
Tell your concierge what you need
conos "Research the current state of battery technology for EVs. Gather sources, analyze the key trends, and write a 2000-word briefing."
ConspiracyOS sets up the right agents with the right permissions automatically.