Architecture & Security

This page is for technical teams.

How The Agent Within fits into your AWS stack

Scaffold deployed in under an hour once kickoff inputs and approvals are in place. First tailored enterprise agent delivered in 2–8 weeks, depending on integrations and scope.

We’ll walk through your current AWS setup, security requirements, and exactly how the deployment would land in your environment.

High-level

High-level architecture

At a glance, the core request flow looks like this (exact components can vary by posture and integrations).

Frontend (CloudFront + S3) → API Gateway → Lambdas → Agent runtime on EC2 (Docker) → Amazon Bedrock / external model APIs → Aurora PostgreSQL + S3 + Bedrock Knowledge Bases → CloudWatch logs/metrics.

Built using AWS-native services and aligned to reference architectures your cloud team will recognize. For a functional breakdown of each component, see the Product page.
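
For orientation, here is a minimal sketch of the runtime-to-Bedrock hop using the boto3 Converse API; the model ID, region, and prompt are illustrative placeholders, not the scaffold's fixed configuration.

```python
# Minimal sketch: agent runtime calling Amazon Bedrock via boto3.
# Model ID, region, and prompt are illustrative placeholders.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model; chosen per your posture
    messages=[{"role": "user", "content": [{"text": "Summarize the attached policy document."}]}],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```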

AWS footprint

A concise view of the primary building blocks (services may vary depending on posture).

Deployment model

AWS-native & in-account deployment

The core model is simple: it runs in your AWS environment.

In your account

Runs within a dedicated AWS account, or an existing account you designate.

You keep control

Services are provisioned in your account under your control (S3, Aurora, EC2, Bedrock, etc.).

Licensed software, your data

We provide the application components as licensed software; your data and configuration remain yours and stay in your account.

This keeps your existing security, monitoring, and compliance processes in place instead of introducing a separate SaaS platform.

Baseline

Data, networking & security baseline

Defaults are designed to be reviewable, familiar, and adjustable during deployment.

Data

  • Documents stored in S3 buckets in your account.
  • Embeddings live in Bedrock Knowledge Bases and/or Aurora (pgvector) depending on configuration; see the retrieval sketch after this list.
  • Chat history + app/config data stored in Aurora (pgvector keeps vectors + relational together).
  • No data leaves your AWS account unless you explicitly choose external LLM APIs.
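
As a concrete example of the Knowledge Bases option, a minimal retrieval sketch using boto3 follows; the knowledge base ID and query are placeholders, and an Aurora (pgvector) configuration would query the database directly instead.

```python
# Illustrative retrieval from a Bedrock Knowledge Base (ID and query are placeholders).
import boto3

kb = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

results = kb.retrieve(
    knowledgeBaseId="EXAMPLEKBID",  # hypothetical knowledge base ID
    retrievalQuery={"text": "What is our travel expense policy?"},
    retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 5}},
)

for item in results["retrievalResults"]:
    print(item["content"]["text"][:120], item.get("score"))
```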

Networking

  • Deployed into your AWS account within a VPC; public or private subnet patterns are used based on your requirements (databases/internal services are typically private).
  • End-user traffic enters via CloudFront and API Gateway over HTTPS only.
  • Service-to-service traffic is controlled with VPC routing + security groups, and can be fully private (e.g., VPC endpoints/PrivateLink) if required; see the sketch after this list.
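
To make the fully private pattern concrete, here is a sketch that adds an interface VPC endpoint for Bedrock and an Aurora ingress rule scoped to the runtime's security group; all IDs and the region are placeholders, and in practice this wiring lives in your infrastructure-as-code rather than ad-hoc API calls.

```python
# Illustrative private-networking setup (all IDs and the region are placeholders;
# in practice this is managed through your IaC, e.g. Terraform/CloudFormation/CDK).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoint so the agent runtime reaches Bedrock without leaving the VPC.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    VpcEndpointType="Interface",
    ServiceName="com.amazonaws.us-east-1.bedrock-runtime",
    SubnetIds=["subnet-0aaa1111bbbb2222c"],
    SecurityGroupIds=["sg-0endpoint00000001"],
    PrivateDnsEnabled=True,
)

# Allow only the runtime's security group to reach Aurora on the PostgreSQL port.
ec2.authorize_security_group_ingress(
    GroupId="sg-0aurora000000001",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": "sg-0runtime00000001"}],
    }],
)
```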

Security configuration

  • IAM roles scope permissions following least-privilege patterns; see the policy sketch after this list.
  • Security groups restrict network access between components (e.g., API/Lambda ↔ runtime, runtime ↔ Aurora) to only what’s required.
  • Your team can review/adjust IAM and SGs to match internal policies during deployment.
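
For illustration, a least-privilege sketch for the runtime role follows; the role name, bucket, and model ARN are placeholders your team would adapt during review.

```python
# Illustrative least-privilege inline policy for the agent runtime role.
# Role name, bucket, and model ARN are placeholders to adapt to your account.
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Read documents from one bucket only.
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-agent-documents/*",
        },
        {   # Invoke a single approved Bedrock model.
            "Effect": "Allow",
            "Action": ["bedrock:InvokeModel"],
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0",
        },
    ],
}

iam.put_role_policy(
    RoleName="agent-runtime-role",  # hypothetical role name
    PolicyName="agent-runtime-least-privilege",
    PolicyDocument=json.dumps(policy),
)
```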

For base component wiring, see Product → What you get on day one.

Roadmap

Model and multi-cloud roadmap

AWS-native scaffold with Amazon Bedrock by default, plus optional support for external LLM APIs (e.g., OpenAI) when required.

Today

The Agent Within runs in AWS and supports Amazon Bedrock as the default model layer. If your use case requires it, the same runtime can be configured to call external LLM APIs (such as OpenAI), subject to your security posture and approvals.

Long-term

The design is intentionally portable: the long-term roadmap is to adapt the same “customer owns the environment” pattern to other clouds and infrastructure platforms. The principle stays constant: your environment, your data, our scaffold.

Ops

Observability & reliability

Your ops team can see what’s happening and harden the system as usage grows.

Logs

CloudWatch Logs is enabled for the Lambdas and the agent runtime, supporting inspection and incident response.
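
As one example of how those logs can be inspected, here is a hedged sketch of a CloudWatch Logs Insights query via boto3; the log group name and query string are placeholders.

```python
# Illustrative Logs Insights query over the agent runtime's log group.
# Log group name and query string are placeholders.
import time
import boto3

logs = boto3.client("logs", region_name="us-east-1")

query = logs.start_query(
    logGroupName="/agent-within/runtime",  # hypothetical log group
    startTime=int(time.time()) - 3600,     # last hour
    endTime=int(time.time()),
    queryString="fields @timestamp, @message | filter @message like /ERROR/ | sort @timestamp desc | limit 20",
)

# Poll until the query finishes, then print each matched row.
while True:
    results = logs.get_query_results(queryId=query["queryId"])
    if results["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in results.get("results", []):
    print({f["field"]: f["value"] for f in row})
```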

Metrics & alerts

Key signals such as API errors, agent failures, and latency spikes are tracked so issues surface early.
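
A minimal sketch of one such alarm (API Gateway 5XX errors routed to an SNS topic) follows; the API name, topic ARN, and thresholds are placeholders for your ops team to tune.

```python
# Illustrative CloudWatch alarm on API Gateway 5XX errors.
# API name, SNS topic ARN, and thresholds are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="agent-api-5xx-errors",
    Namespace="AWS/ApiGateway",
    MetricName="5XXError",
    Dimensions=[{"Name": "ApiName", "Value": "agent-api"}],  # hypothetical API name
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=5,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
)
```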

Optional integration with your observability stack

If you use Datadog, New Relic, or OpenTelemetry, we can scope an integration as part of the engagement (or keep everything in CloudWatch).



If you don’t have a dedicated DevOps team, we can include monitoring setup and operational support as part of your package.


Advanced scaling/HA options (ECS/EKS, autoscaling, multi-AZ) are available via on-demand extensions on the Product page.

Governance

Governance & guardrails

Beyond infrastructure security, some teams also require governance controls over what agents can say and do. These capabilities can be delivered as part of an engagement based on your requirements.

Optional central guardrails

  • Blocked content categories, PII handling, and policy rules applied consistently across agents.
  • Configuration-first approach (not ad-hoc filters scattered in code); see the configuration sketch after this list.
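
As an illustration of the configuration-first approach, the sketch below creates an Amazon Bedrock guardrail with one content filter and one PII rule; Bedrock Guardrails is one possible implementation, and the names, categories, and messages shown are placeholders, with the actual policy set defined with you during the engagement.

```python
# Illustrative Bedrock guardrail: one content filter plus one PII rule.
# Names, categories, and messages are placeholders defined per engagement.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

guardrail = bedrock.create_guardrail(
    name="example-central-guardrail",
    blockedInputMessaging="This request is outside the allowed policy.",
    blockedOutputsMessaging="The response was blocked by policy.",
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [
            {"type": "EMAIL", "action": "ANONYMIZE"},
        ]
    },
)

print(guardrail["guardrailId"], guardrail["version"])
```

The returned guardrail ID and version can then be referenced at inference time so the same policy applies consistently across agents.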

Optional profiles by team/tenant

  • Different risk posture per team/use case while using the same backbone.
  • Option to separate or scope access based on org needs.

Optional audit workflows

  • Logging + review flows for flagged conversations so compliance/legal teams can audit.
  • Iterate policies over time based on evidence and usage patterns.

For related platform capabilities, see the Product page → On-demand extensions.