CAIBots Intelligence Hub · Technical Reference · FS & Capital Markets

Enterprise Agentic AI —
Architecture, Retrieval
& Compliance Reference

The complete technical knowledge base for CAIBots' agentic AI platform — from RAG pipeline internals and agent data-source mix, to provider stack decisions and EU AI Act risk classification. Every section is analytically linked: architecture choices determine retrieval design, which determines compliance tier.

10 FS Agent Archetypes · 7 Architecture Layers · 5 RAG Stack Layers · 4 EU AI Act Risk Tiers · <50ms Global Inference · SR 11-7 Model Risk Compliant
Module 01 — RAG Pipeline & Memory Architecture

Where Does the RAG Pipeline & Memory Sit Inside the Agent?

RAG = one of five peer tools at Source Selection · Memory = personalization + continuity + audit

Architecture v2.1
FS · Capital Markets
CAIBots © 2025
Key mental model: The RAG pipeline is a black-box tool the Agent calls at Source Selection — exactly like an API call or SQL query. The Agent does not know or care what happens inside. It calls the tool, receives ranked text chunks back, then passes them to Context Assembly. RAG is not special from the Agent's perspective. It is one of five peer tools.
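This mental model can be sketched in a few lines of Python: five callables behind one uniform interface, dispatched at Source Selection. All tool names, signatures, and return shapes below are illustrative assumptions, not CAIBots APIs.

```python
# Sketch of Source Selection: RAG is one of five peer tools behind a
# uniform call interface. The agent calls tools and receives results;
# it never looks inside them. All names/shapes here are illustrative.

def vector_rag(query):      # returns ranked text chunks
    return {"tool": "rag", "chunks": ["chunk-1", "chunk-2"]}

def api_call(query):        # returns structured JSON from live systems
    return {"tool": "api", "json": {"status": "ok"}}

def sql_query(query):       # returns tabular rows
    return {"tool": "sql", "rows": [("acct", 42)]}

def knowledge_graph(query): # returns entity paths
    return {"tool": "kg", "paths": [["Entity A", "owns", "Entity B"]]}

def memory_store(query):    # returns user context + history
    return {"tool": "memory", "history": ["prior decision"]}

PEER_TOOLS = {
    "rag": vector_rag,
    "api": api_call,
    "sql": sql_query,
    "kg": knowledge_graph,
    "memory": memory_store,
}

def source_selection(query, selected):
    """Call one or more peer tools; outputs flow on to Context Assembly."""
    return [PEER_TOOLS[name](query) for name in selected]

results = source_selection("What is my card limit?", ["api", "rag", "memory"])
```

From the agent's perspective the RAG entry in `PEER_TOOLS` is indistinguishable from the API or SQL entries — which is the point.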
💬
1  ·  User Query
Natural language question or instruction — entry point for the entire agent pipeline
⚙️
2  ·  Agent Orchestrator
Runtime control loop — manages tool calls, retries, sub-agent delegation, and state  ·  Frameworks: LangGraph · AutoGen · ReAct
📋
3  ·  Task Planning
Decomposes query into sub-tasks · determines which sources are needed · sets retrieval order and dependencies
decides which tools to call
🎯
4  ·  Source Selection
Agent picks one or more tools based on sub-task type — may call all five in parallel for complex queries
Entire RAG Pipeline Lives Here
Vector RAG
Documents · Policies · Reports · Filings
1
Chunking
600–1000 token pieces · LangChain · LlamaIndex
2
Embedding
Dense vectors · OpenAI Ada · HuggingFace BGE
3
Vector Store
Index & persist · Pinecone · pgvector · FAISS
4
Retriever
Cosine similarity · top-k · MMR · BM25 hybrid
5
Prompt Assembly
Format for LLM · LangChain · custom templates
↳ Returns: ranked text chunks to Agent
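The five layers can be walked end to end in a toy sketch. The bag-of-words "embedding" and word-count chunker below are deliberate stand-ins — a production stack swaps in a dense model (BGE, Ada) and a real vector store (Pinecone, pgvector) — but the data flow is the same.

```python
# Toy end-to-end walk through the five RAG layers. The bag-of-words
# embedding and 8-word chunks are illustrative stand-ins only.
import math
from collections import Counter

def chunk(text, size=8):                        # 1 · Chunking
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):                                # 2 · Embedding (bag of words)
    return Counter(text.lower().replace(".", " ").replace("?", " ").split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, store, k=2):                # 4 · Retriever (cosine top-k)
    q = embed(query)
    ranked = sorted(store, key=lambda c: cosine(q, c["vec"]), reverse=True)
    return [c["text"] for c in ranked[:k]]

def assemble_prompt(query, chunks):             # 5 · Prompt Assembly
    context = "\n".join(f"- {c}" for c in chunks)
    return f"Context:\n{context}\n\nQuestion: {query}"

doc = ("Wire transfers above the daily limit require dual approval. "
       "Card disputes must be filed within sixty days of the statement date.")
store = [{"text": c, "vec": embed(c)} for c in chunk(doc)]   # 3 · Vector Store
top = retrieve("What is the deadline for card disputes?", store, k=1)
prompt = assemble_prompt("What is the deadline for card disputes?", top)
```

The retriever returns ranked text chunks — exactly the payload the Agent hands to Context Assembly.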
API Calls
Peer Tool
  • HTTP client + auth
  • Request / response cycle
  • JSON parsing
  • Live systems data
↳ Returns: structured JSON
SQL Queries
Peer Tool
  • DB connection pool
  • Query builder
  • Result set formatting
  • Data tables / DB
↳ Returns: tabular rows
Knowledge Graph
Peer Tool
  • Graph DB connect
  • Entity resolution
  • Path traversal
  • Relationship links
↳ Returns: entity paths
Memory Store
Peer Tool
External / Long-Term
Redis · VectorDB · KV store · Personalization, prior decisions
In-Context
Conversation history in prompt · Session continuity
KV Cache
Attention state reuse · Compute efficiency
In-Weights
Training / fine-tune knowledge · Lives in LLM box
↳ Returns: user context + history
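Of the four memory types, only the first two are code the agent actually calls (KV cache and in-weights memory live inside the LLM). A minimal sketch, assuming a dict in place of Redis/KV and a crude word-count budget in place of a tokenizer:

```python
# Sketch of the two callable memory tiers: an external long-term store
# (Redis/VectorDB/KV in production — a plain dict here) and in-context
# conversation history trimmed to a token budget. Illustrative only.

class MemoryStore:
    def __init__(self, context_budget=20):
        self.long_term = {}                 # user_id -> durable facts
        self.history = []                   # in-context conversation turns
        self.context_budget = context_budget  # crude word-count "token" budget

    def remember(self, user_id, fact):
        self.long_term.setdefault(user_id, []).append(fact)

    def add_turn(self, turn):
        self.history.append(turn)

    def recall(self, user_id):
        """Return user context + as much recent history as fits the budget."""
        kept, used = [], 0
        for turn in reversed(self.history):          # newest turns first
            words = len(turn.split())
            if used + words > self.context_budget:
                break
            kept.append(turn)
            used += words
        return {"user": self.long_term.get(user_id, []),
                "history": list(reversed(kept))}

mem = MemoryStore(context_budget=8)
mem.remember("u1", "prefers low-risk funds")
for t in ["turn one is long enough to drop", "keep this", "and this too"]:
    mem.add_turn(t)
ctx = mem.recall("u1")
```

The oldest turn is dropped once the budget is exceeded — session continuity survives, personalization persists in the long-term store.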
all tool outputs flow here
🔀
5  ·  Context Assembly
Merges, deduplicates & ranks results from all called tools into one coherent context window — handles conflicts, trims to token limit
RAG chunks ranked · API responses merged · SQL rows filtered · Graph paths resolved · Memory context injected
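The merge-deduplicate-trim logic of Context Assembly can be sketched as one function. The relevance scores and word-count "tokenizer" below are illustrative stand-ins.

```python
# Sketch of Context Assembly: merge all tool outputs, deduplicate,
# keep best-first rank order, trim to the context-window token limit.

def assemble_context(tool_outputs, token_limit):
    """tool_outputs: list of (score, text) tuples from all called tools."""
    seen, merged = set(), []
    for score, text in sorted(tool_outputs, key=lambda x: x[0], reverse=True):
        key = text.strip().lower()
        if key in seen:                  # drop near-identical duplicates
            continue
        seen.add(key)
        merged.append(text)
    window, used = [], 0
    for text in merged:                  # trim to budget, best-ranked first
        words = len(text.split())
        if used + words > token_limit:
            break
        window.append(text)
        used += words
    return window

outputs = [
    (0.91, "Card disputes must be filed within 60 days."),
    (0.91, "card disputes must be filed within 60 days."),   # duplicate
    (0.74, "Account 123 balance: 5,000 EUR."),
    (0.40, "Macro commentary: rates unchanged this quarter."),
]
window = assemble_context(outputs, token_limit=14)
```

The lowest-ranked item is dropped when the budget runs out — conflicts are resolved by rank, and the result is one coherent context window for LLM Reasoning.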
🧠
6  ·  LLM Reasoning
Receives assembled context + original query · synthesizes, infers, resolves conflicts · goes beyond retrieval to actual reasoning
Closed: Azure OAI / Claude
Open-Weight: Llama / Mistral
Fine-tuned: domain-adapted
★ In-Weights Memory lives here
7  ·  Final Answer + Citations
Grounded response with source attribution — every claim traceable to a specific document chunk, API response, DB row, or graph node
Source citations · Traceable grounding · HITL if required · SR 11-7 audit logged
🔗
Module 01 → Module 02
This pipeline is the engine inside every one of the 10 FS agents — but each agent weights it differently.

A Trading Desk Copilot runs 65% API / 8% RAG. An Investment Research Assistant runs 45% RAG / 15% API. The mix determines infrastructure cost, latency, and data residency requirements. The agent architecture table below maps the exact breakdown for all 10 agents, with model selection and HITL requirements.

Explore Agent Data-Source Mix → Module 02 ↓

Module 02 — Enterprise Agent Architecture

Expanded Data Source & Reasoning Mix Across 10 FS Agents

API · RAG · Knowledge Graph · Fine-Tuning · Memory · LLM Reasoning · Human-in-the-Loop. % allocations = relative data-source dependency per agent: the six data-source columns (API, SQL, RAG, KG, Fine-Tune, Memory) sum to 100%, with LLM Reasoning scored separately.

Version 2.0
API Calls (Live Systems)
SQL / Structured DB
RAG Documents
Knowledge Graph
Fine-Tuning
Memory / Personalization
LLM Reasoning
Human-in-the-Loop
AGENT TYPE · Structured Input · Unstructured / Knowledge · LLM · Governance · What the Agent Actually Does
API (Live Sys) · SQL (Structured) · RAG (Docs) · KG (Graph) · Fine-Tune · Mem (Personal.) · LLM (Reason.)
Model Type · HITL Required · Data Sources Used · KG + FT + Memory Focus · Agent Action Summary
💛
Customer Service Agent
35%
5%
30%
5%
10%
15%
25%
Azure OAI Escalation Only Account info, transaction history · FAQs, support manuals, complaint scripts Product → entitlement graph · Support tone & complaint handling · Sentiment trend Retrieves account data via API, pulls policy docs via RAG, generates conversational response with personalized history
🔍
Fraud Detection Agent
55%
15%
10%
12%
5%
3%
20%
Open-Weight Mandatory (SAR) Transaction streams, card alerts, device fingerprints, velocity checks · Fraud typology playbooks Fraud ring networks, mule account graphs · Pattern recognition · Customer risk profile Analyzes real-time transactions via API, traverses fraud graph, explains suspicious activity, drafts SAR narrative
📋
Insurance Claims Processing Agent
40%
15%
25%
8%
7%
5%
20%
Azure OAI Mandatory (high-value) Claims database, reserve calculations · Policy documents, coverage schedules, exclusion clauses Claim → policy → coverage relationships · Adjudication rules · Prior settlements Accesses claims systems, retrieves policy terms via RAG, interprets narrative, flags coverage gaps
🛡️
KYC / AML Compliance Agent
35%
15%
25%
15%
5%
5%
20%
Open-Weight Mandatory (onboarding) Identity records, sanctions DBs, PEP lists · FATF / FinCEN regulatory guidance UBO ownership chains, entity networks · Compliance rules, jurisdiction thresholds Retrieves identity data, traverses ownership graph, references regulatory docs, summarizes compliance risk for human sign-off
📊
Wealth Mgmt Advisor Copilot
25%
10%
35%
8%
7%
15%
25%
Azure OAI Mandatory (suitability) Portfolio holdings, account balances · Research reports, fund factsheets, ESG ratings Asset correlations, sector exposure maps · Suitability rules · Goals, risk appetite, life events Retrieves portfolio data, pulls research insights, checks suitability rules, generates investment summary with citations
🏛️
Credit Underwriting Agent
45%
20%
20%
5%
5%
5%
15%
Fine-tuned OW Mandatory (credit decision) Credit bureau scores, financial statements · Underwriting policies, credit appetite statements Borrower risk relationships, sector concentration · Policy-grounded reasoning Pulls financial data via APIs, retrieves underwriting rules, drafts credit memo, flags policy exceptions for underwriter
📈
Regulatory Reporting Agent
40%
20%
25%
5%
5%
5%
15%
Azure OAI Mandatory (CFO sign-off) Financial ledgers, GL data, trade repos · BCBS / IFRS / CCAR reporting templates Regulatory mapping relationships, line-item hierarchies · Disclosure language norms Low autonomy — mostly templated; the LLM generates only the disclosure narrative sections
🔬
Investment Research Assistant
15%
5%
45%
10%
10%
15%
25%
Azure OAI Optional (analyst review) Market data feeds, earnings APIs, consensus estimates · SEC filings, research reports, transcripts Company → sector → macro graph · Sector valuation norms · Analyst workflow preferences Retrieves filings via RAG, cross-references sector graph, synthesizes investment thesis with analyst context
⚠️
Risk Management Agent
50%
20%
15%
8%
4%
3%
15%
Open-Weight Mandatory (limit breach) Exposure metrics, VaR outputs, limit utilization · Risk policies, stress test frameworks Counterparty contagion graph, risk dependency chains · Scenario calibration Pulls risk metrics via APIs, traverses counterparty graph, references policy docs, drafts risk narrative for CRO
📉
Trading Desk Copilot
65%
10%
8%
7%
5%
5%
15%
Azure OAI Mandatory (execution) Market feeds, live positions, order books, P&L streams · Market commentary, research notes, macro reports Market correlation graph · Trading strategies, execution heuristics · Trader mandates, desk preferences Streams market data via APIs, summarizes signals, pulls commentary via RAG — execution stays with trader
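A quick way to check one's reading of the table: encode a few rows as data and verify that the six data-source columns sum to 100 while LLM Reasoning sits outside the input mix. The three rows below are copied from the table above; the tuple layout is an assumption for illustration.

```python
# Sanity-check sketch for the mix table: six data-source columns
# (API, SQL, RAG, KG, Fine-Tune, Memory) sum to 100 per agent;
# the LLM Reasoning score is tracked separately from the input mix.
MIX = {
    #                                 API  SQL  RAG  KG  FT  Mem  LLM
    "Trading Desk Copilot":          (65, 10,  8,  7,  5,  5, 15),
    "Investment Research Assistant": (15,  5, 45, 10, 10, 15, 25),
    "Customer Service Agent":        (35,  5, 30,  5, 10, 15, 25),
}

def source_mix_total(row):
    return sum(row[:6])        # exclude the separate LLM Reasoning score

for agent, row in MIX.items():
    assert source_mix_total(row) == 100, agent
```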
🔗
Module 02 → Module 03
These agents use RAG extensively — but RAG is not a product. It's a 5-layer stack you assemble or buy managed.

The Investment Research Assistant at 45% RAG and the Wealth Advisor at 35% RAG both need a production-grade vector retrieval stack. Do you self-host for data sovereignty, or use a managed cloud service? The provider breakdown below maps every layer — chunking, embedding, vector store, and LLM — with FS-specific guidance on each choice.

Explore RAG Provider Stack → Module 03 ↓

Module 03 — RAG Architecture & Provider Stack

What Is RAG, Who Provides It & What Does Each Layer Do?

From concept to managed service — open-source vs commercial options across every component. Your stack decision directly determines data residency, latency, and compliance posture.

Knowledge Artifact v1.0
1
What Is RAG? — Five Ways to Think About It
IS IT CODE, A FRAMEWORK, OR A PRODUCT?
Layer 1 · Concept
Architecture Pattern
Architecture Pattern
Retrieve fresh context at query time and inject into the LLM prompt — instead of relying on training memory alone
Origin
2020 Meta AI research paper · Lewis et al.
Layer 2 · Framework
Open-Source Libraries
Libraries You Assemble
Wire together chunking, embedding, vector store, retriever, and prompt assembly using popular OSS frameworks
Providers
LangChain · LlamaIndex
Layer 3 · Managed Service
Fully Hosted
Fully Hosted — No Infra Needed
Cloud providers handle chunking, embedding, vector indexing, and retrieval — just point at your documents
Providers
AWS Bedrock KB · Azure AI Search · Google Vertex AI
Layer 4 · Platform
Platform
Embedded in Platforms
RAG is one module inside a larger enterprise AI product — bundled with agents, connectors, and governance
Providers
MS Copilot Stack · Glean · Salesforce Einstein
Layer 5 · IT Services
Custom Bespoke
Custom Bespoke Implementation
SI firms build client-specific RAG pipelines on top of the above — tailored to FS data, compliance, and workflow
Providers
TCS · Capgemini · Infosys · Wipro
2
Component-by-Component: Open Source vs Commercial
EVERY LAYER MAPPED
Component
Open Source Options
Commercial / Managed Options
FS Note
⚙️ Chunking & Orchestration
Splitting + pipeline wiring
LangChain · LlamaIndex · custom code
Full control over chunk size, overlap, and splitting strategy
Azure AI Studio · AWS Bedrock · Google Vertex AI Studio
Managed chunking with UI-driven config — faster to deploy
OSS preferred for custom FS doc types (policy PDFs, SEC filings)
🧠 Embedding Model
Text → dense vectors
HuggingFace BGE · E5-Large · Sentence Transformers
Self-hosted — data never leaves your infra; ideal for regulated FS workloads
OpenAI Ada v2 · Cohere Embed v3 · Voyage AI · Azure OpenAI Embeddings
Higher quality; use Azure/AWS private cloud for FS data residency rules
Critical — affects retrieval quality most. BGE for self-hosted, Ada v2 via Azure for quality.
🗄️ Vector Database
Index + similarity search
FAISS · Chroma · Weaviate OSS · Qdrant
FAISS = fastest pure search; Weaviate/Qdrant = richer metadata filtering
Pinecone · Azure AI Search · pgvector on RDS · Weaviate Cloud
Managed scaling, SLAs, hybrid search (vector + keyword BM25)
pgvector popular in FS — reuses existing Postgres infra, avoids new vendor lock-in
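Hybrid search — the vector + keyword BM25 combination the managed options advertise — is in essence a weighted blend of two relevance signals. A minimal sketch, where both component scores and the `alpha` weight are illustrative stand-ins rather than any vendor's actual formula:

```python
# Sketch of hybrid retrieval scoring (dense vector similarity + BM25
# keyword score). The squash constant and alpha weight are tuning
# knobs, shown here with assumed values for illustration.

def hybrid_score(dense_sim, bm25, alpha=0.6, bm25_norm=10.0):
    """Blend a cosine similarity in [0,1] with a BM25 score squashed to [0,1]."""
    keyword = min(bm25 / bm25_norm, 1.0)
    return alpha * dense_sim + (1 - alpha) * keyword

candidates = {
    "policy-chunk-17": hybrid_score(0.82, 6.4),   # strong on both signals
    "filing-chunk-03": hybrid_score(0.91, 0.5),   # semantic match only
    "faq-chunk-44":    hybrid_score(0.35, 9.8),   # keyword match only
}
best = max(candidates, key=candidates.get)        # "policy-chunk-17"
```

The blend rewards chunks that match both semantically and lexically — useful for FS documents where exact terms (ticker symbols, clause numbers) matter as much as meaning.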
🤖 LLM at the End
Reasoning + generation
Llama 3 · Mistral · Falcon · Qwen
Open-weight — self-hosted on firm's GPU infra; complete data privacy
OpenAI GPT-4o · Anthropic Claude · Google Gemini · Azure OpenAI (private)
Highest capability; FS firms access via Azure/AWS for data residency compliance
FS default: Azure OAI for data sovereignty. Open-weight for Fraud / Risk where data cannot leave firm GPU.
🔗 Full RAG Pipeline
End-to-end assembled stack
Open Source Stack
LangChain · HuggingFace Embeddings · Weaviate / FAISS · Llama 3 (self-hosted)
Full flexibility, more engineering effort, total data sovereignty
VS
Managed Stack
AWS Bedrock Knowledge Bases · Azure OpenAI on Your Data · Google Vertex AI Search
Faster to deploy, less control, Microsoft/AWS data residency guarantees
Dominant FS pattern: Azure OAI + pgvector (OSS). Managed faster; OSS = more control + cost savings at scale.
🎯 The One Mental Model That Ties This Together
RAG is not a single thing you buy or install. It is a pattern you implement by assembling components — or you buy a managed service that assembles them for you. In Financial Services, the dominant enterprise pattern is Azure OpenAI or AWS Bedrock (closed LLM quality + data residency compliance) on top of open-source vector stores (pgvector, Weaviate) for cost control. IT services firms like TCS and Capgemini act as system integrators — they do not invent new RAG components; they assemble and tune the above stack for FS-specific data types, compliance workflows, and regulatory constraints.
⚖️
Module 03 → Module 04
Your stack architecture choices — especially HITL design — directly determine your EU AI Act risk tier.

Whether your Wealth Advisor auto-executes vs. human-decides is not just a UX choice. It's the difference between HIGH RISK and LIMITED RISK under Annex III. Four of the ten agents above are HIGH Risk by default. The EU AI Act classification below maps every agent and shows exactly how HITL design can downgrade your risk tier before August 2026 enforcement.

View EU AI Act Classification → Module 04 ↓

Module 04 — Regulatory Compliance

EU AI Act — Risk Tiers, FS Agent Classification & Implications

Not all Financial Services AI Agents are High Risk — classification depends on decision impact, not industry. The HITL architecture column in your agent design is your primary EU AI Act compliance lever.

Regulatory Brief v1.0
EU AI Act — 4 Risk Tiers
PROHIBITED
Unacceptable Risk
Banned outright — violates fundamental rights
Social scoring · Real-time biometric surveillance · Subliminal manipulation
HIGH RISK
Strict Governance Required
Annex III listed + materially influences decisions on individual rights or service access
Credit scoring · KYC / identity · Fraud detection · Insurance claims
Key trigger: Annex III AND materially influences decisions affecting a natural person's rights
LIMITED RISK
Transparency Obligations Only
Must disclose AI nature — no binding decisions on individuals
Customer chatbots · Reporting agents · Advisory copilots (HITL)
MINIMAL RISK
Low Regulatory Burden
Voluntary codes of practice only — no individual impact
Research assistants · Internal productivity · Insight-only copilots
Your 10 FS Agents — Risk Classification
🏛️
Credit Underwriting Agent
Annex III explicitly lists creditworthiness — blocks service access
HIGH
🛡️
KYC / AML Compliance Agent
Annex III — identity verification, onboarding, affects individual rights
HIGH
🔍
Fraud Detection Agent
Account blocking / transaction denial directly affects financial rights
HIGH
📋
Insurance Claims Processing
Annex III — insurance pricing and claims decisions
HIGH
⚠️
Risk Management Agent
High if influences counterparty decisions; Limited if purely internal
LTD–HIGH
📊
Wealth Mgmt Advisor Copilot
High if auto-executes advice; Limited if human advisor decides
LTD–HIGH
💛
Customer Service Agent
Must disclose AI — no binding decisions on individuals
LIMITED
📈
Regulatory Reporting Agent
Outputs go to regulators, not individuals — no rights impact
LIMITED
🔬
Investment Research Assistant
Internal tool — analyst makes all decisions, no individual impact
MINIMAL
📉
Trading Desk Copilot
Insight only — execution stays with trader, no person's rights affected
MINIMAL
Obligations & Compliance Levers
📋 High-Risk Obligations (Annex III)
Model traceability
Full audit trail of model versions, training data, changes
Explainability
Decisions interpretable to regulators and individuals
Data lineage
Document training data sources and governance
Meaningful HITL
Not just rubber-stamp review — human must have real override power
Robustness testing
Adversarial testing before deployment
Conformity assessment
Register in EU AI Act database before go-live
🎛️ HITL Design = Your Compliance Lever
Architecture choices directly determine risk tier
Auto-executes advice
Human decides
↳ Can downgrade HIGH → LIMITED
Human must make final call — not just approve AI output
Override must be frictionless and logged
Override rate monitored by compliance
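The three design requirements above — human makes the final call, overrides are logged, override rate is monitored — can be sketched as a single gate function. All names and fields below are illustrative assumptions, not a CAIBots API.

```python
# Sketch of a "meaningful HITL" gate: the human decision is
# authoritative, every divergence from the agent is logged as an
# override, and compliance can monitor the override rate.

audit_log = []

def hitl_gate(agent_recommendation, human_decision, reviewer):
    """Return the human decision; record whether it overrode the agent."""
    override = human_decision != agent_recommendation
    audit_log.append({
        "agent": agent_recommendation,
        "human": human_decision,
        "reviewer": reviewer,
        "override": override,
    })
    return human_decision            # never the raw agent output

def override_rate():
    return sum(e["override"] for e in audit_log) / len(audit_log)

hitl_gate("approve", "approve", "analyst-1")
hitl_gate("approve", "decline", "analyst-2")   # frictionless, logged override
```

A near-zero override rate over time is the tell-tale of rubber-stamp review — which is exactly what this metric lets compliance detect.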
⚠️ The Grey Zone — "Materially Influences"
Fraud agent flags account for freezing — human executes but AI materially influenced outcome → likely High Risk
EBA, ESMA, ECB debating what "meaningful" oversight means in practice
Banks designing HITL architectures specifically to manage tier classification
🗓️ Enforcement Timeline
Feb 2025
Prohibited AI provisions in force
Aug 2025
GPAI model obligations apply
Aug 2026
High-Risk obligations fully enforceable — banks must comply
Aug 2027
Legacy high-risk systems must comply
🎯
The One Rule That Determines Everything
High Risk is triggered when AI is in Annex III AND materially influences a decision affecting a natural person's rights or service access. Being used in a bank does NOT automatically make an agent High Risk. Internal tools, insight-only copilots, and agents with robust HITL design can remain Limited or Minimal Risk. The HITL architecture column in your agent design is your primary EU AI Act compliance lever.
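The rule can be expressed as a small decision function. This is an indicative sketch of the triggering logic only — parameter names are assumptions, and real classification requires legal assessment:

```python
# Sketch of the tier-triggering rule: HIGH requires Annex III listing
# AND material influence on a decision affecting a person's rights or
# service access. Indicative only — not a substitute for legal review.

def risk_tier(annex_iii, materially_influences, affects_individual,
              user_facing=True):
    if annex_iii and materially_influences and affects_individual:
        return "HIGH"
    if user_facing:
        return "LIMITED"             # transparency obligations only
    return "MINIMAL"

# Annex III agent with automated decisions on individuals:
assert risk_tier(True, True, True) == "HIGH"
# Same agent, but a human makes the final call (HITL downgrade lever):
assert risk_tier(True, False, True) == "LIMITED"
# Internal insight-only tool, no individual impact, not user-facing:
assert risk_tier(False, False, False, user_facing=False) == "MINIMAL"
```

Note how only the second argument changes between the first two cases — that argument is precisely what HITL architecture controls.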
⚖️
Module 04 → Module 05
EU AI Act is one of five live regulatory frameworks your agents must satisfy simultaneously.

SR 11-7 is already in force for every AI model your firm runs. GDPR Art 22 covers automated decisions on individuals — affecting Credit Underwriting, Fraud, and KYC right now. SEC/FINRA guidance hits Wealth Mgmt and Trading Desk. Under DORA, Azure, AWS, and OpenAI may be designated Critical Third Parties subject to direct European supervisory oversight — impacting every agent using a managed RAG stack. The matrix below shows exactly which regulation hits which of your 10 agents, and at what severity.

View Full Regulatory Stack → Module 05 ↓

Module 05 — Regulatory Compliance Stack

AI, Data & Model Risk — All 5 Regulatory Frameworks

SR 11-7 · EU AI Act · GDPR Art 22 · SEC AI / FINRA · DORA — mapped to every agent with impact severity. Your compliance architecture must satisfy all five simultaneously, not one at a time.

Regulatory Brief v1.1
HIGH RISK — Aug 2026
EU AI Act
4 risk tiers. High Risk obligations for credit, KYC, fraud, insurance. HITL, explainability, traceability, conformity assessment required.
Credit Scoring · KYC/AML · Fraud · HITL · Explainability
Primary lever: HITL architecture can downgrade HIGH → LIMITED. Conformity assessment required before go-live for all Annex III systems.
EU · Enforcement Aug 2026 for High Risk
LIVE
SR 11-7 / SS1/23 Model Risk Mgmt
Fed / PRA model risk framework — covers all AI/ML models. Requires validation, governance, performance monitoring, and model inventory.
Model Inventory · Validation · Challenger Models · MRM
Scope: Every AI model in production must be in the model inventory with documented development, validation, and ongoing monitoring.
US (Fed) / UK (PRA) · In force now
LIVE
GDPR Art 22 — Automated Decisions
Right not to be subject to solely automated decisions — right to explanation, human review, and contestation of AI-driven outcomes.
Explainability · Human Review · Contestation · Data Subject Rights
Trigger: Any decision based solely on automated processing that produces legal or similarly significant effects on an individual.
EU / UK GDPR · In force now
LIVE
SEC AI Guidance / FINRA
US guidance on AI in investment advice, conflicts of interest in predictive analytics, algo trading disclosures, and best-interest obligations.
US · Investment Advice · Algo Trading · Reg BI · Conflicts of Interest
Key rule: AI-driven recommendations must meet Reg BI best-interest standard. Algo trading systems require disclosure and controls documentation.
US (SEC / FINRA) · In force now
LIVE
DORA — ICT / AI Third Party Risk
AI model providers (OpenAI, Azure, AWS) may be designated Critical Third Parties — subject to direct oversight by the European Supervisory Authorities and contractual requirements.
Cloud AI · CTP Oversight · Concentration Risk · Exit Plans · Resilience Testing
Impact: Firms using Azure OpenAI, AWS Bedrock, or Google Vertex AI as managed RAG providers must document dependencies, test resilience, and have exit plans.
EU · In force Jan 2025 · Direct ESA oversight of designated CTPs
Agent × Regulation Cross-Reference Matrix
Impact: ●●● Critical   ●●○ High   ●○○ Low   ○○○ Minimal / None
Agent
EU AI Act
Annex III · Aug 2026
SR 11-7 / SS1/23
Model Risk · Live
GDPR Art 22
Auto Decisions · Live
SEC AI / FINRA
Invest. Advice · Live
DORA
ICT 3rd Party · Live
🏛️
Credit Underwriting Agent
HIGH Risk (EU AI Act)
Critical
Annex III explicit · creditworthiness blocks service access
Critical
Model inventory · independent validation · challenger model required
Critical
Automated credit decision on individual → Art 22 right to explanation
Low
Reg BI if consumer lending · less direct for commercial credit
High
Azure / AWS scoring dependency → CTP concentration risk
🛡️
KYC / AML Compliance Agent
HIGH Risk (EU AI Act)
Critical
Annex III · identity verification, individual onboarding rights
Critical
AML model validation · FinCEN compliance model inventory
Critical
Onboarding rejection = significant legal effect on individual
Low
BSA / FinCEN obligations · not primarily SEC/FINRA scope
High
Sanctions screening via cloud APIs → CTP oversight applies
🔍
Fraud Detection Agent
HIGH Risk (EU AI Act)
Critical
Account blocking / transaction denial directly affects financial rights
Critical
Fraud models require independent validation · SAR narrative models in scope
Critical
Blocking account = significant legal effect · explanation required
Low
Indirect through securities fraud detection rules
High
Real-time cloud inference → operational resilience obligation
📋
Insurance Claims Processing
HIGH Risk (EU AI Act)
Critical
Annex III · insurance pricing and claims decisions
High
Claims adjudication models in inventory · performance monitoring required
Critical
Claims denial = legal/financial effect on individual · right to contest
None
Insurance outside SEC/FINRA primary scope
High
Cloud document processing pipeline → CTP dependency
📊
Wealth Mgmt Advisor Copilot
LTD–HIGH (HITL dependent)
High
HIGH if auto-executes · LIMITED if human advisor decides final action
High
Suitability models in inventory · portfolio model validation required
High
Investment advice significantly affects individual financial position
Critical
Reg BI best-interest · AI recommendation conflicts of interest · MiFID II suitability
High
Cloud LLM for research synthesis → CTP concentration risk
📉
Trading Desk Copilot
MINIMAL (EU AI Act)
Minimal
Insight only — execution stays with trader · no individual rights impact
Critical
Algo trading models → SR 11-7 model validation · robust model governance required
None
No automated decision on individuals · copilot only
Critical
Algo trading disclosure · market manipulation controls · best-execution obligations
Critical
Real-time market data APIs critical ops dependency → DORA operational resilience
⚠️
Risk Management Agent
LTD–HIGH (EU AI Act)
High
HIGH if counterparty decisions; LIMITED if internal reporting only
Critical
VaR / stress test models → mandatory model inventory and independent validation
Low
Internal risk metrics · no direct individual decisions
High
Market risk model disclosures · model risk narrative for regulators
Critical
Risk systems are critical infra → DORA resilience testing, ICT incident reporting
🔬
Investment Research Assistant
MINIMAL (EU AI Act)
Minimal
Internal tool · analyst makes all decisions · no individual impact
High
Research synthesis models in model inventory · performance tracking required
None
Analyst makes final call · not automated decision on an individual
High
Research distribution · analyst conflicts of interest · fairness of research obligations
High
RAG pipeline on Azure/AWS → CTP dependency; alternative data vendors in scope
📈
Regulatory Reporting Agent
LIMITED (EU AI Act)
Low
Outputs to regulators not individuals · disclosure obligations only
High
BCBS / CCAR / IFRS models in inventory · narrative generation model governance
None
Regulatory outputs, not decisions on natural persons
High
SEC / FINRA filing accuracy · AI-generated disclosures require human certification
High
Reporting pipelines = critical ops → DORA ICT risk management plan required
💛
Customer Service Agent
LIMITED (EU AI Act)
Low
Must disclose AI nature · transparency obligation only
Low
Chatbot model in inventory · lighter governance vs. decision models
Low
Providing info only · no binding automated decisions on individuals
Low
Complaint handling · escalation protocols · FINRA communication standards
Low
Cloud chatbot dependency · standard CTP monitoring
Aggregate Compliance Burden by Agent
Sum of regulatory impact scores across all 5 frameworks
🛡️KYC / AML Compliance Agent
Highest
🔍Fraud Detection Agent
Highest
🏛️Credit Underwriting Agent
Highest
📉Trading Desk Copilot
Very High
📋Insurance Claims Processing
Very High
⚠️Risk Management Agent
High
📊Wealth Mgmt Advisor Copilot
High
📈Regulatory Reporting Agent
Medium
🔬Investment Research Assistant
Medium
💛Customer Service Agent
Low
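Mechanically, the aggregate burden is a sum of per-framework impact scores across the five regulations. The numeric mapping below is an assumed illustration (not a published CAIBots weighting); the two rows shown are copied from the cross-reference matrix above.

```python
# Sketch of the aggregate-burden calculation: map the matrix impact
# labels to numeric scores and sum across the five frameworks.
# The score mapping itself is an assumption for illustration.

IMPACT = {"Critical": 3, "High": 2, "Low": 1, "Minimal": 0, "None": 0}

# Rows copied from the matrix above, in framework order:
# (EU AI Act, SR 11-7, GDPR Art 22, SEC/FINRA, DORA)
MATRIX = {
    "KYC / AML Compliance Agent": ["Critical", "Critical", "Critical", "Low", "High"],
    "Customer Service Agent":     ["Low", "Low", "Low", "Low", "Low"],
}

def burden(agent):
    return sum(IMPACT[level] for level in MATRIX[agent])

assert burden("KYC / AML Compliance Agent") > burden("Customer Service Agent")
```

Under this mapping KYC/AML scores 12 of a possible 15 while Customer Service scores 5 — consistent with their "Highest" and "Low" rankings above, though the exact ordering in the middle of the table depends on the weighting chosen.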
What CAIBots Builds Into Every Agent Deployment
1
EU AI Act Compliance
  • HITL workflow built into every HIGH Risk agent
  • Conformity assessment documentation package
  • Immutable decision audit trail
  • Explainability layer on model outputs
  • EU AI Act database registration support
2
SR 11-7 / SS1/23 MRM
  • Model inventory registration for all agents
  • Independent validation documentation
  • Challenger model framework for critical agents
  • Ongoing performance monitoring dashboards
  • Model risk narrative for OCC/Fed examination
3
GDPR Art 22
  • Human review pathway on all individual decisions
  • Explanation generation at point of decision
  • Contestation workflow for declined decisions
  • Data subject rights logging and audit trail
  • DPIA documentation for automated processing
4
SEC AI / FINRA
  • Reg BI best-interest controls for Wealth Advisor
  • Algo trading disclosure documentation
  • Conflicts of interest detection and logging
  • Research fairness and distribution controls
  • FINRA supervisory procedures for AI systems
5
DORA ICT / AI Risk
  • CTP dependency mapping for all cloud AI vendors
  • Concentration risk assessment (Azure/AWS/GCP)
  • Exit plan and fallback architecture documentation
  • ICT resilience testing schedule per DORA Annex
  • BoE/ECB oversight readiness package
EU AI Act
Regulation (EU) 2024/1689 · Annex III high-risk use cases · EBA, ESMA, ECB AI guidelines. Classification is indicative — legal assessment required.
SR 11-7 / SS1/23
Federal Reserve SR 11-7 (2011) · PRA SS1/23 (2023). Applies to all US/UK regulated firms operating AI/ML models in material business decisions.
GDPR Art 22
EU GDPR (2016/679) Article 22 · UK GDPR equivalent. Applies to any automated processing that produces legal or similarly significant effects on individuals.
SEC AI / FINRA
SEC Reg BI (2019) · FINRA Rule 2010/4512 · SEC Staff Bulletin on AI in investment advice (2023). Algo trading: FINRA Rules 3110, 4370.
DORA
EU Regulation 2022/2554 · In force Jan 2025. CTP designation by ESAs. Applies to all EU-regulated financial entities using ICT third-party service providers.
Ready to Deploy?

Build your enterprise AI agent workforce
on the CAIBots platform.

From RAG pipeline architecture to a 5-framework regulatory compliance stack — CAIBots engineers your agentic platform end-to-end. Live in 14–30 days, SR 11-7 compliant and EU AI Act HITL-ready from day one.

SR 11-7 Model Risk Built-In
EU AI Act HITL Architecture
GDPR Art 22 Explainability
DORA CTP Mapping
Live in 14–30 days