03 · AI Governance

Govern the AI your organisation already has.

Most organisations deploy AI faster than they govern it. SREPRIMER closes that gap — building the policies, risk assessments, and technical controls that keep your AI systems trustworthy, secure, and compliant.

55% of enterprises have no formal AI governance framework*
$4.88M average cost of an AI-related data breach
€35M maximum EU AI Act fine for non-compliance
2 wks to your first risk scorecard with SREPRIMER

The problem isn't that your team is negligent. It's that governance hasn't kept pace.

AI deployment moved faster than policy, faster than regulation, and faster than most security teams could respond. Three gaps show up in almost every organisation we assess.

Shadow AI

Invisible AI across your organisation

Employees use ChatGPT, Copilot, Gemini, and dozens of other tools without IT awareness. Customer PII, legal strategy, and internal financials pass through unreviewed external systems every day.
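Shadow AI discovery usually starts with egress data you already collect. As a minimal sketch, a script can scan proxy logs for requests to known AI-tool domains; the domain list and the one-`user domain`-pair-per-line log format here are simplifying assumptions, not a real proxy schema.

```python
# Hypothetical watchlist of AI-tool domains — extend with whatever
# your own proxy or DNS logs actually show.
AI_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "gemini.google.com",
    "copilot.microsoft.com", "claude.ai",
}

def shadow_ai_hits(proxy_log_lines):
    """Return (user, domain) pairs for requests to known AI tools.

    Assumes each log line is 'user domain ...' separated by whitespace,
    which is a simplification of real proxy log formats.
    """
    hits = []
    for line in proxy_log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in AI_DOMAINS:
            hits.append((parts[0], parts[1]))
    return hits
```

In practice this is only the first pass — a Discovery Sprint combines it with interviews, since browser extensions and API calls will not all appear under these domains.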

LLM01–10

Customer-facing AI with no security controls

Chatbots and AI agents are deployed without input validation, output filtering, or access scoping. A single prompt injection can expose another customer's data or override your system's instructions entirely.
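The missing controls named above — input validation and output filtering — can be layered around any LLM call. The sketch below is illustrative only: the regex patterns are naive placeholders (real deployments pair them with model-based classifiers), but the layering principle is the same.

```python
import re

# Naive instruction-override patterns for input validation — placeholders,
# not a production prompt-injection defence.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
]

# Very rough PII patterns for output filtering (email address, 16-digit number).
PII_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    re.compile(r"\b\d{16}\b"),
]

def check_input(user_prompt: str) -> bool:
    """Reject prompts that look like instruction-override attempts."""
    return not any(p.search(user_prompt) for p in INJECTION_PATTERNS)

def filter_output(model_reply: str) -> str:
    """Redact PII-like strings before the reply reaches the user."""
    for p in PII_PATTERNS:
        model_reply = p.sub("[REDACTED]", model_reply)
    return model_reply
```

Even this thin wrapper changes the failure mode: an injected prompt is refused at the boundary, and a leaked email address never leaves the system unredacted.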

2026

Regulatory obligations nobody currently owns

The EU AI Act, India's DPDP Act, and RBI digital lending guidelines create specific, measurable obligations for AI systems. Most teams cannot identify where their compliance exposure begins.

From inventory to governed — in 90 days.

We follow the NIST AI Risk Management Framework as the structural foundation for every engagement. Four functions. One coherent program. Measurable at every stage.

Every deliverable maps to a framework your auditors and regulators already recognise — so the work you do with us is work you can defend externally.

NIST · GOVERN

Set the foundation

NIST · MAP

Know what you have

NIST · MEASURE

Quantify the gaps

NIST · MANAGE

Close them systematically

Three tiers. One clear progression.

Every engagement starts with a Discovery Sprint. Most clients continue to a Governance Build. The best-positioned organisations move to Ongoing Advisory. Each tier builds directly on the one before.

01

Discovery Sprint

AI inventory, risk scorecard, and executive readout — in two to three weeks

Example Risk Scorecard — Week 3 Delivery
  • Policy & Ownership: 1/10
  • AI Inventory: 2/10
  • Prompt Injection Controls: 0/10
  • Excessive Agency Controls: 0/10
  • Data Privacy: 3/10
  • Monitoring & IR: 3/10
  Overall Score: 24/100
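The overall score is simply the equal-weighted total across the ten scorecard dimensions, each rated out of 10. The first six values below mirror the example scorecard; the last four dimension names and values are hypothetical fillers, since the example shows only six of the ten.

```python
# Per-dimension scores, each out of 10. First six are from the example
# scorecard; the last four are HYPOTHETICAL placeholders for the
# remaining dimensions of the 10-dimension scorecard.
scores = {
    "Policy & Ownership": 1,
    "AI Inventory": 2,
    "Prompt Injection Controls": 0,
    "Excessive Agency Controls": 0,
    "Data Privacy": 3,
    "Monitoring & IR": 3,
    "Vendor Risk": 5,          # hypothetical
    "Output Handling": 4,      # hypothetical
    "Staff Training": 3,       # hypothetical
    "Regulatory Mapping": 3,   # hypothetical
}

def overall_score(scores: dict) -> int:
    """Equal weighting: ten dimensions x 10 points each = score out of 100."""
    return sum(scores.values())
```

Equal weighting keeps the number legible to a board: a 24/100 means exactly what it looks like, with no hidden weighting scheme to explain.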

Your baseline, in two weeks.

The Discovery Sprint is a structured assessment run as a guided conversation — not a form to fill out. We interview your team, map your AI systems, and produce a visual Risk Scorecard that shows exactly where you stand against NIST AI RMF and OWASP LLM Top 10.

Duration: 2–3 weeks
Framework: NIST AI RMF · OWASP LLM Top 10
Audience: CISOs, CTOs, Compliance leads
📋 Deliverables
  • Complete AI system inventory
  • Risk classification per system
  • Visual 10-dimension scorecard
  • Executive readout session (60 min)
  • Top 5 quick-win recommendations
  • Written executive summary
✓ Best for
  • First governance engagement
  • Board-level risk visibility
  • Pre-audit baseline
  • Unblocking enterprise procurement
  • Decision within 30 days
02

Governance Build

Full NIST gap assessment, policies, controls, and live OWASP demo — 6–8 weeks

Governance Build — Process
1. Weeks 1–2 · Discovery Sprint: AI inventory, scorecard, executive readout
2. Weeks 3–4 · Gap Analysis + Live Demo: NIST RMF assessment, OWASP attack demonstration for leadership
3. Weeks 5–6 · Policy + Controls Build: AI Use Policy, technical controls, vendor assessments
4. Weeks 7–8 · Training + Playbooks: Staff training workshop, incident response playbook

A complete, working governance program.

Everything in the Discovery Sprint, plus the full Governance Build — policies drafted and approved, technical controls implemented, your team trained, and incident response playbooks ready to use. Not a report. A program.

Duration: 6–8 weeks
Framework: NIST AI RMF · OWASP LLM Top 10 · EU AI Act
Includes: Everything in Discovery Sprint
📋 Additional Deliverables
  • NIST AI RMF full gap assessment
  • AI Use Policy — drafted & approved
  • Live OWASP attack demo session
  • LLM input / output controls review
  • Vendor risk assessments (top 3 APIs)
  • Staff training workshop (2 days)
  • AI Incident Response Playbook
✓ Best for
  • SOC 2 AI criteria coverage
  • EU AI Act readiness
  • Enterprise client procurement
  • Post-breach remediation
  • Investor due diligence prep
03

Ongoing Advisory

Dedicated advisor, quarterly reviews, regulatory tracking — rolling engagement

What You Have After 12 Months
  • AI Use Policy — approved and communicated
  • Full AI system inventory, updated quarterly
  • NIST AI RMF gap assessment — closed
  • LLM input/output controls in production
  • EU AI Act applicability mapped and addressed
  • AI Incident Response Playbook — tested
  • Staff trained — all departments covered
  • 4× quarterly governance reviews completed

Governance that evolves with your AI.

AI systems change. Regulations change. New attack techniques emerge. Ongoing Advisory gives you a dedicated SREPRIMER advisor who tracks all of it — and makes sure your governance program stays current and effective.

Cadence: Monthly check-in · Quarterly review
Includes: Everything in Governance Build
Advisor: Named point of contact, Hyderabad-based
📋 Additional Coverage
  • Dedicated named AI Risk Advisor
  • Quarterly controls testing
  • Regulatory change tracking
  • New AI system review & sign-off
  • AI incident response support
  • Annual staff training refresh
✓ Best for
  • Ongoing regulatory obligations
  • Frequent new AI deployments
  • Board-level AI risk reporting
  • ISO/IEC 42001 certification path

Every assessment covers the full OWASP LLM Top 10.

We map every finding to a specific OWASP LLM risk category — so your team knows exactly what class of threat each gap represents and how to defend against it.

LLM01 · Prompt Injection · Critical
LLM02 · Sensitive Information Disclosure · Critical
LLM03 · Supply Chain Vulnerabilities · High
LLM04 · Data & Model Poisoning · High
LLM05 · Improper Output Handling · High
LLM06 · Excessive Agency · Critical
LLM07 · System Prompt Leakage · Medium
LLM08 · Vector & Embedding Weaknesses · High
LLM09 · Misinformation · High
LLM10 · Unbounded Consumption · High
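Mapping findings to categories is mechanical once the taxonomy is in code. A minimal sketch of how an assessment tool might tag each finding — the severity ratings mirror the list above, and `tag_finding` is an illustrative helper, not a SREPRIMER or OWASP API:

```python
# OWASP LLM Top 10 (2025) category names with the severity ratings used above.
OWASP_LLM = {
    "LLM01": ("Prompt Injection", "Critical"),
    "LLM02": ("Sensitive Information Disclosure", "Critical"),
    "LLM03": ("Supply Chain Vulnerabilities", "High"),
    "LLM04": ("Data & Model Poisoning", "High"),
    "LLM05": ("Improper Output Handling", "High"),
    "LLM06": ("Excessive Agency", "Critical"),
    "LLM07": ("System Prompt Leakage", "Medium"),
    "LLM08": ("Vector & Embedding Weaknesses", "High"),
    "LLM09": ("Misinformation", "High"),
    "LLM10": ("Unbounded Consumption", "High"),
}

def tag_finding(finding: str, category_id: str) -> str:
    """Label an assessment finding with its OWASP category and severity."""
    name, severity = OWASP_LLM[category_id]
    return f"[{category_id} · {name} · {severity}] {finding}"
```

Tagged this way, every finding in a report carries its threat class with it, so remediation owners can look up concrete defences per category.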
NIST AI RMF 1.0

AI Risk Management Framework

The US federal standard for enterprise AI risk governance. Four functions — GOVERN, MAP, MEASURE, MANAGE — form the backbone of every SREPRIMER engagement.

nist.gov — AI RMF 1.0
OWASP LLM TOP 10

Top 10 LLM Application Risks (2025)

The most referenced security standard for AI and LLM applications. Every SREPRIMER assessment maps findings to specific OWASP risk categories with concrete defenses.

genai.owasp.org — LLM Top 10
EU AI ACT (2024)

World's First Binding AI Law

Risk-tiered from minimal to prohibited. High-risk AI in hiring, credit, and healthcare faces strict obligations from August 2026. We assess your exposure and map your path to compliance.

eur-lex.europa.eu — EU AI Act
ISO/IEC 42001

AI Management System Standard

The international certification standard for AI governance programs. Aligns with ISO 27001. SREPRIMER Governance Build engagements are structured to prepare clients for 42001 certification.

iso.org — ISO/IEC 42001
DPDP ACT (INDIA)

Digital Personal Data Protection Act

In force. Mandatory data breach notification obligations apply to AI systems processing personal data. SREPRIMER maps DPDP obligations specifically for AI architectures and LLM data flows.

meity.gov.in — DPDP Act
MITRE ATLAS

Adversarial Threat Landscape for AI

The AI equivalent of MITRE ATT&CK. Maps adversarial techniques against machine learning systems. Referenced in SREPRIMER technical sessions for security and engineering teams.

atlas.mitre.org

What makes this different from standard compliance consulting.

SREPRIMER produces working programs — tested controls, trained teams, and scorecards your board can actually read.

We show the attack, then fix it
📅 First scorecard in two weeks
🎯 AI-specific, not repurposed
🇮🇳 India-native, globally aligned
📊 Outputs your board can read
🔍 We govern the AI you actually have

Built on
NIST AI RMF 1.0 OWASP LLM Top 10 EU AI Act ISO/IEC 42001 DPDP Act MITRE ATLAS RBI AI Guidelines ISO/IEC 23894 SOC 2 AI Criteria SEBI AI/ML Circular

Ready to govern your AI?

60 minutes. We'll identify your highest-priority AI governance gaps and show you what a programme would look like for your organisation.