EthiCompass

Government & Public Sector

AI Governance for Public Authorities —
From Fundamental Rights Assessment to Prohibited Practice Compliance

Government AI operates under the most restrictive regulatory framework of any sector. Public authorities face mandatory Fundamental Rights Impact Assessments, prohibited practice bans targeting social scoring and biometric identification, and unique constitutional obligations to citizens who cannot opt out of algorithmic governance. EthiCompass provides the governance infrastructure to meet these obligations — with a 7-dimension evaluation framework, peer-reviewed methodology, and immutable audit trails built for public accountability.

EU AI Act Art. 5 — Prohibited Practice Compliance · EU AI Act Art. 27 — FRIA Ready · Annex III High-Risk Assessment · Constitutional Compliance Framework · Brazil Bill 2338 — Government Prohibitions
Explore OneCheck for Government AI →

Regulatory Framework

The Most Restrictive AI Compliance Regime in Any Sector

When a private company deploys AI and makes an error, the affected person can seek remedies through consumer protection law or choose a competitor. When a government authority deploys AI and makes an error, the affected citizen has no alternative. They cannot opt out of the AI that determines their benefits, assesses their criminal risk, evaluates their immigration application, or monitors their behaviour in public space. This asymmetry of power is the reason the EU AI Act imposes its strictest obligations on government AI.

| Jurisdiction | Key Obligations | Most Critical Requirements | Status |
| --- | --- | --- | --- |
| European Union | EU AI Act Art. 5 (prohibitions), Art. 26 (deployer), Art. 27 (FRIA), Annex III, EU database registration | FRIA mandatory before first use; social scoring and real-time biometric ID prohibited; human oversight designation required | Prohibited practices: Feb 2025. High-risk: Aug 2026. |
| United States | OMB M-25-21, M-25-22, M-26-04; state AI laws (Colorado, CA, IL); constitutional frameworks (4th, 14th Amendments) | AI use inventories, Chief AI Officers, procurement transparency (March 2026 deadline), state bias audit requirements | Ongoing. Federal preemption EO (Dec 2025) creates state law uncertainty. |
| Brazil | Bill 2338/2023, LGPD | Government social scoring and mass surveillance explicitly prohibited; Algorithmic Impact Assessment mandatory; external audit in regulated sectors | Senate-approved. Chamber of Deputies review ongoing. |
| United Kingdom | ATRS, PPN 017 (Feb 2025), proposed legislation | Algorithmic transparency records mandatory; procurement AI criteria required; law enforcement facial recognition governance announced 2025 | PPN 017 in effect. |
| Australia & Canada | APS AI Plan 2025; Canadian Directive on Automated Decision-Making | Transparency statements mandatory; human review rights; impact assessments pre-deployment; Dec 2026 transparency deadline (AUS) | APS AI Plan in effect. Dec 2026 deadline approaching. |

EU AI Act Government Obligation Map

  • Art. 5 — Prohibitions (Feb 2025)
  • Art. 26 — Deployer Obligations
  • Art. 27 — FRIA (before first use)
  • Annex III — High-Risk Classification
  • EU Public Database Registration
  • Art. 50 — Transparency (Aug 2026)
  • Critical Infrastructure (Aug 2027)

Article 5 — In Force Since February 2, 2025

What the EU AI Act Forbids Government AI to Do

Article 5 of the EU AI Act came into force on February 2, 2025, and most of its prohibitions bear directly on government and public-authority AI systems. Violations carry fines of up to €35 million or 7% of global annual turnover, whichever is higher. These prohibitions are non-negotiable legal limits that no procurement arrangement, contractual clause, or operational necessity can override.

Art. 5.1.c — Social Scoring by Public Authorities

AI systems that evaluate or classify individuals over time based on social behaviour, personality, or personal characteristics, resulting in detrimental treatment in unrelated social contexts. No government authority in the EU may deploy such a system.

Maximum Penalty: €35M or 7% of global turnover

Art. 5.1.h — Real-Time Biometric ID in Public Spaces

Real-time remote biometric identification (including live facial recognition) in publicly accessible spaces for law enforcement purposes. Three narrow exceptions exist, each requiring prior authorisation from a judicial or independent administrative authority. Deployment without meeting these conditions faces the full penalty.

Maximum Penalty: €35M or 7% of global turnover

Art. 5.1.d — Predictive Policing by Profiling

AI assessing the risk of a person committing a criminal offence solely based on profiling, personality traits, or past criminal behaviour — without individual assessment of a concrete criminal act. This directly targets risk-score-based predictive policing tools.

Maximum Penalty: €35M or 7% of global turnover

Art. 5.1.f — Emotion Recognition in Public Institutions

AI systems inferring emotional states of individuals in workplace and educational settings. Government agencies that have piloted emotion detection for interview assessment, welfare eligibility evaluation, or border screening must terminate or redesign those programs.

Maximum Penalty: €35M or 7% of global turnover

Art. 5.1.g — Biometric Categorisation by Sensitive Attributes

AI categorising individuals based on biometric data to infer race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation. Intelligence agencies, border management, and law enforcement cannot deploy such systems.

Maximum Penalty: €35M or 7% of global turnover

Article 27 — Unique Government Obligation

The Obligation Unique to Government AI: The FRIA

Article 27 of the EU AI Act imposes an obligation that applies exclusively to public bodies and private entities providing public services: before the first use of any high-risk AI system, they must conduct and document a Fundamental Rights Impact Assessment. This is not optional. It is not a best practice. It is a pre-deployment legal requirement with no equivalent in private-sector AI regulation.

The FRIA cannot be produced for a single deployment and forgotten. High-risk AI systems that are updated, extended in scope, or redeployed in new contexts require FRIA updates. Government entities need a systematic governance capability — not a one-time compliance exercise. The EthiCompass 7-Dimension Framework produces structured outputs that map directly to FRIA documentation requirements.

What the FRIA Must Document — Before First Use

1. Nature and purpose of the intended AI deployment
2. Geographic scope and duration of the deployment
3. Categories of natural persons and specific groups likely to be affected
4. Specific fundamental rights at risk — dignity, non-discrimination, privacy, freedom of expression, judicial remedy
5. Measures taken to mitigate identified risks, including technical and organisational controls
6. Human oversight arrangements — the named individual responsible and their authority to intervene
7. What happens when the system flags a risk or produces a questionable output
8. Registration of the completed FRIA in the EU AI Office's public database

Each EthiCompass 7-dimension assessment translates directly into the FRIA's required sections: rights at risk, affected populations, mitigation measures, and oversight arrangements — integrating FRIA production into the governance workflow.

Annex III — High-Risk Classification

Which Government AI Systems Are High-Risk Under EU Law

Five of the eight Annex III categories apply directly to government. Each AI system in these categories must undergo a conformity assessment, produce technical documentation, receive CE marking, be registered in the EU AI public database, and be accompanied by a completed FRIA before deployment. Full compliance required by August 2, 2026. The EU AI public database makes these deployments visible to citizens — who will be able to look up which AI systems their government uses to make decisions about them.

01. Law Enforcement (Annex III, Pt. 6)

Recidivism prediction tools, crime analytics platforms, AI-assisted investigation systems, and profiling tools used in crime detection and prosecution. Any AI that assesses the likelihood of a person committing a future offence based on statistical patterns.

Full FRIA + Annex III

02. Migration, Asylum & Border Control (Annex III, Pt. 7)

AI assessing the migration or security risk of border entrants, travel document verification AI, asylum and visa application assessment AI, and border surveillance systems. These systems affect some of the most vulnerable populations.

Full FRIA + Annex III

03. Administration of Justice (Annex III, Pt. 8)

AI assisting courts in interpreting facts or applying the law, sentencing recommendation systems, and legal research AI deployed in judicial contexts.

Full FRIA + Annex III

04. Critical Infrastructure (Annex III, Pt. 2)

AI managing safety components in electricity grids, water systems, transport networks, and banking infrastructure. Government agencies managing national infrastructure must classify and govern AI accordingly.

Full FRIA + Annex III

05. Public Service Eligibility (Annex III, Pt. 5)

AI evaluating eligibility for, and decisions to grant, reduce, revoke, or reclaim, public benefits and services. Covers welfare eligibility AI, housing assistance AI, unemployment insurance AI, and social support systems.

Full FRIA + Annex III

Enforcement Cases

When Government AI Fails Citizens — Documented Consequences

The regulatory framework for government AI is not hypothetical. The enforcement cases that have already occurred — before most AI governance regulations fully took effect — demonstrate the human and financial cost of deploying government AI without adequate governance. These are not warnings. They are precedents.

Michigan — $20M Settlement (2024)

Unemployment Fraud AI: 3,000+ Wrongful Denials

An anti-fraud algorithm falsely flagged legitimate unemployment claims as fraudulent at scale. More than 3,000 plaintiffs were wrongfully denied benefits, erroneously pursued for repayment, and in some cases subjected to criminal investigation for fraud they did not commit. The state settled for $20 million.

Government AI errors create government liability. Scale amplifies that liability.

Medicaid — 20M+ Coverage Losses

AI Administrative Terminations: Systematic Error at Scale

More than 20 million people lost Medicaid coverage through AI-based administrative processes. The majority were terminated for administrative reasons — the AI inferring ineligibility from incomplete or misread data, not an actual change in eligibility. Subsequent audits confirmed that most should not have been terminated.

When government AI affects millions at scale, aggregate harm demands governance infrastructure that prevents systematic errors before they propagate.

SafeRent — $2M Settlement (2024)

Housing Algorithm: Disparate Impact + Liability Precedent

An algorithmic tenant screening tool was found to disparately impact Black and Hispanic applicants. The court rejected SafeRent's defense that it bore no liability because the tool only made recommendations: a tool claiming to 'automate human judgment' cannot disclaim liability for its outputs. Government housing programs face the same legal analysis.

The 'we just provide recommendations' defense fails for government AI as it does for vendors.

Clearview AI — €100M+ Fines (Unpaid), Criminal Charges (Oct 2025)

Biometric Database: Enforcement Appetite + Compliance Gap

Clearview AI accumulated over €100 million in fines from European data protection authorities for building facial recognition databases by scraping images without consent. Fines remain largely unpaid. In October 2025, criminal charges were filed — demonstrating why structural governance, not post-hoc penalty response, is the appropriate posture.

Structural governance is not optional. Post-hoc penalty response is neither compliant nor sufficient.

Evaluation Framework

How the 7-Dimension Framework Maps to Government AI Compliance

For government AI, each dimension carries distinct legal significance — mapping to specific Article 27 FRIA requirements, Annex III assessment criteria, and constitutional compliance obligations across EU, USA, and LATAM jurisdictions. The framework produces structured outputs that feed directly into FRIA documentation.

01. Discrimination & Fairness

The most litigated dimension in government AI. Benefits eligibility systems, recidivism prediction tools, and public housing algorithms have all been subject to disparate impact claims; the Michigan and SafeRent cases turned on this dimension.

EU AI Act Art. 10, EU Charter Art. 21 (non-discrimination), 14th Amendment Equal Protection (US), Brazil Bill 2338/2023

02. Toxicity & Harmful Language

Citizen-facing government chatbots, automated correspondence, and benefits notification systems must not produce demeaning, threatening, or inappropriate communications. AI-generated government notices containing errors create compliance and reputational exposure.

EU AI Act Art. 13 and Art. 50, GDPR right to information, LGPD

03. Explainability & Transparency

Due process requires that government decisions affecting citizens be explainable. A benefits denial, criminal risk assessment, or immigration determination based on opaque AI is constitutionally and legally suspect. FRIA documentation requires explaining what the AI does and why.

EU AI Act Art. 13, Art. 14, Art. 27 (FRIA), Due Process (US), LGPD Art. 20, UK ATRS

04. Privacy & Data Protection

Government AI processes some of the most sensitive personal data that exists — criminal records, health status, immigration history, financial circumstances, behavioural profiles. GDPR Art. 9, LGPD, and US constitutional privacy protections create layered obligations.

GDPR Art. 9 (special category data), EU AI Act Art. 10, LGPD sensitive data, 4th Amendment (US)

05. Factuality & Accuracy

The Michigan unemployment fraud case is the canonical factuality failure in government AI — an algorithm that is factually wrong at scale produces government liability at scale. Benefits eligibility, criminal risk scores, and immigration assessments must be grounded in verified, accurate data.

EU AI Act Art. 15, Annex III compliance standards, Due Process accuracy requirements (US), Brazil Bill 2338/2023

06. Robustness & Resilience

Government AI operates at massive scale across diverse populations with varying data quality. Recidivism models trained on historical crime data reproduce historical enforcement biases. Immigration risk models may perform poorly on underrepresented populations.

EU AI Act Art. 15, Annex III assessment requirements, government quality frameworks

07. Regulatory Compliance

Maps each government AI system to its complete regulatory obligation set — EU AI Act classification, Annex III registration, FRIA completion status, constitutional compliance evidence, and all jurisdictional mandates (GDPR, LGPD, OMB memoranda, UK ATRS, Canadian Directive, Australian APS Policy).

EU AI Act full text, OMB M-25-21, M-25-22, M-26-04, Brazil Bill 2338/2023, UK ATRS, Canadian Directive, Australian APS AI Policy

Use Cases

Where Government AI Requires Governance

01. Benefits Eligibility & Social Services AI

AI evaluating eligibility for public benefits is explicitly classified as high-risk under Annex III (Pt. 5). It requires FRIA completion before deployment, human oversight designation, and registration in the EU public database. Under US constitutional doctrine, algorithmic benefits determinations that lack explainability or disparately impact protected groups face legal challenge.

Key Risks

  • 1% false positive rate on 10M claims = 100,000 wrongful denials (see the sketch after this list)
  • Systematic error propagation without human oversight checkpoints
  • Missing FRIA documentation before first deployment
  • Appeals processes lacking tamper-proof AI decision records
  • Annex III registration failures for live systems
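
The first risk above is arithmetic, not speculation. A minimal Python sketch (error rates and claim volumes are hypothetical, not EthiCompass outputs) shows how modest error rates compound at government scale:

```python
# Wrongful-denial exposure at government scale.
# Error rates and claim volume are hypothetical, not EthiCompass outputs.

def wrongful_denials(claims: int, false_positive_rate: float) -> int:
    """Expected number of legitimate claims wrongly flagged as ineligible."""
    return round(claims * false_positive_rate)

for fpr in (0.001, 0.01, 0.05):
    n = wrongful_denials(claims=10_000_000, false_positive_rate=fpr)
    print(f"FPR {fpr:.1%} on 10M claims -> {n:,} wrongful denials")

# FPR 0.1% on 10M claims -> 10,000 wrongful denials
# FPR 1.0% on 10M claims -> 100,000 wrongful denials
# FPR 5.0% on 10M claims -> 500,000 wrongful denials
```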
02. Law Enforcement & Criminal Justice AI

Law enforcement AI occupies the most heavily regulated section of Annex III. Recidivism prediction, crime analytics, investigation AI, and profiling tools require full conformity assessment and FRIA. Three practices are categorically prohibited as of February 2025: real-time biometric ID without judicial authorisation, predictive policing by profiling, and biometric categorisation by sensitive attributes.

Key Risks

  • Predictive policing tools violating Art. 5 as of February 2025
  • Facial recognition deployments without judicial authorisation
  • AI risk scores in sentencing without explainability violating due process
  • Biometric AI facing both regulatory fines and criminal charges (Clearview precedent)
  • Crime analytics without disparate impact analysis
03. Border Management & Migration AI

Migration and border control AI is among the most sensitive in Annex III. AI assessing migration risk, processing asylum applications, verifying travel documents, and monitoring border areas requires full conformity assessment and FRIA. The affected populations — asylum seekers and migrants — are among the most vulnerable, with limited recourse.

Key Risks

  • Asylum credibility AI not accounting for cultural and trauma-related differences in communication
  • Travel document verification trained on limited geographic data
  • Immigration risk AI incorporating nationality as a discriminatory proxy
  • Insufficient FRIA documentation for Annex III (Pt. 7) systems
  • Language barrier failures in citizen notification requirements
04. Government AI Procurement

Government bodies are simultaneously deployers of AI (full Annex III obligations) and institutional buyers of AI from vendors. UK PPN 017 (February 2025) requires AI-specific criteria in procurement documents. OMB M-25-22 requires US federal agencies to request model cards and feedback mechanisms from AI vendors by March 2026.

Key Risks

  • Deployer liability for procured AI regardless of vendor warranties
  • Procurement without Annex III assessment before contract signature
  • Vendor self-attestation insufficient for Art. 26 deployer obligations
  • Missing acceptable use policies and model cards in procurement specifications
  • No pre-deployment FRIA framework for newly procured high-risk systems
05. Public Communication & Citizen-Facing AI

Government chatbots, automated notice generation, and AI-assisted citizen service systems must comply with EU AI Act Art. 50 transparency requirements by August 2026. When these systems process special category personal data — health status, immigration status, disability — GDPR Art. 9 obligations apply.

Key Risks

  • AI chatbots providing erroneous benefits information, creating legal exposure
  • Automated correspondence misstating rights or obligations, raising due process concerns
  • AI systems failing to escalate interactions with vulnerable or distressed citizens
  • Missing AI disclosure to citizens as required by Art. 50 (deadline: Aug 2026)
  • Special category data processing without adequate GDPR Art. 9 legal basis

Platform Architecture

Built for Public Sector Accountability

Government AI governance operates under a different accountability standard than private sector compliance. Audit trails are not optional — they are the documentary record that protects both the agency and the citizen in appeals, judicial review, and regulatory investigation. EthiCompass's dual-layer policy architecture is designed for this environment: immutable regulatory knowledge as the base, organisational policy extensions layered on top, and audit trails that survive inspection.

Universal Knowledge Base

Government Regulatory Layer

Pre-mapped EU AI Act Art. 5 prohibitions, Art. 26/27 obligations, Annex III categories, GDPR government processing provisions, OMB memoranda, UK ATRS, Canadian Directive, Australian APS AI Policy minimums, Brazil Bill 2338/2023 government prohibitions. Cannot be weakened by client configuration.

Custom Policy Layer

Agency-Specific Applications

Agencies configure statutory obligations, ministerial standards, internal governance frameworks, inter-agency data sharing requirements, and population-specific safeguarding protocols. Custom policies enhance base requirements. The minimum cannot be relaxed.
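
In practice, "enhance but never relax" is a merge rule over configuration layers. The sketch below is illustrative only: the threshold names and the assumption that higher scores mean stricter requirements are ours, not the EthiCompass configuration schema.

```python
# "Tighten only" policy merge: agency overrides may raise the regulatory
# baseline but never lower it. Threshold names are hypothetical, and we
# assume higher scores mean stricter requirements.

REGULATORY_BASELINE = {
    "fairness_min_score": 0.80,
    "explainability_min_score": 0.85,
    "privacy_min_score": 0.90,
}

def merge_policies(baseline: dict, agency: dict) -> dict:
    merged = dict(baseline)
    for key, value in agency.items():
        if key in baseline:
            merged[key] = max(baseline[key], value)  # may tighten, never relax
        else:
            merged[key] = value  # agency-specific extension on top of the base
    return merged

agency_policy = {
    "fairness_min_score": 0.70,      # weaker than the base: ignored
    "privacy_min_score": 0.95,       # stricter than the base: accepted
    "safeguarding_min_score": 0.90,  # agency-specific addition
}
print(merge_policies(REGULATORY_BASELINE, agency_policy))
# {'fairness_min_score': 0.8, 'explainability_min_score': 0.85,
#  'privacy_min_score': 0.95, 'safeguarding_min_score': 0.9}
```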

Immutable Audit Trails

Public Accountability Records

Every evaluation is cryptographically signed with 7+ year retention. Audit records are organised by AI system, FRIA reference, Annex III category, dimension, decision point, and date — enabling judicial review, parliamentary scrutiny, and regulatory inspection of any AI-assisted determination.
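
One common way to make such records tamper-evident is to chain each entry to the hash of its predecessor and sign the result. The sketch below is a simplified illustration, not the shipped implementation: Python standard library only, an HMAC standing in for a production signature scheme, and assumed field names.

```python
# Tamper-evident audit log sketch: each record embeds the hash of its
# predecessor, so altering any entry breaks the chain. HMAC stands in
# for a production signature scheme; field names are illustrative.
import hashlib, hmac, json
from datetime import datetime, timezone

SIGNING_KEY = b"demo-key-not-for-production"

def append_record(chain: list, record: dict) -> None:
    record = dict(record)
    record["timestamp"] = datetime.now(timezone.utc).isoformat()
    record["prev_hash"] = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    chain.append(record)

def verify(chain: list) -> bool:
    prev = "0" * 64
    for rec in chain:
        if rec["prev_hash"] != prev:
            return False
        body = {k: v for k, v in rec.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if rec["hash"] != hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest():
            return False
        prev = rec["hash"]
    return True

log: list = []
append_record(log, {"ai_system": "benefits-eligibility-v2",
                    "fria_ref": "FRIA-2026-014",
                    "dimension": "Factuality", "decision": "pass"})
print(verify(log))           # True
log[0]["decision"] = "fail"  # any tampering...
print(verify(log))           # ...breaks verification -> False
```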

FRIA Documentation

Integrated FRIA Workflow

The 7-Dimension Framework produces structured outputs that map directly to FRIA documentation requirements. Each dimension's assessment translates into the FRIA's required sections: rights at risk, affected populations, mitigation measures, and oversight arrangements.
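
A simplified sketch of that mapping follows. The dimension and FRIA section names come from this page; the exact correspondence is an assumption for illustration, not the EthiCompass schema.

```python
# Illustrative mapping from the 7 evaluation dimensions to FRIA sections
# (Art. 27). The specific correspondence shown here is assumed.

DIMENSION_TO_FRIA = {
    "Discrimination & Fairness":     ["Fundamental rights at risk", "Affected groups"],
    "Toxicity & Harmful Language":   ["Affected groups", "Mitigation measures"],
    "Explainability & Transparency": ["Human oversight arrangements"],
    "Privacy & Data Protection":     ["Fundamental rights at risk", "Mitigation measures"],
    "Factuality & Accuracy":         ["Mitigation measures"],
    "Robustness & Resilience":       ["Mitigation measures"],
    "Regulatory Compliance":         ["Nature and purpose of the deployment"],
}

def fria_sections(dimension_scores: dict) -> dict:
    """Group per-dimension findings under the FRIA sections they feed."""
    out: dict = {}
    for dim, score in dimension_scores.items():
        for section in DIMENSION_TO_FRIA.get(dim, []):
            out.setdefault(section, []).append((dim, score))
    return out

print(fria_sections({"Discrimination & Fairness": 0.86,
                     "Privacy & Data Protection": 0.91}))
# {'Fundamental rights at risk': [('Discrimination & Fairness', 0.86),
#   ('Privacy & Data Protection', 0.91)],
#  'Affected groups': [('Discrimination & Fairness', 0.86)],
#  'Mitigation measures': [('Privacy & Data Protection', 0.91)]}
```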

Products

Two Ways to Begin

OneCheck

Government AI Assessment

A focused evaluation of a single government AI system against its complete regulatory obligation set. OneCheck identifies its Annex III classification, assesses FRIA prerequisites, evaluates all 7 dimensions with quantitative scores, and produces a compliance report with identified gaps and remediation priorities. Designed to support procurement due diligence, pre-deployment FRIA preparation, and targeted regulatory risk assessment.

Ideal for: Government agencies assessing a specific AI system before deployment or procurement, public bodies preparing FRIA documentation, departments responding to regulatory inquiry or audit.

Enterprise

Full Government AI Governance

Continuous governance across an agency's complete AI portfolio. Enterprise deploys the 7-Dimension Framework across all AI systems — from citizen services through law enforcement and border management — with real-time monitoring, automated regulatory mapping against all applicable frameworks, immutable audit trails structured for public accountability and judicial review, and an integrated FRIA documentation workflow.

Ideal for: Government departments managing multiple AI systems across overlapping regulatory frameworks, agencies preparing for August 2026 EU AI Act compliance, public bodies subject to parliamentary transparency requirements for their AI use.

Science & Methodology

Peer-Reviewed Methodology for Government-Grade Evidence

Government AI governance requires a higher standard of evidence than most sectors. When a citizen appeals an AI-assisted determination, or when a parliamentary committee scrutinises an agency's AI portfolio, or when a regulator conducts a conformity assessment audit, the compliance record must be methodologically defensible. Marketing claims — "our AI is fair," "our system is transparent" — are not acceptable evidence in these contexts. Quantitative measurement with documented methodology and peer-reviewed validation is.

The EthiCompass 7-Dimension Evaluation Framework was developed by PhD researchers and validated through peer-reviewed publications. Each dimension employs specific, quantitative metrics — 39+ measures with defined thresholds — producing repeatable, auditable results that withstand methodological challenge.
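
As a toy illustration of what threshold-based, repeatable scoring looks like (the metric names, values, and thresholds below are invented for the example; the framework defines its own 39+ measures):

```python
# Toy illustration of threshold-based dimension scoring. Metric names,
# values, and thresholds are invented for the example.

FAIRNESS_METRICS = {
    # metric: (measured_value, threshold, higher_is_better)
    "demographic_parity_ratio": (0.84, 0.80, True),
    "equal_opportunity_gap":    (0.03, 0.05, False),
    "calibration_error":        (0.07, 0.05, False),
}

def evaluate(metrics: dict) -> dict:
    results = {}
    for name, (value, threshold, higher_better) in metrics.items():
        passed = value >= threshold if higher_better else value <= threshold
        results[name] = {"value": value, "threshold": threshold, "pass": passed}
    return results

for name, r in evaluate(FAIRNESS_METRICS).items():
    print(f"{name}: {r['value']} vs {r['threshold']} -> "
          f"{'PASS' if r['pass'] else 'FAIL'}")
# Because inputs and thresholds are fixed and documented, re-running the
# evaluation yields identical, auditable results.
```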

This is particularly important for government procurement. When an agency buys AI from a vendor, it assumes responsibility for that system's compliance with Annex III requirements, FRIA obligations, and all applicable regulatory frameworks. Procurement that relies on vendor self-attestation cannot satisfy deployer obligations under Art. 26. Independent, methodology-driven assessment can.

  • 39+ quantitative metrics per evaluation
  • 7 scientifically validated dimensions
  • 7+ year immutable audit trail retention
  • 6 jurisdictions covered (EU, US, Brazil, UK, Australia, Canada)

Compliance Map

The August 2026 Deadline Is a Government Accountability Date

For government, the EU AI Act's August 2026 high-risk compliance deadline is the date by which public authorities must demonstrate to citizens that the AI systems making decisions about their benefits, criminal risk, immigration status, and access to public services have been properly assessed, governed, and overseen. The EU AI public database will make these deployments visible. Citizens will be able to look up the AI systems their governments use. The FRIA records will document what risks were identified and what protections are in place.

| Regulation | Obligation | Deadline | Dimension |
| --- | --- | --- | --- |
| EU AI Act — Art. 5 Prohibited Practices | Social scoring, real-time biometric ID in public spaces, predictive policing by profiling — all prohibited for government authorities | Feb 2, 2025 (in force) | Regulatory Compliance |
| EU AI Act — Art. 26 Deployer Obligations | Human oversight designation, risk monitoring, incident reporting, input data verification, affected citizen notification | Aug 2, 2026 | Explainability & Transparency |
| EU AI Act — Art. 27 FRIA | Mandatory Fundamental Rights Impact Assessment before first use of any high-risk system by public authorities | Aug 2, 2026 (before first deployment) | Regulatory Compliance |
| EU AI Act — Annex III High-Risk Registration | All high-risk government AI systems registered in the public EU database before deployment | Aug 2, 2026 | Regulatory Compliance |
| EU AI Act — Art. 50 Transparency | Disclose AI nature in citizen-facing interactions; AI-generated content disclosure | Aug 2, 2026 | Explainability & Transparency |
| EU AI Act — Critical Infrastructure | Annex III (Pt. 2) compliance for AI in energy, water, transport, and banking infrastructure management | Aug 2, 2027 | Robustness & Resilience |
| OMB M-25-21 (April 3, 2025) | Federal AI use inventories, Chief AI Officers, published AI use policies for service delivery transparency | Apr 2025 (in force) | Regulatory Compliance |
| OMB M-25-22 (April 3, 2025) | AI procurement criteria: acceptable use policies, model/system cards, end-user resources, feedback mechanisms | Mar 11, 2026 (procurement procedures) | Regulatory Compliance |
| Brazil Bill 2338/2023 | Government social scoring and mass surveillance explicitly prohibited; Algorithmic Impact Assessment mandatory | Pending final enactment | Discrimination & Fairness |
| UK PPN 017 (Feb 24, 2025) | AI-specific award criteria in government procurement; AI use questions in service delivery bids | Feb 2025 (in force) | Regulatory Compliance |
| GDPR — Art. 9 (Government Processing) | Special category data (health, biometric, racial, political) requires explicit legal basis and heightened protection | In force | Privacy & Data Protection |

Regulatory Risk Exposure

| Risk | Potential Exposure | EthiCompass Coverage |
| --- | --- | --- |
| EU AI Act — prohibited practices (social scoring, biometric ID) | Up to €35M or 7% of global annual turnover | Art. 5 compliance verification identifying prohibited system characteristics |
| EU AI Act — high-risk non-compliance (Annex III) | Up to €15M or 3% of global annual turnover | Annex III classification, FRIA documentation, conformity assessment readiness |
| Missing FRIA — pre-deployment legal requirement | Non-compliance with Art. 27; deployment invalidation risk | 7-dimension assessments map directly to FRIA required sections |
| Benefits AI error (Michigan precedent) | $20M+ settlement; class action liability | Factuality scoring and disparate impact detection before deployment |
| Medicaid-type coverage termination errors | Mass citizen harm; multi-state litigation | Accuracy verification and systematic error detection at scale |
| Housing algorithm disparate impact (SafeRent precedent) | $2M+ settlement; Fair Housing Act liability | Demographic parity analysis for all government allocation AI |
| Biometric AI enforcement (Clearview precedent) | €100M+ in fines; criminal charges | Biometric AI classification and Art. 5 prohibition mapping |
| GDPR Art. 9 — special category data violation | Up to 4% of global annual turnover | Privacy dimension covers biometric, health, and sensitive data governance |
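
The demographic parity analysis cited in the SafeRent row is conventionally anchored in the four-fifths (80%) rule. A minimal sketch with invented approval counts:

```python
# Four-fifths (80%) rule sketch for a benefits-allocation model.
# Approval counts are invented for illustration.

approvals = {"group_a": (8_200, 10_000),   # (approved, applicants)
             "group_b": (6_100, 10_000)}

rates = {g: approved / total for g, (approved, total) in approvals.items()}
reference = max(rates.values())  # highest selection rate as the reference

for group, rate in rates.items():
    ratio = rate / reference
    flag = "OK" if ratio >= 0.8 else "POTENTIAL DISPARATE IMPACT"
    print(f"{group}: selection rate {rate:.1%}, ratio {ratio:.2f} -> {flag}")

# group_a: selection rate 82.0%, ratio 1.00 -> OK
# group_b: selection rate 61.0%, ratio 0.74 -> POTENTIAL DISPARATE IMPACT
```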

Next Steps

Government entities that are not prepared for August 2026 face more than regulatory sanction.

They face the accountability that follows when the EU AI public database is live, the FRIAs are missing, and the citizens who asked "what AI made this decision about me?" receive no answer.

Request Government Briefing

Understand your agency's EU AI Act compliance exposure across prohibited practices, Annex III high-risk classification, and FRIA obligations.

Start with OneCheck

Evaluate a specific government AI system against all applicable requirements — including Annex III classification and FRIA prerequisites.