Government & Public Sector
Government AI operates under the most restrictive regulatory framework of any sector. Public authorities face mandatory Fundamental Rights Impact Assessments, prohibited practice bans targeting social scoring and biometric identification, and unique constitutional obligations to citizens who cannot opt out of algorithmic governance. EthiCompass provides the governance infrastructure to meet these obligations — with a 7-dimension evaluation framework, peer-reviewed methodology, and immutable audit trails built for public accountability.
Regulatory Framework
When a private company deploys AI and makes an error, the affected person can seek remedies through consumer protection law or choose a competitor. When a government authority deploys AI and makes an error, the affected citizen has no alternative. They cannot opt out of the AI that determines their benefits, assesses their criminal risk, evaluates their immigration application, or monitors their behaviour in public space. This asymmetry of power is the reason the EU AI Act imposes its strictest obligations on government AI.
| Jurisdiction | Key Obligations | Most Critical Requirements | Status |
|---|---|---|---|
| European Union | EU AI Act Art. 5 (prohibitions), Art. 26 (deployer), Art. 27 (FRIA), Annex III, EU database registration | FRIA mandatory before first use; Social scoring + real-time biometric ID prohibited; Human oversight designation required | Prohibited practices: Feb 2025. High-risk: Aug 2026. |
| United States | OMB M-25-21, M-25-22, M-26-04, State AI laws (Colorado, CA, IL), Constitutional frameworks (4th, 14th Amendments) | AI use inventories, Chief AI Officers, procurement transparency (March 2026 deadline), state bias audit requirements | Ongoing. Federal preemption EO (Dec 2025) creates state law uncertainty. |
| Brazil | Bill 2338/2023, LGPD | Government social scoring + mass surveillance explicitly prohibited; Algorithmic Impact Assessment mandatory; External audit in regulated sectors | Senate-approved. Chamber of Deputies review ongoing. |
| United Kingdom | ATRS, PPN 017 (Feb 2025), proposed legislation | Algorithmic transparency records mandatory; Procurement AI criteria required; Law enforcement facial recognition governance announced 2025 | PPN 017 in effect. |
| Australia & Canada | APS AI Plan 2025, Canadian Directive on Automated Decision-Making | Transparency statements mandatory; Human review rights; Impact assessments pre-deployment; Dec 2026 transparency deadline (AUS) | APS AI Plan in effect. Dec 2026 deadline approaching. |
EU AI Act Government Obligation Map
Article 5 — In Force Since February 2, 2025
Article 5 of the EU AI Act came into force on February 2, 2025. The majority of its prohibitions are explicitly directed at government and public authority AI systems. Violations carry fines of up to €35 million or 7% of global annual turnover, whichever is higher. These prohibitions represent non-negotiable legal limits that no procurement, contractual clause, or operational necessity can override.
Social scoring: AI systems that evaluate or classify individuals over time based on social behaviour, personality, or personal characteristics, resulting in detrimental treatment in unrelated social contexts. No government authority in the EU may deploy such a system.
Real-time biometric identification: Real-time remote biometric identification (including live facial recognition) in publicly accessible spaces for law enforcement purposes. Three narrow exceptions exist, each requiring prior judicial authorisation; deployment without meeting these conditions faces full penalties.
Predictive policing by profiling: AI assessing the risk of a person committing a criminal offence based solely on profiling, personality traits, or past criminal behaviour, without individual assessment of a concrete criminal act. This directly targets risk-score-based predictive policing tools.
Emotion recognition: AI systems inferring the emotional states of individuals in workplace and educational settings. Government agencies that have piloted emotion detection for interview assessment, welfare eligibility evaluation, or border screening must terminate or redesign those programs.
Biometric categorisation: AI categorising individuals based on biometric data to infer race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation. Intelligence agencies, border management, and law enforcement cannot deploy such systems.
Article 27 — Unique Government Obligation
Article 27 of the EU AI Act imposes an obligation that applies exclusively to public bodies and private entities providing public services: before the first use of any high-risk AI system, they must conduct and document a Fundamental Rights Impact Assessment. This is not optional. It is not a best practice. It is a pre-deployment legal requirement with no equivalent in private-sector AI regulation.
The FRIA cannot be produced for a single deployment and forgotten. High-risk AI systems that are updated, extended in scope, or redeployed in new contexts require FRIA updates. Government entities need a systematic governance capability — not a one-time compliance exercise. The EthiCompass 7-Dimension Framework produces structured outputs that map directly to FRIA documentation requirements.
What the FRIA Must Document — Before First Use
Nature and purpose of the intended AI deployment
Geographic scope and duration of the deployment
Categories of natural persons and specific groups likely to be affected
Specific fundamental rights at risk — dignity, non-discrimination, privacy, freedom of expression, judicial remedy
Measures taken to mitigate identified risks, including technical and organisational controls
Human oversight arrangements — the named individual responsible and their authority to intervene
What happens when the system flags a risk or produces a questionable output
Registration of the completed FRIA in the EU AI Office's public database
Each EthiCompass 7-dimension assessment translates directly into the FRIA's required sections: rights at risk, affected populations, mitigation measures, and oversight arrangements — integrating FRIA production into the governance workflow.
Annex III — High-Risk Classification
Five of the eight Annex III categories apply directly to government. Each AI system in these categories must undergo a conformity assessment, produce technical documentation, receive CE marking, be registered in the EU AI public database, and be accompanied by a completed FRIA before deployment. Full compliance required by August 2, 2026. The EU AI public database makes these deployments visible to citizens — who will be able to look up which AI systems their government uses to make decisions about them.
Law enforcement: Recidivism prediction tools, crime analytics platforms, AI-assisted investigation systems, and profiling tools used in crime detection and prosecution. Any AI that assesses the likelihood of a person committing a future offence based on statistical patterns.
Migration, asylum & border control: AI assessing the migration or security risk of border entrants, travel document verification AI, asylum and visa application assessment AI, and border surveillance systems. Affects some of the most vulnerable populations.
Administration of justice: AI assisting courts in interpreting facts or applying law, sentencing recommendation systems, and legal research AI deployed in judicial contexts.
Critical infrastructure: AI managing safety components in electricity grids, water systems, transport networks, and banking infrastructure. Government agencies managing national infrastructure must classify and govern AI accordingly.
Essential public services: AI evaluating eligibility for, and decisions to grant, reduce, revoke, or reclaim, public benefits and services. Covers welfare eligibility AI, housing assistance AI, unemployment insurance AI, and social support systems.
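The pre-deployment gate these requirements imply can be expressed as a checklist that blocks go-live until every prerequisite is documented. Below is a minimal sketch of that gate in Python; the class, field, and method names are hypothetical, not an EthiCompass API.

```python
# Hedged sketch of the Annex III pre-deployment gate: a high-risk
# government AI system may go live only when every prerequisite named
# above is documented. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class HighRiskSystem:
    name: str
    conformity_assessment_done: bool = False
    technical_documentation_done: bool = False
    ce_marked: bool = False
    eu_database_registered: bool = False
    fria_completed: bool = False

    def deployment_blockers(self) -> list[str]:
        """Return every unmet prerequisite; deploy only when empty."""
        checks = {
            "conformity assessment": self.conformity_assessment_done,
            "technical documentation": self.technical_documentation_done,
            "CE marking": self.ce_marked,
            "EU database registration": self.eu_database_registered,
            "FRIA (Art. 27)": self.fria_completed,
        }
        return [step for step, done in checks.items() if not done]

system = HighRiskSystem("welfare-eligibility-v2", fria_completed=True)
print(system.deployment_blockers())
# ['conformity assessment', 'technical documentation', 'CE marking',
#  'EU database registration']
```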
Enforcement Cases
The regulatory framework for government AI is not hypothetical. The enforcement cases that have already occurred — before most AI governance regulations fully took effect — demonstrate the human and financial cost of deploying government AI without adequate governance. These are not warnings. They are precedents.
Michigan — $20M Settlement (2024)
An automated anti-fraud algorithm falsely flagged legitimate unemployment claims as fraudulent at scale. More than 3,000 plaintiffs were wrongfully denied benefits, erroneously pursued for repayment, and in some cases subjected to criminal investigation for fraud they did not commit. The state settled for $20 million.
Government AI errors create government liability. Scale amplifies that liability.
Medicaid — 20M+ Coverage Losses
More than 20 million people lost Medicaid coverage through AI-based administrative processes. The majority were terminated for administrative reasons: the AI inferred ineligibility from incomplete or misread data, not from any actual change in eligibility. Subsequent audits confirmed that most should not have been terminated.
When government AI affects millions at scale, aggregate harm demands governance infrastructure that prevents systematic errors before they propagate.
SafeRent — $2M Settlement (2024)
An algorithmic tenant screening tool was found to disparately impact Black and Hispanic applicants. The court rejected the vendor's defense that it bore no liability because the tool only made recommendations: a tool claiming to 'automate human judgment' cannot disclaim responsibility for its outputs. Government housing programs face the same legal analysis.
The 'we just provide recommendations' defense fails for government AI just as it fails for vendors.
Clearview AI — €100M+ Fines (Unpaid), Criminal Charges (Oct 2025)
Clearview AI accumulated over €100 million in fines from European data protection authorities for building facial recognition databases by scraping images without consent. Fines remain largely unpaid. In October 2025, criminal charges were filed — demonstrating why structural governance, not post-hoc penalty response, is the appropriate posture.
Structural governance is not optional. Post-hoc penalty response is neither compliant nor sufficient.
Evaluation Framework
For government AI, each dimension carries distinct legal significance — mapping to specific Article 27 FRIA requirements, Annex III assessment criteria, and constitutional compliance obligations across EU, USA, and LATAM jurisdictions. The framework produces structured outputs that feed directly into FRIA documentation.
Discrimination & Fairness
The most litigated dimension in government AI. Benefits eligibility systems, recidivism prediction tools, and public housing algorithms have all been subject to disparate impact claims. The Michigan and SafeRent cases were won, and lost, on this dimension.
Maps to: EU AI Act Art. 10, EU Charter Art. 21 (non-discrimination), 14th Amendment Equal Protection (US), Brazil Bill 2338/2023
Toxicity
Government-facing citizen service chatbots, automated correspondence, and benefits notification systems must not produce demeaning, threatening, or inappropriate communications. AI-generated government notices containing errors create compliance and reputational exposure.
Maps to: EU AI Act Art. 13 and Art. 50, GDPR right to information, LGPD
Explainability & Transparency
Due process requires that government decisions affecting citizens be explainable. A benefits denial, criminal risk assessment, or immigration determination based on opaque AI is constitutionally and legally suspect. FRIA documentation requires explaining what the AI does and why.
Maps to: EU AI Act Art. 13, Art. 14, Art. 27 (FRIA), Due Process (US), LGPD Art. 20, UK ATRS
Privacy & Data Protection
Government AI processes some of the most sensitive personal data that exists: criminal records, health status, immigration history, financial circumstances, behavioural profiles. GDPR Art. 9, LGPD, and US constitutional privacy protections create layered obligations.
Maps to: GDPR Art. 9 (special category data), EU AI Act Art. 10, LGPD sensitive data, 4th Amendment (US)
Factuality
The Michigan unemployment fraud case is the canonical factuality failure in government AI: an algorithm that is factually wrong at scale produces government liability at scale. Benefits eligibility, criminal risk scores, and immigration assessments must be grounded in verified, accurate data.
Maps to: EU AI Act Art. 15, Annex III compliance standards, Due Process accuracy requirements (US), Brazil Bill 2338/2023
Robustness & Resilience
Government AI operates at massive scale across diverse populations with varying data quality. Recidivism models trained on historical crime data reproduce historical enforcement biases. Immigration risk models may perform poorly on underrepresented populations.
Maps to: EU AI Act Art. 15, Annex III assessment requirements, government quality frameworks
Regulatory Compliance
Maps each government AI system to its complete regulatory obligation set: EU AI Act classification, Annex III registration, FRIA completion status, constitutional compliance evidence, and all jurisdictional mandates (GDPR, LGPD, OMB memoranda, UK ATRS, Canadian Directive, Australian APS Policy).
Maps to: EU AI Act full text, OMB M-25-21, M-25-22, M-26-04, Brazil Bill 2338/2023, UK ATRS, Canadian Directive, Australian APS AI Policy
Use Cases
Public Benefits & Social Services
AI evaluating eligibility for public benefits is explicitly classified as high-risk under Annex III (Pt. 5). It requires FRIA completion before deployment, human oversight designation, and registration in the EU public database. Under US constitutional doctrine, algorithmic benefits determinations that lack explainability or disparately impact protected groups face legal challenge.
Law Enforcement & Predictive Policing
Law enforcement AI occupies the most heavily regulated section of Annex III. Recidivism prediction, crime analytics, investigation AI, and profiling tools require full conformity assessment and FRIA. Three practices are categorically prohibited as of February 2025: real-time biometric ID without judicial authorisation, predictive policing by profiling, and biometric categorisation by sensitive attributes.
Migration & Border Control
Migration and border control AI is among the most sensitive in Annex III. AI assessing migration risk, processing asylum applications, verifying travel documents, and monitoring border areas requires full conformity assessment and FRIA. The affected populations, asylum seekers and migrants, are among the most vulnerable, with limited recourse.
AI Procurement
Government bodies are simultaneously deployers of AI (full Annex III obligations) and institutional buyers of AI from vendors. UK PPN 017 (February 2025) requires AI-specific criteria in procurement documents. OMB M-25-22 requires US federal agencies to request model cards and feedback mechanisms from AI vendors by March 2026.
Citizen-Facing Services
Government chatbots, automated notice generation, and AI-assisted citizen service systems must comply with EU AI Act Art. 50 transparency requirements by August 2026. When these systems process special category personal data (health status, immigration status, disability), GDPR Art. 9 obligations apply.
Platform Architecture
Government AI governance operates under a different accountability standard than private sector compliance. Audit trails are not optional — they are the documentary record that protects both the agency and the citizen in appeals, judicial review, and regulatory investigation. EthiCompass's dual-layer policy architecture is designed for this environment: immutable regulatory knowledge as the base, organisational policy extensions layered on top, and audit trails that survive inspection.
Universal Knowledge Base
Pre-mapped EU AI Act Art. 5 prohibitions, Art. 26/27 obligations, Annex III categories, GDPR government processing provisions, OMB memoranda, UK ATRS, Canadian Directive, Australian APS AI Policy minimums, Brazil Bill 2338/2023 government prohibitions. Cannot be weakened by client configuration.
Custom Policy Layer
Agencies configure statutory obligations, ministerial standards, internal governance frameworks, inter-agency data sharing requirements, and population-specific safeguarding protocols. Custom policies enhance base requirements. The minimum cannot be relaxed.
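To make the layering rule concrete, the sketch below shows how a dual-layer merge could enforce it: custom thresholds may rise above the regulatory baseline but never fall below it. The Policy type, dimension keys, and threshold values are illustrative assumptions, not the platform's actual configuration model.

```python
# Illustrative sketch of the dual-layer policy merge: custom agency
# policies may tighten the immutable base thresholds, never relax them.
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    # Minimum acceptable score per dimension, on a 0-100 scale.
    min_scores: dict[str, int]

# Immutable regulatory baseline; frozen and never mutated.
BASE = Policy(min_scores={
    "discrimination_fairness": 80,
    "privacy_data_protection": 85,
    "regulatory_compliance": 90,
})

def merge_policies(base: Policy, custom: Policy) -> Policy:
    """Layer a custom policy over the base: thresholds can only rise."""
    merged = dict(base.min_scores)
    for dimension, threshold in custom.min_scores.items():
        # A custom threshold below the regulatory floor is ignored:
        # the minimum cannot be relaxed by client configuration.
        merged[dimension] = max(merged.get(dimension, 0), threshold)
    return Policy(min_scores=merged)

# An agency tries to lower the fairness floor and raise the privacy bar.
agency = Policy(min_scores={"discrimination_fairness": 60,
                            "privacy_data_protection": 95})
effective = merge_policies(BASE, agency)
assert effective.min_scores["discrimination_fairness"] == 80  # floor held
assert effective.min_scores["privacy_data_protection"] == 95  # tightened
```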
Immutable Audit Trails
Every evaluation is cryptographically signed with 7+ year retention. Audit records are organised by AI system, FRIA reference, Annex III category, dimension, decision point, and date — enabling judicial review, parliamentary scrutiny, and regulatory inspection of any AI-assisted determination.
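As an illustration of what signed, immutable records can look like, the sketch below chains each evaluation record to its predecessor and signs it with an HMAC over the canonicalised payload. The field names mirror the organisation scheme above; the chaining and signing scheme itself is an assumption for the example, not EthiCompass's documented implementation.

```python
# Hedged sketch of an append-only, tamper-evident audit record using
# stdlib hashing and HMAC signing. Field names are illustrative.
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"replace-with-managed-key"  # held in an HSM in practice

def sign_record(record: dict, prev_digest: str) -> dict:
    """Chain an evaluation record to its predecessor and sign it."""
    record = {**record,
              "prev_digest": prev_digest,
              "timestamp": datetime.now(timezone.utc).isoformat()}
    payload = json.dumps(record, sort_keys=True).encode()
    # Verification recomputes both values from the payload fields, so
    # any later edit to the record breaks the chain and the signature.
    record["digest"] = hashlib.sha256(payload).hexdigest()
    record["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return record

entry = sign_record({
    "system_id": "benefits-eligibility-v3",
    "fria_ref": "FRIA-2026-014",
    "annex_iii_category": "essential_public_services",
    "dimension": "discrimination_fairness",
    "decision_point": "pre_deployment",
    "score": 87,
}, prev_digest="genesis")
```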
FRIA Documentation
The 7-Dimension Framework produces structured outputs that map directly to FRIA documentation requirements. Each dimension's assessment translates into the FRIA's required sections: rights at risk, affected populations, mitigation measures, and oversight arrangements.
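A minimal sketch of that translation, assuming one dictionary key per dimension and the four FRIA sections named above; the specific dimension-to-section pairings shown are illustrative, not the platform's published mapping.

```python
# Illustrative mapping from the 7 evaluation dimensions to FRIA
# sections. The pairings below are example assumptions.
DIMENSION_TO_FRIA_SECTIONS = {
    "discrimination_fairness":     ["rights_at_risk", "affected_populations"],
    "toxicity":                    ["rights_at_risk", "mitigation_measures"],
    "explainability_transparency": ["rights_at_risk", "oversight_arrangements"],
    "privacy_data_protection":     ["rights_at_risk", "mitigation_measures"],
    "factuality":                  ["mitigation_measures"],
    "robustness_resilience":       ["mitigation_measures"],
    "regulatory_compliance":       ["rights_at_risk", "oversight_arrangements"],
}

def fria_sections(assessment: dict[str, dict]) -> dict[str, list[dict]]:
    """Group per-dimension findings under the FRIA sections they feed."""
    sections: dict[str, list[dict]] = {}
    for dimension, findings in assessment.items():
        for section in DIMENSION_TO_FRIA_SECTIONS.get(dimension, []):
            sections.setdefault(section, []).append(
                {"dimension": dimension, **findings})
    return sections

report = fria_sections({
    "factuality": {"score": 87, "note": "eligibility data verified"},
    "privacy_data_protection": {"score": 91, "note": "Art. 9 basis documented"},
})
# report["mitigation_measures"] now holds both findings, tagged by dimension.
```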
Products
OneCheck
A focused evaluation of a single government AI system against its complete regulatory obligation set. OneCheck identifies the system's Annex III classification, assesses FRIA prerequisites, evaluates all 7 dimensions with quantitative scores, and produces a compliance report with identified gaps and remediation priorities. Designed to support procurement due diligence, pre-deployment FRIA preparation, and targeted regulatory risk assessment.
Ideal for: Government agencies assessing a specific AI system before deployment or procurement, public bodies preparing FRIA documentation, departments responding to regulatory inquiry or audit.
Enterprise
Continuous governance across an agency's complete AI portfolio. Enterprise deploys the 7-Dimension Framework across all AI systems — from citizen services through law enforcement and border management — with real-time monitoring, automated regulatory mapping against all applicable frameworks, immutable audit trails structured for public accountability and judicial review, and an integrated FRIA documentation workflow.
Ideal for: Government departments managing multiple AI systems across overlapping regulatory frameworks, agencies preparing for August 2026 EU AI Act compliance, public bodies subject to parliamentary transparency requirements for their AI use.
Science & Methodology
Government AI governance requires a higher standard of evidence than most sectors. When a citizen appeals an AI-assisted determination, or when a parliamentary committee scrutinises an agency's AI portfolio, or when a regulator conducts a conformity assessment audit, the compliance record must be methodologically defensible. Marketing claims — "our AI is fair," "our system is transparent" — are not acceptable evidence in these contexts. Quantitative measurement with documented methodology and peer-reviewed validation is.
The EthiCompass 7-Dimension Evaluation Framework was developed by PhD researchers and validated through peer-reviewed publications. Each dimension employs specific, quantitative metrics — 39+ measures with defined thresholds — producing repeatable, auditable results that withstand methodological challenge.
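As an illustration of how metrics with defined thresholds produce repeatable, auditable results, consider the sketch below: the same measurements always yield the same score and the same list of failed metrics. The metric names and threshold values are invented for the example, not EthiCompass's published measures.

```python
# Hedged sketch of threshold-based dimension scoring. Metric names
# and thresholds are illustrative.
from statistics import mean

THRESHOLDS = {
    # metric name: minimum acceptable value on a 0.0-1.0 scale
    "demographic_parity": 0.80,
    "citation_accuracy": 0.95,
    "pii_leak_rate_inverse": 0.99,
}

def score_dimension(measurements: dict[str, float]) -> dict:
    """Score one dimension and flag every metric below its threshold."""
    failures = [m for m, v in measurements.items()
                if v < THRESHOLDS.get(m, 1.0)]
    return {
        "score": round(mean(measurements.values()) * 100, 1),
        "passed": not failures,
        "failed_metrics": failures,  # feeds the remediation report
    }

result = score_dimension({"demographic_parity": 0.76,
                          "citation_accuracy": 0.97,
                          "pii_leak_rate_inverse": 0.995})
# {'score': 90.8, 'passed': False, 'failed_metrics': ['demographic_parity']}
```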
Methodological rigour is particularly important for government procurement. When an agency buys AI from a vendor, it assumes responsibility for that system's compliance with Annex III requirements, FRIA obligations, and all applicable regulatory frameworks. Procurement that relies on vendor self-attestation cannot satisfy deployer obligations under Art. 26. Independent, methodology-driven assessment can.
39+ quantitative metrics per evaluation
7 scientifically validated dimensions
7+ year immutable audit trail retention
6 jurisdictions covered (EU, US, Brazil, UK, AUS, CA)
Compliance Map
For government, the EU AI Act's August 2026 high-risk compliance deadline is the date by which public authorities must demonstrate to citizens that the AI systems making decisions about their benefits, criminal risk, immigration status, and access to public services have been properly assessed, governed, and overseen. The EU AI public database will make these deployments visible. Citizens will be able to look up the AI systems their governments use. The FRIA records will document what risks were identified and what protections are in place.
| Regulation | Obligation | Deadline | Dimension |
|---|---|---|---|
| EU AI Act — Art. 5 Prohibited Practices | Social scoring, real-time biometric ID in public spaces, predictive policing by profiling — all prohibited for government authorities | Feb 2, 2025 (in force) | Regulatory Compliance |
| EU AI Act — Art. 26 Deployer Obligations | Human oversight designation, risk monitoring, incident reporting, input data verification, affected citizen notification | Aug 2, 2026 | Explainability & Transparency |
| EU AI Act — Art. 27 FRIA | Mandatory Fundamental Rights Impact Assessment before first use of any high-risk system by public authorities | Aug 2, 2026 (before first deployment) | Regulatory Compliance |
| EU AI Act — Annex III High-Risk Registration | All high-risk government AI systems registered in public EU database before deployment | Aug 2, 2026 | Regulatory Compliance |
| EU AI Act — Art. 50 Transparency | Disclose AI nature in citizen-facing interactions; AI-generated content disclosure | Aug 2, 2026 | Explainability & Transparency |
| EU AI Act — Critical Infrastructure | Annex III (Pt. 2) compliance for AI in energy, water, transport, and banking infrastructure management | Aug 2, 2027 | Robustness & Resilience |
| OMB M-25-21 (April 3, 2025) | Federal AI use inventories, Chief AI Officers, published AI use policies for service delivery transparency | Apr 2025 (in force) | Regulatory Compliance |
| OMB M-25-22 (April 3, 2025) | AI procurement criteria: acceptable use policies, model/system cards, end-user resources, feedback mechanisms | Mar 11, 2026 (procurement procedures) | Regulatory Compliance |
| Brazil Bill 2338/2023 | Government social scoring and mass surveillance explicitly prohibited; Algorithmic Impact Assessment mandatory | Pending final enactment | Discrimination & Fairness |
| UK PPN 017 (Feb 24, 2025) | AI-specific award criteria in government procurement; AI use questions in service delivery bids | Feb 2025 (in force) | Regulatory Compliance |
| GDPR — Art. 9 (Government Processing) | Special category data (health, biometric, racial, political) requires explicit legal basis and heightened protection | In force | Privacy & Data Protection |
Regulatory Risk Exposure
| Risk | Exposure | EthiCompass Mitigation |
|---|---|---|
| EU AI Act — prohibited practices (social scoring, biometric ID) | Up to €35M or 7% of global annual turnover | Art. 5 compliance verification identifying prohibited system characteristics |
| EU AI Act — high-risk non-compliance (Annex III) | Up to €15M or 3% of global annual turnover | Annex III classification, FRIA documentation, conformity assessment readiness |
| Missing FRIA — pre-deployment legal requirement | Non-compliance with Art. 27; deployment invalidation risk | 7-dimension assessments map directly to FRIA required sections |
| Benefits AI error (Michigan precedent) | $20M+ settlement; class action liability | Factuality scoring and disparate impact detection before deployment |
| Medicaid-type coverage termination errors | Mass citizen harm; multi-state litigation | Accuracy verification and systematic error detection at scale |
| Housing algorithm disparate impact (SafeRent precedent) | $2M+ settlement; Fair Housing Act liability | Demographic parity analysis for all government allocation AI |
| Biometric AI enforcement (Clearview precedent) | €100M+ in fines; criminal charges | Biometric AI classification and Art. 5 prohibition mapping |
| GDPR Art. 9 — special category data violation | Up to 4% of global annual turnover | Privacy dimension covers biometric, health, and sensitive data governance |
Next Steps
Public authorities that defer governance face the accountability that follows when the EU AI public database is live, the FRIAs are missing, and the citizens who asked "what AI made this decision about me?" receive no answer.
Enterprise: Understand your agency's EU AI Act compliance exposure across prohibited practices, Annex III high-risk classification, and FRIA obligations.
OneCheck: Evaluate a specific government AI system against all applicable requirements, including Annex III classification and FRIA prerequisites.