[ GOOGLE SOLUTION CHALLENGE 2026 · UNBIASED AI DECISION TRACK ]

Your ML models are silently discriminating.

FairOps detects bias in production ML systems every 15 minutes, explains it in plain English, and automatically mitigates — before a compliance officer or a journalist does.

View on GitHub
GCP-Native · 12 Fairness Metrics · EU AI Act Ready · LIVE

A model trained on yesterday's data can develop bias tomorrow. There is no alert when a hiring model starts rejecting candidates of a specific gender.

Hiring · Credit · Healthcare · Criminal Justice · Content

From bias to fix in under 4 hours.

01

DETECT

3 lines of SDK code. Predictions stream to Cloud Pub/Sub → Dataflow → BigQuery. 12 fairness metrics computed every 15 minutes with 95% bootstrap confidence intervals.
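The windowed computation is simple at its core. Below is an illustrative numpy sketch of one of the twelve metrics, demographic_parity_difference, with the 95% percentile-bootstrap confidence interval the auditor attaches to every value; the function name and signature are invented for this example and are not the FairOps internals.

```python
import numpy as np

def bootstrap_dpd(preds, groups, n_boot=2000, alpha=0.05, seed=0):
    """Demographic parity difference with a 95% bootstrap CI.

    preds:  0/1 predictions for one audit window.
    groups: protected-group label per prediction.
    Returns (point_estimate, (ci_low, ci_high)).
    """
    rng = np.random.default_rng(seed)
    preds, groups = np.asarray(preds), np.asarray(groups)
    labels = np.unique(groups)

    def dpd(p, g):
        # Gap between the highest and lowest group selection rates.
        rates = [p[g == lab].mean() for lab in labels]
        return max(rates) - min(rates)

    # Resample the window with replacement and recompute the metric.
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(preds), len(preds))
        stats.append(dpd(preds[idx], groups[idx]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return dpd(preds, groups), (float(lo), float(hi))
```

A breach would then be flagged only when the whole interval sits above the metric's threshold, which is what separates real bias from sampling noise.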

02

EXPLAIN

SHAP TreeExplainer pinpoints which features drive the bias. Gemini Pro writes a plain-English audit report for your compliance officer — not your data scientist.
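The "which features drive the bias" step boils down to comparing mean attributions across groups. A hedged numpy sketch, assuming the per-prediction SHAP matrix has already been produced upstream (e.g. by shap.TreeExplainer); the helper name is invented for illustration:

```python
import numpy as np

def rank_bias_drivers(shap_values, groups, feature_names):
    """Rank features by the gap in mean attribution between two groups.

    shap_values: (n_samples, n_features) attribution matrix, assumed
    precomputed by a SHAP explainer. The feature with the largest
    absolute gap is the strongest candidate driver of the disparity.
    """
    shap_values = np.asarray(shap_values, dtype=float)
    groups = np.asarray(groups)
    a, b = np.unique(groups)[:2]
    gap = (shap_values[groups == a].mean(axis=0)
           - shap_values[groups == b].mean(axis=0))
    order = np.argsort(-np.abs(gap))
    return [(feature_names[i], float(gap[i])) for i in order]
```

The ranked list is what a narrative generator (here, Gemini Pro) can turn into the plain-English audit report.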

03

MITIGATE

A 10-step Vertex AI Pipeline automatically selects the right AIF360 algorithm, retrains with fairness constraints, validates on live traffic, and promotes the debiased model. Under 2% accuracy loss.
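One of the AIF360 algorithms the pipeline can select is Reweighing (Kamiran & Calders): assign each (group, label) cell the weight P(g)·P(y)/P(g, y) so the retraining data behaves as if group membership were independent of the outcome. A minimal numpy version of that idea, not the pipeline's actual code:

```python
import numpy as np

def reweighing_weights(groups, labels):
    """Per-sample weights in the style of AIF360's Reweighing.

    w(g, y) = P(g) * P(y) / P(g, y): over-represented (group, outcome)
    cells are down-weighted, under-represented cells up-weighted, so the
    weighted selection rate is equalized across groups before retraining.
    """
    groups, labels = np.asarray(groups), np.asarray(labels)
    w = np.empty(len(groups))
    for g in np.unique(groups):
        for y in np.unique(labels):
            cell = (groups == g) & (labels == y)
            if cell.any():
                p_g = (groups == g).mean()
                p_y = (labels == y).mean()
                w[cell] = p_g * p_y / cell.mean()
    return w
```

Passing these as `sample_weight` to any scikit-learn-style estimator is the one-line retraining change; the fairness constraint lives entirely in the weights.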

12 Production Fairness Metrics

Computed simultaneously on every audit window, all with 95% bootstrap confidence intervals and chi-square significance gating.†

#    Metric Name                         Threshold   Breach   What It Catches
1    demographic_parity_difference       0.10        >        Selection rate gap between groups
2    equalized_odds_difference           0.08        >        Performance gap at same error rates
3    equal_opportunity_difference        0.05        >        True positive rate disparity
4    disparate_impact_ratio              0.80        <        EEOC 4/5ths rule violation
5    average_odds_difference             0.07        >        Combined error rate fairness
6    statistical_parity_subgroup_lift    1.25        >        Worst-case group advantage
7    predictive_parity_difference        0.08        >        Precision gap across groups
8    calibration_gap                     0.05        >        Score miscalibration by group
9    individual_fairness_score           0.85        <        Similar inputs treated differently
10   counterfactual_fairness             0.06        >        Causal fairness under intervention
11   intersectional_bias_score           0.12        >        Multi-dimensional discrimination
12   temporal_drift_index                5.00        >        Bias getting worse over time

† If chi-square p > 0.05, severity is overridden to LOW regardless of metric value. Statistical noise is not bias.
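Several of these metrics reduce to simple rate comparisons per window. An illustrative numpy sketch of three of them for the two-group case (the function name and return shape are invented here, and the thresholds in the comments come from the table above):

```python
import numpy as np

def audit_window(y_pred, y_true, groups):
    """Three of the table's metrics on one audit window, two groups."""
    y_pred, y_true, groups = map(np.asarray, (y_pred, y_true, groups))
    a, b = np.unique(groups)[:2]
    # Selection rate and true positive rate per group.
    sel = {g: y_pred[groups == g].mean() for g in (a, b)}
    tpr = {g: y_pred[(groups == g) & (y_true == 1)].mean() for g in (a, b)}
    return {
        "demographic_parity_difference": abs(sel[a] - sel[b]),            # breach if > 0.10
        "disparate_impact_ratio": min(sel.values()) / max(sel.values()),  # breach if < 0.80
        "equal_opportunity_difference": abs(tpr[a] - tpr[b]),             # breach if > 0.05
    }
```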

Automated Severity → Automated Action

CRITICAL

Condition

disparate_impact_ratio < 0.65 · OR · any metric > 3× threshold · OR · 3+ metrics breached simultaneously

Action

Vertex AI Pipeline triggered immediately (synchronous)

HIGH

Condition

disparate_impact_ratio ∈ [0.65, 0.80) · OR · 2 metrics breached

Action

Cloud Tasks queue, 1-hour delay

MEDIUM

Condition

1 metric breached, value < 2× threshold, p < 0.05

Action

Logged + dashboard highlight + next retrain cycle

LOW / PASS

Condition

p > 0.05 (statistical noise) · OR · no breach

Action

Audit trail only / Clean record
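The routing rules above are a pure function of the window's audit results. A sketch with the thresholds taken from this page; the signature and the `breaches` representation (each breached metric expressed as value / threshold) are assumptions made for the example:

```python
def severity(di_ratio, breaches, p_value):
    """Map one audit window to a severity level per the rules above.

    di_ratio:  disparate_impact_ratio for the window
    breaches:  (metric value / threshold) ratio for each breached metric
    p_value:   chi-square significance for the window
    """
    if p_value > 0.05:
        return "LOW"          # statistical noise is not bias
    if di_ratio < 0.65 or any(r > 3 for r in breaches) or len(breaches) >= 3:
        return "CRITICAL"     # synchronous Vertex AI Pipeline trigger
    if 0.65 <= di_ratio < 0.80 or len(breaches) == 2:
        return "HIGH"         # Cloud Tasks queue, 1-hour delay
    if len(breaches) == 1 and breaches[0] < 2:
        return "MEDIUM"       # logged + dashboard + next retrain cycle
    return "PASS"             # clean record, audit trail only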

3 lines. That's the entire integration.

$ pip install fairops-sdk
from fairops_sdk import FairOpsClient

client = FairOpsClient(
    project_id="fairops-prod",
    model_id="hiring-classifier",
    model_version="v2.1",
    use_case="hiring",
    tenant_id="acme-corp",
)

# Call after every prediction. That's it.
client.log_prediction(
    features={"age": 35, "sex": "Male", "education": "Bachelors"},
    prediction={"label": "approved", "score": 0.87, "threshold": 0.5},
)

FairOps handles everything else automatically.

Streaming to Cloud Pub/Sub
Schema validation + PII tokenization (Cloud DLP)
Demographic enrichment (BISG + ACS 2022)
12-metric audit every 15 minutes
Gemini Pro explanation on breach
10-step Vertex AI mitigation pipeline
Immutable Cloud Spanner audit trail
PDF compliance report export (EU AI Act / EEOC)
12 Fairness Metrics · < 5 min Bias detection latency · < 2% Accuracy loss limit · EU AI Act Articles 9, 12, 13 covered

GCP-Native. Zero Infrastructure Management.

Every component is a managed GCP service. No Kubernetes. No self-hosted anything.

Your Model → FairOps SDK → Cloud Pub/Sub → Cloud Dataflow → BigQuery / Auditor → Vertex AI Pipelines → Cloud Run / Gemini → DB

Cloud Pub/Sub · Cloud Dataflow · BigQuery · Vertex AI Pipelines · Vertex AI Model Registry · Cloud Run · Cloud Spanner · Gemini Pro · Looker Studio · Secret Manager · Cloud DLP · Cloud Armor · Cloud KMS

Built for the regulatory moment.

Not bolted on after the audit letter arrives.

EU AI Act

Title III high-risk system requirements. Articles 9, 12, 13 covered. Audit logs, record-keeping, transparency obligations satisfied by default.

EEOC 4/5ths Rule

disparate_impact_ratio directly implements the 80% rule. Threshold 0.80. Computed on every 15-minute window. Breach triggers immediate pipeline.

GDPR Art. 22

Right to explanation for automated decisions. DiCE counterfactual examples + Gemini Pro narratives generated per individual, on demand.
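DiCE does a diverse search over many features; the core idea behind a per-individual counterfactual can be shown with a brute-force single-feature sketch that works against any model behind any API. Everything here (names, the toy model in the usage note) is illustrative, not the FairOps implementation:

```python
def nearest_counterfactual(predict, record, feature, candidates):
    """Smallest change to one feature that flips the model's decision.

    predict:    callable mapping a record dict to a 0/1 decision
    candidates: alternative values to try for `feature`
    Returns the modified record, or None if no candidate flips it.
    """
    base = predict(record)
    best = None
    for value in candidates:
        trial = dict(record, **{feature: value})
        if predict(trial) != base:
            dist = abs(value - record[feature])
            if best is None or dist < best[0]:
                best = (dist, trial)
    return best[1] if best else None
```

For example, with a toy rule approving incomes of 50 and above, an applicant at 40 gets back "income 55 would have been approved" rather than an opaque rejection, which is the substance of the Art. 22 explanation right.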

India DPDPA

PII tokenized (not deleted) via Cloud DLP. Purpose limitation satisfied. Data stays in your GCP project — never leaves your perimeter.
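In the real pipeline Cloud DLP performs the tokenization (its deterministic transforms, e.g. CryptoDeterministicConfig, are the managed equivalent). The idea, stable pseudonyms instead of deletion so joins and audits still work, can be illustrated with a keyed HMAC; this stdlib sketch is a stand-in, not a replacement for DLP:

```python
import hmac
import hashlib

def tokenize_pii(record, pii_fields, key):
    """Deterministic pseudonymization of selected fields.

    Same input + same key always yields the same token, so group-level
    audits and cross-table joins survive while raw PII never leaves
    the row. The key would live in Secret Manager, never in code.
    """
    out = dict(record)
    for field in pii_fields:
        if field in out:
            digest = hmac.new(key, str(out[field]).encode(), hashlib.sha256)
            out[field] = "tok_" + digest.hexdigest()[:16]
    return out
```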

Tested on documented real-world harm.

Not toy datasets fabricated for benchmarks.

COMPAS Recidivism

ProPublica

Black defendants assigned 2× higher recidivism risk scores. FairOps detects this in the first 15-minute audit window.

HMDA Mortgage Data

US CFPB

Racial and income-based lending discrimination across ZIP codes. Intersectional bias visible across demographic cross-products.

UCI Adult Income

UC Irvine ML Repository

Gender pay gap proxy in income prediction. A vanilla RandomForestClassifier gives disparate_impact_ratio ≈ 0.38 — well below the 0.80 EEOC floor.

ACS PUMS 2022

US Census Bureau

Intersectional race + gender bias in employment decisions. Multi-dimensional discrimination detected via metric #11.

FairOps is free. Forever.

MIT licensed. GCP-native. No proprietary runtime. No usage fees. No vendor lock-in beyond the cloud provider you already use.

Built by Toro Bees · Google Solution Challenge 2026 · Track: Unbiased AI Decision