FairOps detects bias in production ML systems every 15 minutes, explains it in plain English, and automatically mitigates — before a compliance officer or a journalist does.
3 lines of SDK code. Predictions stream to Cloud Pub/Sub → Dataflow → BigQuery. 12 fairness metrics computed every 15 minutes with 95% bootstrap confidence intervals.
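The event flowing through that pipeline can be pictured as one flat JSON document, published to Pub/Sub and landing unchanged as a BigQuery row. A minimal sketch; the field names below are illustrative assumptions, not the SDK's actual wire schema:

```python
import json

# Hypothetical shape of one prediction event. Field names are
# illustrative, not FairOps' real schema.
event = {
    "tenant_id": "acme-corp",
    "model_id": "hiring-classifier",
    "model_version": "v2.1",
    "features": {"age": 35, "sex": "Male", "education": "Bachelors"},
    "prediction": {"label": "approved", "score": 0.87, "threshold": 0.5},
    "logged_at": "2026-01-15T09:30:00Z",
}

payload = json.dumps(event).encode("utf-8")  # Pub/Sub messages are bytes
```

Keeping the event flat and self-describing is what lets Dataflow write it straight into a partitioned BigQuery table with no joins at audit time.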
SHAP TreeExplainer pinpoints which features drive the bias. Gemini Pro writes a plain-English audit report for your compliance officer — not your data scientist.
A 10-step Vertex AI Pipeline automatically selects the right AIF360 algorithm, retrains with fairness constraints, validates on live traffic, and promotes the debiased model. Under 2% accuracy loss.
All 12 metrics are computed simultaneously on every audit window, each with a bootstrap confidence interval and chi-square significance gating.
| # | Metric | Threshold | Breach When | What It Catches |
|---|---|---|---|---|
| 1 | demographic_parity_difference | 0.10 | > | Selection rate gap between groups |
| 2 | equalized_odds_difference | 0.08 | > | Performance gap at same error rates |
| 3 | equal_opportunity_difference | 0.05 | > | True positive rate disparity |
| 4 | disparate_impact_ratio | 0.80 | < | EEOC 4/5ths rule violation |
| 5 | average_odds_difference | 0.07 | > | Combined error rate fairness |
| 6 | statistical_parity_subgroup_lift | 1.25 | > | Worst-case group advantage |
| 7 | predictive_parity_difference | 0.08 | > | Precision gap across groups |
| 8 | calibration_gap | 0.05 | > | Score miscalibration by group |
| 9 | individual_fairness_score | 0.85 | < | Similar inputs treated differently |
| 10 | counterfactual_fairness | 0.06 | > | Causal fairness under intervention |
| 11 | intersectional_bias_score | 0.12 | > | Multi-dimensional discrimination |
| 12 | temporal_drift_index | 5.00 | > | Bias getting worse over time |
† If chi-square p > 0.05, severity is overridden to LOW regardless of metric value. Statistical noise is not bias.
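The per-window gating described above can be sketched in plain Python: a percentile bootstrap for the confidence interval and a 2×2 chi-square test for the significance gate. This is an illustration of the technique, not the production Dataflow code; all function names are assumptions.

```python
import random

def selection_rate(labels):
    """Fraction of positive (1) decisions in a list of 0/1 labels."""
    return sum(labels) / len(labels)

def dpd(group_a, group_b):
    """demographic_parity_difference: absolute gap in selection rates."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

def bootstrap_ci(group_a, group_b, n_boot=1000, alpha=0.05, seed=0):
    """95% percentile-bootstrap CI for the parity gap."""
    rng = random.Random(seed)
    stats = sorted(
        dpd([rng.choice(group_a) for _ in group_a],
            [rng.choice(group_b) for _ in group_b])
        for _ in range(n_boot)
    )
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]

def chi_square_gate(group_a, group_b):
    """2x2 chi-square test of decision vs. group. True = significant at
    p < 0.05 (statistic exceeds the 3.841 critical value at df = 1)."""
    a1, a0 = sum(group_a), len(group_a) - sum(group_a)
    b1, b0 = sum(group_b), len(group_b) - sum(group_b)
    n = a1 + a0 + b1 + b0
    cells = [(a1, a1 + a0, a1 + b1), (a0, a1 + a0, a0 + b0),
             (b1, b1 + b0, a1 + b1), (b0, b1 + b0, a0 + b0)]
    return sum((obs - row * col / n) ** 2 / (row * col / n)
               for obs, row, col in cells) > 3.841

# One simulated audit window: group A approved 70% of the time, group B 40%.
a = [1] * 70 + [0] * 30
b = [1] * 40 + [0] * 60
gap = dpd(a, b)                      # 0.30, breaching the 0.10 threshold
significant = chi_square_gate(a, b)  # True: real signal, not noise
```

Comparing the statistic to the 3.841 critical value avoids a SciPy dependency in the sketch; the real gate would report the p-value itself.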
| Severity | Trigger Condition | Response |
|---|---|---|
| Critical | disparate_impact_ratio < 0.65, OR any metric > 3× threshold, OR 3+ metrics breached simultaneously | Vertex AI Pipeline triggered immediately (synchronous) |
| High | disparate_impact_ratio ∈ [0.65, 0.80), OR 2 metrics breached | Cloud Tasks queue, 1-hour delay |
| Medium | 1 metric breached, value < 2× threshold, p < 0.05 | Logged + dashboard highlight + next retrain cycle |
| Low | p > 0.05 (statistical noise), OR no breach | Audit trail only (clean record) |
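Those routing rules can be sketched as a single classification function. Illustrative only: the metric names, and the conservative handling of the ambiguous case where one metric sits between 2× and 3× its threshold, are assumptions.

```python
def classify_severity(metrics, thresholds, p_value):
    """Illustrative severity routing for one audit window.
    Higher-is-worse metrics breach when value > threshold;
    disparate_impact_ratio breaches when value < threshold."""
    if p_value > 0.05:
        return "LOW"  # statistical noise is not bias
    dir_value = metrics["disparate_impact_ratio"]
    breached, worst_ratio = [], 0.0
    for name, value in metrics.items():
        t = thresholds[name]
        if name == "disparate_impact_ratio":
            if value < t:
                breached.append(name)
        elif value > t:
            breached.append(name)
            worst_ratio = max(worst_ratio, value / t)
    if dir_value < 0.65 or worst_ratio > 3 or len(breached) >= 3:
        return "CRITICAL"
    if dir_value < 0.80 or len(breached) >= 2:
        return "HIGH"
    if len(breached) == 1 and worst_ratio < 2:
        return "MEDIUM"
    # Ambiguous single breach at 2-3x threshold: escalate conservatively.
    return "HIGH" if breached else "LOW"

# Example: ratio healthy, but parity gap breached at 1.5x its threshold
severity = classify_severity(
    {"disparate_impact_ratio": 0.90, "demographic_parity_difference": 0.15},
    {"disparate_impact_ratio": 0.80, "demographic_parity_difference": 0.10},
    p_value=0.01,
)  # → "MEDIUM"
```

Checking the p-value first mirrors the footnote above: no breach is escalated unless it clears the significance gate.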
```python
from fairops_sdk import FairOpsClient

client = FairOpsClient(
    project_id="fairops-prod",
    model_id="hiring-classifier",
    model_version="v2.1",
    use_case="hiring",
    tenant_id="acme-corp",
)

# Call after every prediction. That's it.
client.log_prediction(
    features={"age": 35, "sex": "Male", "education": "Bachelors"},
    prediction={"label": "approved", "score": 0.87, "threshold": 0.5},
)
```
FairOps handles everything else automatically.
Every component is a managed GCP service. No Kubernetes. No self-hosted anything.
Not bolted on after the audit letter arrives.
- **EU AI Act:** Title III high-risk system requirements. Articles 9, 12, and 13 covered. Audit logs, record-keeping, and transparency obligations satisfied by default.
- **EEOC 4/5ths rule:** disparate_impact_ratio directly implements the 80% rule. Threshold 0.80, computed on every 15-minute window; a breach triggers the pipeline immediately.
- **GDPR Article 22:** Right to explanation for automated decisions. DiCE counterfactual examples + Gemini Pro narratives generated per individual, on demand.
- **GDPR purpose limitation:** PII tokenized (not deleted) via Cloud DLP. Data stays in your GCP project and never leaves your perimeter.
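The 4/5ths rule referenced above reduces to a one-line ratio of selection rates. A minimal sketch (argument names are illustrative):

```python
def disparate_impact_ratio(selected_protected, total_protected,
                           selected_reference, total_reference):
    """EEOC 4/5ths (80%) rule: the protected group's selection rate
    divided by the reference group's. Below 0.80 signals adverse impact."""
    rate_protected = selected_protected / total_protected
    rate_reference = selected_reference / total_reference
    return rate_protected / rate_reference

# 30 of 100 protected-group applicants approved vs. 50 of 100 reference
ratio = disparate_impact_ratio(30, 100, 50, 100)  # 0.6 < 0.80 → breach
```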
Not toy datasets fabricated for benchmarks.
- Black defendants assigned 2× higher recidivism risk scores. FairOps detects this in the first 15-minute audit window.
- Racial and income-based lending discrimination across ZIP codes. Intersectional bias visible across demographic cross-products.
- Gender pay gap proxy in income prediction. A vanilla RandomForestClassifier gives disparate_impact_ratio ≈ 0.38, well below the 0.80 EEOC floor.
- Intersectional race + gender bias in employment decisions. Multi-dimensional discrimination detected via metric #11.
MIT licensed. GCP-native. No proprietary runtime. No usage fees. No vendor lock-in beyond the cloud provider you already use.
Built by Toro Bees · Google Solution Challenge 2026 · Track: Unbiased AI Decision