Every forecast ConflictRadar publishes is tracked against its actual outcome. Our calibration curve and Brier score are public. If our model is wrong, it's visible here.
Palantir, Dataminr, and Recorded Future don't publish their accuracy. We do, because a forecast you can't audit isn't worth trusting.
Predicted probability vs. observed event frequency, grouped into decile buckets. A perfectly calibrated model plots on the diagonal.
Not enough resolved forecasts yet.
We publish calibration once we've resolved a statistically meaningful sample. Forecasts auto-resolve against ground-truth events on their resolution date. This page will populate as the model accumulates a track record.
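For anyone who wants to reproduce the chart once data lands, here is a minimal sketch of the decile bucketing, assuming each resolved forecast reduces to a (predicted probability, binary outcome) pair. Names are illustrative, not our production code.

```python
import numpy as np

def calibration_deciles(probs, outcomes):
    """Group forecasts into decile buckets by predicted probability,
    then compare the mean prediction to the observed event frequency."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    # Bucket 0 covers [0.0, 0.1), ..., bucket 9 covers [0.9, 1.0].
    buckets = np.clip((probs * 10).astype(int), 0, 9)
    curve = []
    for b in range(10):
        mask = buckets == b
        if mask.any():
            curve.append((
                probs[mask].mean(),     # x: mean predicted probability
                outcomes[mask].mean(),  # y: observed event frequency
                int(mask.sum()),        # forecasts in the bucket
            ))
    return curve  # perfect calibration: x ≈ y in every bucket
```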
Every published forecast carries a resolution question, a due date, and resolution criteria. On the due date we compare the predicted probability to the observed outcome and record the squared error (Brier contribution). Nothing is hand-picked.
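Concretely, resolution reduces to one arithmetic step per forecast. A sketch with illustrative names and made-up data (this mirrors the standard Brier scoring rule, not a specific internal API):

```python
def brier_contribution(predicted_prob: float, occurred: bool) -> float:
    """Squared error recorded when a forecast resolves on its due date."""
    return (predicted_prob - float(occurred)) ** 2

# The published Brier score is the mean contribution over all
# resolved forecasts. The data below is illustrative.
resolved = [(0.80, True), (0.30, False), (0.60, False)]
brier = sum(brier_contribution(p, o) for p, o in resolved) / len(resolved)
print(round(brier, 3))  # 0.163
```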
Brier scores range from 0 (perfect) to 1 (maximally wrong). 0.25 is the uninformed coin-flip baseline: always answering 0.5 scores (0.5 − outcome)² = 0.25 on every forecast. ViEWS-style academic conflict models run ~0.15. Superforecasters aggregate to ~0.11. We publish ours regardless.
Forecasts blend statistical baselines (ViEWS-style event counts), LLM causal reasoning against our entity graph, and liquid prediction-market odds where available. Per Article 50 of the EU AI Act (applicable from 2 August 2026), every AI-assisted output in the product carries a disclosure label linking here.
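How the three signals are weighted isn't specified on this page. One common ensembling choice is a weighted average in log-odds space, sketched below with hypothetical weights and inputs purely for illustration; it is not a description of the actual ConflictRadar blend.

```python
import math

def blend_logodds(probs, weights):
    """Weighted average of component probabilities in log-odds space.
    Assumes 0 < p < 1 for every component. Illustrative only; the
    real blend and its weights are not published on this page."""
    logit = lambda p: math.log(p / (1.0 - p))
    z = sum(w * logit(p) for p, w in zip(probs, weights)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical inputs: statistical baseline, LLM estimate, market odds.
p = blend_logodds([0.12, 0.20, 0.15], [1.0, 1.0, 1.0])
print(round(p, 3))  # ≈ 0.154
```

Averaging in log-odds rather than raw probability keeps the blend symmetric around extreme values, which is why it is a frequent default for combining probabilistic forecasts.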
Full methodology