Michael Bishop
Forecasting · AI Governance · Ottawa
I work on forecasting — how to build systems and find people who make better predictions about uncertain futures. My research has focused on what separates the best forecasters from everyone else, how teams outperform individuals, and how to design infrastructure that makes collective intelligence more accurate.
I was part of the Good Judgment Project team that won the IARPA crowdsourced forecasting tournament, and I later led the forecasting component of DARPA SCORE, a large-scale effort to predict the replicability of social and behavioral science research. I currently work on AI governance in Ottawa.
Research
- Forecasting the publication and citation outcomes of COVID-19 preprints
- Are replication rates the same across academic fields? Community forecasts from the DARPA SCORE programme
- What Makes Foreign Policy Teams Tick? Explaining Variation in Group Performance at Geopolitical Forecasting
- Assessing Objective Recommendation Quality through Political Forecasting
- Identifying and Cultivating Superforecasters as a Method of Improving Probabilistic Predictions
- The Psychology of Intelligence Analysis: Drivers of Prediction Accuracy in World Politics
Awards
- 2021 Top Prize, Clinical Trials Forecasting (Human Forests)
- 2014 1st Place, American Civics Exchange
- 2013 2nd of 3,000+, GJP Probabilistic Forecasting Tournament
- 2012 1st of 300+, Good Judgment Project Prediction Market