An idiot’s guide to prediction markets

I have been trying to understand how to form better predictions for a long time. I will use writing this blog post as an instrument for doing so.

I came across an interesting question on manifold.markets, a website that gives people play money to bet on the outcomes of real-world events. The question asks whether an AI will win a gold medal at the International Mathematical Olympiad (IMO) by 2025.

I give this a 75% chance of happening, which is in fact a number I arrived at by writing this post. I will detail my thought process below.

What is my prediction, and how much money did I spend on this?

I predict that the answer is “Yes”, and I bought $200 worth of shares on it.
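As a rough sanity check on the bet itself: on Manifold, a YES share pays out one unit of play money if the question resolves "Yes", so a bet is only positive in expectation when your probability estimate exceeds the price you pay per share. Here is a minimal sketch, assuming a hypothetical share price of $0.60 and ignoring the price impact of the purchase; the actual price at the time of my bet may have differed.

```python
# Expected value of a prediction-market bet, as a sketch.
# Assumptions (not from the actual market): YES shares cost $0.60 each,
# each pays $1.00 if the question resolves "Yes", and buying does not
# move the price.
stake = 200      # play money spent
price = 0.60     # hypothetical cost per YES share
belief = 0.75    # my estimated probability of "Yes"

shares = stake / price                    # number of YES shares bought
expected_payout = belief * shares * 1.00  # each share pays $1 on "Yes"
expected_profit = expected_payout - stake

print(f"Shares bought: {shares:.1f}")              # 333.3
print(f"Expected payout: ${expected_payout:.2f}")  # $250.00
print(f"Expected profit: ${expected_profit:.2f}")  # $50.00, positive since belief > price
```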

Why do I think that the answer is “Yes”?

There are multiple reasons, really:

How many questions must one solve to get a gold medal at the IMO?

Historically, one must fully solve at least 4 of the 6 questions and earn some partial credit on the others.
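To see why, note that each IMO problem is worth 7 points, so four full solutions give 28 of the 42 available points, while gold-medal cutoffs vary considerably by year (roughly from the mid-20s to the mid-30s). A quick check of the arithmetic, with the cutoff as the year-dependent assumption:

```python
# Quick arithmetic check on the gold-medal threshold. Each problem is
# worth 7 points; the gold cutoff varies by year (assumed here as 31,
# a value in the historical range).
points_per_problem = 7
total_problems = 6
full_solutions = 4

base_score = full_solutions * points_per_problem   # 28 of 42 points
assumed_gold_cutoff = 31                           # varies year to year

partial_needed = max(0, assumed_gold_cutoff - base_score)
print(f"Score from 4 full solutions: {base_score}/42")
print(f"Partial credit still needed for gold: {partial_needed} points")  # 3
```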

What is the probability of me winning the bet?

Let us answer that with a rough estimate. In 2020, AI could hardly solve any math questions that required creativity. By 2022, it could verify complex proofs as well as answer the easier IMO questions. It could also formulate mathematical conjectures and help mathematicians prove big results. It had gotten better at answering open-ended questions like "What is the meaning of life?", successfully predicted protein folding, and so on.

How much harder are the more difficult IMO questions than the easier ones? Around 550 students participate in the IMO every year, and around 2-3 of them get perfect scores. If one assumes that at least 400 students solve at least one problem, and that difficulty scales with how few students manage a solution, the ratio 400/3 ≈ 133 suggests the harder IMO questions are at least 100 times harder than the easier ones. Assuming that AI becomes only 10 times better at solving math questions every 2 years, it would need roughly 4 years from 2022 to close a 100x gap, which is not good news for this prediction coming true by 2025.
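Made concrete, the timeline estimate looks like this (a minimal sketch; the solver counts and the 10x-per-2-years growth rate are the assumptions stated above, not measured quantities):

```python
import math

# Fermi estimate of when AI reaches gold-medal-level IMO performance.
# All inputs are assumptions from the reasoning above.
solvers_of_easiest = 400   # students assumed to solve at least one problem
perfect_scorers = 3        # students with perfect scores in a typical year
difficulty_ratio = solvers_of_easiest / perfect_scorers  # ~133x

growth_factor = 10         # assumed AI capability multiplier...
period_years = 2           # ...every two years

# Years of progress needed to cover the difficulty gap:
years_needed = period_years * math.log(difficulty_ratio, growth_factor)

print(f"Hardest problems ~{difficulty_ratio:.0f}x harder than the easiest")
print(f"Years needed from 2022: {years_needed:.1f}")         # ~4.2
print(f"Estimated arrival year: {2022 + years_needed:.1f}")  # ~2026
```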

Does this suggest that the IMO challenge should be solved by 2026, and not 2025?

Yes. If progress continues at its current rate, and no further algorithmic advances arrive to speed it up, we should be golden by 2026.

What should the chances of the challenge being solved by 2025 be?

Assuming the probability ramps linearly from 0% in 2022 to 100% in 2026, each additional year adds 25 percentage points, which puts 2025 at around 75%.
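The ramp is easy to make explicit (a sketch; the linear interpolation between 2022 and 2026 is itself the assumption):

```python
# Linear probability ramp assumed above: 0% in 2022, 100% in 2026,
# i.e. 25 percentage points per year.
def p_solved_by(year, start=2022, end=2026):
    return min(1.0, max(0.0, (year - start) / (end - start)))

for year in range(2023, 2027):
    print(year, f"{p_solved_by(year):.0%}")
# 2023 25%, 2024 50%, 2025 75%, 2026 100%
```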

What will cause me to update this probability?

Reading papers along the lines of “AI can now solve almost all questions on the IMO”, or “AI likely to require human supervision in the near future” would cause me to update my prediction.

What about the fact that the Metaculus dashboard is much more pessimistic about this question than I am?

If I turn out to be wrong, I will update how I form my estimates, and look at Metaculus predictions to set my expectations (base rate).
