SimpleBench is the only natural-language reasoning benchmark on which English-speaking humans (yes, even 'smart high schoolers') score 90%+, while frontier LLMs score under 50%. It encapsulates the reasoning deficit found in AI systems like ChatGPT.
The questions are fully private, preventing contamination, and have been vetted by PhDs from multiple domains, as well as by the author, Philip of AI Explained, who first exposed the numerous errors in the MMLU (Aug 2023). That work was celebrated by, among others, Andrej Karpathy.
I regularly update the leaderboard as new models are released or existing models are improved. The evaluation methodology ensures a fair comparison across different AI architectures and companies.
I am also actively interested in sponsorship of the benchmark and other business enquiries (e.g. testing private models). Contact aiexplained@outlook.com.