We Need a New Turing Test to Assess AI’s Real-World Knowledge

A fresh set of benchmarks could help specialists to better understand artificial intelligence.

Artificial intelligence (AI) models can perform as well as humans on law exams when answering multiple-choice, short-answer and essay questions, as shown in a preprint posted on SSRN. However, they struggle with real-world legal tasks.

Some lawyers have learnt this the hard way: they have been fined for filing AI-generated court briefs that misrepresented principles of law and cited non-existent cases.

Author: Chaudhri, principal scientist at Knowledge Systems Research in Sunnyvale, California.


Nature — 2025-10-30