Incorporating human error into machine learning

  • 10 August 2023

Researchers including a 2023 Gonville & Caius College graduate are developing a way to incorporate one of the most human of characteristics – uncertainty – into machine learning systems.

Human error and uncertainty are concepts that many artificial intelligence systems fail to grasp, particularly in systems where a human provides feedback to a machine learning model. Many of these systems are programmed to assume that humans are always certain and correct, but real-world decision-making includes occasional mistakes and uncertainty.

Researchers from the University of Cambridge, along with The Alan Turing Institute, Princeton, and Google DeepMind, have been attempting to bridge the gap between human behaviour and machine learning. The team adapted a well-known image classification dataset so that humans could provide feedback and indicate their level of uncertainty when labelling a particular image. The researchers found that training with uncertain labels can improve these systems’ performance in handling uncertain feedback, although incorporating human feedback also caused the overall performance of these hybrid systems to drop.
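One common way to represent this kind of uncertain feedback is to train against "soft" labels, where an annotator supplies a probability distribution over classes rather than a single hard class. The sketch below illustrates that general idea only; it is not taken from the paper, and the model, sizes, and example distributions are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy classifier: 8 input features, 3 classes (illustrative sizes).
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Hard label: the annotator is treated as 100% certain of class 0.
hard_target = torch.tensor([[1.0, 0.0, 0.0]])

# Soft label: the annotator says "probably class 0, maybe class 1",
# encoding their stated uncertainty as a probability distribution.
soft_target = torch.tensor([[0.7, 0.25, 0.05]])

x = torch.randn(1, 8)  # one dummy input example

# Cross-entropy against a distribution rather than a class index:
# H(p, q) = -sum_c p(c) * log q(c), with q from the model's softmax.
logits = model(x)
loss = -(soft_target * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Training on distributions like this lets the loss reflect an annotator's confidence instead of forcing every label to count as fully certain.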

Katherine Collins from Cambridge’s Department of Engineering is first author on the paper. The team’s results will be reported at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society (AIES 2023) in Montréal this week.

Matthew Barker (Engineering 2019), who completed his MEng degree at Caius in the summer, said: “We know from decades of behavioural research that humans are almost never 100% certain, but it’s a challenge to incorporate this into machine learning.

“We’re trying to bridge the two fields, so that machine learning can start to deal with human uncertainty where humans are part of the system.

“In some ways, this work raised more questions than it answered. But even though humans may be mis-calibrated in their uncertainty, we can improve the trustworthiness and reliability of these human-in-the-loop systems by accounting for human behaviour.”

For the full story, visit the University of Cambridge website.

The paper is available online: Human Uncertainty in Concept-Based AI Systems, arXiv:2303.12872.