
The need for AI safeguards with human-in-the-loop systems
AI systems have problems: they don't always work as intended. They hallucinate, lie, or simply forget the all-important closing bracket in a JSON payload. AI systems need accountability, safeguards, and oversight. But what should that look like? Human-in-the-loop (HITL) and its companion, human-on-the-loop, are still at a nascent stage. These strategies, designed