The EU now requires a human name attached to every AI decision that affects a livelihood.

New enforcement measures end the era of "the computer says no" as a valid legal defence.

In March 2026, the European Commission introduced enforcement measures requiring that any AI-driven decision affecting a person's livelihood — from hiring filters to credit scoring — must have a verifiable human audit trail. The machine provides the data. The human provides the authority. And the human's name is on the decision.

Over the last three years, the corporate world has rushed toward total algorithmic automation to cut costs. This regulation forces a structural reversal. It follows the same pattern as safety-driver requirements for autonomous vehicles: the technology exists, but the legal liability remains human. When the state mandates that a person sign off on a machine's output, the AI is demoted from decision-maker to junior researcher.

If your process does not have a name attached to the final click, it is now a legal liability in the EU.

💡
SO WHAT?
Map every automated decision point in your current operations that affects people's livelihoods. If your business relies on AI to make autonomous choices without a human override, implement human-in-the-loop checkpoints now — the regulatory direction is clear, and it is spreading beyond the EU.

Source: European Commission