Article 14 of the EU AI Act requires meaningful human oversight of high-risk AI systems. Here is what 'meaningful' looks like in practice.
The text is short. The implication is large.
Article 14(4) requires that natural persons be able to intervene in the operation of a high-risk AI system and to disregard, override, or reverse its outputs.
Meaningful intervention requires the system to expose its reasoning legibly, in time to act, and with the authority to change the outcome.
Approval gates, not approval theater.
A button labeled 'approve' next to a black box is not oversight. Oversight requires the gate to expose the evidence chain, the alternatives considered, and the override path.
Huginn gates expose all three by default. Sleipnir gates honor them at execution time.
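As a rough illustration of those three surfaces, here is a minimal sketch in Python. The field and class names are hypothetical, chosen for this example; they are not the actual Huginn or Sleipnir schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of what an approval gate could carry.
# None of these names come from Huginn or Sleipnir; they illustrate
# the three surfaces a meaningful gate must expose.

@dataclass
class Evidence:
    source: str  # where this item came from
    claim: str   # what it asserts


@dataclass
class ApprovalGate:
    proposed_action: str
    evidence_chain: list   # ordered evidence supporting the proposal
    alternatives: list     # other actions the system considered
    override_path: str     # how a reviewer redirects the outcome

    def is_reviewable(self) -> bool:
        # A gate is only meaningful if all three surfaces are populated;
        # an 'approve' button with empty fields is approval theater.
        return bool(self.evidence_chain) and bool(self.alternatives) and bool(self.override_path)


gate = ApprovalGate(
    proposed_action="approve_loan",
    evidence_chain=[Evidence("credit_report", "score above threshold")],
    alternatives=["deny_loan", "escalate_to_underwriter"],
    override_path="reviewer may select any listed alternative",
)
print(gate.is_reviewable())  # True: all three surfaces present
```

The check in `is_reviewable` is the point: a gate that cannot show its evidence, its alternatives, and its override path should never reach a human as a bare approve button.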
Override is not the same as veto.
The Article 14 right is the right to change the outcome, not merely to block it. That implies the system must surface three things: what would have been done, what could have been done instead, and what is being done now.
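The distinction between override and veto can be made concrete with a small sketch. The names below are illustrative only, not any real API: a veto collapses the outcome to nothing, while an override substitutes one of the surfaced alternatives.

```python
# Hypothetical sketch of override vs. veto. A veto only blocks;
# an override changes the outcome to a surfaced alternative.

class Decision:
    def __init__(self, proposed: str, alternatives: list):
        self.proposed = proposed          # what would have been done
        self.alternatives = alternatives  # what could have been done
        self.final = proposed             # what is being done

    def veto(self) -> None:
        # Blocking only: the outcome becomes "no action".
        self.final = None

    def override(self, choice: str) -> None:
        # Changing the outcome: the reviewer picks a surfaced alternative.
        if choice not in self.alternatives:
            raise ValueError("override must pick a surfaced alternative")
        self.final = choice


d = Decision("auto_reject", alternatives=["manual_review", "accept"])
d.override("manual_review")
print(d.final)  # manual_review
```

Note that `override` refuses choices the system never surfaced: a human cannot meaningfully redirect an outcome toward an option they were never shown.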