The Second-Order Problem AI Governance Isn’t Ready For Yet
Why the real risk of AI products isn’t failure; it’s slow human drift.
Most conversations about AI governance today focus on the obvious risks. Hallucinations. Bias. Misuse. Safety failures. Accuracy thresholds.
These are important. They’re also incomplete because the most consequential risk of AI products isn’t when they fail loudly. It’s when they work too well and quietly change how humans think, decide, and act over time.
From System Failure to Human Drift
Traditional product failures are easy to spot. Something breaks. A metric drops. Users complain.
Second-order effects don’t announce themselves like that. They emerge gradually, across weeks and months, as subtle shifts in human behaviour:
growing dependency,
reduced agency,
narrowing choices,
eroding judgment.
Nothing crashes. Nothing violates policy. The dashboards look fine. And yet something important is changing. The system may be accurate. The outcomes may be positive. But the range of human behaviour may be shrinking.
Dependency: When Success Looks Like Reliance
Consider AI assistants that help with writing, planning, summarising, and decision-making.
First-order view:
Time saved goes up.
Task completion improves.
Users report satisfaction.
Second-order reality:
Users stop attempting first drafts.
Planning becomes delegated by default.
Confidence shifts from “I think” to “Let me see what AI suggests.”
Nothing here is misuse. Nothing here is error. Dependency isn’t a bug; it’s a side effect of usefulness. And unless you’re explicitly looking for it, dependency is indistinguishable from delight.
Reduced Agency Disguised as Convenience
Now look at consumer products that automate financial decisions, scheduling, or daily routines.
First-order win:
Friction disappears.
Decisions feel effortless.
Outcomes are objectively “better.”
Second-order drift:
Users lose fluency.
They struggle to explain why a decision was made.
Manual intervention feels intimidating, even risky.
The system didn’t take control. The user gradually stopped exercising it. Accuracy does not equal empowerment. Optimization does not equal agency.
Current governance frameworks don’t flag this because nothing went wrong, but something meaningful was lost.
Behavioural Narrowing Through Helpful Personalization
Personalization is one of AI’s greatest strengths, and one of its quietest risks.
First-order success:
Relevance improves.
Engagement increases.
Users feel “understood.”
Second-order effect:
Choices narrow.
Exploration declines.
Serendipity fades.
Over time, the system feeds back what already works, reinforcing familiar patterns.
The product isn’t manipulative. It’s just increasingly confident, and increasingly limiting.
Behavioural narrowing doesn’t feel restrictive. It feels comfortable, and comfort is the hardest thing to govern against.
Cognitive Offloading and Skill Atrophy
AI copilots now assist with writing, coding, research, ideation, and synthesis.
First-order gain:
Output quality improves.
Velocity increases.
Barriers to entry fall.
Second-order cost:
Users stop practicing foundational skills.
They struggle to assess quality without AI assistance.
Judgment weakens because critique is outsourced.
The danger isn’t laziness; it’s diminished discernment. When humans stop being good judges, even perfect outputs become dangerous. The harm is not in what the system produces but in who the user becomes.
Why Current Guardrails Don’t Catch This
Second-order human effects are hard to govern because they share four traits:
They’re longitudinal. They emerge over time, not within a single session.
They’re probabilistic. Not every user is affected equally.
They’re contextual. What’s helpful for one person may be harmful for another.
They look like success early on. Rising usage, satisfaction, efficiency.
First-order thinking asks: “Did the system work?”
Second-order thinking asks: “What is the system shaping?”
Most product organizations are still optimized for the first question.
The Real Governance Challenge
The hard problem isn’t identifying these risks.
It’s this:
How do you turn slow, human-level behavioural drift into something a system can be constrained against?
You can’t:
set a clean threshold for “too dependent,”
A/B test “loss of agency” in two weeks,
or optimize for “human growth” the way you optimize CTR.
This is why AI governance can’t remain:
intent-based,
accuracy-based,
or policy-only.
It has to become product-led and longitudinal.
What Expanded Guardrails Might Look Like
This doesn’t mean banning autonomy or slowing innovation to a crawl.
It means evolving guardrails to include (a rough measurement sketch follows this list):
signals of declining agency,
patterns of over-acceptance,
shrinking choice diversity,
reduced user initiation,
increasing cognitive offloading.
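To make that less abstract, here is a minimal sketch of how a few of these signals might be approximated from ordinary interaction logs. Everything in it is an assumption made for illustration: the event schema, the field names (accepted_without_edit, user_initiated, category), the weekly aggregation, and the thresholds are placeholders, not a reference implementation.

```python
# Hypothetical drift-signal sketch. The event schema and thresholds are
# illustrative assumptions, not a production or reference implementation.
from collections import Counter
from dataclasses import dataclass
from math import log2


@dataclass
class Event:
    week: int                    # week index since the user joined
    ai_suggested: bool           # the action originated from an AI suggestion
    accepted_without_edit: bool  # user accepted the suggestion verbatim
    user_initiated: bool         # user started the task themselves
    category: str                # coarse content/decision category


def weekly_signals(events: list[Event]) -> dict[int, dict[str, float]]:
    """Aggregate per-week behavioural signals for a single user."""
    by_week: dict[int, list[Event]] = {}
    for e in events:
        by_week.setdefault(e.week, []).append(e)

    signals: dict[int, dict[str, float]] = {}
    for week, evs in sorted(by_week.items()):
        suggested = [e for e in evs if e.ai_suggested]
        # Over-acceptance: share of AI suggestions taken verbatim.
        over_acceptance = (
            sum(e.accepted_without_edit for e in suggested) / len(suggested)
            if suggested else 0.0
        )
        # Initiation: share of actions the user started themselves.
        initiation = sum(e.user_initiated for e in evs) / len(evs)
        # Choice diversity: Shannon entropy over categories touched that week.
        counts = Counter(e.category for e in evs)
        total = sum(counts.values())
        diversity = -sum((c / total) * log2(c / total) for c in counts.values())
        signals[week] = {
            "over_acceptance": over_acceptance,
            "initiation": initiation,
            "choice_diversity": diversity,
        }
    return signals


def drifting(signals: dict[int, dict[str, float]], window: int = 8) -> bool:
    """Crude longitudinal check: compare recent weeks against the earliest ones."""
    weeks = sorted(signals)
    if len(weeks) < 2 * window:
        return False  # not enough history; these are longitudinal signals by nature

    def mean(ws: list[int], key: str) -> float:
        return sum(signals[w][key] for w in ws) / len(ws)

    early, late = weeks[:window], weeks[-window:]
    return (
        mean(late, "over_acceptance") > mean(early, "over_acceptance") + 0.2
        and mean(late, "initiation") < mean(early, "initiation") - 0.2
        and mean(late, "choice_diversity") < mean(early, "choice_diversity") * 0.8
    )
```

None of these thresholds is principled, and a real version would need cohorts, baselines, and consent. The point is narrower: the raw material for second-order signals already sits in ordinary product telemetry; what is missing is the decision to look at it over months rather than sessions.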
It may mean designing:
friction on purpose,
reflection moments,
periodic re-engagement,
explainability that restores understanding, not just transparency.
In the next phase of AI products, restraint may become a competitive advantage.
What This Demands of Product Leaders
For consumer PMs, this is an uncomfortable shift.
It requires unlearning:
the instinct to remove all friction,
the belief that optimization is neutral,
the assumption that delight is always benign.
And relearning:
systems thinking,
behavioural foresight,
long-horizon accountability.
Closing Reflection
AI will increasingly shape how humans decide, explore, and think.
The real risk isn’t loss of control overnight. It’s gradual surrender without noticing.
The second-order problem isn’t a future edge case. It’s the default outcome of success at scale.
And unless product leaders expand how they define “good outcomes,” we’ll keep building systems that work perfectly while quietly narrowing the humans they’re meant to serve.
If this resonated, subscribe to The Long Sprint: reflections from the never-ending product sprint called a career.


