What Cybernetics Taught Me About Product Design

Cybernetics is not a theory of machines, but of viable systems under change. This reflective piece explores how that perspective reshaped my approach to AI-driven product design.
AI Adoption Is a Systems Problem, Not a Training Problem

Resistance to AI is rarely about mindset or education. This essay argues that adoption emerges naturally when systems invite people to participate in making sense of them.
Agent Networks Are Not About Intelligence — They’re About Responsibility

The real value of agent systems is not smarter output, but clearer accountability. This piece reframes agents as a way to scope responsibility rather than centralize intelligence.
Why “Workflow” Is the Wrong Primitive for AI Product Design

Workflows assume predictability; AI introduces variation. This essay explains why assessment must precede flow when systems are asked to interpret uncertain inputs.
Designing with Distinctions, Not Features

Systems fail less from missing features than from blurred boundaries. This piece shows why making distinctions explicit is often more powerful than adding new capabilities.
The Inbox Is the Most Underdesigned System in Enterprise Software

The inbox is where reality enters software systems — and where interpretation is most often postponed. This essay reframes the inbox as a boundary of meaning, not a list of messages.
When a System Can Speak in Sentences, It Can Be Governed

Dashboards and metrics expose data, but not understanding. This piece explores why systems become governable only once they can articulate their current state in human language.
Meaning Comes Before Automation

Automation fails quietly when systems act before they understand what they are acting on. This essay argues that stabilizing meaning is a prerequisite for trust, not an optional refinement.