Why IT Systems Fail When They Ignore Human Behavior

Many large IT initiatives fail for a remarkably simple reason: they do not fit how people actually behave.
Over the years, I have seen ambitious systems with strong business cases, solid architecture, and executive backing quietly die after launch. Not because they were badly built, but because the solutions ignored existing user behavior and failed to compensate for it.
Two very different experiences, separated by more than twenty years, illustrate this pattern clearly.
A lesson from early telecom
More than two decades ago, I was leading a team building a solution at TDC aimed at identifying gaps in mobile network coverage.
The idea was clever on paper. At the time, TDC had around 17,500 employees spread across the country. Why not turn them into a distributed network of “human measurement stations”? Whenever someone experienced poor mobile coverage, they could report it through a simple web interface. In theory, this would generate valuable, real-world data to guide network expansion.
In practice, the system never stood a chance.
At the time, reporting could not be done directly on a mobile phone. Nokia’s 3120 was state of the art, and making an actual phone call was the killer feature, apart from being able to play Snake. That meant employees had to notice the coverage issue, remember exactly where they were, possibly write it down, and later sit at a computer to report it. The process was anything but effortless.
Despite this friction, something interesting happened initially. People genuinely wanted to help. In the first weeks, reports flooded in. Employees went out of their way to contribute.
But two things were missing.
First, the system did not align with natural behavior. It required delayed reporting, precise memory, and extra effort disconnected from the moment of experience.
Second, the workflow was not designed end to end. There was no feedback loop. Contributors never learned what happened with their reports, whether the data influenced network planning, or if improvements were even planned in their area.
The result was inevitable. Usage dropped sharply and quickly. Not because people were unwilling, but because the system demanded effort without fitting into daily behavior or returning visible value.
It was a technically sound solution that failed behaviorally.
A small experiment at home
Recently, I ran a much smaller experiment in my own household.
We are five adults living together: two parents and three young people. Coordination is constant. Groceries, dinner planning, who walks the dog, what we are running out of, etc.
For years, this coordination lived on Post-its on the refrigerator. It mostly worked, until it didn’t. Someone would use the last of something and forget to write it down. We regularly ran out of basic goods.
The system only worked if you were in the kitchen at exactly the right moment, or remembered to return later. That is not how attention works in real life—at least not in our family.
Curious about the current wave of AI-assisted development tools, I dusted off my old coding skills and converted every Post-it into a simple shared digital hub accessible from our phones: grocery list, meal planning, dog walking schedule, budget status, and shared reminders. Nothing too ambitious. Just a lightweight interface that mirrored what already existed.
Adding an item to the list no longer depends on being in the kitchen. Standing in the grocery store, we can simply check the app and see what we are missing. If someone realizes they used the last of something earlier in the day, they can add it whenever and wherever they are.
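For the curious, the whole thing is conceptually little more than a shared list. The article does not describe the actual stack, so the following is only a minimal sketch, assuming a TypeScript back end; every name in it (ListItem, addItem, the family members) is illustrative rather than taken from our real app.

```typescript
// Minimal sketch of a shared household list (hypothetical names and shapes).
// Illustrates how little structure a "digital Post-it" actually needs.
import { randomUUID } from "node:crypto";

interface ListItem {
  id: string;
  text: string;    // e.g. "oat milk"
  addedBy: string; // whoever noticed we ran out
  addedAt: Date;
  done: boolean;   // checked off while shopping
}

// In a real app this would sit behind a small shared API; here it is in memory.
const items: ListItem[] = [];

// Add an item the moment someone notices it is missing, wherever they are.
function addItem(text: string, addedBy: string): ListItem {
  const item: ListItem = { id: randomUUID(), text, addedBy, addedAt: new Date(), done: false };
  items.push(item);
  return item;
}

// Check an item off while standing in the store.
function markDone(id: string): void {
  const item = items.find((i) => i.id === id);
  if (item) item.done = true;
}

// What we are currently missing.
function openItems(): ListItem[] {
  return items.filter((i) => !i.done);
}

// Example usage with made-up names:
const milk = addItem("oat milk", "Anna");
addItem("dog treats", "Jonas");
markDone(milk.id);
console.log(openItems().map((i) => i.text)); // ["dog treats"]
```

The point of the sketch is not the code itself, but how small the system can be when it simply mirrors a behavior people already have.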
Adoption required no conversation. No training. No enforcement. No explanation.
Now we rarely run out of basic goods. Not because our family suddenly became more structured, but because the system stopped competing with our behavior.
The pattern behind both examples
These two cases could not be more different in scale. One involved thousands of employees and critical infrastructure. The other is a small household experiment. Yet one failed and the other succeeded for the same underlying reason.
Systems tend to break when they depend on perfect timing or flawless discipline. When they assume people will remember to do something later. When they add effort without providing immediate, tangible benefit. And when they fail to close the loop by showing users that their efforts actually matter.
In contrast, systems tend to thrive when they embed themselves into existing routines. When they appear at moments where attention already exists, rather than demanding new ones. When they reduce cognitive and physical friction instead of adding to it. And when they support habits people already have, rather than asking them to abandon those habits in favor of new ones.
The leadership question
With the ability to experiment faster than ever, the key leadership question is no longer:
“Can we build this?”
or “Can AI build it for us?”
It is:
“Are we clear about how this system supports existing behavior, or which structural changes are required to make the desired new behavior feel natural?”
If a system fits how people already work, adoption often happens quietly. If it doesn’t, leaders must be explicit about what needs to change: roles, workflows, incentives, decision rights, or feedback loops.
Training and cultural reinforcement can accelerate that shift. But when neither behavioral fit nor structural change is addressed, systems tend to fail.
I have seen this pattern play out in large, well-funded initiatives.
And I have seen the opposite succeed in much smaller settings.
Final thoughts
On a more personal note, we are genuinely happy with our new little digital family basecamp (image below). It has quietly removed everyday friction from our household, without anyone having to think much about it. That is probably the highest compliment you can give a system.
And for me, it was also unexpectedly satisfying to return to building something, to experiment, observe real behavior, learn, and adjust. Even if AI did most of the heavy lifting, the experience reminded me why I fell in love with development in the first place.
A snapshot of our web app.
Image credits: ChatGPT and me.