When Revenue Stops Measuring Value

Why Our Most Successful Systems Now Produce Harm

There is a quiet, persistent sensation many people share, though few articulate it clearly.

It is the feeling that the things making the most money in our society increasingly feel bad to use.

Not in an obvious way. Not as open cruelty or visible collapse. But as a steady erosion: a background anxiety after using certain platforms, a sense of distortion when consuming news, a subtle depletion after engaging with systems designed to be “successful.” We scroll more, yet feel less informed. We are more connected, yet more isolated. Services become more efficient, yet life feels thinner.

This isn’t nostalgia. And it isn’t simply a complaint about modernity.

It points to something structural — a reversal that has occurred quietly enough that we still struggle to name it.

Revenue as a Proxy for Value

For most of modern history, revenue functioned as a rough proxy for value.

This was never perfect, but it worked well enough. If a product or service earned money, it usually meant someone found it useful. If it caused harm, that harm tended to be visible and local. Reputation mattered. Bad actors were constrained by proximity, law, and social memory.

Money flowed toward competence because incompetence was costly. You could not easily profit by degrading your customers without them noticing, leaving, or warning others.

There was an implicit contract: revenue followed usefulness because consequences were difficult to hide.

That assumption shaped everything — business, media, politics, even culture. We learned to read financial success as evidence of contribution.

But that assumption no longer reliably holds.

The Scale Break

What changed was not human nature. It was scale.

As systems grew larger, faster, and more abstract, the relationship between action and consequence began to stretch. Harm no longer appeared immediately. It no longer appeared locally. And increasingly, it no longer appeared clearly at all.

At sufficient scale, damage becomes distributed across millions of people, delayed across months or years, and absorbed quietly into health systems, families, and individual psychology. The source of the harm becomes hard to trace, even harder to prove, and easiest to deny.

At that point, revenue stops reflecting whether something is good for people and starts reflecting whether something can extract attention, foster dependency, or shape behaviour.

This is the moment the proxy breaks.

Revenue stops measuring value.
It starts measuring extractability.

The Inversion

Early in a system’s life, profit and benefit tend to align. Helping people works. Solving real problems works. Revenue follows service.

But as systems optimise for scale, a bend appears.

Beyond that bend, the most profitable behaviours are no longer the most beneficial ones. They are the most efficient ones — efficient at capturing time, emotion, habit, or belief. At this stage, harm does not need to be intentional to be profitable. It only needs to be tolerated.

This is the inversion point.

Not where people become malicious, but where optimisation quietly rewards outcomes that degrade human wellbeing because those outcomes outperform healthier alternatives on the chosen metric.

This is why the language of “evil” becomes tempting here — not as a moral accusation, but as a description of harm that emerges without hatred, cruelty, or even awareness.

Evil, in this sense, is not motivation.
It is output.

Why Harm Wins

Once the inversion point is crossed, certain patterns repeat with remarkable consistency.

Anxious users engage more than calm ones.
Outraged audiences share more than informed ones.
Dependent customers are more profitable than satisfied ones.
Fear mobilises faster than trust.
Identity binds tighter than truth.

These are not secrets. They are measurable facts.

Systems tuned to maximise revenue will naturally drift toward the most responsive human vulnerabilities, because those vulnerabilities convert more reliably into attention, time, and money than wellbeing does.

Importantly, this does not require anyone to decide to harm others. The system simply follows the gradient. If a harmful outcome produces more revenue than a healthy one, the system will select it — just as water flows downhill.

This is optimisation without a conscience.
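The gradient-following dynamic described above can be made concrete with a toy model. This is a minimal sketch, not a claim about any real system: the variant names, numbers, and the "wellbeing" field are invented for illustration. The point is structural: the optimiser sees only the measured signal, so whichever variant scores higher on that signal wins, and wellbeing never enters the selection at all.

```python
# Toy model of metric-driven selection. The optimiser sees only revenue;
# wellbeing exists but is never part of the objective.

from dataclasses import dataclass

@dataclass
class Variant:
    name: str
    revenue: float    # the measured signal the system optimises
    wellbeing: float  # a real consequence the system never observes

# Hypothetical variants with made-up numbers, for illustration only.
variants = [
    Variant("calm feed",    revenue=1.0, wellbeing=+1.0),
    Variant("outrage feed", revenue=1.6, wellbeing=-1.0),
]

# The entire "decision": maximise the measured signal.
chosen = max(variants, key=lambda v: v.revenue)

print(chosen.name)       # the harmful variant wins on the metric
print(chosen.wellbeing)  # its negative wellbeing is invisible to the selection
```

Nothing in the selection step is malicious; `wellbeing` is simply absent from the objective, which is the essay's point about harm being selected without anyone choosing it.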

Why Good Intentions Don’t Save Us

At this point, the conversation often turns moral. Surely better leadership could fix this. Surely ethical guidelines, content moderation, or regulation could realign incentives.

Sometimes these measures help at the margins. Often they fail.

They fail because the underlying signal remains unchanged.

Individuals inside large systems cannot consistently act against the metric that determines survival. If restraint reduces revenue, restraint is punished. If care slows growth, care is sidelined. If truth reduces engagement, truth becomes optional.

This is not because people are weak or corrupt. It is because systems reward what they measure.

Blaming individuals for this dynamic is emotionally satisfying but analytically useless. The problem persists even when leadership rotates, policies change, or intentions improve.

The signal keeps winning.

Why the Pattern Repeats Everywhere

One of the most unsettling aspects of this inversion is how reliably it appears across domains.

Technology platforms.
News media.
Politics.
Health services.
Education.
Even parts of the non-profit sector.

Different missions. Different cultures. Same outcome.

The reason is simple: the inversion does not care what the system claims to value. It only cares how success is quantified.

Any system that treats revenue — or a close proxy like engagement, growth, or market share — as its dominant measure of success will eventually face the same pressure. If scale outpaces accountability, harm will outperform health.

This is not a failure of capitalism, socialism, or any ideology in particular. It is a failure mode of optimisation itself when feedback becomes abstract and consequences are externalised.

The Uncomfortable Implication

The most uncomfortable implication of this dynamic is not that some systems are “bad.”

It is that some things cannot be safely optimised at scale.

Certain forms of value require friction. They require slowness, presence, and limits. They resist being measured cleanly. They degrade when extracted too efficiently.

When revenue becomes the universal yardstick, those forms of value are quietly excluded — not because anyone opposes them, but because they do not perform well on the metric.

The result is a civilisation that becomes extremely good at producing revenue and increasingly poor at producing wellbeing, wisdom, or trust.

Where This Leaves Us

This essay does not offer a solution. That is deliberate.

The first task is not redesign, but recognition.

We cannot correct what we continue to mismeasure. We cannot fix systems whose harm we still mistake for success.

The question is no longer whether revenue matters — it does. The question is whether it can continue to function as our primary signal of value in systems powerful enough to reshape psychology, culture, and society itself.

When revenue stops measuring value, success becomes dangerous.

And the danger is not dramatic. It is quiet, cumulative, and profitable.

Which is why it has been so easy to miss.