Remember when Netflix was $8 a month and had everything?
Now you need Netflix, Hulu, Disney+, HBO Max, Amazon Prime, and Paramount+ just to watch what used to be on cable. And half of them have ads now unless you pay even more.
YouTube TV costs more than cable did.
Uber was supposed to kill expensive taxis. Now it costs the same or more, and drivers make less.
Airbnb promised affordable local experiences. Now it's more expensive than hotels once you add the cleaning fees, the service fees, and the 40-item chore checklist at checkout.
Every tech company from the 2010s followed the same playbook. Make something great. Price it low. Get everyone hooked. Then slowly make it worse while charging more.
And I'm terrified the automation industry is heading the same direction.
The Pattern Everyone Followed
Here's what happened with every 2010s tech company.
Phase 1: Make something genuinely better and cheaper. Netflix was actually better than cable. Uber was actually cheaper than taxis. Airbnb was actually more affordable than hotels.
They weren't lying. The value was real.
Phase 2: Grow at all costs. Raise venture capital. Undercut competitors. Lose money on every transaction but make it up in volume (or don't, just grow). Get everyone dependent on your platform.
Phase 3: Extract. Once you have market dominance and everyone's hooked, start increasing prices. Add fees. Remove features. Make the experience worse while charging more.
That's not a conspiracy theory. That's literally the business model. It's called "enshittification," and it's how most venture-backed tech companies operate.
They call it "finding product-market fit" and "optimizing unit economics." What they mean is "we got you hooked, now we're raising prices."
Why I Think About This Constantly
I run an automation consulting business. My pitch is basically: "I can save you 20 hours a week and thousands of dollars a year by automating manual work."
That's valuable. That's real. My clients see those results.
But I'm watching the broader automation and AI industry, and I see the same pattern starting.
AI tools that started free are adding paywalls. Automation platforms that were affordable are raising prices by 40%. "AI consultants" are popping up everywhere promising to solve everything with ChatGPT.
This is the gold rush phase, where everyone's trying to grab market share and growth, not thinking about sustainable value.
And I keep asking myself: how do I not become that?
The Temptation Is Real
Here's what I could do if I wanted to maximize short-term revenue.
Overpromise. Tell every client AI will solve all their problems. Build hype about capabilities that don't exist yet. Get them signed up before they realize the limitations.
Underbuild. Deliver the minimum viable solution. Get paid. Move to the next client. Don't worry about whether it actually works long-term.
Create dependency. Build systems they can't modify themselves. Make them reliant on me for every change. Recurring revenue forever.
Raise prices aggressively. Once they're dependent, increase prices. They're already invested; they'll pay.
That's the playbook. And it works in the short term. You'd make a lot of money quickly.
But you'd also be doing exactly what streaming services and rideshare companies did. Delivering real value initially, then extracting more while delivering less.
Why I'm Trying To Do The Opposite
I could charge way more than I do. I know what my competitors charge. I'm leaving money on the table.
But I'm optimizing for something different.
I want clients who come back. Not because they're locked in. Because the first thing I built actually worked and they trust me with the next thing.
I want to build systems they can modify themselves. Not because I'm scared of losing recurring revenue. Because they should own what they pay for.
I want to be honest about what AI can and can't do. Not because honesty is noble. Because overpromising destroys trust, and I need trust to do good work.
I want sustainable pricing. Not "low to get you hooked then raise it later." Just fair pricing that reflects the value delivered.
That's not altruism. That's self-interest aligned with client success.
What This Looks Like Practically
A potential client asked me last month to build an AI system to handle their customer service.
I could have said yes, taken the money, built something that technically worked but didn't actually solve their problem.
Instead I told them their customer service process was broken. AI would just make the broken process faster. They needed to fix the process first, then we could automate it.
That conversation cost me a sale. They didn't want to hear that fixing their process would take work.
But it was the honest answer. And lying to close a sale turns me into exactly what I'm trying not to become.
I had another client who wanted ongoing support after a project. Wanted me on retainer to make changes and updates.
I built the system in a way they could modify themselves. Walked them through how it worked. Gave them documentation. Told them to call me if they got stuck, but they probably wouldn't need to.
That cost me recurring revenue. But they own their system. They're not dependent on me. They'll call me when they have the next automation opportunity because they trust I won't lock them in.
That's the tradeoff. Less short-term revenue. More long-term relationships.
The Part That's Hard To Admit
I don't know if this model actually works long-term.
Maybe I'm being naive. Maybe the extraction model wins because that's how capitalism works. Maybe being ethical about consulting is just leaving money on the table for no reason.
I see other consultants charging 3x what I charge. Building dependency. Raising prices on existing clients. Making more money than me.
Maybe they're right and I'm wrong.
But I've watched what happened to tech companies that went the extraction route. They made a lot of money. They also created a lot of resentment.
People hate their streaming subscriptions now. They hate rideshare prices. They hate Airbnb fees. They use them because they're stuck, not because they want to.
That's not the relationship I want with clients.
What Ethical Automation Actually Looks Like
I think ethical automation consulting means a few specific things.
Be honest about what AI can and can't do. Don't overpromise. Don't sell capabilities that don't exist yet. Don't pretend you can automate things that require human judgment.
Build systems clients can own and modify. Don't create dependency. Make yourself optional, not essential.
Charge fair prices that reflect value delivered. Not "as much as they'll pay" or "whatever the market will bear." Actual value-based pricing where the client wins too.
Think long-term. Optimize for clients coming back because they want to, not because they're locked in.
Be transparent about costs and tradeoffs. If something's going to be expensive to maintain, say so upfront. If there's a simpler solution that costs less, recommend that even if it means less revenue.
This might sound idealistic. Maybe it is.
But I've seen what happens when you optimize purely for extraction. You make money. You also build a business you eventually hate, working with clients who resent you.
I'd rather build something sustainable.
The Test I'm Using
Here's my personal test for whether I'm heading the wrong direction.
If a client from three years ago called me today, would they be happy to hear from me? Or would they think "oh great, what's he trying to sell me now?"
If the answer is the second one, I've become what I'm trying not to become.
So far, clients from years ago still refer people to me. Still call when they have new automation needs. Still say good things about working together.
That tells me I'm probably on the right track.
But I have to keep checking. Because the temptation to extract instead of deliver is always there.
The tech companies from the 2010s didn't start out planning to enshittify their products. They just slowly optimized for growth and revenue until they got there.
I'm trying not to make the same mistake.
We'll see if it works.