ai forces preference

written april 2026 · draft

I was visiting my dad recently, and before I left we started talking about AI. He's been going down the rabbit hole: online courses, videos, trying to stay current, because he's worried about being replaced if he doesn't keep up. I don't think he's wrong to pay attention, but I think the concern is pointed at the wrong part of the system.

Refusing to engage with AI is like refusing to use the internet in 1999: you can take that stance, but you'll get left behind. My dad is a sharp guy, industrial engineer by training, went back for his MBA, spent time at GE and IBM in global operations. He's dealt with big systems and real complexity his whole career, but he's never been deep in software the way I have, so I understand where the anxiety lands. Here is how I actually think about it.

Any system has the same shape: inputs, a process, outputs, and results. Cooking makes this easy to see.

inputs (🥩 beef, 🧄 garlic, 🧅 onion, 🫒 olive oil) → process (👨‍🍳 the cook) → output (🍽️ the meal) → result (😋 satisfaction)

Engineering works the same way: requirements and context are the inputs, the engineer is the process, working software is the output, and the value that software creates in the world is the result.

inputs (📋 requirements, 💬 context, 🗄️ data) → process (💻 the engineer) → output (⚙️ software) → result (📈 business value)

For most of history, the process was where all the value in the chain lived.

Leetcode, competitive interviews, prestigious programs, PhDs: people spent careers mastering the craft of execution, and that made sense. When the process was rare and hard to replicate, the person who mastered it became the most important part of the system.

inputs → ★ process (where value lived) → outputs → results

This is why Gordon Ramsay is famous for his Beef Wellington, and why the Great British Baking Show exists: people find genuine mastery of craft compelling. When the process is rare, the person who owns it becomes the bottleneck, the most valuable node in the chain.

It's also why engineering and management operated as two separate worlds: engineers owned the process, leaders defined the outputs, and the split made sense because both required full attention. They stayed in their lanes.

engineering: inputs → ★ process (owned here) | the divide | management: outputs (defined here) → results

AI can code now, not just autocomplete: it can design systems, write tests, debug, review pull requests, and do in minutes what used to take a team of ten engineers days. The process stopped being scarce.

★ inputs (where value lives now) → AI (does the process) → outputs → results

When people ask whether AI will take jobs, they're asking about the process, and for a lot of process-focused roles the answer is yes: the drive-through operator, the junior coder, anyone whose job was to execute a well-defined task reliably at scale. That work is going away, not because AI is magic, but because the process stopped being the hard part. What's scarce now is not the ability to build, it's knowing what to build and why: what context do you give it, what makes a good output versus a bad one, what are you actually trying to accomplish? That's judgment, taste, preference.

Worth saying: you should still verify what AI gives you. It gets things wrong, and the outputs are only as good as the context you provide. But models are improving fast, and new ones keep coming. The gap between "impressive" and "reliable" is closing with every release.

The Lottery

Lottery winners are a useful extreme. Someone spends a lifetime inside a system where work equals earning: you show up, you get paid, the harder you work the more you make. That logic becomes identity. Then overnight the process is gone, and a lot of them fall apart, not because they're bad people but because they've never had to think clearly about what they actually wanted. Without the process to orient around, they don't know what they were working toward. AI is doing this to whole categories of workers right now, just slower.

The Fitness Example

If you woke up tomorrow in perfect shape, your wardrobe wouldn't fit, your diet would have to change, and your daily rituals would be gone. The habits, the structure, the discipline are what give daily life its texture. When the output shows up without the process, it can feel wrong even if it's exactly what you said you wanted. The disruption isn't the result, it's the removal of the process you'd built your days around.

The Vegan Room

Gordon Ramsay can master Beef Wellington, but present it to a room full of vegans and the craft is irrelevant. Nobody buys it. I don't like Beef Wellington either. The process being excellent doesn't save you if no one wants what it produces. A lot of people spent careers perfecting execution without ever asking whether anyone wanted what they were building, and now that the process is going away, that question is unavoidable.

Take my friend LJ. He genuinely enjoys working out: not for the attention, not because of what it signals, but because he loves it. I've watched him evolve and grow his social media presence, and now he enjoys giving back, helping kids, sharing what he's built. When LJ came to visit me for a few days, I watched his process up close and was surprised by how much goes into being in that kind of shape. The discipline, the consistency, the detail. It wasn't a performance, it was just who he is.

That's different from a lot of men who say they want a six-pack. Go one level deeper and it's usually not really about the body: it's about what the body is supposed to get them, the attention, the status, the feeling of meeting a standard they've been told to care about. The Ozempic industry and the plastic surgery industry exist partly for this reason: beauty standards are compelling enough to make people want things they can't fully explain.

If the beauty standard shifted tomorrow and the ideal male body became something completely different, a lot of those same men would shift with it, not because they're dishonest, but because the want was never really theirs: it was the system's want, and they adopted it. Change the standard, change the preference. LJ wouldn't change. I know people who bake and give everything away and don't need anyone to eat it: they might throw it out or leave it on the porch, and whether someone wants it doesn't change whether they bake. The process and the output are genuinely theirs, and that's the difference.

Understanding why you want the output, whether it's really yours or just a response to external pressure, determines whether you'll know what to do when the process disappears or the standard changes.

The same dynamic plays out at scale in the food system. The actual ingredients in a lot of American food are bad, and nobody would knowingly choose pesticides or mechanically separated chicken if they understood what it was before it became a nugget. But you can't just present bad inputs and expect people to prefer them, so you have to make people prefer them anyway.

You do that partly through price: a quality burger, real beef, good sourcing, might run fifteen or eighteen dollars, while a McDonald's version is a dollar. Once you make the gap large enough, people start asking themselves whether the difference is really worth it, and once they start rationalizing the cheap option, you've shaped their preference. They've started justifying an input they wouldn't have chosen consciously if they understood it fully.

Then you make them okay with the outputs: I'll be fine, I have health insurance, life is short anyway. People arrive at reasons to accept results they didn't really choose, and once you've shaped both the inputs and the acceptable outputs, the system runs itself. I'm not making claims about intent here, but if you map out the incentives, if staying sick is more profitable than getting well, that's what the system looks like.

not a conspiracy theorist

With AI doing the process, the question becomes: how good are your inputs? Bad data in, bad outputs out, and if the context is shallow the output reflects it. The process can be technically excellent and it won't matter if what you started with isn't right. This is true for cooking, engineering, fitness, any of it: you have to know what you're actually working with before you can expect the output to make sense.

Beyond the inputs, you have to know what you actually want from the results, not what you've been told to want, not the output that meets some external standard, but the one that makes sense for where you are and what you're actually trying to do, and why.

There's a concrete version of this already playing out in data infrastructure. The semantic layer is a place where teams agree on what they want the outputs to mean before anyone builds anything: what does "revenue" mean, what counts as an active user. Before it existed, every team ran their own query and got a different answer. It's a shared location for common wants. You define what good looks like once, and every downstream process answers the same question, whether that's a human query, an AI agent, or a dashboard. That's what the shift to AI requires more broadly. Not better prompting, but clearer definitions of what you're actually after.
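To make that concrete, here's a minimal sketch of the idea in Python. Everything in it is hypothetical: the Metric class, the to_sql helper, and the orders and events tables are invented for illustration, and real semantic layers (dbt's MetricFlow, Cube, LookML) have their own definition formats. But the shape is the same: the definition of "what we want" lives in one place, and every consumer compiles the same answer from it.

```python
# A minimal sketch of the semantic-layer idea: define "revenue" and
# "active users" once, and let every downstream consumer compile the
# same definition. All names here are hypothetical, for illustration.

from dataclasses import dataclass


@dataclass(frozen=True)
class Metric:
    name: str          # the shared business name, e.g. "revenue"
    table: str         # source table the metric is computed from
    expression: str    # the agreed-upon aggregation
    filters: str = ""  # what counts and what doesn't, decided once

    def to_sql(self) -> str:
        """Compile the single agreed definition into SQL."""
        where = f" WHERE {self.filters}" if self.filters else ""
        return f"SELECT {self.expression} AS {self.name} FROM {self.table}{where}"


# The definitions live in one place. "What counts as revenue?" and
# "what counts as an active user?" are answered here, not per-team.
METRICS = {
    "revenue": Metric(
        name="revenue",
        table="orders",
        expression="SUM(amount)",
        filters="status = 'completed'",  # refunds excluded once, for everyone
    ),
    "active_users": Metric(
        name="active_users",
        table="events",
        expression="COUNT(DISTINCT user_id)",
        filters="event_time >= CURRENT_DATE - INTERVAL '30' DAY",
    ),
}

# Every consumer, whether a dashboard, an analyst, or an AI agent,
# asks by name and gets the same SQL, so they get the same answer.
print(METRICS["revenue"].to_sql())
# SELECT SUM(amount) AS revenue FROM orders WHERE status = 'completed'
```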

What I think happens from here: the hard line between engineering decisions and business decisions starts to go away. It had to exist when the process was the hard part and you needed specialists who owned the craft, but with that gone, decision-making collapses into one thing: someone who understands the inputs, knows what outputs they're after, and can work end to end with AI to get there. The divide between manager and engineer gets smaller, more people have to think all the way through the system, and junior engineers ten years from now will ship what senior engineers ship today. The way we interact with code changes more than the output does.

The jobs going away aren't the end of the story, they're the beginning of a harder question: what do you actually want to build, make, or do? Not what you've been trained to produce, but what you'd reach for if the process wasn't the obstacle. Psychology matters more than people realize right now: understanding what someone wants, why they want it, and what they'll do when they get it is the actual hard work going forward.

What I'd encourage: figure out your preferences first, and then use AI to do the things you couldn't do before. The open source ecosystem makes it easier than it's ever been to just try something without permission or infrastructure.

Preference is a muscle: you get better at knowing what you want by practicing wanting things deliberately, understanding why you want them, and working backwards from there. The people who figure out the output first will be a step ahead of the people still perfecting a process that AI is about to do for them.

have thoughts about ai? let's talk!

Semantic Layer

A semantic layer sits between your data and your tools, translating business definitions into consistent, reusable logic. Any downstream consumer — a query, a dashboard, an AI agent — gets the same answer to the same question.