A few days ago I accepted an invitation from a college classmate to talk with his CS students about AI, so the original draft is in English. Sorry, sister Xidao; do you think this is worth discussing?

The Real Shift in AI Isn’t Coding—It’s How We Think

 

Now, let’s all hold up three fingers… wave them across your face… and say hello, everyone.

Don’t worry, Professor Chang—I’m not crazy… yet.

But why am I asking you to do this?

 


 

The other day, one of my coworkers told me about a video he came across. In the video, the person was doing all kinds of things—but there was one thing they wouldn’t do.

They wouldn’t wave three fingers across their face.

Turns out… It was an AI-generated video.

 


 

This is actually a known trick to spot deepfakes.

But—don’t get too comfortable thinking you’ve found a reliable test.

Because things are changing very quickly in the AI world.

 


 

Just last Friday, I was at an AI Summit at MIT.

One of the panelists, a professor, was explaining how the human brain works. He said, “Humans can take what we learn and turn it into graphs. AI can’t do that.”

And the moderator interrupted him and said,

“No, professor—that’s old news. AI can do that now.”

The professor was surprised. He asked, “Since when?”

The moderator said, “About two weeks ago.”

Professor Chang and I studied computer science together in college. Back then, the language we used in our AI class was LISP. You’ve probably never heard of it—and that’s okay. It’s not very relevant anymore.

These days… AI speaks English.

After college, I worked at a textile company maintaining their inventory system. My hometown is in central Taiwan, where summer temperatures hit 90 to 100 degrees Fahrenheit.

And I had a lot of friends in the company… because my office housed the mainframe stacks, running COBOL—and it was the only air-conditioned room.

After about a year, I left and came to Boston for my master’s in computer science.

After grad school, I stayed in Boston and have been coding ever since. My first job was as a junior programmer working with C++.

Then Java came along.

And as you probably know—software engineers tend to have… strong opinions.

Back then, at least in my circle, we believed in C++.

Java? That was for people who couldn’t manage memory.

Of course… It took a few years, but Java became dominant.

 


 

I’ve been in this field for decades, and I’ve seen a lot of “best practices” come and go.

Usually, it takes years for people to adopt something new. Part of that is inertia—we get comfortable. Once we believe something works, it’s hard to change.

But what’s interesting is:

That cycle is getting shorter.

 


 

Take AI.

ChatGPT was released on November 30, 2022—and within days, millions of people were using it.

At the time, I’ll be honest—I didn’t pay much attention.

To me, it felt like… a shiny new toy.

It wasn’t until the second half of 2025 that I really started experimenting—with tools like Cursor, Copilot, and agents for writing tests.

And honestly? It was… okay.

 


 

But then things changed—very quickly.

Since January 2026, especially after Davos (the World Economic Forum), it felt like overnight everyone started talking about Anthropic.

We started using Claude Code on March 10th.

I remember that date very clearly.

Vibe Coding

That was the day I started taking vibe coding seriously.

In other words, I began spending a lot more time thinking carefully about my prompts—describing exactly what I wanted, how it should behave, and what the output should look like.

 


 

And here’s something funny:

When Claude processes your prompt… it costs tokens.

By the end of March, I was checking my token usage every day.

If you’re not familiar with tokens—it’s a bit like an amusement park.

You pay tokens… to go on rides.

And honestly? That’s exactly how it feels.

I’m spending tokens… to take my rides with Claude Code.
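To make the ride-ticket analogy a bit more concrete: token counts scale with prompt length. Here is a tiny sketch using the common "roughly four characters per English token" rule of thumb; that heuristic is my assumption for illustration, not how Claude's tokenizer actually works.

```python
def estimate_tokens(text: str) -> int:
    """Rough rule of thumb: about 4 characters per English token.
    Real tokenizers (BPE-based) differ; this is only a ballpark."""
    return max(1, round(len(text) / 4))

# Short rides cost little; long rides cost more.
print(estimate_tokens("Hello, Claude"))           # a short prompt
print(estimate_tokens("Please refactor " * 50))   # a long prompt
```

The point is simply that every word you send, and every word the model sends back, is a ticket spent.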

 


 

So what does that actually look like in my day-to-day work?

We now use tools like Claude to speed up how we design workflows.

Before AI, when we got new feature requirements from a product manager, we’d gather around a whiteboard and break down the requirements.

For example:

“When the user finishes action A, we want to pop up a dialog to perform X.”

Simple, right? But how do we tell machines to do that?

We would translate that into detailed specs:

  • How do we detect action A?

  • Where does the dialog appear?

  • What size is it?

  • What does it look like?

And that’s just a tiny piece.

Turning product requirements into solid technical specs takes time—and it’s collaborative. We debate, challenge each other, and try to find edge cases.
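To show what "turning a requirement into a spec" can look like on paper, here is a minimal sketch of the dialog example as structured data. Every field name and value is hypothetical, invented for illustration rather than taken from a real project.

```python
from dataclasses import dataclass, field

@dataclass
class DialogSpec:
    """One small piece of the whiteboard breakdown, written down precisely."""
    trigger: str                      # how we detect action A
    anchor: str                       # where the dialog appears
    size: tuple[int, int]             # width x height, in pixels
    title: str
    edge_cases: list[str] = field(default_factory=list)

spec = DialogSpec(
    trigger="user completes action A (form submitted successfully)",
    anchor="centered over the main content area",
    size=(400, 240),
    title="Perform X?",
    edge_cases=["user navigates away mid-action", "action A fails"],
)
print(spec.title, spec.size)
```

Writing the spec down as data like this makes the debates concrete: every field is something the team (or, today, an agent) must fill in or challenge.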

 


 

Now?

We can work on multiple projects in parallel.

How?

Each of us has a team of Claude agents living inside our IDEs.

 


 

The process is actually the same at its core—but now we do it through agents.

So yes—I write more prompts and less code.

But that doesn’t mean I’m doing less engineering.

It just means the work starts differently.

Instead of jumping straight into code, I:

  • describe the problem

  • define the expected behavior

  • explain where it fits in the system

Of course, I go back and forth with Claude until we reach a solid plan.

 


 

But here’s the key:

You can’t trust AI blindly.

It will get things wrong.

And this is really important to understand—because it explains both the power and the risk.

Generative AI is fundamentally based on probability.

 


 

It’s not “thinking” the way we do.

It predicts the next word based on patterns it has learned from massive amounts of data—and then chooses from the words that are most likely to come next.

Then it does that again.

And again.

And again.

So what feels like intelligence…

is actually a chain of very sophisticated guesses.
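That "chain of guesses" can be sketched in a few lines. The toy probability table below is invented data standing in for what a real model learns from its training corpus; real LLMs predict over vocabularies of tens of thousands of tokens, not seven words.

```python
import random

# Toy "learned" statistics: for each word, the likely next words and weights.
NEXT_WORD_PROBS = {
    "the": [("cat", 0.5), ("dog", 0.3), ("moon", 0.2)],
    "cat": [("sat", 0.6), ("ran", 0.4)],
    "dog": [("barked", 0.7), ("sat", 0.3)],
    "sat": [("down", 1.0)],
    "ran": [("away", 1.0)],
    "barked": [("loudly", 1.0)],
    "moon": [("rose", 1.0)],
}

def generate(start: str, max_words: int = 5, seed: int = 0) -> str:
    """Pick a likely next word, then do it again. And again. And again."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break  # no learned continuation; stop
        choices, weights = zip(*options)
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Nothing in this loop "knows" what a cat is. The output just follows the weights—which is exactly why it can be fluent and wrong at the same time.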

 


 

A simple way to think about it:

It’s like your phone’s autocomplete… but on steroids.

It doesn’t know the answer.

It just knows what answer is most likely.

 


 

And sometimes, that goes very wrong.

You may have heard about the Mata v. Avianca case in New York.

For those who haven’t heard of it: a lawyer used ChatGPT to write a legal brief—and it confidently cited cases.

The problem?

Those cases didn’t exist.

The judge fined the lawyers $5,000.

So the lesson is simple:

AI can sound very convincing—but you still have to verify everything.

How do you do that?

You ask the AI to explain its reasoning step by step: how did it reach that conclusion, and where did it get its sources?

I have a story that illustrates this well.

A friend of mine is a law professor who has published many papers. Recently, he was working on a new paper and needed examples and cases to support his point. He used an AI model to help search for them, and of course, the AI confidently found cases that seemed to support his theory.

But my friend, being a very experienced scholar, always asks for the source. And again, AI provided citations.

Here’s what set him apart: he cross-checked those citations with a different AI model.

And the source could not be found anywhere.

So he went back to the original model and asked, very directly, “I couldn’t verify the source. Did you make it up?”

And the AI admitted it—without shame.

I guarantee you, this will happen again and again.

The lesson is simple: always analyze, always verify.
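The professor's habit of cross-checking one model against another can be sketched as a small routine. Everything here is a stub: `ask_model` stands in for whatever real model APIs you call, and the canned answers merely simulate two models disagreeing about a made-up citation.

```python
def ask_model(model: str, question: str) -> str:
    """Stub for a real model API call; returns canned demo answers."""
    canned = {
        ("model-a", "Does case 'Smith v. Jones (1999)' exist?"): "Yes",
        ("model-b", "Does case 'Smith v. Jones (1999)' exist?"): "No",
    }
    return canned.get((model, question), "Unknown")

def cross_check(citation: str) -> str:
    """Ask two independent models the same question; flag any disagreement."""
    question = f"Does case {citation!r} exist?"
    answers = {m: ask_model(m, question) for m in ("model-a", "model-b")}
    if len(set(answers.values())) > 1:
        return f"DISAGREEMENT on {citation}: {answers} -- verify by hand."
    return f"Models agree on {citation}: {answers['model-a']}"

print(cross_check("Smith v. Jones (1999)"))
```

Agreement between two models still isn't proof, of course. The real source check is a human looking it up.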

 


 

So from my perspective, the biggest shift is this:

I used to spend my time writing code for machines

Now I spend a surprising amount of time explaining things to machines.

And that’s what prompt engineering is.

 


 

If you’ve ever used ChatGPT and thought:

“Why is this answer so bad?”

Take another look at your prompt and ask yourself: Why doesn’t the model understand my question? What’s missing? How can I explain it more clearly?

 


 

Prompt engineering is really about:

How do I give instructions so the AI actually does what I mean?

And it turns out—it’s not that different from writing a spec.

I’ve heard people start to call this spec-driven development (SDD).

 


 

What makes a good prompt

Over time, I’ve found good prompts usually include:

  • a clear goal

  • enough context

  • constraints (important: you don’t want an agent deleting your data without permission)

  • and expected output format
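Those four ingredients can be captured in a simple template. The wording and field values below are illustrative—my own habit plus a hypothetical project layout (`src/ui/`), not a standard format:

```python
# A minimal prompt skeleton: goal, context, constraints, output format.
PROMPT_TEMPLATE = """\
Goal: {goal}

Context:
{context}

Constraints:
{constraints}

Expected output format:
{output_format}
"""

prompt = PROMPT_TEMPLATE.format(
    goal="Pop up a dialog to perform X when the user finishes action A.",
    context="- UI code lives under src/ui/; follow the existing Dialog pattern.",
    constraints="- Do not modify or delete any data without asking first.",
    output_format="A short plan first, then the code changes as a diff.",
)
print(prompt)
```

The exact headings don't matter; what matters is that none of the four sections is empty when you hit Enter.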

 


 

If there’s one thing that really matters—it’s context.

AI only knows what you tell it right now.

So, if you leave things out, it fills in the blanks… sometimes very creatively.

So in real work, I include:

  • where to look for the existing pattern

  • which files need to be changed

  • and system details

Without detailed context, AI guesses and hallucinates.

 


 

Why fundamentals matter

Prompt engineering depends on strong fundamentals.

Because you need to:

  • understand the problem

  • recognize good vs bad solutions

  • catch subtle errors

If you don’t… the AI might sound right—but be completely wrong.

So AI doesn’t replace engineering skills.

It raises the bar.

 


 

Closing

So the biggest change isn’t just productivity.

It’s really how we think about software development. In a team, we collaborate, but each person owns a piece of the work. Now, I have agents owning those pieces.

I feel like a conductor of an orchestra—I set the direction, and the agents play their parts.

 


 

There are so many free resources today—you can learn almost anything.

I actually used ChatGPT to help map out my own learning path—and even track progress.

So yes—use AI.

But don’t abuse it.

 


 

What do I mean by “abuse it”?

Don’t just read summaries.

Summaries are like fast food.

They’re quick—but not very nutritious.

If you really want to learn:

slow down, grab a coffee, and actually read.

 


 

With that said,

For quick learning, I like a couple of YouTube channels:

  • 3Blue1Brown (great for fundamentals)

  • IBM Technology (more practical insights)

 


 

Speaking of judgment,

You can’t make good decisions without fundamentals.

And just as important—you need to know what you don’t know.

Critical thinking

In an AI world, everything comes down to:

critical thinking

You have to:

  • question results

  • analyze deeply

  • validate constantly

And that’s not something you learn on a weekend.

Your diploma does not mark the end of learning.

It’s the beginning of a lifelong habit of learning.

 


 

We’re lucky—there’s so much knowledge already out there.

Not just online—but in books that have been shaping how people think for decades… sometimes centuries.

 


 

If you’re wondering where to start, here are a few books that really changed how I think.

First—Daniel Kahneman’s Thinking, Fast and Slow.

Kahneman's book helps you notice when your brain is coasting… so you can actually slow down and think.

So instead of just trusting your gut, you start asking yourself: “Am I really thinking this through… or just reacting?”

 


 

Then there’s Yuval Noah Harari—Sapiens and Nexus.

In Sapiens,

Harari's point is that humans have always been running on shared stories. And the people who get to write those stories have enormous power.

Sapiens makes you question the stories we believe.

In Nexus,

Harari points out “more information doesn’t mean more truth.”

What actually happened is that whoever controlled the flow of information gained power.

Reading this book makes you question who’s telling you those stories, and why.

And finally, Adam Grant’s Think Again.

This one is about rethinking—how to question your own assumptions.

If Kahneman shows you how your thinking can go wrong,

Grant shows you what to do about it.

Happy Reading!

 

Replies:

  • I read it quickly. Hmm, interesting and worth discussing! Coincidentally, I just finished season 3 of the TV series "The Capture", whose plot revolves around AI-controlled deepfakes. It looks far-fetched, yet it lines up closely with your post. I think the IT industry stands on the front line of creating and applying AI, and of "fighting" it; we bear the brunt. So yes, it deserves a serious discussion: what will this technological revolution, unlike any before it, bring to IT, to society, and to people? I myself (apart from some DB work when I first started) have always done real-time software (mostly in C :-), and I suspect real-time work may be affected by AI a little less than application software? (I have already left the rat race.) (最西边的岛上, 04/18/2026)

  • We bear the brunt, but we won't be the last one… (碼農學寫字, 04/18/2026)

  • Yes! I feel IT is in a deeply contradictory position: it created AI but cannot control it, and may end up letting itself and the "world" be swallowed by the demon of its own making… But the genie is out of the bottle now… Sigh. (最西边的岛上, 04/18/2026)

  • P.S.: While we were waiting for coffee, a stranger walked over to say she was about to leave and invited us to take their sofa. That kind of real warmth between people is something AI cannot replace! See: (最西边的岛上, 04/18/2026)

  • Interestingly, "emotions" was one of the questions posed to one of the panelists, and the answer was "does AI need emotion?" I think this is a philosophical question or statement. Have… (碼農學寫字, 04/18/2026)

  • Hmm. (I remember 移花 mentioned on the humor board that scammers were using AI on WeChat to swindle money; link below. Very scary.) It is hard to control whose hands a tool ends up in… (最西边的岛上, 04/18/2026)

  • I'm out at the moment; will reply properly once I'm back. (碼農學寫字, 04/18/2026)

  • The Chinese link is here. (碼農學寫字, 04/18/2026)

  • Back home now! Had a quick peek at your blog; to be honest, the English version is much easier to understand than GPT's Chinese translation ~~ A hilarious true joke about GPT is below for your laughter: (最西边的岛上, 04/18/2026)

  • Friends at big Silicon Valley companies tell me they are now required to use AI in their programming work; it has become one of the KPIs for measuring performance. (JoyAnna., 04/19/2026)

  • That's how it is. Still, these winds blow in gusts… :) (碼農學寫字, 04/19/2026)
