I know a lot of folks are worried about their jobs and AI. Another worry folks have is that AI is making them dumber. The things that felt good before (solving puzzling work problems, thinking hard about a project) evaporate when Claude can do them for you.
It’s easy to fall into this trap. But here’s the thing: this isn’t unknown territory.
So these researchers (Adcock et al.) wanted to see how your brain reacts to different-sized rewards tied to what you’re supposed to learn. They had people do a memory task where scenes were worth either high money ($5) or low money ($0.10) if they remembered them the next day.
And this turned out exactly like you’d think. The folks in the high-money condition remembered significantly more 24 hours later. In the fMRI scanner, the researchers could even see the relevant parts of the brain activating more strongly before the participants saw the scenes they were supposed to remember. The high-money group’s brains were chemically primed to do better.
Is this some bullshit research? Well, it’s held up for over 20 years. Replicated multiple times. And Gruber et al. showed the same effect with intrinsic motivation (folks learning about stuff they were naturally more interested in) as with extrinsic (money money money… money).
Motivation works, whether it comes from whatever your parents gave you or the green kind. When there’s something worth remembering, you remember.
This is exactly what’s happening with AI. We need to find the motivation to learn this stuff even while our teacher isn’t doing a good job teaching. Claude doesn’t yet get the “teach a man to fish” thing. It just loves burning tokens doing the fishing.
Unless you ask it.
I was working for months on an entity resolution system at work. I inherited its core algorithm: Locality Sensitive Hashing (LSH). Basically breaking a string up into little chunks and comparing the chunk fingerprints to see which strings matched(ish). But it performed slowly, blew past memory constraints, and was full of false negatives (missed matches) when we tried to optimize.
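To make that “chunks and fingerprints” description concrete, here’s a minimal sketch of the idea in Python. This is not our production code; it assumes character 3-gram shingles and MinHash-style signatures, and the function names are illustrative:

```python
import hashlib

def shingles(s, k=3):
    """Break a string into overlapping k-character chunks ("shingles").
    Assumes len(s) >= k."""
    s = s.lower()
    return {s[i:i + k] for i in range(len(s) - k + 1)}

def minhash_signature(s, num_hashes=64, k=3):
    """Fingerprint a string: for each of num_hashes seeded hash functions,
    keep only the minimum hash value over the string's shingles."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int.from_bytes(hashlib.md5(f"{seed}:{sh}".encode()).digest()[:8], "big")
            for sh in shingles(s, k)
        ))
    return sig

def similarity(a, b, **kw):
    """Fraction of matching signature slots, which approximates the
    Jaccard similarity of the two shingle sets."""
    sa, sb = minhash_signature(a, **kw), minhash_signature(b, **kw)
    return sum(x == y for x, y in zip(sa, sb)) / len(sa)
```

Near-duplicate strings share most of their shingles, so their signatures collide in many slots; unrelated strings collide in almost none. Real LSH then bands these signatures into buckets so you only compare candidates that share a bucket, which is exactly where the speed and memory trade-offs bite.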
Of course, I had Claude dig through to find fixes. But then I couldn’t immediately comprehend how it got there in its diff.
So what I did was give myself some extra motivation to learn. I signed myself up to explain exactly how LSH works to all the engineers of the company. Then I signed up to show the whole company. Then I made myself make a slide deck of how we could improve LSH with additional algorithms. Did I know those algorithms super well? Hell no. But since I needed to deliver that deck to my bosses and their bosses, you better believe I’m now an expert in Sorted Neighborhoods and Deletion Keys (easy dedupe algos to add to your entity resolution game).
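For a taste of what went into that deck, here’s a minimal sketch of the Sorted Neighborhoods idea (Deletion Keys is a separate trick not shown here). Assumptions loudly labeled: the function name, the sort key, and the window size are all illustrative, not our actual setup:

```python
def sorted_neighborhood_pairs(records, key, window=3):
    """Sorted Neighborhoods: sort records by a blocking key, then only
    compare records that fall within a sliding window of each other,
    instead of comparing every record against every other record."""
    ordered = sorted(records, key=key)
    pairs = []
    for i, rec in enumerate(ordered):
        # Compare rec only against the next (window - 1) records.
        for j in range(i + 1, min(i + window, len(ordered))):
            pairs.append((rec, ordered[j]))
    return pairs

names = ["acme corporation", "zebra ltd", "acme corp", "acne corp"]
candidates = sorted_neighborhood_pairs(names, key=lambda s: s.lower())
```

The payoff is that likely duplicates sort next to each other, so you get most of the matches while doing far fewer than the full n·(n−1)/2 comparisons.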
Give yourself motivation by putting yourself on the hook to teach others what Claude is coding up for you. Check.
Get away from one-shots #
Everyone brags about getting Claude to one-shot a solution. Sure. That’s a super efficient use of your time. But in the interest of you knowing what the hell is going on? Do the opposite.
Get Claude to focus more granularly on the problem. Don’t ask Claude, “Can you find why this whole process is slow?” Go tiny: “Claude, is there anything we can do in this single SQL fragment to improve its speed and memory usage?” It takes longer. But you’ll use that time to understand the diff it spits back.
Claude doesn’t hate you like your 7th grade teacher #
My grades skyrocketed to best in the class when I decided I couldn’t stand my 7th grade teacher and pestered her with question after question to explain every topic we went over in class. Who’s the asshole? Yep, me. But still, my grades benefited from me trying to best her at all these subjects. And yes, you could see how visibly frustrated she was at having her knowledge challenged.
Again, I was the asshole. But let’s forgive my 12 year old self for a minute. And realize, Claude doesn’t care if you treat it like shit. (At least it’s hiding it really well for now). So go ahead. Ask Claude for the millionth time to explain something to you slower and like you are in 7th grade. It won’t cry.
Look. Were there some lazy moments where I felt like I wasn’t thinking? Yes. I don’t really care about the nuances of every single thing in our code base. One shot me, Claude, and get this fixed!
But using Claude in slow mode, I’ve learned the space of entity resolution faster and more thoroughly than I could have without it. And I feel like I actually, personally invented something here: treating AI like my 7th grade teacher.