
After reading the New York Times article “What OpenAI Did When ChatGPT Users Lost Touch With Reality,” I had a very simple thought: AI is powerful, but so are plenty of things we use every day without letting them descend into chaos. I sent this idea to OpenAI, and I’m sharing it here because it reflects how I teach and think about AI in real life.

A simple, human way to think about AI safety

Think about a kitchen knife. It’s designed to chop vegetables, not cause mayhem. But if someone keeps waving it around recklessly, you take it away for a bit. Not because knives are evil or because you’re trying to control anyone. Just because you’re trying to avoid unnecessary drama.

AI is the same way.

If someone keeps trying to use ChatGPT in ways that could harm themselves or others, maybe the system shouldn’t stay fully unlocked. A little “pause button” or supervised mode could go a long way. Not as punishment, just basic grown-up supervision until the behavior changes.

This isn’t about limiting ideas or creativity. It’s the opposite. Guardrails keep powerful tools from turning into the wrong kind of adventure. Once people demonstrate they can use AI safely, give them the whole toolbox back.

That’s how I frame it in my classes and in my work: curiosity is great, exploration is great, but safety keeps everything from turning into a mess.

If you want to talk about this or share your thoughts, feel free to reach out through my LinkedIn page.