A simple, human way to think about AI safety
Picture a kitchen knife. It’s designed to chop vegetables, not cause mayhem. But if someone keeps waving it around recklessly, you take it away for a bit. Not because knives are evil or because you’re trying to control anyone. Just because you’re trying to avoid unnecessary drama.
AI is the same way.
If someone keeps trying to use ChatGPT in ways that could harm themselves or others, maybe the system shouldn’t remain fully unlocked. A little “pause button” or supervised mode could go a long way. Not as punishment. Just basic grown-up supervision until the behavior changes.
This isn’t about limiting ideas or creativity. It’s the opposite. Guardrails keep powerful tools from turning into the wrong kind of adventure. Once people demonstrate they can use AI safely, give them the whole toolbox back.
That’s how I think about it in my classes and in my work: curiosity is great, exploration is great, but safety keeps everything from turning into a mess.
If you want to talk about this or share your thoughts, feel free to reach out through my LinkedIn page.