As AI keeps accelerating, it’s not just “better chatbots” anymore. GPT-3.5 and GPT-4 opened the door, and now we’re watching people wrap those models in autonomous agents that can plan, execute, and iterate. That is a big deal.
It will not be long before we see easily accessible agents that can browse the web, access files, see images, hear audio, generate and deliver audio, execute code, and iterate on their own decisions in real time. We'll also see in-app agents with millions of users forming emotional bonds with chatbots; some of those users will be children. That combination will not end well.
This is the part where we should slow down for a second and ask the annoying questions. Who gets harmed? Who gets watched? Who is responsible when the thing "just did what it was told" and someone gets burned?
Below is how I think about the ethics of AI and the risks of autonomous agents, plus some practical ways to push for responsible use.
