May 22, 2023

The Illusion of Free Will and AI

Free will is commonly understood as the idea that we make our own choices. That notion is challenged, though, by evidence that our actions are shaped by factors beyond our control. Take design: choosing colors, shapes, and materials based on personal preference certainly feels like an exercise of free will.

But what informs those preferences? Research suggests they aren’t exactly free choices; they behave more like deeply ingrained habits shaped by our brains. That could explain why we gravitate toward the same color palette or font over and over because it “feels right” (for me, it’s Gordita; I keep using that font on everything, lol).

The concept of free will becomes more complicated when considering scientific research suggesting that our brains make a decision up to ten seconds before we’re aware of it. Yeah, I’m not kidding. This implies that we might not be as in control of our actions as we believe. It’s like our brain is driving the car, and we’re just along for the ride.

This brings us to an intriguing question: how different are we from AI? AI makes decisions based on its programming and the data it’s been fed, not out of “desire” or “will.” The “choices” of an AI system are determined by factors outside its control, much like ours are largely shaped by our brain’s processes. If free will is just an illusion, how different are we from a computer?
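To make that concrete, here’s a minimal, hypothetical sketch in Python (the function name, options, and weights are all made up for the example): a toy “agent” that picks a font purely by scoring options with weights it already learned from past data. Given the same weights and the same options, it “chooses” the same thing every single time; the decision is entirely determined by its inputs, not by anything resembling desire.

```python
# Toy illustration: a "choice" that is fully determined by learned weights and input data.
# The weights below are invented for this sketch; a real model would learn them from data.

def pick_font(options, learned_weights):
    """Return the option with the highest score under fixed, pre-learned weights."""
    scores = {
        name: sum(learned_weights.get(feature, 0.0) for feature in features)
        for name, features in options.items()
    }
    # Deterministic: same weights + same options -> same "decision", every time.
    return max(scores, key=scores.get)

options = {
    "Gordita":    ["geometric", "rounded", "modern"],
    "Helvetica":  ["neutral", "classic"],
    "Comic Sans": ["playful", "rounded"],
}
learned_weights = {"geometric": 0.9, "rounded": 0.4, "modern": 0.7, "neutral": 0.3}

print(pick_font(options, learned_weights))  # Always prints "Gordita"
```

No matter how many times you run it, the “preference” for Gordita falls out of the weights, not out of any will of its own. Which, if the research above is right, sounds uncomfortably familiar.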

As for consciousness, it remains a deeply complex and debated topic. The current consensus is that no AI is genuinely conscious: they can mimic human-like responses, but they don’t have subjective experiences or feelings. Whether AI could ever achieve consciousness is still an open question, with no definitive answer and some declaring it all but inevitable.

So, what if a robot suddenly said, “Yo! I’m conscious!” Would we believe it? How should we treat AI if it seems conscious? Should we consider its “feelings”? Would we have to give it rights, like a person? It’s tough to say, because we don’t really understand consciousness that well, even in ourselves. What makes us aware? Is it our ability to learn, remember stuff, make jokes, or binge-watch Netflix? And if a machine were to do all of that, would it then be conscious?

What does it matter if consciousness comes from an organic algorithm or a synthetic one?

It is essential to consider the ethical implications if an AI ever appears to have consciousness. While it’s hard to predict how that would play out, the rise of seemingly “conscious” AI would force us to reconsider how we treat and interact with these systems, raising ethical and legal questions about rights, responsibilities, and the boundary between human and artificial life.

In the end, whether we have free will or not, whether AI is conscious or not, whether it’s really us picking that color or not, it’s clear we have a lot to think about. And while our brains might be making many of our choices for us, we can still choose to be kind and thoughtful, and to make the world a better place for humans and... yeah, perhaps robots too.

For now, I’ll keep saying “please” and “thank you” to ChatGPT before we all inevitably welcome it as our robot overlord.
