What is Artificial Consciousness?
Artificial consciousness, also known as machine consciousness, is a field related to artificial intelligence and cognitive robotics. It is, in simple terms, the art of making a machine that is aware of itself and of what it knows. Neuroscience hypothesizes that consciousness arises from the interoperation of various parts of the brain, known as the neural correlates of consciousness, or NCC. Advocates of artificial consciousness believe it is possible to construct computer systems that can emulate this NCC interoperation.
How does it work?
The processing abilities of AI are not unlike the processes that take place in human brains. Sophisticated AI systems use a process called deep learning to solve computational tasks quickly, using networks of layered algorithms that pass information to each other to tackle progressively more complex problems. That strategy loosely resembles how our brains work, where information travels across connections between neurons. With deep learning, a neural network can teach itself how to identify disease, win a strategy game against the best human player in the world, or write a pop song.
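To make the "layered algorithms" idea a little more concrete, here is a minimal sketch of a tiny feedforward network in Python with NumPy. The layer sizes, weights, and input below are invented purely for illustration and are not taken from any real system.

```python
# A minimal sketch of a layered neural network: information flows through
# the network one layer at a time, each layer transforming the previous
# layer's output. All numbers here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Non-linearity applied between layers, loosely analogous to a
    # neuron either firing or staying silent.
    return np.maximum(0, x)

# Two layers of weights: input (4 features) -> hidden (8 units) -> output (2 scores).
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))

def forward(x):
    hidden = relu(x @ W1)   # first layer's output feeds the second
    return hidden @ W2       # final layer produces the network's "answer"

x = rng.normal(size=(1, 4))  # a single made-up input example
print(forward(x))            # two output scores, e.g. class logits
```

Real deep learning systems stack many more layers and learn the weights from data rather than drawing them at random, but the basic flow of information is the same.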
But to accomplish these feats, any neural network still relies on a human programmer setting the tasks and selecting the data for it to learn from. Right now, there is no inherently conscious computer technology. What we do know is that for artificial consciousness to become a reality, neural networks would have to make choices on their own, deviating from their programmers' intentions and acting of their own accord.
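As a rough illustration of how much of the setup still comes from a human, here is a sketch of a toy training loop. The dataset, the labels, the loss function, and the learning rate are all assumptions chosen for this example; the network only adjusts its parameters within the task the programmer has defined.

```python
# A toy supervised training loop: the human picks the data, the target
# labels, and the objective. The model merely minimizes the error it is
# told to minimize. All values are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Programmer-selected data and labels: learn y = 3x + 1 from noisy samples.
X = rng.uniform(-1, 1, size=(100, 1))
y = 3 * X + 1 + 0.1 * rng.normal(size=(100, 1))

w, b = 0.0, 0.0   # parameters the model adjusts
lr = 0.1          # learning rate chosen by the programmer

for step in range(500):
    pred = w * X + b
    error = pred - y
    # Mean-squared-error loss: the definition of "success" is fixed by the human.
    grad_w = 2 * np.mean(error * X)
    grad_b = 2 * np.mean(error)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # should land near 3 and 1
```

Nothing in this loop lets the model question the task itself, which is exactly the gap the previous paragraph points to.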
Where do we draw the line?
If AI is now capable of forming coherent thoughts, then what is stopping it from developing its own opinions, or even a moral compass? Essentially, once a computer has consciousness, what is stopping it from developing mentally the way a human would? What differentiates us from them? What sets humans apart from other organisms and beings is our ability to do complex reasoning, use complex language, solve difficult problems, and introspect. Once a computer can do all of that, what is stopping us from giving it the same rights as a human?
Now, I know this sounds like something straight out of a science fiction movie. You know, robots integrating themselves into society and all that. But that is the reality we are currently living in. It raises the question: should we set boundaries now as to what is entitled to moral and legal rights and what is not, or should we wait until it feels necessary? And if we wait, what if we take it too far? What if it becomes too late to go back? Maybe it already is.