We tend to see the complex world through our lenses. We see, we interpret, we filter and we assume our view of the world is correct. We receive validation from the external world, and adjust our behavior and thoughts to align with what's acceptable.
For those who work in advanced theory, such as Artificial Intelligence or Physics, ideas are formulated on rules and theories. Theories change over time.
We base our thinking, our thoughts, our theories and hypotheses on justifiable evidence. And those facts are the building blocks for projecting out into the future.
Man cannot travel faster than light.
Objects fall at 9.8 meters per second squared.
The planets circle the Sun, the Solar System spins around in the Galaxy, and the Galaxy itself is carried along in an ever-expanding Universe.
There are some fundamental concerns or issues, staring us right in the face, for which we have no answers.
Are ghosts real?
Is there an afterlife?
When does life begin in the womb?
Who built the pyramids of Egypt, why, and what technology did they employ?
What is all the dark matter in the Universe made of?
How do we create life?
With the rise of Artificial Intelligence, and all the doomsday theories surrounding it, such as mass joblessness, or humans residing in zoos and serving our new lords, the computers and robots, we are baffled by the prospect of creating new beings: machines that become intelligent, far superior to the current alpha bipedal hominids that dominate the planet, with intellectual capacity beyond our comprehension.
Is that a possibility? Perhaps. Like anything, there's a statistical probability it could happen, similar to the idea that a monkey sitting at a typewriter will eventually produce a masterpiece worthy of Shakespeare. It could happen.
If you listen, there is talk of creating such an intelligent being or beings. And there is concern, perhaps justifiable, that these beings could become destructive. If they were given a task to amplify production of a specific item at maximum efficiency, they could destroy the Earth in the attempt, because, to them, it makes perfect logical sense.
And in order to prevent such an occurrence, safeguards need to be put in place. So what steps should be taken?
We don't know, because the number of potential variables is unknown. There is no way to code for every scenario; we don't have the time, resources or expertise. So one option is to build an environment in which the machines could learn by training themselves on given rules. They would learn right from wrong: not just black-and-white decisions, but ambiguous, open-ended questions.
Is it okay to kill a person? No. Except in war or as capital punishment.
Is it okay to steal? No. Unless there's a justifiable reason. Like what?
Is it okay to lie? No. Unless the truth would cause unnecessary harm. What determines the difference between a white lie and a real lie?
Is it okay to maximize personal gain? In sports it's acceptable, unless it falls outside the governing rules, such as using steroids.
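The "train on given rules" idea above could be sketched, very loosely, as a reward signal rather than hard-coded logic: the machine is scored on its actions and learns which ones the rules favor, with ambiguity showing up as context-dependent rewards. This is a toy illustration only; the rule table, names, and reward values are all hypothetical, not taken from any real system.

```python
import random

# Hypothetical rule table: (action, context) -> reward. The same action
# scores differently in different contexts, mirroring the "shades of gray"
# in the questions above (lying is wrong -- except for the white lie).
RULES = {
    ("tell_truth", "everyday"): +1,
    ("tell_lie", "everyday"): -1,
    ("tell_truth", "spares_feelings"): -1,  # blunt truth causes harm here
    ("tell_lie", "spares_feelings"): +1,    # the "white lie" case
}

def learn(trials=5000, lr=0.1, seed=0):
    """Learn an estimated value for each (action, context) pair by
    repeatedly sampling situations and nudging toward the observed reward."""
    rng = random.Random(seed)
    values = {key: 0.0 for key in RULES}
    for _ in range(trials):
        key = rng.choice(sorted(RULES))
        values[key] += lr * (RULES[key] - values[key])  # move toward reward
    return values

def choose(values, context):
    """Pick the action with the highest learned value in this context."""
    options = [k for k in values if k[1] == context]
    return max(options, key=lambda k: values[k])[0]

values = learn()
print(choose(values, "everyday"))         # tell_truth
print(choose(values, "spares_feelings"))  # tell_lie
```

The point of the sketch is that nobody wrote an `if white_lie:` branch; the behavior falls out of the reward signal, which is exactly why specifying that signal correctly becomes the hard problem.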
Could machines live in a world, where they always tell the truth, always do the right thing, always in line with the given rules of society? Do humans always live within the given rules of society?
There are severe consequences for breaking the rules, yet our jails are filled with people, and others roam the streets breaking rules without getting caught. Drunk drivers. Bank robbers. Petty thieves. On and on.
How could machines and humans coexist in a world where the rules are shades of gray and the robots follow every rule and the humans do not?
So if we can't write code to outline the blueprint of acceptable behavior based on our current standards of ethics and morals, how do we accomplish such a feat?
Well, the current state of machine learning works by feeding information in as input, letting the machine learn over time, and then producing an output with a certain statistical degree of accuracy. The elephant in the room is that we as humans do not really understand how the machine learns. For all practical purposes, it's a "black box".
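The feed-in-examples, learn-over-time, output-a-probability loop described above can be shown at miniature scale. Below is a tiny logistic-regression learner in plain Python (a deliberately simplified sketch, not any particular real system): it is fed labeled examples, adjusts internal weights, and afterwards emits a probability. Even here the learned weights are just numbers with no explanation attached, which is the black-box concern in microcosm.

```python
import math

def sigmoid(z):
    # Squash any number into a probability between 0 and 1.
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, epochs=2000, lr=0.5):
    """Learn weights from (inputs, label) pairs by gradient descent."""
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in examples:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # how far the prediction missed the label
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return the model's probability that x belongs to class 1."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Feed it the OR function: output 1 if either input is 1.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train(data)
print(predict(w, b, [1, 0]))  # high probability, close to 1
print(predict(w, b, [0, 0]))  # low probability, close to 0
```

With four weights you can still squint at `w` and `b` and reverse-engineer the rule; with billions of weights in a modern network, that inspection becomes practically impossible, which is why "black box" is the honest description.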
This black box we might as well call "Pandora's box": if we don't know exactly how it works, and the machines continue to advance in accuracy and capacity, then what? We don't understand it at the simple level; what about as it becomes more complex, beyond our comprehension?
Just build in some method to control the growth, like pulling the plug or shutting it down. But if we compare this to the internet, how could we pull the plug on the internet? It's so decentralized that there is no single off switch. The internet has a life of its own. Smart machines could soon approach the same state.
Even if we had the capacity and ability to instill morals and ethics from the root foundation, who gets to decide which rules to follow? Do we have the programmers feed the machines the moral conduct of humanity? Over time, how would new rules be introduced, and deprecated rules removed?
We view the world through the narrow lenses of our five senses, our historical framework for truth, and our biased experiences over time. What if our human capacity, as great as it is on this planet, is infantile compared to the great wisdom of the Universe, and in our effort to produce smart machines in an incubator, to create new life without the understanding, compassion and wisdom required, we release Pandora's box into the Universe with no way of shutting it down?
A similar comparison is the advent of the nuclear bomb. Except that technology is locked away in secrecy, never to be used except in extreme circumstances. Would Intelligent Machines fall under the same domain, or will they be created commercially, and if so, who would have governing authority? Creating Artificial General Intelligence could be many years in the future, but then again, it could be here sooner than we think, given the increases in research funding over the past few years.
For me, it's a fascinating subject with lots of potential benefits. But then again, there's a lot to consider. Time will tell.