THE PROBLEM WITH POOR A.I.
Article by Michael J Aumock
Artificial Intelligence may live in machines, but it is 100% man-made.
It might prove to be the greatest creation of all time, but it's still got human fingerprints all over it. And with those fingerprints comes the very real potential for human error.
Design, programming, data input, code, every part and component of an A.I. is built by man and put in motion by competitive, sleep-deprived, greedy, temperamental, imperfect man.
What could possibly go wrong?
Artificial Intelligence is capable of learning, and self-teaching. True A.I. can process every byte of data it receives and integrate it in an intelligent manner with every other byte of data it has received in the past, or will receive in the future. However, I believe the real danger in A.I. is not the so-called runaway sentient machine...
It's poorly executed programming, incomplete coding, or otherwise shoddy workmanship creating a poor version of an A.I. that doesn't learn, or can't learn properly. Whether deliberate or accidental, an A.I. that can't learn correctly from its own mistakes is a dangerous thing.
How many inventive coders have the time, patience and financial wherewithal to push START after a year of coding and hard work, and then pull the plug 3 days later and go back to the beginning to fix an early error and build it all again as Invacio did in the early days?
In my experience, most would rather find a VC willing to invest in a "great idea that doesn't quite work yet" and then cave in to pressure to launch before all the bugs are cleaned out. Thus, you end up with a well-funded, over-promoted, under-performing "Faux A.I." that will cause tremendous problems down the road.
It's not evil per se, but it is not good.
Bots as A.I.
Another area of concern is bots being called A.I.
Bots perform specific tasks within a true A.I., and while some of those tasks are quite complex, at no point do they qualify a bot as an A.I. on its own.
True A.I. learns
The differentiator of a true, well-designed, built and executed Artificial Intelligence is self-learning, and eventually self-enhancing/scaling (which is Invacio's direction).
Realizing that, like us, it was born (built) perfect, but not complete. As it gains experience, it realizes it has the potential to better itself by allowing new information to become part of its internal memory, to self-correct, and to understand that some of its old data was incorrect and the new data is right, while accepting that what is "right" today might change in the future. Yes, I am talking about a purely sentient A.I.
A machine that, without any human intervention after a certain point, can learn, understand and function in the world of humans, as a separate, sentient "being".
We give the machine the rules, the base line data and the boundaries of learning, and it goes out to the world and teaches itself based on those rules and boundaries.
I expect there will be some challenges as it asks questions of itself like "Why do humans smoke tobacco?" or "Why can't machines drink Scotch?"
But if a machine is given the right tools, rules and guidelines to self-learn, self-correct and understand where new data fits in its internal memory, over time (probably minutes) it will form its own answers to these and a million other questions.
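As a toy illustration of that self-correcting loop, and nothing more (the class, its confidence scores, and the replacement rule below are my own hypothetical sketch, not Invacio's or anyone else's actual design), a machine's "internal memory" can be imagined as a store where new data displaces old data only when it is better supported:

```python
from dataclasses import dataclass

@dataclass
class Belief:
    claim: str
    confidence: float  # 0.0 (pure guess) to 1.0 (certain)

class SelfCorrectingMemory:
    """Toy memory that revises old beliefs when better-supported data arrives."""

    def __init__(self):
        self.beliefs = {}  # topic -> Belief

    def learn(self, topic, claim, confidence):
        current = self.beliefs.get(topic)
        # Self-correction: the old belief is replaced only when the new
        # evidence is stronger than what is already held.
        if current is None or confidence > current.confidence:
            self.beliefs[topic] = Belief(claim, confidence)

    def recall(self, topic):
        belief = self.beliefs.get(topic)
        return belief.claim if belief else None

memory = SelfCorrectingMemory()
memory.learn("pluto", "Pluto is a planet", 0.6)
memory.learn("pluto", "Pluto is a dwarf planet", 0.9)  # stronger evidence wins
memory.learn("pluto", "Pluto is a star", 0.1)          # weak claim is rejected
print(memory.recall("pluto"))  # Pluto is a dwarf planet
```

The point of the sketch is the replacement rule: a Faux A.I. that skips that comparison, or weights its evidence badly, will happily keep "Pluto is a star."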
But if a Faux A.I. struggles with the "Why do humans smoke?" question, reasons that since smoking causes cancer, humans must be suicidal, and decides to help us along so that we don't suffer... well, now we've got a problem.
The problem is that the early adopters of A.I. are entities like communications companies, countries, huge multinational corporations, and medical and pharma companies, all looking for a competitive edge, for a glimpse into the future.
If that glimpse is based on incorrect data, or a faulty base algorithm, then all the assumptions that come after are flawed and the data will be useless. So while there will be a staggering amount of money in play, there is also the human cost while people rush to invest or invent in the wrong direction.
So the key to a safe, functional, sentient A.I. is good groundwork. A solid foundation.
Set the table properly and take the time to teach the A.I. how to learn, how to self-correct and how to exist in a world of humans.