4/19/2016

Smart Machines and Robots May Be Inflexible in the Future

In society, you are free.  Free to do as you please.  As long as you don't harm yourself or others.  And don't incite riots or threaten people in high positions.

Other than that, you can sort of do what you want.  Just keep in mind, you're not going to win any popularity contests by spewing out things people don't want to hear.

People spend a great deal of effort avoiding certain subjects, heads in the sand, ostrich style.


The other issue: lots of orgs are scanning and capturing your tweets, posts, blogs, everything really.  They form these social profiles of everyone.  And those can and will be used against you.  They can cost you your job.  All sorts of implications.

So let's say we take one of these artificially intelligent bots, let it scan the entire internet, and form its own conclusions.  One thing it may pick up is the fact that lots of people have lots of opinions.  How do you determine what's acceptable and what's not?

Well, one group force-fed a bot until it thought Hitler was a good guy.  The ramifications weren't that serious, except it exposed a great weakness in the process of training a bot.  It learns over time, based on information.  You feed it, it learns; you feed it some more, it learns more.
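To see why "you feed it, it learns" is a weakness, here's a minimal sketch in Python.  The ToyBot class and the training phrases are hypothetical, but the failure mode is the same one that bit real chat bots: the model simply averages whatever labels it's given, so a determined crowd can flip its beliefs.

```python
from collections import defaultdict

class ToyBot:
    """A toy bot that averages the sentiment of every phrase it is fed."""

    def __init__(self):
        self.score = defaultdict(float)  # word -> running sentiment score
        self.count = defaultdict(int)    # word -> number of times seen

    def learn(self, phrase, sentiment):
        """Update word scores from one labeled example (+1 good, -1 bad)."""
        for word in phrase.lower().split():
            self.count[word] += 1
            # Running average: new examples always shift old beliefs.
            self.score[word] += (sentiment - self.score[word]) / self.count[word]

    def opinion(self, phrase):
        """Average the learned scores of the words in a phrase."""
        words = phrase.lower().split()
        return sum(self.score[w] for w in words) / max(len(words), 1)

bot = ToyBot()
bot.learn("hitler was a terrible person", -1)
print(bot.opinion("hitler"))  # negative, as you'd hope

# A hostile crowd "force feeds" the opposite label, over and over...
for _ in range(50):
    bot.learn("hitler was a good guy", +1)
print(bot.opinion("hitler"))  # the belief has flipped to positive
```

Nothing in that loop asks whether the new labels are trustworthy, which is exactly the hole the pranksters exploited.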


How is a bot supposed to learn what's okay and what's not?  There are so many exceptions.  People say that learning the English language is so difficult because different words can sound the same yet have different meanings and spellings, for example, To, Too, Two, and There, Their, etc.
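Context is what separates those homophones.  As a hypothetical sketch (the tiny corpus below is made up), a bot could remember which words each spelling tends to appear next to and then pick the best match for a new context:

```python
from collections import defaultdict

# Made-up mini corpus; a real bot would scan far more text.
corpus = [
    "i want to go", "she went to work",
    "that is too loud", "me too",
    "two dogs barked", "i have two hands",
]

neighbors = defaultdict(set)  # spelling -> words seen beside it
for sentence in corpus:
    words = sentence.split()
    for i, word in enumerate(words):
        if word in ("to", "too", "two"):
            if i > 0:
                neighbors[word].add(words[i - 1])
            if i + 1 < len(words):
                neighbors[word].add(words[i + 1])

def guess(prev_word, next_word):
    """Pick the spelling whose known neighbors best match this context."""
    return max(neighbors, key=lambda s: (prev_word in neighbors[s])
                                        + (next_word in neighbors[s]))

print(guess("want", "go"))     # -> to
print(guess("have", "hands"))  # -> two
```

Real systems apply the same idea at enormous scale, with statistical language models instead of hand-counted neighbors.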

Learning the patterns of the whole planet must be a lot more complicated.  Especially the norms of each society.  In other words, the culture, which is made up of the written and unwritten rules that form the values of a society.

A bot may learn that people drive autos on the right side of the road, yet see images from England, for example, where they drive on the left-hand side.  And how do you know when to use the metric system vs. American units of measurement?  So many details to learn.

Most of it can be learned through context.  Like a dictionary mapping items to events to places to time and sentiment.  A complex web of neural networks encoding acceptable behavior.
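At its simplest, part of that dictionary could be a locale-dependent lookup.  A hypothetical sketch, with two illustrative regions (the values are simplified, not a complete standard):

```python
# Hypothetical table of norms; real systems would need far more
# regions and nuance (the UK posts road distances in miles, etc.).
NORMS = {
    "US": {"driving_side": "right", "units": "imperial"},
    "GB": {"driving_side": "left",  "units": "metric"},
}

def norm(region, key, default="unknown"):
    """Look up a regional norm, falling back gracefully when unsure."""
    return NORMS.get(region, {}).get(key, default)

print(norm("GB", "driving_side"))  # left
print(norm("US", "units"))         # imperial
print(norm("FR", "driving_side"))  # unknown -- better to admit it
```

The hard part, of course, is the unwritten rules, which no table captures cleanly.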


So who trains the models?  The computer programmers?  No, they only apply the requirements handed down from the business.  And where does the business get its requirements?  From upper management.  And where does upper management get theirs?  From the board and shareholders.  Or from mandates from the Government or the Military.  It could go high up the chain, actually.

Yet who authorized them?  Just because we do things a certain way, why should we dictate the rules?  Just because we have the technology and resources and knowledge and ability?  How can we ensure that the proper morals and ethics are applied evenly and consistently?  And to whom do these intelligent machines owe their allegiance?

Who basically owns the responsibility of ensuring ethical and moral smart machines?  That's a good question.

Just like any new technology, it can be used for good and/or evil.  Could you imagine, years from now, an army of robots patrolling?  Who issues their orders?  Simply an invisible layer of autonomous beings hiding the order-givers behind the curtains.


How do you feel about IVR systems and traffic lights?  They automate to some degree, yet they're very inflexible.  I sure hope these new smart machines are a bit more tolerable and easier to work with than our current systems.

Yes, automation and robots can and will automate jobs.  But they could also patrol the place and keep humans in line.  I'm not sure robots would respond to bribes or friendship history to get you off the hook.  But then again, they may apply the rules evenly, so no discrimination based on color, creed, sexuality, or what have you.

So there you go.  Covered a lot of ground in this post.  Thanks for reading~!