1/14/2018

How do we Scale & Speed Up Artificial Intelligence?

People have filters.  If you ask someone who drinks at bars a question about a bar, chances are they will have more information than you care to know, as in what bars are in the vicinity, the culture of the clientele, the menu, etc.

Same with a person who enjoys coffee.  They have an internal map of all the coffee joints in town: hours, locations, people, baristas.

The reason is that they are domain experts in their specialized interest.

They say the average mind takes in thousands of signals a second.  We are bombarded by so many stimuli that we filter out the stuff not important to survival or interest, perhaps running anomaly detection routines in the background for protection, subconsciously in the back of the brain.

The mind is always awake, turning stimuli into information.  We filter on the basics, push aside the unimportant stuff, and go about our lives.  Perhaps we have special areas of interest in which we are knowledgeable.  Perhaps we learn up to a point, then shut out anything new after a certain age.

Artificial Intelligence is intelligence produced artificially.  The intelligence is derived the same way as in any other computer.  It takes Input.  Processes the input.  Returns Output.

We have developed multiple ways to teach computers patterns over time: feed in input data, process it through multiple layers, and produce output results expressed as probabilities.
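
As a rough sketch of that input, layers, probabilistic-output flow, here is a tiny example using scikit-learn (my choice of library and dataset, purely illustrative):

```python
# Minimal sketch: Input -> processing through multiple layers -> probabilistic Output.
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)                  # Input: labeled examples
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)                          # Process: learn through multiple layers

probs = model.predict_proba(X_test[:1])              # Output: class probabilities
print(probs.round(3))
```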

We have specific sets of data to train specific data models.  We tune these models for better accuracy over time, so the training can learn from new data.

We have data models that watch other data models and receive input via feedback loops, for instant feedback, so the models can learn faster and with better accuracy.
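
Here is a hedged sketch of that watcher-plus-feedback-loop idea: a simple monitor compares a model's predictions with outcomes that arrive later and feeds the mistakes back as extra training signal.  The names and the scikit-learn setup are my own assumptions, not any specific system:

```python
# Sketch: one model "watches" another and feeds its mistakes back for further learning.
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)
learner = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
learner.fit(X[:1000], y[:1000])                 # initial training

def monitor_feedback(model, X_new, y_new):
    """Watch the model on fresh data; return the examples it got wrong."""
    wrong = model.predict(X_new) != y_new
    return X_new[wrong], y_new[wrong]

X_bad, y_bad = monitor_feedback(learner, X[1000:], y[1000:])
if len(X_bad):
    learner.partial_fit(X_bad, y_bad)           # instant feedback: learn from the mistakes
```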

Yet they are limited to their domain.  They may be experts in a specific area, but chances are not across multiple domains, in real time.

Input.  Process.  Output.

That is how computers work, and to some degree that is how brains work.  Brains could be considered very advanced data models.  How the data gets stored and archived, how it is accessed on demand, how memory works, these things are becoming better understood, yet the brain is still a black box indeed.  Brains are extraordinary equipment and remain a mystery.

Artificial Intelligence has progressed recently, as in the past 10 to 15 years, because data sets are more available, processing power has increased, software is freely available, and expert thinkers and designers at well-funded organizations are working night and day, tirelessly, to solve this unique riddle.

Some of the issues confronted: domain-specific models do not scale easily, they take time to train, and they are perhaps not real-time models.  The process of obtaining data, cleansing it to some degree, and training models through multiple layers is tedious, not fast, and performed by trained professionals in specialized office settings.

With all the advancements so far, as in winning at Jeopardy! and reaching master level at games such as Othello, Atari video games, Go, Chess, and Poker, how do we integrate multiple layers of AI across multiple domains in real time, as well as scale globally with increased accuracy, better performance, and lower costs?

Input.  Process.  Output.

With AI, we have facial recognition, classification of objects, speech, vision, natural language processing, predictions based on statistical probability, and anomaly detection based on data points that do not comply with expectations.  Most of this is performed via computer software.  Robotics is entering the space as well, in manufacturing jobs and machinery, to increase efficiency across physically demanding and repetitious work.
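
As one small, hedged example of that last item, anomaly detection on data points that do not comply with expectations, here is a sketch using scikit-learn's IsolationForest (my choice of method; nothing here prescribes a particular one):

```python
# Sketch: learn what "expected" data looks like, then flag points outside it.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))     # expected behaviour
odd = np.array([[6.0, 6.0], [-7.0, 5.0]])                  # points that do not comply

detector = IsolationForest(random_state=0).fit(normal)
print(detector.predict(odd))                               # -1 means "anomaly"
```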

So if a software program can be trained by processing large sets of data, what if we could teach machines to learn faster?  As in, learn by example.  A picture says a thousand words.

What if a computer could simply watch an action over time and learn the techniques to duplicate the behavior, with efficiency and accuracy?  How would that be done?

Well, if you had a camera that translated the external world into a digital world for processing, that would handle the data input.

The processing could be wired to watch the patterns and learn the best practices as well as the exceptions.  The core learning wouldn't take that long; it's learning the exceptions that may take longer, and those can be archived and appended over time.  This is a challenge for humans as well: as long as things go as expected, people can process and move forward.  It's when exceptions occur that you have to decide how to handle them, forwarding them to a manager who has more experience or authorization.  Same with computers: flag the occurrence for future follow-up, otherwise continue as usual.
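
A minimal sketch of that idea, watch the repeated pattern, learn the routine, flag the exceptions for follow-up, assuming the camera pipeline has already turned frames into action labels (everything here is illustrative):

```python
# Sketch: learn the normal routine from demonstrations, flag anything outside it.
class WatchAndLearn:
    def __init__(self):
        self.known_actions = set()   # the learned routine
        self.exceptions = []         # flagged occurrences for future follow-up

    def learn(self, demonstrations):
        """Core learning: watch the demonstrated actions and record the routine."""
        for action in demonstrations:
            self.known_actions.add(action)

    def perform(self, action):
        """During operation, flag anything outside the learned routine."""
        if action not in self.known_actions:
            self.exceptions.append(action)   # escalate, follow up later
            return "flagged"
        return "ok"                          # otherwise continue as usual

agent = WatchAndLearn()
agent.learn(["pick", "place", "pick", "place"])   # watch the repeated pattern
print(agent.perform("pick"))     # 'ok'      -> continue as usual
print(agent.perform("drop"))     # 'flagged' -> exception recorded for follow-up
print(agent.exceptions)          # ['drop']
```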

Teaching computers to teach themselves, by showing them actions to perform, may speed up the training of models.  And those models can be integrated with other models.  Each component becomes an expert in its niche, and models can access one another, like a beehive.  A series of combs together forms a structure, the beehive.  Each comb could display its metadata: what the model does, what its domain expertise is, how to interact with it, who created it, when, how often it is updated, and so on.
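
A rough sketch of what such honeycomb metadata might look like, assuming a simple Python structure; the fields follow the list above and the example values are made up:

```python
# Sketch: the metadata a single "comb" (model) could publish to the hive.
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelCard:
    name: str
    purpose: str            # what the model does
    domain: str             # its area of expertise
    interface: str          # how to interact with it
    created_by: str
    created_on: date
    update_frequency: str

coffee_model = ModelCard(
    name="coffee-shop-recommender",
    purpose="Recommends nearby coffee shops",
    domain="local coffee shops",
    interface="predict(location) -> ranked list",
    created_by="example author",
    created_on=date(2018, 1, 14),
    update_frequency="weekly",
)
```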

A combination of multiple domains spliced together across giant networks would form a unified collection of knowledge across multiple domains in real time, scaled across the planet, continuously updated with newer information.  New data models could leverage the knowledge already learned by other data models across similar domains.
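
One hedged way to picture a new model leveraging what another model has already learned is to reuse the trained model's internal representation as features for a related task, a simple form of transfer learning; the setup below is illustrative only:

```python
# Sketch: a new model builds on knowledge already learned by another model.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)
source = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
source.fit(X[:1000], y[:1000])                       # model trained on the "source" domain

def hidden_features(model, X):
    """Project inputs through the source model's first learned layer (ReLU)."""
    return np.maximum(0, X @ model.coefs_[0] + model.intercepts_[0])

# A related task reuses those learned features instead of starting from raw data.
related_task_y = (y[1000:] % 2 == 0).astype(int)     # illustrative stand-in task
target = LogisticRegression(max_iter=1000)
target.fit(hidden_features(source, X[1000:]), related_task_y)
```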

When we think of Artificial Intelligence, we think technology, as in data or programming or algorithms, or what have you.  What you also need to consider are the liberal arts, the actual arts, the sciences, and different cultures, as in anthropology, history, medicine, banking, traffic patterns, engineering, politics, and governments.  There is no set of knowledge outside the realm of AI; it encompasses everything, including languages, religions, tactical warfare, currencies, economies, etc.

It may be possible to speed up the development of Artificial Intelligence by leveraging the ability of software applications to learn by watching, by osmosis.  We cannot depend on huge volumes of new data sets that take hours to program and train for specific models in specific domains, which do not scale and are perhaps not real time.

We need AI systems that scale, run faster, and are more accurate, learning the basics of new systems by watching repetitious patterns, as well as picking up information from other trained models in the same or similar domains.  We can expose these trained models using metadata, telling users what each specific model does, so they can be leveraged; why reinvent the wheel when that knowledge has already been learned?

We have made great strides in the field of Artificial Intelligence.  I wonder if we could speed things up a bit by introducing new ways to train models and bypassing the process of training models on large data sets.  Let the models train themselves by watching and learning, over time, in real time, and let them leverage a network of trained models across the globe.

Any opinions on this line of thought?