8/28/2016

The Historical Buildup to the Internet of Things (IoT)



Looking back at the history of enterprise computing, we began with the mainframe. It sat somewhere hidden from sight, ran the programs, and stored the data. It was programmed by full-time programmers who knew the business, the applications, the domain, and the data.

Soon we ventured into "fat" client applications, with the business layer, database layer, and user interface layer combined into a single application that ran on a user's desktop.

Next, we separated the business layer, the database layer, and the user interface into client-server apps. We created classes for each and bundled them into DLLs.

Next, we pushed those DLLs into a COM pool that could be accessed in memory and called from the client app.

Next, we hooked the internet up to the same DLLs, which made them multipurpose and added value.

Next, we pushed those DLLs onto the web as Web Services, called from servers or web applications using secured SOAP XML calls.

Next, we had single-page web applications using JavaScript to do all the heavy lifting, through connected components that magically flowed together in a complex file structure on the server. Combined with AJAX-style calls from the client, they seemed to duplicate our traditional client-server applications from long ago.

So what's next? The Internet of Things, or IoT: applications running out in the wild in a distributed ecosystem. Apps run on devices such as sensors; they capture data in real time, perhaps store it locally, and then flush the contents back to the main server, typically in the cloud. These micro-bursts of packets flow from anywhere, at any time.
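
To make that pattern concrete, here is a minimal sketch in Python of the capture-buffer-flush loop. The sensor read and the ingest URL are hypothetical stand-ins for illustration, not any particular vendor's API.

import json
import time
import urllib.request

INGEST_URL = "https://example.com/ingest"   # hypothetical cloud endpoint
FLUSH_EVERY = 10                            # readings per micro-burst
buffer = []

def read_sensor():
    # Stand-in for a real sensor read (e.g., a temperature probe).
    return {"ts": time.time(), "value": 72.4}

def flush(readings):
    # Push the buffered readings to the main server as one small packet.
    body = json.dumps(readings).encode("utf-8")
    req = urllib.request.Request(
        INGEST_URL, data=body, headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)             # real code adds retries and auth

while True:
    buffer.append(read_sensor())            # capture data in real time
    if len(buffer) >= FLUSH_EVERY:
        flush(buffer)                       # flush contents back to the server
        buffer.clear()
    time.sleep(1.0)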


These devices use a specific protocol that bundles the info into small packets and pushes them to the server, where a program waits patiently for one, thousands, or millions of incoming messages per second.
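
The protocol isn't named here, but MQTT is a common choice for exactly this pattern. A minimal sketch of the server side, assuming the paho-mqtt Python library and a hypothetical broker address and topic layout:

import paho.mqtt.client as mqtt             # pip install paho-mqtt

def on_connect(client, userdata, flags, rc):
    client.subscribe("sensors/+/data")      # hypothetical topic layout

def on_message(client, userdata, msg):
    # One of potentially millions of small incoming packets per second;
    # real deployments hand these off to a queue rather than process inline.
    print(msg.topic, msg.payload)

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883)  # hypothetical broker
client.loop_forever()                       # wait patiently for messages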

That data typically gets written on the server, perhaps to a Hadoop big data repository, or streamed to a relational database. It can also be analyzed in real time, sending alerts to other apps or streams, or triggering other events and processes.
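
As a toy illustration of the real-time path, here is a sketch of a consumer that raises an alert when a reading crosses a threshold. The queue, the threshold, and the alert hook are all made up for the example:

import queue

readings = queue.Queue()             # fed by the ingest listener above
THRESHOLD = 100.0                    # hypothetical alert threshold

def send_alert(reading):
    # Stand-in for notifying another app, stream, or process.
    print("ALERT:", reading)

def analyze_forever():
    while True:
        reading = readings.get()             # blocks until data arrives
        if reading["value"] > THRESHOLD:     # analyze in real time
            send_alert(reading)              # trigger downstream events
        # otherwise the reading just flows on to Hadoop / the database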

The IoT devices out in the wild are tracked and monitored by pinging them periodically to verify they still have a heartbeat, as well as to push updates and hotfixes. For all practical purposes, they are disconnected until they send or receive information.
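
A sketch of what that monitoring might look like on the server: record a last-seen timestamp per device and flag any device that misses its heartbeat window. The device IDs and the 60-second window are assumptions for illustration:

import time

HEARTBEAT_WINDOW = 60.0      # hypothetical: expect a ping every minute
last_seen = {}               # device id -> time of its last heartbeat

def record_heartbeat(device_id):
    # Called whenever a device checks in.
    last_seen[device_id] = time.time()

def find_silent_devices():
    # Devices are effectively disconnected; a missed heartbeat is often
    # the only sign of a dead battery, lost network, or failed update.
    now = time.time()
    return [dev for dev, ts in last_seen.items()
            if now - ts > HEARTBEAT_WINDOW]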

Because they are distributed applications disconnected from the main server, they require reliable connections, impenetrable security, standardized communication patterns, and knowledgeable programmers on the back end to program, patch, maintain, and replace them down the road.

The devices need a small operating system; perhaps a small database or another way to store data, such as XML; network hardware; communication protocols; and a way to stay powered.
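
SQLite is one common embedded option for that small on-device database (the post also mentions XML). A minimal sketch of durable local buffering, so readings survive a reboot until the next flush:

import sqlite3

db = sqlite3.connect("readings.db")  # a single file on the device
db.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, value REAL)")

def store_locally(ts, value):
    db.execute("INSERT INTO readings VALUES (?, ?)", (ts, value))
    db.commit()                      # survives power loss better than RAM

def drain_for_flush():
    # Read, then delete, the buffered rows once they are safely uploaded.
    rows = db.execute("SELECT ts, value FROM readings").fetchall()
    db.execute("DELETE FROM readings")
    db.commit()
    return rows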

What happens in five years when the batteries stop working? Or the operating system is no longer supported? Or the vendor goes out of business or is acquired by another company? Lots of things to consider.

Security is imperative, and as we know, any device connected to the internet can be compromised: packet sniffers watching messages fly by, holes exposed in the sensor network downstream, and other vulnerabilities.
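
At a minimum that means encrypting the device-to-server link so a packet sniffer sees only ciphertext. Assuming the same paho-mqtt client as above, turning on TLS is a small change; the certificate path and broker address are hypothetical:

import paho.mqtt.client as mqtt

client = mqtt.Client()
client.tls_set(ca_certs="/etc/iot/ca.pem")    # verify the server's certificate
# client.tls_set(certfile=..., keyfile=...)   # optional: mutual TLS, so the
#                                             # server can verify the device too
client.connect("broker.example.com", 8883)    # standard MQTT-over-TLS port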

What's next? Perhaps smarter chips that contain artificial intelligence and machine learning, processing the information in real time without having to send packets back to the server.

How about connected devices out in the wild? Thermostats connected to the garage door opener sensor connected to the smartphone. A web of interconnected smart devices running 24/7, disconnected from the main server.

Or a decentralized "hub" that serves as a radio negotiator, relaying transmissions for multiple IoT devices all day, every day.

Wow. Talk about automation and intelligent machines. We are definitely entering new territory.


And there you have it~!
