8/16/2018

The Slow Uphill Climb

Popeye was a fine character.  Just going about his business.  Mean old Brutus wouldn't have any of that.  That gorgeous Olive Oyl with the baby.  And didn't Wimpy ever think to get a job, pay for his own burgers?  Come on man, pull your own weight.

They say Gilligan's Island had undertones.  The Professor represented white collar, the Skipper blue collar, the Howells were the upper class, plus the movie star and the mid-west gal, and Gilligan was the commie red that messed up everyone's plans, every single time.  They didn't get paid much, for one of the best television shows of all time.  Gilligan wasn't a bad role model.  Just play stupid.

Sure had some good shows back then.  Sat 3 feet in front of the TV, didn't have a remote, lucky to have color, played Pong before Atari, Pinball, Pac-Man.  Those arcade games sure got a lot of our quarters.  I sold bubble gum on the bus and at school for extra money.  Hand-held games like head-to-head football, basketball, the Rubik's Cube.  Those were the days.

Then the computers showed up.  Eighth grade, first course on the TRS-80, took typing class, had an IBM PC at home to hack on DOS, just dual floppy drives, no hard drive or mouse.  Used to dial into BBSs using a 1200 baud modem.  Took a computer class in high school, sat in the back, everyone copied off me, the teacher never knew I was there.  Signed up for wood shop, they moved me to electronics, we built an AM radio, the final exam was to troubleshoot why it didn't work, something hadn't been soldered correctly, had to fix it.

Just coasted through childhood, made high honor roll most times, somehow got into college, even graduated in 4 years, got a diploma and everything.  College was sort of like 13th grade, just a transition from high school, courses were more difficult, even more lost in classes of 800 people, you have to find time to study.

After graduation, there's no more school.  You have to go out into the world and find a job.  What kind of job can an Anthropology major do, with no business world experience, during a deep recession?  What are you supposed to do, got to pay for car insurance, pay the rent, electric bill, gas money.  Nobody really explained the entire concept of adulthood, sort of landed there, looked around, uh, what am I supposed to do now?

Well, turns out they have temp agencies to get you jobs for a day.  Maybe $5 an hour.  Couple times a week.  It's a living.  Except, being on the bottom of the financial totem pole, you don't have too many options, other than up.  My mother called one day, they were interviewing at her work for credit analysts, I applied, had to take a basic math test on addition and subtraction, spoke with the hiring manager, started a week later.  They train you for a week, then throw you out on the call center floor, taking calls from live salespeople in the store, with customers applying for credit in real time at the cash register: ask basic demographic questions, run the credit report, respond with an answer, in under 5 minutes.  My average talk time was under 2 minutes.

After getting hired full time, after a year or more, they asked me to come to work early and compile the credit analysts' call log from the prior day into Lotus 1-2-3, which had some macros.  A position opened up for lead, I applied, didn't get it though, they gave it to some guy, I asked why, my manager said, "he's starting a family, needs the extra money", I sort of thought that was discrimination, I'm no lawyer though.

So I found a job listing in the newspaper, credit analyst, a mile from my job at the time, drove over in shorts and tennis shoes, dropped off a resume, the receptionist said, do you have a minute, the hiring manager would like to speak with you, sure.  I interviewed in tennis clothes, got the job, and after, he asked if I knew anyone else looking for a credit analyst job at the finance company, sure did, got two people hired from my previous job.  My salary was under $20k, yet I was so thrilled, the hiring manager bumped it to $21k out of pity I suppose.

We used to work weekends, taking credit apps from retail stores, it was pure chaos.  We had temps pick up the fax, hand it to a data entry temp, who passed it to a credit analyst.  It would get backed up, the store would call in asking for status, they got so mad.  I went to the director who hired me, said, why don't we just number the loan apps as they arrive, track them on paper, it worked out great.  Then we got downsized, I interviewed for asst. manager in a finance branch, got the job, they said if I declined, I'd forfeit severance, so I moved the next day, started the new job that Monday, no money for moving expenses, to another state.

I worked in a finance company, high risk, high rate.  I sold so many loans the first month, my manager submitted me for employee of the month, and I got it.  I was good at selling loans, sort of like selling bubble gum on the bus in 6th grade.  Also had to collect, call people, track them down, get payment, even drive to their houses.  Almost got bit by a dog, and threatened by a customer never to show up at the house again.  So I applied for a credit analyst position at the bank.  I knew somebody there, and although I wasn't 100% qualified, I applied, got the job, it was fun, except I had to approve loans for cars, RV campers, land, HELOCs, cash, CDs, you name it.  Was a bit different, learned on the fly.  They asked for a volunteer to assemble the prior day's credit analyst numbers, I volunteered, every day, I got to compile reports for a few hours, was way better than approve, decline, decline, approve.  So I asked the bank to pay for a night class, C++, got an A, got reimbursed, and got hired into the IT department.  They said if you want a decent raise, you have to quit and come back.  So I found a reporting position.

Since then, salary has increased quite a bit, about 4 and a half times.  I'm still writing reports, 22 years later, hasn't changed much, actually it's changed a lot, you just have to get the data and numbers to match.  It's still fun for the most part, hard to believe they pay you for this, but if those are the rules of the game, I'm in.  For some reason, I seem to enjoy working with the numbers and data and solving tough problems.  Believe it or not, even in IT, many folks just don't like to think for long periods of time.  I may not be smarter, but somehow I see the solutions, over time, they tend to repeat, and of course, it's usually something quite obvious that others overlook, kind of like me.

I really don't know what to become when I grow up.  Luckily, writing reports has been a good fallback for the bulk of my career.  One day, maybe, I'll figure it out.  In the meantime, I'll be writing reports for a client, trying to get the numbers to match.  Not sure how any of this can be traced back to my degree in Anthropology, but at this point, not sure anyone even notices.

And so it goes~!

SSIS Automation using SSIS

Do you ever sit down and write code?  For hours.  Never get up.  Even to eat or use the rest room.

That's what I've been up to the past few days.  Writing C# code.  In an SSIS script component.

It's quite tasty.  It's actually programming.  Within SSIS.  In the data space.  It's an SSIS package to create other SSIS packages.  Using custom code, XML templates.  Reading from the source system.  And an Excel file template.  It outputs ".DTSX" packages.  To automate the process.  Rather than write each package one at a time.  Automation is key.
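Here's a minimal sketch of the pattern, with all paths, tokens and table names made up for illustration.  Since a .DTSX package is just XML, one approach is simple token replacement over a template, one output package per row of metadata:

    // Minimal sketch of template-driven package generation (all names hypothetical).
    // A .dtsx file is just XML, so one approach is token replacement over a template.
    using System;
    using System.IO;

    class PackageGenerator
    {
        static void Main()
        {
            // Template .dtsx containing placeholder tokens like {{TABLE_NAME}};
            // assumes the folders already exist.
            string template = File.ReadAllText(@"C:\Templates\LoadTable.dtsx");

            // In the real package these rows come from the Excel metadata file;
            // hard-coded here to keep the sketch self-contained.
            string[] tables = { "Customer", "Account", "Transaction" };

            foreach (string table in tables)
            {
                string package = template
                    .Replace("{{TABLE_NAME}}", table)
                    .Replace("{{PACKAGE_NAME}}", "Load_" + table);

                File.WriteAllText(@"C:\Output\Load_" + table + ".dtsx", package);
                Console.WriteLine("Generated Load_" + table + ".dtsx");
            }
        }
    }

In the actual package, the same idea runs inside the script component, with the table list read from the Excel template and the source system.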

Also writing U-SQL scripts.  Creating, truncating and dropping tables in an Azure Data Lake Analytics U-SQL database.  And pushing CSV files up to Azure Data Lake Store.
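For flavor, here's roughly what that DDL looks like: a hypothetical U-SQL script (database, table and column names invented), held as a C# constant so the automation code base could hand it to Azure Data Lake Analytics as a job:

    // Hypothetical U-SQL DDL, kept as a C# constant for submission to ADLA.
    static class UsqlScripts
    {
        public const string CreateTruncateDrop = @"
            CREATE DATABASE IF NOT EXISTS StagingDb;
            USE DATABASE StagingDb;

            CREATE TABLE IF NOT EXISTS dbo.DailyExtract
            (
                CustomerId int,
                LoadDate DateTime,
                Amount decimal,
                INDEX idx_DailyExtract CLUSTERED (CustomerId)
                DISTRIBUTED BY HASH (CustomerId)
            );

            TRUNCATE TABLE dbo.DailyExtract;       // clear before each reload
            DROP TABLE IF EXISTS dbo.OldExtract;   // retire tables no longer needed
        ";
    }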

Sure is fun to be heads down in the code.  I'll be there in 5 minutes.  2 hours later, where are you?  I'll be there in 5 minutes.

And so it goes~!

8/13/2018

Time Stops for Worm Holes

Can you stop time?  Just open a slit in the space time continuum.  Drop down a worm hole.  Appear in a far off galaxy.  Hang out for a while.  Return home in time for dinner.

Populate Mars.  Maybe occupy some pre-built structures, from before they lost atmosphere, flew to Earth, built a race of working bi-pedal hominids.  History is fluid, most of the truth evaporated.

Atlantis.  Lost?  Shifted south.  As India shifted north.  Smashed, created some mountains.  Platonic shift?  Europia.  Popcorn please.  Fiction is so fascinating.

Big Bang.  Any witnesses?  Got a few questions.  Infinity.  For how long?  Expand forever.  Where is the edge?  Been there a few times.  Took the worm hole.  Found the slit.  When time stopped.  Back for dinner.

Three senses?  Wow.  What about all the others?  Like doing math with no odd numbers, and excluding 7, 5, 1 and 9.  Sort of limiting.

Before arrival, sat around, with the guides, what would you like to accomplish this time, while in physical form?  Well, purgatory sounds like a good learning experience.  And poof.  Here we are.

Predict the future?  Can't predict the past.  Fiction.  Reality?  An aligned hallucination.  1. Work 2. Spend 3. Repeat

Unlearning is the first sign of intelligence.  Tabula Rasa.  Blank slate.  Dinner plate.  Time to go.  Famished.

7/24/2018

Azure Programming in U-SQL

Here's something I noticed recently.

I enjoy working in the Microsoft space.

When you work in the Microsoft space of technology, things are fairly consistent.  Drop-down menus.  Command buttons.  Documentation.  It's fairly uniform across most applications.

When you compare that to open source or other vendor-owned software, perhaps the fonts are different, or the screens are jumbled and difficult to read.

In other words, when you work with the same software apps by Microsoft, you intuitively know where things are on the screen, you know what to expect when you install an application, you have confidence the app is going to work as expected and, most likely, it will integrate with other products from the same vendor.

Now keep in mind, if you've ever tried to install Visual Studio for instance, you are aware of the number of dependencies and extra installs required just to get everything up and humming, like .NET Frameworks and such.  And there's a bunch of different places to download the software.  And try messing with the GAC or Registry settings, no easy task.

All that is a side note.  What I'm talking about is ease of use, the common themes across products and their interaction with other apps.

We're not talking about deprecating good products, like Visual Studio 6, forcing developers to move to .NET object-oriented languages.

Speaking of new languages, I've been working with U-SQL for the past few months and I will say it's a fantastic language.  You can develop the code in Notepad and copy-paste it into the Azure portal, click submit and watch it run in real time; it shows the code, the errors, graphs, execution plans.

Also, you can take that same code and run it in Visual Studio, then hop over to the Azure portal and watch the job run in real time.  You can also see the Azure objects from Visual Studio, export data in VS or Azure, and watch jobs run in VS or view objects from Solution Explorer.

It's amazing to write code on a VM, execute the code to pull on-premise data, push it to Azure Data Lake Store in the cloud, mount that data using U-SQL and send it to a U-SQL database table.  Hybrid programming is the future.  When you execute a job, you can see the estimated cost / charges on the web page.
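The mount-and-load step looks roughly like this, a sketch only, with the path, schema and column names invented (the same statements can be pasted into the portal or run from Visual Studio):

    // Hypothetical U-SQL, again carried as a C# constant: mount a CSV already
    // landed in the Data Lake Store and load it into a U-SQL database table.
    static class UsqlLoadScript
    {
        public const string MountAndLoad = @"
            USE DATABASE StagingDb;

            @rows =
                EXTRACT CustomerId int,
                        LoadDate   DateTime,
                        Amount     decimal
                FROM ""/input/daily_extract.csv""
                USING Extractors.Csv(skipFirstNRows: 1);

            INSERT INTO dbo.DailyExtract
            SELECT CustomerId, LoadDate, Amount
            FROM @rows;
        ";
    }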

This environment is so open and fluid and really opens up a lot of options, and it has the look and feel you get with similar vendor products.  I've been working with Microsoft products professionally since 1996, with Visual Studio 3, 4, 5, 6, then ASP, then .NET, SQL Server, MSBI and now Azure.

Azure Cloud Programming has a lot of good features and opportunity.  Bite off a particular area to work in; I recommend U-SQL.  Good stuff.

6/28/2018

Attending the Real BI Conference at MIT in Boston

Wrapping up day 2 of the Real Business Intelligence Conference, See Beyond the Noise, held at MIT in Boston.  Overall, it was a great event and I'm glad I attended.

https://www.realbusinessintelligence.com/

The nice thing about the conference was the venue.  Just being at MIT, you sort of feel smarter.

Next, the quality of speakers was great, with a mix of topics ranging from Futurism to GDPR to Real World Data Project success stories.

Food, snacks, great.

I like the single auditorium, single track approach.  Lots of conferences have multiple tracks running in parallel, forcing you to choose between options.

The conference also had a community feel to it, as in getting to meet people from all over the world and having quality discussions on a variety of topics around technology.

The two day approach was nice, as some conferences run 4 or 5 days and are very technical; by the end of the week, your brain is fried, you don't remember much and you actually skip some of the sessions to give the mind a rest.

This conference was concise and personal, with quality speakers and content.

If you're interested in attending next year, the website is:  https://www.eventbrite.com/e/2019-real-business-intelligence-conference-tickets-47210074604

The most surprising topic was GDPR, which is basically a world mandate to protect EU users' data, the rules, and the implications surrounding non-compliance.  Seems like a good opportunity for consulting firms to hold businesses' hands in getting compliant.

The main theme in almost every talk: "Ask better questions."  Overall, very happy to have attended the Real BI conference 2018 at MIT in Boston.  Even got a photo with Howard Dresner, who coined the term "Business Intelligence" in 1989; his firm hosted the event.

Thanks for reading~!

6/21/2018

Tennis Against the Wall 2

Another tennis video.  Mostly forehands.  Against the wall.

Gusts of ambition.  On a warm summer's day.

6/17/2018

Tennis Wall

Been playing sports all my life.  Gave up baseball, basketball, bowling, soccer and swimming to excel at tennis.  Funny thing, my 6th grade teacher also taught tennis at the club and I took clinics from him after school.  Been playing tennis on and off since high school.  One thing you learn over time: the wall is the greatest opponent.  Today I decided to try again, give the wall another go at it.  Granted, got my cigar in hand, a few volleys, in the blistering heat.  Sometimes I play left handed.  Used to be in pretty good shape, could play for hours.

Tennis is the greatest sport, you can play your entire life.

And so it goes~!

6/05/2018

URL References for U-SQL Getting Started

Here's a basic list of URLs to get familiar with U-SQL, which is Microsoft's fairly new language for accessing Azure Data Lake.  Sort of a blend of C#, LINQ, SQL, window functions and a bunch more:


Get started with U-SQL in Azure Data Lake Analytics

Plug-in for Azure Data Lake and Stream Analytics development using Visual Studio

Develop U-SQL scripts by using Data Lake Tools for Visual Studio

SAMPLE (U-SQL)

Operators (U-SQL)

SQL Sampling Methods

SAMPLE Expression (U-SQL)

Introduction to the U-SQL Tutorial (version 0.7)

U-SQL Language Reference

Introducing U-SQL – A Language that makes Big Data Processing Easy




Analyze Website logs using Azure Data Lake Analytics


Overview of Azure PowerShell


Install Azure CLI 2.0

Accessing diagnostic logs for Azure Data Lake Analytics

Use the Vertex Execution View in Data Lake Tools for Visual Studio


5/17/2018

3 Hot Tech Trends to Disrupt Everything

Artificial Intelligence on the edge.  The latest cutting-edge technology.  In other words, let the AI model reside in Internet of Things device sensors.  Ingest data, ping the model, look for an anomaly, fire off a message to home base to alert.  Models can be created locally, pushed to the edge, where they reside.  Models can be updated over time.  Seems like a good distributed AI model in real time.
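A toy sketch of that loop, with the threshold, readings and alert path all invented for illustration; a real deployment would load a trained model pushed down from home base rather than a hard-coded cutoff:

    // Toy edge-scoring loop (all names and numbers hypothetical).
    // The "model" here is just a threshold; the point is scoring locally
    // and only messaging home base when an anomaly appears.
    using System;

    class EdgeMonitor
    {
        const double AnomalyThreshold = 103.5; // hypothetical cutoff

        static void Main()
        {
            var random = new Random();
            for (int i = 0; i < 10; i++)
            {
                double reading = 95 + random.NextDouble() * 10; // stand-in for a sensor read
                if (reading > AnomalyThreshold)
                {
                    // In practice: publish a message to home base (IoT Hub or similar)
                    Console.WriteLine($"ALERT: anomalous reading {reading:F1}");
                }
            }
        }
    }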

We have devices to monitor people's vitals in real time, send messages back to home base, to alert if need be.  Combine the two, you've got some serious monitoring ability.

Due to security concerns, people have suggested embedding "chips" into children, so they are easily trackable.  There's some push back from advocates; it borders on ethical concerns, do we want to cross the line on people's basic freedoms?

We can embed chips in cows perhaps, monitor them from a distance.  Seems like a stone's throw away, humans could be next.  First people volunteer, then it's offered as a service, similar to flu shots.

Internet of Things had security concerns out of the gate; it opens up vulnerabilities, someone could tap into your home security through your thermostat, and once in, scan your files, embed a Trojan Horse, even Ransomware.  These are not good, yet they are real threats.  Suffice to say, if someone wants to hack you, they can usually find a way, as any device connected to the internet is suspect.

With talk of cyber currency overtaking traditional paper money, we could soon see the disappearance of physical wallets.  Then all transactions will be documented in real time, with an audit trail, using the upcoming technology Blockchain.  Basically a distributed ledger system to handle transactions.  It uses a technique to add a new transaction to the chain by collectively validating the hash key, which is unique and created by hashing the prior entry.  If your transaction is valid, it will be added to the stack and committed, and can never be altered, modified or deleted.  This should allow a valid history of all transactions.  With that said, financial transactions would no longer need to go through traditional methods, where the money is placed on hold until the nightly batch pushes the monies here and there.  It will be instant.  This can and will be applied to currency, voting, stock transfers, healthcare records, just about anything and everything.  This will disrupt all industries.
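Here's a stripped-down sketch of that chaining mechanic, not any real ledger protocol, just the hash-the-prior-entry idea in a few lines of C#:

    // Minimal hash-chain sketch: each entry's hash covers its data plus the
    // previous entry's hash, so altering history breaks every later link.
    using System;
    using System.Security.Cryptography;
    using System.Text;

    class HashChainDemo
    {
        static string Hash(string input)
        {
            using (var sha = SHA256.Create())
            {
                byte[] bytes = sha.ComputeHash(Encoding.UTF8.GetBytes(input));
                return BitConverter.ToString(bytes).Replace("-", "");
            }
        }

        static void Main()
        {
            string previousHash = "GENESIS";
            string[] transactions = { "A pays B 10", "B pays C 4", "C pays A 1" };

            foreach (string tx in transactions)
            {
                string blockHash = Hash(previousHash + "|" + tx);
                Console.WriteLine(tx + "  ->  " + blockHash.Substring(0, 16) + "...");
                previousHash = blockHash; // the next entry commits to this one
            }
        }
    }

The validation-by-consensus part is what the distributed nodes add on top; the chained hash is what makes past entries tamper-evident.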

Taking the IoT example, what if they add microphones and cameras to people.  That would surely open up new avenues for monitoring, instead of current methods of smart phones, smart listening devices and cameras littered throughout society.  It would be a tighter mechanism for sure.

So it would appear, the latest hot trends in technology surround Artificial Intelligence, on the Edge, using Internet of Things, along with Blockchain.  These three technologies are primed to disrupt everything.


5/10/2018

Close the IT Skills Gap by Encouraging STEM to Girls

Any mention of storing data in the Cloud a few years ago and people didn't trust it.  Fast forward, the Cloud is where it's at for all your data needs, along with an arsenal of associated technologies to piece together to form near-infinite possible solutions.  Sometimes it takes a while to adopt change.

So one of the hot topics today is the recent talent shortage.  There just aren't enough qualified people to fill open positions throughout the world.  Although there's more emphasis on STEM programs at younger ages, the stigma of being a nerd still exists in schools.

Another hot topic today is the lack of women in IT.  Traditionally, IT was staffed by men, and some of the cultures revolved around either the "good old boy" mentality or the "bro" mentality.  This was twofold, as it prevented women from entering the field and caused some women to exit the field.

So how do we kill two birds with one stone?  By getting more women into the IT profession: removing the legacy "bro" mentality and teaching girls about STEM early on.  Math, Statistics, Science, Project Management, Management, Innovation, Creativity and every other skill in between.

Another factor at play, that needs to be addressed, is that women in technology need to be assured equal pay for equal work.  There's no reason to covertly discriminate against a specific gender in this day and age, when all people are created equal.

So the solution to the IT shortage across the globe is to involve girls in STEM programs at an early age, remove any stigma associated with "intelligence" and pay people fair value across the board.



5/09/2018

Hologram Keyboards Connected to Virtual Cloud Computers

If I had to use a thumb mouse to perform job functions, I'd have to find a new career.  I am unable to use this input device long term.  Which brings up the question of input devices.

From punch-cards, to keyboards, to mouse, to smart phone swiping, to smart watches, to voice recognition using AI Natural Language Processing.

We went from dumb terminals, to PC, to laptops, to Smart Devices and Pads, to smart watches.

Computers are no longer the "main" devices to connect to the web and interact.

What if we took the dumb terminal approach, had the PC hosted virtually in the Cloud, and connected via any device.  That would reduce the costs for PCs, laptops, etc.  Access your computer from any device, anywhere, any time.

What if we had keyboards that were not physical, as in holograms.  Simply start up your hologram keyboard, connect through your internet connection via wireless network or smart phone, and connect to your virtual hard drive in the cloud, which contains all your programs.

Seems like one plausible next step in the evolution of PC on every desktop.

Likewise, what if the hologram keyboard could connect to IoT devices out in the wild.

Would surely open up new opportunities, and markets.


5/08/2018

Orgs Need a New Role to Manage New Cloud Offerings

Listening to the Microsoft Build live stream this week, it's clear that technology is changing.  AI and Azure are the hot topics.  It seems everything is moving to the Cloud.  And AI is to find its way into all products.  There were many new announcements, which many folks have reported on through industry sites and blogs.

It seems that Microsoft now does everything, they have tentacles in every technology.  And they are driving and partnering with a lot of industry leaders to create new products as well as open technology to the masses.

Blockchain, IoT, Machine Learning Models, Drones, PowerBI, Databases of all varieties, Live Code Sharing, and everything in between.

The presentations are fairly high level, yet profound, in that with a few clicks you can create web apps, push to Azure and be live in no time flat.

Given the number of new products, the evolution of existing products, and the integration between new and existing products, I would venture to say the newness is mind-boggling.  And because nobody can know everything, you may have to identify a sector to master and become an expert with deep knowledge, or go wide and learn the basics across a wide sector.

In addition, I would venture to say we need a new role to be created.  And that is someone within the organization to keep tabs on the available features, how they integrate with legacy and new features, and provide expert advice to internal teams including upper management.  Reason being, technology has exploded, splintered and fragmented, and due to the frequency of new products, features and integration mechanisms, we need an expert person or team of people to keep current with all the latest trends.  By staying abreast of current technology, you can gain leverage by producing newer technology, newer features, and helping migrate off legacy systems, to save costs and reduce complexity.

This person or team may also keep tabs on competing offerings from other Cloud solutions, for integration purposes and such.

We need a resident expert on Cloud offerings as a new position within organizations.  I believe the Partner Program does a good job of this now; what I'm talking about is embedding within your org.  Reason being, the coders have enough task load to meet agile sprint deadlines, keep internal and external customers happy, and meet their internal goals as well as ongoing career goals.  Having to carry the heavy burden of knowing everything may be the camel that broke straws back, or something to that effect (LoL).



Suffice to say, technology is the hot job market of today, and tomorrow!

5/07/2018

Artificial Intelligence Winter is over Spring has Sprung

Listened to Microsoft Build this a.m.

Future is Azure + Office365.

Looking to host Azure as the World's Computer.

Includes Intelligent Edge, Server-Less, Event-Driven, "ubiquitous" computing.

Azure Stack is just a year old, they said.

Runs on Linux and Windows.

Using AzureML, to cross languages.

Open Source Azure IoT Edge.

Need a data page, to identify where data is derived from.

50 Data Centers across the globe.

Azure IoT Edge plus AzureML Models allow alerts to be sent based on embedded sensors, identifies issues in real time.  Demo on Camera device in Drones for Intelligent Edge.

Commercial Drone License required to fly in auditorium.

Stream info from Pipes to AzureML, finds anomaly, sends in real time to laptop, to Model developed in Cloud.  Scales in real world, saving companies time and expense.  Then update AzureML Model and redeploy fast.

Not just insights, create Frameworks, to send to developers, to allow developers to "commoditize", allow all developers to have technology in hand and bake into custom applications.

Azure Cognitive Services now has 35 tools, which can be customized; bring your own labeled data, deploy models where you need them, in your applications: vision, speech, language, to name a few examples.

In order to democratize these AI devices, announced speech SDK and reference kits.  Embed and deploy in any device.  Consumer and industrial side.

Conversational AI, bots were talked about 2 years ago.  Need ability to brand customer facing agents.  Converse over multiple AI agents.


Who Owns the Data - the Chief Data Officer

Who owns the data? 

The data gets captured from front-end systems perhaps, or from web log files, or downloaded off the web, Hadoop clusters, perhaps CSV files or JSON, streaming analytics data sent from IoT mini-burst packets, OData feeds, archives & backups, or good old legacy data.

So it would appear IT owns the data.  Because it resides in files, databases, mainframes that sit internally on a shelf in the data center on-site, or at a centralized location elsewhere.

Or perhaps it resides in the Cloud.  If so, the vendor stores the data and is responsible for backups and concurrency across the globe, so does the Vendor own it?  Well, they only capture or store the data.  So does the organization that uses the Cloud actually own the data?  Or the Vendor?

Yet the data ends up in ETL jobs, converted into Data Warehouses, Data Models, Reports, Visualizations, Machine Learning models, etc.  So the developer that cleanses the data, pushes it to new systems, models it, reports on it, aggregates it: do they own the data?

How about the Business Units; they know the business model, or at least their piece of the puzzle.  Does the Business own the data?  What about data residing on file shares across the network, does IT own that, or the business?

What about insights derived from the data, who owns that?

I'd say it needs to roll up to the Chief Data Officer, a fairly new role that intersects the IT CIO and the Business, and everything else in between, and reports to the CFO or CEO.  Or the Data Competency Center, which performs similar if not identical roles.

The CDO is responsible for the entire data stack.  From data creation to data ingestion to data storage to data mashing to reporting to data science.  He or she can matrix other departments for skills, domain knowledge and assistance as needed, including the hiring of consultants.  The CDO works with IT and accounting to purchase software, align for cost savings, and document data across the entire org as well as how and when data flows through the entire ecosystem.




Who owns the data?  I venture to say the Chief Data Officer owns the data.

Blockchain Must Overcome Similar Challenges as Hadoop

For Blockchain to be considered a global enterprise level database (ledger), it must scale at the transaction level, in real time, ensure security based on tokens (incremental keys) that guarantee authenticity, and it must be transparent.

Hadoop tried to create real time transactions to mimic traditional databases, yet Map Reduce limited its ability.  It wasn't until Map Reduce was pushed up a level, to become just another tool in the toolbox, that we began to see improvements in query speed.  I'm not sure they were able to insert new records into Hive databases fast enough to match standard OLTP databases, although I have not been keeping up to date on this.

So for BlockChain to scale enterprise wide, it will need to overcome the challenges that Hadoop faced.  Hadoop was typically contained within an org or in the Cloud, whereas Blockchain is scattered across the globe, so the distances are potentially greater.  And I imagine once the record is placed on top of the stack, the other nodes must be notified to establish the agreed upon contract, to know it's legit.

Also, the bandwidth must be able to handle thousands of transactions per second, to mimic OLTP databases, which handle insertions via locks and such.

So BlockChain must handle increased volumes, across great distances, negotiate valid contracts and update across the chain, potentially in real time.  And since these contracts could be used for stock trades, currency exchanges, and voting polls, it will need to be 100% accurate, secure and transparent.

Tough order to fill.  Let's watch as time progresses, see how things pan out.


5/05/2018

Running the Month End

I was commuting to work about an hour each way.  The company was laying people off, a RIF almost every week, with guards at the door.  I found a job posting closer to home, paying $12k less, to write Crystal Reports.  I took the job for the shorter commute.

I was tasked with writing Crystal Reports.  The director said I was also tasked with rewriting the month end process in Crystal Reports.  It was already running in Visual Basic, so I reviewed the code, stepped through one line at a time.  I went to the boss: I think I found an error in the code.  That's impossible.  That code balances the entire company, no way it could be wrong.  Okay, went back to the desk.

A few days later: show me that code that isn't working.  Sure enough, there was a bug.  The inner and outer loops weren't jibing correctly.  I re-wrote the VB code in SQL using joins and it ran a lot faster.  So that began the process of writing the month end.

I ran the month end for many years, streamlining the code each time.  It consisted of financial data: Written Premium, Earned Premium, Unearned Premium and Inforce Premium.  The numbers had to tie at the monthly level, year to date level and inception to date level, every month.  If any bugs in the front end system caused back end data issues, I had to track it down, correct it and re-run the month end.

Tracking down the errors took some effort, as it wasn't merely off by $10.  More like off by +200, -190, usually more complicated.  Running the month end took tremendous effort as it ran through the night; I'd babysit the job and do checks along the way.  It also had commission reports, so people got paid based on the numbers.  And we had only so many days to close the books.  I worked directly with the owners as they double checked all the numbers each month.

I also had lots of work to do when it wasn't month end.  I wrote an ACH application to send batches to the bank.  Created and maintained other databases, fixed bugs in the front end ASP application, which connected to back end Visual Basic DLLs that ran in Transaction Server.

I may have been the most tenured developer on the team, as many people came and went.  Each month, I ran the month end.  I knew developers at other companies and it seemed they were doing some cooler technologies.  I probably ran the month end a few too many times, as I was burned out.  I found other jobs a few times and tried to resign, yet it was a difficult decision to leave as I had invested so much time and effort in the process and the company made sure I was taken care of.  I ended up leaving the job abruptly; in hindsight, I could have done a better job of transitioning.

Instead, I found a job teaching tennis and doing websites for people.  That down time gave me the opportunity to relax the mind, and in doing so, I got married, and went back to work doing Crystal Reports.  I don't have anything to show for all the hard work, except the knowledge gained has helped downstream with other insurance clients.

Those were some good times, running the month end, solving data issues and closing the books for the month.  Glad I had the opportunity to help out for three or four years.

And so it goes~!



5/03/2018

On the Forefront of the Personal Computer Revolution

What were you doing in 1983?  I know what I was doing.  I was on the IBM PC.

That sure was a long time ago.  And fortunate to have the top of the line computer of its day to tinker with.

What type of things did we do?  We programmed in PC-DOS, not MS-DOS.  We formatted floppy disks, the big ones, double sided, double density.  We didn't have hard drives.  Everything got booted up into RAM, then you could begin.

Color Monitor, Epson Dot Matrix Printer, 1200 Baud Modem.  Top of the line.

They had local BBSs, or Bulletin Board Systems, we could dial into, peruse around, look for good stuff to download, find new phone numbers and such.  They even had the ability to page the SysOp, which I did from time to time, to ask questions.  I was 13 years old, typing over the computer to an adult, who had a full time job, and ran the BBS out of the garage.

It was fun.  It was new.  It got baked into my operating system.  Working with the computer.

So after graduation, it was just a matter of time before I got into IT full time, with an Anthropology major.  Guess how many people asked, "How does an Anthropologist make a living in Computer Programming?" 

Well, growing up on a PC from early age and having a parent work for IBM for 34 years helped.  We were tinkering on the PC long before Windows / OS2, Internet, Mobile Devices.  We were at the forefront of a personal computer revolution.  Not too shabby!



Accumulators of Data are New Gatekeepers of Reality

What is reality?

The Sun revolves around the Earth, as everyone knows; those who disagree will be excommunicated and beheaded.  That was reality a while ago.  Now we all know the Earth revolves around the Sun.

We all have a basic idea of what reality is and is not.  And for those who voice opinions that don't mesh with current dogma, we no longer excommunicate, we place straitjackets on them.

How does society obtain the correct version of reality?  Many ways, actually.

Family upbringing.  News and media outlets.  Movies.  Arts.  Sciences.  Schools.  Playground banter.

We pick up clues and eventually assimilate into practical, sound-minded, upstanding people of the community.

Only problem.  What if the information we obtain is not 100% accurate?  Well, it's based on facts.  Facts determined by whom?  Scientists.  Who funds the scientific programs?  What programs are allowed through the filter and which are not?  Who maintains the gatekeeper role to decide which facts are allowed and which are not?

You see, I have a lot of free time.  Sometimes I look out the window.  Sometimes it's raining and other times it's not.  What's interesting is this.  I noticed that it rained on all the major holidays.  Why?  Because everyone was stuck inside their houses.  I watched this recur for 10 years.  Without fail, it rained every single holiday, including election day.  I know this to be true.

For some reason, I downloaded some weather data from the web to do some analysis, and sure enough, when viewing the holidays for the past 10 years, there was no indication of rain or precipitation.  How could that be?  I saw it rain, my clothes got wet, it happened.

Yet according to the data, it never rained.  So which is accurate, that which is documented or that which is experienced?  Well, history tells us that majority rules.  Reality is shaped and played out based on community agreement.  In this case, I was over-ridden.  My view of reality was incorrect.  Or was it.


Perhaps I simply downloaded an outdated or incorrect data set.  Maybe.  Maybe not.

So instead of deriving our view of the world from the textbooks we read in classrooms, our new reality will be based on what's in the data.  This would dictate that those who keep and store the data are the new gatekeepers of our view of reality.  And possibly, we could alter history with a few delete statements here and there, a few update statements, and perhaps a few insert statements.  Stranger things have been known to happen.

So I put it to you.  How important is the accumulation of data going forward?  Time will tell.  And if it's raining out, and it's a major holiday, I say, it never rained.

And there you have it~!


4/26/2018

Introducing a Simple Framework for Working with Data

This week I blogged about 5 new features for data.  It starts off simple, each building upon the previous idea, to form the building blocks of Strong Artificial General Intelligence, a grandiose concept indeed:

Tag Data at Time of Inception - integrate a framework such that data gets tagged upon inception using XML tree like structure to capture meta-data for external use

Open Data Set Framework - standards applied to generic data sets for public or private consumption

Open Reporting Tools - a generic report reader seamlessly ingests Open Data Sets - allowing any user to work with data to find insights

Global Data Catalog - Cloud Based storage of Metadata for consumption by Open Data Set ingestion

Automate Machine Learning Artificial Intelligence Ingestion to dynamically scan Global Data Catalog for purposes of Unsupervised Machine Learning Ingestion, to automatically build and refresh Data Models in real time to answer specific questions in any domain

Programmers have built Frameworks for a variety of languages.  Frameworks serve the ecosystem by organizing concepts and techniques into re-usable patterns.  Not sure why the world of Data has steered clear for so long; I'm proposing a new foundation, a series of non-threatening concepts that, when combined, will produce results greater than each individual line item idea.

Remember to tell 'em who you heard this from first, before it's gobbled up and re-distributed as someone else's idea.  Jon Bloom.

As always, thanks for reading~!


http://www.bloomconsultingbi.com/2018/04/apply-structure-to-data-sets-for.html

http://www.bloomconsultingbi.com/2018/04/self-describing-data-tagged-at-time-of.html

http://www.bloomconsultingbi.com/2018/04/to-reach-artificial-general.html




How to Survive the Rise of Automation, Intelligence and Robotics

The great chasm that divides society will be knowledge and how that translates to marketable skills.

With the rise of automation, many manual tasks will be performed by Robots and / or Algorithms.  Reason being, human capital is not cheap; automation is.

Once a computer model is trained in a specific domain, at expert level, the average person would be no match for its speed, accuracy and documented audit trail.

In order to survive the next economy, one must have knowledge and the ability to translate that into a necessary skill that's in demand.

A Data Scientist could train a machine learning model by feeding it information about court cases going back 500 years.  The Model would learn the logistics, the exceptions, the probability of outcomes over time, and be a source of information going forward, so long as it's updated over time and verified for accuracy.

That translates to reduced demand for those in the legal profession, in areas like research.  Imagine having tons of valid info at your fingertips, in real time, scanning millions of court cases on the fly.

Now, ripple that scenario out to other professions and you see very fast the impact automation will have on society.

Throw in Robots, Self Driving Vehicles, Transportation and Logistics, Food Service, Education and many more industries will be severely impacted.

With fewer individuals able to find gainful employment, less money flowing through the economy, perhaps a slowdown in GDP, and the stress and burden on society increasing as costs and consumer debt rise, the picture becomes a bit more bleak.

There's mention of Basic Income, yet if you begin to review what a global welfare system would look like, you see very quickly there are many holes.  As in: who will finance a great chunk of society, would crime and the black market increase, what would people do during idle time, will population increase or decrease, what chance will offspring have to become educated and find employment?

However.  Those that have quantifiable, legitimate skills that are in demand will find work.  Perhaps in technology, or a service that requires on-site tasks, or something creative that requires humans specifically.  They will have pick of the litter, luxuries not available at lower rungs, as their skills will be in demand.

Looking at things from this perspective, you would imagine any youngster frantically learning everything they can get their hands on, as their future could depend on such knowledge and skills, in order to stay afloat, down the road, when automation and robotics make their way into mainstream society.

And there you have it~!


4/24/2018

To Reach Artificial General Intelligence We Must First Tag All the Data at Time of Creation

What are the basic steps a report writer performs to do their job?

  1. Obtain requirements by mapping out required Fields, Aggregates, Filters
  2. Write SQL Statement(s) using Tables, Views, Joins, Where Clauses, Group By, Having clauses
  3. Validate Data
  4. Push to Production

What if we applied Self Service to this process?

  1. Users specify requirements by mapping out required Fields, Aggregates, Filters
  2. Table or View joins are already created in the background; users select fields, aggregates, filters, etc.
  3. Data is validated prior to model deployment, so in reality the data should be accurate
  4. Model uses Production data; users can save off the Self Service report and schedule it to run on a frequency


What if we applied a Semi-Automated Self-Service process to deliver reports?

  1. All data elements, tables, views, fields, and existing reports with report title, report use / function, existing fields, and parameters would all get stored into a Metadata repository, similar to a Data Dictionary or Data Catalog, ahead of time.
  2. Users specify what problem they are trying to solve
  3. The system would pull specific fields from the pool of available fields that correspond to answering the asked question
  4. The report would self-generate for user consumption

What if we applied Weak Artificial Intelligence to deliver reports?

  1. Users specify what problem they are trying to solve
  2. AI would process the request, pull associated data to support the answer to the question
  3. Users receive an instant response with a high-probability correct answer

What if we applied Strong Artificial Intelligence to deliver reports?

  1. The AI system would generate its own questions
  2. The AI system would know where to find the answers
  3. The AI system would solve its own problems, unassisted by human intervention

How do we get to Strong AI?

My guess: AI Systems require data which is labeled or tagged, to perform Unsupervised Machine Learning, to build and run Models, and to derive a fair degree of accuracy.  Most of the world's data is not tagged.  It also doesn't mash well, out of the box, with other data sets.  For example, if you have a data set of financial transactions of specific customers, how do you join that data set to a data set of home values over time?  There aren't any pre-defined keys that you are aware of.

So if we tag the data at time of creation, sort of like a self-referencing, self-documenting XML file associated with a data set or SQL table, you basically create a WSDL-like description of the high-level data structure, along with an audit trail to track changes over time, along with any revisions or changes or updates or deletes to the record set, perhaps the IP address of where the data was born, time stamps, etc.

Any ingestion process could read this new self-defining WSDL-type file, determine what the data set consists of, field names, field types, etc., such that it could automatically deduce the contents of the data without having to ingest everything.  By doing so, the AI ingestion process could read a global encyclopedia of archived data sets, continually added to over time, and pull in any required data set for consumption, to add to the model and refresh, in order to derive an answer to a question with a high degree of accuracy based on probability.
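A small sketch of that idea, with a made-up descriptor format: the ingestion process parses the companion XML and learns the shape of the data set without touching the data itself:

    // Hypothetical companion-file format and a reader that deduces the data
    // set's shape without ingesting the data itself.
    using System;
    using System.Xml.Linq;

    class DescriptorReader
    {
        static void Main()
        {
            // Inline sample of the companion file (format is illustrative only).
            var descriptor = XDocument.Parse(@"
                <DataSet name='CustomerTransactions' created='2018-04-24T09:15:00' originIp='10.0.0.12'>
                  <Field name='CustomerId' type='int' key='true' />
                  <Field name='TxDate' type='DateTime' />
                  <Field name='Amount' type='decimal' />
                  <Audit by='loader.exe' action='created' at='2018-04-24T09:15:00' />
                </DataSet>");

            foreach (var field in descriptor.Root.Elements("Field"))
            {
                Console.WriteLine("{0} : {1}{2}",
                    field.Attribute("name").Value,
                    field.Attribute("type").Value,
                    (string)field.Attribute("key") == "true" ? " (join key)" : "");
            }
        }
    }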

What I'm saying is, by tagging the data at creation time with an externally consumable file, the AI ingestion system is empowered to pull in specific data sets it finds useful to support a model that answers questions.  This open data framework is flexible enough to support automation, and would supply the building blocks of Artificial General Intelligence at rudimentary levels, with room to grow into full blown true Artificial General Intelligence (AGI).

Similar Post: 

Self Describing Data Tagged at Time of Creation




4/22/2018

Self Describing Data Tagged at Time of Creation

It used to be: here's a 4 million dollar project, implement a software system, go live.  Uh, what about the reporting?  Oh, we should probably deliver some reports, 3 months after go live.

Reason being, data never got respect.  The Rodney Dangerfield of IT.  Nobody took consideration of the data, the downstream reporting aspect.  No wonder the data was so difficult to work with: not joining to other data sets, 20 tables to get a query to work, left outer, right outer, inner, cross, why are these queries taking so long to execute, why don't these numbers match the other reports, or these numbers are great, except they're a month stale, sure would have been great at quarter end.

So how do we fix things?  Go to the root of the matter.  Tag the data at inception, at time of creation.  How do we tag it?  By self-referencing the key aspects pertinent downstream, perhaps an XML tree-like structure, self-referencing and self-describing.  Future developers or automatic ingestion programs could add to the tree downstream: original value, original field type, original date, then who modified it, when, what the new value is, perhaps why, etc.

Self-describing data, down to the database -> schema -> table -> field -> row -> exact field.

Make it generic: use a standard format template framework, using XML for each field, table, schema, database.
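For instance, the system creating the data could emit a companion descriptor alongside it at birth.  A rough sketch, with the element names and file name invented, not a real standard:

    // Sketch of emitting a self-describing companion file at data-creation time.
    // Element and attribute names are illustrative only.
    using System;
    using System.Xml.Linq;

    class DescriptorWriter
    {
        static void Main()
        {
            var descriptor =
                new XElement("DataSet",
                    new XAttribute("name", "CustomerTransactions"),
                    new XAttribute("created", DateTime.UtcNow.ToString("o")),
                    new XElement("Field",
                        new XAttribute("name", "CustomerId"),
                        new XAttribute("type", "int")),
                    new XElement("Field",
                        new XAttribute("name", "Amount"),
                        new XAttribute("type", "decimal")),
                    // Downstream processes append audit entries as they touch the data.
                    new XElement("Audit",
                        new XAttribute("by", Environment.UserName),
                        new XAttribute("action", "created")));

            descriptor.Save("CustomerTransactions.meta.xml");
        }
    }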

And make it consumable by a generic report reader.

But we'd have to modify our storage databases, to handle this new self describing XML format.  Okay, modify them.  That's considered progress.

So what are the benefits of tagging data at time of creation?


  • Accountability
  • Audit Trails
  • Consumable data ingestion without tagging time for Machine Learning (unassisted by human intervention)
  • Consistent Patterns
  • Transparent Data Sets
  • Public Open Data Sets
  • Store data for life
  • Certify Data Sets


What if we stored the type of data, and the key elements available for joining to other data?  We could create programs to read the WSDL-like self-referencing files, have the program perform the joins unassisted, and throw that data into a model, without having to tag the data, for unsupervised machine learning processing.

How about exposing that data via a Web Service, so people can pull that data in OData format, or JSON format, or XML format, you name it.

And how about receiving data directly from Internet of Things (IoT) devices, processing events and micro-pings of data across the internet.

And how about reading from BlockChain distributed ledgers across the globe, exposed via Web Service.

Or how about sending data to a Web Service, and storing the returned data contents back in your system.  Here's a row of data, with specific characteristics, what is the result?  Let's store that off, and build models off that.


So that opens up a market for providing pre-built Web Service data, for a subscription or per-data-item fee.

The potential is unlimited.

Self-describing data, tagged at time of creation.  Even Rodney Dangerfield got some respect eventually.

See similar post: http://www.bloomconsultingbi.com/2018/04/apply-structure-to-data-sets-for.html




4/20/2018

Apply Structure to Data Sets for Accountability to Ripple Across Globe

When I began coding in Visual Basic 4, 5 and 6, there were many advantages.  Such as rapid application development, and the code was approachable, in that everyone from entry level people to accountants could use it, while highly complex code could still be written and maintained over time.  The downside: the code was not object oriented, and so not considered a true language by the core programmers.  It was soon shelved for the most part, although code still exists in production in many shops, as well as VBA embedded in Office products.

When I programmed Java J2EE, the in thing was frameworks like Struts or Hibernate; the list has grown tremendously.  The nice thing about frameworks is the consistency, so you can dive into someone else's code and get up to speed fairly quick.  It also segregates things into nice buckets, like front end UI or middle tier layer or back end database layer, so people can become experts in specific niches.  Each iteration of a framework seems to correct some things and possibly cause new issues or obstacles, and there's always the backwards compatibility issue and having to completely re-write applications.

Let's take a look at the world of data.  Data gets created and stored in dozens of applications including Microsoft SQL Server, Microsoft Access, Oracle, Sybase, MySQL, and the list goes on and on.  We started with VSAM flat files on the mainframe, but have grown into Cloud based solutions, Graph databases, NoSQL key / value databases, and Hadoop storage using HDFS or storage in the cloud across clusters of servers.  Data is data.  There're different types for strings and numbers and blobs, there's XML and JSON to send data to other systems, places, people, there's Excel to manipulate, store and transport data, and we can access reports via web, mobile, email, network drives, PDF, and we can schedule that data to run at specific times.

When XML first arrived on the scene, there was mention of replacing Object Oriented coding patterns, perhaps premature speculation.  Except you could flow data using XML and apply a transformation using XSLT.  And for the web page developers, we have CSS to apply standard tags to elements so they display in a certain way, a way to organize and formalize the structure and output of web pages.  Although no design or architecture is bulletproof to handle every possible scenario, these are good efforts to apply standards to technologies.

Back to the world of data, why don't we have similar standards?  Why don't we have the ability to tag our output data sets with specific information?  For example, where was the data created, in what system, date time stamp, format of data upon inception.  And then have tags along the way, so when the data gets sent to others, we have an audit log of who touched the data when, how, and perhaps why; maybe throw in some IP addresses, some user ids, etc.  Perhaps embed these attributes within the data, as self-referencing data viewable to others along the trail.  So we have some accountability.  This would benefit the proliferation of data, especially in the world of open data sets.  Not just a text file describing the data, but row level and field level attributes as well.

In addition to audit trails, perhaps apply attributes such as field descriptions, such as text:varchar(27), float(9,2), etc., so the data could be plopped into another reporting tool without modification or effort.  Perhaps have a generic report format, with a generic report reader that can read any data set, with some features to modify the data once in the tool, like group by this field, sort by this, filter by this.  Yet it would be generic and not specific to Windows or Linux or Web or Mobile; it would be completely transportable and completely dumbed down, such that it would simply appear in the report reading / writing tool and the user could be off and running, with the knowledge of where the data was derived from and who touched it along the way.

Lastly, we could offer some type of certification of the data.  This set of data was validated on such and such date, by whom, what the data consists of, and is available on the web, for free, or perhaps purchase, bundled and ready to go.  Think about the downstream ramifications: we tagged this set of data with key attributes, which is validated and certified, and you can obtain this data set to load into your machine learning systems or artificial intelligence algorithms, perhaps for unsupervised AI.  And if others exposed their data as well, you could have unsupervised AI talking with other AI all day, every day, creating and updating models in real time with a certain degree of accuracy and accountability via audit logs.  Throw in Internet of Things (IoT), and you have devices talking with servers and proliferating that data to others in real time, for a pinball effect of data bouncing all over the place in a network of machine learning AI model based digital world.


See similar post: http://www.bloomconsultingbi.com/2018/04/self-describing-data-tagged-at-time-of.html




4/19/2018

Handwriting to become Dead Language similar to Latin

I don't speak Latin.  Nor do I understand hieroglyphics.  Because they aren't taught in everyday schools, except Latin in medical schools.

How many times per day do you write something with a pen or pencil?  For me, probably never.  Except at the bank, filling out deposit or withdrawal slips.  Or perhaps signing my name on the credit card machine using a fictitious pen applied to the screen.

Seriously, I don't write anything with pen and paper anymore.  There's no need.  I jot down notes using Google Keep; it's accessible on smart phone and laptop.  I keep to-do lists, notes from appointments, ideas that pop into my head, lists of pros and cons on a variety of subjects and decisions.

I feel that standard writing will soon go away.  You're already seeing signs of this with cursive handwriting.  Typing is the current trend.  And that's probably not the definitive input device either, as voice commands are already quite common, on the smart phone as well as in-car voice commands.

Yep, basic handwriting may go the way of Latin: perhaps taught in schools as a dead language, handwriting could become a thing of the past.

If you have comments or questions on this blog post, send your hand written comment, we'll be sure to reply as soon as possible.

And so it goes~


4/18/2018

Massive Data Breach of Significant Proportions

There's a lot of commotion in the news, justly so, regarding data privacy.  Many corporations are selling data.  Some of this data could be considered private, as in medical, as in perhaps HIPAA data.  That's sort of a no-no.

You have to look back at the Equifax data breach to realize the implications.  Before doing reports, I approved loans, as in reading credit reports from Equifax, TransUnion and TRW.

Credit reports tell us everything about a person: their shopping habits, payment records, previous addresses, any bankruptcy / foreclosure / unpaid student debt, etc., in a nice, easy to read format.  I could ask the customer their basic demographics over the phone, enter it into the mainframe terminal at my desk, pull the credit report, and have a decision, with an average talk time of under 2 minutes.  That's average, meaning many calls were under 2 minutes.

What does the app + credit report tell us?  It tells us their potential repayment probability.  Throw in some good old fashioned experience plus hunches into the credit decision to approve or decline, and you'll receive a letter in the mail in 7 to 10 business days explaining the reason why.

Of course, there were over-rides to the system, but not many, maybe 10%.  So we agreed with the computer model 90% of the time.  And the model bounced the demographic info such as age, address, and length of time on the job up against the credit report and spit out an answer almost immediately.  It would compare against a model of past customers and their payment history.  We do the same thing now, yet it's called data science.

My boss said, from reading the customer's credit report, you should know if they drink whiskey, beer or milk.  In other words, we should have a good picture in our minds of who this person is based on the credit report and demographics.

Back to the main story: with a data breach of credit reports, that data can be reconstructed to form a financial picture of every person in that data set, not to mention all the credit card info, loan info, social security numbers, addresses and previous addresses, aliases, you name it.

It's a massive breach of significant proportions.  And that data is swimming through the hands of the dark web, being sold and resold.  How about that, food for thought.


And so it goes~!