- Variety of Data Sources (structured Relational data)
- Extract Transform and Load (ETL)
- Data Warehouse Repository (Star / Snowflake Schema)
- Analytics / Presentation layer (OLAP Cube)
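The four layers above can be sketched end to end in a toy script. This is a minimal sketch, not a real warehouse: the table names, columns, and sample rows are all illustrative assumptions, and `sqlite3` stands in for an enterprise relational engine. It shows a tiny star schema (one fact table joined to dimension tables), an ETL pass that transforms data before loading, and an aggregate query playing the role of the analytics layer.

```python
import sqlite3

# Toy star schema: one fact table referencing two dimension tables.
# Table and column names are illustrative, not taken from any real warehouse.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, iso_date TEXT);
CREATE TABLE fact_sales  (product_id INTEGER, date_id INTEGER, amount_cents INTEGER);
""")

# Extract from a source (here, an in-memory list), Transform (normalize
# product names to lowercase), then Load into the warehouse tables: ETL.
source_rows = [("Widget", "2024-01-05", 999), ("WIDGET", "2024-01-06", 1998)]
cur.execute("INSERT INTO dim_product VALUES (1, 'widget')")
for i, (name, day, cents) in enumerate(source_rows, start=1):
    cur.execute("INSERT INTO dim_date VALUES (?, ?)", (i, day))
    cur.execute("INSERT INTO fact_sales VALUES (1, ?, ?)", (i, cents))

# Analytics / presentation layer: aggregate the fact table by a dimension.
cur.execute("""
SELECT p.name, SUM(f.amount_cents)
FROM fact_sales f JOIN dim_product p ON p.product_id = f.product_id
GROUP BY p.name
""")
print(cur.fetchone())  # ('widget', 2997)
```

The key property of the star schema is visible even at this scale: facts are narrow numeric rows, and all descriptive attributes live in the dimensions they join to.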
In the world of Hadoop, you have a similar architecture:
- Variety of Data Sources (structured and unstructured data)
- HDFS (Hadoop Distributed File System) layer containing a variety of file types
- Extract Load and Transform (ELT)
- Analytics / Presentation layer
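The order of the middle two layers is the defining difference: ELT lands raw data first and applies the schema when the data is read, rather than rejecting anything malformed at load time. A minimal sketch of that schema-on-read idea, using a plain directory to stand in for HDFS (the file name, fields, and records are illustrative assumptions):

```python
import json
import pathlib
import tempfile

# ELT sketch: land raw data first (the Load), transform at query time.
# A temp directory stands in for HDFS; fields and records are made up.
landing = pathlib.Path(tempfile.mkdtemp()) / "raw"
landing.mkdir()

# Load step: dump source records verbatim, with no upfront schema check.
raw_lines = [
    '{"user": "a", "clicks": 3}',
    '{"user": "b", "clicks": 5}',
    "corrupt-line",  # unstructured junk is landed too, not rejected
]
(landing / "events.json").write_text("\n".join(raw_lines))

# Transform step happens on read: parse what you can, skip the rest.
total = 0
for line in (landing / "events.json").read_text().splitlines():
    try:
        total += json.loads(line)["clicks"]
    except (json.JSONDecodeError, KeyError):
        continue  # bad records are tolerated at query time
print(total)  # 8
```

Contrast this with the ETL flow above it: in a warehouse, the corrupt line would never have made it past the load; here it sits in storage until a reader decides what to do with it.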
The Data Warehouse side:
- Must contain relational data.
- Has size limitations; at some point query response degrades, which requires beefed-up server(s).
- Costly to host, maintain, and enhance, and good developers are hard to find.
- When business rules change, or after a merger or acquisition, it is often difficult to merge data with other repositories.
- Backed by a solid methodology, proven over the past 20 years, with repeatable patterns.
The Hadoop side:
- Data can be relational, but it is not required.
- Handles greater volumes of data.
- Reduced cost, based on commodity hardware, licensing, and server requirements.
- Can integrate with existing Data Warehouses.
- Good developers are difficult to find, and the number of Hadoop components can be overwhelming; learning them all and staying current is daunting.
- SQL on Hadoop opens the door to existing skill sets, bypassing complex MapReduce coding, and there are a number of 3rd-party offerings to leverage.
- Hadoop is ever evolving.
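To see what "bypassing complex MapReduce coding" buys you, here is a hand-rolled word count in map/shuffle/reduce style, in plain Python rather than the Java MapReduce API. The function names and sample lines are illustrative; the point is that a SQL-on-Hadoop engine turns a one-line `GROUP BY` query into this kind of plumbing for you.

```python
from collections import defaultdict
from itertools import chain

def mapper(line):
    """Map step: emit a (word, 1) pair for every word in the line."""
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    """Shuffle step: group all values by key, as the framework would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(key, values):
    """Reduce step: collapse each key's values into a single count."""
    return key, sum(values)

# Illustrative input; a real job would read splits of an HDFS file.
lines = ["Hadoop extends the warehouse", "the warehouse extends"]
mapped = chain.from_iterable(mapper(line) for line in lines)
counts = dict(reducer(k, v) for k, v in shuffle(mapped).items())
print(counts)  # {'hadoop': 1, 'extends': 2, 'the': 2, 'warehouse': 2}
```

In Hive or a similar engine, the equivalent is roughly `SELECT word, COUNT(*) ... GROUP BY word`, which is exactly why SQL on Hadoop lowers the barrier for existing SQL skill sets.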
Based on the extreme hype over the past year or two, some people, myself included, have questioned the hype factor. Hadoop did not replace the Data Warehouse; it enhanced it, creating Hybrid Data Warehousing: the best of both worlds. Which means that finding the right skill set has gotten even more difficult.
I see more and more people interested in Hadoop, even ones who had no idea what it was a few years ago. Many IT people realize they must learn about Hadoop just to stay current. In contrast, not many of these organizations have production-level clusters; they may or may not have 10-node clusters as sandboxes to interrogate data sources, run sentiment analysis, and process large batch jobs.
Future of Hadoop
Hadoop 1.0 is past. Hadoop 2.0 is here, including YARN, Tez, and Docker, along with a slew of other offerings. The ecosystem has fragmented into many pieces and many vendors, with no one-size-fits-all solution. But there is still a lot of opportunity to be had. With machine learning, building models for artificial intelligence, large volumes of data, and the processing of unstructured and disparate data sources, I feel that Hadoop will be part of my career. And if your job consists of collecting, processing, parsing, or analyzing data, chances are it will become part of your skill set as well.