Hi there,

I’ve got plenty of people and companies asking me for more details about how I created these successful Data Warehouses in the Big Data space.

So, in the following video, I try to explain the whole architecture that I’ve been using for the last 4 years (in 3 different companies).

If you have further questions, please don’t hesitate to contact me by email at albertgd@gmail.com, or let’s get in touch on LinkedIn.



Video about my Data Vault framework for Big Data and other relational databases

 



 

If you ask any experienced DV architect what the most complicated part of implementing a DV solution is, there is a big chance the answer will be: convincing the stakeholders to use the “new” Data Vault concept.

Basically, the DW world has a very poor ratio of successful projects, so nobody is willing to “try” new things. The DV methodology has real benefits, but one of the cons is that there is neither a big community nor plenty of experts available on the market (so it is normal that people have some fears about this methodology).

So I would like to share how I managed to convince one of my recent clients to implement this kind of solution.

 

1) Problem statement

 

  • Long initial delivery phase per project
  • Fully manual process
  • Difficult to maintain and fix
  • Reports are not reusable across different businesses
  • Not possible to adjust quickly to business requirements
  • Unable to do automated data lineage
  • For new projects we need to reverse engineer from scratch to find the EBOs/meaningful data

 

2) Current situation

 

  • Reverse engineering existing data marts can take a very long time due to unclear data lineage and lack of documentation
  • Any new report requires an analysis of the databases and tables to find the required EBOs
  • Manual creation and maintenance of ETL is a mandatory step
  • A long initial delivery phase can lead to refactoring and delay the project considerably
  • Customers don’t have easy access to create their own jobs and extract the data they need

 

3) To be (hypothesis)

 

  • Short initial delivery phase
  • Automation can be applied (ELT and DDL)
  • Easy to maintain and troubleshoot
  • Easy to reuse (80% reusable vs 20% non-reusable)
  • Faster deliveries (quicker iteration between the delivery team and the business) will help shape the end result the business is expecting
  • Generic reports can be reused for all businesses
  • Fully automated data lineage
  • Every time the EDW project covers one more EBO, all departments can use it in a very simple way, so they don’t need to reverse engineer it

 

4) Assertions

 

  • Applying automation will reduce human error and increase speed
  • Having a unique, well-structured repository for all our data will help other teams easily create their marts
  • Reusing the same generic report across multiple services will save resources
  • Creating or modifying virtual marts will be faster than physical marts

 

5) Criteria of Success

 

  • Ability to deliver value on a biweekly basis
  • Enterprise Business Objects can be reused
  • Ability to run the same report for different businesses
  • A new data consumer should not need to reverse engineer the EBO for each business, finding a useful, easy-to-consume EBO already built in the EDW project
  • Reduce the average time to production for one business to 2/3 of the current time

 

6) Definition of done

 

  • The framework will automate the ingestion of data, requiring only one view to feed our EDW
  • A minimum number of tables and links will provide enough data to start doing reporting
  • Generic reports will show the same generic insights across every business already modelled in the EDW

 

7) Comparison

 

Task | Current | EDW
Initial delivery phase | Long initial phase | Short initial phase
Creation of ETL/tables | Manual process + lifecycle | Automated
Maintenance | Difficult – modify physical tables and check impact | Very easy – modify virtual tables (views), no impact on current production
Reusability | 20% reusable / 80% non-reusable | 80% reusable / 20% non-reusable
Data lineage | Not available | Fully integrated
Initial analysis | Requires checking all databases/tables to find the EBO every time | Find the EBO only once, then shape it for easy consumption
Agile | Not possible to deliver quickly | Delivers value every 2 weeks

 

Glossary

EBO = Enterprise Business Object
DDL = Data Definition Language
ELT = Extract Load Transform
ETL = Extract Transform Load
EDW = Enterprise Data Warehouse

 



The Data Maturity Model is a way to classify how well (or badly) your company is doing in all data-related fields. The topic is very big, covering security, data governance, etc. Today I want to cover only the levels around the Enterprise Data Warehouse, so you can see how your analytics is doing:

  

DMM1 (circa 1990-2000)

  • No standards
  • Reactive approach
  • No master data plan
  • No strategy
  • Redoing isolated Excel files for each report

 

DMM2 (circa 2000-2005)

  • Standards established
  • People try to copy all operational data together into a centralized DB and call it a Data Warehouse (no modelling)
  • Waterfall approach
  • No data governance

 

DMM3 (circa 2005-2010)

  • Data specialists are hired and start doing some nominal data governance
  • Some kind of MDM (third-party tool) is used inefficiently
  • Agile lite or selective Agile
  • First form of DW or marts, creating Star Schemas and some Slowly Changing Dimensions. Rigid and not very scalable, with a big maintenance cost and very slow processes

 

DMM4 (circa 2010-now)

  • Hadoop in development
  • Some analytic stores
  • MDM managing all of the company’s metadata, using a third-party tool well adapted to the specific business or creating a custom one
  • Data layers put in place
  • DW with data quality and reduced maintenance cost. Dimensional Modelling or other techniques created by experts
  • Chief Data Officer
  • All Agile
  • ETL automation

 

DMM5 (now – future)

  • Hadoop in production
  • Scalable and easy-to-maintain Enterprise Data Warehouse that provides marts to different departments or subjects
  • Columnar stores
  • NoSQL
  • Data virtualization: the ability to recreate your virtual data marts, avoiding the whole lifecycle of each mart (schema-on-read)
  • MDM managing all of the company’s governance data by subject, providing insights in a user-friendly way
  • Chief Information Architect
  • Data as an asset in financial statements

 

So, how do you get to DMM4 or DMM5? For that I would recommend reading these two articles:



Introduction

Let me explain my experience leading the whole Data Architecture (split by layers; see my previous post here) and also creating from scratch the whole Enterprise Data Warehouse solution for my current company.

 

About the company

Avoiding names, I would like to introduce a few facts about this company. It is one of the largest ecommerce companies in the world, with more than 13,000 employees. It has more than 70 active companies or services attached to its core company, providing a huge variety of businesses such as insurance, telecommunications, banking, credit cards, travel and sports (it bought a soccer club, a baseball club, etc.).

 

About the data

As a result, we have more than 500 databases distributed around the world, with thousands and thousands of tables covering, as mentioned before, very different business domains. The total size is still unknown, but so far we have around 2 PB in our hands (and growing).

 

How to implement Agile DW with DV2.0

 

First: Data ingestion (Bring all your data into your Data Lake and keep history records)

So, in order to get all this data into our History Stage (a copy of the original plus history), we developed a Python script and a metadata management layer to automate all the ingestion. That is another topic that I will cover in another post; here I am going to focus on the EDW solution to get as much as possible from all this data.
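
Just to give a rough flavour of the idea before that future post, here is a minimal, hypothetical sketch of metadata-driven ingestion: a registry of source tables drives one generic loop instead of one hand-written job per table. Every name below (registry fields, helpers, columns) is an illustrative assumption, not the real script:

```python
# Hypothetical sketch of metadata-driven ingestion into the History Stage;
# registry entries, helpers and column names are illustrative only.
from datetime import datetime, timezone

# One registry entry per source table we want to land in the Data Lake.
SOURCE_REGISTRY = [
    {"source": "ecommerce_db", "table": "customer", "schedule": "daily"},
    {"source": "travel_db",    "table": "booking",  "schedule": "hourly"},
]

def ingest(entry, read_source, write_history):
    """Copy one source table into the History Stage, stamping each row so
    we keep the original data plus its full load history."""
    rows = read_source(entry["source"], entry["table"])
    load_ts = datetime.now(timezone.utc)
    for row in rows:
        row["_load_ts"] = load_ts                 # when the row was ingested
        row["_record_source"] = entry["source"]   # lineage back to the source
    write_history(f"hist_{entry['table']}", rows)

# One generic loop replaces a hand-written job per table:
# for entry in SOURCE_REGISTRY:
#     ingest(entry, read_source=my_reader, write_history=my_writer)
```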

  

 

Second: prepare automation for your Data Vault 2.0

This is a very important step, as it provides a huge benefit for a cheap price. Speaking from my own experience, I created a whole framework in Python that auto-populates Hub, Satellite, Link and SameAsLink objects in about 2 weeks of development.

How does it work? Easy! You just need to create a view that follows the Data Vault naming convention. Let’s say we create a Satellite view to feed our DV: the framework will automatically populate not only the Satellite (table and load) but also its Hubs, reusing the same view (the possibilities for automation are limitless). With a bit of code in our framework, our view will be able to:

  • Create the Hub table (if it does not exist).
  • Load Hub information (only new data).
  • Create the Satellite table (if it does not exist).
  • Drop and create views such as vw_[name]_current and vw_[name]_history.
  • Load Satellite information (new, updated and deleted data).
  • Capture metadata and governance so we know which entities, DV objects and businesses we are loading.

 

This is a sample of my naming convention for the framework (a small code sketch of the idea follows):

[DV Object]_[Entity]_[Business]_[common/[others]] ==> sat_customer_bank_common
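
To make this concrete, here is a minimal sketch of how a framework like this can parse the view name and drive the Hub DDL and load. All helper names and the exact column choices are my illustrative assumptions, not the real framework’s API:

```python
# Minimal sketch of the convention-driven automation. Helper names and the
# exact DDL are illustrative assumptions, not the real framework's API.

def parse_view_name(view_name: str) -> dict:
    """Split '[dv_object]_[entity]_[business]_[scope]',
    e.g. 'sat_customer_bank_common'."""
    dv_object, entity, business, scope = view_name.split("_", 3)
    return {"dv_object": dv_object, "entity": entity,
            "business": business, "scope": scope}

def process_satellite_view(view_name: str, run_sql) -> None:
    """One Satellite view drives both the Hub and the Satellite loads."""
    meta = parse_view_name(view_name)
    entity = meta["entity"]
    # 1) Create the Hub for this entity if it does not exist yet.
    run_sql(f"""CREATE TABLE IF NOT EXISTS hub_{entity} (
                    hub_{entity}_hkey CHAR(32) PRIMARY KEY,
                    {entity}_bk       VARCHAR(255),
                    load_ts           TIMESTAMP,
                    record_source     VARCHAR(255))""")
    # 2) Load only the business keys that are new to the Hub.
    run_sql(f"""INSERT INTO hub_{entity}
                SELECT s.hub_{entity}_hkey, s.{entity}_bk,
                       CURRENT_TIMESTAMP, s.record_source
                FROM {view_name} s
                LEFT JOIN hub_{entity} h
                       ON h.hub_{entity}_hkey = s.hub_{entity}_hkey
                WHERE h.hub_{entity}_hkey IS NULL""")
    # 3) The Satellite DDL/load and the vw_[name]_current /
    #    vw_[name]_history views follow the same pattern.
```

Pointing such a framework at sat_customer_bank_common would then create and load hub_customer and the Satellite from that single view.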

 

Third: split your Satellites using the flexibility of DV2.0

So, the beauty of Data Vault is its scalability and flexibility, which is perfect for complex scenarios such as developing an EDW with an initially unknown scope for a very big company. Since the companies are so different from each other and we don’t know how many entities we will need in the end, we started by creating some basic entities, such as customer, order, order detail, product, item and so forth.

We are also putting all the common attributes in a common Satellite for each entity because, even though a phone company, a bank and an ecommerce company have almost nothing in common, they all still have customers, and these customers share common attributes such as name, address, DOB, gender, etc. This is simply finding the semantic meaning of our data and using it.

For the attributes that are specific to each business, we can just create a special Satellite for bank customers (salary, rate, etc.), other Satellites for the travel business (frequent flyer number, preferred destinations, etc.), and so on, as in the sketch below.
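
For illustration only (the attribute names are invented), the split could be captured like this, with every Satellite hanging off the same customer Hub:

```python
# Illustrative split of customer attributes into one common Satellite plus
# one Satellite per business; attribute names are invented examples.
CUSTOMER_SATELLITES = {
    "sat_customer_common":        ["name", "address", "dob", "gender"],
    "sat_customer_bank_common":   ["salary", "credit_rate"],
    "sat_customer_travel_common": ["frequent_flyer_no", "preferred_destination"],
}

def satellite_ddl(sat_name: str, attrs: list) -> str:
    """Every Satellite shares the customer Hub's hash key plus load metadata."""
    cols = ",\n  ".join(f"{a} VARCHAR(255)" for a in attrs)
    return (f"CREATE TABLE IF NOT EXISTS {sat_name} (\n"
            f"  hub_customer_hkey CHAR(32),\n"
            f"  load_ts TIMESTAMP,\n"
            f"  {cols})")
```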

That means we don’t need to design the whole EDW in one go, nor do we need to include every business for each entity in order to start getting value.

With the common attributes, we can reuse the same customer segmentation or KPI reports across each business, all businesses together, or any group of them, with only one report and a filter for the company/ies that we want to analyse.

What about item, order, order detail, etc.? Easy: find the semantic meaning of the data. For example, in ecommerce an item could be an iPad 2; in a bank it could be a 30-year loan package; in travel it could be a travel package, or even a hotel room (e.g. “Holiday Inn – Atlanta – Double standard”), and so forth. Later you’ll be able to reuse the same report to check which item was the most successful during a given period (agnostic of which kind of product it is), as in the sketch below.
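
Continuing the invented names from the sketches above, this is what one reusable report could look like: the query never changes, only the business filter does.

```python
# One generic "top items" report shared by every business; only the filter
# changes. Table and column names continue the hypothetical model above.
def top_items_query(businesses, date_from, date_to):
    business_filter = ", ".join(f"'{b}'" for b in businesses)
    return f"""
        SELECT i.item_name, COUNT(*) AS orders
        FROM vw_sat_order_common_current o
        JOIN vw_sat_item_common_current i
             ON i.hub_item_hkey = o.hub_item_hkey
        WHERE o.business IN ({business_filter})
          AND o.order_date BETWEEN '{date_from}' AND '{date_to}'
        GROUP BY i.item_name
        ORDER BY orders DESC
        LIMIT 10"""

# The same report for the bank alone, or for several businesses at once:
# top_items_query(["bank"], "2017-01-01", "2017-03-31")
# top_items_query(["bank", "travel", "ecommerce"], "2017-01-01", "2017-03-31")
```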

NOTE: Another good practice is, of course, to split your Satellites by rate of change, so the columns that change the most can live in one Satellite and be ETLed very often, while the low-change/low-priority ones can be loaded daily or weekly.

 

Fourth: Supernova layer or “Business Delivery Layer” (c)

Thanks to our MetaDataManagement, we can automate the creation of views that remove the DV complexity. Business users can then look at the EDW in a 3NF shape they are more comfortable with, or we can create (and even automate) virtual Star Schemas based only on views, which are much easier to create, recreate and fine-tune.
* BTW, I prefer to call it “Business Delivery Layer”.
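
As a final hedged sketch (again reusing the invented names from above, not the real MetaDataManagement API), generating a Business Delivery Layer view is essentially flattening a Hub and its Satellites behind one consumer-facing view:

```python
# Sketch: generate a "Business Delivery Layer" view that hides the DV
# plumbing; helper and object names are illustrative assumptions.
def supernova_view_ddl(entity, satellites):
    """Flatten a Hub and its Satellites into one business-friendly view.

    satellites: dict mapping satellite name -> list of attribute names.
    """
    cols = [f"h.{entity}_bk AS {entity}_id"]
    joins = []
    for i, (sat, attrs) in enumerate(satellites.items()):
        alias = f"s{i}"
        cols += [f"{alias}.{a}" for a in attrs]
        joins.append(f"LEFT JOIN vw_{sat}_current {alias} "
                     f"ON {alias}.hub_{entity}_hkey = h.hub_{entity}_hkey")
    return (f"CREATE OR REPLACE VIEW bdl_{entity} AS\n"
            f"SELECT {', '.join(cols)}\n"
            f"FROM hub_{entity} h\n" + "\n".join(joins))

# Example: a customer view mixing common and bank-specific attributes.
# print(supernova_view_ddl("customer",
#     {"sat_customer_common": ["name", "address"],
#      "sat_customer_bank_common": ["salary"]}))
```

Because the output is just a view, it can be dropped and recreated at will without touching the underlying DV tables, which is what makes the virtual marts so cheap to iterate on.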


 

Some references:

 

Pure (and real) Agile Data Warehouse

So the first and second steps are kind of a one-off: once developed, they run in an automated way.

The third step can be done one entity/business at a time, because we are not aiming to deliver all of them at the same time. So we can apply agile sprints, targeting a number of entities and businesses per sprint (let’s say 2 weeks).

Once these new entities/businesses are in production, you can easily go to the fourth step, create the supernova for them (aiming for automation when possible), and data consumers can start enjoying this part of the EDW, or at least checking whether they need something else or are missing things. If something does need to change in the supernova layer, since these are all views, it will be very fast to fix; if it is a major change in the DV layer, then in the next sprint we can fix it by creating new Satellites (always adding/appending, never rebuilding or throwing anything away).

 

