Data Warehousing and Business Intelligence

18 September 2016

Rating Dimension

Filed under: Business Knowledge,Data Architecture,Data Warehousing — Vincent Rainardi @ 2:32 am

In an investment banking data warehouse, we have an instrument dimension (aka security dimension). This instrument dimension is used in many fact tables such as trade, risk, valuation, P&L, and performance.

In this instrument dimension we have rating attributes (columns) from the 3 agencies (S&P, Moody's, Fitch), an in-house rating, and many combinations of them. We may have the ratings at both the issuer/obligor level and the instrument/issue level. We could also have a hierarchy of ratings, e.g. AA+, AA and AA- are grouped into AA. There are separate ratings for cash instruments.

Usual Design

The usual design is to put all the rating attributes in the instrument dimension. This is because the granularity of a rating is at instrument level. In other words, a rating is an attribute of an instrument.

Issues with the Usual Design

The issues with doing that are:

  1. Cash and FX may not be created as instruments, but they can have ratings.
  2. Some fact tables require the rating outside the context of an instrument.

An example for point 1 is a USD bank balance (settled cash) in a GBP portfolio, which is given a rating of AAA. In the instrument dimension there is no instrument called "USD bank balance". Another example is USD cash proceeds, which is unsettled cash.

An example for point 2 is a fact table used for analysing the average rating of a portfolio across different dates. This is called Portfolio Credit Quality analysis. On any given day, a credit portfolio has 3 attributes: highest rating, lowest rating and average rating. The average rating is calculated by converting each rating letter to a number (usually 1 to 27, or using Moody's rating factor, link), multiplying that number by the weight of each position, and summing them up. There can be several different criteria, for example: including or excluding debt derivatives, funds, cash or equity, and doing lookthrough or not.

For example, portfolio X on 16th Sep could have these different attributes:

  1. Average rating: AA+
  2. Highest rating: AAA
  3. Lowest rating: BB-
  4. Average rating excluding cash: AA
  5. Average rating excluding cash and derivatives: BBB+
  6. Average rating excluding cash, derivatives and equity: BBB-
  7. Average rating including funds: AA-
  8. Average rating with lookthrough: A+

So the grain of this fact table is portfolio and date. We cannot use the instrument dimension in this fact table, and therefore it is necessary to have rating in its own dimension.
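
To make the calculation concrete, a query populating such a portfolio-and-date grain fact table could look roughly like the sketch below. The table and column names (FactPosition, DimRating, Weight, RatingValue) are assumptions for illustration, not an actual schema; RatingValue is the letter-to-number conversion discussed in the Rating Value section further down.

-- Sketch: average rating per portfolio per date, weighted by position weight
select p.PortfolioKey, p.SnapshotDateKey,
    sum(p.Weight * r.RatingValue) / sum(p.Weight) as AvgRatingValue
from FactPosition p
join DimRating r on r.RatingKey = p.RatingKey
where r.RatingValue > 0  -- exclude Unknown, NR and WR, which map to 0
group by p.PortfolioKey, p.SnapshotDateKey

The resulting number would then be mapped back to a rating letter (e.g. 2 becomes AA+) before being stored as the average rating, and variants such as "excluding cash" or "with lookthrough" are just additional filters on the same query.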

Another example of a fact table which requires rating outside the instrument dimension is sectoral credit quality. A credit portfolio consists of many bonds and credit derivatives, and each of these bonds (or CDS) belongs to a certain industry sector: a bond issued by BP is in the Energy sector, a bond issued by Santander is in the Financials sector. The average rating for each sector is calculated by converting each rating to a number, multiplying it by the weight, and summing this up over all positions in that sector.

If we use GICS (link) plus a Government sector as an example, on any given day a credit portfolio can have a sectoral credit quality like this (usually excluding cash and equity, but including credit derivatives and debt funds):

  1. Consumer Discretionary: AA+
  2. Consumer Staples: AA-
  3. Energy: A+
  4. Financials: AAA
  5. Health Care: BB-
  6. Industrials: BBB-
  7. Information Technology
  8. Materials: BBB+
  9. Telecommunication Services: AA
  10. Utilities: BB+
  11. Real Estate: B+
  12. Government: AAA

Above is the average rating for each sector, for a given day, for a particular portfolio.
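
The sectoral version is the same weighted-average calculation with the sector added to the grouping. Again only a sketch with assumed names (here DimInstrument is assumed to carry the GICS sector):

select p.PortfolioKey, p.SnapshotDateKey, i.GicsSector,
    sum(p.Weight * r.RatingValue) / sum(p.Weight) as AvgRatingValue
from FactPosition p
join DimRating r on r.RatingKey = p.RatingKey
join DimInstrument i on i.InstrumentKey = p.InstrumentKey
group by p.PortfolioKey, p.SnapshotDateKey, i.GicsSector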

Design Choices for Rating Dimension

To satisfy the above requirement (fact tables which analyse rating at a level higher than instrument, e.g. at sector level or portfolio level), we can design the rating dimension in several ways:

  1. Rating schemes and hierarchies are created as attributes
  2. Rating schemes are created as rows, with hierarchies as attributes
  3. Rating schemes and hierarchies are created as rows

Before we dive into the details I'd like to emphasize that we should not try to find out which approach is the best one, because different projects require different approaches. So instead of asking "What is the best approach?" we should ask "What is the most suitable approach for my project?"

And secondly I'd like to remind us that if the rating data is always used at instrument level, then it should reside in the instrument dimension, not in a rating dimension of its own.

Approach 1. Rating schemes and hierarchies are created as attributes

(Figure: Approach 1 example rating dimension)

Of course we should always have RatingKey 0 with description = Unknown. This row is distinctly different from NR (No Rating): NR means S&P gave the bond a rating of NR, whereas RatingKey 0 means we don't have any information about the rating of this bond.

The distinct advantage of this approach is that we have a mapping between different rating schemes. Moody’s Baa3 for example corresponds to S&P’s BBB-.

The Grade column is either Investment Grade (AAA down to BBB-) or High Yield (HY, from BB+ down to D).
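
For illustration, Approach 1 could be declared roughly like this. It is only a sketch; the column names are assumptions based on the description above.

create table DimRating
( RatingKey int not null,     -- 0 = Unknown
  SandPRating varchar(10),    -- e.g. AA-
  MoodysRating varchar(10),   -- e.g. Aa3
  FitchRating varchar(10),    -- e.g. AA-
  Tier varchar(10),           -- e.g. AA
  Grade varchar(20)           -- Investment Grade or High Yield
)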

Approach 2. Rating schemes are created as rows, with hierarchies as attributes

(Figure: Approach 2 example rating dimension)

The distinct advantage of Approach 2 over Approach 1 is that RatingKey 33 means Aa3; we don't have to specify that we mean Moody's, as we do in Approach 1. In Approach 1, RatingKey 4 can mean Aa3 or AA-, depending on whether we mean Moody's or S&P.

The distinct disadvantage of Approach 2 compared to Approach 1 is that we lose the mapping between S&P and Moody's. In Approach 1 we know that Aa3 in Moody's corresponds to AA- in S&P, but in Approach 2 we don't. In the majority of circumstances this is not an issue, because an instrument with a Moody's rating of Aa3 may not have an S&P rating of AA- anyway; its S&P rating could instead be AA or BBB+. So the mapping between S&P and Moody's is a textbook, academic assumption, not the reality in the real world. In the above example, nobody cares whether the corresponding S&P rating for Aa3 is AA-. What the fund managers and analysts want to know is the rating Moody's actually assigned to that bond.

Approach 3. Rating schemes and hierarchies are created as rows

(Figure: Approach 3 example rating dimension)

The distinct advantage of Approach 3 over Approaches 1 and 2 is that RatingKey 31 means the AA tier; it is clear and not ambiguous. In Approaches 1 and 2, RatingKeys 2, 3 and 4 all mean the AA tier.

The other advantage of Approach 3 is the ability to create short term ratings and cash ratings, because they are simply different schemes.

Why do we need to keep the Tier column in Approach 3? Because we still need to convert AA+, AA and AA- to AA.
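
A sketch of Approach 3 for comparison (again, assumed names): the scheme becomes a row attribute, each rating letter is its own row, tier-level entries get rows of their own, and the Tier column is kept for rolling AA+, AA and AA- up to AA.

create table DimRating
( RatingKey int not null,      -- 0 = Unknown
  RatingScheme varchar(20),    -- e.g. S&P, Moody's, Fitch, In-house, Cash, Short Term, Tier
  Rating varchar(10),          -- e.g. AA-
  Tier varchar(10),            -- e.g. AA
  Grade varchar(20)            -- Investment Grade or High Yield
)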

Rating Value (aka Rating Number)

In some projects we need to convert the rating letters to numbers. For this we can create an attribute in the above rating dimension (in any of the 3 approaches) called Rating Value. It is usually AAA = 1, AA+ = 2, AA = 3, AA- = 4, and so on. NR (No Rating) and WR (Withdrawn Rating) get 0 rather than 23.

(Figure: rating value mapping example)

Note that unlike the S&P and Moody's schemes, the Fitch scheme splits the D rating into 3: DDD, DD and D. So DDD = 22, DD = 23, D = 24.
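
As a small sketch (assuming an Approach 3 style dimension with a Rating column and a RatingValue column), the mapping could be populated with a simple case expression; only a few letters are shown:

update DimRating
set RatingValue = case Rating
    when 'AAA' then 1
    when 'AA+' then 2
    when 'AA'  then 3
    when 'AA-' then 4
    -- ... and so on down the scale ...
    when 'NR'  then 0
    when 'WR'  then 0
    else RatingValue end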

Miscellaneous

A common mistake is to assume there are CC+ and CC- ratings. There aren't. Only CCC is split into 3 (CCC+, CCC, CCC-); CC and C are not.

Moody's rating of Ca corresponds to 2 ratings in the S&P scheme: CC and C. This is why Approach 1 is tricky: the ratings do not have a one-to-one relationship between one scheme and another. Another example, as mentioned above, is the D rating in S&P, which corresponds to 3 ratings in Fitch: DDD, DD and D.

20 June 2016

Data Lake vs Data Warehouse

Filed under: Data Architecture,Data Warehousing — Vincent Rainardi @ 6:22 pm

A Data Lake is a storage and analytic system which stores structured and unstructured data from all source systems in the company in its raw form. The data is queried, combined and analysed to get useful patterns and insights. It is built on cost-effective Hadoop storage infrastructure, and can be scaled dynamically.

A Data Warehouse is a system that retrieves and consolidates data periodically from source systems into a dimensional or normalised data store. It usually keeps years of history and is queried for business intelligence or other analytical activities. It is typically updated in batches, not every time a transaction happens in the source system.

Data Lake Advantages

Because the data from all source systems is there in the data lake, the main advantage of a data lake is that it enables the analyst to get analytics and insight from multiple systems. Yes, the data is still in its raw form, so it requires some processing, but it is a lot quicker (1-2 weeks) than building a data warehouse (6-12 months). And that is the second advantage: time to build.

The third advantage of data lake is its ability to store unstructured data such as documents and social media, which we can query and combine with structured data from databases.

The fourth, and probably the most powerful, business case for a data lake is cost efficiency. In the investment banking world, for example, we can store market data (e.g. prices, yields, spreads, ratings) not only for the securities that we hold but for all securities in the market, more cheaply than if we stored it in Oracle or SQL Server.

The fifth advantage of a data lake is flexibility. Unlike Oracle or SQL Server, a data lake scales up dynamically. If this morning it is 2 TB, in the afternoon it can be 3 TB. As new data arrives and we need new capacity, we can add storage easily. As we require more computing power, we can get more computing power there and then, instantly. There is no need to wait a few days to add a new node to a SQL Server Always On availability group, add storage to the SAN, or extend the computing power of an Oracle Grid. This means that we do not need to spend a lot of money upfront paying for 3 TB of storage and 16 processors. We can just start with 300 GB and 4 processors, and expand when required.

Data Lake Disadvantages

The first issue is that data lake technology is immature. The language which can query across databases and unstructured files has only very limited features (each vendor has a different language; for Microsoft Azure it is U-SQL). It is probably only 10% of what PL/SQL or T-SQL can do. We can solve this by putting QlikView or Tableau on top: we use U-SQL only to query the individual tables, then join the data in QlikView/Tableau and do further processing there.

The second issue is cost (but I don't find that this argument holds water). The argument is that it actually costs a lot of money to store data from all source systems in the company, let alone external market data, which carries a lot of licence cost.

Let's take Microsoft Azure Data Lake as a pricing example. Being in the cloud, the price is only US$0.04 per GB and US$0.07 per 1 million transactions (ref: link). Let's say that "data from ALL systems in the company" is 500 GB, and every day we store this data in the data lake. So 500 GB per day x 365 days = 182,500 GB per year x $0.04/GB = $7,300 per year. Let's say we have 10 million transactions per day, which is 3,650 million transactions per year x $0.07/million transactions = $256 per year. So it only costs us about $7,500 per year. This is a very reasonable price to pay to have "all the data in the company in one place". Even 5 years later, when the volume grows to 5 times the size, it is only about $35,000 per year. Still a very reasonable price to pay.

The third issue is performance. Because a data lake stores the data in its raw format, a query which joins the different data sets could run like a dog. Luckily a Data Lake runs on HDFS, a distributed file system, which is very fast. So yes, it is slower than a data warehouse, but it is not too bad. We are not talking about 30 minutes to run a query, but something like 20-30 seconds (compared to a DW query which takes, say, 1-3 seconds).

A Data Lake being an "exploration tool", I can tolerate a little bit of slowness. Once we prove that (for example) the risk data we query is making money, it can pay for creating a Risk data mart specifically for that risk analysis purpose.

The fourth issue is skill. A Data Lake requires superb business knowledge, because the analyst needs to join tables from different source systems. What is the difference between Real Modified Duration, Effective Real Duration and Modified Duration? But this issue is the same whether we are building a Data Warehouse or a Data Lake: both require good business knowledge.

I don't find U-SQL difficult to understand for people with SQL knowledge, which is a common skill. But how about Machine Learning? That is difficult to master, right? Yes, that is true, but it is worth paying an expert data scientist to discover the insights in the data, because these insights can be used to save cost or boost our revenue. The potential benefit is 10-100 times the salary of the data scientist.

Conclusion

So, considering all the pros and cons above, I am in favour of creating a Data Lake. In addition to having a Data Warehouse, of course.

 

1 April 2016

Data Sourcing

Filed under: Data Architecture,Data Warehousing — Vincent Rainardi @ 8:16 pm
Tags: ,

One of the critical activities in a data warehouse project (and in any data project) is data sourcing. The business users have a few reports which need to be produced from the data warehouse. There is no data for them yet in the data warehouse, so we look at each report and ask ourselves: "Where can I get the data to produce this report?" The plan is to find the data source, and then bring the data into the data warehouse.

There are 6 steps of data sourcing:

  1. Find the source
  2. Reproduce the numbers
  3. Verify the extraction
  4. Check the coverage
  5. Check the timing
  6. Check the cost

Step 1. Finding the source

We find out how the report was created, who created it, and from where. The report could be a PDF which is created from PowerPoint, which was created from an Excel spreadsheet, which was created from a file, which was exported from a system. It is this system that we need to check. Find the tables, the columns and the rows where the data is coming from.

Step 2. Reproduce the numbers

Once we have located the tables, we then try to reproduce the numbers we see on the report. For example, the report could be like this:

(Figure: Asset Allocation section of the report)

Try to reproduce the first number (24.3%) by querying the tables which you think the data is sourced from. In doing this you will need to look at the data processing happening in the Excel spreadsheet. Do they exclude anything? Is it a straightforward sum of the holdings in government bonds? Do they include supranationals? Government agencies? Regional? Municipal? Emerging Market Debt?

If you can match the 24.3% then great. If not, investigate whether the data was manually overridden before it was summed up. For example, in Excel there might be a particular bond which was manually classified as Government even though its asset class is Corporate, because it is 80% owned by the government, or because it is backed by the government.

We need to be particularly careful with regard to the totals. For example, on the "Allocation by Country of Risk", if the total portfolio value is $200m but they exclude FX forwards/swaps, or bond futures, or certain derivatives, then the total portfolio value could decrease to $198m, and all the percentages would be incorrect (slightly higher).
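
As a sketch of what "reproduce the numbers" might look like in SQL (Holding, AssetClass, MarketValue and the exclusion list are all assumptions; the real logic must match whatever the spreadsheet does):

-- Percentage of portfolio value held in government bonds on a given date
select sum(case when h.AssetClass = 'Government Bond' then h.MarketValue else 0 end)
    / sum(h.MarketValue) * 100 as GovernmentBondPct
from Holding h
where h.PortfolioCode = 'X'
  and h.ValuationDate = '2016-09-16'
  and h.AssetClass not in ('FX Forward', 'Bond Future')  -- example exclusions only

If the query returns a slightly different number, the difference usually lies in exactly these inclusion/exclusion and override rules.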

Understanding the logic behind the report is critical in reproducing the numbers. In-depth industry knowledge will be helpful to understand the logic. For example:

  1. The credit rating for each debt security is categorised as IG (Investment Grade) if it is BBB- or above, and as HY (High Yield) if it is BB+ or below, or NR.
  2. The credit rating used for point 1 is the average of S&P, Moody's and Fitch, except that a) for own funds we use look-through, and b) for outside funds we use IMMFA.
  3. The "Allocation by Country of Risk" table excludes cash and FX forwards/swaps.
  4. Each country is categorised as a Developed, Emerging or Frontier market.
  5. When determining the country of risk in point 4, for derivatives use the underlying.

In the above case, if we are not familiar with the investment banking industry, it would take a long time for us to understand the logic. So yes, when doing data sourcing, it is best if it is done by a Business Analyst with good knowledge of and experience in that industry sector.

Step 3. Verify the extraction

Once we can reproduce the numbers, we need to verify if we can get the data out. A few gotchas are:

  1. The numbers are calculated on-the-fly by the system, and not stored anywhere in the database. If this is the case, find out from the vendor if they have an export utility which produces a file after the numbers have been calculated.
  2. Are we allowed to connect to that database and query it? Do not assume that we can, because I've encountered a few cases where we were not allowed to. It could be because of the system workload/performance (it is a critical transaction system and they don't want any big query ruining it for the front end users), or it could be because they provide daily extract files which all downstream systems must use (instead of querying the database directly). From the system admin's point of view it makes sense not to allow any external queries to run on the database, because we don't know what kind of damage those external queries could cause; they could block the front end queries and cause a lock.
  3. Losing precision, i.e. the data must be exported from the database, but during the export process the precision decreases from 8 decimal places to 2 decimal places.
  4. There is a security restriction because it is against the "Chinese wall" compliance rules (in investment banking, the public-facing departments must not get data from the M&A department).
  5. The system is being migrated, or rewritten, so it is still in a state of flux and we need to wait a few months.
  6. The system is not of solid "production quality" but only a throwaway, which means that within a few months they could be dropping those tables.

Step 4. Check the coverage

This step is often missed by many business analysts. We need to check whether all the products that we need are available in that system. If the report we are trying to reproduce covers 4,000 products from 100 stores, but the source system/tables only cover 3,000 products from 70 stores, then we've got to find out where the other 1,000 products and 30 stores are sourced from. Are they produced from a different system?

It is not just products and stores. We need to check the coverage in terms of customers/clients, portfolios, securities, lines of business, underwriting classes, asset classes and data providers. And the most important coverage check is on dates, e.g. does the source system have data from 2011 to the present? It is possible that the source system only has data from 2014.

Step 5. Check the timing

After checking the coverage, we need to check whether the data is available when we need it. We need to check these 3 things: whether the data arrives too late, the available extraction window, and whether the data is overwritten.

  1. Data is too late: If our DW load starts at 2.15 am, will the data be available before that? If not, could the business user live with a bit of stale data (data from 2 days ago, i.e. if today is Wednesday, the latest data in the DW would be Monday data).
  2. Available extraction window: in the production environment, when can we extract the data from that source system? If an overnight batch runs from 11pm to 6am and we can't run our extract during that time, then the earliest we can run is 7am. If the DW load takes 3 hours, DW users can access the data at 10am. Is that too late for the users or not?
  3. The data is overwritten: the data in the source system can be updated many times during the day, and when we extract it at night we have no trace of those changes. Is that OK? Do we need an intraday, push-driven data load into the DW? Or would a 10-minute, pull-driven extraction frequency be enough?

Step 6. Check the cost

There is no free lunch. We need to check how much it would cost us to use that source data.

  1. If the data is valuable (such as prices, yields and ratings from Bloomberg, Reuters and Markit) we would have to pay the data providers. We need to check the cost. The cost could be, say, $5 per security per call, so it could easily be $20-30k per day. The cost is usually shared with other departments.
  2. Check with the data provider: if you use the data only as an input to your calculation, and you don't publish it or send it on to any external parties (clients, etc.), would it still cost you a lot? Even if you don't have to pay the data provider, your DW project might still have to share the cost with other departments. Say the data provider is Markit and your company pays $300k/year for prices data, currently shared by 5 departments ($60k each). Your project may have to bear a cost of $50k/year ($300k/6).
  3. The cost could be a killer to the whole thing, i.e. even if #1 to #5 above are all ok, if the cost of the data is $50k, it could force you to cancel the whole project.
  4. Sometimes another department has to create the data for you. Let's say a yield calculation, a risk engine, or an OTC pricing engine, where the requirement from BI/DW is specific so they have to develop it. It could take them 3 months x 3 people and they could cross-charge your project $50k (one off). And that could also be a killer for the DW project.
  5. Developing an interface: some systems do not allow external systems to pull data out. They insist on developing an export, and charge the cost to your DW project.
  6. Standard data interface system: some large companies (such as multinational banks) have a standard interface (real time, end of day, etc.), and the central middleware team might charge your DW project a small amount (say $2,000 one off + $50/month) to use that standard data interface system. Say you need FX rate data from the FX system, and there is already a standard message queue for FX rates with publication times of 11am, 1pm and 3pm. So you "subscribe" to this publication MQ and pay the cost (project cross-charge).

16 March 2016

Different Measures for Different Product Types

Filed under: Data Architecture,Data Warehousing,Investment Banking — Vincent Rainardi @ 8:33 am

What I mean by a measure here is a time-variant, numerical property of an entity. It is best to explain this by example. In the investment industry, we have different asset classes: equities, bonds, funds, ETFs, etc. Each asset class has different measures. Equities have opening and closing prices, daily volume, market capitalisation, daily high and low prices, as well as annual and quarterly measures such as turnover, pretax profit and EPS. Bonds have different daily measures: clean and dirty prices, accrued interest, yield and duration. Funds have different daily measures: NAV, alpha, sharpe ratio, and volatility, as well as monthly measures such as 3M return, 1Y return, historic yield, fund size and number of holdings. ETFs have daily bid, mid and offer prices, year high and low, and volume; as well as monthly measures such as performance. The question is: what is an appropriate data model for this situation?

We have three choices:

  1. Put all measures from different product types into a single table.
  2. Separate the measures for each product type into different tables.
  3. Put the common measures into one table, and put the uncommon measures into separate tables.

My preference is approach 2, because we don't need to join across tables for each product type. Yes, we will need to union across different tables to sum up across product types, but a union is much more performant than a join. The main weakness of approach 1 is column sparsity.

On top of this, of course, we will need to separate the daily measures and monthly measures into two different tables. Annual and quarterly measures for equities (such as financial statement numbers) can be combined into one table. We need to remember that measures with different time granularity usually come from different groups. For example, prices are daily but performance is monthly.

Static Properties

In addition to different time-variant properties (usually numerical), each asset class also has different static properties (which can be textual, date or numeric). For example, equities have listed exchanges, industry sectors, countries of domicile, and dividend dates. Bonds have issuers, call and maturity dates, and credit ratings. Funds have benchmark, trustee, legal structure and inception date. Examples of numerical properties are minimum initial investment and annual charges for funds; outstanding shares and denomination for equities; par and coupon for bonds. Some static properties are common across asset classes, such as ISIN, country of risk and currency.

Static properties from different asset classes are best stored in separate tables. So we have equity table, bond table, fund table and ETF table. Common properties such as ISIN, country of risk, etc. are best stored in a common table (usually named security table or instrument table).

Why not store all static properties in a common table? Because the properties are different for each asset class so it is like forcing a square peg into a round hole.
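
A minimal sketch of that layout, with assumed names: a common instrument table for the shared properties, and one table per asset class for the class-specific ones, each keyed back to the instrument.

create table Instrument
( InstrumentId int not null primary key,
  Isin char(12), CountryOfRisk char(2), Currency char(3), AssetClass varchar(20) )

create table Bond
( InstrumentId int not null primary key references Instrument (InstrumentId),
  Issuer varchar(100), MaturityDate date, CallDate date,
  Coupon decimal(9,4), Par decimal(18,2), CreditRating varchar(10) )

create table Equity
( InstrumentId int not null primary key references Instrument (InstrumentId),
  ListedExchange varchar(50), IndustrySector varchar(50),
  CountryOfDomicile char(2), OutstandingShares bigint )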

Historical Data

For time variant properties it is clear that the table already stores historical data in the rows. Different dates are stored as different rows. What we are discussing here is the historical data of the static attributes. Here we have two choices:

  1. Using SCD approach: store the historical values on different rows (called versions), and each row is only valid for certain time period. SCD stands for slowly changing dimension, a Kimball approach in data warehousing.
  2. Using Audit Table approach: store the historical rows in an audit table (also called history table). This is the traditional approach in normalised modelling. The main advantage is that the main table is light weight and performant.

When do we use which? Approach 1 is suitable for situations where the historical versions are accessed a lot, whereas approach 2 is suitable for situations where the historical versions are very rarely accessed.

The main issue with approach 1 is that we need to use "between" on the validity date columns. In data warehousing we have a surrogate key to resolve this issue, but in normalised modelling we don't. Well, we could and we should: regardless of whether we are using approach 1 or 2, in the time-variant tables we should store the ID of the historical row for that date. This makes getting historical data a lot faster.
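
A sketch of the two options for, say, the bond static table (assumed names). With the SCD approach the validity dates sit on the main table and each version gets its own ID; with the audit approach the main table holds only the current row and old versions go to a history table.

-- Option 1: SCD-style, versions kept in the main table
create table BondVersion
( BondVersionId int not null primary key,  -- this ID is stored on the time-variant rows
  InstrumentId int not null,
  CreditRating varchar(10), Issuer varchar(100),
  ValidFrom date not null, ValidTo date not null )

-- Option 2: current-only main table plus an audit (history) table
create table BondAudit
( BondAuditId int not null primary key,
  InstrumentId int not null,
  CreditRating varchar(10), Issuer varchar(100),
  ChangedOn datetime not null )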

29 December 2015

DimMonth, DimQuarter and DimYear

Filed under: Data Architecture,Data Warehousing — Vincent Rainardi @ 6:21 am
Tags: ,

Sometimes the grains of our fact tables are monthly, quarterly, or yearly. In such cases, how do we create DimMonth, DimQuarter and DimYear? Some of the questions in these cases are:

  1. Why do we need to create Month as a dimension? Can't the Month column in the fact table remain as Month, rather than a dimension key?
  2. What does DimMonth look like? What are the attributes? How about DimQuarter?
  3. Should we create DimMonth as a physical table, or as a view on top of DimDate?
  4. What is the surrogate key of DimMonth and DimQuarter? Do we create a new SK, or do we use the SK from DimDate?
  5. Do we need to create a dimension for Year? It seems weird because it would only have 1 column (creating it would go against the "degenerate dimension" concept).

For example, in the source system we have a table which stores the monthly targets for each store:

Or quarterly target like this:

How do we create a fact table for this? Should we create it like this?

Question 1 above: why do we need to create Month as a dimension? Why can't we leave it as Month in the fact table, rather than a dim key, like this?

Question 2 above: if we decide to keep the column as MonthKey, what should DimMonth look like? What are the attributes? Should DimMonth be like this?

What attributes should we put there?

  • Do we need quarter and year?
  • Do we need half year? e.g. 2015 H1 and 2015 H2
  • For month name, should we use the short name or long name? e.g. Oct 2015 or October 2015?
  • Do we need an attribute for “Month Name without Year”, e.g. October?
  • Do we need “Month End Date” column? e.g. 2015-10-30
  • Do we need “Month is a Quarter End” indicator column for March, June, Sep and Dec?
  • Do we need “Number of days in a month” column? e.g. 30 or 31, or 28 or 29.

Or should we use “intelligent key” like this?

Question 3 above: Should we create DimMonth as a view of DimDate like this?

create view DimMonth as
select min(DateKey) as MonthKey, MonthNumber, MonthName, Quarter, [Year]
from DimDate
group by MonthNumber, MonthName, Quarter, [Year]

 

What are the advantages of creating DimMonth and DimQuarter as a view of DimDate? (compared to creating it as a physical table) What are the disadvantages?

 

I think with the above questions and examples we are now clear about what the issue is. Now let’s answer those questions.

 

Q1. Do we need to create Month as a dimension? Can't the Month column in the fact table remain as Month, rather than a dimension key, like this?

 

We need the Month column in the fact table to be a Dim Key to a month dimension because we need to access Year and other attributes such as Quarter.

 

Bringing Year into the Sales Target fact table like below is not a good idea, because it makes it inflexible. For example, if we want to add Quarter column we have to alter the fact table structure.

 

Using a Dimension Key to link the fact table to a Month dimension makes it a flexible structure:

 

There is an exception to this: the Snapshot Month column. In a monthly periodic snapshot fact table, the first column is the Snapshot Month. We do not need to create this column as a dimension key linking it to DimMonth, because we do not need other attributes (like Year or Quarter), so we do not need a DimMonth at all. A monthly periodic snapshot fact table stores the measures as of the last day of every month, or within that month. For example: the number of customers, the number of products, the number of orders, the number of orders for each customer, the highest and lowest prices within that month for every product, the number of new customers for that month, etc.

 

Q2. What does DimMonth look like? What are the attributes?

 

Obviously, the grain of DimMonth is 1 row per month. So we are clear about what the rows are. But what are the columns? Well it depends on what we need.

I usually put MonthNumber, MonthName, Quarter and Year in DimMonth, because they are frequently used.

I don’t find “Month Name without the Year” as a useful attribute. I rarely come across the need for “Half Year” attribute.

“Month is a Quarter End” column is also rarely used. Instead, we usually use “Quarter” column.

“Month End Date” and “Number of days in a month” are also rarely used. Instead, we usually use “IsMonthEnd” indicator column in the DimDate.

For the month name, should we use the short name (Oct 2015) or the long name (October 2015)? I find that the short name is used more than the long name, but the month number (2015-10) is even more frequently used than the short name.

Q3. Should we create DimMonth as a physical table, or as a view on top of DimDate?

This is really the core of this article. A view on top of DimDate is better in my opinion, because we avoid maintaining two physical tables. And it makes the dimensions less cluttered.

If we make DimMonth and DimQuarter physical dimensions, in SSMS Object Explorer, when we open the table section we would see these:

DimDate

DimMonth

DimQuarter

DimYear

But if we create DimMonth and DimQuarter as views, then we will only see DimDate in the Object Explorer’s table section. The DimMonth and DimQuarter will be in the view section.

The main disadvantage of creating DimMonth as a view on DimDate is that it is less flexible: any attribute column that we want to appear in DimMonth must exist in DimDate. But I find that DimMonth usually only needs 2 or 3 attributes, i.e. Month, Quarter and Year, and all of them are available in the DimDate table. So this is not an issue.

Avoiding the maintenance of 2 physical tables is quite important, because if we extend the date dimension (adding more years, i.e. more rows) and forget to extend DimMonth and DimQuarter, we will cause an error.

The other consideration is of course the performance. I do not find the performance of DimMonth and DimQuarter to be an issue. This is because DimDate is not too large, and more importantly because the monthly and quarterly fact tables are small, less than 1 million rows. They are much smaller than daily fact tables which have millions or billions of rows.

 

Q4. What is the surrogate key of DimMonth and DimQuarter? Do we create a new SK, or do we use the SK from DimDate?

If we create DimMonth and DimQuarter as physical tables, then the surrogate key can either be pure surrogate (1, 2, 3, …) or intelligent key (201510, 201511, 201512, etc.)

But if we create them as views on DimDate, then the surrogate key can be either the first day of the month (20151001, 20151101, 20151201, etc.) or the month itself (201510, 201511, 201512, etc.). I prefer the latter to the former because it is more intuitive (an intelligent key) and there is no ambiguity.

The script to create the view for DimMonth with SK = 201510, 201511, 201512, etc. is like this:

create view DimMonth as
select distinct convert(int,left(convert(varchar,SK_Date),6)) as MonthKey,
    [MonthName], Quarter, [Year]
from DimDate
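
A quick usage sketch (FactSalesTarget and its columns are assumed names): the monthly fact table then joins to the view on the 6-digit month key.

select m.[Year], m.Quarter, sum(f.SalesTarget) as SalesTarget
from FactSalesTarget f
join DimMonth m on m.MonthKey = f.MonthKey
group by m.[Year], m.Quarter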

 

Q5. Do we need to create a dimension for year?

 

No we don’t need to create DimYear, because it would only have 1 column.

What should we call the dim key column in the fact table then? Is it YearKey or Year? We should call it YearKey, to be consistent with the other dim key columns.

A dimension which only has 1 column, and is therefore kept in the fact table, is called a Degenerate Dimension. A Degenerate Dimension is usually used to store an identifier from the source table, such as Transaction ID or Order ID. But it is also perfectly valid for dimensions which naturally only have one attribute/column, like the Year dimension. See my article about "A dimension with only one attribute" here: link.

23 December 2015

Data Dictionary

Filed under: Data Architecture,Data Warehousing — Vincent Rainardi @ 8:15 am
Tags: ,

During my 10 years or so doing data warehousing projects, I have seen several initiatives to create a data dictionary. A data dictionary is a list of definitions used in the system. Each definition is usually about 10 to 50 words long, and there are about 50 to 200 definitions. What is being defined is mostly technical terms, such as the meaning of each field in the database. This is the origin of the data dictionary: it grew from the need to explain the fields, for example delivery date, product category, client reference number, tax amount, and so on. I saw this in 1998 all the way to today (2015).

This data dictionary was created by the business analyst on the project, who was trying to explain each field to clear any ambiguity about the terms used in the system. I found that the business people view it differently: they find it not that useful for them. The data dictionary is very useful to a new starter on the IT development project, but not so useful for the business. I'll illustrate with several examples below.

In the database there are terms like CDS, obligor, PD and yield. The BA defines these terms by copying a definition from internet searches, so they end up with something like this:

  • Credit Default Swap (CDS): a financial contract on which the seller will pay the buyer if the bond defaults
  • Obligor: a person or entity who is legally obliged to provide some payment to another
  • Probability of Default (PD): likelihood of a default over a particular time frame
  • Yield: the interest received from a bond

The CDS definition is trying to explain a big concept in one sentence. Of course it fails miserably: what bond? What do you mean by "defaults"? After reading the whole sentence, the readers are none the wiser. The readers will get a much better understanding of CDS if they read the Wikipedia page about it.

The Obligor definition is too generic. It is so high level that it is useless, because it doesn't provide the context of the project/system. In a credit portfolio project, obligor should be defined as the borrower of the loan, whereas in a fixed income portfolio project, obligor should be defined as the issuer of the bond.

The PD definition contains a common data dictionary mistake: the definition contains the word being defined, because the author doesn't have a good understanding of the topic. What is a default? It is not explained. Which PD is used in this field, Through The Cycle or Point In Time? Stressed or Unstressed? Is it calculated using linear regression or discriminant analysis? That is what the business wants to know.

The Yield definition does not explain the data source. What the business users need to know is whether it is Yield to Maturity, Current Yield, Simple Yield, Yield to Call or Yield to Worst. There are different definitions of yield depending on the context: fixed income, equity, property or fund. The author has firmly framed it for fixed income, which is good, but within fixed income there are 5 different definitions, so the business wants to know which one.

Delivery mechanism: Linking and Web Pages

To make the life of the business users easier, sometimes the system (the BI system, reporting system, trade system, finance system, etc.) is equipped with hyperlinks to the data dictionary, directly from the screen. So on the screen the words CDS, obligor, PD and yield are all clickable hyperlinks. If we click one of them, the data dictionary page opens, highlighting the definition. It can also be delivered as "balloon text".

But mostly, data dictionaries are delivered as static web pages. They are part of the system documentation, alongside the architecture diagram, the content of the warehouse, the support pages, and the troubleshooting guide.

It goes without saying that as web pages, it should have links between definitions.

I would say that the delivery mechanism on its own doesn't contribute much to the value. It is the content which adds the value.

Accessibility

A data dictionary should be accessible by the business as well as by IT. Therefore it is better if it is delivered as web pages rather than only as links from the system's screens. This is because web pages can be accessed by everybody in the company, and the access can be easily controlled using AD groups.

Data Source

The most important thing in a data dictionary is an in-context, concise definition of each field. The second most important thing is the data source, i.e. where the field comes from. System A's definition of Duration could be different from system B's; they may come from different sources. In System A it might be sourced from Modified Duration, whereas in System B it is sourced from Effective Duration.

Because of this a data dictionary is per system. We should not attempt building a company-wide data dictionary (like this: link) before we completed the system-specific data dictionaries.

How deep should we go? If the field is sourced from system A, and system A is sourced from system B, which one should we put as the source, system A or system B? We should put both. This is a common issue: the data dictionary says that the source of a field is the data warehouse. Well of course! A lot of data in the company ends up in the data warehouse. But where does the data warehouse get it from? That's what the business wants to know.

Not Just the Fields

A good data dictionary should also define the values in the field. For example, if we have a field called Region which has 3 possible values: EMEA, APAC, Americas, we should explain what APAC means, what EMEA means and what Americas means (does it include the Caribbean countries?)

This doesn’t mean that if we have a field called currency we then have to define USD, EUR, GBP and 100 other currencies.  If the value of a field is self-explanatory, we leave it. But if it is ambiguous, we explain it.

If the field is a classification field, we should explain why the values are classified that way. For example, the values of an Asset Class field could be: equity, fixed income, money market instruments (MMI), CDS, IRS. Many people would argue that CDS and MMI are included in fixed income, so why have separate categories for CDS and MMI? Perhaps because MMI has short durations and the business would like to see its numbers separately. Perhaps because the business views CDS as a hedging mechanism rather than an investment vehicle, so they would like to see its numbers separately.

Summary

So in summary, a data dictionary should:

  1. Contain an in-context, concise definition of every field
  2. Contain where each field is sourced from
  3. Contain the definition of the values in each field
  4. Be system specific
  5. Be delivered as web pages
  6. Be accessible by both the business and IT

A good data dictionary is part of every data warehouse. I would say that a data warehouse project is not finished until we produce a good data dictionary.

18 December 2015

Measure or Attribute

Filed under: Data Architecture,Data Warehousing — Vincent Rainardi @ 10:00 pm

Last week two people asked me about this subject, so I thought it would be beneficial for others if I wrote it up. It is about whether a number should be a measure (a fact) or an attribute (of a dimension). Apologies in advance that I can't reveal the real case, as it is a client's case, but I will present several similar cases to illustrate the issue. As always, it is better to learn by example (cases) than by theory.

First case: worker rate

The first case is a timesheet system in the construction industry. The daily rate of the worker is the number in question. The hourly rate is recorded as an attribute of the worker. The fact table records the hours each worker worked on the project, every day. The data mart is used to analyse the cost of the project (by work area), and to plan for the next 6-12 months. The cost for each worker for each day is simply the rate multiplied by the hours worked.

There are 2 issues here. First, when the rate entered last month is incorrect, how do we correct it? Second, how do we plan for next year using the current rate +5% and the current work hours -5%?

In this case, it is better to treat the rate as a measure, located in the fact table for actual costs as well as in the fact table for planning. This way we can correct the past rates and recalculate last month’s costs. We can also change the future rates (in the planning fact table) and recalculate the future costs. Yes we will occupy more disk space because we are repeating the rate on every row, but that is a very good price to pay for the advantage we get: more flexibility, ability to support the required functionality and higher performance.

The need to put the rate in the worker dimension arises when we need to know the historical rates for each worker. But this need can also be satisfied if we store the rate as a measure. Granted, it would be less performant, as we would have to plough through lots of fact rows to distil the distinct historical rates per worker. So if this is a mandatory requirement, it is a good idea to make the rate a type 2 attribute as well, in addition to storing it as a measure.
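
A sketch of the first option, with assumed names: the rate sits on each fact row next to the hours, so correcting a past rate is a simple update, and a plan is just a projection over the same rows.

create table FactTimesheet
( DateKey int not null, WorkerKey int not null, ProjectKey int not null,
  HoursWorked decimal(9,2) not null,
  HourlyRate decimal(9,2) not null,
  CostAmount as (HoursWorked * HourlyRate) )  -- cost derived from the two measures

-- Planning sketch: next year at current rate +5% and current hours -5%
select WorkerKey, ProjectKey,
    sum(HoursWorked * 0.95 * HourlyRate * 1.05) as PlannedCost
from FactTimesheet
group by WorkerKey, ProjectKey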

Second case: product price

This is a classic example from the retail industry, but it is also applicable to the banking, insurance, pharma, energy and telecom sectors. In a shop, each product has a price. The same product can have a different price in different stores. The price can change at any time. Price reviews can happen weekly, monthly, or ad hoc, and can be done at the store level, city level or regional level. The actual price paid depends on the discount or promotion applied to each transaction, at the time the customer purchases the product.

The issue here is whether we should store the price of the product in the sales fact table, or as an attribute in the product dimension. I maintain the opinion that the price must be stored in the sales fact table, for each transaction. In addition, the price should also be stored in a price list fact table. The price in the price list fact table is the official price for each store, for that day, whereas the price in the sales fact table is the actual price paid, which could be different from the price list because of the offer or promotion applied to that transaction. For example, buy one get one free, or three for the price of two. Or because the product is in a bundle, such as cheese and wine, pyjamas and slippers, or a dictionary and thesaurus. Or because the product is simply discounted on that day, perhaps due to the expiry date, or because the manager wants to get rid of the stock quickly.

Having the price in the product dimension is, in my opinion, not right. Product attributes such as size and colour, model and optional extras are true product properties and therefore should be in the product dimension as attributes. These attributes are the same across stores; they are truly properties of the product. But prices can be different across stores, so they are not solely a property of the product.

The price may be attached not to an individual product code but to several product codes; for example, the prices of the products may be the same regardless of colour. Prices can be dynamic, e.g. ticket prices (in the entertainment or transport industries) for the same show or flight can differ from one seat to the next, from one booking to the next, and from one time slot to the next. Prices can be in bands; for example, in the utility industry (electricity and gas) and in the telecommunication industry (mobile calls), the first 100 kWh can have a different price from the second 100 kWh, and a Friday call can be priced differently from a Sunday call.

A price list fact table accommodates the demanding requirements above better than putting price as a type 2 attribute in the product dimension.
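
A sketch of the two fact tables described above (assumed names): the price list fact table carries the official price per store, product and day, while the sales fact table carries the price actually paid on each transaction.

create table FactPriceList
( DateKey int not null, StoreKey int not null, ProductKey int not null,
  ListPrice decimal(9,2) not null )

create table FactSales
( DateKey int not null, StoreKey int not null, ProductKey int not null,
  TransactionId bigint not null,   -- degenerate dimension
  Quantity int not null,
  PricePaid decimal(9,2) not null, -- after promotions, bundles and discounts
  DiscountAmount decimal(9,2) null )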

11 November 2015

Indexing Fact Tables

A bit on Primary Key

Yesterday a friend asked me why there was no primary key on a fact table. I explained that we did have a fact table surrogate key (FSK) on that fact table, but I made it a unique non-clustered index, because we needed the physical ordering (the clustered index) to be on the snapshot date, as it was a daily periodic snapshot fact table queried mostly by the snapshot date.

The purpose of having a primary key (PK) is to enforce uniqueness in one column of the table. We can achieve the same thing, by creating a unique key, hence we do not need a PK in that fact table.

We need to pause and think about whether we need a unique identifier of the rows in the fact table. If we need to refer to a fact row from another fact row (self-referencing), then we do need one, usually a single-column bigint FSK. But this unique identifier does not have to be a PK; it can be enforced with a non-clustered unique index.

The second purpose of having a PK is to enforce not null. This is really not the function of the PK, but more of a requirement of a PK. A PK requires that the column must be not-nullable. So not-nullable is a property of the column itself, not a property of the PK. And we implement this non-nullability when declaring the column on the table DDL.

We need to bear in mind that a PK has nothing to do with clustered or non-clustered indexes. SQL Server will automatically implement a PK as either a unique clustered index (UCI) or a unique non-clustered index (UNCI), depending on whether a clustered index already exists.

The Clustered Index

A clustered index does two things:

  1. Sorting the table physically
  2. Act as the row locator in non-clustered indexes

Point 1 is for the performance of the queries. If we don't partition a periodic snapshot fact table on the snapshot date, the next best thing is to cluster the table on the snapshot date.

But point 1 is also for the performance of updates and deletes. It is rare, but in some cases we need to update a periodic snapshot fact table (PSFT). So far I have only found 1 case where I needed to update a PSFT, out of about 150 PSFTs that I have created over the last 10 years. When updating a fact table, it is absolutely crucial that the partitioning key (or the clustered index, if the table is not partitioned) is on the business date, plus the columns used as the joining criteria between the fact staging table and the PSFT. The clustered index should not be on the query criteria columns; it is the job of the non-clustered indexes to support the queries.

Point 2 means that the narrower the clustered key, the smaller the non-clustered indexes. Some people think that a narrow clustered key also means the non-clustered indexes will perform better, but in my opinion and observation this performance increase is negligible.

So that’s the clustered index. For an insert-only PSFT we put the clustered index on the snapshot date plus the query criteria column to support the query performance. For an insert-and-update PSFT we put the clustered index on the joining criteria of the update statement.

For example, if the joining criteria of the update is snapshot date key + order ID (a degenerate dimension, the PK in the source table), whereas the query criteria is snapshot date key + account key, then for insert-only PSFT the clustered index should be snapshot date key + account key whereas for update PSFT the clustered index should be on snapshot date key + order ID.
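
In T-SQL terms the sketch below shows the insert-and-update case (table and column names assumed): the clustered index on the update join criteria, and the fact surrogate key enforced with a unique non-clustered index rather than a primary key.

-- Clustered index on the update join criteria: snapshot date + the degenerate order ID
create clustered index IX_FactOrderSnapshot_SnapshotDate_OrderId
on FactOrderSnapshot (SnapshotDateKey, OrderId)

-- The fact surrogate key as a unique non-clustered index instead of a PK
create unique nonclustered index IX_FactOrderSnapshot_FSK
on FactOrderSnapshot (FactOrderSnapshotKey)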

The join plan the SQL engine takes when updating the fact table depends not only on the clustered index of the PSFT, but also on the clustered index of the fact staging table (the source of the merge). If we do use the Merge command, we should convert it to separate update and insert commands; see my article here (link) about the Merge command's inefficiency.

Partitioning

We can replace the physical ordering functionality above with partitioning. It is common, and it makes sense, to partition a PSFT, I agree. But I would recommend not partitioning the fact table when we create it, but later on, for these reasons:

  1. We should not spend the time unless it is required, and when we create the fact table we don’t yet know if the partitioning is required.
  2. When the table is populated and queried we can learn about its condition and behaviour. Much, much better than imagining. For example, is the partitioning required to support query performance, or loading performance?
  3. We may have purging on the fact table, limiting the volume so that we don’t need to partition it because the performance is OK.
  4. We may need to create a new column for the partitioning key.
  5. When we create the fact table, we don’t yet know how the data will be distributed and queried. Purging can change the data distribution. We don’t know the query pattern, for example, 90% of the queries might be on today’s data so we should put it into a separate partition.
  6. Point 5 above dictates the partition maintenance strategy, i.e. whether we have a partitioning function splitting the table into 10,000 partitions, or into 100 partitions with a "sliding window" maintenance. At creation time, we have very limited knowledge of this.
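
When partitioning does become justified, the T-SQL shape would be roughly as below. This is only a sketch: the boundary values, filegroup and index name are placeholders, and in practice the boundaries would be maintained by a sliding-window job.

create partition function PF_SnapshotDate (int)
as range right for values (20150101, 20150201, 20150301)  -- monthly boundaries, extended over time

create partition scheme PS_SnapshotDate
as partition PF_SnapshotDate all to ([PRIMARY])

-- Recreate the clustered index on the partition scheme, keyed on the snapshot date
create clustered index IX_FactOrderSnapshot_SnapshotDate_OrderId
on FactOrderSnapshot (SnapshotDateKey, OrderId)
with (drop_existing = on)
on PS_SnapshotDate (SnapshotDateKey)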

Non Clustered Indexes

Each surrogate key is ideally indexed. Not combined as 1 index, but as separate indexes. All as non-clustered indexes (NCI). Say we have order date key, customer key, product key, store key, ship date key. Then we create 1 NCI on order date key, 1 NCI on customer key, 1 NCI on product key, 1 NCI on store key and 1 NCI on ship date key.

We should not combine these NCIs into 1 NCI, because the second, third and fourth columns of the combined NCI would not be used: an index seeks on its leading column(s), so a query filtering only on, say, the customer key cannot use an index that leads with the order date key.
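
So for the example above, the sketch is simply five separate indexes (FactSales is an assumed table name):

create nonclustered index IX_FactSales_OrderDateKey on FactSales (OrderDateKey)
create nonclustered index IX_FactSales_CustomerKey on FactSales (CustomerKey)
create nonclustered index IX_FactSales_ProductKey on FactSales (ProductKey)
create nonclustered index IX_FactSales_StoreKey on FactSales (StoreKey)
create nonclustered index IX_FactSales_ShipDateKey on FactSales (ShipDateKey)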

See also two of my articles which are related:

  • Indexing Fact Tables in SQL Server (link)
  • Primary Key and Clustered Index in the Fact Table (link)

12 September 2015

EAV Fact Tables

Filed under: Data Architecture,Data Warehousing — Vincent Rainardi @ 4:20 am
Tags: ,

A few weeks ago I came across EAV fact tables. EAV is the Entity Attribute Value data model (read this for background on the EAV structure). It is a data model which enables us to add a column to a table without actually changing the table structure. At first I thought this EAV approach had no place in a Kimball dimensional model. But after thinking and rethinking, I saw that it has some advantages too, not just disadvantages, and in some cases it is appropriate to use.

So below I'll explain what it is, its advantages, its disadvantages, and my verdict on it.

What does it look like?

An EAV Fact Table looks like this:

(Figure: an example EAV fact table)

The above example is taken from the retail industry (a fashion retailer), where they analyse the profitability of each product line every day. What is a product line? They have 9 product groups: women, men, kids, shoes, handbags, accessories, watches, jewelry, beauty. The women product group consists of 23 product lines: dresses, tops, jumpers, cardigans, shirts, T-shirts, blazers, capes, jackets, coats, skirts, shorts, trousers, jeans, leggings, tights, socks, nightwear, lingerie, leisurewear, swimwear, suits, new.

The above EAV fact table is a product line performance fact table. Every day, based on the sales figures, profit margins, direct costs and overheads, they calculate various performance measures for each product line: sales growth, profit margin, product distribution, margin stability, cost effectiveness, price variation, colour variation, size variation, style variation, etc. TypeKey 217 means 1 week sales growth, 218 means 1 week margin stability, 219 is product distribution, and so on. Some measures are time-based, so they have periods such as 1 day, 1 week or 1 month. Some measures have 2 versions: net and gross. Some measures have location variations, i.e. local and global. Similar measures are grouped.

Performance measurements are different for each product line; measurements applicable to one product line may not be applicable to another. The EAV structure fits the shape of the performance measurement data, and makes it flexible. Because there are so many product lines with so many different performance measures, almost every week they have a new performance measure. This is because in the business analytics software they can create a new measure at any time: they define the formula for that measure, which product lines it applies to, the range of time periods applicable to it, whether it is gross or net, and whether it is global or local.
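
In table terms, the structure described above would look roughly like this (assumed names):

create table FactProductLinePerformance
( SnapshotDateKey int not null,
  ProductLineKey int not null,
  PerformanceTypeKey int not null,  -- e.g. 217 = 1 week sales growth, 218 = 1 week margin stability
  MeasureValue decimal(18,6) null )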

What’s bad about it?

The main downside of EAV fact tables is that when we query the fact table we may need to pivot it. If we need to retrieve the weekly margin stability for every product line, we can just filter on Performance Type Key = 218 and we get what we need. But if we need to retrieve all the time variations of sales growth (1 week, 2 weeks, 1 month, etc.) for a certain product line, then we will need to pivot the data.

This pivoting is annoying because we have to hardcode the performance type names to make them columns. At times it can be so annoying that we wish we had a normal Kimball-style fact table so we didn't have to pivot the data.
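
A sketch of that pivoting in T-SQL (using the assumed names from the table sketch above): the performance type keys and column names have to be hard-coded, which is exactly the annoyance.

select ProductLineKey, SnapshotDateKey,
    max(case when PerformanceTypeKey = 217 then MeasureValue end) as SalesGrowth1W,
    max(case when PerformanceTypeKey = 218 then MeasureValue end) as MarginStability1W,
    max(case when PerformanceTypeKey = 219 then MeasureValue end) as ProductDistribution
from FactProductLinePerformance
group by ProductLineKey, SnapshotDateKey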

What’s good about it?

In a Kimball star schema, the measures in the fact tables are created as fixed columns. If we have a new measure, we need to create a new column. If the retail analytics software is churning out new measures almost every week, our development cycle will not be able to cope with the pace. In a mature warehouse, it will probably take about a month to complete the analysis, design, development, testing and release process just to add 1 column, equal to probably 10 man-days.

If the employee cost rate is $550 per day (the sum of salary, bonus, medical, dental, vision, relocation, life insurance, accident insurance, training, pension, cost of HR staff, childcare, cost of facilities and 401k for a year, say $120k, divided by 250 working days per year minus 30 days of vacation and sick time), that 1 column would cost $5,500. That is not cost effective at all.

Having a flexible structure like the EAV fact table means that we never have to worry about the new performance measures churned out by the retail analytics software almost every week. That saves $5,500 per week, or $275,000 per year, which is a lot of money.
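To illustrate the difference: under the EAV approach a new measure is just new rows, whereas the fixed-column approach needs a schema change and a release. The dimension, table and measure names below are assumptions for illustration.

-- EAV approach: a new measure is one row in the performance type dimension
-- (names assumed), plus ordinary fact rows from the next load. No schema change.
INSERT INTO DimPerformanceType (PerformanceTypeKey, PerformanceTypeName)
VALUES (452, '1 month colour variation, gross, global');

-- Fixed-column (Kimball) approach: the same change needs a schema change,
-- plus ETL changes, testing and a release:
-- ALTER TABLE FactProductLinePerformanceWide
--   ADD ColourVariation1MonthGrossGlobal DECIMAL(18,6) NULL;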

Final verdict

If new measures are created quite often (more than once per quarter) and the measures are different for every product line (or every customer, or another dimensionality such as instrument or asset type), then the EAV Fact Table approach is justified.

If the measures are quite stable (changes happen less than once per quarter), and the measures are the same for every dimensionality, then an EAV Fact Table is not justified. We should build it as a normal, Kimball-style fact table.

But that is just my opinion. You could have different experiences, and therefore different opinions. Hence I would like to hear your comments on this topic.

31 May 2015

Responsibilities of Data Architects

Filed under: Data Architecture — Vincent Rainardi @ 7:49 am

What does a data architect do? Designing tables? That was what I thought at first. But it turned out that it’s not just that. The main responsibilities of a data architect are:

  1. Data modelling and analysis (45%)
  2. Data loading and delivery (25%)
  3. Data availability, performance, security (15%)
  4. Data quality & governance (10%)
  5. Other (5%)

These percentages are applicable if you are a data architect in an “end user” organisation, i.e. a bank, health care company, insurance company, retail company, telecommunication company, airline, etc. The percentages are not applicable if you work as a consultant, meaning that you are a data architect in an IT consultancy firm such as TCS, Accenture, Cognizant or IBM. See the “Other (5%)” section below for details.

1. Data Modelling and Analysis

  • Understand the data
    It is the most important duty of a data architect to understand the data in the organisation. If he works in asset management, he needs to understand what yield, spread, duration, etc. are. Without understanding the data well, one cannot hope to function well as a data architect.
  • Data modelling
    It is not just about designing tables, but also XML files and schemas (XSD), OLAP cubes, formats of interface files, etc. Not just relational, but also dimensional. Not just tables, but also indexes, logs, partitions, databases and data servers. Triggers, CDC and audit too. Physical data modelling (PDM) as well as logical (LDM). Let's be honest, we rarely delve into the physical aspect. We often ignore the physical make-up of a database. We usually leave it to the DBA.
  • Data Analysis
    We often leave data analysis to the Business Analyst. “My job is to design the data model, but its content is not my business” is the usual attitude of a data architect. This is wrong. We as DAs need to understand where everything is. When we join a company as a new starter, we need to spend a great deal of time understanding where the supplier SIC is, what the branch code is made up of, how many joins are required to query the historical data, how the profitability is stored, how the customer ID is generated, etc.

2. Data Loading and Delivery

Data Loading

The data doesn't get there by itself. It is delivered into the tables by some processes. It is our job to understand the exact details of the data loading mechanism: understanding the batch architecture, understanding what each step of the loading process does. Whether they are using ActiveBatch or Autosys, we need to understand how it works: how to create a conditional alert, a loop, how to monitor loading times, etc. It's true that the operational aspect of it is the job of production support, but the architectural design of the data loading process is our responsibility as a DA.

A data architect who only does data modelling and doesn't understand the data loading process is like a blind man. When and how the data is loaded into the data store affects the data significantly. For example, if a large, static table is constantly truncated and reloaded every day, would you keep quiet about it? If the staging table is a thousand times bigger than the main target table, reaching 4 billion rows, would you not investigate it? If the loading SP uses a MERGE that causes a nested loop over 1 billion rows, would you just sit and watch? We need to be fully aware of all these things.

We as data architects need to control the purging process, the loading process, index maintenance and log maintenance. If you work as a DA on SQL Server and somebody mentions Ola Hallengren and you raise your eyebrows asking “Who is that?”, you can't claim to be a good data architect. We have to understand how the databases are maintained, and how the data is maintained. That's the minimum. After understanding it, we need to improve it: offline backups, the restore process, a common staging area, reusable mappings and lookups, capacity planning; we need to improve all of that.

Yes, we work with the DBAs on these, but we can't leave it all to the DBAs. Does a DBA care about reusable lookups? Do they even understand them? Yes, we work with the Production Support team on these, but we should not leave it to them.

Data Delivery

Data loading is about piping the data into the data stores. Data delivery is about piping the data out from the data stores, into the business applications. Oh yes, we as data architects need to understand and improve this process as well. If the data delivery is not done efficiently, we need to improve it. Six different applications may require the same sales or costing data, and they all run a 10-join query involving huge tables, hundreds of times a day. Wouldn't it be better if we created a different (additional) structure to store that data? A DBA would only look at the indexes; they don't have the knowledge of the data structure to be able to suggest the additional tables. We do!

Hence it is our job to make the data delivery as efficient as possible. How many reports are accessing this particular area of the database? Should we create a flattened table to support those main reports? Why are those reports using views, not stored procedures? (SPs can have temp tables, loops and passed-in parameters; views can't.)
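As one illustration of such an additional structure, here is a sketch of a flattened daily summary table that reports could hit with a single-table query instead of repeating the 10-join. The table and column names are assumptions for illustration.

-- A flattened reporting table populated once per load cycle, then read many times a day.
-- All names below are illustrative.
CREATE TABLE RptDailySalesSummary
( SalesDate        DATE          NOT NULL,
  RegionName       VARCHAR(50)   NOT NULL,
  ProductGroupName VARCHAR(50)   NOT NULL,
  SalesAmount      DECIMAL(18,2) NOT NULL,
  CostAmount       DECIMAL(18,2) NOT NULL,
  MarginAmount     DECIMAL(18,2) NOT NULL,
  CONSTRAINT PK_RptDailySalesSummary PRIMARY KEY (SalesDate, RegionName, ProductGroupName)
);

-- The warehouse load writes it once; the consuming applications read it with a simple
-- query instead of each re-running the same 10-join against the base tables.
SELECT SalesDate, RegionName, SUM(SalesAmount) AS SalesAmount
FROM   RptDailySalesSummary
WHERE  SalesDate >= '2016-09-01'
GROUP BY SalesDate, RegionName;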

Architecting

Only after we are in control of the data loading, data delivery and data maintenance processes can we attempt to improve the architecture. You see, architecture is not just about the structure of the database, but also about designing the processes and mechanisms that load and extract the data, into and from the databases.

Architecting the data flow means seeing it from ten thousand feet. At this height we can see things like:

  • Why are there 2 jobs pushing promotional sales data into the same set of tables?
  • At 7 am new data is being loaded into those main tables, whilst at the same time the cube processing is reading from those tables, causing locking. No wonder we have timeouts.
  • Many reports and processes read the relational warehouse directly; shouldn't they be reading the dimensional marts, the flattened structures?
  • Should we shield the structure by implementing views? That would make the queries simpler, and it is also good for security (maintaining permissions). See the sketch after this list.
  • If many applications need FX rates, should we load that first and signal the consuming systems when it’s ready?

We design the data loading and data delivery processes to be as efficient as possible. That's architecting!

3. Data Availability, Performance, Security

Data availability

Data availability is not just about high availability, e.g. the databases should be available and accessible 99.99% of the time. It is also about timing: it is no good if the position-level performance data or the sales data is available at T+2 (two days after the event). We should make it available at T+1 and, if possible, at T using an intraday load.

The sooner the data is available to the end users, the better. We always have to look at data availability from the end user's point of view. We have to think about the data delivery all the way to the end users who consume it, not just halfway into the Data Warehouse. For example, it is no good getting the purchase data (or the risk data, or any other data relevant to your industry) into the Data Warehouse on the same day if the report only delivers that data to the end user at T+1 because it was waiting for rating data from a vendor.

Data availability is also about retrieving old data. Consider this conversation: User: can I view the sales data in that region for that product group in 2003? IT: I'm sorry Sir, it's not available in the database, we have to load it first as the database only has data from 2005. User: OK, how long is it going to take to load? IT: about 4 hours. User: So I can view it tomorrow? IT: No Sir, because we have to bring the tape from Hounslow to here (Chester) and it will take a week. User: ???
And the following week: IT: the 2003 data is ready Sir, you can view it on the screen. User: Why is there only 2003 data? Where is the 2004 to 2015 data? IT: Oh, the archive files are arranged by year. If we load the 2003 data, we lose all the other years. User: ???
Or something like this: User: why doesn't the 2003 data have the London region? It only has South East. In 2007 we divided South East into 4 regions. IT: That regional classification didn't exist in 2003 Sir, and this is 2003 data. User: I know that, but can't you apply the 2015 regional classification? IT: No we can't. User: Then this data is useless! We need to be able to analyse this 2003 sales data by the current product grouping and by the current regional divisions, so we can compare it with this year's sales performance!

Query and Load Performance

This is about how fast the query which retrieves the data from the data store is, how fast the reports run, how long the data export takes, how long the cube processing takes, how long the data load/upload/download takes, and how long the data publishing takes.

Structures like code-decode tables (link), or rows with begin and end dates, can hamper query performance. They may be suitable for certain types of queries, but absolutely useless for other types, causing huge performance problems. As data architects we need to take care of performance issues like these.
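As a hedged example of the begin/end date pattern mentioned above (all names assumed): it works well for a single point-in-time lookup, but joining a large fact table to it on a date range forces a non-equi join, which the optimiser often resolves with a scan or a nested loop.

-- Point lookup: fine.
SELECT d.Description
FROM   CodeDecode d
WHERE  d.CodeType = 'REGION'
  AND  d.Code     = 'SE'
  AND  '2016-09-16' BETWEEN d.BeginDate AND d.EndDate;

-- Range join from a large fact table: often a performance problem.
SELECT f.SalesDate, f.RegionCode, d.Description, f.SalesAmount
FROM   FactSalesByRegion f
JOIN   CodeDecode        d
  ON   d.CodeType = 'REGION'
 AND   d.Code     = f.RegionCode
 AND   f.SalesDate BETWEEN d.BeginDate AND d.EndDate;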

It's not just about the good days. More importantly, it is about the bad days. If the database server has a problem during the morning data load, when will the data load finish? 9 am? 1 pm? That's what the business user wants to know. Do you have a secondary server automatically taking over the data loading?

The business users expect the Cognos reports or Hyperion cubes to respond within 1-2 seconds, day in, day out. If it takes 1-2 minutes to run a report or a cube query, that's not good. Business users care about the performance of what affects them, not how many milliseconds the query runs! They don't want to know whether it is because of out-of-date statistics, missing indexes, incorrect partitioning, cartesian joins, badly written code, nested loops, locking, latching or deadlocks. They do not care about the technicalities! If we have to put in a secondary server, RAID 1+0, a Fibre Channel SAN, Amazon Redshift or Hadoop, so be it. It is our job as data architects to ensure that the report and cube response time is always 1-2 seconds.

And if we don't know what Redshift is, if we don't know who Cloudera are, if we don't know how the Netezza TwinFin FPGA works (or Oracle RAC), then we are not in the business, we are not good Data Architects. We as Data Architects need to know a great deal about the things that influence performance.

Data Security

This is not an “oh, tell that to Info Sec” debate. We Data Architects work with Information Security people. We work with Compliance guys. We work with Infrastructure guys, with DBAs. We work with the Production Support team and the Helpdesk team. We work with external penetration test consultants. We work with anybody required to deliver data security, i.e. to ensure that only the relevant users can access the relevant data, and that nobody in the organisation (and outside it!) can see something that they are not supposed to see.

Reviewing database accounts, reviewing how the security groupings are arranged, how the database roles are structured (groups within groups, group hierarchies, etc.), Kerberos authentication, view layers, cube roles, database permissions, segregation of the network between Dev and Prod, etc. All of these are our responsibilities as a Data Architect.

It is not just the databases, but also data on the web portals, data we receive and send in files, maintenance of the security groupings (usually via a web interface), system accounts used in data loads, system accounts used by reports, segregation of permissions in the applications, and the security of data in archives and backups.

We have to understand the concepts of Chinese Walls, extranet security, penetration testing, data masking, the legality of data retention periods and scope, public key cryptography (especially in the retail banking sector) and security authentication processes.

In the retail banking sector, the customer data obtained and retained by the Singapore, Switzerland and Luxembourg branches may not be viewable by the bank's employees in the US, UK or France. In the health care sector, the personal health data obtained in the UK, Germany, Australia and the US should not be used or disclosed “in a form which can identify the patient” without their consent. But the business may want to analyse the data to find trends at a regional and global level, without identifying an individual bank account or patient.

We as Data Architects need to find a solution to make this happen, for example using data masking, encryption, or nulling out. In the retail industry, for example, we could store customers' credit card numbers but replace the first 12 digits with XXX so that the Call Centre users can't view the complete numbers. In banking, for example, the names of certain accounts may be blanked out, but because of the sector, country and size of the account, users may still be able to identify the account: there might be hundreds of large petrochemical companies in China, but there's only one large cocoa plantation in Peru.
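A minimal sketch of that masking idea, assuming a SQL Server view layer; the table, columns and role name are illustrative only.

-- Expose card numbers to the Call Centre with the first 12 digits masked (names assumed).
CREATE VIEW dbo.vwCustomerCardMasked
AS
SELECT CustomerId,
       'XXXX-XXXX-XXXX-' + RIGHT(CardNumber, 4) AS MaskedCardNumber
FROM   dbo.CustomerCard;
GO

GRANT SELECT ON dbo.vwCustomerCardMasked TO CallCentreUsers;  -- role name assumed
DENY  SELECT ON dbo.CustomerCard         TO CallCentreUsers;  -- no access to the raw numbers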

4. Data Quality and Governance

This is not just about implementing data quality software like Trillium and IDQ to improve the quality of customer data. It is the whole framework of data quality: defining the rules to certify the quality of the data, data profiling and cleansing, creating workflows for DQ monitoring and exception handling, establishing Data Stewards who identify the business users who need to fix incorrect data, implementing tools for data reconciliation and data lineage, “entity” resolution for a Single Customer View; the list goes on.

In the asset management sector, if the country, currency or classification of a position is not set, the breakdown of a portfolio will be reported incorrectly. In corporate banking, identifying an “entity” is crucial in managing derivative transactions, and it can be done in a waterfall manner using Markit RED (Reference Entity Database), MIN (Moody's Issuer Number), the ISIN/CUSIP/SEDOL of the issues, the Reuters/Bloomberg ticker of the security, and Entity Legal Name + Country of Incorporation.
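A hedged sketch of that waterfall matching in SQL; the staging and cross-reference tables, columns and identifier type codes are all assumptions for illustration.

-- Resolve each trade to an entity by trying identifiers in priority order
-- (Markit RED, then Moody's MIN, then ISIN, then ticker, then legal name + country).
-- All table, column and code names are illustrative.
SELECT t.TradeId,
       COALESCE(red.EntityId, mdy.EntityId, isin.EntityId, tick.EntityId, nm.EntityId) AS ResolvedEntityId
FROM   StagingTrade t
LEFT JOIN EntityXref red  ON red.IdType  = 'MARKIT_RED' AND red.IdValue  = t.MarkitRedCode
LEFT JOIN EntityXref mdy  ON mdy.IdType  = 'MOODYS_MIN' AND mdy.IdValue  = t.MoodysIssuerNumber
LEFT JOIN EntityXref isin ON isin.IdType = 'ISIN'       AND isin.IdValue = t.Isin
LEFT JOIN EntityXref tick ON tick.IdType = 'TICKER'     AND tick.IdValue = t.BloombergTicker
LEFT JOIN EntityXref nm   ON nm.IdType   = 'LEGAL_NAME' AND nm.IdValue   = t.LegalName + '|' + t.CountryOfIncorporation;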

The reporting of data quality is also the Data Architect's responsibility, i.e. how many breaks there were this month, how good or bad that is compared to last month, which DQ rules were violated, how far over the tolerance limit the breaks were, whether they affect the top-level numbers, whether they were caused by the raw data coming from the source or by a bug in our systems, how many days they took to fix, whether they had been previously identified and reported, etc.

Common DQ issues in banking are:

  • Duplicate records, i.e. two securities with the same ISIN
  • Inclusion/exclusion of outstanding trades
  • Incorrect instrument attributes
  • Ambiguity in identifying where yield or duration is sourced from, and how it was calculated
  • Timing of NAV which changes due to single swing price adjustments
  • Incorrect risk calculation due to incorrect instrument attributes

Data governance is difficult to get funding for if it is too “blue sky”. It should focus on things that quickly bring business benefits, such as completeness of customer data, business rules to check data accuracy, and ensuring that key fields are maintained.

5. Other

It is part of a Data Architect's role to deal with the non-technical aspects, i.e. management and “politics”, meaning resolving conflicts of interest (with regard to data) between different groups in the organisation. In a good organisation this is only 5% of the time, but in reality, in many organisations, this “management and politics” work can easily become 30% of our time, making us unable to deliver the 4 responsibilities above with good results.

Data Architects in IT Consultancy Firms and Vendors

I mentioned at the beginning that the percentages are not applicable if we were a data architect working for an IT consultancy firm such as TCS, Accenture or Cognizant, or for a database vendor, a BI/DW vendor or a data integration vendor such as IBM, Teradata, SAS or Informatica. This is because we also have an additional responsibility: to help sell data-related projects to the clients. This includes pre-sales activities such as preparing proposals, drafting potential solutions, conducting presentations, preparing demonstrations and proofs of concept, preparing business cases including setting up the data, conducting research, answering RFPs (particularly the technical pages), sharing customer feedback with the product development and marketing teams, and assisting with costing calculations.

Data architects working for an IT consultancy firm or vendor do not normally do sales or account management, but they could be asked to help with support work for certain accounts, usually as third line support (the first line being the helpdesk/technical support team, and the second line being the developers/product development team, or the training team in certain cases). This support work usually happens because they were working on that project before.

And then, after they are deployed on projects, their role becomes similar to DAs in “end user” organisations, but with extra time spent on “internal agendas” (work not for the client but for the consultancy firm or vendor itself), for example working on a proposal or presentation for another client.

The percentages could be something like this:

“In the office” time: (aka “on the bench” time)

  • Proposal 60%
  • Presentation, demo, PoC 20%
  • Research & documentation 10%
  • Support 5%
  • Management 5%
“In the project” time: (depends on the project, e.g. a DQ project is different from a BI project)

  • Data Modelling & Analysis 40%
  • ETL Design / Data Loading 30%
  • Data Quality Framework 20%
  • Management 5%
  • Internal agendas 5%
