
Tuesday 22 July 2008

Business Intelligence Landscape Documentation – The Gold Copy

Software systems in general need good, comprehensive documentation. This need becomes a must-have in the case of BI systems, as these systems evolve over time. My recommendation (a best practice, if you will) is to create a “Gold Copy” documentation of the enterprise BI landscape. The “Gold Copy” is distinct from the individual project documentation (which would continue to exist) and is updated under appropriate version control over time.
The “Gold Copy” would comprise the following documents related to the Business Intelligence environment:
1) Architecture and Environment
This document has two broad sections. The first section presents the physical architecture and then examines each of the architectural components in more detail. The second section explores the environmental and infrastructure requirements for development, production and operations.
2) Data Sources
This document explores the various source systems from which data is acquired by the datawarehouse. It attempts to provide a layman’s view of what data is being extracted and maps it to the source system data structures from which the data originates.
3) Data Models
This document builds on the two previous documents. The document attempts to map the source data onto the datawarehouse and the data mart models and provides a high-level picture of the subject orientation.
4) Reporting and OLAP
This document takes a closer look at the processes and technology used to deliver information from the datawarehouse to the end user. The initial section explores the architectural components: the middleware server software and hardware, as well as the end-user desktop tools and utilities used for this purpose. The second section looks at some of the more important reports and describes the purpose of each.
5) Process
This document takes a process view of data movement within the enterprise datawarehouse (EDW), starting from extraction, through staging, to loading the EDW and subsequently the data marts. It explores related aspects such as mode of extraction, extract strategy, control structures, and restart and recovery. The document also covers the naming conventions followed for all objects within the datawarehouse environment.
Together, the above set of documents covers the BI landscape with a focus on three critical themes: the Architecture Track, the Data Track and the Process Track. Each of these tracks has a suggested reading sequence drawn from the documents above.
Architecture Track – This theme focuses entirely on components, mechanisms and modes from an architectural angle. The suggested reading sequence for this track is: Architecture and Environment, Data Models, Reporting and OLAP.
Data Track – This theme focuses on data: the methods of its sourcing, its movement across the datawarehouse, the methods of its storage and the logistics of its delivery to business users. The suggested reading sequence for this track is: Data Sources, Data Models, Reporting and OLAP.
Process Track – This theme focuses on the datawarehouse from a process perspective and explores the different aspects related to it. The suggested reading sequence for this track is: Architecture and Environment, Process, Reporting and OLAP.
I have found it extremely useful to create such documentation for enterprise-wide BI systems to ensure a level of control as functional complexity increases over time.
Thanks for reading. Please do share your thoughts.

Friday 18 July 2008

Data Integration Challenge – Error Handling

Detecting and handling the errors encountered in the process of data transformation is one of the key design aspects of building a robust data integration platform. When an error occurs, how do we capture it and use it for effective analysis? The following are best practices related to error handling:
  1. Differentiate the error handling process into the generic (null, datatype, data format) and the specific, such as rules related to the business process. This differentiation enables building reusable error handling code.
  2. Do not stop validations when a record fails one of them; continue with the remaining validations on the incoming data. If 5 validations are to be done on a record, design the process so that the incoming record is taken through all of them; this ensures that all the errors in a record are captured in one go.
  3. Have a table Error_Info; this is the repository of all the error messages. Its fields would be ErrorCode, ErrorType and ErrorMessage. ErrorType would carry the value ‘warning’ or ‘error’, ErrorMessage would have a detailed description of the error, and ErrorCode a numeric value used in place of the description.
  4. In general each validation should have an error message, so the table Error_Info can also be seen as a repository of all error validations performed in the system. In the case of business rules that involve multiple fields, the field ErrorMessage in the table Error_Info can hold the details of the business rule applied along with the field names; an additional field Error_Category can also be created to group the error messages.
  5. Have a table Error_Details; this stores the errors captured. Its fields would be KeyValue, FieldName, FieldValue and ErrorCode. KeyValue holds the value of the primary key of the record in error, FieldName stores the name of the field in error, FieldValue holds the value of the field that failed, and ErrorCode details the error through a link to the table Error_Info.
  6. Write each error captured as a separate record in the table Error_Details, i.e., if a record fails two conditions, such as a NULL check on the field ‘CustomerId’ and a data format check on the field ‘Date’, ensure that two records are written: one for the NULL failure and one for the data format failure.
  7. To retain the whole incoming record, have a table Source_Data with the same structure as the incoming data. Have a field FLAG in Source_Data; a value of ‘1’ says that the record has passed all the validations and ‘0’ says that it has failed one or more validations.
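The two error tables can be sketched concretely. Below is a minimal SQLite illustration: the table and field names follow the design above, while the specific error codes (100, 101) and their messages are assumptions made purely for illustration.

```python
import sqlite3

# In-memory database for illustration; in practice these tables would
# live in the warehouse's audit or metadata schema.
conn = sqlite3.connect(":memory:")

# Error_Info: the repository of all error messages, one row per validation.
conn.execute("""
    CREATE TABLE Error_Info (
        ErrorCode    INTEGER PRIMARY KEY,
        ErrorType    TEXT CHECK (ErrorType IN ('warning', 'error')),
        ErrorMessage TEXT
    )
""")

# Error_Details: one row per failed validation per record, linked to
# Error_Info through ErrorCode.
conn.execute("""
    CREATE TABLE Error_Details (
        KeyValue   TEXT,
        FieldName  TEXT,
        FieldValue TEXT,
        ErrorCode  INTEGER REFERENCES Error_Info (ErrorCode)
    )
""")

# Seed two generic validations (codes are illustrative).
conn.executemany(
    "INSERT INTO Error_Info VALUES (?, ?, ?)",
    [
        (100, "error", "Mandatory field is NULL"),
        (101, "error", "Value does not match the expected date format"),
    ],
)
conn.commit()
```

Keeping the messages in Error_Info rather than in Error_Details means each failure is stored as a compact code, and the descriptions can be refined later without touching the captured errors.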
In summary, the whole process is to read the incoming record, validate the data, and for any validation failure assign the error code and pipe the captured errors to the Error_Details table; once all validations are complete, assign the FLAG value (1 – valid record, 0 – invalid record) and insert the record into the Source_Data table. Having the data structures suggested above enables efficient analysis of the captured errors by the business and IT teams.
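The summary flow can be sketched in plain Python. This is a minimal sketch, not a full implementation: the error codes, the sample field names (CustomerId, Date) and the validate_record helper are assumptions for illustration. The key pattern it shows is that every validation runs before the FLAG is assigned, so one pass captures every error in a record.

```python
from datetime import datetime

# Hypothetical error codes; in the scheme above these would be rows
# in Error_Info.
ERR_NULL, ERR_DATE_FORMAT = 100, 101

def validate_record(record, key_field="CustomerId"):
    """Run every validation on the record and return (flag, errors).

    flag is 1 when the record passed all checks, 0 otherwise; errors is
    a list of (KeyValue, FieldName, FieldValue, ErrorCode) rows destined
    for Error_Details. The loop deliberately does not stop at the first
    failure, so all errors in a record are captured in one go.
    """
    errors = []
    key = record.get(key_field)

    # Generic check: mandatory fields must not be NULL/empty.
    for field in ("CustomerId", "Date"):
        if record.get(field) in (None, ""):
            errors.append((key, field, record.get(field), ERR_NULL))

    # Generic check: Date must follow the YYYY-MM-DD format.
    value = record.get("Date")
    if value:
        try:
            datetime.strptime(value, "%Y-%m-%d")
        except ValueError:
            errors.append((key, "Date", value, ERR_DATE_FORMAT))

    flag = 1 if not errors else 0
    return flag, errors

# A record failing two validations yields two Error_Details rows,
# one per failure, as recommended above.
flag, errors = validate_record({"CustomerId": None, "Date": "31/12/2008"})
```

After this call the record would be written to Source_Data with FLAG = 0, and each tuple in errors would be inserted into Error_Details.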

Tuesday 8 July 2008

Competencies for Business Intelligence Professionals

The world of BI seems to be so largely driven by proficiency in tools that I was stumped during a recent workshop when we were asked to identify BI competencies. The objective of the workshop, conducted by the training wing of my company, was to identify the competencies required for different roles within our practice and also to define 5 levels (Beginner to Expert) for each of the identified competencies.
We were a team of 4 people and started listing the areas where expertise is required for a successful BI practice. For the first version we came up with 20-odd competencies, ranging from architecture definition to tool expertise to data mining to domain expertise. This was not an elegant proposition, considering that for each competency we had to define 5 levels and also create assessment mechanisms for evaluating them. The initial list was far too big for any meaningful competency building, so we decided to fit it all into a maximum of 5 buckets.
After some intense discussions and soul searching, we came up with the final list of BI competencies as given below:
1) BI Platform
2) BI Solutions
3) Data Related
4) Project / Process Management
5) Domain Expertise
BI Platform covers all tool-related expertise, ranging from working on a tool with guidance to being an industry authority on specific tools (covering ETL, Databases and OLAP).
BI Solutions straddles the spectrum of solutions available out of the box, from packages available with system integrators to help jump-start BI implementations (for example, Hexaware has a strong proprietary solution around HR Analytics) at one end, to packaged analytics provided by major product companies (examples are Oracle PeopleSoft EPM, Oracle BI Applications (OBIA), Business Objects Rapid Marts, etc.) at the other.
Data Related competency has ‘data’ at its epicenter. The levels here range from understanding and writing SQL queries to Predictive Analytics / Data Mining at the other extreme. We decided to keep this as a separate bucket because it is a critical one from a BI standpoint, for nobody has as much “data” focus as the tribe of BI professionals.
Project / Process Management covers all aspects of managing projects, with specific attention to the risks and issues that can crop up during the execution of Business Intelligence projects. This area also includes the assimilation and application of software quality processes such as CMMI for project execution and Six Sigma for process optimization.
The fifth area was “Domain Expertise”. We decided to keep this as a separate category considering that for BI to be really effective, it has to be implemented in the context of a particular industry. The levels here range from being a business analyst with the ability to understand business processes across domains to being a specialist in a particular industry domain.
This list can serve as a litmus test for all BI professionals to rate themselves on these competencies and find ways of scaling up along these dimensions.
I found this exercise really interesting and hope the final list is useful for some of you. If you feel that there are other areas that have been missed out, please do share your thoughts.
The team involved in this exercise: Sundar, Pandian, Mohammed Rafi and I. All of us are part of the Business Intelligence and Analytics Practice at Hexaware.