Data Modelling Techniques for Better Business Intelligence: A Focus on the Data Modelling Process
Introducing MIKE2.0
An Open Source Methodology for Information Development
http://www.openmethodology.org
© 2008 BearingPoint, Inc.
Better Business Intelligence through an Information Development Approach
Agenda
Better Business Intelligence
─ The Keys to Better Business Intelligence
─ Guiding Principles for Better Business Intelligence
MIKE2.0 Methodology: A Focus on Data Modelling
─ 5 phased-approach to Better Business Intelligence
─ Example Task Outputs from Strategy Activities
─ Example Task Outputs from Implementation Activities
Lessons Learned
Business Intelligence: Defining Better Business Intelligence
Business Intelligence refers to a set of techniques and technologies that are used to gather information from a repository through reports and analytical tools.
Reporting and Analytics can be considered the "front end" of the Business Intelligence environment
Reporting and analytics involve a combination of automated and user-driven steps
Business Intelligence typically involves accessing repositories where data is brought together from many different systems across the organisation.
Can be considered the "back-end" of the Business Intelligence environment
The back-end is generally an automated process
The delivery approach for Business Intelligence projects differs from that of functional or infrastructure-related projects.
Seen as more of a "journey" than functionally-oriented development – the focus is on incremental delivery
Testing can be challenging – It is inherently more difficult to simulate all user cases
Business Intelligence: Defining Better Business Intelligence
In the past many Business Intelligence initiatives have failed:
Failures were typically due to the "back-end" or the SDLC process
Organisations want a better Business Intelligence environment more than ever – and the capabilities they need today are even more sophisticated
Back-end issues have primarily been related to:
Data Integration
Metadata Management
Data Quality Management
Data Modelling
Delivery approach issues were primarily related to:
Lack of a strategic vision that allowed for incremental delivery
Poorly defined requirements
Inadequate testing
Lack of architectural flexibility
In order to move to a reliable and effective Business Intelligence environment, the focus must be on getting these areas right and taking an Information Development approach.
The MIKE2.0 Methodology: An Open Source Methodology for Information Development
Key Constructs within MIKE2.0
SAFE (Strategic Architecture for the Federated Enterprise) is the architecture framework for the MIKE2.0 Methodology
Information Development is the key conceptual construct for MIKE2.0 – Develop your information like you do with applications
The Overall Implementation Guide provides the overall set of Activities and Tasks that bring everything together; the Usage Model determines what is used depending on the type of project
Supporting Assets are detailed artifacts that link to Activities. Supporting Assets include:
─ Tools and Technique papers
─ Software Assets
─ Deliverable Templates
─ Sales assets
─ Open Source Examples: http://mike2.openmethodology.org/index.php/MIKE2:Supporting_Assets
MIKE2.0 Solutions tie Supporting Assets to the Overall Implementation Guide
─ Technology Backplane Solutions are technically-oriented (e.g. MIKE2.0 for Business Intelligence)
─ More Solutions are under development, including Business Solutions and Vendor Solutions
MIKE2.0 recommends a new organizational model and governance standards to deliver the Information Development Centre of Excellence
Many Private MIKE2.0 Assets are stored on internal BearingPoint content management systems
The MIKE2.0 Methodology: An Open Source Methodology for Information Development
MIKE2.0 provides a Comprehensive, Modern Approach
Scope covers Enterprise Information Management, but goes into enough detail to be used for more tactical projects
Architecturally-driven approach that starts at the strategic conceptual level and drills down to solution architecture
A comprehensive approach to Data Governance, Architecture and strategic Information Management
MIKE2.0 provides a Collaborative, Open Source Methodology for Information Development
Balances adding dynamic new content with release stability through a method that is easier to navigate and understand
Allows non-BearingPoint users to contribute
Links into BearingPoint's existing project assets on Intraspect
A unique approach; we would like to make this "the standard" in the new area of Information Development
The MIKE2.0 Methodology: Key Activities for Data Modelling
[Figure: detailed logical data model of a run-control and managed-metadata repository – run/instance tables (run_suite, run_suite_instance, run_suite_module, run_module, run_instance_module, run_event_log, run_instance_item, run_collection, run_items_in_collection, run_file_register), product/customer tables (run_customer, Run_Product_Offer, Run_Customer_product2, run_document, Run_Tables_by_Product, Run_Worksheet_Tab) and managed-metadata tables (mng_source, mng_table, mng_column, MNG_SERVICE, MNG_SERVICE_COLLECTION, MNG_SERVICE_METADATA, MNG_CUSTOMER_5LETTER, MNG_DME_SHEET), with named relationships such as triggers, calls, specifies, processed_by, stored_in and fed_into. A conceptual subject-area model is also shown: Party (people/organisations of interest and their relationships), Campaign, Organization, Event (content/transactions, etc.), Channel (ATM, kiosk, etc.), Features, Product, Location, Arrangement (accounts, etc.).]
Validate Strategic Business Requirements
Refine Strategic Business Requirements to Detailed Requirements
Categorise Detailed Business Requirements
Prioritise Detailed Business Requirements
Determine Detailed Analytical Requirements
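The run-control tables shown above (run_suite, run_module, run_suite_module, run_event_log, among others) amount to a batch-orchestration repository: a suite groups modules in an agreed sequence, and each execution writes to an event log. A minimal sketch of that idea in Python with SQLite – a simplified subset of the columns, with the Oracle types swapped for SQLite ones, and invented module names:

```python
import sqlite3

# Simplified subset of the run-control model above (most attributes omitted;
# Oracle NUMBER/VARCHAR2/DATE replaced by SQLite INTEGER/TEXT).
ddl = """
CREATE TABLE run_suite (run_suite_id INTEGER PRIMARY KEY, suite_name TEXT);
CREATE TABLE run_module (module_id INTEGER PRIMARY KEY, module_name TEXT,
                         module_command TEXT);
CREATE TABLE run_suite_module (run_suite_id INTEGER, module_id INTEGER,
                               module_sequence_num INTEGER);
CREATE TABLE run_event_log (run_instance_id INTEGER, event_dte TEXT,
                            log_type_cde TEXT, log_msg_txt TEXT);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)

# Register a suite of two modules that must run in order (names invented).
conn.execute("INSERT INTO run_suite VALUES (1, 'NIGHTLY_LOAD')")
conn.executemany("INSERT INTO run_module VALUES (?, ?, ?)",
                 [(10, "extract_customers", "extract.sh"),
                  (20, "load_warehouse", "load.sh")])
conn.executemany("INSERT INTO run_suite_module VALUES (1, ?, ?)",
                 [(10, 1), (20, 2)])

# An orchestrator reads the suite's modules in sequence and logs each step.
modules = conn.execute("""
    SELECT m.module_id, m.module_name
    FROM run_suite_module sm
    JOIN run_module m ON m.module_id = sm.module_id
    WHERE sm.run_suite_id = 1
    ORDER BY sm.module_sequence_num""").fetchall()

for run_instance_id, (module_id, name) in enumerate(modules, start=100):
    conn.execute("INSERT INTO run_event_log VALUES (?, date('now'), 'INFO', ?)",
                 (run_instance_id, f"started {name}"))

print([name for _, name in modules])  # ['extract_customers', 'load_warehouse']
```

The point of keeping this in tables rather than in scripts is the one the model makes: the sequence, parameters and log-search rules become data that can be queried and governed.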
MIKE2.0 Methodology: Phase Overview – The 5 Phases of MIKE2.0
Information Development through the 5 Phases of MIKE2.0:
─ Phase 1 (Business Assessment) and Phase 2 (Technology Assessment) produce the Strategic Programme Blueprint, which is done once.
─ Phases 3, 4 and 5 are the Continuous Implementation Phases: Roadmap & Foundation Activities, followed by successive increments (Increment 1, Increment 2, Increment 3, then begin the next increment), each cycling through Design, Development, Deploy and Operate.
─ An Improved Governance and Operating Model is established across all phases.
MIKE2.0 Methodology: Phase Overview – Typical Activities Conducted as part of the Strategy Phases
Phase 1 – Business Assessment and Strategy Definition Blueprint
1.1 Strategic Mobilisation
1.2 Enterprise Information Management Awareness
1.3 Overall Business Strategy for Information Development
1.4 Organisational QuickScan for Information Development
1.5 Future-State Vision for Information Management
1.6 Data Governance Sponsorship and Scope
1.7 Initial Data Governance Organisation
1.8 Business Blueprint Completion
1.9 Programme Review
Phase 2 – Technology Assessment and Selection Blueprint
2.1 Strategic Requirements for BI Application Development
2.2 Strategic Requirements for Technology Backplane Development
2.3 Strategic Non-Functional Requirements
2.4 Current-State Logical Architecture
2.5 Future-State Logical Architecture and Gap Analysis
2.6 Future-State Physical Architecture and Vendor Selection
2.7 Data Governance Policies
2.8 Data Standards
2.9 Software Development Lifecycle Preparation
2.10 Metadata Driven Architecture
2.11 Technology Blueprint Completion
MIKE2.0 Methodology: Task Overview – Tasks 1.3.2 and 1.3.3 Define Strategic CSFs and KPIs
Activity 1.3 Overall Business Strategy for Information Development
Task 1.3.1 Define Strategic Business Vision
Task 1.3.2 Define Strategic Critical Success Factors (CSFs)
Task 1.3.3 Define Strategic Key Performance Indicators (KPIs)
Task 1.3.4 Define Strategic Success Measures
Task 1.3.5 Define Strategic Change Drivers
Task 1.3.7 Define High-Level Information Requirements
MIKE2.0 Methodology: Task Overview – Tasks 1.3.2 and 1.3.3 Define Strategic CSFs and KPIs
[Figure: spectrum of reporting techniques – What-if Analysis, Balanced Scorecard, Quantitative, Linear]
Analytical Reporting provides focus to address the KPIs which drive the business
Critical Success Factors (CSFs)
Key Performance Indicators (KPIs)
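To make the CSF/KPI distinction concrete: a CSF states what must go right for the strategy, while each KPI is a measurable proxy for it with a target. A hypothetical sketch – every name, value and threshold below is invented for illustration, not taken from the MIKE2.0 material:

```python
# Hypothetical CSF with two KPIs that measure it.
csf = "Retain high-value customers"

kpis = [
    # (name, actual value, target, higher_is_better)
    ("monthly_churn_rate_pct", 2.1, 2.5, False),
    ("avg_customer_satisfaction", 7.8, 8.0, True),
]

def kpi_status(actual, target, higher_is_better):
    """GREEN when the actual meets the target in the right direction."""
    ok = actual >= target if higher_is_better else actual <= target
    return "GREEN" if ok else "RED"

# Roll the KPIs up into a simple dashboard row for the CSF.
report = {name: kpi_status(actual, target, hib)
          for name, actual, target, hib in kpis}
print(report)  # {'monthly_churn_rate_pct': 'GREEN', 'avg_customer_satisfaction': 'RED'}
```

The direction flag matters: a churn rate is "good" when low, a satisfaction score when high, and conflating the two is a common dashboard bug.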
MIKE2.0 Methodology: Task Overview – Task 1.5.7 Define Future-State Conceptual Data Model
Activity 1.5 Future-State Vision for Information Management
1.5.1 Introduce Leading Business Practices for Information Management
1.5.2 Define Future-State Business Alternatives
1.5.3 Define Information Management Guiding Principles
1.5.4 Define Technology Architecture Guiding Principles
1.5.5 Define IT Guiding Principles (Technology Backplane Delivery Principles)
1.5.6 Define Future-State Information Process Model
1.5.7 Define Future-State Conceptual Data Model
1.5.8 Define Future-State Conceptual Architecture
1.5.9 Define Source-to-Target Matrix
1.5.10 Define High-Level Recommendations for Solution Architecture
MIKE2.0 Methodology: Task Overview – Task 1.5.7 Define Future-State Conceptual Data Model
The Conceptual Model records the broad objects or things (sometimes called 'subject areas') that the business interacts with, and names the relationships between them. Its purpose is to discover the big-ticket items and to name them in an agreed way.
Subject areas: Party (people/organisations of interest and their relationships), Campaign, Organization, Event (content/transactions, etc.), Channel (ATM, kiosk, etc.), Features, Product, Location, Arrangement (accounts, etc.)
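Because a conceptual model is nothing more than agreed subject-area names plus named relationships, it can be captured in a machine-readable form very early. A sketch using the subject areas above – the relationship verbs here are illustrative assumptions, not taken from the source diagram:

```python
# Subject areas from the conceptual model above.
subject_areas = {"Party", "Campaign", "Organization", "Event", "Channel",
                 "Features", "Product", "Location", "Arrangement"}

# Named relationships; the verbs are invented for illustration.
relationships = [
    ("Party", "holds", "Arrangement"),
    ("Campaign", "offers", "Product"),
    ("Event", "occurs_via", "Channel"),
    ("Product", "has", "Features"),
]

# The value of agreeing names up front: any relationship that mentions an
# unagreed subject area is caught immediately.
for source, verb, target in relationships:
    assert source in subject_areas, f"unagreed subject area: {source}"
    assert target in subject_areas, f"unagreed subject area: {target}"

# Render the model as plain statements for business review.
for source, verb, target in relationships:
    print(f"{source} {verb.replace('_', ' ')} {target}")
```

Keeping the model this lightweight is deliberate: at the conceptual level the deliverable is the agreed vocabulary, not attributes or keys.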
MIKE2.0 Methodology: Task Overview – Task 1.5.10 High-Level Solution Architecture Options
MIKE2.0 Methodology: Task Overview – Task 1.5.10 High-Level Solution Architecture Options
Shown below are sample outputs of high-level Solution Architecture options at the level they would be produced for this task. Typically, there will be a few architecture models with supporting text.
This proposed solution includes three viable options:
1. Use a vendor model as the base logical data model for the integrated Operational Data Store, going through a map-and-gap exercise to complete the model. This model is closely aligned to the existing data classification/taxonomy model that has been adopted organisation-wide.
2. Develop and build a hybrid data model from the data models already used across the organisation's existing systems. These base models will need to be supplemented and integrated with other models currently used in enterprise applications.
3. Develop and build a logical, normalised data model in-house, based on the existing data classification/taxonomy model that has been adopted organisation-wide and a well-defined set of user requirements.
[Figure: three candidate architectures – Option 1: vendor model plus reference model; Option 2: hybrid built from existing systems (CRM, product systems, contract admin, pricing systems, System XXX, System YYY); Option 3: in-house reference model]
MIKE2.0 Methodology: Task Overview – Task 2.11.3 Define Capability Deployment Timeline
Activity 2.11 Technology Blueprint Completion
Task 2.11.1 Revise Blueprint Architecture Models
Task 2.11.2 Define Major Technology Risks and Constraints
Task 2.11.3 Define Business and Technology Capability Deployment Timeline
Task 2.11.4 Revise Business Case
Task 2.11.5 Define Roadmap Mission Statements
Task 2.11.6 Assemble Key Messages to Complete Technology Blueprint
First Increment
Enterprise-wide stakeholder community definition with roles and responsibilities
First enterprise-wide Enterprise Warehousing Workshop
Functional capabilities of a comprehensive ODS, Warehouse and Data Mart environment
Enterprise priorities mapped to the functional capabilities
Detailed Integrated Programme of Works
Detailed integration methodology and implementation process
Initial Integrated Data Model
Initial Integrated Metadata Model
Enterprise-wide standards for attribute models and message models
Functional requirements for the warehousing Info-Structure
Initial data schemas allocated in a physical environment
Initial source systems identified for initial attributes
Business rules for all data cleansing identified
Continuing analysis tasks identified
Initial warehouse operational for testing and validation

Second Increment
Completed analysis on the availability of sources for cost information (e.g., atomic data and Cross-Over Tables)
Completed analysis for Customer and Product Profitability Analysis
Completed analysis on all cross-sectional generating events
Completed 'Whole of Customer' matching strategy across households and products
Production use of the initial data warehouse implementation
Full-scale sourcing for multiple retail products
Initial sourcing for customers and products
Second phase of household matching and first phase of product matching
Metadata repository available in the production environment
An ongoing leader of enterprise information established
Second enterprise-wide workshop on data warehousing held
First EIS dashboard based upon the Enterprise Data Warehouse deployed
Second release of the decision support models for DSS

Third Increment
Source implementations (e.g., atomic data and Cross-Over Tables) for cost information
Initial implementations for Customer and Product Profitability Analysis
Metadata management applications extended to a limited user 'self service' environment
Messaging and real-time Info-Structure implemented for the initial round of ODS, Warehouse and Mart access
Customer and Product ODS implementation
AR closed loop to the warehouse designed
Finance and service information designed for incorporation in the EDW
Proprietary environment used as a Data Mart
Ongoing data quality monitoring in place
EDW development and management organisation established
EDW contains base information for accounts, customers and products
[Figure: first-increment timeline – Integrated Metadata Management running throughout; Prod 1 and Prod 2 data models and source-system attribute selection feeding an Integrated Data Model; initial warehouse implementation (revenue / whole-of-customer) with course correction from the partial ODS/Warehouse; full-scale sourcing for Prod 1 and Prod 2; Info-Structure/ODBC integration with the ODS/Warehouse; integrated ODS/Warehouse production implementation; iterative application integration; customer analysis and design integrated via initial and full sourcing]
MIKE2.0 Methodology: Task Overview – Task 2.11.3 Define Capability Deployment Timeline
Whole of Customer Revenue View – The focus of this component is on bringing together the 'Whole of Customer' for Product 1 and Product 2 from the perspective of revenue. Initial matching of customers will begin; however, this will not prevent product operational systems from using the information from their own perspectives.
Whole of Product Revenue View – The focus of this component is to begin the 'Whole of Product' view. The revenue information comes from XXXXX (source: XXXX) and XXXX. Product revenue will be tracked by the current segmentation in these systems as well as the product structures in these systems.
Decommissioning – This thread of activities will focus on the decommissioning of the current high-maintenance ODS/MIS implementations. The XXXXXXX, XXXXX, XXXXX and XXXXXX databases are key to the decommissioning process. Unneeded capabilities can be terminated while others are targeted for the new environment.
Dependent Data Mart Formulation – The Dependent Data Marts address the specific business support needs of particular Enterprise business constituencies. The Marts can contain historical as well as ODS information. They will be used for a number of activities such as reporting and query as well as analytical activities.
Common Info-Structure – This effort focuses on the hardware and network environment for the implementation and use of the Enterprise Data Warehouse Environment. ETL and EAI implementations will be key. The hardware options will address ODS, Warehouse and Mart Environments.
Complex Customer/Product Formulation – The focus of this effort will be to formulate some of the more complex definitions of customer and product. These activities, initially, will perform the required customer and product business analysis to enhance the warehouse data models.
Cross-Sectional Formulations – The focus of these efforts will be to establish the initial understandings of how the warehouse information must be summarized. Examples are: week, month, quarter, year, identified customer or product event.
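In implementation terms, the cross-sectional formulations above are aggregations of warehouse events at agreed grains (week, month, quarter, year, or a customer/product event). A minimal sketch with the standard library, using invented revenue events:

```python
from collections import defaultdict
from datetime import date

# Invented revenue events: (event date, amount) — for illustration only.
events = [(date(2008, 1, 15), 100.0),
          (date(2008, 1, 28), 50.0),
          (date(2008, 2, 3), 75.0),
          (date(2008, 4, 9), 200.0)]

def summarise(events, grain):
    """Roll events up to a cross-sectional grain: 'month', 'quarter' or 'year'."""
    totals = defaultdict(float)
    for d, amount in events:
        if grain == "month":
            key = (d.year, d.month)
        elif grain == "quarter":
            key = (d.year, (d.month - 1) // 3 + 1)  # Jan-Mar -> Q1, etc.
        else:  # year
            key = (d.year,)
        totals[key] += amount
    return dict(totals)

print(summarise(events, "quarter"))
# {(2008, 1): 225.0, (2008, 2): 200.0}
```

The point of establishing these grains early is exactly the one the slide makes: once the agreed summarisation keys are known, the same events can be rolled up consistently everywhere instead of each mart inventing its own calendar.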
[Figure: capability deployment timeline spanning the First, Second and Third Increments, covering: initial use of Prod 1 and local info; customer and product revenue ODS and mart implementations; Prod 1 and local customer revenue loads; customer matching across X and Y products; taxonomies and mappings of customer and product profiles; extended customer and product definitions; Product 3 and new product models; common data model; product aggregates, taxonomies and summaries; product revenue to projects analysis; EIS dashboards and decision models; daily, weekly, monthly, yearly and event-driven ODS support; DSS decision models and information support; data mart models, tools and constituency requirements; current ODS/MIS user and function inventories (functions to migrate or discontinue); SOA/Info-Structure and security implementation; DB hardware implementation and ongoing data quality improvement; ETL and warehouse tools implemented; decommissioning and discontinuing; initial data mart implementation]
MIKE2.0 Methodology: Phase Overview – Roadmap and Foundation Activities
The MIKE2.0 Roadmap covers the planning, requirements and conceptual design for each increment. Foundation Activities are those we want to get out "in front" of in the information management initiative.
Phase 3 – Information Management Roadmap and Foundation Activities
3.1 Information Management Roadmap Overview
3.2 Testing and Deployment Plan
3.3 Software Development Readiness
3.4 Detailed Release Requirements
3.5 Business Scope for Improved Data Governance
3.6 Enterprise Information Architecture
3.7 Root Cause Analysis on Data Governance Issues
3.8 Data Governance Metrics
3.9 Database Design
3.10 Message Modelling
3.11 Data Profiling
3.12 Data Re-Engineering
3.13 Business Intelligence Initial Design and Prototype
3.14 Solution Architecture Definition/Revision
The MIKE2.0 Methodology: Activity 3.4 Detailed Release Requirements
Activity 3.4 Detailed Release Requirements
Task 3.4.1 Validate Strategic Business Requirements
Task 3.4.2 Refine Strategic Business Requirements to Detailed Requirements
Task 3.4.3 Categorise Detailed Business Requirements
Task 3.4.4 Prioritise Detailed Business Requirements
Task 3.4.5 Determine Detailed Analytical Requirements
The MIKE2.0 Methodology: Activity 3.4 Detailed Release Requirements
[Diagram: example physical data model for a batch-run management schema. Run and audit tables (run_event_log, run_instance_module, run_module, run_suite, run_suite_instance, run_suite_module, run_instance_item, run_suite_table_stats) are linked to metadata tables (mng_source, mng_table, mng_column, MNG_SERVICE, MNG_SERVICE_COLLECTION, MNG_SERVICE_METADATA, MNG_CUSTOMER_5LETTER, MNG_DME_SHEET) and business tables (run_customer, run_document, Run_Product_Offer, run_collection, run_items_in_collection, run_file_register and related associative tables). Each entity carries typed columns (e.g. run_suite_id: NUMBER(10), update_uid: VARCHAR2(40), update_dte: DATE) and named relationships such as triggers, calls, processed_by, stored_in and fed_into.]
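To make the physical model above concrete, the sketch below implements one small slice of it – the run_suite / run_module pair and the run_suite_module associative table that sequences modules within a suite. It is illustrative only: the Oracle types (NUMBER, VARCHAR2) are adapted to SQLite, only a few columns are kept, and the sample suite and module names are invented.

```python
import sqlite3

# Illustrative slice of the run-management model, adapted to SQLite.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE run_module (
    module_id      INTEGER PRIMARY KEY,
    module_name    TEXT NOT NULL,
    module_command TEXT
);
CREATE TABLE run_suite (
    run_suite_id INTEGER PRIMARY KEY,
    suite_name   TEXT NOT NULL
);
-- Associative table: which modules belong to a suite, and in what order.
CREATE TABLE run_suite_module (
    run_suite_id        INTEGER REFERENCES run_suite,
    module_id           INTEGER REFERENCES run_module,
    module_sequence_num INTEGER,
    PRIMARY KEY (run_suite_id, module_id)
);
""")
conn.executemany("INSERT INTO run_module VALUES (?, ?, ?)",
                 [(1, "extract_customers", "ext_cust.sh"),
                  (2, "load_staging", "load_stg.sh")])
conn.execute("INSERT INTO run_suite VALUES (10, 'nightly_load')")
conn.executemany("INSERT INTO run_suite_module VALUES (?, ?, ?)",
                 [(10, 1, 1), (10, 2, 2)])

# Resolve a suite's modules in execution order via the associative table.
rows = conn.execute("""
    SELECT m.module_name
    FROM run_suite s
    JOIN run_suite_module sm ON sm.run_suite_id = s.run_suite_id
    JOIN run_module m        ON m.module_id = sm.module_id
    WHERE s.suite_name = 'nightly_load'
    ORDER BY sm.module_sequence_num
""").fetchall()
print([r[0] for r in rows])  # ['extract_customers', 'load_staging']
```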
[Diagram: example conceptual model – Party (people/organisations of interest and their relationships), Campaign, Organization, Event (content/transactions, etc.), Channel (ATM, kiosk, etc.), Features, Product, Location and Arrangement (accounts, etc.).]
The MIKE2.0 Methodology Task 3.9.1 Develop Logical Data Model
Activity 3.9 Database Design
Task 3.9.1 Develop Logical Data Model
Task 3.9.2 Develop Physical Data Model
The MIKE2.0 Methodology Task 3.9.1 Develop Logical Data Model
The Logical Data Model (LDM) is a more formal representation of the conceptual model and contains far greater supporting detail. Relational theory is used to normalise the data: like objects may be grouped into supertypes and subtypes, and many-to-many relationships are resolved.
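Both LDM techniques named above – supertype/subtype grouping and resolving a many-to-many through an associative entity – can be sketched in a few tables. The fragment below is hypothetical, loosely based on the conceptual entities on the earlier slide (Party, Arrangement); none of the names come from an actual MIKE2.0 model.

```python
import sqlite3

# Hypothetical LDM fragment: Party is a supertype of Person and
# Organisation, and the many-to-many "Party participates in Arrangement"
# is resolved through an associative entity. All names are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE party (
    party_id   INTEGER PRIMARY KEY,
    party_type TEXT CHECK (party_type IN ('PERSON', 'ORGANISATION'))
);
CREATE TABLE person (        -- subtype: one row per PERSON supertype row
    party_id   INTEGER PRIMARY KEY REFERENCES party,
    family_nme TEXT
);
CREATE TABLE organisation (  -- subtype: one row per ORGANISATION row
    party_id INTEGER PRIMARY KEY REFERENCES party,
    org_nme  TEXT
);
CREATE TABLE arrangement (
    arrangement_id  INTEGER PRIMARY KEY,
    arrangement_nme TEXT
);
-- Associative entity resolving the many-to-many relationship:
-- one party may hold many arrangements, and vice versa.
CREATE TABLE party_arrangement (
    party_id       INTEGER REFERENCES party,
    arrangement_id INTEGER REFERENCES arrangement,
    role_cde       TEXT,
    PRIMARY KEY (party_id, arrangement_id)
);
""")
conn.execute("INSERT INTO party VALUES (1, 'PERSON')")
conn.execute("INSERT INTO person VALUES (1, 'Smith')")
conn.execute("INSERT INTO arrangement VALUES (100, 'Savings Account')")
conn.execute("INSERT INTO party_arrangement VALUES (1, 100, 'OWNER')")

count = conn.execute(
    "SELECT COUNT(*) FROM party_arrangement WHERE party_id = 1"
).fetchone()[0]
```

In the physical model shown earlier, tables such as run_suite_module play exactly this associative role between run_suite and run_module.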
The MIKE2.0 Methodology Supporting Assets – Mapping to an Off the Shelf Data Model
Subset of a Supporting Asset:
In general, the design of a Data Mart (DM) using dimensional modelling (a star schema) involves the following approach:
Identify a business process or requirement (e.g. ALM requirements, MIS reports). A DM is designed around "known" requirements
Identify the lowest level of detail for the process (e.g. an individual transaction, or an individual daily/monthly snapshot); this grain is what the fact table for the process represents
Analyse the elements of the business process or requirement and identify the dimensions, measures, etc. of the process, noting related characteristics (e.g. hierarchies, aggregates, history)
For each business process, identify the fact table, the related dimension tables, their contents and the relationships between the tables. Where there are multiple business processes or requirements within a subject area (e.g. ALM, Profitability), this approach continues for each
Note: If the DM logical design were based on a subset of the FSLDM, a process similar to that described for designing a DW would be followed.
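The steps above can be sketched as a minimal star schema: a fact table at transaction grain carrying the measures, joined to dimension tables that support rollups along their hierarchies. The table and column names below are invented for illustration, not taken from the FSLDM or any supporting asset.

```python
import sqlite3

# Sketch of the approach above for a hypothetical sales data mart:
# the grain is one row per individual transaction in the fact table;
# revenue and volume are the measures; Time and Product are dimensions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_time (
    time_key  INTEGER PRIMARY KEY,
    month_nme TEXT
);
CREATE TABLE dim_product (
    product_key INTEGER PRIMARY KEY,
    product_nme TEXT
);
CREATE TABLE fact_sales (          -- grain: one row per transaction
    time_key    INTEGER REFERENCES dim_time,
    product_key INTEGER REFERENCES dim_product,
    revenue_amt REAL,
    volume_num  INTEGER
);
""")
conn.execute("INSERT INTO dim_time VALUES (1, 'Jan')")
conn.execute("INSERT INTO dim_product VALUES (1, 'Widget')")
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                 [(1, 1, 100.0, 2), (1, 1, 50.0, 1)])

# Roll the transaction-level facts up the time hierarchy (to month).
month, revenue, volume = conn.execute("""
    SELECT t.month_nme, SUM(f.revenue_amt), SUM(f.volume_num)
    FROM fact_sales f
    JOIN dim_time t ON t.time_key = f.time_key
    GROUP BY t.month_nme
""").fetchone()
```

Because the grain is the individual transaction, any coarser view (monthly, by product class, etc.) can be derived by aggregation rather than redesign.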
[Diagram: mapping a 3NF general (common) model – Application, CIF, Extension CIF, Facility Account, Customer Account Relationship, Application Daily Summary, Application Monthly Summary, Application Facility Account, Application Transaction – to a customised (complex) star schema with a Sales fact ($ Revenue, Volume measures) and Geography, Time, Color and Product dimensions.]
Better Business Intelligence Lessons Learned
Define a Strategy that can be Executed
Pair a large-scale top-down strategy with a narrow, detailed bottom-up engagement where necessary
Always define the tactical within the strategic and plan for re-factoring and continuous improvement in the overall programme plan
Focus on improving key data elements – Don't do everything at once
Design a Strategy that is Flexible and Meaningful to the Business
Expect business requirements to change – Provide an infrastructure to handle a dynamic business
Know your risk areas in each implementation increment – Focus on foundation activities first
Be aware of technology lock-in and know the cost of "getting out" – Use an open approach
Break through limiting factors in legacy technology – This is the opportunity to kill the sacred cows
Keep the Business Engaged
Communicate continuously on the planned approach defined in the strategy – The overall Blueprint is the communications document for the life of the programme
Always focus on the business case – Even for initial infrastructure initiatives or replacement activities