

SECTION 11 TECHNOLOGY SOLUTION RFP § 4.1.11 and § 3.2.2

11.1 Introduction

11.2 Overall DW/DSS Solution Architecture

11.2.1 Technical Architecture Overview

11.2.2 Data Architecture

11.3 Infrastructure

11.3.1 Environments

11.3.2 Hardware / Operating System Component

11.4 Data Architecture

11.4.1 Data Model

11.4.2 Data Acquisition

11.4.3 Data Access

11.5 Data Center and Facilities

11.5.1 Secure Hosting Site and All Work in the U.S.

11.6 Technical Support Services

11.6.1 Help Desk / Customer Support

11.6.2 Training / Ongoing Education

11.6.3 Data Management and Database Updates

11.1 INTRODUCTION Thomson Reuters is proposing a solution that includes three primary licensed commercial-off-the-shelf (COTS) products that will be accessed by BMS users: Advantage Suite, J-SURS, and i-Sight. Other software, as noted below, is also included in our solution. These COTS applications will be configured to BMS requirements, including the specifics of operations, data model, and connection to systems that supply data. The Advantage Suite, J-SURS, and i-Sight applications are described in Section 10 of the proposal.

While we will begin the project with separate data extracts for each product, we will migrate to a model where data is added to Advantage Suite, which then feeds data to J-SURS. In both scenarios, data will be reconciled to the record and financially to the penny. Our solution is in production today in several state Medicaid agencies; both we and our clients have found it to be predictable and maintainable.

We propose to host this solution in our Tier III Thomson Reuters Data Center near Minneapolis, MN. This shared server environment provides secure, reliable, and responsive information access to our hundreds of clients and thousands of end-users. We will provide all of the hardware, software, and other support necessary for the operation of our proposed DW/DSS applications.

PAGE 164 May 17, 2011

©2011 Thomson Reuters ● All Rights Reserved ● www.thomsonreuters.com

11.2 Overall DW/DSS Solution Architecture Our proposed solution is aligned with the architecture of the "To Be" vision of the Data Warehouse / Decision Support System in your 2010 APD submission to CMS.

11.2.1 Technical Architecture Overview We propose a multi-tiered architecture composed of database, application, and presentation layers. The use of multiple physical tiers allows each tier to focus on the task for which it is best suited, and implementations can be based on cost-effective commodity components in flexible and scalable deployments. With an appropriate division of functions between tiers, the need for expensive high-speed communications links is eliminated. As usage grows, servers can be upgraded or even replicated to add capacity without redesigning the entire solution. The application tier includes standard business intelligence and analysis tools available via Enterprise License Agreements and proven COTS-based healthcare analytic tools. Together these applications will support all BMS needs and all user levels.

11.2.2 Data Architecture As shown in the diagram below, our proposed data architecture is a centralized dimensional data warehouse that integrates the data from the MMIS and other systems. Initially, the warehouse will be loaded with 4 years of MMIS data. It will grow over time to house 8 years of MMIS and eligibility data.

• Advantage Suite (Adv Suite) – Advantage Suite is the core of our proposed DW/DSS solution. It includes Advantage Build for database builds and updates, Ad Hoc Report Writer for user query interface, Measures Engine for performance measures, the Patient Health Record, and a Cognos business intelligence engine.

PAGE 165 May 17, 2011

©2011 Thomson Reuters ● All Rights Reserved ● www.thomsonreuters.com

• J-SURS – Surveillance Utilization Review System (SURS) capabilities for program integrity.

• i-Sight – Case management and case tracking to support program integrity activities.

• DataStage – ETL tool.

• DataProbe – This data exploration and investigation tool will be used by Thomson Reuters staff.

• Cognos – Part of Advantage Suite; has Report Studio and Event Studio functions for power users.

• ESRI – Geographical analysis/mapping software (ArcGIS) for designated users.

• SAS – Software for designated users doing advanced statistical analysis.

The major benefits of our proposed approach are: (1) enterprise level data integration, and (2) support of different views and specialized uses of data. This model supports your efforts to increase the quality and efficiency of healthcare through better decision-making, because it:

• Provides the flexibility to add, remove and change the products and applications using enterprise data, without requiring fundamental changes to the system.

• Allows for an incremental data-driven approach, with rapid deployment of key applications.

• Provides the flexibility of broad general use and ease in re-purposing data for specialized use.

• Is easy to grow and adapt to meet ever-changing needs.

11.2.2.1 Design Components The foundation of our solution is a data warehouse where we load and integrate the data that are required to meet the needs of BMS. The following concepts are central to our approach:

• A Client-Centric Data Model. For retrospective analysis, we employ a carefully designed person-centric data model. This approach makes it easy to understand how clients access care across the various care settings over time.

• Exceptional Attention to Data Quality. Our data quality assessment processes are the best in the industry, and we are well known for the reliability of the data in our databases. We have invested heavily in an automated, rules-based approach that improves data accuracy and reliability and allows human staff time to be focused on data quality for analysis and improvement.

• Data Standardization and Enhancement. We believe that raw data from disparate sources must be standardized to make it easy to use. We enhance the data with multiple forms of aggregation, summarization, and clinical groupings to further support analysis.

• Healthcare-Intelligent Content as a Vital Element. We imbue every aspect of the enterprise decision support installation with healthcare-intelligent content. This includes clinical aggregates in the database, algorithms and measures in the applications, and tools for end users of all levels.

• Continuous Growth and Improvement. We employ a business model that ensures that our clients’ DW/DSS solutions stay current with the technology and the healthcare industry. A common complaint in Medicaid is that warehouses become dumping grounds for raw data that is hard to retrieve, difficult to understand, poorly documented, and impossible to integrate with other data to make it really useful. We take a proactive approach to growing the quality and usefulness of the DW/DSS every year.

• Metadata. Business, technical and operational metadata is managed in a central repository that is accessible to the users and applications of the DW/DSS. It is used by both business and technical users to enhance their understanding of the data and the processes that populate and use the data.

11.2.2.2 DataProbe Exploration System We will maintain a DataProbe environment synchronized with the data warehouse for our staff to perform data quality investigation and special purpose analytics. It will also serve a failover function. DataProbe supports power users whose data manipulation needs go beyond standard and ad hoc reporting. It was designed for very fast exploratory investigations of large volumes of healthcare data. We use it extensively to manage our clients’ databases and support data quality efforts.

11.2.2.3 System Ownership and Current and Future Coding Standards

RFP 3.1.6 (as amended). Agree that BMS retains ownership of all data, procedures, programs and all materials developed during DDI and Operations, as well as the initial licensing for installed COTS. Manufacturers’ support and maintenance for the proprietary COTS software licensing subsequent to the initial install must be provided only for the life of the contract. The source code will be held in escrow with a third-party agent acceptable to the State.

Thomson Reuters agrees to do so.

RFP 3.1.7 Agree to incorporate all applicable current and future coding standards and legislated or program necessary data requirements to ensure that the DW/DSS is current in its ability to accept and appropriately employ new standards and requirements as they occur, including, but not limited to, ICD-10, HIPAA v5010, the Patient Protection and Affordable Care Act (PPACA) and the Health Information Technology for Economic and Clinical Health Act (HITECH).

Our solution fully complies and will continue to be compliant with any current and future Federal and State laws regarding privacy and confidentiality.

We agree to incorporate all applicable current and future coding standards and legislated or program necessary data requirements to ensure that the DW/DSS is current in its ability to accept and appropriately employ new standards and requirements as they occur, including those listed. Our support for ICD-10 is a good example of how our solution supports this new coding standard. We have invested several years preparing our COTS software to be available and in production for our clients prior to the implementation date of October 1, 2013.

11.3 Infrastructure RFP § 3.2.2, Item 1

RFP 3.1.3 Employ a Relational Database Management System (RDBMS) or Object Oriented Database Management System (OODBMS), a data infrastructure that is easily configurable, role-based with 24x7 access to data, and use best in class analysis tools.

Thomson Reuters agrees to do so.

Advantage Suite provides best-in-class analysis tools and is built on a single, well-integrated, analytically ready dimensional data warehouse. The data warehouse is implemented in Oracle 11g and uses a high-performance star schema data model. Oracle uses a proven database design paradigm that enables the database to be easily configurable. Access to the data is controlled via security views that support role-based security profiles of end-users. We will support 24x7 access to the data warehouse, except during the state-approved maintenance windows specified in Service Level Agreement 1 in RFP Appendix 7.
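To illustrate the security-view pattern described above, the following minimal sketch uses Python’s built-in sqlite3 module in place of Oracle 11g (where a view would typically filter on a session context such as SYS_CONTEXT); every table, column, and role name here is hypothetical, not the actual Advantage Suite schema.

```python
# Illustrative sketch only: role-based row filtering in the style of a
# security view, with sqlite3 standing in for Oracle 11g.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE claim_fact (claim_id INTEGER, program TEXT, paid_amount REAL);
CREATE TABLE user_role  (user_name TEXT, program TEXT);
INSERT INTO claim_fact VALUES (1, 'PHARMACY', 42.50), (2, 'DENTAL', 110.00);
INSERT INTO user_role  VALUES ('analyst_a', 'PHARMACY');
""")

def secure_claims(user_name):
    # The join to user_role enforces the role-based profile: a user sees
    # only fact rows for programs mapped to their role.
    return conn.execute(
        """SELECT f.claim_id, f.program, f.paid_amount
           FROM claim_fact f
           JOIN user_role r ON r.program = f.program
           WHERE r.user_name = ?""",
        (user_name,),
    ).fetchall()

print(secure_claims("analyst_a"))  # [(1, 'PHARMACY', 42.5)]; no DENTAL row
```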

Our proposed solution will be hosted and maintained in our secure, state-of-the-art Data Center. The data for the production system will be updated monthly. We employ rigorous processes to assure data quality. Our database construction and system administration processes are fully compliant with HIPAA guidelines. The other COTS applications that complete our solution (J-SURS, i-Sight, and DataProbe) use a variety of data structures that are purpose-built for the data access requirements of the application.

11.3.1 Environments 11.3.1.1 Services The Thomson Reuters Data Center offers a full service ASP arrangement. We provide the hardware, software, and operating infrastructure to support the DW/DSS. We also provide all technical and database management functions needed to ensure that business users receive timely access to the proposed DW/DSS applications. Services include not only the hardware and operations support that facilities vendors typically provide, but also DBA services, database update services, and version upgrade services. All services are provided by staff with extensive experience with our solution components. Rather than commit to a particular server and disk configuration, we commit to providing a level of service that ensures users receive the information they need within the timeframe they expect in order to make sound business decisions. Advantage Suite will be on a shared environment, with BMS data partitioned from other Thomson Reuters client data through use of a separate Oracle database schema.

In the service center arrangement, we take responsibility for all IT functions related to housing, managing, and updating our clients’ data and solution components. Our clients’ primary responsibility is to provide clean data in the agreed-upon formats and within the agreed-upon timeframes for the scheduled database updates. The following table describes the specific functions we provide:

ASP Function Description

Facilities Management

We provide facility and system security and manage a secure Data Center with raised floor, air conditioning, fire protection, conditioned power, and adequate space for servers.

Operations Management

We provide 24x7 management of the hardware. This function includes mounting tapes, server console operations, and anything that requires physical access to the hardware. If a production process fails, an alert notifies the operations staff, who call the appropriate support staff to restart the process.

Network Management

Our data communications and network engineers are responsible for making sure users have connectivity to the applications. They work with all elements of the communications system, including the networks, firewalls, switches, routers, cabling, and network security.

System Administration

The system administration staff interact with servers at the operating system level. They install and upgrade the operating system, apply patches, and “program” the servers for best performance and reliability given the applications that run on them. Often, they will also address disk space management and issue resolution.

Database Administration and Management

A Thomson Reuters DBA and data management staff will manage the organization and content of the database, which is made up of tables and indexes. This function includes evaluating performance issues, assisting in growth planning, systems analysis, database design and testing, and keeping the version of the DBMS up to date.

Database Updates and History Roll-off

We will perform and be responsible for processing all database updates. We will update claim/service tables monthly. As part of this update process, we will perform quality testing to ensure that the new data is accurate, of high quality, and fully reconciled, and that the update and roll-off is done correctly.


Software Development and Testing

We have a large staff of experienced software designers and developers engaged in maintenance and enhancement of our COTS software products. Applying proven software development methodologies based on the company’s 30 years of experience in the healthcare decision support field, these staff are responsible for systems analysis, architecture design, and code development and testing.

Software Upgrades

We will perform and be responsible for processing all software upgrades. In the service center arrangement, we routinely implement all software upgrades within two to three weeks after the release of the upgrade, thereby ensuring that you have access to the most recent version of the software.

11.3.1.2 Data Center Facilities


11.3.1.3 DW/DSS Environment Specifics


11.3.2 Hardware / Operating System Component We will use the following open systems components:


Requirement Thomson Reuters Response

…ensure that our software operating infrastructure and DW/DSS applications evolve to best meet our client needs.

Hardware monitoring capabilities

We use a variety of tools to monitor our servers and environment. These tools include HP OpenView, System Insight Manager, SiteScope, Nagios, and some internally developed tools/scripts. The monitoring tools can use SNMP, e-mail, and/or log files. All generated alerts are sent to a centralized syslog server. We also have an internally developed performance/report monitoring process that serves as a benchmark and threshold indicator on report performance (a small sketch of this idea follows the table).

Design for capacity, scalability, and redundancy

We have built up extensive hardware and software modeling worksheets from our years of experience implementing and supporting DW/DSS systems. These models look at a variety of parameters such as total number of users, number of concurrent users, number of simultaneous reports/queries being executed, size of database, complexity of queries, growth rates, volumes, application components being utilized, hardware performance specifications, environment configuration, performance needs, and redundancy needs. These detailed models help us identify the right mix of hardware/software to meet the capacity, scalability, and redundancy needs. All hardware will be configured to meet the capacity needs defined in the RFP, growth estimates, and associated service levels. However, we can scale the environment either vertically or horizontally as needs change. For instance, we can scale vertically by adding memory or faster processors to the blade servers, and we can scale horizontally by increasing the total number of blades within the environment.

Within the network, we can add additional switches or increase the number of network ports within the existing switches. Within the disk environment, we can add additional cabinets or increase the number of disk spindles within existing cabinets. The environment will be designed for 24x7x365 availability by utilizing redundant components, failover capacity, and disaster recovery capacity and plans. Redundant components will be used initially to minimize risk in the event of a component failure. This includes redundant data communication pipes, switches, supervisor modules, routers, ports, firewalls, servers, I/O bays, disk controllers, network interfaces, SAN ports/switches, fans, power supplies, and disks. In fact, it is our standard practice to build in redundant components whenever possible because downtime and outages are expensive.

How we plan to meet Service Level Agreements (SLAs) if a hardware component fails

We have built in redundant components, a failover environment, and a disaster recovery environment to ensure SLAs are met. In the event of a component failure, a redundant component such as a CPU, disk drive, power supply, or fan will assume operations. We will then work through our hardware maintenance contracts to schedule a part replacement accordingly. Actual parts replacement and maintenance will go through the change management and configuration management processes and will be scheduled for when the environment is not handling a production load. If a major component fails, or if there is no available redundant component, control will be transferred to the failover environment. This should be a rare occurrence.

Hardware interoperability with other infrastructure components

The hardware and software we use have been rigorously tested for compatibility and interoperability by us, our partners, and the manufacturers. This equipment has been used both within our environment and within our clients’ environments with no known interoperability issues. To maintain interoperability, we utilize strict version control, configuration management, and rigorous testing to ensure any future changes do not negatively affect interoperability. This process can get down to the BIOS or firmware levels running on specific pieces of equipment. It also takes into account manufacturers’ recommendations, such as suggested disk controllers/firmware revisions when utilizing an EMC SAN environment on Linux.
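As a minimal sketch of the benchmark-and-threshold report monitoring mentioned in the table above: the report name, baseline figures, and tolerance below are invented, and the logger writes to the console where a production version would attach a logging.handlers.SysLogHandler aimed at the centralized syslog server.

```python
# Hedged sketch of report-performance threshold monitoring.
import logging
import statistics
import time

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("perf-monitor")

BASELINE_SECS = [4.1, 3.9, 4.3, 4.0, 4.2]  # hypothetical benchmark history

def check_benchmark(report_name, elapsed, tolerance=1.5):
    """Alert when a benchmark report runs slower than baseline * tolerance."""
    baseline = statistics.mean(BASELINE_SECS)
    if elapsed > baseline * tolerance:
        log.warning("%s ran %.1fs (baseline %.1fs): threshold exceeded",
                    report_name, elapsed, baseline)
    else:
        log.info("%s ran %.1fs: within threshold", report_name, elapsed)

start = time.monotonic()
# ... the benchmark report would run here ...
check_benchmark("monthly-claims-summary", time.monotonic() - start)
```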

11.3.3 Network Component


Thomson Reuters agrees to install, configure, maintain, and support all hardware, software, and services up to the point where the environment connects to the BMS wide area network. This includes any necessary services within our local area network. We also agree to provide back-up network connectivity with the capacity to support the system and its components. To prevent issues with this connectivity, it is our standard practice to provide redundancy. Multiple network pipes utilizing multiple vendors will be established to seamlessly and automatically route and manage traffic between the data communications links in the event of an issue and to eliminate any single points of failure.

Thomson Reuters agrees to provide authorized BMS staff with access to the primary facility. We also agree to provide authorized BMS staff with access to the data center and disaster recovery sites upon request and in accordance with our visitor policies and requirements. The list of authorized staff will be reviewed on a periodic basis to ensure only those who need access have it.

Thomson Reuters agrees to provide sufficient BMS network support for as many as 60 users, with an estimated 30 concurrent users. We also agree to support up to ten percent growth (of concurrent and total users) per year on the network. We will utilize a variety of processes, including benchmark queries, performance monitoring tools, periodic reviews, and capacity planning, to ensure performance meets the SLAs. These processes utilize both historical and forward-looking information to plan and implement sufficient production capacity to meet defined SLAs.

Thomson Reuters agrees to provide and maintain all server and network hardware, switches, racks for mounting hardware, power cabling inside the racks, keyboard/video/mouse (KVM) switches and/or terminal servers for access to server consoles, monitors for KVM switches, and the applications, Web pages, and secure sockets layer devices needed to support HTTPS, encrypted network connections, and/or secure sockets layer requirements within the BMS hardware/software network solution.

Thomson Reuters agrees to submit to BMS all plans for connections to the BMS network. We will work with BMS staff to review and finalize how the networks will be connected. All additions or changes to any network configurations will go through the change management process so they are reviewed and approved by both BMS and Thomson Reuters staff before implementation. Where necessary, we will install and maintain data lines for required access to the BMS network from our project site, and we will terminate lines from the project site to the BMS network at the point of demarcation on the BMS network. Also, where necessary, we will establish agreements with telecommunications network vendors to install secure data lines to our data center. The majority of our clients do not require static IP addresses; supporting them may require customization (network set-up) at an additional cost.


Thomson Reuters agrees to ensure all authorized BMS staff and third parties will have remote access to the production, test, UAT, and training environments. We also agree to implement firewalls/proxies between our private network and your network. We agree to provide the necessary operations staff to assist with correcting any problems associated with telecommunication hardware and software.

Our architects, operations staff, and research/development teams are always looking at new and better ways of doing things. We spend a significant amount of time testing, planning, and implementing new technologies so that our overall performance, capacity, and availability are continually improved. These efforts not only focus on telecommunications, but all areas of technology including servers, storage, backups, and databases.

We will work with all the necessary organizations (e.g., BMS, vendors, partners, industry experts) to troubleshoot and resolve interface issues. Our philosophy is a collaborative team approach. An example of this is ongoing communication regarding browser support for the desktops of end-users. As operating systems progress, it will be important for us to coordinate supported versions of browsers for our web applications. We support industry prevalent browser versions.

11.3.4 Software Component Thomson Reuters agrees to review, configure, generate, customize, install, and maintain operating system software, database management software, network software, tool software, and other system software in all environments for the DW/DSS. We will utilize processes such as design reviews, change management, configuration management, incident management, problem management, and release management to manage changes/actions within the environment. We also agree to diagnose problems related to software. These problems will be tracked via incident/problem management systems such as Salesforce and/or HP Service Manager. Application related problems will be sent through our rapid response and development teams. Identified fixes will be implemented in future patches, version upgrades, or code releases. Development activities will be controlled through our SDLC, testing and release management processes. Release management, configuration management and change management will be used to test, verify and implement approved changes.

Thomson Reuters also agrees to manage software versions, patches, and fixes, and we agree to develop and maintain relationships with vendors to keep up to date on new products. In addition, we agree to develop and maintain an inventory of all software, including active versions, licensing information, interdependencies, maintenance information, and support information. This is important for overall management, software upgrades, and disaster recovery. We will assist with analysis of BMS requests for new software for appropriateness to the overall architecture.

We have standards for software installation such as data set names, architecture, and volume names. These standards are important to not only streamline installation and maintenance activities, but also to reduce risks, eliminate errors, and maintain consistency. We will schedule operating system upgrades to accommodate processing schedules and system availability needs of BMS. Any operating system upgrades will be well coordinated through our testing, change, and release management processes. Our standard approach is to first internally test new operating systems within our internal test, QA, and development systems. They are then rolled out to internal non-critical systems for further testing before they are released for installation on production systems.


11.3.5 Database Management Component The following database management software will be used for Advantage Suite:

11.3.6 Administrative Functions

SPECIFICATION Thomson Reuters RESPONSE

TEC AC12.2 The dashboard report feature provides conditional highlighting to alert viewers to the fact that a defined threshold has been exceeded.

TEC AC12.3 Our solution’s alerting functionality automatically alerts the user via e-mail.

TEC AC12.4 Alerts can be triggered by any data threshold, including trend calculations.


TEC AC12.5 Users can cancel any running report, and an administrator can cancel any report.

TEC AC12.6 Reports can be saved as a new version every time they are modified. In addition, reports with prior data can be archived.

TEC AC12.7 Thomson Reuters staff will assist to perform impact analyses based on proposed changes as needed.

TEC AC12.8 Load balancing and clustering will be used to ensure that system load is spread across all available servers. Where appropriate, on the front-end an F5 load balancer will be used to ensure that end-user requests are distributed across multiple servers.

TEC AC12.9 Our solution enables query optimization. Query optimization occurs both automatically and manually, utilizing a number of methods, including the Oracle Query Optimizer, custom-developed code, built-in database tools/functionality, other third-party tools, and user training.

TEC AC12.10 Advantage Suite provides a directory of user-defined objects that function as data views, such as custom subsets. These custom subset definitions can be applied to any report and can be modified by the user at will. A Favorites feature and a Search function help users find the subsets they have stored.

TEC AC12.11 The solution includes public folders and social networking functionality to allow users to share their work with others.

TEC AC12.12 The applications we propose have varying types of time-out features that ensure user sessions do not stay open past a defined period of inactivity.

11.4 Data Architecture 11.4.1 Data Model RFP § 3.2.2, Item 6

We will provide a DW/DSS data model component that is maintained in an open systems modeling tool. ERwin, our tool of choice for logical and physical data modeling, will allow for enhanced data modeling. Import and export of the model metadata for the purposes of MME will be supported using the industry-standard XMI format. This tool includes extensive reporting capabilities and out-of-the-box statistical reports that can be used for deep model analysis. To enforce naming conventions and other modeling standards, we will maintain a domain dictionary that is applied to the model objects. Our tool will support the requirements specified in RFP Appendix 2, Section B.6 – Data Model.

Our data model is based on a proven and expandable Medicaid data model designed for OLAP uses. Using integrated tools, this data model is configured to meet the specific information needs of each state agency. Our pricing is based upon adding up to 250 custom fields to our standard Medicaid data model. Our unique database schema is designed to ensure speed of data retrieval for our healthcare applications. The Advantage Suite database uses a relational star schema model, optimized for analytic query and reporting. A star schema is a type of relational database design ideal for supporting analytic processing. In a star schema, data is organized in two types of tables: fact tables and dimension tables.

• Fact Tables: The key numerical fields that are measured in order to manage the business are stored in the fact tables. Facts are typically numeric, continuous variables that are additive. We consider visits, length of stay, members, and payments as some of the critical facts used to measure healthcare performance. Fact tables are typically “long and slender,” containing many records, but few fields. In an administrative claim and encounter fact table, there is one row for every service.

• Dimension Tables: Dimension tables hold descriptive information about the business. Each dimension table defines a number of attributes (fields). In healthcare, we report on dimensions that describe members, providers, plans, clinical information, and time. The member dimension includes attributes such as age, sex, location, and relation. The provider dimension includes attributes such as type, location, and specialty. The dimension tables include all the textual values used for descriptive purposes (e.g., diagnosis and procedure descriptions). Compared to the fact tables, the dimension tables are “short and wide,” with fewer rows and more fields. Indexes link the dimension tables to the fact tables.

When depicted graphically, as in the figure below, this design resembles a star with large central fact tables surrounded by smaller dimension tables.

[Figure: Star schema diagram. Central fact tables (Services/Encounters, Capitation, Episodes, Eligibility) are surrounded by the dimension tables Member, Procedure, Diagnosis, Plan, Provider, Geographic, Demographic, Drug, and Time Period.]

The star schema has emerged as the preferred database structure for analytic processing. A principal benefit of our Advantage Suite Data Model is performance. We organize the data so that data constraints (e.g., select members older than 19 years with asthma) are applied to the small dimension tables. We apply all of the constraints at once instead of sequentially, thereby reducing the time to retrieve the qualifying facts. Unlike normalized relational data models, the number of joins required is very limited in our Advantage Suite Data Model, greatly improving data access speeds.
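The dimension-first constraint pattern can be shown with a tiny runnable sketch. Here sqlite3 stands in for Oracle 11g, and every table and column name is invented for illustration; the real Advantage Suite model is far larger.

```python
# Minimal star schema: one fact table, two dimension tables.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE member_dim    (member_key INTEGER PRIMARY KEY, age INTEGER, sex TEXT);
CREATE TABLE diagnosis_dim (diag_key   INTEGER PRIMARY KEY, description TEXT);
-- "Long and slender" fact table: one row per service, keys plus measures.
CREATE TABLE service_fact  (member_key INTEGER, diag_key INTEGER, paid REAL);

INSERT INTO member_dim    VALUES (1, 34, 'F'), (2, 12, 'M');
INSERT INTO diagnosis_dim VALUES (10, 'Asthma'), (11, 'Diabetes');
INSERT INTO service_fact  VALUES (1, 10, 125.00), (1, 11, 80.00), (2, 10, 60.00);
""")

# Constraints (age, diagnosis) are evaluated on the small dimension tables;
# only qualifying keys are then matched against the large fact table.
rows = conn.execute("""
    SELECT m.member_key, SUM(f.paid) AS total_paid
    FROM service_fact f
    JOIN member_dim    m ON m.member_key = f.member_key
    JOIN diagnosis_dim d ON d.diag_key   = f.diag_key
    WHERE m.age > 19 AND d.description = 'Asthma'
    GROUP BY m.member_key
""").fetchall()
print(rows)  # [(1, 125.0)]
```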

One important aspect of our star schema is that it requires a centralized database, and does not support a federated data model. From security to data integration to performance, we believe this is the best strategy for these applications.

Our Data Model was derived from careful design efforts based on 30 years of hands-on experience working with healthcare decision support data. The design has been enhanced by real-world experience testing, customizing, and refining for both fee-for-service and capitated managed care environments.

11.4.2 Data Acquisition RFP § 3.2.2, Item 2

11.4.2.1 ETL System and Processes

Data Verification and Quality Assurance We employ proven database design and data management techniques to validate, edit, scrub, standardize, transform, and enrich raw data to yield an analytically ready database. We are known for our success in continuously improving the quality of data contained in clients’ databases.

Advantage Build Process The Advantage Build process has proven successful across more than 250 large-scale healthcare installations for our clients. The Advantage Build is a comprehensive, rules-driven set of processes that transforms the raw data into analytically useful, readily accessible information. These rules-based processes convert, standardize, and integrate claims, encounter, provider, eligibility, pharmacy, and other healthcare data, and then enhance the data with aggregate and summary data to support advanced analytic capabilities such as risk-adjusted benchmarking. This makes healthcare reporting faster and more informative. We also employ the industry’s most sound and thorough processes for assessing data quality by continually testing data for completeness, validity, and reasonableness. Key steps include:


Our Approach to Data Acquisition We agree that data acquisition and transformation (ETL) represent a major portion of the focus and effort for any successful healthcare data warehouse. Our proposed solution leverages 30 years’ experience in successfully delivering healthcare information solutions. Healthcare data quality measurement and improvement is a critical part of a successful solution.

We deliver a tool-based repository and managed workflow approach using the combination of: (1) the Thomson Reuters data acquisition engine, MDSS, (2) database utilities, (3) the DataStage ETL tool, and (4) the Advantage Suite component called Advantage Build. This approach supports data extraction, cleansing, aggregation, reorganization, transformation, derivation, and load operations within the timeframes allocated for large data volumes captured from a variety of source data and formats.

• MDSS (the Medstat Data Submission System) provides automated workflow support for all data acquisition, with immediate data quality profiling upon data receipt. Standard processes identify the data source, validate the format, create appropriate backup copies, apply privacy/security rules, inform staff of the data arrival, and launch ETL processes – all without requiring human intervention. Data acquisition is triggered by the delivery of an ASCII data extract to MDSS.

• Oracle utilities are used in the loading and unloading of data from Oracle databases.


• IBM WebSphere DataStage is the data transformation tool we have selected. In addition to being a leading ETL tool, it supports the capture of requisite metadata for the MME solution component.

• Advantage Build is a robust data integration and enhancement component that measures data, improves incoming data quality, and creates analytic aggregates from claim data including admissions and episodes of care.

We provide the capability to trace and report on the ETL processes through audit and control, error/exception handling, balancing, and operational statistics. The ETL process provides audit reports that track:

• Reconciliation Information on the dollars and records read into the process and dollars and records written out by the process.

• Summary and Detail Information on activity during transformation, including counts of invalid data by field, counts of records with missing data by field, the number and dollars for dropped records, and counts of records by field where the field value was “reset.”

• Process Control Information for the data file.

The results of ETL processes will be presented to end-users via the MME reporting interface.
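As a rough illustration of the record-and-dollar balancing these audit reports support, consider the following sketch; the function signature and inputs are assumptions, not the actual audit report format.

```python
# Hedged sketch: totals read in must equal totals written out plus
# documented drops, to the penny. Decimal avoids float rounding drift.
from decimal import Decimal

def reconcile(records_in, dollars_in,
              records_out, dollars_out,
              records_dropped, dollars_dropped):
    """Raise if the update is out of balance."""
    if records_in != records_out + records_dropped:
        raise ValueError(f"record mismatch: {records_in} in, "
                         f"{records_out} out + {records_dropped} dropped")
    if dollars_in != dollars_out + dollars_dropped:
        raise ValueError(f"dollar mismatch: {dollars_in} in, "
                         f"{dollars_out + dollars_dropped} accounted for")

reconcile(1_000, Decimal("52000.17"),
          998,   Decimal("51900.17"),
          2,     Decimal("100.00"))  # balances, so no exception is raised
```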

IBM InfoSphere DataStage, a powerful ETL tool, supports the collection, integration, and transformation of large volumes of data, with data structures ranging from simple to highly complex. DataStage manages data arriving in real-time as well as data received on a periodic or scheduled basis.

Using both set-based and procedural constructs, our proposed ETL solution supports populating summarized, aggregated structures based on detail data changes within the timeframe of the detail refresh window. Here are some examples of this support (a small sketch of the incremental approach follows the examples below):

• Performance Aggregates – Oracle Materialized Views. The DBMS ensures that these are synchronized and keeps them current as new data are INSERTED into the database. Refreshing these is typically configured as a second step to the high performance INSERT or LOAD processing of the detail data in the update process.

• Clinical Aggregates. The Advantage Build process is the automated application that derives meaningful clinically-oriented aggregates, including Episodes of Care and Clinical Risk Groups. These processes incrementally recalculate aggregates only for those patients for whom new data has arrived, potentially changing the clinical or financial details of the aggregate. These may be characterized as procedural constructs.
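To make the incremental, procedural approach concrete, here is a minimal sketch that assumes a toy per-patient episode total; the actual Advantage Build aggregates (Episodes of Care, Clinical Risk Groups) are far more involved.

```python
# Hedged sketch: recompute aggregates only for patients whose data changed.
episode_totals = {"P1": 300.0, "P2": 75.0}  # existing aggregates

def incremental_refresh(new_claims, all_claims):
    """Rebuild aggregates only for patients appearing in the new batch."""
    touched = {patient for patient, _ in new_claims}
    for patient, amount in new_claims:
        all_claims.setdefault(patient, []).append(amount)
    for patient in touched:  # P2 is untouched, so it is not recomputed
        episode_totals[patient] = sum(all_claims[patient])

claims = {"P1": [100.0, 200.0], "P2": [75.0]}
incremental_refresh([("P1", 50.0)], claims)
print(episode_totals)  # {'P1': 350.0, 'P2': 75.0}
```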

The Advantage Build component of our solution enhances incoming transaction data with the addition of detail attributes, linkage of data (patients, claims, providers), and the creation of clinical and performance aggregates. These analytic enhancements result in a robust, analytically ready database that may be accessed by Advantage Suite reporting tools.

We have architected a solution that provides the capability to efficiently acquire, transform, and load very large data volumes. The Thomson Reuters Data Center currently runs production ETL on very large data volumes ― more than 1 billion healthcare claims per month.

DataStage supports an automated impact analysis against the ETL code base. DataStage has standard transformation objects and the capability to create custom routines. In our Data Center we have an extensive library of custom objects that we use for all of our DataStage jobs. Programs for all standard reporting and auditing are built into each DataStage job using custom reporting routines that we developed. DataStage supports the entry of documentation in the job, and it has a real-time debugger. DataStage offers the ability to tune caching manually.

We utilize MKS Source Integrity Enterprise Edition for versioning of ETL modules. It has been in production use in our Data Center for this purpose for more than three years. With regard to managing changes to the data source systems and their associated documentation, we use a tool we call the Transformation Design Workbook (TDW) that is described in Section 7.9, page 25.

ETL Tool – Extraction

Specification Thomson Reuters Response

TEC AQ1.19 While DataStage is capable of accessing remote data sources, our normal operational model is to receive a flat-file ASCII data extract.

TEC AQ1.20 DataStage accepts data in a variety of formats, including flat files, CSV, and relational database tables.

TEC AQ1.21 DataStage can process arrays and repeating groups.

TEC AQ1.22 Inclusion/exclusion criteria can be built into the DataStage job.

ETL Tool – Transformation and Loading We designed an ETL solution that provides the capability to perform structural transformations against source data, including summarization, partitioning, normalization, consolidation, filtering, derivation, and other structural transformations. Through its use in our many client data conversions, we have found DataStage to be a comprehensive tool fully capable of handling all complex data mappings and value conversions. DataStage provides explicit support for Slowly Changing Dimensions (SCD) in its Slowly Changing Dimension Stage feature. In addition, the Advantage Build processes also maintain Slowly Changing Dimensions for the Analytic Aggregates they deliver. DataStage, or other elements of our ETL solution, meets all of the ETL technical requirements specified in RFP Appendix 2, Section B.2 Data Acquisition – ETL.
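For readers unfamiliar with the Type 2 pattern behind such a stage, here is a minimal sketch with a hypothetical provider dimension row layout; it is not DataStage code or the Advantage Build implementation.

```python
# Hedged sketch of Slowly Changing Dimension, Type 2: instead of
# overwriting a changed attribute, close the current row and open a new
# one, preserving history for point-in-time reporting.
from dataclasses import dataclass

@dataclass
class DimRow:
    provider_id: str
    specialty: str
    valid_from: str
    valid_to: str = "9999-12-31"  # open-ended marker for the current row

def apply_scd2(history, provider_id, new_specialty, change_date):
    current = next(r for r in history
                   if r.provider_id == provider_id and r.valid_to == "9999-12-31")
    if current.specialty == new_specialty:
        return                      # no change: nothing to version
    current.valid_to = change_date  # close the old version
    history.append(DimRow(provider_id, new_specialty, change_date))

dim = [DimRow("PR1", "Family Practice", "2009-01-01")]
apply_scd2(dim, "PR1", "Internal Medicine", "2011-05-01")
for row in dim:
    print(row)  # both versions survive, each with its validity window
```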

The flexibility and accuracy of dimension assignments is strongly supported by the components of our solution. The ETL components of our solution (MDSS, Oracle utilities, DataStage, Advantage Build) provide the capability to schedule and monitor transformation jobs/sessions that are used to populate MDW internal analytic applications. In addition, they provide the capability to create complex job streams with interdependencies, create complex job schedules that have both serial and parallel streams, initiate jobs based on time or occurrence of events, and create log files that are detailed enough to debug issues.

DataStage supplies limited built-in auditing/controls beyond record counts, but it is flexible enough to create customized features. We propose to deliver sums, counts, and distributions that are used to validate the output of ETL jobs against the raw inputs. These are built into each DataStage job using custom reporting routines.

We provide the ability to correct data and subsequently re-submit corrected data to the ETL process. However, to guarantee both end-user data integrity and consistency across all targets, we take a thorough and strict approach to data correction. Auditing occurs throughout the ETL process to isolate issues in data quality. Once the source containing a data error is identified, the correction must be made in that source, and all subsequent processes that populate the data warehouse can then be executed.


To preserve the integrity of the data and the source, at no point will we directly amend data outside of the normal data processing sequence. If data has already been loaded into the database, a variety of functions can be utilized, such as removing data by an identifier (e.g., claim ID, batch number, timestamp).

Failures that prevent a DataStage job from completing are noted in the job log. Checks for other issues (e.g., count mismatches) are built into each DataStage job using custom reporting routines that we have developed. We support the ability to recover from the abnormal ending of a job and restart or roll back in both DataStage and Advantage Build. The Advantage Build process is controlled via a GUI called Build Manager. If a job step ends abnormally, the sequence is restarted from the last successful step.

11.4.2.2 Data Quality Process Ensuring that data reconciles is a major focus of our data management efforts. We provide a secure, HIPAA-compliant method for receiving, testing, and reporting the accuracy of transferred data files. We manage all data receipts using our automated, proprietary Medstat Data Submission System (MDSS), a web-enabled tool that monitors data arrival. It validates raw data against known formats and sample data types before submitting it to the system for ETL processing. Using MDSS, we can create a variety of reports about raw data files, import layouts, view layout details, and view all data suppliers that are active (or inactive). MDSS may be used to automatically start data transformation upon receipt.

The following are examples of the validations that MDSS executes on the raw data:

• Compares the incoming raw data with the layout group to ensure record lengths match.

• Compares the expected number of records to the actual number of records, as found in the documentation sent with the media.

• Confirms the accuracy of the raw data based on the layout group’s field types. For example, the process validates that fields declared numeric contain only numeric values, and that date fields contain only dates.

• Reviews and reports missing and null values for each field.

• Uses information found in the raw data file to calculate actual control totals and record counts, as well as start and end dates.
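A minimal sketch of these raw-file validations follows, assuming an invented fixed-width layout; the actual MDSS layout groups, field names, and reports differ.

```python
# Hedged sketch: record length, expected vs. actual counts, declared
# field types (numeric/date), and null checks on a fixed-width file.
import datetime

LAYOUT = {"record_length": 20,
          "fields": {"member_id": (0, 8, "numeric"),
                     "svc_date":  (8, 16, "date"),   # YYYYMMDD
                     "status":    (16, 20, "text")}}

def validate_file(lines, expected_records):
    problems = []
    if len(lines) != expected_records:
        problems.append(f"expected {expected_records} records, got {len(lines)}")
    for i, line in enumerate(lines, start=1):
        if len(line) != LAYOUT["record_length"]:
            problems.append(f"record {i}: bad length {len(line)}")
            continue
        for name, (start, end, ftype) in LAYOUT["fields"].items():
            value = line[start:end].strip()
            if not value:
                problems.append(f"record {i}: {name} is null")
            elif ftype == "numeric" and not value.isdigit():
                problems.append(f"record {i}: {name} not numeric: {value!r}")
            elif ftype == "date":
                try:
                    datetime.datetime.strptime(value, "%Y%m%d")
                except ValueError:
                    problems.append(f"record {i}: {name} not a date: {value!r}")
    return problems

print(validate_file(["00001234" + "20110401" + "PAID",
                     "0000X567" + "20110499" + "PAID"], expected_records=2))
```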

Data validation begins with the layout group that is associated with the file to be validated and includes general information describing the file (e.g., file format, delimited/fixed, validation level, control record). It also includes several record layouts and their defining attributes (e.g., name, record type), as well as field and column information. For more information on MDSS, see the “Secure Data Submission System” section of Section 7.11.

It is important to note that our concept of “cleansing data” does not involve changing the values in data elements that may be in error. In many cases, it is not clear which data element contains the faulty information. Our approach is to understand the anomalies in the data, help our clients understand them, and then work together with the client and the data suppliers to make improvements over time. One implication of this approach is that we do not apply name and address cleansing routines to the data.

However, there are two instances when we do change the data value. We change the raw diagnosis code or procedure code if we can determine reliably what the correct diagnosis code or procedure code should be. For example, this is typically done when a raw-data four-digit diagnosis code should contain five digits and the fifth digit does not change the assigned DRG; in that case, we add a fifth digit that indicates “not specified.” If the addition of the fifth digit would result in different DRG assignments, we do not add the fifth digit because there is no reliable way to know what the correct digit should be.
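That rule can be sketched as follows; assign_drg below is a hypothetical stand-in, since real DRG assignment requires a licensed grouper, and the choice of '9' as the "not specified" digit is an illustrative assumption.

```python
# Hedged sketch of the fifth-digit completion rule described above.
def assign_drg(diagnosis_code):
    # Hypothetical grouper: real DRG assignment uses licensed software.
    return "DRG-" + diagnosis_code[:3]

def complete_fifth_digit(code):
    """Append a 'not specified' fifth digit only if it cannot change the DRG."""
    if len(code) != 4:
        return code            # the rule applies to four-digit codes only
    candidate = code + "9"     # assume '9' denotes 'not specified' here
    if assign_drg(candidate) == assign_drg(code):
        return candidate       # safe: DRG is unaffected
    return code                # ambiguous: leave the raw code alone

print(complete_fifth_digit("4930"))  # -> '49309' under this toy grouper
```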

The data quality checks we routinely perform on incoming data are summarized below.

Data Quality Processes We utilize Data Profiler, a data profiling tool, in the data receipt process. It is used to assist with data investigation, integrating traditional field analysis with the data quality investigation guidelines. Standard Data Quality reports are automatically generated to display the results of the field and quality analyses, providing a consistent reporting mechanism for use when discussing data quality with our clients/suppliers. All features of the Data Profiler are integrated with the pre-Advantage Build process.

We utilize a statistical process control tool called EditStats to monitor data quality on an ongoing basis. The process tracks the input data over time to ensure the data are consistent with pre-established guidelines and fall within the thresholds set. The information is reported in a graphic format, allowing a user to quickly spot adverse statistical trends and implement a course of action. A sample EditStats graphical report is presented below:

[Figure: Sample EditStats control chart, “Average Age from Eligibility / Moving Average (X-bar),” plotting monthly values from Oct-96 through Mar-99 against UCL, Midpoint, LCL, Measure, and Baseline reference lines.]
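The computation behind such an X-bar chart can be sketched briefly; the three-sigma limits and the baseline window below are generic statistical process control assumptions, not the documented EditStats algorithm.

```python
# Hedged sketch: flag monthly field statistics outside control limits.
import statistics

baseline = [27.1, 27.3, 26.9, 27.0, 27.2, 27.1]  # prior monthly average ages
mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma    # upper/lower control limits

for month, avg_age in [("Jan", 27.2), ("Feb", 28.9), ("Mar", 27.0)]:
    flag = "ok" if lcl <= avg_age <= ucl else "OUT OF CONTROL"
    print(f"{month}: avg age {avg_age:.1f} (LCL {lcl:.2f}, UCL {ucl:.2f}) {flag}")
```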

An output from the conversion of raw data in our DataStage process is Conversion Operations Results Reporting (CORR). CORR is a Web-based application that contains details about the conversion of the raw data, based on the reporting routines found in the DataStage convert jobs. The CORR application is used to view the results of DataStage job runs and allows the Data Management Team to:

• Assess data quality.

• Review the transformation process.

• Flag unexpected raw data values.

• View the record count summaries.

• Compare MDSS record counts and control totals to DataStage values.

As part of the data receipt process, we utilize a utility called Capella, which compares a data file (e.g., convert output) against an ABIO schema (.sch) file. Capella determines whether the schema file reasonably describes the layout of the data file. The schema is used to read records from the data file, and the columns are checked for reasonable content. Reasonable content checks include:


• Numeric columns contain valid numbers.

• Date columns contain valid dates.

• Columns do not have both leading and trailing spaces.

The ETL solution also includes error/exception handling processes that will identify and isolate errant data. DataStage processes are designed to identify errant data and apply appropriate exception handling processes to those data. The Advantage Build process performs a number of edits to ensure the usability of the data, including validation of clinical codes; count, minimum, and maximum value statistics; and missing look-up values.

During the Build process, automated processes help ensure optimum efficiency and data quality. Two error criteria are defined prior to transforming the data. These criteria may cause all or part of the update process to halt, depending upon the magnitude of the data quality problem (a brief sketch follows the two definitions below):

• Critical Errors — These are errors that are deemed to be critical path items on which there are many dependencies in the remaining data transformation processes. Data which meet the criteria established for critical errors cause the transformation routines to abort until the problem is resolved. Examples of critical errors include data submitted in the wrong format or in the wrong byte size.

• Drop Condition Errors — These are errors that cause the data transformation routines to drop individual records and proceed with the process if certain conditions are met. For example, if an invalid Person ID is encountered, that particular record is dropped from the process and the program moves on to the rest of the data. Drop condition errors are not critical path items and do not impact other pieces of the process, but they are assessed for their overall impact on the database quality.
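A minimal sketch of this two-tier error handling, with invented record shapes and checks:

```python
# Hedged sketch: critical errors halt the update; drop-condition errors
# discard the record, log it, and let the run continue.
class CriticalError(Exception):
    """Error on which downstream steps depend: abort the run."""

def transform(records):
    loaded, dropped = [], []
    for rec in records:
        if set(rec) != {"person_id", "paid"}:  # wrong format/shape
            raise CriticalError(f"unexpected record format: {sorted(rec)}")
        if not rec["person_id"]:               # drop condition
            dropped.append(rec)                # retained for impact assessment
            continue
        loaded.append(rec)
    return loaded, dropped

ok, bad = transform([{"person_id": "P1", "paid": 10.0},
                     {"person_id": "",   "paid": 5.0}])
print(len(ok), "loaded;", len(bad), "dropped")  # 1 loaded; 1 dropped
```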

The ETL solution includes audit and control processes that demonstrate that the target data warehouse and internal analytic applications were populated accurately and completely. For example, at the time the data are transformed to the standard Advantage Suite record format, the system generates an Output Report that identifies unique values that failed any mapping operation in the transformation process. It also reports on the number of records and total payments read into the transformation process; which records were excluded during the transformation; and which were actually transformed into the standard format. These numbers are compared to submitted control totals. This reconciliation step alerts the data management team to any out-of-balance situations.

11.4.2.3 Data Validation and Reconciliation Plan

RFP 3.1.4 Provide a detailed Reconciliation Plan within 45 calendar days of contract execution, which is reconciled to financial control totals, that includes processes to automatically maintain data integrity and verify/reconcile data against the source systems, including payment data, and accounts for discrepancies.

Thomson Reuters agrees to do so. This plan will reflect how the components we have described will be used in the essential process of reconciling to control totals.

A standard part of our database update process is to balance data outputs against source system payment and eligibility reports. Balancing is a key requirement for the credibility of the system and is a prime focus for Thomson Reuters. Please also see Section 7.2.6, where we discuss our System Testing process; that section contains additional information about data quality assurance and data reconciliation.

Beyond the testing and reconciliation of the data, we have configuration management processes in place, which we will review with the state.

11.4.3 Data Access RFP § 3.2.2, Item 3

The end-user experience is directly related to success or failure in an information technology project. Users must have data access tools that are appropriate to their job, their skill level, their time availability, and their need to know. Users must believe that the information is accurate and complete. They must be able to understand the information and where it originated. The information must be actionable (i.e., it must require little or no work to see the implications of the information). Above all, users must enjoy using their data access tools because the software is easy to use, is available when needed, and helps them perform their jobs better and faster.

Our solution is composed of COTS products that are in widespread production use today. We have designed the user interface to be optimized for its intended end-user use. Therefore, because we are not providing an application custom-built solely for BMS, we are not able to commit to compliance with state standards regarding the look and feel of the system.

Web Portal — We offer a Web Portal configured for the applications that are available. It provides an intuitive “one-stop” entry point to the data access tools available to each user. Depending on their business need, Power Users will see icons for multiple tools, while Executive Users may see only one or two icons. Power Users or System Administrators who engage in data delivery and the management of metadata will see icons for those tools as well. Role-based access to metadata is provided. In addition to providing an entry point for accessing reporting tools, the portal will also provide relevant industry news, thought leadership articles, proactive analytic content, and account management information. This component, called the Solution Center, is built using the JBoss Enterprise Portal Platform.
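The role-to-tool mapping described above could be summarized as in the following sketch; the role names and tool lists are illustrative, not the portal's actual entitlement tables:

    # Role-based entitlements behind the portal's "one-stop" entry point.
    # Role and tool assignments are illustrative only.
    PORTAL_TOOLS = {
        "power_user": ["Advantage Suite", "J-SURS", "i-Sight",
                       "Data Delivery", "Metadata Browser"],
        "executive_user": ["Advantage Suite"],
        "system_administrator": ["Data Delivery", "Metadata Browser"],
    }

    def icons_for(role):
        """Return the tool icons a user in this role sees on the portal home page."""
        return PORTAL_TOOLS.get(role, [])

    print(icons_for("executive_user"))   # ['Advantage Suite']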

Business Intelligence — We are proposing our Advantage Suite system, a mature, easy-to-use, Web-based COTS tool, as the primary vehicle to address the data access requirements of this RFP. Advantage Suite has been re-architected to utilize Cognos as its underlying engine, with a custom interface developed by Thomson Reuters to facilitate easy use of Advantage Suite’s analytical applications. Data access and analysis capabilities for program integrity will be provided by J-SURS, and the i-Sight software will provide case tracking and case management functionality. The capabilities of these applications are described in detail in Section 10 of this proposal. Our proposed solution will support 60 active users and 30 concurrent users, with an assumed yearly growth rate of 10% in active and concurrent users.
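To show what the stated sizing assumption implies, the following worked projection compounds the 10% growth rate over five years; it is an illustration of the arithmetic, not a contractual capacity commitment:

    # Project the stated sizing assumption: 60 active and 30 concurrent
    # users growing 10% per year, compounded. Illustrative only.
    active, concurrent, rate = 60, 30, 0.10
    for year in range(1, 6):
        active *= 1 + rate
        concurrent *= 1 + rate
        print(f"Year {year}: ~{round(active)} active, ~{round(concurrent)} concurrent")
    # Year 5: ~97 active, ~48 concurrent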

Our proposed DBMS, Oracle, is open and fully ODBC compliant. However, in order to provide BMS with the most cost-effective solution, we have proposed a shared services environment for Advantage Suite. Therefore, for security reasons and to maintain properly controlled performance levels within the shared services environment, we will not permit a direct ODBC connection to the data warehouse by external applications, even though this is technically feasible. If at a later point BMS desires to shift to a more costly, dedicated platform for Advantage Suite, we could enable such an ODBC connection for BMS.

We believe that Advantage Suite, with its performance measures and custom interface, provides superior user access to the system’s star schema data model. Our proposed data access approach via the Advantage Suite application obviates the need for separate ODBC access to the relational database: the work of developing, validating, and reconciling independent calculations and database queries is replaced with a unified, proven application interface to the data. This healthcare analytic approach is also superior to the natural language interface approach; the flexibility and sophistication of a healthcare information application far exceed what is available from a natural language based system.

11.4.4 Data Delivery RFP § 3.2.2, Item 4

Our proposed DW/DSS solution supports the Bureau’s need for periodic delivery of large data sets extracted from the DW/DSS. For authorized power users to perform this work independently, a complete solution must support the functions listed in the Data Delivery section of the RFP: schedule, create, store, publish, and administer. Advantage Suite meets these requirements and provides capabilities for end-user-controlled data extracts that all operate within the access control, audit logging, and role-based security capabilities of the rest of the solution.
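As an illustration of those five functions gathered in one place, a data delivery job might be described as follows; the field names and schedule syntax are hypothetical, not the Advantage Suite interface:

    from dataclasses import dataclass

    # Illustrative descriptor covering schedule, create, store, publish,
    # and administer for one recurring extract. Names are invented.
    @dataclass
    class ExtractJob:
        name: str
        schedule: str      # e.g., cron expression for recurring delivery
        definition: dict   # rows/columns requested (create)
        store_path: str    # where the result set is stored
        publish_to: list   # roles authorized to retrieve it
        owner: str         # administering power user

    job = ExtractJob(
        name="monthly_pharmacy_extract",
        schedule="0 2 1 * *",   # 02:00 on the first of each month
        definition={"columns": ["person_id", "ndc", "paid_amount"],
                    "filter": "service_month = last_month"},
        store_path="/extracts/pharmacy/",
        publish_to=["power_user"],
        owner="bms_analyst_01",
    )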

The Advantage Suite Record Listing capability provides end-users with the ability to independently define an extract in terms of rows and columns, up to a maximum of 2.5 million rows. Users may download the extract in CSV format or request that Thomson Reuters operations staff execute it. Date, time, requestor, and similar details are all logged, as with other reports. This robust data delivery capability is in regular use and meets your detailed Data Delivery requirements in RFP Appendix 2, Section B.4, Data Delivery.
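The following sketch illustrates the row cap and request logging described above; the function and field names are hypothetical, not the Record Listing implementation:

    import csv, datetime, io

    MAX_ROWS = 2_500_000  # Record Listing extract ceiling noted above

    def run_record_listing(requestor, columns, rows):
        """Write an extract to CSV, enforcing the row cap and logging the request."""
        if len(rows) > MAX_ROWS:
            raise ValueError(f"extract of {len(rows)} rows exceeds the {MAX_ROWS:,} cap")
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(columns)
        writer.writerows(rows)
        # Audit entry mirrors report logging: date, time, requestor, size.
        print(f"{datetime.datetime.now().isoformat()} extract by {requestor}: "
              f"{len(rows)} rows, {len(columns)} columns")
        return buf.getvalue()

    sample = run_record_listing("bms_analyst_01", ["person_id", "paid_amount"],
                                [("000000001", "125.00")])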

11.4.5 Metadata RFP § 3.2.2, Item 5

Our metadata management solution is based on the IBM DataStage Metadata Manager tool. We make metadata available to end-users via Cognos, with data acquisition accomplished through the DataStage ETL process. Our data acquisition processes thoroughly document each customer’s source data and transformation logic. The metadata store we make available to technical users on the Web portal will capture the necessary information, including metadata for data sources added in later phases of the project; allow metadata to be downloaded and exported in a non-proprietary format; version the metadata; and perform the other functions required by BMS.
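A minimal sketch of a versioned, exportable metadata entry appears below; the field names are illustrative and do not represent the DataStage metadata schema:

    import json
    from dataclasses import dataclass, asdict

    # Versioned metadata entry exportable in a non-proprietary format (JSON).
    # Field names are invented for the example.
    @dataclass
    class MetadataEntry:
        element: str
        business_definition: str
        source_system: str
        transformation: str
        version: int

    entry = MetadataEntry(
        element="paid_amount",
        business_definition="Amount paid to the provider for the claim line",
        source_system="MMIS claims extract",
        transformation="sum of detail payments, sign-adjusted for voids",
        version=2,   # prior versions are retained alongside the current one
    )
    print(json.dumps(asdict(entry), indent=2))   # non-proprietary export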

This unified metadata store supports multiple user types. Each user role requires access to certain metadata to perform assigned tasks. These same users must also be able to seamlessly and directly share information with users in other roles, so streamlining collaboration among multiple, disparate users is critical to the success of any integration effort. The majority of these user roles are involved in every data integration project, sometimes with a single user performing multiple roles. Each user generates critical metadata as a natural consequence of the specific task being performed, and these tasks are often performed in parallel. Metadata generated and consumed during the integration process is role- and task-based. There are three primary types of metadata: business, technical, and operational.


• Business metadata defines terms in everyday language, without regard for technical implementation. Elements are defined by the people and business processes that use them to make decisions.

• Technical metadata is used by more technically oriented staff, such as developers. It includes items such as table definitions and data types. These objects are used heavily during design and development.

• Operational metadata refers to the metadata generated and captured when a process executes. It allows administrators to manage the system and ensure things are running smoothly; it also helps them troubleshoot issues if there is a problem with a process.

Specific kinds of information stored include: data definitions (business and technical), transformations (source-to-target mappings, business rules, and transformation logic), and process controls (usage metrics, quality and audit metrics, operational and application messages). Unifying the types of metadata creates an end-to-end relationship, enabling users to understand not just where information is stored and what happened to it as it moved through the organization, but also the business context of that information. The core data integration components are architected into a single platform through repository, engine, and interface integration to create a comprehensive information integration platform.
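Tying the three types together for a single element might look like the following illustrative structure (names and values are invented for the example):

    # One element with its business, technical, and operational metadata
    # linked end to end. Illustrative structure only.
    unified_metadata = {
        "element": "paid_amount",
        "business": {"definition": "What the program paid for the service",
                     "steward": "BMS finance"},
        "technical": {"table": "CLAIM_DETAIL", "column": "PAID_AMT",
                      "type": "NUMBER(12,2)"},
        "operational": {"last_load": "2011-05-01T02:14:00",
                        "rows_loaded": 999_975, "quality_flags": 0},
    }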

11.5 Data Center and Facilities RFP § 4.1.11

11.5.1 Secure Hosting Site and All Work in the U.S. RFP § 3.2.1

RFP 3.2.1.1 (as amended) Host the DW/DSS and maintain a secure site and secure back-up site within the continental United States. All work performed in association with this contract must originate from the continental United States. The Vendor must be responsible for all costs associated with supporting the facilities and ensuring that the facilities comply with legal requirements. Thomson Reuters agrees to do so.

Data Center Facilities

The Thomson Reuters Data Center that will host our proposed DW/DSS solution is described above in the “Data Center Facilities” subsection of Section 11.3.1. Additional information related to system hosting is included in other portions of this Section 11.

Physical Security for Other Thomson Reuters Facilities

Physical security for facilities other than the Data Center is described above under the “Physical Security” topic in Section 7.11. In short, our other facilities are secured by a proximity card access system and other standard security procedures.

11.6 Technical Support Services

11.6.1 Help Desk / Customer Support

We will provide Help Desk / Customer Support for BMS staff. For details, please see Section 7.4.


11.6.2 Training / Ongoing Education

We will also provide Training / Ongoing Education for BMS staff. For details, please see Section 7.2.7.

11.6.3 Data Management and Database Updates

We will provide both the initial build of the DW/DSS and incremental data updates on an ongoing basis. The proposed database build process, described below, is designed to support both the initial build and the incremental updates; full rebuilds are supported by the same process should the need arise. The Build Manager software manages this process and is also used to control maintenance of the database. We will provide update and maintenance services as part of our ASP solution.
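As a sketch of this single-process approach (the function and mode names are illustrative, not the Build Manager's actual interface):

    # One build process serves initial loads, incremental updates, and
    # full rebuilds. Stand-in logic, illustrative only.
    def run_build(mode, extracts, warehouse):
        """mode: 'initial', 'incremental', or 'rebuild'."""
        if mode in ("initial", "rebuild"):
            warehouse = []                 # rebuilds start from an empty target
        for extract in extracts:           # same pipeline in every mode
            warehouse.extend(extract)      # stand-in for transform-and-load
        return warehouse

    dw = run_build("initial", [["claim1", "claim2"]], [])
    dw = run_build("incremental", [["claim3"]], dw)   # subsequent monthly update
    assert dw == ["claim1", "claim2", "claim3"]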

The initial load and subsequent update processes use the same Build process, which includes:
