Life Cycle Planning from Product Development to Long Term Sustainment

Jake Harnack
PXI and Modular Instruments
National Instruments Corporation
Austin, TX 78759 USA

Abstract—One of the major challenges engineers face when developing military test systems is balancing the life cycle mismatch of test equipment that's commonly deployed for 20+ years with the shorter life cycle of commercial-off-the-shelf (COTS) components often used in those systems. To ensure long term supportability of these systems, it is important to plan for obsolescence issues starting in the product development phase and continuing through the sustaining stage to end of life.

Successful long term support of test systems requires careful up-front planning, a proper system architecture, and a comprehensive long term life cycle management plan. Software is becoming increasingly important in long term sustainment as it defines more and more of the test system functionality. A key software architecture for mitigating the impact of obsolescence is the implementation of hardware abstraction layers (HALs). A modular software architecture, such as a HAL, is an important proactive component of a life cycle management plan that also includes traditional hardware life cycle management strategies such as sparing, obsolescence tracking, and planned technology refreshes.

This paper examines some of the techniques used to manage test system obsolescence through HALs and hardware life cycle management.

I. INTRODUCTION

Thirty years ago, a typical computer might have had a state-of-the-art Intel microprocessor with 134,000 transistors running a new operating system called MS-DOS [2]. Today's processors pack over 1 billion transistors into the same space to provide much faster and more efficient computing capable of running advanced graphical operating systems such as Windows and Mac OS X. These rapid advancements are one of the reasons why more test systems are implementing commercial-off-the-shelf (COTS) technology, which provides innovative technology at a fraction of the cost of traditional solutions. However, one of the trade-offs of using COTS is the more rapid evolution of the technology and shorter component and software lifetimes.

Engineers developing military systems face many challenges ranging from meeting complex test requirements to lowering development costs. Organizations commonly support aircraft and weapons systems that were developed more than 30 years ago with legacy instruments and software. Maintaining a single test system over that period is a major challenge when many of its instruments and software components are made obsolete by new technology. Effective management of these life cycle challenges is critical when 70 percent of COTS technology will become unavailable on projects requiring 20 years of support [3].

II. PLANNING FOR CHANGE

Figure 1 illustrates typical phases of a system's lifetime. The length of time a system stays in each phase depends on the project complexity and longevity. Systems with very long lifetimes may go through these phases multiple times as the system adapts to changing application requirements and technologies.

Figure 1: Typical phases in the lifetime of a test system

Most military test applications are sustainment-dominated. As illustrated in Figure 2, such a system spends a very long time in operation and support relative to the development and deployment phases. In such a scenario, it can be challenging to keep a test system operational because the test system technologies change many times over the lifetime of the system being tested. Very often, all the funds available for a test system are spent in development with little planning and budget left for long-term operation and support.

Figure 2: A sustainment-dominated life cycle spends a very long time in operation and support compared to the earlier stages of development and deployment.

In many cases, the lack of upfront planning means that obsolescence issues are dealt with on a reactive, emergency basis. Unfortunately, because the system is often very old, it may be impossible to find replacement parts, leading to an expensive redesign and a possible loss of system availability.

The earlier you consider life cycle planning in the system's lifetime, the more options you have to manage change. Figure 3 shows an example of these options. Note that proactive and reactive approaches are not mutually exclusive. For example, if you know that a particular component is going EOL at some point in the future, the best plan may be to do a last time buy of that component to carry the system over while a technology refresh is implemented. Thus, a comprehensive life cycle plan seeks to identify obsolescence issues ahead of time and define appropriate remediation activities to avoid unexpected adverse impact on the system and high engineering costs to manage changes.

Figure 3: A comprehensive life cycle management plan includes proactive and reactive strategies

III. MANAGING HARDWARE LIFE CYCLES

To create a successful life cycle management plan, you must acquire the best possible information about the life cycles of hardware components in the system. Getting such information depends on establishing a cooperative relationship and good communication between you and your suppliers. Instrument vendors should share roadmap information and help you plan for technology evolution in your system (Figure 4). They should also provide services ranging from upfront consulting on product selection to long term extended service agreements that meet your specific needs. The following sections describe a few specific ways to work with instrument vendors to build your life cycle management plan.

Figure 4: Support services offered throughout a product life cycle

A. Planned Technology Refreshes

Whenever possible, it is best to plan technology updates rather than being forced to update when an EOL event occurs. Planned technology refreshes allow you to budget time and money ahead of time. You can also plan for any possible downtime or other availability impact on your systems. Technology refresh planning should take into account any changes that are driven by your system. This can include new capabilities being added to an existing system or new technologies added to enhance performance.

Maintaining test systems is ultimately much easier with instruments that are still actively sold and supported. If a vendor obsoletes a product in your test system, it will typically recommend a replacement instrument for new designs. The ideal replacement uses the same platform and driver and has the same connectivity and mechanical characteristics as its predecessor. Some instruments can be swapped out without a driver upgrade if they use the same digital control circuitry and have the same functionality. If the connectivity between the instrument and your device under test (DUT) has changed, crossover cables or custom PCBs provide a good solution. If a new product does require a driver upgrade, the driver API should offer the same functionality and formatting as the previous version to minimize the changes to the test software.
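To make the last point concrete, the Python sketch below keeps the test software's call site stable across a driver change. Both driver classes are hypothetical stand-ins for an obsoleted instrument's driver and its replacement, not real vendor APIs.

```python
# Sketch of keeping the test software's API stable across a driver
# upgrade. Both driver classes are hypothetical stand-ins for a
# vendor's old and new instrument drivers, not real driver APIs.

class LegacyDmmDriver:
    """Obsoleted instrument's driver: configure-then-read style."""
    def configure_dc_volts(self, range_v: float) -> None:
        self._range = range_v

    def read(self) -> float:
        return 1.234  # placeholder reading

class ReplacementDmmDriver:
    """Recommended replacement's driver: a single measure() call."""
    def measure(self, function: str, range_v: float) -> float:
        return 1.234  # placeholder reading

class DmmAdapter:
    """Presents one API to the test software regardless of driver."""
    def __init__(self, driver):
        self._driver = driver

    def measure_dc_volts(self, range_v: float = 10.0) -> float:
        if isinstance(self._driver, LegacyDmmDriver):
            self._driver.configure_dc_volts(range_v)
            return self._driver.read()
        return self._driver.measure("dc_volts", range_v)

# The calling test code is unchanged when the instrument is swapped:
for driver in (LegacyDmmDriver(), ReplacementDmmDriver()):
    print(DmmAdapter(driver).measure_dc_volts())
```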

B. Sparing and Last Time Buys

Instrumentation vendors can help you plan for spares and lifetime buys of key components. It's important to consider multiple factors when stocking spares or purchasing a lifetime supply of modules.

• Failure rate: This can be the mean time between failures (MTBF) number from the vendor or the in-service failure rate. You can use either number in your analysis, but the in-service failure rate gives you historical data for an instrument within your test system, which is ultimately more useful because it reflects how you're using the instrument. However, you can derive an estimated failure rate from the MTBF if in-service data is not available.

• Number of systems: This takes into account current systems along with any new systems you plan to deploy over this period.

• Number of instruments per system: This number is important to determine when you’re sparing components instead of systems.

• Required uptime: Service contracts can extend repair options for obsolete components, but if you need your test system to run continuously, you should stock extra spares to cover instruments that are out for repair.

• Service period: A test system or component is more likely to fail the longer it’s in the field. As the probability of failure rises, the number of spares should increase.

Combining the above factors, as sketched below, helps determine the number of spares for a test system; however, both MTBF and in-service failure rates should be treated as statistical averages rather than expected lifetimes. If you plan your sparing strategy purely around MTBF data, a component still has a 50 percent chance of failing prematurely.
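Here is a minimal Python sketch of such a sparing calculation. The fleet size, failure rate, and confidence values are illustrative assumptions, and the Poisson model is one reasonable sizing choice rather than a method the paper prescribes; the survival_probability function reproduces the normal-lifetime numbers discussed with Figure 5 below.

```python
import math

def spares_needed(failures_per_year: float, instruments: int,
                  service_years: float, confidence: float) -> int:
    """Smallest spare count that covers the fleet's expected failures
    at the given confidence, assuming a Poisson failure process."""
    expected = failures_per_year * instruments * service_years
    k, cumulative = 0, 0.0
    while cumulative < confidence:
        cumulative += math.exp(-expected) * expected ** k / math.factorial(k)
        k += 1
    return k - 1

def survival_probability(t_years: float, mean_years: float,
                         sigma_years: float) -> float:
    """P(lifetime >= t) for a normally distributed component lifetime."""
    z = (t_years - mean_years) / sigma_years
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

# Illustrative fleet: 20 systems x 4 instruments, an assumed
# 0.05 failures/instrument/year, sized to 90 percent confidence
# over a 15-year service period.
print(spares_needed(0.05, 20 * 4, 15.0, 0.90))

# The Figure 5 scenario: 4-year mean lifetime, 6-month standard
# deviation; prints roughly 0.50, 0.84, and 0.98.
for t in (4.0, 3.5, 3.0):
    print(t, round(survival_probability(t, 4.0, 0.5), 2))
```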

Figure 5: Using statistical data to determine the confidence interval

Figure 5 shows how to examine this data through the confidence interval, which in this case is the probability that a component will last through its estimated lifetime. Consider an instrument with an average lifetime of 4 years with a 6 month standard deviation. In this scenario you are 50 percent confident that the module will last at least 4 years, 84 percent confident that the module will last at least 3.5 years, and 98 percent confident that it will last at least 3 years. Basing your sparing calculations on a 3 year expected lifetime instead of 4 years helps ensure that you have enough spares to effectively protect your test system from premature statistical failures.

C. Extended Support Services

An extended support service can ensure that you get the specific support you need for your application, even for products beyond their normal commercial lifetime. While the product is not available for purchase, you still have the option to repair deployed instruments that are obsolete. You also have the option for advanced replacements and on-hand spares for test systems that require maximum uptime. As previously described, a sparing strategy is important to consider, especially if you're planning to purchase a lifetime supply of instruments. A service agreement can combine a sparing strategy with an extended repair capability. Service agreements can also provide regular obsolescence reports on the products in your system, allowing you to stay ahead of any potential changes.

D. Third-Party Support Companies

In addition to the instrument vendors themselves, there are third-party companies that specialize in long-term support of obsolete products. These companies will take over the manufacturing and repair processes for an obsolete component and offer support for an extended period of time. In return for continued sale and support, the cost of these components is likely to increase due to the additional risk and overhead of taking over that manufacturing. For example, GDCA is a company in California that has taken over the long-term support of specific VXI controllers originally manufactured by National Instruments [5]. Rochester Electronics is another company that provides long term support solutions, particularly by continued manufacturing of mature and end-of-life semiconductors [4].

IV. MITIGATING OBSOLESCENCE WITH HARDWARE ABSTRACTION LAYERS

HALs allow you to develop high-level code that minimizes the reliance on specific low-level hardware details. In test systems, this enables the main application to call a function that returns a value from an instrument without knowledge of the instrument and device-specific configuration. This layer of abstraction helps mitigate obsolescence concerns and allows you to expand or add features without a complete overhaul of the main test program. Developing a HAL is initially time-consuming and more expensive; however, companies typically see benefits over the life of a test system as components are obsoleted or features are added.

A. HAL Overview

Every test system has some form of hardware abstraction unless you are manually programming the registers with machine code. Typically, test engineers use hardware drivers and an application programming interface (API) to send basic commands to the instrument. This API can be defined through an industry standard such as IVI, the instrument vendor, or the end-user. Building these layers into your code gives you the flexibility to change out instruments without making changes to the user interface or overall test structure. N. Tacha, A. McCarthy, B. Powell, and A. Veeramani use Figure 6 to show a typical layered approach to developing the test software with minimal hardware dependencies [1].

Figure 6: Hardware abstraction layers provide a link between the test application and instruments [1]

The top-level "Test Application" layer is where the user interacts with the system and runs the different tests required for the DUT. The Application Separation Layer (ASL) provides a high-level interface with functions required to run specific tests from the main API. Both of these layers focus purely on measurements and not specific hardware configuration or commands. The remaining layers are focused on providing an interface between the measurement layers and the instrument commands.

1) Measurement Layers

The Test Application and ASL focus on the measurements that are required to test your DUT. For example, the test API calls functions required to characterize or verify components of your DUT such as an LED, a battery, and an ADC. The main test API simply takes user input and runs those three tests by calling functions in the ASL, which contains the specific steps for verifying that the LED, battery, and ADC are working correctly.

In the ASL, the LED function sources a voltage, measures the resulting current, and reports the results to the main test API. Similar functions in the battery and ADC verification may include sweeping a voltage or turning on a power supply. These top layers do not focus on which type of instrument is in the system or how it's configured.
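As a rough illustration, such an ASL function might look like the Python sketch below, written against an abstract measurement interface. The SourceMeasure protocol, the pass/fail limits, and the simulated instrument are all hypothetical additions, not from the paper.

```python
from typing import Protocol

class SourceMeasure(Protocol):
    """Abstract interface the ASL programs against; a DSSP and an
    instrument driver sit behind it in a real system."""
    def source_voltage(self, volts: float) -> None: ...
    def measure_current(self) -> float: ...

def verify_led(smu: SourceMeasure, forward_v: float = 2.0,
               min_a: float = 0.001, max_a: float = 0.020) -> bool:
    """ASL function: source a voltage, measure the current back,
    and report a pass/fail result to the main test API."""
    smu.source_voltage(forward_v)
    current = smu.measure_current()
    return min_a <= current <= max_a

class SimulatedSmu:
    """Stand-in instrument so the sketch runs without hardware."""
    def source_voltage(self, volts: float) -> None:
        self._volts = volts

    def measure_current(self) -> float:
        return self._volts / 200.0  # pretend the LED draws V/200 amps

print(verify_led(SimulatedSmu()))  # True: 10 mA is within the limits
```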

Figure 7: Example tests at different software layers in the HAL

2) Hardware Layers

The lower levels of Figure 6 focus on providing an interface that the ASL can call directly without intimate knowledge of the hardware. The Device-Specific Software Plug-In (DSSP) provides an interface between the instrument drivers and the upper abstraction layers focused on measurements. Test code written in the ASL is not affected by the type of instrument you're using. For example, the frequency sweeps and voltage tests in the ASL work whether you're using a source measurement unit (SMU) from vendor A or vendor B. The two SMUs may require different configuration settings, but both have common parameters such as voltage range and resolution that the DSSP dynamically maps to the specific instrument's settings.
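A minimal Python sketch of that mapping follows. Both vendor driver classes are invented stand-ins, and the digits-from-resolution conversion is just one plausible way a DSSP might translate a common parameter set.

```python
import math

class VendorASmu:
    """Hypothetical vendor A driver: range plus digits of resolution."""
    def set_range(self, volts: float) -> None:
        print(f"vendor A: range {volts} V")

    def set_digits(self, digits: int) -> None:
        print(f"vendor A: {digits} digits")

class VendorBSmu:
    """Hypothetical vendor B driver: one call with absolute resolution."""
    def configure(self, range_v: float, resolution_v: float) -> None:
        print(f"vendor B: range {range_v} V, resolution {resolution_v} V")

class SmuDssp:
    """Maps the ASL's common parameters onto instrument-specific calls."""
    def __init__(self, instrument):
        self._inst = instrument

    def configure(self, range_v: float, resolution_v: float) -> None:
        if isinstance(self._inst, VendorASmu):
            self._inst.set_range(range_v)
            # translate an absolute resolution into vendor A's digits setting
            self._inst.set_digits(round(math.log10(range_v / resolution_v)))
        else:
            self._inst.configure(range_v, resolution_v)

# The ASL issues identical calls whichever SMU is installed:
for inst in (VendorASmu(), VendorBSmu()):
    SmuDssp(inst).configure(range_v=10.0, resolution_v=0.001)
```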

B. HAL Benefits

Using a layered software approach has benefits beyond life cycle management. It also improves efficiency in the development process and code reuse between design and production.

1) Efficiency in the Development Process

Design engineers may know exactly which tests they need to run to ensure their devices are working properly. However, they may not be very familiar with the instrument commands required to implement those tests. For example, they know they need to characterize their LEDs by sweeping a voltage from 0 to 5 V in 5 mV steps, but they may not be very efficient in configuring an instrument to accomplish that task. Other test engineers know exactly how to configure and program instruments such as DMMs, oscilloscopes, and signal generators, but they have limited knowledge of the overall DUT tests. Having engineers focus on either the measurement or the hardware programming allows them to leverage and grow their domain expertise while improving the group's efficiency. It also prevents design engineers from having to continually learn new pieces of hardware.

While separating these tasks can improve efficiency, the two groups must work closely on integrating the two layers. Providing a clean, modular interface between the ASL and DSSP is critical when developing HALs.

2) Code Reuse

HALs are essentially intellectual property (IP) that you can leverage throughout your organization. Developing a scalable HAL increases the amount of code reuse through technology cycles as well as reuse from characterization to production. Since all the user-defined inputs are in the main test API, you can translate design code to production code with a few modifications. For example, if you are characterizing your LED, you may want to sweep 1,000 points between 0 and 5 V and display all the data in the main UI. However, 1,000 points of data is probably inefficient in a production environment, where you only want to see a pass/fail result. In this scenario, you can use the same code from design to production and modify only the input and output parameters, as in the sketch below. This saves you from investing in the development and validation of two different test systems.
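A small Python sketch of this reuse pattern; the linear LED response is simulated, and the point counts and current limit are chosen only for illustration.

```python
def led_sweep(points: int, v_max: float = 5.0):
    """Shared test core: sweep the voltage and return (volts, amps)
    pairs. The linear response is a simulated stand-in for hardware."""
    step = v_max / (points - 1)
    return [(i * step, i * step / 200.0) for i in range(points)]

# Characterization: 1,000 points, keep all the data for analysis.
design_data = led_sweep(points=1000)

# Production: the same code with a coarse sweep, reduced to pass/fail.
production_data = led_sweep(points=11)
passed = all(current <= 0.025 for _, current in production_data)
print("LED test:", "PASS" if passed else "FAIL")
```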

C. Desirable HAL Features

Authors N. Tacha, A. McCarthy, B. Powell, and A. Veeramani describe the desirable features of a HAL in Figure 8. The authors also explain how to implement a HAL in greater detail throughout the rest of their paper [1].

Figure 8: Desirable features of a HAL [1]

V. CONCLUSION

Balancing the life cycle mismatch between COTS instruments and military test systems is challenging when the system spends the majority of its time in the operation and support phase. Component obsolescence is traditionally managed on a reactive, emergency basis that often leads to an expensive redesign or loss of availability. A more comprehensive life cycle management plan combines reactive measures such as lifetime buys with proactive strategies such as planned system upgrades and modular software architectures. Strong vendor relationships are critical to developing an efficient life cycle management plan, and vendors can provide roadmap information along with services such as up-front consulting and extended product support. Designing test systems with a modular software architecture and proactively managing hardware life cycles with your vendor will ultimately mitigate hardware obsolescence issues and improve the longevity of the system.

ACKNOWLEDGMENT

A special thanks to Mike Santori, Brian Powell, Norm Kirchner, Travis White, Chad Pelletier, Mike Owens, and Garth Black.

REFERENCES

[1] N. Tacha, A. McCarthy, B. Powell, and A. Veeramani, "How to mitigate hardware obsolescence in next-generation test systems," 2009 IEEE AUTOTESTCON, pp. 229–234.

[2] "Moore's Law Inspires Intel Innovation" [Online]. Available: http://www.intel.com/content/www/us/en/silicon-innovations/moores-law-technology.html

[3] P. Singh and P. Sandborn, "Obsolescence driven design refresh planning for sustainment-dominated systems," The Engineering Economist, vol. 51, no. 2, pp. 115–139, April–June 2006.

[4] Rochester Electronics corporate profile [Online]. Available: http://www.rocelec.com/about-rochester-electronics/

[5] GDCA Product Support Catalog [Online]. Available: http://www.gdca.com/products.asp

