Page 1: RTC Magazine

The magazine of record for the embedded computing industry

www.rtcmagazine.com March 2014

An RTC Group Publication

Finding the Sweet Spot for SoC and ASIC Design

High End Graphics Light Up Small Devices

Get beyond the BIOS for Embedded

Page 2: RTC Magazine

Building Blocks Designed To Last

Like the Great Pyramids at Giza, computers engineered with board-level building blocks from Trenton Systems are built for performance and longevity. OK, it’s not likely that a rackmount computer built with Trenton’s long-life SBCs, backplanes or embedded motherboards will be around 4,500 years from now. However, Trenton boards do extend system functionality while reducing the overall cost of computer ownership by utilizing long-life board components with built-in support for standard I/O option cards. Trenton building blocks enable the initial system investments to pay dividends over typical computer deployment cycles of seven years or more!

Here’s a snapshot of the available Trenton board-level building blocks for your next computer system design:

Trenton’s BXT7059 is a robust dual-processor single board computer featuring long-life Intel Xeon processors.

The single-processor TSB7053 offers a wide range of I/O and video interface options.

Our backplanes come in all shapes and sizes, engineered to deliver maximum value in your unique system design.

The JXM7031 embedded MicroATX motherboard has a unique long-life design featuring dual Intel Xeon processors.


The Global Leader In Customer Driven Computing Solutions™

770.287.3100 / 800.875.6031 / www.TrentonSystems.com

Our board engineering experts are available to discuss your unique military computing application requirements. Contact us to learn more at 770.287.3100 / 800.875.6031 or www.TrentonSystems.com

Page 3: RTC Magazine

TABLE OF CONTENTS

Digital Subscriptions Available at http://rtcmagazine.com/home/subscribe.php

RTC MAGAZINE MARCH 2014 3

VOLUME 23, ISSUE 3

TECHNOLOGY CORE: Finding the Sweet Spot for SoC and ASIC Design

Beyond Drivers: The Critical Role of System Software Stack Architecture in SoC Hardware Development
Jim Ready, Cadence Design Systems

Integration Blurs the Line between MCUs and SoCs
Jason Tollefson, Microchip Technology

TECHNOLOGY IN CONTEXT: Optimizing Machine Vision Systems

FPGAs – Taking Vision to the Next Level
Carlton Heard, National Instruments

TECHNOLOGY CONNECTED: High End Graphics on Small Devices

Speeding Time-to-Market for GUI Designs
Brian Edmond, Crank Software

TECHNOLOGY IN SYSTEMS: Getting beyond the BIOS for Embedded

Open Source Firmware – Coreboot for x86 Architecture Boards
Clarence Peckham, Senior Editor

TECHNOLOGY DEVELOPMENT: The POSIX Heritage – History and Future

POSIX – 25 Years of Open Standard APIs
Arun Subbarao, LynuxWorks

DEPARTMENTS

Editorial: Integrating for Parallelism, Performance and Power – A Dance with Complexity

Industry Insider: Latest Developments in the Embedded Marketplace

Small Form Factor Forum: SFFs Take on the World

Products & Technology: Newest Embedded Technology Used by Industry Leaders
mini-ITX Industrial Mainboard for 24/7 Continuous Service
Rugged PCI/104-Express SBC with Intel N2800 Offers Rich I/O
6U VPX Board Features 4th Generation Intel Core Processor

EDITOR’S REPORT: Big Data Drives New Apps

Solutions from Data: Innovative Apps Can Bring Engagement, Loyalty and Revenue
Tom Williams

Page 4: RTC Magazine


MARCH 2014

Publisher

PRESIDENT John Reardon, [email protected]

Editorial

EDITOR-IN-CHIEF Tom Williams, [email protected]

SENIOR EDITOR Clarence Peckham, [email protected]

CONTRIBUTING EDITORS Colin McCracken and Paul Rosenfeld

MANAGING EDITOR/ASSOCIATE PUBLISHER Sandra Sillion, [email protected]

COPY EDITOR Rochelle Cohn

Art/Production

ART DIRECTOR Jim Bell, [email protected]

GRAPHIC DESIGNER Michael Farina, [email protected]

Advertising/Web Advertising

WESTERN REGIONAL SALES MANAGER Mike Duran, [email protected] (949) 226-2024

MIDWEST REGIONAL AND INTERNATIONAL ADVERTISING MANAGER Mark Dunaway, [email protected] (949) 226-2023

EASTERN REGIONAL ADVERTISING MANAGER Jasmine Formanek, [email protected] (949) 226-2004

Billing

Cindy Muir, [email protected] (949) 226-2021

Compatible Modules from Single-Core to Quad-Core

The MSC Q7-IMX6 with ARM Cortex™-A9 CPU is a compatible module available with an economical single-core CPU, a strong dual-core processor or a powerful quad-core CPU with up to 1.2 GHz, and provides very high-performance graphics.

Freescale i.MX6 Quad-, Dual- or Single-Core ARM Cortex-A9 up to 1.2 GHz
Up to 4 GB DDR3 SDRAM
Up to 64 GB Flash
GbE, PCIe x1, SATA-II, USB
Triple independent display support
HDMI/DVI + LVDS up to 1920x1200
Dual-channel LVDS also usable as 2x LVDS up to 1280x720
OpenGL® ES 1.1/2.0, OpenVG™ 1.1, OpenCL™ 1.1 EP
UART, Audio, CAN, SPI, I2C
Industrial temperature range

Qseven™ - MSC Q7-IMX6

MSC Embedded Inc. Tel. +1 650 616 [email protected]

www.mscembedded.com


Bridge the gap between ARM and x86 with Qseven Computer-on-Modules. One carrier board can be equipped with Freescale® ARM, Intel® Atom™ or AMD® G-Series processor-based Qseven Computer-on-Modules.

congatec, Inc.6262 Ferris Square | San Diego | CA 92121 USA | Phone 1-858-457-2600 | [email protected]

www.congatec.us

conga-QMX6 – ARM Quad Core
conga-QA3 – Intel® Atom™
conga-QAF – AMD® G-Series

To Contact RTC magazine:

HOME OFFICE The RTC Group, 905 Calle Amanecer, Suite 250, San Clemente, CA 92673 Phone: (949) 226-2000 Fax: (949) 226-2050, www.rtcgroup.com

Editorial Office Tom Williams, Editor-in-Chief 1669 Nelson Road, No. 2, Scotts Valley, CA 95066 Phone: (831) 335-1509

Published by The RTC Group. Copyright 2014, The RTC Group. Printed in the United States. All rights reserved. All related graphics are trademarks of The RTC Group. All other brand and product names are the property of their holders.

Page 5: RTC Magazine


EDITORIAL

MARCH 2014

Tom Williams, Editor-in-Chief

Integration is definitely producing high performance in embedded computing. While this may be basically enabled by Moore’s law, it has unleashed a wide range of creativity. And like every other outbreak of innovation, these developments are taking markedly different directions. We have only to remember the over 100 schemes for switched fabrics that emerged some ten years ago, which ultimately resulted in the much smaller number used today, to appreciate what a positive thing this is.

Time will tell how this all plays out, but from here it looks like the ability to integrate really powerful hardware performance while maintaining a high degree of configurability and programmability is poised to push the ASIC into ever more rarefied zones of high volume and special needs. With the development time for a highly integrated specialized ASIC stretching as long as four years (!), these other choices are going to look increasingly attractive.

In what may appear to be a somewhat subjective classification, I see this generation of highly integrated core devices breaking out in a number of ways, some of which appear to be the integration of what were once distinct devices on a board or module. There are, for example, the now fairly well known integrations of multicore ARM processors in the same silicon die with a set of their standard peripherals and an FPGA fabric along with, in some cases, additional analog components. These offerings come notably from Altera, Xilinx and Microsemi.

Then we have other offerings from companies such as NVIDIA and AMD that integrate multicore CPUs on the same die with very powerful and parallel general-purpose graphics processing units (GPGPUs) tightly integrated with the CPUs. These GPGPUs are designed to do very high-level graphics, video and machine vision processing—tasks that also often involve intensive mathematical operations, all of which lend themselves to execution with a high degree of parallelism.

Next there are families of highly integrated microcontrollers that incorporate CPU cores along with a rich set of on-chip peripherals, memory, memory interfaces and graphics processors, together with internal buses. Families like the PIC32MZ from Microchip and the Atom Z36xxx and Z37xxx (formerly “Bay Trail”) from Intel come in versions that provide different combinations of internal functions that the designer can select from to best fit his or her needs.

Finally, there are multicore processors that replicate CPU cores with identical instruction sets in devices with two to ten or more cores. These include multicore processors from AMD, Intel, ARM partners and many more. The CPU/FPGA, GPGPU and multicore directions have in common the fact that they are all trying, along different paths, to increase performance by offering parallelism—in terms of the programmable fabric, the parallel architecture of the GPGPU, or the multicore architecture.

One general observation about these different approaches is that they appear to involve different levels and complexities of software issues. Perhaps the most difficult, and as yet not fully solved, hurdle comes with the CPU/FPGA combinations. Here we are bringing together two different disciplines of programmable devices that traditionally have been programmed by their own specialists. Individual manufacturers do supply tools, but there is so far no overall programming/configuration or analysis tool methodology that applies to all of them.

The CPU/GPGPU approach fares better in that there are tools and software platforms that let developers express themselves in an extended world of the C/C++ language. NVIDIA has developed the CUDA platform for its Kepler architecture, which will parallelize C code intended for execution on the GPGPU. AMD has selected the OpenCL platform, developed for this same purpose, to use on its graphics coprocessors to implement parallel mathematical operations. OpenCL also has the advantage that it is starting to be used for programming parallel operations in FPGAs as well.

The world of advanced microcontrollers can be programmed in a single language, as long as the manufacturers provide drivers for their internal peripherals, as they of necessity must do. The “homogeneous” multicore processors enjoy several alternatives. They can be programmed with a single operating system and a single language, or they can make use of such things as hypervisors and virtualization to accommodate multiple OSs. Such devices also lend themselves to having what would otherwise be special hardware peripherals implemented in software instead.

Integrating diverse hardware elements also has implications for complexity across interfaces via different protocols, as well as obstacles to scalability. These are just an indication of some of the issues that will face developers pursuing greater device integration as we move through an exciting and promising period of innovation.

Integrating for Parallelism, Performance and Power – A Dance with Complexity

Page 6: RTC Magazine

INDUSTRY INSIDER

MARCH 2014

Major Shift at Microsoft: Gates out as Chairman, New CEO

Microsoft has named a new CEO and a new chairman of its board, with co-founder and chairman Bill Gates stepping down from that position into a role as “technology advisor” to the new CEO—whatever that means. Word is that Gates will remain a director and come into work at least one day each week. Replacing retiring CEO Steve Ballmer is Satya Nadella, who has been with the company for 22 years and has overseen its corporate software business as head of its Cloud and Enterprise division. Replacing Gates as chairman is John Thompson, who is on the Microsoft board and is a former CEO of Symantec.

All this leaves open many questions as to what direction Microsoft will take. Its traditional PC-based product lines have softened with the drop in PC shipments, and it is facing stiff competition from players like Apple and Samsung in the tablet and smartphone arena. Microsoft recently acquired Nokia’s handset unit, but it is not yet clear where that will lead. The Enterprise and Cloud division that Nadella comes from is by comparison relatively stable. Gates’ role—in addition to his activity with the Bill and Melinda Gates Foundation, which he considers his full-time work—will be as a strategic technology advisor, which would seem to imply that although his direct involvement in the company’s operations may be cut back, his influence on its strategic direction may still be strongly felt. It will be an interesting time because whenever something as big as Microsoft moves, the world feels the waves.

First Set of Standards for Cooperative Intelligent Transport Systems (C-ITS)

The European Committee for Standardization (CEN) and the European Telecommunications Standards Institute (ETSI) have confirmed that the basic set of standards for Cooperative Intelligent Transport Systems (C-ITS), as requested by the European Commission in 2009, have now been adopted and issued. The so-called “Release 1 specifications” developed by CEN and ETSI will enable vehicles made by different manufacturers to communicate with each other and with the road infrastructure systems.

When they have been applied by vehicle manufacturers, the new specifications should contribute to preventing road accidents by providing warning messages, for example about driving the wrong way or possible collisions at intersections, as well as advance warnings of roadwork, traffic jams and other potential risks to road safety. This vision of safe and intelligent mobility can be achieved by utilizing wireless communication technologies to link vehicles and infrastructure and identify potential risks in real time.

With more than 200 million vehicles on the roads in Europe today and some 13 million jobs at stake across the continent, it is essential for Europe’s automotive industry to be at the forefront when it comes to introducing new technologies. However, the next generation of “connected cars” will not work without common technical specifications, for example regarding radio frequencies and messaging formats. This is why the European Commission decided in 2009 to issue a formal request (Mandate 453) to CEN and ETSI, asking them to prepare a coherent set of standards, specifications and guidelines to support the implementation and deployment of Cooperative ITS systems across Europe.

Connected cars are expected to appear on European roads in 2015. The authorities in Austria, Germany and the Netherlands have agreed to cooperate on the implementation of ITS infrastructure along the route between Rotterdam and Vienna (via Frankfurt).

GE Expands Operations in Huntsville, Alabama

GE Intelligent Platforms has announced the expansion of its facility in Huntsville, Alabama with the formal opening of a new building. Huntsville is a key location for GE Intelligent Platforms, serving the defense and aerospace industries as well as multiple industrial markets.

The new building creates a Center of Excellence that will be at the heart of GE Intelligent Platforms’ growing systems business, which sees GE delivering the value that is increasingly required by demanding prime contractors, systems integrators and OEMs in defense and other industries as those organizations look to focus on their core competencies.

GE’s Huntsville facility is home to 235 employees, including engineering, manufacturing and administrative functions. The expansion allows for consolidation of operations into a single facility, housing the designers and developers of GE’s high-performance, rugged integrated systems in the same building in which those systems are manufactured.

Synergies are created to allow for faster, more responsive product development—from prototyping to production—and shorter lead times for GE’s customers.

Housed in the new building are advanced capabilities for extended testing and analysis of the effects of vibration, as well as for examining and implementing innovative cooling technologies. GE is perhaps the most experienced developer of rugged embedded computing solutions in the defense industry, which requires computing that can withstand the rigors of deployment in environments subject to extremes of shock, vibration and temperature, as well as the ingress of water and contaminants. This expertise is also vital to applications in the oil & gas, power and metals industries.

Imagination and Green Hills Partner to Bring Compiler and Tools Support to MIPS CPUs

Imagination Technologies and Green Hills Software have signed a multi-year agreement that brings expanded, comprehensive Green Hills compiler and tools support to a broad range of Imagination’s current and future MIPS CPU cores and architectures.

Green Hills Software’s embedded development solutions are coming to MIPS Aptiv cores, as well as to the new MIPS Warrior family of cores, comprised of the entry-level M-class cores, mid-range I-class cores and high-performance P-class cores. Green Hills Software is also upgrading capability to deliver fully optimized support for the microMIPS code compression instruction set architecture (ISA) and the latest MIPS r5 architecture, including key features such as hardware virtualization. The companies are

Page 7: RTC Magazine

also working together in support of next-generation architectures. MIPS CPU support from Green Hills Software includes its C/C++ compiler, assembler and linker, binary tool chain, MULTI integrated development environment (IDE), Green Hills Probe, SuperTrace Probe and documentation.

Says Mike Haden, general manager, Advanced Products, Green Hills Software: “Green Hills has a long history of support for MIPS, and we are continuing that tradition through this comprehensive new agreement. Under Imagination, we are seeing growing demand for MIPS. With Imagination’s and Green Hills Software’s combined expertise in security, multi-threaded and multicore CPUs, and heterogeneous computing, this collaboration will provide tremendous value to joint customers.”

Says Tony King-Smith, EVP marketing, Imagination: “Imagination already has a strong and vibrant world-class ecosystem for MIPS, and we are continuing to invest in growing that ecosystem to address new opportunities. Green Hills is a great strategic partner for Imagination, and this new agreement reflects growing demand across several key markets. We look forward to working together to help drive the future of mobile and embedded software engineering, and the future of heterogeneous processing.”

Memoir Systems Joins TSMC Soft IP Alliance

Memoir Systems has announced that it has joined the TSMC Soft IP Alliance Program, leveraging TSMC’s advanced process technologies to improve power, performance and area for its Renaissance family of multiport memory IP. Using Memoir’s IP delivery platform that includes


design-for-formal and exhaustive formal verification, Memoir delivered fully verified ultra-high-performance multiport memory IPs to TSMC. The TSMC Soft IP Alliance Program requires rigorous checks and quantitative data to demonstrate the robustness and completeness of synthesizable semiconductor IP that is part of the TSMC 9000 IP library. These IPs successfully passed TSMC’s comprehensive soft IP qualification process, ensuring the best possible design experience, easiest design reuse and the fastest integration into SoCs.

New SoCs are constructed predominantly by assembling a multitude of IP building blocks, with as many as 50-80% of those being memory. Therefore, the quality of IP building blocks and the ease with which they can be integrated for a particular process have a huge impact on time-to-market and customer success.

In many SoCs, embedded memory performance is the limiting factor. As the processor-embedded memory gap widens, higher performance multiport memories are required to unlock application potential. Memoir’s Renaissance memories combine single port embedded memory macros with algorithms to increase memory performance by up to 10X. The algorithms are implemented in standard RTL logic and expose multiple memory interfaces that allow multiple parallel accesses within a single memory clock cycle. The resulting multiport memory is delivered as soft IP. It is fully verified to cover all corner cases, and offers guaranteed performance for fully random and non-random memory accesses, while reducing area and power consumption.

Mentor Graphics Acquires Mecel Picea Autosar Development Suite

Mentor Graphics has announced that it has strengthened its automotive software solution by purchasing the Autosar assets, including the Mecel Picea Autosar Development Suite, from Mecel AB. The acquired assets complement the existing automotive software solution from Mentor, including the Volcano Autosar products, Mentor Embedded Hypervisor and Mentor Automotive Technology Platform (ATP), which enables Linux-based automotive solutions, including GENIVI-compliant infotainment (IVI) solutions. The Mentor automotive software solutions enable a wide range of subsystems, including secure, homogeneous and heterogeneous multicore and single-core ECUs.

The Mentor Graphics Embedded Software Division enables embedded development for a variety of applications including automotive, industrial, smart energy, medical devices and consumer electronics. Embedded developers can create systems with the latest processors and microcontrollers with commercially supported and customizable Linux-based solutions, including the industry-leading Sourcery CodeBench and Mentor Embedded Linux products. For real-time control, systems developers can take advantage of the small-footprint and low-power-capable Nucleus RTOS.

Artesyn Joins ETSI Network Functions Virtualization (NFV) Industry Specifications Group

Artesyn Embedded Technologies, formerly Emerson Network Power’s Embedded Computing & Power business, has

announced that it has joined the European Telecommunications Standards Institute (ETSI) Network Functions Virtualization (NFV) Industry Specifications Group (ISG). Initiated by some of the largest network operators in the world, the group has attracted broad industry support, and participation now includes communication equipment vendors, IT vendors and technology providers.

The NFV ISG aims to achieve a consistent approach and common architecture for the hardware and software infrastructure needed to support virtualized network functions. The group has already published the first five specifications and is developing more detailed guidance. These documents agree on a framework and terminology for NFV, which will help the industry channel its efforts toward fully interoperable solutions to enable global economies of scale.

Mark Dunton, software solutions manager for Artesyn Embedded Technologies, said: “Artesyn has joined the world’s leading experts in NFV with a view to evolving our product line to support infrastructure development and network deployment. We embrace the objectives of the NFV ISG as they align with our vision for Artesyn’s communications solutions. Artesyn is looking forward to becoming a major contributor to the group, specifically leveraging expertise in heterogeneous acceleration and other critical functions to enable effective NFV deployments.”

Page 8: RTC Magazine


SMALL FORM FACTOR FORUM

Colin McCracken

The Embedded World show in Germany is the largest embedded-focused exhibition for components, software and tools. This year’s show, once again, did not disappoint.

From show floor demos to secret knightings in dungeons over libations, visitors took away much knowledge and wisdom. When the weather outside was frigid, the relationships warmed up inside and underground.

It’s a wonder that EW continues to grow while similar shows in the U.S. are shrinking and even folding their tents. Or perhaps it’s not too much of a mystery. The way OEMs find and buy products is different between the geographies. Each European country has distributors who speak the local language and foster great long-standing relationships with their customers, with an emphasis on meeting face-to-face. While many OEMs buy through distribution in the Americas, increasingly manufacturing goes offshore or through system integrators and contract manufacturers. Boards are sold directly from suppliers to system OEMs, often without the local touch of manufacturers’ reps anymore. Hardware and software engineers simply surf the Web for up-to-the-minute module specs and fill out the vendor’s online contact forms. Even chip vendors are reducing the use of manufacturers’ reps and distributors in order to focus on competitive high-volume business directly themselves. Sadly, personal relationships aren’t as important in this manic stretched-too-thin culture.

Considering small form factor technology, Germany continues to be the locus of popular open standards for tiny computing modules. More than 10 years ago, non-standard DIMM-PCs first intrigued the market with processor boards plugging into commodity high-volume RAM sockets. ETX modules took over, using board-to-board connector pairs instead. COM Express carried the x86 PC module concept into the PCI Express era while XTX breathed some life into old ETX carrier boards. At a time when some trade groups were struggling with relevance, computer-on-modules (COMs) blew right past and never looked back, crossing the million modules per year mark while assimilating the volumes from legacy form factors.

With COM Express (COMe) on auto-pilot from the Pentium M and Core Duo momentum, in 2008 an explosion of form factors harkened from these hallowed halls when Intel’s low-power Atom family did not conform well to COMe. Atom created a massive “form factor forking,” as board vendors raced independently to invent the next big small form factor. This year’s show commemorated the sixth anniversary of the Qseven module standard. Qseven not only reached critical mass faster than any of the other form factors introduced in the 2008 great bang, but Qseven modules were displayed with the broadest processor manufacturer coverage by board vendors from all over the world. It’s now possible to design a single carrier board that uses Intel, AMD, Freescale, TI, Nvidia and Qualcomm processor modules! And most of these processors are ARM, not even x86. But this year’s show hammers home the idea of a cross-architecture carrier board.

Qseven’s success also signals the return to low-cost laptop PC-style connectors for computing modules, just like the good old DIMM-PC days. Full circle. While tiny modules are necessarily limited in terms of performance and power dissipation, Intel’s latest Bay Trail SoC and AMD’s G-series SoC run circles around modules from just a few years ago, meeting the needs of many high-volume applications.

“Success breeds _____.” Fill in the blank with “contempt,” “jealousy,” or just “good old fashioned competition.” Not to be outdone completely, the non-Qseven vendors created the ULP-COM standard; you guessed it, at the same venue four years later—Embedded World 2012. Can’t we all just get along? It took another year to settle on a name, “SMARC.” It took a year last time too, when COM Express emerged as a better name for “ETX Express,” which bore no resemblance to ETX. New connectors, and two instead of four of them. A new module size. 12V input instead of 5V, so that power consumption could ratchet way up to 188 watts, leaving plenty of room for the low-power Qseven to come in underneath.

Small form factor modules are everywhere now, in all major embedded market segments, in mobile as well as fixed installations, all around the world. Emanating from Germany, they are spreading like a wildfire in a California drought.

You’ll find everything at Embedded World, except perhaps crop circles. The frozen tundra is too hard to carve this time of year. In case you missed all the action, pencil it into your 2015 calendar. With the breadth of vendors exhibiting, the lively tech discussions, relationships re-kindled, and of course the Weissbier, EW has more than earned the word “World” in its name.

SFFs Take on the World

Page 9: RTC Magazine
Page 10: RTC Magazine

EDITOR’S REPORTplications developed around products to enhance user experience. It should be noted here that the examples given are not necessarily all 3Pillar customers but serve to illustrate the points made by DeWolf.

Take for example, Nike. What does Nike make? Sports apparel, primarily shoes. Nike is of course known through-out the world by its trademark “swoosh” that adorns so many other products, per-sons and events. Now once we get past all the image, glitter, sports super stars and psychological manipulation, why do peo-ple buy Nike products? Well, they have an interest in pursuing sports themselves, physical activity, which also equates to an interest in their own personal fitness.

EDITOR'S REPORT

Big Data Drives New Apps

Solutions from Data: Innovative Apps Can Bring Engagement, Loyalty and Revenue

by Tom Williams, Editor-in-Chief

With today's systems and devices producing Big Data, there is a need for creative ways to gain insight and use it to create additional value by solving problems related to underlying products and services. This creates customer engagement and brand loyalty, and leads to additional growth.

Data. It is being relentlessly generated by almost every form of human activity—by commerce, government, research—and by billions of interconnected systems and devices all over the world. Even the smallest sensors generate data that is aggregated over local networks and eventually winds up on servers in the Cloud, where it is used for . . . what? Time was, the answer to that question would come from the IT department, and it would center on internal operations, cost control, inventory management, customer databases, etc.—all very useful and necessary. But that was data generated and managed about products and business operations. What we are coming to call Big Data is generated more by products and by customer interaction with those products and services.

As such, Big Data has the potential to be used in fundamentally different ways to enhance the user experience, reinforce brand awareness and effectively become an additional product that can add to company revenues and continue customer engagement. This opens a whole new arena for software development that seeks to leverage the digital aspects of a company and its products, many of which are consumer devices based on embedded technology that are connected in various ways and generate data that customers can use to improve their interaction with the purchased product.

According to David DeWolf, CEO of 3Pillar Global Software, "More and more traditional companies that aren't software companies are building software products. Software is now your brand. You touch your customer more through software than any other way no matter what industry you're in." This appears fairly obvious from the Web and how customers find out about and are sold products, but it is increasingly true of the software applications themselves.

Nike, for example, offers a low-cost digital product, a consumer embedded system called the Nike FuelBand SE, a bracelet that contains technology to measure activity such as pace, heart rate and more. It also offers a SportWatch GPS that tracks pace as well as the path run, and a smartphone app called Nike+ Running that, without additional sensors, tracks distance, pace, time and calories burned along with GPS. In addition, there is computer software to store and analyze the data, with access to Facebook "buddies" and the ability to earn badges, set goals and generally track and motivate fitness progress. In fact, there is a whole online community called Nike+ dedicated to motivation and training.

All this to sell shoes? Well, actually these are software-based products that are also for sale. They engage the customer and represent value in their own right, while reinforcing and adding to the value of the underlying basic product and/or service.

The example of Nike illustrates a much wider phenomenon that 3Pillar has identified and is actively involved in: helping a wide variety of companies identify and use software to create solutions that can have a similar effect on their own growth and market presence. 3Pillar does this for customers by looking for innovative ways of using data related to a product to solve problems, then creating a prototype that can be put out for customer feedback. In this scenario, software development does not begin with a requirements document but rather with a functional concept built on data and knowledge of the customer. This prototype is then refined over time. "Market research," DeWolf says, "is a thing of the past."

Consumers have embraced the smartphone and the tablet experience, and the app is now the door to a world of innovative software solutions based on companies' data and imagination. One customer that 3Pillar did work with to expand the value of its existing business was Carfax, the source for information on used vehicles. Carfax reports are increasingly used by dealers to supply customers with information about vehicles they have in stock. Carfax wanted to grow their company value and add customers by turning their Web-based consumer application into a mobile app.

Now a person shopping for a used car can simply subscribe—note additional revenue—to one or a number of reports, enter the license or VIN number, and get a full report on his or her smartphone. The app features maintenance reports, service records, registration, etc. In addition, the user can locate recommended service providers and receive repair estimates (Figure 1). This is a case of building on an existing software service to create a solution to a remaining problem, namely how to check the report while you're actually out on a dealer's lot with a salesman droning his pitch at you. And the app has built on the existing software and data in that it also provides a way to maintain service records on a given purchased vehicle (Figure 2).

But DeWolf emphasizes that such innovation is only the start. There must also be acceleration, the constant use of feedback to continue to differentiate over time. This involves using the customer feedback and customer engagement created by the initial innovative application to continue the product life cycle, relying on existing information and information created from customer interaction in an agile development methodology.

This can also make use of other existing and available data that might not have been generated in-house. Big Data, of course, does not come from a single source. Consider, for example,

FIGURE 1

The Carfax mobile app lets the user find a repair shop and get an estimate on work that may be needed in conjunction with a possible purchase.

FIGURE 2

Carfax offers access to a vehicle's service record and also notifies of upcoming needed service.

the possibility of using the biometric data generated from sensors and apps such as those in the Nike+ or a similar environment for inclusion in a medical monitoring application that might bring in other data such as blood oxygen, more detailed EKG data, or more focused data about a specific condition.

There are, of course, other companies offering biometric training monitors besides Nike. What if one of them were to include data from recipes that would furnish information about caloric intake, trans fats or other nutritional data? That could then be correlated with the exercise data to provide an even richer training application or a weight loss program. In a similar way, it would appear that the Carfax app brings in more information than simply the vehicle records. It also accesses repair shop data and can get repair or service estimates. This relies on data well beyond that associated with a given vehicle, but it serves to enhance the value of the underlying product that Carfax originally offered.

The possibilities are endless. Maybe someday we can enter the information from a wine label and get data about that season's sunshine and moisture, soil character, etc. There is a huge, largely untapped market for the creative use of the data supplied by an ever growing number of systems and devices, data that can be used to create engagement, add to the user experience, increase brand loyalty and solve problems. The secret is to look at it in creative and imaginative ways aimed at innovative solutions.

3Pillar Global Fairfax, VA. (703) 259-8900 www.3pillarglobal.com


12 MARCH 2014 RTC MAGAZINE

TECHNOLOGY CORE

Finding the Sweet Spot for SoC and ASIC Design

It's no secret in the semiconductor industry that software development costs for a new system-on-chip (SoC) can exceed the hardware development costs by a significant margin. Having been directly involved in the software side of the SoC development process, I've experienced the overall development flow in detail, which gives me the courage to try to answer the following questions: Why is there so much software to develop? Android and other operating systems exist, and they all have an abstracted hardware interface, so isn't it just a simple matter of a few "drivers" to link up the new silicon with the OS? If only it were so simple.

Well, the bottom line is that it's all about the hardware. All the software effort, from writing the lowest-level driver to building the coolest multimedia Android app, is driven, and potentially exacerbated, by the underlying hardware capabilities and their impact on the software developers on the SoC team.

To get a feel for the magnitude of the software effort for a new SoC, here's a composite picture gleaned from projects I worked on not too long ago. A typical project might have 500+ software developers overall, with most devoted to operating system development and customer support, and at most 100 developers for kernel porting, bring-up and testing. These projects typically take 48 months from start to finish for a mainstream, complex, mobile device SoC. If the SoC is new and not a derivative of a previous SoC, it can take much longer than 48 months to complete, especially if there is a process node change. It would be considered a success if any project completed in a firm 48 months, and then only with incremental changes to the SoC and more aggressive parallelization of hardware and software during development.

Clearly, these software development projects are large in scale. Is there anything that can be done to change this situation? In a previous RTC Magazine article, we discussed the criticality of providing software developers with a realistic and usable (good performance) platform upon which to run software before silicon, to enable parallel hardware and software development. Here we assume that all that technology is already in place. So now let's attack the issue of what's currently holding up the development of software for the new SoC, and why there are so many software engineers. Because the answer touches on many aspects of software support for hardware, it's worth taking a closer look at what's going on.

It all begins with the need to support operating systems such as Android, Linux and Windows 8 with the digital signal processor (DSP), imaging, graphics processing unit (GPU) and other hardware subsystems on the SoC. In short, it's the issue of offloading software functions into hardware for performance gains or lower power consumption. The most common form of offload is moving a particular software capability into the underlying hardware of the SoC. However, given the ubiquity of wireless communication and the Internet, there is an emerging offload architecture based upon moving the offload function up into the Cloud. In fact, some architectures can decide on the fly whether to use device-based or Cloud-based offload, optimized around the best power savings, compute time, or some other user-selectable benefit. But no matter where the offload happens, it is the implications for the software that we need to understand.
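The on-the-fly placement decision described above can be reduced to a small cost model. The sketch below is purely illustrative: the type names, the policy and the energy constants are all invented for this example and do not come from any real SoC SDK.

```c
/* Hypothetical on-the-fly offload policy: pick the cheaper of device-based
 * (local DSP) and Cloud-based offload by estimated energy. All names and
 * constants here are invented for illustration. */
#include <stdint.h>

typedef enum { OFFLOAD_LOCAL_DSP, OFFLOAD_CLOUD } offload_target_t;

typedef struct {
    uint32_t cycles_on_dsp;      /* estimated DSP cycles for the function  */
    uint32_t bytes_to_move;      /* payload size if shipped to the Cloud   */
    uint32_t radio_nj_per_byte;  /* radio energy cost, nanojoules per byte */
    uint32_t dsp_nj_per_kcycle;  /* DSP energy cost, nJ per 1000 cycles    */
} offload_estimate_t;

/* Choose whichever target costs less energy. A real policy would also
 * weigh latency, link availability and user-selectable preferences. */
offload_target_t choose_offload(const offload_estimate_t *e)
{
    uint64_t local_nj = (uint64_t)e->cycles_on_dsp * e->dsp_nj_per_kcycle / 1000u;
    uint64_t cloud_nj = (uint64_t)e->bytes_to_move * e->radio_nj_per_byte;
    return (local_nj <= cloud_nj) ? OFFLOAD_LOCAL_DSP : OFFLOAD_CLOUD;
}
```

The point is not the arithmetic but the structure: the decision is data-driven and can flip at runtime as payload sizes and link costs change, which is exactly why the software implications cut across layers.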

by Jim Ready, Cadence Design Systems

New system-on-chip designs require major software efforts from internal operating system and interface issues on up to specialized on-chip device functionality. Ultimately, the software must make the hardware work. Getting there requires clear vision and close cooperation between hardware and software teams.

Beyond Drivers: The Critical Role of System Software Stack Architecture in SoC Hardware Development


For example, a DSP subsystem on an SoC can support a wide range of audio processing functions, including audio stream coding and decoding, voice processing, equalization and many other capabilities. These capabilities are implemented as a combination of hardware (the DSP) and extensive software libraries, and they are typically independent of any particular OS environment. Thus the "usual" notion is that "software drivers" will need to be developed, either by the SoC maker itself or by the customer, in order to interface the DSP hardware and software audio subsystem to the operating system that the customer is using. This notion corresponds to the typical, but oversimplified, layering diagram of a system, where there is a hardware layer, a driver layer, an operating system layer and an application layer. In this model, all the hardware maker has to supply is an OS-compliant driver, and the hardware is then supported all the way up the software stack for apps to use. If only this were true!

In many cases, however, this simple model doesn't reflect reality at all. For example, see Figure 1 for a system architecture diagram of Android. Note that although there certainly is a driver layer, there are a couple of intermediate layers with multiple components before reaching the application layer, all of which may have some dependencies on the underlying hardware. Also keep in mind that this software stack consists of many millions of lines of code, which need to be understood by software engineers who didn't write it in the first place. This is not the environment in which to trivialize the challenges of modifying the software stack.

With this complexity in mind, it is important to note that a number of popular OSs have unique and/or limited interfaces available to integrate support for DSPs or other hardware into the existing system frameworks. Imagine that the multimedia framework developers designed the framework to be largely software-based, with minimal interfaces to make limited use of hardware acceleration for various multimedia functions. So even if an SoC has a DSP on-chip, as far as the media framework is concerned, most of the hardware capabilities are unreachable; in effect, they don't exist.

See Figure 2 for an illustration of this situation. Note that although the decoding capability of the DSP is used, all the other audio functions are performed on the application processor, even though the DSP might well be able to perform those functions with much greater power efficiency. Note also the back-and-forth movement of data between the DSP and the application processor for decoding. That data movement uses power, and of course the application processor needs to be powered on as well.

In order to fully exploit the DSP to offload more of the audio function from the application processor, the hardware vendor can re-engineer the media framework to fully support their DSP, which is the optimal way for the system software to make full use of the DSP offload capability. See Figure 3 for an illustration of an advanced DSP offload architecture. In this case, almost all of the audio processing is offloaded to the DSP subsystem, allowing the application processor to be powered down with the resultant savings in power.
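A rough way to see why the advanced architecture pays off is a back-of-envelope power model of the two playback paths. The structure and every milliwatt figure below are invented for illustration, not measurements from any real device.

```c
/* Back-of-envelope power model for the two audio playback paths.
 * All component power figures are hypothetical. */
typedef struct {
    unsigned app_cpu_mw;   /* application processor while awake       */
    unsigned dsp_mw;       /* HiFi DSP while decoding/mixing          */
    unsigned bus_mw;       /* interconnect moving PCM back and forth  */
} audio_power_t;

/* Baseline: the DSP decodes, but mixing and effects run on the app
 * processor, so the CPU and the bus stay powered for the whole song. */
static unsigned baseline_mw(const audio_power_t *p)
{
    return p->app_cpu_mw + p->dsp_mw + p->bus_mw;
}

/* Advanced: decode, mixing and effects all stay on the DSP, the app
 * processor sleeps, and no PCM crosses the interconnect. */
static unsigned advanced_mw(const audio_power_t *p)
{
    return p->dsp_mw;
}
```

With, say, a 300 mW application processor, a 40 mW DSP and 25 mW of interconnect traffic, the baseline path burns 365 mW while the fully offloaded path burns 40 mW; a gap of that shape is what justifies the re-engineering effort.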

[Figure 1 diagram: the Android software stack (Applications; Applications Framework with the Activity, Window, Telephony, Location, Package, Resource and Notification managers, Content Providers and View System; Libraries and Android Runtime, including the Media Framework, Surface Manager, OpenGL|ES, SGL, FreeType, SSL, WebKit, SQLite, Libc, Core Libraries and the Dalvik Virtual Machine; and the Linux kernel with display, keypad, camera, WiFi, Binder IPC, flash memory and audio drivers plus power management), annotated with hardware-related intra- and cross-layer activities: algorithms and drivers, OS-related work, power-related work, optimization and new frameworks.]

FIGURE 1

Android architecture and hardware-related intra- and cross-layer software activities. (Source: Google)


The hardware vendors are faced with the task of re-engineering the media framework to fully support their DSP, or leaving that effort to the customer, but in either case it's the only way for the system software to make full use of the hardware capability. And, of course, while implementing this offload capability, the developers have to make sure they don't "break" any of the application interfaces to the media framework; otherwise the apps won't work. This effort no doubt calls for multiple man-years of work, and likely has to be revisited each time the framework is revised. In addition, it requires expertise in at least two domains: system software (OS internals) and signal processing, both hardware and software. But the benefits are clear. The system can deliver the same level of audio processing at a small fraction of the power required if the processing remained on the application processor.

The key takeaway here is that while many hardware-dependent functions are contained within a single layer, device drivers for example, other functions are not. As Figure 1 shows, and as we just discussed in detail for audio processing, adding a new framework, power optimization or performance tuning is vertical. To support the effort, software needs to be written or modified at every layer.

We might conclude that developing interface standards is the way to solve this kind of situation, and indeed it can be. But as we’ll soon see, there can be interesting and unintended consequences with that approach.

Taming the Interfaces

For example, with GPU offload we see a different situation than in the multimedia framework discussion. Here, the industry has been working for some time to have standard offload mechanisms in PC and mobile platforms to take advantage of the large raw compute power of GPUs, especially for things closely related to graphics. These include floating-point processing, because the hardware is there, and imaging, because some parts of the pipeline can be applied. These mechanisms include OpenCL, AMD's Heterogeneous System Architecture (HSA) consortium, Google Renderscript and Filterscript, and a number of other initiatives. While some may hope that the GPU is "The Universal Offload Engine"—i.e., you only need to support GPUs and all your energy and throughput problems are solved—as usual, the reality is more complex.

As a result of the standardization effort, customers are asking SoC makers to support all of the hardware and software hooks proposed for CPU/GPU coordination, even when they may be a step in the wrong direction on efficiency. HSA, for example, requires full cache coherency between CPUs and offload engines, unified virtual memory management, and (eventually) 64-bit flat addressing throughout. That's not necessarily optimal for low-cost, low-power offload. There is a legitimate argument that these things would ease function migration onto offload engines, but the lean, mean hardware leverage is significantly reduced, which could be a problem for ultra-small devices used for "Internet of Things" applications. Many of these programming models and offload architectures implicitly or explicitly demand heavy-duty floating point. That's fine if the applications really need it. But it's a shame if the applications can really be implemented in fixed point, because there's a factor of at least three in throughput/watt if you can get a software function down from 32-bit floating point to 16-bit fixed point representation.
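As a concrete illustration of that float-to-fixed rewrite, here is a trivial gain stage in both forms. This is the generic Q15 idiom, not code from any particular DSP library; Q15 stores a value x in [-1, 1) as x * 32768 in an int16_t.

```c
/* A gain stage in 32-bit float and the equivalent in Q15 fixed point,
 * the kind of rewrite that buys the throughput/watt factor mentioned
 * above. Generic idiom, not taken from any vendor library. */
#include <stdint.h>

float gain_f32(float sample, float gain)
{
    return sample * gain;
}

/* Q15 multiply: widen to 32 bits, add a rounding bias, shift back. */
int16_t gain_q15(int16_t sample, int16_t gain)
{
    int32_t acc = (int32_t)sample * (int32_t)gain;  /* Q30 product      */
    acc += 1 << 14;                                 /* round to nearest */
    return (int16_t)(acc >> 15);                    /* back to Q15      */
}
```

In Q15, 0.5 is stored as 16384, and 0.5 x 0.5 comes back as 8192 (0.25), matching the float version to within rounding, while the multiply itself needs only a 16x16 integer multiplier instead of a floating-point unit.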

The bottom line is that there is no guarantee at all for an SoC maker that the proper interfaces and layers exist in Android, Linux, or Windows 8 Mobile to easily integrate hardware into those systems and allow application software and the overall system to gain full benefits from the hardware. It's no wonder, then, that the major SoC suppliers have large software teams re-engineering the guts of these major OSs to support the advanced hardware capabilities they've placed on their SoCs.

But when looking at the overall software headcount, it's also important to recognize that not all of the software developers are working on the core operating system. There is plenty of customer-specific development going on as well. Just as the SoC maker tries to differentiate his SoC with some snazzy hardware (leading to the situation of exploding software


[Figure 2 diagram: the baseline playback path. The media player application, the media framework (MediaPlayerService/StageFright with OMX IL, file reader/parser and audio sink), the AudioFlinger software mixer and software effects all run on the application processor; only the decoder codec runs on the HiFi DSP. Compressed audio passes down through the DSP driver and PCM comes back up through the audio HAL and the ALSA HiFi driver in the Linux kernel.]

FIGURE 2

Android Audio Playback Baseline DSP Offload. (Source: Cadence)


developer headcount discussed here), the SoC customer in turn needs to differentiate his product. That differentiation is likely to be done with software, and it's often part of the SoC maker's business deal that the SoC maker does a lot of that work. For example, it is not uncommon for a large SoC project to have a significant part of the hundreds of software developers devoted to helping (usually for free to the big customers) customize and optimize the OS for their device.

What can be done to improve the situation? First, maybe nothing at all. As Fred Brooks noted in his now legendary book, "The Mythical Man-Month," sometimes what's left for the software is the unique part of the system (the "Essential Complexity," he calls it), and there's no way around the work required to implement it. But Brooks was no pessimist, so we'll follow his lead and look at some suggestions to ease the burden even under the current constraints of the market and industry today.

First, there may be some process improvements that can help. For example, here's an idealized development flow that a number of software architects I've worked with have either implemented or wished that they had. The first step in any all-new SoC development is to capture the high-level requirements for the SoC with a team staffed by both hardware and software architects. (It's not clear that this is always common practice in the industry, by the way.) The end result should be a functional specification composed of all of the individual hardware intellectual property (IP) blocks in the SoC. This should include the register definitions of each IP block, which are a key interface for building the software stack. The software architects now have enough data to validate that the software requirements could, at least in theory, be met by the underlying hardware definition. In turn, the architects need to validate that the design could meet the "speeds and feeds" required. This process can conclude with the decision that the SoC "looks good on paper," and the development effort then moves on to the next phase of implementation.

What's critical here is two-fold. One is that the magnitude of the gap between the SoC hardware and the target operating system(s) should now be identified, whether large or small. Maybe it really is "a small matter of a driver or two," or, worst case, a complete re-write of some major subsystem, but at least there should be no illusions as to the effort required (even though, being software, the effort is still likely to be underestimated). The other is that the core OS team now has enough information to design and implement an abstraction layer, a generic interface to the underlying SoC acceleration and other specialized SoC hardware subsystems. The main OS team then develops in parallel the middleware pieces and applications that use those capabilities.
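The two artifacts that flow produces, register definitions per IP block and a generic abstraction layer, might look something like the following sketch. Every name, register and offset here is invented for illustration; a real SoC's functional specification would supply its own.

```c
/* Sketch of the two artifacts the idealized flow calls for. All names
 * and register offsets are hypothetical. */
#include <stdint.h>
#include <stddef.h>

/* Register map for an imaginary audio-offload IP block, transcribed
 * straight from the functional specification. */
typedef struct {
    volatile uint32_t ctrl;     /* 0x00: bit 0 = enable      */
    volatile uint32_t status;   /* 0x04: bit 0 = busy        */
    volatile uint32_t src_addr; /* 0x08: DMA source          */
    volatile uint32_t dst_addr; /* 0x0C: DMA destination     */
} audio_ip_regs_t;

/* Generic accelerator interface: middleware codes against this before
 * silicon exists, and each SoC supplies an implementation, so the
 * layers above stay unchanged when the hardware underneath changes. */
typedef struct {
    int  (*init)(void *hw);
    int  (*submit)(void *hw, const void *src, void *dst, uint32_t len);
    void (*shutdown)(void *hw);
} accel_ops_t;
```

Because the register struct is generated from the spec, the software architects can sanity-check it (offsets, widths, ordering) long before bring-up, which is exactly the "looks good on paper" checkpoint described above.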

Another observation born of experience, despite wishing the situation were otherwise, is that it's important not to oversimplify. Hardware/software interactions can be very complex, and even the smallest hardware interface, or a change to that interface, can have ripple effects all the way up the software stack, including the application layer. These ripple effects can occur in many forms:

• Porting existing software to an SoC might require a major re-write of the software to support a new hardware capability.

• Adding new hardware to an existing SoC might disrupt the software stack, making the hardware change too expensive to add; or, shipping an SoC with unused hardware can take up space and consume power.

• Designing a new software stack without regard to the possibility of utilizing hardware offload capability in the future might preclude the software from supporting the next hot SoC.

To shamelessly quote Brooks once again, "There is no silver bullet" when it comes to software development. Indeed, as long as the software is built at arm's length from the hardware development (and vice versa, of course) and both sides are aggressively innovative, software will bear the burden of making sure the two pieces fit and work together. One could argue this is the cost of innovation, and the cost of a horizontally structured industry.

Cadence Design Systems San Jose, CA (408) 943-1234 www.cadence.com

[Figure 3 diagram: the advanced playback path. The application-side components (media player application, Audio Track/Audio Effect, AudioFlinger, MediaPlayerService/StageFright) hand off through the audio HAL and HiFi driver, and file reading/parsing, decode, mixing and effects all run on the HiFi DSP, so the application processor's media stack is bypassed during playback.]

FIGURE 3

Android Audio Playback Advanced DSP Offload. (Source: Cadence)


TECHNOLOGY CORE

Finding the Sweet Spot for SoC and ASIC Design

The System on a Chip (SoC) represents the pinnacle of tailored designs. Expressly selected peripherals, especially analog, give the promise of a perfect fit with no waste, delivering very low component cost. Wikipedia defines the SoC as "an integrated circuit that integrates all components of a computer or other electronic system into a single chip." With the level of integration that is commonplace today, many ICs can qualify as an SoC, especially the microcontroller.

The investment required to develop a custom or semi-custom SoC is substantial in both time and cost. There are non-recurring engineering (NRE) costs, negotiating the design specification, design time, fabrication time and, finally, developing the application. But then there is that low component cost as the reward.

Consider now the MCU: a standard product, widely available, without NRE and with many, if not all, of the required peripherals, such as analog and communications, though it does not match the unit cost of the custom SoC. Should you choose future perfection (i.e., a custom SoC) or an MCU that's available today? This is the decision designers must make when considering a custom SoC or standard MCU for their next high-volume design.

Criteria for Comparison

Let's look deeper into this question and compare our choices amongst several key criteria. First, let's define the boundaries of the discussion. We will consider four types of products: a standard MCU, a full custom SoC (or ASIC), a semi-custom SoC, and the FPGA with integrated CPU.

The semi-custom SoC is different from a full custom SoC in that it is already available and was designed with an application in mind. These products can be found from vendors such as Broadcom. Toshiba and Infineon offer full custom ASIC solutions. The FPGA is well known in system design, but recently companies such as Xilinx have been offering hybrid devices with an embedded CPU complemented by programmable logic. Meanwhile, the MCU has grown in complexity. Companies such as Microchip Technology are integrating advanced analog peripherals, lots of memory, and hordes of communication and timing peripherals, making the once sharp lines between MCUs and SoCs blurry.

Now let's bring the differences back into focus by establishing some criteria for comparison. For the assessment to be valid and complete, we need to consider the total cost of ownership, not just the unit cost. This includes the three broad areas of product features, design enablement and time-to-market (Table 1).

Product Features

When it comes to obtaining the peripherals that are an exact fit for your application, it's hard to beat the custom SoC. You work with the vendor and include just the right peripherals to optimize your design. There is little waste and fewer compromises. If you want a 10 Msample/s Pipelined ADC, you simply specify it.

The FPGA is similar in that you can program the logic to be what you want, but may be forced to make sacrifices

by Jason Tollefson, Microchip Technology

Time and money are major considerations when approaching a design. With today’s high scales of integration, the available devices offer a wide array of alternatives, all of which involve different combinations of time, money and other resources. Selecting the right mix can be vital to success.

Integration Blurs the Line between MCUs and SoCs

TABLE 1

SoC considerations.


FIGURE 1

Total Cost of Ownership (where it comes from).


with analog. For example, I can have a 1 Msample/s SAR ADC, but not a 10 Msample/s Pipelined ADC.

The semi-custom SoC offers a variety of peripherals, but they are designed with application segments in mind, such as communication processors, and may be mismatched to your application. So it has more constraints, along with some peripherals that you will not use.

There are literally thousands of different MCU configurations, each "dialed in" for an application space. It's hard to find an application that cannot be served by the MCU. But vendors scale cost with integration. So, getting that 10 Msample/s Pipelined ADC might also mean you get an LCD controller and USB, whether you need them or not. Advantage: Custom SoC.

Sometimes core performance matters, sometimes it does not, depending on the application. Rarely would you need a core running at 200 MHz for a home thermostat, for example. And you would not want it if the thermostat were battery powered.

With FPGAs and semi-custom SoCs, you will typically get the Ferrari. They tend to integrate a CPU so screaming fast and power hungry that it ensures high performance in almost any application. This might be overkill for your application, but it will definitely work.

The MCU, much like the custom SoC, can be scaled to fit. There are lots of choices within 8-/16-/32-bit MCUs. You can easily find one that will fit your processing load and power budget. Many vendors have put special emphasis on CPU efficiency and current consumption, which is a great combination for battery-powered applications. But if you need a Ferrari, you can find that too. Advantage: MCU & Custom SoC.

Cost is typically the reason that people consider an SoC. The perception is that the cost of the SoC is lowest, and that is often the case. But we must be sure that the total cost of ownership is fully understood and considered before committing to the custom SoC.

The fully custom SoC is intentionally a perfect fit for the application, with little to no extraneous features. This generally leads to the lowest unit cost. But there are other considerations. There will be the design and test charges (NRE) that need to be added to the total cost. Once the chip is out of the fab, if issues are found, they will need to be fixed; this is an additional NRE cost. A trip back to the fab for a mask revision is an additional cost and can wipe out the unit cost savings in a hurry. A re-spin also takes time. A fab cycle can be as long as 90 days, leaving you without product to develop your application; an opportunity cost. Another consideration is development tools. Tools for developing application code and testing hardware will need to be custom designed, developed and purchased. These costs can vary widely. However, if your volumes are significant and your product lifetime long, the custom SoC unit savings may just overcome these additional costs.

The FPGA has a high unit cost in the tens of dollars, due in part to the advanced process geometries that enable its flexibility. But other costs include support chips, such as boot memory and numerous voltage regulators. Development tools for FPGAs start around $1,000, depending on how many tool seats are needed. These costs might be absorbed if the application has a high price. But, typically, there are better choices if system cost is a primary concern.

The MCU fitting the application might have a higher unit cost, but can still represent a lower total cost of ownership. For one thing, there are no startup costs (NRE). You simply order your chip online, and get it a few days later. The MCU has its own flash memory and regulator built in, so no supporting chips are necessary. Finally, most MCU vendors offer free software tools and low-cost hardware starting at $20. So, in essence, the total cost of ownership is the product cost (Figure 1). Advantage: MCU.

With a custom SoC, all of the flexibility is at the beginning of the design. You can select peripherals, core and I/O to match your exact application needs. But, after the SoC design becomes a chip, flexibility is lost. The same is true for a semi-custom SoC, where you can select the one that fits your application, but you cannot change the features after that—you are locked in.
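The total-cost-of-ownership argument can be reduced to simple break-even arithmetic. The dollar figures and names below are hypothetical, purely to show the mechanics of amortizing NRE over volume.

```c
/* Break-even sketch for the MCU-versus-custom-SoC trade-off: an MCU
 * with a higher unit price but no NRE against a custom SoC carrying
 * NRE, tooling and possible re-spin costs. All figures hypothetical. */
#include <stdint.h>

typedef struct {
    uint64_t nre_cents;    /* design, test, tools, re-spins */
    uint32_t unit_cents;   /* per-chip price                */
} chip_cost_t;

/* Total cost of ownership at a given production volume. */
uint64_t tco_cents(const chip_cost_t *c, uint64_t volume)
{
    return c->nre_cents + c->unit_cents * volume;
}

/* Smallest volume at which the cheaper-unit option wins, i.e. where
 * the SoC's extra NRE is paid back by its per-unit savings. Assumes
 * the SoC unit price is lower and its NRE is higher. */
uint64_t breakeven_units(const chip_cost_t *soc, const chip_cost_t *mcu)
{
    uint32_t savings  = mcu->unit_cents - soc->unit_cents;
    uint64_t extra_nre = soc->nre_cents - mcu->nre_cents;
    return (extra_nre + savings - 1) / savings;   /* round up */
}
```

With, say, $500,000 of NRE and an $0.80 SoC against a $1.80 MCU with no NRE, the custom SoC only pulls ahead beyond 500,000 units, and a single re-spin added to the NRE pushes that break-even point further out.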

Contrast that with the flexibility of the MCU and FPGA. Both offer scalability in memory, peripherals and I/O. However, they accomplish this differently—the FPGA through programmability, and the MCU through proliferation of product families—but the end result is the same. Changes can be made throughout the design cycle, even after the product is launched. Advantage: MCU & FPGA.

Design Enablement

Where do I go for information? Who do I talk to when I'm stuck or have a problem? How do I integrate the chip into my

FIGURE 2

Time-to-Market (relative time).

Page 18: RTC Magazine

TECHNOLOGY CORE

application? These are three critical ques-tions that you will encounter after you have selected your chip, whether it is an SoC, FPGA or MCU. How these ques-tions are answered by your vendor is cru-cial to the success of your application.

For the fully custom SoC, you have a face-to-face relationship with the vendor. All information flows through your contact at the company. Sounds great, but what if you live across the globe, with 12 time zones separating you and your contact? Because the SoC is custom, you must seek out your vendor, who is the expert, to get information. The situation is similar for the semi-custom SoC, in that information comes from the vendor and is not widely available. A relationship is required to get information.

18 MARCH 2014 RTC MAGAZINE

Contrast that with the MCU and the FPGA. Look on the vendor website for information and you will find a plethora of free-flowing information about your product. Videos, code examples, data sheets, errata documents, peripheral user manuals, package information and reference designs are all available 24 hours a day, 7 days a week. A relationship with a person is not required to gain access to information. But if you want to establish relationships with people in the know, there are community user forums, distribution partners, and even 24/7 online engineering support available. Advantage: MCU & FPGA.

TABLE 2

Advantages by Scoring Criteria.

Time-to-Market

The famous American entrepreneur and statesman Benjamin Franklin once said, “Time is money.” This quote can be interpreted two ways when marketing your product. Franklin’s meaning was not to waste time. In other words, the faster you are to market, the more money your product can make. Another meaning could be to take the time to get your product right and you will make more money. These two approaches are illustrative of designing with an MCU/FPGA vs. the SoC.

The MCU and FPGA are your chips of choice if you want to get to market fast (Figure 2). With myriad design resources and information, combined with overnight availability of product from sources such as Digi-Key, there is little in the way of getting the product to market. The trade-off, of course, is unit cost. This can be higher, as noted above in system cost. But, if you have calculated the total cost of ownership, considered the risk of re-spin and know that unit cost is your long-term issue, taking the time to design a custom SoC might be your best choice. Advantage: MCU & FPGA.
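The total-cost-of-ownership trade-off discussed above can be sketched numerically. Every dollar figure below is hypothetical, chosen only to show the shape of the comparison, not taken from the article:

```python
def total_cost_of_ownership(unit_cost, volume, nre=0.0, respins=0,
                            respin_cost=0.0, tools=0.0, support_chips=0.0):
    """Startup costs (NRE, re-spins, tools) plus per-unit costs over the
    lifetime volume. support_chips is the extra BOM cost per unit
    (boot memory, voltage regulators and the like)."""
    startup = nre + respins * respin_cost + tools
    return startup + volume * (unit_cost + support_chips)

volume = 100_000  # hypothetical lifetime volume

mcu  = total_cost_of_ownership(unit_cost=2.50, volume=volume)  # free tools, no NRE
fpga = total_cost_of_ownership(unit_cost=30.0, volume=volume,
                               tools=1_000, support_chips=3.0)
soc  = total_cost_of_ownership(unit_cost=1.00, volume=volume, nre=500_000,
                               respins=1, respin_cost=250_000, tools=50_000)

print(f"MCU ${mcu:,.0f}  FPGA ${fpga:,.0f}  SoC ${soc:,.0f}")
```

At this assumed volume the MCU wins; push the volume high enough and the SoC’s low unit cost eventually overcomes its startup costs, which is exactly the break-even calculation the text describes.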

We’ve learned that time can be as important as features and design resources when reviewing the SoC options. In the end, we as engineers have to study the trade-offs and make decisions. But good decisions will include considerations beyond unit cost, and will consider the total cost of ownership for the application. By looking at the advantages that the MCU, FPGA and SoC have relative to each other in their entirety, we will make a great choice. Table 2 shows a parting summary of the considerations we’ve made. Good luck with your design!

Microchip Technology. Chandler, AZ. (480) 792-7200. www.microchip.com.



FPGAs – Taking Vision to the Next Level

As today’s machine vision applications become ever more demanding, the unique capabilities of FPGAs, such as parallelism and low power consumption, can greatly enhance performance. But their advantages often depend on a good understanding of the use case. Often, in fact, they can be used in tandem with CPUs for the best overall advantage.

Today, manufacturing companies are striving to lower costs and increase quality and throughput, robots are becoming smarter and more flexible, and automation is a hot topic with a large amount of resources backing it. Vision is one of the key enabling technologies behind these trends, and it has been growing rapidly over the past couple of decades. But the performance of image processing applications has been largely tied to advances in CPU speed. Vision has been riding the CPU frequency wave to run more complex algorithms at higher camera frame rates and resolutions, but lately the nearly exponential growth in CPU performance has been tapering off compared to the explosive growth of the past decade.

Vision applications must rely on alternative solutions to increase speed rather than simply depending on a faster processor. One option is to divide the image processing algorithm and do more in parallel, as many of the algorithms used in vision applications are very well suited to it. Technologies like SSE, hyperthreading and multiple cores can be used to parallelize and do more without increasing the raw clock rate. However, there are issues when selecting this option. Unless the software package being used abstracts the complexity, there are difficulties in programming software to use multiple threads or cores. Data must be sent between threads, which can result in memory copies and synchronization jitter. Additionally, it is generally a manual process to take an existing single-threaded image processing algorithm and make it multicore compatible. Even then, cost often prohibits parallelizing very much because most system designers do not have the option to purchase a 16-core server class computer for each test cell they create.

One solution for this issue is made possible with an FPGA, as it is fundamentally a semiconductor device that contains a large quantity of logic gates, which are not interconnected and whose function is determined by a wiring list that is downloaded to the FPGA. The wiring list determines how the gates are interconnected, and this interconnection is performed dynamically by turning semiconductor

TECHNOLOGY IN CONTEXT

Optimizing Machine Vision Systems

by Carlton Heard, National Instruments

FIGURE 1

When operations are programmed sequentially, the loop rate is limited by the sum of all times for each operation.


ticular vision application. Often the use of an FPGA can add complexity to the design process. Hardware programming is a significant departure from traditional software programming, as there is a non-trivial learning curve. However, high-level synthesis tools such as LabVIEW FPGA are available to abstract much of this complexity, enabling the designer to take advantage of FPGA technology without a deep knowledge of VHDL programming.

There are also great differences in clock rates between FPGAs and CPUs. Clock rates of an FPGA are on the order of 100 MHz to 200 MHz, significantly lower than a CPU that can easily run at 3.0+ GHz. Therefore, if an application requires an image processing algorithm that must run iteratively and cannot take advantage of the parallelism of an FPGA, a CPU results in faster processing. This serves as another reminder to evaluate the system requirements and algorithms before selecting between an FPGA or CPU.

Is there a big need for floating point support? Floating point is difficult to achieve on an FPGA. This is somewhat mitigated by using fixed point or high

is most likely best suited for a CPU-based system.

If a loop has multiple operations running within it and those operations run sequentially, the time it takes for the loop iteration to complete is the sum of the time each operation takes to run (Figure 1). One way to increase the processing loop rate is to parallelize the operations through pipelining. By doing this, the processing loop rate is limited only by the slowest operation rather than the sum of them all (Figure 2). This approach increases speed along with latency, because the result is not valid until multiple loop iterations are complete. For pixel-by-pixel operations, including kernel operations, dilate, erode or edge-finding, algorithms can be stacked back-to-back incorporating only marginal latency.
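That loop-rate arithmetic can be sketched directly. The stage times below are hypothetical, chosen only to illustrate the sequential-vs-pipelined trade-off, not measured values:

```python
# Hypothetical per-iteration stage times in seconds.
stage_times = {"acquire": 0.004, "threshold": 0.002, "morphology": 0.003}

def sequential_loop_rate(times):
    """Sequential execution: the loop rate is limited by the SUM of stage times."""
    return 1.0 / sum(times.values())

def pipelined_loop_rate(times):
    """Pipelined execution: the loop rate is limited by the SLOWEST stage."""
    return 1.0 / max(times.values())

def pipeline_latency(times):
    """A result is valid only after a sample has passed through every stage,
    so latency grows to one slowest-stage period per pipeline stage."""
    return len(times) * max(times.values())

print(round(sequential_loop_rate(stage_times)))  # ~111 iterations/s
print(round(pipelined_loop_rate(stage_times)))   # ~250 iterations/s
print(pipeline_latency(stage_times))             # ~0.012 s of latency
```

The pipelined rate more than doubles here even though no stage got faster, while the latency triples, which is exactly the speed-versus-latency trade described in the text.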

Security can also be an issue. Since the image processing occurs in hardware with FPGAs, the image and code stay within the FPGA. This is beneficial if applications require the image or IP to remain secure and hidden from the user.

And don’t forget the factors of power and heat. An FPGA may consume 1-10 watts of power, while a CPU of the same performance can easily consume 50-200 watts. With that much power, there is also a lot of heat that must be dissipated. For fanless embedded applications this may result in a more complex and larger mechanical design. The lower power consumption of an FPGA is particularly useful for extreme conditions such as space, airborne and underwater applications.

Considerations for Using a CPU

As with most applications, there are tradeoffs to consider along with potential benefits. While FPGAs offer many advantageous features, there are still instances where a CPU may be more beneficial. Consider the following tradeoffs when determining whether an FPGA, a CPU, or a combination is most appropriate for a par-

switches on or off to enable different connections. The benefit of using an FPGA is that it is essentially software-defined hardware. Therefore, system designers can program the chip in software, and once that software is downloaded to the FPGA, the code becomes actual hardware that can be reprogrammed as needed. Using an FPGA for image processing is especially beneficial as it is inherently parallel. Algorithms can be split up to run thousands of different ways and can remain completely independent. While FPGAs are inherently well suited for many vision applications, there are still certain aspects of the system that may not be as suited to run on the FPGA. There are a number of features to consider when evaluating whether to use an FPGA for image processing.

Considerations for Using an FPGA

FPGAs have incredibly low latency (on the order of microseconds) when they are already in the image path. This is critical because latency accounts for the time it takes until a decision is made based on the image data. When using FPGAs with high-speed camera buses such as Camera Link that do not buffer image data, the FPGA can begin processing the image as soon as the first pixel is sent from the camera, rather than waiting until the entire image readout has completed. This reduces the time between exposure and image processing by nearly an entire frame period, making it possible to achieve extremely tight control loops for applications like laser tracking and in-flight defect rejection systems.
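The near-frame-period saving described above is simple arithmetic. The camera numbers below (1024 lines per frame at 100 frames/s) are hypothetical, chosen only to show the size of the effect:

```python
# Hypothetical readout: 1024 lines per frame, 100 frames per second.
frame_period = 1.0 / 100             # 10 ms to read out a whole frame
lines_per_frame = 1024
line_time = frame_period / lines_per_frame

# CPU path: processing starts only after the full frame has been read out.
cpu_start_delay = frame_period
# FPGA in the image path: processing starts as soon as the first line arrives.
fpga_start_delay = line_time

saved = cpu_start_delay - fpga_start_delay
print(f"processing starts {saved * 1e3:.2f} ms earlier")  # nearly a full frame period
```

With these assumed numbers the FPGA begins working roughly 10 ms sooner on every frame, which is where the tight control loops for laser tracking and defect rejection come from.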

FPGAs can help avoid jitter. Because they do not have the overhead of other threads, an operating system or interrupts, FPGAs are extremely deterministic. For many image processing algorithms, it is possible to determine the exact execution time down to nanoseconds.

For massively parallel computation or heavily pipelined math, the raw computation power of an FPGA can be an advantage over a CPU-based system. An important consideration, however, is to understand what image processing algorithms are needed for the application. If the algorithm is iterative and cannot take advantage of the parallel nature of an FPGA, it


FIGURE 2

Pipelining speeds up loop rates as each operation can run in parallel. In this case, the loop rate is only limited by the operation that takes the longest.


servo motors. Often all the inspection and decision-making can be accomplished on the FPGA with little or no CPU intervention, but a CPU can still be used for supervisory control or operator interaction. Applications best suited for high-speed control include high-speed alignment, where one object needs to stay within a given position relative to another, as in laser alignment and high-speed sorting (Figure 3).

From food products and rocks to manufactured goods and recycled garbage, there is a huge bottleneck in efficiently and quickly sorting items based on color, shape, size, texture, etc. The ability to acquire an image, process it and output a result within the FPGA can speed up this process, resulting in more accurate sorting so that fewer good parts are rejected and fewer bad parts are accepted. A more specific example where FPGAs can be especially beneficial is air sorting, which involves imaging, inspecting and sorting a product while it is falling. Low jitter is critical for this type of application because the time between the decision-making and I/O must be known.

Image preprocessing and co-processing are nearly the same, with the difference being which device initially acquires the image. In both situations the FPGA works in conjunction with a CPU to process images. When preprocessing images, the image data travels through the FPGA, which modifies or enhances the data before sending it to the host for further processing and analysis. Co-processing implies that the image data is sent to the FPGA from the CPU instead of a camera. This scenario is most common for post-processing large batches of images once they are acquired. One of the most exciting examples is using FPGAs to boost the speed and efficiency of Optical Coherence Tomography (OCT). This is a technique for obtaining sub-surface images of translucent or opaque materials at a resolution equivalent to a low-power microscope. It is effectively an “optical ultrasound” that images reflections from within tissue to provide cross-sectional images. OCT is attracting interest among the medical community, as it provides tissue morphology imagery at a much higher resolution (better than 10 µm) than other imaging

the camera and performs some type of inline processing, such as highlighting edges and features of interest or masking features. Then the FPGA outputs the image directly to a monitor or sends it to the host CPU for display. In most instances, the FPGA directly outputs the image, as low latency and jitter are important in the system. As an example, with medical devices an image is taken and cells are processed and displayed on the monitor for a doctor to review. The FPGA can be used to measure the size and color of each cell and highlight specific cells for the doctor to focus on.

In high-speed control applications, instead of an image for display, the output is some other type of I/O, such as a digital signal controlling an actuator. In these applications, the time between when an image is acquired and an action is taken must be fast and consistent, so an FPGA is preferred due to the low latency and low jitter it offers. This very tight integration of vision and I/O enables advanced applications like visual servoing, in which visual data is used as direct feedback for positioning and control with

level synthesis tools, but it is a factor that must be kept in mind when using FPGAs that may not even need to be considered when working with a CPU.

In many applications, the combination of an FPGA and a CPU to handle various aspects of the design can be very useful. DMA can help pass data back and forth between the devices, and each device can take care of the processing that is most appropriate for that chip. This is not to say that an FPGA or a CPU is incapable of performing all tasks, but some tasks are better suited for one chip versus the other, and using both can simplify the design while making it possible to gain high performance. Many applications can benefit from this architecture.

Matching the Needs of Application Categories

There are four main categories: visualization, high-speed control, image preprocessing and co-processing. Visualization takes an image from a camera and changes it for the purpose of enhancing it for display to human eyes. In this case, the FPGA reads the image from


FIGURE 3

FPGAs can be used for advanced control applications such as high-speed laser tracking. Low latency and jitter are requirements for adaptive optics that are possible with FPGA image processing.


modalities such as ultrasounds or MRIs (Figure 4).

A typical OCT system uses a line-scan camera and a special light source


FIGURE 4

Kitasato University used FPGAs to create the world’s first real-time 3D OCT medical imaging system.


that sweeps across a tissue and images the surface beneath, one line at a time. Once each line is acquired, the data is scaled and converted to the frequency domain, where the data is further manipulated and combined with other lines to reveal a high resolution, 3D picture of the tissue. With industrial inspection, there are many applications today that use brute force

methods to check for defects over large and continuous areas, as seen in web inspection. FPGAs can be used to preprocess the large amounts of data associated with web inspection by performing flat field correction, thresholding and particle analysis.
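As a plain-software sketch of the preprocessing steps just named (flat field correction followed by thresholding), here is a CPU-side illustration of what the FPGA would do per pixel in hardware; the frames and the defect location are synthetic:

```python
def flat_field_correct(raw, dark, flat):
    """Flat field correction: subtract the dark (offset) frame, then divide
    out per-pixel gain variation, rescaled by the mean gain to keep levels."""
    gains = [[f - d for f, d in zip(fr, dr)] for fr, dr in zip(flat, dark)]
    mean_gain = sum(sum(r) for r in gains) / (len(gains) * len(gains[0]))
    return [[(p - d) * mean_gain / max(g, 1e-6)
             for p, d, g in zip(pr, dr, gr)]
            for pr, dr, gr in zip(raw, dark, gains)]

def threshold(img, lo, hi):
    """Binary threshold: 1 where lo <= pixel <= hi, else 0."""
    return [[1 if lo <= p <= hi else 0 for p in row] for row in img]

# Tiny synthetic 4x4 frame with one bright defect at row 1, column 2.
dark = [[0.0] * 4 for _ in range(4)]
flat = [[100.0] * 4 for _ in range(4)]
raw  = [[50.0] * 4 for _ in range(4)]
raw[1][2] = 200.0

corrected = flat_field_correct(raw, dark, flat)
defects = threshold(corrected, 150, 255)
print(sum(map(sum, defects)))  # 1 defect pixel survives the threshold
```

Particle analysis would then group the surviving 1-pixels into connected blobs and measure them; on the FPGA, each of these stages maps naturally onto one step of the pixel pipeline.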

The advantages of an FPGA for image processing are dependent upon each use case, including the specific algorithms used, latency or jitter requirements, I/O synchronization, power and programming complexity. In many cases, using an architecture featuring both an FPGA and a CPU presents the best of both worlds and offers a competitive advantage in terms of performance, cost and reliability. With a multitude of inherent benefits, FPGAs are poised to take many vision applications, including medical imaging and vision motion integration, to the next level.

National Instruments Austin, TX (512) 794-0100 www.ni.com


In traditional GUI design, a user experience (UX) or user interface (UI) team creates a prototype in desktop software such as Adobe Photoshop, Illustrator, HTML or Flash, submits it for approval, and then transfers it, for most of the remainder of the development process, to the engineering team. This design process presents the first major obstacle to time-to-market and is also what often results in a less-than-desirable UI. Once that critical UI design hand-off occurs, embedded system developers proceed to re-implement the prototype for the embedded system. The result is that the original prototype, in essence, becomes a throwaway, since the performance observed in the desktop application bears no resemblance to the performance of the target platform. As embedded system developers go about the process of re-implementing the prototype, and attempting to replicate the UI, they inevitably make changes and sacrifice features in order to fulfill their mandate, which is to make it run on the target.

It is important to note another factor that delays time-to-market: UI designers and embedded system developers typically do not work in tandem at any point in the process. In fact, the opposite is true. Once the design is handed off, UI designers often do not see it again until the alpha or beta phase of product testing. This siloed approach, in which there is a complete loss of design control, creates lag time late in development as the designer attempts to retrofit features into a nearly completed product. As a result, another obstacle to market release is a back-and-forth process between the UI designers and embedded system developers to develop a product that both reflects the original design and is fully functional (Figure 1).

The disconnect between the two teams

runs even deeper than that. UI designers, as mentioned before, typically use desktop applications that were never intended to run on the target platform. In other cases, the prototype itself is built from fake content and imagery and does not even contain real data. This adds to the embed-

by Brian Edmond, Crank Software

Delays in getting an embedded UI to market are costly in terms of development resources as well as competitive advantage. The process is often lengthy and tedious, setting back launch dates and driving up development expenses. This can be improved using some best-practice approaches to speeding time-to-market.

TECHNOLOGY CONNECTED

Speeding Time-to-Market for GUI Designs

High End Graphics on Small Devices


FIGURE 1

A very rich and complex user interface can be designed using Windows-based tools like Photoshop or Adobe Illustrator and others. Translating that design to run under the RTOS on an embedded design can be filled with complications and compromises.


Often the hand-off process for the design team involves exporting the design information and images into a format usable by the development team, a time-consuming task that again delays deployment. The reason: UI designers’ applications do not speak to embedded system developers’ software development tools. A better solution is to allow UI designers to use a set of tools they are comfortable with and that can be easily integrated by embedded system developers, and then transmitted back to UI designers when changes are required. This eliminates the need to rewrite code for every UI change, which introduces bugs into the functionality, requires more testing time and delays the UI release. The more expeditious approach is to allow the designer to make changes to data files, such as XML or HTML, that can be used as a UI description language.
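To make the data-file idea concrete, here is a toy sketch of an XML-described UI with a matching loader on the embedded side. The schema and loader are invented for illustration; they are not Crank Storyboard’s or any real tool’s format:

```python
import xml.etree.ElementTree as ET

# Hypothetical UI description file edited by the designer, not the engineer.
UI_XML = """
<screen name="main">
  <button id="start" x="10" y="20" label="Start"/>
  <label  id="status" x="10" y="60" text="Ready"/>
</screen>
"""

def load_ui(xml_text):
    """Parse a UI description into plain dicts a renderer could draw from.
    Design changes mean editing the XML, not rewriting engine code."""
    root = ET.fromstring(xml_text)
    return [dict(tag=el.tag, **el.attrib) for el in root]

widgets = load_ui(UI_XML)
print(widgets[0]["label"])  # Start
```

A designer can move the button or change its label by editing the XML alone, so no engine code has to be rewritten, recompiled and retested for each UI tweak.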

A common “fix” for development process issues is to deploy third-party software to bridge the gap between the UI designers’ toolsets and the embedded system developers’ toolsets. Yet all too often, the third-party application does not integrate well with either. Needless to say, incompatibility issues lengthen the development process, and third-party software that is not compatible with the software currently in place will exacerbate the issue. It is essential that the development support software can integrate with what both teams

ded system developers’ timeline, as massive re-coding is required to make the translation from these desktop applications to an entirely different hardware and/or software platform.

As every UI development team knows, the result is that development has been so delayed that in fact there is no time left in the schedule to adequately address UX issues. These delays also mean that testing occurs late in the development cycle, since no portion of the UX can be tested independently while the engineering team is still writing back-end code. Ironically, it is the UX that is the true differentiator for any embedded UI, and the intended UX, one that ties customers to a specific brand with rich features and intuitive functions, often never gets released. In the end, the prototype and the end product have diverged, due to design misinterpretation and performance implications, to an extent that the end product does not reflect the original design.

Best Practices for Speeding Time-to-Market

To get GUI designs to market more quickly, a better approach is to allow UI designers and embedded system developers to work independently, but concurrently, on UI development, doing what each does best. In a workflow where there is no product hand-off, each team remains involved and able to provide continuous feedback. If UI designers are allowed to own the design throughout the development process, it not only compresses the development schedule, it also requires fewer embedded system developers on the development team. The reason is that they are no longer forced to write code in order to implement design features at the same time they are working toward functionality on the target platform. When embedded system developers are forced to change the design, two things result. First, they make mistakes, because that is not their area of expertise. This then results in a multitude of trial-and-error efforts to rectify the mistakes. Second, it also takes exponentially longer for them to make those design changes.

When creating a UI, development teams can expedite the release date by creating a thorough design up front, fully defining the UI features, the hardware platform and the system integration points. In other words, each team should have an equal amount of information about what the product will look like and what it will do, from the beginning. This is important because UI designers need to know what data the UI will be able to retrieve from the system, and embedded system developers need to know what demands the system must be able to accommodate. Armed with information on the various required entry points, embedded system developers can test these independently, and much earlier in the process than is typical today.

Another time-to-market boon is the prototype-as-product approach, which means implementing designs on a true prototype immediately. The typical process is to prototype the design and then re-implement it for the embedded platform. However, if the design is implemented on the prototype from the outset, with the intended design fully functional on the intended platform, then any design flaws, feature or hardware compatibility issues, etc. can be rectified early, rather than in the testing phase. If the prototype is the product, then the embedded system developers can begin writing the back-end code immediately as well. Working from a functional prototype that runs as well on the desktop as it does on the target, such as a tablet, can help condense development time from months to weeks.

FIGURE 2

The Crank Storyboard Suite was designed for engineers by engineers. It allows UI designers with no programming experience to drag-and-drop their UI designs in parallel with, yet independently from, the engineers who are working on the coding. Storyboard simplifies the design process, saves valuable time and leverages the core skills of each valued member of the team.


tation to overlay frameworks. To address all of these development roadblocks, many companies resort to implementing a framework over the development process. The result is that not only is the company employing a team to build, test and maintain a UI, it is also employing a separate team to build, test and maintain a framework. This added layer serves to complicate and delay the development process. Similar circumstances occur when someone in the company builds custom tools to solve these internal issues. The builder then becomes an internal product provider.

Real-World Examples

Companies using Crank Software (Figure 2) have been able to speed time-to-market by implementing these best practices. For instance, QNX Software Systems used Crank Software to implement a 17-inch, curved, 1080p center console display embedded in a Bentley concept car. The unique digital light projection HMI, which debuted at the Consumer Electronics Show in 2013, featured content that was originally created in Adobe Photoshop and was fully implemented on the target in only eight weeks.

Another company, Auto Meter, used Crank Software to successfully develop its new LCD Competition Dash—a user-customizable display with precise data acquisition—in less than six months for their customer, NASCAR. The display was launched in time to debut at the 2012 SEMA Show for automotive specialty products.

Much exists in current UI development scenarios that extends the development timeline, drives up costs and sacrifices UI quality in order to meet a targeted release date. Creating an environment in which UI designers and embedded system developers can work collaboratively, but independently, enables each to stay focused on what they do best, to maintain ownership of the UI from concept to implementation, and to speed time-to-market, a critical requirement in a landscape where companies can succeed or fail based on their next UI.

Crank Software Ottawa, Ont. (613) 595-1999 www.cranksoftware.com

are currently using: Adobe Illustrator or Adobe Photoshop for UI designers, and tools like Eclipse or native desktop tools for Linux for embedded system developers.

Development software that separates the UI from the back end can speed development and deployment. Using a model-view-controller pattern can shorten the design process and help teams work together with clear objectives. If they can be run independently, then each team can continue to work on the product without making disastrous changes while still maintaining clear integration points. This allows each team to focus on their core competencies: design or embedded system development.
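A minimal model-view-controller sketch of that separation (the class and field names are hypothetical, chosen only to show where the single integration point sits):

```python
class Model:
    """Back-end state owned by the embedded developers."""
    def __init__(self):
        self.temperature = 20.0

class View:
    """Presentation owned by the UI designers; knows nothing about sensors."""
    def render(self, value):
        return f"Temperature: {value:.1f} C"

class Controller:
    """The one integration point: pushes model changes into the view."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def update(self, new_temp):
        self.model.temperature = new_temp
        return self.view.render(self.model.temperature)

ui = Controller(Model(), View())
print(ui.update(23.456))  # Temperature: 23.5 C
```

Because the view touches only the value handed to it, designers can restyle `render` while developers rework the model, and neither change breaks the other team’s code.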

Development software that does not flexibly support multiple platforms like Macintosh and Linux can also add time constraints. Every member of the development team should be able to work in the environment in which they are the most efficient. The developer should also be able to simulate and test features on their respective platforms to limit the need for external hardware platforms.

Development teams also must have the ability to run a functional prototype on multiple platforms to compare performance early in the product cycle. Teams that are able to evaluate hardware platforms, and various configurations on those platforms, can test performance early in the process and make educated decisions about whether or not the platform will perform as expected with the design. Many time delays in development are centered around resolving those issues, or worse, settling for a less robust UI due to hardware constraints.

For teams looking to speed time-to-market, the key is unquestionably to have flexibility. When choosing development support software, beware of anything that limits either the operating system or the target hardware platform. To maintain efficiency and competitiveness, companies should have the freedom to move from one platform to another based on development budget, customer expectations and similar factors. The converse scenario forces companies to purchase different UI tools for different product levels.

It is also important to resist the temp-



28 MARCH 2014 RTC MAGAZINE

TECHNOLOGY IN SYSTEMS
Getting beyond the BIOS for Embedded

Open Source Firmware – Coreboot for x86 Architecture Boards

The traditional commercial BIOS, while very useful for PCs, does not ideally serve the needs of embedded applications in terms of either functionality or pricing/licensing. The open source community has developed an alternative aimed at the needs of embedded developers.

by Clarence Peckham, Senior Editor

FIGURE 1

Coreboot architecture including payloads (ROM stage, vendor reference code, RAM stage, and payloads such as SeaBIOS and iPXE leading to the user application). The user payload can be proprietary code that does not have to be released under the open source license. The Vendor Reference Code is provided by the processor vendor as either source or binary files.

Without a doubt, one of the key changes in software development has been the growth of open source software projects. The news is full of Linux- and Android-based systems, with Android being the largest installed base of open source software (and the best example of a Linux-based system used on a smartphone). Since Linux first brought open source software to public awareness, other efforts have laid the foundation for the open source movement.

One of the challenges of developing software has been the availability of tools such as compilers, assemblers, linkers, debuggers and integrated development environments. On top of this, the proliferation of processors has made tools availability even more critical.

The number of available processor solutions based on MIPS, x86, PowerPC and ARM architectures has increased almost daily. On top of that are the 8- and 16-bit processor solutions such as those from Microchip, Freescale and others. How do the tools keep up? The solution has been the development of a base set of development tools based on the GNU toolset shown in Table 1. Each of the manufacturers, or open source developers, provides a set of tools based on the GNU toolset. It is possible to use a set of open source tools for almost all of the popular processors available for embedded designs.

Availability of inexpensive, or even free, tools has opened up the use of many processors that might not have been used if an expensive toolset were the only solution. After all, it is an uphill road to convince your boss that trying the latest HAL2014 processor is a good idea if it is going to cost $10K to get a set of tools. To be fair, the HAL2014 vendor will, in most cases, provide an evaluation set of tools with limited


functionality for a limited time—but free and unlimited is better.

One issue with open source tools is the lack of defined support. In many cases the large number of developers means that bug reports get immediate attention. If that is not enough, a support ecosystem has grown up of companies that will provide support for a toolset for the cost of a support contract. This makes many embedded system developers more comfortable using an open source tool solution.

With the tools and open source operating system solutions, the user can develop an embedded solution. However, there is still a hole in the open software offerings for embedded applications—the boot firmware, or in the case of the x86 architecture, the Basic Input/Output System (BIOS) used to start the application. For processor solutions other than x86 there is an open source solution called U-Boot, which started as a solution for the PowerPC processor and has migrated to the MIPS and ARM architectures and to System on Chip (SoC) solutions based on MIPS and ARM. For the x86-based architecture, the standard has been to use a traditional BIOS developed for the PC architecture, such as the offerings from Phoenix or AMI. This is a workable solution but not one that is ideal for the embedded market, since it involves an upfront cost as well as royalties for each unit shipped.

Embedded Systems Firmware Requirements

The firmware, or BIOS, used in x86 architecture systems was developed to provide a means to test and initialize hardware and boot the operating system from a disc drive. For embedded systems, the firmware has requirements that go beyond the normal BIOS features—in most cases requirements that are much simpler than what is offered in the typical BIOS.

First for an embedded system is the ability to boot from cold to the application as fast as possible. In some cases an embedded system must be up and running in less than a second for critical applications. This requires the ability to utilize the smallest amount of code to execute the minimal operations required.

Another requirement is the flexibility to handle anything from a small system to a large multiprocessor computing system. Flexibility also requires the ability to easily customize the firmware and have open source for most of the software. And if binary-only modules are used, you need the ability to locate the binary modules as required. The advantage of allowing the use of binary modules in open source software is that it enables chip manufacturers such as AMD and Intel to provide proprietary software for their advanced chips without having to release the source code. As we will see in the following sections, Coreboot provides a fast, flexible and cost-effective firmware solution for embedded systems.

Atomic Research Spawns the Coreboot Initiative

The birth of LinuxBIOS began in 1999 with a handful of researchers at Los Alamos Labs led by Ron Minnich. The objective was to improve computing performance through faster BIOS startup and better error handling in large computer clusters. “From that start, LinuxBIOS was renamed Coreboot in 2008 and migrated into commercial high-performance computing (HPC) and began capturing the attention of industry leaders such as AMD and Intel,” stated Kerry Brown, VP/COO of Sage Electronic Engineering.

A number of manufacturers such as Gigabyte, Micro-Star International (MSI) and Acer are also supporting Coreboot development on their motherboard and laptop designs. Recognizing Coreboot’s advantages, Google is now on board as a project sponsor. “Several Google Summer of Code (GSOC) projects have been based on Coreboot development and enhancements,” added Kerry.

As an open source project, the Coreboot community continues to grow, attracting developers from all parts of the globe. Members work for technology companies, conduct research at universities, and take part in government-funded programs. “With support from both AMD, as a source code provider, and Intel, providing the Firmware Support Package (FSP), access to low-level chip functions has also helped Coreboot become successful,” commented Kerry.

Coreboot Features and Embedded Use

A simple definition of Coreboot is that it is a replacement for the traditional BIOS, but it is a boot loader and not a BIOS. The purpose of Coreboot is to initialize the hardware and then load a payload. Figure 1 shows the basic architecture of the Coreboot firmware. The payload is the module that decides what the hardware is going to be used for. A payload can be the end application or, as in most cases, a path to booting the final application. The major features of Coreboot are:

Smaller Binary Images: By generating smaller binary images, you’ll be able to use less flash memory, or incorporate additional features with the available memory.

Boot Flexibility: With Coreboot you can boot from NAND flash and other nonstandard media. The majority of existing proprietary BIOS software does not support this capability.

Customization: Coreboot enables you to customize your firmware even at the most basic level. Structurally, Coreboot is based around the requirements of the x86 PCI device tree and is designed to do minimal hardware initialization before passing control to a payload. Coreboot initialization contains no BIOS services and does not stay resident in memory.

Runs in 32-Bit Protected Mode: In Coreboot, the boot block contains the jump instruction to the initialization code—the first instruction fetch—and an immediate change to 32-bit protected flat mode. After the switch to protected mode, the boot block does the minimal northbridge and southbridge setup required to jump to the RAM stage.

Code Written in C: Moving away from assembly code toward a high-level language saves developers considerable time in coding, debugging and documenting.

Loading Application Software: Coreboot supports multiple booting choices, including loading software without an OS, as with certain standalone programs (memory testers, games, etc.).

Multiple Debug Features: Coreboot enables a number of functions including remote flash firmware updates, embedded software development via serial ports, systems administration of remote computers, and booting from a network.

Coreboot Payloads

The concept of payloads is the key feature of Coreboot. By using a payload, embedded developers can define the exact features they need and not have to include any features they do not require. This makes for an efficient and fast solution. Although the payload can be completely developed by the user, there are several existing payloads that can be used if desired.

An example is the SeaBIOS payload, which provides normal BIOS calls so that standard OSs, such as Windows and Linux, can be booted. Another example is the iPXE payload, which provides for loading over the network. Or both payloads can be used in a single Coreboot implementation.

FIGURE 2

SageEDK Eclipse Development Environment debug page. (courtesy Sage Electronic Engineering)

TABLE 1

Open source tools for software development — available for multiple processor architectures.

GNU Tool: Function
GNU Make: Automation tool for compilation and build
GCC: C compiler
G++: C++ compiler
GNU Binutils: Suite of tools including the linker, assembler and other utilities
GNU Bison: Parser generator
GNU M4: Macro processor
GDB: Source code debugger
GNU Build: Autotools for builds — autoconf, autoheader, automake, libtool
GNU Libraries: Standard I/O libraries, math libraries, etc.


The modularity of the Coreboot project provides segregation of your IP from the publicly available GPL code base. Since IP is delivered in a payload called from the Coreboot initialization firmware, it can be developed on a proprietary basis or leveraged from a code base that doesn’t have a publication requirement. In other words, you can fully use Coreboot to your advantage and participate in the global Coreboot community without sharing your intellectual property.

Coreboot Software Development

Developing software for Coreboot requires the use of development tools such as GCC and GDB for debugging. All of the source code can be accessed via www.coreboot.org, and in addition to the source code, a full set of documentation is available to help speed the learning curve. As with any open source project, a company that wants to utilize the code has to be willing to contribute to the support of the code, as well as keep track of the changes so that it can decide when to roll its internal code revisions. This can be a large task that consumes a lot of development time.

An alternative is to work with a vendor that supports commercial use of Coreboot. Sage Electronic Engineering is one such company, and it can provide any level of Coreboot support. Its key products are SageBIOS, Sage EDK and SmartProbe.

SageBIOS is an integrated version of Coreboot that can be configured to support any payload required. Sage EDK, shown in Figure 2, is an integrated development environment based on the Eclipse platform and open tools. When used with SageBIOS, Sage EDK provides a complete development package for Coreboot applications. For debugging, SmartProbe can be used with AMD-based platforms to control firmware updates and debugging.

Both Intel and AMD provide code to support Coreboot applications on their respective chipsets. AMD provides source code via its AMD Generic Encapsulated Software Architecture (AGESA), and Intel provides binaries of its Firmware Support Package (FSP). “Both Intel and AMD are key supporters of the Coreboot project with the AGESA and FSP software packages. In the case of Intel’s FSP, Sage is a source code licensee, so any changes required can be made,” commented Kerry.

Coreboot has gained a lot of support in the past few years, and with the trend toward open source solutions in embedded applications, replacing the traditional BIOS with a royalty-free open source solution makes good sense. Also, just as Linux gained broader acceptance with releases from Red Hat and Ubuntu, support for Coreboot from companies like Sage Electronic Engineering should help win over embedded developers.

Another plus is Google’s use of Coreboot for its Chromebook laptops. Google’s major objective is to give more support to Coreboot development not only by developing the code base, but also through testing and quality control, including developing a Coreboot version for the ARM processor. Coreboot meets all of the objectives of an embedded application, and with support provided by many companies and individual developers, it is a realistic alternative to a traditional commercial BIOS.

AMD Sunnyvale, CA (408) 749-4000 www.amd.com

Coreboot www.coreboot.org

Intel Santa Clara, CA (408) 765-8080 www.intel.com

Sage Electronic Engineering Longmont, CO (303) 495-5499 www.se-eng.com



TECHNOLOGY DEVELOPMENT

POSIX – 25 Years of Open Standard APIs

The POSIX Heritage - History and Future

The POSIX API has a venerable history of allowing portability and compatibility among a wide variety of systems and applications. Its legacy is destined to continue well into the future.

by Arun Subbarao, LynuxWorks

The ability of an operating system to conform to established open standards application programming interfaces (APIs) is a key enabler for a critical mass of middleware and applications executing in its environment. It allows application portability among execution environments, thereby giving developers maximum flexibility in creating application software that can be migrated to newer environments with minimal effort. As the complexity of hardware and software continues to increase, the ability to preserve the software investment provides significant competitive leverage for software vendors and OEMs alike.

One of the best-known and most widely adopted API standards in the embedded and server infrastructure, which has withstood the test of time, is the IEEE POSIX standard.

POSIX: Early Origins

The POSIX API standard had its origins in the early UNIX environments, when the fragmenting of UNIX variants in the late 1980s resulted in the need to define a common API standard to ensure that application portability between different operating systems could be maintained. This resulted in the early specification of the POSIX standard.

POSIX, an acronym for Portable Operating System Interface, is a family of related standards governed by the IEEE and maintained and evangelized by The Open Group. POSIX defines the application programming interface (API) for software compatibility with the different flavors of operating systems. First released 25 years ago, the POSIX standard defines the specifications for the characteristics of operating systems, database management systems, data interchange, programming interfaces, networking and user interfaces. POSIX enables developers to write their applications for one target environment so they can subsequently be ported to run on a variety of operating systems that support the POSIX APIs—a property commonly known in the industry as “source code compatibility.”

POSIX Evolution

Since its modest beginnings in standardizing UNIX APIs, the IEEE POSIX standard has emerged as the most prevalent and widely regarded broad-based API standard for operating systems. It has extended its reach into various segments of the market such as server infrastructure, military, avionics, general purpose computing, scientific computing and more.

The POSIX standards have continued to evolve into the 21st century with significant revisions. One significant evolution happened in 2004, when the POSIX standards underwent a major expansion and unification to become IEEE 1003.1-2004. The IEEE 1003.1-2004 standard provided an extensive set of APIs encompassing applications in scientific, real-time and enterprise computing.

At the same time, the IEEE POSIX committee also recognized the specialized needs of embedded operating systems and defined the IEEE 1003.13-2003 (POSIX.13) standard for real-time profiles and applications, which specifically targets embedded designs prevalent in the industry. This standard defines four real-time POSIX profiles:

PSE 51: Minimal
PSE 52: Controller
PSE 53: Dedicated
PSE 54: Multi-purpose

These four profiles, shown in Figure 1, specify increasing levels of complexity and functionality to satisfy the full spectrum of real-time applications that can be designed using POSIX. The standard also defines a strict API compatibility requirement: each higher POSIX profile must be a superset of the lower profiles. This guarantees that POSIX applications written to the minimal profile (PSE51) will run on a multi-purpose profile (PSE54) on compatible operating systems. These profiles, PSE51 through PSE54, allow the flexibility needed for scaling from deeply embedded applications to high-end workstation applications. The POSIX IEEE 1003.1 standard has continued to evolve with newer revisions in 2008 and 2013 (Figure 2).


POSIX Conformance and Compliance

The Open Group is an independent third-party organization that has defined and certifies various implementations of POSIX conformance. The availability of such an independent testing body is an important part of the validation required to certify conforming implementations of operating systems. This allows for a vendor-neutral assessment of the POSIX compatibility of an operating system and lets end users make an informed decision that best suits their application.

The evaluation and selection of an operating system that supports the POSIX standards is a key decision that determines the level of reuse and portability that can be designed into a system. POSIX “conformance” and “compliance” are two terms that vendors have used somewhat interchangeably to describe their POSIX compatibility. However, the difference between the two is significant.

POSIX “conformance” indicates adherence to the standard without any deviation. A conforming implementation of the standard offers the highest level of API compatibility with the specification. POSIX “compliance,” however, indicates a much weaker adherence to the standard. An implementation claiming POSIX “compliance” merely needs to disclose the APIs that it supports and the ones that it does not.

A higher level of assurance exists when an OS’s conformance is approved by an accredited, independent certification organization. To be conformant with any POSIX standard, the implementation must undergo independent certification by a third party (such as The Open Group) and obtain a POSIX conformance certification. The presence of this certification guarantees to the user complete adherence to the POSIX standard by the operating system.

Strong Industry Support for POSIX

While the benefits of POSIX outlined above show its relevance and importance in embedded environments, it is not an embedded-centric standard. POSIX plays a role in many leading technologies, and a look at how broad POSIX support is among operating systems, both embedded and enterprise, shows that it is a standard that has seen adoption and usage across many industries. Many

FIGURE 1

The IEEE 1003.13-2003 (POSIX.13) profiles for real-time applications. Each profile is a superset of the one below it: the Minimal profile (PSE51) provides the POSIX.1 (IEEE 1003.1-2001) core; the Controller profile (PSE52) adds message queues and a simple file system; the Dedicated profile (PSE53) adds networking, asynchronous I/O, multi-process support and tracing; and the Multi-purpose profile (PSE54) adds wide characters, a full file system, multiple users, and shell and utilities, among others.

UNIX, Linux and UNIX-like operating systems really do a good job of not just conforming or complying with the POSIX standards, but also having POSIX as their native API. Examples include IBM AIX, HP-UX, BSD UNIX, Linux, Oracle Solaris, and the LynxOS and QNX RTOSs. Other operating systems use a POSIX API layer to translate from POSIX to the native proprietary interface of the operating system. Although this adds a slight amount of inefficiency compared to a native API, this is how many RTOSs achieve POSIX compatibility. Examples include VxWorks, Nucleus OS, eCos and Symbian OS. Even Windows has a POSIX compatibility layer, Cygwin, which is used to run applications on Windows that were originally built for Linux or UNIX.

This broad support really helps the embedded developer, especially as the lines blur between embedded and enterprise applications, since many software applications originally built for general purpose operating systems can easily be migrated to a POSIX-based RTOS. This reduces the amount of software creation, reduces the porting time, and ultimately reduces the time-to-market and cost for new embedded products.

POSIX and Emerging Technologies

The dynamics of the software industry continue to evolve with the emergence of several disruptive technology trends that will define the evolution of the software industry at large and of embedded systems in particular. However, the relevance of the POSIX standards has not been diminished by these paradigm shifts, two of which are mentioned here.

Future Airborne Capability Environment (FACE): The FACE Consortium is hosted and managed by The Open Group and provides a vendor-neutral forum for industry and the U.S. government to work together to develop and consolidate open standards, best practices, guidance documents and business models. The FACE Technical Standard defines the framework for creating a common operating environment to support applications across multiple Department of Defense avionics systems. The standard is designed to enhance the U.S. military aviation community’s ability to address issues of limited software reuse and accelerate and enhance warfighter capabilities, as well as enable the community to take advantage of new technologies more rapidly and affordably. The current FACE APIs are heavily based on the existing POSIX standard and define several profiles such as the Security Profile, Safety Profile (Basic & Extended) and the General Purpose Profile. It is a testament to the longevity and relevance of the POSIX APIs that this consortium, which was initiated in 2010, relies so heavily on the POSIX standards.

Internet of Things: Another emerging technology trend is the Internet of Things (IoT), in which billions of devices are expected to connect via the network to communicate with cloud infrastructures, as well as with each other. This marks a key inflection point in the embedded industry and its convergence with mainstream enterprise computing. As these embedded devices connect to the network, the POSIX IEEE 1003.13 standard becomes particularly relevant, and the PSE53 profile may become the de facto standard for connected devices since it combines a small footprint with network connectivity, two of the essential elements for devices that need to connect to the Cloud.

POSIX for the Next 25 Years

It is evident that one need look no further than the POSIX standard for one of the definitive API specifications that has stood the test of time and has the necessary mechanisms to provide the unification needed among disparate execution environments and application requirements. This is essential for achieving broad adoption of legacy and emerging technologies, and for preserving a critical mass of applications that subsequently helps create a network effect for other applications.

As the industry continues to experience technology shifts driven by processor advances, virtualization, security, cloud computing and mobility, we can look to the POSIX standard to bridge the gap between legacy environments and emerging applications, and to provide a unified application programming environment that can add compelling value to the technology industry.

LynuxWorks San Jose, CA (408) 979-3900 www.lynuxworks.com

FIGURE 2

25 years of POSIX evolution. The timeline (1988–2013) runs from POSIX.1 core services (incorporating ANSI C) and POSIX.2 shell and utilities, through the real-time-specific POSIX.1b real-time extensions, POSIX.1c threads extensions and POSIX.13 real-time profiles, to POSIX.1-2001 (the Single UNIX Specification, followed by two technical corrigenda) and POSIX.1-2008 (Open Group Base Specification Version 7, with Technical Corrigendum 1 in 2013).

25 Years of LynxOS

LynuxWorks (formerly Lynx Real-Time Systems) has also celebrated the 25th anniversary of both the company and its POSIX operating system, LynxOS. Since its founding, LynuxWorks has been a strong supporter of open standards. The company was among the earliest supporters of POSIX, is a member of The Open Group, and is an active participant in the work to keep the standard current. The LynxOS operating system was first released 25 years ago and was designed to offer embedded real-time developers the same features that were available to UNIX programmers in the computer world, but with real-time performance and determinism. The POSIX standard, especially the POSIX.1b and POSIX.1c extensions, provided a very natural fit as the native API for LynxOS, and provides good compatibility and portability with both UNIX and Linux applications. This enables developers to build complex systems using LynxOS and still meet the strict real-time requirements that are not always achievable with UNIX or Linux.

Although LynxOS has an open standards POSIX API, it is still a proprietary RTOS, and hence is not encumbered with open source licensing restrictions and maintains a very well controlled code base. This proprietary code base is also much smaller than traditional UNIX and Linux systems, and has allowed LynuxWorks to create derivative versions of LynxOS to support specific market needs. The LynxOS-178 product is designed for safety-critical avionics systems and has been certified in systems to the highest FAA levels. LynxOS-178 still maintains the POSIX API, but adds a safety partitioning scheme. This POSIX API has been very useful in allowing LynxOS-178 to meet the FACE standard now being adopted in military avionics systems, which is based on POSIX and maintained by The Open Group.

LynxOS is celebrating its 25th birthday with a new version, LynxOS 7.0. This version brings in new security and communication features that are seen as essential for enabling embedded developers to build the latest devices to contribute to the Internet of Things (IoT).


PRODUCTS & TECHNOLOGY

6U VPX Board Features 4th Generation Intel Core Processor

A new 6U VPX processor board is based on the fourth generation Intel Core processor family (previously codenamed “Haswell”). The VR E1x/msd from Concurrent Technologies features either the quad-core Intel Core i7-4700EQ processor or the dual-core Intel Core i5-4400E processor, together with the associated mobile Intel QM87 Express chipset. With up to 32 Gbytes of DRAM and a rich assortment of I/O interfaces, this board is an ideal processor board for 6U VPX solutions requiring the latest in processing performance. 6U VPX is particularly well suited to high-end compute-intensive applications including command and control, surveillance, radar and image processing systems.

The 4th generation Intel Core processor family is based on 22nm process technology and provides enhanced CPU and graphics performance over previous generations at TDP levels up to 47W. Additionally, new instructions are introduced, including the Intel Advanced Vector Extensions 2.0 (Intel AVX2), which provide a performance improvement in integer and floating-point-intensive computations particularly appropriate for image processing applications, and the Intel AES New Instructions (Intel AES-NI) enhancements, which accelerate data encryption and decryption in hardware.

The VR E1x/msd is a 6U VPX processor board featuring this latest quad-core or dual-core processor and supporting chipset, with up to 32 Gbytes of DDR3L DRAM with ECC. Additional features include four SATA 600 mass storage interfaces, including an onboard SATA 600 HDD/SSD site, an onboard CompactFlash site, serial, USB, GPIO and GPI interfaces, Gigabit Ethernet ports, and graphics and stereo audio interfaces. The wide range of I/O interfaces can be further expanded by the addition of one or two XMC/PMC modules. The board supports a configurable control plane fabric interface (VITA 46.6) and a flexible PCI Express (PCIe) data plane fabric interface (VITA 46.4) supporting up to Gen 3 data rates, and is compatible with several OpenVPX profiles. The VR E1x/msd is I/O compatible with the previous generation VR 737/x8x family and can be used alongside the VR XMC/x01, a 6U VPX dual XMC/PMC carrier and mass storage board.

Initially the boards are available as commercial and extended temperature variants, and ruggedized variants will follow in the near future. To ease integration, many of today’s leading operating systems, including Windows, Linux and VxWorks, are supported. Systems using multiple processing boards will benefit from the optional Fabric Interconnect Networking Software (FIN-S), which provides a high-performance, low-latency communications mechanism for multiple host boards to intercommunicate across the high-speed fabric interface.

Concurrent Technologies, Woburn, MA. (781) 933-5900. www.gocct.com.

Compact Network Platform Uses AMD Embedded G-Series SoC

WIN Enterprises announces a new desktop platform designed to support a range of applications requiring compact size and versatile performance. The PL-80520 from WIN Enterprises is powered by an AMD Embedded G-Series SoC, an embedded component guaranteed for long product life. The platform features a high-bandwidth DDR3 DIMM slot that supports memory up to 8 Gbytes. Storage interfaces include a 2.5” SATA HDD and CompactFlash.

The PL-80520 is equipped with four copper GbE ports, a bypass function, a USB 2.0 port, an RJ45 console port, a mini-PCIe socket (PCIe x1 and USB) and 11 LED indicators for monitoring power and storage activity for system management, maintenance and diagnostics.

Housed in an attractive black-metal chassis, the unit was designed as a wireline or optionally wireless networking or network security device. Expansion capabilities include a full-size Mini-PCIe with USB, and a half-size Mini-PCIe with USB and PCIe/SATA signaling. Additional interfaces include 1 x RS-232/422/485 plus 2 x RS-232, 2 x USB 2.0 and 1 x USB 3.0, and Line-out and Mic-in audio ports. Features also include the onboard 1.6 GHz AMD Embedded G-Series SoC, one DIMM with up to 8 Gbytes of DDR3 and optional Wi-Fi support.

WIN Enterprises, North Andover, MA, (978) 688-2000. www.win-enterprises.com.



VITA 59 RCE Rugged COM Express for Harsh Environments

The new VITA 59 standard enables the proven COM Express technology to be used in mission-critical and harsh environments. New mechanical parameters guarantee operation across an extended temperature range, while providing high shock and vibration resistance as well as EMC protection.

As a new VITA standard, Rugged COM Express is based on the well-known and widespread PICMG standard COM.0, or COM Express. Rugged COM Express, or VITA 59 RCE, has been developed for mission-critical applications that place higher requirements on thermal design, shock/vibration, environmental influences and EMC protection than are met by PICMG COM.0.

Rugged COM Express adds PCB wings for mounting the electronics inside a conduction-cooled aluminum (CCA) frame. When combined with passive cooling, the CCA technology enables electronics to work in high temperature ranges without the need for high-maintenance fans.

One railway application that benefits from the easy extension of the temperature range is a locomotive drive control that requires full EN 50155 compliance and an extended temperature range from -40° to +125°C with passive cooling.

The sturdy metal frame and firmly secured electronics inside the unit deliver high resistance against shock and vibration. This is an essential advantage in harsh mining environments. The electronics within a mining machine’s IP67-compliant control platform have to withstand extremely high vibrations of up to 5G and must be shock-proof up to 50G. VITA 59 RCE is the perfect choice.

Medical systems often require very low EMC emissions. Thanks to the metal cover on top and on all four sides, as well as the bottom cover formed by the carrier board, the Rugged COM Express standard provides 100% EMC protection. Systems with dual-redundant CPUs, e.g., one for processing and one for HMI control, can communicate undisturbed using Rugged COM Express.

Another advantage of the cover frame is that, in combination with conformal coating, a sealed enclosure is formed, preventing the intrusion of environmental elements such as dust, chemicals and humidity.

Finally, many applications, especially in the railway and avionics markets, require long-term availability of up to 30 years. This is ensured by EOL management at MEN Micro, which gives users reliable planning options and saves costs through longer system lifetimes.

MEN Microsystems, Blue Bell, PA. (215) 542-9575. www.menmicro.com.

FMC/VPX Carrier Equipped with Optical Backplane Interface

A new line of FPGA Mezzanine Card (FMC) carriers and FMC modules from Pentek will include an optical backplane interface. The first members of the Flexor family are the Model 5973 3U VPX FMC carrier with a Virtex-7 FPGA and the Model 3312 multichannel, high-speed data converter FMC. They combine the high performance of the Virtex-7 with the flexibility of the FMC data converter, creating a complete radar and software radio solution.

The Flexor Model 5973 features a high-pin-count VITA 57.1 FMC site, 4 Gbytes of DDR3 SDRAM, a PCI Express (Gen 1, 2 and 3) interface up to x8, optional user-configurable gigabit serial I/O and optional LVDS connections to the FPGA for custom I/O. The Model 5973 delivers new levels of I/O performance by incorporating the emerging VITA 66.4 standard for half-size MT optical interconnect, providing 12 optical duplex lanes to the backplane. With the installation of a serial protocol, the VITA 66.4 interface enables gigabit backplane communications between boards, independent of the PCIe interface.

The Flexor Model 3312 FMC surpasses the speed and density of previous products with four 250 MHz 16-bit A/Ds and two 800 MHz 16-bit D/As. Its high-pin-count FMC connector matches the new Virtex-7 FPGA carrier, boosting performance levels and adding flexibility.

The Flexor Model 5973 comes preconfigured with a suite of built-in functions for data capture, synchronization, time tagging and formatting, all tailored and optimized for specific FMC modules such as the Flexor Model 3312. Together, they provide an attractive turnkey signal interface for radar, communications or general data acquisition applications, eliminating the integration effort typically left to the user when combining FMC and carrier.

The Pentek GateXpress PCIe Configuration Manager supports dynamic FPGA reconfiguration through software commands as part of the runtime application. This provides an efficient way to quickly reload the FPGA, which occurs many times during development. For deployed environments, GateXpress enables reloading the FPGA without the need to reset the host system, ideal for applications that require dynamic access to multiple processing IP algorithms.

The Pentek ReadyFlow Board Support Package is available for Windows and Linux operating systems. ReadyFlow is provided as a C-callable library; the complete suite of initialization, control and status functions, as well as a rich set of precompiled, ready-to-run examples, accelerates application development.

The Flexor Model 3312 FMC and Flexor Model 5973 VPX carrier are designed for air-cooled, conduction-cooled and rugged operating environments. The Model 3312 starts at $2,495. The Model 5973 with 4 Gbytes of memory starts at $14,995. Delivery is 8-10 weeks ARO for all models.

Pentek, Upper Saddle River, NJ. (201) 818-5900. www.pentek.com.

FIND the products featured in this section and more at

www.intelligentsystemssource.com


2.5A Monolithic Active Cell Balancer with Telemetry Interface

A monolithic flyback DC/DC converter is designed to actively balance high-voltage stacks of batteries. These battery stacks are commonly found in electric and hybrid vehicles as well as in fail-safe power supplies and energy storage systems. Because these batteries are stacked in series, the lowest capacity battery limits the run-time of the entire stack. Ideally, the batteries would be perfectly matched, but this is often not the case and generally gets worse as the batteries age. Passive balancing offers no improvement in run-time, as it dissipates the excess energy of the higher capacity batteries to match the lowest one. Conversely, the LT8584 from Linear Technology offers high-efficiency active balancing, which redistributes charge from the stronger cells (higher voltage) to the weaker cells during discharge. This enables the weaker cells to continue to supply the load, extracting 96% of the entire stack capacity, where passive balancing typically extracts approximately 80%.
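As a back-of-the-envelope illustration of why active balancing recovers more of the stack's capacity, the sketch below models a mismatched series stack under both schemes. The cell capacities and the single lumped converter efficiency are illustrative assumptions, not figures from Linear Technology:

```python
# Illustrative model of usable series-stack capacity under passive vs.
# active balancing. Cell capacities (Ah) and efficiency are made-up numbers.

def passive_usable(capacities_ah):
    # Passive balancing bleeds off surplus charge as heat, so every cell
    # can deliver only what the weakest cell holds.
    return min(capacities_ah) * len(capacities_ah)

def active_usable(capacities_ah, efficiency=0.92):
    # Active balancing redistributes surplus charge from stronger cells to
    # weaker ones; conversion losses are lumped into one efficiency factor.
    weakest = min(capacities_ah)
    surplus = sum(c - weakest for c in capacities_ah)
    return weakest * len(capacities_ah) + surplus * efficiency

cells = [5.0, 3.9, 4.9, 5.1]  # one weak, aged cell drags the stack down
total = sum(cells)
print(f"passive: {passive_usable(cells) / total:.0%} of total capacity")
print(f"active:  {active_usable(cells) / total:.0%} of total capacity")
```

With one weak cell, the passive figure drops sharply while the active figure stays near the converter-efficiency limit, mirroring the roughly 80% versus 96% comparison cited in the article.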

The LT8584 includes an integrated 6A/50V power switch, enabling an average discharge current of 2.5A while offering a simple and compact application circuit. Its isolated balancing design can return charge to the top of the battery stack, to any combination of cells in the stack, or even to a 12V battery used as an alternator replacement. The LT8584 runs off the cell that it is discharging, removing the need for complicated biasing schemes. It integrates seamlessly via the enable pin with the LTC680x family of battery stack voltage monitoring ICs without any additional software. The LT8584 also provides system telemetry, including current, resistance and temperature monitoring, when used with the LTC680x family of parts. When the LT8584 is disabled, it draws less than 20nA of quiescent current from the battery. For applications that require higher balancing current, multiple LT8584s can be paralleled. The device is both FMEA and ISO 26262 compliant. Packaged in a 16-lead TSSOP, the LT8584EFE is priced starting at $2.95 each.

Linear Technology, Milpitas, CA (408) 432-1900. www.linear.com.

Rugged PCI/104-Express SBC with Intel N2800 Offers Rich I/O

A rugged PCI/104-Express single board computer (SBC) is based on Intel’s dual-core Cedar Trail N2800 CPU. The Atlas from Diamond Systems offers a speed of 1.86 GHz, and dual-core hyperthreading technology enables applications to run in parallel, providing exceptionally efficient processing. The Atlas SBC combines Intel Atom CPU performance, a wealth of onboard I/O and a conduction-cooled thermal solution at a competitive price. Its rugged design makes it exceptionally reliable in harsh applications, including industrial, on-vehicle and military environments.

Available I/O includes USB 2.0, RS-232/422/485, Gigabit Ethernet, SATA and digital I/O. Atlas supports I/O expansion with PCI-104, PCIe/104, PCI/104-Express and PCIe MiniCard I/O modules. Atlas uses a new miniature, cost-effective, high-speed expansion connector that supports most PCIe/104 I/O modules. This design helps keep the cost of Atlas low, while increasing the PCB area available for other I/O features.

Thanks to a dual-use PCIe MiniCard/mSATA socket, the board can accommodate newer I/O modules in the PCIe MiniCard form factor featuring Wi-Fi, Ethernet, analog I/O, digital I/O and CAN. These modules provide compact expandability without increasing the total height of the system. For rugged applications, mSATA disk modules up to 64 Gbytes are available in SLC and MLC technologies and with wide temperature operation.

Atlas SBCs run Linux, Windows Embedded Standard 7 and Windows Embedded CE operating systems. A Linux software development kit is available with bootable images and drivers, enabling engineers to start a design project right out of the box. The Atlas SBC was specifically designed for rugged applications, from an operating temperature of -40° to +75°C and onboard DDR3 SDRAM to an integrated conduction-cooling heat spreader and a high tolerance for shock and vibration. Two models are available, one with 4 Gbytes of RAM and one with 2 Gbytes. Single unit pricing starts at $645.

Diamond Systems, Mountain View, CA. (650) 810-2500. www.diamondsystems.com.



Sensors Expo & Conference
June 24-26, 2014, Donald E. Stephens Convention Center, Rosemont, IL
www.sensorsexpo.com

Sensing Technologies Driving Tomorrow’s Solutions

Innovative Applications. Expert Instructors. Authoritative Content. Tomorrow’s Solutions. Register today to attend one of the world’s largest and most important gatherings of engineers and scientists involved in the development and deployment of sensor systems.

Featuring Visionary Keynotes:
Reimagining Building Sensing and Control: Luigi Gentile Polese, Senior Engineer, Department of Energy, National Renewable Energy Lab
Sensors, The Heart of Informatics: Henry M. Bzeih, Head of Infotainment & Telematics, Kia Motors America

Tracks: Chemical & Gas Sensing, Energy Harvesting, Internet of Things, M2M, MEMS, Measurement & Detection, Power Management, Sensors @ Work, Wireless

What’s Happening in 2014:
New tracks: Internet of Things, Energy Harvesting, MEMS, Wireless, High Performance Computing
Plus: full-day Pre-Conference Symposia, Technology Pavilions on the Expo Floor, co-location with the High Performance Computing Conference, Best of Sensors Expo 2014 Awards Ceremony, Networking Breakfasts, Welcome Reception, Sensors Magazine Live Theater and more

Registration is open for Sensors 2014! Sign up today for the best rates at www.sensorsexpo.com or call 800-496-9877. #sensors14

Special subscriber discount: register with code A318C for $50 off Gold and Main Conference Passes.*

*Discount is off currently published rates. Cannot be combined with other offers or applied to previous registrations.


Industrial Server-Grade System with Refreshed Xeon E5-2600 v2

A new 4U server-grade industrial system is based on the Intel Xeon processor E5-2600 v2 product family and delivers a scalable high-performance platform for a wide array of industrial applications. The TRL-40 from ADLINK Technology features increased computing power, intelligent manageability via IPMI v2.0, and dedicated PCIe Gen 3 interfaces for up to three PCIe x16 VGA cards, making it an optimal solution for automated optical inspection (AOI), digital surveillance, video wall and medical imaging applications.

The TRL-40 provides increased performance with the latest Intel Xeon processor E5-2600 v2 for peak workloads, significantly improving performance for applications that rely on floating point or vector computations, coupled with dual-channel ECC registered DDR3 1600 MHz memory supporting up to 128 Gbytes in eight DIMM slots.

The TRL-40 implements a user-friendly web interface through an integrated web server and web-based KVM, enabling automatic video recording based on an event trigger. Administrators can easily monitor the system remotely, decreasing maintenance costs through media redirection and out-of-band power management.

Featuring multiple I/O expansion, including 4x PCIe x16 Gen 3, 1x PCIe x8 Gen 3 and 1x PCIe x4 Gen 2, the TRL-40 delivers dedicated PCIe Gen 3 bandwidth for image data processing, reducing I/O latency by up to 30% and as much as doubling the bandwidth of previous generations. In addition, the TRL-40 is compatible with ADLINK’s off-the-shelf frame grabbers, making it ideal for high-end machine vision and video streaming solutions. To ensure storage utilization and data security, the TRL-40 also provides a hardware RAID solution for up to four SATA III storage devices, as well as one mini-PCIe form factor expansion slot and bundling with ADLINK’s industrial modules.

ADLINK Technology, San Jose, CA. (408) 360-0200. www.adlinktech.com.

Mini-ITX Industrial Mainboard for 24/7 Continuous Service

A new Mini-ITX mainboard is specifically designed for 24/7 continuous service. The D3243-S from Fujitsu is based on the Intel Q87 Express chipset and supports DDR3 1333/1600 SDRAM memory components as well as the complete range of 4th-generation Intel Core i3/i5/i7 processors with the LGA1150 socket. The Mini-ITX mainboard is made from particularly rugged, long-lived components and is designed for industrial embedded applications with operating temperatures between 0° and 60°C. It meets industrial standards concerning CE (EMC and safety), burst, climate, shock, vibration, etc.

The D3243-S comes with Intel HD Graphics, for example HD 4600, already integrated into the processor. For graphics display, the compact Mini-ITX board supports DVI-I, dual DisplayPort V1.2 and dual-channel 24-bit LVDS for up to three independent displays. Further features integrated onboard include PCI Express x16 Gen 3 and Mini-PCI Express, 8-bit GPIO and multichannel audio, an mSATA socket (SATA III) for the embedded operating system, and six USB 2.0 and two USB 3.0 sockets. Furthermore, the D3243-S comes with two sockets for Intel GbE LAN, which also enable teaming of several network cards. The LGA1150 socket provides a high degree of scalability, higher performance and lower cost, as well as a lower level of capital commitment, which also entails a reduction of inventory risk.

Also integrated onboard is the Infineon Trusted Platform Module (TPM) V1.2, which enables extensive protection of data and licenses. The D3243-S mainboard also boasts further safeguards against unauthorized access to data, namely password protection of the BIOS and hard disks, as well as the EraseDisk BIOS function, which enables safe erasure of the hard disk. In addition, a Recovery BIOS function makes it possible to repair malfunctioning firmware.

Fujitsu, Tokyo, Japan. +81-3-6252-2220. www.fujitsu.com.

40GbE Dual-Port Fiber QSFP+ Network Adapter Boosts Network Speed

A next-generation 40GbE network adapter supports dual fiber QSFP+ ports. The NIP-86020 from American Portwell Technology leverages the Mellanox ConnectX-3 Ethernet controller and is designed with 40G Ethernet technologies fully compliant with the IEEE 802.3ba standard. It provides IPv6 offloading, IEEE 1588 precision time protocol synchronization, RDMA over Converged Ethernet (RoCE) and jumbo frame functions. The NIP-86020 delivers high-bandwidth, industry-leading Ethernet connectivity for performance-driven server and storage-intensive applications in enterprise data centers and high-performance computing, as well as in a variety of embedded environments. The NIP-86020 also supports virtual machine software from VMware, Microsoft, Citrix, Oracle and others through virtualization acceleration technology.

Portwell's NIP-86020 40GbE dual-port fiber QSFP+ network interface card is designed for scalability, reliability, simplicity and affordability. It is built to deliver outstanding bandwidth capabilities for next-generation Ethernet traffic in high-end appliances.

American Portwell Technology, Fremont, CA. (510) 403-3399. www.portwell.com.


USB 2.0 Digital Signal Analyzer Includes New Time-Frequency Analysis

A new value-added Visual Signal DAQ Express time-frequency analysis (TFA) application is now included with the ADLINK Technology USB-2405, a 24-bit USB 2.0 dynamic signal acquisition (DSA) module for integrated electronic piezoelectric (IEPE) accelerometer and microphone-based vibration measurement. Inclusion of the application provides a more complete solution and improves the user experience in machinery vibration analysis environments.

Visual Signal DAQ Express is an easy-to-use application with powerful functionality and an interactive user interface that simplifies acquisition and analysis of noise and vibration signals for instant results. Combining high accuracy, superior performance and value-added TFA software, the USB-2405 is the best choice for portable time-frequency spectrum analysis for machine diagnostics and failure prevention, research, and portable field measurement.

Visual Signal DAQ Express was developed by AnCAD Technology, ADLINK’s software alliance partner. It features graphical, ready-to-use functional modules for quick setup of the USB-2405 DSA, data acquisition and post-processing, frequency domain conversion, digital filtering, time-frequency analysis, data logging and exporting. With a focus on TFA, the combined USB-2405 and Visual Signal DAQ Express package is a valuable tool for analyzing machinery vibration. Users can add modules as needed into a user-defined project to get visual analysis results instantly without any programming, which can minimize the development time of a new project. The USB-2405 with Visual Signal DAQ Express also provides analysis functions similar to well-known sound and vibration analysis applications, conserving development resources. The installation USB flash drive for Visual Signal DAQ Express is included with the shipment-ready USB-2405 at no extra cost. Users need only follow the instructions in the Quick Start Guide to register on the website and activate their Visual Signal DAQ Express.

ADLINK Technology, San Jose, CA. (408) 360-0200. www.adlinktech.com.

Fanless Embedded Box PCs with High Expansion, High Performance

A series of fanless embedded box PCs features high performance and rich expansion. The ARK-3500 and ARK-3510 from Advantech are based on the third generation Intel mobile QM77 chipset and support up to a quad-core Intel Core i7 processor. The ARK-3500 series boasts versatile expansion, with 2 PCI, PCIe x1, PCIe x4, an MIOe module and 2 MiniPCIe, to fulfill diverse applications. Storage options include 2 hard drives or SSD, 2 mSATA and CFast, and there is also optional wireless communication with Wi-Fi/3G/GPS support. As for rugged design, the ARK-35 series supports a wide-range power input of 9~34 VDC or 12 VDC, and a wide operating temperature from -10° to 60°C with SSD. These new series provide complete EMC and safety certifications (CE/FCC/UL/CCC/CB/BSMI).

The ARK-3500 provides dual expansion slots with 2 PCI or PCIe x1 + PCIe x4 interfaces. It is easily compatible with isolated AIO/DIO/CAN cards and motion control cards for factory automation, Camera Link cards for machine vision, and video capture cards for surveillance. The ARK-3510 features high flexibility with its optional MIOe module support for extended I/O. It offers six different SKUs for different applications, such as adding an MIOe-220 module for up to a total of 5 GigaLAN ports, which can serve as a data backup and transfer station. Both the ARK-3500 and ARK-3510 can support two more MiniPCIe interfaces with two SIM holders, and can support Wi-Fi, 3G, LTE 4G and GPS modules for wireless connectivity.

Advantech’s SUSIAccess software provides a smart, easy, remote management API so users can monitor, configure and control a large number of terminals with centralized, real-time maintenance capability. This allows customers to focus on their applications while SUSIAccess helps manage administration. The ARK-3500 and ARK-3510 series are also equipped with McAfee for enhanced security and Acronis for backup and recovery; both are officially licensed and protect devices from threats.

The ARK-3500 and ARK-3510 support iManager firmware technology, an intelligent, cross-platform self-management tool that monitors system status and takes automatic action if anything is abnormal. iManager provides multi-level programmable watchdogs, including IRQ interrupt, ACPI events and reset levels, and can also monitor voltage and temperature to ensure system reliability.

Advantech, Irvine, CA. (949) 420-2500. www.advantech.com.


Advertiser Index


RTC (ISSN #1092-1524) magazine is published monthly at 905 Calle Amanecer, Ste. 250, San Clemente, CA 92673. Periodical postage paid at San Clemente and at additional mailing offices. POSTMASTER: Send address changes to RTC, 905 Calle Amanecer, Ste. 250, San Clemente, CA 92673.

GET CONNECTED WITH INTELLIGENT SYSTEMS SOURCE AND PURCHASABLE SOLUTIONS NOW

Intelligent Systems Source is a new resource that gives you the power to compare, review and even purchase embedded computing products intelligently. To help you research SBCs, SOMs, COMs, systems or I/O boards, the Intelligent Systems Source website provides products, articles and whitepapers from industry-leading manufacturers, and it's even connected to the top 5 distributors. Go to Intelligent Systems Source now so you can start to locate, compare and purchase the correct product for your needs.
www.intelligentsystemssource.com

Company - Page - Website
Advanced Micro Devices, Inc. - 44 - www.amd.com/embedded
Commell - 29 - www.commell.com.tw
Congatec, Inc. - 4 - www.congatec.us
Dolphin Interconnect Solutions - 43 - www.dolphinics.com
Design Automation Conference - 35 - www.dac.com
Grey Matter Consulting and Sales - 19 - www.greymatter-cs.com
MSC Embedded, Inc. - 4 - www.mscembedded.com
One Stop Systems, Inc. - 23, 27 - www.onestopsystems.com
Pentair/Schroff - 18 - www.schroff.biz/interscalem/
Portwell - 9 - www.portwell.com
Real Time Embedded Computing Conference - 42 - www.rtecc.com
Sensors Expo & Conference - 39 - www.sensorsexpo.com
Trenton Systems - 2 - www.trentonsystems.com
TQ Systems GmbH - 26, 31 - www.convergencepromotions.com/TQ-USA

Learn how PCI Express™ improves your application’s performance

www.dolphinics.com

Remote Device-to-Device Transfers

Need to access FPGA, GPU, or CPU resources between systems? Dolphin’s PCI Express Network provides a low latency, high throughput method to transfer data. Use peer to peer communication over PCI Express to access devices and share data with the lowest latency.

Fast Data Transfers

The Event for Embedded & High-Tech Technology

Register today at www.rtecc.com

2014 Real-Time & Embedded Computing Conferences:
Dallas, TX - March 18
Austin, TX - March 20
Melbourne, FL - April 15
Huntsville, AL - April 17
Boston, MA - April 29
Nashua, NH - May 1
Rosemont, IL (Sensors Expo Pavilion) - June 24-26
Orange County, CA - August 19
San Diego, CA - August 21
Minneapolis, MN - September 9
Chicago, IL - September 11
Toronto, ON - October 7
Ottawa, ON - October 9
Los Angeles, CA - October 21
San Mateo, CA - October 23
Tysons Corner Area, VA - November 13

High-Performance Computing Conference: June 25-26, Rosemont, IL. HPCConference.com

