Test & Measurement
Improving Detection With Future Protection
Delivering future-proof innovation to the most demanding customer base requires the adoption of best-in-class solutions. By Stephane Monboisset, Senior Manager, Processing Platforms Product Marketing for Xilinx, and Peter Stemer, R&D Section Manager System Control for Agilent Life Science.
One of the most important and useful techniques for detecting the presence of unwanted substances in everyday items is High Performance Liquid Chromatography, or HPLC. Sometimes referred to as High Pressure LC, it is a well-established scientific technique that separates the particles held in a small sample across a spectrum to determine the composite elements of a substance suspended in a liquid. It is used not only in toxicology but wherever chemicals are found, such as the food industry, agriculture and environmental control.
Agilent Technologies has been a leading supplier in the field of HPLC since 1984, using a modular approach to manufacturing the large and complex machines that include not only sophisticated electronics but a range of precisely developed valves, pumps and heaters. As the enabling technologies developed, the range of applications that could make use of HPLC grew and with that came a demand for ever-greater performance in terms of accuracy and speed of results.
As a result, customer demand drove Agilent to push the instrument designers to their limits; the modules used in the HPLC instruments follow a product life cycle of approximately 10 years, and with each new generation the enabling electronics are expected to advance significantly. In 1995, the second generation of modules was developed, followed in 2005 by the Nucleus range. Today the family of modules under development is the Fusion range and, true to form, the engineering team is looking to employ a platform that provides significant scope to meet not only today's requirements but those of the next 10 years.
Future Proof
The incumbent module design employs a PowerPC processor coupled to an FPGA; for the next generation Fusion range, the platform will be a single chip based on the Zynq programmable SoC from Xilinx. The Zynq-7000 platform goes beyond the traditional FPGA format, by tightly integrating hard IP with an advanced FPGA fabric and the world’s leading embedded processor sub-system.
The decision to use the Zynq SoC was made to allow as much design reuse as possible; both the firmware and the VHDL code used in the Nucleus range can be ported to the Zynq-7000 platform. This gives the engineering team a platform that will be capable of meeting the needs of HPLC instruments for at least 10 years.
Initially the intention is to use the dual core ARM Cortex-A9 processing system to provide greater functionality in terms of the data acquisition. This will allow the design team to develop new modules and, over time, enable old modules to be replaced with a single, common architecture.
Agilent’s design team is also aware of the indirect benefits of using a single chip solution, such as lower cost, but it is the extensibility of the platform that is key; the available performance will ensure a long time-in-service for the Fusion modules which will eventually replace all of the existing modules in its HPLC instruments.
Consolidation
It can take up to three years to fully develop a module for HPLC instrumentation, so any design reuse that can be achieved will naturally be critical. The Nucleus family of modules employed a PowerPC processor, while the Zynq-7000 platform uses the ARM Cortex-A9, but porting the software will be simplified thanks to the use of ENEA’s operating system, OSE.
Moving from a single-core to a multicore platform offers numerous benefits; consolidation is one — systems that previously needed multiple processors can be redesigned to use a single multicore device. For Agilent's HPLC modules the consolidation was not of multiple processors but of multiple devices, specifically a processor and an FPGA. Bringing these two elements together into a single device could be achieved using an ASIC, but that wouldn't give Agilent the longevity it needs from a single platform. For this application — and for many others — the flexibility and extensibility of the Zynq-7000 platform is the only way to achieve consolidation while delivering increased performance, without sacrificing design flexibility.
Porting any software application from a single-core to a multicore platform can make sense, but in order to get the best from the platform it is important to understand how the software will make use of it. ENEA's real-time operating system, OSE, is now 'multicore aware', and more than that, ENEA has developed a Multicore Migration Platform that further eases the overall process.
Instinctively, most software is written to execute linearly; the programming language C doesn't inherently support parallelisation, so when moving to a multicore platform it is important to understand the different approaches possible. Specifically, these are symmetric multiprocessing (SMP) and asymmetric multiprocessing (AMP). In general terms, the former runs a single instance of the operating system across all cores, with the application software free to run on whichever core has the most availability at any given time. An AMP approach, by contrast, often assumes each core will have its own instantiation of the operating system running on it and that the application will be 'hard-partitioned', such that specific functions are 'tied' to a specific core. In the example of consolidation, where multiple processors are being replaced with a single multicore processor, AMP may be the most appropriate approach. Where a single core is being replaced with a multicore device, as in the case of Agilent's HPLC modules, SMP may seem like the right solution.
In practice, however, the correct solution will also depend heavily on the software; where there exists a high degree of parallelisation in the code an AMP approach may be better than SMP, but for code with a large number of interdependencies the best solution could be SMP.
Another consideration is the use and distribution of shared resources in the multicore platform, such as memory; this can affect which approach is better suited to a given application. In many cases the best solution will be to employ a hypervisor: a software layer that provides abstraction between the operating system and the underlying hardware architecture. This allows the application software to 'believe' it is running on a single core, when in reality the hypervisor decides what runs on which core, by closely monitoring and controlling the available resources.
There is no 'turn-key' solution when migrating to a multicore platform; it requires effort from the design team, particularly in order to achieve the best performance gains. This is where the ENEA Multicore Migration Manual comes into play; it provides guidance and advice on identifying and dealing with shared resources, on how and where to partition code, and on how to use the load-balancing framework built into the multicore version of OSE.
A key requirement of Agilent's design team is that the hardware platform they develop will not only support all the different modules produced today but also be able to grow as the demand for more features and performance in the HPLC instruments increases over the next 10 years.
Supporting any platform for 10 years is challenging enough, but to also guarantee a performance roadmap on the same hardware was almost unheard of before the advent of the Zynq-7000 platform. Because it tightly integrates the industry's most widely supported and adopted multicore processing system — MPCore from ARM — with the latest 28nm programmable logic fabric from Xilinx, the Zynq-7000 platform truly is future-proof.