Mentor Graphics explore Cell-Aware ATPG
Finding and fixing manufacturing defects and systematic yield limiters within library cells at 90nm and beyond. Stephen Pateras, product marketing director for Mentor Graphics Silicon Test products, explores Cell-Aware ATPG in this article from ES Design magazine.
The goal of silicon test is to find defective parts before they are shipped to the customer. The widely used methodology is to add scan test structures to the design, then deliver test patterns through those structures that reveal defects when the chip responses are observed. Test patterns are generated from fault models that represent potential defects with simple properties. These fault models are based on observations of silicon defect behaviour and are developed by electronic design automation (EDA) vendors.
This provides high fault detection, but mostly for defects at the gate (or cell) boundary or between library cells. However, traditional fault models such as stuck-at, transition and path-delay are no longer sufficient for detecting all defects within cells. TSMC has stated that, “for 90nm and beyond, a significant number of manufacturing defects and systematic yield limiters lie inside library cells.” With more recent fabrication technologies, the population of defects occurring within cells is significant, amounting to roughly 50% of all defects.
The traditional approach finds some of the defects within cells, but many cell-internal defects require a unique set of stimuli to excite and observe them. Particularly for zero-defect designs, those using emerging technologies like FinFETs, or simply to gain a quality edge over the competition, you need “cell-aware” fault models that specifically target the defects within cells. A new methodology, called cell-aware automatic test pattern generation (cell-aware ATPG), is available to define and use fault models that target these cell-internal defects.
The cell-aware ATPG methodology has been shown to increase defect coverage by over 1% compared to what can be achieved with traditional test patterns. Multiple studies have been performed that show tests generated using cell-aware ATPG will find defects that the normal test methods miss. Published reports have shown improvements over 800 DPM (defects-per-million) with cell-aware testing (Figure 1), which is very significant for many production environments.
Why cell-aware?
During normal ATPG, thousands of patterns are produced, so even if a cell-internal defect is not targeted by a traditional fault model, it may still be detected fortuitously. However, with millions of gates in a design, it is not effective to rely on chance to detect these potential cell-internal defects at every gate. One option would be to apply every possible combination of inputs at every gate; this is referred to as the gate-exhaustive fault model. It would be effective in detecting many cell-internal defects since it applies every possible combination. For example, for an 8-input cell, gate-exhaustive testing would apply all 2^8 (256) possible input combinations. It is easy to see that applying such an exhaustive set of patterns quickly becomes impractical.
Furthermore, many defects inside cells manifest themselves as timing-related defects and are not detectable using static tests; a two-pattern test is necessary to detect them. So for our 8-input cell example, two-cycle gate-exhaustive testing would require the application of 2^8 x 2^8 = 2^16 (65,536) patterns. A much more efficient method is clearly necessary to target static and dynamic cell-internal defects for low-DPM products. This is the objective of cell-aware ATPG.
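The pattern-count arithmetic above can be checked with a short script; the function name is illustrative, not part of any ATPG tool:

```python
# Pattern counts for gate-exhaustive testing of a single n-input cell.
# Static (one-cycle) testing needs every input combination once;
# two-cycle (dynamic) testing needs every ordered pair of combinations.

def gate_exhaustive_patterns(num_inputs: int, cycles: int = 1) -> int:
    """Number of patterns needed to exhaustively exercise a cell."""
    combos = 2 ** num_inputs      # all input combinations for one cycle
    return combos ** cycles       # ordered sequences of `cycles` combinations

# The 8-input cell from the article:
print(gate_exhaustive_patterns(8))      # 256 static patterns (2^8)
print(gate_exhaustive_patterns(8, 2))   # 65536 two-cycle patterns (2^16)
```

The exponential blow-up is immediate: moving from one to two cycles squares the pattern count, which is why exhaustive testing is impractical beyond trivial cell sizes.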
Cell-aware ATPG starts with an automated cell library characterisation process, shown in Figure 2. The goal is to convert potential physical defects that can manifest themselves in each technology cell layout into a fault model that ATPG can process.
Each fabrication process has a set of technology libraries used to describe the logic behaviour and physical layout of the lowest level component in the netlist. Usually, the ATPG and time-based simulation processes are based on a logical library composed of logic models. A corresponding physical library exists for the IC layout. The cell-aware characterisation process starts with an extraction of the physical library, represented in GDSII.
Each extracted cell results in a transistor-level design with parasitic resistances and capacitances. A resistance location represents a conductive path with the potential for an open defect, while a capacitance identifies a location with the potential for a bridge defect. Each potential defect is SPICE-simulated against an exhaustive set of stimuli to determine whether there are sets of cell inputs that produce an output different from the defect-free result.
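The per-defect simulation loop can be sketched as follows. This is a minimal illustration only: a real characterisation flow drives an analog simulator such as SPICE on the extracted transistor-level netlist, whereas here the good and defective cells are stand-in boolean functions, and all names are hypothetical:

```python
from itertools import product

def good_cell(a, b):
    """Defect-free 2-input AND cell (boolean stand-in for a SPICE netlist)."""
    return a & b

def defective_cell(a, b):
    """The same cell with an injected defect; illustratively, a defect
    whose behaviour turns the AND into an OR."""
    return a | b

def detecting_patterns(good, faulty, num_inputs):
    """Exhaustively simulate good vs. faulty cell and return the input
    combinations whose outputs differ, i.e. the patterns that detect
    this particular defect."""
    detects = []
    for pattern in product([0, 1], repeat=num_inputs):
        if good(*pattern) != faulty(*pattern):
            detects.append(pattern)
    return detects

print(detecting_patterns(good_cell, defective_cell, 2))
# -> [(0, 1), (1, 0)]: the inputs where AND and OR disagree
```

In the real flow this comparison is an analog one (delayed or degraded waveforms, not just wrong logic values), which is what allows dynamic defects to be characterised as well.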
The simplest case is to simulate each capacitive location with a 1Ω resistance representing a hard bridge. There are many possibilities to model other defects using analog simulations. For example, various resistances can have different effects and might detect defects that aren’t detectable with hard shorts (1Ω bridges). In addition, simulating over multiple cycles is also useful to detect bridges or opens that are only observed as dynamic defects.
The final process in cell-aware characterisation is to convert the list of input combinations into a set of the necessary input values for each fault within each cell. Because this information is defined at the cell inputs as logic values, it is essentially a logic fault model representation of the analog defect simulation. This set of stimuli for each cell constitutes the cell-aware fault model file for ATPG. Within this file, a simulated defect (now a fault) can have one or more input combinations.
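Conceptually, the fault model file maps each cell to its faults, and each fault to the input combinations that detect it. A hypothetical in-memory representation (the actual on-disk format is tool-specific):

```python
from dataclasses import dataclass, field

@dataclass
class CellAwareFault:
    defect_id: str           # e.g. a bridge or open location from extraction
    detecting_inputs: list   # one or more detecting input combinations (logic values)

@dataclass
class CellAwareModel:
    cell_name: str
    faults: list = field(default_factory=list)

# A bridge defect in a hypothetical AND2 cell, detectable by two patterns:
and2 = CellAwareModel("AND2")
and2.faults.append(CellAwareFault("bridge_n1_n2", [(0, 1), (1, 0)]))

print(and2.cell_name, len(and2.faults), and2.faults[0].detecting_inputs)
```

Because the model is keyed by cell name rather than by design instance, ATPG can apply it to every instance of that cell in any netlist built on the same library, which is what makes the once-per-library characterisation described below possible.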
Note that because the cell characterisation process is performed for all cells within a technology library, any design using that technology can read in the same cell-aware fault model file. Characterisation only needs to occur once and then can be applied to any design on that technology node.
Measuring effectiveness
When evaluating the effectiveness of a new fault model, it is important to determine whether it uniquely identifies defects that the other fault models do not, and to assess its impact on test time and pattern count.
Several manufacturing test experiments have been performed to measure these parameters. In these experiments, the tester is typically set up in data collection mode, also known as “continue on fail”. A typical flow is illustrated in Figure 3. The production test flow is to perform at-speed transition tests. For all dies that pass this test, top-off stuck-at testing is performed. For all dies that pass the stuck-at top-off tests, both the cell-aware at-speed and cell-aware slow-speed top-off patterns are run. Any die that fails either or both of these tests is logged. With sufficient volume, this process allows the reduction in DPM achieved by applying the cell-aware patterns to be calculated. The greater than 800 DPM result mentioned earlier is one such value. Many other experiments have shown similarly strong results.
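The DPM figure from such an experiment is a simple ratio: the dies caught only by the extra cell-aware tests, relative to the dies that had passed the traditional flow, scaled to a million. A quick sketch with illustrative numbers (not taken from any published study):

```python
def dpm_reduction(extra_fails: int, passing_dies: int) -> float:
    """Defects-per-million escapes prevented: dies failed only by the
    cell-aware patterns, per million dies that passed the traditional
    transition and stuck-at tests."""
    return extra_fails / passing_dies * 1_000_000

# Illustrative: 120 additional failing dies found among 150,000 dies
# that had passed the traditional test flow:
print(dpm_reduction(120, 150_000))   # 800.0
```

At these scales even a fraction of a percent of escapes matters, which is why an 800 DPM reduction is significant for high-volume or zero-defect production.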
Traditional fault models fully test the periphery of standard cells and the interconnections between them, but they can miss some bridging or open defects internal to the cells. Advanced process technologies, notably the use of FinFETs, introduce a variety of new failure modes that need to be addressed during IC testing.
The cell-aware characterisation process can create a set of fault models based on the simulated behaviour of defects within the cell layout. As a result, a higher quality pattern set can be produced by ATPG enhanced to support the new fault model. Silicon results have already shown significant additional defect detection beyond standard stuck-at and transition patterns when using cell-aware ATPG.
Author profile: Stephen Pateras is product marketing director for Mentor Graphics Silicon Test products. He was previously the VP of Marketing at LogicVision. While at LogicVision, Stephen also held senior management positions in engineering and was instrumental in defining and bringing to market several generations of LogicVision’s semiconductor test products. From 1991 to 1995, Stephen held various engineering lead and management positions within IBM’s mainframe test group. He received his Ph.D. in Electrical Engineering from McGill University in Montreal, Canada.