ENHANCED MANUFACTURING SERVICES 4.0
EMS 4.0: NPI Driven by Test Coverage
The purpose of any test solution is to maximize test coverage, ensuring that the majority of defects are detected, while minimizing the test cost.
If a product is not tested well enough, poor-quality products will damage the company's reputation. If a product is tested too much, it can negatively impact a variety of business processes, including production costs, time to market and ship-to-target performance.
Obviously, after test we know which products have failed and cannot be shipped.
Larger, more complex designs will be repaired, because of the large number of high-value components present on a single PCBA, which would otherwise be thrown away. Only when the percentage of failures is very low, or when the repair costs involved are much higher than the value of the product, can a “failure = scrap” policy be considered.
Production Model
TestWay, a key tool within the ASTER digital suite, produces a Production Test Model report that clearly summarizes the test process & key metrics.

“FPY” – First Pass Yield is the percentage of boards that pass the test.
It can no longer be considered a good measure of production quality. This is easily demonstrated: a test coverage of 0% will result in a First Pass Yield of 100%!
“FOR” – Fall-off Rate is the proportion of boards that fail the test.
This leads to a question which looks simple on the surface but is in reality extremely thought-provoking: is a board good because it passes the test?
From practical experience, the following question arises: “Are all failing products really faulty?” And for the same reason we may ask: “Are all products that are shipped, good products?” The answer is clear for both questions: “NO!”.
“Slip” – the escape rate – is a key metric that represents the faulty products that will be shipped to the end customer.
Ultimately, the “Slip” is how the end-users will measure the final quality. If a PCBA fails at system test, it is because it fell into the escape rate (or slip), which is usually much higher than expected.
There are two possible reasons why this situation occurs:
- The DPMO figures are higher than expected.
- The combined coverage is lower than optimal.
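Both effects can be quantified. The following is a minimal Python sketch, using hypothetical defect categories, DPMO figures and coverage figures (not TestWay output), showing how the slip grows when either assumption breaks:

```python
# Illustrative only: the defect categories, DPMO figures and coverage
# figures below are hypothetical examples, not TestWay output.

def slip_ppm(dpmo, coverage):
    """Escaping defects per million opportunities: defects that occur
    (DPMO) but are not detected (1 - coverage)."""
    return sum(rate * (1.0 - coverage[cat]) for cat, rate in dpmo.items())

dpmo     = {"open": 120, "short": 80, "wrong_value": 40}
coverage = {"open": 0.95, "short": 0.98, "wrong_value": 0.90}

print(f"{slip_ppm(dpmo, coverage):.1f} ppm")  # 11.6 ppm baseline slip

# DPMO figures turn out higher than expected: slip doubles.
print(f"{slip_ppm({k: 2 * v for k, v in dpmo.items()}, coverage):.1f} ppm")  # 23.2 ppm

# Combined coverage is lower than optimal: slip triples.
print(f"{slip_ppm(dpmo, {k: v - 0.10 for k, v in coverage.items()}):.1f} ppm")  # 35.6 ppm
```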
Incorrect DPMO figures are probably due to limited defect traceability, or incorrect root-cause analysis. This subject will be addressed in the last article of the trilogy.
Unexpectedly low coverage could be due to the use of inadequate coverage metrics, such as confusing test accessibility with testability.
Test Coverage
In assessing the results from a combination of test methods, TestWay simulates a variety of test strategies and predicts the test coverage.

For a deliberately absurd example of how test coverage is calculated, let us consider a simple PCBA comprising 4 components: 3 resistors and 1 BGA:
- The 3 resistors are measured with very high accuracy, but there is no test on the BGA.
- Naive score: 3 tested components / 4 components
So is the board test score really 75%?
Clearly it is not. Something is needed to weight the test coverage, something credible that can be easily updated to reflect the growing complexity of electronics.
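To illustrate why weighting matters, the following Python sketch contrasts the naive component-count score with a score weighted by defect opportunities; the weights are hypothetical, not a TestWay metric:

```python
# Naive component-count coverage vs. a weighted view. The defect-
# opportunity weights are hypothetical, not a TestWay metric.

components = {
    # name: (tested, defect opportunities: joints, value, polarity, ...)
    "R1":  (True, 2),
    "R2":  (True, 2),
    "R3":  (True, 2),
    "BGA": (False, 250),  # hundreds of solder joints, untested here
}

naive = sum(tested for tested, _ in components.values()) / len(components)
weighted = (sum(w for tested, w in components.values() if tested)
            / sum(w for _, w in components.values()))

print(f"naive coverage:    {naive:.0%}")     # 75%
print(f"weighted coverage: {weighted:.1%}")  # 2.3% - the untested BGA dominates
```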
Consider all the manufacturing defects within the defect universe, including: missing components, wrong value, misalignment, incorrect polarity, damaged components, open circuits, short circuits, insufficient solder and excessive solder.
We must have test strategies in place that are capable of detecting these defects. The ability to detect defects can be expressed by a coverage facet, so that each defect category is aligned with coverage metrics.
| MPSF | PPVSF | PCOLA/SOQ/FAM |
|---|---|---|
| Material | Value | Correct |
| | | Live |
| Placement | Presence | Presence |
| | | Alignment |
| | Polarity | Orientation |
| Solder | Solder | Short |
| | | Open |
| | | Quality |
| Function | Function | Feature |
| | | At-Speed |
| | | Measure |
These metrics allow the estimation of the theoretical coverage, or measurement of the real coverage, for each unique test strategy, or combination of test strategies.
No single test strategy is capable of detecting all the defects. It is a combination of complementary test strategies that provide a good overall coverage.
When calculating test coverage it is important to consider the DPMO that reflects the current manufacturing process.
This way the test coverage can be aligned so that better coverage is provided where there is a greater opportunity for defects occurring during manufacture.
In the Test Effectiveness formulae below, each defect category is associated with its corresponding coverage.
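One common formulation, consistent with the description above, weights each defect category's coverage by its DPMO; this is a sketch, and the exact TestWay formulation may differ:

```latex
% C_i: coverage achieved for defect category i
% DPMO_i: defect rate for category i
\mathrm{Effectiveness} = \frac{\sum_i C_i \cdot \mathrm{DPMO}_i}{\sum_i \mathrm{DPMO}_i},
\qquad
\mathrm{Slip} = \sum_i \left(1 - C_i\right) \mathrm{DPMO}_i
```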
Design to Test
Typically a good test strategy that provides an overall high level of coverage is a combination of different test equipment available within the test line.
Each test step has the ability to catch a subset of the defects. So by identifying the escape rate, which is the number of faulty products that could, ultimately, be shipped to the customer, it is possible to plug the gaps in the overall test coverage.
This is explained by the analogy of a fisherman’s net, where fish escape through one net to the next; in this analogy, the escapes are due to a lack of test coverage.
Overlapping tests have little or no value and should be eliminated from the process, in order to provide a lean test strategy.
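The net analogy can be made concrete with a short Python sketch; the test stages and coverage figures are hypothetical, and the model simplifies by applying each stage's coverage uniformly across all defect categories:

```python
# The fisherman's-net analogy in numbers: each stage catches a fraction
# of the defects that reach it; whatever escapes the last net is the
# slip. Stage coverages are hypothetical.

stages = [("AOI", 0.80), ("ICT", 0.90), ("BST", 0.60)]

remaining = 1.0  # fraction of occurring defects still undetected
for name, cov in stages:
    caught = remaining * cov
    remaining -= caught
    print(f"{name}: catches {caught:.1%}, {remaining:.2%} escape onward")

print(f"combined coverage: {1.0 - remaining:.2%}")  # 99.20% here

# A stage whose catch is near zero only repeats upstream coverage and
# is a candidate for removal from a lean test line.
```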

Machine data aligned with the simulated test line can subsequently be exported, in native formats usable by assembly machines, AOI, AXI, ICT, FPT and BST testers. The outputs include assembly and test programs, or input lists and test models, as well as test fixture files.
Optimizing a Flying-Probe Test (FPT) with other complementary test strategies, such as Boundary-Scan Test (BST), not only ensures that optimum test coverage is achieved, but will significantly reduce test time.
This is also the case for In-Circuit Test where test point placement is optimized for maximum test coverage and/or fixture cost reduction.
This will not only result in cost savings but will also reduce test generation and debug time. This can be modelled with the other test strategies within the test line to provide a production test cost model.
Real Time Savings
Breakdown
| Function | Original Time | Using TestWay | Time Saved | Percentage Time Saved |
|---|---|---|---|---|
| Program Generation | 7.5 Hrs. | 6 Hrs. | 1.5 Hrs. | 20% |
| Debug Time | 14 Hrs. | 7 Hrs. | 7 Hrs. | 50% |
| Test Time | 13m 33s | 7m 28s | 6m 5s | 45% |
⇒ The first board will finish test 7 hours 6 minutes earlier (7 hours of debug time saved plus 6 minutes 5 seconds of test time).
Test Closed-Loop
The Industry 4.0 philosophy focuses on providing a “closed loop” in order to identify where problems exist and facilitate remedial action.
An example of where disparity can occur between the expected test coverage and the achieved test coverage is where test development and PCBA manufacturing are outsourced.
It is imperative that the OEM has complete visibility of what is achieved by their supplier. Otherwise there is a good chance that an inferior product could be manufactured and shipped to the end customer.
High escape rates also have a direct impact on the No Fault Found bone pile.
The completed post debug test program should reflect the estimated coverage requirements defined by the OEM.
The example below shows how the outsourced test program can be measured and compared against the early estimation in order to verify that the original requirements have been realized.
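In spirit, such a comparison reduces to a per-facet gap check. A minimal Python sketch, using hypothetical PPVS-style facets and coverage figures:

```python
# Comparing the OEM's estimated coverage against the coverage measured
# from the supplier's post-debug test program. Facet names and figures
# are hypothetical examples.

estimated = {"Presence": 0.98, "Polarity": 0.95, "Value": 0.90, "Solder": 0.92}
measured  = {"Presence": 0.97, "Polarity": 0.88, "Value": 0.91, "Solder": 0.85}

for facet, target in estimated.items():
    gap = measured[facet] - target
    status = "meets estimate" if gap >= 0 else f"shortfall of {-gap:.0%}"
    print(f"{facet:8s} estimated {target:.0%}, measured {measured[facet]:.0%}: {status}")
```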
