A Simple Approach to System Coverage

With the complex Systems-on-Chip (SoCs) we are building today, one of the challenges is defining system-level coverage that can be used to measure and guide system-level verification efforts.

Since the SoC is assembled from components, a common approach is to just reuse all the coverage defined during the verification of the components. This is straightforward and checks the reuse box.

Unfortunately, due to the nature of SoC-level verification (slow simulations, very large state spaces, inputs constrained by the integration, etc.), component coverage collected at the system level will be sparse. Further, the SoC verification team will lack familiarity with block internals and will find the coverage difficult to interpret.

In an attempt to account for the coverage achieved at the block level, some teams invest significant effort in merging coverage results from block verification into the SoC coverage model. The resulting merged coverage may look more complete than sparse coverage, but this additional data gives a false impression of what was exercised at the system level and may actually obscure the very data that is most relevant to system-level verification.

Back to the basics

Let’s step back for a moment and look at what we are trying to accomplish at each level of verification, and what coverage is relevant to those goals.

DISCLAIMER: the separation of concerns discussed below is the ideal case. Reality is messier – some goals overlap and others are not addressed directly due to resource constraints. The important thing is to have a clear understanding of these concerns so that we can critically examine whether the verification goals are really being addressed.

In order to manage the complexity of SoC designs, we have to follow a hierarchical verification strategy:
– Thoroughly verify blocks (e.g., IPs or clusters)
– For each level of integration, verify:
  – Interconnect and protocols
  – Interaction between blocks

This strategy of verifying integrations applies to each level of integration in the design (e.g., block, cluster, platform and chip).

And finally, perform strategically selected testing at the system level.

[Figure: hierarchical verification block diagram]

Some more details:

block (IP or cluster) verification
A block is a portion of the design that has well specified behavior and is typically developed by a small team. Examples of typical blocks are IP blocks or memory subsystems.

A design of this size can be thoroughly exercised in a stand-alone environment and is generally within the capacity of formal property checking tools. Verification at this level should be very thorough and provide good confidence in the block’s functionality.

interconnect and protocol verification
The first part of integration verification is the interconnect: checking that the pins are wired up correctly and that the interfaces use the correct protocols. Standard transactors and monitors, as well as formal verification, are often used to automate interconnect/protocol verification.
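
As a minimal sketch of what such an interface check might look like, the module below asserts one property of a hypothetical valid/ready handshake. The signal names, module name, and bind target are invented for illustration; a real protocol would come with its own checker or verification IP.

  module handshake_checker (
    input logic clk,
    input logic rst_n,
    input logic valid,
    input logic ready
  );
    // Once valid is asserted, it must hold until the transfer is accepted.
    a_valid_stable : assert property (
      @(posedge clk) disable iff (!rst_n) valid && !ready |=> valid)
      else $error("valid dropped before ready was seen");

    // Simple interconnect coverage: count accepted transfers.
    c_transfer : cover property (
      @(posedge clk) disable iff (!rst_n) valid && ready);
  endmodule

  // The checker can be attached to the integrated design without editing it, e.g.:
  //   bind soc_top.u_periph handshake_checker u_hs_chk (
  //     .clk(clk), .rst_n(rst_n), .valid(p_valid), .ready(p_ready));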

interaction verification
This is where verification engineers earn their money.

When verifying the interaction of blocks, we don’t need to verify the entire functionality of the blocks or the interconnect and protocols – those have been done already.

Stimulus can use Transaction-Level Models; checking can be done with integration-level assertions, scoreboards, or C++ models; and coverage can focus on crosses of a small number of coverage points of interest from each block: mode transitions, exception conditions, and asynchronous inputs.
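
As a hedged sketch, the interaction coverage promoted for one block might be as small as the covergroup below. The module name, signal names, and bin choices are assumptions chosen only to illustrate crossing mode, exception, and asynchronous-input points.

  // Illustrative interaction coverage promoted by one block team.
  module blk_interaction_cov (
    input logic       clk,
    input logic [1:0] power_mode,   // block operating mode
    input logic       exc_pending,  // exception condition
    input logic       async_irq     // asynchronous input
  );
    covergroup interaction_cg @(posedge clk);
      cp_mode : coverpoint power_mode;
      cp_exc  : coverpoint exc_pending;
      cp_irq  : coverpoint async_irq;
      // The interaction plan is largely the cross of these promoted points.
      x_mode_exc_irq : cross cp_mode, cp_exc, cp_irq;
    endgroup

    interaction_cg cg_i = new();
  endmodule

  // The block team delivers this alongside the RTL; the integration bench
  // binds it to the block instance, e.g.:
  //   bind my_block blk_interaction_cov u_icov (.clk(clk), .power_mode(mode_q),
  //                                             .exc_pending(exc_q), .async_irq(irq_i));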

The integration team can’t reasonably know the implementation details of all the blocks (and for some third-party IPs, details are not available at all), so they depend on the block provider to identify important coverage points and possibly provide stimulus generation.

system verification
Up through the SoC-level integration, the verification environment has been artificially created and the stimulus may be overly constrained by testbench configuration or stimulus generation limitations.

On the other hand, customer applications have a knack for finding use cases that have not previously been explored. The idea of system verification is to simulate or emulate the entire design running something as close to the intended application as possible, watching for anything going wrong. The testbench should be very simple and preferably synthesizable so that the same setup can be run on evaluation boards or in emulation.
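
A sketch of what such a simple, synthesizable checker could look like is shown below; the signature-comparison scheme, the timeout, and all names are assumptions, not a prescribed implementation.

  // Minimal synthesizable pass/fail monitor; usable in simulation, emulation,
  // or on an evaluation board.
  module sys_result_monitor #(
    parameter int unsigned TIMEOUT_CYCLES = 1_000_000
  ) (
    input  logic        clk,
    input  logic        rst_n,
    input  logic        test_done,     // set by the application when it finishes
    input  logic [31:0] result_sig,    // signature computed by the application
    input  logic [31:0] expected_sig,  // known-good signature
    output logic        pass,
    output logic        fail
  );
    logic [31:0] cycle_cnt;

    always_ff @(posedge clk or negedge rst_n) begin
      if (!rst_n) begin
        cycle_cnt <= '0;
        pass      <= 1'b0;
        fail      <= 1'b0;
      end else if (test_done) begin
        pass <= (result_sig == expected_sig);
        fail <= (result_sig != expected_sig);
      end else begin
        cycle_cnt <= cycle_cnt + 1;
        // Watchdog: flag a failure if the application never completes.
        if (cycle_cnt >= TIMEOUT_CYCLES)
          fail <= 1'b1;
      end
    end
  endmodule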

Hierarchical coverage

Given the different verification goals, it makes sense that different coverage and checkers are needed for different stages of verification:

[Figure: hierarchical verification block diagram showing coverage reuse]

It all begins at the block level: source coverage, functional coverage, and assertions are defined for all block functionality. Depending on the verification environment, block teams should use tags, naming conventions, or separate modules to divide the coverage and checkers into three categories (a naming-convention sketch follows the three descriptions below):

block-level-only coverage
Coverage associated with the basic block functionality that will not be promoted to the next level of hierarchy.

interconnect/protocol coverage
Depending on the verification environment, some combination of toggle coverage on pins, source coverage, interface monitors, assertions on interfaces, protocol checkers, etc. may be used.

interaction coverage
The block team knows the implementation details and can identify important coverage points that define different operational states and transitions (e.g., asynchronous inputs, queue overflows, power mode changes, debug mode, etc.).

The interaction verification plan will generally consist of cross-coverage of these important points. Having the block owner walk through the block’s important coverage points is a very good communication mechanism between the block and integration verification teams.
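
One possible way to keep the three categories separable, sketched below under assumed names, is to put each category in its own coverage module with a conventional prefix and bind only what a given level needs.

  //   blkcov_<block>  : block-level-only coverage, never promoted
  //   iccov_<block>   : interconnect/protocol coverage
  //   iacov_<block>   : interaction coverage, promoted to integration

  module iacov_fifo (              // interaction coverage for a hypothetical FIFO block
    input logic clk,
    input logic full,
    input logic flush              // asynchronous flush request
  );
    covergroup cg @(posedge clk);
      cp_full  : coverpoint full;
      cp_flush : coverpoint flush;
      x_corner : cross cp_full, cp_flush;  // e.g., flush arriving while full
    endgroup
    cg cg_i = new();
  endmodule

  // The block bench binds all three prefixes; the integration bench binds only
  // the iccov_* and iacov_* modules, e.g.:
  //   bind fifo_ctrl iacov_fifo u_iacov (.clk(clk), .full(full), .flush(flush));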

system coverage
System-level coverage and checking can be reused from SoC-level integration but may be limited to externally observable behavior and final results checking when run in emulation or on evaluation boards. Special attention should be paid to combinations found in specific system-level scenarios.

Manageable System Coverage

By aligning coverage with specific verification goals, we can reuse coverage appropriately from one stage to the next and ultimately get a more realistic sense of the verification coverage we achieve at the system level. Even with only a small number of important coverage points promoted per block, the cross-coverage spans a large enough space of system operating states that exercising it all will be challenging. Because it is selective, the promoted coverage remains understandable and provides a useful metric and guidance for focusing effort.

Optimization opportunities

Through this deliberate coverage reuse strategy, we also achieve some operational efficiencies vs. reusing all block-level coverage:

  • Block-level-only coverage will not add overhead to integration simulations.
  • Once the interconnect between blocks has been thoroughly verified, its coverage and checkers can be turned off.
  • Once the interaction between blocks has been thoroughly verified, its coverage and checkers can likewise be turned off (see the sketch after this list).
  • Some or all of the interconnect and interaction verification could be done formally, so no runtime checking is needed at all.
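
A hedged sketch of one way to switch such checks off is shown below; the plusarg name and the hierarchical scope are invented for illustration.

  module cov_control;
    initial begin
      if ($test$plusargs("no_interconnect_checks")) begin
        // Disable the protocol assertions bound under the interconnect
        // (the first argument, 0, means this scope and everything below it).
        $assertoff(0, soc_tb.dut.u_interconnect);
      end
      // Covergroups can be handled the same way, e.g., by only calling new()
      // on the interaction covergroups while they are still being tracked.
    end
  endmodule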

A familiar theme

Much of verification is reverse engineering: modeling the environment, figuring out the design team’s interpretations of the specification, checking implementation details, etc. Many design and implementation decisions are not recorded and have to be reverse-engineered later.

When we capture and leverage these fleeting design and implementation decisions, we gain efficiency. For instance, design knowledge is captured and used when RTL designers record their assumptions as assertions, and implementation knowledge is used when debugging happens at the block level rather than during an integration phase. Having the block teams identify important coverage points likewise captures implementation-level details in a form other teams can use directly.
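
For illustration only, a designer assumption recorded inline might look like the following (the arbiter and its signals are hypothetical):

  module arbiter (
    input  logic       clk,
    input  logic       rst_n,
    input  logic [3:0] req,
    output logic [3:0] grant
  );
    // ... arbitration logic ...

    // The designer expects at most one grant at a time; writing that down as
    // an assertion saves someone from reverse-engineering it later.
    a_grant_onehot : assert property (
      @(posedge clk) disable iff (!rst_n) $onehot0(grant))
      else $error("grant expected to be one-hot or idle");
  endmodule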


© Ken Albin and System Semantics, 2014. All Rights Reserved.