The most effective functional verification environments use a variety of analysis technologies, combining the strengths of each to ensure that the device under test (DUT) behaves as specified. However, this creates an inherent challenge: the results from each source must be properly compared and combined in order to present a concise and accurate picture of the true state of the verification effort.

The most common problem we see is when designers want to merge the results of formal analysis with the RTL code and functional coverage results from their UVM test bench simulations, but do not fully understand what formal coverage actually measures. Therefore, we will start from the familiar ground of simulation-generated code and functional coverage before moving on to defining formal coverage.

Overview of simulation code and functional coverage

Code coverage is simply a structural metric: the percentage of statements in the body of the RTL code that have been executed by the tests. Although it is important that the test bench exercise the entire RTL code – for example, to confirm there is no dead code, which would imply a DUT error – important functionality may still be missing from the design, and/or paths to key functions may violate the specification.
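As a minimal sketch of the dead-code case mentioned above (the module and signal names here are invented for illustration), consider a branch that no stimulus can ever reach; statement and branch coverage will flag it as permanently uncovered:

```systemverilog
// Hypothetical example: a 2-bit signal can never exceed 3, so the
// second branch is dead code and will report 0% coverage in simulation.
module dead_code_example (
  input  logic       clk,
  input  logic [1:0] mode,
  output logic       grant
);
  always_ff @(posedge clk) begin
    if (mode < 2'd3)
      grant <= 1'b1;
    else if (mode > 2'd3)   // unreachable: max value of "mode" is 3
      grant <= 1'b0;        // dead code -- likely a design error
  end
endmodule
```

Such a hole tells you either that the stimulus is inadequate or, as here, that the code itself is wrong.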

Functional coverage of an RTL simulation is the metric of how much design functionality has been exercised – i.e., "covered" – by the test bench or test environment. The functional coverage goals to be met are explicitly defined by the verification engineer in the form of a functional coverage model.

In its basic form, this is a user-defined mapping of each functional feature to be tested to a coverpoint, where each coverpoint has specific conditions (value ranges, particular transitions or crosses, etc.) that must be met before it is reported as 100% covered during simulation. All of these coverpoint conditions are defined in the form of bins. Multiple coverpoints can be collected under one covergroup, and a collection of multiple covergroups is usually called a functional coverage model.
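The coverpoint/bin/covergroup hierarchy described above can be sketched in a few lines of SystemVerilog. This is an illustrative fragment only; the signal names (clk, pkt_valid, pkt_len) are hypothetical:

```systemverilog
// A minimal covergroup sampling a hypothetical 8-bit packet-length
// field on every valid packet. Each "bins" line is one condition that
// must be hit before the coverpoint reports 100% covered.
covergroup pkt_cg @(posedge clk iff pkt_valid);
  len_cp : coverpoint pkt_len {
    bins zero    = {0};          // corner case: empty packet
    bins small   = {[1:15]};     // value-range bins
    bins large   = {[16:254]};
    bins max_len = {255};        // corner case: maximum length
    bins ramp    = (0 => 255);   // a transition bin: min followed by max
  }
endgroup
```

During simulation, every sample that lands in a bin increments that bin's hit count; unhit bins are the "coverage holes" discussed below.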

During simulation, when a coverpoint's conditions are hit, the corresponding bins are marked as covered, and thus the bin hit counts provide a measurement of verification progress. After a series of test cases, a graphical functional coverage report can be generated and analyzed, and plans can be made to close the "coverage holes" by creating new tests that exercise the still-untouched areas of the DUT code.

It is important to note that the coverage results are also a reflection of the quality of the test bench – for example, whether the test bench exercises all the DUT scenarios that need to be checked. In effect, the coverage results and the number of holes reflect the quality of the verification plan and its alignment with the original specification itself.

Defining formal-based coverage

Formal analysis provides the following types of coverage that are not available from simulation:


Reachability

Is there any combination of input signals that can drive the circuit into a given important state or node? If there is none, the point is unreachable. Hence this provides information similar to the dead-code findings of simulation code coverage.


Observability

What are all the possible paths through the circuit and state space from a selected node to the signals specified in an assertion? Are these the expected paths, and do they comply with the design specification?

Structural Cone of Influence (COI)

Working forward or backward from a selected point in the circuit, what is all the logic that could affect this node? This is a coarse form of coverage and is usually not useful for final sign-off analysis.

Model-based mutation coverage

This measures how well the checkers cover the design by injecting mutations into the compiled model – the source code itself is not changed – and checking whether an assertion or checker detects each mutation. Mutation coverage provides clear guidance on where more assertions are needed, and it is a stronger indicator than any other coverage type because it measures error detection, not just design exercise.
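To make the mutation idea concrete: a mutation might, for example, invert a grant condition inside the model, and the question is whether any checker fires when it does. A minimal sketch of the kind of assertion that would catch such a mutation, assuming a hypothetical req/gnt handshake with signals clk and rst_n:

```systemverilog
// Illustrative SVA property: every request must be granted within
// 4 cycles. A mutation that breaks the grant logic in the model
// would cause this assertion to fail -- i.e., it is "detected".
property p_req_gets_gnt;
  @(posedge clk) disable iff (!rst_n)
    req |-> ##[1:4] gnt;
endproperty

assert_req_gnt : assert property (p_req_gets_gnt)
  else $error("req was not granted within 4 cycles");
```

If no assertion fails when the mutated model misbehaves, the mutation goes undetected – a clear signal that a checker is missing in that area of the design.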

Although these measurements are radically different from simulation coverage, the overall application is the same at a high level: measuring progress while simultaneously assessing the quality of the formal test bench (consisting of constraints and functional verification properties).

Where do the problems start

The most common designer request is to "merge" the simulation-generated and formally generated coverage metrics corresponding to each specific coverage point, where often the aim is to enable verification teams to choose either formal or simulation to verify a mutually exclusive subset of the design. For example, imagine that an IP contains a design element that is very well suited to formal analysis – an arbiter is one example – while the rest of the design can be more easily verified with either technology.

Naturally, designers want to "take credit" for the formal verification of the arbiter and blend it seamlessly with the simulation results for the rest of the design. This is where the problems can begin. The main risks to be aware of are:

  • First, regardless of the technology chosen, just because something is covered does not mean it has been properly verified. Think again about the relationship between code coverage and functional coverage – just because tests exercise 100% of the code doesn't mean the code is also functionally correct.
  • Simulation coverage reflects only the specific paths that simulation has taken from the inputs through the state space for a given set of stimuli.
  • Some types of formal coverage do reflect progress, but even when formal analysis traverses some of the same states as simulation, the logic involved is usually larger. In addition, formal analysis uses free-floating input stimulus, and its results hold for all time.
  • Other types of formal coverage report how logic and signals "work backward" from the outputs.
  • Simulation is performed at the cluster or system-on-chip (SoC) level, while formal is usually run at the block level. Consequently, the comparison is one of end-to-end testing versus a more localized function. The way the coverage data was generated may be lost once the results are recorded in the coverage database.

The bottom line: if you are not careful to understand the differences between formal and simulation coverage – if you simply combine the coverage data on a 1-to-1 line/object/point basis – you may incorrectly conclude that your verification quality is higher than it really is. For example, you could mislead yourself and your managers into believing you are done when in reality there are untested areas of your code and holes in your test plan.

Editor’s note: This is part one of a three-part series on the pitfalls of mixing formal and simulation coverage. The second part of the series will compare simulation and formal results side by side using progressively more complex RTL DUT code examples.

Mark Esslinger is a product engineer in the IC Verification Systems division of Siemens EDA, where he specializes in assertion-based methods and formal verification.

Joe Hupcey III is product marketing manager for the Siemens EDA Design & Verification Technologies formal product line of automated apps and advanced property checking.

Nicolae Tusinski is product manager for formal verification solutions at Siemens EDA.
