25 - 26 September, 2019

Radisson Blu Bengaluru

Bangalore, India

Event Details

THURSDAY September 26, 2:00pm - 3:30pm | Robusta
EVENT TYPE: REGULAR SESSION
SESSION 5
Formal Verification
Chair:
Girish Marudwar - Synopsys, Inc.
This session raises the bar beyond the earlier sessions on Formal Verification (FV) technology and methodology. Nowadays the worry is no longer coverage numbers alone; the goal is finding more bugs. The presenters in this session discuss advanced techniques, deployed beyond the normal use models, that make formal more effective and efficient. Unique new challenges are discussed, and innovative solutions are presented with case studies and results. These methodologies can be readily picked up and applied to your own challenging designs.

5.1 Formal Verification of Low-Power RISC-V Processors
Verification of modern-day processors is a non-trivial exercise, and RISC-V is no exception. In this paper, we present our RISC-V formal verification methodology, which focuses on rooting out deadlocks in the processor, finding low-power design-optimization bugs, and establishing end-to-end correctness of functional units. In six weeks, we caught 63 new confirmed bugs in a taped-out 40 nm processor that the designers had not found earlier, and we established exhaustive correctness of hundreds of end-to-end functional properties for several functional units.
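The abstract does not reproduce the paper's actual properties; as a rough illustration of the two kinds of checks it mentions, the Python sketch below emits hypothetical SystemVerilog assertions: an end-to-end correctness property for a pipelined adder and a liveness check for deadlock hunting. All signal names (issue_valid, rs1, rs2, result, retire_valid) and the two-cycle latency are assumptions, not from the paper.

```python
# Illustrative only: generate hypothetical SVA checks of the two kinds the
# abstract mentions. Signal names and the pipeline latency are assumed.

def end_to_end_add_property(latency: int) -> str:
    """End-to-end correctness: the result leaving the pipe must equal the
    architectural sum of the operands sampled when the op was issued."""
    return (
        "property p_add_e2e;\n"
        "  @(posedge clk) disable iff (!rst_n)\n"
        f"  (issue_valid && op == OP_ADD) |-> ##{latency}\n"
        f"    (result == ($past(rs1, {latency}) + $past(rs2, {latency})));\n"
        "endproperty\n"
        "a_add_e2e: assert property (p_add_e2e);"
    )

# Deadlock hunting in the same spirit: every issued instruction must
# eventually retire (a liveness property the tool proves or refutes).
NO_DEADLOCK = (
    "a_no_deadlock: assert property (\n"
    "  @(posedge clk) disable iff (!rst_n)\n"
    "  issue_valid |-> s_eventually retire_valid);"
)

if __name__ == "__main__":
    print(end_to_end_add_property(latency=2))  # assumed two-stage unit
    print(NO_DEADLOCK)
```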
 Speaker: Ashish Darbari - Axiomise Ltd.
 Author: Ashish Darbari - Axiomise Ltd.
5.2 Simulation Guided Formal Verification with “River Fishing” Techniques
Traditional formal verification that starts from time 0 is good for early design validation of IPs. Once the IPs are integrated and the interfaces are defined, the controllability of the design is restricted significantly. With standard interfaces limiting the legal transactions accepted into the design, it is inevitably difficult for formal to build a long trajectory of transaction sequences that targets deep design functionality. In this article, we describe a new methodology that improves the overall quality of verification by leveraging simulation to reach those deep design states and then turning on formal verification to verify the functionality exhaustively. “River fishing” is a good metaphor for our methodology: by identifying good “fishing spots” in the functional simulation environment, we can leverage the sequences of transactions already performed by simulation and start formal verification from those interesting states.

Contributions: Formal verification has been used successfully by many companies to verify complex SoCs [2] and safety-critical designs [3]. It has been used extensively for A, B, C, as described in [5]:
• Assurance, to prove and confirm the correctness of design behavior.
• Bug hunting, to find known bugs or to explore for unknown bugs in the design.
• Coverage closure, to determine whether a coverage statement/bin/element is reachable or unreachable.

Using formal verification to uncover new bugs is emerging as an efficient approach once the functional simulation regression has stabilized and is no longer finding as many bugs as before. Traditional formal verification that explores the design from time 0 is good for early design verification, but it is inefficient for exercising complex scenarios: complex bugs and coverage goals occur under combinations of events and scenarios. Sophisticated approaches such as waypoints [4] and goal-posting [5] have been proposed and used successfully to find bugs buried deep in the design, especially post-silicon bugs. We have expanded these approaches further. River fishing [1] is a good metaphor: by identifying some good “fishing spots,” we can significantly increase the number of fish we catch. Hence, instead of using one starting state to initialize the design for formal verification, we pick out many interesting simulation states and start formal verification from these “fishing spots.” These fishing spots can be corner cases, special events, and design states resulting from long simulations with thousands of transactions. Effectively, we leverage what simulation has already performed and start formal verification deep in the design space, where design issues can be explored efficiently. We encountered one problem with this approach, however: in a functional simulation regression environment, there may be too many “fishing spots.” We learned that it is not enough to find good “fishing spots”; it is also important to make sure it is easy to catch fish at those spots.

To summarize, the “river fishing” formal methodology consists of three major steps:
1. Identify and extract a set of good “fishing spots” from the simulation traces.
2. Prioritize and screen these “fishing spots” by the effort they require.
3. Launch multiple formal engines concurrently in a server-farm environment.

Identify “fishing spots.” River fishing experts [1] suggest that the interesting fishing spots are special sections of the river where there is unusual activity. Picking interesting spots from simulation traces for formal verification is very similar. We have identified four criteria for these spots: interface interactions; control and interrupts; concurrent events; and feedback loops and counts. A sketch of this step follows.
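As a concrete illustration of steps 1 and 2, the following minimal Python sketch mines a simulation event log for candidate start states. The (cycle, event) log format, the event names, and the diversity-scoring heuristic are assumptions made for illustration, not the paper's implementation; a real flow would parse a VCD/FSDB dump or a tool log.

```python
# Hypothetical sketch: mine a simulation event log for "fishing spots".
from collections import defaultdict

# Events matching the four criteria: interfaces, control/interrupts,
# concurrent activity, and feedback loops/counters (names are assumed).
INTERESTING = {"bus_handshake", "irq_asserted",
               "channels_done_same_cycle", "counter_wrap"}

def find_fishing_spots(event_log, window=100, min_diversity=2):
    """Score each window of cycles by how many *distinct* interesting
    events it contains; dense, diverse windows suggest corner-case
    activity worth using as a formal starting state."""
    seen = defaultdict(set)
    for cycle, event in event_log:          # iterable of (cycle, name)
        if event in INTERESTING:
            seen[cycle // window].add(event)
    # Step 2: prioritize -- rank windows by event diversity and keep
    # only those that clear the threshold.
    ranked = sorted(seen.items(), key=lambda kv: len(kv[1]), reverse=True)
    return [w * window for w, events in ranked if len(events) >= min_diversity]

# Example: two distinct events land in the window starting at cycle 45000.
log = [(45012, "irq_asserted"), (45030, "counter_wrap"), (90001, "bus_handshake")]
print(find_fishing_spots(log))  # -> [45000]
```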
Launching from “fishing spots.” There are advantages to launching formal verification from these “fishing spots.” If functional simulation has exercised similar behaviors, the “fishing spots” can be very close to the targets. If some targets require specific pre-conditions to occur ahead of time, we can also take advantage of the functional tests and identify different “fishing spots” for those targets. Effectively, the river fishing approach leverages what has already been performed by functional simulation and starts formal verification as close to the targets as possible. A sketch of this launch step follows the results below.

Results: We have captured two representative test cases here. Both succeeded only after we started formal verification from simulation states deep in the simulation traces.

Case 1: To save power, today's designs contain many ratio-synchronous clocks that run each component as slowly as possible, so ratio-synchronous interfaces are common. At one such interface, the fast clock domain was designed to sample the data at the end of a slow clock period when the data-valid condition is asserted. However, under some corner-case situations (initially unknown to the design team), the data was sampled even when the valid condition was not asserted. As a result, corrupted data was registered and propagated through the system. The “fishing spots” were determined based on activity on the two interfaces, counters, and control logic. Several simulation states were picked after the design had been initialized, and also after data had been flowing correctly for hundreds of cycles between the two interfaces. Formal verification exposed the incomplete handshake between the two interfaces that caused the data to be sampled invalidly.

Case 2: The design is a data-flow controller in which dynamic transfer channels are set up to handle data packets with different priorities. When more than one channel finished its transfer at the same time, one set of address pointers was de-allocated twice while the other set was not de-allocated at all, causing memory leaks and data corruption. The “fishing spots” were determined based on activity on the concurrent events, counters, and complex control logic. In this type of design, it is very difficult for formal verification to set up these data transfers; however, they are readily available in the simulation regression. By leveraging multiple simulation states deep in the simulation runs, formal verification was able to synchronize two channels to complete their transfers at the same time and expose the weakness in the channel de-allocation logic.
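Step 3 of the flow, launching many formal runs concurrently from the chosen states, might look like the sketch below. The formal_shell command, its flags, and the checkpoint file naming are placeholders for whatever save/restore mechanism a particular formal tool provides; they are not any real tool's interface.

```python
# Hypothetical sketch of step 3: one independent formal run per "fishing
# spot", farmed out in parallel. "formal_shell" and its options stand in
# for a real tool's checkpoint-restore interface.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_formal_from(cycle):
    cmd = ["formal_shell",
           "-restore", f"sim_state_{cycle}.chk",  # state saved by simulation
           "-prove", "all_asserts",
           "-timeout", "3600"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return cycle, result.returncode

spots = [45000, 128700, 512300]        # cycles chosen by find_fishing_spots()
with ThreadPoolExecutor(max_workers=8) as pool:  # e.g. one worker per license
    for cycle, rc in pool.map(run_formal_from, spots):
        print(f"spot @{cycle}: {'clean' if rc == 0 else 'counterexample/inconclusive'}")
```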
 Speaker: Bathri Narayanan Subramanian - Mentor, A Siemens Business
 Authors: Bathri Narayanan Subramanian - Mentor, A Siemens Business
Ping Yeung - Mentor, A Siemens Business
5.3 Challenges of Formal Verification on Deep Learning Hardware Accelerator
In this paper, we discuss the formal verification challenges, and their solutions, in verifying a deep learning hardware accelerator, where a complex set of computational modes and configurations leads to many complex scenarios that are very hard to verify exhaustively with traditional constrained-random verification. The formal solution includes divide-and-conquer on sub-units for better formal convergence, black-boxing already-verified blocks and memories, verifying the data path and control path independently, and sequence equivalence checks. Each of these solutions is implemented independently, and their results sometimes overlap, but together they converge to maximize formal coverage on top of the existing constrained-random verification. Formal verification also has a positive impact on the debug turnaround time (TAT) of designers and verification engineers, relieving the load on machine and human resources, improving quality of results, and freeing time and resources for development activities.
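To make the divide-and-conquer and black-boxing strategy concrete, here is a minimal sketch of how such a decomposition could be scripted. The sub-unit and memory names and the fv_shell command are illustrative placeholders under assumed design structure, not the paper's flow or any specific tool's interface.

```python
# Hypothetical sketch: per-sub-unit formal runs with neighbouring blocks
# and memories black-boxed, so each proof reasons only about local logic.
# Module names and "fv_shell" are placeholders, not a real tool CLI.
SUB_UNITS = {
    "conv_engine": {"blackbox": ["weight_sram", "act_sram", "post_proc"]},
    "post_proc":   {"blackbox": ["conv_engine", "dma_ctrl"]},
    "dma_ctrl":    {"blackbox": ["conv_engine", "post_proc"]},
}

def fv_command(top: str, cfg: dict) -> list:
    """Build the command line for one sub-unit run; black-boxed modules
    are left unconstrained by the tool, which shrinks the cone of
    influence and helps convergence."""
    cmd = ["fv_shell", "-top", top]
    for mod in cfg["blackbox"]:
        cmd += ["-blackbox", mod]
    return cmd

for unit, cfg in SUB_UNITS.items():
    print(" ".join(fv_command(unit, cfg)))
```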
 Speaker: Satish Yellinidi Dasarathanaidu - Intel Technologies Corporation
 Author: Satish Yellinidi Dasarathanaidu - Intel Technologies Corporation