Event Details

THURSDAY September 26, 10:30 - 17:30 | Exhibit Area Corridor
Poster Session
13.1 Automatically Synthesizing Higher Level of Protocol Abstraction for Faster Debug and Deeper Insight into Modern Digital Designs
Design and verification complexity has driven the evolution of various languages and methodologies, such as SystemVerilog and UVM. This evolution happened primarily because of the raised level of abstraction at which design and verification engineers have to think and capture intent. Such abstraction simplifies comprehension and debug of the system. Still, as designs go through the flow, they are transformed into lower levels of abstraction (through logic synthesis and instrumentation), depending on the context in which the data is captured, such as third-party sources or hardware accelerator-based data capture. Analysis and debug complexity then goes up again because the higher level of data abstraction is no longer available. In this paper, we propose to synthesize the higher level of abstraction from the lower-level design data and make it available for debug and analysis. In our experiment, we captured AXI signal-level activity from an emulation system and synthesized higher-level transactions. With this view, users can debug and analyze AXI data as packetized transactions instead of signal-level waveforms. Though the methodology is demonstrated on AXI, it is applicable to all industry-standard protocols. We have also demonstrated that this higher-level data can be easily analyzed for other insights into the subsystem, such as instruction coverage, address ranges, and performance analysis.
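The core idea, grouping per-cycle signal samples into packetized transactions, can be illustrated with a minimal sketch. The signal names below follow AXI write-channel conventions, but the sample format and function are hypothetical; the paper's actual implementation operates on emulator-captured data.

```python
# Hypothetical sketch: rebuild write transactions from per-cycle signal samples.
# 'samples' is a list of dicts, one per clock cycle, with AXI-style signal values.

def synthesize_write_transactions(samples):
    transactions = []
    current = None
    for cycle, s in enumerate(samples):
        # An address-channel handshake opens a new transaction.
        if s["awvalid"] and s["awready"]:
            current = {"addr": s["awaddr"], "data": [], "start": cycle}
        # Data beats accumulate while the write data channel handshakes.
        if current is not None and s["wvalid"] and s["wready"]:
            current["data"].append(s["wdata"])
            if s["wlast"]:  # final beat closes the transaction
                current["end"] = cycle
                transactions.append(current)
                current = None
    return transactions
```

A debugger can then display `addr`, the list of data beats, and start/end cycles instead of raw waveforms.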
 Speaker: Amar Patel - Mentor Graphics
 Authors: Amar Patel - Mentor Graphics
Yogesh Badaya - Mentor Graphics
Alasdair Ferro - Mentor Graphics
Christopher Jones - Mentor Graphics
13.2 SoC Verification Enablement Using HM Model
Today, chipmakers in the semiconductor world release a new chip within a very small time span of a few months. A very strict timeline is followed for completing project milestones, and to stay ahead of the competition it is made sure that the chip is out in the market as per the scheduled plan without compromising product quality. Before the chip sees silicon, it is very important for the System-on-Chip (SoC) to undergo thorough pre-silicon verification so that any design bugs can be caught and rectified early in chip development. But there can be situations where the Register-Transfer Level (RTL) design of a sub-block or a Hard Macro (HM) is under development and its delivery schedule is not aligned with the SoC schedule. The HM can be complex: it may include a Wireless Fidelity (WiFi) IP, a Digital Signal Processor (DSP), a Network-on-Chip (NoC), a power controller, debug logic, and SRAM/ROM. In the absence of the HM, verification of the complete SoC cannot be halted, as that would delay the final tape-out of the chip and eventually increase the Time-To-Market (TTM) of the product. This paper proposes a solution to this problem in which an equivalent UVM-based model of the HM can be plugged into the SoC so that verification may continue before the actual RTL design of the HM is available, enabling a left shift of the verification cycle and shorter TTM without compromising quality. Using this approach, the standalone HM development and verification process and the SoC verification process can proceed in parallel, helping cut down schedule time. This model supports all the integration aspects of the actual HM, such as power modes, debug modes, master and slave AXI/AHB transactors, interrupts, and performance. Apart from the functional features, it also supports Design For Testability (DFT) features like boundary scan, test-bus, etc.
Furthermore, the model is configurable to mimic the traffic profile of standard IPs like WiFi and DSP for use-cases, and contains an in-built performance monitor which can be configured for the required min/max latencies and average/peak bandwidth. The model also supports a self-checking mechanism for connectivity verification of the bi-directional signals (pads in the HM) and the unidirectional signals (pads in the SoC IO ring).
 Speaker: Vineet Tanwar - Qualcomm India Private Limited
 Authors: Vineet Tanwar - Qualcomm India Private Limited
Chirag Kedia - Qualcomm India Private Limited
Rahul Gupta - Qualcomm India Private Limited
13.4 Use of Message Bus Interface to Verify Lane Margining in PCIe
Due to variations in manufacturing and other environmental factors, different components have different performance indexes. Thus, all performance-degrading factors like noise, crosstalk, humidity, etc. must be considered while designing any component. To avoid issues due to these factors, lane margining is mandatory at all receiver ports (for PCIe 4.0). It is thus also referred to as Receiver (Rx) Margining. The actual implementation of the lane margining feature in both the PHY and the controller DUT is design-specific. Presented is our approach of using the MBI (Message Bus Interface) to verify the lane margining requirements of the PHY DUT. Local PHY lane margining is supported in the PCIe VIP (PIPE mode) by transmitting lane margin commands over the MBI interface signals (i.e. p2m and m2p) defined in the PIPE v4.4.1 specification. The ideal setup for verification is an environment running at 16 GT/s or higher speed where the VIP is either a Downstream or Upstream port initiating the Rx Margining requests to its own PHY.
 Speaker: Ankita Vashisht - Synopsys India Pvt Ltd
 Author: Ankita Vashisht - Synopsys India Pvt Ltd
13.5 Building and Modelling Reset Aware Testbench for IP Functional Verification
One of the major new initiatives in SoCs nowadays is to integrate IPs compliant with Functional Safety (FuSa) features. Supporting parity is one of the important aspects of FuSa. Checking the correctness of parity features requires the verification environment to be fully reset aware. Parity error handling and verification at the IP level proves a significant challenge for verification engineers in terms of the time and effort needed to recover from various error scenarios. Most of the errors put the IP into an unrecoverable state, and applying reset to the IP is one of the solutions offered to recover from such error scenarios. Modelling reset in all the verification components is an enhancement to the usual testbench development process. Subsystems and SoCs can also leverage this methodology to validate FuSa at the cluster level. The different best known methods described in this paper will help reduce time and effort for functional verification engineers across different projects.
 Speaker: Naishal Shah - Intel
 Author: Naishal Shah - Intel
13.7 Improving Simulation Performance at Subsystem/SoC Level Using LITE ENV
Increasing hardware design complexity has resulted in significant challenges for hardware design verification. Verification challenges increase exponentially as the effort moves from IP to subsystem to SoC level. Subsystem/SoC verification involves validating end-to-end data paths, inter-IP communications, performance analysis, quality of service, and other system-level scenarios. Multiple IP-level verification environments are integrated to build the subsystem/SoC verification environment. The time taken to simulate system-level scenarios in such a complex environment degrades simulation performance and hence impacts overall verification convergence. In this paper, we propose a smart approach called “LITE ENV” that helps reduce simulation time at the SoC/subsystem level, enabling the verification team to focus more on debugging, bug hunting, and fixing complex SoC bugs rather than waiting for hours-long simulations to complete.
 Speaker: Avni Patel - Intel
 Authors: Avni Patel - Intel
Heena Mankad - Intel
13.8 Verification Strategies and Modelling for the Uninvited Guest in the System: Clock Jitter
This paper presents the modeling of various clock generators using SystemVerilog to mimic the behavior of various types of jitter in the simulation clock, validating that the design can efficiently cope with the allowable jitter in the clock signal. The motivation for this work came during development of an LPDDR verification environment, which is a dual-edge (rising and falling) triggered system in which even a little (permissible) jitter in the clock can cause unexpected timing as well as data sampling issues. This paper also briefly describes the different types and characteristics of jitter.
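The modeling idea can be sketched in a few lines: perturb each ideal clock edge by a bounded random offset. This is a simplified, language-neutral illustration of one jitter type (bounded random edge displacement); the paper's SystemVerilog clock generators and the specific jitter models they implement are not reproduced here.

```python
import random

def jittered_edges(period_ns, n_cycles, max_jitter_ns, seed=0):
    """Return rising-edge timestamps for a clock whose edges deviate from
    their ideal positions by a uniform offset within +/- max_jitter_ns.
    A seeded RNG keeps the jitter pattern reproducible across runs."""
    rng = random.Random(seed)
    edges = []
    for i in range(n_cycles):
        ideal = i * period_ns
        edges.append(ideal + rng.uniform(-max_jitter_ns, max_jitter_ns))
    return edges
```

A testbench sampling data on both edges of such a clock will expose designs whose setup/hold margins cannot absorb the permitted jitter.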
 Speaker: Deepak Nagaria - Synopsys India Pvt. Ltd.
 Authors: Deepak Nagaria - Synopsys India Pvt. Ltd.
Vikas Makhija - Synopsys India Pvt. Ltd.
Apoorva Mathur - Synopsys India Pvt. Ltd.
13.9 Adaptive UVM <-> AMOD Testbench for Configurable DSI IP
The effort required to create a functional verification framework scales up significantly with the complexity of the hardware design. Moreover, for a new design, verification is obstructed by the limited availability of interfaces/ports in the initial first-cut design. In this paper, we describe an approach where a pure C++ architectural model (AMOD) can be re-used ingeniously along with UVM modeling to verify a protocol/physical-layer IP and shorten the verification cycle. The foundation of the described approach is that the protocol layer depends less on timing and more on transaction accuracy, while the physical layer depends more on timing. This work is demonstrated using the work carried out at NVIDIA on the configurable MIPI DSI (Display Serial Interface) IP.
 Speaker: Krishnapal Singh - NVIDIA
 Authors: Krishnapal Singh - NVIDIA
Pavan Yeluri - NVIDIA
Ranjith Nair - NVIDIA
13.10 Spec Automated Formal Verification of a Security Module with a Hybrid of Formal Property and Security Path Verification Tools
With the advent of the Internet of Things and increasing interactions between devices, system security has become a paramount requirement, thereby making hardware security verification a critical component in the design cycle of a device. Functional verification of a security module involves two main aspects: positive checks, which verify that secure or permitted accesses to secure code/data are carried out correctly with the associated data integrity; and negative checks, which verify that unsecure or non-permitted accesses to secure code/data are blocked correctly without any leakage of secure information. In this paper, we discuss the Formal Verification (FV) of the Access Control Security Module (ACSM), which filters accesses from the various system masters to the multiple Flash/RAM/ROM memories in the system, subject to the defined access security rules. We also articulate how the two mentioned aspects, positive and negative checks, are handled respectively by two specialized formal tools from Cadence, the Formal Property Verifier (FPV) and the Security Path Verifier (SPV), using automated generation of formal assertions based on the access rules in the security specification. The paper also discusses the associated improvement in verification quality and reduction of time and effort.
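Automated assertion generation from an access-rule table can be sketched as follows. The rule format, signal names (`req_*`, `gnt_*`, `clk`), and the emitted SVA-style strings are all illustrative assumptions; the actual ACSM interface and the FPV/SPV tool inputs are not described in the abstract.

```python
def generate_access_assertions(rules):
    """rules: list of (master, memory, allowed) tuples from a hypothetical
    security specification. Emits one SVA-style assertion string per rule:
    positive checks for permitted accesses, negative checks for forbidden ones."""
    assertions = []
    for master, memory, allowed in rules:
        req = f"req_{master}_{memory}"
        gnt = f"gnt_{master}_{memory}"
        if allowed:
            # Positive check: a permitted request must eventually be granted.
            prop = f"assert property (@(posedge clk) {req} |-> s_eventually {gnt});"
        else:
            # Negative check: a forbidden request must never be granted.
            prop = f"assert property (@(posedge clk) {req} |-> !{gnt});"
        assertions.append(prop)
    return assertions
```

Regenerating the assertions whenever the security specification changes keeps the formal environment in sync with the spec at no manual cost.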
 Speaker: Karthik Rajakumar - Texas Instruments
 Author: Karthik Rajakumar - Texas Instruments
13.11 From Device Trees to Virtual Prototypes
Embedded platforms and associated products are getting smarter day by day, thanks to their rapidly increasing complexity. Since the software content of these systems supports complex features, overall product development becomes highly complicated and time-consuming. The answer to keeping pace with this innovation is developing Virtual Prototypes (VPs). With increasing SoC complexities in embedded systems, the corresponding pressure to accelerate VP model/platform creation and reduce overall development time is also increasing. In this context, the generation of structural VPs for various IPs and their assembly plays a key role in improving the overall iteration turnaround time of generating a VP. The current process of developing a VP requires SoC specification availability. At initial stages, the complete specification is generally not available to VP developers. Even when available, manual reading and parsing of information takes significant time and effort, increasing the probability of human error. Furthermore, if the specification is revised by the SoC provider, updating the VP involves repetitive manual iterations that considerably add to overall platform development time. The proposed solution addresses these challenges and provides a systematic and innovative way to create structural VPs using information available in operating system (OS) software (the Device Tree, abbreviated as DT from here onwards), along with limited inputs from vendor-provided IP/platform specification document(s). The flow takes minimal user inputs, facilitated by an automation framework, hence significantly reducing the effort of creating complex VP systems with hundreds to thousands of models and connections. In this paper, we present an overview of the proposed flow and details of how complex platforms have been created with our structured and systematic solution, presenting the significant gains achieved in overall turnaround time against the traditional manual flow.
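The essence of the flow, mapping device-tree nodes to structural model instances, can be sketched as below. The node representation is a simplified stand-in for parsed DT data, and the `compatible`-to-model mapping and class names are invented for illustration; the paper's automation framework is not described in this detail.

```python
def nodes_to_vp_instances(dt_nodes):
    """dt_nodes: dict of node name -> properties, a simplified stand-in for
    parsed device-tree nodes. Maps each node's 'compatible' string to a
    hypothetical model class and its 'reg' to a (base, size) bus mapping."""
    model_map = {"arm,pl011": "UartModel", "arm,pl061": "GpioModel"}
    instances = []
    for name, props in dt_nodes.items():
        compat = props.get("compatible")
        if compat not in model_map:
            continue  # no structural model registered for this IP
        base, size = props["reg"]
        instances.append({"name": name, "model": model_map[compat],
                          "base": base, "size": size})
    # A memory-map ordering makes the generated platform easier to review.
    return sorted(instances, key=lambda i: i["base"])
```

A generator can then emit the VP's instantiation and bus-binding code from this list, and simply rerun when the DT changes.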
 Speaker: Sakshi Arora - Synopsys India Pvt. Ltd.
 Authors: Sakshi Arora - Synopsys India Pvt. Ltd.
Vikrant Kamboj - Synopsys India Pvt. Ltd.
Preeti Sharma - Synopsys India Pvt. Ltd.
13.12 Enabling SystemC/TLM-2.0 Compliant Virtual Prototypes With Fault Injection Capability
A Virtual Prototype (VP) is a well-known approach for early software development and verification. In the last few years, robustness and safety of products have been gaining higher priority, especially in automotive, medical, and defence applications. This necessitates an easy mechanism to control the simulation behaviour as per the needs of the safety-critical application: for example, injecting faults into registers/bit-fields, driving ports, or any other runtime interaction to check application stability. Since a VP is developed using the industry-standard SystemC and TLM-2.0, it is delivered as an executable binary (shared library, .exe, etc.) by the hardware vendors. This makes it difficult to enable the VP with runtime interaction capability. There are paid tools available in the industry which address these problems. However, they introduce a strict binding of these tools to the development of the VP. In addition, if the VP wants to provide custom runtime interaction possibilities, the VP team has to negotiate with the tool vendors. Therefore, this paper discusses a unique approach to address these problems for industry-standard SystemC, TLM-2.0 compliant VPs.
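The kind of register-level fault injection the abstract mentions can be illustrated with a minimal sketch. The class, its overlay scheme, and all names are hypothetical; the paper's SystemC/TLM-2.0 mechanism is necessarily different, but the observable behaviour (reads see the fault, the functional value survives) is the same idea.

```python
class FaultyRegister:
    """Sketch of runtime fault injection on a VP register: stuck-at faults
    overlay the functional value on read without corrupting the stored value,
    so the fault can later be removed to resume normal operation."""
    def __init__(self, width=32):
        self.value = 0
        self.width = width
        self.stuck1 = 0   # bit mask of bits forced to 1
        self.stuck0 = 0   # bit mask of bits forced to 0

    def inject_stuck_at(self, bit, level):
        mask = 1 << bit
        if level:
            self.stuck1 |= mask
        else:
            self.stuck0 |= mask

    def write(self, v):
        self.value = v & ((1 << self.width) - 1)

    def read(self):
        # Apply the stuck-at overlay only at observation time.
        return (self.value | self.stuck1) & ~self.stuck0
```

Safety software running on the VP should detect the mismatch between written and read values and enter its error-handling path.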
 Speaker: Chethan Nayak - Infineon Technologies India Pvt. Ltd
 Authors: Chethan Nayak - Infineon Technologies India Pvt. Ltd
Nikhil Jain - Infineon Technologies India Pvt. Ltd
13.13 High Level Synthesis to Bridge the FPGA to ASIC Gap
In the last three years, there has been a general increase in the number of hardware designs targeting FPGAs. For a large number of IoT and automotive designs, FPGAs are used for building the first prototype and for quick time to market. FPGAs, albeit slower and more expensive, do not have the high NRE cost associated with ASICs, which makes them a very efficient prototyping device. Hence most IoT manufacturers only produce ASICs once they reach a certain volume. FPGAs also typically consume more power than ASICs. However, moving from FPGA to ASIC has always been a long and painful process, in terms of both the front end (design and verification) and the back end (physical design, etc.). FPGA specifications are constrained by the manufacturer in terms of the amount of logic and memory available, and the key goal is to achieve the maximum Fmax as well as to utilize the resources on the device most efficiently. ASICs are typically free of those resource constraints, but they have to maximize the PPA for a given implementation. Traditionally it requires a costly rewrite of major parts of the RTL to get optimal results for the ASIC. These rewrites also increase the verification cost of the underlying implementation. In this paper, we propose an alternative methodology of using high-level synthesis for a quick migration from FPGA to ASIC. High-level synthesis raises the abstraction level of describing hardware and separates the algorithm from the implementation. By coding in C/C++/SystemC, designers are freed from the underlying implementation details and can focus on efficiency. Typically, HLS improves designer productivity by 4X, as proven repeatedly across different designs. We will give an overview of how HLS can also help ease the pain of moving from FPGA to ASIC, particularly for datapath-intensive designs.
We will go over some example neural network designs, show how the same C++ code can be targeted to different underlying implementations, and collate the results for different FPGAs and ASICs. Finally, we will go over the practical use cases for this methodology and the factors that a designer needs to consider for such a migration.
 Speaker: Anoop Saha - Mentor A Siemens Business
 Author: Anoop Saha - Mentor A Siemens Business
13.14 Break the SoC with Random UVM Instruction Driver
Today’s system on a chip (SoC) is made up of numerous peripherals, usually multiple processor cores, and in-house and third-party IPs. Due to the various scenarios that come with a multitude of highly dependent IPs, verifying such complex designs can be even more challenging. It has become essential to run complex scenarios that include complex stimuli on the software side, exercising all CPU cores as well as multiple peripheral interfaces. While C tests have gained a significant share of the SoC verification world, and reuse of UVM code presents multiple benefits, both have significant drawbacks. Integrating C tests (containing the CPU instructions) and the UVM code (containing the rest of the testbench components) into a single verification testbench brings limitations for the control, reusability, and randomization of tests. This paper demonstrates a technique that allows random generation and driving of CPU instructions for SoC verification from a UVM verification environment, using a black-box approach to the design and offering unparalleled advantages. The solution accommodates any design with one or more CPU cores, and it is flexible to design changes and changes to the CPU instruction set. The solution is fully compatible with UVM and non-UVM SV-based verification testbenches. Examples show how this results in a testbench that automatically adapts to and works with any design configuration. There are different methodologies for SoC verification. Usually, the code for the CPU core is written in C, then processed by a compiler, converted into assembly code, and later executed by the processor. However, C tests solve only a part of the problem: the stimuli for the CPU instructions. There are other stimuli driving peripheral communications (I2C, SPI, UART, etc.). Since the UVCs are already working at the module level, a natural solution is to reuse the UVM components from module to system.
In this way, C tests contain the CPU code and the SV test drives the other stimuli, including monitors, checkers, and other components; there can also be a communication protocol between the SV testcase and the C code for creating more complex scenarios. There are major disadvantages to assembly/C tests, like the challenge of communicating efficiently between SV and assembly/C, which decreases the controllability of the stimuli and makes it harder to implement more complex scenarios. There are limitations for randomization and reusability of tests as well. Other SoC verification methodologies are based on removing the CPU from the design completely and replacing it with a CPU model (Figure 1). This helps drive communications with other components and eliminates the need for assembly/C tests. However, there are complex interactions between the CPU and the rest of the blocks in the SoC (NVM controller, DMA, RAM, peripherals, etc.), and if the CPU is replaced by a simple model, there is a high risk of missing bugs. Additional disadvantages of this approach include setting up new tests for gate-level simulations each time a new netlist and SDF package are released, while also having an incomplete netlist in the test. Another option in SoC verification is to keep the design but add an extra multiplexer. This bypasses the NVM controller and feeds the CPU directly from an SV driver, which takes data randomly generated from a UVM sequencer (Figure 2). While this has some advantages over removing the CPU completely, it is not a perfect solution. The NVM controller is bypassed, so the possibility of finding integration bugs between the NVM controller and the CPU decreases significantly. The methodology proposed by this paper is: while keeping some assembly/C-based tests, move most of the tests to a UVM/SV-only testbench, without having to make any compromise regarding the structure of the design.
As can be seen in Figure 3, the proposed methodology is a black-box approach to the SoC design, and it doesn’t require removing or adding any component of the design. The CPU code is written directly and dynamically into the memory using an instruction driver called the Instruction Transactor, or IX. The SV test generates constrained-random sequences that are transmitted to the IX. The IX converts them to CPU instructions, reads the current program memory address, and writes to the next location, where the CPU will read the instructions in the next cycle. There are two dependencies: a dependency on the CPU type or CPU instruction set architecture, and a dependency on the different NVM types (depending on size, technology, model provider, etc.). The first dependency is resolved by writing an IX driver for each CPU instruction set architecture, but with the same interface to the SV code. The second dependency is resolved by creating an additional layer between the IX and the NVM model, the IX2NVM interface. One such interface can be created for each memory type, with the same interface as the IX driver. Summary/Results: By using the UVM CPU instruction driver, the majority of the SoC testcases can be written completely in UVM/SV, thus providing high reusability of tests and testbench components from module to system level. This approach allows more flexibility in generating complex scenarios for the entire SoC. It allows full randomization of the scenarios and easier portability of tests and testbenches between projects. This methodology presents high flexibility for design change and high reusability between different projects with different memory technologies, CPU types, and numbers of CPU cores. Using a layered hierarchy, the UVM testbench can be completely decoupled from the design architecture, the flash technology, and even the CPU architecture/type.
The SoC design is kept unmodified, compared to other SoC verification methodologies, which require adding or replacing certain design blocks. A main advantage is that the IX driver is seamlessly integrated for gate-level simulations. Since this methodology is a black-box approach, no components of the design are removed (as in Figure 1) and no additional components are added (as in Figure 2). The gate-level netlist should not require any change in order for the IX driver to be plug-and-play.
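The IX mechanism described above can be sketched as follows. The two-operand instruction encoding, opcode table, and 4-byte instruction slots are invented for illustration only; a real IX driver targets the actual CPU's instruction set architecture and writes through the IX2NVM layer into a real memory model.

```python
class InstructionTransactor:
    """Sketch of the IX idea: convert abstract sequence items into machine
    words and write them at the CPU's next program-memory fetch address,
    so the CPU executes them on the following cycle."""
    OPCODES = {"NOP": 0x0, "ADD": 0x1, "LDR": 0x2, "STR": 0x3}  # hypothetical ISA

    def __init__(self, memory, base_addr):
        self.memory = memory          # dict addr -> word, stand-in for the NVM model
        self.next_addr = base_addr    # next program-memory location to fill

    def drive(self, op, rd=0, rs=0):
        # Hypothetical encoding: opcode in bits [11:8], rd in [7:4], rs in [3:0].
        word = (self.OPCODES[op] << 8) | (rd << 4) | rs
        self.memory[self.next_addr] = word
        self.next_addr += 4           # assume 4-byte instruction slots
        return word
```

Swapping the opcode table and encoding per CPU, while keeping `drive()` as the stable interface to the sequences, mirrors how the paper resolves the ISA dependency.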
 Speaker: Pravin Wilfred - Microchip Technology Inc.
 Authors: Pravin Wilfred - Microchip Technology Inc.
Madhukar Mahadevappa - Microchip Technology Inc.
Bogdan Todea - Microchip Technology Inc.
Diana Dranga - Microchip Technology Inc.
13.15 Systematic Asynchronous FIFO Verification Using Structural Analysis and Formal Techniques
FIFO (First-In First-Out) buffering is an effective method for ensuring asynchronous high-speed data transfer. A FIFO ensures speed matching and guarantees data transfer across clock domains. It also eliminates meta-stability risk and has key advantages such as a minimal data stability requirement, guarantee of service, and low latency. However, designing and verifying a FIFO is a challenging task. The FIFO contains diverse data and control logic that interacts across asynchronous paths to make protocol data transfer across asynchronous boundaries highly efficient. The complexity of the functional and structural logic, along with options to tune performance and control the reliability of data transfer, makes FIFO implementations vary across designs. This complexity poses a serious verification challenge for CDC verification teams in the later stages of the silicon flow. Recognizing and verifying these structures reliably is a hurdle that the industry is trying to overcome. In this paper, we: 1. investigate the risks associated with standard FIFO implementations that can lead to structural or functional failures; 2. present a generic method to break down a complex FIFO into standard sub-structures that can be validated independently, which helps pinpoint the exact stage and point of failure; 3. present a technique using formal verification to functionally validate the FIFO. The proposed automated method was validated on multiple industry designs. This paper illustrates a case study on an industry microprocessor design. The proposed methodology helped identify a real functional bug in an optimized FIFO structure that could have caused silicon failure.
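One well-known sub-structure property in asynchronous FIFOs is Gray coding of the pointers that cross the clock domain, and it illustrates the kind of independent check the decomposition enables. The sketch below is a generic illustration, not the paper's tool flow.

```python
def to_gray(n):
    """Standard binary-to-Gray conversion."""
    return n ^ (n >> 1)

def check_gray_sequence(binary_ptrs):
    """Structural check: a pointer crossing a clock domain must change by at
    most one bit per update (Gray coding). Otherwise the destination-domain
    synchronizers can sample an inconsistent multi-bit value."""
    grays = [to_gray(p) for p in binary_ptrs]
    for prev, cur in zip(grays, grays[1:]):
        if bin(prev ^ cur).count("1") > 1:
            return False
    return True
```

A pointer that skips values (as a buggy "optimized" FIFO might allow) fails this check, which is exactly the class of structural failure the paper's decomposition is meant to pinpoint.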
 Speaker: Anchal Gupta - Mentor Graphics, A Siemens Business
 Authors: Anchal Gupta - Mentor Graphics, A Siemens Business
Ashish Hari - Mentor Graphics, A Siemens Business