Professionals in the #ASIC and #SoC verification industry widely recognize that the current state of the art in simulation technology is inadequate on its own. Typical simulations often cannot find faults, in a reasonable amount of time, in complicated systems that involve firmware, software, and hardware together.
Instead, projects usually complement block-level simulation with extensive emulation cycles and FPGA-based prototyping.
There are two main architectures used for emulation: one built on proprietary ASIC designs and one built on field-programmable gate arrays (FPGAs) as its core emulation technology. In addition, an FPGA platform is always used throughout the prototyping phase.
For this reason, many projects employ both FPGA prototyping and emulation.
Faster than emulation and cheaper per gate, FPGA-based devices nonetheless come with several architectural limitations.
Compile cycles and debug are not straightforward on an FPGA-based platform, so you can only start from a relatively stable design snapshot. FPGA systems necessitate lengthy compilation and place-and-route cycles, and any problem that is discovered will trigger yet another such cycle. When debugging on an FPGA platform, it is typically necessary to specify in advance the signals you wish to monitor; any time the set of monitored signals is modified, a new compile and place-and-route cycle must be run. Probing also increases the amount of logic area required and affects both the machine's capacity and its overall performance. This is a problem when the design is immature and frequent re-compilation is required for debugging.
Once up and running, an FPGA is a fantastic platform for software development, although debugging might be trickier than on an ASIC-based system.
The specialised processors in ASIC-based devices make emulation easier: no timing difficulties, no physical place-and-route cycles, and no recompilation for debugging are involved.
Providers who make available both FPGA-based and dedicated ASIC-based systems give their customers the best of both worlds.
The Preferred Methodology: Combining the Two Worlds
Time to market is an important consideration in addition to detecting flaws that can’t be caught by traditional simulation. In addition to speeding up the process of getting ready for tapeout, starting work early also gives you more time to track down any elusive issues that may have been missed.
Typically, simulations are used for testing at the beginning of a project.
Clients typically favour FPGA platforms for code development by their software/firmware teams. However, bring-up cycles can take many weeks to months if the FPGA route is pursued before the architecture is stable enough. Furthermore, extra debug/re-compile/place-and-route cycles can have a significant impact on the software development schedule.
Using an emulator from the outset is highly recommended, since it allows for considerably quicker iterations during the debugging process. Refining the design in this way before moving it to an FPGA system is essential. Once the design team (now including software engineers in the emulation effort) has completed further debugging on the emulator, the design can move to the faster, cheaper-per-gate FPGA-based devices.
The Cadence Protium X1 Enterprise Prototyping Platform is an FPGA-based system, while the Cadence Palladium Z1 Enterprise Emulation Platform is a CPU-based emulation system; together, they create what Cadence terms “the dynamic duo.” Very large-scale integration (VLSI) projects benefit most from using both platforms.
Questions and considerations to help you choose the right platform for your project
Here are some things to think about while deciding between an FPGA-based platform and an emulation platform:
- The time it takes to compile your design and the number of CPUs required to compile 200M gates in an hour.
- Your system-on-chip's (SoC's) size. Does it have a few thousand gates, millions, or hundreds of millions? An FPGA-based system may be the most cost-effective option if your design can fit inside a single FPGA or if the array size is manageable for your technical team. When iterations and development around a large design grow complicated and time-consuming, it may be time to switch to an emulation system.
- The number of concurrent users who can work on the machine simultaneously. Typically, users acquire enough capacity to execute one full-chip run at a time. Nonetheless, there are situations where only a subsystem, or a few of the components making it up, is actually being exercised. The number of concurrent runs you can perform is determined by the granularity of the smallest allocatable cluster.
- Whether or not USB, PCIe, Ethernet, or other real-world devices will be connected to the emulator. Consider either Cadence SpeedBridge Adapters or Cadence VirtualBridge Adapters, depending on your needs. The SpeedBridge Adapters are physical hardware that can be used to test the design in a realistic environment at real-world speeds. They connect the emulation system, which runs much slower than the real world, to actual hardware components; the two domains are bridged by inserting suitable wait states into each protocol. Each VirtualBridge Adapter, by contrast, has a transactor that facilitates rapid data transfers between the user's design under test (DUT) in a Palladium emulation platform and the host workstation.
- How easily a design can be transferred between simulation, emulation, and an FPGA-based prototyper. In particular, you'll need to anticipate whether debugging will be more difficult on an FPGA-based or an emulation-based platform.
- How long regression tests will take to run, how much time will be needed for debugging, what impact signal capture could have on performance, how much data can be logged, whether or not signal changes necessitate re-compilation, whether or not you can capture full visibility of your design, and how many emulation cycles will be necessary.
- When debugging, whether hot swapping between emulation and simulation is useful to release the machine and debug offline. You should also consider whether you can easily force and/or release signals.
- How many design changes and adaptations, if any, you will be required to make to enable the original design to run on your chosen platform.
- What tools you already have for debug, and how emulation performance will be affected, if at all, when running with signal capture enabled. Factor in whether you can perform debugging on the fly.
- What tools you have to track the project and whether automation capabilities will help and are available.
- Whether or not you will need to save a design state and restore it multiple times.
- You will probably want to generate a massive amount of meaningful random tests for your emulation or FPGA platform. Here’s a link to a previous article I wrote on this topic that explains how to do that.
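Some of the considerations above, such as compile-farm capacity and the number of concurrent runs a machine can support, lend themselves to back-of-the-envelope sizing. The sketch below is a hypothetical illustration only: the helper names and every figure in it (gates per CPU-hour, machine capacity, cluster size) are made-up assumptions, not vendor data for Palladium or Protium.

```python
# Hypothetical sizing helpers for the checklist above.
# All throughput and capacity numbers are illustrative assumptions.

def compile_jobs_needed(design_gates, gates_per_cpu_hour, budget_hours):
    """Parallel compile jobs needed to finish within the time budget.

    Uses ceiling division: you cannot run a fractional job.
    """
    total_cpu_hours = design_gates / gates_per_cpu_hour
    return int(-(-total_cpu_hours // budget_hours))  # ceil via negated floor-div

def max_concurrent_runs(machine_capacity_gates, smallest_cluster_gates, job_gates):
    """Concurrent runs a machine supports when each run occupies whole clusters.

    The smaller the allocatable cluster, the less capacity each job wastes,
    so finer granularity allows more simultaneous users.
    """
    clusters_total = machine_capacity_gates // smallest_cluster_gates
    clusters_per_job = -(-job_gates // smallest_cluster_gates)  # ceil
    return clusters_total // clusters_per_job

# Example with made-up numbers: compiling a 200M-gate design in one hour,
# assuming 10M gates per CPU-hour, needs 20 parallel jobs.
print(compile_jobs_needed(200e6, 10e6, 1))  # 20

# A 2G-gate machine with 50M-gate clusters running 120M-gate subsystem jobs:
# each job needs 3 clusters, so 40 // 3 = 13 concurrent runs.
print(max_concurrent_runs(2_000_000_000, 50_000_000, 120_000_000))  # 13
```

The point of the sketch is not the numbers but the shape of the trade-off: doubling the gate count roughly doubles the compile farm you need for a fixed turnaround time, and coarser cluster granularity directly reduces how many users can share the machine.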