If you are scratching your head, possibly because you are new to simulation, here is a quick explanation. In digital simulation there are multiple ways of simulating embedded hardware, each with its pros and cons. At the most detailed end, Cycle Accurate simulation strictly imitates the hardware, down to the timing of individual CPU instructions. You can add even more detail and approach analog simulation by estimating the timing of the analog effects of the digital circuits, giving sub-cycle accuracy.
Moving up the spectrum, various levels of abstraction can be employed to achieve higher simulation performance, at the cost of some timing accuracy. The first level of abstraction is commonly referred to as ‘Approximately Timed’ simulation. Simulations at this level shift the viewpoint from individual circuits to functional blocks, but importantly still capture most timing effects. The next level simplifies the model of each block further, giving ‘Loosely Timed’ simulation.
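To make the trade-off concrete, here is a minimal sketch, not VLAB code: the per-access cost and the synchronization quantum are invented numbers. It contrasts advancing simulated time on every access, as an Approximately Timed model would, with running ahead locally and synchronizing only at quantum boundaries, in the spirit of Loosely Timed modeling:

```python
# Hypothetical sketch (not VLAB code): contrasting per-access timing
# annotation ("approximately timed") with a coarse synchronization
# quantum ("loosely timed") for the same sequence of bus accesses.

ACCESS_COST_NS = 7    # assumed cost of one bus access (invented)
QUANTUM_NS = 100      # assumed loosely-timed sync quantum (invented)

def approximately_timed(n_accesses):
    """Advance simulated time after every single access."""
    t = 0
    for _ in range(n_accesses):
        t += ACCESS_COST_NS
    return t

def loosely_timed(n_accesses):
    """Run ahead locally; global time advances only in whole quanta."""
    local = ACCESS_COST_NS * n_accesses
    quanta = -(-local // QUANTUM_NS)   # ceiling division
    return quanta * QUANTUM_NS

print(approximately_timed(15))  # 105 ns
print(loosely_timed(15))        # 200 ns: timing error traded for speed
```

The loosely timed variant does far less bookkeeping per access, which is exactly where the speed comes from; the price is that its reported time is only accurate to the nearest quantum.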
With VLAB Virtual Platforms we take a horses-for-courses approach: we model only what is required to achieve what users actually need. In most cases that means our hardware IP and core models fall somewhere between Approximately Timed and Loosely Timed. There are, of course, scenarios where timing accuracy is important. Take, for example, analyzing the timing budget of an interrupt service routine, or predicting whether a real-time control algorithm has sufficient processing headroom to meet its requirements. Our experience is that even tasks such as these can still be performed usefully on simulators that are not cycle accurate. Where safety requirements are a consideration, it is still valuable to validate and/or calibrate the simulation results against hardware; when certification is involved, this validation often becomes mandatory.
What it looks like in practice
Recently we conducted a trial project to characterize communications within an automotive environment using Autosar Classic. The objective was to measure and analyze various communication scenarios to demonstrate how Virtual Platforms can be used to develop Autosar applications. To achieve this we chose to use VLAB Virtual Platforms rather than hardware platforms. The rationale behind the choice? The increased visibility into the operation of the software would improve the depth of our analysis. Our test harness consisted of two MCU Virtual Platforms, connected via Ethernet and CAN as part of a single simulation. Each MCU ran an Autosar CP stack, with one acting as a server and the other as a client.
With this configuration we crafted multiple scenarios that exercised the software stacks under a range of conditions. A small sample of these tests included:
- Time taken from initial startup to service availability
- Impact of increased computation and communication load due to an increased number of tasks
- Performance of various software layers using different configuration parameters
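For the first of these measurements, a figure like time from startup to service availability can be derived directly from timestamped simulation trace events. Here is a minimal sketch; the event names and timestamps are invented for illustration and do not come from the actual trial:

```python
# Hypothetical sketch: deriving "startup to service availability" from
# timestamped simulation trace events. Event names and times are
# invented for illustration; a real trace would differ.

trace = [
    (0.000, "reset_released"),
    (0.012, "os_started"),
    (0.047, "com_stack_initialized"),
    (0.051, "service_available"),
]

def time_to_event(trace, name, start="reset_released"):
    """Elapsed simulated time from a start event to a named event."""
    times = {evt: t for t, evt in trace}
    return times[name] - times[start]

print(time_to_event(trace, "service_available"))  # 0.051
```

Because simulated time is fully observable, this kind of measurement is repeatable in a way that is hard to achieve with a probe on real hardware.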
What did we learn?
When analyzing the generated data, we started from an understanding of the types of simulation we were using. For example, simulation of the software execution was Approximately Timed, while simulation of communication across the Ethernet bus was Loosely Timed. This is actually somewhat reflective of reality: modern processors are fast enough and complex enough that there will always be small run-to-run differences, and in a similar way, communication buses are subject to a wide variety of influences that can impact timing more broadly. With this understanding, and using data only available in simulation, we developed a number of key learnings, including:
- Potential bottlenecks, when the system is under heavy load
- Opportunities for optimizing certain data paths through different software layers
- Parts of the code base that could not affect performance in normal operation
- Which aspects of communications timing were likely to be highly variable over time and which were relatively stable
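The last classification above can be made mechanically, for example by thresholding the coefficient of variation of repeated measurements. A sketch of that idea follows; the metric names, sample values, and 10% threshold are all invented for illustration:

```python
# Hypothetical sketch: classifying measured communication latencies as
# "stable" or "variable" via the coefficient of variation (stdev/mean).
# Sample data and the 10% threshold are invented for illustration.
import statistics

samples = {
    "can_tx_latency_us": [112, 113, 112, 114, 113],
    "eth_rtt_us":        [250, 410, 180, 520, 300],
}

def classify(values, threshold=0.10):
    """Label a series of measurements by its relative spread."""
    cv = statistics.stdev(values) / statistics.mean(values)
    return "variable" if cv > threshold else "stable"

for name, values in samples.items():
    print(name, classify(values))
```

Splitting metrics this way lets later analysis focus statistical effort on the genuinely variable quantities rather than re-measuring the stable ones.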
In conclusion, while cycle accuracy is an easy goal to state, it is rarely necessary for producing a useful characterization of system performance. The impact that modeling cycle accuracy has on simulation performance often limits the amount of data that can be collected, and this reduction in data calls into question its validity as an accurate representation of real-world system performance.