From the simulation core to the device modeling frameworks, ASTC is committed to meaningful improvement in VLAB. Every aspect of the tools is under re-evaluation to determine how best to serve our users. The key question we are asking is this: how does this help users achieve their goal of shipping a better product sooner, at lower cost, with higher quality? Any feature or technology that doesn't have a good answer to this question is unlikely to get investment. The result is tools that are more effective at doing what customers actually want. The luxury VLAB seat I had planned, with heating, cooling, and lumbar massage, got cancelled before the beta test. Why? Luxury seats are what our customers add to their cars, not what we add to the software. Oh well.
As part of our comprehensive review of all things VLAB, we wrestled with the question of simulation itself: specifically, how does our definition of simulation directly help the user? To start, we had to pin down what our definition of simulation actually is. We could choose a definition from OSCI TLM, such as AT (approximately timed) or LT (loosely timed), or something like the Physical Layer Abstraction from MathWorks. However, those definitions came about with digital or analog circuit designers in mind. VLAB, in contrast, focuses on the system- or application-level software or test engineer.
Thus the VLAB definition of simulation is somewhat software-centric. The job of a virtual platform is to make all aspects of the software under test available for execution and inspection. Traditionally that meant all aspects of the software. Recently we have been re-examining this interpretation to see if we can make it more precise. What we found is that there are many aspects of the software that are not under test and don't need to be simulated in detail.
A new understanding
Let's assume for a moment that your system includes graphics. Let's further assume that, like most engineers, you don't work for a company named Nvidia, AMD, or Apple. As such, you call OpenGL, DirectX, or Metal rather than writing directly to the graphics hardware found on your SoC or discrete GPU. Given these assumptions, when was the last time you had to debug the workings of your graphics stack? For me it was in 1996, shortly before I discovered that it was easier to call QuickDraw 3D than to implement a 3D vector transform in C, but your specific date may be different. In general, most of us need to debug our calls into the graphics stack and never want, or need, to look into the inner workings of that stack.
This brings us to the virtual machine concept of host GPU offload. In a VM this means installing a new graphics library or DLL in the target OS image, with high enough priority in the link path that it handles all graphics calls and passes them off to the equivalent calls on a host graphics stack. In VLAB the concept is similar, though in many cases we can't simply inject a new DLL. Instead, we use our debugging infrastructure to dynamically capture API calls and pass them on to the host. This preserves the target application's ability to use graphics while dramatically simplifying the simulation, with a GPU model replaced by the host GPU.
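The forwarding idea can be sketched in a few lines. This is a toy illustration of the pattern only, not VLAB's actual implementation: `HostGL`, `GuestGLProxy`, and the recorded call list are all hypothetical names invented for the example. The proxy stands in for the guest-side graphics API and hands every call to the equivalent call on the host stack, so no detailed GPU model is needed.

```python
class HostGL:
    """Stand-in for the host graphics stack: it just records the calls it receives."""
    def __init__(self):
        self.calls = []

    def glClearColor(self, r, g, b, a):
        self.calls.append(("glClearColor", r, g, b, a))

    def glClear(self, mask):
        self.calls.append(("glClear", mask))


class GuestGLProxy:
    """Intercepts guest-side graphics calls and forwards them to the host stack."""
    def __init__(self, host):
        self._host = host

    def __getattr__(self, name):
        host_fn = getattr(self._host, name)  # look up the equivalent host call
        def forward(*args):
            # A real implementation would marshal guest-side arguments
            # (pointers, handles, buffers) into host-side representations here.
            return host_fn(*args)
        return forward


host = HostGL()
gl = GuestGLProxy(host)             # what the target application "sees"
gl.glClearColor(0.0, 0.0, 0.0, 1.0)
gl.glClear(0x4000)                  # GL_COLOR_BUFFER_BIT
```

The application code is unchanged: it still makes ordinary graphics calls, but they land on the host GPU rather than on a simulated one.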
…prioritizing technologies and techniques that recognize not all software is equally meaningful to all developers
What you can expect
We have started referring to this technique as Fusion, or more specifically OpenGL Fusion. The name should tip you off that we plan to introduce additional Fusions in the future: Fusions that will support standard APIs to complex hardware whose inner workings are not needed to develop and test the software.
To summarize, ASTC is investing in performance. Not just the usual several percent here, a few percent there, sorts of tweaks everyone does every generation. We are prioritizing technologies and techniques that recognize not all software is equally meaningful to all developers. OpenGL Fusion, and Fusions for other similar APIs, have the potential to totally change the nature of simulation. How much? We think the same way that multiple-window VMs changed the way we think about application-level compatibility.