End-to-end framework for novel datatype arithmetic verification
2025
Machine Learning (ML) accelerators are increasingly adopting diverse datatypes and data formats, such as FP16 and microscaling, to optimize key performance metrics including inference accuracy, latency, and power consumption. However, the hardware modules associated with these datatypes, such as arithmetic units and signal processing blocks, pose unique verification challenges. In this work, we present an end-to-end flow for verifying these novel datatypes. At the RTL level, we delve into Datapath Validation (DPV) [1], a formal verification approach that has proven instrumental in tackling datapath verification with new datatypes; we use DPV alongside a simulation-based approach for our datapath verification. We use FP16 as a case study to present our approach, and demonstrate how the methodology extends to more complex data types such as microscaling. At the application level, we leverage a PyTorch-C based approach to verify accuracy in the context of the full neural network. We further extend the tool for rapid performance projection across various operators. By leveraging the mathematical precision and rigor of formal verification together with our customized C reference model implementation, we were able to identify and address critical issues quickly, leading to a more reliable final product. As an example, we verified the FP16 implementation in under 8 weeks, a task that would have been daunting for traditional simulation-based methods alone. The final application-level accuracy difference is within 0.1% of the original PyTorch model. We show that DPV can lead to faster time-to-market and increased confidence in the IP's integration.
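The application-level accuracy check can be illustrated with a minimal PyTorch sketch. The toy network, the `fp16_roundtrip` helper, and the relative-error metric below are hypothetical stand-ins: the paper's actual flow compares the production network against a customized C reference model, and the exact error metric behind the 0.1% figure is not specified here.

```python
import torch

torch.manual_seed(0)

# Hypothetical stand-in network; the production flow would load the real model.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 10),
)

def fp16_roundtrip(t: torch.Tensor) -> torch.Tensor:
    """Emulate FP16 storage by round-tripping through IEEE half precision."""
    return t.half().float()

x = torch.randn(128, 64)
with torch.no_grad():
    ref = model(x)                       # original FP32 PyTorch reference
    h = fp16_roundtrip(x)
    for layer in model:                  # emulate FP16 quantization per layer
        h = fp16_roundtrip(layer(h))

# One plausible acceptance check: worst-case relative error vs. the FP32 model.
rel_err = ((h - ref).abs() / ref.abs().clamp_min(1e-6)).max()
print(f"max relative error: {rel_err.item():.4%}")
```

In a full flow, this comparison would be repeated per operator and per layer so that any divergence can be traced back to a specific arithmetic unit under verification.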