TY - GEN
UR - http://lib.ugent.be/catalog/pug01:8625887
ID - pug01:8625887
LA - eng
TI - Racing to hardware-validated simulation
PY - 2019
SN - 9781728107462
PB - 2019
AU - Adileh, Almutaz
AU - González-Álvarez, Cecilia
AU - Miguel De Haro Ruiz, Juan
AU - Eeckhout, Lieven
AB - Processor simulators rely on detailed timing models of the processor pipeline to evaluate performance. The diversity in real-world processor designs mandates building flexible simulators that expose parts of the underlying model to the user in the form of configurable parameters. Consequently, the accuracy of modeling a real processor depends on both the accuracy of the pipeline model itself and the accuracy of adjusting the configuration parameters to match the modeled processor. Unfortunately, processor vendors publicly disclose only a subset of their design decisions, raising the probability of introducing specification inaccuracies when modeling these processors. Inaccurate tuning of model parameters causes the simulated processor to deviate from the actual one. In the worst case, improper parameters may lead to imbalanced pipeline models that compromise the simulation output. Therefore, simulation models should be hardware-validated before they are used for performance evaluation. As processors grow in complexity and diversity, validating a simulator model against real hardware becomes increasingly challenging and time-consuming. In this work, we propose a methodology for validating simulation models against real hardware. We create a framework that relies on micro-benchmarks to collect performance statistics on real hardware, and on machine-learning-based algorithms to fine-tune the unknown parameters based on the accumulated statistics. We overhaul the Sniper simulator to support the ARM AArch64 instruction-set architecture (ISA), and introduce two new timing models for ARM-based in-order and out-of-order cores. Using our proposed simulator validation framework, we tune the in-order and out-of-order models to match the performance of real-world implementations of the Cortex-A53 and Cortex-A72 cores with average errors of 7% and 15%, respectively, across a set of SPEC CPU2017 benchmarks.
ER -
LDR  00000nam^a2200301^i^4500
001  8625887
005  20191022144019.0
008  190826s2019------------------------eng--
020  $a 9781728107462
024  $a 000470201600006 $2 wos
024  $a 1854/LU-8625887 $2 handle
024  $a 10.1109/ispass.2019.00014 $2 doi
040  $a UGent
245  $a Racing to hardware-validated simulation
260  $c 2019
598  $a P1
700  $a Adileh, Almutaz $u UGent $0 000131219071 $0 802001695415 $0 802001703293 $0 972227270853 $9 835AD5E0-4280-11E3-A709-765F10BDE39D
700  $a González-Álvarez, Cecilia $u UGent $0 000121044175 $0 802001439373 $0 977952694338 $9 7DAE5B08-9344-11E2-AAF3-2DC310BDE39D
700  $a Miguel De Haro Ruiz, Juan
700  $a Eeckhout, Lieven $u TW06 $0 801001255603 $0 0000-0001-8792-4473 $9 F57D866C-F0ED-11E1-A9DE-61C894A0A6B4
650  $a Technology and Engineering
773  $t IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS) $g 2019 IEEE INTERNATIONAL SYMPOSIUM ON PERFORMANCE ANALYSIS OF SYSTEMS AND SOFTWARE (ISPASS). 2019. p.58-67 $q :<58
856  $3 fullText $u https://biblio.ugent.be/publication/8625887/file/8625890 $z [open] $y ispass2019-sniper-arm.pdf
920  $a confcontrib
Z30  $x EA $1 TW06
922  $a UGENT-EA