What Can Compiler Benchmarks Tell Us About Metaprogramming Implementation Strategies
In 2022, how do modern compilers really deal with metaprogramming code? And, as a consequence, what are the best metaprogramming patterns to maximize compile-time performance? Optimizing application runtime is often guided by a mix of popular knowledge, expertise, and, when required, cold, hard benchmarking data. Over the years, tools have been developed to help with this process and make the most of available computing resources. But what about the few situations where the compile time itself is critical, as in the case of metaprogramming-heavy libraries intended to be used at the core of large projects? Does the niche knowledge accumulated over the years in this regard really match the cold, hard data coming from compilers? Additionally, how do compilers behave when pushed to their absolute limits? And is there much discrepancy between the behaviors of different compilers?
This talk is an exploration of all these questions and what they mean in practice for implementations. We will see how to build reliable, portable benchmarks for compilers using advanced metaprogramming constructs, in C++20 as well as in C++17 when legacy compatibility matters. In particular, we will dive into the internals of a metabenchmarking library that has been designed to help with the compile-time generation of such benchmarks. To better understand the real limits of modern compilers when it comes to generative programming strategies, we used supercomputers to explore the parameter space of these benchmarks. In this talk, we will dissect the results together to see if the popular knowledge is reflected in the data. This will allow us to analyze, amongst other things, the real cost of class templates, function templates, generic lambdas, value-based metaprogramming, and concepts on compile time for various compilers and various compiler versions. And, in many cases, we will see that the results can be very counter-intuitive.
The goal of this talk is to give a clear and up-to-date picture of what really matters for compile time today. But beyond that, the goal is also to provide guidelines on implementation strategies and metaprogramming patterns when compile time matters. Finally, it also constitutes an invitation to compiler writers to compare their tools when facing particularly ill-behaved situations.
Vincent Reverdy
Vincent Reverdy is a Full Researcher in Computer Science and Astrophysics at the French National Center for Scientific Research (CNRS), located at the Annecy Laboratory for Particle Physics (LAPP) in the French Alps. He is also a member of the French delegation to the C++ Standards Committee. After a PhD at the Paris Observatory in 2014 on the topic of numerical cosmology and general relativity, during which he extensively explored advanced metaprogramming techniques, he joined the University of Illinois at Urbana-Champaign in the US. There, he led an interdisciplinary research group in both computer science and computational astrophysics, trying to bridge the gap between programming languages and computational sciences. In late 2019, he moved back to France to continue working on software architecture aspects related to astrophysics, and joined CNRS in January 2022 to lead long-term research aimed at building bridges between theoretical computer science, including type theory and category theory, on one side, and computational sciences, with a focus on numerical astrophysics, on the other side. Finally, as a member of the C++ committee, he has been working extensively on low-level programming components, including bit manipulation, as well as mathematical abstractions.