Update: I’m transitioning off of Medium, and have reposted this story onto my new blog, which can be found here. Thanks for reading!
Before C++17, template deduction basically worked in two situations: deducing function parameters in function templates, and deducing auto for variables and return types in functions. There was no mechanism to deduce template parameters in class templates.
The result of that was: whenever you used a class template, you either had to (1) explicitly specify the template parameters or (2) write a helper make_* function that does the deduction for you. The former is either repetitive and error-prone (if we’re just spelling out exactly the types of our arguments) or impossible (if our argument is a lambda). The latter requires knowing what those helpers are, and they’re not always named make_*: the standard has make_move_iterator, but also, for instance, back_inserter.
Class template argument deduction changed that by allowing class template arguments to be deduced by way of either the constructors of the primary class templates or deduction guides. The end result is that we can write code like:
No types explicitly specified here. No need for using
make_*(), even for lambdas.
However, there are two quirks of class template argument deduction (hereafter, CTAD) that are worth keeping in mind.
The first is that this is the first time in the language where we can have two variable declarations that look like they’re declaring the same type but are not:
When we use auto, there’s no expectation that we’ve written a type at all. But when we use the name of a primary class template, we have to stop and think for a bit. Sure, with std::pair it’s obvious that this isn’t a type; std::pair is a well-known class template. But with user-defined types, it may not be so obvious. In the above example, c and d look like they’re objects of type std::pair, and thus of the same type. But they’re actually objects of two different specializations of std::pair.
(Update: it was correctly pointed out by /u/cpp_learner that this is not the first such case, due to the existence of arrays of unknown bound. However, I suspect that CTAD will be used far, far more often than that, so I think it’s at least fair to say that (a) this will be the first commonly used case where this holds and (b) arrays of unknown bound are more obviously placeholder types than names of class templates.)
We’ll get this same issue in C++20 with the adoption of Concepts. And the YAACD paper actually points to CTAD as a reason for supporting Concept name = ... over Concept auto name = ...:
In variable declarations, omitting the auto also seems reasonable:
Constraint x = f2();
Note, in particular, that we already have a syntax that does (partial) deduction but doesn’t make that explicit in the syntax:
std::tuple x = foo();
This using-a-placeholder-that-looks-like-a-type-but-isn’t issue isn’t going to go away. Quite the opposite: it’s going to become much more common. So it’s just something to keep in mind.
The second quirk is, to me, a much bigger issue, one that is meaningfully different between Concepts and CTAD and that comes from exactly what problem each feature is trying to solve.
The motivation for CTAD as expressed in every draft of the paper is very much: I want to construct a specialization of a class template without having to explicitly specify the template parameters — just deduce them for me so I don’t have to write helper factories or look up what they are. That is, I want to construct a new thing.
The motivation for Concepts is broader, but specifically in the context of constrained variable declarations is: I want to construct an object whose type I don’t care about, but rather than using
auto, I want to express a more specific set of requirements for this type. That is, I’m still using the existing type, I’m just adding an annotation.
At least that’s how I think about it.
These two ideas may not seem like they clash, but they do. And it may not appear that we’re making a choice between two things, but we are. This conflict is expressed by a recent twitter thread:
Never quit, JF. Never quit.
The issue boils down to: what does this do, exactly:
What is the intent behind the declaration of variable x? Are we constructing a new thing (the CTAD goal), or are we using std::tuple as an annotation to ensure that x is in fact a tuple (the Concepts goal)?
STL makes the point that most programmers would expect x and y to have the same meaning. But this kind of annotation wasn’t the goal of CTAD. CTAD was about creating new things, which suggests that while y is clearly a tuple<int>, x needs to be a tuple<tuple<int>>. That is, after all, what we’re asking for: we’re creating a new class template specialization based on our arguments.
This conflict becomes clearer in this example:
This is the point that Casey made. Is c a tuple<int> or a tuple<tuple<int>>? Is z a vector<int> or a vector<vector<int>>?
Today, if we’re using CTAD with a copy, the copy takes precedence. This means that the single-argument case effectively follows a different set of rules than the multi-argument case. Today, c is a tuple<int> and z is a vector<int>, each just copy-constructing its argument.
In other words, to Casey’s point, the type of tuple(args...) depends not only on the number of arguments but also on their types. That is:
- If sizeof...(args) != 1: tuple<Args...>.
- Otherwise, if args0 is not a specialization of tuple: tuple<Args0>.
- Otherwise: the same type as args0, by way of a copy.
That’s decidedly not simple.
I think this is an unfortunate and unnecessary clash, especially in light of the imminent arrival of Concepts, which would allow us to easily distinguish between the two cases:
Here, we would use each language feature for the thing it does best: constructing new things for CTAD, and constraining existing things for Concepts.
But these are the rules we have today, so it’s important to keep in mind that these quirks exist. Especially the second one — which means you need to be very careful when you use CTAD in generic code: