Stretching the Reach of Implicitly Typed Variables in C#

Leandro T. C. Melo
Published in The Startup · 11 min read · Jul 22, 2020

Type inference is a common feature among mainstream programming languages. The functional ones, like ML and Haskell, pioneered this style of programming, in which the declaration of a variable or a function may omit a type annotation. Today, even classic imperative languages such as C++ offer, to a certain extent, a kind of type inference.

C# is no exception to the ever-increasing popularity of inference-enabled languages: since C# 3.0, developers may declare implicitly typed variables with the var keyword (C++11 introduced a similar feature by re-targeting the then-existing auto keyword). This functionality, however, comes with a provision: a declaration must be accompanied by an initializer.

Although one would not expect to see ML and Haskell’s “whole-program type inference” in C#, could this initializer requirement be relaxed? Why does it exist at all?

  • Is it due to any limitation or just a matter of taste?
  • Could the variable's type be inferred from arbitrary expressions?
  • If yes, what are the trade-offs from a language-design perspective?

In this article, I will describe the foundation of a standard type inference technique and ramble about the questions above.

Non-initializing Declarations

What do you answer when someone asks: why is it necessary, in C#, that a local var-declared variable be accompanied by an initializer? The accurate reply to this question is: because the language specification says so; it is not because the type would not be inferable otherwise. Consider function f.
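Concretely, f may be sketched as follows; consistently with the constraint generated for it further below, f is void and takes a string parameter that it never uses (note that the split declaration is precisely what is not valid C# today):

```csharp
void f(string s)
{
    var num;   // a declaration-statement without an initializer (rejected by C#)
    num = 42;  // an expression-statement: the first, definite, assignment
}
```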

The two statements above (a declaration statement and an expression statement) are not fundamentally different from the single statement var num = 42;. In fact, if permitted, the Intermediate Language (IL) code produced for the former would probably be identical to that produced for the latter. Also, one may easily observe that int is a valid replacement for var in f.

To set the stage for our discussion, let us quote — with minor adjustments for pedagogical reasons — the relevant fragment of the C# specification to which var is subject:

“When the local variable type is specified as var… the declaration is an implicitly typed local variable declaration, whose type is inferred from the type of the associated initializer expression.”

Another characteristic of C# is that a variable must be definitely assigned before we attempt to obtain its value. Let us also quote the language specification fragment that mandates such assignment.

“For an initially unassigned variable to be considered definitely assigned… , an assignment to the variable must occur in every possible execution path…”

It is not my intent to enter the pedantic realm of language specifications. Yet, based on our (informal) observation that var num; num = 42; is equivalent to var num = 42;, and the fact that a variable must be assigned before being used, could we slightly stretch the functionality provided by var?

Hindley-Milner, Constraints, and Unification

In our small and trivial function f, it is clear that int is a valid type for the program variable num. But, in a general setup, reasoning about the typing relations among a set of expressions demands a systematic approach. The de facto type inference discipline adopted in typical functional languages is Hindley-Milner’s type system.

Here, I will describe (at a superficial level) the core ideas behind a constraint-based variation of Hindley-Milner, alongside a simplified version of a popular constraint-generation and constraint-solving technique that builds on the unification framework to implement such a type system. The setup I present is slightly more elaborate than what would be essential to our discussion about var, so that it remains comprehensible in the event one would like to extend it further.

A Constraint Language

A primary component of a constraint-based inference technique is a constraint language. A constraint language is composed of its own syntactic elements and of the types of the (programming) language which we are modelling; in our case, C# is the modelled language. We refer to a C# type (e.g., an int or System.DateTime) by 𝝉, and to a type variable, which is a type too, by 𝜶.

The BNF grammar of our constraint language is as follows. C is a constraint.

C ::= def f : 𝜶 in C | ∃𝜶. C | C ^ C | 𝝉 = 𝝉

The first constraint form, def, introduces a function identified by f, whose type is 𝜶. (A function type is denoted with the classical arrow notation.) The second form, ∃𝜶. C, is an existential constraint which indicates that 𝜶 “holds” some type that may be employed in a constraint. The ^ operator, appearing in the third constraint form, is logical conjunction (think of it as the & operator in C#). The last form is a constraint that expresses type equality; it is on this one that we will focus. Moreover, our constraint language is equipped with two built-in operators, decl_type(𝒗), which designates the declared type of a 𝒗ariable, and type_of(𝓵), which designates the type of a 𝓵iteral.

A constraint language must also have its semantics defined. Except for the equality of types, however, I will not do so in this article; that would entail a longer discussion, and one that is expendable for this general presentation. Having said that, the semantic definition that we have for 𝝉 = 𝝉 is straightforward: that of mathematical equality.

Constraint Generation and Constraint Solving

Supplied with a constraint language, we may generate the constraint associated with a program. This task is usually accomplished in a syntax-directed manner, i.e., we traverse the program’s Abstract Syntax Tree (AST) and, while doing so, generate a constraint that is sound with respect to the modelled language. Below is how a constraint for f could look:

def f : string → void in ∃𝜶1. decl_type(num) = 𝜶1 ^ 𝜶1 = type_of(42)

A constraint implies the typing of a program. But for it to be really useful, type variables must be instantiated. This is achieved by solving the constraint. Let us do that for the one generated for f.

  • def is intrinsic to scope; there are various ways to deal with it, e.g., through De Bruijn indices. We assume that a function comprises a single naming environment and that conflicts never happen. Also, f is not called recursively, none of the expressions in its body mention the parameter name, its void return is uninteresting, and it contains no local functions. These assumptions allow us to reduce the previous constraint as follows.

    ∃𝜶1. decl_type(num) = 𝜶1 ^ 𝜶1 = type_of(42)
  • Because the type variable 𝜶1 does not occur free outside the quantified constraint, we may eliminate the existential quantification.

    decl_type(num) = 𝜶1 ^ 𝜶1 = type_of(42)
  • We now evaluate operator calls. type_of(42) evaluates to int; for decl_type(num), we introduce a fresh type variable for each program variable, and store the corresponding mappings in a set — this set is referred to as 𝜓.

    𝜶2 = 𝜶1 ^ 𝜶1 = int where 𝜓 = { (num, 𝜶2) }

At this point, our constraint is in a normal form, eligible for a final solving stage by an off-the-shelf framework.

Unification and Typing

What remains from the partly-solved constraint of function f is a pair of equalities. A solution to those is a substitution, from type variables to types, that will ideally render the constraint in question satisfiable. This is exactly what unification provides us with. Specifically, as the result of a successful unification, we obtain a “special” substitution that is referred to as the most general unifier (mgu), which, in turn, suggests a principal type.

Provided with an implementation of unification, this is how to proceed.

  1. Initially, we unify the first constraint of the conjunction, 𝜶2 = 𝜶1. The return is a substitution that maps 𝜶2 to 𝜶1, written as 𝝈 = [𝜶2 ↦ 𝜶1].
  2. Next, 𝝈 must be applied both on the remaining constraint, 𝜶1 = int, and on 𝜓. The application of a substitution is denoted with juxtaposition, therefore: 𝝈C = [𝜶2 ↦ 𝜶1](𝜶1 = int) and 𝝈𝜓 = [𝜶2 ↦ 𝜶1]{ (num, 𝜶2) }.

    a) The former operation has no effect, since 𝜶2 does not appear in the remaining constraint.
    b) The latter, 𝝈𝜓, results in 𝜓 = { (num, 𝜶1) }.
  3. Lastly, we move on to the constraint that follows, 𝜶1 = int, and repeat the process from the beginning.

This iteration goes on until the entire constraint is processed or an error happens, e.g., due to an attempt to unify incompatible types like an int against a string. Our example terminates without any error, after the second iteration, when [𝜶1 ↦ int] is computed and applied on 𝜓, leaving it as:

𝜓 = { (num, int) }

The mapping above tells us that int is a solution as the type of program variable num. If the constraint that we generated for f is indeed sound, then replacing var with int in the original source code must yield a well-typed C# program, according to the language’s typing judgements.
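The whole iteration can be condensed into a few lines of code. What follows is a sketch only (in Python, for brevity): types are plain strings, the strings "a1", "a2", … stand for the type variables 𝜶1, 𝜶2, and only atomic types are handled, so there is no occurs check or compound structure to descend into.

```python
# Minimal unification over equality constraints, mirroring the steps above.

def is_var(t):
    # By convention in this sketch, type variables are named "a1", "a2", ...
    return t.startswith("a")

def apply_subst(subst, t):
    # Follow the substitution chain until a non-mapped term is reached.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(constraints):
    """Solve a list of (lhs, rhs) equalities; return the substitution (mgu)."""
    subst = {}
    for lhs, rhs in constraints:
        l, r = apply_subst(subst, lhs), apply_subst(subst, rhs)
        if l == r:
            continue          # already equal, nothing to do
        if is_var(l):
            subst[l] = r      # bind the type variable on the left
        elif is_var(r):
            subst[r] = l      # bind the type variable on the right
        else:
            raise TypeError(f"cannot unify {l} with {r}")
    return subst

# The constraint generated for f: a2 = a1 ^ a1 = int, with psi = {num: a2}.
subst = unify([("a2", "a1"), ("a1", "int")])
psi = {"num": apply_subst(subst, "a2")}
print(psi)  # prints {'num': 'int'}
```

Applying the final substitution to 𝜓 yields the same mapping as the manual walk-through: num is typed as int.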

In a formal semantics, typing judgements are defined by means of inference rules like this one, for a basic (symmetric) assignment.
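In a textbook presentation, that rule could be written along these lines, with 𝒆1 and 𝒆2 standing for the left- and right-hand-side expressions of the assignment (the notation here is illustrative):

𝚪 ⊢ 𝒆1 : 𝝉    𝚪 ⊢ 𝒆2 : 𝝉
—————————————————
𝚪 ⊢ 𝒆1 = 𝒆2 : 𝝉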

The above rule says that, under the judgement of the typing environment 𝚪, whenever the type of an assignment is 𝝉, the type of its left-hand-side expression and of its right-hand-side expression is 𝝉 as well. However, in a constraint-based setting, the constraint (or 𝜓, in our formulation) must be accounted for as part of the environment too, enriching the rule as follows.
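With 𝜓 taken into account, the enriched rule might read (again, the notation is for illustration only):

𝚪, 𝜓 ⊢ 𝒆1 : 𝝉    𝚪, 𝜓 ⊢ 𝒆2 : 𝝉
——————————————————————
𝚪, 𝜓 ⊢ 𝒆1 = 𝒆2 : 𝝉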

Had the C# specification been defined with a formal semantics within it, it would be imperative to demonstrate that our constraint generation and solving are consistent with a program’s typing.

Back to the C# Universe

We have seen that, from a technical standpoint, it is possible to stretch the functionality of C#’s var a bit further. But at what “cost”? There are trade-offs to be aware of. To have a sense of those, let us pretend that we are extending C# into a fictitious language named C##. (We shall not “resolve” the double hash, since a D language already exists — pun intended.)

Of course, I will not go into every possible language-design aspect here; I would not know all of them myself, as a matter of fact. So the upcoming sections are just a glimpse of the highlights.

The Dynamic Typing Confusion

Suppose that, in C##, we decided to accept the syntax of our original f function as-is. We know that the compiler would have no problem inferring the type of num as int, but would developers find that an intelligible approach? Consider this function g, a slightly extended f.
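One possibility for g, with a second assignment whose value is of a different type than the first (the string literal is merely an illustrative choice):

```csharp
void g(string s)
{
    var num;
    num = 42;          // num is inferred as int...
    num = "forty two"; // ...so this assignment is a compilation error
}
```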

The error above may be obvious to some but, apparently, not to all. This affirmation is in view of the fact that the introduction of the dynamic keyword in C# 4.0 prompted confusion between said new keyword and var. The developer’s intention was perhaps to write g like this:
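That is, with num declared as dynamic, so that assignments of values of unrelated types are resolved at run time; a sketch:

```csharp
void g(string s)
{
    dynamic num;
    num = 42;          // fine: num is dynamically typed
    num = "forty two"; // fine as well
}
```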

In spite of the contrast between var and dynamic, as a language designer, one wants to avoid potential confusion. Therefore, it might indeed be sensible to forbid non-initializing var-declarations in C##. However… what if we demanded, for a non-initializing implicitly typed variable, an extra level of deliberateness from the developer? For example, that the keywords var and static be combined, as in this C## version of g.
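A sketch of that C## form, where var static signals that the variable’s type is to be inferred from the assignments to it:

```csharp
void g(string s)
{
    var static num;    // C##: opt in, explicitly, to inference from assignments
    num = 42;          // num is inferred as int
}
```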

A collateral coolness of the above approach is that it reuses an existing keyword, making the change subtle. Whether or not it would effectively eliminate doubts for the general audience, though, I cannot tell. Toward “consistency”, we could also enable var dynamic as a synonym for the plain dynamic.

Type Conversions, Subtyping, and Generics

The foremost advantage of requiring that a var-declared variable be accompanied by an initializer is that there is no ambiguity in deciding what the variable's type is. By giving away this obligation, yet retaining the requirement that variables must be assigned prior to their use, our inference needs to account for further possibilities when typing a valid program.

  • Inferring a type that renders an individual assignment correct, but which fails to type another assignment to the same implicitly typed variable.
  • Inferring multiple unequal types that render all assignments to the implicitly typed variable correct.

This situation is exemplified by function conv.
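A possible shape for conv, with two assignments of literals of distinct numeric types:

```csharp
void conv()
{
    var static v;  // C##: implicitly typed, no initializer
    v = 42;        // the literal is an int...
    v = 1.6180F;   // ...but this float literal rules out int as v's type
}
```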

The type of the literal 42 is int, but we may not type v as such; otherwise, the subsequent assignment would trigger a compilation error, given that the type of the literal 1.6180F is float, and no coercion from float to int is available. One correct alternative is to type v as float (keep in mind that C# allows developers to define implicit/explicit type conversions).

In the presence of subtype polymorphism, the situation is similar.
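For instance, given a small class hierarchy (its exact shape and the method b are assumptions for the sake of the example), a function sub might read:

```csharp
class A { }
class B : A { public void b() { } }
class C : B { }

void sub()
{
    var static w;
    w = new C();  // typing w as C...
    w = new B();  // ...would break this assignment
    w.b();        // while typing w as the top type A would break this call
}
```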

Now, a compilation error would be triggered if we inferred, based on the first assignment, the type of w as C. To avoid this error, one may naively think that always inferring a top type is a solution to this situation, leading to A as the type of w and, likewise, double as the type of v in function conv. But that strategy does not work in general, because there might be further uses of the implicitly typed variable that impose a specific type: a call w.b() in sub, or a call whoops(v) in conv, where whoops is void whoops(float p) {}.

There are other language constructs, not mentioned in these basic examples, that would also demand special care in the non-initializing behavior of var static-declared variables in C##. A notable challenge is parametric polymorphism; put covariance and contravariance into that bucket as well, features that are supported by C#’s generics.

It is certainly possible to deal with many (maybe all) of the difficulties I just presented in a nice way. However, advances in our type inference technique and constraint language would be necessary, alongside possible syntax hints. A few to consider are:

  • Type schemes and let-polymorphism, for fine-grained control over generalization and instantiation of types.
  • Constraints in the form of type inequalities, i.e., 𝝉 ≤ 𝝉, whose implied semantics may be that of subtyping: 𝝉 <: 𝝉. Their interpretation, in turn, depends, among other matters, on whether the system is nominal or structural.

Is it Worth It?

After all this rambling, the natural question is: would it be worth it, or anyhow beneficial, to incorporate in C# the feature we have been discussing? The language specification could be amended in the following (loose) way:

“When the local variable type is specified as var… the declaration is an implicitly typed local variable declaration, whose type is inferred from the type of the associated initializer expression; if said variable is specified as var static, its type is inferred from assignment expressions involving it.”

That being said, and as a disclaimer, I do not feel in a position to make a thorough evaluation of this “proposal”. Apart from the technicalities, a primary aspect to consider is the actual/practical benefit (if any) implied by such a feature. To my understanding (I am not affiliated with Microsoft), var was originally conceived for anonymous types. Yet, I believe that extending its functionality to non-anonymous types was likely a choice of convenience. Then, under the subjectiveness of what is convenient…, one may just as well wish to write the code as in function z below.

Thank you for reading! (This is my first Medium article.) I hope you have enjoyed the text — if you did, I would be curious to hear your feedback; and even more grateful if you share this article further.

Leandro T. C. Melo

I write about topics in the field of programming languages.