Syntactic parametricity strikes again
In this blog post, reporting on a collaboration with Li-Yao Xia, I will show an example of how some results that we traditionally think of as arising from free theorems / parametricity can be established in a purely “syntactic” way, by looking at the structure of canonical derivations. More precisely, I prove that \(∀α. \List α → \List α\) is isomorphic to \(Π(n:ℕ). \List (\Fin n)\), where \(\Fin n\) is the type of integers smaller than \(n\), corresponding to the set \(\{0, 1, …, n−1\}\).
Context: Last week I had the pleasure of visiting UPenn, where I had many interesting discussions with various people. It was also an occasion to temporarily resume a discussion/collaboration I have with Li-Yao Xia, who is currently an intern there, and Jean-Philippe Bernardy, about testing polymorphic programs and its relation to canonical representations for System F.
During one of our blackboard discussions, Li-Yao and I did a manual proof of a cool result: we proved a parametricity theorem for \(∀α. \List α → \List α\) using syntactic methods, namely proof search among canonical proofs. (This is an idea that I have been playing with since the last year of my PhD, where I unsuccessfully tried to extend my work on canonical forms for the simply-typed lambda-calculus to polymorphism. It is worked out here on a specific example, but my end goal is to turn the manual reasoning into an algorithm.)
You may wonder, first, why the isomorphism holds. The idea is that a polymorphic function of type \(∀α. \List α → \List α\) cannot inspect the elements of the input list; it can only use them in the resulting list, possibly duplicating, reordering or dropping some elements. On any input list of size \(n\), the behavior of the function can be described by a list of indices in \(\{0, …, n−1\}\). For example, if the input \([x, y, z]\) (for some values of \(x\), \(y\), \(z\)) gives the output \([z, z, x]\), then this relation will hold for any values of \(x\), \(y\), \(z\), as the function cannot inspect their value or even test them for equality. The behavior of this function on lists of size 3 can be fully described by the list of indices \([2, 2, 0]\). Its whole behavior is then uniquely determined by one such list for each possible size:
\[∀α. \List α → \List α \;≃\; Π(n:ℕ). \List (\Fin n)\]
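To make this concrete, here is a small Haskell sketch of the observation (the names `describe` and `swapHead` are mine, not from the discussion): applying a polymorphic list function to the list \([0, …, n−1]\) reveals, at each output position, the input index that element came from.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Probe a polymorphic function on the list [0 .. n-1]: since the function
-- cannot inspect its elements, the output is exactly its index description.
describe :: (forall a. [a] -> [a]) -> Int -> [Int]
describe f n = f [0 .. n - 1]

-- An example polymorphic function: swap the first two elements.
swapHead :: [a] -> [a]
swapHead (x : y : rest) = y : x : rest
swapHead xs             = xs

main :: IO ()
main = do
  print (describe reverse 3)   -- [2,1,0]
  print (describe swapHead 3)  -- [1,0,2]
```

The rank-2 type of `describe` is what guarantees that `f` is applied uniformly at all element types, so the probe at type `Int` characterizes the function completely.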
The idea behind the “syntactic” (proof-theoretic?) proof method is the following: the set of closed values at a type \(A\) is isomorphic to the search space for canonical/normal derivations of \(⊢ A\). We have tools (in particular the notion of invertible inference rules) to reason about those – in this post I will only present the reasoning informally, but it can easily be made formally precise.
We start by looking at the shape of the search space for
\[⊢ ∀α. \List α → \List α\]
or, said otherwise, of the judgment
\[⊢ \List α → \List α\]
with a fresh/abstract type variable \(α\). (I will not be keeping open type variables in the context, to avoid confusing them with hypotheses.)
Any derivation of a function type can, without loss of generality (w.l.o.g.), be assumed to start with a function introduction. This is the η-expansion rule for functions: any proof term \(e\) is equivalent to \(λx. e\ x\), a proof that starts with a \(λ\). So any proof can be taken to start as follows:
\[x : \List α ⊢ \List α\]
We can then, w.l.o.g., unfold the recursive type in the context (\(\List α = 1 + (α × \List α)\)):
\[x : 1 + (α × \List α) ⊢ \List α\]
A derivation with a sum type as hypothesis can, w.l.o.g., be assumed to start by splitting on this sum (this is the η-expansion rule for sums):
\[x₁ : 1 ⊢ \List α \qquad x₂ : α × \List α ⊢ \List α\]
In the right subgoal, we can always, w.l.o.g., split a hypothesis of product type:
\[y : α,\; z : \List α ⊢ \List α\]
Now, an interesting pattern emerges. In the process of trying to prove \(x : \List α ⊢ \List α\), we have to prove the (right) subgoal \(y : α,\; z : \List α ⊢ \List α\). We can generalize this derivation by assuming that we start with some number \(n\) of variables of type \(α\) in the context (we write \(αⁿ\) for this):
\[αⁿ, x : \List α ⊢ \List α\]
Let us write \(F(n)\) for the search space corresponding to all possible derivations of the judgment \(αⁿ, \List α ⊢ \List α\). All the proof steps above have been done “without loss of generality” (in terms of focusing, we only used invertible rules), so they appear in any such derivation. Similarly, let us write \(G(n)\) for the space of all possible derivations of \(αⁿ ⊢ \List α\) – the left subgoal \(αⁿ, x₁ : 1 ⊢ \List α\) is equivalent to it, since a unit hypothesis carries no information. Then above we have proven that
\[F(n) = G(n) × F(n+1)\]
This equality can be unfolded at will
\[F(0) = G(0) × F(1) = G(0) × G(1) × F(2) = …\]
or written as an infinite product
\[F(0) = Π(n:ℕ). G(n)\]
and, in particular,
\[∀α. \List α → \List α \;≃\; Π(n:ℕ). G(n)\]
Now let’s look at the structure of the derivations of \(αⁿ ⊢ \List α\). A proof of this judgment cannot start with a “left rule”, inspecting the value of one of the variables of type \(α\), given that the structure of \(α\) is unknown/abstract. It must start by choosing to either build the empty list or a cons cell. We write this as follows (after unfolding the type):
\[(αⁿ ⊢ 1) \;⊕\; (αⁿ ⊢ α × \List α)\]
The notation \(⊕\) between two judgments is nonstandard; it means that they are not two requirements of the same proof, but two alternatives for possible proofs. All valid proofs fit that structure, and they either have a \(αⁿ ⊢ 1\) premise or a \(αⁿ ⊢ α × \List α\) premise. With this syntax, we are describing a set of possible derivations, rather than a single (partial) derivation.
Proofs of \(1\) are trivial, and a proof of a product is always, w.l.o.g., a product of proofs (in intuitionistic logic / the λ-calculus both components reuse the same context), so we can decompose further:
\[(αⁿ ⊢ 1) \;⊕\; \big((αⁿ ⊢ α) × (αⁿ ⊢ \List α)\big)\]
There is exactly one possible proof of \(αⁿ ⊢ 1\), so its search space is \(1\), the unit set (with a single element). There are exactly \(n\) possible proofs of \(αⁿ ⊢ α\), so the search space is just \(n\), seen as a set, or, in type-theoretic notation, \(\Fin n\). We thus have the recursive equation:
\[G(n) = 1 + (\Fin n × G(n))\]
This type is either \(1\), or a \(\Fin n\) together with itself, recursively. This is exactly a list:
\[G(n) = \List (\Fin n)\]
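Read as an enumeration procedure, the recursive equation says that a canonical derivation is either the nil choice, or a choice of an index in \(\Fin n\) followed by a smaller derivation. Here is a Haskell sketch (the name `derivations` is mine) enumerating the search space \(G(n)\) up to a depth bound:

```haskell
-- Enumerate the search space G(n) up to a given depth: a derivation is
-- either the empty-list choice, or a choice of index in {0, …, n-1}
-- followed by a remaining derivation -- i.e. exactly a list over Fin n.
derivations :: Int -> Int -> [[Int]]
derivations _ 0     = [[]]
derivations n depth =
  [] : [ i : d | i <- [0 .. n - 1], d <- derivations n (depth - 1) ]

main :: IO ()
main = print (derivations 2 2)
```

For instance, `derivations 2 2` lists the \(1 + 2 + 4 = 7\) canonical derivations of length at most 2 over two abstract variables.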
so, plugging everything together:
\[∀α. \List α → \List α \;≃\; Π(n:ℕ). \List (\Fin n)\]
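The two directions of this isomorphism can also be witnessed as programs. A Haskell sketch, assuming parametricity (the names `toIndices` and `fromIndices` are mine, and `Int` stands in for \(\Fin n\)):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Forward direction: probe the polymorphic function on [0 .. n-1];
-- by parametricity, this determines its behavior on lists of size n.
toIndices :: (forall a. [a] -> [a]) -> (Int -> [Int])
toIndices f n = f [0 .. n - 1]

-- Backward direction: replay a family of index lists, one per input
-- size, as a polymorphic list function.
fromIndices :: (Int -> [Int]) -> (forall a. [a] -> [a])
fromIndices b xs = map (xs !!) (b (length xs))

main :: IO ()
main = print (fromIndices (toIndices reverse) "abc")  -- "cba"
```

That the round trip is the identity in both directions is exactly the parametricity theorem proved syntactically above.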
Post Scriptum
Some of the reasoning steps above can be formulated in a way that is less clear but more familiar, as a sequence of type isomorphisms. For example, the first part, on \(\List α ⊢ \List α\), can be written as:
\[\begin{array}{rl}
  & \List α → \List α \\
≃ & (1 + (α × \List α)) → \List α \\
≃ & (1 → \List α) × ((α × \List α) → \List α) \\
≃ & \List α × (α → \List α → \List α)
\end{array}\]
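Each step in this chain can be witnessed by actual conversion functions; here is a Haskell sketch of the composite (the names `unroll` and `reroll` are mine):

```haskell
-- A function out of a list is the same data as its value on nil together
-- with its value on cons cells: a witness for the isomorphism chain above.
unroll :: ([a] -> [a]) -> ([a], a -> [a] -> [a])
unroll f = (f [], \x xs -> f (x : xs))

reroll :: ([a], a -> [a] -> [a]) -> ([a] -> [a])
reroll (onNil, _)  []       = onNil
reroll (_, onCons) (x : xs) = onCons x xs

main :: IO ()
main = print (reroll (unroll reverse) [1, 2, 3 :: Int])  -- [3,2,1]
```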
Reading this equational sequence, it may look like we had to make the right choice at each step; but the proof-search perspective reveals that there were in fact no choices, as at each step we only applied invertible rules (“w.l.o.g.” rules).
Furthermore, some parts cannot be derived in this style; in the latter part of the proof, the isomorphism between \(∀α. αⁿ → α\) and \(\Fin n\) – there are exactly \(n\) canonical proofs of \(αⁿ ⊢ α\) – which is immediate from a proof-search perspective, cannot be justified in this way. (In particular, \(∀α. αⁿ → α\) is not isomorphic to \((∀α. α)ⁿ → (∀α. α)\): the quantifier cannot simply be pushed inside the other connectives.)
Going further

It is an obvious but unfortunately little-known fact that many things we associate with “free theorems” can be recovered by proof search. For example, it is much simpler to prove that the only inhabitant of \(∀α. α → α\) is the identity using proof search than using parametricity. I briefly discussed the idea in Section 1.3 of my 2015 article, Which simple types have a unique inhabitant?.

If you are unfamiliar with proof search (or the LF community) and curious about what I mean by “canonical forms” and why I think this is an important idea, see my non-technical 2017 article Search for Program Structure. The problem of extending the notion of canonical forms to arbitrary polymorphic types is briefly discussed in Section 14.5 of my 2016 PhD manuscript.

If you haven’t heard of it yet, you would probably be interested in the 2010 article Testing Polymorphic Properties by Jean-Philippe Bernardy, Patrik Jansson and Koen Claessen. Li-Yao has a 2016 implementation called Metamorph that got us talking together. The problems of understanding canonical forms and testing are quite related, and yet not exactly the same…