Closure Conversion as CoYoneda

:: Yoneda, coYoneda, category theory, compilers, closure conversion, math, by Max New

The continuation-passing style transform (cps) and closure conversion (cc) are two techniques widely employed by compilers for functional languages, and have been studied extensively in the compiler correctness literature. Interestingly, typed versions of each can be proven to be equivalence preserving using polymorphic types and parametric reasoning, as shown by my advisor Amal Ahmed and Matthias Blume (cps,cc).

In fact, there is something like a duality between the two proofs: cps uses a universal type, closure conversion uses an existential type, and the two isomorphism proofs use analogous reasoning. It turns out that both are instances of general theorems in category theory: the polymorphic cps isomorphism can be proven using the Yoneda lemma, and the polymorphic closure-conversion isomorphism can be proven using a less well-known theorem often called the *co*Yoneda lemma.

The connection between cps and the Yoneda embedding/lemma is detailed elsewhere in the literature and blogosphere (ncafe, Bartosz), so I’ll focus on closure conversion here. Also, I’ll try to go into some detail in showing how the “usual” version of Yoneda/coYoneda (using the category of sets) relates to the appropriate version for compilers.

I’ll assume some background knowledge on closure conversion and parametricity below. Fortunately, Matt Might has a nice blog post explaining untyped closure conversion.

Polymorphic Closure Conversion

Closure conversion is a way of compiling a language with closures (i.e., basically any modern high-level language) to one that only has function pointers/labels, like C or machine code. Closure conversion compiles a high-level function (aka closure) to a pair of an environment, which contains the values of all the function's free variables, and a code pointer to a block that takes as inputs both the function's original inputs and the values of its free variables.

For instance

let x = 3 in λ y. x + y

would be converted to something like

let x = 3 in ([x: 3], λ env, y. let x = env.x in x + y)
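
For illustration, here is that conversion sketched in Python (the names `apply_closure`, `converted`, etc. are invented for this post, not taken from any compiler):

```python
# Source program: let x = 3 in λ y. x + y
x = 3
source = lambda y: x + y  # 'x' is captured implicitly by the closure

# Converted program: a pair (environment, code pointer), where the code
# receives its environment explicitly instead of capturing free variables.
env = {"x": 3}

def code(env, y):
    return env["x"] + y

converted = (env, code)

# Calling a converted closure means passing its own environment back to it.
def apply_closure(clo, arg):
    env, code = clo
    return code(env, arg)
```

Both `source(4)` and `apply_closure(converted, 4)` evaluate to 7.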

Can we give a type to the resulting code? The source program has type Number -> Number, but the target has a type more like

{ x: Number} × ({x : Number} × Number -> Number).

In addition to being ugly, this type leaks irrelevant details of the function's implementation: all of its free variables appear in its type, so two terms with the same function type but different free variables would be translated to terms of different types. Also, high-level program equivalences like β-reducing the term to just λ y. 3 + y would not even preserve typing. Not only that, but some bad code could now supply a different well-typed value for x than intended, which could break invariants the programmer had established for the function.

We could fix the type preservation issue by just using a dynamic type for our environment, but this would still leak details in the values. Fortunately, there is a nice solution to the other problems using existential types. The idea is that the type of the environment of free variables is irrelevant to anyone that calls the function, only the function itself should know what the environment looks like; the type of the environment should be abstract to the caller and concrete to the callee. Existential types capture this.

We can translate functions in the source of type A -> B to pairs of an environment and a code pointer, but now making the environment type existentially quantified:

∃ Γ. Γ × (Γ × A -> B).

Then the syntax of existential types ensures that all any consumer can do with the env : Γ in the pair is pass it to the code pointer along with an A argument.
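
Python has no existential types, but we can approximate the discipline with an object that keeps its environment private by convention, exposing only that one legal operation (a sketch; the `Closure` class is invented here):

```python
class Closure:
    """Approximates ∃ Γ. Γ × (Γ × A -> B): the environment stays hidden."""

    def __init__(self, env, code):
        self._env = env    # abstract to callers, by convention
        self._code = code

    def __call__(self, arg):
        # The only thing a consumer can do: pass env to code with an A.
        return self._code(self._env, arg)

add3 = Closure({"x": 3}, lambda env, y: env["x"] + y)
```

Callers can apply `add3`, but they have no sanctioned way to depend on the shape of its environment.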

How do we prove that this translation is correct? And what does correct even mean? We'll focus on a property called full abstraction, which says that if two programs are equal in the source language, then their translations are equal. Here, equal in the source language will just mean βη-equivalence, so for instance the two terms below are equal:

let x = 3 in λ y. x + y
λ y. 3 + y

To prove this we'll show that in a language with existential types, the types ∃ Γ. Γ × (Γ × A -> B) and A -> B are isomorphic. The usual proof is by parametricity; instead, we'll use a closely related category-theoretic argument: the coYoneda lemma.

The CoYoneda Lemma

The coYoneda lemma is a generalization of the equivalence described above. I’ll start with the ordinary version which uses coends and presheaves.

The coYoneda lemma says that for any category C, presheaf Q : C^op -> Set, and object Δ of C, the set Q(Δ) is isomorphic to the coend

∫^Γ C(Δ, Γ) × Q(Γ)

Let's break that down.


A coend is a construction that is very similar to the parametric existential quantifier. If you’re familiar with parametricity, a good intuition is that coends have the same definition as existential types but where the only relations are functional relations.

You can take the coend of a functor of type T : C^op × C -> Set. We can get such a T from a type with a free type variable X, like T(X), by splitting the occurrences of X into positive and negative ones: T(X⁻, X⁺). Then the coend ∫^X T(X, X) is like the union of all T(A, A), but where the A is ensured to be “irrelevant”.

So for any object A there is a map i_A : T(A, A) -> ∫^X T(X, X); we can “hide the A”. To make sure the A is treated opaquely, we add an invariance condition that says that if you have a t_A ∈ T(A, A) and a t_B ∈ T(B, B) whose positive and negative positions are related by some function f : A -> B, then i_A(t_A) = i_B(t_B). More formally, this means that if you have a t ∈ T(B, A), then

i_A(T(f, A)(t)) = i_B(T(B, f)(t))

or in a point-free style:

i_A ∘ T(f, A) = i_B ∘ T(B, f)

A family of functions i parameterized by types like this that has this property is called a co-wedge from T.

A coend is an object ∫^X T(X, X) together with a co-wedge i that is universal, i.e., any other co-wedge factors through i. This universal property gives us the syntax for existential elimination.
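
To make the invariance condition concrete, here is a small types-erased check in Python, taking T(X⁻, X⁺) = (Int -> X⁺) × (X⁻ -> Int) (all names invented for the illustration). A consumer of the coend can only compose the two components, so pushing f into either slot of an element of T(B, A) is indistinguishable after elimination:

```python
# An element of T(X, X) is a pair (k, q) with k : Int -> X and q : X -> Int.
# Eliminating the coend: composing k and q is all a consumer can do.
def eliminate(package):
    k, q = package
    return lambda n: q(k(n))

f = lambda a: a * 2  # f : A -> B (here A = B = Int, so we can run everything)
k = lambda n: n + 1  # k : Int -> A  } together, an element of T(B, A)
q = lambda b: b - 3  # q : B -> Int  }

left = eliminate((lambda n: f(k(n)), q))   # i_B ∘ T(B, f): push f positively
right = eliminate((k, lambda a: q(f(a))))  # i_A ∘ T(f, A): push f negatively
```

Both sides compute q(f(k(n))); for example, `left(10)` and `right(10)` are both 19.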

If you are familiar with parametricity, it is a good exercise to see why the usual condition of invariance with respect to all relations implies that a parametric ∃ X. T(X, X) will form a co-wedge. It seems that in general it would not be a universal co-wedge, because a parametric existential is invariant under all relations, and there are many relations that don't act like functions.


Next, a presheaf on a category C is just a functor Q : C^op -> Set. Think of Q(A) as a set parameterised by a type A of “inputs”, so if you have a map f : A -> B in C you get a function Q(f) : Q(B) -> Q(A) that “preprocesses” the inputs using f. Functoriality ensures that preprocessing with the identity is just the identity and that the composite of preprocessors is the preprocessor of the composite function.
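
A concrete presheaf, sketched in Python: predicates, with Q(A) = A -> Bool and preprocessing given by precomposition.

```python
# contramap sends f : A -> B to Q(f) : Q(B) -> Q(A), preprocessing the
# input with f before testing the predicate.
def contramap(f, q):
    return lambda a: q(f(a))

is_even = lambda n: n % 2 == 0                # a predicate in Q(Int)
length = lambda s: len(s)                     # length : Str -> Int
has_even_length = contramap(length, is_even)  # a predicate in Q(Str)
```

So `has_even_length("hi")` holds while `has_even_length("abc")` does not, and preprocessing with the identity leaves a predicate unchanged.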

So the informal explanation of the coYoneda lemma is that for any presheaf Q, if we have an f ∈ C(Δ, Γ) paired with a q ∈ Q(Γ), then since we can't inspect the Γ in any way, all we can really do is compose the q with the preprocessor from the function f, giving us a Q(f)(q) ∈ Q(Δ).

Enriched Categories and Enriched CoYoneda

But there's a gap between this and applying it to a programming language: the coYoneda lemma as presented says that Q(Δ) and the coend ∫^Γ C(Δ, Γ) × Q(Γ) are isomorphic as sets, but we wanted an isomorphism of types in our programming language. We can reconcile this by considering enriched category theory and the enriched coYoneda lemma. Let V be a category; if V is sufficiently like the category of sets, then we can do a lot of category theory by replacing the word “set” with “object of V”.

Specifically, a V-enriched category (or just V-category) C has a set of objects Ob(C), but for each pair of objects A, B we get a V-object C(A, B) of morphisms from A to B. If V is a closed category, we can see V itself as a V-enriched category with the same objects, just making V(A, B) = A -> B, i.e., the internal hom aka exponential.

Then we can reinterpret the coYoneda lemma above by taking C to be V itself, viewed as a V-category, and Q to be a V-presheaf, i.e., just a contravariant functor from V to itself, where the preprocessing function Q(f) : Q(Γ) -> Q(Δ) is now a morphism in V. Haskellers just call this a contravariant functor. Furthermore, since existential types provide at least as strong a reasoning principle as coends, the proof of the coYoneda lemma goes through with existential types instead. Finally, the point-free description of the co-wedge condition above can be interpreted in any category.

Now that we’re working all inside our language, let’s look at what the isomorphism looks like in Haskellish/Agdaish syntax. We want mutually inverse functions

f : (Contravariant Q) => (∃ Γ. (Δ -> Γ) × (Q Γ)) -> Q Δ
g : (Contravariant Q) => Q Δ -> ∃ Γ. (Δ -> Γ) × (Q Γ)

If you try to implement them you won’t be able to get it wrong, but here they are:

f (k, qΓ) = contramap k qΓ
g qΔ = (id, qΔ)

where we just instantiate Γ := Δ in the second case. You can prove f ∘ g = id using just β-reduction and the Contravariant laws, but to prove g ∘ f = id you need to use the coend reasoning. For those of you who know about the Yoneda lemma, note the similarity to that proof, which also uses the identity function and instantiates a type variable in a trivial way.
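
Transliterated into Python, with the existential package modeled as a plain pair and Q fixed to the concrete contravariant functor Q(Γ) = Γ -> Int, so that contramap is precomposition (an illustrative sketch, not typed code):

```python
identity = lambda x: x

def contramap(k, q):
    # For Q(Γ) = Γ -> Int, preprocessing is just precomposition.
    return lambda d: q(k(d))

def f(package):    # (∃ Γ. (Δ -> Γ) × Q Γ) -> Q Δ
    k, q_gamma = package
    return contramap(k, q_gamma)

def g(q_delta):    # Q Δ -> ∃ Γ. (Δ -> Γ) × Q Γ, instantiating Γ := Δ
    return (identity, q_delta)

q = lambda n: n + 1  # some element of Q(Int)
```

The f ∘ g direction can be checked pointwise, e.g. `f(g(q))(41)` gives `q(41)`; the g ∘ f direction is exactly where the coend reasoning is needed.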

Closure Conversion as CoYoneda

Now it’s time to bring it all together. Let V be our programming language viewed as a (V-enriched) category in the usual way: the objects are types, and the morphisms from A to B are the terms x : A ⊢ t : B, up to equivalence.

We want to prove the closure conversion isomorphism

A -> B ≅ ∃ Γ. Γ × (Γ × A -> B)

using the V-coYoneda lemma, which says that for any contravariant functor Q : V^op -> V and object Δ:

Q Δ ≅ ∃ Γ. (Δ -> Γ) × Q Γ

Clearly, based on the right-hand side, Q should be - × A -> B, which gives us for any Δ:

Δ × A -> B ≅ ∃ Γ. (Δ -> Γ) × (Γ × A -> B)

Next we pick Δ = 1, the unit type. Then we use some basic facts about the unit type, 1 × A ≅ A and (1 -> Γ) ≅ Γ (at least in a pure language), to get the desired result by composition:

A -> B ≅ 1 × A -> B ≅ ∃ Γ. (1 -> Γ) × (Γ × A -> B) ≅ ∃ Γ. Γ × (Γ × A -> B)


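Tracing that composite in Python (illustrative only; the code pointer below still closes over `fun`, which Python cannot prevent, so only the typed interface is being modeled, with the unit type 1 modeled as None):

```python
# Forward: A -> B  ==>  ∃ Γ. Γ × (Γ × A -> B), choosing Γ := 1 as in the proof.
def pack(fun):
    env = None                    # the unit value, via (1 -> Γ) ≅ Γ
    code = lambda env, a: fun(a)  # 1 × A -> B, ignoring the trivial env
    return (env, code)

# Backward: the general direction, applying the code pointer to the env.
def unpack(package):
    env, code = package
    return lambda a: code(env, a)

inc = lambda n: n + 1
round_trip = unpack(pack(inc))
```

Here `round_trip(41)` agrees with `inc(41)`.
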
Since closure conversion is an instance of the coYoneda lemma, it might be a nice example for giving programmers intuition for coYoneda. While not as famous as its cousin Yoneda, coYoneda is used in Haskell and is also central to Day convolution, which can be used to give semantics to separation logic.

Also, while researching this post, I was surprised at how little I could find on the relationship between ends/coends and relational parametricity. This seems unfortunate, as it looks like we are reproving some of the same theorems (Yoneda, coYoneda) using very similar but incompatible formalisms.
