Abstract Types, Parametric Polymorphism, And Dependency Injection And Mock Testing

Albert Y. C. Lai, trebla [at] vex [dot] net

Parametric polymorphic functions have very uniform behaviour: they “work the same way for all types”, so a few test cases with a few concrete types can tell you a lot about the general case. For example, if f :: a -> [a] and a test case yields f () = [(), (), ()], then we can deduce f 4 = [4, 4, 4].

This comes from an important theorem that’s pretty daunting to state. Fortunately, it is closely related to a theorem about programming with abstract types. I think that the abstract type story is more relatable, so I will start with it, and then we can transition to the parametric polymorphism story.

But here is how the two are connected. Suppose there is a module IS that exports this interface:

abstract type IntSet
emptyset :: IntSet
insert :: Int -> IntSet -> IntSet
lookup :: Int -> IntSet -> Bool

then this program that uses IS

import IS

use :: Bool
use = let
    s0 = emptyset
    s1 = insert 4 s0
    s2 = insert 5 s1
  in lookup 4 s2

is very much like this polymorphic function

polyUse :: a
        -> (Int -> a -> a)
        -> (Int -> a -> Bool)
        -> Bool
polyUse emp ins lkp = let
    s0 = emp
    s1 = ins 4 s0
    s2 = ins 5 s1
  in lkp 4 s2

Hopefully you are also thrilled to see that use is recovered as polyUse emptyset insert lookup: passing the operations as parameters is dependency injection, and passing in simple stand-in operations to test polyUse is mock testing.

Beautifully, we have a nice little piece of 1980s programming language theory that explains and unifies multiple programming practices.
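To preview the mock-testing angle from the title: because polyUse receives its operations as parameters, a test can inject a throwaway stand-in implementation. Here plain lists with (:) and elem serve as the mock (my own choice of stand-in, not from the IS module):

```haskell
-- polyUse receives the set operations as parameters (dependency injection).
polyUse :: a -> (Int -> a -> a) -> (Int -> a -> Bool) -> Bool
polyUse emp ins lkp = let
    s0 = emp
    s1 = ins 4 s0
    s2 = ins 5 s1
  in lkp 4 s2

-- Mock testing: inject a throwaway list-based stand-in for the operations.
mocked :: Bool
mocked = polyUse [] (:) elem   -- evaluates to True
```

The mock doesn't even try to be an efficient set; it only needs to be a correct one for the purposes of the test.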

Programming with Abstract Types

An abstract type can have many possible implementations. A program that uses the abstract type is given one such underlying implementation, and then later someone may switch to another. So there is a correctness question to ask, and it takes one of these forms: if I switch to another implementation, does my program still behave the same? Equivalently, do the two implementations behave the same, as far as any program using the abstract type can tell?

I will walk through the strategy with an example of an abstract type of sets of Ints:

abstract type IntSet
emptyset :: IntSet
insert :: Int -> IntSet -> IntSet
lookup :: Int -> IntSet -> Bool

I have a list implementation and a binary search tree implementation. In each one, I change the names to have suffixes (in code) or subscripts (in sentences) to help me refer to them later: emptyset_L, insert_L, lookup_L for lists; emptyset_B, insert_B, lookup_B for trees.
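The code of the two implementations is not shown here; for concreteness, a minimal self-contained sketch could look like the following (my own rendition, assuming sorted duplicate-free lists on one side and binary search trees on the other):

```haskell
-- Left: sorted, duplicate-free list representation.
type IntSetL = [Int]

emptyset_L :: IntSetL
emptyset_L = []

insert_L :: Int -> IntSetL -> IntSetL
insert_L i [] = [i]
insert_L i s@(x:xs)
  | i < x     = i : s
  | i == x    = s                 -- already present, keep no duplicates
  | otherwise = x : insert_L i xs

lookup_L :: Int -> IntSetL -> Bool
lookup_L i s = i `elem` s

-- Right: binary search tree representation.
data IntBST = Leaf | Node IntBST Int IntBST

emptyset_B :: IntBST
emptyset_B = Leaf

insert_B :: Int -> IntBST -> IntBST
insert_B i Leaf = Node Leaf i Leaf
insert_B i t@(Node l x r)
  | i < x     = Node (insert_B i l) x r
  | i == x    = t                 -- already present
  | otherwise = Node l x (insert_B i r)

lookup_B :: Int -> IntBST -> Bool
lookup_B _ Leaf = False
lookup_B i (Node l x r)
  | i < x     = lookup_B i l
  | i == x    = True
  | otherwise = lookup_B i r
```

Any pair of implementations with the same interface would do; these two are just familiar enough to reason about.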

How to make precise “they behave the same” starts from this idea:

We have the insight that, e.g., [1,2] is a valid list representation, (1(2)) is a valid tree representation, and they represent the same set. Perhaps we can design a relation/correspondence, call it “~”, between lists and trees so that “xs ~ t” means that xs and t are respective valid representations of the same set.

So I define: xs ~ t iff xs is a valid list representation, t is a valid tree representation, and they contain exactly the same integers.

Examples: [1,2] ~ (1(2)); the empty list is related to the empty tree, since both represent the empty set.

Negative examples: representations of different sets are not related, e.g., [1] is not related to the tree containing only 2; and nothing is related to an invalid representation.

Then the following are true, and they are the precise meaning of “the two implementations behave the same”, or sometimes I also say “the two implementations are in correspondence”:

emptysetL ~ emptysetB

for all i::Int, j::Int, xs::[Int], t::IntBST:
    if i = j and xs ~ t, then insertL i xs ~ insertB j t

for all i::Int, j::Int, xs::[Int], t::IntBST:
    if i = j and xs ~ t, then lookupL i xs = lookupB j t

(It looks redundant to assume “i = j” instead of just having i, but I am setting up for a generalization later.)

Generally, the correspondence uses “~” between the two representations of IntSet values (lists vs trees), “=” between Int values, “=” again between Bool values, and extends this to functions by “if inputs correspond, then outputs correspond”.

Now we can talk about programs that use IntSet. Here is an example program:

use :: Bool
use = let
    s0 = emptyset
    s1 = insert 4 s0
    s2 = insert 5 s1
  in lookup 4 s2

Does it give the same answer whether you use IntSetL or IntSetB? Yes. Let me clone the code and add suffixes to help explain:

use_L :: Bool
use_L = let
    s0_L = emptyset_L
    s1_L = insert_L 4 s0_L
    s2_L = insert_L 5 s1_L
  in lookup_L 4 s2_L
use_B :: Bool
use_B = let
    s0_B = emptyset_B
    s1_B = insert_B 4 s0_B
    s2_B = insert_B 5 s1_B
  in lookup_B 4 s2_B

Step through both versions to check:

s0_L ~ s0_B, since both represent the empty set.
Therefore s1_L ~ s1_B, by the correspondence for insert (with 4 = 4).
Therefore s2_L ~ s2_B, by the correspondence for insert again (with 5 = 5).
Therefore lookup_L 4 s2_L = lookup_B 4 s2_B, by the correspondence for lookup; that is, use_L = use_B.

This works for every program that uses IntSet. (And every program that doesn’t, for that matter.)

Zooming back out, this is what you do when you have two implementations for an abstract type. First fulfill this prerequisite: design a correspondence “~” between the two representation types, and prove that each pair of corresponding operations is in correspondence.

Then you can conclude: every program gives corresponding answers under either implementation; at observable types such as Bool and Int, “corresponding” simply means “equal”.

(Similarly if you have multiple abstract types.)

The type abstraction theorem is a general and precise way to state the above.

The Type Abstraction Theorem

The type abstraction theorem is daunting to state because it takes some setting up to make “in correspondence” precise and cover all cases. The particular challenges: sometimes it means “=”, at other times it means my (or your) custom “~” relation; and we ambitiously extend it to function types, e.g., IntSet → Bool.

We have two implementations of IntSet; let me call one of them “left side” and “on the left”, the other “right side” and “on the right”. Here is the ultimate purpose of the definitions. For arbitrary type expression T: DomL(T) is what T means on the left side; DomR(T) is what T means on the right side; and ⟨T⟩ is a relation between DomL(T) and DomR(T) that makes “in correspondence at type T” precise.

These can be achieved by structural recursion on type expressions. (I think you can already guess how to do DomL and DomR.)

DomL and DomR are defined by:

DomL(Bool) = Bool
DomL(Int) = Int
DomL(IntSet) = [Int]
DomL(T → U) = DomL(T) → DomL(U)
DomR(Bool) = Bool
DomR(Int) = Int
DomR(IntSet) = IntBST
DomR(T → U) = DomR(T) → DomR(U)

⟨T⟩ is defined by:

⟨Bool⟩ = equality for Bool
⟨Int⟩ = equality for Int
⟨IntSet⟩ = my “~”
⟨T→U⟩ = explained below

For ⟨T→U⟩, the motivation is “if inputs correspond, then outputs correspond” as hinted earlier. So we define formally:

fL ⟨T→U⟩ fR iff:
for all xL::DomL(T), xR::DomR(T) :
    if xL ⟨T⟩ xR, then fL xL ⟨U⟩ fR xR

For example, “lookupL ⟨Int→IntSet→Bool⟩ lookupB” expands to:

for all i::Int, j::Int, xs::[Int], t::IntBST:
    if i = j and xs ~ t, then lookupL i xs = lookupB j t

With that, the following are true of the two implementations:

emptysetL ⟨IntSet⟩ emptysetB
insertL ⟨Int→IntSet→IntSet⟩ insertB
lookupL ⟨Int→IntSet→Bool⟩ lookupB

That allows us to deduce: useL ⟨Bool⟩ useB. Generally, if program p has type T, then (p under list impl) ⟨T⟩ (p under tree impl).

The same reasoning applies to other abstract types and programs. This framework is summarized as the abstraction theorem (for less clutter, I state it for just one abstract type and one operation):

If:

the abstract type A is given a left implementation DomL(A) and a right implementation DomR(A), with a chosen correspondence ⟨A⟩ = (~) between them, and the one operation op :: U satisfies opL ⟨U⟩ opR

then:

every program p :: T that imports the module satisfies pL ⟨T⟩ pR

(Similarly if you have more operations and multiple abstract types.)

The Parametricity Theorem

I now transition from programming with abstract types to parametric polymorphic functions.

The type abstraction theorem can be rearranged as (giving explicit “for all” to AL, AR, (~), opL, opR):

for all type AL, type AR, relation (~) between AL and AR :
    let DomL(A) = AL, DomR(A) = AR, ⟨A⟩ = (~) in
    for all opL :: DomL(U), opR :: DomR(U) :
        if opL ⟨U⟩ opR
        then pL ⟨T⟩ pR

Applied to the IntSet example:

for all type AL, type AR, relation (~) between AL and AR :
    let DomL(IntSet) = AL, DomR(IntSet) = AR, ⟨IntSet⟩ = (~) in
    for all emptysetL :: DomL(IntSet), emptysetR :: DomR(IntSet),
            insertL :: DomL(Int→IntSet→IntSet), insertR :: DomR(Int→IntSet→IntSet),
            lookupL :: DomL(Int→IntSet→Bool), lookupR :: DomR(Int→IntSet→Bool):
        if emptysetL ⟨IntSet⟩ emptysetR
        and insertL ⟨Int→IntSet→IntSet⟩ insertR
        and lookupL ⟨Int→IntSet→Bool⟩ lookupR
        then useL ⟨Bool⟩ useR

If you perform dependency injection, you get this parametric polymorphic function:

polyUse :: a
        -> (Int -> a -> a)
        -> (Int -> a -> Bool)
        -> Bool
polyUse emp ins lkp = let
    s0 = emp
    s1 = ins 4 s0
    s2 = ins 5 s1
  in lkp 4 s2

Then useL is converted to “polyUse emptysetL insertL lookupL”. (AL is implicit, hidden under the act of instantiating a to AL; some people make it explicit with “polyUseAL”.) Similarly for useR.

And the type abstraction theorem can be converted accordingly:

for all type AL, type AR, relation (~) between AL and AR :
    let DomL(a) = AL, DomR(a) = AR, ⟨a⟩ = (~) in
    for all empL :: DomL(a), empR :: DomR(a),
            insL :: DomL(Int→a→a), insR :: DomR(Int→a→a),
            lkpL :: DomL(Int→a→Bool), lkpR :: DomR(Int→a→Bool):
        if empL ⟨a⟩ empR
        and insL ⟨Int→a→a⟩ insR
        and lkpL ⟨Int→a→Bool⟩ lkpR
        then polyUse empL insL lkpL ⟨Bool⟩ polyUse empR insR lkpR

Note that, by the definition of ⟨T→U⟩, it admits a drastic (cheesy?) simplification:

for all type AL, type AR, relation (~) between AL and AR :
    let DomL(a) = AL, DomR(a) = AR, ⟨a⟩ = (~) in
    polyUseAL ⟨a→(Int→a→a)→(Int→a→Bool)→Bool⟩ polyUseAR

That motivates extending the definition of ⟨⟩ to polymorphic types:

e1 ⟨∀a. U⟩ e2 iff:
for all type AL, type AR, relation ~ between AL and AR :
    let DomL(a) = AL, DomR(a) = AR, ⟨a⟩ = (~) in
    e1AL ⟨U⟩ e2AR

With that definition, this is true of polyUse:

polyUse ⟨∀a. a→(Int→a→a)→(Int→a→Bool)→Bool⟩ polyUse

The parametricity theorem says that it’s true in general:

For all type T in which all type variables are ∀-quantified (“T is a closed type”), for all term e::T : e⟨T⟩e.

This theorem is behind how we can use a few test cases on a polymorphic function to discover fairly general behaviour.

Worked Examples

Example: ∀a. a→(a→a)→a

Suppose e :: ∀a . a → (a → a) → a . I expand the parametricity theorem step by step:

  1. The parametricity theorem says:

    e ⟨∀a . a → (a → a) → a⟩ e

  2. Expand ⟨∀a. …⟩

    for all type AL, type AR, relation (~) between AL and AR :
        let DomL(a) = AL, DomR(a) = AR, ⟨a⟩ = (~) in
        e ⟨a → (a → a) → a⟩ e
  3. Expand ⟨a → (a → a) → a⟩

    for all type AL, type AR, relation (~) between AL and AR :
        let DomL(a) = AL, DomR(a) = AR, ⟨a⟩ = (~) in
        for all zL::DomL(a), zR::DomR(a), sL::DomL(a→a), sR::DomR(a→a) :
            if zL ⟨a⟩ zR and sL ⟨a → a⟩ sR
            then e zL sL ⟨a⟩ e zR sR
  4. Expand sL ⟨a → a⟩ sR

    for all type AL, type AR, relation (~) between AL and AR :
        let DomL(a) = AL, DomR(a) = AR, ⟨a⟩ = (~) in
        for all zL::DomL(a), zR::DomR(a), sL::DomL(a→a), sR::DomR(a→a) :
            if zL ⟨a⟩ zR and (for all x::DomL(a), y::DomR(a): if x ⟨a⟩ y then sL x ⟨a⟩ sR y)
            then e zL sL ⟨a⟩ e zR sR
  5. Expand DomL, DomR, ⟨a⟩

    for all type AL, type AR, relation (~) between AL and AR :
        for all zL::AL, zR::AR, sL::AL→AL, sR::AR→AR :
            if zL ~ zR and (for all x::AL, y::AR: if x ~ y then sL x ~ sR y)
            then e zL sL ~ e zR sR

(In future examples, I will jump straight to the final expansion.)

I will show that e is determined by e 0 succ, where succ = (λn → n+1). This and the proof below are inspired by the observation that the assumption looks like induction over the natural numbers.

Choose AL = ℕ, zL = 0, sL = succ. Choose (~) to be this functional relation: x~y iff f zR sR x = y, where f is defined recursively:

f z s 0 = z
f z s (n+1) = s (f z s n)

(Generally, f z s k = “apply s k times to z”.)
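As an aside, the n+1 pattern above is math notation rather than modern Haskell; a runnable rendering of f, using non-negative Integer in place of ℕ:

```haskell
-- f z s k applies s to z, k times (k assumed non-negative).
f :: a -> (a -> a) -> Integer -> a
f z _ 0 = z
f z s n = s (f z s (n - 1))
```

For example, f 0 (+1) 3 = 3, and f "" ('a':) 2 = "aa".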

This shows two common tricks: fix one side to be fully concrete (here ℕ, 0, succ) while leaving the other side fully general; and choose “~” to be a functional relation, i.e., the graph of a function (here x ~ y iff f zR sR x = y).

After that choosing, we get

for all type AR :
    for all zR::AR, sR::AR→AR :
        if f zR sR 0 = zR and (for all x::ℕ, y::AR: if f zR sR x = y then f zR sR (succ x) = sR y)
        then f zR sR (e 0 succ) = e zR sR

Then there are two assumptions to verify (left as an exercise): f zR sR 0 = zR; and for all x::ℕ, y::AR, if f zR sR x = y then f zR sR (succ x) = sR y.

With that, we get

for all type AR :
    for all zR::AR, sR::AR→AR :
        f zR sR (e 0 succ) = e zR sR

It means: e hides a secret natural number n, and all it does is to start with zR and apply sR n times. The secret n can be exposed by e 0 succ. (Advanced students go on to say that the type (∀a . a → (a → a) → a) encodes ℕ.)

For example, if e 0 succ = 2, then e zR sR = sR (sR zR).
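To make this concrete in Haskell (the particular e below is my own example, with secret n = 2):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A hypothetical inhabitant of the type; its secret natural number is 2.
e :: forall a. a -> (a -> a) -> a
e z s = s (s z)

-- Exposing the secret with 0 and succ:
secret :: Int
secret = e 0 succ   -- 2
```

And indeed, at any other type, e zR sR = sR (sR zR); for instance, e False not = not (not False) = False.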

Example: ∀a. a → [a]

Suppose e :: ∀a. a → [a] . Then the parametricity theorem expands to:

for all type AL, type AR, relation (~) between AL and AR :
    let ⟨a⟩ = (~) in
    for all xL::AL, xR::AR :
        if xL ~ xR
        then e xL ⟨[a]⟩ e xR

I haven’t told you what to do with ⟨[T]⟩, but here it is. Define inductively: [] ⟨[T]⟩ []; and if x ⟨T⟩ y and xs ⟨[T]⟩ ys, then (x:xs) ⟨[T]⟩ (y:ys); nothing else is related by ⟨[T]⟩.

You can understand it as: the two lists have the same length, and the respective elements are related by ⟨T⟩. But I show you the inductive definition because it generalizes to all algebraic data types. (But if you generalize “have the same length” to “have the same shape”, that’s the right idea.)

Usually the element-wise relation ⟨T⟩ is a function h; then ⟨[T]⟩ simplifies to the function map h, i.e., xs⟨[T]⟩ys iff map h xs = ys.

Going back to e :: ∀a. a → [a]. I prove that one special test, e (), tells you what e does in general. Choose AL=(), xL=(), (~) to be the function const xR.

(The () type is called “the unit type”. It has only one value, also written as (). You can think of the type definition data Unit = TheOneAndOnly apart from notation. You can also think of a singleton set.)

Then we get:

for all type AR :
    for all xR::AR :
        if const xR () = xR
        then map (const xR) (e ()) = e xR

The assumption (const xR () = xR) is easily verified. Conclusion:

for all type AR :
    for all xR::AR :
        map (const xR) (e ()) = e xR

So for example, if e () = [(), (), ()], then e xR = map (const xR) [(), (), ()] = [xR, xR, xR].
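Rendered as code (the particular e is my own example):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A hypothetical inhabitant; parametricity says any such function
-- can only replicate its argument a fixed number of times.
e :: forall a. a -> [a]
e x = [x, x, x]
```

One test at the unit type confirms e () = [(), (), ()]; the theorem then predicts e 4 = map (const 4) (e ()) = [4, 4, 4], with no further testing needed.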

Example: ∀a,b. (a,b) → a

The standard library function fst :: (a,b) -> a extracts the 1st field from the pair. We also find it intuitive that any function of that type has to do the same thing (or else hang). This is justified by the parametricity theorem too.

Suppose f :: ∀a,b. (a,b) → a . You can think of that type as nesting ∀’s: ∀a. ∀b. (a,b) → a . Expanding the parametricity theorem:

for all types AL, AR, BL, BR, relation (~) between AL and AR, relation (#) between BL and BR:
    let ⟨a⟩ = (~), ⟨b⟩ = (#) in
    for all pL::(AL,BL), pR::(AR,BR) :
        if pL ⟨(a,b)⟩ pR
        then f pL ~ f pR

I haven’t said what to do with ⟨(a,b)⟩, but it’s easily guessable:

(xl,yl) ⟨(T,U)⟩ (xr,yr) iff: xl⟨T⟩xr and yl⟨U⟩yr

Using that, and expanding “for all pL::(AL,BL)” to “for all xl::AL, yl::BL”, similarly for pR:

for all types AL, AR, BL, BR, relation (~) between AL and AR, relation (#) between BL and BR:
    for all xl::AL, yl::BL, xr::AR, yr::BR :
        if xl ~ xr and yl # yr
        then f (xl,yl) ~ f (xr,yr)

Choose AL=(), BL=(), xl=(), yl=(), (~) to be the function const xr, (#) to be the function const yr. Then:

for all types AR, BR:
    for all xr::AR, yr::BR :
        if const xr xl = xr and const yr yl = yr
        then const xr (f (xl,yl)) = f (xr,yr)

The assumptions are easily verified. Conclusion:

for all types AR, BR :
    for all xr::AR, yr::BR :
        const xr (f(xl,yl)) = f (xr,yr)

i.e., xr = f (xr,yr).
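As code (f below is my own sample inhabitant; parametricity says this is the only total behaviour available at this type):

```haskell
-- Any total function of this type must return the first component.
f :: (a, b) -> a
f (x, _) = x
```

So f (1, "anything") = 1, exactly as fst would give.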

Parametric vs Ad Hoc

The two names “parametric polymorphism” and “ad hoc polymorphism” were coined by Strachey for the difference in how a polymorphic function may depend on the type: a parametric function works by the same uniform algorithm at every type, whereas an ad hoc function may do unrelated things at different types.

The parametricity theorem is the outcome of Reynolds working out a mathematical definition. It was also his idea to start with the abstract type story.

Ad hoc polymorphism allows a program of type ∀a.T to do “if ‘a’ is Int, do something special”. This breaks the parametricity theorem.

Likewise, if a user of IntSet can do “if IntSet is a list, do something special”, this doesn’t treat IntSet as an abstract type, so it breaks the type abstraction theorem.

The kind of polymorphism provided by OO languages leans on the ad hoc side, e.g., two subclasses can implement a method in arbitrarily unrelated ways, and so the knowledge that they belong to the same superclass isn’t very informative.

In a principles of languages course, I’m obliged to fearmonger you with: If a language allows this, then someone will do it, and you have a much harder time reasoning about programs.

Outside the classroom, in fairness, programmers don’t set out to troll each other. You would implement the two subclasses in conceptually related ways, even though the precise relation is difficult to formalize. In this sense, programmers stay close to the spirit of parametric polymorphism, even in a language of ad hoc polymorphism. Still, this is open to one more kind of bug, so watch out.


I have more Haskell Notes and Examples