Poleiro: latest posts http://poleiro.info/atom.xml Arthur Azevedo de Amorim 2019-12-25T00:00:00Z Quotients in Coq http://poleiro.info/posts/2019-12-25-quotients-in-coq.html 2019-12-25T00:00:00Z 2019-12-25T00:00:00Z
Quotients are crucial in mathematical practice, and it is a shame that they are not available in Coq's standard library. There was a recent discussion on the Coq GitHub page on this issue and the consequences of implementing quotients like Lean does, where the eliminator for function types has a reduction rule that breaks pleasant metatheoretic properties such as subject reduction.
In this post, we are going to define quotients in Coq with three standard axioms:
• Functional extensionality
• Propositional extensionality
• Constructive definite description (also known as the axiom of unique choice)

Check @functional_extensionality_dep :
∀ A B (f g : ∀ x : A, B x),
(∀ x : A, f x = g x) → f = g.

Check @propositional_extensionality :
∀ P Q, (P ↔ Q) → P = Q.

Check @constructive_definite_description :
∀ A P, (exists! x : A, P x) → {x : A | P x}.

As far as axioms go, these three are relatively harmless. In particular, they are valid in any elementary topos, which are generally regarded as a good universe for doing constructive, higher-order reasoning. (Naturally, adding axioms in type theory does not come for free: since they have no useful reduction behavior, our quotients won't compute.)

Section Quotient.

We define the quotient of T by an equivalence relation R as usual: it is the type of equivalence classes of R.

Context (T : Type) (R : relation T) (RP : Equivalence R).

Record quot := Quot_ {
quot_class : T → Prop;
quot_classP : ∃ x, quot_class = R x;
}.

The projection into the quotient is given by the Quot constructor below, which maps x to its equivalence class R x. This definition satisfies the usual properties: Quot x = Quot y if and only if R x y. The "if" direction requires the principle of proof irrelevance, which is a consequence of propositional extensionality.

Definition Quot (x : T) : quot :=
@Quot_ (R x) (ex_intro _ x erefl).

Lemma Quot_inj x y : Quot x = Quot y → R x y.
Proof.
move=> e; rewrite -[R x y]/(quot_class (Quot x) y) e //=; reflexivity.
Qed.

Lemma eq_Quot x y : R x y → Quot x = Quot y.
Proof.
move=> e; rewrite /Quot; move: (ex_intro _ y _).
suff ->: R y = R x.
move=> ?; congr Quot_; exact: proof_irrelevance.
apply: functional_extensionality=> z.
apply: propositional_extensionality.
by rewrite /= e.
Qed.

We can also show that Quot is surjective by extracting the witness in the existential.
Lemma Quot_inv q : ∃ x, q = Quot x.
Proof.
case: q=> [P [x xP]]; exists x; move: (ex_intro _ _ _).
rewrite xP=> e; congr Quot_; exact: proof_irrelevance.
Qed.

Unique choice comes into play when defining the elimination principles for the quotient. In its usual non-dependent form, the principle says that we can lift a function f : T → S to a function quot → S provided that f is constant on equivalence classes. We define a more general dependently typed version, which in particular allows us to prove a property S q by proving that S (Quot x) holds for any x. The statement of the compatibility condition for f is a bit complicated because it needs to equate terms of the different types S (Quot x) and S (Quot y), which requires us to transport the left-hand side along the equivalence R x y.

Section Elim.

Definition cast A B (e : A = B) : A → B :=
match e with erefl => id end.

Context (S : quot → Type) (f : ∀ x, S (Quot x)).
Context (fP : ∀ x y (exy : R x y), cast (congr1 S (eq_Quot exy)) (f x) = f y).

We begin with an auxiliary result that uniquely characterizes the result of applying the eliminator to an element q : quot. Thanks to unique choice, this allows us to define the eliminator as a function quot_rect.

Lemma quot_rect_subproof (q : quot) :
exists! a : S q, ∃ x (exq : Quot x = q), a = cast (congr1 S exq) (f x).
Proof.
case: (Quot_inv q)=> x -> {q}.
exists (f x); split=> [|a]; first by exists x, erefl.
case=> y [eyx -> {a}].
by rewrite (proof_irrelevance _ eyx (eq_Quot (Quot_inj eyx))) fP.
Qed.

Definition quot_rect q : S q :=
sval (constructive_definite_description _ (quot_rect_subproof q)).

Lemma quot_rectE x : quot_rect (Quot x) = f x.
Proof.
rewrite /quot_rect.
case: constructive_definite_description=> _ [y [eyx /= ->]].
by rewrite (proof_irrelevance _ eyx (eq_Quot (Quot_inj eyx))) fP.
Qed.

End Elim.

In the non-dependent case, the compatibility condition acquires its usual form.

Section Rec.

Context S (f : T → S) (fP : ∀ x y, R x y → f x = f y).

Definition congr1CE (A B : Type) (b : B) x y (e : x = y) :
congr1 (λ _ : A, b) e = erefl :=
match e with erefl => erefl end.

Definition quot_rec : quot -> S :=
@quot_rect (λ _, S) f
(λ x y exy, etrans
(congr1 (λ p, cast p (f x)) (congr1CE S (eq_Quot exy)))
(fP exy)).

Lemma quot_recE x : quot_rec (Quot x) = f x.
Proof. by rewrite /quot_rec quot_rectE. Qed.

End Rec.

End Quotient.
A verification challenge http://poleiro.info/posts/2019-12-24-a-verification-challenge.html 2019-12-24T00:00:00Z 2019-12-24T00:00:00Z
Hillel Wayne posted a challenge to find out whether functional programs are really easier to verify than imperative ones, as some claim. There were three problems, and I am posting Coq solutions here (two of them by myself).

## Leftpad

The first challenge was to prove the correctness of a padding function. Given a padding character c, a length n and a sequence s, left_pad c n s should pad s with c on the left until its size reaches n. If the original string is larger than n, the function should do nothing. My implementation is similar to other solutions on the web, and reuses functions and lemmas available in the Math Comp libraries.
Implicit Types (c : char) (i n : nat) (s : seq char).

Definition left_pad c n s := nseq (n - size s) c ++ s.

The call to nseq creates a constant sequence with the number of cs that needs to be added, and appends that sequence to s. Note that subtraction is truncated: n - size s is equal to 0 when n <= size s.
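As a small sanity check, we can confirm both the padding and the truncation behavior by computation. Since the post's char type is abstract, this sketch uses a hypothetical nat instance of the same definition (left_pad_nat and the Example names are my own):

```coq
From mathcomp Require Import all_ssreflect.

(* A nat instance of the post's left_pad, padding with 0. *)
Definition left_pad_nat n (s : seq nat) := nseq (n - size s) 0 ++ s.

(* Padding: 5 - 2 = 3 zeroes are prepended. *)
Example left_pad_pads : left_pad_nat 5 [:: 1; 2] = [:: 0; 0; 0; 1; 2].
Proof. by []. Qed.

(* Truncated subtraction: 1 - 2 = 0, so an oversized input is unchanged. *)
Example left_pad_noop : left_pad_nat 1 [:: 1; 2] = [:: 1; 2].
Proof. by []. Qed.
```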
As the specification for left_pad, I am taking the properties that were verified in the original solution in Hillel's post, which was written in Dafny. My proofs are not automated like in Dafny, but still fairly easy. (The statements are slightly different because they use the nth function to index into a sequence, which requires a default element to return when the index overflows.)

Lemma left_pad1 c n s : size (left_pad c n s) = maxn n (size s).
Proof.
by rewrite /left_pad size_cat size_nseq addnC -maxnE maxnC.
Qed.

Lemma left_pad2 c n s i :
i < n - size s ->
nth c (left_pad c n s) i = c.
Proof.
move=> bound.
by rewrite /left_pad nth_cat size_nseq bound nth_nseq bound.
Qed.

Lemma left_pad3 c n s i :
nth c (left_pad c n s) (n - size s + i) = nth c s i.
Proof.
by rewrite /left_pad nth_cat size_nseq ltnNge leq_addr //= addKn.
Qed.

Interestingly, these properties completely characterize the result of left_pad: any function f that satisfies the same specification must produce the same output.

(forall c n s, size (f c n s) = maxn n (size s)) ->
(forall c n s i, i < n - size s -> nth c (f c n s) i = c) ->
(forall c n s i, nth c (f c n s) (n - size s + i) = nth c s i) ->
forall c n s, f c n s = left_pad c n s.
Some fans of functional programming claim that it obviates the need for verification, since the code serves as its own specification. Though exaggerated (as we will see in the last problem), the claim does have a certain appeal: the definition of left_pad is shorter and more direct than the specification that we proved.

## Unique

The second problem of the challenge asked to remove all duplicate elements from a sequence. I decided not to include a solution of my own here, since the Math Comp library already provides a function undup that does this. We had to show that the set of elements in its output is the same as in its input, and that no element occurs twice, properties that are both proved in Math Comp (mem_undup and undup_uniq). Hillel wrote that imperative programs had an advantage for this problem because of its symmetry. I am not sure what was meant by that, but the Dafny and the Coq proofs are comparable in terms of complexity. (To be fair, Dafny has better automation than Coq, but this has probably little to do with being functional or imperative: other functional provers fare better than Coq in this respect as well.)

## Fulcrum

The last problem was also the most challenging. The goal was to compute the fulcrum of a sequence of integers s, which is defined as the index i that minimizes the quantity fv s i shown below in absolute value.
Implicit Types (s : seq int) (n : int).

Definition fv s i :=
\sum_(0 <= l < i) s`_l - \sum_(i <= l < size s) s`_l.

Definition is_fulcrum s i := forall j, `|fv s i| <= `|fv s j|.

The simplest way out would be to compute fv s i for all indices i and return the index that yields the smallest value. Instead of writing this program ourselves, we can just reuse the arg min function available in Math Comp.

Definition fulcrum_naive s :=
[arg min_(i < ord0 : 'I_(size s).+1) `|fv s i|]%O.

Lemma fvE s i :
fv s i = (\sum_(0 <= l < i) s`_l) *+ 2 - \sum_(0 <= l < size s) s`_l.

Lemma fv_overflow s i : fv s i = fv s (minn i (size s)).

Lemma fulcrum_naiveP s : is_fulcrum s (fulcrum_naive s).
Proof.
rewrite /fulcrum_naive; case: Order.TotalTheory.arg_minP=> //= i _ iP j.
move/(_ (inord (minn j (size s))) erefl): iP.
rewrite (_ : fv s (inord _) = fv s j) //= [RHS]fv_overflow.
by rewrite inordK ?ltnS ?geq_minr //.
Qed.

Unfortunately, this naive implementation runs in quadratic time, and the problem asked for a linear solution. We can do better by folding over s and computing the values of fv s i incrementally. With a left fold, we can compute the fulcrum with four auxiliary variables defined in the state type below, without the need for extra stack space. (This is a bit more efficient than the original solution, which had to store two sequences of partial sums.)

Record state := State {
curr_i : nat; (* The current position in the list *)
best_i : nat; (* The best fulcrum found so far    *)
curr : int; (* = fv s curr_i                    *)
best : int; (* = fv s best_i                    *)
}.

On each iteration, the fulcrum_body function updates the state st given n, the current element of the sequence. The main function, fulcrum, just needs to provide a suitable initial state and return the final value of best_i.

Definition fulcrum_body st n :=
let curr' := n *+ 2 + st.(curr) in
let curr_i' := st.(curr_i).+1 in
let best' := if `|curr'| < `|st.(best)| then curr' else st.(best) in
let best_i' := if `|curr'| < `|st.(best)| then curr_i' else st.(best_i) in
State curr_i' best_i' curr' best'.

Definition fulcrum s :=
let k := - foldl +%R 0 s in
(foldl fulcrum_body (State 0 0 k k) s).(best_i).

To verify fulcrum, we first prove a lemma about foldl. It says that we can prove that some property I holds of the final loop state by showing that it holds of the initial state x0 and that it is preserved on every iteration; in other words, that I is a loop invariant. The property is parameterized by a "ghost variable" i, which counts the number of iterations performed.

Lemma foldlP T S f (s : seq T) x0 (I : nat -> S -> Prop) :
I 0%N x0 ->
(forall i x a, (i < size s)%N -> I i x -> I i.+1 (f x (nth a s i))) ->
I (size s) (foldl f x0 s).

We complete the proof by instantiating foldlP with the fulcrum_inv predicate below. This predicate ensures, among other things, that best_i holds the fulcrum for the first i positions of the sequence s. Hence, when the loop terminates, we know that best_i is the fulcrum for all of s.

Variant fulcrum_inv s i st : Prop :=
| FulcrumInv of
(st.(best_i) <= i)%N &
st.(curr_i) = i &
st.(best) = fv s st.(best_i) &
st.(curr) = fv s i &
(forall j, (j <= i)%N -> `|fv s st.(best_i)| <= `|fv s j|).

Lemma fulcrumP s : is_fulcrum s (fulcrum s).
Proof.
rewrite /fulcrum; have ->: - foldl +%R 0 s = fv s 0.
by rewrite /fv big_geq // (foldl_idx [law of +%R]) (big_nth 0) add0r.
set st := foldl _ _ _; suff: fulcrum_inv s (size s) st.
case=> ???? iP j; rewrite [fv s j]fv_overflow; apply: iP.
exact: geq_minr.
apply: foldlP=> {st}; first by split=> //=; case.
move=> i [_ best_i _ _] a iP [/= bP -> -> -> inv].
rewrite (set_nth_default 0 a iP) {a}.
have e: fv s i.+1 = s`_i *+ 2 + fv s i.
split=> //=; do 1?[by case: ifP; rewrite // -ltnS ltnW].
move=> j; case: ltngtP => // [j_i|->] _.
case: ifP=> [|_]; last exact: inv.
rewrite -e => /Order.POrderTheory.ltW {}iP.
apply: Order.POrderTheory.le_trans iP _; exact: inv.
by case: Order.TotalTheory.leP => //; rewrite -e.
Qed.

## Conclusion

So, is functional really better than imperative for verification? Though I am a fan of functional languages, I will not attempt to offer a definitive answer. It is true that simple programs such as left_pad are often as good (or better) than their specifications, but some of those would be inefficient in practice, and fast implementations such as fulcrum might be tricky to verify.
As mentioned in Hillel's post, the main issue when verifying imperative code is aliasing: we must argue that every call preserves the assumptions made in other parts of the code, something that comes for free with functional programming. However, if a program only modifies local state (like the Dafny solutions did), reasoning about this kind of interference becomes much more manageable.
Of course, nothing prevents us from combining the benefits of functional and imperative programming. We can implement an efficient algorithm using mutable state and verify it against a specification that mixes logic and pure functions. Indeed, that is the strategy followed by the Dafny solution, and many frameworks for verifying imperative code only allow pure functions in specifications. Moreover, it is possible to define imperative languages within Coq and use the logic to verify programs in this language; this is the essence of frameworks such as VST or Iris.
Contravariance and Recursion http://poleiro.info/posts/2019-08-06-contravariance-and-recursion.html 2019-08-06T00:00:00Z 2019-08-06T00:00:00Z
Convincing Coq that a function terminates can be frustrating. When modeling programming languages with subtyping, in particular, you might run into situations where the contravariance of function types forces you to swap the arguments of a recursive function. Coq does not know how to handle this recursion pattern and rejects the definition. This post presents a technique for dealing with such cases.
Suppose that we want to model a programming language with the following types:

Inductive type :=
| Top (* Any value whatsoever, akin to Object in Java or Ruby *)
| Int
| Arrow (T S : type) (* Functions from T to S *).

Notation "T ⟶ S" := (Arrow T S) (at level 25, right associativity).

Our language comes with a subtyping relation T <: S that tells when it is safe to use elements of T in a context that expects elements of S. Its definition is given below as an inductive declaration.
Inductive subtype : type -> type -> Prop :=
| STTop T
: T <: Top

| STInt
: Int <: Int

| STArrow T1 T2 S1 S2 of
T2 <: T1 & S1 <: S2
: T1 ⟶ S1 <: T2 ⟶ S2

where "T <: S" := (subtype T S).

Since Top contains arbitrary values, it makes sense for it to be a supertype of every other type in the language. Things get more interesting for functions, as subtyping goes in opposite directions for arguments and results; in common jargon, we say that subtyping treats arguments contravariantly and results covariantly. If arguments worked covariantly, any function f of type (Int ⟶ Int) ⟶ Int would also have the type Top ⟶ Int, since Int ⟶ Int <: Top. Thus, it would be possible to apply f to an Int, which has no reason to be safe if f expects a function argument.
We would like to show that subtyping is decidable by expressing this relation as a boolean function. Its definition is not difficult in principle, but the naive attempt is rejected:

Fail Fixpoint subtypeb T S {struct T} :=
match T, S with
| _, Top => true
| Int, Int => true
| T1 ⟶ S1, T2 ⟶ S2 => subtypeb T2 T1 && subtypeb S1 S2
| _, _ => false
end.

Recursive definition of subtypeb is ill-formed.
...
Recursive call to subtypeb has principal argument equal to "T2"
instead of one of the following variables: "T1" "S1".
What happened? Recursive functions in Coq are always defined with respect to a principal argument. In this case, the argument was marked as T using the struct keyword, but it can usually be inferred by Coq automatically. For the definition to be valid, all the recursive calls must be performed on principal arguments that are subterms of the original principal argument. However, one of the calls to subtypeb swaps its two arguments, leading to the above error message.
Coq provides well-founded recursion to circumvent this rigid check, which allows recursive calls whenever the arguments decrease according to a suitable relation. In the case of subtypeb, for instance, we could show termination by proving that the combined sizes of the two arguments decrease on every call. However, there is a more direct route in this case: it suffices to add a flag to the function to specify if we are computing subtyping in the correct or opposite order. If we flip the flag on the first call, we do not need to swap the arguments.

Fixpoint subtypeb_gen b T S :=
match T, S with
| Top, Top => true
| Top, _ => ~~ b
| _ , Top => b
| Int, Int => true
| T1 ⟶ S1, T2 ⟶ S2 => subtypeb_gen (~~ b) T1 T2 && subtypeb_gen b S1 S2
| _, _ => false
end.

Definition subtypeb := subtypeb_gen true.

Notation "T <:? S" := (subtypeb T S) (at level 20, no associativity).
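As a quick sanity check of subtypeb (these two Example statements are my own addition, not from the original post), contravariance makes Top ⟶ Int usable where Int ⟶ Int is expected, but not the other way around:

```coq
(* Top ⟶ Int accepts any argument, so it specializes Int ⟶ Int:
   the flag is flipped for the argument types, and Int <: Top holds. *)
Example subtypeb_contra : (Top ⟶ Int) <:? (Int ⟶ Int).
Proof. by []. Qed.

(* The converse fails: an Int ⟶ Int function cannot handle arbitrary
   Top arguments, so the contravariant check Top <: Int comes out false. *)
Example subtypeb_contra' : ~~ ((Int ⟶ Int) <:? (Top ⟶ Int)).
Proof. by []. Qed.
```

Both statements are closed by computation alone, since subtypeb_gen is a plain structural fixpoint.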

To relate the two definitions of subtyping, we first show that flipping the flag of subtypeb_gen is equivalent to swapping the other arguments.

Lemma subtypeb_genC b T S : subtypeb_gen (~~ b) T S = subtypeb_gen b S T.
Proof.
by elim: T S b => [| |? IH1 ? IH2] [| |??] //= b; rewrite -?IH1 ?IH2 negbK.
Qed.

This yields a more convenient equation for subtypeb. (We match on S first to ensure that the RHS is definitionally equal to true when S is Top.)

Lemma subtypebE T S :
T <:? S =
match S, T with
| Top, _ => true
| Int, Int => true
| T2 ⟶ S2, T1 ⟶ S1 => T2 <:? T1 && S1 <:? S2
| _ , _ => false
end.
Proof.
case: T S=> [| |??] [| |??]; by rewrite /subtypeb //= -(subtypeb_genC false).
Qed.

Finally, we can prove that the two definitions are equivalent using the reflect predicate of Ssreflect. Showing that the inductive formulation implies the boolean one is easy: we just proceed by induction on the relation. Proving the converse is a bit more subtle, as it requires generalizing the statement with subtypeb_gen.

Lemma subtypebP T S : reflect (T <: S) (T <:? S).
Proof.
apply/(iffP idP); last first.
by elim: T S / => [T| |???? _ IH1 _ IH2] //=; rewrite subtypebE ?IH1 ?IH2.
suffices /(_ true): forall b, subtypeb_gen b T S ->
subtype (if b then T else S) (if b then S else T) by [].
elim: T S=> [| |T1 IH1 S1 IH2] [| |T2 S2] [] //=;
try case/andP=> /IH1 ? /IH2 ?; by constructor.
Qed.

### Update (August 9, 2019)

Robert Rand and Anton Trunov proposed an alternative approach based on mutual recursion, which I've included here for completeness. Notice how the two recursive functions have different recursive arguments.

Definition body T S := fun subtypeb subtypeb' =>
match T, S with
| _, Top => true
| Int, Int => true
| T1 ⟶ S1, T2 ⟶ S2 => subtypeb' T2 T1 && subtypeb S1 S2
| _, _ => false
end.

Fixpoint subtypeb T S {struct T} := body T S subtypeb subtypeb'
with subtypeb' T S {struct S} := body T S subtypeb' subtypeb.

Notation "T <:? S" := (subtypeb T S) (at level 20, no associativity).

Lemma subtypeb'_subtypeb S T :
(subtypeb' T S = subtypeb T S) * (subtypeb' S T = subtypeb S T).
Proof.
elim: S T=> [| |Ts IHt Ss IHs] [| | Tt St] //=.
by rewrite !IHt !IHs.
Qed.

Lemma subtypebP T S : reflect (T <: S) (T <:? S).
Proof.
apply/(iffP idP); last first.
- by elim: T S / => [[] //| |???? _ IH1 _ IH2] //=; rewrite subtypeb'_subtypeb IH1 IH2.
suffices : (T <:? S -> T <: S) /\ (S <:? T -> S <: T) by case.
elim: S T => [| |T1 IH1 S1 IH2] [| |T2 S2] //=; try by split=>//; constructor.
rewrite !subtypeb'_subtypeb.
split; case/andP.
- by move=> /(proj2 (IH1 _)) subT12 /(proj1 (IH2 _)); constructor.
by move=> /(proj1 (IH1 _)) subT21 /(proj2 (IH2 _)); constructor.
Qed.
Checking for Constructors http://poleiro.info/posts/2018-10-15-checking-for-constructors.html 2018-10-15T00:00:00Z 2018-10-15T00:00:00Z
When writing tactics, there are situations where it would be useful to know whether a term begins with a constructor. One possibility is to extract the head of the term and use the is_constructor tactic.

Ltac head t :=
match t with
| ?t' _ => head t'
| _ => t
end.

Ltac head_constructor t :=
let t' := head t in is_constructor t'.

Goal True.
(* These calls succeed... *)
head_constructor 0.
head_constructor (S 0).
(* ...but this one fails *)
Fail head_constructor (1 + 1).
(*
The command has indeed failed with message:
In nested Ltac calls to "head_constructor" and "is_constructor", last call failed.
Tactic failure: not a constructor.
*)

Abort.

As of version 8.8, the is_constructor tactic is undocumented, but this should hopefully be fixed soon. Interestingly, we can achieve almost the same effect without is_constructor, by observing that Coq does not reduce a fixpoint unless it is applied to a term that begins with a constructor.

Ltac head_constructor' t :=
match type of t with
| ?T =>
let r := eval cbn in ((fix loop (t' : T) {struct t'} := tt) t) in
match r with tt => idtac end
end.

Goal True.
(* Succeed *)
Abort.

Unlike the previous version, this one reduces the term using cbn before testing it. Thus, the following test succeeds, while the analogous one before failed:

Goal True.
(* Succeeds *)
Abort.

Update: Anton Trunov pointed out that we can emulate the behavior of the first tactic with a fixpoint by adding a flag in the call to eval:

Ltac head_constructor'' t :=
match type of t with
| ?T =>
let r := eval cbn fix in ((fix loop (t' : T) {struct t'} := tt) t) in
match r with tt => idtac end
end.

Now, only the outer fixpoint is reduced.

Goal True.
Abort.
Equality in Coq http://poleiro.info/posts/2018-01-26-equality-in-coq.html 2018-01-26T00:00:00Z 2018-01-26T00:00:00Z
Coq views truth through the lens of provability. The hypotheses it manipulates are not mere assertions of truth, but formal proofs of the corresponding statements ─ data structures that can be inspected to build other proofs. It is not a coincidence that function types and logical implication use the same notation, A -> B, because proofs of implication in Coq are functions: they take proofs of the precondition as inputs and return a proof of the consequent as the output. Such proofs are written with the same language we use for programming in Coq; tactics are but scripts that build such programs for us. A proof that implication is transitive, for example, amounts to function composition.

Definition implication_is_transitive (A B C : Prop) :
(A -> B) -> (B -> C) -> (A -> C) :=
fun HAB HBC HA => HBC (HAB HA).

Similarly, inductive propositions in Coq behave just like algebraic data types in typical functional programming languages. With pattern matching, we can check which constructor was used in a proof and act accordingly.

Definition or_false_r (A : Prop) : A \/ False -> A :=
fun (H : A \/ False) =>
match H with
| or_introl HA => HA
| or_intror contra => match contra with end
end.

Disjunction \/ is an inductive proposition with two constructors, or_introl and or_intror, whose arguments are proofs of its left and right sides. In other words, a proof of A \/ B is either a proof of A or a proof of B. Falsehood, on the other hand, is an inductive proposition with no constructors. Matching on a proof of False does not require us to consider any cases, thus allowing the expression to have any type we please. This corresponds to the so-called principle of explosion, which asserts that from a contradiction, anything follows.
The idea of viewing proofs as programs is known as the Curry-Howard correspondence. It has been a fruitful source of inspiration for the design of many other logics and programming languages beyond Coq, other noteworthy examples including Agda and Nuprl. I will discuss a particular facet of this correspondence in Coq: the meaning of a proof of equality.

## Defining equality

The Coq standard library defines equality as an indexed inductive proposition. (The familiar x = y syntax is provided by the standard library using Coq's notation mechanism.)

Inductive eq (T : Type) (x : T) : T -> Prop :=
| eq_refl : eq T x x.

This declaration says that the most basic way of showing x = y is when x and y are the "same" term ─ not in the strict sense of syntactic equality, but in the more lenient sense of equality "up to computation" used in Coq's theory. For instance, we can use eq_refl to show that 1 + 1 = 2, because Coq can simplify the left-hand side using the definition of + and arrive at the right-hand side.
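To make this concrete, here is a small supplementary example (not from the original post): eq_refl proves 1 + 1 = 2 outright, but it cannot prove n + 0 = n for a variable n, because + recurses on its first argument, leaving n + 0 stuck.

```coq
(* Both sides compute to 2, so the proof is just reflexivity. *)
Definition one_plus_one : 1 + 1 = 2 := eq_refl.

(* n + 0 does not reduce for a variable n; Coq rejects this term. *)
Fail Definition addn0_refl (n : nat) : n + 0 = n := eq_refl.
```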
To prove interesting facts about equality, we generally use the rewrite tactic, which in turn is implemented by pattern matching. Matching on proofs of equality is more complicated than for typical data types because it is a non-uniform indexed proposition ─ that is, the value of its last argument is not fixed for the whole declaration, but depends on the constructor used. (This non-uniformity is what allows us to put two occurrences of x on the type of eq_refl.)
Concretely, suppose that we have elements x and y of a type T, and a predicate P : T -> Prop. We want to prove that P y holds assuming that x = y and P x hold. This can be done with the following program:

Definition rewriting
(T : Type) (P : T -> Prop) (x y : T) (p : x = y) (H : P x) : P y :=
match p in _ = z return P z with
| eq_refl => H
end.

Compared to common match expressions, this one has two extra clauses. The first, in _ = z, allows us to provide a name to the non-uniform argument of the type of p. The second, return P z, allows us to declare what the return type of the match expression is as a function of z. At the top level, z corresponds to y, which means that the whole match has type P y. When checking each individual branch, however, Coq requires proofs of P z using values of z that correspond to the constructors of that branch. Inside the eq_refl branch, z corresponds to x, which means that we have to provide a proof of P x. This is why the use of H there is valid.
To illustrate, here are proofs of two basic facts about equality: transitivity and symmetry.

Definition etrans {T} {x y z : T} (p : x = y) (q : y = z) : x = z :=
match p in _ = w return w = z -> x = z with
| eq_refl => fun q' : x = z => q'
end q.

Definition esym {T} {x y : T} (p : x = y) : y = x :=
match p in _ = z return z = x with
| eq_refl => eq_refl
end.

Notice the return clause in the first proof, which uses a function type. We cannot use w = z alone, as the final type of the expression would be y = z. The other reasonable guess, x = z, wouldn't work either, since we would have nothing of that type to return in the branch ─ q has type y = z, and Coq does not automatically change it to x = z just because we know that x and y ought to be equal inside the branch. The practice of returning a function is so common when matching on dependent types that it even has its own name: the convoy pattern, a term coined by Adam Chlipala in his CDPT book.
In addition to functions, pretty much any type expression can go in the return clause of a match. This flexibility allows us to derive many basic reasoning principles ─ for instance, the fact that constructors are disjoint and injective.

Definition eq_Sn_m (n m : nat) (p : S n = m) :=
match p in _ = k return match k with
| 0 => False
| S m' => n = m'
end with
| eq_refl => eq_refl
end.

Definition succ_not_zero n : S n <> 0 :=
eq_Sn_m n 0.

Definition succ_inj n m : S n = S m -> n = m :=
eq_Sn_m n (S m).

In the eq_refl branch, we know that k is of the form S n. By substituting this value in the return type, we find that the result of the branch must have type n = n, which is why eq_refl is accepted. Since this is the only value of k we have to handle, it doesn't matter that False appears in the return type of the match: that branch will never be used. The more familiar lemmas succ_not_zero and succ_inj simply correspond to special cases of eq_Sn_m. A similar trick can be used for many other inductive types, such as booleans, lists, and so on.
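For booleans, for instance, the same trick shows that true and false are distinct. This small sketch is my own addition, following the pattern of eq_Sn_m:

```coq
(* At the top level, b is false, so the match has type False.
   In the eq_refl branch, b is true, so returning I : True suffices. *)
Definition true_neq_false (p : true = false) : False :=
  match p in _ = b return if b then True else False with
  | eq_refl => I
  end.
```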

## Mixing proofs and computation

Proofs can be used not only to build other proofs, but also in more conventional programs. If we know that a list is not empty, for example, we can write a function that extracts its first element.

From mathcomp Require Import seq.

Definition first {T} (s : seq T) (Hs : s <> [::]) : T :=
match s return s <> [::] -> T with
| [::] => fun Hs : [::] <> [::] => match Hs eq_refl with end
| x :: _ => fun _ => x
end Hs.

Here we see a slightly different use of dependent pattern matching: the return type depends on the analyzed value s, not just on the indices of its type. The rules for checking that this expression is valid are the same: we substitute the pattern of each branch for s in the return type, and ensure that it is compatible with the result it produces. On the first branch, this gives a contradictory hypothesis [::] <> [::], which we can discard by pattern matching as we did earlier. On the second branch, we can just return the first element of the list.
Proofs can also be stored in regular data structures. Consider for instance the subset type {x : T | P x}, which restricts the elements of the type T to those that satisfy the predicate P. Elements of this type are of the form exist x H, where x is an element of T and H is a proof of P x. Here is an alternative version of first, which expects the arguments s and Hs packed as an element of a subset type.

Definition first' {T} (s : {s : seq T | s <> [::]}) : T :=
match s with
| exist s Hs => first s Hs
end.

While powerful, this idiom comes with a price: when reasoning about a term that mentions proofs, the proofs must be explicitly taken into account. For instance, we cannot show that two elements exist x H1 and exist x H2 are equal just by reflexivity; we must explicitly argue that the proofs H1 and H2 are equal. Unfortunately, there are many cases in which this is not possible ─ for example, two proofs of a disjunction A \/ B need to use the same constructor to be considered equal.
The situation is not as bad as it might sound, because Coq was designed to allow a proof irrelevance axiom without compromising its soundness. This axiom says that any two proofs of the same proposition are equal.

Axiom proof_irrelevance : forall (P : Prop) (p q : P), p = q.

If we are willing to extend the theory with this axiom, much of the pain of mixing proofs and computation goes away; nevertheless, it is a bit upsetting that we need an extra axiom to make the use of proofs in computation practical. Fortunately, much of this convenience is already built into Coq's theory, thanks to the structure of proofs of equality.

## Proof irrelevance and equality

A classical result of type theory says that equalities between elements of a type T are proof irrelevant provided that T has decidable equality. Many useful properties can be expressed in this way; in particular, any boolean function f : S -> bool can be seen as a predicate S -> Prop defined as fun x : S => f x = true. Thus, if we restrict subset types to computable predicates, we do not need to worry about the proofs that appear in its elements.
You might wonder why any assumptions are needed in this result ─ after all, the definition of equality only had a single constructor; how could two proofs be different? Let us begin by trying to show the result without relying on any extra assumptions. We can show that general proof irrelevance can be reduced to irrelevance of "reflexive equality": all proofs of x = x are equal to eq_refl x.

Section Irrelevance.

Variable T : Type.
Implicit Types x y : T.

Definition almost_irrelevance :
(forall x (p : x = x), p = eq_refl x) ->
(forall x y (p q : x = y), p = q) :=
fun H x y p q =>
match q in _ = z return forall p' : x = z, p' = q with
| eq_refl => fun p' => H x p'
end p.

This proof uses the extended form of dependent pattern matching we have seen in the definition of first: the return type mentions q, the very element we are matching on. It also uses the convoy pattern to "update" the type of p with the extra information gained by matching on q.
The almost_irrelevance lemma may look like progress, but it does not actually get us anywhere, because there is no way of proving its premise without assumptions. Here is a failed attempt.

Fail Definition irrelevance x (p : x = x) : p = eq_refl x :=
match p in _ = y return p = eq_refl x with
| eq_refl => eq_refl
end.

Coq complains that the return clause is ill-typed: its right-hand side has type x = x, but its left-hand side has type x = y. That is because when checking the return type, Coq does not use the original type of p, but the one obtained by generalizing the index of its type according to the in clause.
It took many years to understand that, even though the inductive definition of equality only mentions one constructor, it is possible to extend the type theory to allow for provably different proofs of equality between two elements. Homotopy type theory, for example, introduced a univalence principle that says that proofs of equality between two types correspond to isomorphisms between them. Since there are often many different isomorphisms between two types, irrelevance cannot hold in full generality.
To obtain an irrelevance result, we must assume that T has decidable equality.

Hypothesis eq_dec : forall x y, x = y \/ x <> y.

The argument roughly proceeds as follows. We use decidable equality to define a normalization procedure that takes a proof of equality as input and produces a canonical proof of equality of the same type as output. Crucially, the output of the procedure does not depend on its input. We then show that the normalization procedure has an inverse, allowing us to conclude: all proofs must be equal to the canonical one.
Here is the normalization procedure.

Definition normalize {x y} (p : x = y) : x = y :=
match eq_dec x y with
| or_introl e => e
| or_intror _ => p
end.

If x = y holds, eq_dec x y must return something of the form or_introl e, the other branch being contradictory. This implies that normalize is constant.

Lemma normalize_const {x y} (p q : x = y) : normalize p = normalize q.
Proof. by rewrite /normalize; case: (eq_dec x y). Qed.

The inverse of normalize is defined by combining transitivity and symmetry of equality.

Notation "p * q" := (etrans p q).

Notation "p ^-1" := (esym p)
(at level 3, left associativity, format "p ^-1").

Definition denormalize {x y} (p : x = y) := p * (normalize (eq_refl y))^-1.

As the above notation suggests, we can show that esym is the inverse of etrans, in the following sense.

Definition etransK x y (p : x = y) : p * p^-1 = eq_refl x :=
match p in _ = y return p * p^-1 = eq_refl x with
| eq_refl => eq_refl (eq_refl x)
end.

This proof avoids the problem that we encountered in our failed proof of irrelevance, which resulted from generalizing the right-hand side of the type of p. In this return type, p * p^-1 has type x = x, which matches the type of eq_refl x. Notice why the result of the eq_refl branch is valid: we must produce something of type eq_refl x * (eq_refl x)^-1 = eq_refl x, but by the definitions of etrans and esym, the left-hand side computes precisely to eq_refl x.
Armed with etransK, we can now relate normalize to its inverse, and conclude the proof of irrelevance.

Definition normalizeK x y (p : x = y) :
normalize p * (normalize (eq_refl y))^-1 = p :=
match p in _ = y return normalize p * (normalize (eq_refl y))^-1 = p with
| eq_refl => etransK x x (normalize (eq_refl x))
end.

Lemma irrelevance x y (p q : x = y) : p = q.
Proof.
by rewrite -[LHS]normalizeK -[RHS]normalizeK (normalize_const p q).
Qed.

End Irrelevance.

## Irrelevance of equality in practice

The Mathematical Components library provides excellent support for types with decidable equality in its eqtype module, including a generic result of proof irrelevance like the one I gave above (eq_irrelevance). The class structure used by eqtype makes it easy for Coq to infer proofs of decidable equality, which considerably simplifies the use of this and other lemmas. The Coq standard library also provides a proof of this lemma (eq_proofs_unicity_on), though it is a bit harder to use, since it does not make use of any mechanism for inferring proofs of decidable equality.
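For instance, with MathComp loaded, eq_irrelevance applies out of the box to any type with a registered eqType structure, such as nat (a small sketch):

From mathcomp Require Import ssreflect ssrbool ssrnat eqtype.

Lemma nat_irrelevance (x y : nat) (p q : x = y) : p = q.
Proof. exact: eq_irrelevance. Qed.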
]]>
POPL and CoqPL 2016 http://poleiro.info/posts/2016-03-14-popl-and-coqpl-2016.html 2016-03-14T00:00:00Z 2016-03-14T00:00:00Z POPL 2016 was just held a few weeks ago in Saint Petersburg, Florida, bringing together many researchers in programming languages to learn about the latest developments in the field. As usual, it was an exciting event for the Coq community. Early in the week, Robert Rand and I led an introductory Coq tutorial. Later, there was a special meeting to announce DeepSpec, an ambitious research effort to develop and integrate several verified software components, many of which use Coq. Near the end, Xavier Leroy received an award for the most influential POPL paper of 2006, for Formal Verification of a Compiler Back-end, where he introduced the CompCert verified C compiler.

Besides all these events, this year also featured the second edition of the CoqPL workshop. Its main attraction may have been the release of the long-awaited Coq 8.5. Matthieu Sozeau and Maxime Dénès gave a detailed presentation of its main new features, which include asynchronous proof checking and editing, universe polymorphism, and a new tactic engine. Congratulations to the Coq team for the great work!

Another fun talk was by Clément Pit-Claudel, where he announced company-coq, a set of Proof General extensions that brings many nice editing features for Coq code in Emacs. These include automatically prettifying code (for instance, replacing `forall` by `∀`), auto-completion, code folding, and improved error messages, among many others. If you work with Coq under Emacs, you should definitely give it a try!

]]>
Writing reflective tactics http://poleiro.info/posts/2015-04-13-writing-reflective-tactics.html 2015-04-13T00:00:00Z 2015-04-13T00:00:00Z
One important aspect of Coq's logic is the special status given to computation: while some systems require one to apply explicit deductive steps to show that two given terms are equal, Coq's logic considers any two terms that evaluate to the same result to be equal automatically, without the need for additional reasoning.
Without getting into too much detail, we can illustrate this idea with some simple examples. Russell and Whitehead's seminal Principia Mathematica had to develop hundreds of pages of foundational mathematics before being able to prove that 1 + 1 = 2. In contrast, here's what this proof looks like in Coq:

Definition one_plus_one : 1 + 1 = 2 := erefl.

erefl is the only constructor of the eq type; its type, forall A (a : A), a = a, tells us that we can use it to prove that given term a is equal to itself. Coq accepts one_plus_one as a valid proof because, even though the two sides of the equation are not syntactically the same, it is able to use the definition of + to compute the left-hand side and check that the result is the same as the right-hand side. This also works for some statements with variables in them, for instance

Definition zero_plus_n n : 0 + n = n := erefl.

The same principle applies here: + is defined by case analysis on its first argument, and doesn't even need to inspect the second one. Since the first argument on the left-hand side is a constructor (0), Coq can reduce the expression and conclude that both sides are equal.
Unfortunately, not every equality is a direct consequence of computation. For example, this proof attempt is rejected:

Fail Definition n_plus_zero n : n + 0 = n := erefl.

What happened here? As mentioned before, + is defined by case analysis on the first argument; since the first argument of the left-hand side doesn't start with a constructor, Coq doesn't know how to compute there. As it turns out, one actually needs an inductive argument to prove this result, which might end up looking like this, if we were to check the proof term that Coq produces:

Fixpoint n_plus_zero n : n + 0 = n :=
match n with
| 0 => erefl
| n.+1 => let: erefl := n_plus_zero n in erefl
end.

It seems that, although interesting, computation inside Coq isn't of much use when proving something. Or is it?
In this post, I will show how computation in Coq can be used to write certified automation tactics with a technique known as proof by reflection. Reflection is extensively used in Coq and in other proof assistants as well; it is at the core of powerful automation tactics such as ring, and played an important role in the formalization of the Four-color theorem. As a matter of fact, the name Ssreflect stands for small-scale reflection, due to the library's pervasive use of reflection and computation.
Let's see how reflection works by means of a basic example: a tactic for checking equalities between simple expressions involving natural numbers.

## Arithmetic with reflection

Imagine that we were in the middle of a proof and needed to show that two natural numbers are equal:

Lemma lem n m p : (n + m) * p = p * m + p * n.

ring is powerful enough to solve this goal by itself, but just for the sake of the example, suppose that we had to prove it by hand. We could write something like

Proof. by rewrite mulnDl (mulnC n) (mulnC m) addnC. Qed.

This was not terribly complicated, but there's certainly room for improvement. In a paper proof, a mathematician would probably assume that the reader is capable of verifying this result on their own, without any additional detail. But how exactly would the reader proceed?
In the case of the simple arithmetic expression above, it suffices to apply the distributivity law as long as possible, until both expressions become a sum of monomials. Then, thanks to associativity and commutativity, we just have to reorder the factors and terms and check that both sides of the equation match.
The idea of proof by reflection is to reduce the validity of a logical statement to a symbolic computation, usually by proving a theorem of the form thm : b = true -> P with b : bool. If b can be computed explicitly and reduces to true, then Coq recognizes erefl as a proof of b = true, which means that thm erefl becomes a proof of P.
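Before building our own, here is a tiny standalone instance of this pattern, using the standard library's Nat.even and its specification lemma (an illustration, not part of the development below):

From Coq Require Import Arith.

(* Nat.even_spec n : Nat.even n = true <-> Nat.Even n.  Since
   Nat.even 1000 computes to true, eq_refl is accepted as a proof
   of the premise, and the whole proof is a single application. *)
Definition even_1000 : Nat.Even 1000 :=
  proj1 (Nat.even_spec 1000) eq_refl.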
To make things concrete, let's go back to our example. The idea that we described above for checking whether two numbers are equal can be used whenever we have expressions involving addition, multiplication, and variables. We will define a Coq data type for representing such expressions, as we will need to compute with them:

Inductive expr :=
| Var of nat
| Add of expr & expr
| Mul of expr & expr.

Variables are represented by natural numbers using the Var constructor, and Add and Mul can be used to combine expressions. The following term, for instance, represents the expression n * (m + n):

Example expr_ex :=
Mul (Var 0) (Add (Var 1) (Var 0)).

where Var 0 and Var 1 denote n and m, respectively.
If we are given a function vals assigning variables to numbers, we can compute the value of an expression with a simple recursive function:

Fixpoint nat_of_expr vals e :=
match e with
| Var v => vals v
| Add e1 e2 => nat_of_expr vals e1 + nat_of_expr vals e2
| Mul e1 e2 => nat_of_expr vals e1 * nat_of_expr vals e2
end.
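For instance, evaluating expr_ex under a valuation that maps variable 0 to n and variable 1 to m recovers the original expression, by computation alone:

Example nat_of_expr_ex n m :
  nat_of_expr (nth 0 [:: n; m]) expr_ex = n * (m + n).
Proof. by []. Qed.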

Now, since every expression of that form can be written as a sum of monomials, we can define a function for converting an expr to that form:

Fixpoint monoms e :=
match e with
| Var v => [:: [:: v] ]
| Add e1 e2 => monoms e1 ++ monoms e2
| Mul e1 e2 => [seq m1 ++ m2 | m1 <- monoms e1, m2 <- monoms e2]
end.

Here, each monomial is represented by a list enumerating all variables that occur in it, counting their multiplicities. Hence, a sum of monomials is represented as a list of lists. For example, here's the result of normalizing expr_ex:

Example monoms_expr_ex :
monoms expr_ex = [:: [:: 0; 1]; [:: 0; 0]].
Proof. by []. Qed.

To prove that monoms has the intended behavior, we show that the value of an expression is preserved by it. By using the big operations \sum and \prod from the MathComp library, we can compute the value of a sum of monomials very easily:

Lemma monomsE vals e :
nat_of_expr vals e = \sum_(m <- monoms e) \prod_(v <- m) vals v.
Proof.
elim: e=> [v|e1 IH1 e2 IH2|e1 IH1 e2 IH2] /=.
- by rewrite 2!big_seq1.
- by rewrite big_cat IH1 IH2.
rewrite {}IH1 {}IH2 big_distrlr /=.
elim: (monoms e1) (monoms e2)=> [|v m1 IH] m2 /=; first by rewrite 2!big_nil.
rewrite big_cons big_cat /= IH; congr addn.
by rewrite big_map; apply/eq_big=> //= m3 _; rewrite big_cat.
Qed.

Hence, to check that two expressions are equivalent, it suffices to compare the results of monoms, modulo the ordering. We can do this by sorting the variable names on each monomial and then testing whether one list of monomials is a permutation of the other:

Definition normalize := map (sort leq) \o monoms.

Lemma normalizeE vals e :
nat_of_expr vals e = \sum_(m <- normalize e) \prod_(v <- m) vals v.
Proof.
rewrite monomsE /normalize /=; elim: (monoms e)=> [|m ms IH] /=.
by rewrite big_nil.
by apply/perm_big; rewrite perm_sym perm_sort.
Qed.

Definition expr_eq e1 e2 := perm_eq (normalize e1) (normalize e2).

Lemma expr_eqP vals e1 e2 :
expr_eq e1 e2 ->
nat_of_expr vals e1 = nat_of_expr vals e2.
Proof. rewrite 2!normalizeE; exact/perm_big. Qed.

To see how this lemma works, let's revisit our original example. Here's a new proof that uses expr_eqP:

Lemma lem' n m p : (n + m) * p = p * m + p * n.
Proof.
exact: (@expr_eqP (nth 0 [:: n; m; p])
(Mul (Add (Var 0) (Var 1)) (Var 2))
(Add (Mul (Var 2) (Var 1)) (Mul (Var 2) (Var 0)))
erefl).
Qed.

The first argument to our lemma assigns "real" variables to variable numbers: 0 corresponds to n (the first element of the list), 1 to m, and 2 to p. The second and third arguments are symbolic representations of the left- and right-hand sides of our equation. The fourth argument is the most interesting one: expr_eq was defined as a boolean function that returns true when its two arguments are equivalent expressions. As we've seen above, this means that whenever expr_eq e1 e2 computes to true, erefl is a valid proof of it. Finally, when Coq tries to check whether the conclusion of expr_eqP can be used on our goal, it computes nat_of_expr on both sides, realizing that the conclusion and the goal are exactly the same. For instance:

Lemma expr_eval n m p :
nat_of_expr (nth 0 [:: n; m; p]) (Mul (Add (Var 0) (Var 1)) (Var 2))
= (n + m) * p.
Proof. reflexivity. Qed.

Of course, expr_eqP doesn't force its first argument to always return actual Coq variables, so it can be applied even in some cases where the expressions contain other operators besides + and *:

Lemma lem'' n m : 2 ^ n * m = m * 2 ^ n.
Proof.
exact: (@expr_eqP (nth 0 [:: 2 ^ n; m])
(Mul (Var 0) (Var 1)) (Mul (Var 1) (Var 0))
erefl).
Qed.

At this point, it may seem that we haven't gained much from using expr_eqP, since the second proof of our example was much bigger than the first one. This is just an illusion, however, as the proof term produced in the first case is actually quite big:

lem =
fun n m p : nat =>
(fun _evar_0_ : n * p + m * p = p * m + p * n =>
eq_ind_r (eq^~ (p * m + p * n)) _evar_0_ (mulnDl n m p))
((fun _evar_0_ : p * n + m * p = p * m + p * n =>
eq_ind_r
(fun _pattern_value_ : nat => _pattern_value_ + m * p = p * m + p * n)
_evar_0_ (mulnC n p))
((fun _evar_0_ : p * n + p * m = p * m + p * n =>
eq_ind_r
(fun _pattern_value_ : nat =>
p * n + _pattern_value_ = p * m + p * n) _evar_0_
(mulnC m p))
((fun _evar_0_ : p * m + p * n = p * m + p * n =>
eq_ind_r (eq^~ (p * m + p * n)) _evar_0_ (addnC (p * n) (p * m)))
(erefl (p * m + p * n)))))
: forall n m p : nat, (n + m) * p = p * m + p * n
By using reflection, we were able to transform the explicit reasoning steps of the first proof into implicit computation that is carried out by the proof assistant. And since proof terms have to be stored in memory or included in the compiled .vo file, it is good to make them smaller if we can.
Nevertheless, even with a smaller proof term, having to manually type in that proof term is not very convenient. The problem is that Coq's unification engine is not smart enough to infer the symbolic form of an expression, forcing us to provide it ourselves. Fortunately, we can use some code to fill in the missing bits.

## Reification

To reify something means to produce a representation of that object that can be directly manipulated in computation. In our case, that object is a Gallina expression of type nat, and the representation we are producing is a term of type expr.
Reification is ubiquitous in proofs by reflection. The Coq standard library comes with a plugin for reifying formulas, but it is not general enough to accommodate our use case. Therefore, we will program our own reification tactic in ltac.
We will begin by writing a function that looks for a variable on a list and returns its position. If the variable is not present, we add it to the end of the list and return the updated list as well:

Ltac intern vars e :=
let rec loop n vars' :=
match vars' with
| [::] =>
let vars'' := eval simpl in (rcons vars e) in
constr:((n, vars''))
| e :: ?vars'' => constr:((n, vars))
| _ :: ?vars'' => loop (S n) vars''
end in
loop 0 vars.

Notice the call to eval simpl on the first branch of loop. Remember that in ltac everything is matched almost purely syntactically, so we have to explicitly evaluate a term when we are just interested on its value, and not on how it is written.
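To see the difference, consider this contrived sketch: the pattern 4 does not match the term 2 + 2, which is syntactically an application of addition, until we evaluate it.

Ltac is_four t :=
  match t with
  | 4 => idtac "four"
  | _ => idtac "not four"
  end.

Goal True.
is_four (2 + 2).                               (* prints "not four" *)
let t := eval compute in (2 + 2) in is_four t. (* prints "four" *)
by [].
Qed.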
We can now write a tactic for reifying an expression. reify_expr takes two arguments: a list vars to be used with intern for reifying variables, plus the expression e to be reified. It returns a pair (e', vars') containing the reified expression e' and an updated variable list vars'.

Ltac reify_expr vars e :=
match e with
| ?e1 + ?e2 =>
let r1 := reify_expr vars e1 in
match r1 with
| (?qe1, ?vars') =>
let r2 := reify_expr vars' e2 in
match r2 with
| (?qe2, ?vars'') => constr:((Add qe1 qe2, vars''))
end
end
| ?e1 * ?e2 =>
let r1 := reify_expr vars e1 in
match r1 with
| (?qe1, ?vars') =>
let r2 := reify_expr vars' e2 in
match r2 with
| (?qe2, ?vars'') => constr:((Mul qe1 qe2, vars''))
end
end
| _ =>
let r := intern vars e in
match r with
| (?n, ?vars') => constr:((Var n, vars'))
end
end.

Again, because this is an ltac function, we can traverse our Gallina expression syntactically, as if it were a data structure. Notice how we thread through the updated variable lists after each call; this is done to ensure that variables are named consistently.
Finally, using reify_expr, we can write solve_nat_eq, which reifies both sides of the equation in the goal and applies expr_eqP with the appropriate arguments.

Ltac solve_nat_eq :=
match goal with
| |- ?e1 = ?e2 =>
let r1 := reify_expr (Nil nat) e1 in
match r1 with
| (?qe1, ?vm') =>
let r2 := reify_expr vm' e2 in
match r2 with
| (?qe2, ?vm'') => exact: (@expr_eqP (nth 0 vm'') qe1 qe2 erefl)
end
end
end.

We can check that our tactic works on our original example:

Lemma lem''' n m p : (n + m) * p = p * m + p * n.
Proof. solve_nat_eq. Qed.

With solve_nat_eq, every equation of that form becomes very easy to solve, including cases where a human prover might have trouble at first sight!

Lemma complicated n m p r t :
(n + 2 ^ r * m) * (p + t) * (n + p)
= n * n * p + m * 2 ^ r * (p * n + p * t + t * n + p * p)
+ n * (p * p + t * p + t * n).
Proof. solve_nat_eq. Qed.

## Summary

We have seen how we can use internal computation in Coq to write powerful tactics. Besides generating small proof terms, tactics that use reflection have another important benefit: they are mostly written in Gallina, a typed language, and come with correctness proofs. This contrasts with most custom tactics written in ltac, which tend to break quite often due to the lack of static guarantees (and to how unstructured the tactic language is). For solve_nat_eq, we only had to write the reification engine in ltac, which results in a more manageable code base.
If you want to learn more about reflection, Adam Chlipala's CPDT book has an entire chapter devoted to the subject, which I highly recommend.
]]>
Two nice Coq-related blogs http://poleiro.info/posts/2015-04-06-two-nice-blogs.html 2015-04-06T00:00:00Z 2015-04-06T00:00:00Z Here are two blogs with nice Coq-related content:

• Guillaume Claret, a Ph.D. student in Paris, has also a blog devoted to Coq. You can find some good articles in there, such as this one, explaining how to use OPAM as a package manager for Coq.

• Ömer’s blog has a lot of general programming-languages and functional-programming material. This post, for instance, presents a simple exercise in dependently typed programming.

]]>
Analyzing sorting algorithms http://poleiro.info/posts/2015-03-25-analyzing-sorting-algorithms.html 2015-03-25T00:00:00Z 2015-03-25T00:00:00Z
In this post, we will formalize one of the most well-known results of algorithm analysis: no comparison sort can run in asymptotically less than n * log n steps, where n is the size of its input.
Before starting, I should point out that this is the first post in this blog to use the Ssreflect and the Mathematical Components (MathComp) libraries. Ssreflect is an amazing Coq extension that brings several improvements, including a nicer set of base tactics. Both libraries cover a wide range of theories, including fairly sophisticated Mathematics - as a matter of fact, they are featured in the Coq formalization of the Feit-Thompson theorem, known for its extremely complex and detailed proof.
As we will see, having good library support can help a lot when doing mechanized proofs, even for such simple results as this one. Two things that come in handy here, in particular, are the theories of permutations and sets over finite types that are available in MathComp. Indeed, the MathComp definitions enable many useful, higher-level reasoning principles that don't come for free in Coq, such as extensional and decidable equality. Furthermore, many lemmas on the library require a fair amount of machinery to be developed on their own - for example, showing that there are exactly n! permutations over a set of n elements. Previous versions of this post (which you can still find on the repository) tried to avoid external libraries, but were much longer and more complicated, prompting me to bite the bullet and port everything to Ssreflect/MathComp.

## Basics

The informal proof of this result is fairly simple:
• If a comparison sort is correct, then it must be capable of shuffling an input vector of size n according to any of the n! permutations of its elements.
• On the other hand, any such algorithm can recognize at most 2 ^ k distinct permutations, where k is the maximum number of comparisons performed. Hence, n! <= 2 ^ k or, equivalently, log2 n! <= k.
• To conclude, Stirling's approximation tells us that n * log2 n = O(log2 n!), which yields our result.
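(The last step does not actually require the full strength of Stirling's approximation: keeping only the n/2 largest factors of n! already gives

log2 n! = log2 1 + log2 2 + ... + log2 n
       >= log2 (n/2) + ... + log2 n
       >= (n/2) * log2 (n/2) = (n/2) * (log2 n - 1),

which is n * log2 n up to lower-order terms.)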
We'll begin our formalization by defining a convenient datatype for representing the execution of a comparison sort. For our purposes, a comparison sort can be seen as a binary tree: internal nodes indicate when a comparison between two elements occurs, with the right and left branches telling how to proceed depending on its result. The leaves of the tree mark when the algorithm ends and yields back a result. Thus, we obtain the following type:

Inductive sort_alg (n : nat) : Type :=
| Compare (i j : 'I_n) (l r : sort_alg n)
| Done of 'S_n.

Let's analyze this declaration in detail. The n parameter will be used later to track the length of the array that we are sorting. 'I_n is the type of natural numbers bounded by n, whereas 'S_n represents permutations of elements of that type. The idea is that, when running our algorithm, the i and j arguments of Compare tell which elements of our array to compare. The permutation in Done specifies how to rearrange the elements of the input array to produce an answer. We can formalize the previous description in the following function:

Fixpoint execute T n c (a : sort_alg n) (xs : n.-tuple T) :=
match a with
| Compare i j l r =>
let a' := if c (tnth xs i) (tnth xs j) then l else r in
execute c a' xs
| Done p => [tuple tnth xs (p i) | i < n]
end.

execute is a polymorphic function that works for any type T with a comparison operator c. Given an algorithm a and an input array xs of length n, the function compares elements of xs as dictated by a, following the appropriate branches along the way, until it finds out how to rearrange the elements of xs.
With execute, we can define what it means for a sorting algorithm to be correct, by relating its results to the MathComp sort function:

Definition sort_alg_ok n (a : sort_alg n) :=
forall (T : eqType) (le : rel T),
forall xs, execute le a xs = sort le xs :> seq T.

Finally, to translate the above informal argument, we will need some more definitions. Let's first write a function for computing how many comparisons an algorithm performs in the worst case:

Fixpoint comparisons n (a : sort_alg n) : nat :=
match a with
| Compare _ _ l r => (maxn (comparisons l) (comparisons r)).+1
| Done _ => 0
end.

And here's a function for computing the set of permutations that an algorithm can perform (notice the use of the set library of MathComp; here, :|: denotes set union):

Fixpoint perms n (a : sort_alg n) : {set 'S_n} :=
match a with
| Compare _ _ l r => perms l :|: perms r
| Done p => [set p]
end.

(Strictly speaking, both comparisons and perms give upper bounds on the values they should compute, but this does not affect us in any crucial way.)

## Show me the proofs

To show that a correct algorithm must be capable of performing arbitrary permutations, notice that, if xs is a sorted array with distinct elements, then permuting its elements is an injective operation. That is, different permutations produce different arrays.

Lemma permsT n (a : sort_alg n) : sort_alg_ok a -> perms a = [set: 'S_n].
Proof.
move=> a_ok.
apply/eqP; rewrite -subTset; apply/subsetP=> /= p _.
move: {a_ok} (a_ok _ leq [tuple val (p^-1 i) | i < n]).
rewrite (_ : sort _ _ = [tuple val i | i < n]); last first.
apply: (eq_sorted leq_trans anti_leq (sort_sorted leq_total _)).
by rewrite /= val_enum_ord iota_sorted.
rewrite (perm_trans (introT permPl (perm_sort _ _))) //.
apply/tuple_perm_eqP; exists p^-1; congr val; apply/eq_from_tnth=> i.
by rewrite 3!tnth_map 2!tnth_ord_tuple.
elim: a=> [/= i j l IHl r IHr|p'].
by case: ifP=> [_ /IHl|_ /IHr]; rewrite in_setU => -> //; rewrite orbT.
move/val_inj=> /=.
rewrite in_set1=> e; apply/eqP/permP=> i; apply/esym/(canRL (permKV p)).
apply/val_inj.
rewrite (_ : val i = tnth [tuple val i | i < n] i); last first.
by rewrite tnth_map tnth_ord_tuple.
by rewrite -{}e 2!tnth_map !tnth_ord_tuple.
Qed.

Bounding the number of permutations performed by an algorithm is simple, and amounts to invoking basic lemmas about arithmetic and sets.

Lemma card_perms n (a : sort_alg n) : #|perms a| <= 2 ^ comparisons a.
Proof.
elim: a=> [i j l IHl r IHr|p] /=; last by rewrite cards1.
rewrite (leq_trans (leq_of_leqif (leq_card_setU (perms l) (perms r)))) //.
by rewrite expnS mul2n -addnn leq_add // ?(leq_trans IHl, leq_trans IHr) //
leq_exp2l // ?(leq_maxl, leq_maxr).
Qed.

Doing the last step is a bit trickier, as we don't have a proof of Stirling's approximation we can use. Instead, we take a more direct route, showing the following lemma by induction on n (trunc_log, as its name implies, is the truncated logarithm):

Local Notation log2 := (trunc_log 2).

Lemma log2_fact n : (n * log2 n)./2 <= log2 n`!.
Proof.
In order to get our proof to go through, we must strengthen our induction hypothesis a little bit:
suff: n * (log2 n).+2 <= (log2 n`! + 2 ^ (log2 n)).*2.+1.
We can then proceed with a straightforward (although not completely trivial) inductive argument.
elim: n=> [|n IH] //=.
(* ... *)
Qed.

Our main result follows almost immediately from these three intermediate lemmas:

Lemma sort_alg_ok_leq n (a : sort_alg n) :
sort_alg_ok a -> (n * log2 n)./2 <= comparisons a.
Proof.
move=> a_ok; suff lb: n`! <= 2 ^ comparisons a.
rewrite (leq_trans (log2_fact n)) // -(@leq_exp2l 2) //.
by rewrite (leq_trans (trunc_logP (leqnn 2) (fact_gt0 n))).
rewrite (leq_trans _ (card_perms a)) // -{1}(card_ord n) -cardsT -card_perm.
by rewrite -(cardsE (perm_on [set: 'I_n])) subset_leq_card // permsT //.
Qed.

## Summary

We've seen how to formalize a result of algorithm analysis in an abstract setting: although it is fair to say that our model of a comparison sort is detailed enough for our purposes, we haven't connected it yet to a more traditional computation model, something I plan to discuss in a future post.
Aside from that, we've seen how a rich set of theories, such as the ones in MathComp, allow us to express higher-level concepts in our definitions and proofs, leading to much shorter formalizations.
]]>
Coq as its own extensible parser http://poleiro.info/posts/2014-10-26-coq-as-its-own-extensible-parser.html 2014-10-26T00:00:00Z 2014-10-26T00:00:00Z
Coq's built-in extensible parser, although quite convenient in many cases, does have some limitations. For one, some syntax extensions that one might like to write cannot be expressed in it. Furthermore, the extensions are not first-class, having to be defined outside of the core language. We will see how to implement some interesting syntax extensions in Coq using just coercions and regular Coq functions. Besides being first-class, this mechanism can be used to define extensions that cannot be expressed with Coq's extensible parser alone.
We want to describe each syntax extension with a few Coq types and functions that determine how it should be parsed. Our first attempt might look like this:

Module Simple.

Record parser := Parser {
token : Type;
result : Type;
initial_result : result;
read_token : result -> token -> result
}.

Our parser works by reading tokens and returning a value of type result at the end of the process. It does this by starting with some value initial_result and updating that value with each token read, using function read_token. For instance, here's a function that returns the result of parsing three tokens:

Section WithParser.

Variable p : parser.

Definition read_three_tokens t1 t2 t3 :=
  read_token p (
    read_token p (
      read_token p (initial_result p) t1
    ) t2
  ) t3.

We can use this definition as a syntax extension by declaring read_token as a coercion from result to Funclass, allowing us to use each result as if it were a function. Then, applying initial_result p to a sequence of tokens will correspond to calling our parser on that sequence.

Coercion read_token : result >-> Funclass.

Definition read_three_tokens' t1 t2 t3 :=
(initial_result p) t1 t2 t3.

This works because each application of read_token returns a result, so trying to apply that updated result to another token triggers yet another coercion, allowing the process to continue indefinitely. We can check that both definitions of read_three_tokens yield the same function, meaning that the coercion is behaving as expected:

Lemma read_three_tokens_same t1 t2 t3 :
  read_three_tokens t1 t2 t3 = read_three_tokens' t1 t2 t3.
Proof. reflexivity. Qed.

To make the mechanism more robust, we wrap our parser in a new type, ensuring that the Coq type checker will not fail to perform some coercion by reducing result more than it should.

Record parser_wrapper : Type := ParserWrapper {
get_result : result p
}.

Definition read_token' (w : parser_wrapper) t :=
ParserWrapper (read_token p (get_result w) t).
Coercion read_token' : parser_wrapper >-> Funclass.

End WithParser.

As a last tweak, we declare another coercion and a notation to make using our embedded parsers more similar to other systems, such as Template Haskell:

Definition init_parser p := ParserWrapper _ (initial_result p).
Coercion init_parser : parser >-> parser_wrapper.
Notation "[ x ]" := (get_result _ x) (at level 0).

Now, we can invoke a parser simply by writing [name_of_parser <list of tokens>]:

Definition read_three_tokens'' (p : parser) t1 t2 t3 :=
[p t1 t2 t3].

As a first example, we can define an alternative syntax for Coq lists that doesn't need separators between the elements.

Definition listp (X : Type) := {|
token := X;
result := list X;
initial_result := nil;
read_token l x := app l (cons x nil)
|}.

The listp parser is parameterized by X, the type of the elements of the list. We initialize the parser with an empty list; each token we read is an element of X, which is appended to the end of the list.

Definition list_exp : list nat := [listp nat 0 1 2 3 4 5 6 7 8 9 10].
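
As a quick sanity check (a sketch we add for illustration; the lemma name is ours, not from the original post), the parsed list is just the tokens in order, since each one is appended at the end:

```coq
(* Tokens are appended in order, so parsing 0 .. 10 yields that list. *)
Example list_exp_ok :
  list_exp = 0 :: 1 :: 2 :: 3 :: 4 :: 5 :: 6 :: 7 :: 8 :: 9 :: 10 :: nil.
Proof. reflexivity. Qed.
```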

End Simple.

## More parsers

While nice, listp is not especially interesting as a parser, since it doesn't really analyze the tokens it reads. In contrast, parsers usually treat tokens differently, depending on what has been read so far. To take such dependencies into account, we introduce some new fields to our record:

Module State.

Record parser := Parser {
state : Type;
initial_state : state;
token : Type;
next : state -> token -> state;
result : Type;
initial_result : result;
read_token : state -> result -> token -> result
}.

End State.

The state field represents the internal state of our parser at a given point. It is initially set to initial_state, and is updated using the function next. We also change read_token to pass it the current state as an additional argument.
While more general, this version isn't quite good yet. We require our parsers to carry around a complete result value that they can return after having read any sequence of tokens. Usually, however, parsing can result in errors, and there is no meaningful value that can be returned by the parser until it finishes its job. To solve this problem, we introduce one last change to our definition: dependent types.

Record parser := Parser {
state : Type;
initial_state : state;
token : state -> Type;
next : forall s, token s -> state;
partial_result : state -> Type;
initial_partial_result : partial_result initial_state;
read_token : forall s, partial_result s -> forall t, partial_result (next s t)
}.

Now, the type of tokens expected by the parser, as well as its type of partial results, can depend on the current parsing state, and the parsing functions have been updated to take this dependency into account.
With dependent types, the type of the value being built, partial_result, can change during the parsing process, allowing us to distinguish a complete, successfully parsed result, from one that still needs more tokens, or even from a message describing a parse error. By making token depend on the current state, we can constrain which tokens can be read at each parsing state, allowing us to expose parse errors as type errors.
For listp, there is no need to use anything interesting for the state type. For more complicated parsers, however, state comes in handy. To see how, we will define parsers for prefix and postfix arithmetic expressions. This is also a nice example because prefix and postfix expressions cannot be handled by Coq's extensible parser alone.
Expressions can involve three operations: addition, subtraction and multiplication.

Inductive op := Add | Sub | Mul.

The parsers will read tokens that can be natural numbers or one of the above operations. We group those in a common type exp_token.

Inductive exp_token :=
| Op (o : op)
| Const (n : nat).

Notation "''+'" := Add (at level 0).
Notation "''-'" := Sub (at level 0).
Notation "''*'" := Mul (at level 0).

Coercion Op : op >-> exp_token.
Coercion Const : nat >-> exp_token.

Let's consider how to deal with prefix expressions first. At any point during parsing, the parser is waiting for some number of additional expressions it must read in order to complete the top-level expression. We will use this number as the parser state. Initially, the parser wants to read just one expression, so we will use that as the initial state. Once the state reaches zero, there are no more expressions to read, so we know that the parser is done.

Module Pre.

Definition state := nat.
Definition initial_state : state := 1.

If our current state is zero, any additional tokens fed to the parser should result in a parse error. To implement this behavior, we define the type of tokens for that state to be Empty_set. Trying to feed an extra token to the parser at that point will result in a type error, signaling that a parse error just occurred. If the parser still expects expressions (i.e., if the current state is greater than zero), we just use exp_token for the token type.

Definition token (s : state) : Type :=
match s with
| S _ => exp_token
| 0 => Empty_set
end.

The value built by the parser will be a continuation that expects n numbers, one for each expression that the parser still needs to read.

Fixpoint partial_result (n : nat) : Type :=
match n with
| 0 => nat
| S n => nat -> partial_result n
end.
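
For instance, a parser still expecting two expressions builds a two-argument continuation. This definitional equality (an illustrative check we add, not in the original post) holds by computation:

```coq
(* partial_result 2 unfolds to nat -> nat -> nat. *)
Check (eq_refl : partial_result 2 = (nat -> nat -> nat)).
```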

We must define how the parser actually interprets the tokens it reads. If the parser expects n expressions, reading a constant will make this number go down by one, since a constant is a complete expression. If it reads an operation, on the other hand, that number is increased by one. If n = 0, there is no token to be read, hence no next state, so we perform an empty pattern match to show that this case cannot happen.

Definition next (s : state) : token s -> state :=
match s with
| S n' => fun t =>
match t with
| Op _ => S (S n')
| Const _ => n'
end
| 0 => fun t => match t with end
end.
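
We can trace how the state evolves while reading the prefix expression '+ 1 2 (these Compute commands are our own sanity checks, assuming the module context above):

```coq
Compute next 1 (Op Add).   (* = 2 : an operator adds a pending expression *)
Compute next 2 (Const 1).  (* = 1 *)
Compute next 1 (Const 2).  (* = 0 : the expression is complete *)
```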

How do we update the result? If we read a constant, we just feed it to the continuation. If we read an operation, we compose that operation with the continuation, which has the net effect of adding one argument to it. Here, ap_op is a function that maps each op to the corresponding Coq function.
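
The excerpt does not display ap_op itself; a plausible definition (our assumption, and presumably placed at top level before Module Pre in the full post, since the postfix parser below reuses it) interprets each op as the corresponding operation on nat:

```coq
(* Hypothetical reconstruction: map each syntactic op to its nat function. *)
Definition ap_op (o : op) : nat -> nat -> nat :=
  match o with
  | Add => plus
  | Sub => minus
  | Mul => mult
  end.
```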

Definition read_token s : partial_result s -> forall t, partial_result (next s t) :=
match s with
| S n' =>
fun res t =>
match t with
| Op o => fun n1 n2 => res (ap_op o n1 n2)
| Const n => res n
end
| _ => fun _ t => match t with end
end.

End Pre.

We can now package our definitions as a complete parser and try it on some examples:

Definition pre := {|
state := Pre.state;
initial_state := 1;
token := Pre.token;
partial_result := Pre.partial_result;
next := Pre.next;
initial_partial_result := fun t => t;
|}.

Definition pre_exp1 : nat :=
[pre '+ '- 1 2 '+ 4 4].
Definition pre_exp2 : nat :=
[pre '+ '* 12 '* 12 12 '* 1 '* 1 1].
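
Assuming ap_op maps Add, Sub, and Mul to plus, minus, and mult on nat (and noting that nat subtraction truncates, so 1 - 2 = 0), these examples compute as follows (sanity checks we add for illustration):

```coq
Example pre_exp1_ok : pre_exp1 = 8.    (* (1 - 2) + (4 + 4) *)
Proof. reflexivity. Qed.

Example pre_exp2_ok : pre_exp2 = 1729. (* 12 * (12 * 12) + 1 * (1 * 1) *)
Proof. reflexivity. Qed.
```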

We can also see that invalid expressions are rejected, as expected.

Fail Definition pre_exp_wrong : nat :=
[pre '+ 1 1 1].

Error: The term "1" has type "nat" while it is expected to have type
"token pre (next pre (next pre (next pre (initial_state pre) '+) 1) 1)".

A parser for postfix expressions is in some sense the dual of the one we've just seen: instead of carrying around a continuation that expects arguments, we construct a stack of numbers on which we can perform our operations. Our state will also be a number, denoting the number of elements on our stack. Since our stack starts out empty, the initial state will be zero.

Module Post.

Definition state := nat.
Definition initial_state : state := 0.

Since our operations need at least two numbers on the stack, we restrict our parser to accept only numbers if the stack doesn't have the appropriate size:

Definition token (s : state) : Type :=
match s with
| 0 | 1 => nat
| _ => exp_token
end.

The partial_result type is a length-indexed list of natural numbers with one small twist: we ensure that partial_result 1 = nat definitionally, so that we can use postfix expressions as having type nat without the need for any projections.

Fixpoint partial_result' (n : nat) : Type :=
match n with
| 0 => nat
| S n => (partial_result' n * nat)%type
end.

Definition partial_result (s : state) : Type :=
match s with
| 0 => unit
| S n => partial_result' n
end.
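
The promised definitional equality can be checked directly (an illustrative check we add, not in the original):

```coq
(* A finished postfix expression (a stack of size one) is literally a nat. *)
Check (eq_refl : partial_result 1 = nat).
```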

next and read_token are dual to the definitions of the previous parser: reading a constant increases the stack size by one, while reading an operation decreases the stack size by one.

Definition next s : token s -> state :=
match s with
| 0 => fun _ => 1
| 1 => fun _ => 2
| S (S n) => fun t =>
match t with
| Op _ => 1 + n
| Const _ => 3 + n
end
end.
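
As with the prefix parser, we can trace the stack size while reading the postfix expression 1 2 '+ (our own sanity checks, assuming the module context above):

```coq
Compute next 0 1.          (* = 1 : push the first constant *)
Compute next 1 2.          (* = 2 : push the second constant *)
Compute next 2 (Op Add).   (* = 1 : an operation pops two, pushes one *)
```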

Definition read_token s : partial_result s -> forall t, partial_result (next s t) :=
match s with
| 0 => fun _ t => t
| 1 => fun res t => (res, t)
| 2 => fun res t =>
match res with
| (n1, n2) =>
match t with
| Op o => ap_op o n1 n2
| Const n => (n1, n2, n)
end
end
| S (S (S n)) => fun res t =>
match res with
| (res, n1, n2) =>
match t with
| Op o => (res, ap_op o n1 n2)
| Const n => (res, n1, n2, n)
end
end
end.

End Post.

We now have a full parser for postfix expressions, which we can test on some examples.

Definition post := {|
state := Post.state;
initial_state := 0;
token := Post.token;
next := Post.next;
partial_result := Post.partial_result;
initial_partial_result := tt;