Where the Coq lives


Equality in Coq

Coq views truth through the lens of provability. The hypotheses it manipulates are not mere assertions of truth, but formal proofs of the corresponding statements ─ data structures that can be inspected to build other proofs. It is not a coincidence that function types and logical implication use the same notation, A -> B, because proofs of implication in Coq are functions: they take proofs of the precondition as inputs and return a proof of the consequent as the output. Such proofs are written with the same language we use for programming in Coq; tactics are but scripts that build such programs for us. A proof that implication is transitive, for example, amounts to function composition.

Definition implication_is_transitive (A B C : Prop) :
  (A -> B) -> (B -> C) -> (A -> C) :=
  fun HAB HBC HA => HBC (HAB HA).

Similarly, inductive propositions in Coq behave just like algebraic data types in typical functional programming languages. With pattern matching, we can check which constructor was used in a proof and act accordingly.

Definition or_false_r (A : Prop) : A \/ False -> A :=
  fun (H : A \/ False) =>
    match H with
    | or_introl HA => HA
    | or_intror contra => match contra with end
    end.

Disjunction \/ is an inductive proposition with two constructors, or_introl and or_intror, whose arguments are proofs of its left and right sides. In other words, a proof of A \/ B is either a proof of A or a proof of B. Falsehood, on the other hand, is an inductive proposition with no constructors. Matching on a proof of False does not require us to consider any cases, thus allowing the expression to have any type we please. This corresponds to the so-called principle of explosion, which asserts that from a contradiction, anything follows.
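Written as a standalone program (the name explosion here is mine, for illustration), the principle of explosion is just an empty match:

```coq
(* A proof of False lets us inhabit any proposition: the match on
   contra has no branches to consider. *)
Definition explosion (A : Prop) (contra : False) : A :=
  match contra with end.
```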
The idea of viewing proofs as programs is known as the Curry-Howard correspondence. It has been a fruitful source of inspiration for the design of many other logics and programming languages beyond Coq, other noteworthy examples including Agda and Nuprl. I will discuss a particular facet of this correspondence in Coq: the meaning of a proof of equality.

Defining equality

The Coq standard library defines equality as an indexed inductive proposition. (The familiar x = y syntax is provided by the standard library using Coq's notation mechanism.)

Inductive eq (T : Type) (x : T) : T -> Prop :=
| eq_refl : eq T x x.

This declaration says that the most basic way of showing x = y is when x and y are the "same" term ─ not in the strict sense of syntactic equality, but in the more lenient sense of equality "up to computation" used in Coq's theory. For instance, we can use eq_refl to show that 1 + 1 = 2, because Coq can simplify the left-hand side using the definition of + and arrive at the right-hand side.
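As a quick check (the definition names here are illustrative), the first proof below is accepted because both sides compute to the same term, while the second is rejected, since it requires induction rather than mere computation:

```coq
Definition one_plus_one : 1 + 1 = 2 := eq_refl.

(* This one fails: n + 0 does not reduce, because + matches on its
   first argument and n is a variable. *)
Fail Definition plus_n_0 n : n + 0 = n := eq_refl.
```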
To prove interesting facts about equality, we generally use the rewrite tactic, which in turn is implemented by pattern matching. Matching on proofs of equality is more complicated than for typical data types because eq is a non-uniform indexed proposition ─ that is, the value of its last argument is not fixed for the whole declaration, but depends on the constructor used. (This non-uniformity is what allows us to put two occurrences of x in the type of eq_refl.)
Concretely, suppose that we have elements x and y of a type T, and a predicate P : T -> Prop. We want to prove that P y holds assuming that x = y and P x hold. This can be done with the following program:

Definition rewriting
  (T : Type) (P : T -> Prop) (x y : T) (p : x = y) (H : P x) : P y :=
  match p in _ = z return P z with
  | eq_refl => H
  end.

Compared to common match expressions, this one has two extra clauses. The first, in _ = z, allows us to provide a name to the non-uniform argument of the type of p. The second, return P z, allows us to declare what the return type of the match expression is as a function of z. At the top level, z corresponds to y, which means that the whole match has type P y. When checking each individual branch, however, Coq requires proofs of P z using values of z that correspond to the constructors of that branch. Inside the eq_refl branch, z corresponds to x, which means that we have to provide a proof of P x. This is why the use of H there is valid.
To illustrate, here are proofs of two basic facts about equality: transitivity and symmetry.

Definition etrans {T} {x y z : T} (p : x = y) (q : y = z) : x = z :=
  match p in _ = w return w = z -> x = z with
  | eq_refl => fun q' : x = z => q'
  end q.

Definition esym {T} {x y : T} (p : x = y) : y = x :=
  match p in _ = z return z = x with
  | eq_refl => eq_refl
  end.

Notice the return clause in the first proof, which uses a function type. We cannot use w = z alone, as the final type of the expression would be y = z. The other reasonable guess, x = z, wouldn't work either, since we would have nothing of that type to return in the branch ─ q has type y = z, and Coq does not automatically change it to x = z just because we know that x and y ought to be equal inside the branch. The practice of returning a function is so common when matching on dependent types that it even has its own name: the convoy pattern, a term coined by Adam Chlipala in his CPDT book.
In addition to functions, pretty much any type expression can go in the return clause of a match. This flexibility allows us to derive many basic reasoning principles ─ for instance, the fact that constructors are disjoint and injective.

Definition eq_Sn_m (n m : nat) (p : S n = m) :=
  match p in _ = k return match k with
                          | 0 => False
                          | S m' => n = m'
                          end with
  | eq_refl => eq_refl
  end.

Definition succ_not_zero n : S n <> 0 :=
  eq_Sn_m n 0.

Definition succ_inj n m : S n = S m -> n = m :=
  eq_Sn_m n (S m).

In the eq_refl branch, we know that k is of the form S n. By substituting this value in the return type, we find that the result of the branch must have type n = n, which is why eq_refl is accepted. Since this is the only value of k we have to handle, it doesn't matter that False appears in the return type of the match: that branch will never be used. The more familiar lemmas succ_not_zero and succ_inj simply correspond to special cases of eq_Sn_m. A similar trick can be used for many other inductive types, such as booleans, lists, and so on.
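For booleans, for instance, the same return-type trick shows that true and false are distinct (a sketch, with an illustrative name):

```coq
Definition true_not_false (p : true = false) : False :=
  match p in _ = b return if b then True else False with
  | eq_refl => I  (* here b is true, so we must prove True *)
  end.
```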

Mixing proofs and computation

Proofs can be used not only to build other proofs, but also in more conventional programs. If we know that a list is not empty, for example, we can write a function that extracts its first element.

From mathcomp Require Import seq.

Definition first {T} (s : seq T) (Hs : s <> [::]) : T :=
  match s return s <> [::] -> T with
  | [::] => fun Hs : [::] <> [::] => match Hs eq_refl with end
  | x :: _ => fun _ => x
  end Hs.

Here we see a slightly different use of dependent pattern matching: the return type depends on the analyzed value s, not just on the indices of its type. The rules for checking that this expression is valid are the same: we substitute the pattern of each branch for s in the return type, and ensure that it is compatible with the result it produces. On the first branch, this gives a contradictory hypothesis [::] <> [::], which we can discard by pattern matching as we did earlier. On the second branch, we can just return the first element of the list.
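As a usage sketch, we can call first on a concrete list by building the non-emptiness proof with the same constructor-discrimination trick used in eq_Sn_m:

```coq
Example first_ex :
  first [:: 1; 2]
        (fun p : [:: 1; 2] = [::] =>
           match p in _ = l return match l with
                                   | [::] => False
                                   | _ :: _ => True
                                   end with
           | eq_refl => I
           end) = 1.
Proof. by []. Qed.
```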
Proofs can also be stored in regular data structures. Consider for instance the subset type {x : T | P x}, which restricts the elements of the type T to those that satisfy the predicate P. Elements of this type are of the form exist x H, where x is an element of T and H is a proof of P x. Here is an alternative version of first, which expects the arguments s and Hs packed as an element of a subset type.

Definition first' {T} (s : {s : seq T | s <> [::]}) : T :=
  match s with
  | exist s Hs => first s Hs
  end.

While powerful, this idiom comes with a price: when reasoning about a term that mentions proofs, the proofs must be explicitly taken into account. For instance, we cannot show that two elements exist x H1 and exist x H2 are equal just by reflexivity; we must explicitly argue that the proofs H1 and H2 are equal. Unfortunately, there are many cases in which this is not possible ─ for example, two proofs of a disjunction A \/ B need to use the same constructor to be considered equal.
The situation is not as bad as it might sound, because Coq was designed to allow a proof irrelevance axiom without compromising its soundness. This axiom says that any two proofs of the same proposition are equal.

Axiom proof_irrelevance : forall (P : Prop) (p q : P), p = q.

If we are willing to extend the theory with this axiom, much of the pain of mixing proofs and computation goes away; nevertheless, it is a bit upsetting that we need an extra axiom to make the use of proofs in computation practical. Fortunately, much of this convenience is already built into Coq's theory, thanks to the structure of proofs of equality.
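As a sketch of the convenience the axiom buys us (pack_eq is a name I am introducing here), it lets us equate two elements of a subset type that share the same witness, using the same dependent pattern matching as before:

```coq
Definition pack_eq (T : Type) (P : T -> Prop) (x : T) (H1 H2 : P x) :
  exist P x H1 = exist P x H2 :=
  match proof_irrelevance (P x) H1 H2 in _ = H
        return exist P x H1 = exist P x H with
  | eq_refl => eq_refl
  end.
```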

Proof irrelevance and equality

A classical result of type theory says that equalities between elements of a type T are proof irrelevant provided that T has decidable equality. Many useful properties can be expressed in this way; in particular, any boolean function f : S -> bool can be seen as a predicate S -> Prop defined as fun x : S => f x = true. Thus, if we restrict subset types to computable predicates, we do not need to worry about the proofs that appear in their elements.
You might wonder why any assumptions are needed in this result ─ after all, the definition of equality only had a single constructor; how could two proofs be different? Let us begin by trying to show the result without relying on any extra assumptions. We can show that general proof irrelevance can be reduced to irrelevance of "reflexive equality": all proofs of x = x are equal to eq_refl x.

Section Irrelevance.

Variable T : Type.
Implicit Types x y : T.

Definition almost_irrelevance :
  (forall x (p : x = x), p = eq_refl x) ->
  (forall x y (p q : x = y), p = q) :=
  fun H x y p q =>
    match q in _ = z return forall p' : x = z, p' = q with
    | eq_refl => fun p' => H x p'
    end p.

This proof uses the extended form of dependent pattern matching we have seen in the definition of first: the return type mentions q, the very element we are matching on. It also uses the convoy pattern to "update" the type of p with the extra information gained by matching on q.
The almost_irrelevance lemma may look like progress, but it does not actually get us anywhere, because there is no way of proving its premise without assumptions. Here is a failed attempt.

Fail Definition irrelevance x (p : x = x) : p = eq_refl x :=
  match p in _ = y return p = eq_refl x with
  | eq_refl => eq_refl
  end.

Coq complains that the return clause is ill-typed: its right-hand side has type x = x, but its left-hand side has type x = y. That is because when checking the return type, Coq does not use the original type of p, but the one obtained by generalizing the index of its type according to the in clause.
It took many years to understand that, even though the inductive definition of equality only mentions one constructor, it is possible to extend the type theory to allow for provably different proofs of equality between two elements. Homotopy type theory, for example, introduced a univalence principle that says that proofs of equality between two types correspond to isomorphisms between them. Since there are often many different isomorphisms between two types, irrelevance cannot hold in full generality.
To obtain an irrelevance result, we must assume that T has decidable equality.

Hypothesis eq_dec : forall x y, x = y \/ x <> y.

The argument roughly proceeds as follows. We use decidable equality to define a normalization procedure that takes a proof of equality as input and produces a canonical proof of equality of the same type as output. Crucially, the output of the procedure does not depend on its input. We then show that the normalization procedure has an inverse, allowing us to conclude ─ all proofs must be equal to the canonical one.
Here is the normalization procedure.

Definition normalize {x y} (p : x = y) : x = y :=
  match eq_dec x y with
  | or_introl e => e
  | or_intror _ => p
  end.

If x = y holds, eq_dec x y must return something of the form or_introl e, the other branch being contradictory. This implies that normalize is constant.

Lemma normalize_const {x y} (p q : x = y) : normalize p = normalize q.
Proof. by rewrite /normalize; case: (eq_dec x y). Qed.

The inverse of normalize is defined by combining transitivity and symmetry of equality.

Notation "p * q" := (etrans p q).

Notation "p ^-1" := (esym p)
  (at level 3, left associativity, format "p ^-1").

Definition denormalize {x y} (p : x = y) := p * (normalize (eq_refl y))^-1.

As the above notation suggests, we can show that esym is the inverse of etrans, in the following sense.

Definition etransK x y (p : x = y) : p * p^-1 = eq_refl x :=
  match p in _ = y return p * p^-1 = eq_refl x with
  | eq_refl => eq_refl (eq_refl x)
  end.

This proof avoids the problem that we encountered in our failed proof of irrelevance, resulting from generalizing the right-hand side of p. In this return type, p * p^-1 has type x = x, which matches the one of eq_refl x. Notice why the result of the eq_refl branch is valid: we must produce something of type eq_refl x * (eq_refl x)^-1 = eq_refl x, but by the definitions of etrans and esym, the left-hand side computes precisely to eq_refl x.
Armed with etransK, we can now relate normalize to its inverse, and conclude the proof of irrelevance.

Definition normalizeK x y (p : x = y) :
  normalize p * (normalize (eq_refl y))^-1 = p :=
  match p in _ = y return normalize p * (normalize (eq_refl y))^-1 = p with
  | eq_refl => etransK x x (normalize (eq_refl x))
  end.

Lemma irrelevance x y (p q : x = y) : p = q.
Proof. by rewrite -[LHS]normalizeK -[RHS]normalizeK (normalize_const p q). Qed.

End Irrelevance.

Irrelevance of equality in practice

The Mathematical Components library provides excellent support for types with decidable equality in its eqtype module, including a generic result of proof irrelevance like the one I gave above (eq_irrelevance). The class structure used by eqtype makes it easy for Coq to infer proofs of decidable equality, which considerably simplifies the use of this and other lemmas. The Coq standard library also provides a proof of this lemma (eq_proofs_unicity_on), though it is a bit harder to use, since it does not make use of any mechanisms for inferring results of decidable equality.


POPL and CoqPL 2016

POPL 2016 was just held a few weeks ago in Saint Petersburg, Florida, bringing together many researchers in programming languages to learn about the latest developments in the field. As usual, it was an exciting event for the Coq community. Early in the week, Robert Rand and I led an introductory Coq tutorial. Later, there was a special meeting to announce DeepSpec, an ambitious research effort to develop and integrate several verified software components, many of which use Coq. Near the end, Xavier Leroy received an award for the most influential POPL paper of 2006, for Formal Verification of a Compiler Back-end, where he introduced the CompCert verified C compiler.

Besides all these events, this year also featured the second edition of the CoqPL workshop. Its main attraction may have been the release of the long-awaited Coq 8.5. Matthieu Sozeau and Maxime Dénès gave a detailed presentation of its main new features, which include asynchronous proof checking and editing, universe polymorphism, and a new tactic engine. Congratulations to the Coq team for the great work!

Another fun talk was by Clément Pit-Claudel, where he announced company-coq, a set of Proof General extensions that brings many nice editing features for Coq code in Emacs. These include: automatically prettifying code (for instance, replacing forall by ∀), auto-completion, code folding, and improved error messages, among many others. If you work with Coq under Emacs, you should definitely give it a try!


Writing reflective tactics

One important aspect of Coq's logic is the special status given to computation: while some systems require one to apply explicit deductive steps to show that two given terms are equal, Coq's logic considers any two terms that evaluate to the same result to be equal automatically, without the need for additional reasoning.
Without getting into too much detail, we can illustrate this idea with some simple examples. Russell and Whitehead's seminal Principia Mathematica had to develop hundreds of pages of foundational mathematics before being able to prove that 1 + 1 = 2. In contrast, here's what this proof looks like in Coq:

Definition one_plus_one : 1 + 1 = 2 := erefl.

erefl is the only constructor of the eq type; its type, forall A (a : A), a = a, tells us that we can use it to prove that a given term a is equal to itself. Coq accepts one_plus_one as a valid proof because, even though the two sides of the equation are not syntactically the same, it is able to use the definition of + to compute the left-hand side and check that the result is the same as the right-hand side. This also works for some statements with variables in them, for instance

Definition zero_plus_n n : 0 + n = n := erefl.

The same principle applies here: + is defined by case analysis on its first argument, and doesn't even need to inspect the second one. Since the first argument on the left-hand side is a constructor (0), Coq can reduce the expression and conclude that both sides are equal.
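This asymmetry comes from the definition of addition itself, which is essentially the following (modulo the names used in the standard library):

```coq
Fixpoint plus (n m : nat) : nat :=
  match n with
  | 0 => m                 (* a constructor here lets the match reduce *)
  | S n' => S (plus n' m)
  end.
```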
Unfortunately, not every equality is a direct consequence of computation. For example, this proof attempt is rejected:

Fail Definition n_plus_zero n : n + 0 = n := erefl.

What happened here? As mentioned before, + is defined by case analysis on the first argument; since the first argument of the left-hand side doesn't start with a constructor, Coq doesn't know how to compute there. As it turns out, one actually needs an inductive argument to prove this result, which might end up looking like this, if we were to check the proof term that Coq produces:

Fixpoint n_plus_zero n : n + 0 = n :=
  match n with
  | 0 => erefl
  | n.+1 => let: erefl := n_plus_zero n in erefl
  end.

It seems that, although interesting, computation inside Coq isn't of much use when proving something. Or is it?
In this post, I will show how computation in Coq can be used to write certified automation tactics with a technique known as proof by reflection. Reflection is extensively used in Coq and in other proof assistants as well; it is at the core of powerful automation tactics such as ring, and played an important role in the formalization of the Four-color theorem. As a matter of fact, the name Ssreflect stands for small-scale reflection, due to the library's pervasive use of reflection and computation.
Let's see how reflection works by means of a basic example: a tactic for checking equalities between simple expressions involving natural numbers.

Arithmetic with reflection

Imagine that we were in the middle of a proof and needed to show that two natural numbers are equal:

Lemma lem n m p : (n + m) * p = p * m + p * n.

ring is powerful enough to solve this goal by itself, but just for the sake of the example, suppose that we had to prove it by hand. We could write something like

Proof. by rewrite mulnDl (mulnC n) (mulnC m) addnC. Qed.

This was not terribly complicated, but there's certainly room for improvement. In a paper proof, a mathematician would probably assume that the reader is capable of verifying this result on their own, without any additional detail. But how exactly would the reader proceed?
In the case of the simple arithmetic expression above, it suffices to apply the distributivity law as long as possible, until both expressions become a sum of monomials. Then, thanks to associativity and commutativity, we just have to reorder the factors and terms and check that both sides of the equation match.
The idea of proof by reflection is to reduce the validity of a logical statement to a symbolic computation, usually by proving a theorem of the form thm : b = true -> P with b : bool. If b can be computed explicitly and reduces to true, then Coq recognizes erefl as a proof of b = true, which means that thm erefl becomes a proof of P.
To make things concrete, let's go back to our example. The idea that we described above for checking whether two numbers are equal can be used whenever we have expressions involving addition, multiplication, and variables. We will define a Coq data type for representing such expressions, as we will need to compute with them:

Inductive expr :=
| Var of nat
| Add of expr & expr
| Mul of expr & expr.

Variables are represented by natural numbers using the Var constructor, and Add and Mul can be used to combine expressions. The following term, for instance, represents the expression n * (m + n):

Example expr_ex :=
  Mul (Var 0) (Add (Var 1) (Var 0)).

where Var 0 and Var 1 denote n and m, respectively.
If we are given a function vals assigning a number to each variable, we can compute the value of an expression with a simple recursive function:

Fixpoint nat_of_expr vals e :=
  match e with
  | Var v => vals v
  | Add e1 e2 => nat_of_expr vals e1 + nat_of_expr vals e2
  | Mul e1 e2 => nat_of_expr vals e1 * nat_of_expr vals e2
  end.

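For instance (an illustrative check), evaluating expr_ex with n and m assigned to variables 0 and 1 recovers the original expression:

```coq
Example nat_of_expr_ex n m :
  nat_of_expr (nth 0 [:: n; m]) expr_ex = n * (m + n).
Proof. by []. Qed.
```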
Now, since every expression of that form can be written as a sum of monomials, we can define a function for converting an expr to that form:

Fixpoint monoms e :=
  match e with
  | Var v => [:: [:: v] ]
  | Add e1 e2 => monoms e1 ++ monoms e2
  | Mul e1 e2 => [seq m1 ++ m2 | m1 <- monoms e1, m2 <- monoms e2]
  end.

Here, each monomial is represented by a list enumerating all variables that occur in it, counting their multiplicities. Hence, a sum of monomials is represented as a list of lists. For example, here's the result of normalizing expr_ex:

Example monoms_expr_ex :
  monoms expr_ex = [:: [:: 0; 1]; [:: 0; 0]].
Proof. by []. Qed.

To prove that monoms has the intended behavior, we show that the value of an expression is preserved by it. By using the big operations \sum and \prod from the MathComp library, we can compute the value of a sum of monomials very easily:

Lemma monomsE vals e :
  nat_of_expr vals e = \sum_(m <- monoms e) \prod_(v <- m) vals v.
Proof.
elim: e=> [v|e1 IH1 e2 IH2|e1 IH1 e2 IH2] /=.
- by rewrite 2!big_seq1.
- by rewrite big_cat IH1 IH2.
rewrite {}IH1 {}IH2 big_distrlr /=.
elim: (monoms e1) (monoms e2)=> [|v m1 IH] m2 /=; first by rewrite 2!big_nil.
rewrite big_cons big_cat /= IH; congr addn.
by rewrite big_map; apply/eq_big=> //= m3 _; rewrite big_cat.
Qed.

Hence, to check that two expressions are equivalent, it suffices to compare the results of monoms, modulo the ordering. We can do this by sorting the variable names on each monomial and then testing whether one list of monomials is a permutation of the other:

Definition normalize := map (sort leq) \o monoms.
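On our running example, normalization leaves the monomials unchanged, since their variables are already sorted (an illustrative check):

```coq
Example normalize_expr_ex :
  normalize expr_ex = [:: [:: 0; 1]; [:: 0; 0]].
Proof. by []. Qed.
```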

Lemma normalizeE vals e :
  nat_of_expr vals e = \sum_(m <- normalize e) \prod_(v <- m) vals v.
Proof.
rewrite monomsE /normalize /=; elim: (monoms e)=> [|m ms IH] /=.
  by rewrite big_nil.
rewrite 2!big_cons IH; congr addn.
by apply/eq_big_perm; rewrite perm_eq_sym perm_sort.
Qed.

Definition expr_eq e1 e2 := perm_eq (normalize e1) (normalize e2).

Lemma expr_eqP vals e1 e2 :
  expr_eq e1 e2 ->
  nat_of_expr vals e1 = nat_of_expr vals e2.
Proof. rewrite 2!normalizeE; exact/eq_big_perm. Qed.
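As a quick sanity check (illustrative), expr_eq accepts two expressions that differ only in the order of their terms:

```coq
Example expr_eq_ex :
  expr_eq (Add (Var 0) (Var 1)) (Add (Var 1) (Var 0)).
Proof. by []. Qed.
```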

To see how this lemma works, let's revisit our original example. Here's a new proof that uses expr_eqP:

Lemma lem' n m p : (n + m) * p = p * m + p * n.
Proof.
exact: (@expr_eqP (nth 0 [:: n; m; p])
                  (Mul (Add (Var 0) (Var 1)) (Var 2))
                  (Add (Mul (Var 2) (Var 1)) (Mul (Var 2) (Var 0)))
                  erefl).
Qed.

The first argument to our lemma assigns "real" variables to variable numbers: 0 corresponds to n (the first element of the list), 1 to m, and 2 to p. The second and third arguments are symbolic representations of the left- and right-hand sides of our equation. The fourth argument is the most interesting one: expr_eq was defined as a boolean function that returns true when its two arguments are equivalent expressions. As we've seen above, this means that whenever expr_eq e1 e2 computes to true, erefl is a valid proof of it. Finally, when Coq tries to check whether the conclusion of expr_eqP can be used on our goal, it computes nat_of_expr on both sides, realizing that the conclusion and the goal are exactly the same. For instance:

Lemma expr_eval n m p :
  nat_of_expr (nth 0 [:: n; m; p]) (Mul (Add (Var 0) (Var 1)) (Var 2))
  = (n + m) * p.
Proof. reflexivity. Qed.

Of course, expr_eqP doesn't force its first argument to always return actual Coq variables, so it can be applied even in some cases where the expressions contain other operators besides + and *:

Lemma lem'' n m : 2 ^ n * m = m * 2 ^ n.
Proof.
exact: (@expr_eqP (nth 0 [:: 2 ^ n; m])
                  (Mul (Var 0) (Var 1)) (Mul (Var 1) (Var 0))
                  erefl).
Qed.

At this point, it may seem that we haven't gained much from using expr_eqP, since the second proof of our example was much bigger than the first one. This is just an illusion, however, as the proof term produced in the first case is actually quite big:

lem =
fun n m p : nat =>
(fun _evar_0_ : n * p + m * p = p * m + p * n =>
 eq_ind_r (eq^~ (p * m + p * n)) _evar_0_ (mulnDl n m p))
  ((fun _evar_0_ : p * n + m * p = p * m + p * n =>
      (fun _pattern_value_ : nat => _pattern_value_ + m * p = p * m + p * n)
      _evar_0_ (mulnC n p))
     ((fun _evar_0_ : p * n + p * m = p * m + p * n =>
         (fun _pattern_value_ : nat =>
          p * n + _pattern_value_ = p * m + p * n) _evar_0_
         (mulnC m p))
        ((fun _evar_0_ : p * m + p * n = p * m + p * n =>
          eq_ind_r (eq^~ (p * m + p * n)) _evar_0_ (addnC (p * n) (p * m)))
           (erefl (p * m + p * n)))))
     : forall n m p : nat, (n + m) * p = p * m + p * n
By using reflection, we were able to transform the explicit reasoning steps of the first proof into implicit computation that is carried out by the proof assistant. And since proof terms have to be stored in memory or included in the compiled .vo file, it is good to make them smaller if we can.
Nevertheless, even with a smaller proof term, having to manually type in that proof term is not very convenient. The problem is that Coq's unification engine is not smart enough to infer the symbolic form of an expression, forcing us to provide it ourselves. Fortunately, we can use some code to fill in the missing bits.


Reification

To reify something means to produce a representation of that object that can be directly manipulated in computation. In our case, that object is a Gallina expression of type nat, and the representation we are producing is a term of type expr.
Reification is ubiquitous in proofs by reflection. The Coq standard library comes with a plugin for reifying formulas, but it is not general enough to accommodate our use case. Therefore, we will program our own reification tactic in ltac.
We will begin by writing a function that looks for a variable in a list and returns its position. If the variable is not present, we add it to the end of the list and return the updated list as well:

Ltac intern vars e :=
  let rec loop n vars' :=
    match vars' with
    | [::] =>
      let vars'' := eval simpl in (rcons vars e) in
      constr:((n, vars''))
    | e :: ?vars'' => constr:((n, vars))
    | _ :: ?vars'' => loop (S n) vars''
    end in
  loop 0 vars.

Notice the call to eval simpl on the first branch of loop. Remember that in ltac everything is matched almost purely syntactically, so we have to explicitly evaluate a term when we are just interested in its value, and not in how it is written.
We can now write a tactic for reifying an expression. reify_expr takes two arguments: a list vars to be used with intern for reifying variables, plus the expression e to be reified. It returns a pair (e', vars') containing the reified expression e' and an updated variable list vars'.

Ltac reify_expr vars e :=
  match e with
  | ?e1 + ?e2 =>
    let r1 := reify_expr vars e1 in
    match r1 with
    | (?qe1, ?vars') =>
      let r2 := reify_expr vars' e2 in
      match r2 with
      | (?qe2, ?vars'') => constr:((Add qe1 qe2, vars''))
      end
    end
  | ?e1 * ?e2 =>
    let r1 := reify_expr vars e1 in
    match r1 with
    | (?qe1, ?vars') =>
      let r2 := reify_expr vars' e2 in
      match r2 with
      | (?qe2, ?vars'') => constr:((Mul qe1 qe2, vars''))
      end
    end
  | _ =>
    let r := intern vars e in
    match r with
    | (?n, ?vars') => constr:((Var n, vars'))
    end
  end.

Again, because this is an ltac function, we can traverse our Gallina expression syntactically, as if it were a data structure. Notice how we thread through the updated variable lists after each call; this is done to ensure that variables are named consistently.
Finally, using reify_expr, we can write solve_nat_eq, which reifies both sides of the equation on the goal and applies expr_eqP with the appropriate arguments.

Ltac solve_nat_eq :=
  match goal with
  | |- ?e1 = ?e2 =>
    let r1 := reify_expr (Nil nat) e1 in
    match r1 with
    | (?qe1, ?vm') =>
      let r2 := reify_expr vm' e2 in
      match r2 with
      | (?qe2, ?vm'') => exact: (@expr_eqP (nth 0 vm'') qe1 qe2 erefl)
      end
    end
  end.

We can check that our tactic works on our original example:

Lemma lem''' n m p : (n + m) * p = p * m + p * n.
Proof. solve_nat_eq. Qed.

With solve_nat_eq, every equation of that form becomes very easy to solve, including cases where a human prover might have trouble at first sight!

Lemma complicated n m p r t :
  (n + 2 ^ r * m) * (p + t) * (n + p)
  = n * n * p + m * 2 ^ r * (p * n + p * t + t * n + p * p)
  + n * (p * p + t * p + t * n).
Proof. solve_nat_eq. Qed.


Conclusion

We have seen how we can use internal computation in Coq to write powerful tactics. Besides generating small proof terms, tactics that use reflection have another important benefit: they are mostly written in Gallina, a typed language, and come with correctness proofs. This contrasts with most custom tactics written in ltac, which tend to break quite often due to the lack of static guarantees (and to how unstructured the tactic language is). For solve_nat_eq, we only had to write the reification engine in ltac, which results in a more manageable code base.
If you want to learn more about reflection, Adam Chlipala's CPDT book has an entire chapter devoted to the subject, which I highly recommend.