Poleiro

Where the Coq lives

Coq as its own extensible parser

Coq's built-in extensible parser, although quite convenient in many cases, does have some limitations. For one thing, some syntax extensions that one might like to write cannot be expressed in it. Furthermore, the extensions are not first-class: they have to be defined outside of the core language. We will see how to implement some interesting syntax extensions in Coq using just coercions and regular Coq functions. Besides being first-class, this mechanism can be used to define extensions that cannot be expressed with Coq's extensible parser alone.
We want to describe each syntax extension with a few Coq types and functions that determine how it should be parsed. Our first attempt might look like this:

Module Simple.

Record parser := Parser {
  token : Type;
  result : Type;
  initial_result : result;
  read_token : result -> token -> result
}.

Our parser works by reading tokens and returning a value of type result at the end of the process. It does this by starting with some value initial_result and updating that value with each token read, using the read_token function. For instance, here's a function that returns the result of parsing three tokens:

Section WithParser.

Variable p : parser.

Definition read_three_tokens t1 t2 t3 :=
  read_token p (
    read_token p (
      read_token p (initial_result p) t1
    ) t2
  ) t3.

We can use this definition as a syntax extension by declaring read_token as a coercion from result to Funclass, allowing us to use each result as if it were a function. Then, applying initial_result p to a sequence of tokens will correspond to calling our parser on that sequence.

Coercion read_token : result >-> Funclass.

Definition read_three_tokens' t1 t2 t3 :=
  (initial_result p) t1 t2 t3.

This works because each application of read_token returns a result, so trying to apply that updated result to another token triggers yet another coercion, allowing the process to continue indefinitely. We can check that both definitions of read_three_tokens yield the same function, meaning that the coercion is behaving as expected:

Lemma read_three_tokens_same : read_three_tokens = read_three_tokens'.
Proof. reflexivity. Qed.

To make the mechanism more robust, we wrap our parser in a new type, ensuring that the Coq type checker will not fail to perform some coercion by reducing result more than it should.

Record parser_wrapper : Type := ParserWrapper {
  get_result : result p
}.

Definition read_token' (w : parser_wrapper) t :=
  ParserWrapper (read_token p (get_result w) t).
Coercion read_token' : parser_wrapper >-> Funclass.

End WithParser.

As a last tweak, we declare another coercion and a notation to make using our embedded parsers more similar to other systems, like in Template Haskell:

Definition init_parser p := ParserWrapper _ (initial_result p).
Coercion init_parser : parser >-> parser_wrapper.
Notation "[ x ]" := (get_result _ x) (at level 0).

Now, we can invoke a parser simply by writing [name_of_parser <list of tokens>]:

Definition read_three_tokens'' (p : parser) t1 t2 t3 :=
  [p t1 t2 t3].

As a first example, we can define an alternative syntax for Coq lists that doesn't need separators between the elements.

Definition listp (X : Type) := {|
  token := X;
  result := list X;
  initial_result := nil;
  read_token l x := app l (cons x nil)
|}.

The listp parser is parameterized by X, the type of the elements of the list. We initialize the parser with an empty list; each token read is an element of X, which is appended to the end of the list built so far.

Definition list_exp : list nat := [listp nat 0 1 2 3 4 5 6 7 8 9 10].
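
As a quick sanity check, the chain of coercions should compute to the list we expect, which we can confirm with a proof by reflexivity:

Lemma list_exp_value :
  list_exp = cons 0 (cons 1 (cons 2 (cons 3 (cons 4 (cons 5
             (cons 6 (cons 7 (cons 8 (cons 9 (cons 10 nil)))))))))).
Proof. reflexivity. Qed.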

End Simple.

More parsers

While nice, listp is not especially interesting as a parser, since it doesn't really analyze the tokens it reads. In contrast, parsers usually treat tokens differently, depending on what has been read so far. To take such dependencies into account, we introduce some new fields to our record:

Module State.

Record parser := Parser {
  state : Type;
  initial_state : state;
  token : Type;
  next : state -> token -> state;
  result : Type;
  initial_result : result;
  read_token : state -> result -> token -> result
}.

End State.

The state field represents the internal state of our parser at a given point. It is initially set to initial_state, and is updated using the next function. We also change read_token so that it receives the current state as an additional argument.
While more general, this version isn't quite good yet. We require our parsers to carry around a complete result value that they can return after having read any sequence of tokens. Usually, however, parsing can result in errors, and there is no meaningful value that can be returned by the parser until it finishes its job. To solve this problem, we introduce one last change to our definition: dependent types.

Record parser := Parser {
  state : Type;
  initial_state : state;
  token : state -> Type;
  next : forall s, token s -> state;
  partial_result : state -> Type;
  initial_partial_result : partial_result initial_state;
  read_token : forall s, partial_result s -> forall t, partial_result (next s t)
}.

Now, the type of tokens expected by the parser, as well as its type of partial results, can depend on the current parsing state, and the parsing functions have been updated to take this dependency into account.
With dependent types, the type of the value being built, partial_result, can change during the parsing process, allowing us to distinguish a complete, successfully parsed result, from one that still needs more tokens, or even from a message describing a parse error. By making token depend on the current state, we can constrain which tokens can be read at each parsing state, allowing us to expose parse errors as type errors.
For listp, there is no need to use anything interesting for the state type. For more complicated parsers, however, state comes in handy. To see how, we will define parsers for prefix and postfix arithmetic expressions. This is also a nice example because prefix and postfix expressions cannot be handled by Coq's extensible parser alone.
Expressions can involve three operations: addition, subtraction and multiplication.

Inductive op := Add | Sub | Mul.

The parsers will read tokens that can be natural numbers or one of the above operations. We group those in a common type exp_token.

Inductive exp_token :=
| Op (o : op)
| Const (n : nat).

Notation "''+'" := Add (at level 0).
Notation "''-'" := Sub (at level 0).
Notation "''*'" := Mul (at level 0).

Coercion Op : op >-> exp_token.
Coercion Const : nat >-> exp_token.
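
The parsers below will interpret an operation through a function ap_op, which maps each op to the corresponding arithmetic function on nat. A minimal definition (assuming the usual truncated subtraction on nat) might look like this:

Definition ap_op (o : op) : nat -> nat -> nat :=
  match o with
  | Add => plus
  | Sub => minus
  | Mul => mult
  end.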

Let's consider how to deal with prefix expressions first. At any point during parsing, the parser is waiting for some number of additional expressions it must read in order to complete the top-level expression. We will use this number as the parser state. Initially, the parser wants to read just one expression, so we will use that as the initial state. Once the state reaches zero, there are no more expressions to read, so we know that the parser is done. For instance, while reading '+ 1 2, the state starts at 1, goes up to 2 after '+ is read, and then drops back to 1 and finally 0 as the two constants are read.

Module Pre.

Definition state := nat.
Definition initial_state : state := 1.

If our current state is zero, any additional tokens fed to the parser should result in a parse error. To implement this behavior, we define the type of tokens for that state to be Empty_set. Trying to feed an extra token to the parser at that point will result in a type error, signaling that a parse error just occurred. If the parser still expects expressions (i.e., if the current state is greater than zero), we just use exp_token for the token type.

Definition token (s : state) : Type :=
  match s with
  | S _ => exp_token
  | 0 => Empty_set
  end.

The value built by the parser will be a continuation that expects n numbers, one for each expression that the parser still needs to read.

Fixpoint partial_result (n : nat) : Type :=
  match n with
  | 0 => nat
  | S n => nat -> partial_result n
  end.

We must define how the parser actually interprets the tokens it reads. If the parser expects n expressions, reading a constant will make this number go down by one, since a constant is a complete expression. If it reads an operation, on the other hand, that number is increased by one. If n = 0, there is no token to be read, hence no next state, so we perform an empty pattern match to show that can't happen.

Definition next (s : state) : token s -> state :=
  match s with
  | S n' => fun t =>
              match t with
              | Op _ => S (S n')
              | Const _ => n'
              end
  | 0 => fun t => match t with end
  end.

How do we update the result? If we read a constant, we just feed it to the continuation. If we read an operation, we compose that operation with the continuation, which has the net effect of adding one argument to it. Here, ap_op is a function that maps each op to the corresponding Coq function.

Definition read_token s : partial_result s -> forall t, partial_result (next s t) :=
  match s with
  | S n' =>
    fun res t =>
      match t with
      | Op o => fun n1 n2 => res (ap_op o n1 n2)
      | Const n => res n
      end
  | _ => fun _ t => match t with end
  end.

End Pre.

We can now package our definitions as a complete parser and try it on some examples:

Definition pre := {|
  state := Pre.state;
  initial_state := 1;
  token := Pre.token;
  partial_result := Pre.partial_result;
  next := Pre.next;
  initial_partial_result := fun t => t;
  read_token := Pre.read_token
|}.

Definition pre_exp1 : nat :=
  [pre '+ '- 1 2 '+ 4 4].
Definition pre_exp2 : nat :=
  [pre '+ '* 12 '* 12 12 '* 1 '* 1 1].
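
With the ap_op sketched above, these definitions should evaluate by pure computation to the values we expect: pre_exp1 reads as (1 - 2) + (4 + 4), which is 8 on nat (subtraction truncates at zero), and pre_exp2 reads as 12 * (12 * 12) + 1 * (1 * 1).

Lemma pre_exp1_value : pre_exp1 = 8.
Proof. reflexivity. Qed.

Lemma pre_exp2_value : pre_exp2 = 1729.
Proof. reflexivity. Qed.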

We can also see that invalid expressions are rejected, as expected.

Fail Definition pre_exp_wrong : nat :=
  [pre '+ 1 1 1].

Error: The term "1" has type "nat" while it is expected to have type
"token pre (next pre (next pre (next pre (initial_state pre) '+) 1) 1)".
A parser for postfix expressions is in some sense the dual of the one we've just seen: instead of carrying around a continuation that expects arguments, we construct a stack of numbers on which we can perform our operations. Our state will also be a number, denoting the number of elements on our stack. Since our stack starts empty, the initial state will be zero.

Module Post.

Definition state := nat.
Definition initial_state : state := 0.

Since our operations need at least two numbers on the stack, we restrict our parser to accept only numbers while the stack has fewer than two elements:

Definition token (s : state) : Type :=
  match s with
  | 0 | 1 => nat
  | _ => exp_token
  end.

The partial_result type is a length-indexed list of natural numbers with one small twist: we ensure that partial_result 1 = nat definitionally, so that we can use postfix expressions as having type nat without the need for any projections.

Fixpoint partial_result' (n : nat) : Type :=
  match n with
  | 0 => nat
  | S n => (partial_result' n * nat)%type
  end.

Definition partial_result (s : state) : Type :=
  match s with
  | 0 => unit
  | S n => partial_result' n
  end.

next and read_token are dual to the definitions of the previous parser: reading a constant increases the stack size by one, while reading an operation decreases the stack size by one.

Definition next s : token s -> state :=
  match s with
  | 0 => fun _ => 1
  | 1 => fun _ => 2
  | S (S n) => fun t =>
                 match t with
                 | Op _ => 1 + n
                 | Const _ => 3 + n
                 end
  end.

Definition read_token s : partial_result s -> forall t, partial_result (next s t) :=
  match s with
  | 0 => fun _ t => t
  | 1 => fun res t => (res, t)
  | 2 => fun res t =>
           match res with
           | (n1, n2) =>
             match t with
             | Op o => ap_op o n1 n2
             | Const n => (n1, n2, n)
             end
           end
  | S (S (S n)) => fun res t =>
                     match res with
                     | (res, n1, n2) =>
                       match t with
                       | Op o => (res, ap_op o n1 n2)
                       | Const n => (res, n1, n2, n)
                       end
                     end
  end.

End Post.

We now have a full parser for postfix expressions, which we can test on some examples.
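
Packaging these definitions mirrors what we did for pre; since the stack starts empty, the initial partial result is just tt. The example below is a sketch: it reads the postfix form of (1 + 2) * (4 + 4), which, with the ap_op above, computes to 24.

Definition post := {|
  state := Post.state;
  initial_state := 0;
  token := Post.token;
  partial_result := Post.partial_result;
  next := Post.next;
  initial_partial_result := tt;
  read_token := Post.read_token
|}.

Definition post_exp1 : nat :=
  [post 1 2 '+ 4 4 '+ '*].

Lemma post_exp1_value : post_exp1 = 24.
Proof. reflexivity. Qed.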

Summary

We've seen how we can use coercions to extend the syntax of Coq. While useful, this technique is complementary to Coq's own extensible parser, and has its own limitations. Even though it is very programmable, some syntax extensions, such as anything involving variable binding, cannot be defined with it in a seamless way. Being first-class can also be a disadvantage, since the parser and parsing functions are part of the resulting term, possibly affecting conversion and unification. Finally, regular syntax extensions are defined with a high-level declarative syntax, while this mechanism requires one to write a custom parser for each extension.

Searching with eggs

Consider the following problem. Suppose we are in a 100-story building. We know that, when dropping an egg from the window, the egg will stay intact if we are below a certain floor. However, if we repeat the same exercise above that critical floor, the egg will break. How can we find this floor and minimize the number of egg drops performed in the worst case if we have only two eggs? We suppose that we are allowed to reuse eggs that fall without breaking.
We will see how we can model this problem in Coq and find a correct solution. We model a playing strategy as a decision tree:

Inductive strategy : Type :=
| Guess (floor : nat)
| Drop (floor : nat) (broken intact : strategy).

In the above definition, Guess floor represents the end of the algorithm, when we try to guess the target floor. If floor is equal to the target, we win the game; otherwise, we lose. Drop floor broken intact represents an egg drop at floor. If the egg breaks, we continue playing with strategy broken; otherwise, we continue with intact.
Simulating an egg-drop game is just a matter of performing a tree search. play target s returns true if and only if strategy s succeeds in guessing floor target.

Fixpoint play (target : nat) (s : strategy) : bool :=
  match s with
  | Guess floor =>
    beq_nat floor target
  | Drop floor broken intact =>
    play target (if leb target floor then broken
                 else intact)
  end.

We can also find how many eggs a strategy needs and how many drops are performed in the worst case. drops just computes the strategy tree height, whereas eggs computes a "skewed" height, where right branches do not add to the final value.

Fixpoint eggs (s : strategy) : nat :=
  match s with
  | Guess _ => 0
  | Drop _ broken intact => max (S (eggs broken)) (eggs intact)
  end.

Fixpoint drops (s : strategy) : nat :=
  match s with
  | Guess _ => 0
  | Drop _ broken intact => S (max (drops broken) (drops intact))
  end.

Finally, using these concepts, we can describe what a solution to our problem is. winning lower range s says that strategy s is able to find any target floor among the range floors starting at lower, while is_optimal range e d states that there is a winning strategy for range target floors that uses at most e eggs and performs at most d drops, and that d is the smallest number of drops for which such a strategy exists.

Definition winning (lower range : nat) (s : strategy) : Prop :=
  forall target, lower <= target < lower + range ->
                 play target s = true.

Definition is_optimal (range e d : nat) : Prop :=
  exists s : strategy,
    eggs s <= e /\
    drops s = d /\
    winning 0 range s /\
    forall s', eggs s' <= e ->
               winning 0 range s' ->
               d <= drops s'.

A simple first strategy

Before trying to find an optimal solution, let us try to find one that works. A simple strategy is to perform linear search, starting at the bottom and going up one floor at a time. As soon as the egg breaks, we know that we have found our target.

Fixpoint linear (lower range : nat) : strategy :=
  match range with
  | 0 => Guess lower
  | S range' => Drop lower (Guess lower) (linear (S lower) range')
  end.

linear lower range works on a range of S range floors, using at most one egg. Because of this, it is not very efficient, performing range drops in the worst case.

Lemma linear_winning lower range :
  winning lower (S range) (linear lower range).

Lemma linear_eggs lower range :
  eggs (linear lower range) = min 1 range.

Lemma linear_drops lower range :
  drops (linear lower range) = range.

Improving our strategy

Now that we know a naive search strategy, we will try to find an optimal one. Intuitively, it seems that it should be possible to perform some form of binary search, starting at the middle of our floor range, and continuing recursively depending on the outcome of our drop.
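
For concreteness, here is a sketch of such a strategy: drop the egg once near the middle, at floor 49, and fall back to a linear search on whichever half remains. It wins using two eggs, but still performs 50 drops in the worst case.

Definition solution_take_1 : strategy :=
  Drop 49 (linear 0 49) (linear 50 49).

Lemma solution_take_1_winning :
  winning 0 100 solution_take_1.

Lemma solution_take_1_drops :
  drops solution_take_1 = 50.
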
Although much better than a pure linear strategy, this is still far from optimal: if our egg doesn't break on the first drop, we will still be using only one egg to search the entire upper range. If we allowed ourselves to break one egg there, presumably we would be able to solve the upper range in far fewer drops, which would in turn allow us to perform our first drop at a lower floor, e.g.

Definition solution_take_2 : strategy :=
  Drop 33 (linear 0 33)
          (Drop 66 (linear 34 32) (linear 67 32)).

Lemma solution_take_2_winning :
  winning 0 100 solution_take_2.

Lemma solution_take_2_eggs :
  eggs solution_take_2 = 2.

Lemma solution_take_2_drops :
  drops solution_take_2 = 34.

Our new solution performs much better, but there is still room for improvement. Once again, if our first two drops are below the target floor, we will be using only one egg to search through 33 floors, which could be done better if we had used both of them. This suggests that the optimal strategy should be some form of skewed binary search, where the upper range that is produced by an egg drop should use its available eggs in an optimal way.

Finding the optimum

How can we formalize the intuition that we just developed? The key insight is to reason by duality and, instead, ask "what is the largest range of floors we can scan using at most some number of eggs and drops?" When looking at the problem this way, it becomes clear that optimality has a recursive structure that is easy to describe: to find a floor using at most e eggs and d drops, we need to combine two optimal strategies: one using at most e-1 eggs and d-1 drops, for the case where our first drop causes the egg to break, and another using at most e eggs and d-1 drops, for the case where our egg does not break at first.
We can readily express this idea as code. optimal_range e d computes the maximal range size that can be solved using e eggs and d drops at most, while optimal lower e d computes a strategy for doing so starting from floor lower.

Fixpoint optimal_range_minus_1 (e d : nat) {struct d} : nat :=
  match d, e with
  | S d', S e' => S (optimal_range_minus_1 e' d' +
                     optimal_range_minus_1 (S e') d')
  | _, _ => 0
  end.

Definition optimal_range e d := S (optimal_range_minus_1 e d).

Fixpoint optimal (lower e d : nat) {struct d} : strategy :=
  match d, e with
  | S d', S e' =>
    let floor := lower + optimal_range_minus_1 e' d' in
    Drop floor
         (optimal lower e' d')
         (optimal (S floor) (S e') d')
  | _, _ => Guess lower
  end.

It is easy to show that optimal lower e d scans optimal_range e d floors, as well as that it uses the resources we expect.

Lemma optimal_winning lower e d :
  winning lower (optimal_range e d) (optimal lower e d).

Lemma optimal_eggs lower e d :
  eggs (optimal lower e d) = min e d.

Lemma optimal_drops lower e d :
  drops (optimal lower e d) = min 1 e * d.

To actually show optimality, we need to show that optimal_range indeed computes what it is supposed to. We start by showing two inversion lemmas, relating the range that is scanned by a winning strategy to the range scanned by its sub-strategies.

Lemma winning_inv_guess lower range floor :
  winning lower range (Guess floor) -> range <= 1.

Lemma winning_inv_drop lower range floor broken intact :
  winning lower range (Drop floor broken intact) ->
  exists r1 r2 lower',
    range = r1 + r2 /\
    winning lower r1 broken /\
    winning lower' r2 intact.

Using these lemmas, we can show that no winning strategy that uses at most e eggs and at most d drops can scan more than optimal_range e d floors:

Lemma optimal_range_correct :
  forall lower e d s range,
    eggs s <= e ->
    drops s <= d ->
    winning lower range s ->
    range <= optimal_range e d.

Combining this lemma with the ranges we had derived for linear and optimal, we can prove useful results about optimal_range.

Lemma optimal_range_monotone :
  forall e e' d d',
    e <= e' ->
    d <= d' ->
    optimal_range e d <= optimal_range e' d'.
Proof.
  intros e e' d d' He Hd.
  apply (optimal_range_correct 0 e' d' (optimal 0 e d));
    [ rewrite optimal_eggs; lia
    | rewrite optimal_drops; destruct e; simpl; lia
    | apply optimal_winning ].
Qed.

Lemma optimal_range_lower_bound :
  forall e d, d <= (optimal_range (S e) d).
Proof.
  intros e d.
  cut (S d <= optimal_range (S e) d); try lia.
  apply (optimal_range_correct 0 (S e) d (linear 0 d));
    [ rewrite linear_eggs
    | rewrite linear_drops
    | apply linear_winning ]; lia.
Qed.

Given that optimal_range is monotone, we can find what the optimal number of drops for a given range is by picking the smallest value of t such that range <= optimal_range e t. We formalize this idea by writing a generic function find_root that can find such values for any monotone function f, given a suitable initial guess.

Fixpoint find_root (f : nat -> nat) (target n : nat) : nat :=
  match n with
  | 0 => 0
  | S n' =>
    if leb target (f n') then
      find_root f target n'
    else
      S n'
  end.
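
For instance, instantiating find_root with a simple monotone function shows it picking the smallest input whose image reaches the target (a small sanity check; the squaring function is just an arbitrary example):

Compute (find_root (fun x => x * x) 10 10). (* = 4 : nat, since 3 * 3 < 10 <= 4 * 4 *)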

Lemma find_root_correct :
  forall f target n,
    (forall x y, x <= y -> f x <= f y) ->
    target <= f n ->
    let x := find_root f target n in
    target <= f x /\
    forall y, y < x -> f y < target.

By instantiating this theorem with optimal_range and applying the appropriate theorems, we obtain our final result. The proof of optimality goes by contradiction. Let t = find_optimum (S e) range. If we find another strategy s such that eggs s <= S e and drops s < t, we know that range <= optimal_range (S e) (drops s) by optimal_range_correct, but we must also have optimal_range (S e) (drops s) < range by the correctness of find_root, which is a contradiction.

Definition find_optimum e target :=
  find_root (optimal_range e) target target.

Lemma find_optimum_correct :
  forall e range,
    let d := find_optimum (S e) range in
    is_optimal range (S e) d.
Proof.
  intros e range d.
  assert (H : range <= optimal_range (S e) d /\
              forall d', d' < d -> optimal_range (S e) d' < range).
  (* By correctness of find_root *)
  destruct H as [H1 H2].
  exists (optimal 0 (S e) d).
  rewrite optimal_drops, optimal_eggs.
  repeat split; try lia; simpl; try lia.
  - intros x Hx.
    apply optimal_winning. lia.
  - intros s Hs WIN.
    destruct (le_lt_dec d (drops s)) as [LE | LT]; trivial.
    assert (Hd : drops s <= drops s) by lia.
    assert (He : eggs s <= S e) by lia.
    (* optimal_range < range *)
    pose proof (H2 _ LT).
    (* range <= optimal_range *)
    pose proof (optimal_range_correct _ _ _ _ _ He Hd WIN).
    lia.
Qed.

Using this result, we can find the answer for our original problem.

Lemma solution :
  is_optimal 100 2 14.
Proof. apply find_optimum_correct. Qed.
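
We can also compute the optimal number of drops directly; with the definitions above, this should evaluate to 14:

Compute (find_optimum 2 100). (* = 14 : nat *)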

Automatic naming considered harmful

Whenever a user doesn't name a variable in an Ltac script, Coq automatically generates a fresh name for it. This may seem convenient, but after doing some proofs you start to realize that this behavior is more of a nuisance than a feature. The heuristics used by Coq when choosing new names are hard to predict and depend on the current proof state, making proofs likely to break whenever a small change is introduced. As every experienced Coq user knows, trying to figure out whether hypothesis H22 now corresponds to H9 or H24 when fixing a complicated proof script cannot exactly be described as a pleasant experience, especially when the proof script is 2000 lines long and you've spent hours dealing with similar horrors.
When writing robust proof scripts, one must ensure that the code doesn't depend on weird automatically chosen names. Here are some tips for making the naming problem disappear.

Name variables explicitly

Most of the time, you can get rid of this problem by not being lazy and naming everything explicitly. Avoid using tactics like intros or destruct in their unnamed form; instead, use intros v1 v2 v3 ... or destruct v as [a1 a2|a1 a2 a3|...]. You can also name hypotheses in the statement of your lemma, using a forall instead of a plain implication arrow (remember, in Coq, these are actually the same thing). If you don't pass any names to intros, it will try to keep the same names that appear in the current goal:

Lemma l1 : forall (A : Prop) (HYP : A), A.
Proof. intros. exact HYP. Qed.

This also works when reintroducing a variable from the context into the goal, for instance with generalize dependent.
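
For instance, in the small sketch below (the lemma name l2 is arbitrary), reverting HYP with generalize dependent and then calling intros without arguments brings it back under the same name:

Lemma l2 : forall (A : Prop) (HYP : A), A.
Proof.
  intros A HYP.
  generalize dependent HYP.
  (* the goal is now forall HYP : A, A, so a plain intros reuses the name *)
  intros. exact HYP.
Qed.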

Name constructor arguments

When working with complicated inductive predicates, naming all new variables explicitly on every destruct or inversion can be cumbersome. A reasonable compromise is to name all the arguments to your constructors in inductive definitions. This makes the names that are generated by those tactics meaningful, more stable and predictable. Here's an example of a simple inductively defined relation with explicit argument names:

Inductive collatz : nat -> nat -> Prop :=
| collatz_even (n m : nat)
               (EVEN : 2 * m = n) : collatz n m
| collatz_odd (n m : nat)
              (ODD : 2 * m + 1 = n) : collatz n (3 * n + 1)
| collatz_trans (n m p : nat)
                (COLLATZnm : collatz n m)
                (COLLATZmp : collatz m p) : collatz n p.
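
As a small sketch of the effect (the lemma is artificial), destructing a collatz hypothesis now yields hypotheses carrying these names instead of H0, H1, and so on:

Lemma collatz_names : forall m, collatz 42 m -> True.
Proof.
  intros m H.
  destruct H.
  (* each case now has hypotheses named after the definition:
     EVEN, ODD, or COLLATZnm and COLLATZmp *)
  - exact I.
  - exact I.
  - exact I.
Qed.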

Don't refer to names in proofs

Weird automatically generated names are not a problem if they do not appear in Ltac scripts. Automated proofs can obviously help here, but there are small things you can do even if you're not an automation expert. By using match, you can avoid mentioning automatically generated names without having to convert an unnamed inversion or destruct into a named one:

Lemma collatz_cycle :
  forall n m,
    (n = 1 \/ n = 4 \/ n = 2) ->
    collatz n m ->
    m = 1 \/ m = 4 \/ m = 2.
Proof.
  intros.
  match goal with
    | H : collatz _ _ |- _ => induction H
  end.
  - omega.
  - repeat match goal with
             | H : _ \/ _ |- _ => destruct H
           end; omega.
  - match goal with
      | H1 : ?P,
        H2 : ?P -> ?Q,
        H3 : ?Q -> ?R |- ?R =>
        apply H3; apply H2; apply H1
    end.
Qed.

In the above (admittedly artificial) example, we were able to refer exactly to what we wanted in the context without having to resort to the names that Coq chose. Notice how in the third case we were able to specify how premises and conclusions should match when picking the hypotheses. In most cases, a simple match suffices to remove explicit names from proofs.