Poleiro

Where the Coq lives


Analyzing sorting algorithms


In this post, we will formalize one of the most well-known results of algorithm analysis: no comparison sort can run in asymptotically less than n * log n steps, where n is the size of its input.
Before starting, I should point out that this is the first post in this blog to use the Ssreflect and the Mathematical Components (MathComp) libraries. Ssreflect is an amazing Coq extension that brings several improvements, including a nicer set of base tactics. Both libraries cover a wide range of theories, including fairly sophisticated Mathematics - as a matter of fact, they are featured in the Coq formalization of the Feit-Thompson theorem, known for its extremely complex and detailed proof.
As we will see, having good library support can help a lot when doing mechanized proofs, even for such simple results as this one. Two things that come in handy here, in particular, are the theories of permutations and sets over finite types that are available in MathComp. Indeed, the MathComp definitions enable many useful, higher-level reasoning principles that don't come for free in Coq, such as extensional and decidable equality. Furthermore, many lemmas in the library would require a fair amount of machinery to develop on one's own - for example, showing that there are exactly n! permutations over a set of n elements. Previous versions of this post (which you can still find on the repository) tried to avoid external libraries, but were much longer and more complicated, prompting me to bite the bullet and port everything to Ssreflect/MathComp.

Basics

The informal proof of this result is fairly simple:
  • If a comparison sort is correct, then it must be capable of shuffling an input vector of size n according to any of the n! permutations of its elements.
  • On the other hand, any such algorithm can recognize at most 2 ^ k distinct permutations, where k is the maximum number of comparisons performed. Hence, n! <= 2 ^ k or, equivalently, log2 n! <= k.
  • To conclude, Stirling's approximation tells us that n * log2 n = O(log2 n!), which yields our result.
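For a quick sanity check of these bounds (an arithmetic example of mine, not part of the original argument): any algorithm that performs at most 4 comparisons can distinguish at most 2 ^ 4 = 16 outcomes, which is fewer than the 4! = 24 possible orderings of 4 elements, so sorting 4 elements requires at least 5 comparisons in the worst case.

Example four_elements : 2 ^ 4 < 4`!.
Proof. by []. Qed.
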
We'll begin our formalization by defining a convenient datatype for representing the execution of a comparison sort. For our purposes, a comparison sort can be seen as a binary tree: internal nodes indicate when a comparison between two elements occurs, with the right and left branches telling how to proceed depending on its result. The leaves of the tree mark when the algorithm ends and yields back a result. Thus, we obtain the following type:

Inductive sort_alg (n : nat) : Type :=
| Compare (i j : 'I_n) (l r : sort_alg n)
| Done of 'S_n.

Let's analyze this declaration in detail. The n parameter will be used later to track the length of the array that we are sorting. 'I_n is the type of natural numbers bounded by n, whereas 'S_n represents permutations of elements of that type. The idea is that, when running our algorithm, the i and j arguments of Compare tell which elements of our array to compare. The permutation in Done specifies how to rearrange the elements of the input array to produce an answer. We can formalize the previous description in the following function:

Fixpoint execute T n c (a : sort_alg n) (xs : n.-tuple T) :=
  match a with
  | Compare i j l r =>
    let a' := if c (tnth xs i) (tnth xs j) then l else r in
    execute c a' xs
  | Done p => [tuple tnth xs (p i) | i < n]
  end.

execute is a polymorphic function that works for any type T with a comparison operator c. Given an algorithm a and an input array xs of length n, the function compares elements of xs as directed by a, following the appropriate branches along the way, until finding out how to rearrange the elements of xs.
With execute, we can define what it means for a sorting algorithm to be correct, by relating its results to the MathComp sort function:

Definition sort_alg_ok n (a : sort_alg n) :=
  forall (T : eqType) (le : rel T),
  forall xs, execute le a xs = sort le xs :> seq T.

Finally, to translate the above informal argument, we will need some more definitions. Let's first write a function for computing how many comparisons an algorithm performs in the worst case:

Fixpoint comparisons n (a : sort_alg n) : nat :=
  match a with
  | Compare _ _ l r => (maxn (comparisons l) (comparisons r)).+1
  | Done _ => 0
  end.

And here's a function for computing the set of permutations that an algorithm can perform (notice the use of the set library of MathComp; here, :|: denotes set union):

Fixpoint perms n (a : sort_alg n) : {set 'S_n} :=
  match a with
  | Compare _ _ l r => perms l :|: perms r
  | Done p => [set p]
  end.

(Strictly speaking, both comparisons and perms give upper bounds on the values they should compute, but this does not affect us in any crucial way.)

Show me the proofs

To show that a correct algorithm must be capable of performing arbitrary permutations, notice that, if xs is a sorted array with distinct elements, then permuting its elements is an injective operation. That is, different permutations produce different arrays.

Lemma permsT n (a : sort_alg n) : sort_alg_ok a -> perms a = [set: 'S_n].
Proof.
move=> a_ok.
apply/eqP; rewrite -subTset; apply/subsetP=> /= p _.
move: {a_ok} (a_ok _ leq [tuple val (p^-1 i) | i < n]).
rewrite (_ : sort _ _ = [tuple val i | i < n]); last first.
  apply: (eq_sorted leq_trans anti_leq (sort_sorted leq_total _)).
    by rewrite /= val_enum_ord iota_sorted.
  rewrite (perm_eq_trans (introT perm_eqlP (perm_sort _ _))) //.
  apply/tuple_perm_eqP; exists p^-1; congr val; apply/eq_from_tnth=> i.
  by rewrite 3!tnth_map 2!tnth_ord_tuple.
elim: a=> [/= i j l IHl r IHr|p'].
  by case: ifP=> [_ /IHl|_ /IHr]; rewrite in_setU => -> //; rewrite orbT.
move/val_inj=> /=.
rewrite in_set1=> e; apply/eqP/permP=> i; apply/esym/(canRL (permKV p)).
apply/val_inj.
rewrite (_ : val i = tnth [tuple val i | i < n] i); last first.
  by rewrite tnth_map tnth_ord_tuple.
by rewrite -{}e 2!tnth_map !tnth_ord_tuple.
Qed.

Bounding the number of permutations performed by an algorithm is simple, and amounts to invoking basic lemmas about arithmetic and sets.

Lemma card_perms n (a : sort_alg n) : #|perms a| <= 2 ^ comparisons a.
Proof.
elim: a=> [i j l IHl r IHr|p] /=; last by rewrite cards1.
rewrite (leq_trans (leq_of_leqif (leq_card_setU (perms l) (perms r)))) //.
by rewrite expnS mul2n -addnn leq_add // ?(leq_trans IHl, leq_trans IHr) //
   leq_exp2l // ?(leq_maxl, leq_maxr).
Qed.

Doing the last step is a bit trickier, as we don't have a proof of Stirling's approximation we can use. Instead, we take a more direct route, showing the following lemma by induction on n (trunc_log, as its name implies, is the truncated logarithm):

Local Notation log2 := (trunc_log 2).

Lemma log2_fact n : (n * log2 n)./2 <= log2 n`!.
Proof.
In order to get our proof to go through, we must strengthen our induction hypothesis a little bit:
suff: n * (log2 n).+2 <= (log2 n`! + 2 ^ (log2 n)).*2.+1.
We can then proceed with a straightforward (although not completely trivial) inductive argument.
elim: n=> [|n IH] //=.
(* ... *)
Qed.

Our main result follows almost immediately from these three intermediate lemmas:

Lemma sort_alg_ok_leq n (a : sort_alg n) :
  sort_alg_ok a -> (n * log2 n)./2 <= comparisons a.
Proof.
move=> a_ok; suff lb: n`! <= 2 ^ comparisons a.
  rewrite (leq_trans (log2_fact n)) // -(@leq_exp2l 2) //.
  by rewrite (leq_trans (trunc_logP (leqnn 2) (fact_gt0 n))).
rewrite (leq_trans _ (card_perms a)) // -{1}(card_ord n) -cardsT -card_perm.
by rewrite -(cardsE (perm_on [set: 'I_n])) subset_leq_card // permsT //.
Qed.

Summary

We've seen how to formalize a result of algorithm analysis in an abstract setting: although it is fair to say that our model of a comparison sort is detailed enough for our purposes, we haven't connected it yet to a more traditional computation model, something I plan to discuss in a future post.
Aside from that, we've seen how a rich set of theories, such as the ones in MathComp, allows us to express higher-level concepts in our definitions and proofs, leading to much shorter formalizations.


Coq as its own extensible parser

Coq's built-in extensible parser, although quite convenient in many cases, does have some limitations. For one thing, there are syntax extensions one might like to write that it cannot express. Furthermore, extensions are not first-class: they have to be defined outside of the core language. We will see how to implement some interesting syntax extensions in Coq using just coercions and regular Coq functions. Besides being first-class, this mechanism can define extensions that Coq's extensible parser alone cannot express.
We want to describe each syntax extension with a few Coq types and functions that determine how it should be parsed. Our first attempt might look like this:

Module Simple.

Record parser := Parser {
  token : Type;
  result : Type;
  initial_result : result;
  read_token : result -> token -> result
}.

Our parser works by reading tokens and returning a value of type result at the end of the process. It does this by starting with some value initial_result and updating that value with each token read, using function read_token. For instance, here's a function that returns the result of parsing three tokens:

Section WithParser.

Variable p : parser.

Definition read_three_tokens t1 t2 t3 :=
  read_token p (
    read_token p (
      read_token p (initial_result p) t1
    ) t2
  ) t3.

We can use this definition as a syntax extension by declaring read_token as a coercion from result to Funclass, allowing us to use each result as if it were a function. Then, applying initial_result p to a sequence of tokens will correspond to calling our parser on that sequence.

Coercion read_token : result >-> Funclass.

Definition read_three_tokens' t1 t2 t3 :=
  (initial_result p) t1 t2 t3.

This works because each application of read_token returns a result, so trying to apply that updated result to another token triggers yet another coercion, allowing the process to continue indefinitely. We can check that both definitions of read_three_tokens yield the same function, meaning that the coercion is behaving as expected:

Lemma read_three_tokens_same : read_three_tokens = read_three_tokens'.
Proof. reflexivity. Qed.

To make the mechanism more robust, we wrap our parser's results in a new type; this keeps the Coq type checker from reducing result more than it should and failing to insert a coercion.

Record parser_wrapper : Type := ParserWrapper {
  get_result : result p
}.

Definition read_token' (w : parser_wrapper) t :=
  ParserWrapper (read_token p (get_result w) t).
Coercion read_token' : parser_wrapper >-> Funclass.

End WithParser.

As a last tweak, we declare another coercion and a notation to make using our embedded parsers look more like quotations in other systems, such as Template Haskell:

Definition init_parser p := ParserWrapper _ (initial_result p).
Coercion init_parser : parser >-> parser_wrapper.
Notation "[ x ]" := (get_result _ x) (at level 0).

Now, we can invoke a parser simply by writing [name_of_parser <list of tokens>]:

Definition read_three_tokens'' (p : parser) t1 t2 t3 :=
  [p t1 t2 t3].

As a first example, we can define an alternative syntax for Coq lists that doesn't need separators between the elements.

Definition listp (X : Type) := {|
  token := X;
  result := list X;
  initial_result := nil;
  read_token l x := app l (cons x nil)
|}.

The listp parser is parameterized by X, the type of the elements of the list. We initialize the parser with an empty list, and each token we read is an element of X, which is appended to the end of the list.

Definition list_exp : list nat := [listp nat 0 1 2 3 4 5 6 7 8 9 10].

End Simple.

More parsers

While nice, listp is not especially interesting as a parser, since it doesn't really analyze the tokens it reads. In contrast, parsers usually treat tokens differently depending on what has been read so far. To take such dependencies into account, we add some new fields to our record:

Module State.

Record parser := Parser {
  state : Type;
  initial_state : state;
  token : Type;
  next : state -> token -> state;
  result : Type;
  initial_result : result;
  read_token : state -> result -> token -> result
}.

End State.

The state field represents the internal state of our parser at a given point. It is initially set to initial_state, and is updated using function next. We also change read_token to pass it the current state as an additional argument.
While more general, this version isn't quite good yet. We require our parsers to carry around a complete result value that they can return after having read any sequence of tokens. Usually, however, parsing can result in errors, and there is no meaningful value that can be returned by the parser until it finishes its job. To solve this problem, we introduce one last change to our definition: dependent types.

Record parser := Parser {
  state : Type;
  initial_state : state;
  token : state -> Type;
  next : forall s, token s -> state;
  partial_result : state -> Type;
  initial_partial_result : partial_result initial_state;
  read_token : forall s, partial_result s -> forall t, partial_result (next s t)
}.

Now, the type of tokens expected by the parser, as well as its type of partial results, can depend on the current parsing state, and the parsing functions have been updated to take this dependency into account.
With dependent types, the type of the value being built, partial_result, can change during the parsing process, allowing us to distinguish a complete, successfully parsed result, from one that still needs more tokens, or even from a message describing a parse error. By making token depend on the current state, we can constrain which tokens can be read at each parsing state, allowing us to expose parse errors as type errors.
For listp, there is no need to use anything interesting for the state type. For more complicated parsers, however, state comes in handy. To see how, we will define parsers for prefix and postfix arithmetic expressions. This is also a nice example because prefix and postfix expressions cannot be handled by Coq's extensible parser alone.
Expressions can involve three operations: addition, subtraction and multiplication.

Inductive op := Add | Sub | Mul.

The parsers will read tokens that can be natural numbers or one of the above operations. We group those in a common type exp_token.

Inductive exp_token :=
| Op (o : op)
| Const (n : nat).

Notation "''+'" := Add (at level 0).
Notation "''-'" := Sub (at level 0).
Notation "''*'" := Mul (at level 0).

Coercion Op : op >-> exp_token.
Coercion Const : nat >-> exp_token.
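
Both parsers below also rely on a helper ap_op that maps each op to the corresponding function on natural numbers; a minimal sketch of such a definition (an assumption of mine, since only its intended behavior is described below):

Definition ap_op (o : op) : nat -> nat -> nat :=
  match o with
  | Add => plus
  | Sub => minus
  | Mul => mult
  end.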

Let's consider how to deal with prefix expressions first. At any point during parsing, the parser is waiting for some number of additional expressions it must read in order to complete the top-level expression. We will use this number as the parser state. Initially, the parser wants to read just one expression, so we will use that as the initial state. Once the state reaches zero, there are no more expressions to read, so we know that the parser is done.

Module Pre.

Definition state := nat.
Definition initial_state : state := 1.

If our current state is zero, any additional tokens fed to the parser should result in a parse error. To implement this behavior, we define the type of tokens for that state to be Empty_set. Trying to feed an extra token to the parser at that point will result in a type error, signaling that a parse error just occurred. If the parser still expects expressions (i.e., if the current state is greater than zero), we just use exp_token for the token type.

Definition token (s : state) : Type :=
  match s with
  | S _ => exp_token
  | 0 => Empty_set
  end.

The value built by the parser will be a continuation that expects n numbers, one for each expression that the parser still needs to read.

Fixpoint partial_result (n : nat) : Type :=
  match n with
  | 0 => nat
  | S n => nat -> partial_result n
  end.

We must define how the parser actually interprets the tokens it reads. If the parser expects n expressions, reading a constant will make this number go down by one, since a constant is a complete expression. If it reads an operation, on the other hand, that number is increased by one. If n = 0, there is no token to be read, hence no next state, so we perform an empty pattern match to show that this case cannot happen.

Definition next (s : state) : token s -> state :=
  match s with
  | S n' => fun t =>
              match t with
              | Op _ => S (S n')
              | Const _ => n'
              end
  | 0 => fun t => match t with end
  end.

How do we update the result? If we read a constant, we just feed it to the continuation. If we read an operation, we compose that operation with the continuation, which has the net effect of adding one argument to it. Here, ap_op is a function that maps each op to the corresponding Coq function.

Definition read_token s : partial_result s -> forall t, partial_result (next s t) :=
  match s with
  | S n' =>
    fun res t =>
      match t with
      | Op o => fun n1 n2 => res (ap_op o n1 n2)
      | Const n => res n
      end
  | _ => fun _ t => match t with end
  end.

End Pre.

We can now package our definitions as a complete parser and try it on some examples:

Definition pre := {|
  state := Pre.state;
  initial_state := 1;
  token := Pre.token;
  partial_result := Pre.partial_result;
  next := Pre.next;
  initial_partial_result := fun t => t;
  read_token := Pre.read_token
|}.

Definition pre_exp1 : nat :=
  [pre '+ '- 1 2 '+ 4 4].
Definition pre_exp2 : nat :=
  [pre '+ '* 12 '* 12 12 '* 1 '* 1 1].
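
Both expressions compute to the values we would expect: the first is (1 - 2) + (4 + 4), which is 8 under truncated subtraction on nat, and the second is (12 * (12 * 12)) + (1 * (1 * 1)) = 1729. A quick sanity check (using the ap_op sketch above):

Example pre_exp1_value : pre_exp1 = 8.
Proof. reflexivity. Qed.

Example pre_exp2_value : pre_exp2 = 1729.
Proof. reflexivity. Qed.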

We can also see that invalid expressions are rejected, as expected.

Fail Definition pre_exp_wrong : nat :=
  [pre '+ 1 1 1].

Error: The term "1" has type "nat" while it is expected to have type
"token pre (next pre (next pre (next pre (initial_state pre) '+) 1) 1)".

A parser for postfix expressions is in some sense the dual of the one we've just seen: instead of carrying around a continuation that expects arguments, we construct a stack of numbers on which we can perform our operations. Our state will also be a number, denoting the number of elements on our stack. Since our stack starts empty, the initial state will be zero.

Module Post.

Definition state := nat.
Definition initial_state : state := 0.

Since our operations need at least two numbers on the stack, we restrict our parser to accept only numbers if the stack doesn't have the appropriate size:

Definition token (s : state) : Type :=
  match s with
  | 0 | 1 => nat
  | _ => exp_token
  end.

The partial_result type is a length-indexed list of natural numbers with one small twist: we ensure that partial_result 1 = nat definitionally, so that we can use postfix expressions as having type nat without the need for any projections.

Fixpoint partial_result' (n : nat) : Type :=
  match n with
  | 0 => nat
  | S n => (partial_result' n * nat)%type
  end.

Definition partial_result (s : state) : Type :=
  match s with
  | 0 => unit
  | S n => partial_result' n
  end.

next and read_token are dual to the definitions of the previous parser: reading a constant increases the stack size by one, while reading an operation decreases the stack size by one.

Definition next s : token s -> state :=
  match s with
  | 0 => fun _ => 1
  | 1 => fun _ => 2
  | S (S n) => fun t =>
                 match t with
                 | Op _ => 1 + n
                 | Const _ => 3 + n
                 end
  end.

Definition read_token s : partial_result s -> forall t, partial_result (next s t) :=
  match s with
  | 0 => fun _ t => t
  | 1 => fun res t => (res, t)
  | 2 => fun res t =>
           match res with
           | (n1, n2) =>
             match t with
             | Op o => ap_op o n1 n2
             | Const n => (n1, n2, n)
             end
           end
  | S (S (S n)) => fun res t =>
                     match res with
                     | (res, n1, n2) =>
                       match t with
                       | Op o => (res, ap_op o n1 n2)
                       | Const n => (res, n1, n2, n)
                       end
                     end
  end.

End Post.

We now have a full parser for postfix expressions, which we can test on some examples.
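
For instance, we can package the definitions in Post into a parser record, mirroring what we did for pre (a sketch, assuming the bracket notation and wrapper coercions from the first section have been redefined for the dependent parser record, as they implicitly were for the pre examples):

Definition post := {|
  state := Post.state;
  initial_state := Post.initial_state;
  token := Post.token;
  partial_result := Post.partial_result;
  next := Post.next;
  initial_partial_result := tt;
  read_token := Post.read_token
|}.

Definition post_exp1 : nat :=
  [post 1 2 '+ 4 4 '+ '+].

Example post_exp1_value : post_exp1 = 11.
Proof. reflexivity. Qed.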

Summary

We've seen how we can use coercions to extend the syntax of Coq. While useful, this technique is complementary to Coq's own extensible parser and has limitations of its own. Even though it is very programmable, some syntax extensions cannot be defined with it in a seamless way, such as anything involving variable binding. Being first-class can also be a disadvantage, since the parser and parsing functions become part of the resulting term, possibly affecting conversion and unification. Finally, regular syntax extensions are defined with a high-level declarative syntax, while this mechanism requires one to write a custom parser for each extension.


Searching with eggs

Consider the following problem. Suppose we are in a 100-story building. We know that, when dropping an egg from the window, the egg will stay intact if we are below a certain floor. However, if we repeat the same exercise above that critical floor, the egg will break. How can we find this floor and minimize the number of egg drops performed in the worst case if we have only two eggs? We suppose that we are allowed to reuse eggs that fall without breaking.
We will see how we can model this problem in Coq and find a correct solution. We model a playing strategy as a decision tree:

Inductive strategy : Type :=
| Guess (floor : nat)
| Drop (floor : nat) (broken intact : strategy).

In the above definition, Guess floor represents the end of the algorithm, when we try to guess the target floor. If floor is equal to the target, we win the game; otherwise, we lose. Drop floor broken intact represents an egg drop at floor. If the egg breaks, we continue playing with strategy broken; otherwise, we continue with intact.
Simulating an egg-drop game is just a matter of performing a tree search. play target s returns true if and only if strategy s succeeds in guessing floor target.

Fixpoint play (target : nat) (s : strategy) : bool :=
  match s with
  | Guess floor =>
    beq_nat floor target
  | Drop floor broken intact =>
    play target (if leb target floor then broken
                 else intact)
  end.
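
As a quick illustration (an example of my own), consider a two-floor building and the strategy that drops an egg at floor 0, guessing 0 if it breaks and 1 otherwise. play finds both possible targets:

Example play_two_floors :
  play 0 (Drop 0 (Guess 0) (Guess 1)) = true /\
  play 1 (Drop 0 (Guess 0) (Guess 1)) = true.
Proof. split; reflexivity. Qed.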

We can also find how many eggs a strategy needs and how many drops are performed in the worst case. drops just computes the strategy tree height, whereas eggs computes a "skewed" height, where right branches do not add to the final value.

Fixpoint eggs (s : strategy) : nat :=
  match s with
  | Guess _ => 0
  | Drop _ broken intact => max (S (eggs broken)) (eggs intact)
  end.

Fixpoint drops (s : strategy) : nat :=
  match s with
  | Guess _ => 0
  | Drop _ broken intact => S (max (drops broken) (drops intact))
  end.

Finally, using these concepts, we can describe what a solution to our problem is. winning lower range s says that strategy s is able to find any target among the range floors starting at lower, while is_optimal range e d states that there is a winning strategy for range floors that uses at most e eggs and performs exactly d drops, and that no winning strategy with at most e eggs performs fewer than d drops.

Definition winning (lower range : nat) (s : strategy) : Prop :=
  forall target, lower <= target < lower + range ->
                 play target s = true.

Definition is_optimal (range e d : nat) : Prop :=
  exists s : strategy,
    eggs s <= e /\
    drops s = d /\
    winning 0 range s /\
    forall s', eggs s' <= e ->
               winning 0 range s' ->
               d <= drops s'.

A simple first strategy

Before trying to find an optimal solution, let us try to find one that works. A simple strategy is to perform linear search, starting at the bottom and going up one floor at a time. As soon as the egg breaks, we know that we have found our target.

Fixpoint linear (lower range : nat) : strategy :=
  match range with
  | 0 => Guess lower
  | S range' => Drop lower (Guess lower) (linear (S lower) range')
  end.

linear lower range works on a range of S range floors, using at most one egg. Because of this, it is not very efficient, performing range drops in the worst case.
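
For instance (a small check of my own), on a range of three floors starting at floor 0, the strategy drops eggs at floors 0 and 1, and guesses floor 2 if neither breaks:

Example linear_three_floors :
  linear 0 2 = Drop 0 (Guess 0) (Drop 1 (Guess 1) (Guess 2)).
Proof. reflexivity. Qed.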

Lemma linear_winning lower range :
  winning lower (S range) (linear lower range).

Lemma linear_eggs lower range :
  eggs (linear lower range) = min 1 range.

Lemma linear_drops lower range :
  drops (linear lower range) = range.

Improving our strategy

Now that we know a naive search strategy, we will try to find an optimal one. Intuitively, it seems that it should be possible to perform some form of binary search, starting at the middle of our floor range, and continuing recursively depending on the outcome of our drop.
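Concretely, a first attempt along these lines might drop an egg in the middle of the building and search each half linearly, since once the first egg breaks we can only scan the lower half floor by floor (the definition and numbers below are an illustrative sketch of mine):

Definition solution_take_1 : strategy :=
  Drop 49 (linear 0 49) (linear 50 49).

Lemma solution_take_1_winning :
  winning 0 100 solution_take_1.

Lemma solution_take_1_drops :
  drops solution_take_1 = 50.
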
Although much better than a pure linear strategy, this is still far from being optimal: if our egg doesn't break on our first drop, we will be using only one egg to search the entire upper range. If we allowed ourselves to break an egg there as well, presumably we would be able to solve the upper range in far fewer drops, which would in turn allow us to perform our first drop at a lower floor, e.g.

Definition solution_take_2 : strategy :=
  Drop 33 (linear 0 33)
          (Drop 66 (linear 34 32) (linear 67 32)).

Lemma solution_take_2_winning :
  winning 0 100 solution_take_2.

Lemma solution_take_2_eggs :
  eggs solution_take_2 = 2.

Lemma solution_take_2_drops :
  drops solution_take_2 = 34.

Our new solution performs much better, but there is still room for improvement. Once again, if our first two drops are below the target floor, we will be using only one egg to search through 33 floors, which could be done better if we had used both of them. This suggests that the optimal strategy should be some form of skewed binary search, where the upper range that is produced by an egg drop should use its available eggs in an optimal way.

Finding the optimum

How can we formalize the intuition that we just developed? The key insight is to reason by duality and, instead, ask "what is the largest range of floors we can scan using at most some number of eggs and drops?" When looking at the problem this way, it becomes clear that optimality has a recursive structure that is easy to describe: to find a floor using at most e eggs and d drops, we need to combine two optimal strategies: one using at most e-1 eggs and d-1 drops, for the case where our first drop causes the egg to break, and another using at most e eggs and d-1 drops, for the case where our egg does not break at first.
We can readily express this idea as code. optimal_range e d computes the maximal range size that can be solved using e eggs and d drops at most, while optimal lower e d computes a strategy for doing so starting from floor lower.

Fixpoint optimal_range_minus_1 (e d : nat) {struct d} : nat :=
  match d, e with
  | S d', S e' => S (optimal_range_minus_1 e' d' +
                     optimal_range_minus_1 (S e') d')
  | _, _ => 0
  end.

Definition optimal_range e d := S (optimal_range_minus_1 e d).

Fixpoint optimal (lower e d : nat) {struct d} : strategy :=
  match d, e with
  | S d', S e' =>
    let floor := lower + optimal_range_minus_1 e' d' in
    Drop floor
         (optimal lower e' d')
         (optimal (S floor) (S e') d')
  | _, _ => Guess lower
  end.

It is easy to show that optimal lower e d scans optimal_range e d floors, as well as that it uses the resources that we expect.

Lemma optimal_winning lower e d :
  winning lower (optimal_range e d) (optimal lower e d).

Lemma optimal_eggs lower e d :
  eggs (optimal lower e d) = min e d.

Lemma optimal_drops lower e d :
  drops (optimal lower e d) = min 1 e * d.

To actually show optimality, we need to show that optimal_range indeed computes what it is supposed to. We start by showing two inversion lemmas, relating the range that is scanned by a winning strategy to the range scanned by its sub-strategies.

Lemma winning_inv_guess lower range floor :
  winning lower range (Guess floor) -> range <= 1.

Lemma winning_inv_drop lower range floor broken intact :
  winning lower range (Drop floor broken intact) ->
  exists r1 r2 lower',
    range = r1 + r2 /\
    winning lower r1 broken /\
    winning lower' r2 intact.

Lemma optimal_range_correct :
  forall lower e d s range,
    eggs s <= e ->
    drops s <= d ->
    winning lower range s ->
    range <= optimal_range e d.

Combining this lemma with the ranges we had derived for linear and optimal, we can prove useful results about optimal_range.

Lemma optimal_range_monotone :
  forall e e' d d',
    e <= e' ->
    d <= d' ->
    optimal_range e d <= optimal_range e' d'.
Proof.
  intros e e' d d' He Hd.
  apply (optimal_range_correct 0 e' d' (optimal 0 e d));
    [ rewrite optimal_eggs; lia
    | rewrite optimal_drops; destruct e; simpl; lia
    | apply optimal_winning ].
Qed.

Lemma optimal_range_lower_bound :
  forall e d, d <= (optimal_range (S e) d).
Proof.
  intros e d.
  cut (S d <= optimal_range (S e) d); try lia.
  apply (optimal_range_correct 0 (S e) d (linear 0 d));
    [ rewrite linear_eggs
    | rewrite linear_drops
    | apply linear_winning ]; lia.
Qed.

Given that optimal_range is monotone, we can find what the optimal number of drops for a given range is by picking the smallest value of t such that range <= optimal_range e t. We formalize this idea by writing a generic function find_root that can find such values for any monotone function f, given a suitable initial guess.

Fixpoint find_root (f : nat -> nat) (target n : nat) : nat :=
  match n with
  | 0 => 0
  | S n' =>
    if leb target (f n') then
      find_root f target n'
    else
      S n'
  end.
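
For instance (an illustration of my own), taking f to be the squaring function, the smallest x such that 10 <= x * x is 4, and find_root finds it from the initial guess 5:

Example find_root_square : find_root (fun x => x * x) 10 5 = 4.
Proof. reflexivity. Qed.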

Lemma find_root_correct :
  forall f target n,
    (forall x y, x <= y -> f x <= f y) ->
    target <= f n ->
    let x := find_root f target n in
    target <= f x /\
    forall y, y < x -> f y < target.

By instantiating this theorem with optimal_range and applying the appropriate lemmas, we obtain our final result. The proof of optimality goes by contradiction. Let d = find_optimum (S e) range. If we find another strategy s such that eggs s <= S e and drops s < d, then optimal_range_correct tells us that range <= optimal_range (S e) (drops s), whereas the correctness of find_root gives optimal_range (S e) (drops s) < range, a contradiction.

Definition find_optimum e target :=
  find_root (optimal_range e) target target.

Lemma find_optimum_correct :
  forall e range,
    let d := find_optimum (S e) range in
    is_optimal range (S e) d.
Proof.
  intros e range d.
  assert (H : range <= optimal_range (S e) d /\
              forall d', d' < d -> optimal_range (S e) d' < range).
  (* By correctness of find_root *)
  destruct H as [H1 H2].
  exists (optimal 0 (S e) d).
  rewrite optimal_drops, optimal_eggs.
  repeat split; try lia; simpl; try lia.
  - intros x Hx.
    apply optimal_winning. lia.
  - intros s Hs WIN.
    destruct (le_lt_dec d (drops s)) as [LE | LT]; trivial.
    assert (Hd : drops s <= drops s) by lia.
    assert (He : eggs s <= S e) by lia.
    (* optimal_range < range *)
    pose proof (H2 _ LT).
    (* range <= optimal_range *)
    pose proof (optimal_range_correct _ _ _ _ _ He Hd WIN).
    lia.
Qed.

Using this result, we can find the answer for our original problem.

Lemma solution :
  is_optimal 100 2 14.
Proof. apply find_optimum_correct. Qed.
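
As a sanity check of this number (a computation of mine, not part of the original development), one can verify that with two eggs optimal_range 2 d unfolds to d * (d + 1) / 2 + 1, so 13 drops cover at most 92 floors while 14 drops cover 106:

Example optimal_range_2_13 : optimal_range 2 13 = 92.
Proof. reflexivity. Qed.

Example optimal_range_2_14 : optimal_range 2 14 = 106.
Proof. reflexivity. Qed.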