These inference rules may seem limited, and you may have some more general
ones in mind.
Soon, we'll see additional inference rules in the context of first-order logic,
which will give us a richer set of proofs.
In general, a hard problem is finding a language that
is expressive enough to describe the domain succinctly,
yet limited enough to automate reasoning over.
This is a very practical issue in type checking and other forms of program analysis.
While it can be easy to find some program errors automatically, it
is very difficult or impossible to *guarantee* that you
can find *all* errors (of some specific kind,
like type errors).
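For instance, a static checker can reject some misuses of a function just from its declared types, while other errors depend on runtime values and escape a purely static analysis. A sketch in Python (the function names here are hypothetical illustrations; the claim about what a checker like mypy flags is approximate, not a guarantee about any particular tool):

```python
import json

def double(x: int) -> int:
    return x * 2

# A static type checker (e.g., mypy) can flag a call like
#   double("hello")
# without running the program, because the argument type is wrong.

def double_field(raw: str) -> int:
    # The shape of `data` is only known at runtime, so whether the
    # call below type-checks dynamically depends on the input string:
    # fine for '{"value": 3}', a runtime error for '{"value": "3"}'.
    data = json.loads(raw)
    return double(data["value"])

print(double_field('{"value": 3}'))
```

Running this prints `6`, but no static analysis can decide, for every possible input string, whether `double_field` will be applied to well-typed data.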

One thing we would like to eliminate is the need (at least in principle) to restate structurally identical proofs, as we discussed for commutativity. We will be able to build the idea of generalizing such proofs directly into the logic and its inference rules.
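As a preview of what quantification buys us, here is a sketch in Lean (the lemma name `and_comm'` and the example propositions are our own): the commutativity pattern is proved once, for arbitrary propositions, using only the ∧-introduction and ∧-elimination rules, and then every structurally identical instance becomes a one-line instantiation.

```lean
-- Proved once, for arbitrary propositions p and q,
-- using only ∧-introduction (And.intro) and ∧-elimination (.left, .right).
theorem and_comm' (p q : Prop) (h : p ∧ q) : q ∧ p :=
  And.intro h.right h.left

-- No need to restate the proof for each pair of formulas:
example (a b c : Prop) (h : (a ∨ b) ∧ c) : c ∧ (a ∨ b) :=
  and_comm' (a ∨ b) c h
```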

Despite the desire for more flexible reasoning, we'd also like to consider whether we have more inference rules than are necessary. Are some of them redundant? This parallels the software-engineering rule that we should have a single point of control, or the related idea that a library should provide exactly one way of doing each thing. In general, this is not easy to ensure. We have shown that some potential additional inference rules, like commutativity and associativity, weren't necessary. But we haven't shown our core inference rules to be minimal. What do you think? (See the homework exercises on the redundancy of not-elimination, not-introduction, and case-elimination.)