= Mathematical Extensions =

''From the Event-B wiki''
Currently the operators and basic predicates of the Event-B mathematical language supported by Rodin are fixed.
We propose to extend Rodin so that users can define new basic predicates, new operators and new algebraic types.
 
  
== Requirements ==
  
=== User Requirements ===
* Binary operators (prefix form, infix form or suffix form).
 
* Operators on boolean expressions.
 
* Unary operators, such as absolute values.
 
: Note that the pipe, which is already used for set comprehension, cannot be used to enter absolute values. (''In fact, since in the new design the pipe used in set comprehension is only syntactic sugar, the same symbol might be used for absolute value; this is to be confirmed with the prototyped parser. It may however be disallowed at first, until backtracking is implemented, because the lookahead makes assumptions on how the pipe symbol is used. -Mathieu'')
 
* Basic predicates (e.g., the symmetry of relations <math>sym(R) \defi R=R^{-1}</math>).
 
: Having a way to enter such predicates may be considered as syntactic sugar, because it is already possible to use sets (e.g., <math>R \in sym</math>, where <math>sym \defi \{R \mid R=R ^{-1}\}</math>) or functions (e.g., <math>sym(R) = \True</math>, where <math>sym \defi (\lambda R \qdot R \in A \rel B \mid \bool(R = R^{-1}))</math>).
 
* Quantified expressions (e.g., <math>\sum x \qdot P \mid y</math>, <math>\prod x \qdot P \mid y</math>, <math>~\min(S)</math>, <math>~\max(S)</math>).
 
* Types.
 
** Enumerated types.
 
** Scalar types.
 
  
=== User Input ===
The end-user shall provide the following information:
 
* keyboard input
 
* Lexicon and Syntax. <br/>More precisely, it includes the symbols, the form (prefix, infix, postfix), the grammar, associativity (left-associative or right associative), commutativity, priority, the mode (flattened or not), ...
 
* Pretty-print. <br/>Alternatively, the rendering may be determined from the notation parameters passed to the parser.
 
* Typing rules.
 
* Well-definedness.
 
  
=== Development Requirements ===
* Scalability.
 
  
== Towards a generic AST ==
  
The following AST parts are to become generic, or at least parameterised:
* [[Constrained_Dynamic_Lexer | Lexer]]
 
* [[Constrained Dynamic Parser | Parser]]
 
* Nodes ( Formula class hierarchy ): parameters needed for:
 
** Type Solve (type rule needed to synthesize the type)
 
** Type Check (type rule needed to verify constraints on children types)
 
** WD (WD predicate)
 
** PrettyPrint (tag image + notation (prefix, infix, postfix) + needs parentheses)
 
** Visit Formula (getting children + visitor callback mechanism)
 
** Rewrite Formula (associative formulæ have a specific flattening treatment)
 
* Types (Type class hierarchy): parameters needed for:
 
** Building the type expression (type rule needed)
 
** PrettyPrint (set operator image)
 
** getting Base / Source / Target type (type rule needed)
 
* Verification of preconditions (see for example <tt>AssociativeExpression.checkPreconditions</tt>)
 
  
=== Vocabulary ===
  
An '''extension''' is to be understood as a single additional operator definition.  
  
=== Tags ===
  
Every extension is associated with an integer tag, just like existing operators. Thus, questions arise about how to allocate new tags and how to deal with existing tags.<br />
The solution proposed here consists in keeping existing tags 'as is'. They are already defined and hard coded in the <tt>Formula</tt> class, so this choice is made with backward compatibility in mind.
 
  
Now concerning extension tags, we will first introduce a few hypotheses:
* Tags_Hyp1: tags are never persisted across sessions on a given platform
 
* Tags_Hyp2: tags are never referenced for sharing purposes across various platforms
 
In other words, cross-platform/session formula references are always made through their ''String'' representation. These assumptions, which were already made and verified for Rodin before extensions, lead us to restrict further considerations to the scope of a single session on a single platform.
 
  
The following definitions hold at a given instant <math>t</math> and for the whole platform.<br />
Let <math>\mathit{EXTENSION}_t</math> be the set of extensions supported by the platform at instant <math>t</math>;<br /> let <math>\mathit{tag}_t</math> denote the assignment of tags to extensions at instant <math>t</math> (<math>\mathit{tag}_t \in \mathit{EXTENSION}_t \pfun \intg</math>);<br /> let <math>\mathit{COMMON}</math> be the set of existing tags defined by the <tt>Formula</tt> class (<math>\mathit{COMMON} \sub \intg</math>).<br /> The following requirements emerge:
 
* Tags_Req1: <math>\forall t \qdot \mathit{tag}_t \in \mathit{EXTENSION}_t \tinj \intg</math>
 
* Tags_Req2: <math>\forall e, t_1,t_2 \qdot \mathit{tag}_{t_1}(e)=\mathit{tag}_{t_2}(e)</math> where <math>t_1, t_2</math> are two instants during a given session
 
* Tags_Req3: <math>\forall t \qdot \ran(\mathit{tag}_t) \cap \mathit{COMMON} = \empty</math>
 
  
The above-mentioned scope-restricting hypothesis can be reformulated as: <math>\mathit{tag}</math> need not be stable across sessions nor across platforms.
  
=== Formula Factory ===
  
The requirements about tags give rise to a need for centralising the <math>\mathit{tag}</math> relation in order to enforce tag uniqueness.
The Formula Factory appears to be a convenient and logical candidate for playing this role. Each time an extension is used to make a formula, the factory is called and it can check whether its associated tag exists, create it if needed, then return the new extended formula while maintaining tag consistency.
 
  
The factory can also provide API for requests about tags and extensions: getting the tag from an extension and conversely.
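As an illustration, a session-local registry enforcing Tags_Req1, Tags_Req2 and Tags_Req3 could be sketched as follows. This is a minimal sketch in Java; the class and method names are hypothetical, not the actual Rodin API, and the bound on common tags is an assumption.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: allocates extension tags above the range of the
// common tags hard-coded in the Formula class (Tags_Req3), returns the
// same tag for the same extension within a session (Tags_Req2), and
// never assigns one tag to two extensions (Tags_Req1).
public class TagRegistry {

    // Assumption: all common tags of the Formula class are below this bound.
    private static final int FIRST_EXTENSION_TAG = 1000;

    private final Map<String, Integer> tags = new HashMap<>();
    private int nextTag = FIRST_EXTENSION_TAG;

    /** Returns the session-stable tag of the given extension id. */
    public synchronized int getTag(String extensionId) {
        return tags.computeIfAbsent(extensionId, id -> nextTag++);
    }

    /** Reverse lookup: the extension id for a tag, or null if unknown. */
    public synchronized String getExtension(int tag) {
        for (Map.Entry<String, Integer> e : tags.entrySet()) {
            if (e.getValue() == tag) return e.getKey();
        }
        return null;
    }
}
```

A real implementation would live inside the formula factory, so that tag consistency is maintained at the single point where extended formulas are built.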
  
We also need additional methods to create extended formulæ. A first problem to address is: which type should these methods return?
We could define as many extended types as the common AST API does, namely <tt>ExtendedUnaryPredicate</tt>, <tt>ExtendedAssociativeExpression</tt>, and so on, but this would lead to a large number of new types to deal with (in visitors, filters, …), together with a constraint about which types extensions would be forced to fit into. It is thus preferable to have as few extended types as possible, but with as much parameterisation as possible. Considering that the two basic types <tt>Expression</tt> and <tt>Predicate</tt> have to be extensible, we add just two extended types: <tt>ExtendedExpression</tt> and <tt>ExtendedPredicate</tt>.
 
  
ExtendedExpression makeExtendedExpression( ? )
ExtendedPredicate makeExtendedPredicate( ? )
 
  
Second problem to address: which arguments should these methods take?
Other factory methods take the tag, a collection of children where applicable, and a source location. In order to discuss what can be passed as argument to make extended formulæ, we have to recall that the <tt>make…</tt> factory methods have various kinds of clients, namely:
 
* parser
 
* POG
 
* provers
 
(other factory clients use the parsing or identifier utility methods: SC modules, indexers, …)
 
  
Thus, the arguments should be convenient for clients, depending on which information they have at hand.
The source location does not seem to require any adaptation and can be passed as an argument in the same way. Concerning the tag, it depends on whether clients have a tag or an extension at hand. Both are intended to be easily retrieved from the factory. As a preliminary choice, we can go for the tag and adjust this decision when we know more about client convenience.
 
  
As for children, the problem is more about their types. We want to be able to handle as many children as needed, and of all possible types. Looking at existing formula children configurations, we can find:
* expressions with predicate children: <math>\mathit{bool}(P)</math>
 
* expressions with expression children: <math>E_1 + E_2</math>
 
* predicates with predicate children: <math>P_1 \limp P_2</math>
 
* predicates with expression children: <math>\mathit{partition}(S, E_1, E_2)</math>
 
* mixed operators: <math>\{x \qdot P(x) \mid E(x)\}</math>, but it is worth noting that the possibility of introducing bound variables in extended formulæ is not established yet.
 
Thus, for the sake of generality, children of both types should be supported for both extended predicates and extended expressions.
 
  
ExtendedExpression makeExtendedExpression(int tag, Expression[] expressions, Predicate[] predicates, SourceLocation location)
ExtendedPredicate makeExtendedPredicate(int tag, Expression[] expressions, Predicate[] predicates, SourceLocation location)
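To give an idea of how a client would use these methods, here is a self-contained sketch of <tt>makeExtendedPredicate</tt> building the predicate <math>\mathit{sym}(R)</math> from one expression child. All classes below are heavily simplified stand-ins for the real AST types; only the shape of the call follows the document, and the pretty-printed image is purely illustrative.

```java
import java.util.Arrays;
import java.util.stream.Collectors;

// Sketch of the proposed factory method, with simplified stand-in AST
// classes (the real Rodin types are richer; names here are illustrative).
public class ExtendedFactoryDemo {

    static class Expression { final String text; Expression(String t) { text = t; } }
    static class Predicate { final String text; Predicate(String t) { text = t; } }
    static class SourceLocation { }

    /** An extended predicate: a tag plus expression and predicate children. */
    static class ExtendedPredicate extends Predicate {
        final int tag;
        final Expression[] expressions;
        final Predicate[] predicates;
        ExtendedPredicate(int tag, Expression[] es, Predicate[] ps, String image) {
            super(image);
            this.tag = tag;
            this.expressions = es;
            this.predicates = ps;
        }
    }

    static ExtendedPredicate makeExtendedPredicate(int tag, Expression[] expressions,
            Predicate[] predicates, SourceLocation location) {
        // Illustrative pretty-print: operator image applied to the expression children.
        String image = "op" + tag + "("
                + Arrays.stream(expressions).map(e -> e.text).collect(Collectors.joining(", "))
                + ")";
        return new ExtendedPredicate(tag, expressions, predicates, image);
    }
}
```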
 
  
=== Defining Extensions ===
  
An extension is meant to contain all the information and behaviour required by:
* Keyboard
 
* (Extensible Fonts ?)
 
* Lexer
 
* Parser
 
* AST
 
* Static Checker
 
* Proof Obligation Generator
 
* Provers
 
  
==== Keyboard requirements ====
  
'''Kbd_req1''': an extension must provide an association combo/translation for every uncommon symbol involved in the notation.
  
==== Lexer requirements ====
  
'''Lex_req1''': an extension must provide an association lexeme/token for every uncommon symbol involved in the notation.
  
==== Parser requirements ====
  
According to the [[Constrained_Dynamic_Parser| Parser]] page, the following information is required by the parser in order to parse a formula containing a given extension.
  
* symbol compatibility
* group compatibility
 
* symbol precedence
 
* group precedence
 
* notation (see below)
 
  
==== AST requirements ====
  
The following hard-coded concepts need to be reified to support extensions in the AST. An extension instance will then provide this reified information to an extended formula class so that it can fulfil its API. It expresses the information missing for an <tt>ExtendedExpression</tt> (resp. <tt>ExtendedPredicate</tt>) to behave as an <tt>Expression</tt> (resp. <tt>Predicate</tt>). It can also be viewed as the parameterisation of the AST.
  
===== Notation =====
  
The notation defines how the formula will be printed. In this document, we use the following convention:
* <math>e_i</math> is the i-th child expression of the extended formula
 
* <math>p_i</math> is the i-th child predicate of the extended formula
 
  
''Example'': infix operator "<math>\lozenge</math>"
  <math>e_1 \lozenge e_2 \lozenge \ldots \lozenge e_n</math>
 
  
We define the following notation framework:
  
[[Image:notation_uml.png]]
  
On the "<math>\lozenge</math>" infix operator example, the iterable notation would successively return:
* a <tt>IFormulaChild</tt> with index 1
 
* the <tt>INotationSymbol</tt> "<math>\lozenge</math>"
 
* a <tt>IFormulaChild</tt> with index 2
 
* the <tt>INotationSymbol</tt> "<math>\lozenge</math>"
 
* &hellip;
 
* a <tt>IFormulaChild</tt> with index <math>n</math>
 
  
For the iteration not to run forever, the limit <math>n</math> needs to be known: this is the role of the <tt>mapsTo()</tt> method, which fixes the number of children; it is called when this number is known (i.e. for a particular formula instance).
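The iteration above can be sketched as follows. This is a minimal Java sketch: the <tt>IFormulaChild</tt> and <tt>INotationSymbol</tt> elements of the diagram are reduced to plain strings, and the class name is hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of notation iteration for an associative infix operator: once
// the child count is fixed by mapsTo(n), the notation yields the
// sequence child(1), symbol, child(2), symbol, ..., symbol, child(n).
public class InfixNotation {

    private final String symbol;

    public InfixNotation(String symbol) {
        this.symbol = symbol;
    }

    /** Fixes the number of children and returns the resulting element list. */
    public List<String> mapsTo(int n) {
        List<String> elements = new ArrayList<>();
        for (int i = 1; i <= n; i++) {
            if (i > 1) elements.add(symbol);  // symbol between consecutive children
            elements.add("e" + i);            // stands for IFormulaChild with index i
        }
        return elements;
    }
}
```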
'''Open question''': how to integrate bound identifier lists in the notation?
 
 
 
We may make a distinction between fixed-size notations (like n-ary operators for a given n) and variable-size notations (like associative infix operators).
 
While fixed-size notations would have no specific expressivity limitations (besides parser conflicts with other notations), only a finite set of pre-defined variable-size notation patterns will be proposed to the user.
 
 
 
The following features need to be implemented to support the notation framework:
 
* special meta variables that represent children (<math>e_i</math>, <math>p_i</math>)
 
* a notation parser that extracts a fixed-size notation pattern from a user-input String
 
 
 
===== Well-Definedness =====
 
 
 
WD predicates also are user-provided data.
 
 
 
''Example'':
 
<math>\mathit{D}(e_1) \land (\forall i \cdot i \in 1\mathit{..}(n-1) \Rightarrow (\mathit{D}(e_i) \Rightarrow \mathit{D}(e_{i+1})))</math>
 
 
 
In order to process WD predicates, we need to add the following features to the AST:
 
* the <math>\mathit{D}</math> operator
 
* expression variables (predicate variables already exist)
 
* special expression variables and predicate variables that denote a particular formula child (we need to refer to <math>e_1</math> and <math>e_i</math> in the above example)
 
* a <tt>parse()</tt> method that accepts these special meta variables and the <math>\mathit{D}</math> operator and returns a <tt>Predicate</tt> (a WD Predicate Pattern)
 
* a <tt>makeWDPredicate(aWDPredicatePattern, aPatternInstantiation)</tt> method that makes an actual WD predicate
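The last method above might work along the following lines. This is a deliberately naive, string-based sketch: the real implementation would operate on predicate ASTs with proper meta variables, and only the method name follows the document's proposal.

```java
import java.util.Map;

// Naive sketch of WD pattern instantiation: the special meta variables
// (e1, e2, ...) occurring in a WD predicate pattern are replaced by the
// actual children of a formula instance. Strings stand in for ASTs.
public class WDPatternDemo {

    /** Instantiates a WD predicate pattern with actual child formulas. */
    public static String makeWDPredicate(String pattern, Map<String, String> instantiation) {
        String result = pattern;
        for (Map.Entry<String, String> entry : instantiation.entrySet()) {
            result = result.replace(entry.getKey(), entry.getValue());
        }
        return result;
    }
}
```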
 
 
 
===== Type Check =====
 
 
 
An extension shall give a type rule, which consists in:
 
* type check predicates (addressed in this very section)
 
* a resulting type expression (only for expressions)
 
 
 
''Example'':
 
<math>(\forall i \cdot \mathit{type}(e_i) = \pow(\alpha)) \land (\forall i,j \cdot \mathit{type}(e_i)=\mathit{type}(e_j))</math>
 
 
 
Type checking can be reified provided the following new AST features:
 
* the <math>\mathit{type}</math> operator
 
* type variables (<math>\alpha</math>)
 
* the above-mentioned expression variables and predicate variables
 
* a <tt>parse()</tt> method that accepts these special meta variables and the <math>\mathit{type}</math> operator and returns a <tt>Predicate</tt> (a Type Predicate Pattern)
 
* a <tt>makeTypePredicate(aTypePredicatePattern, aPatternInstantiation)</tt> method that makes an actual Type predicate
 
 
 
===== Type Solve =====
 
 
 
This section addresses type synthesis for extended expressions (the resulting-type part of a type rule). The resulting type is given as a type expression pattern, so that the actual type can be computed from the children.
 
 
 
''Example'':
 
  <math>\pow(\mathit{type}(e_1))</math>
 
 
 
In addition to the requirements for Type Check, the following features are needed:
 
* a <tt>parse()</tt> method that accepts special meta variables and the <math>\mathit{type}</math> operator and returns an <tt>Expression</tt> (a Type Expression Pattern)
 
* a <tt>makeTypeExpression(aTypeExpressionPattern, aPatternInstantiation)</tt> method that makes an actual Type expression
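Putting Type Check and Type Solve together, here is a sketch of the type rule of a hypothetical associative set operator: every child must bear the same powerset type <math>\pow(\alpha)</math>, which is also the resulting type. Types are modelled as plain strings, and the class name is invented for the example.

```java
import java.util.List;

// Sketch of a type rule for a hypothetical union-like operator:
// type check requires every child to bear the same powerset type
// POW(alpha); the synthesized type is then that common type.
public class UnionLikeTypeRule {

    /** Returns the synthesized type, or null if the type check fails. */
    public static String typeSolve(List<String> childTypes) {
        if (childTypes.isEmpty()) return null;
        String first = childTypes.get(0);
        // type(e_1) must be a powerset type POW(alpha)
        if (!first.startsWith("POW(") || !first.endsWith(")")) return null;
        // forall i, j . type(e_i) = type(e_j)
        for (String t : childTypes) {
            if (!t.equals(first)) return null;
        }
        return first;
    }
}
```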
 
 
 
==== Static Checker requirements ====
 
{{TODO}}
 
==== Proof Obligation Generator requirements ====
 
{{TODO}}
 
==== Provers requirements ====
 
{{TODO}}
 
 
 
=== Extension compatibility issues ===
 
{{TODO}}
 
 
 
== User Input Summarization ==
 
 
 
Identified required data entail the following user input:
 
 
 
{{TODO}}
 
 
 
== Impact on other tools ==
 
 
 
Impacted plug-ins (use a factory to build formulæ):
 
* <tt>org.eventb.core</tt>
 
: In particular, the static checker and proof obligation generator are impacted.
 
* <tt>org.eventb.core.seqprover</tt>
 
* <tt>org.eventb.pp</tt>
 
* <tt>org.eventb.pptrans</tt>
 
* <tt>org.eventb.ui</tt>
 
 
 
== Identified Problems ==
 
The parser shall enforce verifications to detect the following situations:
 
* Two mathematical extensions are not compatible (the extensions define symbols with the same name but with a different semantics).
 
* A mathematical extension is added to a model and there is a conflict between a symbol and an identifier.
 
* An identifier which conflicts with a symbol of a visible mathematical extension is added to a model.
 
 
 
Beyond that, the following situations are problematic:
 
* A formula has been written with a given parser configuration and is read with another parser configuration.
 
: As a consequence, it appears as necessary to remember the parser configuration.
 
: The static checker will then have a way to invalidate the sources of conflicts (e.g., priority of operators, etc.).
 
:: ''The static checker will then have a way to invalidate the formula upon detecting a conflict (name clash, associativity change, semantic change, …).'' [[User:Mathieu|mathieu]]
 
 
 
* A proof may free a quantified expression which is in conflict with a mathematical extension.
 
: SOLUTION #1: Renaming the conflicting identifiers in proofs?
 
 
 
== Open Questions ==
 
 
 
=== New types ===
 
Which option should we prefer for new types?
 
* OPTION #1: Transparent mode.
 
:In transparent mode, values always refer to their base type. As a consequence, type conversion is implicitly supported (weak typing).
 
:For example, it is possible to define the <tt>DISTANCE</tt> and <tt>SPEED</tt> types, which are both derived from the <math>\intg</math> base type, and to multiply a value of the former type with a value of the latter type.
 
 
 
* OPTION #2: Opaque mode.
 
:In opaque mode, values never refer to their base type. As a consequence, values of one type cannot be converted to another type (strong typing).
 
:Thus, the above multiplication is not allowed.
 
:This approach has at least two advantages:
 
:* Stronger type checking.
 
:* Better prover performance.
 
:It also has some disadvantages:
 
:* the need for ''extractors'' to convert back to base types.

:* the need for extra circuitry to allow things like <math>x:=d*2</math> where <math>x, d</math> are of type <tt>DISTANCE</tt>.
 
 
 
* OPTION #3: Mixed mode.
 
:In mixed mode, the transparent mode is applied to scalar types and the opaque mode is applied to other types.
 
 
 
=== Scope of the mathematical extensions ===
 
* OPTION #1: Project scope.
 
:The mathematical extensions are implicitly visible to all components of the project that has imported them.
 
* OPTION #2: Component scope.
 
:The mathematical extensions are only visible to the components that have explicitly imported them. However, note that this visibility is propagated through the hierarchy of contexts and machines (<tt>EXTENDS</tt>, <tt>SEES</tt> and <tt>REFINES</tt> clauses).
 
:An issue has been identified. Suppose that the <tt>ext1</tt> extension is visible to component <tt>C1</tt>, that <tt>ext2</tt> is visible to component <tt>C2</tt>, and that there is no compatibility issue between <tt>ext1</tt> and <tt>ext2</tt>. It is not excluded that an identifier declared in <tt>C1</tt> conflicts with a symbol in <tt>ext2</tt>. As a consequence, a global verification is required when adding a new mathematical extension.
 
 
 
== Bibliography ==
 
* J.R. Abrial, M.Butler, M.Schmalz, S.Hallerstede, L.Voisin, [http://deploy-eprints.ecs.soton.ac.uk/80 ''Proposals for Mathematical Extensions for Event-B''], 2009.
 
:This proposal consists in considering three kinds of extension:
 
# Extensions of set-theoretic expressions or predicates: examples of this kind include the transitive closure of relations and various ordered relations.
 
# Extensions of the library of theorems for predicates and operators.
 
# Extensions of the Set Theory itself through the definition of algebraic types such as  lists or ordered trees using new set constructors.
 
 
 
[[Category:Design proposal]]
 

= Proof Manager =

''Revision as of 12:51, 11 September 2008''

The Proof Manager is responsible for constructing proofs and maintaining existing proofs associated with proof obligations.

== Overview ==

Proof obligations are generated by the proof obligation generator and have the form of ''[[Proof Manager#Sequents|sequents]]''.

Sequents are proved using ''[[Proof Manager#Proof Rules|proof rules]]''.

The Proof Manager architecture is separated into two parts: an ''extensible'' part and a ''static'' part. The extensible part is responsible for generating individual proof rules. The static part is responsible for putting proof rules together to construct and manage proofs. Components that generate valid proof rules are called ''[[Proof Manager#Reasoners|reasoners]]''.

The basic reasoning capabilities of the Proof Manager can be extended by adding new reasoners. A reasoner may implement a decision procedure for automated proof, or a derived rule schema for interactive proof.

By applying the proof rules generated by different reasoners, the Proof Manager builds a (partial) proof for a proof obligation by constructing ''[[Proof Manager#Proof Trees|proof trees]]''.

In order to encapsulate frequently used proof construction and manipulation steps, the Proof Manager provides the concept of ''[[Proof Manager#Tactics|tactics]]'', which provide high-level strategic proof manipulations. Adding new tactics is the second way of extending the Proof Manager.

== Sequents ==

A sequent stands for something we want to prove.

Sequents are of the following form:

<math>H \vdash G</math>

where '''H''' is the set of hypotheses (predicates) and '''G''' is the goal (a predicate in the mathematical language).

The meaning of the above sequent is: under the hypotheses '''H''', prove the goal '''G'''.

== Proof Rules ==

In its pure mathematical form, a proof rule is a tool for performing formal proofs, denoted by:

{{InfRule||<math>\frac{\quad A\quad}{C}</math>}}

where '''A''' is a (possibly empty) list of sequents, the antecedents of the proof rule, and '''C''' is a sequent, the consequent of the rule. The proof rule is interpreted as follows: the proofs of each sequent in '''A''' together give a proof of the sequent '''C'''.

=== Representation ===

In Rodin, the representation of proof rules is more structured, not only to reduce the space required to store a rule but, more importantly, to support proof reuse.

A rule in Rodin contains the following:

* '''used goal''': a used goal predicate.
* '''used hypotheses''': the set of used hypotheses.
* '''antecedents''': a list of antecedents (explained below).
* '''reasoner''': the reasoner used to generate this proof rule (see [[Proof Manager#Reasoners|reasoners]]).
* '''reasoner input''': the input given to the reasoner to generate this proof rule.

Each antecedent of the proof rule contains the following information:

* '''new goal''': a new goal predicate.
* '''added hypotheses''': the set of added hypotheses.

With this representation, a proof rule in Rodin corresponds to the following proof schema:


<math>
  \begin{array}{c}
    H, H_u, H_{A_1} \vdash G_{A_1} ~~~\ldots~~~ H, H_u, H_{A_n} \vdash G_{A_n} \\
    \hline
    H, H_u \vdash G_u
  \end{array}
</math>

Where:

* <math>H_u</math> is the set of used hypotheses;
* <math>G_u</math> is the used goal;
* <math>H_{A_i}</math> is the set of added hypotheses corresponding to the i-th antecedent;
* <math>G_{A_i}</math> is the new goal corresponding to the i-th antecedent;
* <math>H</math> is a meta-variable that can be instantiated.

=== Applying Proof Rules ===

Given a proof rule of the form above, the following describes how to apply it to an input ''sequent''. The result of this process is a list of output sequents if the application succeeds, or ''null'' otherwise.

* The rule is applicable only if the goal of the input sequent is exactly the ''used goal'' and every ''used hypothesis'' is contained in the set of hypotheses of the input sequent.
* If the rule is applicable, the antecedent sequents are returned. The goal of each antecedent sequent is the corresponding ''new goal''. The hypotheses of each antecedent sequent are the union of the old hypotheses and the ''added hypotheses'' of the corresponding antecedent.
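The application process described above can be sketched as follows. This is a self-contained Java sketch: predicates are modelled as plain strings, whereas the real Proof Manager operates on typed predicate ASTs.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of proof rule application: check applicability against the
// used goal and used hypotheses, then return the antecedent sequents.
public class RuleApplication {

    record Sequent(Set<String> hypotheses, String goal) {}
    record Antecedent(String newGoal, Set<String> addedHypotheses) {}
    record Rule(String usedGoal, Set<String> usedHypotheses, List<Antecedent> antecedents) {}

    /** Returns the antecedent sequents, or null if the rule is not applicable. */
    static List<Sequent> apply(Rule rule, Sequent sequent) {
        // Applicable only if the goal matches the used goal exactly
        // and every used hypothesis is present in the input sequent.
        if (!sequent.goal().equals(rule.usedGoal())) return null;
        if (!sequent.hypotheses().containsAll(rule.usedHypotheses())) return null;

        List<Sequent> result = new ArrayList<>();
        for (Antecedent a : rule.antecedents()) {
            // New hypotheses = old hypotheses ∪ added hypotheses.
            Set<String> hyps = new HashSet<>(sequent.hypotheses());
            hyps.addAll(a.addedHypotheses());
            result.add(new Sequent(hyps, a.newGoal()));
        }
        return result;
    }
}
```

For instance, a conjunction-splitting rule with used goal "P & Q" and antecedents "P" and "Q" turns one input sequent into two output sequents that keep all original hypotheses.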

== Reasoners ==

== Proof Trees ==

== Tactics ==

[[Category:Developer documentation]]
[[Category:Rodin Platform]]