Tuesday, 14 December 2010

Blog Dump 7: Reasoning & Proof

Code is formal mathematical notation, defined only by the axioms specified by the language. Compilers and interpreters are assumed to make good on these axioms based on the axioms of their own implementation. At some point the axioms become the specification of a machine, and the machine is assumed to make good on those axioms based either on the axioms of further languages (if we're in an emulator) or on the laws of Physics (if we've reached physical hardware). Any failure of these axioms results in undefined behaviour of the code. It is therefore justified to formally study programs against very high-level axioms by assuming that they're true; we can check them later, of course, by studying the implementations. Thus we don't need, even though we can have if we want, a full proof of a program's execution right down to the metal. We just need to prove what it implements given the axioms of its language. Given such a high-level axiomatic specification of a language, it is conceptually straightforward (if computationally expensive, and undecidable in general) to compare specifications and programs in any relevant language.
The issue becomes what language to use for specifications and axioms. There is no point inventing yet another new language here, and it would be best to use a declarative one. Standard mathematical notation, such as propositional logic, Hoare logic, the Lambda calculus, etc., would suffice.
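
As a small illustration of what this buys us (my own sketch, not from the original post), a Hoare-logic proof of a one-line program only ever appeals to the language's axioms; compilers and hardware are trusted to make good on them:

    \[ \{P[E/x]\}\; x := E \;\{P\} \qquad \text{(assignment axiom, assumed true of the language)} \]
    \[ \{x + 1 = n + 1\}\; x := x + 1 \;\{x = n + 1\} \qquad \text{(instance with } P \equiv (x = n + 1),\ E \equiv x + 1\text{)} \]

Since \(x = n\) is equivalent to \(x + 1 = n + 1\), the rule of consequence gives \(\{x = n\}\; x := x + 1 \;\{x = n + 1\}\): the program provably increments x, with no appeal to anything below the language's axioms.
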
An important issue is the structure of our types, as there are two ways we would like to exploit it. We would like to be able to hack our type system when convenient, which requires knowledge of type internals: for example, if we know that C's integers are stored as sequences of 32 bits then we can get at the individual bits via bit masks, bit-wise operations and even their decimal equivalents. If instead we treat types only as the concepts they embody then we can use more high-level reasoning like "this is an integer", and therefore apply whatever we know about integers (including swapping the implementation for a different integer representation, which breaks any such hacks).
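
As a rough sketch of the two views (Python used purely for illustration; the function names are made up), the first pair of definitions only makes sense if we know the bit-level representation, while the last relies only on the concept "integer" and survives a change of representation:

    # Representation-level view: assumes the value behaves like a machine word,
    # so individual bits can be picked out with masks.
    def low_byte(x):
        return x & 0xFF            # only meaningful if we know the bit layout

    def is_even_bitwise(x):
        return (x & 1) == 0        # inspects the least-significant bit directly

    # Concept-level view: uses only facts true of integers as a concept,
    # so it keeps working if the representation changes (bignums, 64-bit words, ...).
    def is_even_abstract(x):
        return x % 2 == 0

    print(low_byte(0x1234))                           # 52, i.e. 0x34
    print(is_even_bitwise(10), is_even_abstract(10))  # True True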

Knowledge should be completely bottom-up: specify as many facts (triples) as you can, and at some point higher-order patterns may fall out. If they don't, then we write more triples. The reasoning approach should be a 'destructive' one, based on filtering. For example, a 'constructive' approach would construct a path for inference, like 'trees are plants and plants are living, therefore trees are living'; whilst a 'destructive' approach would describe the undefined concept of 'tree' by saying 'X is a tree, X is a plant, X is alive'. This is less reliable than the 'constructive' approach (which can also be used), but allows us to learn the way a human does: a child sees a red ball and is told "red". As far as the child knows, red balls are "red", but the word "red" could describe the ball's shape, its size, its colour, etc. These possibilities are narrowed down (filtered, destroyed) by describing a red car as "red", a red traffic light as "red", and so on. With enough examples of "red" there are enough filters to narrow down what is meant: a red ball and a red traffic light are both round, so perhaps that is what "red" means? No, because a red car is not round, thus "red" does not describe the shape. In the tree example we can say 'trees are plants', which narrows down the scope of what trees might be, and we can say 'trees are alive' to narrow it further.
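
A minimal sketch of this filtering, under the assumption that each example is just a bag of property triples (the data below is contrived to mirror the "red" story):

    # 'Destructive' learning of what "red" denotes: start with everything true of
    # the first example, then filter out anything not shared by later examples.
    examples_of_red = [
        {("shape", "round"), ("size", "small"), ("colour", "red")},  # a red ball
        {("shape", "boxy"),  ("size", "large"), ("colour", "red")},  # a red car
        {("shape", "round"), ("size", "large"), ("colour", "red")},  # a red traffic light
    ]

    candidates = set(examples_of_red[0])
    for example in examples_of_red[1:]:
        candidates &= example      # destroy any hypothesis the new example contradicts

    print(candidates)              # {('colour', 'red')} -- the surviving meaning of "red"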

The advantage is that we get an implied generality: we don't have to choose whether a particular object is one colour or another; we can say it's both. We can say that white wine is "white", which it is, without having to worry about breaking an ontology by 'misusing' the adjective 'white'. The lack of focus lets the system 'keep an open mind'. It can clarify things and verify its own inferences by asking questions, in order of the impact of the answer (ie. how much difference a yes or a no would make to the emerging ontology): 'Is a pineapple a tree?', 'Are plants alive?', etc.
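
One way to order such questions (a hypothetical sketch; the yes/no partitions below are invented purely to show the mechanism) is to prefer the question that is guaranteed to eliminate the most candidates whichever answer comes back:

    # Candidate facts the system is currently unsure about.
    candidates = {"tree isa plant", "tree isa artefact", "plant isa living", "plant isa mineral"}

    # For each question, the candidates that would survive a "yes" answer
    # (a "no" answer keeps the complement instead).
    questions = {
        "Is a pineapple a tree?": {"tree isa plant", "plant isa living"},
        "Are plants alive?":      {"plant isa living"},
    }

    def impact(kept_on_yes):
        yes = len(kept_on_yes & candidates)
        no = len(candidates - kept_on_yes)
        return min(yes, no)        # candidates eliminated in the worst case

    best = max(questions, key=lambda q: impact(questions[q]))
    print(best)                    # the question that differentiates the ontology most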

If the system is going wrong for some reason, eg. pursuing a string of ridiculous inferences, and its own differentiation checker isn't stopping it, then the fix is simple: more triples need to be given explicitly.

Reusing URIs is a good idea, so we can have the system search out existing ontologies and instantiations, then ask things like: "are trees plants in the same way that cars are vehicles?". In that case we've said 'trees are plants', and it has found an ontology which says 'cars are vehicles' and is wondering if the relationship is the same, ie. if it should reuse the URI of the predicate. In a graphical interface we could offer a multitude of possibly pre-existing definitions, along with tick boxes to indicate which ones are similar.
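
A rough sketch of that prompt (all URIs, predicates and the matching strategy here are made up; a real version would query published ontologies):

    # Having been told 'trees are plants' with no predicate chosen yet, offer the
    # predicates of already-known triples as candidates for reuse, phrased as
    # "...in the same way that...?" questions with a tick box each.
    known_triples = [
        ("ex:Car", "ex:isKindOf", "ex:Vehicle"),   # found in a pre-existing ontology
        ("ex:Oak", "ex:growsIn",  "ex:Forest"),
    ]

    new_fact = ("ex:Tree", None, "ex:Plant")       # 'trees are plants', predicate undecided

    for subject, predicate, obj in known_triples:
        print(f"Are {new_fact[0]} {new_fact[2]}s in the same way that "
              f"{subject}s are {obj}s (predicate {predicate})?  [ ] similar")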
