Some time ago there was a discussion about human intelligence and what it is. Someone got up and said that intelligence was the ability to see connections, possibly complicated connections, between different things. If we are thinking about AI we would say: heck, this involves graph theory. Suppose you start writing. LSA is performed as you write, and after a while the paper clip pops up to tell you what else is available on the Internet. Finally the paper clip asks whether you wish to publish in a learned journal. You say you do. You are then given a list of journals and you select one. The system knows the format (journals of course differ in where they want the references and so on) and puts the article into the right form. You are then given a list of references for final approval. If you have actually copied anything you are compelled to include the reference; other references are optional. Publication in a foreign language would also be implicit. We first wanted to target precision search. Then we said that chatting was essentially precision search. Now we say that we can find related documents by looking at the text of a document.
People have tried to get a computer to write programs. This is, I feel, the wrong question to ask. There is a lot of software out there; getting a functional program is in general simply a case of stringing different pieces of software together.
Suppose we have a number of agents or processes. These processes can be completely general: they can be programs, or they can be processes that produce something. A program module has a set of inputs and a set of outputs, and the outputs may form inputs for new processes. We may view the system as a pool of locks and keys: each process has locks (inputs) and keys (outputs). I have mentioned in previous posts that the input of a module may come in a number of forms. Of course, if we construct small programs to convert from one form of input to another, that problem is thereby solved. As I have repeatedly stated, the problem looks like a classical graph-theoretical problem.
What we are doing is stringing the output of one program onto the input of another. Suppose we now have a database which describes:
1) Inputs (key field)
2) Outputs (key field)
3) Methods (key field)
Using the database we can construct paths.
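As a sketch of what "constructing paths" might look like, here is a breadth-first search over a toy version of such a database. The module names and their input/output sets are entirely hypothetical; the point is only that chaining keys to locks is an ordinary graph-search problem.

```python
from collections import deque

# Hypothetical module database: each method records the inputs it
# needs (its "locks") and the outputs it produces (its "keys").
MODULES = {
    "decode_frames": {"inputs": {"video"}, "outputs": {"images"}},
    "detect_edges":  {"inputs": {"images"}, "outputs": {"edge_maps"}},
    "classify":      {"inputs": {"edge_maps"}, "outputs": {"labels"}},
}

def find_path(have, want):
    """Breadth-first search for a chain of modules, starting from the
    data we have, where each module's locks are opened by the keys
    accumulated so far."""
    queue = deque([(frozenset(have), [])])
    seen = {frozenset(have)}
    while queue:
        available, path = queue.popleft()
        if want <= available:
            return path
        for name, mod in MODULES.items():
            if name not in path and mod["inputs"] <= available:
                nxt = available | mod["outputs"]
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [name]))
    return None  # no chain of modules reaches the goal

print(find_path({"video"}, {"labels"}))
# ['decode_frames', 'detect_edges', 'classify']
```

The database's key fields are exactly what the search needs: outputs are looked up against inputs, and methods label the edges of the resulting graph.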
http://www.numenta.com/Numenta_HTM_Concepts.pdf
http://www.newswise.com/articles/view/527140/
Let us take an example. The references above tell us how to recognise complex objects such as cars from a basic set of moving images provided by a camera. What they describe are methods by means of which we recognise more and more complicated elements. How do we assess merit? As I have said before, if we took a large enough number of stored images we could correlate with any possible picture of a car. We must therefore have an index of fitness which includes computability as well as accuracy.
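A minimal sketch of such a fitness index, with an entirely illustrative weighting (the trade-off parameter and scores below are assumptions, not taken from the references):

```python
def fitness(accuracy, compute_cost, alpha=0.5):
    """Toy merit index trading recognition accuracy against
    computational cost. A recogniser that correlates against every
    stored image scores poorly even if it is nearly perfect, because
    its compute cost overwhelms its accuracy."""
    return alpha * accuracy - (1 - alpha) * compute_cost

# A cheap, fairly accurate model beats an exhaustive template matcher:
print(fitness(accuracy=0.90, compute_cost=0.10))  # ~0.40
print(fitness(accuracy=0.99, compute_cost=0.95))  # ~0.02
```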
How do we find out which outputs are compatible with which inputs? If we have descriptions of the arguments in (say) natural language we can get an idea from that. Otherwise we can use heuristics. Normally when two programs are strung together we can soon tell whether or not they are compatible. The natural language descriptions of variables help us, and if we adopt LSA in any form we can describe our variables as a set of vectors.
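If variables are described as vectors, compatibility can be scored by cosine similarity, the standard comparison in LSA. The term vectors below are made-up stand-ins for real LSA vectors:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse term vectors (dicts),
    standing in for LSA vectors of variable descriptions."""
    dot = sum(u[k] * v.get(k, 0.0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical descriptions of one module's output and two candidate inputs:
out_desc = {"image": 1.0, "grayscale": 0.5}
in_a = {"image": 1.0, "grayscale": 0.4}   # plausibly compatible
in_b = {"price": 1.0, "currency": 0.8}    # plausibly not

print(cosine(out_desc, in_a) > cosine(out_desc, in_b))  # True
```

A high score is only a heuristic hint that two modules fit; as the text says, actually stringing them together soon settles the question.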
A NEW LEARNING PARADIGM
Suppose we do a calculation, argument or whatever by hand and put down our intermediate results. We might (as MIT have done) produce a set of intermediate results which are biologically based. We then search the modules on the Web for a match with all the intermediate stages. If we find one, we may assume that we have a sequence that reproduces our reasoning. One could say, in a sense, that the example has been "learnt".
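The matching step can be sketched directly: for each consecutive pair of hand-worked intermediate results, look for a module that maps the first onto the second. The stage names and module pool here are hypothetical.

```python
# Hand-worked intermediate stages of some calculation (hypothetical):
stages = ["raw", "normalised", "filtered", "classified"]

# A pool of Web modules, each mapping one representation to another:
modules = [
    ("normalise", "raw", "normalised"),
    ("smooth", "normalised", "filtered"),
    ("label", "filtered", "classified"),
    ("compress", "raw", "archive"),
]

def learn_sequence(stages, modules):
    """For each consecutive pair of intermediate results, pick a module
    that maps the first onto the second. If every stage is matched, the
    worked example has in effect been 'learnt' as a module chain."""
    chain = []
    for src, dst in zip(stages, stages[1:]):
        step = next((name for name, i, o in modules if i == src and o == dst), None)
        if step is None:
            return None  # some reasoning step has no matching module
        chain.append(step)
    return chain

print(learn_sequence(stages, modules))
# ['normalise', 'smooth', 'label']
```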
Our problem can now be expressed in terms of graph theory. We now have something with a striking similarity to the travelling salesperson problem (not "salesman"; we have to be PC). We solve the travelling salesperson problem by exchanging cities; this problem too is soluble by inserting and extracting agents.
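The "exchanging cities" move is the classic 2-opt local search. Here is a minimal sketch on four made-up points; the analogue for our problem would exchange agents in and out of a chain rather than cities in a tour.

```python
import math

def tour_length(tour, pts):
    """Total length of a closed tour over the given points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Repeatedly reverse a segment of the tour whenever doing so
    shortens it: the standard 'exchange' improvement move for the
    travelling salesperson problem."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour) + 1):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour

# Four corners of a unit square; the starting tour crosses itself.
pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
best = two_opt([0, 2, 1, 3], pts)
print(best, tour_length(best, pts))  # an uncrossed tour of length 4.0
```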
A VON NEUMANN MACHINE
Von Neumann presented this in the simplest form. He postulated a machine that puts a fuse into a robot without a fuse, thereby producing another machine.
R + F + M -> 2M, where F is a fuse, R a robot without a fuse, and M the machine. This is a trivial graph, although trivial examples often illustrate important general principles. In a Von Neumann machine the number of net inputs, that is to say inputs not matched by outputs or raw materials, is zero. Von Neumann postulated robots and fuses as the raw materials for his simple graph. If we have furnaces on the Moon or on asteroids we have a meaty problem, very different from the trivial graph above. The interesting fact is that, given a set of processes (existing on the Web), AI in the shape of a travelling-salesperson type of algorithm could actually help us achieve a minimal seed.
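The "net inputs equal zero" condition is easy to check mechanically. Below is a sketch with an invented process graph (the process names and raw materials are assumptions for illustration): a candidate seed closes if every input is either produced by some process or available as a raw material.

```python
# Hypothetical self-replication graph: each process lists what it
# consumes and what it produces.
processes = [
    {"consumes": {"robot_without_fuse", "fuse"}, "produces": {"machine"}},
    {"consumes": {"ore"}, "produces": {"fuse"}},
    {"consumes": {"ore"}, "produces": {"robot_without_fuse"}},
]
raw_materials = {"ore"}

def net_inputs(processes, raw_materials):
    """Inputs that no process produces and that are not raw materials.
    For a self-sustaining Von Neumann seed this set must be empty."""
    produced = set().union(*(p["produces"] for p in processes))
    consumed = set().union(*(p["consumes"] for p in processes))
    return consumed - produced - raw_materials

print(net_inputs(processes, raw_materials))  # set() -- the seed closes
```

Searching for the smallest set of processes satisfying this condition is where the travelling-salesperson style of algorithm would come in.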
STATISTICS AND RELATIONSHIPS
Suppose we have on the Web a set of entities, each with a set of properties, including Erdos numbers. This is essentially what AI has traditionally tried to do with Horn clauses etc. The issue here is that AI has traditionally thought in terms of small systems. In reality we have a Web, and with a Web we have statistical properties. If we want to carry out statistical research the data will all be there. We can represent "barco" (Spanish for ship) simply as an entity that has consequences in LSA and which alters the list position of "cerradura" and "éclusia" (a door lock and a canal lock), or we can give it properties.
Suppose I ask for the total tonnage of the US Navy. Assuming that this information is not already in some digest of statistics, I must go through all objects of type "barco", see who owns them, and sum. That of course assumes that the system understands the question asked. It is immediately apparent that the system can be used to derive any sort of statistical relationship desired. What we basically need for each "noun" is a set of variables plus relationships with other objects and the nature of those relationships. This will enable the derivation of Erdos numbers and also a range of statistical correlations.
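Both derivations can be sketched over a toy entity store. The entities, tonnages and co-authorship links below are invented for illustration; the point is that once each noun carries variables and named relationships, the tonnage query is a filtered sum and an Erdos number is a shortest-path distance.

```python
from collections import deque

# Hypothetical entity store: each "barco" (ship) carries variables and
# named relationships to other entities.
entities = {
    "ship_a": {"type": "barco", "tonnage": 9000, "owned_by": "us_navy"},
    "ship_b": {"type": "barco", "tonnage": 4500, "owned_by": "us_navy"},
    "ship_c": {"type": "barco", "tonnage": 7000, "owned_by": "merchant_line"},
}

def total_tonnage(entities, owner):
    """Walk every object of type 'barco', keep those owned by `owner`,
    and sum the tonnage variable: the query sketched in the text."""
    return sum(e["tonnage"] for e in entities.values()
               if e["type"] == "barco" and e["owned_by"] == owner)

print(total_tonnage(entities, "us_navy"))  # 13500

# An Erdos number is the shortest-path distance to Erdos along the
# co-authorship relation, found by breadth-first search.
collab = {"erdos": {"a"}, "a": {"erdos", "b"}, "b": {"a"}}

def erdos_number(collab, who):
    dist, queue = {"erdos": 0}, deque(["erdos"])
    while queue:
        cur = queue.popleft()
        for nxt in collab.get(cur, ()):
            if nxt not in dist:
                dist[nxt] = dist[cur] + 1
                queue.append(nxt)
    return dist.get(who)

print(erdos_number(collab, "b"))  # 2
```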