#### Publication Date

2009-01-01

#### Availability

Open access

#### Degree Type

Thesis

#### Degree Name

Master of Science (MS)

#### Department

Computer Science (Arts and Sciences)

#### Date of Defense

2009-07-30

#### First Committee Member

Dr. Geoff Sutcliffe - Committee Chair

#### Second Committee Member

Dr. Dilip Sarkar - Committee Member

#### Third Committee Member

Dr. Miroslav Kubat - Outside Committee Member

#### Abstract

Developing logical reasoning in machines has long been a concern of scientists. Automated Theorem Proving (ATP) is a field that implements the concept of logical consequence to a certain level. However, when the number of available axioms is very large, the probability of finding a proof of a conjecture within a reasonable time limit can be very small. This is where the ability to learn from previously proved theorems comes into play. In everyday life, whenever we encounter a new situation S(NEW), we recall old scenarios S(OLD) similar to the new one and then try to solve S(NEW) with the help of the facts F(OLD) related to S(OLD). This research applies the same idea. The thesis develops a solution, and implements it in a tool, that tries to prove a failed conjecture (a problem that the ATP system failed to prove) by extracting a sufficient set of axioms, which we call the Refined Axiom Set (RAS), from a large pool of available axioms. This is done by measuring the similarity of the failed conjecture to solved theorems (those already proved) in the same domain; we call this "process1", a syntactic selection of axioms. After process1, the RAS may still contain irrelevant axioms, which motivated us to apply a semantic selection approach that refines it to a much finer level; we call this "process2". We then attempt to prove the failed conjecture from the output of either process1 or process2, depending on which approach the user selects. For testing, we picked all FOF problems from the TPTP problem domain SWC, which consists of 24 broken conjectures (problems for which the ATP system can show that a proof exists but cannot find one because of limited resources), 124 failed conjectures, and 274 solved theorems. Results are reported for both the broken and the failed problems. As expected, the percentage of broken conjectures solved is higher than that of failed conjectures: the tool achieved a success rate of 100% on the broken set and 19.5% on the failed set.
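The syntactic selection stage described in the abstract could be sketched roughly as follows. This is an illustrative reconstruction, not the thesis's actual implementation: the symbol-overlap similarity measure, the threshold, and all names (`refined_axiom_set`, `solved`, the formula and axiom strings) are hypothetical.

```python
import re

def symbols(formula):
    """Crudely approximate a first-order formula by its set of
    alphanumeric tokens (predicate, function, and variable names)."""
    return set(re.findall(r"[A-Za-z_]\w*", formula))

def similarity(f1, f2):
    """Jaccard overlap of the two formulas' symbol sets."""
    s1, s2 = symbols(f1), symbols(f2)
    return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 0.0

def refined_axiom_set(failed_conjecture, solved_theorems, threshold=0.5):
    """process1-style sketch: take the union of the axioms used in
    proofs of solved theorems whose similarity to the failed
    conjecture meets the threshold; the result plays the role of
    the RAS."""
    ras = set()
    for theorem, axioms_used in solved_theorems:
        if similarity(failed_conjecture, theorem) >= threshold:
            ras |= set(axioms_used)
    return ras

# Toy example with made-up set-theory formulas and axiom names.
solved = [
    ("subset(union(A,B),C)", ["union_def", "subset_def"]),
    ("member(X,intersection(A,B))", ["intersection_def"]),
]
print(sorted(refined_axiom_set("subset(A,union(A,B))", solved)))
# prints ['subset_def', 'union_def']
```

Here the first solved theorem shares most of its symbols with the failed conjecture, so its axioms are collected, while the dissimilar second theorem contributes nothing; a semantic stage like process2 would then prune any remaining irrelevant axioms from this set.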

#### Keywords

Machine Learning; MLAR; Automated Theorem Proving

#### Recommended Citation

Kakkad, Aman, "Machine Learning for Automated Theorem Proving" (2009). *Open Access Theses*. 223.

http://scholarlyrepository.miami.edu/oa_theses/223