I remember that in high school, our math textbook had a chapter named "Logic". We spent about a month on problems such as "Given that A is false and A or B is true, we can conclude that B is true". Problems of this kind seem easy to solve. However, with hundreds or thousands of variables, things get difficult.
In Artificial Intelligence, this kind of problem is called propositional model checking. Given such a problem, it is natural to come up with the idea of generating the whole truth table to find the answer. However, for N variables there are 2^N possible assignments, which makes brute force infeasible for large N.
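To make the truth-table idea concrete, here is a minimal brute-force checker. The encoding and the `satisfiable` helper are my own sketch: each clause is a set of integer literals, where a positive integer i stands for variable i and -i for its negation.

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute force: try all 2^n assignments, return the first satisfying one."""
    for assignment in product([False, True], repeat=n_vars):
        def holds(lit):
            # assignment[i] is the value of variable i+1
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        # The formula is satisfied iff every clause has some true literal
        if all(any(holds(lit) for lit in clause) for clause in clauses):
            return assignment
    return None

# (not A or B) and (A): forces A = True, then B = True
print(satisfiable([{-1, 2}, {1}], 2))  # (True, True)
```

Each added variable doubles the work, which is exactly the 2^N blowup described above.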
How about DFS? We recursively assign True or False to each variable, and whenever the partial assignment contradicts the statement, we backtrack and try another value. This contradiction check is a strong pruning rule, but the algorithm still has a worst-case running time of O(2^N).
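That backtracking search can be sketched as follows. The `dfs_sat` name and the clause encoding (sets of integer literals, i for variable i and -i for its negation) are my own illustration:

```python
def dfs_sat(clauses, n_vars, model=()):
    """Assign variables in order; backtrack when a clause becomes all-false."""
    i = len(model)  # number of variables assigned so far
    # Prune: if some clause has every literal already assigned and false,
    # no extension of this partial model can work
    for clause in clauses:
        if all(abs(lit) <= i and model[abs(lit) - 1] != (lit > 0)
               for lit in clause):
            return None
    if i == n_vars:
        return model  # all variables assigned, no clause falsified
    for value in (True, False):
        result = dfs_sat(clauses, n_vars, model + (value,))
        if result is not None:
            return result
    return None

print(dfs_sat([{-1, 2}, {1}], 2))  # (True, True)
```

The pruning helps on average, but an adversarial formula can still force the search to visit an exponential number of partial assignments.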
Fortunately, an effective propositional model checking algorithm was introduced in 1962 by Martin Davis, Hilary Putnam, George Logemann, and Donald Loveland. It is called the Davis-Putnam-Logemann-Loveland algorithm, or DPLL for short.
DPLL is a clever DFS-based algorithm with powerful pruning that avoids exploring unnecessary subtrees of the search tree. It adds three improvements over the plain DFS algorithm:
- Early termination
- Pure symbol heuristic
- Unit clause heuristic
Here is Python-like pseudocode, written by myself, to illustrate DPLL:

```python
# Note: this is pseudocode. Python is not a pass-by-value language;
# "model + {P: value}" stands for a copy of model extended with P = value.
def DPLL(clauses, symbols, model):
    # Early termination: stop as soon as the model decides every clause
    if allTrue(clauses, model):
        return True
    if anyFalse(clauses, model):
        return False
    # Pure symbol heuristic: a symbol that occurs with only one sign
    # in the remaining clauses can safely be assigned that value
    P, value = findPureSymbol(symbols, clauses, model)
    if P is not None:
        return DPLL(clauses, symbols - P, model + {P: value})
    # Unit clause heuristic: a clause with a single undecided literal
    # forces that literal's value
    P, value = findUnitClause(clauses, model)
    if P is not None:
        return DPLL(clauses, symbols - P, model + {P: value})
    # Otherwise, branch on the first remaining symbol
    P, rest = symbols[0], symbols[1:]
    return (DPLL(clauses, rest, model + {P: True})
            or DPLL(clauses, rest, model + {P: False}))
```
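The two heuristics can be made concrete. Below is a runnable sketch of `findPureSymbol` and `findUnitClause` under assumptions of my own: clauses are sets of integer literals (i for variable i, -i for its negation) and the model is a dict from variable to bool; the function names mirror the pseudocode, but the bodies are illustrative, not the textbook's code.

```python
def find_pure_symbol(symbols, clauses, model):
    """Return (P, value) for a symbol appearing with only one sign
    in the clauses not yet satisfied by model, else (None, None)."""
    seen_pos, seen_neg = set(), set()
    for clause in clauses:
        # Clauses already satisfied by the model cannot block purity
        if any(model.get(abs(l)) == (l > 0) for l in clause):
            continue
        for l in clause:
            (seen_pos if l > 0 else seen_neg).add(abs(l))
    for s in symbols:
        if s in seen_pos and s not in seen_neg:
            return s, True
        if s in seen_neg and s not in seen_pos:
            return s, False
    return None, None

def find_unit_clause(clauses, model):
    """Return (P, value) from a clause whose literals are all false
    under model except one unassigned literal, else (None, None)."""
    for clause in clauses:
        unassigned, satisfied = [], False
        for l in clause:
            v = model.get(abs(l))
            if v is None:
                unassigned.append(l)
            elif v == (l > 0):
                satisfied = True
                break
        if not satisfied and len(unassigned) == 1:
            l = unassigned[0]
            return abs(l), l > 0
    return None, None

# For (not A or B) and (A): B is pure positive, and (A) is a unit clause
print(find_pure_symbol([1, 2], [{-1, 2}, {1}], {}))  # (2, True)
print(find_unit_clause([{-1, 2}, {1}], {}))          # (1, True)
```

Both checks run in time linear in the size of the formula, which is why they pay for themselves: each successful application removes a whole branch from the search tree.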
References
- DPLL algorithm - Wikipedia
- Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (3rd Edition)