Depth-first Search (Russell & Norvig)
As outlined in Russell & Norvig, Chapter 3: Solving Problems by Searching
Definition
A Search Algorithm that adopts an Uninformed Search Strategy:
Depth-first search always expands the deepest node in the frontier first. (p. 96)
Implementation
Depth-first search can be implemented as a call to best-first search, with the evaluation function \(f(n)\) set to the negative of the node's depth.
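A minimal sketch of that idea, assuming a generic best-first search over a toy tree (the function names, tuple layout, and toy state space are my own, not from the book):

```python
import heapq

def best_first_search(start, successors, is_goal, f):
    """Minimal best-first search: always expand the frontier node with the lowest f."""
    counter = 0  # tie-breaker so heapq never has to compare states directly
    frontier = [(f(start, 0), counter, start, 0, [start])]
    reached = {start: 0}
    while frontier:
        _, _, state, depth, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        for child in successors(state):
            if child not in reached:
                reached[child] = depth + 1
                counter += 1
                heapq.heappush(
                    frontier,
                    (f(child, depth + 1), counter, child, depth + 1, path + [child]),
                )
    return None

# Hypothetical toy tree for illustration: states are strings, edges hard-coded.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "D": [], "E": [], "F": []}

# f(n) = -depth makes best-first search behave like depth-first search.
path = best_first_search("A", lambda s: tree[s], lambda s: s == "E",
                         f=lambda state, depth: -depth)
```

Because deeper nodes get a lower (more negative) \(f\), the priority queue always pops the deepest frontier node first, which is exactly the depth-first expansion order.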
However, it is typically not implemented as a graph search at all, but as a tree search that does not keep a full table of reached states. It proceeds immediately down the deepest branch of the search tree until it reaches a node with no successors, then 'backs up' to the deepest node that still has unexpanded successors and continues from there.
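The tree-search version described above can be sketched recursively; this is my own minimal sketch (not the book's pseudocode), and it assumes a finite, tree-shaped state space, since it keeps no reached table and would loop forever on cycles:

```python
def dfs_tree(state, successors, is_goal, path=None):
    """Tree-search DFS: no reached table; recursion dives down the deepest
    branch and the call stack 'backs up' when a node has no successors."""
    path = (path or []) + [state]
    if is_goal(state):
        return path
    for child in successors(state):
        result = dfs_tree(child, successors, is_goal, path)
        if result is not None:
            return result
    return None  # dead end: return to the caller, i.e. back up one level

# Hypothetical toy tree: DFS exhausts the B subtree (D, E) before backing
# up to try C, where it finds the goal F.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "D": [], "E": [], "F": []}
path = dfs_tree("A", lambda s: tree[s], lambda s: s == "F")
```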
Pseudocode
No pseudocode is given for depth-first search directly; instead, Russell & Norvig provide the refined Depth-limited Search.
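A sketch of that refinement, following the book's description of depth-limited search (treat nodes at the depth limit as having no successors, and distinguish outright failure from hitting the cutoff); the function names and toy tree are my own:

```python
def depth_limited_search(state, successors, is_goal, limit):
    """Depth-limited DFS. Returns a solution path, None (failure: no solution
    anywhere), or the string 'cutoff' (the depth limit was reached, so a
    solution might still exist deeper down)."""
    if is_goal(state):
        return [state]
    if limit == 0:
        return "cutoff"  # act as if this node had no successors
    cutoff_occurred = False
    for child in successors(state):
        result = depth_limited_search(child, successors, is_goal, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [state] + result
    return "cutoff" if cutoff_occurred else None

# Hypothetical toy tree: the goal F sits at depth 2.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "D": [], "E": [], "F": []}
shallow = depth_limited_search("A", lambda s: tree[s], lambda s: s == "F", limit=1)
deep = depth_limited_search("A", lambda s: tree[s], lambda s: s == "F", limit=2)
```

With `limit=1` the search is cut off before reaching F; with `limit=2` it finds the path. Iterative deepening simply re-runs this with increasing limits.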
Properties and Performance
Depth-first search is not cost-optimal: it returns the first solution it finds, even if that is not the cheapest one.
Depth-first search is also incomplete; it is complete and efficient only for finite state spaces that are trees. For acyclic state spaces it will eventually explore the whole state space, but it may visit the same state many times via different paths. For cyclic state spaces it can get stuck in an infinite loop, so some implementations check for cycles. For infinite state spaces it can get stuck going down one path forever even when there are no cycles; hence it is incomplete.
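The cycle check mentioned above can be done by rejecting any successor that already appears on the current path. A minimal sketch, assuming a hypothetical cyclic toy graph (names are my own):

```python
def dfs_cycle_safe(state, successors, is_goal, path=()):
    """DFS that skips any successor already on the current path, so it
    terminates on finite cyclic state spaces (at an O(m) cost per check)."""
    path = path + (state,)
    if is_goal(state):
        return list(path)
    for child in successors(state):
        if child in path:  # cycle check: scan the current path, O(m)
            continue
        result = dfs_cycle_safe(child, successors, is_goal, path)
        if result is not None:
            return result
    return None

# Hypothetical cyclic graph: A -> B -> A would loop forever without the check.
graph = {"A": ["B"], "B": ["A", "C"], "C": []}
path = dfs_cycle_safe("A", lambda s: graph[s], lambda s: s == "C")
```

Note this only guards against cycles along the current path; it does not prevent revisiting a state via a different path, so it is still a tree search, not a graph search.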
So why use it? In the right (finite, tree-like) state space, DFS is preferable because it has low memory requirements. The frontier is very small: while BFS has a frontier like an expanding sphere, the DFS frontier is just a single path, the radius of that sphere. So the memory needed is only \(O(bm)\), where b is the branching factor and m is the maximum depth of the tree. Time complexity is proportional to the number of states in the state space.
So DFS has become the workhorse of many areas of AI, including constraint satisfaction and logic programming.
A variant has even better memory performance. Backtracking search expands only one successor at a time, rather than all successors, and modifies the current state description in place rather than allocating memory for a brand-new state. As such it requires just one state description and a path of \(O(m)\) actions. This gives us the option to check for cycles in \(O(1)\) time rather than \(O(m)\). For backtracking to work we need to be able to undo each action when we backtrack. (p. 98)
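A minimal sketch of backtracking search with explicit apply/undo actions; the generic driver and the toy problem (building a permutation of three digits by appending unused ones) are my own illustration, not the book's:

```python
def backtracking_search(state, actions, apply_action, undo_action, is_goal):
    """Backtracking search: keeps ONE mutable state description, modified in
    place, plus an implicit O(m) path of actions on the call stack; every
    action is undone when we back up, so no new states are ever allocated."""
    if is_goal(state):
        return True  # state now holds a solution
    for action in actions(state):
        apply_action(state, action)
        if backtracking_search(state, actions, apply_action, undo_action, is_goal):
            return True
        undo_action(state, action)  # reverse the action before backing up
    return False

# Hypothetical toy problem: extend a list to a permutation of {0, 1, 2}.
state = []
found = backtracking_search(
    state,
    actions=lambda s: [d for d in range(3) if d not in s],  # only legal moves
    apply_action=lambda s, d: s.append(d),  # modify the single state in place
    undo_action=lambda s, d: s.pop(),       # the undo for append
    is_goal=lambda s: len(s) == 3,
)
```

On success the shared `state` list itself contains the solution, which is exactly the point: no per-node state copies are ever made.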