In computer science, beam search is a heuristic search algorithm that explores a graph by expanding the most promising node in a limited set. Beam search is a modification of best-first search that reduces its memory requirements. Best-first search is a graph search that orders all partial solutions (states) according to some heuristic. In beam search, however, only a predetermined number of best partial solutions are kept as candidates.[1] It is thus a greedy algorithm.
Beam search uses breadth-first search to build its search tree. At each level of the tree, it generates all successors of the states at the current level, sorting them in increasing order of heuristic cost.[2] However, it only stores a predetermined number, β, of best states at each level (called the beam width). Only those states are expanded next. The greater the beam width, the fewer states are pruned. With an infinite beam width, no states are pruned and beam search is identical to best-first search.[3] Conversely, a beam width of 1 corresponds to a hill-climbing algorithm.[3] The beam width bounds the memory required to perform the search. Since a goal state could potentially be pruned, beam search sacrifices completeness (the guarantee that an algorithm will terminate with a solution, if one exists). Beam search is not optimal (that is, there is no guarantee that it will find the best solution).
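To make this procedure concrete, the following is a minimal sketch of beam search in Python. The function names (beam_search, successors, heuristic, is_goal) and the toy numeric problem are assumptions introduced for illustration, not part of the description above; lower heuristic values are treated as better, matching the increasing-cost ordering described above.

```python
import heapq

def beam_search(start, successors, heuristic, is_goal, beam_width, max_depth=100):
    """Keep only the beam_width lowest-cost states at each level of the tree."""
    if is_goal(start):
        return start
    beam = [start]
    for _ in range(max_depth):
        # Generate all successors of the states in the current beam.
        candidates = []
        for state in beam:
            for nxt in successors(state):
                if is_goal(nxt):
                    return nxt
                candidates.append(nxt)
        if not candidates:
            return None  # dead end: the goal may have been pruned earlier
        # Prune: keep only the beam_width best candidates by heuristic cost.
        beam = heapq.nsmallest(beam_width, candidates, key=heuristic)
    return None

# Toy usage: reach 42 from 0 by repeatedly adding 1 or 3, using the
# distance to the target as the heuristic cost.
goal = 42
result = beam_search(
    start=0,
    successors=lambda s: [s + 1, s + 3],
    heuristic=lambda s: abs(goal - s),
    is_goal=lambda s: s == goal,
    beam_width=2,
)
print(result)  # 42
```

In this sketch, setting beam_width to 1 reduces the search to hill climbing, while a very large beam_width keeps every candidate and behaves like best-first search over each level, consistent with the limiting cases described above.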
Beam search is most often used to maintain tractability in large systems with insufficient memory to store the entire search tree.[4] For example, it has been used in many machine translation systems[5] (the state of the art now primarily uses neural machine translation methods, especially large language models). To select the best translation, each part is processed, and many different ways of translating the words appear. The top β translations according to their sentence structure are kept, and the rest are discarded. The translator then evaluates the translations according to a given criterion, choosing the translation that best meets the goals.
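As a rough illustration of how this plays out in sequence generation, the sketch below applies beam search to decoding with a toy scoring model. The function log_prob, the small vocabulary, and the target sentence are hypothetical stand-ins for a real translation model, not the method of any particular system.

```python
import math

def beam_search_decode(log_prob, vocab, beam_width, max_len, eos="</s>"):
    """Keep the beam_width highest-scoring partial sequences at each step."""
    # Each hypothesis is a (tokens, cumulative log-probability) pair.
    beam = [([], 0.0)]
    completed = []
    for _ in range(max_len):
        candidates = []
        for tokens, score in beam:
            for tok in vocab:
                candidates.append((tokens + [tok], score + log_prob(tokens, tok)))
        # Prune to the beam_width best partial translations.
        candidates.sort(key=lambda h: h[1], reverse=True)
        beam = []
        for tokens, score in candidates[:beam_width]:
            if tokens[-1] == eos:
                completed.append((tokens, score))
            else:
                beam.append((tokens, score))
        if not beam:
            break
    completed.extend(beam)
    return max(completed, key=lambda h: h[1])

# Toy model: strongly prefers the sequence "the cat sat </s>".
target = ["the", "cat", "sat", "</s>"]
def log_prob(prefix, token):
    pos = len(prefix)
    wanted = target[pos] if pos < len(target) else "</s>"
    return math.log(0.7 if token == wanted else 0.1)

vocab = ["the", "cat", "sat", "mat", "</s>"]
tokens, score = beam_search_decode(log_prob, vocab, beam_width=3, max_len=6)
print(" ".join(tokens))  # the cat sat </s>
```

Hypotheses that end with the end-of-sequence token are set aside as completed translations, and the highest-scoring completed hypothesis is returned at the end, mirroring the keep-the-best, discard-the-rest behaviour described above.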
The Harpy Speech Recognition System (introduced in a 1976 dissertation[6]) was the first use of what would become known as beam search.[7] While the procedure was originally referred to as the "locus model of search", the term "beam search" was already in use by 1977.[8]
Beam search has been made complete by combining it with depth-first search, resulting in beam stack search[9] and depth-first beam search,[4] and with limited discrepancy search, resulting in beam search using limited discrepancy backtracking (BULB).[4] The resulting search algorithms are anytime algorithms: like beam search, they quickly find good but likely sub-optimal solutions, then backtrack and continue to find improved solutions until they converge to an optimal solution.
In the context of local search, local beam search is a specific algorithm that begins by selecting β randomly generated states and then, for each level of the search tree, always considers β new states among all the possible successors of the current ones, until it reaches a goal.[10][11]
Since local beam search often ends up on local maxima, a common solution is to choose the next states randomly, with a probability that depends on the heuristic evaluation of the states. This kind of search is called stochastic beam search.[12]
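A minimal sketch of local beam search with an optional stochastic selection step is given below. The names (local_beam_search, neighbors, value) and the toy maximization problem are illustrative assumptions; the stochastic branch samples successors with probability proportional to the exponential of their value, which is one common reading of "probability that depends on the heuristic evaluation".

```python
import math
import random

def local_beam_search(initial_states, neighbors, value, steps, stochastic=False):
    """Track k states at once; higher value(state) is better."""
    beam = list(initial_states)
    k = len(beam)
    best = max(beam, key=value)  # best state seen so far
    for _ in range(steps):
        # Pool the successors of every state currently in the beam.
        pool = [n for s in beam for n in neighbors(s)]
        if not pool:
            break
        if stochastic:
            # Stochastic beam search: sample k successors with probability
            # proportional to exp(value), rather than always taking the best k.
            vals = [value(s) for s in pool]
            vmax = max(vals)  # subtract the max so exp() cannot underflow to all zeros
            weights = [math.exp(v - vmax) for v in vals]
            beam = random.choices(pool, weights=weights, k=k)
        else:
            # Plain local beam search: keep only the k best successors.
            beam = sorted(pool, key=value, reverse=True)[:k]
        best = max([best, *beam], key=value)
    return best

# Toy usage: maximize -(x - 7)^2 over the integers by stepping +/-1.
random.seed(0)
best = local_beam_search(
    initial_states=[random.randint(-20, 20) for _ in range(4)],
    neighbors=lambda x: [x - 1, x + 1],
    value=lambda x: -(x - 7) ** 2,
    steps=50,
    stochastic=True,
)
print(best)  # usually prints 7, the optimum of this toy objective
```

The random sampling lets weaker successors occasionally survive, which is what helps the stochastic variant escape the local maxima that trap plain local beam search.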
Other variants are flexible beam search and recovery beam search.[11]