Schönhage–Strassen algorithm
The Schönhage–Strassen algorithm is an asymptotically fast multiplication algorithm for large integers, published by Arnold Schönhage and Volker Strassen in 1971.[1] It works by recursively applying fast Fourier transforms (FFTs) over the integers modulo 2^n + 1. The run-time bit complexity to multiply two n-digit numbers using the algorithm is O(n · log n · log log n) in big O notation.
The Schönhage–Strassen algorithm was the asymptotically fastest multiplication method known from 1971 until 2007. It is asymptotically faster than older methods such as Karatsuba and Toom–Cook multiplication, and starts to outperform them in practice for numbers beyond about 10,000 to 100,000 decimal digits.[2] In 2007, Martin Fürer published an algorithm with faster asymptotic complexity.[3] In 2019, David Harvey and Joris van der Hoeven demonstrated that multi-digit multiplication has theoretical complexity O(n log n); however, their algorithm has constant factors which make it impossibly slow for any conceivable practical problem (see galactic algorithm).[4]
Applications of the Schönhage–Strassen algorithm include large computations done for their own sake such as the Great Internet Mersenne Prime Search and approximations of π, as well as practical applications such as Lenstra elliptic curve factorization via Kronecker substitution, which reduces polynomial multiplication to integer multiplication.[5][6]
This section has a simplified version of the algorithm, showing how to compute the product ab of two natural numbers a, b, modulo a number of the form 2^n + 1, where n = 2^k · M is some fixed number. The integers a, b are to be divided into D = 2^k blocks of M bits, so in practical implementations, it is important to strike the right balance between the parameters M and k. In any case, this algorithm will provide a way to multiply two positive integers, provided n is chosen so that ab < 2^n + 1.
Let n = DM be the number of bits in the signals a and b, where D = 2^k is a power of two. Divide the signals a and b into D blocks of M bits each, storing the resulting blocks as arrays A, B (whose entries we shall consider for simplicity as arbitrary precision integers).
We now select a modulus for the Fourier transform, as follows. Let M' be such that DM' ≥ 2M + k. Also put n' = DM', and regard the elements of the arrays A, B as (arbitrary precision) integers modulo 2^{n'} + 1. Observe that since 2^{n'} + 1 ≥ 2^{2M + k} + 1 > D · 2^{2M}, the modulus is large enough to accommodate any carries that can result from multiplying a and b. Thus, the product ab (modulo 2^n + 1) can be calculated by evaluating the convolution of A and B. Also, with g = 2^{2M'}, we have g^{D/2} = 2^{DM'} = 2^{n'} ≡ −1 (mod 2^{n'} + 1), and so g is a primitive Dth root of unity modulo 2^{n'} + 1.
We now take the discrete Fourier transform of the arrays A, B in the ring Z/(2^{n'} + 1)Z, using the root of unity g for the Fourier basis, giving the transformed arrays Â, B̂. Because D = 2^k is a power of two, this can be achieved in O(D log D) ring operations using a fast Fourier transform.
Let Ĉ_i = Â_i · B̂_i (pointwise product), and compute the inverse transform of the array Ĉ, again using the root of unity g. The array C is now the convolution of the arrays A and B. Finally, the product ab is given by evaluating ab ≡ Σ_j C_j 2^{Mj} (mod 2^n + 1).
This basic algorithm can be improved in several ways. Firstly, it is not necessary to store the digits of a, b to arbitrary precision, but rather only up to n' + 1 bits, which gives a more efficient machine representation of the arrays A, B. Secondly, it is clear that the multiplications in the forward transforms are simple bit shifts, because the root of unity g is a power of two. With some care, it is also possible to compute the inverse transform using only shifts. Taking care, it is thus possible to eliminate any true multiplications from the algorithm except for where the pointwise product Ĉ_i = Â_i · B̂_i is evaluated. It is therefore advantageous to select the parameters D, M so that this pointwise product can be performed efficiently, either because it is a single machine word or using some optimized algorithm for multiplying integers of a (ideally small) number of words. Selecting the parameters D, M is thus an important area for further optimization of the method.
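A minimal Python sketch of the scheme just described is given below. It is illustrative rather than faithful to any particular implementation: it uses a naive O(D^2) transform instead of an FFT butterfly, it applies the weights θ = 2^{M'} (a discrete weighted transform, discussed later in this article) so that the block convolution becomes negacyclic and the result is correct modulo 2^n + 1, and it reserves one extra bit in n' so that signed coefficients can be recovered. The function name ssa_multiply_mod and the demo parameters are arbitrary choices of the sketch.

```python
def ssa_multiply_mod(a, b, k, M):
    """Return a*b mod 2**(D*M) + 1, where D = 2**k, via a length-D NTT."""
    D = 1 << k
    n = D * M
    outer_mod = (1 << n) + 1

    # Inner modulus 2**n' + 1: pick M' with D*M' >= 2*M + k + 1 so that every
    # signed convolution coefficient (|C_j| < D * 2**(2*M)) is recoverable.
    Mp = -(-(2 * M + k + 1) // D)            # ceiling division
    np_ = D * Mp
    mod = (1 << np_) + 1
    theta = pow(2, Mp, mod)                  # theta**D = 2**n' ≡ -1 (mod 2**n' + 1)
    g = pow(theta, 2, mod)                   # primitive D-th root of unity

    # Split a and b into D blocks of M bits and apply the weights theta**j.
    mask = (1 << M) - 1
    A = [((a >> (M * j)) & mask) * pow(theta, j, mod) % mod for j in range(D)]
    B = [((b >> (M * j)) & mask) * pow(theta, j, mod) % mod for j in range(D)]

    def transform(x, root):
        # Naive O(D**2) DFT over Z/(2**n'+1); a real implementation uses an FFT
        # butterfly in which multiplying by a power of the root is a bit shift.
        return [sum(x[j] * pow(root, i * j, mod) for j in range(D)) % mod
                for i in range(D)]

    Ah, Bh = transform(A, g), transform(B, g)
    Ch = [x * y % mod for x, y in zip(Ah, Bh)]              # pointwise products
    C = transform(Ch, pow(g, -1, mod))                      # unscaled inverse DFT
    d_inv, t_inv = pow(D, -1, mod), pow(theta, -1, mod)

    result = 0
    for j in range(D):
        c = C[j] * d_inv % mod * pow(t_inv, j, mod) % mod   # undo 1/D and weight
        if c > mod // 2:                                    # recover signed value
            c -= mod
        result += c << (M * j)                              # recombine the blocks
    return result % outer_mod


if __name__ == "__main__":
    import random
    k, M = 3, 8                        # D = 8 blocks of 8 bits, so n = 64
    n = (1 << k) * M
    a, b = (random.randrange(1 << n) for _ in range(2))
    assert ssa_multiply_mod(a, b, k, M) == (a * b) % ((1 << n) + 1)
```

In a full implementation, the pointwise products would themselves be computed recursively by the same routine once the operands are large enough, which is where the algorithm's recursion and its run-time bound come from.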
Every number in base B can be written as a polynomial: X = Σ_{i=0}^{N} x_i B^i.
Furthermore, multiplication of two numbers can be thought of as a product of two polynomials: XY = (Σ_i x_i B^i)(Σ_j y_j B^j).
Because the coefficient of B^k is c_k = Σ_{(i,j): i+j=k} x_i y_j = Σ_{i=0}^{k} x_i y_{k−i}, we have a convolution.
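As a concrete illustration of the two statements above, the following sketch (the helper names digits and convolve are ad hoc) convolves the base-10 digit sequences of two numbers and checks that evaluating the resulting polynomial at B = 10, i.e. propagating carries, yields the ordinary product.

```python
B = 10

def digits(x):
    """Base-B digits of x, least significant first."""
    out = []
    while x:
        out.append(x % B)
        x //= B
    return out or [0]

def convolve(xs, ys):
    """Acyclic convolution: c_k = sum over i + j = k of x_i * y_j."""
    c = [0] * (len(xs) + len(ys) - 1)
    for i, xi in enumerate(xs):
        for j, yj in enumerate(ys):
            c[i + j] += xi * yj
    return c

x, y = 1234, 5678
c = convolve(digits(x), digits(y))       # [32, 52, 61, 60, 34, 16, 5]
# Evaluating the product polynomial at B (propagating carries) gives x*y.
assert sum(ck * B**k for k, ck in enumerate(c)) == x * y
```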
By using the FFT (fast Fourier transform), used in the original version rather than the NTT (number-theoretic transform),[7] together with the convolution rule, we get
fft( Σ_{i+j=k} x_i y_j ) = fft(x) · fft(y) (pointwise).
That is, ĉ_k = x̂_k · ŷ_k, where ĉ_k is the corresponding coefficient in Fourier space. This can also be written as fft(x * y) = fft(x) · fft(y).
We have the same coefficients due to linearity under the Fourier transform, and because these polynomials only consist of one unique term per coefficient.
Convolution rule: f̂(X * Y) = f̂(X) · f̂(Y).
We have reduced our convolution problem to a product problem, through FFT.
By computing the inverse FFT (polynomial interpolation) of each ĉ_k, one can determine the desired coefficients c_k.
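The convolution rule can be checked numerically with an off-the-shelf complex FFT; the snippet below uses numpy purely as an illustration, and the rounding in the last step is precisely why exact number-theoretic transforms are preferred in careful implementations.

```python
import numpy as np

x = [4, 3, 2, 1]                     # digits of 1234, least significant first
y = [8, 7, 6, 5]                     # digits of 5678
size = len(x) + len(y) - 1           # length of the acyclic convolution

# Zero-pad, transform, multiply pointwise, transform back, round to integers.
fx = np.fft.fft(x, n=size)
fy = np.fft.fft(y, n=size)
c = np.rint(np.fft.ifft(fx * fy).real).astype(int)

assert sum(int(ck) * 10**k for k, ck in enumerate(c)) == 1234 * 5678
```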
This algorithm uses the divide-and-conquer method to divide the problem into subproblems.
By letting:
a_j = θ^j x_j and b_j = θ^j y_j,
where θ is a root of unity satisfying θ^N = −1, one sees that:[8]
Σ_{(i,j): i+j ≡ k (mod N)} a_i b_j = θ^k ( Σ_{(i,j): i+j=k} x_i y_j − Σ_{(i,j): i+j=k+N} x_i y_j ).
This means one can use the weight θ^j, and then multiply by θ^{−k} afterwards.
Instead of using the weight, since θ^N = −1, in the first step of the recursion (when n = N) one can calculate the negacyclic convolution directly:
C_k = Σ_{(i,j): i+j=k} x_i y_j − Σ_{(i,j): i+j=k+N} x_i y_j.
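A numerical sketch of this weighting trick, with the illustrative choice θ = e^{iπ/N} over the complex numbers: weighting x and y by θ^j, taking an ordinary cyclic convolution, and unweighting by θ^{−k} reproduces the negacyclic convolution computed directly.

```python
import numpy as np

N = 4
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([5.0, 6.0, 7.0, 8.0])
theta = np.exp(1j * np.pi / N)            # theta**N == -1

# Weighted cyclic convolution via an FFT of length N, then unweighting.
w = theta ** np.arange(N)
cyclic = np.fft.ifft(np.fft.fft(w * x) * np.fft.fft(w * y))
negacyclic = np.rint((cyclic / w).real)

# Direct negacyclic convolution for comparison.
direct = np.zeros(N)
for i in range(N):
    for j in range(N):
        if i + j < N:
            direct[i + j] += x[i] * y[j]
        else:
            direct[i + j - N] -= x[i] * y[j]

assert np.array_equal(negacyclic, direct)     # [-56., -36., 2., 60.]
```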
In a normal FFT which operates over complex numbers, one would use the complex roots of unity: e^{2πik/n} = cos(2πk/n) + i·sin(2πk/n).
However, the FFT can also be used as an NTT (number theoretic transform) in Schönhage–Strassen. This means that we have to use θ to generate numbers in a finite field (for example GF(2^n + 1)).
A root of unity of order k in a finite field GF(r) is an element θ such that θ^k ≡ 1, or equivalently θ^{k/2} ≡ −1 when k is even. For example, GF(p), where p is a prime number, gives the cyclic multiplicative group {1, 2, …, p − 1}.
Notice that 2^n ≡ −1 in Z_{2^n + 1}, so 2 is a root of unity of order 2n in this ring. For these candidates, θ^N ≡ −1 in the corresponding finite ring, and they therefore act the way we want.
The same FFT algorithms can still be used, though, as long as θ is a root of unity of a finite field.
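For the Fermat-style modulus 2^n + 1 the required roots of unity are immediate, as the following checks (with an arbitrarily chosen n = 16) illustrate:

```python
n = 16
mod = (1 << n) + 1                       # 65537, here even a Fermat prime

assert pow(2, n, mod) == mod - 1         # 2**n ≡ -1 (mod 2**n + 1)
assert pow(2, 2 * n, mod) == 1           # so 2 is a root of unity of order 2n
g = pow(2, 2, mod)
assert pow(g, n // 2, mod) == mod - 1    # and g = 4 is a primitive n-th root
```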
To find the FFT/NTT transform, we do the following:
ĉ_k = x̂_k · ŷ_k, so that c_k = Σ_{(i,j): i+j ≡ k (mod N)} x_i y_j = Σ_{i+j=k} x_i y_j + Σ_{i+j=k+N} x_i y_j.
The first sum gives a contribution to c_k for each k. The second gives the contribution of the pairs with i + j = k + N, which lands on index k due to the reduction of i + j mod N.
To do the inverse:
c_k = (1/N) Σ_{j=0}^{N−1} ĉ_j θ^{−jk}  or  c_k = Σ_{j=0}^{N−1} ĉ_j θ^{−jk},
depending on whether the data needs to be normalized.
One multiplies by 2^{−m} to normalize the FFT data into a specific range, where 1/N ≡ 2^{−m} modulo the transform modulus, and m is found using the modular multiplicative inverse.
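A short check of this normalization step, with an illustrative transform length N = 8 and modulus 2^24 + 1: the inverse of N is itself a negative power of two, so it can be applied with the modular inverse or, in an optimized implementation, as a bit shift.

```python
n_prime = 24
mod = (1 << n_prime) + 1                 # inner modulus 2**n' + 1
N = 8                                    # transform length, a power of two

inv_N = pow(N, -1, mod)                  # modular multiplicative inverse of N
assert inv_N == pow(2, -3, mod)          # 1/N ≡ 2**(-m) with m = log2(N) = 3
assert (N * inv_N) % mod == 1
```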
In the Schönhage–Strassen algorithm, N = 2^M. This should be thought of as a binary tree, where one has values at positions 0 ≤ index < 2^M = N. By letting K = i + j, for each K one can find all pairs (i, j) with i + j = K, and group all those pairs into M different groups. Using i + j = K to group pairs through convolution is a classical problem in algorithms.[9]
Having this in mind helps us to group the (i, j) pairs into groups of subtasks at each depth k in a tree with N = 2^M leaves.
Notice that N = 2^{2^L} + 1, for some L. This makes N a Fermat number. When doing arithmetic mod 2^{2^L} + 1, we have a Fermat ring.
Because some Fermat numbers are Fermat primes, one can in some cases avoid calculations.
There are other N that could have been used, of course, with the same prime-number advantages. By letting N = 2^k − 1, one has the maximal number representable in a binary number of k bits. 2^k − 1 is a Mersenne number, which in some cases is a Mersenne prime. It is a natural candidate to set against the Fermat number 2^{2^L} + 1.
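One practical advantage of both families of moduli, sketched below with hypothetical helper names, is that reduction needs no long division: since 2^n ≡ −1 (mod 2^n + 1) and 2^n ≡ +1 (mod 2^n − 1), splitting a value into high and low halves and subtracting or adding suffices.

```python
def reduce_fermat(x, n):
    """x mod (2**n + 1), using 2**n ≡ -1 to fold the high half."""
    mod = (1 << n) + 1
    mask = (1 << n) - 1
    while x > mask:
        x = (x & mask) - (x >> n)         # sign flips because 2**n ≡ -1
    return x % mod                        # canonicalise a possibly negative x

def reduce_mersenne(x, n):
    """x mod (2**n - 1), using 2**n ≡ +1 to fold the high half."""
    m = (1 << n) - 1
    while x > m:
        x = (x & m) + (x >> n)
    return x % m                          # maps the value m itself to 0

n, x = 16, 123456789012345
assert reduce_fermat(x, n) == x % ((1 << n) + 1)
assert reduce_mersenne(x, n) == x % ((1 << n) - 1)
```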
Doing several modular calculations against different N can be helpful when it comes to computing the integer product. By using the Chinese remainder theorem, after splitting the computation into several smaller moduli N of different types, one can recover the result of the multiplication xy.[10]
Fermat numbers and Mersenne numbers are just two special cases of something called generalized Fermat–Mersenne numbers (GSM), with formula:[11]
G_{q,p,n} = (q^{pn} − 1) / (q^n − 1) = Σ_{i=0}^{p−1} q^{in}.
In this formula, G_{2,2,2^t} = 2^{2^t} + 1 is a Fermat number, and G_{2,p,1} = 2^p − 1 is a Mersenne number.
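A small check of this family, assuming the explicit closed form G(q, p, n) = (q^{pn} − 1)/(q^n − 1) (the notation and this form are assumptions of the sketch, not taken from the cited source): q = p = 2 recovers numbers of the shape 2^n + 1, and n = 1 recovers the Mersenne numbers 2^p − 1.

```python
def G(q, p, n):
    """Generalised Fermat-Mersenne value, assumed form (q**(p*n)-1)/(q**n-1)."""
    return (q**(p * n) - 1) // (q**n - 1)

assert G(2, 2, 16) == 2**16 + 1          # the Fermat prime 65537
assert G(2, 31, 1) == 2**31 - 1          # the Mersenne prime 2**31 - 1
```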
This formula can be used to generate sets of equations that can be used in the CRT (Chinese remainder theorem).[12]
Furthermore, a here denotes an element that generates the elements of the multiplicative group in a cyclic manner (a generator). If N = 2^t is a power of two dividing the order of a, then a raised to the power ord(a)/N is a primitive Nth root of unity, which is what the transform requires.
The following formula is helpful for finding a proper K (the number of groups to divide the N bits into) for a given bit size N, by calculating the efficiency E:[13]
E = (2N/K + k) / n
Here N is the bit size (the one used in 2^N + 1) at the outermost level, and K = 2^k gives the number of groups of bits. The inner size n is found from N, K and k by taking the smallest x such that 2N/K + k ≤ n = Kx.
If one assumes an efficiency above 50%, and that k is very small compared to the rest of the formula, one gets
K ≤ 2√N.
This means: when the method is very efficient, K is bounded above by 2√N, or asymptotically bounded above by √N.
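A small helper based on the efficiency measure above shows how the trade-off behaves for a fixed outer size N; the search loop and printout are illustrative choices rather than a prescribed procedure.

```python
import math

def efficiency(N, k):
    """E = (2N/K + k) / n for K = 2**k and n the smallest multiple of K that fits."""
    K = 1 << k
    needed = 2 * N // K + k                  # bits actually required per block
    n = ((needed + K - 1) // K) * K          # round up to a multiple of K
    return needed / n, K, n

N = 1 << 20                                  # multiplying two ~1 Mbit numbers
for k in range(5, 14):
    E, K, n = efficiency(N, k)
    print(f"k={k:2d}  K={K:5d}  n={n:6d}  E={E:.2f}")
print("bound 2*sqrt(N) =", 2 * math.isqrt(N))
```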
An overview of the standard modular Schönhage–Strassen multiplication algorithm, with some optimizations, is given in the reference.[14]
For implementation details, one can read the book Prime Numbers: A Computational Perspective.[15] This variant differs somewhat from Schönhage's original method in that it exploits the discrete weighted transform to perform negacyclic convolutions more efficiently. Another source for detailed information is Knuth's The Art of Computer Programming.[16]
This section explains a number of important practical optimizations used when implementing Schönhage–Strassen.
Below a certain cutoff point, it's more efficient to use other multiplication algorithms, such as Toom–Cook multiplication.[17]
The idea is to use √2 as a root of unity of order 2^{k+2} in the finite field GF(2^{2^k} + 1) (it is a solution to the equation θ^{2^{k+2}} ≡ 1) when weighting values in the NTT (number theoretic transform) approach. It has been shown to save 10% in integer multiplication time.[18]
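The trick rests on the identity (2^{3n/4} − 2^{n/4})^2 ≡ 2 (mod 2^n + 1) for n divisible by 4, which the following check verifies for an illustrative n = 64:

```python
n = 64
mod = (1 << n) + 1

sqrt2 = (pow(2, 3 * n // 4, mod) - pow(2, n // 4, mod)) % mod
assert pow(sqrt2, 2, mod) == 2               # (2**(3n/4) - 2**(n/4))**2 ≡ 2
assert pow(sqrt2, 2 * n, mod) == mod - 1     # so sqrt2 has order exactly 4n,
assert pow(sqrt2, 4 * n, mod) == 1           # twice the order of 2 itself
```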
One computes uv mod (2^N + 1) and uv mod (2^N − 1); in combination with the CRT (Chinese remainder theorem), this yields the exact value of the multiplication uv.[19]
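A minimal sketch of this combination, with a hypothetical helper crt_pair: since 2^N − 1 and 2^N + 1 are coprime, the two residues determine uv modulo 2^{2N} − 1, which is the exact product whenever uv is small enough.

```python
def crt_pair(r1, m1, r2, m2):
    """Combine x ≡ r1 (mod m1) and x ≡ r2 (mod m2) for coprime m1, m2."""
    t = (r2 - r1) * pow(m1, -1, m2) % m2
    return (r1 + m1 * t) % (m1 * m2)

N = 32
u, v = 0xDEADBEEF, 0xCAFEBABE
r_mersenne = (u * v) % (2**N - 1)
r_fermat = (u * v) % (2**N + 1)
recovered = crt_pair(r_mersenne, 2**N - 1, r_fermat, 2**N + 1)
assert recovered == u * v                    # exact, since u*v < 2**(2*N) - 1
```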