Fast Fourier Transform for Speeding Up the Multiplication of Polynomials: An Algorithm Visualization

Alexandru Cioaca

Defining the problem

Consider two polynomials of degree N.

We want to compute their product. By definition, this means cross-multiplying their coefficients. The computation is heavy, but it can be sped up with FFT.

In the following slides, you will find out why and how. The explicit form of a polynomial is given by its list of coefficients, and we can use it to compute the polynomial's value at any point (for any value of the variable).

This operation is called Evaluation. In reverse, if we have the values of a polynomial of degree N at at least N+1 distinct points, we can determine its coefficients. This operation is called Interpolation.
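These two operations can be sketched in Python (a minimal illustration added here for concreteness; Horner's rule and Lagrange interpolation are standard choices, not something the slides prescribe):

```python
def evaluate(coeffs, x):
    """Horner's rule: evaluate the polynomial with coefficients
    [a0, a1, ..., aN] at the point x."""
    result = 0.0
    for a in reversed(coeffs):
        result = result * x + a
    return result

def interpolate(points):
    """Lagrange interpolation: given N+1 (x, y) pairs, return a
    callable reproducing the unique degree-N polynomial through them."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# p(x) = 1 + 2x + 3x^2: evaluate at 3 points, then reconstruct
coeffs = [1, 2, 3]
pts = [(x, evaluate(coeffs, x)) for x in (0.0, 1.0, 2.0)]
p = interpolate(pts)
print(p(4.0))  # matches 1 + 2*4 + 3*16 = 57
```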

Consider the following polynomial:

p(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3

We can look at it as a sum whose terms are powers of x, multiplied with the coefficients a_i. First we have the constant component, represented by a_0. Then, a linear component, represented by a line of slope a_1. Next, the square of x, multiplied with a_2. Last, we have a third-order component, with the coefficient a_3.

Adding these 4 components gives us our polynomial (in black). Let's draw a Cartesian grid for our polynomial. Imagine we choose (or we are given) some points on the x axis.


We can evaluate our polynomial at these points. This is Evaluation. Now imagine the reverse operation for our polynomial. What if we don't have its explicit form, so we can't evaluate it?

Instead, we only have its value at certain points. From these values, the polynomial can be reconstructed approximately, and the approximation improves as more values become available.

This is Interpolation. Why do we care about evaluation and interpolation? Because multiplying two polynomials by cross-multiplying their coefficients can take a lot of time.

But evaluating a polynomial should be simpler. So we evaluate our two polynomials at the same points. The product of the values at the same point is the value of the product polynomial at that point. Then, by interpolating these products, we obtain the coefficients of the product polynomial.

Consider the following two polynomials:

p(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3
q(x) = b_0 + b_1 x + b_2 x^2 + b_3 x^3

Their product is a polynomial of degree 6. The coefficients of the product polynomial can be computed from the following outer product:

c_k = sum of a_i * b_j over all pairs with i + j = k

This means computing the product of each pair of coefficients, and then adding the terms that contribute to the same power of x. However, evaluating polynomials takes up a lot of computation too. Same for multiplying the resulting values. We would like to share some of the computation.
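The schoolbook cross-multiplication described above can be sketched as (a minimal illustration):

```python
def multiply_naive(a, b):
    """Multiply two polynomials given as coefficient lists
    [a0, a1, ...]: c[k] = sum of a[i]*b[j] over all i + j = k.
    Cost is O(len(a) * len(b)) coefficient products."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# (1 + x) * (1 + x) = 1 + 2x + x^2
print(multiply_naive([1, 1], [1, 1]))  # [1, 2, 1]
```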

For example, evaluating at pairs of points of the form x and -x: the even powers of the two points coincide, and the odd powers differ only in sign. Thus, we can share some of the computation already performed for some number when evaluating at its negative. But we need even more such symmetries between points.

The key is choosing the points where we evaluate the polynomial. These points have to share some of the computation. A popular method is to use the primitive Nth root of unity and its powers. The advantage is that we can find as many roots as we need on the Unit Circle, to have enough points for the degree of the polynomial.
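The Nth roots of unity can be generated with Python's cmath module (a small sketch; the sign convention of the exponent varies between texts):

```python
import cmath

def roots_of_unity(n):
    """Return the n complex numbers w^k = e^(2*pi*i*k/n),
    k = 0..n-1, evenly spaced on the unit circle."""
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

# N = 4 gives 1, i, -1, -i: each root's negative is also a root
for w in roots_of_unity(4):
    print(round(w.real, 6), round(w.imag, 6))
```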

Look at the symmetry of these roots on the Unit Circle, shown for N = 1 through N = 8.

Evaluating a polynomial at the Nth roots of unity is known as

the Discrete Fourier Transform (DFT) and has many applications. To perform the DFT, we need a matrix called the DFT matrix. The evaluation comes down to multiplying this DFT matrix with the vector of coefficients of the polynomial. This DFT matrix is formed from the N powers of the primitive Nth root of unity w = e^(-2*pi*i/N), where w^N = 1. The DFT matrix is symmetric and contains the powers of w in ascending order along its rows and columns: entry (j, k) is w^(j*k). We can see the DFT matrix is a Vandermonde matrix of the Nth roots of unity.

For a polynomial with 8 coefficients, its size is 8x8 and it contains powers of w_8. The rows of the DFT matrix correspond to basic harmonic waveforms. They transform the seed vector (the vector of coefficients) into the spectral domain.

This computation is nothing but a matrix-vector product. Each element of the result is equal to the inner product of the corresponding row of the matrix with the seed vector.
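Building the DFT matrix and applying it as a plain matvec can be sketched as follows (pure Python, using the e^(-2*pi*i/N) convention; this is the O(N^2) computation that FFT will later speed up):

```python
import cmath

def dft_matrix(n):
    """Vandermonde matrix of the nth roots of unity:
    entry (j, k) is w^(j*k) with w = e^(-2*pi*i/n)."""
    w = cmath.exp(-2j * cmath.pi / n)
    return [[w ** (j * k) for k in range(n)] for j in range(n)]

def dft_matvec(coeffs):
    """Evaluate the polynomial at all nth roots of unity by
    multiplying the DFT matrix with the coefficient vector."""
    n = len(coeffs)
    m = dft_matrix(n)
    return [sum(m[j][k] * coeffs[k] for k in range(n)) for j in range(n)]

# The DFT of [1, 0, 0, 0] is constant: the polynomial p(x) = 1
# evaluates to 1 at every root of unity.
print(dft_matvec([1, 0, 0, 0]))
```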

So each element of the result involves 8 multiplications, adding the terms that come from those multiplications, and, first and foremost, computing the elements of the DFT matrix for every pair of elements from the matrix and the vector. The computational cost of this matvec is O(N^2), because we have to do this for each row. Which might take a while. We can speed up the matvec using some nice properties of the DFT. This is the FFT algorithm (the FAST Fourier Transform).

Obviously, we have w^0 = 1. Since w is the primitive Nth root of unity and N = 8, then w^8 = 1 and the powers of w are periodic.

For any integer k, we have w^(k+8) = w^k, so every entry of the matrix is one of the eight powers w^0, ..., w^7. At the same time, w^4 = e^(-pi*i) = -1.

Another property is w^(k+4) = -w^k, which can be confirmed from the elements already placed in the matrix. So when we compute w^2, from the first property we also get the entries equal to w^10, and from the second property those equal to w^6 = -w^2. Going further with this operation, when we compute w^3, we also get w^11 = w^3 and w^7 = -w^3. Finally, we have w^4 = -1, so w^5 = -w^1, and also w^12 = w^4.

After only 3-4 steps, we have filled the DFT matrix completely.

Fast Fourier Transform (FFT)

FFT is used to compute this matrix-vector product with a smaller number of computations.
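These symmetries are easy to verify numerically (a quick check, using w = e^(-2*pi*i/8)):

```python
import cmath

w = cmath.exp(-2j * cmath.pi / 8)  # primitive 8th root of unity

# Periodicity: w^(k+8) = w^k
assert abs(w ** 10 - w ** 2) < 1e-9
# Antisymmetry: w^(k+4) = -w^k, since w^4 = -1
assert abs(w ** 4 + 1) < 1e-9
assert abs(w ** 6 + w ** 2) < 1e-9
print("all symmetry checks passed")
```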

It is a recursive divide-and-conquer strategy. FFT uses the observation made previously that we can express any polynomial as the sum of a polynomial in its even-index coefficients and a polynomial in its odd-index coefficients. This splitting can be repeated until we reach linear polynomials that can be easily evaluated.

FFT procedure

procedure FFT(a, A, N, w)
    if N = 1 then
        A[0] = a[0]
    else
        for k = 0 to (N/2)-1
            even_a[k] = a[2k]
            odd_a[k] = a[2k+1]
        end
        FFT(even_a, even_A, N/2, w^2)
        FFT(odd_a, odd_A, N/2, w^2)
        for k = 0 to N-1
            A[k] = even_A[k mod (N/2)] + w^k * odd_A[k mod (N/2)]
        end
    end
end procedure
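The pseudocode above translates to Python almost line-for-line (a minimal sketch; N is assumed to be a power of two, and w is passed in as e^(-2*pi*i/N)):

```python
import cmath

def fft(a, w):
    """Recursive radix-2 FFT: transform the coefficient vector a
    into the vector A of the polynomial's values at the powers of w."""
    n = len(a)
    if n == 1:
        return [a[0]]                 # a constant is its own value
    even_A = fft(a[0::2], w * w)      # even-index coefficients
    odd_A = fft(a[1::2], w * w)       # odd-index coefficients
    A = [0] * n
    for k in range(n):
        A[k] = even_A[k % (n // 2)] + w ** k * odd_A[k % (n // 2)]
    return A

coeffs = [1, 2, 3, 4]
A = fft(coeffs, cmath.exp(-2j * cmath.pi / len(coeffs)))
print([complex(round(v.real, 6), round(v.imag, 6)) for v in A])
```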

FFT transforms the vector of coefficients a into the vector A.

It starts by splitting the given vector of coefficients into two subvectors. One contains the odd-order coefficients, and the other one contains those of even order.

Then, it proceeds in a recursive fashion to split these vectors again.

This recursion stops when we reach subvectors of length 1.

The actual computation is performed when the algorithm starts to exit the recursion.

At each step backward, the output coefficients are updated.

It evaluates the polynomials starting from the bottom of the recursion.

Let's follow the algorithm step-by-step on the DFT matrix-vector product.

We pass the vector of coefficients to FFT, which starts the recursion. First, it splits the 8 coefficients into 2 sets (odd and even). It follows the recursion down one step for the first set of coefficients.

FFT splits this vector too, and the recursion goes down one more step. At the third split (log2 8 = 3), FFT is passed a linear polynomial and returns.

FFT reached a polynomial of order 1, so it will evaluate it. The first coefficient of A gets updated with this value. Then, FFT evaluates the polynomial at the negative of the previous root, and the corresponding coefficient is updated with this value.

By computing these two values, FFT has already computed the pairs for the other 3 polynomials. We now exit the FFT for this polynomial (RED) and enter the branch of the recursion corresponding to the next polynomial. Again, we evaluate the two values and update the corresponding coefficients.

Looking at the corresponding columns, we can see that the other values are computed, but can be used only when the other polynomials are active, and when FFT evaluates at the right power of the primitive root of unity.

After exiting the recursion to the second level, we can update the output coefficients by interchanging the values computed already.

FFT exits the recursion to the higher level and works on the second half. FFT evaluates these basic polynomials too, and updates the coefficients. After evaluating the last linear polynomial, FFT has computed all the values it needs. From now on, the computation will rely on combining these values. Exiting the recursion, the coefficients are, again, updated at each step. Finally, FFT goes back to the upper level and combines the subpolynomials.

At this level, we can see the strength of FFT: it combines ever larger subpolynomials while reusing the values computed at the lower levels, so the total work is only O(N log N) instead of O(N^2).

With FFT, after three levels of recursion, we have computed the matvec product.

Multiplying the polynomials

In order to compute the product polynomial, we have to evaluate the two polynomials at enough points (2n-1, where n is the number of coefficients of each factor) and multiply the values element-wise.

These products correspond to the spectral coefficients of the product.

In order to obtain its explicit form, we have to interpolate these values. This is done with the inverse DFT matrix, whose entries are the element-wise reciprocals of the DFT matrix entries (the powers w^(-jk)), scaled by 1/N.

We can employ the same FFT algorithm to compute this fast.
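Putting it all together, polynomial multiplication via FFT can be sketched as follows (a minimal version; it reuses a recursive FFT, performs the inverse transform with the conjugate root and a 1/N scaling, and pads the inputs to a power-of-two length):

```python
import cmath

def fft(a, w):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return [a[0]]
    even_A = fft(a[0::2], w * w)
    odd_A = fft(a[1::2], w * w)
    return [even_A[k % (n // 2)] + w ** k * odd_A[k % (n // 2)]
            for k in range(n)]

def multiply_fft(a, b):
    """Multiply coefficient vectors a and b: evaluate both at the
    nth roots of unity, multiply pointwise, then interpolate with
    the inverse DFT (conjugate root, scaled by 1/n)."""
    n = 1
    while n < len(a) + len(b) - 1:    # room for all product coefficients
        n *= 2
    w = cmath.exp(-2j * cmath.pi / n)
    fa = fft(a + [0] * (n - len(a)), w)
    fb = fft(b + [0] * (n - len(b)), w)
    fc = [x * y for x, y in zip(fa, fb)]
    c = fft(fc, w.conjugate())        # inverse transform, up to 1/n
    return [round((v / n).real) for v in c[:len(a) + len(b) - 1]]

print(multiply_fft([1, 1], [1, 1]))     # (1+x)^2 -> [1, 2, 1]
print(multiply_fft([1, 2, 3], [4, 5]))  # -> [4, 13, 22, 15]
```

Rounding the real parts at the end recovers integer coefficients from the small floating-point error introduced by the complex arithmetic.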