Guruswami–Sudan list decoding algorithm


In coding theory, list decoding is an alternative to unique decoding of error-correcting codes in the presence of many errors. If a code has relative distance $\delta$, then it is in principle possible to recover an encoded message when up to a $\delta/2$ fraction of the codeword symbols are corrupted. But when the error rate is greater than $\delta/2$, this is in general not possible. List decoding overcomes that issue by allowing the decoder to output a short list of messages that might have been encoded, and it can correct more than a $\delta/2$ fraction of errors.

There are many polynomial-time algorithms for list decoding. In this article, we first present an algorithm for Reed–Solomon (RS) codes which corrects up to a $1 - \sqrt{2R}$ fraction of errors and is due to Madhu Sudan. Subsequently, we describe the improved Guruswami–Sudan list decoding algorithm, which can correct up to a $1 - \sqrt{R}$ fraction of errors.

Here is a plot of the trade-off between the rate $R$ and the correctable error fraction $\delta$ for the different algorithms.

[Figure: rate versus error-fraction trade-off for unique decoding, Sudan's algorithm, and the Guruswami–Sudan algorithm]

Algorithm 1 (Sudan's list decoding algorithm)

Problem statement

Input: A field $F$; $n$ distinct pairs of elements $(x_i, y_i)_{i=1}^{n}$ in $F \times F$; and integers $d$ and $t$.

Output: A list of all functions $f : F \to F$ satisfying:

$f(x)$ is a polynomial in $x$ of degree at most $d$;

$\#\{i \mid f(x_i) = y_i\} \geq t$. (1)

To better understand Sudan's algorithm, one may first want to know an earlier algorithm that can be considered the fundamental version of the algorithms for list decoding RS codes: the Berlekamp–Welch algorithm. Welch and Berlekamp initially came up with an algorithm that solves the problem in polynomial time with the best threshold $t \geq (n + d + 1)/2$. The mechanism of Sudan's algorithm is almost the same as that of the Berlekamp–Welch algorithm, except that in step 1 one computes a bivariate polynomial of bounded $(1, d)$-weighted degree. Sudan's list decoding algorithm for Reed–Solomon codes, an improvement on the Berlekamp–Welch algorithm, solves the problem for $t \geq \sqrt{2nd}$. The corresponding error fraction $1 - \sqrt{2R}$ beats the unique decoding bound $(1-R)/2$ for rates roughly $R < 0.17$.

Algorithm

Definition 1 (weighted degree)

For weights $w_x, w_y \in \mathbb{Z}^+$, the $(w_x, w_y)$-weighted degree of a monomial $q_{ij} x^i y^j$ is $i w_x + j w_y$. The $(w_x, w_y)$-weighted degree of a polynomial $Q(x,y) = \sum_{i,j} q_{ij} x^i y^j$ is the maximum, over the monomials with non-zero coefficients, of the $(w_x, w_y)$-weighted degree of the monomial.

For example, $x y^2$ has $(1,3)$-weighted degree $1 \cdot 1 + 2 \cdot 3 = 7$.
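In code, the weighted degree is a single maximum over the support of the polynomial. The sketch below illustrates Definition 1; the sparse-dict representation (exponent pairs $(i,j)$ mapped to coefficients) is our own choice, not part of the definition:

```python
# Weighted degree of a bivariate polynomial represented as a dict
# mapping (i, j) exponent pairs to non-zero coefficients.

def weighted_degree(poly, wx, wy):
    """Return the (wx, wy)-weighted degree: max of i*wx + j*wy over monomials."""
    return max(i * wx + j * wy for (i, j), c in poly.items() if c != 0)

# x*y^2 has (1, 3)-weighted degree 1*1 + 2*3 = 7, matching the example above.
q = {(1, 2): 1}
print(weighted_degree(q, 1, 3))  # 7
```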

Algorithm:

Inputs: $n, d, t$; $\{(x_1, y_1), \ldots, (x_n, y_n)\}$ /* Parameters $l, m$ to be set later. */

Step 1: Find a non-zero bivariate polynomial $Q : F^2 \to F$ of $(1,d)$-weighted degree at most $m + ld$ satisfying

$Q(x_i, y_i) = 0$ for all $i \in [n]$. (2)

Step 2: Factor $Q$ into irreducible factors.

Step 3: Output all polynomials $f$ such that $(y - f(x))$ is a factor of $Q$ and $f(x_i) = y_i$ for at least $t$ values of $i \in [n]$.
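The three steps can be run end to end on a toy instance. The sketch below works over GF(13) with $d = 1$, six points drawn from the two lines $y = x$ and $y = -x$, and hand-picked parameters $l = 2$, $m = 0$, $t = 3$; the factorization step is replaced by brute force over all candidate polynomials, which is feasible only for tiny fields, and all names and the instance are our own illustration, not part of the original algorithm description:

```python
# Toy run of Sudan's algorithm over GF(13): interpolation by Gaussian
# elimination mod p, then brute-force search for factors (y - f(x)).

p = 13                      # field size (prime)
d, l, m, t = 1, 2, 0, 3     # degree bound, parameters, agreement threshold

# Received pairs: three points on f(x) = x and three on f(x) = -x.
points = [(1, 1), (2, 2), (3, 3), (4, 9), (5, 8), (6, 7)]

# Step 1: interpolation.  Unknowns q_{kj} for j <= l, k <= m + (l-j)*d.
monomials = [(k, j) for j in range(l + 1) for k in range(m + (l - j) * d + 1)]
rows = [[pow(x, k, p) * pow(y, j, p) % p for (k, j) in monomials]
        for (x, y) in points]

def nullspace_vector(rows, ncols, p):
    """Gaussian elimination mod p; return one non-zero kernel vector.
    This instance is guaranteed a non-trivial kernel."""
    rows = [r[:] for r in rows]
    pivots = {}                       # pivot column -> pivot row
    rank = 0
    for col in range(ncols):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)          # Fermat inverse
        rows[rank] = [v * inv % p for v in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                f = rows[i][col]
                rows[i] = [(a - f * b) % p for a, b in zip(rows[i], rows[rank])]
        pivots[col] = rank
        rank += 1
    free = next(c for c in range(ncols) if c not in pivots)
    v = [0] * ncols
    v[free] = 1
    for col, r in pivots.items():
        v[col] = (-rows[r][free]) % p
    return v

coeffs = nullspace_vector(rows, len(monomials), p)
Q = dict(zip(monomials, coeffs))

# Steps 2-3, brute-force version: try every f(x) = a + b*x and keep those
# with (y - f(x)) | Q and agreement >= t.  Since deg Q(x, f(x)) <= m + l*d < p,
# it is identically zero iff it vanishes at every field element.
def agrees(f):
    return sum(1 for (x, y) in points if (f[0] + f[1] * x) % p == y)

output = []
for a in range(p):
    for b in range(p):
        vanishes = all(sum(c * pow(x, k, p) * pow((a + b * x) % p, j, p)
                           for (k, j), c in Q.items()) % p == 0
                       for x in range(p))
        if vanishes and agrees((a, b)) >= t:
            output.append((a, b))

print(sorted(output))   # the two lines y = x and y = -x: (0, 1) and (0, 12)
```

Here the interpolation finds (a scalar multiple of) $Q(x,y) = y^2 - x^2 = (y-x)(y+x)$, and both lines survive the agreement check.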

Analysis

One has to prove that the above algorithm runs in polynomial time and outputs the correct result. That can be done by proving the following claims.

Claim 1:

If a polynomial $Q : F^2 \to F$ satisfying (2) exists, then one can find it in polynomial time.

Proof:

Note that a bivariate polynomial $Q(x,y)$ of $(1,d)$-weighted degree at most $m + ld$ (with $\deg_y Q \leq l$) can be uniquely written as $Q(x,y) = \sum_{j=0}^{l} \sum_{k=0}^{m+(l-j)d} q_{kj} x^k y^j$. One then has to find coefficients $q_{kj}$ satisfying the constraints $\sum_{j=0}^{l} \sum_{k=0}^{m+(l-j)d} q_{kj} x_i^k y_i^j = 0$ for every $i \in [n]$. This is a system of linear equations in the unknowns $q_{kj}$, and one can find a solution using Gaussian elimination in polynomial time.

Claim 2:

If $(m+1)(l+1) + d\binom{l+1}{2} > n$, then there exists a polynomial $Q(x,y)$ satisfying (2).

Proof:

To ensure that a non-zero solution exists, the number of coefficients of $Q(x,y)$ must exceed the number of constraints. Assume the maximum degree $\deg_x(Q)$ of $x$ in $Q(x,y)$ is $m$ and the maximum degree $\deg_y(Q)$ of $y$ is $l$; then the $(1,d)$-weighted degree of $Q(x,y)$ is at most $m + ld$. Observe that the linear system is homogeneous: the all-zero assignment $q_{kj} = 0$ satisfies all linear constraints, but it does not satisfy (2), since $Q$ must be non-zero. The number of unknowns in the linear system is $(m+1)(l+1) + d\binom{l+1}{2}$. If this value is greater than $n$, there are more variables than constraints, so a non-zero solution exists.
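The count of unknowns used in this proof can be checked directly by summing the number of allowed powers of $x$ for each power of $y$ (a quick sanity check, not part of the algorithm):

```python
from math import comb

# Direct check of the coefficient count in Claim 2: the number of
# monomials x^k y^j with j <= l and k <= m + (l-j)*d equals
# (m+1)(l+1) + d*C(l+1, 2).

def num_coefficients(m, l, d):
    return sum(m + (l - j) * d + 1 for j in range(l + 1))

for m, l, d in [(0, 2, 1), (3, 4, 2), (5, 1, 7)]:
    assert num_coefficients(m, l, d) == (m + 1) * (l + 1) + d * comb(l + 1, 2)
print("count formula verified")
```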

Claim 3:

If $Q(x,y)$ is a polynomial satisfying (2), $f(x)$ is a polynomial satisfying (1), and $t > m + ld$, then $(y - f(x))$ divides $Q(x,y)$.

Proof:

Consider the polynomial $p(x) = Q(x, f(x))$. This is a polynomial in $x$, and we first argue that it has degree at most $m + ld$. Consider any monomial $q_{kj} x^k y^j$ of $Q(x,y)$. Since $Q$ has $(1,d)$-weighted degree at most $m + ld$, we have $k + jd \leq m + ld$. Thus the term $q_{kj} x^k f(x)^j$ is a polynomial in $x$ of degree at most $k + jd \leq m + ld$, and hence $p(x)$ has degree at most $m + ld$.

Next, we argue that $p(x)$ is identically zero. Since $Q(x_i, f(x_i))$ is zero whenever $y_i = f(x_i)$, and $f$ agrees with the data in at least $t > m + ld$ points, $p(x_i)$ is zero for strictly more than $m + ld$ values of $i$. Thus $p$ has more zeroes than its degree and hence is identically zero, implying $Q(x, f(x)) \equiv 0$.

Finding optimal values for $m$ and $l$: the constraints are $m + ld < t$ and $(m+1)(l+1) + d\binom{l+1}{2} > n$. For a given value of $l$, the smallest $m$ satisfying the second condition is (ignoring integrality)

$$m = \frac{n + 1 - d\binom{l+1}{2}}{l+1} - 1 = \frac{n+1}{l+1} - \frac{dl}{2} - 1.$$

Substituting this value into the first condition gives

$$t > m + ld = \frac{n+1}{l+1} + \frac{dl}{2} - 1.$$

Next, minimize the right-hand side over the parameter $l$ by taking its derivative and setting it to zero:

$$l = \sqrt{\frac{2(n+1)}{d}} - 1.$$

Substituting this value of $l$ back into $m$ and $t$:

$$m = \sqrt{\frac{(n+1)d}{2}} - \left(\sqrt{\frac{(n+1)d}{2}} - \frac{d}{2}\right) - 1 = \frac{d}{2} - 1,$$

$$t > \sqrt{2(n+1)d} - \frac{d}{2} - 1.$$
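The optimization above can be checked numerically: for sample values of $n$ and $d$, a brute-force search over $l$ (taking the smallest valid $m$ for each $l$) lands next to the closed-form threshold $\sqrt{2(n+1)d} - d/2$. Everything below is an illustrative sketch with hand-chosen inputs:

```python
from math import comb

# Brute-force the best (l, m, t) satisfying m + l*d < t and
# (m+1)(l+1) + d*C(l+1,2) > n, and compare with the closed form.

def sudan_parameters(n, d):
    best = None
    for l in range(1, n + 1):                     # search l instead of calculus
        # smallest m with (m+1)(l+1) + d*C(l+1,2) > n
        slack = n - d * comb(l + 1, 2)
        m = max(0, slack // (l + 1))              # gives (m+1)(l+1) > slack
        if (m + 1) * (l + 1) + d * comb(l + 1, 2) <= n:
            m += 1                                # safety, normally unused
        t = m + l * d + 1                         # smallest t with t > m + l*d
        if best is None or t < best[2]:
            best = (l, m, t)
    return best

n, d = 1000, 20
l, m, t = sudan_parameters(n, d)
print(l, m, t)
# closed form: sqrt(2*(n+1)*d) - d/2 = sqrt(40040) - 10, about 190.1
```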

Algorithm 2 (Guruswami–Sudan list decoding algorithm)

Definition

Consider an $(n, k)$ Reed–Solomon code over the finite field $\mathbb{F} = GF(q)$ with evaluation set $(\alpha_1, \alpha_2, \ldots, \alpha_n)$ and a positive integer $r$. The Guruswami–Sudan list decoder accepts a vector $\beta = (\beta_1, \beta_2, \ldots, \beta_n) \in \mathbb{F}^n$ as input and outputs a list of polynomials of degree $\leq k$, which are in one-to-one correspondence with codewords.

The idea is to impose more restrictions on the bivariate polynomial $Q(x,y)$: requiring a zero of multiplicity $r$ at each point increases the number of interpolation constraints, but also increases the number of roots contributed by each agreeing position.

Multiplicity

A bivariate polynomial $Q(x,y)$ has a zero of multiplicity $r$ at $(0,0)$ if $Q(x,y)$ has no monomial of total degree less than $r$ with a non-zero coefficient.

For example, let $Q(x,y) = y - 4x^2$.

[Figure: plot of the curve $y = 4x^2$]

The lowest-degree term of $Q$ is $y$, of total degree 1. Hence, $Q(x,y)$ has a zero of multiplicity 1 at $(0,0)$.

Let $Q(x,y) = y + 6x^2$.

[Figure: plot of the curve $y = -6x^2$]

Again the lowest-degree term is $y$, so $Q(x,y)$ has a zero of multiplicity 1 at $(0,0)$.

Let $Q(x,y) = (y - 4x^2)(y + 6x^2) = y^2 + 2x^2 y - 24x^4$.

[Figure: plot of the curves $y = 4x^2$ and $y = -6x^2$]

The lowest-degree term of $Q$ is $y^2$, of total degree 2. Hence, $Q(x,y)$ has a zero of multiplicity 2 at $(0,0)$.

Similarly, if $Q(x,y) = [(y - \beta) - 4(x - \alpha)^2][(y - \beta) + 6(x - \alpha)^2]$, then $Q(x,y)$ has a zero of multiplicity 2 at $(\alpha, \beta)$.
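The multiplicity at the origin is just the minimum total degree over the support, which makes the examples above easy to check in code (same sparse-dict representation as before, our own convention):

```python
# Multiplicity of a zero at the origin: the smallest total degree of a
# monomial with non-zero coefficient.  Polynomials are dicts mapping
# (i, j) exponent pairs to coefficients.

def multiplicity_at_origin(poly):
    return min(i + j for (i, j), c in poly.items() if c != 0)

# Q = y - 4x^2 : lowest-degree term is y, so multiplicity 1.
q1 = {(0, 1): 1, (2, 0): -4}
# Q = (y - 4x^2)(y + 6x^2) = y^2 + 2x^2*y - 24x^4 : multiplicity 2.
q2 = {(0, 2): 1, (2, 1): 2, (4, 0): -24}
print(multiplicity_at_origin(q1), multiplicity_at_origin(q2))  # 1 2
```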

General definition of multiplicity

$Q(x,y)$ has a zero of multiplicity $r$ at a general point $(\alpha, \beta)$ if the shifted polynomial $Q(x + \alpha, y + \beta)$ has a zero of multiplicity $r$ at $(0,0)$.

Algorithm

Let the transmitted codeword be $(f(\alpha_1), f(\alpha_2), \ldots, f(\alpha_n))$, let $(\alpha_1, \alpha_2, \ldots, \alpha_n)$ be the support set of the transmitted codeword, and let the received word be $(\beta_1, \beta_2, \ldots, \beta_n)$.

The algorithm is as follows:

Interpolation step

For a received vector $(\beta_1, \beta_2, \ldots, \beta_n)$, construct a non-zero bivariate polynomial $Q(x,y)$ with $(1,k)$-weighted degree at most $D$ such that $Q$ has a zero of multiplicity $r$ at each of the points $(\alpha_i, \beta_i)$, $1 \leq i \leq n$; in particular,

$$Q(\alpha_i, \beta_i) = 0.$$

Factorization step

Find all the factors of $Q(x,y)$ of the form $y - p(x)$ with $p(\alpha_i) = \beta_i$ for at least $t$ values of $i$, $1 \leq i \leq n$, where $p(x)$ is a polynomial of degree $\leq k$.

Recall that polynomials of degree $\leq k$ are in one-to-one correspondence with codewords. Hence, this step outputs the list of codewords.

Analysis

Interpolation step

Lemma: The multiplicity condition at each interpolation point imposes $\binom{r+1}{2}$ constraints on the coefficients $a_{i,j}$ of $Q(x,y)$.

Let $Q(x,y) = \sum_{i=0}^{m} \sum_{j=0}^{p} a_{i,j} x^i y^j$, where $\deg_x Q(x,y) = m$ and $\deg_y Q(x,y) = p$. Then

$$Q(x + \alpha, y + \beta) = \sum_{u,v} Q_{u,v}(\alpha, \beta)\, x^u y^v \qquad \text{(Equation 1)}$$

where

$$Q_{u,v}(x,y) = \sum_{i=u}^{m} \sum_{j=v}^{p} \binom{i}{u} \binom{j}{v} a_{i,j}\, x^{i-u} y^{j-v}.$$

Proof of Equation 1:

$$Q(x + \alpha, y + \beta) = \sum_{i,j} a_{i,j} (x + \alpha)^i (y + \beta)^j$$
$$= \sum_{i,j} a_{i,j} \left( \sum_u \binom{i}{u} x^u \alpha^{i-u} \right) \left( \sum_v \binom{j}{v} y^v \beta^{j-v} \right) \qquad \text{(binomial expansion)}$$
$$= \sum_{u,v} x^u y^v \left( \sum_{i,j} \binom{i}{u} \binom{j}{v} a_{i,j}\, \alpha^{i-u} \beta^{j-v} \right)$$
$$= \sum_{u,v} Q_{u,v}(\alpha, \beta)\, x^u y^v$$
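Equation 1 can be verified numerically: compute the shifted coefficients $Q_{u,v}(\alpha, \beta)$ from the binomial formula and compare against a direct evaluation of $Q(x + \alpha, y + \beta)$ on sample points. The polynomial instance below is arbitrary:

```python
from math import comb
from itertools import product

# Check Equation 1 on a sample polynomial {(i, j): a_ij}.

def Q_uv(poly, u, v, alpha, beta):
    """Coefficient of x^u y^v in Q(x + alpha, y + beta), via binomials."""
    return sum(comb(i, u) * comb(j, v) * a * alpha ** (i - u) * beta ** (j - v)
               for (i, j), a in poly.items() if i >= u and j >= v)

def evaluate(poly, x, y):
    return sum(a * x ** i * y ** j for (i, j), a in poly.items())

Q = {(0, 0): 3, (2, 1): -1, (1, 3): 5}
alpha, beta = 2, -3
for x, y in product(range(-2, 3), repeat=2):
    lhs = evaluate(Q, x + alpha, y + beta)
    rhs = sum(Q_uv(Q, u, v, alpha, beta) * x ** u * y ** v
              for u in range(4) for v in range(4))
    assert lhs == rhs
print("Equation 1 verified on sample points")
```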

Proof of Lemma:

The polynomial $Q(x,y)$ has a zero of multiplicity $r$ at $(\alpha, \beta)$ if

$$Q_{u,v}(\alpha, \beta) = 0 \quad \text{for all } u, v \geq 0 \text{ with } u + v \leq r - 1.$$

For each $v$ with $0 \leq v \leq r - 1$, $u$ can take $r - v$ values. Thus, the total number of constraints is

$$\sum_{v=0}^{r-1} (r - v) = \binom{r+1}{2}.$$

Hence $\binom{r+1}{2}$ selections can be made for $(u, v)$, and each selection imposes one constraint on the coefficients $a_{i,j}$.

Factorization step

Proposition:

$Q(x, p(x)) \equiv 0$ if $y - p(x)$ is a factor of $Q(x,y)$.

Proof:

Since $y - p(x)$ is a factor of $Q(x,y)$, $Q(x,y)$ can be written as

$$Q(x,y) = L(x,y)\,(y - p(x)) + R(x),$$

where $L(x,y)$ is the quotient obtained when $Q(x,y)$ is divided by $y - p(x)$, and $R(x)$ is the remainder, a polynomial in $x$ alone since the divisor has degree 1 in $y$. Because $y - p(x)$ divides $Q(x,y)$, the remainder $R(x)$ is identically zero. Substituting $p(x)$ for $y$ then gives $Q(x, p(x)) \equiv 0$.

Theorem:

If $p(\alpha) = \beta$, then $(x - \alpha)^r$ is a factor of $Q(x, p(x))$.

Proof:

Shifting Equation 1 to the point $(\alpha, \beta)$,

$$Q(x,y) = \sum_{u,v} Q_{u,v}(\alpha, \beta)\,(x - \alpha)^u (y - \beta)^v,$$

and substituting $y = p(x)$,

$$Q(x, p(x)) = \sum_{u,v} Q_{u,v}(\alpha, \beta)\,(x - \alpha)^u (p(x) - \beta)^v.$$

Since $Q$ has a zero of multiplicity $r$ at $(\alpha, \beta)$, we have $Q_{u,v}(\alpha, \beta) = 0$ whenever $u + v \leq r - 1$, so every surviving term has $u + v \geq r$. Given $p(\alpha) = \beta$,

$$(p(x) - \beta) \bmod (x - \alpha) = 0,$$

hence

$$(x - \alpha)^u (p(x) - \beta)^v \bmod (x - \alpha)^{u+v} = 0.$$

Thus, $(x - \alpha)^r$ is a factor of $Q(x, p(x))$.
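The theorem can be checked on the earlier multiplicity-2 example: $Q(x,y) = [(y-\beta) - 4(x-\alpha)^2][(y-\beta) + 6(x-\alpha)^2]$ with $(\alpha, \beta) = (1, 2)$, together with $p(x) = x + 1$ (so $p(1) = 2$), should give $(x-1)^2 \mid Q(x, p(x))$. The coefficient-list representation and helper names below are our own:

```python
# Verify that (x - 1)^2 divides Q(x, p(x)) for the r = 2 example.
# Polynomials are coefficient lists, index = power of x.

def pmul(a, b):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def padd(a, b):
    """Add two polynomials given as coefficient lists."""
    out = [0] * max(len(a), len(b))
    for i, v in enumerate(a): out[i] += v
    for i, v in enumerate(b): out[i] += v
    return out

def divmod_linear(poly, a):
    """Synthetic division of poly by (x - a); returns (quotient, remainder)."""
    q = [0] * (len(poly) - 1)
    carry = 0
    for i in range(len(poly) - 1, 0, -1):
        carry = poly[i] + a * carry
        q[i - 1] = carry
    return q, poly[0] + a * carry

shift2 = [1, -2, 1]                         # (x - 1)^2
pm_beta = [-1, 1]                           # p(x) - beta = (x + 1) - 2 = x - 1
# Q(x, p(x)) = (p(x) - 2 - 4(x-1)^2) * (p(x) - 2 + 6(x-1)^2)
g = pmul(padd(pm_beta, pmul([-4], shift2)),
         padd(pm_beta, pmul([6], shift2)))

q1, r1 = divmod_linear(g, 1)
q2, r2 = divmod_linear(q1, 1)
print(r1, r2)   # both remainders 0, so (x - 1)^2 divides Q(x, p(x))
```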

As proved above, each of the (at least) $t$ positions where $p$ agrees with the received word contributes a factor $(x - \alpha_i)^r$ to $Q(x, p(x))$, while $Q(x, p(x))$ has degree at most $D$, the bound on the weighted degree of $Q$. Hence, if

$$t \cdot r > D, \quad \text{i.e.} \quad t > \frac{D}{r},$$

then $Q(x, p(x))$ has more roots (counted with multiplicity) than its degree, so $Q(x, p(x)) \equiv 0$ and $y - p(x)$ divides $Q(x,y)$.

For the interpolation step, a non-zero $Q$ exists whenever the number of its coefficients exceeds the number of constraints:

$$\frac{D(D+2)}{2(k-1)} > n \binom{r+1}{2},$$

where the left-hand side is a lower bound on the number of coefficients of a polynomial of weighted degree at most $D$, and the right-hand side counts the constraints given by the Lemma proved earlier. This is satisfied by choosing

$$D = \sqrt{knr(r+1)},$$

which gives

$$t > \frac{D}{r} = \sqrt{kn\left(1 + \frac{1}{r}\right)}.$$

Substituting $r = 2kn$,

$$t > \sqrt{kn + \tfrac{1}{2}} > \sqrt{kn}.$$

Hence it is proved that the Guruswami–Sudan list decoding algorithm needs agreement only slightly above $\sqrt{kn}$, i.e. it can list decode Reed–Solomon codes up to a $1 - \sqrt{R}$ fraction of errors.
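The effect of the multiplicity parameter can also be seen numerically: with agreement requirement $t > \sqrt{kn(1 + 1/r)}$ (the $+$ sign coming from the $n\binom{r+1}{2} = nr(r+1)/2$ constraint count), the correctable error fraction $1 - t/n$ climbs toward $1 - \sqrt{R}$ as $r$ grows. The code parameters below are an arbitrary example:

```python
from math import floor, sqrt

# How the decoding radius improves with the multiplicity parameter r
# for an RS code with n = 256, k = 64 (rate R = 0.25, 1 - sqrt(R) = 0.5).

n, k = 256, 64
for r in (1, 2, 5, 20, 100):
    t = floor(sqrt(k * n * (1 + 1 / r))) + 1   # smallest integer agreement
    print(f"r={r:3d}  t={t:3d}  error fraction={(n - t) / n:.3f}")
# The error fractions climb toward 1 - sqrt(0.25) = 0.5
```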
