Bounds for parameters of codes¶
This module provides some upper and lower bounds for the parameters of codes.
AUTHORS:
David Joyner (2006-07): initial implementation.
William Stein (2006-07): minor editing of docs and code (fixed bug in elias_bound_asymp)
David Joyner (2006-07): fixed dimension_upper_bound to return an integer, added example to elias_bound_asymp.
David Joyner (2009-05): removed all calls to Guava but left it as an option.
Dima Pasechnik (2012-10): added LP bounds.
Let \(F\) be a finite set of size \(q\). A subset \(C\) of \(V=F^n\) is called a code of length \(n\). Often one considers the case where \(F\) is a finite field, denoted by \(\GF{q}\). Then \(V\) is an \(F\)-vector space. A subspace of \(V\) (with the standard basis) is called a linear code of length \(n\). If its dimension is denoted \(k\) then we typically store a basis of \(C\) as a \(k\times n\) matrix (the rows are the basis vectors). If \(F=\GF{2}\) then \(C\) is called a binary code. If \(F\) has \(q\) elements then \(C\) is called a \(q\)-ary code. The elements of a code \(C\) are called codewords. The information rate of \(C\) is
\[R = \frac{\log_q\vert C\vert}{n},\]
where \(\vert C\vert\) denotes the number of elements of \(C\). If \({\bf v}=(v_1,v_2,...,v_n)\), \({\bf w}=(w_1,w_2,...,w_n)\) are elements of \(V=F^n\) then we define
\[d({\bf v},{\bf w}) = \vert\{i \mid 1\leq i\leq n,\ v_i\not= w_i\}\vert\]
to be the Hamming distance between \({\bf v}\) and \({\bf w}\). The function \(d:V\times V\rightarrow \Bold{N}\) is called the Hamming metric. The weight of an element (in the Hamming metric) is \(d({\bf v},{\bf 0})\), where \({\bf 0}\) is a distinguished element of \(V\); in particular it is the zero vector if \(F\) is a field. The minimum distance of a linear code is the smallest nonzero weight of a codeword in \(C\). The relative minimum distance is denoted
\[\delta = \frac{d}{n},\]
where \(d\) is the minimum distance of \(C\).
A linear code with length \(n\), dimension \(k\), and minimum distance \(d\) is called an \([n,k,d]_q\)-code and \(n,k,d\) are called its parameters. A (not necessarily linear) code \(C\) with length \(n\), size \(M=|C|\), and minimum distance \(d\) is called an \((n,M,d)_q\)-code (using parentheses instead of square brackets). Of course, \(k=\log_q(M)\) for linear codes.
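For concreteness, here is a small plain-Python illustration of the definitions above (a hypothetical toy example, not part of this module), using the binary even-weight code of length 3, which is a \([3,2,2]_2\) code:

from itertools import combinations

def hamming_distance(v, w):
    # number of coordinates in which v and w differ
    return sum(vi != wi for vi, wi in zip(v, w))

# the binary even-weight code of length 3: 4 codewords, dimension k = 2
C = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
d = min(hamming_distance(v, w) for v, w in combinations(C, 2))
print(len(C), d)   # 4 2  -> a (3, 4, 2)_2 code, i.e. a [3, 2, 2]_2 linear code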
What is the “best” code of a given length? Let \(A_q(n,d)\) denote the largest \(M\) such that there exists an \((n,M,d)\) code in \(F^n\). Let \(B_q(n,d)\) (also denoted \(A^{lin}_q(n,d)\)) denote the largest \(k\) such that there exists an \([n,k,d]\) code in \(F^n\). (Of course, \(A_q(n,d)\geq q^{B_q(n,d)}\).) Determining \(A_q(n,d)\) and \(B_q(n,d)\) is one of the main problems in the theory of error-correcting codes. For more details see [HP2003] and [Lin1999].
These quantities are related to a generalization of the childhood game of “20 questions”.
GAME: Player 1 secretly chooses a number from \(1\) to \(M\) (\(M\) is large but fixed). Player 2 asks a series of “yes/no questions” in an attempt to determine that number. Player 1 may lie at most \(e\) times (\(e\geq 0\) is fixed). What is the minimum number of “yes/no questions” Player 2 must ask to (always) be able to correctly determine the number Player 1 chose?
If feedback is not allowed (the only situation considered here), call this minimum number \(g(M,e)\).
Lemma: For fixed \(e\) and \(M\), \(g(M,e)\) is the smallest \(n\) such that \(A_2(n,2e+1)\geq M\).
Thus, solving this generalization of the game of “20 questions” is equivalent to determining \(A_2(n,d)\)! Using Sage, you can determine the best known estimates for this number in two ways:
- Indirectly, using best_known_linear_code_www(n, k, F), which connects to the website http://www.codetables.de by Markus Grassl;
- using codesize_upper_bound(n,d,q), dimension_upper_bound(n,d,q), and best_known_linear_code(n, k, F).
The output of best_known_linear_code(), best_known_linear_code_www(), or dimension_upper_bound() would give only special solutions to the GAME because those bounds apply only to linear codes. The output of codesize_upper_bound() would give the best possible solution, which may belong to a linear or nonlinear code.
This module implements:

- codesize_upper_bound(n,d,q), for the best known (as of May, 2006) upper bound \(A(n,d)\) for the size of a code of length \(n\), minimum distance \(d\) over a field of size \(q\).
- dimension_upper_bound(n,d,q), an upper bound \(B(n,d)=B_q(n,d)\) for the dimension of a linear code of length \(n\), minimum distance \(d\) over a field of size \(q\).
- gilbert_lower_bound(n,q,d), a lower bound for the number of elements in the largest code of minimum distance \(d\) in \(\GF{q}^n\).
- gv_info_rate(n,delta,q), \(\log_q(\mathrm{GLB})/n\), where GLB is the Gilbert lower bound and \(\delta = d/n\).
- gv_bound_asymp(delta,q), asymptotic analog of the Gilbert lower bound.
- plotkin_upper_bound(n,q,d)
- plotkin_bound_asymp(delta,q), asymptotic analog of the Plotkin bound.
- griesmer_upper_bound(n,q,d)
- elias_upper_bound(n,q,d)
- elias_bound_asymp(delta,q), asymptotic analog of the Elias bound.
- hamming_upper_bound(n,q,d)
- hamming_bound_asymp(delta,q), asymptotic analog of the Hamming bound.
- singleton_upper_bound(n,q,d)
- singleton_bound_asymp(delta,q), asymptotic analog of the Singleton bound.
- mrrw1_bound_asymp(delta,q), the “first” asymptotic McEliece-Rodemich-Rumsey-Welch bound for the information rate.
- Delsarte (a.k.a. Linear Programming (LP)) upper bounds.
PROBLEM: In this module we shall typically either (a) seek bounds on \(k\), given \(n\), \(d\), \(q\), or (b) seek bounds on \(R\) in terms of \(\delta\) and \(q\) (with \(n\) tending to infinity).
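As a hedged plain-Python sketch of setting (b), one can evaluate the classical asymptotic expressions \(R \leq 1-\delta\) (Singleton) and \(R \geq 1-H_q(\delta)\) (Gilbert-Varshamov) at \(\delta=1/4\), \(q=2\); the helper entropy_q below is only an illustration, not this module's entropy(), but the numbers agree with the examples further down:

from math import log

def entropy_q(x, q=2):
    # q-ary entropy H_q(x), with the convention 0*log(0) = 0
    result = 0.0
    if x > 0:
        result += x * log(q - 1, q) - x * log(x, q)
    if x < 1:
        result -= (1 - x) * log(1 - x, q)
    return result

delta, q = 1/4, 2
print(1 - delta)                  # 0.75                 (asymptotic Singleton bound)
print(1 - entropy_q(delta, q))    # 0.18872187554086...  (asymptotic Gilbert-Varshamov bound)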
Todo
Johnson bounds for binary codes.
mrrw2_bound_asymp(delta,q), the “second” asymptotic McEliece-Rodemich-Rumsey-Welch bound for the information rate.
- sage.coding.code_bounds.codesize_upper_bound(n, d, q, algorithm=None)[source]¶
Return an upper bound on the number of codewords in a (possibly non-linear) code.
This function computes the minimum value of the upper bounds of Singleton, Hamming, Plotkin, and Elias.
If algorithm="gap", then this returns the best known upper bound \(A(n,d)=A_q(n,d)\) for the size of a code of length \(n\), minimum distance \(d\) over a field of size \(q\). The function first checks for trivial cases (like \(d=1\) or \(n=d\)) and whether the value is in the built-in table. Then it calculates the minimum value of the upper bounds obtained from the Singleton, Hamming, Johnson, Plotkin and Elias algorithms. If the code is binary, \(A(n, 2\ell-1) = A(n+1,2\ell)\), so the function takes the minimum of the values obtained from all algorithms for the parameters \((n, 2\ell-1)\) and \((n+1, 2\ell)\). This wraps GUAVA’s (i.e. GAP’s package Guava) UpperBound(n, d, q).

If algorithm="LP", then this returns the Delsarte (a.k.a. Linear Programming) upper bound.

EXAMPLES:
sage: codes.bounds.codesize_upper_bound(10, 3, 2)
93
sage: codes.bounds.codesize_upper_bound(24, 8, 2, algorithm='LP')   # needs sage.numerical.mip
4096
sage: codes.bounds.codesize_upper_bound(10, 3, 2, algorithm='gap')  # optional - gap_package_guava
85
sage: codes.bounds.codesize_upper_bound(11, 3, 4, algorithm=None)   # needs sage.symbolic
123361
sage: codes.bounds.codesize_upper_bound(11, 3, 4, algorithm='gap')  # optional - gap_package_guava
123361
sage: codes.bounds.codesize_upper_bound(11, 3, 4, algorithm='LP')   # needs sage.numerical.mip
109226
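As a partial cross-check (a sketch only: it uses just the Singleton and sphere-packing terms, whereas codesize_upper_bound() also takes the Plotkin and Elias bounds into account; the helper names are illustrative), plain Python already reproduces the value 93 for \((n,d,q)=(10,3,2)\):

from math import comb

def volume(n, q, r):
    # number of words in a Hamming ball of radius r in GF(q)^n
    return sum(comb(n, i) * (q - 1)**i for i in range(r + 1))

def singleton_bound(n, d, q):
    # Singleton bound: M <= q^(n-d+1)
    return q**(n - d + 1)

def hamming_bound(n, d, q):
    # sphere-packing bound: M <= q^n / V(n, floor((d-1)/2))
    return q**n // volume(n, q, (d - 1) // 2)

n, d, q = 10, 3, 2
print(min(singleton_bound(n, d, q), hamming_bound(n, d, q)))   # 93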
- sage.coding.code_bounds.dimension_upper_bound(n, d, q, algorithm=None)[source]¶
Return an upper bound for the dimension of a linear code.
Return an upper bound \(B(n,d) = B_q(n,d)\) for the dimension of a linear code of length \(n\), minimum distance \(d\) over a field of size \(q\).
The parameter algorithm has the same meaning as in codesize_upper_bound().
EXAMPLES:
sage: codes.bounds.dimension_upper_bound(10,3,2)                 # needs sage.libs.pari sage.symbolic
6
sage: codes.bounds.dimension_upper_bound(30,15,4)                # needs sage.libs.pari sage.symbolic
13
sage: codes.bounds.dimension_upper_bound(30,15,4,algorithm='LP') # needs sage.libs.pari sage.numerical.mip
12
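Since a linear \([n,k,d]_q\) code is in particular an \((n,q^k,d)_q\) code, one always has \(B_q(n,d) \leq \lfloor \log_q A_q(n,d) \rfloor\). A minimal plain-Python sanity check of the first example above, reusing the value \(A_2(10,3)\leq 93\) returned by codesize_upper_bound():

from math import floor, log

A_upper = 93                     # codesize_upper_bound(10, 3, 2), from the example above
print(floor(log(A_upper, 2)))    # 6, matching dimension_upper_bound(10, 3, 2)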
- sage.coding.code_bounds.elias_bound_asymp(delta, q)[source]¶
The asymptotic Elias bound for the information rate.
This only makes sense when \(0 < \delta < 1-1/q\).
EXAMPLES:
sage: codes.bounds.elias_bound_asymp(1/4,2)     # needs sage.symbolic
0.39912396330...
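A hedged plain-Python sketch of the classical asymptotic Elias (Bassalygo-Elias) expression \(R \leq 1 - H_q\bigl(r - \sqrt{r(r-\delta)}\bigr)\) with \(r = 1-1/q\); at \(\delta=1/4\), \(q=2\) it reproduces the value shown above (the helper entropy_q is illustrative):

from math import log, sqrt

def entropy_q(x, q=2):
    # q-ary entropy H_q(x), with the convention 0*log(0) = 0
    result = 0.0
    if x > 0:
        result += x * log(q - 1, q) - x * log(x, q)
    if x < 1:
        result -= (1 - x) * log(1 - x, q)
    return result

delta, q = 1/4, 2
r = 1 - 1/q
print(1 - entropy_q(r - sqrt(r * (r - delta)), q))   # 0.39912396330...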
- sage.coding.code_bounds.elias_upper_bound(n, q, d, algorithm=None)[source]¶
Return the Elias upper bound.
Return the Elias upper bound for the number of elements in the largest code of minimum distance \(d\) in \(\GF{q}^n\); cf. [HP2003]. If algorithm="gap", it wraps GAP’s UpperBoundElias.

EXAMPLES:
sage: codes.bounds.elias_upper_bound(10,2,3)
232
sage: codes.bounds.elias_upper_bound(10,2,3,algorithm='gap')    # optional - gap_package_guava
232
- sage.coding.code_bounds.entropy(x, q=2)[source]¶
Compute the entropy at \(x\) on the \(q\)-ary symmetric channel.
INPUT:
- x – real number in the interval \([0, 1]\)
- q – (default: 2) integer greater than 1; this is the base of the logarithm
EXAMPLES:
sage: codes.bounds.entropy(0, 2)
0
sage: codes.bounds.entropy(1/5,4).factor()      # needs sage.symbolic
1/10*(log(3) - 4*log(4/5) - log(1/5))/log(2)
sage: codes.bounds.entropy(1, 3)                # needs sage.symbolic
log(2)/log(3)
Check that values not within the limits are properly handled:
sage: codes.bounds.entropy(1.1, 2)
Traceback (most recent call last):
...
ValueError: The entropy function is defined only for x in the interval [0, 1]
sage: codes.bounds.entropy(1, 1)
Traceback (most recent call last):
...
ValueError: The value q must be an integer greater than 1
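For reference, the \(q\)-ary entropy function used here is \(H_q(x) = x\log_q(q-1) - x\log_q(x) - (1-x)\log_q(1-x)\), with the convention \(0\log 0 = 0\); this is consistent with the factored example above. A minimal plain-Python sketch (the helper name is illustrative):

from math import log

def entropy_q(x, q=2):
    # q-ary entropy H_q(x) on [0, 1], with the convention 0*log(0) = 0
    result = 0.0
    if x > 0:
        result += x * log(q - 1, q) - x * log(x, q)
    if x < 1:
        result -= (1 - x) * log(1 - x, q)
    return result

print(entropy_q(0, 2))      # 0.0
print(entropy_q(1, 3))      # 0.6309297535714574  (= log(2)/log(3))
print(entropy_q(0.2, 4))    # 0.51946...          (= entropy(1/5, 4) above, numerically)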
- sage.coding.code_bounds.entropy_inverse(x, q=2)[source]¶
Find the inverse of the \(q\)-ary entropy function at the point x.

INPUT:

- x – real number in the interval \([0, 1]\)
- q – (default: 2) integer greater than 1; this is the base of the logarithm
OUTPUT:
Real number in the interval \([0, 1-1/q]\). The entropy function is not injective on the whole of \([0, 1]\); hence only the value lying in the above interval is returned.
EXAMPLES:
sage: # needs sage.symbolic
sage: from sage.coding.code_bounds import entropy_inverse
sage: entropy_inverse(0.1)      # needs scipy
0.012986862055...
sage: entropy_inverse(1)
1/2
sage: entropy_inverse(0, 3)
0
sage: entropy_inverse(1, 3)
2/3
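Since \(H_q\) is continuous and strictly increasing on \([0, 1-1/q]\), rising from \(0\) to \(1\), the inverse can be approximated by bisection. A minimal plain-Python sketch under that assumption (an illustration only; it is not this module's implementation):

from math import log

def entropy_q(x, q=2):
    # q-ary entropy H_q(x), with the convention 0*log(0) = 0
    result = 0.0
    if x > 0:
        result += x * log(q - 1, q) - x * log(x, q)
    if x < 1:
        result -= (1 - x) * log(1 - x, q)
    return result

def entropy_inverse_approx(y, q=2, iterations=60):
    # bisection on [0, 1 - 1/q], where H_q increases from 0 to 1
    lo, hi = 0.0, 1.0 - 1.0 / q
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if entropy_q(mid, q) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(entropy_inverse_approx(1))        # 0.5             (cf. entropy_inverse(1) == 1/2)
print(entropy_inverse_approx(1, 3))     # 0.666666...     (cf. entropy_inverse(1, 3) == 2/3)
print(entropy_inverse_approx(0.1))      # 0.012986862...  (cf. entropy_inverse(0.1) above)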
- sage.coding.code_bounds.gilbert_lower_bound(n, q, d)[source]¶
Return the Gilbert-Varshamov lower bound.
Return the Gilbert-Varshamov lower bound for the number of elements in a largest code of minimum distance \(d\) in \(\GF{q}^n\). See the Wikipedia article Gilbert-Varshamov_bound.
EXAMPLES:
sage: codes.bounds.gilbert_lower_bound(10,2,3)
128/7
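The Gilbert-Varshamov count here is the sphere-covering expression \(q^n / V(n, d-1)\), where \(V(n,r)\) is the number of words in a Hamming ball of radius \(r\). A hedged plain-Python check of the example above, using exact rationals (the helper volume is illustrative, not the module's volume_hamming()):

from fractions import Fraction
from math import comb

def volume(n, q, r):
    # number of words in a Hamming ball of radius r in GF(q)^n
    return sum(comb(n, i) * (q - 1)**i for i in range(r + 1))

n, q, d = 10, 2, 3
print(Fraction(q**n, volume(n, q, d - 1)))   # 128/7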
- sage.coding.code_bounds.griesmer_upper_bound(n, q, d, algorithm=None)[source]¶
Return the Griesmer upper bound.
Return the Griesmer upper bound for the number of elements in a largest linear code of minimum distance \(d\) in \(\GF{q}^n\); cf. [HP2003]. If algorithm="gap", it wraps GAP’s UpperBoundGriesmer.

The bound states that an \([n,k,d]_q\) code must satisfy
\[n\geq \sum_{i=0}^{k-1} \lceil d/q^i \rceil.\]
EXAMPLES:
The bound is reached for the ternary Golay codes:
sage: codes.bounds.griesmer_upper_bound(12,3,6)     # needs sage.libs.pari
729
sage: codes.bounds.griesmer_upper_bound(11,3,5)     # needs sage.libs.pari
729
sage: codes.bounds.griesmer_upper_bound(10,2,3)     # needs sage.libs.pari
128
sage: codes.bounds.griesmer_upper_bound(10,2,3,algorithm='gap')     # optional - gap_package_guava, needs sage.libs.pari
128
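A hedged plain-Python sketch of the underlying search: take the largest \(k\) for which \(\sum_{i=0}^{k-1}\lceil d/q^i\rceil \leq n\) and report \(q^k\). (That the returned number is \(q^k\) rather than \(k\) itself is an assumption, but it reproduces both examples above.)

def griesmer_size_bound(n, q, d):
    # largest k with sum_{i=0}^{k-1} ceil(d / q^i) <= n, reported as q^k
    k, total = 0, 0
    while total + -(-d // q**k) <= n:    # -(-a // b) is ceil(a / b) for positive integers
        total += -(-d // q**k)
        k += 1
    return q**k

print(griesmer_size_bound(12, 3, 6))   # 729
print(griesmer_size_bound(10, 2, 3))   # 128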
- sage.coding.code_bounds.gv_bound_asymp(delta, q)[source]¶
The asymptotic Gilbert-Varshamov bound for the information rate, R.
EXAMPLES:
sage: # needs sage.symbolic
sage: RDF(codes.bounds.gv_bound_asymp(1/4,2))       # needs sage.libs.pari
0.18872187554086...
sage: f = lambda x: codes.bounds.gv_bound_asymp(x,2)
sage: plot(f,0,1)                                   # needs sage.libs.pari sage.plot
Graphics object consisting of 1 graphics primitive
- sage.coding.code_bounds.gv_info_rate(n, delta, q)[source]¶
The Gilbert-Varshamov lower bound for information rate.
The Gilbert-Varshamov lower bound for information rate of a \(q\)-ary code of length \(n\) and minimum distance \(n\delta\).
EXAMPLES:
sage: RDF(codes.bounds.gv_info_rate(100,1/4,3))     # abs tol 1e-15      # needs sage.libs.pari sage.symbolic
0.36704992608261894
- sage.coding.code_bounds.hamming_bound_asymp(delta, q)[source]¶
The asymptotic Hamming bound for the information rate.
EXAMPLES:
sage: # needs sage.symbolic
sage: RDF(codes.bounds.hamming_bound_asymp(1/4,2))  # needs sage.libs.pari
0.456435556800...
sage: f = lambda x: codes.bounds.hamming_bound_asymp(x,2)
sage: plot(f,0,1)                                   # needs sage.libs.pari sage.plot
Graphics object consisting of 1 graphics primitive
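The asymptotic Hamming (sphere-packing) bound is \(R \leq 1 - H_q(\delta/2)\). A hedged plain-Python check of the value above (the helper entropy_q is illustrative):

from math import log

def entropy_q(x, q=2):
    # q-ary entropy H_q(x), with the convention 0*log(0) = 0
    result = 0.0
    if x > 0:
        result += x * log(q - 1, q) - x * log(x, q)
    if x < 1:
        result -= (1 - x) * log(1 - x, q)
    return result

delta, q = 1/4, 2
print(1 - entropy_q(delta / 2, q))   # 0.456435556800...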
- sage.coding.code_bounds.hamming_upper_bound(n, q, d)[source]¶
Return the Hamming upper bound.
Return the Hamming upper bound for the number of elements in the largest code of length \(n\) and minimum distance \(d\) over an alphabet of size \(q\).

The Hamming bound (also known as the sphere packing bound) gives an upper bound on the size of a code of length \(n\), minimum distance \(d\), over an alphabet of size \(q\). It is obtained by dividing the size \(q^n\) of the entire Hamming space by the size of a ball of radius \(e=\lfloor (d-1)/2 \rfloor\): since the balls of radius \(e\) centred at distinct codewords are disjoint, together they can never contain more than the whole space. Hence
\[M \leq \frac{q^n}{V(n,e)},\]
where \(M\) is the maximum number of codewords and \(V(n,e)\) is the number of words in a ball of radius \(e\). This bound is useful for small values of \(d\). Codes for which equality holds are called perfect. See e.g. [HP2003].
EXAMPLES:
sage: codes.bounds.hamming_upper_bound(10,2,3)
93
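A hedged plain-Python version of the sphere-packing computation \(\lfloor q^n / V(n,e)\rfloor\) with \(e=\lfloor (d-1)/2\rfloor\), reproducing the example above (the helper is illustrative, not the module's volume_hamming()):

from math import comb

def volume(n, q, r):
    # number of words in a Hamming ball of radius r in GF(q)^n
    return sum(comb(n, i) * (q - 1)**i for i in range(r + 1))

n, q, d = 10, 2, 3
e = (d - 1) // 2
print(q**n // volume(n, q, e))   # 93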
- sage.coding.code_bounds.mrrw1_bound_asymp(delta, q)[source]¶
The first asymptotic McEliece-Rodemich-Rumsey-Welch (MRRW) bound for the information rate.
This only makes sense when \(0 < \delta < 1-1/q\).
EXAMPLES:
sage: codes.bounds.mrrw1_bound_asymp(1/4,2)     # abs tol 4e-16      # needs sage.symbolic
0.3545789026652697
- sage.coding.code_bounds.plotkin_bound_asymp(delta, q)[source]¶
The asymptotic Plotkin bound for the information rate.
This only makes sense when \(0 < \delta < 1-1/q\).
EXAMPLES:
sage: codes.bounds.plotkin_bound_asymp(1/4,2)
1/2
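Asymptotically, the Plotkin bound reads \(R \leq 1 - \delta/(1-1/q)\) for \(0 < \delta < 1-1/q\). A quick plain-Python check of the example above, using exact rationals:

from fractions import Fraction

delta, q = Fraction(1, 4), 2
r = 1 - Fraction(1, q)
print(1 - delta / r)   # 1/2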
- sage.coding.code_bounds.plotkin_upper_bound(n, q, d, algorithm=None)[source]¶
Return the Plotkin upper bound.
Return the Plotkin upper bound for the number of elements in a largest code of minimum distance \(d\) in \(\GF{q}^n\). More precisely, this is a generalization of Plotkin’s result for \(q=2\) to larger \(q\), due to Berlekamp.

The algorithm="gap" option wraps Guava’s UpperBoundPlotkin.

EXAMPLES:
sage: codes.bounds.plotkin_upper_bound(10,2,3)
192
sage: codes.bounds.plotkin_upper_bound(10,2,3,algorithm='gap')      # optional - gap_package_guava
192
- sage.coding.code_bounds.singleton_bound_asymp(delta, q)[source]¶
The asymptotic Singleton bound for the information rate.
EXAMPLES:
sage: codes.bounds.singleton_bound_asymp(1/4,2)
3/4
sage: f = lambda x: codes.bounds.singleton_bound_asymp(x,2)
sage: plot(f,0,1)       # needs sage.plot
Graphics object consisting of 1 graphics primitive
- sage.coding.code_bounds.singleton_upper_bound(n, q, d)[source]¶
Return the Singleton upper bound.
Return the Singleton upper bound for the number of elements in a largest code of minimum distance \(d\) in \(\GF{q}^n\).

This bound follows from deleting coordinates: deleting (puncturing) \(d-1\) coordinates of an \((n, M, d)\) code leaves the \(M\) codewords distinct, so an \((n-d+1, M, 1)\) code results. Thus
\[M \leq q^{n-d+1}.\]
Codes that meet this bound are called maximum distance separable (MDS).
EXAMPLES:
sage: codes.bounds.singleton_upper_bound(10,2,3)
256
- sage.coding.code_bounds.volume_hamming(n, q, r)[source]¶
Return the number of elements in a Hamming ball.
Return the number of elements in a Hamming ball of radius \(r\) in \(\GF{q}^n\).
EXAMPLES:
sage: codes.bounds.volume_hamming(10,2,3)
176
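The count is \(V(n,r) = \sum_{i=0}^{r}\binom{n}{i}(q-1)^i\). A plain-Python one-liner reproducing the example above:

from math import comb

n, q, r = 10, 2, 3
print(sum(comb(n, i) * (q - 1)**i for i in range(r + 1)))   # 176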