Subsection Polynomial Codes
We would like to find an easy method of obtaining cyclic linear codes. To accomplish this, we can use our knowledge of finite fields and polynomial rings over \({\mathbb Z}_2\text{.}\) Any binary \(n\)-tuple can be interpreted as a polynomial in \({\mathbb Z}_2[x]\text{.}\) Stated another way, the \(n\)-tuple \((a_0, a_1, \ldots, a_{n - 1} )\) corresponds to the polynomial
\begin{equation*}
f(x) = a_0 + a_1 x + \cdots + a_{n-1} x^{n - 1},
\end{equation*}
where the degree of \(f(x)\) is at most \(n - 1\text{.}\) For example, the polynomial corresponding to the \(5\)-tuple \((10011)\) is
\begin{equation*}
1 + 0 x + 0 x^2 + 1 x^3 + 1 x^4 = 1 + x^3 + x^4.
\end{equation*}
Conversely, with any polynomial \(f(x) \in {\mathbb Z}_2[x]\) with \(\deg f(x) \lt n\) we can associate a binary \(n\)-tuple. The polynomial \(x + x^2 + x^4\) corresponds to the \(5\)-tuple \((01101)\text{.}\)
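This correspondence is easy to automate; here is a minimal sketch (the helper name `tuple_to_poly` is ours, not from the text):

```python
# A binary n-tuple (a_0, ..., a_{n-1}) is the coefficient list of
# a_0 + a_1 x + ... + a_{n-1} x^{n-1} over Z_2.

def tuple_to_poly(bits):
    """Render a coefficient tuple as a human-readable polynomial string."""
    terms = [("1" if i == 0 else "x" if i == 1 else f"x^{i}")
             for i, b in enumerate(bits) if b == 1]
    return " + ".join(terms) if terms else "0"

print(tuple_to_poly((1, 0, 0, 1, 1)))  # the 5-tuple (10011)
print(tuple_to_poly((0, 1, 1, 0, 1)))  # the 5-tuple (01101)
```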
Let us fix a nonconstant polynomial \(g(x)\) in \({\mathbb Z}_2[x]\) of degree \(n - k\text{.}\) We can define an \((n,k)\)-code \(C\) in the following manner. If \((a_0, \ldots, a_{k - 1})\) is a \(k\)-tuple to be encoded, then \(f(x) = a_0 + a_1 x + \cdots + a_{k - 1} x^{k - 1}\) is the corresponding polynomial in \({\mathbb Z}_2[x]\text{.}\) To encode \(f(x)\text{,}\) we multiply by \(g(x)\text{.}\) The codewords in \(C\) are all those polynomials in \({\mathbb Z}_2[x]\) of degree less than \(n\) that are divisible by \(g(x)\text{.}\) Codes obtained in this manner are called polynomial codes.
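A sketch of this encoding step, assuming polynomials are stored as Python integers whose bit \(i\) is the coefficient of \(x^i\) (the function name is illustrative):

```python
# Multiplication in Z_2[x] is carry-free: shift-and-XOR.

def gf2_mul(a, b):
    """Multiply two Z_2[x] polynomials given as bitmask integers."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

# Encode the 3-tuple (1, 1, 1) with g(x) = 1 + x^3 (bitmask 0b1001):
g = 0b1001
f = 0b111                       # f(x) = 1 + x + x^2
codeword = gf2_mul(f, g)
print(format(codeword, "06b"))  # the 6-bit codeword
```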
Example 22.15
If we let \(g(x)= 1 + x^3\text{,}\) we can define a \((6,3)\)-code \(C\) as follows. To encode a \(3\)-tuple \(( a_0, a_1, a_2 )\text{,}\) we multiply the corresponding polynomial \(f(x) = a_0 + a_1 x + a_2 x^2\) by \(1 + x^3\text{.}\) We are defining a map \(\phi : {\mathbb Z}_2^3 \rightarrow {\mathbb Z}_2^6\) by \(\phi : f(x) \mapsto g(x) f(x)\text{.}\) It is easy to check that this map is a group homomorphism. In fact, if we regard \({\mathbb Z}_2^n\) as a vector space over \({\mathbb Z}_2\text{,}\) \(\phi\) is a linear transformation of vector spaces (see Exercise 20.4.15, Chapter 20). Let us compute the kernel of \(\phi\text{.}\) Observe that \(\phi ( a_0, a_1, a_2 ) = (000000)\) exactly when
\begin{align*}
0 + 0x + 0x^2 + 0x^3 + 0x^4 + 0 x^5 & = (1 + x^3) ( a_0 + a_1 x + a_2 x^2 )\\
& = a_0 + a_1 x + a_2 x^2 + a_0 x^3 + a_1 x^4 + a_2 x^5.
\end{align*}
Since the polynomials over a field form an integral domain, \(a_0 + a_1 x + a_2 x^2\) must be the zero polynomial. Therefore, \(\ker \phi = \{ (000) \}\) and \(\phi\) is one-to-one.
To calculate a generator matrix for \(C\text{,}\) we merely need to examine the way the polynomials \(1\text{,}\) \(x\text{,}\) and \(x^2\) are encoded:
\begin{align*}
(1 + x^3) \cdot 1 & = 1 + x^3\\
(1 + x^3)x & = x + x^4\\
(1 + x^3)x^2 & = x^2 + x^5.
\end{align*}
We obtain the code corresponding to the generator matrix \(G_1\) in Example 22.14. The parity-check matrix for this code is
\begin{equation*}
H
=
\begin{pmatrix}
1 & 0 & 0 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 & 1
\end{pmatrix}.
\end{equation*}
Since the smallest weight of any nonzero codeword is \(2\text{,}\) this code has the ability to detect all single errors.
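Both claims about this \((6,3)\)-code are small enough to confirm by brute force; a sketch, storing polynomials as bitmask integers (bit \(i\) is the coefficient of \(x^i\)):

```python
# Enumerate all eight codewords f(x)g(x) with g(x) = 1 + x^3 and take the
# smallest nonzero Hamming weight.

def gf2_mul(a, b):
    """Carry-free product in Z_2[x] on bitmask integers."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

g = 0b1001                               # g(x) = 1 + x^3
codewords = [gf2_mul(f, g) for f in range(8)]
min_weight = min(bin(c).count("1") for c in codewords if c != 0)
print(min_weight)                        # smallest nonzero weight
```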
Rings of polynomials have a great deal of structure; therefore, our immediate goal is to establish a link between polynomial codes and ring theory. Recall that \(x^n - 1 = (x - 1)( x^{n - 1} + \cdots + x + 1)\text{.}\) The factor ring
\begin{equation*}
R_n = {\mathbb Z}_2[x]/ \langle x^n - 1 \rangle
\end{equation*}
can be considered to be the ring of polynomials of the form
\begin{equation*}
f(t) = a_0 + a_1 t + \cdots + a_{n - 1} t^{n - 1}
\end{equation*}
that satisfy the condition \(t^n = 1\text{.}\) It is an easy exercise to show that \({\mathbb Z}_2^n\) and \(R_n\) are isomorphic as vector spaces. We will often identify elements in \({\mathbb Z}_2^n\) with elements in \({\mathbb Z}_2[x] / \langle x^n - 1 \rangle\text{.}\) In this manner we can interpret a linear code as a subset of \({\mathbb Z}_2[x] / \langle x^n - 1 \rangle\text{.}\)
The additional ring structure on polynomial codes is very powerful in describing cyclic codes. A cyclic shift of an \(n\)-tuple can be described by polynomial multiplication. If \(f(t) = a_0 + a_1 t + \cdots + a_{n - 1} t^{n - 1}\) is a code polynomial in \(R_n\text{,}\) then
\begin{equation*}
tf(t) = a_{n - 1} + a_0 t + \cdots + a_{n - 2} t^{n - 1}
\end{equation*}
is the cyclically shifted word obtained from multiplying \(f(t)\) by \(t\text{.}\) The following theorem gives a beautiful classification of cyclic codes in terms of the ideals of \(R_n\text{.}\)
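The shift rule for \(t f(t)\) can be sketched directly, storing an \(n\)-tuple as a bitmask with bit \(i\) holding \(a_i\) (the helper name is ours):

```python
# Multiplying by t in R_n = Z_2[x]/<x^n - 1> rotates the coefficient tuple:
# shift left, and wrap any t^n term back to the constant term since t^n = 1.

def shift_by_t(f, n):
    """Return t*f(t) reduced modulo t^n - 1."""
    f <<= 1
    if f & (1 << n):          # a t^n term appeared; wrap it around
        f ^= (1 << n) | 1
    return f

n = 5
f = 0b10011                   # coefficients (a_0, ..., a_4) = (1, 1, 0, 0, 1)
print(format(shift_by_t(f, n), "05b"))
```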
Theorem 22.16
A linear code \(C\) in \({\mathbb Z}_2^n\) is cyclic if and only if it is an ideal in \(R_n = {\mathbb Z}_2[x] / \langle x^n - 1 \rangle\text{.}\)
Proof
Let \(C\) be a linear cyclic code and suppose that \(f(t)\) is in \(C\text{.}\) Then \(t f(t)\) must also be in \(C\text{.}\) Consequently, \(t^k f(t)\) is in \(C\) for all \(k \in {\mathbb N}\text{.}\) Since \(C\) is a linear code, any linear combination of the codewords \(f(t), tf(t), t^2f(t), \ldots, t^{n-1}f(t)\) is also a codeword; therefore, for every polynomial \(p(t)\text{,}\) \(p(t)f(t)\) is in \(C\text{.}\) Hence, \(C\) is an ideal.
Conversely, let \(C\) be an ideal in \({\mathbb Z}_2[x]/\langle x^n - 1\rangle\text{.}\) Suppose that \(f(t) = a_0 + a_1 t + \cdots + a_{n - 1} t^{n - 1}\) is a codeword in \(C\text{.}\) Then \(t f(t)\) is a codeword in \(C\text{;}\) that is, \((a_{n-1}, a_0, \ldots, a_{n-2})\) is in \(C\text{.}\)
Theorem 22.16 tells us that knowing the ideals of \(R_n\) is equivalent to knowing the linear cyclic codes in \({\mathbb Z}_2^n\text{.}\) Fortunately, the ideals in \(R_n\) are easy to describe. The natural ring homomorphism \(\phi : {\mathbb Z}_2[x] \rightarrow R_n\) defined by \(\phi[f(x)] = f(t)\) is a surjective homomorphism. The kernel of \(\phi\) is the ideal generated by \(x^n - 1\text{.}\) By Theorem 16.34, every ideal \(C\) in \(R_n\) is of the form \(\phi(I)\text{,}\) where \(I\) is an ideal in \({\mathbb Z}_2[x]\) that contains \(\langle x^n - 1 \rangle\text{.}\) By Theorem 17.20, we know that every ideal \(I\) in \({\mathbb Z}_2[x]\) is a principal ideal, since \({\mathbb Z}_2\) is a field. Therefore, \(I = \langle g(x) \rangle\) for some unique monic polynomial \(g(x)\) in \({\mathbb Z}_2[x]\text{.}\) Since \(\langle x^n - 1 \rangle\) is contained in \(I\text{,}\) it must be the case that \(g(x)\) divides \(x^n - 1\text{.}\) Consequently, every ideal \(C\) in \(R_n\) is of the form
\begin{equation*}
C = \langle g(t) \rangle = \{ f(t)g(t) : f(t) \in R_n \text{ and } g(x) \mid (x^n - 1) \text{ in } {\mathbb Z}_2[x] \}.
\end{equation*}
The unique monic polynomial of the smallest degree that generates \(C\) is called the minimal generator polynomial of \(C\text{.}\)
Example 22.17
If we factor \(x^7 - 1\) into irreducible components, we have
\begin{equation*}
x^7 - 1 = (1 + x)(1 + x + x^3)(1+ x^2 + x^3).
\end{equation*}
We see that \(g(t) = (1 + t + t^3)\) generates an ideal \(C\) in \(R_7\text{.}\) This code is a \((7, 4)\)-block code. As in Example 22.15, it is easy to calculate a generator matrix by examining what \(g(t)\) does to the polynomials 1, \(t\text{,}\) \(t^2\text{,}\) and \(t^3\text{.}\) A generator matrix for \(C\) is
\begin{equation*}
G =
\begin{pmatrix}
1 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 \\
0 & 1 & 1 & 0 \\
1 & 0 & 1 & 1 \\
0 & 1 & 0 & 1 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}.
\end{equation*}
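As a numerical check, the columns of this matrix can be recovered by encoding \(1, t, t^2, t^3\) with \(g(t)\); a sketch with illustrative helper names:

```python
# Each column of G is the coefficient 7-tuple of g(t) * t^j for j = 0..3.
# Polynomials are coefficient lists over Z_2.

def gf2_poly_mul(a, b):
    """Multiply coefficient lists over Z_2."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

g = [1, 1, 0, 1]                       # g(t) = 1 + t + t^3
columns = []
for j in range(4):                     # encode 1, t, t^2, t^3
    t_j = [0] * j + [1]
    col = gf2_poly_mul(g, t_j)
    columns.append(col + [0] * (7 - len(col)))  # pad to length 7

G = [[columns[j][i] for j in range(4)] for i in range(7)]
for row in G:
    print(row)
```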
In general, we can determine a generator matrix for an \((n, k)\)-code \(C\) by the manner in which the elements \(t^k\) are encoded. Let \(x^n - 1 = g(x) h(x)\) in \({\mathbb Z}_2[x]\text{.}\) If \(g(x) = g_0 + g_1 x + \cdots + g_{n-k} x^{n-k}\) and \(h(x) = h_0 + h_1 x + \cdots + h_k x^k\text{,}\) then the \(n \times k\) matrix
\begin{equation*}
G =
\begin{pmatrix}
g_0 & 0 & \cdots & 0 \\
g_1 & g_0 & \cdots & 0 \\
\vdots & \vdots &\ddots & \vdots \\
g_{n-k} & g_{n-k-1} & \cdots & g_0 \\
0 & g_{n-k} & \cdots & g_{1} \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & g_{n-k}
\end{pmatrix}
\end{equation*}
is a generator matrix for the code \(C\) with generator polynomial \(g(t)\text{.}\) The parity-check matrix for \(C\) is the \((n - k) \times n\) matrix
\begin{equation*}
H =
\begin{pmatrix}
0 & \cdots & 0 & 0 & h_k & \cdots & h_0 \\
0 & \cdots & 0 & h_k & \cdots & h_0 & 0 \\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\
h_k & \cdots & h_0 & 0 & 0 & \cdots & 0
\end{pmatrix}.
\end{equation*}
We will leave the details of the proof of the following proposition as an exercise.
Proposition 22.18
Let \(C = \langle g(t) \rangle\) be a cyclic code in \(R_n\) and suppose that \(x^n - 1 = g(x) h(x)\text{.}\) Then \(G\) and \(H\) are generator and parity-check matrices for \(C\text{,}\) respectively. Furthermore, \(HG = 0\text{.}\)
Example 22.19
In Example 22.17,
\begin{equation*}
x^7 - 1 = g(x) h(x) = (1 + x + x^3)(1 + x + x^2 + x^4).
\end{equation*}
Therefore, a parity-check matrix for this code is
\begin{equation*}
H =
\begin{pmatrix}
0 & 0 & 1 & 0 & 1 & 1 & 1 \\
0 & 1 & 0 & 1 & 1 & 1 & 0 \\
1 & 0 & 1 & 1 & 1 & 0 & 0
\end{pmatrix}.
\end{equation*}
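Proposition 22.18's identity \(HG = 0\) can be checked numerically for this \((7,4)\) code, with \(G\) from Example 22.17 and \(H\) as above; a sketch over \({\mathbb Z}_2\):

```python
# Multiply the parity-check matrix H by the generator matrix G modulo 2;
# every entry of the product should vanish.

G = [[1, 0, 0, 0],
     [1, 1, 0, 0],
     [0, 1, 1, 0],
     [1, 0, 1, 1],
     [0, 1, 0, 1],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]

H = [[0, 0, 1, 0, 1, 1, 1],
     [0, 1, 0, 1, 1, 1, 0],
     [1, 0, 1, 1, 1, 0, 0]]

HG = [[sum(H[i][m] * G[m][j] for m in range(7)) % 2 for j in range(4)]
      for i in range(3)]
print(HG)
```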
To determine the error-detecting and error-correcting capabilities of a cyclic code, we need to know something about determinants. If \(\alpha_1, \ldots, \alpha_n\) are elements in a field \(F\text{,}\) then the \(n \times n\) matrix
\begin{equation*}
\begin{pmatrix}
1 & 1 & \cdots & 1 \\
\alpha_1 & \alpha_2 & \cdots & \alpha_n \\
\alpha_1^2 & \alpha_2^2 & \cdots & \alpha_n^2 \\
\vdots & \vdots & \ddots & \vdots \\
\alpha_1^{n-1} & \alpha_2^{n-1} & \cdots & \alpha_n^{n-1}
\end{pmatrix}
\end{equation*}
is called the Vandermonde matrix. The determinant of this matrix is called the Vandermonde determinant. We will need the following lemma in our investigation of cyclic codes.
Lemma 22.20
Let \(\alpha_1, \ldots, \alpha_n\) be elements in a field \(F\) with \(n \geq 2\text{.}\) Then
\begin{equation*}
\det
\begin{pmatrix}
1 & 1 & \cdots & 1 \\
\alpha_1 & \alpha_2 & \cdots & \alpha_n \\
\alpha_1^2 & \alpha_2^2 & \cdots & \alpha_n^2 \\
\vdots & \vdots & \ddots & \vdots \\
\alpha_1^{n-1} & \alpha_2^{n-1} & \cdots & \alpha_n^{n-1}
\end{pmatrix}
= \prod_{1 \leq j \lt i \leq n} (\alpha_i - \alpha_j).
\end{equation*}
In particular, if the \(\alpha_i\)'s are distinct, then the determinant is nonzero.
Proof
We will induct on \(n\text{.}\) If \(n = 2\text{,}\) then the determinant is \(\alpha_2 - \alpha_1\text{.}\) Let us assume the result for \(n - 1\) and consider the polynomial \(p(x)\) defined by
\begin{equation*}
p(x) = \det
\begin{pmatrix}
1 & 1 & \cdots & 1 & 1 \\
\alpha_1 & \alpha_2 & \cdots & \alpha_{n-1} & x \\
\alpha_1^2 & \alpha_2^2 & \cdots & \alpha_{n-1}^2 & x^2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
\alpha_1^{n-1} & \alpha_2^{n-1} & \cdots & \alpha_{n-1}^{n-1} & x^{n-1}
\end{pmatrix}.
\end{equation*}
Expanding this determinant by cofactors on the last column, we see that \(p(x)\) is a polynomial of degree at most \(n - 1\text{.}\) Moreover, the roots of \(p(x)\) are \(\alpha_1, \ldots, \alpha_{n-1}\text{,}\) since the substitution of any one of these elements in the last column will produce a column identical to the last column in the matrix. Remember that the determinant of a matrix is zero if it has two identical columns. Therefore,
\begin{equation*}
p(x) = (x - \alpha_1)(x - \alpha_2) \cdots (x - \alpha_{n-1}) \beta,
\end{equation*}
where
\begin{equation*}
\beta = (-1)^{n + n} \det
\begin{pmatrix}
1 & 1 & \cdots & 1 \\
\alpha_1 & \alpha_2 & \cdots & \alpha_{n-1} \\
\alpha_1^2 & \alpha_2^2 & \cdots & \alpha_{n-1}^2 \\
\vdots & \vdots & \ddots & \vdots \\
\alpha_1^{n-2} & \alpha_2^{n-2} & \cdots & \alpha_{n-1}^{n-2}
\end{pmatrix}.
\end{equation*}
By our induction hypothesis,
\begin{equation*}
\beta = (-1)^{n+n} \prod_{1 \leq j \lt i \leq n - 1} (\alpha_i - \alpha_j).
\end{equation*}
If we let \(x = \alpha_n\text{,}\) the result now follows immediately.
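The lemma can be spot-checked with exact integer arithmetic; a small sketch (the recursive cofactor determinant is ours, chosen for brevity rather than efficiency):

```python
# Check det(V) = prod_{j < i} (alpha_i - alpha_j) for a small Vandermonde
# matrix over the integers, expanding the determinant along the first row.

def det(M):
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

alphas = [2, 3, 5, 7]
V = [[a ** i for a in alphas] for i in range(len(alphas))]  # rows 1, a, a^2, a^3

product = 1
for i in range(len(alphas)):
    for j in range(i):
        product *= alphas[i] - alphas[j]

print(det(V), product)  # the two values agree
```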
The following theorem gives us an estimate on the error detection and correction capabilities for a particular generator polynomial.
Theorem 22.21
Let \(C = \langle g(t) \rangle\) be a cyclic code in \(R_n\) and suppose that \(\omega\) is a primitive \(n\)th root of unity over \({\mathbb Z}_2\text{.}\) If \(s\) consecutive powers of \(\omega\) are roots of \(g(x)\text{,}\) then the minimum distance of \(C\) is at least \(s + 1\text{.}\)
Proof
Suppose that
\begin{equation*}
g( \omega^r) = g(\omega^{r + 1}) = \cdots = g( \omega^{r + s - 1}) = 0.
\end{equation*}
Let \(f(x)\) be a polynomial in \(C\) with \(s\) or fewer nonzero coefficients, say
\begin{equation*}
f(x) = a_{i_0} x^{i_0} + a_{i_1} x^{i_1} + \cdots + a_{i_{s - 1}} x^{i_{s - 1}}.
\end{equation*}
It will suffice to show that all of the \(a_i\)'s must be 0. Since
\begin{equation*}
g( \omega^r) = g(\omega^{r + 1}) = \cdots = g( \omega^{r + s - 1}) = 0
\end{equation*}
and \(g(x)\) divides \(f(x)\text{,}\)
\begin{equation*}
f( \omega^r) = f(\omega^{r + 1}) = \cdots = f( \omega^{r + s - 1}) = 0.
\end{equation*}
Equivalently, we have the following system of equations:
\begin{align*}
a_{i_0} (\omega^r)^{i_0} + a_{i_1} (\omega^r)^{i_1} + \cdots + a_{i_{s - 1}} (\omega^r)^{i_{s - 1}} & = 0\\
a_{i_0} (\omega^{r + 1})^{i_0} + a_{i_1} (\omega^{r + 1})^{i_1} + \cdots + a_{i_{s-1}} (\omega^{r+1})^{i_{s-1}} & = 0\\
& \vdots \\
a_{i_0} (\omega^{r + s - 1})^{i_0} + a_{i_1} (\omega^{r + s - 1})^{i_1} + \cdots + a_{i_{s - 1}} (\omega^{r + s - 1})^{i_{s - 1}} & = 0.
\end{align*}
Therefore, \((a_{i_0}, a_{i_1}, \ldots, a_{i_{s - 1}})\) is a solution to the homogeneous system of linear equations
\begin{align*}
(\omega^{i_0})^r x_0 + (\omega^{i_1})^r x_1 + \cdots + (\omega^{i_{s - 1}})^r x_{s - 1} & = 0\\
(\omega^{i_0})^{r + 1} x_0 + (\omega^{i_1})^{r + 1} x_1 + \cdots + (\omega^{i_{s - 1}})^{r + 1} x_{s - 1} & = 0\\
& \vdots \\
(\omega^{i_0})^{r + s - 1} x_0 + (\omega^{i_1})^{r + s - 1} x_1 + \cdots + (\omega^{i_{s - 1}})^{r + s - 1} x_{s - 1} & = 0.
\end{align*}
However, this system has a unique solution, since the determinant of the matrix
\begin{equation*}
\begin{pmatrix}
(\omega^{i_0})^r & (\omega^{i_1})^r & \cdots & (\omega^{i_{s-1}})^r \\
(\omega^{i_0})^{r+1} & (\omega^{i_1})^{r+1} & \cdots &
(\omega^{i_{s-1}})^{r+1} \\
\vdots & \vdots & \ddots & \vdots \\
(\omega^{i_0})^{r+s-1} & (\omega^{i_1})^{r+s-1} & \cdots &
(\omega^{i_{s-1}})^{r+s-1}
\end{pmatrix}
\end{equation*}
can be shown to be nonzero using Lemma 22.20 and the basic properties of determinants (Exercise). Therefore, this solution must be \(a_{i_0} = a_{i_1} = \cdots = a_{i_{s - 1}} = 0\text{.}\)
Subsection BCH Codes
Some of the most important codes, discovered independently by A. Hocquenghem in 1959 and by R. C. Bose and D. V. Ray-Chaudhuri in 1960, are BCH codes. The European and transatlantic communication systems both use BCH codes. Information words to be encoded are of length \(231\text{,}\) and a polynomial of degree \(24\) is used to generate the code. Since \(231 + 24 = 255 = 2^8 - 1\text{,}\) we are dealing with a \((255, 231)\)-block code. This BCH code will detect six errors and has a failure rate of \(1\) in \(16\) million. One advantage of BCH codes is that efficient error correction algorithms exist for them.
The idea behind BCH codes is to choose a generator polynomial of smallest degree that has the largest error detection and error correction capabilities. Let \(d = 2r + 1\) for some \(r \geq 0\text{.}\) Suppose that \(\omega\) is a primitive \(n\)th root of unity over \({\mathbb Z}_2\text{,}\) and let \(m_i(x)\) be the minimal polynomial over \({\mathbb Z}_2\) of \(\omega^i\text{.}\) If
\begin{equation*}
g(x) = \lcm[ m_1(x), m_{2}(x), \ldots, m_{2r}(x)],
\end{equation*}
then the cyclic code \(\langle g(t) \rangle\) in \(R_n\) is called the BCH code of length \(n\) and distance \(d\text{.}\) By Theorem 22.21, the minimum distance of \(C\) is at least \(d\text{.}\)
Theorem 22.22
Let \(C = \langle g(t) \rangle\) be a cyclic code in \(R_n\text{.}\) The following statements are equivalent.
1. The code \(C\) is a BCH code whose minimum distance is at least \(d\text{.}\)
2. A code polynomial \(f(t)\) is in \(C\) if and only if \(f( \omega^i) = 0\) for \(1 \leq i \lt d\text{.}\)
3. The matrix
\begin{equation*}
H =
\begin{pmatrix}
1 & \omega & \omega^2 & \cdots & \omega^{n-1}\\
1 & \omega^2 & \omega^{4} & \cdots & \omega^{(n-1)(2)} \\
1 & \omega^3 & \omega^{6} & \cdots & \omega^{(n-1)(3)} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & \omega^{2r} & \omega^{4r} & \cdots & \omega^{(n-1)(2r)}
\end{pmatrix}
\end{equation*}
is a parity-check matrix for \(C\text{.}\)
Proof
(1) \(\Rightarrow\) (2). If \(f(t)\) is in \(C\text{,}\) then \(g(x) \mid f(x)\) in \({\mathbb Z}_2[x]\text{.}\) Hence, for \(i = 1, \ldots, 2r\text{,}\) \(f( \omega^i) = 0\) since \(g( \omega^i ) = 0\text{.}\) Conversely, suppose that \(f( \omega^i) = 0\) for \(1 \leq i \lt d\text{.}\) Then \(f(x)\) is divisible by each \(m_i(x)\text{,}\) since \(m_i(x)\) is the minimal polynomial of \(\omega^i\text{.}\) Therefore, \(g(x) \mid f(x)\) by the definition of \(g(x)\text{.}\) Consequently, \(f(x)\) is a codeword.
(2) \(\Rightarrow\) (3). Let \(f(t) = a_0 + a_1 t + \cdots + a_{n - 1} t^{n - 1}\) be in \(R_n\text{.}\) The corresponding \(n\)-tuple in \({\mathbb Z}_2^n\) is \({\mathbf x} = (a_0 a_1 \cdots a_{n - 1})^\transpose\text{.}\) By (2),
\begin{equation*}
H {\mathbf x} =
\begin{pmatrix}
a_0 + a_1 \omega + \cdots + a_{n-1} \omega^{n-1} \\
a_0 + a_1 \omega^2 + \cdots + a_{n-1} (\omega^2)^{n-1} \\
\vdots \\
a_0 + a_1 \omega^{2r} + \cdots + a_{n-1} (\omega^{2r})^{n-1}
\end{pmatrix}
=
\begin{pmatrix}
f(\omega) \\
f(\omega^2) \\
\vdots \\
f(\omega^{2r})
\end{pmatrix}
= 0
\end{equation*}
exactly when \(f(t)\) is in \(C\text{.}\) Thus, \(H\) is a parity-check matrix for \(C\text{.}\)
(3) \(\Rightarrow\) (1). By (3), a code polynomial \(f(t) = a_0 + a_1 t + \cdots + a_{n - 1} t^{n - 1}\) is in \(C\) exactly when \(f(\omega^i) = 0\) for \(i = 1, \ldots, 2r\text{.}\) The smallest such polynomial is \(g(t) = \lcm[ m_1(t),\ldots, m_{2r}(t)]\text{.}\) Therefore, \(C = \langle g(t) \rangle\text{.}\)
Example 22.23
It is easy to verify that \(x^{15} - 1 \in {\mathbb Z}_2[x]\) has a factorization
\begin{equation*}
x^{15} - 1 = (x + 1)(x^2 + x + 1)(x^4 + x + 1)(x^4 + x^3 + 1)(x^4 + x^3 + x^2 + x + 1),
\end{equation*}
where each of the factors is an irreducible polynomial. Let \(\omega\) be a root of \(1 + x + x^4\text{.}\) The Galois field \(\gf(2^4)\) is
\begin{equation*}
\{ a_0 + a_1 \omega + a_2 \omega^2 + a_3 \omega^3 : a_i \in {\mathbb Z}_2 \text{ and } 1 + \omega + \omega^4 = 0 \}.
\end{equation*}
By Example 22.8, \(\omega\) is a primitive \(15\)th root of unity. The minimal polynomial of \(\omega\) is \(m_1(x) = 1 + x + x^4\text{.}\) It is easy to see that \(\omega^2\) and \(\omega^4\) are also roots of \(m_1(x)\text{.}\) The minimal polynomial of \(\omega^3\) is \(m_2(x) = 1 + x + x^2 + x^3 + x^4\text{.}\) Therefore,
\begin{equation*}
g(x) = m_1(x) m_2(x) = 1 + x^4 + x^6 + x^7 + x^8
\end{equation*}
has roots \(\omega\text{,}\) \(\omega^2\text{,}\) \(\omega^3\text{,}\) \(\omega^4\text{.}\) Since both \(m_1(x)\) and \(m_2(x)\) divide \(x^{15} - 1\text{,}\) the BCH code is a \((15, 7)\)-code. If \(x^{15} - 1 = g(x)h(x)\text{,}\) then \(h(x) = 1 + x^4 + x^6 + x^7\text{;}\) therefore, a parity-check matrix for this code is
\begin{equation*}
\left(
\begin{array}{ccccccccccccccc}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}
\right).
\end{equation*}
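The computations in Example 22.23 can be verified mechanically; a sketch, storing \({\mathbb Z}_2[x]\) polynomials as bitmask integers (bit \(i\) is the coefficient of \(x^i\)) and \(\gf(2^4)\) elements as 4-bit integers (helper names are ours):

```python
# Check that g = m1*m2, that g*h = x^15 + 1 over Z_2, that omega (a root of
# 1 + x + x^4) is a primitive 15th root of unity, and that the claimed
# powers of omega are roots of m1 and m2.

def gf2_mul(a, b):
    """Carry-free product in Z_2[x]."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

m1, m2 = 0b10011, 0b11111          # 1 + x + x^4 and 1 + x + x^2 + x^3 + x^4
g = gf2_mul(m1, m2)
assert g == 0b111010001            # g(x) = 1 + x^4 + x^6 + x^7 + x^8
h = 0b11010001                     # h(x) = 1 + x^4 + x^6 + x^7
assert gf2_mul(g, h) == (1 << 15) | 1   # g(x)h(x) = x^15 + 1 over Z_2

def gf16_mul(a, b):
    """Product in GF(2^4), reducing by 1 + x + x^4 (bitmask 0b10011)."""
    result = 0
    for _ in range(4):
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011
    return result

w = 0b0010                         # omega, the class of x
powers = [1]
for _ in range(15):
    powers.append(gf16_mul(powers[-1], w))
assert powers[15] == 1 and len(set(powers[:15])) == 15  # primitive 15th root

def eval_poly(mask, x):
    """Evaluate a Z_2[x] bitmask polynomial at x in GF(2^4)."""
    acc, power, i = 0, 1, 0
    while mask >> i:
        if (mask >> i) & 1:
            acc ^= power
        power = gf16_mul(power, x)
        i += 1
    return acc

# omega, omega^2, omega^4 are roots of m1; omega^3 is a root of m2.
assert all(eval_poly(m1, powers[i]) == 0 for i in (1, 2, 4))
assert eval_poly(m2, powers[3]) == 0
print("all checks passed")
```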