Why can't there be an error correcting code with fewer than 5 qubits?





I read about the 9-qubit, 7-qubit, and 5-qubit error correcting codes lately. But why can there not be a quantum error correcting code with fewer than 5 qubits?










      error-correction






edited Nov 23 '18 at 14:04 by DaftWullie










asked Nov 23 '18 at 12:53 by Adex






















4 Answers



















          A proof that you need at least 5 qubits (or qudits)



          Here is a proof that any single-error correcting (i.e., distance 3) quantum error correcting code has at least 5 qubits. In fact, this generalises to qudits of any dimension $d$, and any quantum error correcting code protecting one or more qudits of dimension $d$.



(As Felix Huber notes, the original proof that you require at least 5 qubits is due to the Knill–Laflamme article [arXiv:quant-ph/9604034], which set out the Knill–Laflamme conditions: the following is the proof technique more commonly used nowadays.)



Any quantum error correcting code which can correct $t$ unknown errors can also correct up to $2t$ erasure errors (where we simply lose some qubit, or it becomes completely depolarised, or similar) if the locations of the erased qubits are known [1, Sec. III A]*.
Slightly more generally, a quantum error correcting code of distance $d$ can tolerate $d-1$ erasure errors. For example, the $[\![4,2,2]\!]$ code can't correct any errors at all, in essence because it can tell that an error has happened (and even which type of error) but not which qubit it happened to; yet that same code can protect against a single erasure error, because by hypothesis we know precisely where the error occurred.



It follows that any quantum error correcting code which can tolerate one Pauli error, can recover from the loss of two qubits.
Now: suppose you have a quantum error correcting code on $n \geqslant 2$ qubits, encoding one qubit against single-qubit errors. Suppose that you give $n-2$ qubits to Alice, and $2$ qubits to Bob: then Alice should be able to recover the original encoded state. If $n<5$, then $2 \geqslant n-2$, so that Bob should also be able to recover the original encoded state — thereby obtaining a clone of Alice's state. As this is ruled out by the No Cloning Theorem, it follows that we must have $n \geqslant 5$ instead.



          On correcting erasure errors



          * The earliest reference I found for this is



[1] Grassl, Beth, and Pellizzari. Codes for the Quantum Erasure Channel. Phys. Rev. A 56 (pp. 33–38), 1997. [arXiv:quant-ph/9610042]



— which is not long after the Knill–Laflamme conditions were described in [arXiv:quant-ph/9604034], and so plausibly the original proof of the connection between code distance and erasure errors. The outline is as follows, and applies to error correcting codes of distance $d$ (and applies equally well to qudits of any dimension in place of qubits, using generalised Pauli operators).




• The loss of $d-1$ qubits can be modelled by those qubits being subject to the completely depolarising channel, which in turn can be modelled by those qubits being subject to uniformly random Pauli errors.


• If the locations of those $d-1$ qubits were unknown, this would be fatal.
  However, as their locations are known, any pair of Pauli errors on $d-1$ qubits can be distinguished from one another, by appeal to the
  Knill–Laflamme conditions.


• Therefore, by substituting the erased qubits with qubits in the maximally mixed state and testing for Pauli errors on those $d-1$ qubits specifically (requiring a different correction procedure than you would use for correcting arbitrary Pauli errors, mind you), you can recover the original state.







answered Nov 23 '18 at 22:04 by Niel de Beaudrap; edited Nov 30 '18 at 11:03

• N.B. If you've upvoted my answer, you should consider upvoting Felix Huber's answer as well, for having identified the original proof. – Niel de Beaudrap, Nov 30 '18 at 11:26




















          What we can easily prove is that there's no smaller non-degenerate code.



In a non-degenerate code, you have to have the 2 logical states of the qubit, and you have to have a distinct state for each possible error to map each logical state into. So, let's say you had a 5 qubit code, with the two logical states $|0_L\rangle$ and $|1_L\rangle$. The set of possible single-qubit errors are $X_1,X_2,\ldots,X_5$, $Y_1,Y_2,\ldots,Y_5$, $Z_1,Z_2,\ldots,Z_5$, and it means that all the states
$$
|0_L\rangle,|1_L\rangle,X_1|0_L\rangle,X_1|1_L\rangle,X_2|0_L\rangle,\ldots
$$
must map to orthogonal states.

If we apply this argument in general, it shows us that we need
$$
2+2\times(3n)
$$
distinct states. But, for $n$ qubits, the maximum number of distinct states is $2^n$. So, for a non-degenerate error correcting code of distance 3 (i.e. correcting at least one error) or greater, we need
$$
2^n \geq 2(3n+1).
$$
This is called the Quantum Hamming Bound. You can easily check that this is true for all $n\geq 5$, but not if $n<5$. Indeed, for $n=5$ the inequality is an equality, and we call the corresponding 5-qubit code the perfect code as a result.
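The counting argument above is easy to check numerically. A minimal sketch (the function name is mine, not from the answer):

```python
# Check the counting argument above: a non-degenerate distance-3 code on n
# qubits encoding one logical qubit needs 2(3n + 1) mutually orthogonal
# states, which must fit in the 2^n-dimensional Hilbert space.

def satisfies_hamming_bound(n: int) -> bool:
    """True if 2^n >= 2(3n + 1), i.e. the required orthogonal states fit."""
    return 2 ** n >= 2 * (3 * n + 1)

for n in range(1, 9):
    print(n, satisfies_hamming_bound(n))

# n = 5 is the smallest n for which the bound holds, and there it is
# tight: 2^5 = 32 = 2 * (3*5 + 1), matching the "perfect code" remark.
```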







• Can't you prove this by no-cloning for any code, without invoking the Hamming bound? – Norbert Schuch, Nov 23 '18 at 23:12

• @NorbertSchuch the only proof I know involving cloning just shows that an $n$ qubit code cannot correct for $n/2$ or more errors. If you know another construction, I'd be very happy to learn it! – DaftWullie, Nov 24 '18 at 6:17

• Ah, I see that's the point of @NieldeBeaudrap's answer. Cool :) – DaftWullie, Nov 24 '18 at 6:56

• Thought that was a standard argument :-o – Norbert Schuch, Nov 24 '18 at 12:11




















As a complement to the other answers, I am going to add the general quantum Hamming bound for non-degenerate quantum error correction codes. The mathematical formulation of the bound is
$$
2^{n-k} \geq \sum_{j=0}^{t} \binom{n}{j} 3^j,
$$
where $n$ refers to the number of qubits that form the codewords, $k$ is the number of information qubits that are encoded (so they are protected from decoherence), and $t$ is the number of errors corrected by the code. As $t$ is related to the distance by $t = \lfloor\frac{d-1}{2}\rfloor$, such a non-degenerate quantum code will be an $[[n,k,d]]$ quantum error correction code. This bound is obtained by a sphere-packing-like argument: the $2^n$-dimensional Hilbert space is partitioned into $2^{n-k}$ spaces, each distinguished by the syndrome measured; one error is assigned to each of the syndromes, and the recovery operation is done by inverting the error associated with the measured syndrome. That is why the total number of errors corrected by a non-degenerate quantum code must be less than or equal to the number of partitions given by the syndrome measurement.
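The general bound above can be evaluated for any $[[n,k,d]]$ parameters. A short sketch (the helper name is mine; it assumes nothing beyond the formula just stated):

```python
from math import comb

def hamming_bound_holds(n: int, k: int, d: int) -> bool:
    """Check the quantum Hamming bound 2^(n-k) >= sum_{j<=t} C(n,j) * 3^j
    for a non-degenerate [[n, k, d]] code, with t = floor((d-1)/2)."""
    t = (d - 1) // 2
    return 2 ** (n - k) >= sum(comb(n, j) * 3 ** j for j in range(t + 1))

# The [[5,1,3]] perfect code saturates the bound (16 = 1 + 15), while
# hypothetical [[4,1,3]] parameters would violate it (8 < 13).
print(hamming_bound_holds(5, 1, 3))  # True
print(hamming_bound_holds(4, 1, 3))  # False
```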



However, degeneracy is a property of quantum error correction codes in which there are classes of equivalence between the errors that can affect the transmitted codewords. This means that there are errors whose effect on the codewords is the same while sharing the same syndrome. Those classes of degenerate errors are corrected via the same recovery operation, and so more errors than expected can be corrected. That is why it is not known whether the quantum Hamming bound holds for degenerate error correction codes, as more errors than the number of partitions might be corrected this way. Please refer to this question for some information about the violation of the quantum Hamming bound.




























            I wanted to add a short comment to the earliest reference. I believe this was shown already a bit earlier in Section 5.2 of



            A Theory of Quantum Error-Correcting Codes
            Emanuel Knill, Raymond Laflamme
            https://arxiv.org/abs/quant-ph/9604034


            where the specific result is:




Theorem 5.1. A $(2^r,k)$ $e$-error-correcting quantum code must satisfy $r \geqslant 4e + \lceil \log k \rceil$.




Here, an $(N,K)$ code is an embedding of a $K$-dimensional subspace into an $N$-dimensional system; it is an $e$-error-correcting code if the system decomposes as a tensor product of qubits, and the code is capable of correcting errors of weight $e$.
In particular, a $(2^n, 2^k)$ $e$-error-correcting code is what we would now describe as an $[\![n,k,2e{+}1]\!]$ code. Theorem 5.1 then allows us to prove that for $k \geqslant 1$ and an odd integer $d \geqslant 3$, an $[\![n,k,d]\!]$ code must satisfy
$$
\begin{aligned}
n
\;&\geqslant\;
4\bigl\lceil\tfrac{d-1}{2}\bigr\rceil + \lceil \log 2^k \rceil
\\[1ex]&\geqslant\;
\bigl\lceil 4 \cdot \tfrac{d-1}{2} \bigr\rceil + \lceil k \rceil
\\[1ex]&=\;
2d - 2 + k
\;\geqslant\;
6 - 2 + 1
\;=\;
5.
\end{aligned}
$$
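Under the identification above (odd $d = 2e+1$, so $e = (d-1)/2$), Theorem 5.1 gives a concrete lower bound on $n$ that can be evaluated numerically. A small sketch with a hypothetical helper name:

```python
from math import ceil, log2

def knill_laflamme_min_n(k: int, d: int) -> int:
    """Smallest n permitted by Theorem 5.1 for an [[n, k, d]] code with
    odd distance d = 2e + 1: n >= 4e + ceil(log2(2^k)) = 2d - 2 + k."""
    e = (d - 1) // 2  # weight of errors the code must correct
    return 4 * e + ceil(log2(2 ** k))

# A single-error-correcting code for one logical qubit needs >= 5 qubits.
print(knill_laflamme_min_n(1, 3))  # 5
```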



(N.B. There is a peculiarity with the dates here: the arXiv submission of the above paper is April 1996, a couple of months earlier than the Grassl, Beth, and Pellizzari paper submitted in October 1996. However, the date below the title in the PDF states a year earlier, April 1995.)



As an alternative proof, I could imagine (but haven't tested yet) that simply solving for a weight distribution that satisfies the MacWilliams identities should also suffice. Such a strategy is indeed used in



            Quantum MacWilliams Identities
            Peter Shor, Raymond Laflamme
            https://arxiv.org/abs/quant-ph/9610040


to show that no degenerate code on five qubits exists that can correct any single error.








• Excellent reference, thanks! I didn't know the Knill–Laflamme paper well enough to know that the lower bound of 5 was there as well. – Niel de Beaudrap, Nov 30 '18 at 10:51

• Thanks for editing! About the lower bound, it seems they don't address that five qubits are needed, but only that such a code must necessarily be non-degenerate. – Felix Huber, Nov 30 '18 at 11:05

• As a side note, from the quantum Singleton bound $n=5$ also follows for the smallest code able to correct any single error. In this case, no-cloning is not required (as $d\leq n/2+1$ already), and the bound follows from subadditivity of the von Neumann entropy. Cf. Section 7.8.3 in Preskill's lecture notes, theory.caltech.edu/people/preskill/ph229/notes/chap7.pdf – Felix Huber, Nov 30 '18 at 11:15

• Unless I badly misread that Section, it seems to me that they show that no error correcting code exists for $r \leqslant 4$; it seems clear that this also follows from Theorem 5.1 as well. None of their terminology suggests that their result is special to non-degenerate codes. – Niel de Beaudrap, Nov 30 '18 at 11:17

• Sorry for the confusion. My side-comment was referring to the Quantum MacWilliams Identities paper: there it was only shown that a single-error-correcting five-qubit code must be pure/non-degenerate. Section 5.2 in the Knill–Laflamme paper ("A theory of QECC...") is, as they point out, general. – Felix Huber, Nov 30 '18 at 11:21














            Your Answer





            StackExchange.ifUsing("editor", function () {
            return StackExchange.using("mathjaxEditing", function () {
            StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
            StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
            });
            });
            }, "mathjax-editing");

            StackExchange.ready(function() {
            var channelOptions = {
            tags: "".split(" "),
            id: "694"
            };
            initTagRenderer("".split(" "), "".split(" "), channelOptions);

            StackExchange.using("externalEditor", function() {
            // Have to fire editor after snippets, if snippets enabled
            if (StackExchange.settings.snippets.snippetsEnabled) {
            StackExchange.using("snippets", function() {
            createEditor();
            });
            }
            else {
            createEditor();
            }
            });

            function createEditor() {
            StackExchange.prepareEditor({
            heartbeatType: 'answer',
            autoActivateHeartbeat: false,
            convertImagesToLinks: false,
            noModals: true,
            showLowRepImageUploadWarning: true,
            reputationToPostImages: null,
            bindNavPrevention: true,
            postfix: "",
            imageUploader: {
            brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
            contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
            allowUrls: true
            },
            noCode: true, onDemand: true,
            discardSelector: ".discard-answer"
            ,immediatelyShowMarkdownHelp:true
            });


            }
            });














            draft saved

            draft discarded


















            StackExchange.ready(
            function () {
            StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fquantumcomputing.stackexchange.com%2fquestions%2f4798%2fwhy-cant-there-be-an-error-correcting-code-with-fewer-than-5-qubits%23new-answer', 'question_page');
            }
            );

            Post as a guest















            Required, but never shown

























            4 Answers
            4






            active

            oldest

            votes








            4 Answers
            4






            active

            oldest

            votes









            active

            oldest

            votes






            active

            oldest

            votes









            14












            $begingroup$

            A proof that you need at least 5 qubits (or qudits)



            Here is a proof that any single-error correcting (i.e., distance 3) quantum error correcting code has at least 5 qubits. In fact, this generalises to qudits of any dimension $d$, and any quantum error correcting code protecting one or more qudits of dimension $d$.



            (As Felix Huber notes, the original proof that you require at least 5 qubits is due to the Knill--Laflamme article [arXiv:quant-ph/9604034] which set out the Knill--Laflamme conditions: the following is the proof technique which is more commonly used nowadays.)



            Any quantum error correcting code which can correct $t$ unknown errors, can also correct up to $2t$ erasure errors (where we simply lose some qubit, or it becomes completely depolarised, or similar) if the locations of the erased qubits are known. [1, Sec. III A]*.
            Slightly more generally, a quantum error correcting code of distance $d$ can tolerate $d-1$ erasure errors. For example, while the $[![4,2,2]!]$ code can't correct any errors at all, in essence because it can tell an error has happened (and even which type of error) but not which qubit it has happened to, that same code can protect against a single erasure error (because by hypothesis we know precisely where the error occurs in this case).



            It follows that any quantum error correcting code which can tolerate one Pauli error, can recover from the loss of two qubits.
            Now: suppose you have a quantum error correcting code on $n geqslant 2$ qubits, encoding one qubit against single-qubit errors. Suppose that you give $n-2$ qubits to Alice, and $2$ qubits to Bob: then Alice should be able to recover the original encoded state. If $n<5$, then $2 geqslant n-2$, so that Bob should also be able to recover the original encoded state — thereby obtaining a clone of Alice's state. As this is ruled out by the No Cloning Theorem, it follows that we must have $n geqslant 5$ instead.



            On correcting erasure errors



            * The earliest reference I found for this is



            [1]
            Grassl, Beth, and Pellizzari.

                 
            Codes for the Quantum Erasure Channel.

                 
            Phys. Rev. A 56 (pp. 33–38), 1997.

                 
            [arXiv:quant-ph/9610042]



            — which is not much long after the Knill–Laflamme conditions were described in [arXiv:quant-ph/9604034] and so plausibly the original proof of the connection between code distance and erasure errors. The outline is as follows, and applies to error correcting codes of distance $d$ (and applies equally well to qudits of any dimension in place of qubits, using generalised Pauli operators).




            • The loss of $d-1$ qubits can be modelled by those qubits being subject to the completely depolarising channel, which in turn can be modeled by those qubits being subject to uniformly random Pauli errors.


            • If the locations of those $d-1$ qubits were unknown, this would be fatal.
              However, as their locations are known, any pair Pauli errors on $d-1$ qubits can be distinguished from one another, by appeal to the
              Knill-Laflamme conditions.


            • Therefore, by substituting the erased qubits with qubits in the maximally mixed state and testing for Pauli errors on those $d-1$ qubits specificaly (requiring a different correction procedure than you would use for correcting arbitrary Pauli errors, mind you), you can recover the original state.







            share|improve this answer











            $endgroup$









            • 1




              $begingroup$
              N.B. If you've upvoted my answer, you should consider upvoting Felix Huber's answer as well, for having identified the original proof.
              $endgroup$
              – Niel de Beaudrap
              Nov 30 '18 at 11:26
















            14












            $begingroup$

            A proof that you need at least 5 qubits (or qudits)



            Here is a proof that any single-error correcting (i.e., distance 3) quantum error correcting code has at least 5 qubits. In fact, this generalises to qudits of any dimension $d$, and any quantum error correcting code protecting one or more qudits of dimension $d$.



            (As Felix Huber notes, the original proof that you require at least 5 qubits is due to the Knill--Laflamme article [arXiv:quant-ph/9604034] which set out the Knill--Laflamme conditions: the following is the proof technique which is more commonly used nowadays.)



            Any quantum error correcting code which can correct $t$ unknown errors, can also correct up to $2t$ erasure errors (where we simply lose some qubit, or it becomes completely depolarised, or similar) if the locations of the erased qubits are known. [1, Sec. III A]*.
            Slightly more generally, a quantum error correcting code of distance $d$ can tolerate $d-1$ erasure errors. For example, while the $[![4,2,2]!]$ code can't correct any errors at all, in essence because it can tell an error has happened (and even which type of error) but not which qubit it has happened to, that same code can protect against a single erasure error (because by hypothesis we know precisely where the error occurs in this case).



            It follows that any quantum error correcting code which can tolerate one Pauli error, can recover from the loss of two qubits.
            Now: suppose you have a quantum error correcting code on $n geqslant 2$ qubits, encoding one qubit against single-qubit errors. Suppose that you give $n-2$ qubits to Alice, and $2$ qubits to Bob: then Alice should be able to recover the original encoded state. If $n<5$, then $2 geqslant n-2$, so that Bob should also be able to recover the original encoded state — thereby obtaining a clone of Alice's state. As this is ruled out by the No Cloning Theorem, it follows that we must have $n geqslant 5$ instead.



            On correcting erasure errors



            * The earliest reference I found for this is



            [1]
            Grassl, Beth, and Pellizzari.

                 
            Codes for the Quantum Erasure Channel.

                 
            Phys. Rev. A 56 (pp. 33–38), 1997.

                 
            [arXiv:quant-ph/9610042]



            — which is not much long after the Knill–Laflamme conditions were described in [arXiv:quant-ph/9604034] and so plausibly the original proof of the connection between code distance and erasure errors. The outline is as follows, and applies to error correcting codes of distance $d$ (and applies equally well to qudits of any dimension in place of qubits, using generalised Pauli operators).




            • The loss of $d-1$ qubits can be modelled by those qubits being subject to the completely depolarising channel, which in turn can be modeled by those qubits being subject to uniformly random Pauli errors.


            • If the locations of those $d-1$ qubits were unknown, this would be fatal.
              However, as their locations are known, any pair Pauli errors on $d-1$ qubits can be distinguished from one another, by appeal to the
              Knill-Laflamme conditions.


            • Therefore, by substituting the erased qubits with qubits in the maximally mixed state and testing for Pauli errors on those $d-1$ qubits specificaly (requiring a different correction procedure than you would use for correcting arbitrary Pauli errors, mind you), you can recover the original state.







            share|improve this answer











            $endgroup$









            • 1




              $begingroup$
              N.B. If you've upvoted my answer, you should consider upvoting Felix Huber's answer as well, for having identified the original proof.
              $endgroup$
              – Niel de Beaudrap
              Nov 30 '18 at 11:26














            14












            14








            14





            $begingroup$

            A proof that you need at least 5 qubits (or qudits)



            Here is a proof that any single-error correcting (i.e., distance 3) quantum error correcting code has at least 5 qubits. In fact, this generalises to qudits of any dimension $d$, and any quantum error correcting code protecting one or more qudits of dimension $d$.



            (As Felix Huber notes, the original proof that you require at least 5 qubits is due to the Knill--Laflamme article [arXiv:quant-ph/9604034] which set out the Knill--Laflamme conditions: the following is the proof technique which is more commonly used nowadays.)



            Any quantum error correcting code which can correct $t$ unknown errors, can also correct up to $2t$ erasure errors (where we simply lose some qubit, or it becomes completely depolarised, or similar) if the locations of the erased qubits are known. [1, Sec. III A]*.
            Slightly more generally, a quantum error correcting code of distance $d$ can tolerate $d-1$ erasure errors. For example, while the $[![4,2,2]!]$ code can't correct any errors at all, in essence because it can tell an error has happened (and even which type of error) but not which qubit it has happened to, that same code can protect against a single erasure error (because by hypothesis we know precisely where the error occurs in this case).



            It follows that any quantum error correcting code which can tolerate one Pauli error, can recover from the loss of two qubits.
            Now: suppose you have a quantum error correcting code on $n geqslant 2$ qubits, encoding one qubit against single-qubit errors. Suppose that you give $n-2$ qubits to Alice, and $2$ qubits to Bob: then Alice should be able to recover the original encoded state. If $n<5$, then $2 geqslant n-2$, so that Bob should also be able to recover the original encoded state — thereby obtaining a clone of Alice's state. As this is ruled out by the No Cloning Theorem, it follows that we must have $n geqslant 5$ instead.



            On correcting erasure errors



            * The earliest reference I found for this is



            [1]
            Grassl, Beth, and Pellizzari.

                 
            Codes for the Quantum Erasure Channel.

                 
            Phys. Rev. A 56 (pp. 33–38), 1997.

                 
            [arXiv:quant-ph/9610042]



            — which is not much long after the Knill–Laflamme conditions were described in [arXiv:quant-ph/9604034] and so plausibly the original proof of the connection between code distance and erasure errors. The outline is as follows, and applies to error correcting codes of distance $d$ (and applies equally well to qudits of any dimension in place of qubits, using generalised Pauli operators).




            • The loss of $d-1$ qubits can be modelled by those qubits being subject to the completely depolarising channel, which in turn can be modeled by those qubits being subject to uniformly random Pauli errors.


            • If the locations of those $d-1$ qubits were unknown, this would be fatal.
              However, as their locations are known, any pair Pauli errors on $d-1$ qubits can be distinguished from one another, by appeal to the
              Knill-Laflamme conditions.


            • Therefore, by substituting the erased qubits with qubits in the maximally mixed state and testing for Pauli errors on those $d-1$ qubits specificaly (requiring a different correction procedure than you would use for correcting arbitrary Pauli errors, mind you), you can recover the original state.







            share|improve this answer











            $endgroup$



            A proof that you need at least 5 qubits (or qudits)



            Here is a proof that any single-error correcting (i.e., distance 3) quantum error correcting code has at least 5 qubits. In fact, this generalises to qudits of any dimension $d$, and any quantum error correcting code protecting one or more qudits of dimension $d$.



            (As Felix Huber notes, the original proof that you require at least 5 qubits is due to the Knill--Laflamme article [arXiv:quant-ph/9604034] which set out the Knill--Laflamme conditions: the following is the proof technique which is more commonly used nowadays.)



            Any quantum error correcting code which can correct $t$ unknown errors, can also correct up to $2t$ erasure errors (where we simply lose some qubit, or it becomes completely depolarised, or similar) if the locations of the erased qubits are known. [1, Sec. III A]*.
            Slightly more generally, a quantum error correcting code of distance $d$ can tolerate $d-1$ erasure errors. For example, while the $[![4,2,2]!]$ code can't correct any errors at all, in essence because it can tell an error has happened (and even which type of error) but not which qubit it has happened to, that same code can protect against a single erasure error (because by hypothesis we know precisely where the error occurs in this case).



            It follows that any quantum error correcting code which can tolerate one Pauli error, can recover from the loss of two qubits.
            Now: suppose you have a quantum error correcting code on $n geqslant 2$ qubits, encoding one qubit against single-qubit errors. Suppose that you give $n-2$ qubits to Alice, and $2$ qubits to Bob: then Alice should be able to recover the original encoded state. If $n<5$, then $2 geqslant n-2$, so that Bob should also be able to recover the original encoded state — thereby obtaining a clone of Alice's state. As this is ruled out by the No Cloning Theorem, it follows that we must have $n geqslant 5$ instead.



            On correcting erasure errors



            * The earliest reference I found for this is



            [1]
            Grassl, Beth, and Pellizzari.

                 
            Codes for the Quantum Erasure Channel.

                 
            Phys. Rev. A 56 (pp. 33–38), 1997.

                 
            [arXiv:quant-ph/9610042]



            — which is not much long after the Knill–Laflamme conditions were described in [arXiv:quant-ph/9604034] and so plausibly the original proof of the connection between code distance and erasure errors. The outline is as follows, and applies to error correcting codes of distance $d$ (and applies equally well to qudits of any dimension in place of qubits, using generalised Pauli operators).




• The loss of $d-1$ qubits can be modelled by those qubits being subject to the completely depolarising channel, which in turn can be modelled by those qubits being subject to uniformly random Pauli errors.

• If the locations of those $d-1$ qubits were unknown, this would be fatal. However, as their locations are known, any pair of Pauli errors on those $d-1$ qubits can be distinguished from one another, by appeal to the Knill–Laflamme conditions.

• Therefore, by substituting the erased qubits with qubits in the maximally mixed state and testing for Pauli errors on those $d-1$ qubits specifically (requiring a different correction procedure than you would use for correcting arbitrary Pauli errors, mind you), you can recover the original state.








edited Nov 30 '18 at 11:03

answered Nov 23 '18 at 22:04 by Niel de Beaudrap

• N.B. If you've upvoted my answer, you should consider upvoting Felix Huber's answer as well, for having identified the original proof. – Niel de Beaudrap, Nov 30 '18 at 11:26


            14













What we can easily prove is that there's no smaller non-degenerate code.

In a non-degenerate code, you have to have the 2 logical states of the qubit, and you have to have a distinct state for each possible error to map each logical state into. So, let's say you had a 5-qubit code, with the two logical states $|0_L\rangle$ and $|1_L\rangle$. The set of possible single-qubit errors is $X_1,X_2,\ldots,X_5,Y_1,Y_2,\ldots,Y_5,Z_1,Z_2,\ldots,Z_5$, and it means that all the states
$$
|0_L\rangle,|1_L\rangle,X_1|0_L\rangle,X_1|1_L\rangle,X_2|0_L\rangle,\ldots
$$
must map to orthogonal states.

If we apply this argument in general, it shows us that we need
$$
2+2\times(3n)
$$
distinct states. But, for $n$ qubits, the maximum number of mutually orthogonal states is $2^n$. So, for a non-degenerate error correcting code of distance 3 (i.e. correcting at least one error) or greater, we need
$$
2^n\geq 2(3n+1).
$$
This is called the Quantum Hamming Bound. You can easily check that it holds for all $n\geq 5$, but not for $n<5$. Indeed, for $n=5$ the inequality is an equality, and we call the corresponding 5-qubit code the perfect code as a result.
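The final inequality is easy to verify numerically; a minimal sketch (the function name is mine, purely illustrative):

```python
# Quantum Hamming bound for a non-degenerate distance-3 code encoding one
# qubit: the 2^n-dimensional Hilbert space must contain 2(3n + 1) mutually
# orthogonal states.
def hamming_bound_holds(n):
    return 2**n >= 2 * (3 * n + 1)

satisfying = [n for n in range(1, 9) if hamming_bound_holds(n)]
print(satisfying)             # [5, 6, 7, 8]: the bound first holds at n = 5
print(2**5, 2 * (3 * 5 + 1))  # 32 32: tight at n = 5, the perfect code
```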






• Can't you prove this by no-cloning for any code, without invoking the Hamming bound? – Norbert Schuch, Nov 23 '18 at 23:12

• @NorbertSchuch the only proof I know involving cloning just shows that an $n$-qubit code cannot correct for $n/2$ or more errors. If you know another construction, I'd be very happy to learn it! – DaftWullie, Nov 24 '18 at 6:17

• Ah, I see that's the point of @NieldeBeaudrap's answer. Cool :) – DaftWullie, Nov 24 '18 at 6:56

• Thought that was a standard argument :-o – Norbert Schuch, Nov 24 '18 at 12:11
edited Nov 26 '18 at 7:49

answered Nov 23 '18 at 13:17 by DaftWullie
            8













As a complement to the other answer, I am going to add the general quantum Hamming bound for non-degenerate quantum error correction codes. The mathematical formulation of the bound is
\begin{equation}
2^{n-k}\geq\sum_{j=0}^t\binom{n}{j}3^j,
\end{equation}
where $n$ is the number of qubits that form the codewords, $k$ is the number of information qubits that are encoded (so that they are protected from decoherence), and $t$ is the number of errors corrected by the code. As $t$ is related to the distance by $t = \lfloor\frac{d-1}{2}\rfloor$, such a non-degenerate quantum code is an $[[n,k,d]]$ quantum error correction code. This bound is obtained by a sphere-packing-like argument: the $2^n$-dimensional Hilbert space is partitioned into $2^{n-k}$ subspaces, each distinguished by the measured syndrome; one error is assigned to each syndrome, and the recovery operation inverts the error associated with the measured syndrome. That is why the total number of errors corrected by a non-degenerate quantum code must be less than or equal to the number of partitions given by the syndrome measurement.

However, degeneracy is a property of quantum error correction codes that implies the existence of equivalence classes among the errors that can affect the transmitted codewords. This means that there are errors whose effect on the transmitted codewords is the same while sharing the same syndrome. Those classes of degenerate errors are corrected via the same recovery operation, and so more errors than expected can be corrected. That is why it is not known whether the quantum Hamming bound holds for degenerate error correction codes, as more errors than partitions can be corrected this way. Please refer to this question for some information about the violation of the quantum Hamming bound.
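As a quick sanity check of the general bound (a sketch; the function name is mine, purely illustrative), one can search for the smallest $n$ that encodes $k=1$ qubit while correcting $t=1$ error:

```python
from math import comb

# General non-degenerate quantum Hamming bound:
#   2^(n-k) >= sum_{j=0}^{t} C(n, j) * 3^j
def quantum_hamming_bound(n, k, t):
    return 2**(n - k) >= sum(comb(n, j) * 3**j for j in range(t + 1))

# Smallest n encoding k = 1 qubit and correcting t = 1 error:
n_min = next(n for n in range(1, 30) if quantum_hamming_bound(n, 1, 1))
print(n_min)  # 5 (with equality: 2^4 = 1 + 3*5 = 16, the perfect code)
```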






answered Nov 23 '18 at 14:45 by Josu Etxezarreta Martinez























                    4













I wanted to add a short comment on the earliest reference. I believe this was shown already a bit earlier, in Section 5.2 of

A Theory of Quantum Error-Correcting Codes
Emanuel Knill, Raymond Laflamme
https://arxiv.org/abs/quant-ph/9604034

where the specific result is:

Theorem 5.1. A $(2^r,k)$ $e$-error-correcting quantum code must satisfy $r \geqslant 4e + \lceil \log k \rceil$.

Here, an $(N,K)$ code is an embedding of a $K$-dimensional subspace into an $N$-dimensional system; it is an $e$-error-correcting code if the system decomposes as a tensor product of qubits, and the code is capable of correcting errors of weight $e$.
In particular, a $(2^n, 2^k)$ $e$-error-correcting code is what we would now describe as an $[\![n,k,2e{+}1]\!]$ code. Theorem 5.1 then allows us to prove that for $k \geqslant 1$ and an odd integer $d \geqslant 3$, an $[\![n,k,d]\!]$ code must satisfy
$$
\begin{aligned}
n \;&\geqslant\; 4\bigl\lceil\tfrac{d-1}{2}\bigr\rceil + \lceil \log 2^k \rceil
\\[1ex]&\geqslant\; \bigl\lceil 4 \cdot \tfrac{d-1}{2} \bigr\rceil + \lceil k \rceil
\\[1ex]&=\; 2d - 2 + k \;\geqslant\; 6 - 2 + 1 \;=\; 5.
\end{aligned}
$$

(N.B. There is a peculiarity with the dates here: the arXiv submission of the above paper is April 1996, a couple of months earlier than the Grassl, Beth, and Pellizzari paper submitted in October 1996. However, the date below the title in the pdf states a year earlier, April 1995.)

As an alternative proof, I could imagine (but haven't tested yet) that simply solving for a weight distribution that satisfies the MacWilliams identities should also suffice. Such a strategy is indeed used in

Quantum MacWilliams Identities
Peter Shor, Raymond Laflamme
https://arxiv.org/abs/quant-ph/9610040

to show that no degenerate code on five qubits exists that can correct any single error.
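The chain of inequalities can be spot-checked numerically under the same specialisation $K = 2^k$, $d = 2e+1$ (a sketch; the function name is mine):

```python
# Theorem 5.1 lower bound, r >= 4e + ceil(log K), specialised to K = 2^k
# logical qubits and odd distance d = 2e + 1, so the bound reads 4e + k.
def min_code_length(d, k):
    e = (d - 1) // 2
    return 4 * e + k

# Agrees with the simplified form 2d - 2 + k derived above:
assert all(min_code_length(d, 1) == 2 * d - 2 + 1 for d in (3, 5, 7))
print(min_code_length(3, 1))  # 5: at least five qubits for d = 3, k = 1
```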






                    share|improve this answer











                    $endgroup$













                    • $begingroup$
                      Excellent reference, thanks! I didn't know the Knill--Laflamme paper well enough to know that the lower bound of 5 was there as well.
                      $endgroup$
                      – Niel de Beaudrap
                      Nov 30 '18 at 10:51










                    • $begingroup$
                      Thanks for editing! About the lower bound, it seems they don't address that five qubits are needed, but only that such code must necessarily be non-degenerate.
                      $endgroup$
                      – Felix Huber
                      Nov 30 '18 at 11:05












                    • $begingroup$
                      As a side not, from the quantum Singleton bound also $n=5$ follows for the smallest code being able to correct any single errors. In this case, no-cloning is not required (as $dleq n/2+1$ already), and the bound follows from subadditivity of the von Neumann entropy. C.f. Section 7.8.3 in Preskill's lecture notes, theory.caltech.edu/people/preskill/ph229/notes/chap7.pdf
                      $endgroup$
                      – Felix Huber
                      Nov 30 '18 at 11:15












                    • $begingroup$
                      Unless I badly misread that Section, it seems to me that they show that no error correcting code exists for $r leqslant 4$; it seems clear that this also follows from Theorem 5.1 as well. None of their terminology suggests that their result is special to non-degenerate codes.
                      $endgroup$
                      – Niel de Beaudrap
                      Nov 30 '18 at 11:17










                    • $begingroup$
                      Sorry for the confusion. My side-comment was referring to the Quantum MacWilliams identity paper: there it was only shown that a single-error correcting five qubit code must be pure/non-degenerate. Section 5.2 in the Knill-Laflamme paper ("a theory of QECC..), as they point out, general.
                      $endgroup$
                      – Felix Huber
                      Nov 30 '18 at 11:21


















                    4












                    $begingroup$

                    I wanted to add a short comment to the earliest reference. I believe this was shown already a bit earlier in Section 5.2 of



                    A Theory of Quantum Error-Correcting Codes
                    Emanuel Knill, Raymond Laflamme
                    https://arxiv.org/abs/quant-ph/9604034


                    where the specific result is:




                    Theorem 5.1. A $(2^r,k)$ $e$-error-correcting quantum code must satisfy $r geqslant 4e + lceil log k rceil$.




                    Here, an $(N,K)$ code is an embedding of a $K$-dimensional subspace into an $N$-dimensional system; it is an $e$-error-correcting code if the system decomposes as a tensor product of qubits, and the code is capable of correcting errors of weight $e$.
                    In particular, a $(2^n, 2^k)$ $e$-error-correcting code is what we would now describe as an $[![n,k,2e:!{+}1]!]$ code. Theorem 5.1 then allows us to prove that for $k geqslant 1$ and an odd integer $d geqslant 3$, an $[![n,k,d]!]$ code must satisfy
                    $$
                    begin{aligned}
                    n
                    ;&geqslant;
                    4bigllceiltfrac{d-1}{2}bigrrceil + lceil log 2^k rceil
                    \[1ex]&geqslant;
                    bigllceil 4 cdot tfrac{d-1}{2} bigrrceil + lceil k rceil
                    \[1ex]&=;
                    2d - 2 + k
                    ;geqslant;
                    6 - 2 + 1
                    ;=;
                    5.
                    end{aligned}
                    $$



                    (N.B.
                    There is a peculiarity with the dates here: the arxiv submission of above paper is April 1996, a couple of months earlier than Grassl, Beth, and Pellizzari paper submitted in Oct 1996. However, the date below the title in the pdf states a year earlier, April 1995.)



                    As an alternative proof, I could imagine (but haven't tested yet) that simply solving for a weight distribution that satisfies the Mac-Williams Identities should also suffice. Such a strategy is indeed used



                    Quantum MacWilliams Identities
                    Peter Shor, Raymond Laflamme
                    https://arxiv.org/abs/quant-ph/9610040


                    to show that no degenerate code on five qubits exists that can correct any single errors.






                    share|improve this answer











                    $endgroup$













                    $begingroup$

                    I wanted to add a short comment on the earliest reference. I believe this was shown already a bit earlier, in Section 5.2 of

                    A Theory of Quantum Error-Correcting Codes
                    Emanuel Knill, Raymond Laflamme
                    https://arxiv.org/abs/quant-ph/9604034

                    where the specific result is:

                    Theorem 5.1. A $(2^r,k)$ $e$-error-correcting quantum code must satisfy $r \geqslant 4e + \lceil \log k \rceil$.

                    Here, an $(N,K)$ code is an embedding of a $K$-dimensional subspace into an $N$-dimensional system; it is an $e$-error-correcting code if the system decomposes as a tensor product of qubits and the code is capable of correcting errors of weight $e$.
                    In particular, a $(2^n, 2^k)$ $e$-error-correcting code is what we would now describe as an $[\![n,k,2e{+}1]\!]$ code. Theorem 5.1 then allows us to prove that for $k \geqslant 1$ and an odd integer $d \geqslant 3$, an $[\![n,k,d]\!]$ code must satisfy
                    $$
                    \begin{aligned}
                    n
                    \;&\geqslant\;
                    4\bigl\lceil\tfrac{d-1}{2}\bigr\rceil + \lceil \log 2^k \rceil
                    \\[1ex]&\geqslant\;
                    \bigl\lceil 4 \cdot \tfrac{d-1}{2} \bigr\rceil + \lceil k \rceil
                    \\[1ex]&=\;
                    2d - 2 + k
                    \;\geqslant\;
                    6 - 2 + 1
                    \;=\;
                    5.
                    \end{aligned}
                    $$

                    (N.B.
                    There is a peculiarity with the dates here: the arXiv submission of the above paper is April 1996, a couple of months earlier than the Grassl, Beth, and Pellizzari paper submitted in October 1996. However, the date below the title in the PDF states a year earlier, April 1995.)

                    As an alternative proof, I could imagine (but haven't tested yet) that simply solving for a weight distribution that satisfies the MacWilliams identities should also suffice. Such a strategy is indeed used in

                    Quantum MacWilliams Identities
                    Peter Shor, Raymond Laflamme
                    https://arxiv.org/abs/quant-ph/9610040

                    to show that no degenerate code on five qubits exists that can correct arbitrary single errors.

                    share|improve this answer

                    $endgroup$
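                    The final chain of inequalities above is easy to sanity-check numerically. The following is only an illustrative sketch (the function name is mine, not from either paper), evaluating the Theorem 5.1 bound $n \geqslant 4\lceil (d-1)/2 \rceil + \lceil \log_2 2^k \rceil$ for small parameters:

```python
import math

def knill_laflamme_min_n(d: int, k: int) -> int:
    """Smallest n allowed by Theorem 5.1 for an [[n, k, d]] code:
    n >= 4*ceil((d-1)/2) + ceil(log2(2**k))."""
    e = math.ceil((d - 1) / 2)          # number of correctable errors
    return 4 * e + math.ceil(math.log2(2 ** k))

# Smallest code encoding one qubit (k = 1) that corrects any
# single-qubit error (d = 3):
print(knill_laflamme_min_n(3, 1))  # 5
```

                    For odd $d$ this reproduces the closed form $n \geqslant 2d - 2 + k$ derived above, e.g. $n \geqslant 9$ for $d = 5$, $k = 1$.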
                    edited Dec 1 '18 at 16:14 by Niel de Beaudrap

                    answered Nov 30 '18 at 10:26 by Felix Huber

                    • $begingroup$
                      Excellent reference, thanks! I didn't know the Knill–Laflamme paper well enough to know that the lower bound of 5 was there as well.
                      $endgroup$
                      – Niel de Beaudrap
                      Nov 30 '18 at 10:51

                    • $begingroup$
                      Thanks for editing! About the lower bound, it seems they don't address that five qubits are needed, but only that such a code must necessarily be non-degenerate.
                      $endgroup$
                      – Felix Huber
                      Nov 30 '18 at 11:05

                    • $begingroup$
                      As a side note, from the quantum Singleton bound $n=5$ also follows for the smallest code able to correct arbitrary single errors. In this case, no-cloning is not required (as $d \leq n/2+1$ already), and the bound follows from subadditivity of the von Neumann entropy. Cf. Section 7.8.3 in Preskill's lecture notes, theory.caltech.edu/people/preskill/ph229/notes/chap7.pdf
                      $endgroup$
                      – Felix Huber
                      Nov 30 '18 at 11:15

                    • $begingroup$
                      Unless I badly misread that Section, it seems to me that they show that no error correcting code exists for $r \leqslant 4$; it seems clear that this also follows from Theorem 5.1 as well. None of their terminology suggests that their result is special to non-degenerate codes.
                      $endgroup$
                      – Niel de Beaudrap
                      Nov 30 '18 at 11:17

                    • $begingroup$
                      Sorry for the confusion. My side-comment was referring to the Quantum MacWilliams Identities paper: there it was only shown that a single-error correcting five qubit code must be pure/non-degenerate. Section 5.2 in the Knill–Laflamme paper ("A Theory of QECC…") is, as they point out, general.
                      $endgroup$
                      – Felix Huber
                      Nov 30 '18 at 11:21
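                    The quantum Singleton bound mentioned in these comments can equivalently be written as $n - k \geqslant 2(d-1)$, which yields the same minimum directly. A tiny illustrative check (the function name is mine):

```python
def singleton_min_n(k: int, d: int) -> int:
    """Smallest n allowed by the quantum Singleton bound n - k >= 2(d - 1)."""
    return k + 2 * (d - 1)

# The five-qubit code ([[5,1,3]]) saturates the bound:
print(singleton_min_n(1, 3))  # 5
```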