The nature of mathematics

One particularly contentious question in the foundations of both mathematics and physics is whether mathematics is discovered or invented. Related to this is whether mathematics is the way the world actually is or if it is simply a way in which we can model the world.

This is a particularly difficult question to answer since it is quite clear that there are physically impossible situations that can be spoken about in mathematics. The best example I can think of relates directly to some things I bitched about on this blog in recent weeks concerning Birkhoff's theorem (which I won't rehash in the interest of brevity): taking n copies of a quantum channel should approach something that can be approximated by unitaries as n approaches infinity, i.e. in the asymptotic limit, n copies of a channel have a unitary representation (roughly speaking, and if the theorem is correct). But from a physical point of view this is ridiculous, since it is literally impossible for an infinite number of channels (or anything else, for that matter) to physically exist. Yet whole branches of mathematics are devoted to working with infinity (infinite sets, for example).

Clearly, some mathematical results that were originally thought to be entirely abstract have much later found application in the physical world. But it could be argued that concrete mathematical analyses of infinite things could never, by their very definition, find an application in the physical world. Does this mean that mathematics is inherently discovered, then? Or could it be taken as "evidence" for the Many-Worlds Interpretation (MWI), i.e. that the only way to have a physically infinite result is to have a physically infinite number of universes?

A quirky mathematical problem in need of explanation

In my travails with the QIP Program Committee, one argument that went back and forth was whether or not the so-called Gell-Mann matrices (one possible representation of the generators of SU(3)), when used as Kraus operators for a quantum channel, are extremal (we're only concerned with extremality here, not unitality). Landau and Streater proved that the channel given by a set of Kraus operators A_{i} is extremal if and only if the set

\{A_{k}^{\dagger}A_{l}\}_{k,l = 1, \ldots, N}

is linearly independent. I argued they were extremal and Michael Ben-Or argued they were not. Oddly, it appears we are somehow both correct (which can only mean there's something else going on here that one or both of us missed). So I present here our two arguments and solicit your comments on this mathematical quandary.

Argument 1: They are extremal
I proved this in a straightforward way – I used Mathematica. I tried it several ways, but just to be sure I solicited suggestions in my previous blog post on how to tell Mathematica to check exactly what Landau and Streater proposed. The result indicates that the operators are, indeed, linearly independent. Click here to see a PDF export of the Mathematica notebook that includes both the input and the output (I don't have a premium WordPress account so I can't upload the actual notebook, but you can copy and paste from this into Mathematica if you want to tweak it yourself).

Argument 2: They are not extremal
Michael's approach was to show that the channel constructed with the Gell-Mann matrices as Kraus operators is a convex combination of three other channels and thus can't be extremal. He also crunched his numbers with Mathematica, and here is a link to the PDF file of his notebook.

In my estimation, if Michael is correct and I didn't make any errors in translating things to Mathematica, then there's something wrong with Landau and Streater's criterion for extremality. If anyone has any thoughts on this, I am eager to hear them.

Stumped by Mathematica – anyone?

I need to determine whether a large set of matrices is linearly independent – that is, whether the matrices are linearly independent of one another (not whether the rows and columns within each individual matrix are). Anyone have any ideas on how to do this efficiently in Mathematica?
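The best I've come up with so far (a toy sketch – I have no idea whether it scales well to a really large set) is to flatten each matrix into a vector, stack the vectors into one big matrix, and compare its rank to the number of matrices:

(* A set of matrices is linearly independent iff their flattened vectors are. *)
(* Made-up example list; swap in your own matrices. *)
mats = {PauliMatrix[1], PauliMatrix[2], PauliMatrix[3], IdentityMatrix[2]};
vecs = Flatten /@ mats;            (* each matrix becomes one row *)
MatrixRank[vecs] == Length[vecs]   (* True iff the set is independent *)

If the rank equals the number of matrices, the set is independent. Better (i.e. faster) suggestions are most welcome.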

QIP – Final Post (Thursday & Friday recap)

Well, I've been so busy with the conference and squeezing in a bit of sight-seeing that I am only now getting around to posting the recaps of Thursday and Friday. I would say that my first QIP has been a major success for me personally, as I learned an awful lot, including the very important differences between physicists and computer scientists. I have a much better appreciation for how the quantum information community as a whole operates, despite my many years working with APS GQI. Anyway, on to the talks.

Thursday morning's session began with an awesome talk by Aram Harrow (see arxiv.org:0811.3171v3) on work he performed with Avinatan Hassidim and Seth Lloyd on a quantum algorithm for solving linear systems of equations. Now this is something just about anyone with an undergraduate background in engineering, math, physics, computer science, maybe even chemistry or economics, could wrap their head around. Solving linear systems of equations – linear algebra – is one of the most widely used bits of mathematics. For instance, it's the basis of finite element analysis, which forms the backbone of structural engineering. In any case, the idea is to solve the equation A\bar{x}=\bar{b} (by the way, I highly recommend watching the video if you can, since many of his early slides were hilarious). Here we're assuming that A is an N \times N matrix. If A happens not to be Hermitian, then we have

\left(\begin{array}{cc} 0 & A \\ A^{\dag} & 0\end{array}\right)\cdot\left(\begin{array}{c}0 \\ \bar{x}\end{array}\right) = \left(\begin{array}{c} \bar{b} \\ 0 \end{array}\right).
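As a purely classical sanity check of this embedding (a made-up 2 \times 2 example of my own, not from the talk), solving the enlarged Hermitian system does return \bar{x} in the bottom block:

(* Toy check of the Hermitian embedding: build the block matrix, *)
(* solve the enlarged system, and read x off the bottom block.   *)
a = {{1, 2}, {0, 1}};    (* a non-Hermitian example *)
b = {1, 1};
big = ArrayFlatten[{{0, a}, {ConjugateTranspose[a], 0}}];
sol = LinearSolve[big, Join[b, {0, 0}]];
x = sol[[3 ;; 4]];
a.x == b                 (* True *)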

Of course, there are endless classical algorithms out there to solve these types of things (just think back to your linear algebra class or google it). Essentially the goal is to invert A in order to find \bar{x}. The run-times of these algorithms are typically expressed in terms of the condition number \kappa of A. So, the quantum algorithm is roughly as follows:

|b\rangle = \sum_{i=1}^{N} b_{i} |i\rangle is a unit vector. Suppose A is sparse and that \kappa^{-1}I \le |A| \le I. Then we have

|x^{\prime}\rangle = A^{-1}|b\rangle where |x\rangle = \frac{|x^{\prime}\rangle}{\sqrt{\langle x^{\prime}|x^{\prime}\rangle}}.

What this algorithm produces is |x\rangle and \langle x^{\prime}|x^{\prime}\rangle, up to some error \epsilon, in a much better run-time than the classical algorithms. Specifically, its run-time is \mathcal{O}(\kappa t_{B} + \log(N)s^{2}\kappa^{2}/\epsilon) compared to \mathcal{O}(\min\{N^{2.376}, Ns\sqrt{\kappa}, (s/\epsilon)^{\mathcal{O}(\sqrt{\kappa})}, \ldots\}) classically (assuming I'm reading my notes correctly).

The advantages of this method come from the relatively small \kappa and the sparseness of A. The run-time really can't be improved by much, so this is pretty much the limit on the optimality of the problem (from what I gathered, anyway). The algorithm is based on two key primitives that invert the action of A: Hamiltonian simulation and phase estimation. Doing these coherently gives either a |0\rangle (destruction) or a |1\rangle result. The best lower bound on the run-time with this method is \sqrt{\kappa}.

Now, comparing this to finite element models: this quantum algorithm can apparently be tweaked to do a bit better than the best finite element methods out there, e.g. it can solve a thermal conductivity problem in 36 spatial dimensions. There are in fact several variants of the quantum algorithm that can be compared against Monte Carlo methods. Interestingly enough, if the run-time could be improved to \kappa^{\frac{1-\delta}{2}} \cdot \textrm{poly}\log N, it would apparently imply that BQP = PSPACE, though I'll readily admit I'm not entirely sure I understand why (but I trust they know what they're talking about). In addition, they also apparently prove that the problem is BQP-hard, though again, I'm not sure I understand the details.

One thing they're looking for is more applications. Luckily, I used to be a mechanical engineer who worked on fluid mechanics, so I may have some wicked-hard real-world problems that would make a good test for the algorithm. For instance, one interesting question Aram suggested is how costly it would be to do high-energy theory with low-energy states. Anyway, on to the next few talks, but congrats to Aram on an excellent talk – probably my favorite of the workshop.

Stefano Chesi went next and gave a talk about self-correcting quantum memories coupled to a bath, which is supposed to improve on the Kitaev model; the coupling to the bath was treated in the weak-coupling limit. Further details on this work can be found at arxiv.org:0908.4264. The next talk was by Robert Koenig on quantum computation with Turaev-Viro codes. It was basically cool topological stuff – lots of neat topological diagrams, including one I'm calling the upside-down pants diagram. Their paper didn't have a picture of it, but let's just put it this way: Todd Brun, who was sitting next to me, noted that it seemed to indicate that crossing one's legs (which I was doing at the time) is like performing a unitary transformation on a pair of pants (hey, I'm a talented guy). Anyway, there was a lot of cool category and graph theory stuff, and it's a really neat approach to coding. I can't say I understood every last detail, but it was a cool talk nevertheless.

Mark Howard followed that with some more topology/graph-theory material relating to stabilizer codes. Basically it was about trying to enable universal quantum computation (UQC) or, put another way, asking how close to that ideal we can get – how much of the quantum computing "power" can we obtain? His work uses a Clifford polytope with a nifty octahedral lattice (while the geek in me should have been thinking it looked like D&D dice, I've been playing with Zome Tools recently and kept thinking it looked like something I had built with them). One of his results was that UQC for single-qubit gates reduces to UQC for single-qubit states. Given a noisy operation that violates some criterion, for example, his approach can tell you which measurement to perform. At the moment he has only tackled unital noise, but the non-unital case would be useful.

Hui Khoon Ng went next with a simple approach to approximate quantum error correction (AQEC). Prior work has mostly been on perfect correction of a CP noise channel, asking how far some proportionality factor \varepsilon is from trace-preserving (in fact she later defined "perfect" QEC). So what is AQEC exactly? It uses a smaller system to encode the information while not sacrificing much resilience against noise. You can think of AQEC as an optimization problem; semi-definite programming methods are one example, though they give only an average performance. What she was interested in was the absolute worst-case performance, so that she could come up with something that works for all channels. What AQEC does is perturb the perfect error-correction conditions by tossing an extra term into them. So, for example, take a look at this:

PE_{i}^{\dagger}\varepsilon(P)^{1/2}E_{j}P=\beta_{ij}P + \Delta_{ij}, where \eta_{P} = \frac{1}{2}(1-t_{\textrm{min}}), the t_{\textrm{min}} are the minimum eigenvalues of the E_{j}, and the \Delta_{ij} are traceless.
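To see the structure of that condition, here's a toy decomposition of my own (the P and E_{i} below are made up, not from her talk): anything of the form PMP can be split into a piece proportional to P plus a traceless remainder, which is exactly the form above.

(* Split P.E1†.E2.P into beta*P plus a traceless piece delta. *)
p = DiagonalMatrix[{1, 1, 0, 0}];            (* toy code-space projector *)
e1 = RandomComplex[{-1 - I, 1 + I}, {4, 4}];
e2 = RandomComplex[{-1 - I, 1 + I}, {4, 4}];
m = p.ConjugateTranspose[e1].e2.p;
beta = Tr[m]/Tr[p];                          (* this choice forces Tr[delta] = 0 *)
delta = m - beta p;
Chop[Tr[delta]]                              (* 0, i.e. delta is traceless *)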

In any case, she obtained a near-optimal recovery map for all channels with the optimality based on the worst-case fidelity (i.e. she used fidelity to identify her worst-case performance scenario for AQEC).

Following that, Roderich Moessner gave a talk on the statistical mechanics of disordered quantum optimization. Basically – and I find this fascinating – P \neq NP apparently implies the existence of glassy physical systems. Now that's wild. The main crux of the talk was more graph-theoretic material: applying graph theory to physical systems, Moessner finds, for example, that the number of non-zero-energy states depends on the properties of the graph. The punchline of his talk was that their methods provide improved lower bounds on the SAT-UNSAT transition (more complexity stuff).

Julia Kempe finished up the afternoon with a talk on a quantum Lovász local lemma. A somewhat minor point I noticed was her "definition" of independent events, which I'm still not entirely sure I agree with (it's the cosmologist in me talking). She "defines" them by saying that, given two independent events, there is a non-zero probability that each occurs at some point. There's something vaguely unsatisfying about this from a cosmological point of view, but I am also being cautious by putting things in quotes since her paper might be more rigorous. At any rate, her motivation, similar to the previous talk, revolved around k-SAT. Unfortunately, sitting in the airport almost two days later, I can't read the rest of my own chicken scratch on the talk. My apologies to both readers and Julia on this.

The conference dinner was held at a lakeside restaurant. While it was very good, those of us who were still a bit jetlagged (like the poor Japanese guys next to me who had come into town a bit later than me) started dozing: the main dish wasn't served until 9 and dessert not until 11. In the midst of this, Gilles Brassard (who has shed his beard) gave a nice historical talk on the BB84 protocol. The gestation of the idea actually dates to 1970 and a guy whose name I have since forgotten (did I mention I was exhausted?). It was a great talk and included plenty of discussion about the failures to get the idea published. Basically, Gilles said that they couldn't convince anyone until they built an actual prototype.

Friday morning began with a great talk by Marcin Pawlowski on the principle of information causality. First, let me say that Marcin's talk was by far one of the funniest at the conference. He began with an awesome overview of just what a physical principle is (this is so interesting from a foundational point of view). In relation to the idea of "reasonableness" in communication between Alice and Bob, and in relation to Tsirelson's bound, he really tried to explain why the bound is what it is. He then began talking about mutual information and I SWEAR he was inches away from saying what I've said in my poster – that mutual information is a statement of the second law, i.e. H(A:B) \ge 0 and \Delta S \ge 0 (for isolated systems, where S is the usual thermodynamic entropy) are really the same thing. Plus he brought causality into it, which is another thing I've been ranting about for years (not that I'm sold on causality being a necessary condition, just that I know it can be related to this). As for information causality itself, the principle is that the sender decides what information is transmitted. I think I could relate this to my CTC communication protocol too, but I'll have to think about it some more.

Sergio Boixo and Markus Mueller followed with pretty good foundational talks, though I didn’t take too many notes at this point. Both used the CHSH inequalities (which is good stuff) and Boixo’s work assumed that quantum mechanics is at least good locally. His closing advice was “if you break something, break it big.”

I, alas, skipped the rest of the talks due to a melted brain and a stubborn insistence on finding Liechtenstein – which I did (I'll post pictures eventually). I even found a place to stamp my passport (since there are no official border crossings between Switzerland and Liechtenstein) thanks to a friendly truck driver whose wife is from Seattle. I also found the graveyard where the famous astronomer Fritz Zwicky is buried, but I don't think I found the actual grave because there were several Zwickys of about the right age! Every other name in the graveyard (which was the only one in town) was "Zwicky" and the local drug store was "Zwicky's." So I apparently missed the man himself, but saw plenty of his relatives. I then drove up into the mountains until the road I was on was closed due to weather higher up. But I did see some wicked cool stuff up there and will – again – have pictures soon.

So, that was QIP 2010. It was a great experience despite my problems with the Program Committee and I thoroughly enjoyed Switzerland (though I need to bone up on my German for my next visit).

Road trip!

I just got back from the workshop dinner and am going to sleep. But I'm taking a road trip again tomorrow – Liechtenstein (I WILL find it this time) and Fritz Zwicky's grave in Mollis. If you're at the conference, reading this, and want to join me, grab me at the morning session.

QIP – Post 4 (Wednesday recap)

Since it’s already Thursday, I want to give a recap of Wednesday.  I got back to my apartment rather late last night and didn’t get a chance to do the recap (more on why I was late in a bit).

The morning was kicked off by a pretty cool talk given by André Chailloux on quantum coin flipping, motivated by the idea that Alice and Bob don't trust each other (there must be an opportunity for a joke in here somewhere…). The interesting thing is that their construction is entirely classical except for the presence of a black box. Given all the other interesting aspects of the results, this seemed the most interesting to me since it appears to say something about how the universe works, i.e. just how that black box interacts with the classical parts is, in essence, the quantum-classical boundary. At any rate, they have a strong coin-flipping protocol that matches Kitaev's bound with a probability of cheating of z=2-\sqrt{2}. Their black box is a bit like a back-and-forth teleportation scheme, in a way. Their methods used point games (as in points on a graph) as an example, and what they were doing had many elements of a directed graph (so there are connections to some of Rob Spekkens' work in here – in fact they interpret their results in terms of the Spekkens-Rudolph protocol). In the midst of the talk, young André had a "Jan-Åke Larsson" moment when his girlfriend Skyped him (Jan-Åke's wife did the same thing in the middle of an APS talk he gave in Denver several years ago).

Andreas Winter (who shall forever after be referred to as "The Chairman" – you had to be there) gave a nice talk on highly entangled states that, oddly, offer absolutely no secrecy. Aside from some computer problems (note that all the non-Macs have had trouble with the display connection), The Chairman's main question was whether there is a "bound" key, and the answer is that, yes, there is an asymptotically bound one. His results were broken into both symmetric (bosonic) and antisymmetric (fermionic) parts, and he focused his talk on the latter. He had some great pictures tossed in (including a platypus – gotta love the platypus).

Thomas Vidick followed with a talk about improved extractors for bounded quantum storage which is another interesting cryptography thing related to adversaries attempting to extract keys (oh those nasty adversaries). He was able to prove the security of his work using a proof by contradiction (I love seeing undergraduate stuff pop up in these talks).

I didn't write down notes on most of Carolin Lunemann's talk, but it was more cryptography stuff and sounded pretty cool. Stephanie Wehner followed with another talk on two-party cryptography (no Eve is present – essentially, do Alice and Bob trust one another?). In such a scheme, one might imagine Bob as an ATM and Alice as the customer. On the one hand, Bob (the ATM) could be malicious, i.e. someone may have hacked it or maybe it's not even a real ATM, so Alice has reason to distrust Bob. On the other hand, Alice might not be who she claims to be, so Bob has reason to distrust Alice as well. Wehner's group essentially attacks this really fascinating problem via another quantum bounded-storage model.

Vincent Nesme finished up the morning with a talk on unitarity and causality (yeah, now we're getting somewhere, heh heh heh – or not, depending on whether we're causal – ok, nevermind). In any case, his talk contrasted a structuralist approach with an axiomatic one and, in fact, reconciled the two approaches to quantum cellular automata (QCA). In the process he invoked index theory, which is something I know almost nothing about. But I think I want to know more, since he had an interesting take on the relation between causality and graphs. I need to ponder it some more before commenting in detail. The one thing that confused me a bit was his non-symmetric light cone. I wasn't sure what his horizontal axis was since, in ordinary Minkowski space, the light cone shouldn't be non-symmetric, but he didn't seem to assign a different meaning to that axis and he wasn't working in curved space (at least he didn't indicate he was). Anyway, the quantum case had a different (more symmetric) light cone, and he noted that this was the cost of apparently superluminal transmission of information.

Now, in the afternoon, Suze Gildert and I headed out (in a black Mercedes E-Class!) to find four things – at least one of Einstein's houses, the grave of James Joyce, the grave of Wolfgang Pauli, and the nation of Liechtenstein. We found all but Liechtenstein. If I were any nuttier than I am, I would SWEAR it doesn't exist (a conspiracy of cartographers?). At the very least, the Swiss don't really seem to acknowledge its importance as a country. It turns out we turned around 4 miles from the border and never once saw a sign indicating we were even close. Avis didn't give me a map, and the GPS system in the car was surprisingly bad considering it was a Mercedes. But I refuse to leave until I've been to Liechtenstein, so I might skip some of tomorrow's sessions in order to get there. Regarding the others, I will have photos in due course. Pauli's grave was pretty cool in the sense that it probably gets very few visitors. In order to find it, I had to ask a couple of guys who worked for a local floral shop and were putting flowers on a recent grave. Neither spoke English and one (oddly, the older one) had never heard of Pauli, but the younger one knew exactly what I was talking about (dumb American that I am – I really do wish the US did a better job of language education) and led us straight to it. As it happens, Heinz Hopf (of Hopf algebra fame) is buried right next to Pauli. Joyce's grave was very nice too; there's a nice statue of him next to it. Across the street happened to be the world headquarters of FIFA (that's soccer's international governing body). Again, pictures will be forthcoming.

Unfortunately I missed the rump session and thus Todd’s talk on umbrellas and – apparently – quantum animal husbandry. Perhaps Todd can enlighten us by linking to his talk…

The elusive nation, QIP, and more…

I will have a post about today’s QIP talks soon, but it’s presently quite late and I just got back not long ago from a rather amusing side excursion that I will relate here soon as well.

Until then, readers of the QIP stuff should go back to Post 3 and scroll down until they see the red text.

FQXi awards – shameless self-promotion

OK, considering all that has gone wrong with my research in recent years, I figure I ought to have the right to brag at least once. An essay I wrote was awarded a Third Prize by FQXi in the latest essay contest. In the process, I managed to beat out Stephen Wolfram (yes, THAT Stephen Wolfram), among other well-known physicists, mathematicians, and philosophers. The prize comes with a rather nice monetary award and membership in FQXi, which includes a slew of quantum information people along with Nobel Laureates 't Hooft, Weinberg, and Wilczek. I should add that I view this as a huge victory for small colleges like Saint Anselm, outreach organizations like ASNNE, and the Anacapa Society, from which many of these ideas were spawned.

QIP – Post 3

Let me begin by saying that I'm developing a growing dislike of theoretical computer scientists (cue the flaming in three, two, one…). These people don't really seem to be connected to reality. They discuss, with impunity, results for such outrageous things as T = 0 K (any pre-med major I have taught over the years can explain what's wrong with this – and, no, it was not a typo) and various parameters (frequently something nebulous like n) taken to infinity. While there may be admittedly valid mathematical results in these cases, they have little or no connection to reality. Personally, I don't think people should be working on physics problems if they can't distinguish the physical from the non-physical (note that I consider CTCs physical, though they are not necessarily real – there is an important distinction). At any rate, I will have another blog post about something related to this in a moment.

So the first talk was by Ben Reichardt from Waterloo on quantum query complexity using span programs. Monotone span programs are equivalent to linear secret sharing schemes in cryptography. In fact, he was able to derive Grover's search algorithm using these span programs. I'm afraid I don't have much more to say on the issue, though. Ben has a very soothing voice and the room was sweltering, so I had a hard time keeping my eyes open.

David Gross (not the one with the Nobel Prize – the other one, from Hannover) gave a very well-organized talk (there are so few of these) on non-commutative compressed sensing as applied to quantum tomography. Suppose you have some unknown density matrix \rho on \mathbb{C}^{d} with rank r \ll d: can you reconstruct it from relatively few measurements? That was the purpose of this work – to answer questions like that. David gave a great analogue to a pipe organ, in which the number of pipes is d and the number of fingers the organist has is r. One of the questions that arises from the pipe organ analogy is whether we can perform an exact Fourier transform given only partial information. The answer is yes – even in the classical (i.e. pipe organ) case, using classical compressed sensing. So the idea in the quantum case is to recover the density matrix even if the rank r is unknown. In the quantum case this is possible if we know rd\log d randomly chosen Pauli expectation values. Technically the scheme is probabilistic, but it succeeds with a very high probability. David highlighted the difference between physicists and mathematicians by noting that the mathematical proof of the classical case was 50 pages long while the proof of the quantum case was 4.
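Just to give a flavor of the input data (a toy sketch of my own, not David's actual reconstruction scheme), here is what "randomly chosen Pauli expectation values" of a low-rank state look like:

(* Sample random Pauli expectation values Tr[rho.P] of a rank-1 *)
(* (pure) test state on 3 qubits. *)
paulis = {IdentityMatrix[2], PauliMatrix[1], PauliMatrix[2], PauliMatrix[3]};
n = 3; d = 2^n;
psi = Normalize[RandomComplex[{-1 - I, 1 + I}, d]];
rho = Outer[Times, psi, Conjugate[psi]];           (* rank-1 density matrix *)
randomPauli := KroneckerProduct @@ RandomChoice[paulis, n];
Table[Chop[Tr[rho.randomPauli]], {10}]             (* ten random expectations *)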

I missed most of Norbert Schuch's talk while finishing up my poster, but I did manage to catch Dominic Berry's (Caltech) on the query complexity of Hamiltonian simulation. Basically, if you want to implement an N \times N unitary matrix, you need \Omega(N^{2}) gates just to approximate it. In essence this work, done with Andrew Childs, is an extension of Childs' quantum random walk stuff.

Maarten Van den Nest of MPQ Garching gave a talk about simulating quantum computers using probabilistic methods. It was more algorithmic stuff and I can’t say it was overly exciting to me.

I ate lunch with the D-Wave and Google guys – yes, Google is here. Google recently used D-Wave’s chip to do some interesting things and their presentation at QIP centers around using it to, essentially, make Streetview even scarier. Basically they used D-Wave’s chip to train a piece of recognition software to tell the difference between a car and a truck. I’m not kidding. They gave me a demo at lunch using matchbox cars. I might actually get involved in some of this since it uses adiabatic quantum computing, of which I am swiftly becoming a HUGE fan (his Pontiffness welcomed me to the Dark Side).

Philippe Corboz (Queensland) gave a very nicely organized talk (I'm starting to notice things like this) on simulating fermionic lattice models in 2-D using tensor network algorithms. As someone else in the audience pointed out, there's an eerie similarity between some of these methods and categorical quantum mechanics. Anyway, quantum Monte Carlo methods tend to fail for fermionic many-body problems, and this turns out to be a nice alternative since it allows you to approximate sums over exponentially many terms with something polynomial. The ground state is the most interesting, of course, though I'll note that it was Philippe who equated this to T = 0 K (tsk, tsk). The problem with the fermionic case is that you end up with negative probabilities in some cases (the sign problem); tensor networks, however, get around this. They also tackled the 2-D Hubbard model of high-T_{c} superconductors. Tree tensor networks are very good for describing states obeying an area law. The CS people in the audience had no clue about the physics stuff (I really don't think they should be working on physics problems). The computational cost of tensor networks is still high, but the method provides symmetries that can be exploited for useful purposes.
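To illustrate the "exponentially many terms at polynomial cost" trick in its very simplest incarnation – a classical 1-D toy of my own where the contraction happens to be exact, unlike the approximate 2-D quantum versions in the talk – the 1-D Ising partition function is a sum over 2^n spin configurations but collapses to a product of 2 \times 2 transfer matrices:

(* Z = Tr[T^n]: polynomial cost instead of summing 2^n terms. *)
n = 12; beta = 0.5;
t = {{Exp[beta], Exp[-beta]}, {Exp[-beta], Exp[beta]}};   (* transfer matrix *)
zFast = Tr[MatrixPower[t, n]];
zSlow = Total[Exp[beta Total[# RotateLeft[#]]] & /@ Tuples[{-1, 1}, n]];
Abs[zFast - zSlow] < 10^-8 zFast                          (* True *)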

Hari Krovi (UConn) gave a talk on adiabatic quantum optimization failing for random instances of NP-complete problems, though I heard through the grapevine that he wasn't sure he believed his own results (I suppose one could take that to mean he thought the real world might behave a bit differently than the math). He talked a lot about Anderson localization and had to apologize at one point for not being "rigorous" enough (ah, those CS people…).

Correction: I want to apologize to Hari Krovi for the aforementioned statement.  The last line was actually meant to imply that, personally, I thought he was rigorous enough. At the time I was in a particularly virulent mood about the rigor of the CS people since physicists generally recognize that the real world is a messy place (not to offend CS people – I think the differing views are products of the differing disciplines themselves).  Of course, it was apparently a physicist who made the comment that produced Hari’s reply, but I didn’t realize it at the time.  In addition, spreading an unsubstantiated rumor is, of course, wrong.  As a scientist I’m sure he believes his results.  Why else would he present them?

Kristan Temme (Vienna) gave another very well-done talk, this one on quantum Metropolis sampling. Basically they were looking at simulating quantum systems on a universal quantum computer; adiabatic quantum computation is one way to gain access to the states. Anyway, the classical Metropolis algorithm (named after the lead author of the paper) appeared in 1953 and didn't find practical use until the first useful computers arrived. So these guys tried to find a quantum version. In both cases the aim is essentially to simulate a system plus a bath. The classical version used stochastic maps that converge (over many iterations) to the Gibbs state; this improved things by 6-10 orders of magnitude. The quantum case uses CP maps instead, though this introduced a few problems along the way, notably quantum phase transitions that led to imperfections.
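For reference, the classical rule is simple enough to fit in a few lines (a toy two-level example of my own, not the quantum algorithm from the talk):

(* Classical Metropolis sampling on a two-level system: the sample *)
(* frequencies converge to the Gibbs weights Exp[-beta E]/Z. *)
energies = {0., 1.}; beta = 1.;
step[s_] := Module[{t = RandomInteger[{1, 2}]},
  If[RandomReal[] < Min[1, Exp[-beta (energies[[t]] - energies[[s]])]], t, s]];
samples = NestList[step, 1, 100000];
N[Counts[samples]/Length[samples]]   (* roughly <|1 -> 0.73, 2 -> 0.27|> *)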

Finally, Sergey Bravyi gave a talk based on the question: is there a fundamental upper bound on the amount of quantum information that can be stored reliably in a volume of d-dimensional space? The main result was that quantum codes obey the upper bound k \le \frac{cn}{d^{2}} while classical codes obey the weaker bound k \le \frac{cn}{\sqrt{d}}. What's interesting about this is that, usually, the classical bound for things like this is a square root of the quantum one; here it's a fourth root for some reason (they don't know the reason yet). Oddly, somewhere in the middle they seemed to come close to invoking the Cerf-Adami inequalities that I will be presenting a poster on Thursday.

More tomorrow…
