[math-fun] What if Turing/Shannon/Bekenstein were wrong?
I was listening to some of Leonard Susskind's lectures on black holes & information & the fight with Hawking yesterday, and they seemed almost quaint.

The reason is that information theory & computer science are 100% classical. Computer science and information theory are founded on the principle that information can be copied, while the "no-cloning" theorem of quantum mechanics states that quantum states cannot be cloned. These two distinct views of the world cannot be reconciled, except at scales large enough that a quantum system can simulate (with great effort & inefficiency) a classical computer circuit.

Szilard was making significant progress on Maxwell's Demon and information in the 1920's, when quantum mechanics knocked the foundation out from under Maxwell's entire enterprise.

Shannon's theorems re the information-carrying bandwidth are entirely classical, and completely ignore the quantum nature of radio/light waves. Thus, de Broglie tells us that long wavelengths contain many more quanta than short wavelengths. (In fact, the number of quanta is essentially the number of Planck lengths contained in the wavelength.) But Shannon tells us that shorter wavelengths can contain _more_ information than longer wavelengths! I call this the "ultraviolet catastrophe" of information theory.

The Bekenstein argument talks about dropping information into a black hole & calculating the change in size of the black hole that results. But you can't drop "information", per se, into a black hole; you can only drop actual quantum mechanical particles into a black hole. These quantum mechanical particles may be entangled with particles outside the black hole. Thus, the Bekenstein argument re how much classical Shannon "information" can be stored in a black hole is interesting, but ultimately irrelevant.

Shannon information theory ultimately rests on probability theory, which is also quintessentially classical. A proper modern _quantum_ information theory would rest instead on a _quantum_ probability theory in which the standard 2-slit results fall out naturally. The two articles below re evading the "diffraction limit" with quantum entangled photons demonstrate once again how far from reality our standard classical viewpoint really is.

The recent discussion here re P=NP and quantum systems misses an important point: the question of whether P=NP (a classical notion in a classical framework) may be completely irrelevant in the real quantum world.

Modern computer science is fundamentally built on Turing machines that _copy information_ from one place to another. This is the basis of simulation and universal computers. But the entire enterprise fails when we can't even copy one quantum bit. We even require a new type of logic to deal with this problem: so-called "linear logic", which deals with things that can't be copied -- e.g., resources like time, space, energy, etc.

It will probably take the rest of the 21st Century to re-invent probability, information theory and computer science, but we need to start soon in order to make even this long deadline.

---

http://www.technologyreview.com/view/524521/worlds-first-entanglement-enhanced-microscope/

February 10, 2014

World's First Entanglement-Enhanced Microscope

Physicists have long known that entangled photons can make more precise measurements than independent ones. Now Japanese physicists have built a microscope that proves it.

One of the exciting possibilities of quantum mechanics is the ability to measure the world far more precisely than with classical tools.
Today, Takafumi Ono and pals at Hokkaido University in Japan say they've exploited this to create the world's first entanglement-enhanced microscope. Their new toy produces images with entangled photons that are significantly sharper than those possible with ordinary light alone.

Entanglement is the strange quantum property in which two particles share the same existence, even though they may be far apart. Ono and co say this is particularly useful for a type of imaging known as differential interference contrast microscopy.

This works by focusing two beams of photons into spots next to each other on a flat sample and measuring the interference pattern they create after they have been reflected. When both spots hit a flat part of the sample, they travel the same path length and create a corresponding interference pattern. But when the spots hit areas of different heights, the interference pattern changes. It is then possible to work out the shape of the surface by analysing the change in the interference pattern as the spots move across it.

The difference in phase of photons can be measured with huge accuracy, but even this has a limit, known as the standard quantum limit. However, physicists have known for some time that it's possible to improve on this by using entangled photons rather than independent ones. That's because a measurement on one entangled photon gives you information about the other, so together they provide more information than independent photons.

Ono and co demonstrate this using entangled photons to image a flat glass plate with a Q-shaped pattern carved in relief on the surface. This pattern is just 17 nanometres higher than the rest of the plate and so tricky to resolve with ordinary optical techniques.

Entangled photons significantly improve on this. Ono and co say the signal-to-noise ratio using their technique is 1.35 times better than the standard quantum limit. And the resulting image is noticeably improved, simply by visual inspection (the image with entangled photons is on the left in the above figure). "An image of a Q shape carved in relief on the glass surface is obtained with better visibility than with a classical light source," they say.

That should be useful in a number of different applications; when samples might be damaged by intense light, for example.

Enhanced microscopy is just one of many applications for quantum metrology. It should also help improve the resolution of the interferometers used in gravitational wave astronomy, for example. So it's good to see a success like this in another area.

Ref: arxiv.org/abs/1401.8075: An Entanglement-Enhanced Microscope

https://medium.com/p/5f473cf5a4bc

How To Build A Quantum Telescope

Quantum optics has revolutionised microscopy. Now astronomers are planning to jump on the quantum bandwagon

The Physics arXiv Blog

The diffraction limit is astronomy's greatest enemy. The resolution of all telescopes is limited by factors such as imperfections in the optics and turbulence in the atmosphere. But these can be overcome using better equipment, adaptive optics and by getting above the atmosphere. But there is one limit that astronomers cannot overcome because it is set by the laws of physics --- the diffraction limit. Every telescope on the planet, and all those orbiting it, are limited in this way. And until recently there was no known way to beat it.

Now physicists have begun to develop various quantum techniques that can overcome the diffraction limit, at least in the lab. These techniques have begun to revolutionise microscopy, where the light source can be carefully controlled. But they have yet to be considered for astronomy because astronomers have little, if any, control over the sources of the light they are interested in.

Today, however, Aglae Kellerer at the University of Durham explains how to build a quantum telescope. She says quantum techniques could dramatically improve the resolution of telescopes by beating the diffraction limit for the first time.

When light from a point source enters a lens, it bends. This bending causes the light to spread out so that it interacts with itself, generating an interference pattern. The result is that a lens always resolves a point source as a bright disc surrounded by concentric circles of light, a pattern known as an Airy disc. The size of this disc, which is determined by the wavelength of light and the size of the lens, determines the ultimate resolution of the instrument.

In practice, most telescopes are limited by other factors, in particular turbulence in the several kilometres of atmosphere above them. But techniques such as adaptive optics, which iron out the effects of turbulence, are allowing telescopes to get much closer to the diffraction limit.

So how to go further? Kellerer's idea is to exploit the strange effects of quantum mechanics to improve things. Entanglement, for example. Entangled photons are so deeply linked that they share the same existence. Measure one and you automatically influence the other. That gives you information about the other photon, regardless of its distance from you.

Last month, physicists used this idea to build the world's first entanglement-enhanced microscope that dramatically increases its resolution over purely classical instruments. They created entangled photons and used one to illuminate the object. The second photon can then give them information about the first that they use to increase the resolution of the resulting image.

There's an obvious problem in employing these kinds of techniques in astronomy --- the photons of interest aren't under your control, having travelled many light years from their astrophysical source. But Kellerer says there is a way round this. Her idea is to use the astrophysical photons to stimulate the production of an entangled pair inside a telescope. The first of this pair then hits the detector, generating an image. But the other can be used to increase the information known about the first, thereby increasing the resolution and beating the diffraction limit.

That's an interesting idea that has the potential to significantly increase the resolution of conventional telescopes. In fact, Kellerer has simulated the effect of this process on computer to enhance the resolution of a conventional astronomical image by a factor of six. But building an instrument that works in this way will be hard, and the devil is in the detail.

The first problem is in producing entangled photons in the first place. Kellerer's idea is to use a crystal of excited atoms that emit entangled photons when stimulated by passing astrophysical ones. The problem is the efficiency of such a process. With so few astrophysical photons to play with, any that are lost due to inefficiencies are a serious problem.

Then there is the problem of spontaneous emission. Excited atoms have a nasty habit of emitting photons, even when they are not stimulated by passing photons. That's noise, which could end up overwhelming the signal from the photons astronomers are interested in.

Both of these problems can be minimised, but they cannot be removed completely. The question is whether the advantages of this technique can be made to outweigh the disadvantages in a real device. There's only one way to find out, of course. Today, the technology required to test this idea is in its infancy. But there's good reason to think that significant advances will be made in the near future.

And that means that future telescopes could be very different from the ones we have today. It's a sobering thought that if the pioneers of telescopic imaging were around today, they would find that the world's best telescopes work in more or less exactly the same way as their own from 400 years ago. But Kellerer's approach would be entirely alien to them and could finally take astronomy into a new era of quantum imaging.

Ref: arxiv.org/abs/1403.6681 : Quantum Telescopes
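A quick numerical illustration (not from either article): the Airy-disc limit described above is the standard Rayleigh criterion, theta ~ 1.22*lambda/D. A minimal Python sketch; the 8 m aperture and 550 nm wavelength are arbitrary illustrative choices, and the last line simply divides by Kellerer's simulated factor of six.

import math

# Rayleigh criterion for a circular aperture: the diffraction-limited
# angular resolution is about 1.22 * wavelength / aperture diameter.
def rayleigh_limit_arcsec(wavelength_m, aperture_m):
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0

limit = rayleigh_limit_arcsec(550e-9, 8.0)    # 8 m telescope, green light
print(f"diffraction limit:        {limit:.4f} arcsec")   # ~0.0173 arcsec
print(f"with a simulated 6x gain: {limit / 6:.4f} arcsec")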
Henry,

"Computer science and information theory are founded on the principle that information can be copied, while the "no-cloning" theorem of quantum mechanics states that quantum states cannot be cloned."

I always understood this to be valid for particles (fermions), not for photons (bosons). Or am I wrong here?

Wouter.
See http://en.wikipedia.org/wiki/No-cloning_theorem

At 09:19 AM 4/7/2014, Wouter Meeussen wrote:
" Computer science and information theory are founded on the principle that information can be copied, while the "no-clone" theorem of quantum mechanics states that quantum states cannot be cloned."
I always understood this to be valid for particles (fermions), not for photons (bosons).
The no-cloning theorem says that there's no linear map sending an arbitrary state (a,b) to (a,b) tensor (a,b) = (a^2, ab, ba, b^2); the would-be cloning map is quadratic in the amplitudes, so no linear (in particular, no unitary) evolution can implement it. This holds for any quantum state, photons included.
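A minimal numpy sketch of the same point (an editorial illustration, not from the thread): the CNOT gate is a linear map that does copy the basis states, yet linearity forces it to entangle a superposition rather than clone it -- the same for photon or fermion amplitudes.

import numpy as np

# CNOT is a perfectly good linear map that "copies" basis states:
# it sends |b>|0> to |b>|b> for b in {0, 1}.
CNOT = np.array([[1., 0., 0., 0.],
                 [0., 1., 0., 0.],
                 [0., 0., 0., 1.],
                 [0., 0., 1., 0.]])

def attempt_clone(psi):
    """Feed |psi>|0> through the basis-copier and compare with |psi>|psi>."""
    zero = np.array([1., 0.])
    actual = CNOT @ np.kron(psi, zero)   # what linearity actually gives
    target = np.kron(psi, psi)           # what a cloner would have to give
    return actual, target

# Basis states copy fine ...
for b in (np.array([1., 0.]), np.array([0., 1.])):
    actual, target = attempt_clone(b)
    assert np.allclose(actual, target)

# ... but a superposition does not: linearity forces
# (a|0> + b|1>)|0>  ->  a|00> + b|11>   (an entangled state),
# never (a|0> + b|1>) tensor (a|0> + b|1>) = (a^2, ab, ba, b^2).
psi = np.array([1., 1.]) / np.sqrt(2)
actual, target = attempt_clone(psi)
print(np.round(actual, 3))   # [0.707 0.    0.    0.707]
print(np.round(target, 3))   # [0.5   0.5   0.5   0.5  ]
assert not np.allclose(actual, target)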
On 4/7/2014 9:01 AM, Henry Baker wrote:
I was listening to some of Leonard Susskind's lectures on black holes & information & the fight with Hawking yesterday, and they seemed almost quaint.
The reason is that information theory & computer science are 100% classical. Computer science and information theory are founded on the principle that information can be copied, while the "no-cloning" theorem of quantum mechanics states that quantum states cannot be cloned.
You can't make a copy of an unknown state, but a known state can be copied, which I think is the more relevant case in computation.
These two distinct views of the world cannot be reconciled, except at scales large enough that a quantum system can simulate (with great effort & inefficiency) a classical computer circuit.
It goes the other way too. A quantum system can be simulated on a Turing machine (just not efficiently).
Szilard was making significant progress on Maxwell's Demon and information in the 1920's, when quantum mechanics knocked the foundation out from under Maxwell's entire enterprise.
Shannon's theorems re the information-carrying bandwidth are entirely classical, and completely ignore the quantum nature of radio/light waves. Thus, de Broglie tells us that long wavelengths contain many more quanta than short wavelengths. (In fact, the number of quanta is essentially the number of Planck lengths contained in the wavelength.) But Shannon tells us that shorter wavelengths can contain _more_ information than longer wavelengths!
The energy of a photon is hf = hc/λ, where λ is the wavelength. So for total energy E in an EM wave the number of photons is N = Eλ/hc. The number of photons in an EM wave can be any positive integer, regardless of wavelength - it has nothing to do with Planck lengths.

Shannon wasn't really talking about wavelengths; he was talking about bandwidth, the frequency with which you could switch between 1 and 0. But you're right that he treated it as a purely classical problem. From a quantum perspective it takes longer to detect whether there's a photon or not (1 or 0) if it's a low-energy photon, so the information transfer rate is lower. But I don't know that any real devices operate down at this quantum limit (maybe radio telescopes?).
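Putting rough numbers on N = Eλ/hc (an illustrative sketch, not from the thread; the one-joule signal energy is an arbitrary choice):

# Constants rounded to four figures; the 1 J signal energy is an
# arbitrary illustrative choice.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

def photon_count(total_energy_joules, wavelength_m):
    """N = E*lambda/(h*c): photons of one wavelength carrying energy E."""
    return total_energy_joules * wavelength_m / (h * c)

# One joule of green light vs one joule of FM-band radio:
print(f"green (550 nm): {photon_count(1.0, 550e-9):.3g} photons")  # ~2.77e18
print(f"radio (3 m):    {photon_count(1.0, 3.0):.3g} photons")     # ~1.51e25

At fixed energy, the 3 m signal carries about five million times as many photons as the 550 nm one: the count scales with the wavelength, not with any Planck-length tally.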
I call this the "ultraviolet catastrophe" of information theory.
The Bekenstein argument talks about dropping information into a black hole & calculating the change in size of the black hole that results. But you can't drop "information", per se, into a black hole; you can only drop actual quantum mechanical particles into a black hole. These quantum mechanical particles may be entangled with particles outside the black hole.
Thus, the Bekenstein argument re how much classical Shannon "information" can be stored in a black hole is interesting, but ultimately irrelevant.
Shannon information theory ultimately rests on probability theory, which is also quintessentially classical. A proper modern _quantum_ information theory would rest instead on a _quantum_ probability theory in which the standard 2-slit results fall out naturally.
There's been quite a bit of work on this; see arXiv:quant-ph/9806047v1 and arXiv:1311.5253v1. Quantum-Bayesianism, or the "Ithaca" interpretation of QM, is one of the principal rivals to the "Many Worlds" interpretation.

Interesting articles on quantum microscopy - thanks for posting.

Brent Meeker
A single photon of an electromagnetic wave of wavelength lambda has energy E = h*c/lambda. The energy of a complete wave is computed by multiplying this equation by the wavelength to get:

Total energy = E*lambda = h*c = constant.
________________________________ From: Henry Baker <hbaker1@pipeline.com> To: math-fun <math-fun@mailman.xmission.com> Sent: Tuesday, April 8, 2014 3:57 AM Subject: Re: [math-fun] What if Turing/Shannon/Bekenstein were wrong?
A single photon of an electromagnetic wave of wavelength lambda has energy E = h*c/lambda.
The energy of a complete wave is computed by multiplying this equation by the wavelength to get:
Total energy = E*lambda = h*c = constant.

What's a "complete wave"?
-- Gene
On 4/8/2014 3:57 AM, Henry Baker wrote:
A single photon of an electromagnetic wave of wavelength lambda has energy E = h*c/lambda.
Right.
The energy of a complete wave is computed by multiplying this equation by the wavelength to get:
Total energy = E*lambda = h*c = constant.
I'm not sure what you mean by "a complete wave". EM energy comes in discrete photons, so you get the energy of, for example, a radio broadcast by multiplying the above value of E by the number of photons N, which can be any integer and doesn't depend on the wavelength. Of course in practice you do it the other way around because it's easier to measure the broadcast energy, from which you calculate the number of photons as N=(broadcast energy)/E. Your formula doesn't even have the right units; it has (total energy)=energy*length. Brent
Measure length as a multiple of Planck lengths (or some other convenient standard length), so lambda becomes unitless.

For this purpose, I'm not interested in the actual value, but merely the fact that it is constant.
So E(λ/L) = hνλ/L = hc/L is a constant, with L = Planck length. But until you explain what a complete wave is (beyond saying it is one cycle from 0 to 2π), I fail to see the significance of this constant quantity. -- Gene
Actually, hc/L = 1.2e10 J is 2π times the rest energy of a Planck mass (22 μg).
-- Gene
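Gene's identity is easy to check numerically: since L_Planck = hbar/(m_Planck*c), we have h*c/L_Planck = 2*pi*m_Planck*c^2. A quick Python check with rounded CODATA values:

import math

h   = 6.626e-34    # J*s
c   = 2.998e8      # m/s
L_P = 1.616e-35    # Planck length, m
m_P = 2.176e-8     # Planck mass, kg (about 22 micrograms)

print(h * c / L_P)                 # ~1.23e10 J
print(2 * math.pi * m_P * c**2)    # ~1.23e10 J, the same number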
On 4/8/2014 11:17 AM, Eugene Salamin wrote:
Actually, hc/L = 1.2e10 J is 2π times the rest energy of a Planck mass (22 μg).
That's an enormous energy, roughly eight orders of magnitude more than the most energetic photon ever observed.

Brent
On 4/8/2014 10:29 AM, Henry Baker wrote:
Measure length as a multiple of Planck lengths (or some other convenient standard length), so lambda becomes unitless.
It also makes a huge difference in the number you get for total energy. What are you going to do about the h*c term which also has units of energy*length? Brent
Let's do an actual example with green light: ~550 nm wavelength = 5.5x10^-7 meters.

hc ~ 2x10^-25 Joule-meters, so the energy of a green quantum is

  E = hc/lambda = 2x10^-25/5.5x10^-7 Joules ~ 3.64x10^-19 Joules.

Planck length ~ 1.62x10^-35 meters, so the wavelength of green light is

  5.5x10^-7/1.62x10^-35 ~ 3.4x10^28 Planck lengths.

So the total energy (using these units) is 3.4x10^28 * 3.64x10^-19 ~ 1.24x10^10 Joules. 1 kilowatt-hour ~ 3.6x10^6 Joules, so the total energy in kWh is 1.24x10^10/3.6x10^6 ~ 3.44x10^3 kWh ~ 3.44 MWh (that's a pretty healthy green laser!).

The whole point of this exercise is to show that while Shannon wants to put more "information bits" into shorter wavelengths, Planck tells us that there are fewer quanta per bit at shorter wavelengths.

There's also another type of problem: as the universe expands, the "same" light gets redder (= longer wavelength = more quanta). Thus, the number of quanta isn't a property of the light itself, but of something else -- perhaps the granularity of the space where it is detected.
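The same arithmetic, transcribed into Python so the numbers can be rechecked (the constants are the rounded values used above):

h_c = 2.0e-25       # h*c in Joule-meters (rounded)
lam = 5.5e-7        # green light, 550 nm
L_P = 1.62e-35      # Planck length, m

E_quantum = h_c / lam              # ~3.64e-19 J per green photon
n_planck  = lam / L_P              # ~3.4e28 Planck lengths per wavelength
E_total   = n_planck * E_quantum   # ~1.24e10 J, i.e. hc/L_P, as Gene noted

print(f"{E_quantum:.3e} J/photon, {n_planck:.2e} Planck lengths, "
      f"{E_total:.3e} J ~ {E_total/3.6e6:.2f} kWh")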
On 4/8/2014 2:27 PM, Henry Baker wrote:
...
1.24x10^10/3.6x10^6 ~ 3.44x10^3 kWh ~ 3.44 MWh (that's a pretty healthy green laser!)
That's one interpretation. The other would be that dividing by Planck lengths doesn't mean anything physical.
The whole point of this exercise is to show that while Shannon wants to put more "information bits" into shorter wavelengths, Planck tells us that there are fewer quanta per bit at shorter wavelengths.
Yes, Shannon made his analysis in terms of 'bandwidth', i.e. switching frequency, which didn't consider quanta. But his analysis was meant to determine the maximum information transfer rate. Longer-wavelength photons take more time to detect, dt ~ lambda/c. The number of quanta per bit doesn't change with wavelength (it's 1 photon = 1, 0 photons = 0), but the number you can transmit per unit time does; a rough numerical sketch follows.
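A sketch of that trade-off (the 1 pW received power is an arbitrary illustrative value): at fixed received power P, the photon arrival rate is P/(h*nu) = P*lambda/(h*c), so long wavelengths deliver more photons per second even though each on/off symbol takes ~lambda/c to resolve.

h, c = 6.626e-34, 2.998e8
P = 1e-12    # received power, W (hypothetical)

for name, lam in [("AM radio, 300 m", 300.0),
                  ("green light, 550 nm", 5.5e-7)]:
    nu = c / lam
    photons_per_s = P / (h * nu)   # more photons/s at long wavelengths...
    symbol_time = lam / c          # ...but each symbol takes longer
    print(f"{name}: {photons_per_s:.2e} photons/s, "
          f"symbol time ~ {symbol_time:.1e} s")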
There's also another type of problem: as the universe expands, the "same" light gets redder (= longer wavelength = more quanta). Thus, the number of quanta isn't a property of the light itself, but of something else -- perhaps the granularity of the space where it is detected.
The number of quanta is a conserved property of the light. What's not conserved is the energy per quantum: each photon gets red-shifted by the expansion of the universe.

Brent
________________________________ From: Henry Baker <hbaker1@pipeline.com> To: math-fun <math-fun@mailman.xmission.com> Sent: Tuesday, April 8, 2014 2:27 PM Subject: Re: [math-fun] What if Turing/Shannon/Bekenstein were wrong?
... There's also another type of problem: as the universe expands, the "same" light gets redder (= longer wavelength = more quanta). Thus, the number of quanta isn't a property of the light itself, but of something else -- perhaps the granularity of the space where it is detected. ...
No, as the universe expands, the number of photons is conserved.
-- Gene
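In numbers (a sketch with arbitrary photon count and redshifts): the expansion stretches each wavelength by (1+z) and lowers each photon's energy by the same factor, but the count N is untouched.

h, c = 6.626e-34, 2.998e8
lam0 = 5.5e-7    # emitted wavelength: green, 550 nm
N = 1.0e20       # photons in the pulse (hypothetical)

for z in (0.0, 1.0, 3.0):
    lam = lam0 * (1 + z)        # wavelength stretches by (1+z)
    E_photon = h * c / lam      # energy per photon falls by (1+z)
    print(f"z={z}: N={N:.1e} (unchanged), E_photon={E_photon:.2e} J, "
          f"pulse energy={N * E_photon:.2e} J")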
participants (5)
- Eugene Salamin
- Henry Baker
- meekerdb
- Mike Stay
- Wouter Meeussen