A little light reading for the keen photographer

Join me in a little thought experiment: A collection of photons is admitted to a sealed container under carefully controlled conditions. After precise measurement of the distribution and intensity of the trapped photons, processing is performed by sophisticated algorithms. 

Output from this process is then the subject of intense scrutiny and further manipulation before stringent peer review. You might well ask what this strange business is all about.

The strange business is of course photography. You may think that I have chosen a strange way to describe a familiar process but then photography is all about light and, right from the word go, light is an exceedingly strange customer.

Once upon a time we were content to take pictures of clouds. We just could smell the light and knew instinctively how to set our cameras. Now we click away on our little light-sensing computers without a thought for what’s going on inside. Instead, we store the results in the clouds. We’ve come a long way — but perhaps we’ve lost something along the road.

Werner Heisenberg was responsible for discovering the second most famous equation after Einstein’s E=mc². In case it has slipped our minds, Heisenberg’s memorable equation is ΔpΔx ≥ h/4π.

Doesn’t it feel better to get that out of the way? In this equation, Heisenberg expressed a fundamental truth about the observation of all subatomic particles, including the particles of light, or photons.

In a nutshell, the consequences of Heisenberg’s discovery are mind-boggling. At the subatomic level, you can measure the position of a particle with some accuracy, but you are then necessarily quite unable to define its momentum precisely.

Conversely, if you measure the momentum of a particle precisely, its position remains unknowable. This is the famous uncertainty principle which, although key to the understanding of subatomic particles, also has a deep philosophical impact on our understanding of our world. Thanks to Heisenberg’s equation, we know that the principle of uncertainty is built into the fabric of nature.
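For readers who like to see numbers, here is a minimal sketch of what the relation implies for a single photon of visible light. It is purely an illustration: the 550nm wavelength and the one per cent momentum spread are simply assumed for the example.

```python
# A minimal numerical illustration of Heisenberg's relation dp * dx >= h / (4*pi).
# The 550 nm wavelength and the 1% momentum spread below are assumptions for the example.
import math

h = 6.626e-34          # Planck's constant, J*s
wavelength = 550e-9    # a green photon, metres (assumed)

p = h / wavelength     # photon momentum via de Broglie: p = h / wavelength
dp = 0.01 * p          # suppose the momentum is pinned down to 1% (assumed)

dx_min = h / (4 * math.pi * dp)   # smallest position uncertainty the relation allows

print(f"momentum        p  ≈ {p:.3e} kg·m/s")
print(f"momentum spread Δp ≈ {dp:.3e} kg·m/s")
print(f"minimum Δx ≈ {dx_min:.3e} m (about {dx_min / wavelength:.0f} wavelengths)")
```

Pin the momentum of a green photon down to one per cent and the best possible knowledge of its position blurs to roughly four micrometres, several wavelengths across. Happily, as discussed below, photography never asks us to do both at once.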

All very confusing. As Richard Feynman said with characteristic humour:

“It is often stated that of all the theories proposed in this century, the silliest is quantum theory. Some say the only thing that quantum theory has going for it, in fact, is that it is unquestionably correct.”

By now, you will be sighing with relief that our cameras do not have to measure both the position and the momentum of the photons we rely upon for our images. All that matters is that we are able to measure both the distribution and intensity of our “trapped” photons simultaneously and that is fortunately all we need for photography. 

However much we may be tempted sometimes, it really won’t do to blame Heisenberg’s uncertainty principle for those pictures which don’t quite turn out as we hoped.

There’s more strangeness to the business of photography. Take the moment of image capture. Henri Cartier-Bresson urged that photographers seize upon the moment but with a precise end in view:

“To me, photography is the simultaneous recognition, in a fraction of a second, of the significance of an event as well as of a precise organisation of forms which give that event its proper expression.”

I don’t want to downplay the difficulty of seizing the moment because that can be as hard to pin down as catching an eel with your bare hands. The problems are however compounded when we are required simultaneously to recognise the “significance of an event” and to arrange “precise organisation of forms”. 

It’s like conducting a symphony orchestra while at the same time catching that eel with your bare hands. I am making a mental note to self to avoid thinking about quantum mechanics just before pressing the shutter.

Cartier-Bresson has already more than filled that brief window of time with tricky tasks! As with the uncertainty principle itself, photographic choices have to be made about what you do, how you do it and what the consequences will be.

The strange complexity of photography is also apparent when you consider the immense difference between what the camera records and what the eye sees. A camera shutter is open for business, as it were, only for a finite, usually very short, time. By and large it involves one aperture, one shutter speed, one focus point and one ISO. 

The eye is however in constant scanning mode, focusing near, far and in-between, changing aperture on the fly, balancing detailed central vision against broad brush peripheral vision and, as it were, doing all this in continuous 4K colour video in real time. 

The process of actually taking a photo therefore bears only a distant relationship to the process of seeing as we experience it every waking moment. Of course, we have been conditioned by nearly two hundred years of picture viewing to accept the two-dimensional photographic image as a stand-in for the multi-dimensional and multi-faceted experience of our eyes, but it does no harm to remind ourselves occasionally that it is indeed a stand-in.

Yet the most important differentiator between a camera and the eye is the operation of the brain which processes the constantly varying live feed from our eyes. 

Our brain reconstructs a synthesised whole from the torrent of visual input, and it is this mentally reconstructed entity which we call “seeing”. By comparison, the job done by the camera’s processor stands to this as Fred Flintstone’s square-wheeled car stands to Lewis Hamilton’s F1 Mercedes.

Now that we’ve dealt with quantum uncertainty and the fact that cameras bear precious little resemblance to eyes, we must grapple with another strange problem. We are constantly urged to “get it right in camera” since, as the argument goes, photography is really about being out there taking images and not indoors hunched over a computer engaged in post-processing.

At one level, this is indeed sound advice since, of course, opportunities missed when taking the shot often cannot be rectified later. The elements of a good picture must therefore be got right in camera. However, at another level, it is perverse to imply, as some do, that post-processing is an unnecessary distraction. Because of the inbuilt disparity between what we experience through our eyes and the photographs we take with our cameras, it is inevitable that the gap will sometimes be a disappointing one.

It is therefore wholly legitimate to use post-processing to address this problem. As I see it, post-processing is an essential skill in the photographer’s toolbox. Perfection may be unattainable but progress towards it is not.

What is most important, I believe, is the ability to re-create the image as we feel it ought to be. The image does not replicate our experience but needs to be congruent with it or, at least, with our memory of it. If this is not substantially so, we may ask whether it was worth taking the photo at all. It sometimes happens by chance that our photo is even better than the experience itself but, being a special case, this is unfortunately not easily replicated.

In 1803, Thomas Young propounded the wave theory of light on the basis of his double-slit experiment. This met early opposition because it conflicted with Newton’s support for Descartes’ 1637 corpuscular (little particles) theory (see https://en.wikipedia.org/wiki/Corpuscularianism). Both the interference and the diffraction of light were demonstrated between 1810 and 1820, giving firm evidence for the wave theory of light. Electron diffraction is an excellent demonstration of the wave nature of particles, and the photoelectric effect of the particle nature of waves. Duality in both cases.

The rise of quantum theory in the early 20th century further confirmed the wave-particle duality of every particle. Just recently, the cherry has been placed on this cake by news that the first photograph has been taken of light behaving simultaneously as a particle and a wave (please do not miss the amusing and explanatory short clip illustrating this).

Direct observation of this behaviour is confirmation of a central tenet of quantum mechanics and Richard Feynman would have been delighted to see with his own eyes this visual demonstration of quantum strangeness. 

As photographers, let’s choose to regard photons as waves while they are being refracted through our lenses and focused on our subject. As photographers, let’s choose to regard photons as particles when they impact upon specific photo-sites on our sensors. As photographers, we can now confidently embrace the wave-particle duality of quantum theory.

I hope that we can all agree that light, the key raw material for our photographs, is a strange business indeed. How lucky we are to have eyes and cameras.

All images in this article taken by David Bailey with either the Fuji X-T20 or X-T10 and the XF 16mm f/1.4 Fujinon

____________

 

23 COMMENTS

  1. I did like the first phrase of the headline “A little light reading…”. Either the author or editor must have been smiling inwardly with self satisfaction when they came up with that one.

  2. I’m still confused about calling analog CCD and CMOS sensors "digital" and calling chemical films "analog"…

    • I would never use the term ‘analogue’ to describe film or film cameras. They always were, and still are, film or film cameras. The term ‘analogue’, or ‘analog’ in the US, seems to have been applied to film to explain things to the ‘digi-kids’ who have never used film. My other pet hates include the term ‘file’ to describe photos or images and calling Leica Thread Mount (LTM) or Screw mount (SM) cameras ‘Barnacks’; no real collector would use that term. A lot of terminology has been invented in the internet age for people for whom digital is the norm. There is no need to be confused about whether something is ‘digital’ or ‘analogue’, just use the name that has been around for over 100 years. Perhaps I am asking too much, as ‘digi-babble’ seems to expand by the day. Blockchains, what are they?

      William

        • Ah, for the benefit of readers who don’t know, a prime is a fixed focal length lens — 28mm, 35mm, 50mm, etc. — as opposed to a zoom.

          • I don’t remember prime being used before the digital age (i.e. 12 years ago?). I’ve heard old movie people use prime and wonder if it comes into stills photography from cine vernacular (overheard, misused and reused in a bleary London or NYC pub)?

            I’m thinking of those old TV cams with a turret of fixed lenses, one of which was the ‘primary’ or a fixed lens described as the primary shot lens. It’s now a strange label divorced from its original meaning. ‘Fixed’ was broken?

          • You could be right on this. I also cannot remember using "prime" prior to the digital age but that doesn’t mean it wasn’t used — I was probably not well enough informed at the time. Fixed focal length is a suitable alias of course, but prime now seems to be established.

  3. Thanks David. You are certainly raising the intellectual tone around here. When I was doing my article about Thomas and Howard Grubb and their lenses and telescopes for Macfilos, I had thought about adding a piece about William Rowan Hamilton who had been the Head of Dunsink Observatory near Dublin just before the Grubb telescope was installed in the 1860s. On a walk from Dunsink, Hamilton thought up quaternion equations which are the foundation of a lot of things, including computer graphics, control theory and signal processing (https://en.wikipedia.org/wiki/William_Rowan_Hamilton). The quaternion equation is inscribed on a bridge at Broombridge thanks to the interest of the late Eamon De Valera, President and Prime Minister of Ireland at various times, who was very interested in mathematics. My main reasons for not including this were (a) I do not understand quaternion equations and (b) my two brothers are both experts in this field and would quickly spot any errors on my part. I recently visited the grave of Hamilton and took some photographs. It is quite near to the grave of Thomas Grubb who built the telescope for the observatory. Grubb’s son, Sir Howard, made a device called a coelostat which was used to measure light and I recently photographed two examples of the device at Dunsink. One of the Grubb coelostats in Dunsink was used by Arthur Eddington to prove Einstein’s Theory of Relativity in 1919. I can share photos of the devices etc with you if you want. There is a photo of part of the device here https://www.facebook.com/DIASDublin/posts/1000636600001994

    Moving back to simpler things, the Fujifilm cameras can take excellent cloud photos, particularly in Black and White Acros with Red Filter. This can be applied in camera with JPEG or in Lightroom with RAW (Fujifilm RAF). I don’t know whether it is waves or particles that are being altered, but I certainly like the results.

    William

    • William,

      As a historian with an interest in the history of science, I knew the broad brush story of discoveries about light. I did however get two friends who are physicists to check this text over for me. The equations terrify me so were put in for literary effect only!

      Quaternions are also utterly mystifying to me but thanks to your link I loved the story of Hamilton having a light bulb moment on his way to the Academy and then carving the quaternion equation into the parapet of the Brougham Bridge as he passed.

      I would indeed be interested to see photos of the coelostats etc if you would be so kind as to share them. I have viewed the image link on Facebook. My email is thebail@mac.com.

      Thanks for the advice re Acros plus the Red Filter. For my money, the light is waves as it goes through the lens but particles as it impacts on the sensor, but maybe that all changes depending on whether you measure it and how you measure it…

      David

      • I suspect that technically you are measuring both position and momentum in most digital cameras, because colour is related to the photon’s momentum…

        However, most of the uncertainty in my photographs comes from the quantum processes behind the camera rather than those inside of it 🙂

        • In case anyone is wondering, I am completely at sea on this one. Like Mark, most of my problems are user related. I just trust to luck that all these photons are moving in the right direction. All credit to David for setting up the conversation…..

          Mike

          • Jonathan,

            I very much like your idea of a demosaicing detective story. Sherlock Holmes, presented with the puzzle of light, might have exclaimed: “Elementary particles, dear Watson!”

            I have tried to understand how light can display both wave-like and particle-like characteristics. Perhaps it is more correct to believe that it has the potential to display either characteristic depending upon how it is measured or otherwise interacted with. In a lens interaction, it behaves like a wave but in a collision with a sensor photo-site it behaves like a particle.

            Your detective analogy describes very well what the camera actually does, putting the bits of the puzzle back together in order to give us what we seek, the colour image.
            The case is clear. Despite the opacity of quantum mechanics, the little black box, masquerading as Sherlock Holmes, interprets all the clues to solve the case.

            There is however one further missing element, the mind’s eye of a superb photographer such as yourself. Love the images on your website and have just been enjoying the ones of Suffolk.

            David

        • Mark

          Thanks for your comments. Your second point is absolutely correct and applies to me too!

          I am not a scientist but I think your first paragraph is making a different point to the one I was making. The Heisenberg uncertainty relates to the impossibility of measuring precisely both the position and the momentum of a single photon or other single fundamental particle.

          The camera sensor does not set out to measure either of these parameters for a single photon. It merely records initially the aggregate number of photons, or intensity of light, hitting each particular photo-site.

          Colour information is a function of the array, or mosaic, of red, green and blue filters that live above the millions of light-sensitive photosites on the surface of a sensor chip. This is a Bayer array for many cameras but a different array for Fuji cameras. So colour is determined by the array working on aggregates of photons at each photosite and not on single photons (a minimal demosaicing sketch follows this comment for anyone curious to see the idea in code).

          I am eager to be corrected by other readers if my understanding is wrong.

          Best wishes

          David
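To make the demosaicing “detective story” in this thread a little more concrete, here is a minimal sketch of bilinear interpolation over an RGGB Bayer mosaic. It is an illustration only, not Fujifilm’s X-Trans processing or any real camera’s pipeline, and the function names are invented for the example.

```python
# A minimal sketch of bilinear Bayer demosaicing: each photosite records only one
# of R, G or B, and the two missing channels are estimated from neighbouring sites.
# Illustration only; real raw converters use far more sophisticated algorithms.
import numpy as np

def bayer_pattern(height, width):
    """Per-pixel channel index (0=R, 1=G, 2=B) for an RGGB Bayer layout."""
    idx = np.zeros((height, width), dtype=int)
    idx[0::2, 0::2] = 0   # red on even rows, even columns
    idx[0::2, 1::2] = 1   # green
    idx[1::2, 0::2] = 1   # green
    idx[1::2, 1::2] = 2   # blue
    return idx

def demosaic_bilinear(mosaic):
    """Estimate the missing channels at each photosite by averaging whichever
    neighbours in a 3x3 window did record that channel (edges wrap for brevity)."""
    height, width = mosaic.shape
    idx = bayer_pattern(height, width)
    totals = np.zeros((height, width, 3))
    counts = np.zeros((height, width, 3))
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            shifted_values = np.roll(np.roll(mosaic, dy, axis=0), dx, axis=1)
            shifted_channel = np.roll(np.roll(idx, dy, axis=0), dx, axis=1)
            for channel in range(3):
                mask = shifted_channel == channel
                totals[..., channel] += np.where(mask, shifted_values, 0.0)
                counts[..., channel] += mask
    return totals / np.maximum(counts, 1)

# Tiny check: a flat mid-grey scene should come back (roughly) mid-grey everywhere.
flat_scene = np.full((4, 4), 0.5)
print(demosaic_bilinear(flat_scene)[1, 1])   # approximately [0.5, 0.5, 0.5]
```

The principle is the one described above: colour is reconstructed from aggregates of neighbouring photosites, never read off a single photon.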

          • Yes, the thing that I am hazy about is whether or not the CFA’s discrimination constitutes a measurement of the colour. If you think about a very long exposure, such that only one photon passes through the camera at a time, the sensor will either record the photon or not depending on whether or not it is blocked by the colour filter. If a detection is made, I think that you have some information about both position and momentum (colour) and so you cannot make both arbitrarily precise.

            Perhaps a clearer but different example is the case of a Foveon type sensor, where each pixel can directly determine position and momentum (colour).

            One of the most amazing aspects of quantum mechanics is how incredibly difficult it is to be sure that there are no loopholes in an analysis, as the most unexpected and obscure subtleties can completely upset what you might expect. Feynman’s comment – to the effect that anyone who thinks that they understand quantum mechanics plainly does not – remains as true today as ever.

            But I think that the good news is that current commercial sensors have many, many more limitations to deal with before they have to worry about the Heisenberg limit 🙂

          • Mark

            Very interesting further comments.

            I like very much your thought experiment of imagining a very long exposure "such that only one photon passes through the camera at a time". Not easy to set up but then this is a thought experiment.

            You then go on to say that if a detection is made you then have some information on both position and momentum and here I can’t quite grasp the situation.
            We are agreed that the position of the photon will be where it impacts with the sensor, but as far as the momentum is concerned I have two queries:

            1. A photon has both momentum (like any particle) and a wavelength. These two are related but I do not understand how. (One article tells me that they are related by something called the de Broglie relation, p = h/λ, but this does not help me; see the worked example after this comment.)

            2. Surely the impact on the sensor has necessarily to occur before there can be any effect on the filter array. So the measurement anyway will not be a simultaneous one and Heisenberg uncertainty is not involved. We can therefore have a precise observation of position and then a fraction later a precise measurement of wavelength, but we can’t have both at the same time. But then, as you say, what about the Foveon sensor? Is that truly simultaneous?

            There is no reason why I should expect to understand quantum mechanics. Perhaps I should not however have given up physics at the age of fifteen.

            David
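For anyone else puzzling over the de Broglie relation mentioned in point 1 above, here is a small worked illustration of how wavelength, colour, momentum and photon energy tie together via p = h/λ and E = hc/λ. The three wavelengths are assumed, typical round values for blue, green and red light.

```python
# A small worked illustration of the de Broglie relation p = h / wavelength and the
# photon energy E = h * c / wavelength; the three wavelengths are assumed examples.
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s

for colour, wavelength in [("blue", 450e-9), ("green", 550e-9), ("red", 650e-9)]:
    p = h / wavelength                          # momentum of a single photon
    energy_ev = h * c / wavelength / 1.602e-19  # photon energy in electron-volts
    print(f"{colour:5s} {wavelength * 1e9:4.0f} nm: p = {p:.2e} kg*m/s, E = {energy_ev:.2f} eV")
```

Shorter wavelength means bluer light and a larger momentum and energy per photon, which is the sense in which a photon’s colour and its momentum are two sides of the same coin.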

          • Yes, there is only a single measurement (when the photon interacts with the sensor/film). But I suspect that the location of the detection can be viewed as giving (statistical) information about the momentum due to the CFA.

            The idea of what constitutes “a measurement” in quantum mechanics has always felt rather ill defined and ambiguous. In the many years since I studied, I suspect that this has been resolved somewhat by better formalisations based on the ideas of information as a (the) fundamental basis of reality. But equally I think that much of the difficulty we have with QM is simply that the universe at small scales is not easily described by our macroscopic human perspectives and language, and this means that it will never be easy.

            If the universe is a computer, then perhaps understanding QM is analogous to trying to understand a computer operating system and hardware by sneaking sideways glances at it through the Spectre and Meltdown bugs 🙂

          • What Fun!
            Great article David – and great discussion between you and Mark. Like Mike, I’m completely at sea. I am (was) a scientist, but as a botanist, not much of this stuff came my way.

            I realise that I’ve always thought of the demosaicing as a sort of detective story where the camera works out what the colour must have been like based on what it and its friends (in the sensor array) are saying… if this is the case, then I think the CFA might be rather confused when presented with one photon and one photosite.
