Whatever happened to the Light L16?

Computational photography is the future. Or so we are led to believe. Smartphones use computational techniques to knit together multiple images to mimic results from larger sensors; the iPhone’s portrait mode makes a good fist of simulating a wide aperture, and there is no doubt that the results can be pleasing.

In 2015 the Light company, based in California, announced what was heralded as the future of photography: the sixteen-sensor Light L16. It was revolutionary, a brick-like device which combined lens-and-sensor modules of various focal lengths to create an image that was supposed to be greater than the sum of its parts. The L16, costing the best part of £2,000, reached consumers two years later. Two years after that, in December 2019, production ended.

Image: DearSusan.net and Light Company

So what went wrong? Was the L16 a concept before its time or did it just not live up to expectations? An intriguing article on the DearSusan photography blog gives us a first-hand insight into the L16 from an early adopter, Jean-Claude Louis.

Click here to read the full story at DearSusan.net

Thanks to Macfilos contributor John Shingleton for bringing this to our attention.

16 COMMENTS

  1. I am guessing that Zeiss had a problem with the inbuilt Lightroom: Adobe decided to change to a subscription model, and the ZX1 had a bought-and-paid-for version, which would be difficult to update.

    I looked at the output on the website referred to for the Light, and I find the images look a little like paintings: not in the filmic way that an old lens, particularly a Zeiss, renders, but more like an old television with some interference, strange but quite beautiful at the same time.

    The results are interesting, but they do not achieve the pinpoint clarity and almost-3D look of a modern digital sensor/lens combination. Perhaps that is why it didn’t take off and sell by the bucketload.

    Incidentally, that is one of the reasons I like film and old lenses. I am not particularly attracted to that ultra-clean look; it takes on a kind of “Captain Pugwash” effect, where one can almost see figures against the ground as though they had been added after the fact.

    After composition, how the scene is rendered, either on screen or on paper, is the next important ingredient, and perhaps a reason why we are as interested in gear as we are in creating?

    Or maybe it is that we are mostly male and into kit. I know I swapped my hi-fi gear around for years until I found nirvana, and then stopped. I have had the same (more or less) system for thirty years now, and it still delights me; only the source has changed, and then only because we have moved from LPs through CDs to uncompressed downloads.

    • I remember this being discussed during the LHSA meeting at Wetzlar in October 2018. Stefan Daniel referred to the Zeiss in response to a question from the audience. He said that Leica’s policy was to avoid putting too much into the camera by way of post-processing, preferring to rely on external editing (and the FOTOS app), which can be more easily updated. That way, we inferred, the actual cameras would not become outdated so quickly.

      • Indeed, Michael, my M-D has no user-configurable software apart from the date, and arguably even that is unnecessary.

        I have just realised that, following a really long wait for the M3D forecast by this blog (early April some years back, I think?), I now have an M3 and an M-D: one camera’s function in two bodies. It is better that way anyway.

  2. And there was another one; I can’t recall what it was called, but its USP was that one could select the depth of field after the photo was taken. I wonder what happened to that.

    • I think that was Lytro, Farhiz.

      The newer iPhones do that now.

      Isn’t it sort of taking the photography out of photography though?

  3. Hhhmmmm. Would I have seriously considered one of these if they had eventuated?
    The top, bottom and back look good, the image quality would likely be good, and the compact size is a plus.
    But the randomised splatter of lenses on the front would have been a deal-breaker. A personal view, I know, but what was the designer thinking?

    • They were thinking about image quality. A repeated grid array would carry harmonics across into pixel overlap. This arrangement minimises double sampling and maximises the new information recorded by the sensors. They do the same thing with telescope arrays (see the sketch after this thread).

      • Thanks for this insight, Simon. Makes sense if each lens/sensor was identical. I hadn’t thought about that purpose. Or was there some partitioning of function amongst the array, analogous to what we are now seeing with multi-lens smartphone cameras?

        But now you’ve set me thinking about the compound eyes of insects…
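
A minimal sketch, in Python, of the non-redundant spacing idea Simon describes above. The positions are hypothetical (a one-dimensional toy layout, not the L16’s actual lens coordinates), but they show why an irregular, Golomb-ruler-like layout repeats fewer pairwise spacings than a regular grid: on an even grid many pairs of lenses share the same spacing and so sample the scene redundantly, while in the irregular layout every pairwise spacing is unique.

```python
from itertools import combinations

def redundant_spacings(positions):
    """Count pairwise spacings that repeat; a repeated spacing means
    two pairs sample the scene the same way, adding no new information."""
    spacings = [abs(a - b) for a, b in combinations(positions, 2)]
    return len(spacings) - len(set(spacings))

regular = [0, 1, 2, 3, 4]      # evenly spaced: spacings 1 to 4 recur often
irregular = [0, 1, 4, 9, 11]   # Golomb-ruler layout: every spacing unique

print(redundant_spacings(regular))    # prints 6 (of 10 spacings, 6 repeat)
print(redundant_spacings(irregular))  # prints 0
```

The same reasoning, extended to two dimensions, is why radio-telescope arrays avoid regular grids: each distinct baseline between a pair of antennas contributes a new sample.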

  4. Also, whatever happened to the Zeiss camera announced about two years ago? If it ever comes out, the technology might be getting long in the tooth.

    • Quite a few of these. There’s also that Pixii rangefinder from France, although I did hear a recent rumour that it was on its way. And then the Russian Leica. Never did see one of those. But I think the Zeiss Wotitsane must have been conveniently forgotten. Anyone know more?

  5. I have thought, and still think, that computational photography is the future, as far as a creative use of the “photographic media” goes. The possibilities are vast, and in the right hands, I am sure some very interesting, and probably surreal, pictures will be made. I am not sure these pictures will be “photos” anymore…but rather art based on photographic techniques.
    My iPhone does a pretty good imitation of a “real” camera in portrait mode, and some of its computational interpretations are head-scratchingly wonderful. I once made a picture of a wineglass, and it placed the blur on the forward edge of the glass, which made it seem as though the glass were turned inside-out. Picasso and/or Escher would have loved it.

    If you want to simply “record” a scene, the phone camera is perfect for that.

  6. The answer to this is simple. Light was effectively a disruptor trying to enter the industry. This technology would probably take off if it were introduced by one of the industry ‘majors’, but they lack the incentive to do so as they still make a lot of money out of interchangeable lenses. In time, when smartphones have more features and really eat into the middle layer of the market, I expect that the ‘majors’ will have no option but to jump in. The cameras they produce will probably look quite a bit different to the L16.

    I am saying this as someone who is mainly interested in old film cameras and so a lot of this is academic to me personally, but the notion of computational photography was inevitable from the first day that a digital sensor was put into a camera.

    William
