Does the mention of computational photography bring you out in a cold sweat? Or, are you relishing the challenge of learning new computational photographic techniques? Whether we like it or not, machine learning algorithms are already impacting our photography. And, there is more to come. This week, we hear about yet another computationally aided way to turn your smartphone into a ‘real’ camera. We also learn about a new AI-assisted way to add a splash of colour to your black and white shots. And, we have a primer on all the ways computational photography is enhancing that phone camera in your pocket.
Alice in camera wonderland
Smartphone cameras have rapidly become the ubiquitous means of taking pictures. So, rather than trying to turn back the tide, in Canute-like fashion, companies are trying to build on this phenomenon. One example is the Fjorden Grip, highlighted right here in Macfilos. It’s so cool that Leica recently acquired the company. A further example, the Alice Camera from Photogram, is about to hit the streets.
Like the Fjorden, this system integrates a smartphone with a cleverly designed auxiliary unit, yielding a camera-like composite device. The auxiliary component provides a lens mount and camera-like ergonomics. The smartphone provides the screen for composing and reviewing shots. EVF fans need not apply.
The Alice is described as an AI-powered Micro Four Thirds camera. It uses a Sony CMOS image sensor, a Qualcomm Snapdragon processor, and Google’s Edge TPU to run its deep-learning models. These drive the camera’s computational photography features: noise reduction, sharpening, and expanded dynamic range.
The Alice Camera is compatible with both iOS and Android operating systems. Photogram says it will ship the camera by mid-July. You can read a detailed overview here.
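Photogram has not published the details of Alice’s processing pipeline, but the core idea behind computational noise reduction in most phone-style cameras is multi-frame stacking. Here is a minimal sketch, assuming a burst of already-aligned frames with placeholder filenames:

```python
# A minimal sketch of multi-frame noise reduction, the idea behind many
# computational low-light modes. This is not Alice's actual pipeline;
# the burst filenames are placeholders, and the frames are assumed to be
# aligned already (e.g. shot from a tripod).
import cv2
import numpy as np

def average_burst(paths):
    frames = [cv2.imread(p).astype(np.float32) for p in paths]
    # Averaging N frames reduces random sensor noise by roughly sqrt(N).
    return np.clip(np.mean(frames, axis=0), 0, 255).astype(np.uint8)

result = average_burst(["burst_0.jpg", "burst_1.jpg", "burst_2.jpg"])
cv2.imwrite("denoised.jpg", result)
```

In practice, the frames must first be aligned to cope with hand shake and subject movement, which is where dedicated processing hardware such as the Edge TPU earns its keep.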
Add a splash of colour
Are you perpetually wondering whether to present your image in colour or black and white? If so, Adobe has released a new Photoshop feature for indecisive photographers just like you. You can now have the best of both worlds — black and white images with a splash of colour! No surprise, it’s another example of computational photography. Using the Colorize Neural Filter, Photoshop users can duplicate an image layer and then colourize it. They can also tweak the colourized layer using standard Photoshop editing tools.
This capability has been available to iPhone users for some time, via apps such as Color Splash. Here’s an example from Keith James.
But, it requires tedious ‘painting in’ of the colours, and produces low-resolution images. In contrast, the Photoshop feature uses the power of computational photography to select the region of a high-resolution image to be colourized.
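Adobe has not said how the Colorize Neural Filter works under the hood, but the final compositing step of any “splash of colour” image is simple enough to sketch. Here is a minimal, non-AI example that keeps one hue range (reds) in colour and renders everything else in black and white; the filename is a placeholder:

```python
# A minimal, non-AI "splash of colour": keep one hue range (reds) in colour
# and render everything else in black and white. Photoshop's Colorize Neural
# Filter selects regions for you; this only shows the compositing step.
import cv2
import numpy as np

image = cv2.imread("photo.jpg")  # placeholder filename
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Reds wrap around the hue axis, so combine two ranges into one mask.
mask = cv2.inRange(hsv, (0, 70, 50), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 70, 50), (180, 255, 255))

gray = cv2.cvtColor(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY), cv2.COLOR_GRAY2BGR)

# Blend: colour where the mask is white, mono elsewhere.
alpha = (mask.astype(np.float32) / 255.0)[..., None]
result = (alpha * image + (1.0 - alpha) * gray).astype(np.uint8)
cv2.imwrite("splash_of_colour.jpg", result)
```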
You can read more about it here. Please let us know if you’ve tried it yet, and if so, what you think.
Spatial lenses: L-Mount next?
One camera, one lens projecting an image onto a sensor or film? That’s what we all know. With ever higher-resolving sensors, it becomes increasingly interesting to use only part of the sensor, or to shoot two images simultaneously, each occupying a fraction of the full frame. Stereo photography has long worked on a related principle, taking two slightly differing images one after the other. But that approach breaks down as soon as the subject moves even slightly, or the camera is not firmly fixed to a tripod.
But now, with the triumph of computational photography, new options emerge. Imagine a lens that “sees” just as you do, with two eyes and an enormous angle of view, with both lens halves feeding the same “brain”: the sensor and, behind it, the processor in the camera. That’s what Canon, for example, has been working on with its stereo lenses. Now, two new ones have been announced. The RF-S 3.9mm F3.5 STM DUAL FISHEYE and the RF-S 7.8mm F4 STM DUAL are designed to take virtual reality videos. Read here what MacRumors has to say about it.
Interesting: The new spatial lenses are for Canon’s APS-C cameras
The 7.8mm lens might be especially interesting, as it could provide a somewhat “natural” angle of view (rather than the fisheye). It will, according to Canon’s press release, “enable users to capture life’s most precious moments in spatial video, and then relive those memories on the Apple Vision Pro.” Canon presents the R7, an APS-C camera, as the appropriate body, so you don’t need a huge sensor to accommodate the two images from the two lenses.
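Canon’s own EOS VR Utility handles the real conversion, but purely as an illustration of the two-images-on-one-sensor idea, here is a minimal sketch that splits a dual-lens frame into its left-eye and right-eye halves (filenames are placeholders):

```python
# Illustrative only: a dual-lens capture records both views side by side on
# one sensor. Splitting the frame down the middle yields the left-eye and
# right-eye images that stereo/VR software then unwarps and stitches.
import cv2

frame = cv2.imread("dual_lens_frame.jpg")  # placeholder filename
height, width = frame.shape[:2]

left_eye = frame[:, : width // 2]
right_eye = frame[:, width // 2 :]

cv2.imwrite("left_eye.jpg", left_eye)
cv2.imwrite("right_eye.jpg", right_eye)
```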
This leads to the question of if and when 3D technology, or spatial photography, might become available for the L-Mount. There is no APS-C L-Mount camera at the moment. To create a similar effect to the new Canon 7.8mm lens on full frame, we would need a focal length of around 12mm, which would make the lens larger, probably too large to leave room for the twin optics. But who knows?
Leica Lux App: Can computational photography replace a Summilux?
The new Leica Lux App has been on the market for a few weeks now (read our initial coverage here), and we guess that many Macfilos users have given it a try. From what we hear, it seems to perform quite well, but the subscription policy is not to everyone’s liking. At any rate, if you pay for the app, you get some of the wonders of computational imaging. Your iPhone is transformed into a camera with a Noctilux 50, a Summilux 35, or a Summilux 28 mounted. That’s the proposition.
Our first impression was that the app itself works quite well and that there are visible differences between the three lenses, mainly in their angle of view, of course, although the “artificial bokeh” also makes a difference. The user experience, with a clean, Leica-styled design, gets good marks too. The question is when this technology might become available in proper cameras, how authentic photos will be in the future, and whether it makes sense to spend thousands of £/$/€ on the occasional Summilux if a similar look can be produced so much more cheaply.
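Leica has not disclosed how the Lux App builds its lens simulations, but the general “artificial bokeh” recipe is well known: work out which pixels belong to the subject, blur everything else, and composite. A minimal sketch, assuming a subject mask already produced by some segmentation model (the mask file here is a placeholder):

```python
# A minimal sketch of "artificial bokeh": blur the background heavily and
# composite the sharp subject back in through a mask. Real apps derive the
# mask (and often a full depth map) from a segmentation or depth model and
# vary the blur with estimated distance; the mask file is a placeholder.
import cv2
import numpy as np

image = cv2.imread("portrait.jpg")
subject_mask = cv2.imread("subject_mask.png", cv2.IMREAD_GRAYSCALE)

# A larger kernel gives a stronger simulated background blur.
background = cv2.GaussianBlur(image, (51, 51), 0)

alpha = (subject_mask.astype(np.float32) / 255.0)[..., None]
fake_bokeh = (alpha * image + (1.0 - alpha) * background).astype(np.uint8)
cv2.imwrite("fake_bokeh.jpg", fake_bokeh)
```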
But what about you? We are planning a longer article on the Lux App and would be pleased to hear about your experiences. Have you already tried the Leica Lux App? If so, with what result? And if not, what’s your objection? How far should computational photography go, and when should an image be marked as computer-generated? Or are you perhaps joining the counter-movement, putting a roll of Kodak Portra into your M6, attaching a “real” Summilux and always thinking twice before pulling the trigger? Do share your thoughts in the comments section.
Computational photography also raises the authenticity issue
When is an image “real”? Many of our Newsround bits and pieces this time raise this question in one way or another. We know that Leica has addressed it with a unique feature in the M11-P. With its Content Credentials function, it was marketed as the world’s first camera “capable of verifying the authenticity of digital images”. There was a time when photographers had to submit their slides to National Geographic and other newsrooms, which made it easy to check that nothing had been tweaked. That is long gone, and manipulating images has become commonplace, right up to the highest circles of society.
With the question of authenticity becoming ever more relevant, it is remarkable that the next player in the camera market has joined the Content Authenticity Initiative. Leica was one of the pioneers, as were Nikon and Sony. Now, Fujifilm is taking part. They all obviously feel the urge to do something to “rebuild trust online” by proving that a photograph or video has not been manipulated, aka faked.
A computational photography primer
If you are interested in learning even more about how computation is impacting photography, here’s a great overview from PetaPixel. It focuses primarily on smartphone photography. That’s because computational approaches have been used to compensate for the optical limitations of these devices. How else would we get such great photos from such a tiny lens and small sensor?
Techniques such as portrait mode, panorama mode, high-dynamic-range mode, and daytime-long-exposure mode are all covered. Each is enabled by machine learning methodologies such as segmentation and image recognition. Although driven by those smartphone optical limitations, these approaches are also finding their way into more conventional camera formats. As we have covered previously, post-processing applications such as Lightroom are also steadily rolling out these techniques.
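As a flavour of what those multi-frame techniques involve, here is a minimal sketch of exposure fusion, the idea behind smartphone HDR modes, using OpenCV’s Mertens fusion on three bracketed shots (filenames are placeholders):

```python
# A minimal sketch of the multi-frame idea behind smartphone HDR modes:
# several bracketed exposures are aligned and fused into one image with
# detail in both shadows and highlights. Mertens exposure fusion needs no
# tone mapping and no exposure metadata.
import cv2

exposures = [cv2.imread(p) for p in ("under.jpg", "normal.jpg", "over.jpg")]

# Align the hand-held frames (median threshold bitmap alignment).
cv2.createAlignMTB().process(exposures, exposures)

# Fuse them; the result comes back in the 0..1 range.
fused = cv2.createMergeMertens().process(exposures)
cv2.imwrite("hdr_fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```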
It’s an exciting time to be a photographer.
Signing up for the Macfilos newsletter
The SUBSCRIBE button (below) is now working again. If you have recently been unable to register for the Macfilos newsletter, please try again now. We apologise for the error which crept in during the recent site redesign. If you have any other queries or wish to contact us, use the CONTACT button.
I think what would be nice is if the Alice Camera opened the world up to programmable photography, and not only computational photography. Take the UX for doing focus stacks, for example. Why’s there no interface for focusing once at the front plane, again at the back plane, then specifying how many shots in between, and then letting the camera compute the focus shift? None of the camera companies are particularly good at UX. And I think Apple cynically won’t cannibalize their iPhone Pro sales.
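As a sketch of the arithmetic such an interface could perform, assume the lens reports its focus distance in metres; spacing the intermediate steps evenly in dioptres (the reciprocal of distance) keeps the depth-of-field overlap roughly constant from frame to frame. The camera control itself is left out here:

```python
# Focus-bracketing maths only: given the near and far focus distances
# (metres) and a shot count, compute the intermediate focus distances.
# Even spacing in dioptres (1/distance) roughly matches how depth of
# field scales; actually driving the lens is camera-specific and omitted.
def focus_stack_distances(near_m: float, far_m: float, shots: int) -> list[float]:
    if shots < 2:
        raise ValueError("need at least the near and far planes")
    near_d, far_d = 1.0 / near_m, 1.0 / far_m
    step = (far_d - near_d) / (shots - 1)
    return [1.0 / (near_d + i * step) for i in range(shots)]

print([round(d, 3) for d in focus_stack_distances(0.5, 2.0, 5)])
# -> [0.5, 0.615, 0.8, 1.143, 2.0]
```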
I only know I still much prefer real cheese to processed cheese no matter how convenient it is. That opinion isn’t going to stop those little cheese slices being available and sold almost everywhere and I expect it will be the same with AI and photography. It just isn’t to my taste. The paella picture looks exactly like what it is, a manipulated image, nothing more. Does the addition of color on the main subject get our attention? Yes, certainly. Does it make the image a better photograph? Not for me, I’m afraid. Will computational photography open up new possibilities for photography? Of course. But the situation reminds me of those cameras that have far too many exposure modes and options on the menu or dial when all you really need is Aperture priority and manual and a good eye.
On the AI/ISO issue, I may be a mouse squeaking in the wilderness, but my experiences have been quite mixed.
I recently ran experiments comparing the low-cost Affinity Photo (no AI; sharpening and denoising using sliders) versus a trial edition of an AI sharpener/denoiser (name withheld; this isn’t a professional review and I’ve no desire to defame a developer).
The first image was taken in a Tokyo backstreet, brightly illuminated on the left, dark on the right. I used the D-Lux Typ 109 at ISO 1600, not the camera’s best; the image was very grainy. The dark area had two women; behind them, an illuminated red sign detailing drinks offerings and prices (I published this image some years ago on MACFILOS). Both programs removed grain on the faces; the AI-based program produced a sharper face and was the clear winner. Both programs reproduced the lettering on the red sign; neither retained the prices, a tie.
Going below the neck, the Affinity result was still blurry; the AI still sharper, but here, AI produced artifacts on the necks and arms; I’d describe it as looking like a rather bad skin disease. Affinity the winner.
The second photo was taken in a Tokyo department store food section; same camera, much brighter light. The image shows obento (boxed foods) and signs describing them. Again, the AI did a great job of removing noise, but this time the Japanese characters were reduced to meaningless squiggles.
As I can’t predict the future, or even my lunch, I’ve no idea whether these issues will disappear. Right this minute, I wouldn’t purchase the product or use it in any serious way.
I am a shooter who wants raw images to work with. The Leica Lux App gives us quite nice but faux images as JPEGs at 12MP. For their intended purpose, that is fine. I really like it. The M8 did about as well; actually, the Lux App is mostly better. The App will also give you raw, but plain vanilla, without the bells and whistles. It basically has to piggyback on the structure that Apple offers, of course.
I find this all lovely, and have purchased a year of the App — because while I mark a difference between my iPhone images and my M-ones, no image is possible without some sort of camera, and the Lux images are potentially always right there in my pocket. I took a great portrait yesterday with the App, at a simulated 50/1.4.
So, for me, we are still at some remove from having a fully functioning iPhone camera that will give us raw at 48MP alongside the computational AI assist. Perhaps in the iPhone 16. Things are moving so fast that I would not be surprised.
Ed
I’m right with you, Ed, and I’ve bought a year’s subscription to Lux (and a Fjorden grip as well). Not because it’s the answer to life, the universe, and everything… but because it’s another option, and it’s interesting and worth keeping up with.
Best
Jono
AI-powered applications abound now. You no longer have to be an expert to restore old film images if you use the Neural Filters in Photoshop. You can colorize and get rid of scratches, etc., quite easily. The new DeNoise tool in Photoshop (Adobe Camera Raw) and Lightroom is very good and can eliminate the need for very fast (f/1.4 or larger aperture) lenses, since high ISO is not much of a constraint any more.
That’s interesting, Bill. Two main reasons for buying expensive and heavy super-fast lenses are starting to crumble. Shallow depth of field can be produced with AI (not with excellent results so far; see the image of the two lenses above), and low-light images at high ISO can be massively improved with AI (with excellent results already). I wonder what the reaction of the industry will be. Massive campaigns to persuade us that an f/1.4 lens can, if at all, only be replaced with an f/1.2 lens? Or product innovations that take the new possibilities into account? At any rate, interesting times indeed. All the best, JP