From the ACR team: Super Resolution<\/h1>\n
\n

Article by Eric Chan, senior principal scientist at Adobe, explaining how the new machine-learning-powered Super Resolution feature in Lightroom works.<\/p>\n<\/p>\n

Panorama enhanced with Super Resolution. The inset images show zoomed-in crops of two areas of the photo, with crisply rendered branches and flying snow.<\/p>\n

\u201cFrom the ACR team\u201d is a blog series that brings you insights directly from the team that builds the imaging features for Lightroom, Lightroom Classic, Lightroom mobile, Adobe Camera Raw, and the Camera Raw filter in Photoshop. I recently worked on a feature called Super Resolution, part of the Enhance family, and I\u2019m delighted to share that it is live today. I collaborated closely on this project with Micha\u00ebl Gharbi and Richard Zhang of Adobe Research. Micha\u00ebl also previously developed a related feature, Enhance Details.<\/p>\n

Super Resolution is now shipping in Camera Raw 13.2 and will be coming soon to Lightroom and Lightroom Classic. In this post I\u2019ll explain what it is, how it works, and how to get the most from it.<\/p>\n


My name is Eric Chan, and I\u2019ve worked at Adobe for thirteen years. Previous projects that I\u2019ve worked on include Highlights and Shadows, Clarity, Dehaze, Camera Profiles, Lens Corrections, and Upright. You might sense a pattern here \u2014 I like to mess around with pixels.<\/p>\n

Super Resolution is also a pixels project, but of a different kind. Imagine turning a 10-megapixel photo into a 40-megapixel photo. Imagine upsizing an old photo taken with a low-res camera for a large print. Imagine having an advanced \u201cdigital zoom\u201d feature to enlarge your subject. There\u2019s more goodness to imagine, but we\u2019re getting ahead of ourselves. To understand Super Resolution properly, we must first talk about Enhance Details.<\/p>\n

The origin story<\/h3>\n

Two years ago, we released Enhance Details, a feature that uses machine learning to interpolate raw files with an uncanny degree of fidelity, resulting in crisp details with few artifacts. You can read more about that here. We reasoned at the time that similar machine learning methods might enable us to improve photo quality in new and exciting ways.<\/p>\n

[Image: Bayer sensor mosaic patterns]<\/p>\n

Camera sensors see the world through mosaic patterns like the ones shown above. Enhance Details uses machine learning to interpolate those patterns into RGB color images.<\/p>\n
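To make the mosaic idea concrete, here is a toy numpy sketch of how an RGGB Bayer pattern samples a scene. Each sensor pixel records only one of the three color channels, and demosaicing must reconstruct the other two. This is purely an illustration under an assumed RGGB layout, not Adobe's code.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an RGB image through an RGGB Bayer pattern.

    Each output pixel keeps exactly one of the three color channels,
    mimicking what the sensor actually records; the two missing
    channels per pixel are what demosaicing must reconstruct.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return mosaic

# A flat test image: R=10, G=20, B=30 everywhere.
rgb = np.dstack([np.full((4, 4), 10), np.full((4, 4), 20), np.full((4, 4), 30)])
mosaic = bayer_mosaic(rgb)
```

Note how two of every three recorded values in this pattern are green, which matches how real Bayer sensors allocate their photosites.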

The sequel<\/h3>\n

Today I am thrilled to introduce our second Enhance feature: Super Resolution. The term \u201cSuper Resolution\u201d refers to the process of improving the quality of a photo by boosting its apparent resolution. Enlarging a photo often produces blurry details, but Super Resolution has an ace up its sleeve \u2014 an advanced machine learning model trained on millions of photos. Backed by this vast training set, Super Resolution can intelligently enlarge photos while maintaining clean edges and preserving important details.<\/p>\n

[Image: Close-up crop]<\/p>\n

Example using bicubic resampling<\/p>\n

[Image: Close-up crop]<\/p>\n

Super Resolution<\/p>\n

If all that machine learning language sounds complicated, well, that\u2019s because it is. Don\u2019t worry, though \u2014 the Super Resolution feature we built around this technology is very simple to use \u2014 press a button and watch your 10-megapixel photo transform into a 40-megapixel photo. It\u2019s a bit like how Mario eats a mushroom and suddenly balloons into Super Mario, but without the nifty sound effects.<\/p>\n

Do we really need more pixels?<\/h3>\n

I know what you\u2019re thinking \u2014 \u201cC\u2019mon Eric, it\u2019s 2021, are we really still talking about more megapixels?\u201d Modern cameras have pixels to spare, don\u2019t they? Once upon a time, we all thought 6 megapixels was plenty. Then 12 became the new 6, and now 24 is the new 12. There are even cameras with a whopping 40 to 100 megapixels. With all those pixels floating around, why do we need more?<\/p>\n

The short answer is: usually we don\u2019t, but occasionally we do. And sometimes, we really, really do.<\/p>\n

[Image: Brown bear photo cropped to 4 megapixels]<\/p>\n

Here\u2019s one of those cases where it\u2019s helpful to have more resolution. After photographing the bear from a safe distance and cropping the image down, I was left with \u201conly\u201d 4 megapixels.<\/p>\n

Here\u2019s the longer answer.<\/p>\n

First, not all cameras have those sky-high resolutions. Most phones are 12 megapixels. Many cameras are still in the 16-to-24-megapixel range. This is plenty for many scenarios, like posting online or sending to a friend. If you want to make a large print to display on your wall, though, extra resolution helps to keep edges clean and details intact. We\u2019ll look at some examples later.<\/p>\n

Even if you have a shiny new camera with a zillion pixels, what about those older pictures already in your catalog taken with a lower resolution model? Some of my favorite pictures were taken fifteen years ago using a camera with \u201conly\u201d 8 megapixels. Here\u2019s one:<\/p>\n

[Image: 8-megapixel landscape photo with foreground rocks and fog-shrouded trees]<\/p>\n

Previously I tried making a large print of this image, but I was disappointed in the results. The foreground rocks came out overly smooth and the background trees below the incoming fog were mushy and hard to see. With the help of Super Resolution, I can now make a large print with textured, natural-looking rocks and distinct background trees. In short, Super Resolution can breathe new life into old photos.<\/p>\n

More resolution also comes in handy when working with tightly cropped photos. Ever been in a situation where you\u2019re photographing from farther away than you\u2019d like, so you end up with your subject occupying only a small part of the image? Happens to me all the time. Here\u2019s an example:<\/p>\n

[Image: Tightly cropped gyrfalcon photo]<\/p>\n

This gyrfalcon flew overhead, and I snapped a few frames before she vanished. Sure, it would\u2019ve been nice to switch to an 800 mm lens with a 2x extender, but the gyrfalcon was present for literally just seconds. (I suppose a bigger problem is that I don\u2019t have any of that exotic gear!) With \u201conly\u201d a 400 mm lens on a 1.6x camera body, I ended up with an uncropped image like this:<\/p>\n

[Image: Uncropped frame, with the gyrfalcon small in the frame]<\/p>\n

This is one of my favorite bird photos, but due to the capture circumstances described above, the cropped file is a mere 2.5 megapixels. That\u2019s where Super Resolution comes in \u2014 I now have a 10-megapixel image from which I can make a decent-sized print. Used in this way, Super Resolution is like having an advanced \u201cdigital zoom\u201d capability.<\/p>\n

Now that we\u2019ve talked about some of the potential use cases for Super Resolution, let\u2019s take a closer look at the underlying tech.<\/p>\n

How does it work?<\/h3>\n

Micha\u00ebl Gharbi and Richard Zhang of Adobe Research developed the core technology behind Super Resolution.<\/p>\n

The idea is to train a computer using a large set of example photos. Specifically, we used millions of pairs of low-resolution and high-resolution image patches so that the computer can figure out how to upsize low-resolution images. Here\u2019s what some of them look like:<\/p>\n

[Image: Collage of 128 x 128-pixel training patches]<\/p>\n

These are small 128 x 128-pixel crops from detailed regions of real photos. Flowers and fabrics. Trees and branches. Bricks and roof tiles. With enough examples covering all kinds of subject matter, the model eventually learns to upsample real photos in a naturally detailed manner.<\/p>\n
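One simple way to picture a training pair is to take a ground-truth crop and synthesize its low-resolution counterpart by downsampling it. The 2x box filter below is a stand-in for whatever degradation model the real training pipeline uses; this is a sketch, not the actual training code.

```python
import numpy as np

def make_training_pair(patch):
    """Turn one ground-truth crop into a (low-res input, high-res target)
    pair. The low-res input is a 2x box-filtered downsample: every 2x2
    block of pixels is averaged into one."""
    h, w, c = patch.shape
    low = patch.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
    return low, patch

# One hypothetical 128 x 128 ground-truth crop.
ground_truth = np.ones((128, 128, 3), dtype=np.float32)
low_res, target = make_training_pair(ground_truth)
```

During training, the model sees `low_res` as input and is penalized for any difference between its output and `target`.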

Teaching a computer to perform a task may sound complicated, but in some ways it\u2019s similar to teaching a child \u2014 provide some structure and enough examples, and before long they\u2019re doing it on their own. In the case of Super Resolution, the basic structure is called a \u201cdeep convolutional neural network,\u201d a fancy way of saying that what happens to a pixel depends on the pixels immediately around it. In other words, to understand how to upsample a given pixel, the computer needs some context, which it gets by analyzing the surrounding pixels. It\u2019s much like how, as humans, seeing how a word is used in a sentence helps us to understand the meaning of that word.<\/p>\n
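That "context from surrounding pixels" is exactly what a convolution computes. Here is a deliberately naive 3x3 convolution in numpy, just to show that each output value is a weighted sum of a pixel and its eight neighbors; real networks stack many such layers with learned weights instead of a fixed kernel.

```python
import numpy as np

def conv3x3(img, kernel):
    """Naive 3x3 convolution (no padding): each output pixel is a
    weighted sum of the corresponding input pixel and its 8 neighbors."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(img[y:y + 3, x:x + 3] * kernel)
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
blurred = conv3x3(img, np.full((3, 3), 1 / 9))  # a simple box-blur kernel
```

Swap the averaging kernel for learned weights and stack dozens of these layers, and the "context window" each output pixel sees grows far beyond 3x3.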

Training a machine learning model is a computationally intensive process and can take days or even weeks. Here\u2019s a visual walkthrough of what progress looks like, starting from a Fujifilm X-Trans raw pattern of a scene:<\/p>\n


An example training pair: The X-Trans input patch (left) and enlarged RGB color output patch (right). The model is trying to learn the correspondence between the two. The image on the right is called the \u201creference\u201d or \u201cground truth\u201d image.<\/p>\n


Six snapshots of training progress, from beginning (top left) to end (bottom right).<\/p>\n

You can see how the initial results (top left and top center) are comically bad \u2014 they don\u2019t even look remotely like photographs! That\u2019s what happens at the very beginning, when training is just getting started. Just like how a child doesn\u2019t learn to walk on day one, machine learning models don\u2019t immediately figure out how to demosaic and upsize cleanly. With more training rounds, though, the model rapidly improves. The final result (bottom right) looks quite similar to the reference image.<\/p>\n
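As a loose analogy for those training rounds, here is a toy gradient-descent loop that learns a single number instead of millions of network weights. It starts out "comically bad" and converges toward the ground truth, which is the same basic mechanics as the snapshots above, on a vastly smaller scale. Purely illustrative, not the actual training setup.

```python
import numpy as np

# Toy "model": a single scale factor w mapping inputs to targets.
inputs = np.array([1.0, 2.0, 3.0, 4.0])
targets = 2.5 * inputs          # ground truth: the value we hope to learn
w = 0.0                         # initial model: comically bad
learning_rate = 0.01

for step in range(1000):        # the "training rounds"
    predictions = w * inputs
    # Gradient of the mean squared error with respect to w.
    grad = 2.0 * np.mean((predictions - targets) * inputs)
    w -= learning_rate * grad
```

Early in the loop the predictions are nowhere near the targets; by the end, `w` has settled very close to the true value of 2.5.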


Ground truth (left) vs fully-trained model (right). Pretty close, right?<\/p>\n

We have a few unique ingredients in our training regimen for Super Resolution. One of them is that for Bayer and X-Trans raw files (used by the vast majority of camera models), we train directly from the raw data, which enables us to optimize the end-to-end quality. In other words, when you apply Super Resolution to a raw file, you\u2019re also getting the Enhance Details goodness as part of the deal. A second key piece is that we focused our training efforts on \u201cchallenging\u201d examples \u2014 image areas with lots of texture and tiny details, which are often susceptible to artifacts after being resized. Finally, we built our machine learning models to take full advantage of the latest platform technologies, such as CoreML and Windows ML. Using these technologies enables our models to run at full speed on modern graphics processors (GPUs).<\/p>\n


How do I use it?<\/h3>\n

Using Super Resolution is easy \u2014 right-click on a photo (or hold the Control key while clicking normally) and choose \u201cEnhance\u2026\u201d from the context menu. In the Enhance Preview dialog box, check the Super Resolution box and press Enhance.<\/p>\n


Your computer will put on its thinking cap, crunch a lot of numbers, then produce a new raw file in the Digital Negative (DNG) format that contains the enhanced photo. Any adjustments you made to the source photo will automatically be carried over to the enhanced DNG. You can edit the enhanced DNG just like any other photo, applying your favorite adjustments or presets. Speaking of editing, I recommend taking another look at your Sharpening, Noise Reduction, and possibly Texture settings. All of these controls affect fine details, and you may need to tune these for best results on the enhanced photo.<\/p>\n

Super Resolution doubles the linear resolution of the photo. This means that the result will have twice the width and twice the height of the original photo, or four times the total pixel count. For instance, the following source photo is 16 megapixels, so applying Super Resolution will result in a 64-megapixel DNG.<\/p>\n

[Image: 16-megapixel source photo]<\/p>\n
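The arithmetic is simple enough to sketch. The dimensions below are made up for illustration; only the doubling rule comes from the feature itself.

```python
def enhanced_size(width, height):
    """Super Resolution doubles each linear dimension,
    so the total pixel count quadruples."""
    return width * 2, height * 2

# A hypothetical ~16-megapixel source frame.
w, h = enhanced_size(4928, 3264)
megapixels = (w * h) / 1_000_000  # roughly 64 MP after enhancement
```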

Images are currently limited to 65,000 pixels on the long side and 500 megapixels in total. If you try to apply Super Resolution to a file that\u2019s close to those limits, like a big panorama, you\u2019ll get an error message because the result would be too large. We\u2019re looking into ways to raise these limits in the future. For now, don\u2019t worry too much \u2014 a 500-megapixel file is still pretty darn big!<\/p>\n
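A sketch of that pre-flight check might look like the following. The thresholds come from the article, but the function itself is hypothetical, not the product's code.

```python
def can_enhance(width, height, max_long_side=65_000, max_megapixels=500):
    """Return True if the 2x-enlarged result stays within the current
    limits: 65,000 pixels on the long side and 500 megapixels total."""
    out_w, out_h = width * 2, height * 2
    if max(out_w, out_h) > max_long_side:
        return False
    return out_w * out_h <= max_megapixels * 1_000_000
```

For example, a 24-megapixel frame (6000 x 4000) passes easily, while a panorama 40,000 pixels wide would trip the long-side limit once doubled.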

Applying Super Resolution to a Bayer or X-Trans raw file will automatically apply Enhance Details, too. Combining these steps results in higher quality and better performance.<\/p>\n

Super Resolution also works on other file formats such as JPEGs, PNGs, and TIFFs. Here\u2019s an example where I captured a time-lapse sequence in raw format, composited the frames in Photoshop to produce a TIFF file, and then applied Super Resolution to that composite.<\/p>\n

[Image: Time-lapse composite enhanced with Super Resolution]<\/p>\n

If you use Enhance a lot, you may find the following tips handy to speed up your workflow. You can apply Enhance to several images at a time by first selecting the desired images in the filmstrip, then running the Enhance command. The dialog will only show you a preview for the primary photo, but your chosen Enhance options will apply to all selected photos. You can also skip the dialog entirely by pressing the Option (on macOS) or Alt (on Windows) key before picking the Enhance menu command. Using this \u201cheadless\u201d option will apply the previous Enhance settings.<\/p>\n

Comparison<\/h3>\n

Let\u2019s take a closer look at the results. We\u2019ll start with a studio test scene available at dpreview.com:<\/p>\n

[Image: dpreview.com studio test scene]<\/p>\n

While this is obviously not a \u201creal-world photograph,\u201d it is a good way to get a sense of the benefits provided by Super Resolution over traditional upsizing methods. Here are some zoomed-in crops from various parts of this test scene. Images on the left use standard bicubic upsizing, and images on the right use Super Resolution. Notice how the new approach does a better job at preserving small details and colors.<\/p>\n

[Images: Six zoomed-in crop comparisons, bicubic upsizing on the left and Super Resolution on the right]<\/p>\n

Now let\u2019s examine some practical cases, starting with a landscape image:<\/p>\n

[Image: Landscape photo]<\/p>\n

This is actually a fairly tight crop from a larger scene, as shown below:<\/p>\n

[Image: The full scene, showing the crop area]<\/p>\n

Here is a side-by-side zoomed-in view of the branches and foliage, with bicubic resampling on the left and Super Resolution on the right:<\/p>\n

[Image: Side-by-side comparison of branches and foliage]<\/p>\n

While we\u2019re on the super theme, here\u2019s a photo of Super Bear:<\/p>\n

[Image: Super Bear]<\/p>\n

All that\u2019s missing is a red cape!<\/p>\n

I kept my distance from this brown bear as she fished for sockeye salmon. (Coming between a hungry bear and her lunch is a Very Bad Idea.) Here are closeups of the fur and spraying water, with bicubic resampling on the left and Super Resolution on the right.<\/p>\n

[Image: Side-by-side closeups of fur and spraying water]<\/p>\n

Best practices<\/h2>\n

Here are some additional tips for getting the most out of Super Resolution.<\/p>\n

Use raw files whenever possible. More generally, start from the cleanest source photo available. If the source photo has artifacts, as often happens with highly compressed JPEGs or HEIC files, then these artifacts might become more visible after applying Super Resolution.<\/p>\n

A faster GPU means faster results. Both Enhance Details and Super Resolution perform millions of calculations and benefit immensely from a fast GPU. For laptop owners, an external GPU (eGPU) can make a big difference. We\u2019re talking about seconds vs minutes to process a single image!<\/p>\n


If you\u2019re in the market for a new computer or GPU, look for GPU models optimized for CoreML and Windows ML machine learning technologies. For example, the Neural Engine in the Apple M1 chip is highly tuned for CoreML performance. Similarly, the Tensor Cores in NVIDIA\u2019s RTX series of GPUs run Windows ML very efficiently. The GPU landscape is changing quickly and I expect big performance improvements around the corner.<\/p>\n

Super Resolution can produce very large files, which take longer to read from disk. I recommend using a fast drive like a solid-state drive (SSD).<\/p>\n

Finally, don\u2019t feel that you need to apply Super Resolution on all your photos! Think of it as a new option for those special photos and print projects that really need it. As for myself, I have a hundred thousand photos in my catalog, but I\u2019ve used Super Resolution on just a handful of them. After long and careful consideration, I decided that I really don\u2019t need a hundred megapixel photos of my cat. Really.<\/p>\n

[Image: Pan-blur photo]<\/p>\n

This pan-blur photo doesn\u2019t have any fine details and doesn\u2019t need Super Resolution, even when making a big print.<\/p>\n

What\u2019s next?<\/h3>\n

Enhance Details was the first Enhance feature. Super Resolution is the second. We\u2019re now looking into ways to extend Super Resolution to produce even larger and cleaner results. We\u2019ll also be exploring other potential applications of the same underlying technology, such as improved sharpening or noise reduction. Anything we can do to make images look better is fair game!<\/p>\n<\/div>\n

Source: Adobe<\/p>\n