Colour Labels in Lightroom

Have you ever used colour labels in Lightroom to organise your photos?

It may surprise you that these colour labels are not actually colours at all. No colour is assigned to a photo; the label is a string containing the name of a colour.

You can easily try it yourself. Open Lightroom, choose a photo and set the colour label to red by hitting “6” on the keyboard. Depending on your settings, Lightroom will display a red frame around your image, add a reddish background or show a red square beneath the image in grid view. Once you have done that, open the metadata panel, choose the default display and look at the field called “Label”. What do you see? The text “Red”, not the colour.

You can edit this field. Try typing “Green” and observe the colour marking change. Of course you can type in anything you like. Say you want to mark an image as approved, just type in “Approved”. No colour shows up? Well, what did you expect?

Colour Label Sets

If the field “Label” contains just text, how does Lightroom know it is supposed to display a colour? How does this work across languages? You might write “Yellow” for yellow, I’d write “Gelb” in my mother tongue, yet we mean the same colour. The key is a colour label set. You will find it in the menu under “Metadata – Color Label Sets”. Lightroom comes with three predefined sets: “Lightroom Default”, “Bridge Default” and “Review Status”. Select “Lightroom Default”, open the menu again and click “Edit…”. You will see this:

That is the whole secret. Lightroom reads the field “Label” and checks the currently selected colour label set to see whether there is a colour to display for the text it finds. You can use any text you like. The two other default sets use different texts and are good examples of what you might use this for. You can create and save your own colour label sets as well.
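The lookup can be sketched in a few lines of Python. The label texts mirror the “Lightroom Default” set; the function and dictionary are my own illustration, not Lightroom internals:

```python
# A colour label set is just a mapping from label text to a display colour.
# These entries mirror the "Lightroom Default" set; the code itself only
# illustrates the lookup, it is not how Lightroom is implemented.
LIGHTROOM_DEFAULT = {
    "Red": "red",
    "Yellow": "yellow",
    "Green": "green",
    "Blue": "blue",
    "Purple": "purple",
}

def display_colour(label_field, label_set):
    """Return the colour to display for a photo's Label field, if any."""
    return label_set.get(label_field)  # None -> no colour marking

print(display_colour("Green", LIGHTROOM_DEFAULT))     # green
print(display_colour("Approved", LIGHTROOM_DEFAULT))  # None -- no colour shows up
```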

By the way, switching the colour label set does not alter the Label field, so you can use different sets for different purposes and switch between them. The only thing to keep in mind: there can be only one label per photo. If you mark a photo as Red with the “Lightroom Default” set, then switch to “Bridge Default” and use the yellow colour label to mark the same photo as second choice, the metadata will only contain the word “Second” – the first colour label is gone.

I’ll leave it to you to find out what happens if you switch languages.

Macros of Insects

Ladybird close-up

For some time now I have been trying my hand at macros of small insects. At the moment I’m using a Sigma MACRO 105mm F2.8 DG with Kenko extension rings, mostly with a 20mm ring, on my Canon EOS R. I’m still quite a beginner, but a few insights are emerging.

There is Too Little Light!

The theoretical consideration that the depth of field will not suffice even for a fly if I open the aperture to gather as much light as possible is immediately confirmed in practice. The aperture must be closed as far as possible. And here a second piece of theory is confirmed: even if the lens allows f/22, it’s no use. From f/16 on, the image becomes visibly blurrier due to diffraction at the aperture; f/14 appears to be the optimum.

My new R can go further than my old 7D, but of course it also has its limits. Especially with very fine detail, noise that is already visible at ISO 800 becomes disturbing. In other contexts, e.g. theatre photography, I tolerate noise up to ISO 3200 or even more. With insects, I don’t like it. Since I usually want to crop the image a bit, quality should be optimal. So down to ISO 100.

To work at ISO 100 and f/14 you either need a lot of light or long exposure times. I therefore tried putting the camera on a tripod and getting there with exposure times around 1/4 second. This has not worked for me. Firstly, by the time my tripod is in position my subjects have usually long since fled, unnerved; and secondly, the beasts move. Even flies that seem to be sitting still often move their abdomen – I am not a biologist, but I suspect these are breathing movements. So I aim for exposure times around 1/200 to 1/250 second. I don’t use the big tripod; I just try to use existing objects as support, or a monopod.

Too slow despite bright sunlight at noon

Unfortunately, even in blazing midday sun (more on its qualities later), there is usually still too little light.

Get Me More Light – Flash!

After a short search on the internet, it became clear: I would love to try a macro flash with two separately controllable halves, be it a ring flash or small individual flashes, but it is too expensive for me to start with (and I don’t trust the cheap offers). So it’s a matter of experimenting and tinkering.

The first idea, simply tilting the pop-up flash downwards, quickly hits its limits. These things are not made to illuminate objects a short distance in front of the lens. A better approach was a reflector surface attached to the flash that redirects the light downwards. But this technique has two disadvantages. First, the reflector adds so much weight to the flash head that it keeps tipping downwards; my flashes can be set to different angles via detents, but the detents are far too weak for the reflector. And second, the light now always comes from above, which also looks bad.

Next I tried controlling the flash via a radio trigger, holding it with the reflector to the side of the camera. My conclusion: I have at least one hand too few. So that doesn’t work either.

So I searched the internet again and thought about solutions. One idea I might pursue is a flash rail under the camera with two short vertical rails holding two clip-on flashes pointing forwards, so that their flash tubes sit to the right and left of the lens. For now I consider this too bulky and wobbly. There are a few similar approaches in the accessories trade; the user comments tend to confirm my concerns.

So I decided to stick with one flash for now. Why would I need two? The answer is quickly found: perhaps because of the shadows.

Portraits, Light and Shadow

Harsh shadow of direct sunlight

A pop-up flash is not really a point light source, but the area the light emanates from is so small compared to the photographed objects that hard shadows appear behind them. These are usually perceived as unattractive. If you have two light sources, one can fill the shadows produced by the other. If the brightness of the two sources can be controlled separately, the illumination can be shaped quite well.

This is also common in studio portrait photography, where a typical set-up has a key light and a fill light; often the fill is just a reflector. By the way, to achieve a three-dimensional impression in a two-dimensional photo we do need shadows. We don’t want them gone completely, otherwise the picture looks flat – just soft, and with a little direction. For portraits it is classically considered beautiful if the light falls more or less as we know it from natural lighting, i.e. diagonally from above and usually a little from the side (I won’t go into details here; there are all kinds of lighting variations and exceptions to almost everything).

Soft shadows resulting from a slight overcast

If you want soft shadows, it is crucial that the light does not come from only one or a few fixed directions. If the light source is a point, the light always comes from exactly one direction; if the light source is a surface, light arrives over a certain solid angle and there is a soft transition between light and shadow.

You can easily observe this outside. The sun is big but far away, so it subtends only a small solid angle, almost like a point. Everyone knows the hard, clear-edged shadows that sunlight casts. On overcast days, however, the incoming sunlight is scattered by the clouds and the entire sky appears luminous, contributing to light and shade. Because the light comes “from all directions”, you see almost no shadows, and only very soft ones. Days with light cloud cover – just not clear skies – are ideal for outdoor portraits.

The solid angle depends on the size of the light source and its distance from the illuminated object. For portraits and studio flashes, I would estimate the ratio between the size of the light source (the bare flash tube, to start with) and the object at roughly 1:10, the distance typically 1:3. So you get hard shadows and, with two sources, just hard shadows that overlap and no longer look quite so dark. The hard shadow edges are usually considered unattractive; you eliminate them by making the effective light-emitting area larger – with reflectors, umbrellas and softboxes. If you place a round softbox with a diameter of 2 m one metre in front of an object, the light comes not only straight from the front but also from up to 45° from the right, left, top, bottom and all directions in between. Seen from the object, the beam spread is 90°. The same light source 5 m away gives an opening angle of only about 22°, and the shadows become harder again.
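The angles above follow from simple trigonometry; a quick Python sketch (sizes and distances in metres):

```python
import math

def beam_spread(source_diameter, distance):
    """Opening angle (degrees) under which a round light source of the
    given diameter appears, seen from the object's position."""
    return math.degrees(2 * math.atan((source_diameter / 2) / distance))

# 2 m softbox, 1 m away: light arrives from up to 45 deg off-axis
print(round(beam_spread(2.0, 1.0)))     # 90
# the same softbox 5 m away: the shadows get harder again
print(round(beam_spread(2.0, 5.0), 1))  # 22.6
```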

Taken with a large softbox and a reflector as fill

I have already done portraits quite successfully with only one softbox; I would estimate the opening angle at about 30°–45°. That should also be feasible for macros. The area of the pop-up flash alone, even with the diffuser folded out, would be a bit small, but if I had a small softbox 10-20 cm in diameter and could place it about 20-30 cm from my subjects, that would be enough.

Then I found a gooseneck to attach to the camera’s flash shoe and a mini softbox in the accessories shop. It looks like this:

The construction is a bit unwieldy, but with a little practice it can be handled with two hands. I am quite satisfied with the results. Only the positioning of the flash needs some trial and error; the light still comes a bit too much from the side.

Pop-up flash only
With the small soft box


Re-Run Lightroom Face Detection

Update June 2018: New Face Detection in Lightroom Classic 7.3

With version 7.3 of Lightroom Classic, Adobe introduced a new, much improved face detection as well as an option to run face detection again. The methods described in this article should no longer be required.

Original Article

To my knowledge, as of today there is no official way to re-run face detection in Adobe Lightroom (LR), neither for a complete catalogue nor for a single folder. You may want this option if, for example, you deleted a whole bunch of already recognised face areas and now regret it. The following works for me.

First make a backup of your catalogue.

Then get the SQLite Browser. You will need it to edit the LR catalogue file. Install it, run it and open a database; use your catalogue as the database. Lightroom must be closed while you do this.

As you will see, a LR catalogue is a relational database containing a lot of tables. With some experience you’ll be able to guess how they work together. You can do a lot here – and damage a lot! If you try what I describe and it fails, don’t come complaining to me. This is definitely not officially supported: no warranty, at your own risk.

There is one table that can help you if you want to re-run the face-detection: Adobe_libraryImageFaceProcessHistory. It contains information about what happened with the images already, the last status of face-detection, face-recognition and indexing as well as whether you touched the image after detection, for example to confirm the find by adding a name.

The table is pretty useless without additional information. For each image it has data for, it contains an ID which you can use to link to the relevant information in other tables, like this:

select Folder.pathFromRoot, File.baseName, File.extension, Hist.*
from  Adobe_libraryImageFaceProcessHistory AS Hist
inner join Adobe_images AS Img ON Hist.image = Img.id_local
inner join AgLibraryFile AS File on  Img.rootFile = File.id_local
inner join AgLibraryFolder AS Folder on File.folder = Folder.id_local

As the query includes the folder path, you can also use it to filter for photos in a particular folder (including subfolders if you use the wildcard ‘%’). Like this:

select Hist.*
 from  Adobe_libraryImageFaceProcessHistory AS Hist
 inner join Adobe_images AS Img ON Hist.image = Img.id_local
 inner join AgLibraryFile AS File on  Img.rootFile = File.id_local
 inner join AgLibraryFolder AS Folder on File.folder = Folder.id_local
 where Folder.pathFromRoot LIKE 'Photos/2015/2015-04-25/%'

Now, with this you can selectively delete the face detection history entries for photos in a particular folder. Before you do, make sure you have really made that backup.

delete from Adobe_libraryImageFaceProcessHistory
 Where id_local in (
 select Hist.id_local
 from  Adobe_libraryImageFaceProcessHistory AS Hist
 inner join Adobe_images AS Img ON Hist.image = Img.id_local
 inner join AgLibraryFile AS File on  Img.rootFile = File.id_local
 inner join AgLibraryFolder AS Folder on File.folder = Folder.id_local
 where Folder.pathFromRoot LIKE 'Photos/2015/2015-04-25/')

Now close the database and open Lightroom. Go to the folder and switch to the People view. Detection runs again.
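If you prefer a script to a GUI, the same deletion can be run with Python’s built-in sqlite3 module. The catalogue path and folder pattern below are placeholders; make that backup first, and keep Lightroom closed:

```python
import sqlite3

# Hypothetical paths -- substitute your own catalogue and folder.
CATALOG = "Lightroom Catalog.lrcat"
FOLDER = "Photos/2015/2015-04-25/%"

def delete_face_history(db_path, folder_like):
    """Delete face-detection history rows for photos whose folder path
    matches folder_like. Back up first; Lightroom must be closed."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(
            """
            DELETE FROM Adobe_libraryImageFaceProcessHistory
            WHERE id_local IN (
                SELECT Hist.id_local
                FROM Adobe_libraryImageFaceProcessHistory AS Hist
                INNER JOIN Adobe_images AS Img ON Hist.image = Img.id_local
                INNER JOIN AgLibraryFile AS File ON Img.rootFile = File.id_local
                INNER JOIN AgLibraryFolder AS Folder ON File.folder = Folder.id_local
                WHERE Folder.pathFromRoot LIKE ?
            )
            """,
            (folder_like,),
        )
        conn.commit()
        return cur.rowcount  # number of history rows removed
    finally:
        conn.close()
```

Calling `delete_face_history(CATALOG, FOLDER)` returns how many history rows were removed.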

This does not delete already detected face-areas or keyword assignments.

Let me know if this worked for you in the comments.

English Lightroom with a German Keyboard

A simple trick to use a German keyboard layout with an English Lightroom.

The Challenge

Because I prefer to read English-language photography sites and also own a lot of photography textbooks in English, I prefer an English Lightroom. That way I don’t have to hunt for the German equivalent of a term used in, say, a tutorial. I handle Photoshop the same way, but that is not today’s topic.

The challenge is that, unfortunately, Adobe switches the keyboard layout along with the language. There is no way to set a keyboard layout separately. Keys that are convenient on an English layout sit in impossible places on a German keyboard, for example the backslash “\” or the square brackets “[” and “]”. On the US layout they are right next to each other and can be reached without any modifier keys:

US keyboard layout
“KB United States-NoAltGr”, derived from KB United States.svg, licensed under CC BY-SA 3.0 via Wikimedia Commons

For us Germans it looks different: all three keys can only be reached in combination with Alt Gr.

Source: Wikipedia
Source: Wikipedia

The “Solution”

When the language is switched, Lightroom simply switches to a different language resource file. In a Lightroom Classic CC installation the German file can be found at (on Windows; unfortunately I have no Mac to test on): %Program Files%\Adobe\Adobe Lightroom Classic CC\Resources\de\TranslatedStrings_Lr_de_DE.txt.

As expected, there is no folder and no file for English (en). Interestingly, though, such a file is honoured if it exists. You can easily try this: copy the folder …\Resources\de and rename the copy to …\Resources\en, rename the file TranslatedStrings_Lr_de_DE.txt to TranslatedStrings_Lr_en_US.txt, then start an English Lightroom. It now uses not only the German display texts but also the German keyboard mappings.

The file itself is a plain text file. Here is an excerpt of its content:


The lines highlighted in red are the interesting ones: keyboard mappings are obviously being assigned there. So what could be more natural than throwing every line that has nothing to do with keyboard mappings out of the file and using the result for the English Lightroom? Nothing – that is exactly how it works.

If needed, you can correct a few unfortunate mappings and adjust a few texts while you are at it, and you have your own keyboard mapping for German keys in an English Lightroom.


I don’t want to offer a ready-made file for download here, because

  • I have built my personal preferences into mine, which not everyone may share,
  • the file probably depends on the Lightroom version, and
  • I am not sure how enthusiastic Adobe would be.

Deleting superfluous lines with Notepad++.

I don’t want the German texts, only the keyboard mappings. So every line that has nothing to do with them has to go. Search and replace with regular expressions makes deleting the unneeded lines much easier. In notepad++ I deleted all text parts matching the following expression:



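The effect of that clean-up is easy to reproduce without regular expressions; a small Python sketch that keeps only lines containing a key assignment. The “/Key=” test is my assumption about what marks a shortcut entry; adjust it for your Lightroom version:

```python
# Keep only the keyboard-mapping lines of a TranslatedStrings file.
# The "/Key=" marker is an assumption, not documented by Adobe.
def keep_key_mappings(lines):
    return [line for line in lines if "/Key=" in line]

sample = [
    '"$$$/AgDevelopShortcuts/Rotate_left/Key=Command + ,"',
    '"$$$/AgLibrary/SomeDialog/Title=Einstellungen"',  # hypothetical entry
]
print(keep_key_mappings(sample))  # only the shortcut line survives
```

Applied to the whole file, this drops the German display texts and leaves the shortcut definitions.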
A few remaining lines contain German terms used on the help screen (reached with Ctrl+<) that displays the current key assignments. It seems sensible to replace the German terms appearing there with the English ones, i.e.:

  • Command instead of Befehl
  • Delete instead of Löschen
  • Option instead of Wahl
  • Enter instead of Eingabe
  • Backspace instead of Rücktaste
  • Shift instead of Umschalt
  • Right Arrow instead of Nach-rechts-Taste
  • Left Arrow instead of Nach-links-Taste
  • Up Arrow instead of Nach-oben-Taste
  • Down Arrow instead of Nach-unten-Taste
  • Ctrl instead of Strg
  • Tab instead of Tabulatortaste
  • Space instead of Leertaste


After that, the number of lines you have to delete manually is manageable. They are easy to spot while scrolling through, and you can quickly fix things if you later stumble over a forgotten German text in the Lightroom interface.

Of course you still don’t really need all the remaining lines in the file. Where the original English mapping works well, there is no need to intervene. But it is much easier to take over all assignments from the German file than to laboriously work out which ones you actually need.

If you want, you can now adjust individual mappings. At the moment there is not much left to do for my taste: the real errors have been fixed in the meantime, and only a few labels on the help screens are still slightly off. You can fix them, but you don’t have to.

Current State

Finally, my current “corrections” for Lightroom Classic CC 7.2:

“$$$/AgDevelopShortcuts/Create_Virtual_Copy/Key=Command + T”
“$$$/AgDevelopShortcuts/Rotate_left/Key=Command + ,”
“$$$/AgDevelopShortcuts/Rotate_right/Key=Command + .”
“$$$/AgLibrary/Bezel/FilterBarHidden=Press < to show the filter bar again”
“$$$/AgLibrary/Bezel/Mac/FilterBarHidden=Press < to show the filter bar again”
“$$$/AgLocation/Bezel/Filterbar/FilterBarHidden=Press < to show the filter bar again”
“$$$/AgLocation/Bezel/Filterbar/Mac/FilterBarHidden=Press < to show the filter bar again”
“$$$/Slideshow/Bezel/HeaderHidden=Press < to show the header bar again”
“$$$/Slideshow/Bezel/Mac/HeaderHidden=Press < to show the header bar again”

Photoshop Blend Modes - Part 3

Soft Light

Soft Light is the blend mode I read the most nonsense about. Part of this may be because vendors define the method differently and get quoted incorrectly, adding to the confusion. Alas, I couldn’t find explanations for their particular choices, so I can only show what I figured out and present my own thoughts.


My Photoshop CC uses the following formula (as documented in the PDF Reference):

f(a,b) = \left\{ \begin{array}{lr}(1-a)ab + a(1-(1-a)(1-b)) & \text{for } b < {1/2}\\a+(2b-1)(\sqrt{a}-a) & \text{for } b \geq {1/2} \land a > {1/4}\\a+(2b-1)(((16a-12)a+4)a-a) & \text{for } b \geq {1/2} \land a \leq {1/4}\end{array} \right.
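Transcribed into code, the formula looks like this; a small Python sketch for a single channel with values in [0, 1] (my own transcription of the PDF Reference formula, not Adobe’s code):

```python
import math

def soft_light(a, b):
    """PDF-Reference Soft Light for one channel, a and b in [0, 1]."""
    if b < 0.5:
        return (1 - a) * a * b + a * (1 - (1 - a) * (1 - b))
    if a > 0.25:
        d = math.sqrt(a)                   # sqrt branch for larger a
    else:
        d = ((16 * a - 12) * a + 4) * a    # polynomial correction for small a
    return a + (2 * b - 1) * (d - a)

print(soft_light(0.3, 0.5))  # 0.3 -- 50% grey in B leaves A unchanged
```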

I was able to verify that with the help of a little Postscript script my father generously provided (thank you very much for your support, Dieter). It fills a square with the result of a blend between two linear gradients, using a formula limited only by your Postscript skills (reverse Polish notation). After comparing the result with Photoshop’s own blend, I can assure you that this is indeed the formula used, whatever others may tell you. (For more details about the process see here.)

The two source images:

Gradient A
Image A – horizontal gradient

Gradient B
Image B – vertical gradient

After blending using the Soft Light formula with the Postscript script:

Soft Light calculated
Calculated Soft Light

and using Photoshop’s Soft Light blend mode:

Photoshop Soft Light
Photoshop Soft Light

By subtraction one can easily show that both images are identical (apart from small deviations due to limited resolution and rounding errors).

A closer look at the brightness for fixed b-values is quite interesting:

As you can see, for b=0.5 you get the diagonal, i.e. a linear mapping from a to brightness. Hence if you choose an evenly grey image for B (50% grey), image A is unchanged by the blending. This is in fact a design goal for all the formulas we look at here: 50% grey is supposed to be neutral.

For b=0 you get the much simpler formula:

f(a,b) = a(1-(1-a)) = {a}^{2}.

This is nothing else but the γ-correction (gamma correction) with γ=2. The curve for b=1 looks a lot like the square root function, which would be a γ-correction with γ=0.5, but due to the corrective term for small a it is not identical.

I was interested in the course of the deviation from the diagonal

f(a,b) = a

for different values of b, as that might show me the degree of symmetry of the method. So I plotted curves for

f(a,b) = \left\{ \begin{array}{lr}(1-a)ab - a(1-a)(1-b) & \text{for } b < {1/2}\\(2b-1)(\sqrt{a}-a) & \text{for } b \geq {1/2} \land a > {1/4}\\(2b-1)(((16a-12)a+4)a-a) & \text{for } b \geq {1/2} \land a \leq {1/4}\end{array} \right.

The strong asymmetry for b > 0.5 is striking.

Variations of the Soft Light formula

Using the following formula, also sometimes presented as the Photoshop formula, you get a slightly different result. Here you actually do get the γ=0.5 correction for b=1.

f(a,b) = \left\{ \begin{array}{lr}(1-a)ab + a(1-(1-a)(1-b)) & \text{for } b < {1/2}\\a+(2b-1)(\sqrt{a}-a) & \text{for } b \geq {1/2}\end{array} \right.

Without the corrective term for small a
Without the “correction” for small a

At a close look you might see that the top-left corner is slightly brighter than in the Photoshop version.

The corresponding curves look like this:

Again no symmetry. Yes, you could mirror at the diagonal, but that does not keep a fixed. For blending we want to know how a fixed value of a is translated for varying b. Here identical changes in b have a stronger or weaker effect depending on the direction you move from b=0.5. You can see this more easily after subtraction of a.


In this representation you can clearly see how strong the change of the image would be for small a. Maybe too strong for Adobe, so they chose to add that corrective term?

But then, you could simply use only the first part of the original formula:

f(a,b) = (1-a)ab + a(1-(1-a)(1-b))

Just the first term
Just the first term

Also a lovely, very smooth result, linear in b. You can see this linearity in the curves as well:

Now you no longer get mirror symmetry along the diagonal, but the distance between curves for fixed a is always the same.

Here you see symmetry; it looks very plausible, but for b=1 you miss γ=0.5 by a long shot.

Illusions’ parametric Gamma-Correction

A completely different approach to Soft Light can be found at Illusions. Starting from the idea of a parametric γ-correction controlled by b, they use:

f(a,b) = a^{2^{(1-2b)}}

The result of the blend is very pleasing, at least to my eyes:

Gamma-Correction

At first glance, the curves almost look like the ones for the original Adobe-formula:

At a closer look the transitions are much smoother, of course. This is not glued together from different parts, no wonder.

Mirror Games

I also have my own suggestion, a spin-off from trying to answer the question whether the second part of the Photoshop formula might be the first one mirrored at the diagonal. The answer: it isn’t. You get the formula for the mirrored curve by solving the first part for a and replacing b with (1-b′). The latter because for b > 0.5 the curve must be the mirror of the one with b′ = 1-b.

f(a,b) = \left\{ \begin{array}{lr}(1-a)ab + a(1-(1-a)(1-b)) & \text{for } b < {1/2}\\ \frac{1-b}{1-2b}+\sqrt{\frac{(1-b)^2}{(2b-1)^2}+\frac{a}{2b-1}}& \text{for } b \geq {1/2}\end{array} \right.

In this case the deviations from the diagonal are:

I find this is not bad at all. Slightly brighter bottom left than the Illusions version, less drastic top left than the reduced Photoshop formula (the one without the corrective term for small a).

"My" Soft Light
“My” Soft Light

I have to admit, the whole thing is pretty academic. Alas, we cannot add new blend modes to Photoshop that easily. If you can, let me know. I’d love to have the last two blend modes there.

Photoshop Blend Modes - Part 2

I already introduced the blend modes Multiply, Linear Dodge, Linear Burn and Linear Light in the first part of the series. In this article I’ll add some more that might be interesting for photographers. One particular blend mode gets a separate third part, because it is rather complicated and frequently used: Soft Light.

Throughout the first parts of this series, I’ll deal with the math behind the blend modes only. I plan to use these as a reference in later articles treating more practical aspects and use cases. The retouching method that triggered my research has already been given its own article: Frequency Separation.


Let me introduce a notation for inversion in addition to the ones from the first part. Let \overline{A} denote the inverted image of an image A. Inversion means nothing else but subtracting the brightness value of each pixel from 1. Thus:

\overline{a(x,y)} = 1 - a(x,y).

Or short:

\overline{a} = 1 - a.

And now to the blend modes.

Color Burn

I am aware that it is slightly strange to explain something called Color Burn with black and white images. But as I said before, the math is what I will focus on first. The formulas are the same regardless of the number of channels. The formula for Color Burn is:

f(a,b) = \left\{ \begin{array}{lr}\overline{(\frac{\overline{a}}{b})} = 1 - \frac{1- a}{b} & \text{for } b > 0\\0 & \text{for } b =0 \end{array} \right.

Color Burn
Color Burn

You will notice that the whole bottom-left half drowns in black.

Color Dodge

The counterpart to Color Burn.

f(a,b) = \left\{ \begin{array}{lr}\frac{a}{\overline{b}} = \frac{a}{1-b} & \text{for } b < 1\\1 & \text{for } b =1 \end{array} \right.

Color Dodge
Color Dodge

Here you get white for a > 1-b.


Screen

Another rather simple but nice one:

f(a,b) = \overline{\overline{a}\overline{b}} = 1 - (1-a)(1-b)


Overlay and Hard Light

Overlay and Hard Light are siblings. The formulas are for Overlay:

f(a,b) = \left\{ \begin{array}{lr} 2ab & \text{for } a < 1/2\\\overline{2\overline{a}\overline{b}} = 1 - 2(1-a)(1-b) & \text{for } a \geq 1/2 \end{array} \right.

and for Hard Light:

f(a,b) = \left\{ \begin{array}{lr} 2ab & \text{for } b < 1/2\\\overline{2\overline{a}\overline{b}} = 1 - 2(1-a)(1-b) & \text{for } b \geq 1/2 \end{array} \right.

Both are combinations of slightly modified Multiply and negative Multiply.


Hard Light

You may see it in the images: this is not a very smooth function. Plotting the brightness over a for different fixed b, one gets the following:


Obviously not smooth. Make of it what you like.

50 % grey is neutral for both Overlay and Hard Light.

Vivid Light

Vivid Light can be seen as a combination of Linear Burn and Linear Dodge.

f(a,b) = \left\{ \begin{array}{lr}\frac{a}{2(1-b)} & \text{for } b > {1/2}\\1- \frac{1-a}{2b} & \text{for } b \leq {1/2}\end{array} \right.

Vivid Light
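For reference, the formulas of this part transcribed into a quick Python sketch for one channel (values in [0, 1]; the clamping is my addition, since Photoshop clips results to the valid range):

```python
def color_burn(a, b):
    # results below 0 are clipped to black, as described in the text
    return max(0.0, 1 - (1 - a) / b) if b > 0 else 0.0

def color_dodge(a, b):
    # results above 1 are clipped to white
    return min(1.0, a / (1 - b)) if b < 1 else 1.0

def screen(a, b):
    return 1 - (1 - a) * (1 - b)

def hard_light(a, b):
    return 2 * a * b if b < 0.5 else 1 - 2 * (1 - a) * (1 - b)

def vivid_light(a, b):
    # edge cases and out-of-range results clamped to [0, 1]
    if b >= 1.0:
        return 1.0
    if b > 0.5:
        return min(1.0, a / (2 * (1 - b)))
    if b <= 0.0:
        return 0.0
    return max(0.0, 1 - (1 - a) / (2 * b))

# 50% grey in B is neutral for Hard Light and Vivid Light:
print(hard_light(0.5, 0.5), vivid_light(0.5, 0.5))  # 0.5 0.5
```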

Frequency Separation

Recently I found an interesting article on frequency separation on the Fstoppers pages that made me curious about the method. It seems to be pretty much the standard for beauty retouching now. I learned something, but the article left me with unanswered questions:

  1. Why is Linear Light the correct blend mode for rejoining the two layers created during frequency separation?
  2. What exactly is the difference between addition and subtraction blend mode?
  3. Why should there be a difference in the process depending on the colour-depth (8-bit versus 16-bit)?

So I decided to have a closer look. But first, I had to understand Blend Modes in general. Might be worth starting there if you have never dealt with the subject before.

The Task

We want to split one image into two such that one of the resulting images only contains fine detail, the other the large scale changes in colour and brightness. Take the following crop from a portrait as an example:


You can see pores, a light stubble, skin texture, and you can discern features like part of a nose, a line, shadows. Here the skin texture should end up in the first image, colours and larger shadows defining the shape in the second. Later on I’d like to combine the two images again in a way that we – unless we manipulated one of the images – get the original back.

“Why?”, you may ask. Simply because after this so-called frequency separation you can retouch skin tones without having to worry about the texture, and vice versa. If I wanted to get rid of the red spot at top right, I’d just correct colour and brightness and leave the texture alone. All unmodified areas will look unchanged once blended again, and the manipulated area will look very natural. So that’s why; now I’ll show you how.

But before we start, some (very little) theory.


Fine structures mean spatially rapid changes of brightness, or a strong local contrast. You can interpret the changes of brightness as a superposition of waves. Waves, as you may remember from your physics lessons, have a frequency – in this case a spatial frequency. Fine structure means high frequency; changes over a larger distance mean low frequency.

Frequency Filtering

If you are an audiophile, you may have heard of high-pass or low-pass filters used in audio equipment. They do what their names say: the low-pass lets the low frequencies pass, i.e. the bass notes, while the high-pass does the same for high frequencies. For the latter, we indeed have a ready-made filter in Photoshop. It lets the high frequencies of an image pass and blocks the low frequencies, yielding an image which is mostly grey, close to 50%, with small deviations on a small scale. You can control the scale – or as the audiophile might say, the cutoff frequency – with the radius parameter.

Photoshop also knows low-pass filters, only they are called differently. You may be able to identify them yourself. What evens out all the small scale changes? Blurring does. There are several blur filters available. For our purpose it does not really matter which one we choose, though you have to be careful with filters like surface blur or smart blur, as they tend to create sharp edges, which mean high frequency again. I personally would use the good old workhorse gaussian blur with a rather large radius.

Back to the Task – Splitting the Image

First we have an image A. We create two copies and modify one, giving a new image B that differs from the original A. In this example I used a Gaussian blur with a radius of 6 pixels:


You see, all the fine structure is gone, no skin texture to speak of left.

Now I want to create another image C from the second copy that contains only the differences between the original A and B. Should be easy: take the second copy, use Photoshop’s Apply Image command with blend mode Subtract, A as target and B as source, and be happy. Unfortunately there is a catch.

Photoshop only allows values between 0 and 1 as a result; everything below 0 and above 1 gets cut off. That is not a good idea when subtracting two very similar images: there will almost certainly be some pixels with values below 0. So what we actually do is:

  1. divide by 2
  2. add ½

Thus the complete information is kept. The following formula gives a pixel-value of C:

c = f(a,b) = \frac{a-b+1}{2}.

It is easy to see that nothing is lost when you look at the extremes, i.e. combinations of a,b \in \{0,1\}.
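As a quick sanity check, the formula can be evaluated at those extremes in a few lines of Python (the function name is mine, not Photoshop's):

```python
def subtract_offset(a, b):
    """c = (a - b + 1) / 2: the difference between A and B,
    compressed and shifted so it always fits into [0, 1]."""
    return (a - b + 1) / 2

# All four extreme combinations of a, b in {0, 1} stay inside [0, 1]:
extremes = {(a, b): subtract_offset(a, b) for a in (0, 1) for b in (0, 1)}
```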

By the way, I am showing all this for one channel only; for an RGB image the same can be done for each channel separately.

The result of the “subtraction” is quite similar to that of Photoshop’s own High Pass filter – not exactly surprising.



Photoshop can handle images with a depth of 8 bit, 16 bit or even 32 bit per channel. The 1 I talked about above corresponds to 255, 65535 or 4294967295 respectively. The Apply Image dialogue offers two parameters for the blend mode Subtract: scale and offset. For our purpose we need 2 for scale, that is the aforementioned division by two, and 128 for offset. It is a bit hard to recognise the addition of ½ in that, but that is what it is: regardless of the bit depth, the offset has to be specified in parts of 256, and 128 divided by 256 happens to be ½. This is confusing, and probably the reason for some more complicated workflows I have seen.
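Assuming Apply Image's Subtract computes (a − b) / scale + offset on the raw 0 to 255 values (my reading of the dialogue, not Adobe's documented spec), the parameters translate like this:

```python
def apply_image_subtract_8bit(a, b, scale=2, offset=128):
    """Sketch of the Subtract blend on 8-bit values: (a - b) / scale + offset."""
    value = (a - b) / scale + offset
    return min(255, max(0, value))   # Photoshop truncates to the valid range

# Two identical pixels end up at mid grey, and the offset of 128
# is exactly the 1/2 from the formula: 128 / 256 == 0.5
```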

One mentioned frequently is this (for 16-bit images):

  1. invert image B
  2. add the result to A
  3. set the scale-parameter to 2 (i. e. divide by 2)

Looking at the math, I wasn't able to find any difference from my method. My practical tests didn't show any either, for 8-bit or for 16-bit images. If you know that inversion just means subtracting the pixel value from 1, the formula is easy:

\frac{a+(1-b)}{2} = \frac{a-b+1}{2} = \frac{a-b}{2}+\frac{1}{2}.
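A tiny Python check (my own sketch) confirms that the two routes agree for any pixel values:

```python
def direct_subtract(a, b):
    return (a - b + 1) / 2           # the one-step version from above

def invert_then_add(a, b):
    return (a + (1 - b)) / 2         # the frequently recommended workflow

# Compare both over a grid of pixel values in [0, 1].
samples = [i / 10 for i in range(11)]
max_diff = max(abs(direct_subtract(a, b) - invert_then_add(a, b))
               for a in samples for b in samples)
```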

So the answer to question three is: there is no difference.


Now we just need another blend mode that can reassemble the two images B and C so that, if both are unchanged, we get A again. Simply adding the pixel-values won't work, of course: we scaled and shifted the result of the subtraction, so we need to find the reverse operation. We subtract ½ from C, multiply the result by 2 and add B.

f(b,c) = 2(c-\frac{1}{2}) + b = 2c - 1 + b = b + 2c - 1.

Surprisingly this is exactly the formula for the Linear Light blend mode. So my first question is answered as well: Linear Light is the reverse of Subtraction.
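A round trip in Python (a sketch with my own function names) shows that linear light exactly undoes the subtract step:

```python
def split(a, b):
    """The 'subtract' step: C = (A - B + 1) / 2."""
    return (a - b + 1) / 2

def linear_light(b, c):
    """The reassembly: B + 2C - 1."""
    return b + 2 * c - 1

# For every combination, splitting and reassembling returns image A.
samples = [i / 10 for i in range(11)]
max_error = max(abs(linear_light(b, split(a, b)) - a)
                for a in samples for b in samples)
```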

Using the Frequency Separation Technique

In practice you will of course not use Apply Image to immediately join the hard-won separate images again. Instead, you use both images as layers, C over B, and set the blend mode for C to Linear Light. That way you keep two layers you can work on separately. Of course you can add layers between the two, for example for non-destructive retouching. In the example I was able to remove the red spot while keeping the texture. Retouching itself is not within the scope of this article.


The remaining answer to question two you will find in my little series on Blend Modes – with more math and simulations.

I hope you liked my little excursus on frequency separation. If you have questions or find mistakes, do not hesitate to leave a comment.

Photoshop Blend Modes - Part 1

I tried to figure out the frequency separation method for retouching portraits. Initially I didn't really understand why each step was used, so I first had to dig deeper into the Photoshop (PS) blend modes. There is a lot you can find on the internet regarding blend modes. Not everything I found was correct, let alone understandable for me. Photographers and PS users in particular approach the subject rather intuitively; not many try to understand the math behind the tools they apply, though that would help predict the effects that can be achieved.

So I decided to find out what exactly happens, with the goal of writing a concise, but not too complicated explanation myself. The first surprise during my research was that not everybody (not even the graphic software vendors) means the same thing when talking about soft light, for example. Adobe is undisputedly one of the leaders in the market, but in PS in particular some of the blend modes are not implemented really well (my humble opinion, of course).

The Task

Calculate a new image from two existing ones.

This is what happens whenever you place two layers above each other in PS. The simplest, but also most boring, blend mode is making the upper layer opaque, so you see nothing of the lower layer. You can use other blend modes to manipulate the colours and luminosity of the lower source by means of the upper source (yes, in some cases the order is important, as with an opaque layer). That is what photographers use blend modes for.

Assumptions, Simplifications and Definitions

Assumption: the value for a particular pixel of the new image (the target) can be deduced from the values of the corresponding pixels (same position, i.e. same coordinates) of the two source images.

Simplification: everything I present is shown for luminosity only, or rather for black-and-white images with just one channel. The same holds for RGB images with three separate channels; one just uses the same formulas thrice, as the channels do not influence each other. Transparency (alpha channel) shall be ignored for now as well.

I will call the source images A and B, the target image C. If you are thinking in layers, think of A as the lower layer and B as the upper. C is what PS displays on the screen.
I shall use a common orthogonal coordinate system defined by counting pixel columns and rows from the lower left corner. The location of any pixel is precisely defined by its coordinates (x, y).

Let a(x,y) be the value (luminosity) of a pixel of image A, b(x,y) the value for a pixel of B with the same coordinates. As a result of a calculation you get a pixel-value of C: c(x,y) = f(a(x,y),b(x,y)).

For better readability I will leave out the coordinates, at least as long as no pixels with differing coordinates are involved in the calculation.

Pixel-values will always be given in numbers between 0 (zero) for black and 1 (one) for white, regardless of the actual bit depth. When calculating c(x,y), you have to keep in mind that its value also is restricted to this interval.

a,b,c \in [0,1]

You will immediately see a difficulty: there are plenty of ways to combine 0 and 1 by a formula that gives values outside this interval. PS solves this problem by truncating the exceeding values: anything below 0 or above 1 is cut off. That has some nasty consequences whenever you try to do several calculation steps in sequence, because part of the information is lost. You cannot calculate \frac{a+b}{2} in PS by executing a+b first and then halving the result, though it seems mathematically correct to do so.
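You can see the loss directly in a few lines of Python, truncating after each step the way PS does:

```python
def clip(v):
    """Truncate to the allowed interval [0, 1], as PS does after every step."""
    return min(1.0, max(0.0, v))

a, b = 0.8, 0.7
stepwise = clip(clip(a + b) * 0.5)   # a + b is truncated to 1 first: gives 0.5
correct = (a + b) / 2                # the true mean: 0.75
```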


To visualise the results I use two images as the basis for all blends. They contain values rising linearly from 0 to 1, once from left to right (a(x,y) = x, defining a horizontal axis for a from 0 to 1), once from bottom to top (b(x,y) = y, the vertical axis from 0 to 1). This makes sure the result contains all possible combinations, and we will see how smooth the transitions are.
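The two ramps are easy to generate; here is a miniature version in Python, with a 5x5 grid standing in for the full-size test images:

```python
SIZE = 5   # tiny stand-in; the real test images are full resolution
# a(x, y) = x: horizontal ramp; b(x, y) = y: vertical ramp.
# Row index 0 is the bottom row, matching the coordinate system above.
A = [[x / (SIZE - 1) for x in range(SIZE)] for y in range(SIZE)]
B = [[y / (SIZE - 1) for x in range(SIZE)] for y in range(SIZE)]
```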

Image A

Image B

Blend Modes

To begin with I'll have a look at some simple, but not trivial, methods: multiplication and addition, then a combination of both, and finally a version that uses two different formulas depending on the pixel-values of B. Further blend modes will be explained in the next part of this series.


Multiply

Multiplication is one of the simplest blends, because the function does not give any values outside the allowed interval of 0 to 1. By the way, PS indeed uses plain multiplication, not the geometric mean, though the latter might seem more plausible.

f(a,b) = a \cdot b

Results for a and b between 0 and 1 lie within that interval as well.

For my test images you get:

f(x,y) = a(x,y) \cdot b(x,y) = x \cdot y

Looking at the result you can see that two edges and hence three of the corners should be black, because a multiplication with 0 always yields 0. Only in the top right corner do you get a value of 1 for white.
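In Python the blend and its corner values look like this:

```python
def multiply(a, b):
    """Multiply blend: never leaves [0, 1] for inputs in [0, 1]."""
    return a * b

# Three corners black (any factor of 0 gives 0), top right corner white.
corners = {(a, b): multiply(a, b) for a in (0, 1) for b in (0, 1)}
```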


Aside: Geometric Mean (not available in Photoshop)

For me the geometric mean

f(a,b) = \sqrt{a \cdot b}

seems to be the better option. There’d be a nice linear transition along the diagonal.
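A sketch of this blend, which Photoshop does not offer: along the diagonal a = b the result equals the input value, hence the linear transition.

```python
import math

def geometric_mean(a, b):
    """sqrt(a * b): stays in [0, 1], but is not what PS's Multiply does."""
    return math.sqrt(a * b)

# On the diagonal a == b the result is exactly that value again,
# so the transition from black to white along it is linear.
```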

Linear Dodge

Linear Dodge is the name given to simple addition. It is immediately clear that an addition of two numbers between 0 and 1 yields results between 0 and 2, so for half of the pixels the results get truncated to 1. Used on photos, this can easily lead to burnt-out areas.

f(a,b) = a + b

If the sum a+b gets larger than 1, it is replaced by 1. The formula never gives results below 0 anyway, so no truncating there. Hence you'd expect white on and above the diagonal between top left and bottom right, and below that a linear gradient down to black in the bottom left corner.
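With the truncation included, linear dodge is a one-liner in Python:

```python
def linear_dodge(a, b):
    """Addition, truncated at the top: min(1, a + b)."""
    return min(1.0, a + b)

# Sums above 1 are clipped to white; small sums pass through unchanged.
```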

Linear Dodge

Linear Burn

This is the negative counterpart of linear dodge. It is just as problematic on photos: you might get fully black areas.

f(a,b) = a + b - 1

If the sum a+b is less than 1, f is replaced by 0. Values above 1 are never reached anyway, so no truncating there. Hence you'd expect black on and below the diagonal between top left and bottom right, and above that a linear gradient up to white in the top right corner.
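The Python counterpart, this time truncated at the bottom:

```python
def linear_burn(a, b):
    """Addition shifted down by 1, truncated at the bottom: max(0, a + b - 1)."""
    return max(0.0, a + b - 1)

# Sums below 1 are clipped to black; bright pairs keep their detail.
```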

Linear Burn

Linear Light

Frequently you will find linear light described as a combination of linear burn for b < 0.5 and linear dodge for b \geq 0.5. Not quite right, but close. If for b < 0.5 you compress the formula for linear burn on the b-axis by a factor of 2, and for b \geq 0.5 compress the one for linear dodge by a factor of 2 and shift it along that axis by 1/2, you indeed get:

f(a,b) = \left\{\begin{array}{lr} a+2b-1 & \text{for } b < {1/2}\\ a + 2(b -1/2) & \text{for } b \geq {1/2} \end{array} \right.

Or, by resolving the terms, simply:

f(a,b) = a + 2b - 1

This is no longer commutative in a and b. Swapping the images A and B will yield a different resulting C.

Along the left edge (a=0) you expect black from the bottom up to b=0.5, then a linear transition to white. On the bottom edge (b=0) just black, because the -1 makes sure all possible results are negative. For b= 1 you get white on the top edge. On the right edge (a=1) you get from b=0 to b=0.5 a linear transition from black to white, above that just white.

Linear Light

Of interest for the PS user is the fact that an image B with b=0.5 for all pixels transfers image A unchanged to C when combined with this blend mode. b=0.5 is 50 % grey, which is called the neutral colour for linear light. Linear light is often used to manipulate a photo by using an almost grey layer with only minimal deviations, for example in high-pass sharpening or as one step of the frequency separation method.
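The neutral colour is easy to verify in Python:

```python
def linear_light(a, b):
    """a + 2b - 1, truncated to [0, 1]."""
    return min(1.0, max(0.0, a + 2 * b - 1))

# A uniform 50 % grey upper layer (b = 0.5) leaves the lower layer unchanged.
```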

Linear light can be described as the sequential execution of linear dodge and linear burn, or as linear burn with B doubled. The latter unfortunately cannot be recreated in PS, as it would need multiple steps, one of them (the doubling of B) yielding results outside the allowed interval. Intermediate results would be truncated, and the final result would be wrong.

f(a,b)= (a+b) + b - 1 = a + 2b - 1
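The truncation problem from the last paragraph can be demonstrated in Python: doubling B first pushes values above 1, and the clipped intermediate result poisons the final one.

```python
def clip(v):
    """Truncate to [0, 1], as PS does after every step."""
    return min(1.0, max(0.0, v))

a, b = 0.5, 0.7

# Stepwise, as PS would have to do it: 2b is truncated before the burn step.
doubled = clip(2 * b)                 # 1.4 becomes 1.0
stepwise = clip(a + doubled - 1)      # gives 0.5: wrong

# Direct linear light for comparison.
direct = clip(a + 2 * b - 1)          # gives 0.9: correct
```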