What follows is a guest post by Daniel Star (Boston University). All photographs are the author’s own. (Readers are encouraged to follow the links in captions for full-size, full-resolution images.)
We’ve all seen it. Maybe we’ve done it. Maybe we’ve “liked” it. Someone takes a snapshot of a wonderful sunset with a smartphone and posts it on a social media site with the “#nofilter” hashtag. This is one of the most popular hashtags on Instagram, and it is now also used widely on Facebook and Twitter. The sunset was no doubt beautiful (sunsets tend to be beautiful), but it’s unlikely that the photograph itself was of high quality – smartphone shots rarely are, and even a setting sun will tend to blow out highlights (bright regions in images, see below), leaving empty space in part of the photo. Perhaps this doesn’t matter, because the point of such a social media post may not be aesthetic, but rather simply to communicate that a person witnessed a beautiful sunset, and to relay to friends a substitute in the form of a snapshot. And it’s true that applying one of the filters supplied by Instagram is unlikely to have improved the snapshot from an aesthetic point of view (the original aim of using “#nofilter” may have been simply to indicate that one of these filters, in particular, had not been used, but its now much broader pattern of usage strongly suggests its meaning has expanded).
If the intent is simply to convey that one witnessed a beautiful sunset by providing an imagistic substitute, why use the “#nofilter” hashtag at all? A widespread misconception about photographs that may explain the popularity of this hashtag is the idea that an unedited photo is likely to be closer to what was actually seen by the human eye than one that has undergone further editing or processing after the original shot was taken. Such “unfiltered” photos, it may be thought, are more accurate. This overlooks the fact (for it is a fact) that all cameras filter what we see in quite particular ways. In saying this, I don’t mean to be referring to the fact that the way we experience the world through direct visual perception – I see the world surrounding me in three dimensions – is very different from the way we perceive images on flat surfaces. I don’t suppose people are likely to be overlooking this fact: we don’t confuse our experience of photos as two-dimensional reproductions with our experience of the world more generally (at least, we don’t tend to do this at the time of experiencing one or the other, even if our memories sometimes end up being shaped by photos from the past). Nor do I mean to be referring to such facts about lenses as that they tend to distort the dimensions of objects in various ways (especially when the lenses have either fairly short or fairly long focal lengths), or may produce photos that contain both sharp and blurry areas (when lens apertures are set to open fairly wide, in order to produce shallow depth of field, as in the photo immediately below).
Rather, my point is that camera and smartphone manufacturers must make decisions about how colors and details will be represented: in effect, each manufacturer provides its own filter that affects, for a start, white balance, color saturation and contrast. Manufacturers must make aesthetically relevant decisions with respect to the interpretation of sensor outputs in digital cameras, the film constitution and development process with analog film, and many complex aspects of camera lens design. The color profiles that come with digital cameras and smartphones vary, and they are, to a large extent, the product of conscious, proprietary decisions made by different manufacturers, with viewers and consumers of various kinds in mind.
If we’re talking about accuracy or naturalness, smartphones tend to produce photos where the colors are quite a bit more saturated than the way colors appear to ordinary vision. In addition, increased saturation goes hand in hand with a decrease in fine details (which will already be somewhat lacking in smartphone photos — smartphone cameras have been improving with the release of every new model, but the significant limitations on light collection that result from utilizing very small sensors and very small lenses remain unchanged). Experienced photographers take “post-processing” very seriously not because they wish to wildly boost the saturation in photos or make them look unnatural (although some do, since stylistic preferences and aesthetic aims vary); they generally have increased accuracy or naturalness as one of their main guiding aims. A common way in which accuracy is increased in editing, for instance, is that highlights are reduced in brighter parts of a photo, enabling us to see blue skies and clouds again where there would otherwise be blank space. For an example of this, compare the “before” and “after” photos below. The editing is not particularly radical in this case. Apart from details in the sky being provided through adjusting highlights, shadows (darker regions) have been lifted to reveal details in the trees in the foreground, and contrast has been increased, as is particularly noticeable if you look at the forest in the background.
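For readers curious about what these adjustments amount to computationally, the operations just described – lifting shadows, recovering highlights, adding contrast – can be sketched as simple transformations of pixel values. The functions below are a hypothetical illustration of the general idea, not the actual (proprietary) algorithms used by Lightroom or any other editor.

```python
import numpy as np

def lift_shadows(img, amount=0.3):
    """Brighten dark regions: nudge values toward a gamma-lifted copy,
    weighted so the darkest pixels change the most."""
    weight = (1.0 - img) ** 2
    return img + amount * weight * (np.sqrt(img) - img)

def recover_highlights(img, amount=0.5):
    """Darken bright regions to reveal detail, weighted so the
    brightest pixels change the most."""
    weight = img ** 2
    return img - amount * weight * (img - img ** 1.5)

def add_contrast(img, amount=0.2):
    """Push values away from mid-gray with a simple linear S-curve."""
    return np.clip(0.5 + (1.0 + amount) * (img - 0.5), 0.0, 1.0)

# Pixel intensities normalized to [0, 1]: a deep shadow, a midtone,
# and a nearly blown-out sky.
img = np.array([0.05, 0.5, 0.98])
edited = add_contrast(recover_highlights(lift_shadows(img)))
```

Real editors use far more sophisticated, locally adaptive versions of these curves, but the basic point stands: each adjustment is a deliberate reinterpretation of the recorded values.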
For the uninitiated, I should explain that “post-processing” is a term for the work that people do when they edit image files in software like the Apple Photos app, Lightroom, Photoshop, or Capture One Pro. It is possible to edit the “jpeg” format files that smartphone cameras produce in ways that improve them, but there are significant limitations to how effective the editing process can be with jpeg files (even jpeg files produced by better cameras), and these limitations become apparent when one compares the experience of editing them with the experience of editing “raw” files. Raw files are much larger because they contain much more information. This information comes more directly from the camera sensor, although this still does not mean it is unfiltered. Raw files cannot be viewed directly, but must always be interpreted by a particular software application, and, crucially, different software applications will make the same raw files appear quite different, even before the user starts editing images (this is another example of how choices made by certain manufacturers – in this case, software manufacturers – must influence the way images appear).
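The point that different applications render the same raw file differently can be made concrete with a toy example. The two “converters” below are entirely hypothetical; they simply apply different white-balance gains and tone curves to the same linear sensor readings, which is (in vastly simplified form) what real raw-processing software does.

```python
import numpy as np

# A toy "raw" pixel: linear R, G, B sensor readings, before any rendering.
raw = np.array([0.20, 0.35, 0.30])

def render_a(raw):
    """Hypothetical converter A: neutral white balance, gamma encoding."""
    balanced = raw * np.array([1.9, 1.0, 1.6])   # per-channel white-balance gains
    return np.clip(balanced, 0.0, 1.0) ** (1 / 2.2)

def render_b(raw):
    """Hypothetical converter B: warmer balance plus a mild contrast curve."""
    balanced = raw * np.array([2.2, 1.0, 1.3])
    encoded = np.clip(balanced, 0.0, 1.0) ** (1 / 2.2)
    return np.clip(0.5 + 1.15 * (encoded - 0.5), 0.0, 1.0)

# The same raw data yields visibly different colors under each interpretation.
print(render_a(raw).round(3))
print(render_b(raw).round(3))
```

Neither output is the “unfiltered” one; each is one interpretation among many of the same sensor data.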
To return to the main thread of our discussion: we can conclude that a key justification people might give for using the “#nofilter” hashtag is not really a good justification at all. Possibly I’m being unfair. It could be that all people wish to indicate when they use the “#nofilter” hashtag is that they, at any rate, didn’t do anything to increase the saturation or contrast, etc., in the photo. They didn’t make the captured image more unnatural than the camera and lens made it. Even these individual choices are not, in one admittedly indirect sense, unrelated to the inbuilt filters we find in our smartphones: the preferences of consumers influence the choices smartphone manufacturers make, so our preference for posting photos without scaling back saturation may collectively be partly to blame for the unnaturalness of photos that haven’t been carefully processed. Nonetheless, this is a more charitable interpretation than the previous one. If this is really what is going on, “#noedits” would be a more accurate hashtag than “#nofilter”. Still, bearing in mind the points I made above about the inescapability of filters and the way editing can make photographs more realistic, why should we think that there is anything particularly noteworthy or valuable about the fact that one hasn’t edited a photo?
I noted above that increasing accuracy, with respect to color saturation, contrast, etc., is very often one of the aims that photographers have in mind when they are editing photos. That being said, I think we should be careful not to put too much weight on color accuracy when it comes to forming aesthetic judgments about photos. A simple way to bring out the extent to which this would be a mistake is to think about black and white photography (or monochrome photography more generally). Black and white photos are, by definition, highly inaccurate with respect to color (which is not to say color becomes irrelevant: when producing black and white photos there are an indefinitely large number of ways that different color tones may be reproduced, and, again, one may either take control of this process, or leave such choices to be made by software manufacturers). Yet a significant portion of the most aesthetically valuable photos being produced today are black and white photos. It clearly isn’t the case that the color version of a photo is always the better photo. This is easy to verify in an era in which one can readily produce both a color and a black and white version from the same image file.
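The claim that there are indefinitely many ways to reproduce color tones in monochrome is easy to illustrate. A black and white conversion is, at bottom, a choice of how much each color channel contributes to the final gray value; the two weightings below show one standard choice alongside a hypothetical “red filter” rendering of the kind film photographers achieved with colored lens filters.

```python
import numpy as np

# One pixel's RGB values (a fairly saturated red), normalized to [0, 1].
pixel = np.array([0.8, 0.2, 0.2])

# A standard luminance weighting (approximately the Rec. 601 coefficients).
luma = pixel @ np.array([0.299, 0.587, 0.114])

# A hypothetical "red filter" weighting that brightens reds, mimicking
# the effect of a red lens filter in black and white film photography.
red_filtered = pixel @ np.array([0.7, 0.2, 0.1])

# The same pixel, two quite different gray tones.
print(round(float(luma), 3))
print(round(float(red_filtered), 3))
```

Since any non-negative weighting (and, in real editors, per-hue curves far more flexible than this) yields a defensible monochrome rendering, there is no single “accurate” black and white version of a scene.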
Of course, there are ways in which different black and white photos might still be ranked as more or less accurate. And some ways in which photos can be inaccurate because of poor editing can be aesthetically repellent, but recognizing this fact does not require one to think increased accuracy is always an aesthetically good thing. In the case of photographs, I think we may be led astray when thinking about the value of accuracy in general partly because we are well aware that introducing certain kinds of inaccuracies into journalistic photos is unethical. I’m all in favor of norms according to which news journalists shouldn’t manipulate photos in various ways (this is not the place to explore which dimensions of accuracy concern us when it comes to photojournalism, but clearly some don’t, as we never complain about images appearing in black and white). But it isn’t clear why, from a point of view that takes aesthetic considerations seriously, we should be concerned about genuine accuracy in photography outside of the domain of journalism. Accuracy is distinct from the kind of truthfulness or insightfulness we might associate with good novels, or other works of fiction, and also distinct from the fineness of grain or high level of detail that both good novels and good photographs usually contain. Much more could be said about these issues. I’ll end here, but not before suggesting that the reader compare the two photographs below and ask himself or herself: (a) which is more accurate?; and (b) which is the better photo? I think the answer to (b) is clear. Let’s consider (a). Given that I did, in fact, take this long exposure shot in near dark conditions (using a tripod), and that an extensive editing process involved making many choices, the reader might be tempted to respond to (a) by saying that the first, darker image is more accurate.
Yet clearly the raw file I edited contained a large amount of information concerning details and colors in the landscape that weren’t visible to the naked eye, so I am far from certain that this would be the correct conclusion to reach.
Notes on the Contributor
Daniel Star is Associate Professor of Philosophy at Boston University. He maintains a collection of his photos at http://dstar.photodeck.com/
April 19, 2018 at 9:20 pm
“If the intent is simply to convey that one witnessed a beautiful sunset by providing an imagistic substitute, why use the “#nofilter” hashtag at all? A widespread misconception about photographs that may explain the popularity of this hashtag is the idea that an unedited photo is likely to be closer to what was actually seen by the human eye than one that has undergone further editing or processing after the original shot was taken. Such “unfiltered” photos, it may be thought, are more accurate.”
No, the reason is that Instagram filters are recognized as a complete distortion of reality; the practice results from how over the top the Instagram filters are. Blue becomes turquoise, etc.
April 20, 2018 at 2:51 am
This may explain how the practice of using the hashtag began (I do indicate that the Instagram filters are terrible), but it simply doesn’t explain many later uses of the hashtag. It’s not just popular on Instagram, but on Facebook and Twitter as well.