Article Information

  • Title: Layers: looking at photography and photoshop - Features - Evaluation
  • Author: Are Flagan
  • Journal: Afterimage
  • Print ISSN: 0300-7472
  • Year: 2002
  • Issue: July-August 2002
  • Publisher: Visual Studies Workshop

Layers: looking at photography and photoshop - Features - Evaluation

Are Flagan

BACKGROUND

Despite the conjunctive statement in the title of this essay, there is currently an ontological dividing line between photographs and images spawned by Photoshop. Apart from tracing the outlines of different technologies, this contested line circumscribes the divide between analog and digital manifestations of photographic representations. The crucial difference between the two has often, and quite dramatically, been construed as a matter of life and death--one is slipping into darkness; the other is seeing the light. The sources and causes of these morbid concerns have of course already been explored to their own finitude. With digital imaging the material transparency that allowed for objective views across time and space met an opaque surface in the form of a manipulable array of pixels. There was much talk of a "reconfigured eye," (1) as if our natural vision had viscerally been gouged out of its sockets and replaced by prosthetic sight, incapable of making the contrasting distinction necessary to see with the same decisive clarity. Blinded by this technological shift and thereby destined to grope around in uncertainties, what may be termed a "reconfigured mind" also emerged; a new reality was quickly decoded from the technical adjustments to our retinal impressions. Incessantly opaque and self-referential, the new photograph, invariably brought to you via Adobe Photoshop after its February 1990 launch, became a perplexing hybrid--an indexical imprint of the world that had no corresponding shape in reality. (2)

But if there was, or is, no crossing of this impossible divide, this conclusive matter of life and death, why did the splash screen of Photoshop introduce itself, when it first arrived, with an icon that put users eye to eye with another paradigm advertised through seeing? And why is the optical nerve still offered a reciprocal view in version 7.0, after several innovative makeovers? If the shift from an optical and lens-based to a digital culture truly amounted to a leap from dark to light with no twilight in between, as has been suggested, why would every launch choose to look back from this brave new world with what it dismisses as hindsight? It appears, at first sight and second thought, that this software application is invested in negotiating the same analogies of vision that also equate the photographic apparatus with an eye. Seen together they may very well compose an odd stereoscopic pair of added depth and dimensionality. To achieve this effect it is necessary to layer some sidelong glances and perpendicular views, invoking a horizontal stacking of photography and Photoshop rather than a vertical divide.

LAYER 1

One of the very first accounts of looking at photographs, in this case a daguerreotype of Boulevard du Temple in Paris, came from an American, Samuel F. B. Morse. Meeting with Louis-Jacques-Mandé Daguerre on March 7, 1839, (3) he wrote his family shortly thereafter to express his first lingering impressions:

You cannot imagine how exquisite is the fine detail portrayed. No painting or engraving could ever hope to touch it. For example, when looking over a street one might notice a distant advertisement hoarding and be aware of the existence of lines or letters, without being able to read these tiny signs with the naked eye. With the help of a hand lens, pointed at this detail, each letter became perfectly and clearly visible, and it was the same thing for the tiny cracks on the walls of buildings or the pavements of the street. (4)

What struck Morse and other early observers most was the unique precision of the daguerreotype image; its ability to represent the tiniest detail with such delicate clarity. In his descriptive experience of looking, Morse even resorted to the use of a hand lens, serving as a magnifying glass, to examine each element closely and marvel at the resolving resolution--"each letter became perfectly and clearly visible." Morse seemed far less concerned about explicating the index of this newfound reality, at least with some measure of scale and ratio intact, than "looking over a street," as if he was somehow moving around inside the image and intimately exploring its internal architecture. His "naked eye" was airily levitating across the rooftops and chimneys before it turned pedestrian and crossed the street with an optical aid to read the hoarding. He even bent over, as he would if strolling the boulevard itself, to peruse the inconsequential cracks that miraculously belonged to the brick and mortar of buildings, to the image itself, rather than a glitch in the silver iodide, revealing the copper plate beneath. The closer Morse got in his ocular wanderings, the greater the clarity on display.

If we slowly retrace this visionary path, it appears that looking turned from a general overview--exemplified in his account by the unfavorable touch of painting or engraving where the brush or needle would quickly assert its mark--to a condensed and slowed pace of visualization that dwelled on details; distinct and minute elements of the picture that were breaking into what seemed like infinite fractals under the power of a hand lens. Zooming in and zooming out, switching between micro and macro views, Morse traversed this new complexity in a manner that reveals the photograph as a space comprised of hierarchical layers of information, not a simple recognition of flat, analog equivalence. Morse does not only see the street from above--he eagerly crosses it to the hoarding, the letters and their cracks. In the process, photography unveils an amazing depth that can only be discerned by breaking the whole into distinct pieces, while one retains a realistic overview for comparison and classification.

Photography to Morse, then, was essentially the seductive ordering of information. The view composed and provided by Daguerre allowed him to organize a worldly complexity in multiple and hierarchical layers that facilitated several levels of contextual readings; simply zoom in and zoom out to rearrange and reorder selections. This example obviously reflects the proposed and projected uses for photography at the time (in the natural sciences) and no doubt Morse's own scientific inclinations. (5) It may also be a basic question of vantage point; the view of Boulevard du Temple is arguably the kind of multifarious photograph destined to be read like scattered small print. But it was, in 1858, essentially the same appeal of photography that inspired a contemporary, Félix Tournachon (Nadar), (6) to launch a balloon equipped with a camera over the growing metropolis and center of the industrial age, Paris, to capture a bird's eye view of its cultivated fields and intricate urban passages. Elevating the camera shifted the scale of detail by bringing more into the frame and resolving less (a question, again, of resolution). But the practice of looking at photography like it was a sliding presentation of discrete details, which were fixed geographically but continuously reconstructed through a segmenting gaze, did not falter. As early as 1859, aerial views were actively used to determine enemy positions in the Franco-Piedmontese war against Austria, thereby explicating new links in what must, first and foremost, be construed as the information flow between photography, knowledge and power.

Today we have of course launched this optical eye into celestial orbit. Satellites circle the distant skies above to monitor and map, in war and peace, with unprecedented elevation and clarity; live links now feed a mirror image of the Earth through constant data streams. It is as if Paul Virilio's foreboding information bomb (7) is armed and permanently suspended in the images beamed down for closer scrutiny and analysis. Looking back once more at Morse wandering down Boulevard du Temple through photography, it is evident that he primarily conceived of it, with some foresight, as macro/micro presentations of information, navigable layers that have increased exponentially in depth and complexity since he proclaimed "you cannot imagine how exquisite is the fine detail portrayed."

LAYER 2

Digital image processing had its tentative start on the National Bureau of Standards Electronic Automatic Computer (SEAC) in the mid-1950s. (8) Up to that time computers had mainly been devoted to numerical and algebraic data processing, but the projected importance of a device that would be able to offer image pattern recognition, essentially of characters, led to the first scanner being connected in 1957. Quite interestingly, decisions built into that inaugural hardware, such as the choice of making sampling-units (or pixels) square in a tessellating mask, (9) survive in engineering practices today. Algorithms were written to both transform the image with edge-enhancing filters and process it for analysis; one example of the latter was the ability to measure objects within the sample. To see what the computer and scanner "saw," several of these binary scans, entered at various thresholds, were superimposed to simulate gray levels and displayed on a cathode-ray oscilloscope.
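
As a rough illustration of that superposition of thresholded scans, the following sketch (in Python with NumPy, tools the SEAC engineers obviously did not have) forces a synthetic tonal ramp to black or white at several thresholds and stacks the results. The 176 x 176 grid echoes the dimensions reported for the 1957 scanner (see note 9), but the thresholds and the image itself are invented for illustration.

```python
import numpy as np

def binary_scan(image, threshold):
    """One SEAC-style pass: every sample is forced to black (0) or white (1)."""
    return (image >= threshold).astype(np.uint8)

def superimpose_scans(image, thresholds):
    """Stack several binary scans to approximate gray levels,
    much as the superimposed scans were displayed on the oscilloscope."""
    return sum(binary_scan(image, t) for t in thresholds)

# A synthetic 176 x 176 "photograph" with a smooth tonal ramp
# (176 x 176 matches the grid reported for the 1957 scanner).
ramp = np.tile(np.linspace(0.0, 1.0, 176), (176, 1))

# Four arbitrary thresholds yield five apparent gray levels (0-4).
gray_approximation = superimpose_scans(ramp, thresholds=[0.2, 0.4, 0.6, 0.8])
print(gray_approximation.min(), gray_approximation.max())  # 0 4
```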

Although the results were very crude due to the use of binary representation, the basic process initiated with SEAC does not differ significantly from how digital imaging is performed with the latest equipment today. To capture an image the engineers invented a method where areas of, let's say, a complex full-tonal-range photograph were reduced to one of two states, either black or white (if the threshold setting is ignored), and mapped in a grid. The engineers essentially envisaged a process where the system determined, or previsualized, the outcome based on the set hardware parameters of value and position. These are still, of course, the defining coordinates of a digital image; it is an array of integers. But anyone familiar with photography, and especially the large-format variety, will perhaps recognize immediate echoes of another process, another system, invented many years prior, in the 1940s, by Yosemite legend and modernist photographer extraordinaire, Ansel Adams. Adams called his invention the Zone System. (10) Anchored in the principles of densitometry (used here in relation to the optical density of photographic negatives and usually referred to in earlier literature as sensitometry), the Zone System is a method that allows the photographer to coordinate exposure readings with exposure and development controls based on a pre-visualization of the final photographic print.

The first step in understanding and utilizing the Zone System comes with a division of the continuous, analog grayscale of a photographic print into ten discrete units, or zones. To maintain a separation of the zones from other measurements, such as exposure readings, Adams gave them Roman numerals, capturing the entire range of tones from the deepest black, where all the silver in the paper has been exposed, to the brightest white, rendering nothing but the paper base, on a scale of 0-X. In the field a Zone System photographer would set up her equipment before the chosen scene and perform a series of meter readings. She would then envisage what the desired final print should look like and expose the negative accordingly. Normally she would make her exposure for the deepest shadow area with detail, which would fall on Zone III in Adams's system, and then develop the negative with contraction or expansion of the highlight values, essentially controlling contrast through changes in development time. The creative result, as Adams himself noted, had little to do with "reality": "Many consider my photographs to be in the 'realistic' category. Actually, what reality they have is in their optical-image accuracy; their values are definitely 'departures from reality.'" (11) Speaking in terms already defined here by the operations at SEAC, the claim could be rephrased as a manipulation of value or integer while positions in the array are being maintained.
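
The arithmetic of that division can be sketched very simply. The Python fragment below quantizes a normalized print tone (0.0 for maximum black, 1.0 for paper white) onto the 0-X scale; the linear mapping and the sample values are illustrative assumptions, not Adams's actual sensitometric procedure.

```python
# A rough numerical sketch of the division described above: the continuous
# grayscale of a print, normalized here to 0.0 (maximum black) - 1.0 (paper
# white), is quantized onto the discrete 0-X scale. The linear mapping and
# the sample readings are illustrative only.
ROMAN = ["0", "I", "II", "III", "IV", "V", "VI", "VII", "VIII", "IX", "X"]

def zone_of(tone):
    """Quantize a normalized print tone (0.0-1.0) to a zone index."""
    return min(10, int(tone * 10))

samples = {"deep shadow with detail": 0.30,
           "middle gray": 0.50,
           "bright snow with texture": 0.85}
for name, tone in samples.items():
    print(f"{name}: Zone {ROMAN[zone_of(tone)]}")
# deep shadow with detail: Zone III
# middle gray: Zone V
# bright snow with texture: Zone VIII
```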

After the widely used Zone System made an entry in 1941, it is impossible to exclusively consider photography an analog operation; every photograph conceived with the aid of this system (and there are many) was pre-visualized from a table of ten discrete values. It was composed and manufactured according to these ten units, and largely presented as the creative result of applying various zones to elements of the original scene. Pre-visualization was mediated by a system, much like the photographer meditated upon the system's application for effect. A Zone System photographer, in other words, worked methodically in much the same manner as the SEAC scanner; the "scene" was seen through a table of ten discrete zones, a mask that primarily served to allocate value (while, in the SEAC case, it logged position). Interestingly, Ansel Adams's photography, being exemplary of this vision, has been passed down to his 100-year anniversary, currently touring the country, as the most natural and timeless there is, both in terms of subject matter, which helped spawn the National Park system, and interpretive vision, expressively channeled as an inner, personal style. An early advocate of maximum optical clarity through his association with the f-64 group (f-64 being the smallest available aperture, which gives the largest depth of field), Adams pushed the lens-based component of photography to its sharpest and clearest rendering, making sure that everything from near to far in his photographs was resolved with the same crisp detail. Perceptively collapsing sharp focus into infinite depth, the "reality" component of photography, found in its optical facility (as Adams noted above), was unprecedented. Individual values were then augmented by the Zone System without risk of retracing the Pictorialist's penchant for broad painterly strokes. But consider also that there is no detail without value. Every object in a photograph is composed of contrasting changes in tone: optics simply enhance their borders, just like an aperture makes any differentiation possible. (12) Without these gradations photography is only light-sensitive materials responding to an absence or excess of light, with all the gray values, divisible to infinity, of the Zone System found in between.

Analog photography, then, is merely the enduring potential of a reality effect after the Zone System came into popular use. Linked to the resolving power of optics, it retained its representative power through the prevailing presence and proliferation of details, while the image itself dissolved into mutable tones--defining areas that were in turn conceived through a decimal system of zones. Not only did the Zone System photographer look at the world of photography as one comprised of tonal values rather than actual objects (such as a pine tree or a waterfall, representing both the shadows, Zone III, and highlights, Zone VIII, of a typical Ansel Adams photograph), but she also employed a method that is digital in scope. Pre-visualization of the final print posits photography as the imaginative manipulation of values through a matrix of integers drawn from a limited range of ten, each crucially linked to an appearance. While the overall look of photography maintains its indexical detailing, the practice of photographing crucially becomes one of programmed processing, and looking at photographs after the Zone System essentially dwells on an appreciation of masterful tonal manipulations. (13)

LAYER 3

Photoshop came of age with the computer desktop revolution. Back in the late 1980s two brothers, Thomas and John Knoll, one working on graduate studies at the University of Michigan and the other employed by Industrial Light and Magic, of George Lucas fame, in California, gradually eked out several variations of the computer software that has become Photoshop. (14) Its first commercial appearance, in version 0.87, came under the less-than-flattering title of Barneyscan XP (it was named after the scanner hardware that bundled it). A subsequent signing with Adobe in 1988 led to increased support and development before the official launch of Photoshop, for the Apple Macintosh only (Windows entered with version 2.5), in February 1990. The early code was, like all initial releases, bug-ridden. Most users settled on version 1.0.7 as the first workable, easy-to-use pixel editor aimed at a popular as well as a specialist market. Currently in its 7.0 incarnation, with many intermediate fixes and upgrades in between, Photoshop has arguably, through its overwhelming market share, become synonymous with digital image editing and processing.

Initial versions stayed true to the basic concept of pixel selection and manipulation. Most primitive Photoshop tools, contained by the Toolbox, were consequently very painterly in scope, offering such crafty utensils as brush, pencil, spray-gun, paint bucket, pattern-stamp and eraser to change the values of integers in the image array. The remaining options were those that selected, by various methods, what pixels would be affected. Given these aspirations toward the plastic arts of painting and drawing, it came as no surprise that the above Toolbox was supplemented in version 2.5 with floating docks, fittingly called Palettes, for other functions. With version 3.0 and its subsequent five upgrades, however, came a significant feature called Layers. This addition to the Photoshop inventory profoundly directed the way digital imaging has developed through subsequent processing.

The introduction of Layers (15) advertised that the image was no longer a flat surface array of pixels, each open to integer variations, but a deep merger of several data streams that were kept as separate "objects" to facilitate editing. It is also possible to toggle the visibility of each layer to review options and changes. For anyone working with Photoshop this ability to import and export data packets, coupled with the capability of isolating, individually manipulating, selectively viewing, and keeping or discarding them, is an indispensable feature of how digital images, not to mention their aesthetic, (16) have developed. The very logic of Photoshop was unmistakably screaming "collage" of a highly metamorphic variety after Layers was introduced to supplement the logic of select, copy and paste. The main shift, however, is one that moved further away from a perception of Photoshop functions and actions as direct pixel manipulations, effects borrowing largely from creative darkroom techniques and the post-processing of photographs through print-surface or silver-compound alterations. From now on digital image-making was about the algorithmic management and merger of increasingly complex data structures. Put succinctly, the primary working space for developments of digital imaging was no longer the photographic print after Photoshop 3.0, but the computer's processing and handling of information.
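
A minimal sketch of that layered structure, assuming NumPy and only a simple "normal" blend (Photoshop's actual blend modes are far more varied), might keep each data packet as an object with its own pixels, opacity and visibility flag, and composite them on demand:

```python
import numpy as np

class Layer:
    """One data packet in the stack: pixels plus editing state kept separate."""
    def __init__(self, pixels, opacity=1.0, visible=True):
        self.pixels = np.asarray(pixels, dtype=float)  # values in 0.0-1.0
        self.opacity = opacity
        self.visible = visible

def flatten(layers):
    """Composite visible layers bottom-to-top with simple 'normal' blending."""
    result = np.zeros_like(layers[0].pixels)
    for layer in layers:
        if layer.visible:
            result = layer.pixels * layer.opacity + result * (1.0 - layer.opacity)
    return result

# A background plus a half-transparent gray layer; toggling visibility
# reviews an editing option without destroying either data packet.
background = Layer(np.full((4, 4), 0.2))
overlay = Layer(np.full((4, 4), 0.8), opacity=0.5)
print(flatten([background, overlay])[0, 0])   # 0.5
overlay.visible = False
print(flatten([background, overlay])[0, 0])   # 0.2
```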

Looking back at the upgrades, this change in outlook was heavily reflected in a new company policy. After version 3.0, Adobe started work on consolidating the interfaces of their many software packages in an effort to breed a cross-disciplinary, cross-media and cross-platform approach that would aim for a similar look and tool behavior in all applications. Whether the user was, behind the interface scenes, working with a grid of pixels in Photoshop or the mathematical formulas for vectors in Illustrator, there was to be a common logic to the operations. In streamlining the interfaces, Adobe sought to implement an early and practical realization of the emerging transcoding principle. Adobe had realized that the core of their enterprise could or should no longer be conceived as specific to older media categories, such as Photoshop for photography, Premiere for video, Illustrator for painting and drawing. Instead, it must be faithful to a crucial new media separation between algorithms and data structures, the replacement of constants with variables and, most importantly, the increased modularity of these operations. (17)

Photoshop received its interface facelift with version 4.0. This update clearly signaled its familiarity with other applications: the same "intuitive" learning curve, with the same iconic tools/algorithms now set to work on different data structures (still a major selling point for Adobe). Other additions, like Adjustment Layers and Actions, further reinforced the changes. Earlier, Layers had remained true to what could still be recognized as a sandwiched-negative approach--each additional data packet retained a largely photographic variation to the calculated total of a pixel value, including blending modes. Adjustment Layers inserted the "invisible" interaction of an algorithm, most frequently, perhaps, rendered "visible" as tonal or color correction controls. Numeric coding was now layered with the same substance as colored pixels in the increasingly modular organization of Photoshop's features. And recordable Actions, scripted sequences of steps performed by the computer for repetitive tasks, further persuaded users that the work and practice of digital imaging was about computation (the aforementioned separation of algorithms and data structures obviously facilitates this automation), as well as painting by numbers in an array of perceptively expanding variables.
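
The distinction can be sketched in a few lines: an adjustment layer stores an algorithm rather than pixels and is applied only at composite time, while an action replays a recorded sequence of steps. The function names and the particular tonal curve below are hypothetical, chosen only to illustrate the separation of algorithm from data.

```python
# An adjustment "layer" stores a curve (an algorithm), not pixels; an action
# is a recorded sequence of steps replayed on demand. Illustrative only.
def adjustment_layer(curve):
    """Return a non-destructive adjustment: applied at composite time,
    the underlying values are never overwritten."""
    def apply(values):
        return [curve(v) for v in values]
    return apply

def run_action(steps, values):
    """Replay a recorded sequence of steps, as an Action automates repetitive work."""
    for step in steps:
        values = step(values)
    return values

brighten = adjustment_layer(lambda v: min(1.0, v * 1.2))  # a simple tonal curve
invert = adjustment_layer(lambda v: 1.0 - v)

pixels = [0.1, 0.5, 0.9]
print(run_action([brighten, invert], pixels))  # approximately [0.88, 0.4, 0.0]
```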

Version 5.0 brought the non-linear History palette, featuring multiple Undos organized in layers, to finally, in this particular vein of updates, replace the photograph's residual time and place with an ability to organize data packets in an unknown number of dimensions. Charting a science fiction imagination, you could now retrace your steps of visualization through History with a brand new timeline, without erasing the future that has already passed. When Photoshop started out in 1.0.7 with basic selection and pixel-altering tools, the graphic constellations it borrowed from to play with through variables were still relatively pre-existing and stable. The gradual influx of more complex computation into the array has introduced Layers that fluctuate the image in space and a History that traverses, non-linearly, user actions in time. This deepening information architecture, slowly yet steadily erasing the world in favor of data, branched into the network with its very next update. Version 5.5 of Photoshop featured the integration of ImageReady, which had up to that point been a stand-alone Adobe application mainly intended for the enhanced processing of various Web graphic formats. Photoshop had previously allowed for GIF89a export through a plug-in and JPEG compression on a sliding scale, but ImageReady offered, in terms of static images, additional and valuable controls for, to take only a couple of introductory examples, the dither of JPEGs to compress them further and better optimization of GIF color tables. Above all, ImageReady signaled and facilitated a move toward a networked screen presence rather than a print destination for the Photoshop image. A couple of ImageReady's added features, now accessed through the Photoshop Toolbox so images can be toggled between the two applications, will serve to expand on the increased layering and complexity of digital images; these are Image Maps and Image Slices.
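
The non-linear History idea amounts to a very small data structure: every edit appends a snapshot, and the cursor can revisit any earlier state without discarding the later ones. The sketch below is a deliberate simplification of what the palette actually offers (snapshots, purge limits and the history brush are all omitted), with invented state names for illustration.

```python
class History:
    """A minimal non-linear history: jumping back does not erase what came after."""
    def __init__(self, initial_state):
        self.states = [initial_state]
        self.current = 0

    def record(self, new_state):
        self.states.append(new_state)
        self.current = len(self.states) - 1

    def jump_to(self, index):
        """Revisit an earlier (or later) state; nothing is erased."""
        self.current = index
        return self.states[index]

h = History("blank canvas")
h.record("crop applied")
h.record("levels adjusted")
earlier = h.jump_to(1)       # back to "crop applied"
still_there = h.states[2]    # "levels adjusted" has not been lost
print(earlier, "|", still_there)
```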

It is important to realize that, for the first time in Photoshop's short history, we are dealing with new application utilities that are not directly inspired by traditional views of photography or darkroom techniques, nor the cumulative effects of computer operations per se, but HTML, the markup language or code used to display Web pages in Web browsers. Image Maps and Image Slices basically perform the same task of linking HTML-coded behaviors to areas of the image that have either been defined by coordinates, in image maps, or sliced into segments that are seamlessly rebuilt with an HTML table, in image slices. The process is simple: selection tools, comprised of either a divided grid overlay or areas that map coordinates, are linked to dialog boxes with the requisite fields of Name, URL, Target and Alt needed for the markup tags. Click Save Optimized As... and your image, sliced or mapped, can be exported with a text file containing HTML code that displays it in a browser. The primary reason for doing this is to make the image, or select parts of it, interactive, linking it to other sites in the network or to other functions, such as contextual menus or image rollovers, performed by code. A digital image imported into Photoshop and exported through ImageReady has become a navigable extension of vast information architectures. Not only is the digital image itself codified by a readable header and a string of numbers relating to pixel values, it is purely the illuminated desktop layer of untold data references, incorporating the Internet and various scripting and programming languages--invoking here a pensive reversal of Roland Barthes's infamous dictum that the photograph is "a message without a code." Clearly the message of the digital image, refined by seven major Photoshop upgrades and the integration of ImageReady, is nothing but a gradual implementation and deepening of codifications; its unstable composition of exchanges increasingly resembles the modulations of a language (thereby re-introducing Barthes and his semiotics). Through Photoshop, both the significance and signification of an image are prepared and inscribed by its codified interactions. Semantic value is gathered in programmed links and any psychological component attached to the unfolding equations of the index is perhaps supplanted and satisfied by a habitual urge to connect through hyper-linkage. Photography's historical materiality and memorial function, its profound ability to touch the past that was briefly resuscitated in post-photography's insistence on a self-reflexive "objectness," (18) has been surpassed by a cognizant data morphology.
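
The exported markup itself is ordinary HTML. A small Python sketch can show the kind of map/area table of Name, URL, Target and Alt values that an Image Map ties to coordinates on the image; the filename, coordinates and URLs below are invented for illustration, while the HTML elements themselves are standard.

```python
# Generate the sort of client-side image-map markup described above: hot
# areas of one image are tied to links by coordinates. All names, coordinates
# and URLs here are hypothetical.
def image_map_html(image_src, map_name, areas):
    tags = [f'<img src="{image_src}" usemap="#{map_name}" alt="mapped image">',
            f'<map name="{map_name}">']
    for area in areas:
        coords = ",".join(str(c) for c in area["coords"])
        tags.append(f'  <area shape="rect" coords="{coords}" '
                    f'href="{area["url"]}" target="{area["target"]}" alt="{area["alt"]}">')
    tags.append("</map>")
    return "\n".join(tags)

areas = [
    {"coords": (0, 0, 120, 80), "url": "https://example.com/hoarding",
     "target": "_blank", "alt": "advertisement hoarding"},
    {"coords": (120, 0, 240, 80), "url": "https://example.com/street",
     "target": "_self", "alt": "boulevard pavement"},
]
print(image_map_html("boulevard.jpg", "boulevard_map", areas))
```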

When the eye wanders across the image screen in tandem with the cursor, it turns to a pointing hand over hyperlinks. This avatar invitation to touch, echoing of course the seductive tactility of the index, is usually followed by a muscle reflex and a digital mouse click. But where the ensuing journey once transpired in a set time and place, it now unfolds and collapses indefinitely through information architectures that are powered by computational algorithms and only intermittently substantiated by pixel values. To look at photography through Photoshop is a complex and ambulatory vision quest with no end in sight--only the Midas touch of the encoded hand now resolves the image of a palpable origin in the ImageReady working spaces.

LAYER 4

The task of envisioning information has been construed as an escape from flatland. (19) Communication between a graphical display of any sort and its viewer is usually destined to take place in the two-dimensional flatland of a piece of paper or a screen, and this metaphor does not easily reflect the many aspects involved in meaningful interaction. To better facilitate the exchange, graphical representations with an informational purpose seek to expand the number of dimensions represented within the limiting confines of flatland, and, if desirable, increase the data density displayed per unit area. A whole science of theories and representational schemes, often used to image prohibitively large and otherwise unimaginable quantities of data, has developed around the methods whereby this may be done. The common key to understanding the many approaches is that they all aim for an increased depth, detail and pace to analytical display--these are graphics or images that are rich in meaning and demand careful and knowledgeable looking. Do not, in other words, confuse the map with the territory. Information design and display is not about analogous representation; it seeks to offer conjunctive notations of time and space (by adding dimensions) or information and world (by increasing density) with a new clarity. This intelligibility takes its reading into account prematurely through preparing what may, on one level, be described as encoded re-enactments. It is looking to reiterate hybrid narratives of words, numbers and images in a coherent vision. As Edward R. Tufte succinctly notes, "What is to be sought in designs for the display of information is the clear portrayal of complexity. Not the complication of the simple; rather the task of the designer is to give visual access to the subtle and the difficult--that is, the revelation of the complex." (20)

Time and space can take the most mundane but also the most profound forms in information display. Consider, for example, the common timetable or route map, where temporal and spatial coordinates comprise each point in the data set. The process of envisioning how numerous itineraries connect through a transportation hub is arguably daunting, since it must, often spatially, account for each element while retaining an overview of all the journeys. It is a testimony to the skill with which we envision and represent this information, along with our trained ability to interpret it, that we usually get from A to B with relative ease. But another, much earlier example will serve to unite distinct and extended practices of observation with their subsequent recording.

Over a period of years starting in 1610, Galileo Galilei trained his telescopic vision on the heavens to track the orbit of Jupiter's newly discovered moons. Gradually, as they emerged one by one from the planet's shadow to settle on a final count, the paths were noted in what has later been described as a time-series: a series of drawings, each plotting the location of the moons at a given time. Sequentially arranged in Galileo's journal, and interspersed with explanatory texts, the entries read like subtitled movie stills, projected from paragraph to paragraph and page to page, akin to a storyboard. Initially the draft drawings were transcribed into tables for the clarification of lunar cycles. Jupiter would form the stable center column of each cell, a pivotal hub, and the moons would circle as dots or identifying numbers (much later they would finally be traced as continuous lines in flowing corkscrews). But the remote and earthbound vantage point forced, like all astronomical views seen through the telescope, a two-dimensional scene with a fixed viewpoint--a celestial flatland. Orreries, mechanical models for planetary motion driven by clockwork, became at the start of the eighteenth century a way of presenting the crucial information of dimensional and temporal orbits. Although they were undoubtedly what Tufte has described as "a triumph in gear ratios," (21) and committed the information design sin of extravagant commotion for the display of simple motions, there is a sense of the desire to overcome flat and distant views in the perpendicular illustration of their surface. Deeply layered with data gathered from observation over time, the mechanical steel cogs grab every piece of information and put it into a new perspective, showing the projected interplay of planets in their proper paths. As the hand lever turns and the clockwork grinds into action, the data that was invested in calculating the ratios of each rotation and the size of each minuscule globe is brought to the surface, animating the elements with a plotted duration and position. The flat image has, through an elaborate kinetic and interactive contraption modeled after information collected from the far reaches of the universe, miraculously recovered our knowledge of its history. Unlike a filmic representation of the same scene, the observational data has here escaped the frames of the journal and the sequence of the table to be recalculated and transcribed into a real working wonder of mechanical engineering. To help us understand the spectral conundrums of flatlands, seen today in the phosphor traces of photography on our computer screens, it seems a crafty measure of information animation is needed.

In conclusion, let us first return to an item buried deep in the Layers menu of Photoshop. Flatten Image proclaims to eradicate all the working spaces gathered so far and merge all blend modes and opacities into a concluding pixel value. Flatten Image; prepare for final output. But this look at photography and Photoshop, constructed over many interweaving tiers, has sought to build another look that is actually consistent with what the Photoshop proprietary file format (PSD) itself proposes: save and archive your image, unfinished perhaps, with all its working spaces intact. A brief re-examination and recapitulation of each layer assembled here displays a certain transparency that allows us to recognize affinities in their active search for and presentation of information: Morse's inquisitive eye seeking out and modulating the instructive details; Adams's pre-visualization and processing of ten tonal digits to invoke the touch of his inner vision; photography's gradual maturity within Photoshop's increasingly cybernetic architecture; and a primitive example of a flatland-derived apparatus involved in interactively envisioning information. There is also an obstinate sense of opacity in the layers, a certain resistance and substance, that belongs to a lingering view of photography as a neutral support of information--a passive, self-explanatory receptacle of reality. As Morse initially showed us through his intelligent gaze, this dying perception was not really there at first sight. It is rather an anomaly promoted by interests seeking to breed a dumb container out of the photograph and view it without any of its informative baggage. To look at both photography and Photoshop in layers ultimately involves the constant negotiation of various modes for blending the transparency of information with the opacity of the world; i.e. what emerges as discernible shows what knowledge looks like.

Even though the Flatten Image command results in a return to flatland--compressing the Layers into the Background--it maintains that surface is depth and vice versa in oscillating patterns of interpretation. Although the contention here has been that images, and especially those of a recent digital variety, must be seen as complex information architectures, they are, as the dissolving touch of the ImageReady hyperlink misleadingly proposes, not ornamental and illusory accessories to a deeper understanding situated in code. Another privileging of language will not facilitate the visual cognition that both photography and Photoshop practice with a rapidly evolving sophistication. It is key to consider this increased visualization of knowledge, arguably a forceful part of postmodern life, a positive move that has continued a legacy initiated with the Enlightenment. (22) Within this long trajectory of looking as a way of understanding, certain views of photography, and primarily those we now lament as having passed, were arguably a blind alley--a dead end. History comes, according to Photoshop, in a manifold of realizations. Its working spaces cannot be entered and experienced passively, despite the revocations offered through Flatten Image commands and the edifying move toward linguistics installed by ImageReady. There is, however, a distinct and troubling possibility that canned hyperlinks will breed the only semantic leads, and that pixel constellations always cluster in the patently obvious, if the mounting sophistication of technology surpasses that of its users. This cannot be emphasized enough. Focusing the paradigmatic eye on the complex economy of technological and epistemological layers in the context of photography and Photoshop has been an attempt to visualize, in the twinkling twilight of its emergence, the highly intelligent information machine now coming into view.

NOTES

(1.) Most notable, of course, is William J. Mitchell, The Reconfigured Eye (Cambridge, MA: MIT Press, 1992).

(2.) See Geoffrey Batchen's lively "postmortem" in the epitaph to Burning with Desire (Cambridge, MA: MIT Press, 1997). The introduction to that section offers a string of quotes from other sources that read like obituaries for the death of photography. Also, see his revised "Ectoplasm" in Each Wild Idea: Writing, Photography, History (Cambridge, MA: MIT Press, 2001).

(3.) Photography was announced to the public in August 1839.

(4.) Samuel F. B. Morse as quoted in Michel Frizot, A New History of Photography (Köln, Germany: Könemann, 1998). There are numerous other examples, for instance: Edgar Allan Poe remarks in his 1840 essay "The Daguerreotype": "If we examine a work of ordinary art, by means of a powerful microscope, all traces of resemblance to nature will disappear--but the closest scrutiny of the photogenic drawing discloses only a more absolute truth, a more perfect identity of aspect with the thing represented. The variations of shade, and the gradations of both linear and aerial perspective, are those of truth itself in the supremeness of its perfection."

(5.) Morse did, among other things, invent the electric telegraph and the transmitter code named after him.

(6.) Nadar, of course, also pioneered the use of artificial light, mainly through his illuminated photographs from the dark catacombs of Paris.

(7.) See Paul Virilio, The Information Bomb (London: Verso, 2000), where he emphasizes the strong links between technology and the military in what he sees as a mounting information war.

(8.) For a brief history of these inaugural steps, see Russell A. Kirsch "SEAC and the Start of Image Processing at the National Bureau of Standards" in IEEE Annals of the History of Computing, Vol. 20, No. 2, 1998, pp. 7-12.

(9.) Each sampling unit, or pixel, was 0.25mm X 0.25mm square and arranged in a grid of 176 X 176 such units.

(10.) There are many books on the Zone System; it serves as the cornerstone of Ansel Adams's photography. See, for example, the second part of his trilogy for the budding photographer, The Ansel Adams Photography Series: Ansel Adams, The Negative (Boston, MA: Little, Brown and Company, 1995). Adams developed the Zone System with Fred Archer in the early 1940s.

(11.) Ansel Adams, "Introduction" in The Negative, p. IX.

(12.) A simple pinhole camera (modeled after the Camera Obscura) provides a good example; its circles of confusion are much larger due to the lack of a focusing lens, and the object is consequently rendered softer, but still legible, by its exposure on photographic film or paper.

(13.) While the Zone System is increasingly considered the obsessive amateur's claim to be professional, its most visible proponents over the years have come from those art circles interested in a "spiritual" agenda. Following Ansel Adams's book-length explications, Minor White, founder of the magazine Aperture, perfected it further in several volumes. He expanded the technical knowledge of the system by introducing calibration methods that increased the number of zones beyond ten to as many as the photographer, and the decimals on his light meter, was willing and able to accommodate. These developments can, of course, be likened to an increased amount and complexity of information--a digital equivalent would be going from 8-bit to 16-bit, for instance.

(14.) The history of Photoshop has been well documented. The program appears to have a "following," perhaps due to the Adobe Evangelists, named so by the corporation, that frequently demo and die for it. A good and comprehensive example of Photoshop history is Jeff Schewe, "The Birth of a Killer Application: 10 Years of Photoshop" in PEI (Photo Electronic Imaging), February 2000. The article also shows you all the faces behind the code.

(15.) A Photoshop competitor at the time, Live Picture, introduced Layers before Photoshop. Another product, Specular Collage, also had a similar feature/ability.

(16.) For a highly amusing and informative take on the chessboard landscapes and morphing body tropes of layered digital images, see Stephen Bull, "The Lexicon of Digital Art" in DPICT, No. 7, April/May, 2001, pp. 33-35.

(17.) This defining list of the transcoding conditions and principles comes from a much more comprehensive articulation of the concept in Lev Manovich, The Language of New Media (Cambridge, MA: MIT Press, 2001), pp. 45-48. This section refers specifically to transcoding.

(18.) For an elaboration on this theme, compare two essays in Geoffrey Batchen, Each Wild Idea: Writing, Photography, History (Cambridge, MA: MIT Press, 2001); "Post-Photography" and "Vernacular Photography."

(19.) The compelling body of work that serves as an inspiration and reference for this "Layer" is Edward R. Tufte, Envisioning Information (Cheshire, CT: Graphics Press, 1990) and The Visual Display of Quantitative Information (Cheshire, CT: Graphics Press, 1983).

(20.) The Visual Display of Quantitative Information, p. 191.

(21.) Tufte displays a great sense of humor in his presentation of potentially dry statistics; see Envisioning Information, p. 16, for his example of "Pridefully Obvious Presentation."

(22.) See Barbara Maria Stafford, Good Looking: Essays on the Virtue of Images (Cambridge, MA: MIT Press, 1996) for an itinerary of this journey. The first part of the introductory chapter, entitled "The Visualization of Knowledge from the Enlightenment to Postmodernism," is especially well suited to expand historically on ideas covered in this conclusion.

Are Flagan is currently editor of Afterimage.

COPYRIGHT 2002 Visual Studies Workshop
COPYRIGHT 2002 Gale Group
