The border between sound and image has always been blurred, if not obsolete. In fact, the two fields are in many ways inherently enmeshed. The question is simply how much one reveals or exposes this entanglement, and from which angle one approaches it artistically. Numerous artists in Austria, as well as international ones, work on precisely this cusp.
In another chapter of our series, CROSSWAYS IN CONTEMPORARY MUSIC, Patrik Lechner examines the crossroads, overlaps, influences and intersections of Visual Arts and New Music.
This synesthetic approach yields such a variety of manifestations that it seems difficult to discern a structure or to obtain an overview. This article attempts to do so briefly and with significant examples.
From visual art, which possibly produces sounds in an installation-like manner, to 3D music visualizations, “visual music”, soundtracks in films, and classic music videos, there are countless projects that approach this topic in the most diverse ways.
Any point could serve as a starting point here. Historically, the desire to pair compositions with images can be found in the works of Alexander Scriabin, Arnold Schoenberg, Wassily Kandinsky, Olivier Messiaen and many other greats of “New Music”. A figure worthy of special mention is the pioneer Mary Hallock-Greenewalt who, born in Beirut in 1871, studied in Vienna and to whom we owe the oldest surviving painted films, which she made for a music visualization machine she invented (the “Farborgel”).
Posters, Record Covers and Performances
To begin in the present, one could first consider the visual arts beyond the core of music visualization in the narrower sense. Here, of course, one finds a long history of graphic art, which has always played a major role in the mediation of music on posters and record covers. At the interface between composition and visual art one finds, for example, Christopher Sturmer, who performs pourings on stage with the band Fuckhead, Andreas Trobollowitsch, whose sculptures remind us of the related art of instrument making, or the works of Maja Osojnik, Kathrin Stumreich, Karl Salzman, Paul Gründorfer, Manuel Knapp, Klaus Filip, and noid.
When images become sound – and vice versa
Closely related to visualization is the field of graphic notation. Historically, Iannis Xenakis should be mentioned here, who also gave his name to a current software package: “Iannix”.
Graphic notation builds a bridge to music visualization in the literal sense, which could be called “didactic”. Here the attempt is to make pitches, durations, volumes and possibly harmonic structures visible through detailed pre-analysis.
Stephen Malinowski, ‘Animated Score’
A project worth mentioning that is closer to graphic notation than didactic visualization is SYN-Phon:
What is fascinating about SYN-Phon is the balance between the rigor that is striking at first and its later breaking in favor of artistic expression and a richer arc of tension. We see a red line suggesting that time is passing, that we are standing at an exact moment, and that the ensemble is setting the moment of the red line to music. The decision to let us look further into the future than into the past (the red line sits to the left of center) follows this logic and is certainly not only due to simpler performance practice: the audience thus has time to reflect on how the approaching shapes may sound. When the lines and shapes seem to travel back in time in loops, our imagination is challenged, and all accuracy and one-to-one transferability is called into question.
Candaş Şişman is the video artist behind “FLUX”, a work influential for the scene of real-time music visualization, which in turn explicitly refers to visual art in the person of İlhan Koman.
A technique of composition more or less based in Austria, which aims at a visually appealing result, is the “Oscilloscope Music” developed by Jerobeam Fenderson and Hansi Raber. Here, audio signals are deliberately generated to produce visually interesting shapes when displayed on an oscilloscope. Thus, almost no translation, analysis or interpretation precedes the visualization as an intermediate step.
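The principle behind this technique can be sketched in a few lines: in X-Y mode, an oscilloscope plots the left channel against the right, so a sine and a cosine at the same frequency trace a circle. The following Python sketch (using numpy; the frequency and duration are arbitrary example values, not taken from Fenderson and Raber’s work) generates such a signal:

```python
import numpy as np

def oscilloscope_circle(freq=440.0, duration=1.0, sr=48000):
    """Generate a stereo signal that draws a circle in X-Y mode.

    The left channel drives the oscilloscope's X axis, the right channel
    the Y axis; a cosine and a sine at the same frequency trace a circle.
    """
    t = np.arange(int(sr * duration)) / sr
    left = np.cos(2 * np.pi * freq * t)   # X deflection
    right = np.sin(2 * np.pi * freq * t)  # Y deflection
    return np.stack([left, right], axis=1)

sig = oscilloscope_circle()
# Every sample lies on the unit circle: x^2 + y^2 = 1.
radii = np.hypot(sig[:, 0], sig[:, 1])
```

More complex shapes are drawn the same way, by composing waveforms whose instantaneous left/right values trace the desired outline.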
A composition that used this technique while working with acoustic instruments was presented by Salzburg-based Matthias Leboucher in the course of the project “Sounding Visions”.
Although seemingly similar at first glance, the equally spectacular performances by Lukas König with visualizations by Bernhard Rasinger differ in that Rasinger intervenes in the visualization performatively: the visual material is not shaped by the raw audio signals alone, but an interpretation is added.
Before going into more detail about music visualization in the literal sense, it should also be mentioned that the inverse process – sonification – plays an interesting, if not so widespread, role. One work in this field, shown at Ars Electronica 2017, is the animated film “geophone” by Greek VFX artist Georgios Cherouvim. Here we find an attempt to synthesize sounds directly from point positions in three-dimensional space. The intention lies in the very direct translation and the transparency of the method: it is shown exactly how the sounds are created – theoretically, the information on the audio level would probably suffice to roughly reconstruct the objects seen. Perhaps the appeal of the work lies in its directness, rawness and simplicity, held in tension with a tidy, elegant, clean and clear visual representation; certainly also in the selection of highly complex geometry (e.g. busts) in contrast with seemingly random sonic thunder, whose origin nevertheless seems clearly defined and tangible:
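The general idea of deriving sound directly from point positions can be illustrated with a deliberately naive mapping. The following Python sketch is a hypothetical illustration, not the actual method of “geophone”: each point’s x coordinate chooses a pitch, y an amplitude, and z a stereo position.

```python
import numpy as np

def sonify_points(points, duration=0.5, sr=48000):
    """Map 3D points directly to sound.

    Hypothetical, generic mapping (x -> pitch, y -> amplitude,
    z -> stereo pan); the ranges below are arbitrary example choices.
    """
    t = np.arange(int(sr * duration)) / sr
    out = np.zeros((len(t), 2))
    for x, y, z in points:
        freq = 220.0 * 2 ** (2 * x)             # x spans roughly two octaves
        amp = np.clip(y, 0.0, 1.0)              # y scales loudness
        pan = (np.clip(z, -1.0, 1.0) + 1) / 2   # 0 = hard left, 1 = hard right
        tone = amp * np.sin(2 * np.pi * freq * t)
        out[:, 0] += (1 - pan) * tone
        out[:, 1] += pan * tone
    return out / max(len(points), 1)            # normalize by point count
```

Because the mapping is injective in pitch, amplitude and pan, the audio alone carries enough information to roughly recover the point cloud – the property the article attributes to the film.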
In the course of the already mentioned project “Sounding Visions”, a freer form of sonification also took place. One part of the project consisted of the “visuals” of the very active Austrian visualist Conny Zenk. This work was seen as a basis for composition, as a kind of graphic notation, and was made into sound in a reversal of the usual process of musical visualization by the composer Hannes Kerschbaumer.
Now that it has become obvious that numerous forms exist for intertwining music and image, or for combining them into a form of aesthetic value, classic music visualization will be examined in more detail.
Protagonists in this field make use of different aesthetics, whereby the prevalence of a certain formalism is particularly striking, as well as references to “glitch”, i.e. the play with various technical artifacts.
An interesting work that brings us closer to these trends is “Aphàiresis” by the Italian artist Gianluca Iadema, who was to present his work “Aritmie” in cooperation with Davide Santini at Wien Modern in 2020 (which was prevented by COVID).
In “Aphàiresis”, various aesthetic influences that seem specific and significant for our time can be identified. On the one hand, video material (e.g. Ingmar Bergman’s “Persona”) is sampled and rearranged. On the other hand, artifacts of older equipment are used aesthetically to recontextualize and to elicit an added value from the materiality of the dispositifs (cf. McLuhan etc.).
Furthermore, the work contains formalistic material such as simple lines in black and white, combined with sounds created by “databending”. All these aspects might remind us of more or less acoustic works from the label and platform Raster-Noton, in particular perhaps the influential works of Ryōji Ikeda, who presented his exhibition “micro | makro” in 2018 in the course of the Wiener Festwochen at the Museumsquartier Wien.
A work that could also be seen as anchored in this aesthetic, although the focus here seems to be more on the sonification and visualization of raw data, is “MESH ANALYSIS” (2013) by Thomas Wagensommerer:
In general, the influence of the Bauhaus movement remains a defining factor of music visualization in Central Europe, as clearly emerges from the aesthetics of many visualizations. This is probably also due to the necessity of demonstrating a certain level of craftsmanship in producing contemporary music visualizations, and could be illustrated by a lecture for visualists that demonstrates this proximity in a technical sense. Also influential for the aesthetics and working methods of our time is the book “Generative Design” (Hartmut Bohnacker et al., 2009), in which the widespread approaches of generative art are examined in practice.
Analog versus digital?
To be a little more specific about common ways of working, one can first distinguish between analog and digital ways of working. “Analog” means in this case that although signals may be generated digitally, they are ultimately reproduced on old CRT screens, for example. This reproduction makes it possible to work with analog video signals – which in turn allow a wide variety of intervention options. Local examples of this would be the unfortunately scarcely documented analog visuals of Michael Perl or the audiovisual performances of the well-known artist Billy Roisz.
By far the more common working method is, of course, the digital generation of visualizations. It should be stated right away that, especially in the more commercial sector, the predominant technique is to present pre-rendered or previously recorded content, or even to fall back on ready-made “visual packs”. This sampling approach allows great eclecticism and, due to its beginner-friendliness, is probably the most common technique for quickly presenting a wide variety of material in clubs.
An example of a creative and effective application of this sampling-based technique is the work “GIF Frenzy” by Christof Ressi, which was performed by Studio Dan as well as by the Vienna-based Black Page Orchestra, which often works with visualizations. The work fascinates due to a certain humor that occasionally seems difficult to find in “serious” music, compared to the entertainment sector:
Interview: “IT’S A HELL OF A JOB” – CHRISTOF RESSI
Of course, there are also examples of very well produced commercial works, some of which are backed by large teams. An international example of this is Amon Tobin’s “isam” tour, which caused a sensation due to the extensive use of 3D mapping technology at the time:
Austrian-based works, some of which are a bit more commercial, include those by the young VFX artist Jascha Suess or the large-scale light installations by Neon Golden/Stefan Kainbacher.
Another international example of works that were executed in large teams and succeeded in the commercial sector, but still retained pioneering character, are the works of Daito Manabe, which often move in the fields of biotechnology, dance and robotics:
As already seen in some of the examples given, the digital techniques of visualization enable synchronicity, perhaps even suggest it. This brings us to what might be called the ultimate question of visualization: “Is synchronicity desired?” Is it not rather more interesting to create a symbiosis, a counterpoint? Why let music artificially spill over to another level, let another medium mindlessly follow, as an afterthought and spectacle? This seems to be a formative factor in the discourse of studied composers, which, along with the rootedness in the club context, perhaps offers one of the explanations why visualizations still seem to be rare in the context of “serious” contemporary music. A wonderful historical example of a different approach is the work “Arnulf Rainer” by Peter Kubelka:
There are numerous ways to create digital visualizations. Digital visualizations that aim at strong synchronicity can be divided into two categories:
1. manually controlled synchronicity, in the sense of a producer matching the visual events to the musical by hand and by ear.
2. data visualization, where essentially symbolic data (e.g. MIDI) or audio analysis forms the data basis.
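A minimal sketch of the second, data-driven category: a hypothetical mapping from symbolic (MIDI-like) note data to drawing parameters, with the pitch class choosing a hue and the velocity a size. The mapping itself is an arbitrary illustration, not taken from any of the works mentioned.

```python
def note_to_visual(note, velocity):
    """Map a MIDI-like note event to drawing parameters.

    Hypothetical mapping: pitch class -> hue on the color wheel,
    velocity (0-127) -> shape size. Purely illustrative.
    """
    hue = (note % 12) / 12.0            # 12 pitch classes around the wheel
    size = 10 + 90 * (velocity / 127)   # louder playing -> bigger shape
    return hue, size
```

In practice, such symbolic mappings are often combined with audio analysis, as the hybrid examples below show.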
Of course, there are many hybrids here. A very well-known example of a music video that achieves a very high degree of synchronicity without extensive audio analysis or data extraction is the music video “Gantz Graf” by Alex Rutterford:
For performative settings and experimental applications, it is sometimes necessary to analyze the musical material in real time, for instance because improvisational or stochastic elements can be found in the music while synchronicity should nevertheless be preserved.
There is a group of software packages made exactly for these purposes, allowing their users to manipulate and generate content in real time. While the actual pre-production of a “movie” remains a possibility, true synchronicity in a live context can only be achieved if the performed music can be fixed as well (e.g. via a click track in the musicians’ ears), which again is a major limitation in most cases.
The history of music visualization in this narrow, analysis-based, digital sense is as rich as it is young, ranging from everyday visualization engines in music players (such as Winamp or iTunes), to offline-rendered, MIDI-based digital orchestras (e.g. the highly influential and peculiarly hypnotic, if not ideally aged, Animusic series), to data visualization for structural or harmonic analysis, to contemporary visual interpretations of classical orchestral works.
The extraction of signals and its limitations
The problem with the whole idea is, after all, this: How can perceptually informed parameters be extracted from a musical performance? That is, how can a signal be extracted from music in real time that relates to our perception and carries semantic content such as “loudness”, “shrillness”, “warmth”, “sweetness” or the like? On the one hand, MIDI can be used if the performers generate it in real time, but this is typically only the case with acousmatic or electronic music. There have long been efforts to build interfaces into instruments that allow a computer to read the corresponding pitches, but a large space of articulation potentially remains uncaptured, so this approach has its limitations. A current project in this direction is the start-up ‘Digitaize’, founded by the two Austria-based musicians/composers Rafał Zalech and Alessandro Baticci, which reads MIDI data from violins and wind instruments in real time (for didactic purposes, to control synthesizers or effects, or even to visualize performances).
If a visualist is confronted with an ensemble that does not have such a technique, but where synchronicity is required, by far the most common way is the so-called ‘audio feature extraction’. In the simplest case, the “loudness” can simply be measured (approximately, e.g. via ‘RMS’) or an attempt can be made to analyze the loudness of different frequency bands by means of digital filters, in order to visualize, for example, the bass content of the performance separately from the treble. If several microphones are used to have each performer available as a separate signal, one can try to go further into detail. In most cases, these approaches already provide a sufficient analysis for visualization.
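Such basic feature extraction can be sketched in a few lines of Python (using numpy). The 200 Hz split frequency and the Hann window are arbitrary example choices; real setups tune these per piece, per instrument and per room:

```python
import numpy as np

def extract_features(block, sr=48000, split_hz=200.0):
    """Compute per-block features typical for audio-reactive visuals.

    Returns the overall RMS ("loudness") plus a bass/treble energy
    split at `split_hz` via an FFT; the split frequency is an
    arbitrary example value.
    """
    rms = np.sqrt(np.mean(block ** 2))
    windowed = block * np.hanning(len(block))        # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(block), 1 / sr)
    bass = np.sum(spectrum[freqs < split_hz] ** 2)   # energy below the split
    treble = np.sum(spectrum[freqs >= split_hz] ** 2)
    return rms, bass, treble
```

Driving a visual parameter (brightness, size, particle count) from these three numbers, block by block, is the core of most audio-reactive setups; per-performer microphones simply mean running this analysis on several signals in parallel.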
A combination of MIDI and audio analysis can be found, for example, in the Amon Tobin video mentioned above, but it is so common that it is difficult to give a particularly striking example here. Numerous examples of various forms of analysis for mostly experimental electronic music can be found on the YouTube channel audioreact-lab. An example of a slightly more complex spectral audio analysis would be the collaboration below between the very successful visualist and VFX artist Andrew Quinn (e.g. responsible for VFX on the movie “The Matrix”) and composer Nikolay Popov:
What can be observed in this work is real-time rendered audio-reactive 3D content, some of which takes on graphic aesthetics, and some of which depicts spatial structures in a more or less photorealistic manner. A roughly similar technique – audio reactive real-time 3D content – is pursued by many visualists, such as the visualist Marian Essl under his pseudonym Monocolor.
Obviously, spatiality is also negotiated more strongly in the above-mentioned work, which is made possible by the dispositif, a so-called “full dome”. Similar possibilities are offered by the “Deep Space” located in the Ars Electronica Center. In this mode of presentation, works are projected onto a screen, onto a dome, onto several screens, perhaps onto semi-permeable gauzes as in some works of Conny Zenk, or possibly a house is illuminated with 3D mapping – a variant hardly discussed here so far. These different dispositifs offer specific possibilities such as a stronger spatial impression or almost holographic experiences. The Austrian artist group NO1[/ˈnuːn/] (to which the author of this text admittedly belongs) has developed a specific movable dispositif on which a kind of holographic effect is achieved by means of 3D mapping and rapid rotation of the illuminated surface.
All of the digitally generated examples we have now looked at are either two-dimensional in nature or were generated using some sort of 3D render engine. Currently, however, there is a tenderly blossoming plant, or perhaps rather a proliferating rhizome, to which the future probably belongs: artificial intelligence. There are international examples of this, such as the video “T69 Collapse” by artist Richard D. James AKA Aphex Twin, and the research field in the area of motion transfer, style transfer and GANs (see below) is enormously active.
This, in turn, naturally opens up more and more opportunities for artists. Austrian artists who explore these new potentials are, for example, the author of this text or Rainer Kohlberger:
In international comparison, the visualization scene is probably more pronounced in countries like Canada, Japan or Russia. But the local scene also benefits from activity in Eastern Europe, through cooperations such as FLUCA, the Austrian Cultural Pavilion in Plovdiv, Bulgaria, where for example the visualist Petko Tanchev is involved, as well as through influxes from those regions, such as the Vienna-based, versatile media artist Antoni Rayzhekov.
To bring what has been said together, here is another example by Salzburg-based Austrian composer Marco Döttlinger, who presented a work on November 12, 2022 as part of Wien Modern that makes use of AI-generated composition, audio analysis and a kind of graphic notation. Also to be experienced that evening were compositions, video works, performances and visual art by Peter Jakober, Alexander Martinz, Thomas Hörl and Peter Kozek.
Patrik Lechner, born in Vienna in 1986, has worked since the 2000s in the fields of experimental music and real-time video art. He has published in the fields of digital audio effects, multimedia programming in general, audio analysis, AI, and Industry 4.0, and holds lectureships at the University of Applied Arts, the University of Music and Performing Arts Vienna, FH Salzburg and FH St. Pölten.
Past audio/visual performances include:
Austria (Musik Protokoll, Impuls Tanz Festival and others), Belgium (BAM Festival), Italy, Bulgaria, Germany (ZKM), Canada, Dubai, Mexico (MUTEK), Montreal (MUTEK), Shanghai 2010 (Expo 2010). Lechner received an honorable mention at PRIX ars electronica 2019/category Sound Art.
For all CROSSWAYS IN CONTEMPORARY MUSIC articles and interviews, go here.
Translated from the German original by Itta Francesca Ivellio-Vellin.