By Michael Grasso / January 8, 2018
Excerpts from video synthesizer artist Stephen Beck’s 1973 Illuminated Music 2 & 3.
Like many of the artists in this piece, Beck’s work found a home on public television.
In our contemporary media landscape, where consumers are surrounded by full-to-bursting computer-generated imagery, the wondrous has become drearily everyday. The thousands of hours of artists’ labor and computer programming time involved in creating visual effects often vanish in the face of overwrought spectacle. But in the early days of video manipulation, a small group of researchers, artists, and technical pioneers tested the boundaries of brute signal manipulation in the new medium of television. The result of all this invention, innovation, and play—the development and perfection of video synthesizers—eventually became an essential and unavoidable part of the larger corporate media landscape.
Like its close temporal and conceptual counterpart, the audio synthesizer, the video synthesizer was created iteratively by academics, artists, and tinkerers, then eagerly snatched up by the world’s biggest media producers once prototypes had proved their power and versatility. The first true audio synthesizers, developed in the 1960s, used the power and reduced size of the electronic transistor to alter and transform electronic signals into completely artificial sounds. Further refinements allowed for a more user-friendly and versatile experience for musical composers and artists. This evolution, from massive banks of audio oscillators to a portable personal keyboard containing dozens of instrument sounds, occurred over the course of a little more than a generation. In a similar fashion, the first video synthesizer experiments exploited the quirks of the physics of television technology—the cathode ray tube, the electron scanning gun, and the electrons’ vulnerability to interference by magnets—to create entirely new and unexpected visual effects.
At the center of these early experiments was a musician and artist named Nam June Paik. Born in Korea between the two World Wars, he was displaced by the Korean War. He settled in Tokyo, where he formally studied the piano but also became involved with avant-garde art: specifically, the post-Dadaist Fluxus movement. In the early ’60s, Paik studied in Munich among adherents of the hands-on musique concrète school and electronic music pioneers like Karlheinz Stockhausen and John Cage. Inspired by these musicians’ experiments with the new technology of audio synthesizers, and aided by working with electronic engineers in Japan, Paik soon changed his primary mode of artistic expression from music to video.
Video art in the 1960s was a rare and expensive proposition; the equipment necessary to produce and broadcast video was almost entirely in the hands of television production companies and networks. The first consumer-grade videotape cameras were just beginning to appear on the market in the mid-’60s, and as soon as Paik got his hands on one, he began creating art with it. The ability to bring video footage to an audience immediately, without film processing time, was revolutionary. Paik demonstrated the video camera’s power by shooting footage of Pope Paul VI’s visit to New York, which he showed that very evening to a crowd at Greenwich Village’s Cafe au Go Go (famous for being the home of avant garde performances from Lenny Bruce to Bob Dylan to the Grateful Dead). Paik’s vision of this technology was one where power was put back into the hands of ordinary people for immediate creative use. This “guerrilla” use of video was an inspiration to later “video collectives” of the 1970s, such as the Raindance Corporation (publisher of the journal Radical Software) and the Videofreex, both in New York, as well as TVTV in the Bay Area. These collectives would leave behind a wealth of both technological advances and video art theory that would become foundational in the underground art and media scenes of the 1970s and beyond.
As Paik became known as a video artist in the mid-’60s, he never forgot his experiments in Japan with the brute-force alteration of video images. He would continue to experiment with them throughout his career. Meanwhile, parallel to Paik’s early work, a series of more formal scientific experiments was happening elsewhere, closer to the hubs of media production, that would collide with these artistic explorations to create the first true video synthesizers.
ANIMAC test reel, 1968-1969
The possibilities of manipulating video images were not limited to artists. Engineers, too, were beginning to experiment with live manipulation of the electrons on a television display. Lee Harrison III, an electronics engineer at television manufacturer Philco, put together a video animation processor called ANIMAC. ANIMAC evolved out of experiments with creating line segments on a television through electron manipulation, which Harrison joined up to form “skeletons”: crude stick figures (the original in-house name for the ANIMAC was “the Bone Generator”). Harrison was even able to use a human performer to drive the ANIMAC’s animations, thereby creating a very early form of analog motion capture. But the ANIMAC differed from later computer-aided animation in one important way: the video skeletons were generated live and spontaneously, using video signal manipulation.
As professional engineers began to explore the possibilities of video manipulation, academia remained not too far behind. In 1969, a professor of video and computer art at Binghamton University named Ralph Hocking, inspired by what Nam June Paik had been doing in New York, decided to put together an academic program for aspiring video artists and tinkerers. In 1971 he founded the Experimental Television Center (ETC) off-campus in Binghamton. There, aspiring artists met with technological and artistic pioneers like Paik, learning how to use video equipment and the principles of video production. The ETC was funded by a series of public grants, and from its beginning, ETC was tied deeply to the then-burgeoning public television scene. Early experiments from ETC students were broadcast on WNET in New York and WGBH in Boston, and both of these public television stations (along with the ETC) would receive the honor of landing one of the first dedicated video synthesizers, the Paik-Abe synthesizer.
Paik had worked with his old friend from Tokyo, Shuya Abe, to construct a more stable platform to continue their original set of video-warping experiments back in Japan. The Paik-Abe was built by the pair in 1969 during a Rockefeller Fellowship residency at WGBH. The synthesizer allowed for the overlaying of video images from up to seven camera feeds with abstract color interference patterns. The synth was definitely an odd duck; one WGBH employee described it as “a collection of the cheapest electronics around, the bare minimum. It was a miracle that it even made an image.” Paik himself called it “a sloppy machine, like me.” And its video effects were, by their nature, ephemeral. A particular interference pattern would shift and dissolve before the user’s eyes; as with early audio synthesizers, tinkering with the patterns created unique, beautiful images that lasted only moments and were impossible to reproduce later. The Paik-Abe synthesizer might not have been equipped to create repeatable art, but it expanded the possibilities of what could be done in the field of video art. The Paik-Abe transformed a television from a receiver of centrally-broadcast information to a palette upon which an individual artist could improvise.
These largely theoretical and artistic experiments would soon meet market demand from America’s biggest producers of video content: the three television networks and the countless local television stations scattered across America. The ability to create graphics on-demand was of immense importance to the television industry. Up until the 1970s, on-screen graphics were a crude, handmade affair: for subtitles or live sports score updates, video producers would need to painstakingly hand-assemble titles, then project them onto the broadcast either by combining video feeds of scrolling white text on black felt or by using chroma key compositing. Producing titles on the fly started with the use of Vidifont in 1968, produced by CBS Laboratories, the pure research arm of the CBS television network. The possibilities of electronic captioning were already being pioneered in association with Gallaudet College for the Deaf in Washington, D.C.; so-called “open” captioning was introduced on PBS in 1972, with closed captioning, which required a special decoder, coming a year later. But the name that would become synonymous in the television industry with on-screen text and effects would be that of a Long Island company named after the centaur tutor of the Greek heroes: Chyron.
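The chroma key idea mentioned above survives essentially unchanged in digital form: pixels close to a designated matte color are replaced with the corresponding pixels of a second feed. The following is a minimal illustrative sketch in modern Python/NumPy (the function name, tolerance value, and toy frames are hypothetical, not period hardware or any studio’s actual method):

```python
import numpy as np

def chroma_key(foreground, background, key_color, tolerance=60):
    """Composite foreground over background wherever a foreground
    pixel is close to the key color (e.g. a blue or green matte)."""
    fg = foreground.astype(int)
    # Per-pixel color distance from the key color
    dist = np.linalg.norm(fg - np.array(key_color), axis=-1)
    mask = dist < tolerance            # True where the matte shows through
    out = foreground.copy()
    out[mask] = background[mask]       # replace matte with the background feed
    return out

# Toy 2x2 "frames": white title text on a blue matte, keyed over gray
blue = (0, 0, 255)
fg = np.array([[blue, (255, 255, 255)],
               [blue, blue]], dtype=np.uint8)
bg = np.full((2, 2, 3), 128, dtype=np.uint8)
frame = chroma_key(fg, bg, blue)
```

In the analog era the same masking was done with voltage comparators switching between two live signals; the digital version simply makes the per-pixel decision explicit.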
Chyron became the industry standard for television text in the 1970s; today, in fact, “chyron” is a generic term for any on-screen “lower third” or caption. The company’s first series of character generators released in the early ’70s were more user-friendly than any other system on the market at the time. A simple, QWERTY-style keyboard joined with tools for changing on-screen typefaces and positioning made these character generators indispensable for generating immediate on-screen information, becoming popular during live events such as election broadcasts and sporting events. Along with instant replay, the Chyron became one of the primary tools in the arsenal of ABC Sports’ pioneering producer Roone Arledge.
But it wasn’t just on-screen text that television producers desired. There was a real need for animated graphics that could be produced without costly cel-animation techniques. The early work that Lee Harrison III had done with ANIMAC had continued into the 1970s. After leaving Philco, he’d gone into business for himself and created a new line of computer animation tools for television. His signature invention, the Scanimate, was responsible for a great deal of the animation seen on TV in the 1970s and ’80s. The Scanimate was a massive machine, and only eight were ever made; naturally, one lived in New York, and one lived in Los Angeles (there was also one at Harrison’s headquarters in Denver). The Scanimate worked a bit like the ANIMAC and a bit like the Paik-Abe: it took real-world input (in this case, pieces of hand-drawn art) like the ANIMAC and fed the art through multiple cameras, filters, and analog processors to create motion and color effects, like the Paik-Abe. If you’ve enjoyed a vintage station identification or animated TV commercial from this era—complete with that indefinable washed-out, glow-y aesthetic—then you’ve probably enjoyed a product of the Scanimate. When used on actual video footage, the Scanimate could produce striking and unique effects, as seen in the groundbreaking music video for Earth, Wind & Fire’s “Let’s Groove” from 1981.
Earth, Wind & Fire, “Let’s Groove,”
produced by video artist Ron Hays on the Scanimate, 1981
As the personal computer and microchip revolution swept across America in the 1980s, slowly but surely the power of these massive video synthesizers was made smaller and put in the hands of average consumers, just as the guerrilla TV collectives of the 1970s had dreamed. It was the introduction of the Commodore Amiga computer in 1985 that touched off this revolution. The Amiga was designed to be a graphics-intensive machine, even more so than the recently-introduced Macintosh. One might say that what the Mac would become to desktop publishing, the Amiga soon would become to independent video production. None other than legendary pop artist Andy Warhol was an early advocate of the Amiga’s potential, and the Amiga’s processing and graphics power allowed third-party products like the Video Toaster (introduced in 1990) to put, for the first time, the power of a television production facility on a desktop. Local cable companies and other low-tech private video producers would use Video Toasters on Amiga computers well into the 1990s to create on-screen graphics and text. By the end of the decade, video production would be done increasingly on off-the-shelf PCs and Macs, eclipsing the unique appeal and power of the Amiga.
Today, with computer-generated imagery able to create entire worlds out of thin air, some artists are consciously hearkening back to the analog video synthesizers of old. Big Pauper, a video artist who constructs his own video mixers and synthesizers, recently produced a haunting music video for the Boards of Canada remix of electronic artist Odd Nosdam’s “Sisters.” Other music video production groups, such as JJ Stratford’s Telefantasy Studios, deliberately use vintage, analog equipment to reproduce effects reminiscent of the Scanimate’s eerie lambent glow. The messiness and imprecision of ’70s- and ’80s-era analog signals create unexpected serendipities, just as Paik noticed more than a half-century ago. In an era where every visual miracle is programmed, recorded, and immediately archived, where every gluttonous CGI spectacle is shoved down the audience’s collective throat—the liberating appeal of this unique and ephemeral aesthetic is clear. The artists described above, and countless others, have inspired a new generation raised on these enchanting visuals, and eager to recapture their avant-garde, underground spirit for a new century.
Michael Grasso is a Contributing Editor and Exhibit Curator at We Are the Mutants. He is a Bostonian, a museum professional, and a podcaster. You can read his thoughts on museums and more on Twitter at @MuseumMichael.