For the last year or so, there has been a growing mainstream critique of social media. Silicon Valley entrepreneurs and investors are raising their concerns about what Facebook and other cyber gangs are doing to society. See for example the Center for Humane Technology. The recent concerns are often embedded in a discourse that “Russia” has abused Facebook to influence voting. But did they really abuse it? Or did they merely use it, as an article in WIRED recently put it?
Once upon a time the App Store rejected a Commodore 64 emulator because its BASIC interpreter could be used to program stuff. That was unacceptable at the time, but these policies later changed to allow the fierce power of C64 BASIC. It makes the point clear enough: what the iPhone and other iOS devices can do is not conditioned by their hardware alone. The possibilities of a programmable computer are there, only hidden or obscured. But there are ways to get around that.
And this is true for all kinds of hardware. Maybe today it’s even true for cars, buildings and pacemakers. There are possibilities that have not yet been discovered. We rarely have a complete understanding of what a platform is. My talk in Utrecht will focus on how the chip- and demoscenes over time have unfolded the platforms that they use. What is possible today was not possible yesterday. Even though the platforms are physically the same, our understanding – and “objective definitions” of them change. And it almost seems like the emulators will never be complete?
With a less object-oriented definition of these platforms, it’s reasonable to define the 8-bit platforms not only as the platform itself, but as an assemblage of SD-card readers, emulators and other more-or-less standard gadgets for contemporary artists and coders. The Gameboy, for example, might have been an inter-passive commodity at first, but after development kits were released, it changed. It used to be really difficult or expensive to get access to its guts, but now it’s relatively easy. So it might be time to stop framing Gameboy music – and most other chip music – as something subversive; something that goes against the “intended uses” of the platforms.
Sure, the Gameboy was probably not designed for this, in the beginning. And Facebook was probably not designed to leak data, influence elections, and make people feel like shit. But that’s not really the point. These possibilities were always there, and they always will be. But perhaps the Center for Humane Technology will push such materialist mumbo jumbo to the side, and re-convince people of the “awesomeness” of social media.
Releasing music on a cartridge that needs an old 8-bit platform to work might seem like the worst way of releasing music today. But if you think about it a bit more… A cartridge takes the best parts of the software world and the hardware world: you get a good-looking physical object, and it doesn’t have to contain only static recordings that stay the same forever and ever.
The first cartridge release I heard about was Vegavox, a NES-cartridge made by Alex Mauer in 2007 with a basic interface to select songs. The follow-up, Vegavox II (below) was more refined with custom moving graphics for each song.
This looks similar to music videos, but under the hood it’s actually quite different. A video is a recording – a stream that plays from A to B the same way every time. Vegavox II, on the other hand, is code and instructions that require a very specific platform for playback. It’s more like theater than a movie. Potentially, the user/viewer can ruin the whole thing by interrupting and destroying it.
In the 1960’s this was a politically fueled idea that became prevalent in the computer arts to come. The power of the user. Today there are of course countless apps, games and sites with playful audiovisual interaction. But there’s not a whole lot of musical apps and situations where the composer really tries to give the user power over their own composition. Ah, the neurotic narcissism of music folks, eh? ^__^
+++
In the mid-1980’s, people started to rip game music and make compilations for the user to choose songs and trigger sound effects. The teenagers in the burgeoning demoscene started to make their own music, and by 1991 the music disk was an established format with quality releases such as Bruno’s Box 3, Crystal Symphonies and His Master’s Noise and plenty of gritty hip house megamix type of things, like Tekkno Bert.
These music disks normally pretended to be recorded music, even though they weren’t. Under the hood there were notes and instruments being played live by software/hardware. You can see it in The Top Boys’ music disk above, where the notes are “played” on the keyboard. Theoretically the user could change each and every note, unlike a video where you can’t change the music at all. Music disks normally didn’t allow that, but commercial releases like the Delta Loader and To be on Top did.
While musical interaction almost seemed (and seems) a bit sinful to the genius music brain, visual interaction was (and is) more common. Back in the 1980’s there were 8-bit generative visuals like Jeff Minter’s Psychedelia (and other acid-ish stuff) that tapped into earlier things like Atari’s Video Synthesizer.
Returning to the topic of cartridges and jumping ahead to 2016, RIKI released the Famicom cartridge 8bit Music Power with music by e.g. Hally and Saitone. The user could interact with the music as well as play games, and there were visualizers for the music. It’s like a mixture of a music disk and interactive music games.
Musical user interaction is still a rather unexplored field. Perhaps the user can mute instruments (8bit music power), move back and forth through a timeline (jazz.computer, dynamic game music) or trigger sounds/visuals in a game/composer environment (Playground). One recent interesting example is Yaxu’s Spicule, where the user can change the algorithms that compose the music in realtime.
A while back, Ray Manta at DataDoor came up with the idea to make a C64-cartridge and continue this exploration. So me, 4mat and iLKke got to work (and also did this). DUBCRT is our attempt to merge ideas from these different eras. There’s some music disk vibes to it, but in a kind of abstract and 1960’s modernist way. For each track there is a visualizer that spits out PETSCII-graphics, based on the music that is played.
The interaction is not all that easy to grasp rationally, but you can change the parameters of the visuals and (in a hidden part) change which audio sequences are played for each voice. You can also superimpose audio waveforms onto them, which means that you can pretty much ruin the song completely. A big plus! Nobody’s in charge. You can hear an example in Tim Koch’s remix on the album release on Bandcamp.
All of this fits in 64 kilobytes, which means less than 8 kilobytes per song/visual. 4mat is known to only need 23 bytes to make good C64 stuff, and I tried to optimize my songs to fit as well. All of iLKke’s graphics are in PETSCII, which also helped to keep the file size down.
Here’s hoping for more absurd musical power interactions in the future! And since DUBCRT sold out in three hours, it actually seems like more people see this as the best of two worlds. He he he…
Discogs is supposed to be an open place where everybody contributes information about music releases. Theoretically, at least. In practice, decisions need to be made and that doesn’t exactly involve thousands of people… About a year ago there was a discussion about whether a NES cartridge should be listed at the site or not. No, someone said, because it’s not recorded music. The NES cartridge contains code that only plays once the right platform is there to execute it. After all, it’s not as direct as a vinyl record, which you can play with your own fingernail.
Most other music formats, however, require complex platforms to be played. CDs, in particular, need complex digital error correction to be played correctly. What’s on a CD might be better described as data, compared to the code of the NES cartridge, but still – you can store “pure audio data” on a NES cartridge as well, if you’d like. A storage medium can contain different kinds of information. A CD can contain the code of the NES cartridge. You can encode an MP3 or a JPG or a Hollywood movie onto a piano roll, as long as you have the right technology to decode it with. Didn’t the modernists teach us better than to argue about that?
People pretend like there is a definite answer to the debate about recorded music. It’s certainly a question about media technologies, but it can’t be answered in some pure technical sense. This is a cultural question because the answers depend on ideology, aesthetics, history, and so on. In Western music, there has long been a solid separation between written sheet music and performed music. It roughly corresponds to the separation between “author” and “performer”. Ideas and praxis. Art and work, even? Maybe. And then piano rolls came and disturbed the dichotomy. Then recorded music arrived and caused a terribly complicated music economy in order to make composers, labels and musicians’ unions all happy. And we’re still stuck with that mess.
Computer music has made these concepts even harder to use. What is the difference between sheet music and code? How does algorithmic music fit in here? If chipmusic is not recorded music, then who is the performer? When I was a member of a Swedish copyright society (to get money when e.g. radio/TV uses my music) I tried to discuss this. Since the radio show Syntax Error played my C64 music straight from a SID emulator, I told them that it was performed live by the C64 and not recorded music (which affected the payment). Needless to say, they were not impressed by my argument.
And neither were the discogs people. After the discussion last year, they deleted all the NES cartridges from the database and lived happily ever after.
Or did they…?
On discogs there is this category called Floppy. In the format list you can also see things like USB sticks, File, CD, miniDV, flexi disc, and so on. Problem is – these are not formats. They are storage media that can store many different formats. All in all, discogs is bound to run into some pretty difficult choices in the future…
But anyway. This floppy category. What kind of releases can we find there? Right now there are 605 floppy releases listed. Quite a lot of them have been released within the last couple of years. The Hungarian label Floppy Kicks has been very active and there seems to be plenty of noise/lo-fi/drone kind of stuff. Diskette Etikette and Floppyswop are two other floppy labels. I made a release for Floppyswop, and they were sort of connected to the micromusic world it seemed. Here we should also mention Sascha Müller’s Pharmacom records with floppy releases that sometimes had some 8-bittish things. I released stuff there too.
A psy trance compilation was released on 20 floppy disks in 2014. With 20 songs in FLAC. Now that’s pretty impressive! DUMPSTERAC1D released four acid floppies on the Moss Archive label, but Chris Moss Acid has never heard about them. Ethnic techno is a 1989 floppy release from Zambia that also includes a 4″ vinyl.
Most of these releases are legit for the discog man, because they usually contain lo-bit MP3s, interactive media, promo material, and so on. Proper music industry stuff. My releases had mod files, which are not recorded music. But they seem to have been accepted.
But wait – there’s more! To my surprise, there are plenty of demos and music disks on discogs as well. I won’t mention them here, out of respect for their discogs presence. But we can be sure that the discog man will eventually hunt them down and destroy them.
And why shouldn’t they? Discogs reflects the “recording industry” and if you’re looking for non-recorded digital music you’d be better off looking at demoscene forums, media art, games, and so on. Things like that might be listed at discogs – like Brian Eno’s Generative Music I or Tristan Perich’s 1-Bit Music – but they are merely tolerated anomalies, it seems. If you don’t like it, you could always buy diskogs.com and start ze revolution!
FACT magazine just published 14 pieces of music software that shaped modern music. It writes a history that seriously portrays computer music history as going from “bad” to “good” and from “no options” to “anything you want”. It’s quite strange, since it’s written by Xela, who did his first (?) release on the demoscene label Monotonik back in the day. Ok, well:
*initiate uncool data-rant*
1980’s computers are portrayed in the article as word processors that only a few people made some experimental sounds with (of course, USA’s computer music inventor is mentioned, as always). First of all – as much as I love text mode, computers had been using colours and vector graphics for ages. They had also generated pop music in 1956, made Christmas carols and TV music in 1958, played Bach in 1959, and in 1960 you could draw music with a light pen. And in 1968 Douglas Engelbart did that demo that sort of featured all those gimmicks we still use today. So no, it wasn’t like computer music was just a grey little blob in the 1980’s. But that’s what the article claims. But it was followed by a revolution in quality!
Over time, however, music software blossomed, and transitioned from fiddly time wasters, doomed to the forgotten directories on a Commodore Amiga cover disk, to the plethora of usable and sturdy apps we have available to use today.
“Plethora of usable and sturdy”… what? Let me count the times that Ableton Live has crashed compared to how many times Protracker has crashed. Let me count how many years your spankin’ new [DAW/VST/whatever] will be usable for, and then compare that to the sequencers and softsynths from the 1980’s. Let me count the amount of bloat that got added to music software in the 1990’s, and compare that to the ultra-fast interfaces of 1980’s trackers. Let’s look at the huge archives of MOD files and chiptunes that are freely available today. If we strip away all the normal stuff, there’s quite a fair amount of innovative or impressive work. Just like today. Made “despite of” or “because of” the software, depending on your perspective. I can only assume that these things are not important to the author, but let me say this: usability & usefulness are not exactly objective concepts.
I know the purpose of the article is not to give a thorough history lesson on computer music. It seems more like clickbait, although there are some very interesting bits in there too. But if you start at 1985 and basically only say what the software did and who used it, you’re not going to be able to say anything about “shaping modern music”. And I don’t know, the tone of that first page of text just pisses me off, actually. The author might not like people (“hipsters”?) who don’t use computers to record audio nowadays, but he does it at the expense of more or less thousands of years of music that didn’t have these “apps” that have been fashionable for, oh, 20 years?
Oh and one last thing: the article opens by saying “We’re at the stage in history where using music software isn’t so much an option as it is a necessity.” What does that even mean? Hardware and software need each other – you can’t have one without the other. And in fact, the software metaphor as we use it today leads people like Florian Cramer to say that software has existed for thousands of years in magic, music composition and poetry.
Yesterday I wrote about the new scene issue in Wider Screen, where several noteworthy scholars write on chipmusic, demoscene and warez culture. Today I return to that, to discuss the ethnographic study of authenticity in the chipscene. Chipmusic, Fakebit and the Discourse of Authenticity in the Chipscene was written by Marilou Polymeropoulou who I’ve met a few times around Europe when she’s been doing field studies for her dissertation. Her article is refreshing because it deals with technology in a non-technological way, so to say. It takes a critical look at the ideologies of chipmusic (which I also tried to do in my master’s thesis) and she doesn’t get caught up in boring discussions about what chipmusic actually is (which, uhm, I have done a lot).
Polymeropoulou divides the chipscene into three generations. The first generation is described as a demoscene-inspired striving to be an original elite, challenging the limitations of original 8-bit hardware from the 1980’s. As I understand it, this generation is everything that happened before the internet went mainstream. The second generation is internet-based and focused on mobility (read: Gameboy), learning by copying, and making more mainstream-ish chipmusic. The third generation is characterized as “chipsters” who are more interested in sounds and timbres than in methods and technologies.
The first generation of chipmusicians would be a very diverse bunch of people, activities and machines. Perhaps even more diverse than the chipscene is now. Back then there were not as many established norms to relate to. I mean, we hardly knew what computers or computer music were. The terms chipmusic or chiptune didn’t exist, and I doubt that it was relevant to talk about 8-bit music as a general concept. It was computer music, game music, SID-music, Nintendo-music, etcetera. People were using these 8-bit home computers to make music for school, for games, for art, for their garage band, for themselves, for Compunet, for bulletin boards, for the demoscene, for crack-intros, etcetera. However, looking back through the eyes of “chipscene 2014” it makes sense to zoom in on only the demoscene during this period, as it is normally considered one of the most important precursors.
In the demoscene there were many people who ripped songs to copy the samples, look at their tracker tricks, or just use the song for their own demo. Copying was common, but it wasn’t exactly elite to do it. There was certainly a romantic ideology of originality at work. But I’m not so sure about ascribing a technological purism to the demoscene of that time. Sure, people loved their machines. But most sceners eventually moved on to new platforms (see Reunanen & Silvast). So I’m not sure that this generation would be the antithesis to fakebit. In fact, when the chipmusic term first appeared around 1990 it referred to sample-based Amiga music that mimicked the timbres of the PSG soundchips and the aesthetics of game music.
So, in a sense, the Amiga/PC chip-generation of the 1990’s (when the 8-bit demoscenes were very small) was actually not so far from what is called fakebit today. And that’s obviously why this big and important momentum with tens of thousands of open source chip-modules is so often ignored in histories of chipmusic. It just doesn’t fit in. (It’s also worth noting here that many if not most 8-bit demoscene people today use emulators such as VICE or UAE to make music, and use the original hardware more like a media player.)
My theory is that the hardware fetish of the chipscene is a more recent phenomenon, established sometime in the mid-2000’s, and I think that Malcolm McLaren’s PR spree had something to do with it, regardless of the scene’s reaction. If you listen to the early releases at micromusic.net and 8bitpeoples today, you could call it fakebit if you wanted to. Just like with the Amiga chip music of the 1990’s. So it seems to me that this generation didn’t build much on what had been done in the demoscene, other than perhaps using tools developed there. Games, on the other hand, were a popular reference. So to me, the post-2000 generation of chipmusicians feels more like a rupture than a continuation of the previous generation (something like hobbyism->crackerscene->demoscene->trackerscene->netlabels).
At this time I was still a purist demoscene snob, and I thought that this new kind of bleepy music was low quality party/arty stuff. Still, I decided to gradually engage in it and I don’t regret it. But I was one of very few demosceners who did that. Because this was, in short, something very different from the previous chipmusic that was characterized by lots of techné and home consumption. Micromusic was more for the lulz and not so serious, which was quite refreshing not only compared to the demoscene but compared to electronic music in general (you know, IDM and drum n’ bass and techno = BE SERIOUS).
It’s funny, but when Polymeropoulou describes the third generation of the chipscene (the chipsters) it actually reminds me a bit of the early demoscene people, perhaps even during the 1980’s.
Chipsters compose chipmusic – and of course, fakebit – on a variety of platforms, including modern computers, applying different criteria, based on popular music aesthetics rather than materialist approaches. [..] Chipsters find creative ways combining avant-garde and subcultural elements in order to break through to mainstream audiences, a practice which is criticised by purists.
In the 1980’s they used modern computers to try to make something that sounded like the “real” music in the mainstream. They borrowed extensively from contemporaries such as Iron Maiden, Laserdance and Madonna and tried to make acid house, new beat, synth pop, etc. There was definitely some freaky stuff being made (“art”), and something like comedy shows (Budbrain) and music videos (State of the Art) and later on so called design demos (Melon Dezign) and those demos appealed to people who were not sceners. And the megamixes! Here’s one from 1990:
Okay… how did we end up here? Oh yeah — my point is, I suppose, that the demoscene is not as purist as people think, and never was. At least that’s my impression of it. But even if I disagree with the generational categorization of Polymeropoulou’s text, I consider this article an important contribution to the field of techno-subcultures. Also, I am even quoted a few times, both as a researcher and as an anonymous informant. Maybe you can guess which quotes are mine, hehe.
I’ve complained about Bruce Sterling before, and now I’m about to do it again. The reason is this chart of platform convergence by Gary Hayes that he posted on Wired. It argues that we’re moving towards one device that can play everything. But here’s the thing:
No device can play everything. That’s just common sense, right? You can digitize a VHS-tape and convert it into a format that modern media players can understand. But then it’s not a VHS-tape anymore. Everything that is special about VHS has been removed. It’s a bleak imitation, at best. Sure, the difference is less if you discuss, uhm, Real Audio or executable files. But it’s still the same principle. The juicy materiality (hard- or softwareal) has been stripped away.
Emulators are not the same thing as the original machine. They are not worse or better – they are just different. One example is the C64-emulator for iPhone that wasn’t allowed to include BASIC. Coding is not something that the iPhone should support. So the C64 became yet another boring gaming device, in iWorld. Btw, that follows the logic of the chart, that places the C64 just before … XBOX! Lol! The point is: every remediation & convergence both adds and subtracts. Things disappear. For good and bad.
Media convergence is obviously something that’s going on, in many different ways. And when I think about it – perhaps Sterling and his crew are right. There will be a machine in the future that can do everything. Yeah. I’m pretty sure there will be. Because we’ve already had that machine so many times before. The magical device that can delete the material constraints and make your dreams come true instantly and without friction. Remember virtual reality in the 1990’s? Or home computers in the 1980’s? Or … I don’t know, beamers and wheelchairs and jet packs?
Silly comparison? Maybe a little. But we have to accept that these interface fantasies are cultural constructions that were as “real” or relevant in the 80’s as they are today. In 30 years people will patronize our fantasies just like we do today.
And when you think about it… A touch screen that you can use some fingers on? No keyboard? Unprogrammable systems, automatic surveillance, distribution monopolies… I mean. Eh?
This convergence is just a bubble-bubble. It’s not some unavoidable teleological future. Seems more like a temporary phase before we move towards divergence and paint that in terms of progress and optimism. Just like we did with the 1980’s computer market, for example. Seems pretty likely to me.
Last week I made a presentation at Merz Academy called Hackers and Suckers: Understanding the 8-bit Underground. I was invited by Olia Lialina for a lecture series called Do You Believe in Users? in Stuttgart. This question should be understood in the context of a disappearing user in modern discourses on design. Computers have become normalized and invisible, and the user seems to have a similar fate. (read more in Olia’s Turing Complete User)
The talk was about 8-bit users, and the hype around 8-bit aesthetics. I talked about different 8-bit users – from those who unknowingly use 8-bit systems embedded in general tech-stuff, through stock freaks and airports, to chipmusic people and hackers. I explained how “8-bit” is both a semiotic and materialist concept, but often used as a socially constructed genre. 1950s music or 1920s textile can be called 8-bit today.
I explained what the qualities of 8-bit computing are, based on my thesis: simple systems, immediacy, control and transgression. Some examples of technical and cultural transgression followed, and then I gave the whole “8-bit-punk-appropriator-reinvent-the-obsolete” speech and then dissed that perspective completely. Finally, I tried to explain my own view of non-anthropocentric computing, man-machine creativity, media materialism, and so on. When I prepared the presentation I called this Cosmic Computing, but I changed it because my presentation was already hippie enough…
Humans cannot have a complete & perfect understanding of a computer. Following ideas from Kittler – and the fact that 30-year-old technologies still surprise us – this seems controversial for computer scientists, but not so much for artists?
Users bring forth new states, but that might be all normal for the machine. This is controversial for all y’all appropriatingz artistz, but not for Heidegger and computer scientists.
All human-machine interactions are both limited and enriched by culture, technology, politics, economy, etcetera. Meaning that “limitations” and “possibilities” are cultural concepts that change all the time.
Don’t make the machine look bad — don’t be a sucker. Make it proud! Another anti-human point, to get away from the arrogant ways that we treat technologies.
In hindsight, it was a pretty bad idea to be so anti-user in a lecture series designed to promote the user. (: And the discussion that followed mostly evolved around the concept of suckers. Some people seemed to interpret what I said as “if you are not a hacker you are a sucker”. This was unfortunate but understandable. I don’t mean that there are only two kinds of users. They are merely two extremes on a continuum.
Hackers explore the machine in artistic ways and they can be coders, musicians, designers — whatever. They are not necessarily experts but they know how to transgress the materiality/meaning of the hardware/software. They can make things that have never been done before with a particular machine, or something that wasn’t expected from it. That often requires not-so-rational methods, not always based on hard science. Just because you know “more” doesn’t make you better at transgression. There is a strong connection between user and computer. Respect, and sometimes a strong sense of attachment – even sexual? That’s probably easier to develop if you don’t plan to sell the machine when the next model comes out. (btw, this is not some kind of general-purpose definition of the term hacker, just how I used it in this presentation)
Suckers, on the other hand, don’t seem to have this connection. They buy it, use it and throw it away. Either they don’t feel any connection to the object, or they don’t want to. They act as if they are disconnected from technology, and only suck out the good parts when it suits their personal needs.
It is a disrespectful use. The machines are treated merely as instrumental tools for their own satisfaction. Suckers are consumers to the bone. Amazing technologies are thrown at them, and suckers treat them as if they don’t even exist – until something stops working. Or they go all cargo cult.
I don’t like it when I act as a sucker, but it happens all the time. I recently got an iPhone for free. I’ve had it for months without using it, because I am scared of becoming a sucker 24/7. I am definitely not in charge of my life when it comes to technology. And I like that. Hm…
The theoretical base is Friedrich Kittler, who is more interested in machines than humans. From this Botz constructs a media materialism that takes the potentials/limitations of the machine seriously. Human fantasies about subverting the machine is not primary. Demos are immanent in the machine and are only “carved out” by the sceners. They are states of the machines, and not products. There is no software, even.
Still – as a researcher of art rather than computers – Botz describes the aesthetical norms also from a social perspective, occasionally with some ideas from cultural studies. New effects typically reference “oldschool” elements to make it graspable. It’s not a virtual and limitless digital “freedom” where anything is possible, which is often implied elsewhere. You know, Skrju can make lots of fucked up noise but still fit in, while perhaps Critical Artware could use some more rotating cubes.
Unfortunately this book is only available in German. You can read a sample here. My German is not very good, so my apologies if this post contains any misinformation. Having said that, this book is the best demoscene research I’ve read. It’s quite traditional in its theory and methods, which I think is required to cover the topic thoroughly. Still, it offers plenty of surprises compared to the usual clichés about hacker aesthetics. Perhaps that’s because the theoretical perspective is down-to-earth instead of pretentiously post-whatever or ideologically biased (e.g. humans or machines).
What does a computer want to say, really? What is inside the machine? If there’s just 256 bytes of software, we might be getting closer to some sort of answer. Or is that just bullshit?
It is of course a craft that demosceners have worked on for many years. Ever since the 1990s, demoparties have had categories for intros made in, for example, 4 kilobytes. But in recent years, this has dropped well below 1 kilobyte. Now there are audiovisual “demos” that consist of less than 32 bytes. Usually it’s “coder porn”. There’s for example the 224-byte tunnel effect for PC, coded in Photoshop (!) – check the video. Also, Loonies have made some impressive audiovisual Amiga works with hot code and soft-synth electro: ikadalawampu (Amiga, 4096 bytes). On the C64, there’s music that uses almost no CPU power at all.
Other works are chaotic systems that look so good that it doesn’t have to matter that it’s just 256 bytes. Look at the video of Difúze by Rrrola (PC). It’s some kind of audiovisual (General MIDI) new age minidemo. Rndlife 2 by Terric/Meta is a text mode C64-production where the PETSCII characters are sliming around the screen like there’s no tomorrow (exe).
256 bytes is, in itself, rather useless. In a way, software doesn’t exist without hardware. Minidemos require nice hardware. If the hardware is complex enough, then 4 kilobytes can look and sound like a Hollywood movie intro. If the hardware is low-tech fresh, then 23 bytes can be a 9-minute audiovisual data catastrophe/victory. Just look at the video of 4mat’s Wallflower for C64. I wonder if he himself can explain what’s going on?
It’s also possible to do story telling in minidemos. Check out A true Story From the Life of a Lonely Cell by Skrju (256 bytes, Spectrum). Dramaturgy with two pixels. Viznut made a similar thing in 4k, that also has music to help the storytelling.
Still, my favourite minidemo is Rrrola’s 32-byte masterpiece for MS-DOS: Ameisen. Two years ago I recorded it, so I could show it at the online exhibition Minimum Data >> Maximum Content that I curated for Cimatics’ defunct Intermerz project. If you don’t like compression, the video looks pretty crappy. I really did my best to translate the data performance into recorded video, but well, a performance is usually better than a recording! 32 bytes of instructions can obviously be better than 300 megabytes of video.
So, good ol’ lft made a presentation about chipmusic (on his custom-built powerpoint chip). There are some very refreshing ideas and concepts. It’s a nice mix of engineering and musicology (just as expected), so it’s similar to my thesis, where I interviewed him, only that I had more political aspects.
First of all – he starts with a diagram of frequencies. The idea is that early cheap digital hardware could only work with low frequencies, but gradually became able to play rhythmic frequencies and then finally refined pitches and timbres. The software caught up with it around 1995 and now – as we all know – it can be quite complicated to distinguish between software and hardware. I really like how this frequency-centric perspective resonates with the sonic theories in Sonic Warfare.
He talks about compositional strategies for various limitations in a very clear way. Some things are especially worth noting. Returning to the importance of frequency, he discusses what happens when effects are played at the frequency rate of pitch/timbre. In other words – when soundchips play samples and sounds they were not intended to play (lol). It’s an important point, since a soundchip can do pretty much anything if you just play it fast enough.
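To make that concrete, here’s a small sketch (my illustration, not lft’s): if you rewrite a soundchip’s 4-bit volume register once per sample, the register effectively becomes a crude DAC, and the chip ends up playing back waveforms it was never designed for. The function name and rates are mine.

```python
import math

def render_via_volume_register(samples, levels=16):
    """Quantize a [-1, 1] waveform into 4-bit 'volume register' writes.

    Each output value is what a PSG would emit if its volume register
    were rewritten once per sample -- the register acts as a DAC."""
    out = []
    for s in samples:
        out.append(round((s + 1) / 2 * (levels - 1)))  # map to 0..15
    return out

# A 440 Hz sine, one period, "played" at an 8 kHz update rate:
rate = 8000
wave = [math.sin(2 * math.pi * 440 * n / rate) for n in range(rate // 440)]
writes = render_via_volume_register(wave)
```

The point is the rate, not the math: a melody changes the register a few times per second, while sample playback changes it thousands of times per second. Same register, entirely different sound.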
On a similar note, he mentions something I didn’t know about tempi. There’s one tempo-setting that is the same for PAL and NTSC: 150 BPM. Otherwise, the tempo is different between PAL and NTSC since it’s a multiple of the frame rate. In other words – international chipmusic is in 150 BPM!
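As a sketch of that arithmetic (assuming the common tracker convention of one pattern row every `speed` frames and four rows per beat; the function name is mine):

```python
def tracker_bpm(frame_rate_hz, speed, rows_per_beat=4):
    """BPM when the player advances one pattern row every `speed` frames."""
    rows_per_minute = frame_rate_hz * 60 / speed
    return rows_per_minute / rows_per_beat

# PAL runs at 50 frames/s, NTSC at 60. Pick speed 5 on PAL and
# speed 6 on NTSC and the two standards meet at the same tempo:
print(tracker_bpm(50, 5))  # 150.0
print(tracker_bpm(60, 6))  # 150.0
# Other combinations drift apart: speed 6 is 125 BPM on PAL
# but 150 BPM on NTSC.
```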
He also uses the term “channel sharing” to describe how musicians try to fit as much as possible into one channel: at the rhythmical rate by putting bass and snare drum on the same channel, or at the structural rate by obsessively adding just about anything whenever there’s a bit of space in the lead. He uses Hubbard’s The Last V8 as a great example.
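A schematic illustration of rhythmical-rate channel sharing (mine, not from the talk): one voice carries both the drums and the bassline, with the drums stealing the voice exactly on the beat.

```python
# One tracker channel, 16 rows, 4 rows per beat.
# '---' means the previous note keeps ringing.
channel = [
    'KICK',  '---', 'BASS-C', '---',
    'SNARE', '---', 'BASS-G', '---',
    'KICK',  '---', 'BASS-C', '---',
    'SNARE', '---', 'BASS-E', '---',
]

# The drums always land on a beat (every 4th row), so the ear
# accepts the bass notes being cut short to make room for them.
drum_rows = [i for i, cell in enumerate(channel) if cell in ('KICK', 'SNARE')]
```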
But what I liked the most was his concept of the famichord. This is a chord that is mostly found in NES music. Since the Japanese game musicians wanted to make jazz, they tried to use 4-note jazz chords, but with the lack of channels it wasn’t really possible. So they had to remove notes while still keeping the jazz flavour. They removed the 5th, so it became a maj7no5 chord. This is quite unusual, since the 5th makes the chord sound less dissonant, so in non-chip music it’s really uncommon. But on the NES it became very popular. Reminds me of Karen Collins’ idea that the tonality of the Atari 2600 influenced rave music, which uses similar tone scales.
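A small sketch of the chord arithmetic (the note-spelling helper is mine): a full major seventh chord stacks root, major third, fifth and major seventh; dropping the fifth leaves the three-voice maj7no5 — the famichord — which fits the NES’s pitched channels.

```python
# Semitone intervals above the root
MAJ7 = (0, 4, 7, 11)       # full four-note major seventh chord
FAMICHORD = (0, 4, 11)     # maj7no5: the 5th dropped to fit 3 voices

NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#',
              'G', 'G#', 'A', 'A#', 'B']

def spell(root, intervals):
    """Name the pitch classes of a chord built on `root` (0 = C)."""
    return [NOTE_NAMES[(root + i) % 12] for i in intervals]

full = spell(0, MAJ7)            # ['C', 'E', 'G', 'B']
famichord = spell(0, FAMICHORD)  # ['C', 'E', 'B'] -- three channels, jazz intact
```

The 5th is the harmonically safest note to drop: the root and the two sevenths-flavour notes (3rd and maj 7th) carry the jazz colour, which is exactly what those composers were after.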