Archive for the ‘platformstudies’ Category

Who decides what “intended uses” are?

March 18, 2018

For the last year or so, there has been a growing mainstream critique of social media. Silicon Valley entrepreneurs and investors are raising their concerns about what Facebook and other cyber gangs are doing to society. See for example the Center for Humane Technology. The recent concerns are often embedded in a discourse that “Russia” has abused Facebook to influence voting. But did they really abuse it? Or did they merely use it, as an article in WIRED recently put it?

It’s an important distinction, which has everything to do with how we talk about chip music and low-tech art. I’m doing a talk in Utrecht this summer, which has brought me back to these ideas. And they feel highly relevant now, with discussions about what social media are and how they should or should not be used.

Once upon a time the App Store rejected a Commodore 64 emulator because its BASIC interpreter could be used to program stuff. That was unacceptable at the time, but these policies later changed to allow the fierce power of C64 BASIC. It makes the point clear enough: what the iPhone and other iOS devices can do is not just conditioned by their hardware. The possibilities of a programmable computer are there, only hidden or obscured. But there are ways to get around that.

And this is true for all kinds of hardware. Maybe today it’s even true for cars, buildings and pacemakers. There are possibilities that have not yet been discovered. We rarely have a complete understanding of what a platform is. My talk in Utrecht will focus on how the chip- and demoscenes have unfolded the platforms that they use over time. What is possible today was not possible yesterday. Even though the platforms are physically the same, our understanding – and “objective definitions” – of them change. And it almost seems like the emulators will never be complete?

With a less object-oriented definition of these platforms, it’s reasonable to define the 8-bit platforms not only as the platform itself, but as an assemblage of SD-card readers, emulators and other more-or-less standard gadgets for contemporary artists and coders. The Gameboy, for example, might have been an inter-passive commodity at first, but after development kits were released, it changed. It used to be really difficult or expensive to get access to its guts, but now it’s relatively easy. So it might be time to stop framing Gameboy music – and most other chip music – as something subversive; something that goes against the “intended uses” of the platforms.

Sure, the Gameboy was probably not designed for this, in the beginning. And Facebook was probably not designed to leak data, influence elections, and make people feel like shit. But that’s not really the point. These possibilities were always there, and they always will be. But perhaps the Center for Humane Technology will push such materialist mumbo jumbo to the side, and re-convince people of the “awesomeness” of social media.


My Presentation of 8-bit Users

November 22, 2012

Last week I made a presentation at Merz Academy called Hackers and Suckers: Understanding the 8-bit Underground. I was invited by Olia Lialina for a lecture series called Do You Believe in Users? in Stuttgart. This question should be understood in the context of a disappearing user in modern discourses on design. Computers have become normalized and invisible, and the user seems to have a similar fate. (read more in Olia’s Turing Complete User)

The talk was about 8-bit users, and the hype around 8-bit aesthetics. I talked about different 8-bit users – from those who unknowingly use 8-bit systems embedded in general tech-stuff, through stock freaks and airports, to chipmusic people and hackers. I explained how “8-bit” is both a semiotic and a materialist concept, but often used as a socially constructed genre. 1950s music or 1920s textiles can be called 8-bit today.

I explained what the qualities of 8-bit computing are, as based on my thesis: simple systems, immediacy, control and transgression. Some examples of technical and cultural transgression followed, and then I gave the whole “8-bit-punk-appropriator-reinvent-the-obsolete” speech and then dissed that perspective completely. Finally, I tried to explain my own view of non-anthropocentric computing, man-machine creativity, media materialism, and so on. When I prepared the presentation I called this Cosmic Computing, but I changed it because my presentation was already hippie enough…

  • Humans cannot have a complete & perfect understanding of a computer.  Following ideas from Kittler – and the fact that 30-year-old technologies still surprise us – this seems controversial for computer scientists, but not so much for artists?
  • Users bring forth new states, but that might be all normal for the machine. This is controversial for all ya’ll appropriatingz artistz, but not for Heidegger and computer scientists.
  • All human-machine interactions are both limited and enriched by culture, technology, politics, economy, etcetera. Meaning that “limitations” and “possibilities” are cultural concepts that change all the time.
  • Don’t make the machine look bad — don’t be a sucker. Make it proud! Another anti-human point, to get away from the arrogant ways that we treat technologies.

In hindsight, it was a pretty bad idea to be so anti-user in a lecture series designed to promote the user. (: And the discussion that followed mostly revolved around the concept of suckers. Some people seemed to interpret what I said as “if you are not a hacker you are a sucker”. This was unfortunate but understandable. I don’t mean that there are only two kinds of users. They are merely two extremes on a continuum.

Hackers explore the machine in artistic ways and they can be coders, musicians, designers — whatever. They are not necessarily experts but they know how to transgress the materiality/meaning of the hardware/software. They can make things that have never been done before with a particular machine, or something that wasn’t expected from it. That often requires not-so-rational methods, which are not always based on hard science. Just because you know “more” doesn’t make you better at transgression. There is a strong connection between user and computer. Respect, and sometimes a strong sense of attachment – even sexual? That’s probably easier to develop if you don’t plan to sell it when the next model comes out. (btw, this is not some kind of general-purpose definition of the term hacker, just how I used it in this presentation)

Suckers, on the other hand, don’t seem to have this connection. They buy it, use it and throw it away. Either they don’t feel any connection to the object, or they don’t want to. They act as if they are disconnected from technology, and only suck out the good parts when it suits their personal needs.

It is a disrespectful use. The machines are treated merely as instrumental tools for their own satisfaction. Suckers are consumers to the bone. Amazing technologies are thrown at them, and suckers treat them as if they don’t even exist – until something stops working. Or they go all cargo cult.

I don’t like it when I act as a sucker, but it happens all the time. I recently got an iPhone for free. I’ve had it for months without using it, because I am scared of becoming a sucker 24/7. I am definitely not in charge of my life when it comes to technology. And I like that. Hm…


Why Videotex is Better Than the Web

June 14, 2012

Videotex was one of the precursors to the web, invented in the early 1970s. It’s a two-way communication standard that uses an ordinary television set and a modem, and it was used for commerce, leisure and art.

Viewdata is one form of videotex. In the USA it was mostly known as Viewtron, and reached some 15,000 users before it was cancelled. It was unsuccessful since “most consumers simply do not have a need nor a desire to access vast computerized data-bases of general information” (A. Michael Noll, 1985). But in France, there was apparently a need for exactly that. Minitel still had 10 million connections every month as late as 2009, only a few years before its retirement in 2012. (one reason is that the French government gave away plenty of terminals for free)

Videotex is slow and lacks graphical details. But on the other hand – it’s easy and direct. You plug it in, and you’re set to go. Wi-fi. In the comfort of your TV-couch, instead of your computer work chair. CRT-lifestyle! No annoying operating system, no maze of protocols that control your interaction.

It’s actually quite easy to get sucked into the magic of Videotex advertising. There’s something very appealing about it. No more overload! No www-addiction! Oddly enough, it was actually marketed like this already in 1983 – described as an alternative to information overload. Check out this video, for example.

My own fascination might come from growing up in Northern Europe, where videotex’s sibling teletext has always been quite popular. In fact, it is really popular. About 25% of Sweden’s total population checks out teletext on TV – every day. In Denmark it’s almost half! And it’s not just on TV. Teletext apps are among the most popular smartphone apps around here. Last year, the most popular iPad app was public service teletext. Yeah!

Scandinavia is extremely into both internet and news. So these are informed choices, or at least not choices made from a lack of options. But is teletext just something that old people are into? Or is teletext used by young people too, as an alternative to the spam-filled freedom of the web?

It’s likely an old tradition in decline. But at the same time, I can definitely see a demand for a cheap, reliable, ad-free service with Twitter-like shortness in the future too. And if you want to go a bit more luxurious with a two-way communication, videotex is your lady!

Also, it’s worth mentioning that teletext and videotex don’t have to use text graphics and a small number of colours. Take for example the amazing Telidon, developed in Canada around 1980. It is an alphageometric standard that works with changeable fonts and vector graphics instead. Telidon looks incredibly good in my eyes. It’s a shame that the UK won the standardization war, otherwise teletext might’ve been even more popular today.

Or maybe the text graphics are actually part of the winning concept. More reliable; more serious. That might be. But just look at these Telidon wonders! (and if you want more, check out

Fantastic PhD Demoscene Book

June 27, 2011

Daniel Botz has finally published his PhD dissertation on the demoscene. Chipflip’s conversation with him in 2009 revealed some of his approaches. Entitled Art, Code and Machine – the Aesthetics of the Computer Demoscene, it is an extremely well-researched study of the demoscene’s history and aesthetics.

The theoretical base is Friedrich Kittler, who is more interested in machines than humans. From this Botz constructs a media materialism that takes the potentials/limitations of the machine seriously. Human fantasies about subverting the machine are not primary. Demos are immanent in the machine and are only “carved out” by the sceners. They are states of the machines, and not products. There is no software, even.

Still – as a researcher of art rather than computers – Botz also describes the aesthetic norms from a social perspective, occasionally with some ideas from cultural studies. New effects typically reference “oldschool” elements to make them graspable. It’s not a virtual and limitless digital “freedom” where anything is possible, which is often implied elsewhere. You know, Skrju can make lots of fucked up noise and still fit in, while perhaps Critical Artware could use some more rotating cubes.

Unfortunately this book is only available in German. You can read a sample here. My German is not very good, so my apologies if this post contains any misinformation. Having said that, this book is the best demoscene research I’ve read. It’s quite traditional in its theory and methods, which I think is required to cover the topic thoroughly. Still, it offers plenty of surprises compared to the usual clichés about hacker aesthetics. Perhaps that’s because the theoretical perspective is down-to-earth instead of pretentiously post-whatever or ideologically biased (e.g. humans or machines).

I can’t wait for the translation, Daniel! :) Meanwhile, check out the great Demoscene Research site and join the (scarce) discussions in the Google group.


Famichord and Other Elements of Chipmusic

May 11, 2011

So, good ol’ lft made a presentation about chipmusic (on his custom-built powerpoint-chip). There are some very refreshing ideas and concepts. It’s a nice mix between engineering and musicology (just as expected), so it’s similar to my thesis, where I interviewed him, only that I had more political aspects.

First of all – he starts with a diagram of frequencies. The idea is that the early cheap digital hardware could only work with low frequencies, but gradually became able to play rhythmic frequencies and then finally refined pitches and timbres. The software caught up with it around 1995 and now – as we all know – it can be quite complicated to distinguish between software and hardware. I really like how this frequency-centric perspective resonates with the sonic theories in Sonic Warfare.

He talks about compositional strategies for various limitations in a very clear way. Some things are especially worth noting. Returning to the importance of frequency, he discusses what happens when effects are played at the frequency-rate of pitch/timbre. In other words – when a soundchip plays samples and sounds it was not intended to play (lol). It’s an important point, since a soundchip can do pretty much anything if you just play it fast enough.

On a similar note, he mentions something I didn’t know about tempi. There’s one tempo-setting that is the same for PAL and NTSC: 150 BPM. Otherwise, the tempo is different between PAL and NTSC since it’s a multiple of the frame rate. In other words – international chipmusic is in 150 BPM!
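The arithmetic behind this is simple enough to sketch. This is my own illustration, assuming a ProTracker-style player: one tick per video frame, a “speed” setting of ticks per row, and the common (but not universal) convention of four rows per beat:

```python
# Tracker tempo as a function of frame rate, assuming a ProTracker-style
# model: the player advances one tick per video frame, each pattern row
# lasts `speed` ticks, and 4 rows make up one beat.
def tracker_bpm(frame_rate_hz, speed, rows_per_beat=4):
    rows_per_second = frame_rate_hz / speed
    return rows_per_second * 60 / rows_per_beat

# PAL runs at 50 frames/s, NTSC at ~60 frames/s. PAL at speed 5 and
# NTSC at speed 6 both land on exactly 150 BPM.
print(tracker_bpm(50, speed=5))   # PAL
print(tracker_bpm(60, speed=6))   # NTSC
```

Any other speed setting gives different tempi on the two systems (e.g. speed 6 is 125 BPM on PAL but 150 on NTSC), which is why 150 is the one “international” tempo.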

He also uses the term “channel sharing” to describe how musicians try to get as much as possible into one channel. At the rhythmical rate by putting bass and snaredrum on the same channel, at the structural rate by obsessively adding just about anything when there’s a bit of space in the lead, for example. He uses Hubbard’s The Last V8 as a great example.

But what I liked the most was his concept of the famichord. This is a chord that is mostly found in NES-music. The Japanese game musicians wanted to make jazz, so they tried to use 4-note jazz chords, but with the lack of channels it wasn’t really possible. So they had to remove notes while still keeping the jazz flavour. They removed the 5th, so it became a maj7no5 chord. This is quite unusual outside of chip music, since the 5th makes a chord sound less dissonant. But on the NES, it became very popular. Reminds me of Karen Collins’ idea that the tonality of the Atari 2600 influenced rave music, which uses similar tone scales.
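For the curious, the voicing is easy to spell out in semitones. This is just my own illustration of the interval structure, not lft’s notation:

```python
# The "famichord" (maj7no5) in semitone intervals above the root:
# root, major third, major seventh. The fifth is dropped to save a channel.
NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F',
              'F#', 'G', 'G#', 'A', 'A#', 'B']

def famichord(root):
    """Return the three notes of a maj7no5 chord on the given root (0 = C)."""
    intervals = [0, 4, 11]          # a full maj7 would be [0, 4, 7, 11]
    return [NOTE_NAMES[(root + i) % 12] for i in intervals]

print(famichord(0))  # C maj7no5 -> ['C', 'E', 'B']
```

Three channels, and the jazz flavour survives because the third and seventh carry most of the chord’s identity.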

Good work Linus!

C64 Graphics – Data or Light?

July 21, 2010

There is a very geeky discussion about C64-graphics over at CSDb, which is strangely annoying and fascinating at the same time. It is essentially an argument about what a C64-image ‘is’, or perhaps more correctly, how it should be represented at CSDb. Is it the raw pixel data, or is it the way the image looks on an old CRT TV-screen?

From a data-materialist perspective, the image is archived most correctly as pixel data. Nobody in the thread disagrees with this. The discussion concerns the screen shot, and whether it should be modified to look like it does on a CRT-screen (by re-constructing a ‘correct’ palette and using a TV-emulation). It is a question of what is the most ‘accurate’ representation of the image.

By STE’86

STE, a commercial pixel artist from the 80s who was active in the demoscene-ish universe Compunet, wants CSDb to “let me display my work in the manner and spirit it WAS created in. and let ME be the judge of that being as how i actually did it 25 years ago and may indeed have some recollection of what it looked like”. His idea of the image is a construction of e.g. two things: memories and screens. The way he remembers the image is not necessarily what was actually on the screen. Even if it was, his CRT-screen was different from those of others. Furthermore, his PC/Mac-screen might show graphics a bit differently than yours does. Nevertheless, his point is that an archive such as CSDb should not modify the images in any way, not least because it would be a huge problem to keep updating them as the emulators improve.

The problem is that some images need some kind of filter/emulation, because they rely on the blending that PAL-artefacts create. In short, C64-graphics look different on modern ultra-sharp screens. Bogost describes the inaccuracies of emulators in terms of texture, afterimage, color bleed & noise. These can be vital aspects for pixel artists who work with CRT-screens, of course.
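To get a feel for why sharp displays change the picture, here is a deliberately crude sketch of colour bleed. A real PAL signal involves the chroma subcarrier, interlacing and much more, so treat this purely as an illustration of how adjacent pixels blend on a CRT, not as TV-emulation:

```python
# A very rough sketch of horizontal colour bleed: each pixel's colour is
# averaged with its neighbours along the scanline. Alternating pixels of
# two colours blend toward an intermediate hue, which pixel artists used
# to fake extra colours.
def bleed_scanline(pixels, weight=0.25):
    out = []
    for i, (r, g, b) in enumerate(pixels):
        left = pixels[max(i - 1, 0)]
        right = pixels[min(i + 1, len(pixels) - 1)]
        mix = lambda c, l, r: round((1 - 2 * weight) * c + weight * l + weight * r)
        out.append((mix(r, left[0], right[0]),
                    mix(g, left[1], right[1]),
                    mix(b, left[2], right[2])))
    return out

# Alternating red/blue pixels bleed toward purple-ish values:
line = [(255, 0, 0), (0, 0, 255)] * 4
print(bleed_scanline(line)[:2])
```

On a razor-sharp LCD the same data stays hard red/blue stripes, which is exactly the difference the thread is arguing about.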

By Joe

What’s funny is how the technical discussion grinds to a halt halfway through the thread, when it is asked whether we can actually tell the difference between palette issues and TV-emulation. In fact, the cause of the whole thread is revealed to have been an anti-aliasing issue in Firefox that was interpreted as a case of TV-emulation. For me, this is a little reminder not to get too stuck in technical details that, when it really comes down to it, are not something we are aware of anyway. In another way, it’s a reminder of what makes demoscene forums great!?

Platform Studies: Think Inside the Box

December 18, 2009

Earlier this year, Nick Montfort & Ian Bogost released a book called Racing the Beam – The Atari Video Computer System. It examines how the Atari VCS was produced – how the cultural and economic contexts shaped the hardware – and perhaps more importantly, how it was used by videogame programmers.

This is the first book in an MIT book series called Platform studies, which somewhat surprisingly claims to introduce a new academic field. Haven’t these sociotechnical studies been done many times before, both by scholars and other writers? There are hundreds of books about the social and the technological. Yeah, sure. But the point is that they usually focus on either technology or the social. Social scientists don’t code, and computer scientists don’t know sociocultural theory: they are two cultures. Even though that’s not really true, what is true is that Montfort & Bogost’s idea of Platform studies attends to “both sides” and “no sides” at the same time. They’re bringing social theory past the level of software, to the bare metal that feeds our data souls.

And that’s difficult. I know, because that is what I am currently doing with my thesis about chipmusic. It is, of course, crucial to use both technical and social perspectives – a perfect example of the relevance of platform studies. There is no way of understanding the personal motivations and (sub)cultural fields without studying the hardware. But of course, a soundchip is not much in itself. It is given meaning by software, people, culture and economics; it is society that continuously shapes both the materiality of and the conceptions about soundchips. The materiality has all the potential uses inside from the start, but maybe only certain sociocultural settings bring them forth.

Anyway, Montfort & Bogost recently published a paper addressing some of the critique they have been receiving, most of which seems rather predictable considering their novel approach in between ‘two cultures’. It’s an interesting read, and while you’re at it you should also read their book(s). Oh, and as a nice coincidence, Ian Bogost showed his Atari work Guru Meditation at Pixxelpoint, where e.g. HT Gold was also shown. And, well, tons of other good low-fi oriented stuff by Florian Cramer, Rosa Menkman, Vuk Cosic, Math Wrath, Tonylight, and many others!

(btw, the title of the blog post was taken from here)

SIDmon and Other Synthetic Amiga Music Software

August 23, 2009

The other day I stumbled across Metin Seven, one of the people involved in making SIDmon (the first synthetical tracker for the Amiga). I e-mailed him and received extensive answers. He published (much of) these e-mails as an interview: the origin of the chiptune phenomenon.

I was mainly mailing him to hear his take on the etymology of the word chipmusic/chiptune. Usually, it’s said that the term first arose with sample-based Soundtracker chipmusic around 1990. But according to Seven, chiptune was used (just) earlier to refer to synthetical Amiga music. It will take some more research to find out if this was a widespread practice.

Listening to songs made in SIDmon 1+2, it sounds quite different from both sample-based and synthetical chipmusic. Soundwise, it actually uses “long” samples (often ST-01/02) a lot more than I expected. The synthesized sounds are used more like instruments among others and there doesn’t seem to be much nostalgia in there. Music-wise, most of the songs are based on minor scales; they are melancholic — not like the happy chip-MOD style. Also, the amount of videogame music covers is very low.

Seven argues that this was the early days of chipmusic, but it might be possible to explain it with software as well. Synthetic Amiga software produces a sound rather similar to soundchips. Despite that, or maybe because of it, the music created with it takes the shape of melancholic space data music instead, quite different from “mainstream” chipmusic. But this is a very subjective statement. When I hear songs like Paranoimia or M.A.D (youtube-clips) I have a feeling that my brain goes back 20 years and ruins my chance to judge these songs “properly”.

By the way, Paranoimia was composed by TSM in a custom-format. From what I understand, songs that are available in custom format are not necessarily made in a custom program, but stored in a custom format for optimization purposes.

By the way #2, SIDmon 2 was not made by the makers of SIDmon 1. Seven told me that the publishers of SIDmon 1 (Turtle Byte) did not even pay them for it, and then hired Unknown/DOC (who had previously made his own version of Soundtracker) to program SIDmon 2. Seven’s team’s own sequel was called Digital Mugician, later followed by the Windows-tracker Syntrax.

Note Duration in Chipmusic Software

August 17, 2009

Due to a comment by Viznut, I’ve had a quick look into music made with the PDP-1 (circa 1960). This was a popular machine in the early hacker culture that grew out of universities such as MIT. A few of the audio hacks are documented in MIT’s HAKMEM (1972). While writing this post, it gradually turned into a form of platform studies or media-specific analysis, describing note duration in different trackers/hardware and considering the compositional consequences of it. The scope of platforms here is limited, and comments about alternatives are appreciated.

Sometime between 1960 and 1964, Peter Samson made the music software Harmony Compiler along with additional hardware, enabling 4 channels of square wave audio. Dan Smith and others used it to compose music, or “encode” as they say themselves, which was saved on paper tape (materiality matters!). Peter Samson had previously made music software for the TX-0 (1956), which you controlled with a light pen! Oh the joy. At bitsavers you can find files that seem to be related to his software. And if you want to get in on the PDP-fun you can check this video, where Peter Samson talks at around the 80-minute mark.

J.S. Bach, Organ Concerto No. 1 in G Major, 3rd movement, BWV 592, being played on a DEC PDP-1. Photographer unknown, from here

Harmony Compiler was text-based. According to Smith‘s memory, a melody could look like: 7t4 7t4 8t4 7t4 9t4 8t2. 7t4 means note 7 at duration 4, and you could replace the t with other letters to change e.g. staff (bass, tenor) and tempo. For each note you have to state its duration, just like in traditional sheet music. I remember using similar ways to make ringtones on my Nokia mobile phone only a few years ago. And in fact, this principle is still the basis for the dominant way of sequencing music today: piano roll sequencing. For each note you enter, the duration must be graphically declared with the mouse. Replacing the mouse with a light pen, this is what you did with Samson’s TX-0 software from the 50s.
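Just to show how little machinery such a notation needs, here’s a toy parser for the syntax as Smith remembers it. The exact format details are my assumptions, not documented Harmony Compiler behaviour:

```python
import re

# A toy parser for the note syntax Smith recalls: "7t4" = note 7 at
# duration 4. The letter between the numbers could apparently vary
# (staff, tempo), so this sketch just keeps it as an opaque modifier.
TOKEN = re.compile(r'(\d+)([a-z])(\d+)')

def parse_melody(text):
    """Return a list of (note, modifier_letter, duration) tuples."""
    return [(int(note), letter, int(dur))
            for note, letter, dur in TOKEN.findall(text)]

melody = parse_melody('7t4 7t4 8t4 7t4 9t4 8t2')
print(melody[0])   # (7, 't', 4)
```

The point being: every note carries its own explicit duration, exactly like sheet music (and Nokia ringtone syntax).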

In step sequencers and trackers, you don’t have to set either duration or tempo in the sequencer. You can if you want to, but each step has a predefined length in time and every instrument has a fixed duration. This is true in particular for trackers using synthetical sounds, with roots in Soundmonitor (C64, 1986). To vary the duration, you use effects to sustain the note and alter volume envelopes. For example, in JCH Editor for C64 there is a default volume setting in the instrument, but by writing ‘+++’ in the tracker, you extend the duration of the sound. So in the picture below, the C-4 note on the right lasts longer than usual. There is also an effect to change the volume envelope, so you can make the duration shorter. But effects are abstracted one level: with the command SXX you point to line XX in an effect table where you set the volume.

JCH 20.G4 (C64, 1991), screenshot taken from HVMEC

Synthetic trackers are typically not ideal for playing with more complex volume dynamics. One reason is limitations in the platform that generates the sounds. As a composer you cannot set the volume any way you please; you are dependent on the so-called ADSR capabilities of the hardware/software (the volume curve generator). A special example is the C64, where the dynamics are highly determined by the hardware due to the ADSR-bug in the soundchip. It means that you can never be completely sure that an instrument will play with the same volume envelope. It depends on what ADSR-values the previous instrument had, and when it was executed. Also, some ADSR-settings tend to produce clicks, which can make it frustrating to program clean delays, for example.
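For readers unfamiliar with ADSR, here’s a minimal linear envelope sketch. The SID’s real envelope curves are not linear (and are haunted by the bug mentioned above), so this only shows what kind of control the composer hands over: four parameters instead of explicit volumes:

```python
# A generic linear ADSR volume envelope, sampled at time t (in ticks).
# Attack ramps up to full volume, decay falls to the sustain level,
# sustain holds until note-off, release fades to silence. All values
# here are illustrative, not SID register semantics.
def adsr(t, attack, decay, sustain, release, note_off):
    if t < attack:                        # ramp up to full volume
        return t / attack
    if t < attack + decay:                # fall to the sustain level
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain)
    if t < note_off:                      # hold at sustain
        return sustain
    frac = (t - note_off) / release       # fade out after note-off
    return max(0.0, sustain * (1.0 - frac))

env = [round(adsr(t, 2, 2, 0.5, 4, 8), 2) for t in range(12)]
print(env)
```

Once these four numbers are set in the instrument, the per-note dynamics are out of the composer’s hands, which is exactly the limitation the text describes.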

This is easier with sample-based trackers such as Protracker (Amiga, 1990), since you can set absolute volume levels at any point. In these trackers you can also use instruments with a fixed volume curve. An audio sample sounds the same each time you trigger it. But the duration varies with different notes, because a higher note means a higher playback speed, and therefore the sample ends faster. Usually this is fixed by looping the sample and setting the volume directly in the tracker instead. Each time you initiate the sound, you have to manually set the volume decrease: A0F, A0F in the 1st channel below. Without copy-paste functions, it takes a lot of work to write the same volume envelope each time an instrument is triggered. Being a somewhat lazy composer thus cuts down on the amount of volume dynamics.

Protracker 3.15 (Amiga, 1993), screenshot from xmp @ sourceforge
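The pitch/duration trade-off can be sketched with the Amiga’s playback arithmetic. Assuming the PAL clock and ProTracker’s period table (C-1 = 856, C-2 = 428, C-3 = 214 – these are standard values, but the exact framing is my sketch), playing an octave up exactly halves the duration of a non-looped sample:

```python
# Why a non-looped sample gets shorter as notes go up, assuming Amiga
# (Paula) playback on PAL: the sample rate is clock / (2 * period), and
# the period halves for each octave up.
PAL_CLOCK = 7093789.2  # Hz

def sample_duration(length_bytes, period):
    """Seconds a one-shot 8-bit sample lasts at a given period."""
    rate = PAL_CLOCK / (2 * period)    # bytes (samples) per second
    return length_bytes / rate

s = 8000  # an 8000-byte sample
for note, period in [('C-1', 856), ('C-2', 428), ('C-3', 214)]:
    print(note, round(sample_duration(s, period), 3), 'seconds')
```

Hence the looping trick: loop the sample so it never ends, then shape the volume with A0F-style commands in the pattern instead.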

In a sense then, we are back to the explicit note duration style of Harmony Compiler. To complete the circle, we can consider the tracker-like software that works with explicit note duration. HVMEC defines such software as editors, rather than trackers. In the screenshot of DMC below, you can see the pattern editor on the right side. Notes share space with commands (DURation, SouND, SLiDE), which means that step #8 (ADR.00) in this channel might not be the same place in time as step #8 in the other channels. This adds a new layer of trickery, and I remember cursing a lot when using this back in the days. But the detachment of channels and the lack of overview mean that you can play with polyrhythms very easily – a feature I always appreciated in LSDj as well. As far as I know, this is not very common in your average piano roll software, where the master clock is authoritative.

DMC 5.1 by Brian & DJB (C64, 1997), screenshot from HVMEC
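Here is a small sketch of why step #8 can drift between channels when each channel carries its own duration commands. This is a simplified model of the DMC idea, not its actual data format:

```python
# DMC-style per-channel durations: each channel is a list of
# (note, duration) pairs, so the absolute time of "step #8" depends only
# on the durations that came before it in that channel.
def step_times(channel):
    """Absolute start time of each step, in ticks."""
    times, t = [], 0
    for note, dur in channel:
        times.append(t)
        t += dur
    return times

# Channel A plays even 4-tick notes; channel B alternates 3 and 6 ticks.
a = [('C-4', 4)] * 10
b = [('E-4', 3), ('G-4', 6)] * 5

print(step_times(a)[8], step_times(b)[8])  # step #8 lands on different ticks
```

With a shared master clock (as in most piano roll software), step #8 would by definition be the same moment everywhere; here the channels can drift apart, which is what makes the polyrhythms cheap.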

To conclude then – it seems that of the software discussed here, the sample-based programs give the most direct control over volume and duration. In my view, this is reflected in the amount of volume/duration dynamics in Amiga MOD-music compared to C64-music. But the difference is not necessarily between sample-based and synthetic trackers. In Musicline Editor (Amiga, 1995) you can control synthetic sounds with pattern effects for instrument volume, ADSR and channel volume. But this flexibility is possible only because the synthesis is done in software. In LSDj (Gameboy) there are also different ways to experiment with the volume, but they are determined by the hardware’s limitations in ADSR.

The point of medium-specific analysis is to keep one thing constant across different media, and even if this was a rather basic attempt, it does give some interesting results and some basics to start from. Then you can start looking at hacks that overcome hardware limitations in volume control, such as for the 2A03.