Archive for the ‘theory’ Category

More on intended uses

March 26, 2018


I heard some anarchists had some feedback on my post about intended uses of technologies. They disagreed with my claim that we don’t know the original intentions behind Facebook. So let me expand a bit on the previous post.

I have this slightly mystical idea that humans can’t fully and perfectly understand what a certain medium is (following Kittler). So I don’t think we should go all anthropocentric and claim that we know exactly what this is, like objectively dude. Maybe aliens know it better? Who knows. Our understanding of something as “simple” as a Commodore 64 clearly changes over time, as we discover previously unknown details. So we should at least be a bit humble and keep an open mind about a medium’s substance.


That is not to say that some media aren’t made with specific intentions by their human inventors, intentions that might be obscured from the end user. That’s a very important discussion too, but a somewhat different one. Maybe Facebook was intended to become what it is today from the very beginning, as these anarchists claim to know, but how can we be sure? Spotify is easier to speculate about, because we know that it didn’t start with the idea of streaming music. They wanted to stream something. Whatever. Peer-to-peer something. But they knew that they wanted to sell ads. So maybe that was the original intention?

We can and should speculate about these things. Especially when we talk about the politics of media. After all, Spotify becomes something else when its history doesn’t start with “let’s revolutionize the music industry in our underwear” but instead “let’s sell ads by streaming stuff” (in Swedish). But I’m not sure that we should put too much focus on the origin.

In the end, it feels like a particularly Western thing to look for an “original intention”. A singular origin. The “one man, one idea” kind of thing (yeah, those stories are mostly about men). It’s probably more complex than that, right? Lots of people involved, economic interests, unexpected events, failures, power struggles, ideology, and so on. Even if we can define one point of origin, it seems pretty unlikely that any intended uses would be so firmly embedded in an object or in a company that they would withstand the pressure from decades of political and economic change. Or, you know, from your friend Steve who turns your computer company into a walled garden.


To me it seems fairly obvious that a human-made object can take on a “life” of its own that the inventors cannot anticipate or explain, and that the inventors don’t own. And it’s also fairly obvious that there are psychopath inventors and structures that don’t care/know about what they destroy.

Gif from Black Flags by William Forsythe. Just-in-case disclaimer: I don’t dislike anarchists.


Who decides what “intended uses” are?

March 18, 2018

For the last year or so, there has been a growing mainstream critique of social media. Silicon Valley entrepreneurs and investors are raising their concerns about what Facebook and other cyber gangs are doing to society. See for example the Center for Humane Technology. The recent concerns are often embedded in a discourse that “Russia” has abused Facebook to influence voting. But did they really abuse it? Or did they merely use it, as an article in WIRED recently put it?

It’s an important distinction, which has everything to do with how we talk about chip music and low-tech art. I’m doing a talk in Utrecht this summer, which has brought me back to these ideas. And they feel highly relevant now, with discussions about what social media are and how they should or should not be used.

Once upon a time the App Store rejected a Commodore 64 emulator because its BASIC interpreter could be used to program stuff. That was unacceptable at the time, but the policies later changed to allow the fierce power of C64 BASIC. It makes the point clear enough: what the iPhone and other iOS devices can do is not just conditioned by their hardware. The possibilities of a programmable computer are there, only hidden or obscured. And there are ways to get around that.

And this is true for all kinds of hardware. Maybe today it’s even true for cars, buildings and pacemakers. There are possibilities that have not yet been discovered. We rarely have a complete understanding of what a platform is. My talk in Utrecht will focus on how the chip- and demoscenes have unfolded the platforms that they use over time. What is possible today was not possible yesterday. Even though the platforms are physically the same, our understanding – and “objective definitions” – of them change. And it almost seems like the emulators will never be complete?

With a less object-oriented definition, it’s reasonable to see the 8-bit platforms not just as the hardware itself, but as an assemblage of SD-card readers, emulators and other more-or-less standard gadgets for contemporary artists and coders. The Gameboy, for example, might have been an inter-passive commodity at first, but after development kits were released, it changed. It used to be really difficult or expensive to get access to its guts, but now it’s relatively easy. So it might be time to stop framing Gameboy music – and most other chip music – as something subversive; something that goes against the “intended uses” of the platforms.

Sure, the Gameboy was probably not designed for this, in the beginning. And Facebook was probably not designed to leak data, influence elections, and make people feel like shit. But that’s not really the point. These possibilities were always there, and they always will be. But perhaps the Center for Humane Technology will push such materialist mumbo jumbo to the side, and re-convince people of the “awesomeness” of social media.

 

Custom Fonts Destroying Your World

March 11, 2018

I’ve started to post things at the text-mode tumblr (and archive) again. This was prompted by me starting to write on my book about text graphics again. It’s taking forever, but it’s definitely starting to take shape now.

I’ve started digging deeper into character sets and fonts, and it appears to me that customization is becoming more popular. Ray Manta has been experimenting a lot with custom charsets of his own in a yet-to-be-released text-mode editor. Now one of his fonts will be in the upcoming version of the Retrospecs app. This app lets you convert images to text, which is certainly not new; what is new is that you can choose between a wide range of fonts and palettes, mostly from 1980s computers and consoles.
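The core trick behind any image-to-text converter is simple enough to sketch in a few lines of Python. This is just a minimal illustration – the character ramp, function name and 0–255 brightness format are my own made-up choices, not how Retrospecs actually works (it also matches shapes, fonts and palettes):

```python
# Minimal image-to-text sketch: map each pixel's brightness (0-255)
# onto a ramp of characters ordered from visually sparse to dense.

RAMP = " .:-=+*#%@"  # sparse/dark to dense/bright

def to_text(pixels):
    """pixels: rows of brightness values 0-255; returns one string."""
    lines = []
    for row in pixels:
        lines.append("".join(RAMP[b * (len(RAMP) - 1) // 255] for b in row))
    return "\n".join(lines)

print(to_text([[0, 64, 128, 192, 255]]))  # one row, dark to bright
```

Brightness mapping is the oldest approach; the font-and-palette matching that these newer apps do is where it gets interesting.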

Ray Manta’s custom charset destroying the world

Playscii (formerly Edscii) was one of the first pieces of software I heard of that did this, and it also lets you design your own. Polyducks uses it to make interesting merges of ANSI, PETSCII and custom charsets. Some of his works, like the recent Boko Forest, almost don’t look like text graphics anymore. They look more like the tile graphics that, for example, the NES uses, where everything on the screen is built from these “mosaic blocks”. I call this form of text graphics text mosaics, because they often share more with geometric and mosaic art than with text. Still, it is text graphics on a technical level.

It’s not easy to say where text graphics end and tile graphics begin. A font can be designed to look like graphical tiles, and tiles can be designed to look like text. On a material level we can look at things like colours (text graphics makes more limited use of colours) and resolution (tiles are often larger). When there are more colours and more details, at some point it doesn’t feel like text graphics anymore.

But where that line is drawn is something that changes, over time and place and context. For example, I see sceners who say that ANSI graphics more than 100 characters wide are not “real” ANSI anymore. In other contexts it might be fine to make it 1000 characters wide – which basically turns it into pixel art where you don’t see the individual characters – and still call it ANSI. Because that’s what it is. Or?

If custom character sets and fonts become more popular, I think this will push these changes further, in all kinds of directions, in the coming years. And maybe eventually destroy the world. But more about that in the book. It will hopefully be finished before those years have passed…

If you don’t want to miss the book release, sign up to this, and you can read more on this topic in previous blog posts here.

Beyond Encodings: A Critical Look at the Terminology of Text Graphics

June 15, 2017

I used to write a lot about text art here on the blog, but it’s been a while now. I’m still very much into it, though, and I do update TEXT-MODE every now and then. Today, I’m publishing an article about text graphics in the Finnish academic journal WiderScreen’s new issue focusing on text art. It’s pretty great, I have to say, with contributions from active artists and scene researchers alike. Raquel Meyers gives a thorough look into her KYBDslöjd approach where she, among other things, disses the oft-cited ideas from media archeology that old media are more or less dead. Gleb Albert takes an interesting economic approach to ANSI art in the warez scene. Daniel Botz talks scrolltexts, Dan Farrimond shows teletext works, and Tommy Musturi shares very interesting artistic techniques with PETSCII graphics. And there’s much more.

I’ve contributed the text Beyond Encoding: A Critical Look at the Terminology of Text Graphics. In it I give brief overviews of ASCII, ANSI, PETSCII, Unicode and Shift-JIS art; some of the most popular forms of text graphics today. Text graphics is my own umbrella term for these visual forms, because I don’t think it’s necessary to downplay the skills and work involved by calling this “art”. Just like with the demoscene, I think it’s a lot more relevant to generally consider these works as a form of craft. Raquel also touches on this topic in her text.

My key point, though, is that I find terms such as ASCII art or PETSCII art more difficult to use by the day. After all, ASCII and PETSCII are forms of encoding. They only stipulate what number each character has. A lower-case a is 97 or 129 or 65 or something else. That’s of course very important for the technical purpose of displaying it correctly, but mostly… I mean… Who cares what numbers are there?
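To make the numbers concrete, here’s a quick Python check. ASCII and EBCDIC (cp500) are in Python’s standard codecs; PETSCII has no standard codec, so its value is just stated as a constant:

```python
# The same lower-case "a", three different numbers. The glyph is what
# the artist cares about; the number is bookkeeping for the machine.

ascii_a = 'a'.encode('ascii')[0]   # ASCII: 97
ebcdic_a = 'a'.encode('cp500')[0]  # EBCDIC (IBM code page 500): 129
petscii_a = 65                     # PETSCII, shifted (lower-case) mode

print(ascii_a, ebcdic_a, petscii_a)
```

Same letter, three numbers. Which is the point: the encoding tells you nothing about what the graphics look like.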

It’s about time to start to look beyond the encodings to discuss and categorize text graphics according to other criteria. Which fonts are used? Are the fonts customized? What kinds of characters are (not) used? What style does it have? How many colours and what resolution does it use? How was it made, and which medium is it presented on? In what (sub)cultural context does it exist? For these purposes, I’ve included a model in the text to look at the different material levels of a piece of text graphics.

I also suggest the term text mosaic to refer to text graphics that use blocks rather than lines. These are especially popular in Western ANSI and PETSCII art, but exist in all forms of text graphics where the font has block characters. Block ASCII, Unicode or Shift-JIS art based on block elements, Chinese ANSI, and so on.
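The “mosaic” quality of block elements is easy to demonstrate. Here’s a small Python sketch (the function and variable names are my own, for illustration) that packs two rows of pixels into one row of characters using the Unicode half-block and full-block elements – one simple trick in the spirit of text mosaics:

```python
# Text-mosaic sketch: each character cell holds two vertical "pixels",
# drawn with Unicode block elements: space, upper half block (U+2580),
# lower half block (U+2584) and full block (U+2588).

CELLS = {(0, 0): " ", (1, 0): "\u2580", (0, 1): "\u2584", (1, 1): "\u2588"}

def mosaic(bitmap):
    """bitmap: rows of 0/1 with an even row count; returns text lines."""
    return ["".join(CELLS[(t, b)] for t, b in zip(top, bottom))
            for top, bottom in zip(bitmap[0::2], bitmap[1::2])]

checker = [[1, 0, 1, 0],
           [0, 1, 0, 1]]
print("\n".join(mosaic(checker)))  # prints a row of half blocks
```

The result reads as pure geometry, not as text – even though, on a technical level, it is nothing but characters.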

Text mosaic is different from ASCII art. I think we can accept the popular idea of ASCII art as mostly using line characters and alphanumeric characters. You know, all the ASCII converters work in this kind of Matrix style. And this idea actually exists in the ASCII art scene too, where you talk about block ASCII when it’s not “normal” alphanumeric line-based ASCII art.

In this way, we don’t have to fight against the dominant idea of ASCII art, but we can and should develop more refined terminology for when it’s necessary.

OK, over and out.

What Can We Learn From the Demoscene?

November 28, 2016

I was in Montréal for the I/O Symposium and gave the talk What Can We Learn From the Demoscene? In 45 minutes I explained everything about the scene and what other fields could learn from it.

Or well, not exactly. I tried to give a broad view, but I zoomed in on four key points:

1. Computing as craft. The idea that code (and music and graphics) requires skills and knowledge about the material you are using. The techne is more important than the art, and the human is more important than the machine. Basically. This means that the scene is making computing sustainable, when most others are not and the internet already seems to require nuclear power to live.

2. Non-recorded formats. Releasing things as code rather than recordings opens very different possibilities. Scene productions are not products – removed from the platform once finished – no, they are states of the machine (Botz). There are countless archives of data that future researchers can unleash heavy data analysis on. What will the recording industry offer future researchers? Not much. Especially not if they maintain their stance on copyright and related rights.

3. Collective copyright system. There has always been a tension around ownership in the scene. Early on there might have been plenty of anti-copyright among crackers, but later sceners who wanted to protect their works had a much more conservative stance. I exemplified this through the Amiga mod scene, where artists sampled records and claimed ownership of the samples. “Don’t steal my samples”, like it says in many a mod file. On the other hand, the mod format made it extremely easy for anyone to take those samples, or that cool bassline, or whatever else they might fancy. The remix culture was present in the materiality, but the scene resisted it for various reasons. They developed a praxis where artists who transgressed – who borrowed too much, or in a wrong way – would be shamed in public and have their status lowered. This sounds brutal and even primitive, but copyright praxis today means that you can do whatever you want if you have the capital for it. Which is perhaps not much better?

4. A bounded culture. There is a sense of detachment from the rest of society in the scene. The crackers and traders broke laws, the sysops didn’t want journalists sneaking around in their bulletin boards, and some artists follow the idea of “what’s made in the scene stays in the scene”. Some online forums today do not accept members if they are not sceners. And so on. There are all kinds of problems with this attitude, but it also meant that the scene could let their traditions and rituals take root, over a long period of time. Without it, it’s harder to imagine that kids in the 1990’s could maintain a network culture on their own, even before the www was commercialized. The question is, though, how many teenagers today are interested in all those obscure traditions and rituals?

Building on talks I’ve had with Gleb Albert, I also talked briefly about the neoliberal tendencies in the scene. How meritocracy and competition were so important, how groups were sometimes run as businesses with leaders and creatives and workers, and how there was a dream of a network culture where The Man was not involved.

Discussions followed about how the neoliberal tendencies differed in the North American demoscene. In America, they said, people got into cracking games and making demos with the goal of making a career and money. I think this is one of the topics that Gleb Albert is looking into (in Europe), especially the connections between the cracking scene and the games industry.

There were also discussions about what I’ve called the collective copyright system. Some people in the audience talked about how coders would secretly look at other people’s code (because, again, that’s possible due to the formats used) and take inspiration from it. I’m sure most sceners did this at one point or another. But the point is that it wasn’t considered positive like in remix cultures such as hip-hop, vaporwave or plunderphonics. That tension between the Open and the Closed is probably something we need to understand better when we develop post-copyright networks in the digital.

Tech Criticism is Dead?

April 15, 2016

Evgeny Morozov has been one of the spicier academics of the past years. He combines philosophy, internet criticism and social science to deliver clever and well-founded blows to the world.

While reading this, I got the impression that he is starting to run out of steam. He was always a bit of a pessimist or cynic, but now it feels like he’s doubting what he’s doing:

Why, then, aspire to practice any kind of technology criticism at all? I am afraid I do not have a convincing answer. If history has, in fact, ended in America—with venture capital (represented by Silicon Valley) and the neoliberal militaristic state (represented by the NSA) guarding the sole entrance to its crypt—then the only real task facing the radical technology critic should be to resuscitate that history. But this surely can’t be done within the discourse of technology, and given the steep price of admission, the technology critic might begin most logically by acknowledging defeat.

He’s talking about the academic world, and seems to intentionally ignore a lot of active criticism that is taking place in media studies, art, sociology, design, and so on. But I think his point is: the criticism is not making an impact on the public so it doesn’t really matter anyway.

Internet freedom mongers were appalled by Pirate Bay founder Peter Sunde saying that the battle for the free internet has been lost and it’s time to move on. Compared with the two other admins of Pirate Bay, he was more into the political aspects of internet activism than the technology. And he still is.

Morozov, too, has his aim on larger political questions. He lashes out at the technologists and the technology critics who fail to see the bigger picture. Like in the Apple vs FBI debate, which was not only about technology (encryption) and personal integrity, but much more complex. The issue at hand is not what technology does to the daily lives of human brains and bodies. Or how technology should be an “extension of man” (a slave). The main question should be more like: how does it infect society, and who wants the consequences?

I’m thinking about how this relates to the lo-fi computing world. 10-20 years ago it was charged with a myriad of political values of anti-consumerism, anti-hitech, libertarianism, socialism, recycling and sustainability, DIY/punk, retrofuturism, and so on. There’s not much of that left now, is there?

Retrocomputing to me seems more like a club for middle aged conservative white men who have beards because of Linux or because of “I’m not a hipster, but…”. We have enough money to pay for vintage hardware and ridiculous crowdfunding campaigns. Some of us even use it from time to time! But emulators are so much more convenient, of course…

Morozov says that technology criticism is “just an elaborate but affirmative footnote to the status quo”. And that pretty much describes much of the tongue-in-cheek, just for fun, “hacking intended uses” people of retrocomputing of the last 10 years. It has confirmed that high-tech progress is #1, baby.

Meanwhile, the tech industry “doesn’t really like democracy” and wants to techify the governance of cities. And in all honesty, doesn’t it seem likely that this will eventually happen? Capitalist realism + Californian ideology.

F**k yeah, loving the end of an era.

From Space to the Clouds

January 15, 2016

For the last 30 years, computer culture has moved from outer space into the clouds. From the dark and mysterious into the bright and familiar. From the alien and unknown to the heavenly.

Look at computer magazines from the 70s and 80s and you’ll see joysticks flying around in space, space exploration metaphors, black backgrounds, otherworldly vector grids, and star fields (I sometimes post these things here).

Space was the place, and not only for computers. Movies, record sleeves, design and advertising were often out in space. Mars was exciting. Governments spent a lot of money on space exploration. And in the computer underground, space aesthetics was the shit. Personally, I feel like the Amiga crack intro aesthetics of the years around 1990 had something eerily special about them, something that hasn’t really been matched since.

Another way of describing this shift is to start in the depth of Hades instead, and move upwards to the clouds. Then you can also fit in all the metaphors about water and oceans (Pirate Bay, surfing the web) and land (information highway) and biology (swarms, flows, feeds). Computers started out in Hades, looking pretty evil and frightening (like many other “new” technologies). The computer world was something dark, something unknown and unexplored. Like space. Like Hades.

You can get a sense of that from how computers sound in movies and TV series. In a movie from the 70s or 80s, or even earlier than that, computers were usually sonified with fast arpeggios of random square-wave bleeps. Scary and harsh, not easy for a human to process, as if from another world. In the 90s computers started to sound different. A sort of high-pitched ticking sound; a single tone/noise iterated into eternity. Rational and trustworthy. Reliable.

Those sounds are still heard in movies and series, especially when the computers are doing something important for the plot. They emphasize the computer’s cold power, for bad or for good, usually in scenes with advanced stuff rather than everyday use.

In everyday use, it’s the sound of the operating system that is perhaps most relevant. Brian Eno invented ambient in the 70s and, through his startup sound for Windows 95, also invented the genre of operating system music. Soothing, kind, soft, business/beach, cloudy, comforting. Sort of vaporwavey today, I suppose.

This could be seen as a step away from the complicated and clumsy computer world of the 1980s, to a new era of user-friendliness. In a way, it was part of a general move away from hardware. Since the 1990s, software has taken over from hardware. We don’t want hardware anymore; we want it to be ubiquitous, invisible, unnoticeable, transparent. The interaction between computers and humans is disappearing. Designers no longer design interfaces but experiences (UX), something that Olia Lialina has written about many times.

Again, this brings us into the clouds. The dirty and dark cyberspace is being replaced by the immaterial and heavenly clouds. It’s a quest for perfection in a secluded world, protected from bad cyberd00ds and bulky hardware and political conflicts.

Everything solid condenses and turns into clouds that pee precious data on us.

Greets to FTC for inspiring this post in the allotment cottage!

What’s Chipmusic in 2015?

November 13, 2015

When I wrote my thesis on chipmusic in 2010, chipmusic was in a transition phase. At least in Europe, there used to be a lot of influences from genres like electroclash and breakcore, and towards the 2010s it was common to hear house influences. House, not in the 80s or 90s way, but more in the EDM kind of way. I remember playing a chip event in 2008 where all the acts before me played EDM-like music, so I felt compelled to start my headliner set with religious chip rock as a childish countermove. Instant anti-success!

That same year I mentioned in a blog post that more dub/2-step influences in chip would be nice. And then dubstep morphed from an obscure and ambiguous British thing into a full-on mega-defined bro monster, and the chipscene followed suit. Bass!

So from where I’m standing (which is not super close to the chipscene), EDM and bass still seem to be two dominant influences on the chipscene. It’s a bit like breakcore and electroclash before, but with one big difference. Chipmusic as a genre/ideology/praxis has changed from putting the technology first to putting the sound first. To put it bluntly.

Just like in the 1990s, the hardware used to produce the sounds of chipmusic is not the main thing. The pendulum has swung back, and continued even further. Not only is the hardware less important; it seems like the sounds are less important too. Not everywhere in the chipscene, but in some contexts.

There are some oldschool names in the chipscene whose music no longer sounds like chipmusic, and is not made with chipmusic tools, but is still tagged as chipmusic, listened to in the context of chip, and discussed in the chipscene. It seems to be part of the chipscene, but it doesn’t connect to the platforms or aesthetics (media and form) of chipmusic. Go to a chipmusic festival and you can listen for yourself.

My last few releases might fit in here to some extent. I partly use other sounds and instruments than the standard chipmusic repertoire – and have been for quite some time. So I’m not saying that there is something “wrong” with this, just that it seems like a general shift in how the chipmusic/chiptune terms are used, and what they mean.

The other side of the coin is that there are people who should know the term, but don’t. I was chatting with Dubmood and he mentioned that a lot of newcomers start to make chipmusic without even knowing about the term. Even if what they do is “authentic” chipmusic (from a 00’s perspective), they don’t describe it as such, and people don’t listen to it as such, I suppose.

We’re painting with a big brush here. Or perhaps with many small brushes. I’m not saying this happens everywhere all the time, but it is a tendency. It might grow, it might disappear, but it’s here now.

This is the chipscene as a culture: a network of people on social platforms online, perhaps with a long history of making chipmusic, who now make other kinds of music but continue to hang out. They might use modular synths to make noise, or oldschool synth VSTs to make synthwave vaporwave something, or phat bass music, or polka drone, or something else.

Of course, the tech-focused and aesthetics-focused parts of the chipscene still exist: in the demoscene, in indiegames, in forums like chipmusic.org, Battle of the Bits, the FB-group Chiptunes=WIN (with 4000 members now), and so on. But as for the performers and recorders in the chipscene, the technopurism that glued the scene together, for better or worse, is not there anymore. And if the sounds won’t be a defining factor either, then where does that leave the chippers?

Perhaps chipmusic, at least in some contexts, has been de-genrefied to the point where it doesn’t exist anymore? And maybe that’s not a bad thing? Finally the people who say that chipmusic is not a genre will be right, without a doubt.

The Chip Sect

October 1, 2015

I recently watched a reasonably bad TV show about sects. There was a doctor of religion who talked about the interviews with children growing up in sects that she had done for her dissertation.

Are these children really free? the journalist asked her. Her answer – an obvious adaptation to the level of the discussion – was yes.

According to her, the children didn’t see themselves as non-free. They could choose between “freedom” and “safety” and they had chosen the latter. The sect. And in that safety, they felt free to do what they wanted.

I thought that was a nice way to put how I sometimes feel about using old computers to make music. Freedom within the system. Thinking within the box. And the safety of always having the option to go “well, it might suck, but at least it will always be chipmusic”. I think Zabutom expressed something similar in my master’s thesis, actually.

The journalist replied: But aren’t they so repressed by the sect that they can’t think anything else?

¯\_(ツ)_/¯

The chipscene is a sort of bounded culture, where the members choose to live in celibacy from the temptations of technology. A community of people based on abstinence rather than affluence. They dare to say no.

People on the outside cannot understand why you would want to limit yourself to old machines. For the outsiders, more options means more freedom. For the insiders, more options means more angst. More confusion.

In my master thesis and in this blog, I described this as a sort of alternative to the progress-mania of modernism/capitalism. But I had never thought about it from this religious perspective before.

It makes sense. Some, at least. After all, it’s one thing to critique these things in text or art, and it’s another way to actually live that critique. And in some way, that is what some sects do. They might be fecked up in other ways, but they don’t buy the affluent idea of freedom that most of us do in the West. Spam freedom.

We think we are more free because we have “freedom of speech”. We think we are more free because we can wear “whatever we want”. We think we are more free because we can buy 98791672445838 different kinds of whatever. And by we I mean me.

Deep down I think I believe that too, because I’ve been trained to. It is completely normal. If I wouldn’t believe it *at all* I would be in trouble. I would be in a sect. Or even worse, I would make 8-bit music.

A retrospective on the stories and aesthetics of 8-bit music

January 26, 2015

Taken from the catalogue to Lu Yang’s exhibition ANTI-HUMANISM at the OK Corral gallery in Copenhagen. I was asked to write a free-floating essayoid text about 8-bit music, and I came up with this. I added some links here too, for further reading/watching/listening.

When practitioners of 8-bit music like me write about the genre, it is hard to ignore the skills and effort needed to make the music. To play 8-bit music you need to master a not-so-intuitive software interface in order to communicate with a computer chip, which in return produces bleeping sounds from cheap digital logic. On or off, increase or decrease. These inputs are the basics of digital technologies, making it as if there is something timeless about 8-bit music, although it might seem really old: 30 years in digital terms is the equivalent of something like 1001011001010101011111101011 years.

8-bit music can be understood as a low-level cultural technique of music hacking, where different stories can be told. The sceptic might tell a story of nostalgia for videogames, where the composer makes simplistic music because the tool used doesn’t allow anything complex to be made. Indeed, that would be a normal story to tell if we believe that newer is better, and that new expressions require new technologies. It’s an almost logical story in a society that values quantitative increases over quality.

The most common story about 8-bit music among academics, artists and journalists, however, puts the human at the centre of attention. It sometimes has a similar narrative to an old monster movie. There is a hero who learns how to manipulate and finally control some sort of wild beast. Instead of a monster, the Obsolete Computer is a mysterious relic of old school digital consumerism that is nowadays hard to understand, both in terms of purpose and function. A young white male hero appears and tames a frightening thing with rational choices, and probably kills it with physical or symbolic violence. He achieves freedom and love and/or emancipation from capitalism or modernism or something. The end.

I should know, for I too have told this kind of story. Many times. I started making music with 8-bit machines as a kid in the early 1990’s, when that was (almost) the normal thing to do. Thing is, I never stopped using them. Throughout the 2000’s, as 8-bit music started to intertwine with mass culture again because of the current retromania, people like me had to start explaining what we were doing. Journalists started to ask questions, promoters wanted biographies that would spark an interest, art curators wanted the right concepts to work with, and so on. So during the noughties, a collective story started to emerge among those of us who were making 8-bit music in what I have called the chipscene: a movement of people making soundchip-related music for records and live performances (rather than making sounds for games and demos as was done during the 80’s and 90’s).

The stories circulated around Commodore 64s, Gameboys, Amigas, Ataris, Nintendo Entertainment Systems, and other computers and game consoles from the 1980’s. We were haunted by the question “Why do you use these machines?”, and although I never really felt like I had a good answer, we were at least pretty happy to talk about our passion for these machines. For a while, anyway.

In comparison to many other music movements, we spoke out about the role of technology, and we did it at the expense of the music. We didn’t care much about the style and aesthetics of the music we made, because 8-bit music could be both cute pop and brutal noise, droney ambient and complex jazz. We didn’t care about the clothes we wore, or which drugs we took, or which artists we listened to. We formed a subculture based on a digital technology that uses 8 bits instead of the 32 or 64 that modern machines do. Defining our music movement as “8-bit music” was a simplified way of explaining what we did. It was a way of thinking about medium and technology intrinsic to some modern discourses on art. Like, anything you do with a camera is photography. Simple, but slightly … pointless?

The music somehow came in second. Or maybe third. Sometimes the music we made almost became irrelevant. The idea of seeing someone on a huge stage with a Gameboy was sometimes enough. The primal screams of digital culture, roaring on an oversized sound system in a small techno club, were what we needed to get us going, even if it sounded terrible. Some of us were more famous than others, sure, but there weren’t the same celebrity and status cults as in some of the “too serious” 1990’s-style electronic music scenes. For us, the machines were the protagonists of the stories. Sometimes it was almost as if we – the artists who made the music – had been reduced to objects. It was as if the machines were playing us, and not the other way around. Yeah … very anti-human!

To be honest, not many people are willing to give up their human agency and identity, step back, and give full credit to the machine. Or even worse – to have someone else do that for them. Well, I didn’t feel comfortable with it, at least. People came up to us when we performed live to interrupt and ask what games we were playing, or to request some old song from a game. But for many of us, the entire movement of 8-bit music was not about the games of the 1980’s. It was about the foundational computational technologies and their expressions manifested as sounds. Or something like that, anyway.

It’s quite interesting how this came to be. How did 8-bit music become so dehumanized, when it involves quite a lot of human skills, techniques, knowledge and determination? I think an important factor was when the chipscene was threatened by outsider perspectives. In 2003, Malcolm McLaren, known for creating spectacles such as the Sex Pistols in the 1970’s, discovered 8-bit music. For him, this was the New Punk, and he wrote a piece in Wired magazine about how the movement was against capitalism, hi-tech, karaoke, sex, and mass culture in general: through the appropriation of discarded commodities, the DIY spirit, the raw and unadulterated aesthetics, and so on. At McLaren’s command, mainstream media started to report on 8-bit music, at least for 15 minutes or so.

To be fair, it was a good story – when Malcolm met 8-bit. But it pissed off plenty of people in the scene, because of its misunderstandings, exaggerations and non-truths. It did, however, play an important role in how the scene came to understand itself. McLaren’s story had stirred a controversy that made us ask ourselves: “Well, if he’s wrong, then who’s right?” We didn’t really know, at least not collectively. McLaren pushed the chipscene into puberty, and it began to search for an identity.

I was somewhere in the midst of this, and contributed to the techno-humanist story that started to emerge. It was basically this: we use obsolete technologies in unintended ways to make new music that has never been made before. Voilà. The machine was at the centre, but it was we, the humans, who brought the goods. We were machine-romantic geniuses who figured out how to make “The New Stuff” despite the limitations of 8-bit technologies. It was machine fetishism combined with originality and the classic suffering of the author. It was very cyber-romantic, but with humans as subjects, machines as objects, and pop-cultural progress at the heart of it. It could be a story of fighting capitalist media. All in all: pretty good fluff for promotional material!

Over time, I became increasingly uncomfortable with the narratives forming around 8-bit. In 2007, I was asked to write a chapter for Karen Collins’ book From Pac-Man to Pop Music. I researched the history of 8-bit music and realized that the current technocentric view of it was a rather new idea. In the 1980’s there wasn’t any popular word for 8-bit music. Basically all home computer music was 8-bit, so there was no need to differentiate between 8, 32 and 64 bits as there is today. That changed in the 1990’s, when the rise of hi-tech machines created a need for popular culture to differentiate between different home computer systems and the music they made.

The term chipmusic appeared to describe music that sounded like 1980’s computer music. It mimicked not only the technical traits of the soundchips, but also the aesthetics and compositional techniques of the 1980’s computer composers. So 1990’s chipmusic wasn’t made with 8-bit machines. The term was mostly used for music made with contemporary machines (Amigas and IBM PCs) that mimicked music from the past. It wasn’t about taking something old and making something new. It was more like taking something new and making something old. In other words: not very good promotional fluff.

I realised something. The techno-determinist story of “anything made with soundchips is chipmusic” was ahistorical, anti-cultural, and ultimately: anti-human. Sure, there was something very emancipating about saying “I can do whatever I want and still fit into this scene that I’m part of”. That’s quite ideal in many ways, when you think about it.

Problem is – it wasn’t exactly like that. Plenty of people made 8-bit or soundchip music that wasn’t understood as such: the digital hardcore music of the 1990’s that used Amigas, the General MIDI heroes of the 1990’s web, the keyboard rockers around the world who were actually using soundchips. So for me it became important to explore chipmusic as a genre, rather than just as a consequence of technology. And if it’s not just a consequence of technology, then what is it? How were these conventions created, and how do they relate to politics, economics and culture?

This is what I tried to answer in my master’s thesis in 2010. Looking back at it now, what I found was that it was actually quite easy to not make chipmusic with 8-bit technology. I mean, if you hooked a monkey straight up to an 8-bit soundchip, it’s not like there would be chipmusic. It would be more like noise glitch wtf. Stuff. Art. I don’t know. But not chipmusic. Chipmusic was more about how you used the software that interfaced between you and the hardware soundchip. So I tried to figure out how this worked for me and, more importantly, for the people I interviewed for my thesis. How and why do we adapt to this cultural concept of what non-human “raw computer music” sounds like?

I am still recovering from this process. During this time my music became increasingly abstract and theoretical. I started to move away completely from danceable and melodic music, and got more interested in structures and the process of composing music, rather than the results of it. I wanted to rebel against the conventions that I was researching, and find something less human, less boring, less predictable.

But at the same time, I wanted to prove that we don’t need hi-tech machines to make non-boring music. I despise the idea that we need new technologies to make new things. And I am super conservative in that I, in some way, believe in things like craft, quality, and originality. In some way.

So I was trying to find my own synthesis between me and the machine. Since I am not a programmer, I didn’t work with generative systems like many post-human composers do. I kept a firm focus on the craft of making music. For example, I started to make completely improvised live sets without any preparation. I got up on stage, turned on a Commodore 64, showed it on a beamer, loaded the defMON software, and made all the instruments and the composition in front of the eyes and ears of the audience.

I like this a lot because it’s hard work (for me) and it gives surprising results (for me). It’s a bit similar to live coding, if you’ve heard of that, but with a less sophisticated approach, I suppose. It’s more like manual labour than coding: typing hundreds of numbers and letters by hand, instead of telling the computer to do it. Doing it “by hand” invites different mistakes than automation does. Which leads to surprises, which leads to new approaches.

I am not in full control, nor do I want to be. Or, more correctly, I don’t think I can be. I agree with the media theorist Friedrich Kittler that we can never fully grasp what a computer is and how it works. It is a thing of its own, and it deserves respect for what it is. We should not say that it has certain intended uses – like a “game computer” – because that is just semantic violence which, in the long run, reinforces the material censorship of Turing complete machines into crippled computers, like smartphones.

I think that whatever we use these things we call computers for is okay. And most of us have odd workarounds to make technology do what we want, even if we are not programmers. Olia Lialina calls this the Turing complete user: s/he who figures out how to copy-paste text through Notepad in Windows to remove the formatting, or perhaps how to make Microsoft Word not fuck up your whole text.

What I mean is: even if I make sounds that people say “go beyond the capabilities of the machine”, I don’t see myself as the inventor of those sounds, nor do I think that they go beyond the machine. They were always there, just like Heidegger would say that the statue was already inside the stone before the stone carver brought it forth.

Yeah, I suppose it’s some sort of super-essentialist point of view, and I’m not sure what to make of it, to be honest. But I like how it mystifies technology, rather than mystifying human “creativity”. The re-mystification of technology is great, and the demystification of the author is important. What if the author is just doing stuff, and not making fantastic art? What if it’s just work?

My Dataslav performance plays with this question. I sit in a gallery, and people tell me what kind of song they want, and I have to fulfil their wish in no more than 15 minutes. I turn myself into a medium, or perhaps more correctly: a medium worker. I mediate what other people want, but it takes skills and effort to do it. It’s perhaps craft, not art. Or maybe it’s just work. Work that I don’t get paid to do, like so many other “cultural” workers in the digital arts sector.

If the potentials are already present in the technology, and we humans are there to bring them forth, that kind of changes things, doesn’t it? We don’t really produce things by adding more stuff. We are more like removing things. Subtraction rather than addition.

And if that’s the case, then it’s obviously much better to use something where we don’t need to subtract so much to make something that most people haven’t already made. If everything is possible, which some people still believe to be the case with certain technologies, then that’s a whole lot of stuff to delete before you get to the good stuff!

So, start deleting. It’s our only hope.