Archive for the ‘politics’ Category

Who decides what “intended uses” are?

March 18, 2018

For the last year or so, there has been a growing mainstream critique of social media. Silicon Valley entrepreneurs and investors are raising their concerns about what Facebook and other cyber gangs are doing to society. See for example the Center for Humane Technology. The recent concerns are often embedded in a discourse that “Russia” has abused Facebook to influence voting. But did they really abuse it? Or did they merely use it, as an article in WIRED recently put it?

It’s an important distinction, which has everything to do with how we talk about chip music and low-tech art. I’m doing a talk in Utrecht this summer, which has brought me back to these ideas. And they feel highly relevant now, with discussions about what social media are and how they should or should not be used.

Once upon a time the App Store rejected a Commodore 64 emulator because its BASIC interpreter could be used to program stuff. That was unacceptable at the time, but these policies later changed to allow the fierce power of C64 BASIC. It makes the point clear enough: what the iPhone and other iOS devices can do is not just conditioned by their hardware. The possibilities of a programmable computer are there, only hidden or obscured. But there are ways to get around it.

And this is true for all kinds of hardware. Maybe today it’s even true for cars, buildings and pacemakers. There are possibilities that have not yet been discovered. We rarely have a complete understanding of what a platform is. My talk in Utrecht will focus on how the chip- and demoscenes over time have unfolded the platforms that they use. What is possible today was not possible yesterday. Even though the platforms are physically the same, our understanding – and “objective definitions” of them change. And it almost seems like the emulators will never be complete?

With a less object-oriented definition of these platforms, it’s reasonable to define the 8-bit platforms not only as the platform itself, but as an assemblage of SD-card readers, emulators and other more-or-less standard gadgets for contemporary artists and coders. The Gameboy, for example, might have been an inter-passive commodity at first, but after development kits were released, it changed. It used to be really difficult or expensive to get access to its guts, but now it’s relatively easy. So it might be time to stop framing Gameboy music – and most other chip music – as something subversive; something that goes against the “intended uses” of the platforms.

Sure, the Gameboy was probably not designed for this, in the beginning. And Facebook was probably not designed to leak data, influence elections, and make people feel like shit. But that’s not really the point. These possibilities were always there, and they always will be. But perhaps the Center for Humane Technology will push such materialist mumbo jumbo to the side, and re-convince people of the “awesomeness” of social media.


New Media is More Obsolete than Old Media

May 18, 2014

Cory Arcangel, Golan Levin and others have done some great work to retrieve old Amiga graphics that Andy Warhol made back in the day. And I think it’s great that the Amiga gets some attention in terms of computer creativity instead of the constant Apple-ism. But… what kind of attention is it?

Many artists, media scholars and journalists have a special way of talking about old media. The term hacking usually pops up. Even if you just download software and use it in a very normal way – which is how most chip music is made, for example – we still love to call it hacking. But why? There are several possible explanations. First – we love to believe that humans are in control of technology and that fantasy can flourish with these old and supposedly non-user-friendly machines. Human intelligence can tame even this uncivilized digital beast! Second – the term hacking oozes creativity and innovation and has become an omnipotent term used for almost everything.

Obsolescence is another popular word. I’ve written about this many times before, for example in relation to zombie media. Let’s put it like this: new media is permeated with planned obsolescence. Old media is not. Amigas were not designed to be obsolete after a few years like so many modern platforms, systems and programs are. So from our current perspective it seems totally incredible that these old floppy disks and file formats can still be used. Because we’re not used to that anymore. Most people don’t know how easy it is to copy that floppy to a flash card and view the images with UAE or even Photoshop.
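For the curious: the copying step really is that simple in principle. Here is a minimal Python sketch of block-by-block disk imaging – the file names are placeholders, and a real Amiga floppy of course needs a drive or USB imager that exposes it as a raw file or device:

```python
# Image a disk by copying it block by block into an .adf file –
# raw sector data, which emulators like UAE can mount directly.
# An Amiga DD floppy holds 880 KB = 1760 blocks of 512 bytes.

BLOCK_SIZE = 512
FLOPPY_BLOCKS = 1760

def image_disk(source_path, target_path):
    """Copy a raw disk (or an existing dump) block by block; return the block count."""
    blocks = 0
    with open(source_path, "rb") as src, open(target_path, "wb") as dst:
        while True:
            block = src.read(BLOCK_SIZE)
            if not block:
                break
            dst.write(block)
            blocks += 1
    return blocks

# Stand-in for a real device such as /dev/sdb: a fake 880 KB "floppy".
with open("floppy.raw", "wb") as f:
    f.write(b"\x00" * BLOCK_SIZE * FLOPPY_BLOCKS)

print(image_disk("floppy.raw", "warhol.adf"))  # 1760
```

Since an .adf is nothing but the raw sector data, that is also why emulators and even image viewers can open the result straight away.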

It’s also common to think of old media as fragile. But then why do nuclear missiles rely on 8″ floppies? Why do so many airports use DOS, matrix printers and Hi8 video? Why did Sony sell 12 million 3.5″ floppies in 2009? Why did so many gabber/noise people use the Amiga for live shows? Because these things are stable, sturdy and built to last. And because it’s expensive to change, sure, but the point is: old media is clearly not as fragile as many people seem to think.

To summarize this discourse we can say that 8-bit users are hacking media that is fragile and obsolete. While there is obviously some truth to that statement, a general adoption of it rests on some pretty problematic ideological assumptions that we all need to relate to in order to get by in a consumer culture. For example:

“New media is better than old media because in technology, change = progress”.

I think we can all be more careful with how we discuss old media in order to move away from this dangerous misunderstanding. I know that there are many contexts where that is not suitable, possible or meaningful. But technological change oozes with politics and it doesn’t have to be conservative or retro-cool to criticize or reject the new. So bring it on, hipster!


The Truth Behind E.T + Something a Lot More Disturbing

May 2, 2014

In case you missed it – for the past week the internetz has been going bananas about Microsoft digging out tons of Atari cartridges in a desert in the USA. Microsoft? Yeah, they are sponsoring a documentary about the “urban myth” that Atari’s game E.T was so bad that they buried it in a desert in the USA in 1983. And now they’ve dug it out, and revealed the truth! Well…

1. It’s not news. It’s always been known that they buried cartridges (New York Times from 1983). Wikipedia even claims that kids looted the site to find not only E.T-carts but also Raiders of the Lost Ark, Defender, and Berzerk.

2. The E.T game was an experiment made in a few weeks. Whether the game is crap or not is up for debate, but it was a bold move in a flood of boring.

3. Atari made bad business choices and market predictions. They over-produced and over-priced their games, under pressure from their owner Warner. This was one of the factors behind the North American video game crash. It wasn’t about one single bad game. It was a bubble that burst. And it took years before it would inflate again, when Nintendo stepped up to show how it’s done…

4. We now know for sure that it wasn’t only E.T in there, but several other games. In total more than 700,000 cartridges.

It’s going to be interesting to see the documentary, I guess. But the reporting of BREAKING! single game actually buried in the ground wow! is just wrong. The true story is more like a tech-bubble leading to tons of crap in the desert, which pissed off the locals living there. And that is actually not so far from how it works today. Only a lot more toxic, on a much larger scale, and completely normalized.

Planned obsolescence and “e-trash” commerce make sure that tons of toxic tech-stuff is shipped to e.g. Africa and China to kill the kids who work with it. It’s a tech bubble – since both the production and disposal of consumer tech is ecologically and socially unsustainable – only this bubble is out of sight, and way more serious. Hey, maybe that could be the topic of your next documentary on Xbox, Microsoft?


Toxic e-waste documentation (China, 2005)

More Networks, Less Internet?

January 3, 2014

When I started this blog 6 years ago, the internet was still a poster boy for freedom. Anyone could publish or access anything, anywhere, anytime. We were all pretty amazed by how “far” we had come. Surfing the waves of neoliberal postmodernism, we celebrated the right of individual freedom online, free from physical constraints. Free knowledge for all! We were all living the American dream. Or something.

So, at that time, it seemed almost irrelevant to talk about other networks for communication. Even so, I was writing a paper on the Amiga music scene in the 1990s, and what it could teach us about the future of copyright and distribution. Amiga musicians formed a teenage folk culture that effectively worked outside of the “music industry” and its long arms of the law.

While this seemed more like a historical curiosity at the time, these issues are now becoming relevant again. We’re starting to question “the internet” again, although our behaviours are still pretty much the same. We silently agree to mass surveillance by continuing to use platforms infected by spyware and backdoors, through infrastructure that analyzes and profits from that information.

I’m not sure we should be surprised. Maybe we should be more surprised that we had this “digital wild west” in the first place. I mean, we were able to reach billions of people at almost no cost at all, with very little control from corporate or public institutions. Is that a realistic situation? Well, for companies that work with “personalized content” and authorities who need to “fight terrorism”, or stock market bots that predict the future, it’s most definitely not.

In 2006 Alexander Galloway wrote that the internet was always about control, and not freedom. I assume that there’s more understanding of that statement today, compared to 8 years ago when YouTube was all the rage. Not only because of all the surveillance scandals, but because of an increased interest in net politics and new materialism. There is a need to understand the technology and the politics, to deal with things like net neutrality, hobby surveillance, drones, censorship algorithms, bots, IP, spam, etc.

Many recent attempts at creating alternative networks have not been so successful (as in big). But there have been many successful attempts in the past, and I for one would love to read more about them. So I’m glad that Lori Emerson is writing a book on other networks, and that Kevin Driscoll is writing a dissertation on hobbyist networks 1977-1997. And I know that Jörgen Skågeby is doing interesting work on software distribution with cassettes.

There is probably a lot more out there. But most of the research in this field has so far been done by enthusiasts. They usually get the details right, but lack a certain critical distance. It often gets retro-romantic rather than future-fantastic. But these old networks can be an inspiration for the future!

Just look at the Amiga music scene. They used open file formats, free distribution, a distributed informal copyright system, and their own kind of infrastructure combining bulletin boards and postal mail. It was a small-scale network of like-minded people with no worries about big business hindering their work. It wouldn’t surprise me if such networks became more common again.

So, here’s to a 2014 full of BBS theory, Fidonet history, real sharing economies, low-tech infrastructures and platform politics. Bring it on!

My Presentation of 8-bit Users

November 22, 2012

Last week I gave a presentation at Merz Academy called Hackers and Suckers: Understanding the 8-bit Underground. I was invited by Olia Lialina for a lecture series called Do You Believe in Users? in Stuttgart. This question should be understood in the context of the disappearing user in modern discourses on design. Computers have become normalized and invisible, and the user seems to be headed for a similar fate. (read more in Olia’s Turing Complete User)

The talk was about 8-bit users, and the hype around 8-bit aesthetics. I talked about different 8-bit users – from those who unknowingly use 8-bit systems embedded in general tech-stuff, through stock freaks and airports, to chipmusic people and hackers. I explained how “8-bit” is both a semiotic and a materialist concept, but is often used as a socially constructed genre: 1950s music or 1920s textiles can be called 8-bit today.

I explained what the qualities of 8-bit computing are, as based on my thesis: simple systems, immediacy, control and transgression. Some examples of technical and cultural transgression followed, and then I gave the whole “8-bit-punk-appropriator-reinvent-the-obsolete” speech and then dissed that perspective completely. Finally, I tried to explain my own view of non-anthropocentric computing, man-machine creativity, media materialism, and so on. When I prepared the presentation I called this Cosmic Computing, but I changed it because my presentation was already hippie enough…

  • Humans cannot have a complete & perfect understanding of a computer.  Following ideas from Kittler – and the fact that 30-year-old technologies still surprise us – this seems controversial for computer scientists, but not so much for artists?
  • Users bring forth new states, but that might be all normal for the machine. This is controversial for all y’all appropriatingz artistz, but not for Heidegger and computer scientists.
  • All human-machine interactions are both limited and enriched by culture, technology, politics, economy, etcetera. Meaning that “limitations” and “possibilities” are cultural concepts that change all the time.
  • Don’t make the machine look bad — don’t be a sucker. Make it proud! Another anti-human point, to get away from the arrogant ways that we treat technologies.

In hindsight, it was a pretty bad idea to be so anti-user in a lecture series designed to promote the user. (: And the discussion that followed mostly revolved around the concept of suckers. Some people seemed to interpret what I said as “if you are not a hacker you are a sucker”. This was unfortunate but understandable. I don’t mean that there are only two kinds of users. They are merely two extremes on a continuum.

Hackers explore the machine in artistic ways and they can be coders, musicians, designers — whatever. They are not necessarily experts but they know how to transgress the materiality/meaning of the hardware/software. They can make things that have never been done before with a particular machine, or something that wasn’t expected from it. That often requires not-so-rational methods that are not always based on hard science. Just because you know “more” doesn’t make you better at transgression. There is a strong connection between user and computer. Respect, and sometimes a strong sense of attachment – even sexual? That’s probably easier to develop if you don’t plan to sell it when the next model comes out. (btw, this is not some kind of general-purpose definition of the term hacker, just how I used it in this presentation)

Suckers, on the other hand, don’t seem to have this connection. They buy it, use it and throw it away. Either they don’t feel any connection to the object, or they don’t want to. They act as if they are disconnected from technology, and only suck out the good parts when it suits their personal needs.

It is a disrespectful use. The machines are treated merely as instrumental tools for their own satisfaction. Suckers are consumers to the bone. Amazing technologies are thrown at them, and suckers treat them as if they don’t even exist – until something stops working. Or they go all cargo cult.

I don’t like it when I act as a sucker, but it happens all the time. I recently got an iPhone for free. I’ve had it for months without using it, because I am scared of becoming a sucker 24/7. I am definitely not in charge of my life when it comes to technology. And I like that. Hm…


Are Humans More Disabled Than Ever?

September 9, 2012

This long post will provoke some of you, so feel free to leave a comment. I just want to clarify something first. The purpose of this post is to examine the similarities between how we talk about lo-fi computing and human disabilities. It is not about comparing machines with humans, but rather about the dominant discourses surrounding limitations and capabilities in general.

I was watching football 5-a-side, where more or less blind people play soccer in teams of five. Blindness, as you know, is considered a handicap because visual culture makes it difficult to live without the small part of the electromagnetic spectrum that humans call light.

Playing blind football might seem absurd at first, but I was completely fascinated by it. The TV makes it look clumsy as the players stumble, fall and search for the ball. But from a sonic perspective (sic) there is something very different going on. The players navigate by sound, as the ball makes noises and teammates shout instructions. They create a small new world on the field. And it’s inaccessible to us who look at it.

Just like I initially valued 5-a-side as a less elite form of football, lo-fi computing is often seen as something less worthy by most people. Or perhaps it’s more worthy – “the results are good even though the tools are bad”. It doesn’t matter – it’s all centered around the same basic idea. Hi-fi is more useful, expressive or productive. It is the norm from which we value other things, just like with the human body. Perhaps some of you find it offensive, but I see many parallels between the mainstream discourses of low-res media and human handicaps. Specifically, the political discussions about them are often polarized between the “objective” and the “social”.

Social vs Objective

The objective model locates the problem in a single entity (human and/or machine) and is as such an ahistorical, essentialist or psychological understanding. According to the social model, the physical ‘impairment’ is a problem mostly because society is not willing or able to deal with it. This seems to be the dominant model today, adopted by e.g. the World Health Organization. But there is much confusion in terminology, and there doesn’t seem to be a term that will work in all contexts (disability, handicap, impairment, etc). And why should there be? Why should one word group together blindness, autism, cerebral palsy and physical impairments?

The conflict that the two models are organized around is obviously an on-going process with very real consequences for people’s well-being, of which I’ve had some experiences during the past four years. I’ve seen how hard it is to deal with bureaucracy and daily life.

Are we more disabled than ever?

According to the WHO, fifteen percent of the world’s population is disabled. That’s an increase of 5 percentage points since 1970, which is quite noteworthy. It’s not a fact – it is an estimate that varies with the choice of methods and terminology. Nevertheless, we seem to think of ourselves – in general, in the developed world – as more ‘handicapped’ than before. We need drugs to be normal. We require digital tools to organize our daily life. Our knowledge has become prosthetic. And our lifestyle affects the prevalence of certain diseases and diagnoses.

It could be argued that humans are handicapping themselves by creating machines that do the things we want to do, but cannot do ourselves (see this documentary). Or – are humans and machines working closer together to create a better world?

Whichever perspective is taken, it seems that in a techno-consumerist society, normal humans are not good enough in themselves. Post- or transhumanism is perhaps a taste of what is to come when we further develop glasses, hearing aids and artificial organs (btw, new aesthetic is back).

One possible consequence is that we accept more kinds of sensory perceptions and lifestyles. Perhaps we can learn to respect and take advantage of the unique characteristics of each so-called disability. Deaf police are better at video surveillance. There is (was :() a blind kid who relies mostly on sonar navigation. Deaf people perceive sound and make music. And so on.

But it is probably naive to think that the conflicts will disappear. There are norms to relate to and those norms grow from limitations of human perception and, in the case of computers, from progress as second nature. The conflicts concerning human disabilities are a much more pressing matter than legitimizing low-res computing. Perhaps this post has contributed something, without offending too many people.

But the main purpose is to build a background for a future post about limitations in computing. Coming soon!

Slow-Tech as the New Religion

July 31, 2012

Is slow-tech a useful concept for the study of low-tech action? It seems to be on the rise in the form of apps that help you to lead a slower life. But what is it? Is it something more than just a counter-reaction to capitalism & speed?

Slow-tech defines itself in opposition to e.g. fast food and the instant gratification of consumerism. When I first heard the term slow food there were all these connections to environmentalism, health, spa, buddhism, etc. I guess it came from San Francisco? (eh, no, stfu)

Apparently, this is called the slow movement and it appeared in the 1980s. It’s the antithesis to the high speed of modern society (>-Virilio -<), but is framed more as a sort of consumer-health-issue in an idealised harmonic society than as something political. It’s still about consuming. It’s still about equilibrium. And for me, ultimately, it seems like a form of wellness – where you make individual choices to get a “successful lifestyle”.

The slow web makes more sense to me. The key features of the slow web have been described as Timely (not real-time), Rhythm (not random) and Knowledge (not information). It sounds very reasonable. In fact, maybe it’s even a bit too reasonable?

I agree that these things are important for a sort of modern media literacy. We need to learn how to deal with the tools and information of today. And probably, we need technology’s help to do it. It can help us to reduce stress by structuring chores, or motivate us by turning real-life sequences into “games” where you collect points and get more things done in your life. Augmented reality, etc.

But idk. Slow? Is that really so good? Looking at e.g. slow-tech, it seems like a postmodern version of the Californian ideology. There is an underlying idea that technologies can help individuals to become more free. That should basically be the purpose of technologies. So there is still a lot of nature-culture divide in there. In short, it’s anthropocentric. Connect your body to the App Store – be successful and happy.

To be blunt: the slow movement sounds like a lazy, ego-centric, new agey and half-arsed alternative to consumerism. Speed isn’t bad, per se. Maybe speed is just what we need to get some *real* alternatives together.

Yesterday I saw The Take (2004) about workers reclaiming abandoned factories in order to make a living. There is a fitting slogan in there: occupy, resist and produce. To me that sounds better than … you know … slow tempo, sustainability and individual health. Less weed, more speed!

If you are interested in stuff like this, I recommend the documentary series All Watched Over by Machines of Loving Grace by Adam Curtis.

A New Hi to the High

September 2, 2011

There’s an interesting article in Vague Terrain about low-bit audio: A New High in Low: Adventures in Low Bitrate Audio. It’s a pretty good read, because it mentions 20kbps, Dex and the City and Floppyswop, which I released music on several years ago. :) I just have a few quick comments to make before I go for some food.

It starts by talking about zombie media, and how benders salvage electronics that other people throw away, even though it usually still works. So anti-consumerism is the starting point. The article describes low bit-rate music as a new approach that comes with a visual aesthetic that the author describes as infantile (hi Kodek and Overthruster!), but that also fills an important function for poorer parts of the world. “More interestingly, though, is that no clever scripting, hacking, bending, or esoteric software was required to kickstart this audio micro-revolution: the ability to encode an MP3 at sub-‘CD quality’ bitrates is a feature built into the iTunes application”.

Hmmmm! It’s grounded in critical theory to describe lo-bit as subversion, and in art perspectives to say that presets can be used creatively. Or something like that. The standard way, you know? That’s all fine, but I think there are better ways to talk about this, which are perhaps less alien to the practitioners themselves.

Not all music consists of recordings. MP3/OGG is just one option. For example, on my release at Floppyswop I used mod-files. They sound better than a lo-bit recording and are smaller in file size. Non-recorded music is truly tricky for contemporary culture to deal with and it’s a shame that this article doesn’t discuss it. Well, I guess it wasn’t the point. Nevertheless, denouncing chipmusic as videogame remixes and emulations is a bit perverse.

Lo-bit doesn’t have to be about authenticity. One charm of low bitrate is that it leaves things to the imagination. Low resolution gives more room for the listeners’ own interpretations. Some kind of brutalist hauntology. The article says that authenticity is a mandatory selling point for culture consumers (which might be true), but it seems more refreshing to say: who knows or cares if it’s authentic or not?
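To make the “room for imagination” point a bit more concrete, here is a toy sketch (plain Python, no audio libraries) of what bit reduction does to a signal – the sine wave is just a stand-in for any recording:

```python
import math

def quantize(samples, bits):
    """Snap samples in [-1.0, 1.0] to a grid with 2**bits steps across that range."""
    step = 2.0 / 2 ** bits
    return [round(s / step) * step for s in samples]

# A 440 Hz sine sampled at 8000 Hz - a stand-in for any recorded signal.
signal = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(200)]

hifi = quantize(signal, 16)  # ~65,000 levels: the error is inaudible
lofi = quantize(signal, 4)   # 16 levels: coarse and 'crunchy'

# The lo-fi version simply throws detail away; the worst-case error per
# sample is half a quantization step, i.e. 1 / 2**bits of the full range.
worst = max(abs(a - b) for a, b in zip(signal, lofi))
print(worst)
```

With fewer levels the grid gets coarser and the listener’s ear has to fill in the rest – which is exactly where the imagination comes in.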

Neither low bitrate recordings nor chipmusic are re-animations of zombie media. People have done it for ages — it’s the things around it that have changed. And it’s not about unintended uses. Remember when it became possible to stream audio in the 1990s? RealAudio! It was quite useful, and it still is. Why wouldn’t it be? Because technology has changed? If yes, then you=technodeterminist and that’s not frexxy.

Chip Critique 1: Re-appropriation

April 28, 2011

“I like restrictions, reappropriating & using tools in ways they weren’t meant to be used”. Maybe it’s just me, but after hearing similar sentences for the past 15 years about chipmusic, I’m getting tired of it. Is that what we like, really? And why do so many chipmusicians say this?

The quote above is not from a chipmusician. It’s from the artist Petra Cortright who’s part of what I see as the Rhizome rhizome. In there you’ll find plenty of artists who work with general midi, low-res, old internet memes and GIF-animations. Perhaps it’s what Cory Arcangel called dirt style.

It seems that this grew big in the 00s, for example with Arcangel’s NES works. He supposedly reappropriated restricted technologies in new ways. Many low-tech hackers were annoyed with this, seeing it as some kind of imperialism on ‘their’ thing.

I reluctantly admit that I was one of those, back then. The art perspective on this was just too foreign for me. But of course, chipmusic was gradually becoming part of the package. The postmodern rhetoric was an easy way to get attention for chipmusic, as we tried to legitimize our work beyond videogames and nostalgia (and still do).

So bending, glitching, dirt style and hacking grew more and more connected. And it was given a political relevance in typical postmodern cultural studies rhetoric. Critical new uses of technologies! It assumes that the glitching, bending and hacking of the previous decades was somehow not the same thing. Making Gameboy music today might have a different cultural status, compared to 20 years ago. But still. It’s not like we’re making something new, or doing something that the technology was not intended to do.

Consider technologies as subjects instead of objects. Or, more appropriately, consider both humans and technologies as objects. Claiming that something has “intended uses” can be a discriminating claim then. Who’s to decide what the intended uses are? We pretend that the Gameboy was an “inter-passive” commodity, so now we pretend that we are heroes who liberated the machines (and ourselves).

Bob Stevenson - Max Headroom (C64, 1986)

We pose as the heroes of the digital age. The glitchers and benders bring forth the hidden expressions of the machine, and the chipmusicians give the technologies new value in new contexts.

Or: we are reinforcing traditional paradigms such as human excellence and techno-libertarianism. Perhaps it’s a reaction to the lack of control and comprehension that modern consumer technologies offer. Perhaps it’s part of the zeitgeist of hauntology and lo-fi VHS-artefacts (or, uh, hypnagogic cyberpunk). But it’s certainly not very critical in my book.

And the machines continue to laugh. Let’s laugh with them, not at them. Uhm.

An Even More Secret History of Social Networks

February 1, 2011

The BBC has published a radio documentary called The Secret History of Social Networking. It interviews people involved with BBS communication in the 1970s, which was influenced by the counterculture in California. It’s a rather expected historiography – pioneering Americans who used computers to network the whole world, and John Cage got into it. We’ve heard it before.

The counterculture merged with commercial interests in a Californian ideology that shaped the home computer revolution. This technolibertarianism probably made the term personal computer catch on so well. So in a way, it is a very relevant history of social networking: individual freedom and computer networks and entrepreneurs (yeah!).

Community Memory, a BBS from 1973

On the other hand, there are the social networks that emerged from software piracy in the 1980s. Already in 1979 there were digital networks for Apple II-crackers, and a few years later a lot of people were distributing cracked software. Not only modem-to-modem, but face-to-face and mailman-to-mailman. It was a network for middle-class kids that had little to do with highbrow art or traditional politics; it was merely a way to use computers for what they were designed for. Copying information.

In other words – it was a popular network where common people did common sense things. It was an early warez economy, which is not so different from the current network economy/culture. You make, share and remix things for free and you get stuff back – either as money or status. Or something like that.

The point is that the countercultural BBS-stuff is an interesting early example, but did it influence things to come? Sure, they conversed and organized through modems, but what else? The cracker/demoscene networks pioneered or perfected many things: text art, free distribution of executable artefacts, open source music and remix culture, mail art, computer parties, etcetera – and they had very real effects on the economy and culture outside of themselves. Eventually. If the counterculture led to iTunes, then this network led to netlabels and the Pirate Bay.

I don’t blame the BBC for their angle and perhaps they will also deal with this topic in future episodes. But there has been very little research on the cracker and demoscene networks. I wrote a text for Media Art Histories 2009 that has some additional information, but it was hastily put together so don’t expect too much.