How’s the Demoscene on Twitter?

April 17, 2013

I’ve been meeting sceners in the strangest places lately. Which got me thinking. What do all those old sceners do these days? What do they work with? Are they all programmers and geeks? Or what? What do sceners talk about today?

Enter the Twitter list! [UPDATE: go here & here, read comments] I know, they’re usually quite useless. But perhaps this could actually be a useful way to use one: to show the differences within a group. To show what they talk about when they are not together. All the other things. I think that could be quite interesting with a group as diverse as the demoscene.

So I started by searching Twitter bios for demoscene, demoscener and scener. The hits I got showed that those people are mostly programmers who have tweeted something during the past week. (Also, there are people who call themselves demoscene hangarounds!)

Some accounts were – surprise! – more popular than others. I made a list of some that had more than 1000 followers, just for fun. This is not some top-of-twitter-megachart, it’s just an observation from a silly first search and leaves out e.g. Kim Dotcom, Axwell and other crispy phresh celebrities with a scene past. Anyway, here goes:

  • Richard Davey @photonstorm (game developer)
  • Maija Haavisto @DiamonDie (writer)
  • Mathieu Henri @p01 (programmer)
  • Tadej Gregorcic @tadej (programmer)
  • Douglas Alves @_Adoru_ (history professor)
  • Jussi Laakkonen @jussil (games entrepreneur)
  • Jean-Christophe G. @gatuingt (programmer)
  • Leonard Ritter @paniq (game developer, musician)
  • Tomoki Shishikura @T_SRTX1911 (?)
  • Renaldas Zioma @__ReJ__ (programmer, game developer)
  • Nathaniel Reindl @nrr (programmer)

I browsed around randomly among followers, remembered some people that I follow on Twitter, checked their followers, etc. Found another demoscene list and stole all the members, muhaha. I also checked the #demoscene tag, which was surprisingly empty. Even a plain search for demoscene gave rather few hits. Anyway. I ended up with 256 accounts in total, after a few hours.

I tried to exclude inactive accounts, unless I thought they’d be active again. Didn’t include parties, groups, etc. I wanted to see how people talk – regardless of whether they are active or inactive sceners. Scene for life, yeeeäah!

So let me know about all those scener accounts on Twitter! Let’s find all the forgotten sceners and see what they’re talking about. Fun, yes?

Delete Or Die #1 – Why Subtraction Beats Production

April 15, 2013


Everything that we do is to delete things. We don’t create or add, we subtract and remove. Anyone who reads this text deletes my original intentions. The choice to read this text is a choice that excludes gazillions of other options. So thanks for staying!

Science agrees. In quantum theory, the world is a sea of virtual potentials and whatever happens is not much compared to what did not happen. It is only a drop in the ocean. According to some economists, capitalism thrives on destroying the past. It deletes previous economic orders and the current value of existing products, in order to create new wealth. Among some posthuman philosophers, humans are no longer thought of as creators, but as sculptors or signal filters. We receive signals and filter them according to the taste of our system. If it doesn’t make sense, it gets deleted. I guess cybernetic theorists and cognitive psychologists might agree on that one?

One of the phreshest cyberd00dz, senor Nick Land, once wrote that organization is suppression. Any kind of organization – imposed on anything from cells to humans – deletes more than it produces. This of course includes modern technologies like search engines and augmented reality – more about that in a minute.

So: the most productive thing you can do is to increase the desire to delete. One easy way of doing that is to use sedatives. These are the drugs o’ the times – a reaction to the cocaine-induced individualism of the 1980s, which in turn grew out of the psychedelic ecologism of the 1960s. Nowadays we don’t tune in and turn on, we turn off and drop out. Artists do it like this while most people do it by watching TV or using “smart technologies” that delete decisions for you. We need censorship, even if we think it’s wrong. Delete or die!

[Screenshot: Text Free Browsing]

Let’s look at a few very different examples that relate to this. If this all seems very confusing to you, first consider that the only way to be creative today is to be non-creative by e.g. stealing & organizing instead of “creating original content”. From plunderphonics in the 80s to the mainstream copyright infringement known as social media — now the next step is to start removing things.

Nice idea, but how useful would that be? Well, I experimented for a while with filling the memory with crap, loading a music program, and then starting to remove the crap. Like a sculptor. And the idea was to make “real music” and not only noise, of course. Both challenging and fun! But anyway, let’s back up a bit:

Subtraction is all around us all the time. It’s how light/colour works, and how some forms of sound work. Our own brains are really good at it too. We perceive and process only a fraction of all the input our senses can take in.

Another almost-naturalized form of subtraction, but in the arts, is the removal of content to reveal the form (uh, or was it the other way around?). I guess that’s what a lot of art of the 1900s was about? Abstractionism and minimalism, space and non-space, figure-ground oscillations, and so on. Take things out to reveal something we didn’t know before. Two unexpected examples: Film the Blanks and Some Bullshit Happening Somewhere.

Another rather recent thing is Reverse Graffiti. It doesn’t add paint, but removes e.g. dirt & dust instead. Graffiti can also be removed by adding paint over it, which some people jokingly call art. Or perhaps doing graffiti by carving into walls is more relevant?

Censorship is another topic. Here is a silly one where naked bodies are censored and the black boxes form new shapes and stuff. I suppose censorship could also include net art things such as Facebook Demetricator and Text Free Browsing. Also, Intimidad Romero does art by pixelating faces.

On the more techy side, Diminished Reality is the opposite of augmented reality, and seems to be very controversial to people. More so than augmented reality, probably because we think we’ll “miss out” on stuff instead of getting “more” like augmented reality promises. Whitespace is, I guess, a tongue-in-cheek project: a programming language that ignores normal text and only uses space, tab and newline instead. A favourite of mine is the game Lose/Lose, where you play for the survival of your hard drive’s files.

Some more examples:

 

For me these examples show how rich the field of DELETE actually is. And there is plenty more to say. In fact, there was a rather big plan for this project once. But instead of letting it decay away and be unrealized (?) I decided to undelete it. Oh n0ez, teh paradox! Or maybe a blog post doesn’t count as being realized? Well I think it’s pretty obvious that the ████████ was ████ ███ ████████ ████ because ████████ ███ █ so ████████ ██████████ █ ████████.

Some useful slogans:

Progress = deleting alternatives

Any thing is a reduction of some thing

Understanding = organizing = deleting

Creativity spots the ugly and deletes it

Anything that happens is nothing compared to what could have happened.

 

A Tracker From the 1960s?

April 9, 2013

[Image: Lejaren Hiller’s knob instructions print-out, 1970]

 

Lejaren Hiller was one of the first people to generate music with a computer. He was doing it already in the 1950s, just like for example Douglas Bolitho and Martin Klein (info).

The picture above, though, shows something else. It’s a dot matrix print-out with instructions for how to operate the volume and EQ knobs on your hi-fi system while playing the record “Program (Knobs) for the Listener”, released in 1970.

While others would surely salivate over the random (?) numbers and the interaction/remixism that this presents, I’m more interested in seeing it as a tracker. A primitive tracker, but nevertheless:

  • It’s a text-mode list of instructions that runs vertically.
  • There are discrete steps fixed in time and all the instructions are locked to these steps, like a soundtracker.
  • The instructions are not absolute, but relative to whatever sound is coming from “under the hood” like a hypertracker.
  • It’s divided into tracks, and the tracks affect each other just like they do on many old soundchips.

Sure, you could see this as an analogue step sequencer, combined with the ideas of John Cage (who Hiller worked with). It’s only the print-out that makes it seem like a tracker. Makes sense. But then again, it is the level of interface that is the most defining part of trackers. Trackers could use analogue synthesis and generative features. They just never do. :–)

Btw – some people claim that Lejaren Hiller did the first computer music, but that is not true. In Australia and the UK people made computer compositions and audio as early as 1951. See here.

But could we say that this is the first example of a tracker interface? Yeah, of course we can. This is Chipflip, where dreams come true. So who’s up for the challenge of finding something older that looks like a tracker? I’m sure it exists, right?

80% listening, 20% improvisation. A Modern Composer?

January 20, 2013

I just watched a Norwegian documentary about noise music from 2001 (ubuweb). It featured mostly Norwegian and Japanese artists, and it struck me how differently they talked about music. While the Norwegians got tangled up in complex and opposing ideas about concepts, tools and artistic freedom, the Japanese gave shorter answers with more clarity. Straight to the point.

It made me wonder (again) how human-machine relationships are thought of in Japan. Over here, it’s very controversial to say that the machine does the work. Deadmau5 did that, in a way, and I doubt he will do it again.

In the documentary, the Japanese artists said things like “When I am on stage I spend 80% of the time listening, and 20% improvising”. A very refreshing statement, and electronic musicians can learn a lot from it. Shut up and listen to what the surroundings have to offer!

There are many similar ideas in the West, especially after cybernetics and John Cage. The sound tech and the author melting together in a system of feedback. Machines are extensions of man (à la McLuhan) and we can exist together in harmony.

In the documentary, one Japanese artist turns against this idea. He doesn’t believe that the sounds and the author work closely together at all. For him, they are separated, with only occasional feedback between the two. Hmmm!


It’s an intriguing idea. When I first started reading about cybernetics, it was in the context of the dead author. Negative feedback loops that take away power from the human. I felt that my musical ideas were heavily conditioned by the tools that I used, and there was something annoying about that. How could there be harmony from that?

Maybe it’s better to think of it as a conflict. The computer is trying to steer your work in a certain way. And you want to do it another way. Like two monologues at the same time. It’s a reasonable idea, especially if you consider computers to be essentially non-graspable for humans – worthy of our respect.

However, that’s not how we think of computers. We’ve come to know them as our friends and slaves at the same time. Fun and productive! Neutral tools that can fulfill our fantasies. As long as the user is in control, it’s all good. No conflict. Just democracy and entertainment, hehe.

As much criticism as this anti-historical approach has received over the years, I think it’s still alive and kicking. Maybe especially so in the West. Computer musicians want to work in harmony with their tools. Not a conflict. “I just have to buy [some bullshit] and then I’ll finally have the perfect studio”. You heard that before? The dream lives on, right?

It’s almost like 1990s virtual reality talk. Humans setting themselves free in an immaterial world where “only your imagination is the limit”. Seems like a pretty Christian idea, when you think about it. I doubt that it’s popular in Japan, anyway.

To conclude – it’s of course silly to generalize about Japan, judging only from a few dudes in a documentary. But I think there is still something important going on here. If anyone has reading suggestions about authorship/technology in Japan, please comment.

Retromania, Time Warps, Revivalism & Slovenia

January 15, 2013

Simon Reynolds’ Retromania – Pop Culture’s Addiction to Its Own Past gives a good overview of the intensified retromania of the last decades. He describes nostalgia’s integration in 1950s pop culture, and the ‘memory boom’ of the 1990s that made retro more … modern. You know, archive fever and cheap hard drives and all that.

Retromania focuses on a sort of semiotic nostalgia. It’s about our relationship to content. We’re likely to accelerate and maximize this ‘content retromania’, as Reynolds suggests in an article. But there is also a material retromania that revolves around machines and formats. It’s obviously popular to use typewriters, Moogs and cassettes and delve into medium specifics. Gradually they are emulated, sampled and commodified into plugins and filters. Sometimes they even become specific signifiers, like how the needle scratching across a vinyl record signifies interruption in sitcoms. Or how an icon of a floppy disk means ‘save’.


From where I’m standing, it seems that retromania is moving away from content and towards the material. Songs are easy to find, records and machines are not. Reynolds writes plenty about collectors. I think that future collectors might have things like old firmware, ancient software versions, algorithms, or maybe a full multimedia set-up with Windows 95 and Netscape to browse like it’s 1997.

These things are usually described either as nostalgia or appropriation. Nostalgia is bad and appropriation is good, lulz. Nostalgia is non-intellectual and melancholic, appropriation is social and political. Oneohtrix Point Never is quoted in the book as saying that it’s about a desire to connect, not to relive things, which I think illustrates this artificial separation quite well.

Reynolds doesn’t mention chipmusic in his book. But who can blame him? While techno, rock and punk emerged from ecstatic periods of the new, chipmusic was never really new and exciting. When the term chipmusic emerged around 1990, it referred to Amiga music that sounded like earlier C64 music. Ten years later, micromusic.net was also looking back quite a lot.


So – chipmusic was always “retro”. From the start. That’s why it doesn’t really make sense to call it retro. To say that micromusic.net or the 1990s Amiga demoscene was retro doesn’t really compute. Reynolds talks about two kinds of retromaniacs which I think capture the tension in the chip scene:

The revivalist dissident chooses an era and stays there. Some people still listen to the same chipmusic hits from the 1980s, and love it. It’s some sort of neo-conservatism, a rebellion against the new in mass culture, a freeze in the past. Lots of demoscene vibes here…

Time-warp cults focus on unsuccessful parts of an old era. Go back, and change the future. This reminds me of the 00s chipscene mantra of “making something new with the old”. And it also makes me think about media archeology and all kinds of lo-fi practices in the context of Phine Artz. It’s not old (nostalgia) — it’s new and fresh! (appropriation). Retrofuturism, I suppose.

I think these are two useful concepts. If I had to choose one of them, I would choose revivalism. It feels more honest, somehow. For me it’s not about going back to a certain time/culture. It’s more about the machines. The sweet, smelly machines.

Anyway. We don’t have to choose sides. So nevermind that. We should probably look into stuff like hauntology and retrogardism instead. THE FUTURE IS THE SEED OF THE PAST, as the Slovenian IRWIN/NSK/Laibach said. Perhaps the difference between the past and the future is not so important after all…

Like Reynolds hints in the book – pop culture seems to go in cycles much like the economy. Growth through novelties. Unlimited progress. Forever young. Would’ve been great to read more about that in the book. About cycles rather than linear movements. Because that’s what really makes retromania interesting. If capitalism is going down the drain, so is pop culture.

Soundtrackers, Hypertrackers and Acidtrackers

December 30, 2012

tl;dr: There are two kinds of trackers: soundtrackers and hypertrackers. But it’s a combination of them that is showing the way forward. And perhaps the micro-efficient trackers are more useful than ever, with the popularity of handheld devices.

When I wrote my thesis I had some difficulty covering the topic of trackers. Although they are old and popular programs, there’s not much scholarly research on them. I can’t remember anyone trying to categorize trackers properly, for example. If you know of any such attempts, please get in touch.

For my thesis, I ended up talking about soundtrackers and hypersequencers. They refer to two dominant families of chipmusic trackers. Soundtrackers use sampled sounds and have a user-friendly interface. Hypersequencers are more about synthetic sounds and efficiency.

I find these two categories quite useful for discussing trackers in general. But I have also found that talking about trackers as hypersequencers (originally from Phelps) doesn’t feel quite right. Instead, I suggest the term hypertracker.*

So:

Soundtrackers are similar to sheet music, because they display notes and effects next to each other. You can see which note is played, and also its ornamentation (vibrato, arpeggio, etc). The song is arranged in patterns, and one pattern includes one bar of all the voices. That means that all voices are locked to the same tempo, and the same arrangement structure.
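
To make that structure concrete, here is a minimal sketch in Python. It is not any particular tracker’s file format – the Cell fields and the 64×4 grid are just illustrative assumptions – but it shows the basic soundtracker idea: one pattern holds one bar for all voices at once, on a shared step grid.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Cell:
        note: Optional[str] = None        # e.g. "C-3"; None = empty step
        instrument: Optional[int] = None  # which sample to trigger
        effect: Optional[str] = None      # ornamentation locked to this step

    ROWS, VOICES = 64, 4

    # One pattern = one bar for ALL voices, so every voice shares the same
    # step grid, tempo and arrangement structure.
    pattern = [[Cell() for _ in range(VOICES)] for _ in range(ROWS)]

    pattern[0][0] = Cell("C-3", 1)             # drum sample on voice 0, row 0
    pattern[0][1] = Cell("C-2", 2, "vibrato")  # bass note with its ornamentation next to it
    pattern[4][0] = Cell("C-3", 1)

    # The song is just an ordered list of pattern numbers.
    song_order = [0, 0, 1, 2]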

Hypertrackers use more of a code logic. If soundtrackers are like sheet music with absolute values, hypersequenced music is like code that executes instructions. The note C might play a completely different note, depending on what kind of code is next to it. It enables a wild and “generative” composing style. Voices can have different tempos and sounds can be connected to each other in a modular fashion. Hypersequenced music requires few resources (in terms of RAM, ROM, CPU) and mostly uses synthetic sounds. They are “hyper” because they are referential; a letter or number usually refers to something other than itself.
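
And a correspondingly rough sketch of the hypertracker logic, with hypothetical table and voice names (again, not any real program’s format): every voice has its own speed and its own sequence length, and a note entry points at a table, so what you hear is relative to whatever that table does.

    # Table 0 plays the note as written; table 1 cycles a chord on top of it.
    arpeggio_tables = {
        0: [0],
        1: [0, 4, 7],  # semitone offsets added to the written note
    }

    # Each voice has its own speed (ticks per step) and sequence length,
    # so voices are not locked to a shared pattern grid.
    voices = [
        {"speed": 6, "seq": [("C-2", 0), ("C-2", 0), ("G-2", 0)]},  # slow bass
        {"speed": 3, "seq": [("C-4", 1), ("E-4", 1)]},              # faster, arpeggiated lead
    ]

    def realize(voice, ticks):
        """Expand one voice into (written note, semitone offset) per tick."""
        out = []
        for tick in range(ticks):
            step = (tick // voice["speed"]) % len(voice["seq"])
            note, table_no = voice["seq"][step]
            table = arpeggio_tables[table_no]
            # The same written note sounds different depending on the table next to it.
            out.append((note, table[tick % len(table)]))
        return out

    for v in voices:
        print(realize(v, 12))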

Personally I find soundtrackers very convenient to use. They are straight-forward, simple and direct. Hypertrackers on the other hand, are more versatile and offer more surprises. They have more character somehow, and can lead the music in directions that the composer wasn’t aware of. Hypertrackers offer a lot of control and yet, as a composer, you can choose to hand some of that control back to the software. In soundtrackers it’s more up to the composer to take command.

Plenty of chip software doesn’t fit into these two categories. LSDj is an interesting example, since it takes inspiration from both. Obviously Mr. Kotlinski prefers hypertrackers. He even expanded the hyper-structure by adding more layers to the song arrangement, and by adding more tables. But just like one of his big sources of inspiration (MusicLine) it also incorporates some of the UI-ideas from soundtrackers. For example, you can set absolute effects next to the notes, such as pitchbend or vibrato.

This mixture of sound- and hypertracker became very popular in the chipscene. LSDj inspired LittleGPTracker, and created a new momentum. One example is Pulsar, recently created by Neil Baldwin, who was already making 8-bit game music in the 80s. Even more recently, I’ve seen previews of new demoscene software that is highly inspired by LGPT.

These programs are not made for keyboards. They are designed for handheld consoles with very few buttons. Another difference from other trackers is that they can be used for live performances. Most trackers are pretty useless for live improvisations, unfortunately. A third difference: they can maximize the hardware. Trackers are normally designed to leave resources for the code and graphics of demos and games, but this new generation allows you to use nearly 100% of the available resources. That is a fundamental difference, which is why chipscene Gameboy music can be more powerful than game/demo music for the Gameboy.

The chipscene made chipmusic stand on its own feet, independently from the visuals, and that has affected the software too. New conventions have been developed, and it seems like future chiptrackers will follow this new path in between sound- and hypertrackers. It might also be used for other platforms with few buttons or low memory. Arduino and Raspberry Pi come to mind, as well as smartphones with their complete lack of buttons.

In those situations I’d guess that “tracker” is a precise enough term. Just like  Renoise is a tracker, in a world of piano rollerz. But if there should be a new term for it, I suggest acidtrackers.

* I agree with HVMEC that trackers and editors are not the same thing. Trackers are step sequencers, while editors require the user to set the duration of each note (more here). The term hypertracker excludes programs like Soundmonitor or Future Composer, because they are editors. On the other hand, I think those kinds of programs are rare today. And perhaps they share more with MCK/MML or piano roll sequencers than with trackers?
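
For what it’s worth, the tracker/editor split is easy to see in how a single voice could be stored. A hypothetical sketch, not any real program’s data format:

    # Tracker (step sequencer): time is the row grid itself; a note simply sits
    # on the row where it starts, and empty rows are part of the rhythm.
    tracker_voice = ["C-3", "---", "---", "E-3", "---", "G-3", "---", "---"]

    # Editor (roughly the Soundmonitor/Future Composer approach): each note
    # carries an explicit duration, so there is no fixed grid to scroll through.
    editor_voice = [("C-3", 3), ("E-3", 2), ("G-3", 3)]  # (note, duration in steps)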

Realtime Text /2/ Interview with BBS-artist

December 5, 2012

The previous post was inspired by a conversation I had with Erik Nilsson, probably the only person who’s made a music video on a BBS. We talked about the 1990s, when teenagers used BBSes instead of the WWW to talk. When you could see how the person on the other end of the modem was acting. I’ve added my comments [in brackets] to explain some technical stuff that Erik talks about.

ERIK > I remember as an early lamer, the sysops would wonder what the fuck you were up to. I remember the feeling of knowing that the sysop could be watching your every move. It was a bit like being in someone’s house, or in some sort of social club.

I remember the local BBS Secret Gate as one of the first places where I was accepted, and met friends. They had 3 nodes [phonelines = 3 simultaneous users] so you could chat with other users – not just the sysop. That’s how I started to hang out with Mortimer Twang, and together with Trivial we started Divine Stylers.

CHIPFLIP > Did you talk mostly about computer stuff, or also other things?

ERIK > I lived in an isolated place, so the computer was really a window into a world full of everything. Mortimer’s early mod music was my introduction to loop-based alternative music. The loopy and psychedelic aspects of dance music work really well in Amiga trackers.

But there was also friendship, and pretty close conversations. I remember when I had my own BBS and my best friend called. We had fallen for the same girl, and I remember the chats we had about it. The pauses and the trembling made the conversation more tender. It was a really emotional talk, which I can still think back to and appreciate. It could have been through any medium, but I remember how the pauses and the tempo of the text made it more “charged”. I remember typing “I’m crying” and getting back “me too”. :)

There is a big difference in seeing the words take shape, instead of just reading them. It’s more personal. What you type is closer to the thought you have before you say it.

CHIPFLIP > Why do you think the real-time text isn’t around anymore?

ERIK > What was once standard no longer exists. It’s as if technology has taken a step back when it comes to text-based communication. I really don’t know why the intermediate step of pressing return has been added. It’s like you publish the text, while you used to say things more directly. The movement of the cursor reveals how the person is hesitating, erasing or contemplating.

If you chat on a BBS, you press return twice to signal that the other one can start writing. But it was still possible to interrupt the other one, if there was a heated argument for example. That doesn’t happen the same way in say Skype, because there is a gap between the users. It feels more plastic and more “simulated” than it has to be.

Well, when I think about Skype, which I use on a daily basis, there actually is a ‘function’ that reminds me of the old standard in a weird way. In Skype you can actually see a small icon when the person is typing and erasing. It’s really far away from the old chat style, a weird version of it in some way… Still not even close to the thing I miss, but I guess someone was thinking about this gap when making Skype.

CHIPFLIP > And it’s more difficult to change your mind, too. Did you use the backspace often?

ERIK > Yeah, you erase constantly if you’ve learnt how to type street style. Erasing is just as important as typing. ;) I got really into animated text. It was like digital thumb twiddling. You typed something, erased it, and replaced it with something new to make an animation. Sometimes you erased it because you didn’t want to keep it on the screen, like card numbers for example :) You typed it on the screen, and when the other person had written it down on a piece of paper, you erased it.

CHIPFLIP > So one way to make animations on a BBS is to quite simply “type the animation”. And due to the slow modem speed, it will look animated when you play it back. But what kind of options were there to make the graphics on the BBS?

ERIK > There were a couple of different chat systems. The most common one was that each user had a colour, and you simply pressed return twice when you were done. There were also more advanced chats for ami/x, where you could move the cursor freely, like in a text editor or like the message editor in C*Base for C64.

CHIPFLIP > Was there anything bad about it being real-time?

ERIK > No. I mean it’s not the real-time thing that made it disappear. It changed because IRC took over most of the communication for the elite scene, since it was more global. When the internet came, real-time chat just disappeared by itself. It’s probably all just one big PC bug.

The situation is a bit similar to that of PETSCII [Commodore's own ASCII-standard, with colors, plenty of graphical characters]. PETSCII is a better and more evolved system for text and symbols. It was more beautiful and personal to directly use the keyboard to write a letter to someone using colours, symbols and even 4×4 pixel graphics. Today you have to load images and change font colour in some menu to make a really spaced out e-mail. It’s slower, and it’s not “in the keyboard” like on the C64.

CHIPFLIP > What’s the best modern alternative to PETSCII?

ERIK > ANSI is not really an option, from my point of view. It’s typical “slow PC” style. Like some kind of Atari. You draw the graphics in a graphics program. Choose with the mouse. Draw fancy stuff from choices you make on the screen. It’s just like Photoshop.

PETSCII could’ve been a good source of inspiration for mobile phones, for example. But it needs an update to have meaning and function today. Still, the way the system works makes it the most interesting one I know of. ASCII is okay, but you still have to use a special editor to make the graphics. That’s a step in the wrong direction.

The C64 is like a synthesizer – you just turn it on, and get to work. With modern computers you have to wait for it to start, find the right program, and so on. They say that computers are faster today, but honestly – I have no idea what they are talking about! They only seem to get slower.

It’s strange, because computers were not supposed to become stiff and flat, like they are today. There’s all this talk about more convenience and speed, but from day one humans have only made it harder for computers to help us.

CHIPFLIP > A very broad explanation, also, is to consider analogue media as immediate (light bulbs, guitars, TVs, analogue synthesizers) and digital media as more-or-less indirect. It can never have zero latency and we seem to, somewhat paradoxically, accept that changing the channel on a modern TV takes 10 times longer than it used to. If you know Swedish you can read more about those things here.

Other than that, thanks so much to Erik for sharing his thoughts on this. Let’s fix the future!

Realtime Text /1/ Why Did it Disappear?

November 30, 2012

When we chat to each other, we don’t do it in real-time. Until we press return, the person on the other end can’t see what we’re doing. But it wasn’t always like that. Before the internet took over the world, you could actually see how the other person was typing. It is like a digital equivalent to body language; involuntary, inescapable, direct and intimate. All this was destroyed, as the return key gradually went from carriage return (↵) to enter.
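
The difference is basically whether keystrokes are sent as they happen, or buffered until return is pressed. Here is a minimal Python sketch of that – assuming a connected socket sock and a get_key() helper that returns one keystroke at a time, both of them hypothetical – not any real chat protocol:

    # Line-buffered chat: nothing leaves your machine until you press return,
    # so hesitation and erasing stay invisible to the other side.
    def line_buffered_send(sock, get_key):
        buf = ""
        while True:
            ch = get_key()
            if ch == "\n":
                sock.sendall((buf + "\n").encode())  # only finished lines are sent
                buf = ""
            elif ch == "\x7f":                       # backspace edits the local buffer only
                buf = buf[:-1]
            else:
                buf += ch

    # Real-time chat, BBS style: every keystroke (including backspace) goes over
    # the wire immediately, so the other person sees the text take shape.
    def realtime_send(sock, get_key):
        while True:
            sock.sendall(get_key().encode())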

Initially, the most mainstream example of real-time text I could think of was real-time captions for TV. It’s a service offered to deaf people in public service areas like the UK and Scandinavia. It’s produced word-by-word (“chords“) and its mere existence adds a new dimension to TV-watching: you know when a program is following a script and when it’s not. There are many more real-time text services, often involving so-called disabled people. Actually, there is even a Real-Time Text Taskforce (R3TF).

But wait a minute. Why did I forget about collaborative text editors like Etherpad or Google Docs? I use those very often. Great for having two people editing the same text. But they are also boring, I guess. I use them primarily for facts, lists, research, etc. Only a few times did I use them for something more playful or emotional. It’s like having fun in Microsoft Word. It just doesn’t happen, unless as an anomaly. Consider the difference to a less officey site like Your World of Text.

It’s not that real-time text isn’t possible anymore. In fact, the protocols behind popular chats like Google Talk and iChat support it, but the clients don’t implement it. AOL IM implements it, but you have to activate it yourself.

Chat is a clear example of how new media makes things more indirect, by adding layers to the interface. Even if you believe that digital media only gets better, you’d have to admit that chat is an exception to that. Right? Chat is actually slower and less expressive than it was in the 90s. Or even the 70s with PLATO. Chat has derailed into some sort of primitive enter-beast, where you can’t even draw or use images.

Computer-mediated human-to-human communication is quite primitive, isn’t it? It’s like 1968 only with more layers to make it indirect and abstract. Layers of secrecy, as good ol’ Kittler would say.

In the next part, I will post a conversation I had with the BBS-artist Erik Nilsson. That was actually the reason why this post was written, so stay tuned!

We Are The Zombies, Not the Machines

November 29, 2012

Zombie Media – media that are living dead. This is a concept that Jussi Parikka and Garnet Hertz have been developing for a while now. They recently published a mini-manifesto of a larger text, which is unfortunately locked inside academia.

This is connected to media archeology, which I think is a very interesting and confusing field. It feels like I should love it, but there’s something that bothers me about it. First I have to admit that I haven’t read any of the books on it, so I’ve probably misunderstood plenty of things. Let’s go through the 5 points of the mini-manifesto and see.

1. They oppose the idea of dead media, but they also talk about it. A lot. In fact, the idea of dead media seems crucial for the whole manifesto. So how does that compute? Aren’t there better ways to oppose the idea of dead media?

 

2. Zombie media are living dead, the authors claim. But… says who? I guess according to hi-def capitalism and its cynical idea of people-as-consumers. But what about all the people, not visible in the mainstream, who still use these media for the same reasons that others use mainstream media? Old people. Children. Poor people. Disabled. Demosceners. Me. Are we that irrelevant?

The machines are far from dead, at least to us. So my question is: Doesn’t the zombie media concept completely surrender to planned obsolescence?

3. So there is a war on general-purpose computing, which seems highly urgent to address politically and pragmatically. The authors focus on practice, and argue for hacking the black boxes – echoing the free-and-open discourse (which deserves some scepticism). But how – and why – would the opening of technologies lead to something that we haven’t already seen?

4. The authors want to take media archeology into the art world. I don’t know, but didn’t that happen with chip/glitch in the 00s, or the demoscene in the 90s, or with all those Cages and Paiks of the 60s? I agree that artists (and others, including me) need to engage more with technologies, and take it seriously. But I guess for me that means mastering the tool, instead of bending it or something. Why should media archeological art build on appropriation and remixing?

5. Of course reuse is an important part of our culture. People don’t seem to be talking about much else these days. Everything is a remix and originality is a sin. But does that mean that we should promote remix culture even more? Doesn’t really seem necessary? It just seems Scandinavian. Why not just steal shit from the trash instead? Pay the guy at the recycling point to get some good machines. Why would an “open remix culture” be better than trashy hacking and computer love?

I never really liked archeology/anthropology so perhaps it’s not surprising that I don’t really get the ideas of zombie media. Why does it matter so much that it’s old? Why do we need to circuit bend and remix them, when they are amazing machines already? Why only focus on the differences?

The experts still haven’t figured out how they work. After 30 years the C64 is still not perfectly emulated. They are mysterious machines already. There is no need to hack them.

If there is a machine that should be hacked, it’s academia. If I was an academic I would do something about it before it’s 100% Google Scholar to anyone who doesn’t have leet access.

Meanwhile, the 8-bit computers work just fine. They are not the zombies. We are the zombies. We are the ones who are too lazy or busy to learn how to use them. That’s why I don’t believe that encouragement of appropriation and remixing and opening is going to amount to much. Just do your homework and stop fiddling around! :)

What Was Mainstream Chip in 2009?

November 28, 2012

I’m going through old post drafts for this little bloggie-blogg, and I’ve deleted about 20 of them so far. They were too good for this world.

But I came across a post-that-never-happened about mainstream chipmusic. As initial research I was lurking around last.fm, to see which artists were the most popular. At the time those stats were pretty boring and pointless, but today they seem more interesting. Who remembers anything about 2009 today?

As you can see the selection is pretty narrow, and there are tons of important artists missing, obviously. But these were the ones I checked before I found better things to do:

  • Slagsmålsklubben (Sweden) 3,500,000
  • Sabrepulse (UK) 1,400,000
  • She (Sweden) 900,000
  • Bondage Fairies (Sweden) 850,000
  • Dubmood (Sweden/France) 750,000
  • YMCK (Japan) 700,000
  • Anamanaguchi (USA) 550,000
  • Nullsleep (USA) 450,000
  • Random (Sweden) 400,000
  • Bitshifter (USA) 350,000
  • Goto80 (Sweden) 200,000

There were a few popular artists at the time that I didn’t include, since they weren’t popular at last.fm at all, despite big popularity elsewhere. DJ Scotch Egg, Meneo and Patric Catani were three of them.

Also, I guess there were bots to improve the last.fm-statistics, right? Iirc, some people (on this list) used those kinds of bots for MySpace.

