Archive for the ‘theory’ Category

New Media is More Obsolete than Old Media

May 18, 2014

Cory Arcangel, Golan Levin and others have done some great work retrieving the old Amiga graphics that Andy Warhol made back in the day. And I think it’s great that the Amiga gets some attention in terms of computer creativity instead of the constant Apple-ism. But.. what kind of attention is it?

Many artists, media scholars and journalists have a special way of talking about old media. The term hacking usually pops up. Even if you just download software and use it in a very normal way – like most chip music is made for example – we still love to call it hacking. But why? There are several possible explanations. First – we love to believe that humans are in control of technology and that fantasy can flourish with these old and supposedly non-user-friendly machines. Human intelligence can tame even this uncivilized digital beast! Secondly – the term hacking oozes creativity and innovation and has become an omnipotent term used for almost everything.

Obsolescence is another popular word. I’ve written about this many times before, for example in relation to zombie media. Let’s put it like this: new media is permeated with planned obsolescence. Old media is not. Amigas were not designed to be obsolete after a few years like so many modern platforms, systems and programs are. So from our current perspective it seems totally incredible that these old floppy disks and file formats can still be used. Because we’re not used to that anymore. Most people don’t know how easy it is to copy that floppy to a flash card and view the images with UAE or even Photoshop.

It’s also common to think of old media as fragile. But then why do nuclear missiles rely on 8″ floppies? Why do so many airports use DOS, matrix printers and Hi8 video? Why did Sony sell 12 million 3.5″ floppies in 2009? Why did so many gabber/noise people use the Amiga for live shows? Because these things are stable, sturdy and built to last. And sure, because it’s expensive to change them. But the point is: old media is clearly not as fragile as many people seem to think.

To summarize this discourse we can say that 8-bit users are hacking media that is fragile and obsolete. While there is obviously some truth to that statement, a general acceptance of it rests on some pretty problematic ideological assumptions that we all need to relate to in order to get by in a consumer culture. For example:

“New media is better than old media because in technology, change = progress”.

I think we can all be more careful with how we discuss old media in order to move away from this dangerous misunderstanding. I know that there are many contexts where that is not suitable, possible or meaningful. But technological change oozes with politics and it doesn’t have to be conservative or retro-cool to criticize or reject the new. So bring it on, hipster!

 

Wider Screen: Authenticity in Chipmusic

April 16, 2014

Yesterday I wrote about the new scene issue in Wider Screen, where several noteworthy scholars write on chipmusic, demoscene and warez culture. Today I return to that, to discuss the ethnographic study of authenticity in the chipscene. Chipmusic, Fakebit and the Discourse of Authenticity in the Chipscene was written by Marilou Polymeropoulou, who I’ve met a few times around Europe when she’s been doing field studies for her dissertation. Her article is refreshing because it deals with technology in a non-technological way, so to speak. It takes a critical look at the ideologies of chipmusic (which I also tried to do in my master’s thesis) and she doesn’t get caught up in boring discussions about what chipmusic actually is (which, uhm, I have done a lot).

Polymeropoulou divides the chipscene into three generations. The first generation is described as a demoscene-inspired striving to be an original elite, challenging the limitations of original 8-bit hardware from the 1980’s. As I understand it, this generation is everything that happened before the internet went mainstream. The second generation is internet-based and focused on mobility (read Gameboy), learning by copying and making more mainstream-ish chipmusic. The third generation is characterized as “chipsters” who are more interested in sounds and timbres than in methods and technologies.

The first generation of chipmusicians would be a very diverse bunch of people, activities and machines. Perhaps even more diverse than the chipscene is now. Back then there were not as many established norms to relate to. I mean, we hardly knew what computers or computer music were. The terms chipmusic or chiptune didn’t exist, and I doubt that it was relevant to talk about 8-bit music as a general concept. It was computer music, game music, SID-music, Nintendo-music, etcetera. People were using these 8-bit home computers to make music for school, for games, for art, for their garage band, for themselves, for Compunet, for bulletin boards, for the demoscene, for crack-intros, etcetera. However, looking back through the eyes of “chipscene 2014” it makes sense to zoom in on only the demoscene during this period, as it is normally considered one of the most important precursors.

Chip Music Festival, 1990

In the demoscene there were many people who ripped songs to copy the samples, look at their tracker tricks, or just use the song for their own demo. Copying was common, but it wasn’t exactly elite to do it. There was certainly a romantic ideology of originality at work. But I’m not so sure about ascribing a technological purism to the demoscene of that time. Sure, people loved their machines. But most sceners eventually moved on to new platforms (see Reunanen & Silvast). So I’m not sure that this generation would be the antithesis to fakebit. In fact, when the chipmusic term first appeared around 1990 it referred to sample-based Amiga-music that mimicked the timbres of the PSG-soundchips and the aesthetics of game music.

So, in a sense, the Amiga/PC chip-generation of the 1990’s (when the 8-bit demoscenes were very small) was actually not so far from what is called fakebit today. And that’s obviously why this big and important moment, with tens of thousands of open source chip-modules, is so often ignored in histories of chipmusic. It just doesn’t fit in. (It’s also worth noting here that many if not most 8-bit demoscene people today use emulators such as VICE or UAE to make music, and use the original hardware more like a media player.)

My theory is that the hardware-fetish of the chipscene is a more recent phenomenon, established sometime in the mid-2000’s, and I think that Malcolm McLaren’s PR-spree had something to do with it, regardless of the scene’s reaction. If you listen to the early releases at micromusic.net and 8bitpeoples today, you could call it fakebit if you wanted to. Just like with the Amiga-chip music of the 1990’s. So it seems to me that this generation didn’t build much on what had been done in the demoscene, other than perhaps using tools developed there. Games, on the other hand, were a popular reference. So to me, the post-2000 generation of chipmusicians feels more like a rupture than a continuation of the previous generations (something like hobbyism->crackerscene->demoscene->trackerscene->netlabels).

At this time I was still a purist demoscene snob, and I thought that this new kind of bleepy music was low quality party/arty stuff. Still, I decided to gradually engage in it and I don’t regret it. But I was one of very few demosceners who did that. Because this was, in short, something very different from the previous chipmusic that was characterized by lots of techné and home consumption. Micromusic was more for the lulz and not so serious, which was quite refreshing not only compared to the demoscene but compared to electronic music in general (you know, IDM and drum n’ bass and techno = BE SERIOUS).

It’s funny, but when Polymeropoulou describes the third generation of the chipscene (the chipsters) it actually reminds me a bit of the early demoscene people, perhaps even during the 1980’s.

Chipsters compose chipmusic – and of course, fakebit – on a variety of platforms, including modern computers, applying different criteria, based on popular music aesthetics rather than materialist approaches. [..] Chipsters find creative ways combining avant-garde and subcultural elements in order to break through to mainstream audiences, a practice which is criticised by purists.

In the 1980’s they used modern computers to try to make something that sounded like the “real” music in the mainstream. They borrowed extensively from contemporaries such as Iron Maiden, Laserdance and Madonna and tried to make acid house, new beat, synth pop, etc. There was definitely some freaky stuff being made (“art”), and something like comedy shows (Budbrain) and music videos (State of the Art) and later on so called design demos (Melon Dezign) and those demos appealed to people who were not sceners. And the megamixes! Here’s one from 1990:

Okay… how did we end up here? Oh yeah — my point is, I suppose, that the demoscene is not as purist as people think, and never was. At least that’s my impression of it. But even if I disagree with the generational categorization of Polymeropoulou’s text, I consider this article an important contribution to the field of techno-subcultures. Also, I am quoted a few times, both as a researcher and as an anonymous informant. Maybe you can guess which quotes are mine, hehe.

Rewiring the History of the Demoscene: Wider Screen

April 15, 2014


Wider Screen has just released a themed issue on scene research, including scientific articles on the demoscene and the chipscene. They seem to be very good texts, although I’ve only read one so far. So let’s talk about that one!

Markku Reunanen gives a long-awaited critical examination of the history of the demoscene in How Those Crackers Became Us Demosceners. He notes that the traditional story is basically that people cracked games, made intros for them, and then started to make demos. He problematizes this boring story by describing different overlaps between the worlds of games, demos and cracks. The first time I really reflected on this issue was in Daniel Botz’ dissertation. It is indeed obvious that this is a complex story full of conflicting narratives, and we can assume that (as always) The History is based on the current dominant discourses.

What do I mean by that? Well, take Sweden as an example, where the scene was always quite large. These days the scene is usually, when it is mentioned at all, described as a precursor to games, digital arts and other computer-related parts of “the creative industries“. When Fairlight’s 25-year anniversary was reported in the Swedish mainstream media, cracking was portrayed as a legal grey area that contributed to the GDP. The forthcoming Swedish book Generation 64 seems to be telling a similar story. The scene was a bunch of kids who might have done some questionable things, but since these people are now found in Swedish House Mafia, Spotify and DICE it seems like all is forgiven. But it’s not.

Look at what the other sceners are doing today. The ones who didn’t get caught up in IT, advertising and academia. Piratbyrån, The Pirate Bay and Megaupload all involved scene people and, seen against the previous story, appear as a darker side of the scene. The data hippies, the copyists, the out-of-space artists, the dissidents, the fuck-ups. The people who don’t have much to gain from their scene history. But the BBS-nazis (one of them living close to me) are also interesting to consider today, when far-right discussion boards are frequently mentioned in the media. The info-libertarians at Flashback also remind me of the scene’s (in a very broad sense) spirit of “illegal information” and VHS-snuff movies that I mention in The Forgotten Pioneers of Creative Hacking and Social Networking (2009). Something else I mention there, as does Reunanen, are the swappers and traders whose sole function was to copy software around the world. But they are not really part of the history, since they weren’t doing that Creative and Original work that we seem to value so dearly today.

No, the scene wasn’t a harmless place for boys-2-men, from geeks to CEOs. And also – there were plenty of people making weird stuff with home computers who were not part of the scene. People at Compunet were making audiovisual programs that looked really similar to the demoscene’s, but are usually not regarded as part of the scene. Possibly because of its apparent disconnection from the cracker scene. I’ve sometimes seen STE argue about this with sceners at CSDb. Jeff Minter did demo-like things, and people had been doing demo-like computer works for decades already. And all the hobbyists who wrote simple or strange sonic and visual experiments on their 8-bit home computers, but never released them in the scene? Well, they are effectively being distanced and erased from the history of the demoscene by not being included in archives like CSDb and HVSC that exclude “irrelevant” things.

So yeah – thumbs up to Markku for this article! Let’s not forget the provocative and subversive elements of the scene (read more about that in the 2009-article I link to above) because they might become very relevant sooner than we think.

When Misuse of Technology is a Bad Thing

March 25, 2014

I found myself in an interesting discussion a few days ago about the term hacking. We all had different perspectives on it – art, piracy, demoscene, textiles – and it was quite obvious that this term can mean maaaany different things.

It can refer to a misuse of a system. I’ve written before about how appropriation reinforces the idea of a normative use and therefore demonizes other uses, which in the long run, I argue, is dangerous. Because then we learn to accept that software has to be approved by one company before it’s made public, or that it’s ok to fine some acne-generating teenage geek billions of dollars because he used the internet “the wrong way”.

Hacking can also refer to a new use of a system. Something that hasn’t been done before. That’s often but not always the same thing as appropriation. This strive for the new is built into pop culture, but also in things like urban planning, party politics and science. Or, you know, capitalism. It has to be new and fresh! Creative! Groundbreaking! Share-holder-fantabulastic! Cooool!


But new is not always new. Retromania and remix culture mean that it’s ok to just combine or tweak two old things, and then it’s new. In fact, that’s the only thing we can do according to all these artistic and corporate views of creativity. Romantic geniuses and ideas that are not based on focus groups and “public opinions” are out of style. Steve Jobs is dead.

But these things all put the emphasis on two things: humans and results. We can also look at something else instead, which I think brings us closer to the oldschool meaning of hacking with model trains & telephone lines. The interplay between the person and the medium. Man machine. The process. I don’t mean that in some buddhist digi-hippie kind of way, I think. No, I mean it more in a media materialist, OOO kind of way.


Then we can say things like:

• Originality is when something is made without too many presets, samples, macros, algorithms and automated processes. The results are irrelevant, it’s the process that matters. Hm.

• It is possible to disrespect the machine much like you disrespect a person. By making it look like something it’s not. Pretending that it can’t do better than it actually can. Machine bullying. Human arrogance. Hm.

• Machines don’t have intended purposes per se, and we can never fully understand how they work and what they can do. To say that this is a zombie media or this is unlimited computing is, from a strict materialist perspective, equally irrelevant. It is what it is. Hm.

So: Imagine if a future view of creativity or hacking would be to make the medium act as well as it can, from some sort of “medium-emic” understanding. The role of the human artist would just be to make digital media look as good as possible, sort of like a court painter. Computers understand/define human culture, humans glorify computers for computers.

Finding new combinations of ideas seems like a kind of machinic way of making stuff anyway. Book publishers that are completely automated might just produce trash so far, but bots are already invading peer review science (!). Pop music has been computer generated since 1956, and classical music for a few years now. But in a way, the music itself is not so important anymore because computers can put garbage in the charts anyway.

Disrespectful uses of technology are already illegal, or void your warranty, or lock the console, or make it impossible to start the car, etc. Fast forward this perspective, and we have a world where artistic uses of technology might be punishable too. By death! Human arrogance leads to electric shock. Bad coding will lead to deadly explosions. Syntax error – cyberbullying detected!

So be nice to your machine. It’s the new cyberkawaii!


Text-mode Can Show Everything That Pixels Can, So…

July 16, 2013

Handmade carpet by Faig Ahmed, 2011

To say that all digital graphics consist of pixels is a bad case of essentialism that keeps us stuck in a loop. Here’s the full story!

On a perfect screen, pixels are the most basic element of digital graphics. Everything that is shown on that screen can be described perfectly by pixels. Obviously. But that is just the level of appearance. If we look beyond that, there are other kinds of information, and quite possibly more information, like here.

The pixel is a metaphor much like the atom is (see this). It’s useful for many purposes, but it’s a model that doesn’t reveal the whole story. The same pixel looks different depending on context. It’s changed by the screen’s colour calibration, aspect ratio and settings, and it looks different on a CRT, a beamer and a retina screen. The data of an image is not the same as the light it produces.

It would be easy to claim that the lowest common element of digital graphics is text. Anything digital can be described perfectly in text – as data, code, content, algorithms, etc. After all, the pixels themselves are not the real computation. But it’s not that simple. As you can see in this video, it’s possible to write code by pixeling in Photoshop. So pixels and text can be interchangeable, and neither is necessarily more “low-level” than the other. Another nice example is this page, where you create “pixels” by marking the text.
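The interchangeability is easy to demonstrate in a few lines of Python. This is a toy sketch, not tied to any real image format – the “image” is just nested lists of grayscale values – but it shows the point: the same bytes can be laid out as a grid of pixels or read back as source code, and neither reading is more fundamental than the other.

```python
# Toy sketch: the same bytes read as "pixels" or as text/code.

def text_to_pixels(source: str, width: int = 8) -> list[list[int]]:
    """Lay out the bytes of a piece of text as rows of grayscale 'pixels' (0-255)."""
    data = source.encode("utf-8")
    rows = [list(data[i:i + width]) for i in range(0, len(data), width)]
    if rows and len(rows[-1]) < width:
        rows[-1] += [0] * (width - len(rows[-1]))  # pad so the image is rectangular
    return rows

def pixels_to_text(rows: list[list[int]]) -> str:
    """Read the same grid back as text, dropping the zero-padding."""
    data = bytes(v for row in rows for v in row).rstrip(b"\x00")
    return data.decode("utf-8")

code = 'print("hello")'          # a tiny program, stored as pixel data
image = text_to_pixels(code)
assert pixels_to_text(image) == code  # round-trips losslessly
```

The grid could be drawn on a screen, edited in Photoshop, and decoded back into runnable code, much like the video mentioned above.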


From koalastothemax.com. Click and play!

In the work with text-mode.tumblr.com I’ve thought a lot about this. One conclusion is that text-mode can show everything that pixels can. By using the full block text character (█), text art works like pixeling or digital photography – as long as the resolution is high and the palette is big enough.

In other words: any digital movie or image can be perfectly converted into text-mode as long as it’s “zoomed out” enough. This sort of watch-from-a-distance style applies to many other things of course, like the printing technique halftone. Halftone is pretty textmodey, especially when you can overlay several layers of text, like on a typewriter or on the forgotten PLATO computer.
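The “zoomed out” conversion can be sketched in a few lines of Python. This is a minimal toy, assuming the image has already been downscaled to one grayscale value (0–255) per character cell; the five-step shading ramp is just one common convention, not a standard.

```python
# Toy text-mode renderer: one shading character per (downscaled) pixel.
RAMP = " ░▒▓█"  # dark to light: space, light/medium/dark shade, full block

def to_text_mode(rows):
    """Render a grid of 0-255 grayscale values as shading characters."""
    lines = []
    for row in rows:
        # Map each value onto the 5-step ramp.
        lines.append("".join(RAMP[min(v * len(RAMP) // 256, len(RAMP) - 1)]
                             for v in row))
    return "\n".join(lines)

# A tiny 4x8 horizontal gradient "image":
gradient = [[x * 32 for x in range(8)] for _ in range(4)]
print(to_text_mode(gradient))  # prints four lines of "  ░░▒▓▓█"
```

With a big enough palette (e.g. coloured full blocks instead of a grayscale ramp) and a high enough character resolution, this degrades into ordinary pixeling, which is exactly the point made above.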

Alright, so. Normal thinking => images consist of pixels. Abnormal thinking => pixels consist of text characters (both literally and figuratively). The carpet by Faig Ahmed above is a traditional carpet design that’s been pixelized in the top half into a typical “retro” pattern. The bottom shows the original, which has many similarities to other ancient crafts. And to (non-typical) text-mode works using e.g. PETSCII or ANSI.

So: digital imagery pretends to be analogue film, but it actually shares more with e.g. textiles and mosaics, which have looked digital for thousands of years. To replace the pixel metaphor with the text-mode metaphor is to bring forth the medium and its history, instead of obscuring it. It’s also a way to put more emphasis on the decoding process, since we all accept that a text looks different depending on font, character encoding, screen, etc. And that’s pretty rare in times of media convergence psychosis.

Text-mode acknowledges that its building blocks (text characters) are not some kind of essential lowest level entity, but something that always consists of something else. And that’ll have to be the moral of this story.

Media Convergence as Bubble-Bubble

April 22, 2013

I’ve complained about Bruce Sterling before, and now I’m about to do it again. The reason is this chart of platform convergence by Gary Hayes that he posted on Wired. It argues that we’re moving towards one device that can play everything. But here’s the thing:

No device can play everything. That’s just common sense, right? You can digitize a VHS-tape and convert it into a format that modern media players can understand. But then it’s not a VHS-tape anymore. Everything that is special about VHS has been removed. It’s a bleak imitation, at best. Sure, the difference is smaller if you discuss, uhm, Real Audio or executable files. But it’s still the same principle. The juicy materiality (hardware or software) has been stripped away.

Emulators are not the same thing as the original machine. They are not worse or better – they are just different. One example is the C64-emulator for iPhone that wasn’t allowed to include BASIC. Coding is not something that the iPhone should support. So the C64 became yet another boring gaming device, in iWorld. Btw, that follows the logic of the chart, that places the C64 just before … XBOX! Lol! The point is: every remediation & convergence both adds and subtracts. Things disappear. For good and bad.

Media convergence is obviously something that’s going on, in many different ways. And when I think about it – perhaps Sterling and his crew are right. There will be a machine in the future that can do everything. Yeah. I’m pretty sure there will be. Because we already had that machine so many times before. The magical device that can delete the material constraints and make your dreams come true instantly and without friction. Remember virtual reality in the 1990’s? Or home computers in the 1980’s? Or … I don’t know, beamers and wheelchairs and jet packs?

Silly comparison? Maybe a little. But we have to accept that these interface fantasies are cultural constructions that were as “real” or relevant in the 80’s as they are today. In 30 years people will patronize our fantasies just like we do today.

And when you think about it… A touch screen that you can use some fingers on? No keyboard? Unprogrammable systems, automatic surveillance, distribution monopolies… I mean. Eh?

This convergence is just a bubble-bubble. It’s not some unavoidable teleological future. Seems more like a temporary phase before we move towards divergence and paint that in terms of progress and optimism. Just like we did with the 1980’s computer market, for example. Seems pretty likely to me.

Delete Or Die #1 – Why Subtraction Beats Production

April 15, 2013


Everything that we do is to delete things. We don’t create or add, we subtract and remove. Anyone who reads this text deletes my original intentions. The choice to read this text is a choice that excludes gazillions of other options. So thanks for staying!

Science agrees. In quantum theory, the world is a sea of virtual potentials and whatever happens is not much compared to what did not happen. It is only a drop in the ocean. According to some economists, capitalism thrives on destroying the past. It deletes previous economic orders and the current value of existing products, in order to create new wealth. Among some posthuman philosophers, humans are no longer thought of as creators, but as sculptors or signal filters. We receive signals and filter them according to the taste of our system. If it doesn’t make sense, it gets deleted. I guess cybernetic theorists and cognitive psychologists might agree on that one?

One of the phreshest cyberd00dz, senor Nick Land, once wrote that organization is suppression. Any kind of organization – imposed on anything from cells to humans – deletes more than it produces. This of course includes modern technologies like search engines and augmented reality – more about that in a minute.

So: the most productive thing you can do is to increase the desire to delete. One easy way of doing that is to use sedatives. These are the drugs o’ the times – a reaction to the cocaine-induced individualism of the 1980s that was caused by the psychedelic ecologism of the 1960s. Nowadays we don’t tune in and turn on, we turn off and drop out. Artists do it like this while most people do it by watching TV or using “smart technologies” that delete decisions for you. We need censorship, even if we think it’s wrong. Delete or die!


Let’s look at a few very different examples that relate to this. If this all seems very confusing to you, first consider that the only way to be creative today is to be non-creative by e.g. stealing & organizing instead of “creating original content”. From plunderphonics in the 80’s to the mainstream copyright infringement known as social media — now the next step is to start removing things.

Nice idea, but how useful would that be? Well, I experimented for a while with filling the memory with crap, loading a music program, and then starting to remove the crap. Like a sculptor. And the idea was to make “real music” and not only noise, of course. Both challenging and fun! But anyway, let’s back up a bit:

Subtraction is all around us all the time. It’s how light/colour works, and how some forms of sound work. Our own brains are really good at it too. We perceive and process only a fraction of all the input our senses can take in.

Another almost-naturalized form of subtraction, but in the arts, is the removal of content to reveal the form (uh, or was it the other way around?). I guess that’s what a lot of art of the 1900s was about? Abstractionism and minimalism, space and non-space, figure-ground oscillations, and so on. Take things out to reveal something we didn’t know before. Two unexpected examples: Film the Blanks and Some Bullshit Happening Somewhere.

Another rather recent thing is Reverse Graffiti. It doesn’t add paint, but removes e.g. dirt & dust instead. Graffiti can also be removed by adding paint over it, which some people jokingly call art. Or perhaps doing graffiti by carving the walls is more relevant?

Censorship is another topic. Here is a silly one where naked bodies are censored and the black boxes form new shapes and stuff. I suppose censorship could also include net art things such as Facebook Demetricator and Text Free Browsing. Also, Intimidad Romero does art by pixelizing faces.

On the more techy side, Diminished Reality is the opposite of augmented reality, and seems to be very controversial to people. More so than augmented reality, probably because we think we’ll “miss out” on stuff instead of getting “more” like augmented reality promises. Whitespace is, I guess, a tongue-in-cheek project: a programming language that ignores normal text and only uses space, tab and newline instead. A favourite of mine is the game Lose/lose where you play for the survival of your hard drive’s files.


For me these examples show how rich the field of DELETE actually is. And there is plenty more to say. In fact, there was a rather big plan for this project once. But instead of letting it decay away and be unrealized (?) I decided to undelete it. Oh n0ez, teh paradox! Or maybe a blog post doesn’t count as being realized? Well I think it’s pretty obvious that the ████████ was ████ ███ ████████ ████ because ████████ ███ █ so ████████ ██████████ █ ████████.

Some useful slogans:

Progress = deleting alternatives

Any thing is a reduction of some thing

Understanding = organizing = deleting

Creativity spots the ugly and deletes it

Anything that happens is nothing compared to what could have happened.

 

80% listening, 20% improvisation. A Modern Composer?

January 20, 2013

I just watched a Norwegian documentary about noise music from 2001 (ubuweb). It featured mostly Norwegian and Japanese artists, and it struck me how differently they talked about music. While the Norwegians got tangled up in complex and opposing ideas about concepts, tools and artistic freedom, the Japanese gave shorter answers with more clarity. Straight to the point.

It made me wonder (again) how human-machine relationships are thought of in Japan. Over here, it’s very controversial to say that the machine does the work. Deadmau5 did that, in a way, and I doubt he will do it again.

In the documentary, the Japanese artists said things like “When I am on stage I spend 80% of the time listening, and 20% improvising”. A very refreshing statement, and electronic musicians can learn a lot from it. Shut up and listen to what the surroundings have to offer!

There are many similar ideas in the West, especially after cybernetics and John Cage. The sound tech and the author melting together in a system of feedback. Machines are extensions of man (à la McLuhan) and we can exist together in harmony.

In the documentary, one Japanese artist turns against this idea. He doesn’t believe that the sounds and the author work closely together at all. For him, they are separated, with only occasional feedback between the two. Hmmm!


It’s an intriguing idea. When I first started reading about cybernetics, it was in the context of the dead author. Negative feedback loops that take away power from the human. I felt that my musical ideas were heavily conditioned by the tools that I used, and there was something annoying about that. How could there be harmony from that?

Maybe it’s better to think of it as a conflict. The computer is trying to steer your work in a certain way. And you want to do it another way. Like two monologues at the same time. It’s a reasonable idea, especially if you consider computers to be essentially non-graspable for humans – worthy of our respect.

However, that’s not how we think of computers. We’ve come to know them as our friends and slaves at the same time. Fun and productive! Neutral tools that can fulfill our fantasies. As long as the user is in control, it’s all good. No conflict. Just democracy and entertainment, hehe.

As much criticism as this anti-historical approach has received over the years, I think it’s still alive and kicking. Maybe especially so in the West. Computer musicians want to work in harmony with their tools. Not a conflict. “I just have to buy [some bullshit] and then I’ll finally have the perfect studio”. You heard that before? The dream lives on, right?

It’s almost like 1990’s virtual reality talk. Humans setting themselves free in an immaterial world where “only your imagination is the limit”. Seems like a pretty Christian idea, when you think of it. I doubt that it’s popular in Japan, anyway.

To conclude – it’s of course silly to generalize about Japan, judging only from a few dudes in a documentary. But I think there is still something important going on here. If anyone has reading suggestions about authorship/technology in Japan, please comment.

Retromania, Time Warps, Revivalism & Slovenia

January 15, 2013

Simon Reynolds’ Retromania – Pop Culture’s Addiction to its own Past gives a good overview of the intensified retromania of the last decades. He describes nostalgia’s integration into 1950’s pop culture, and the ‘memory boom’ of the 1990’s that made retro more … modern. You know, archive fever and cheap hard drives and all that.

Retromania focuses on a sort of semiotic nostalgia. It’s about our relationship to content. We’re likely to accelerate and maximize this ‘content retromania’ as Reynolds suggests in an article. But there is also a material retromania that revolves around machines and formats. It’s obviously popular to use typewriters, Moogs and cassettes and delve into medium specifics. Gradually they are emulated, sampled and commodified into plugins and filters. Sometimes they even become specific signifiers, like the needle scratching across the vinyl record signifies interruption in sitcoms. Or an icon of a floppy disk means ‘save’.


From where I’m standing, it seems that retromania is moving away from content and towards the material. Songs are easy to find, records and machines are not. Reynolds writes plenty about collectors. I think that future collectors might have things like old firmware, ancient software versions, algorithms, or maybe a full multimedia set-up with Windows 95 and Netscape to browse like it’s 1997.

These things are usually described either as nostalgia or appropriation. Nostalgia is bad and appropriation is good, lulz. Nostalgia is non-intellectual and melancholic, appropriation is social and political. Oneohtrix Point Never is quoted in the book as saying that it’s about a desire to connect, not to relive things, which I think illustrates this artificial separation quite well.

Reynolds doesn’t mention chipmusic in his book. But who can blame him? While techno, rock and punk emerged from ecstatic periods of the new, chipmusic was never really new and exciting. When the term chipmusic emerged around 1990, it referred to Amiga music that sounded like previous C64 music. 10 years later, micromusic.net was also looking back quite a lot.


So – chipmusic was always “retro”. From the start. That’s why it doesn’t really make sense to call it retro. To say that micromusic.net or the 1990s Amiga demoscene was retro, doesn’t really compute. Reynolds talks about two kinds of retromaniacs which I think capture the tension in the chip scene:

The revivalist dissident chooses an era and stays there. Some people still listen to the same chipmusic hits from the 1980s, and love it. It’s some sort of neo-conservatism, a rebellion against the new in mass culture, a freeze in the past. Lots of demoscene vibes here…

Time-warp cults focus on unsuccessful parts of an old era. Go back, and change the future. This reminds me of the 00’s chipscene mantra of “making something new with the old”. And it also makes me think about media archeology and all kinds of lo-fi practices in the context of Phine Artz. It’s not old (nostalgia) — it’s new and fresh! (appropriation). Retrofuturism, I suppose.

I think these are two useful concepts. If I had to choose one of them, I would choose revivalism. It feels more honest, somehow. For me it’s not about going back to a certain time/culture. It’s more about the machines. The sweet, smelly machines.

Anyway. We don’t have to choose sides. So nevermind that. We should probably look into stuff like hauntology and retrogardism instead. THE FUTURE IS THE SEED OF THE PAST, as the Slovenian IRWIN/NSK/Laibach said. Perhaps the difference between the past and the future is not so important after all..

Like Reynolds hints in the book – pop culture seems to go in cycles much like the economy. Growth through novelties. Unlimited progress. Forever young. Would’ve been great to read more about that in the book. About cycles rather than linear movements. Because that’s what really makes retromania interesting. If capitalism is going down the drain, so is pop culture.

Realtime Text /2/ Interview with BBS-artist

December 5, 2012

The previous post was inspired by a conversation I had with Erik Nilsson, probably the only one who’s made a music video on a BBS. We talked about the 1990’s, when teenagers used BBS instead of WWW to talk. When you could see how the person on the other end of the modem was acting. I’ve added my comments [in brackets] to explain some technical stuff that Erik talks about.

ERIK > I remember as an early lamer, the sysops would wonder what the fuck you were up to. I remember the feeling of knowing that the sysop could be watching your every move. It was a bit like being in someone’s house, or in some sort of social club.

I remember the local BBS Secret Gate as one of the first places where I was accepted, and met friends. They had 3 nodes [phonelines = 3 simultaneous users] so you could chat with other users – not just the sysop. That’s how I started to hang out with Mortimer Twang, and together with Trivial we started Divine Stylers.

CHIPFLIP > Did you talk mostly about computer stuff, or also other things?

ERIK > I lived in an isolated place, so the computer was really a window into a world full of everything. Mortimer’s early mod music was my introduction to loop-based alternative music. The loopy and psychedelic aspects of dance music work really well in Amiga trackers.

But there was also friendship, and pretty close conversations. I remember when I had my own BBS and my best friend called. We had fallen for the same girl, and I remember the chats we had about it. The pauses and the trembling made the conversation more tender. It was a really emotional talk, which I can still think back to and appreciate. It could have been through any medium, but I remember how the pauses and the tempo of the text made it more “charged”. I remember typing “I’m crying” and getting back “me too”. :)

There is a big difference in seeing the words take shape, instead of just reading them. It’s more personal. What you type is closer to the thought you have before you say it.

CHIPFLIP > Why do you think the real-time text isn’t around anymore?

ERIK > What was once standard no longer exists. It’s as if technology has taken a step back when it comes to text-based communication. I really don’t know why the intermediate step of pressing return has been added. It’s like you publish the text, while you used to say things more directly. The movement of the cursor reveals how the person is hesitating, erasing or contemplating.

If you chat on a BBS, you press return twice to signal that the other one can start writing. But it was still possible to interrupt the other one, if there was a heated argument for example. That doesn’t happen the same way in say Skype, because there is a gap between the users. It feels more plastic and more “simulated” than it has to be.

Well, when I think about Skype, which I use on a daily basis, there actually is a ‘function’ reminding of the old standard in a weird way. In Skype you can actually see a small icon when the other person is typing or erasing. It’s really far away from the old chat style – a weird version of it in some way. Still not even close to the thing I miss, but I guess someone was thinking about this gap when making Skype.

CHIPFLIP > And it’s more difficult to change your mind, too. Did you use the backspace often?

ERIK > Yeah, you erase constantly if you’ve learnt how to type street style. Erasing is just as important as typing. ;) I got really into animated text. It was like digital thumb twiddling. You typed something, erased it, and replaced it with something new to make an animation. Sometimes you erased it because you didn’t want to keep it on the screen, like card numbers for example :) You typed it on the screen, and when the other person had written it down on a piece of paper, you erased it.
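[This kind of erase-and-retype animation is easy to sketch in a few lines of modern Python – purely an illustration, not period BBS software. The event format and the characters-per-second rate are my own invention; a 300 baud modem managed roughly 30 characters per second, which is what made typed text look animated on playback.]

```python
import sys
import time

def replay(events, cps=30):
    """Replay a list of (action, text) typing events at roughly modem speed.

    'type' prints the text character by character; 'erase' backspaces over
    that many characters. cps is characters per second.
    """
    delay = 1.0 / cps
    for action, text in events:
        if action == "type":
            for ch in text:
                sys.stdout.write(ch)
                sys.stdout.flush()
                time.sleep(delay)
        elif action == "erase":
            for _ in text:
                sys.stdout.write("\b \b")  # backspace, blank out, backspace
                sys.stdout.flush()
                time.sleep(delay)

# "Digital thumb twiddling": type a word, erase it, replace it.
replay([("type", "hello"), ("erase", "hello"), ("type", "HELLO!")])
```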

CHIPFLIP > So one way to make animations on a BBS is to quite simply “type the animation”. And due to the slow modem speed, it will look animated when you play it back. But what kind of options were there to make the graphics on the BBS?

ERIK > There were a couple of different chat systems. The most common one was that each user had a colour, and you simply pressed return twice when you were done. There were also more advanced chats for ami/x, where you could move the cursor freely, like in a text editor or like the message editor in C*Base for C64.

CHIPFLIP > Was there anything bad about it being real-time?

ERIK > No. I mean it’s not the real-time thing that made it disappear. It changed because IRC took over most of the communication for the elite scene, since it was more global. When the internet came, real-time chat just disappeared by itself. It’s probably all just one big PC bug.

The situation is a bit similar to that of PETSCII [Commodore's own ASCII standard, with colors and plenty of graphical characters]. PETSCII is a better and more evolved system for text and symbols. It was more beautiful and personal to directly use the keyboard to write a letter to someone using colours, symbols and even 4×4 pixel graphics. Today you have to load images and change font colour in some menu to make a really spaced out e-mail. It’s slower, and it’s not “in the keyboard” like on the C64.

CHIPFLIP > What’s the best modern alternative to PETSCII?

ERIK > ANSI is not really an option, from my point of view. It’s typical “slow PC” style. Like some kind of Atari. You draw the graphics in a graphics program. Choose with the mouse. Draw fancy stuff from choices you make on the screen. It’s just like Photoshop.

PETSCII could’ve been a good source of inspiration for mobile phones, for example. It needs an update to have meaning and function today, but the way the system works still makes it the most interesting one I know of. ASCII is okay, but you still have to use a special editor to make the graphics. That’s a step in the wrong direction.

The C64 is like a synthesizer – you just turn it on, and get to work. With modern computers you have to wait for it to start, find the right program, and so on. They say that computers are faster today, but honestly – I have no idea what they are talking about! They only seem to get slower.

It’s strange, because computers were not supposed to become stiff and flat, like they are today. There’s all this talk about more convenience and speed, but from day one humans have only made it harder for computers to help us.

CHIPFLIP > A very broad explanation, also, is to consider analogue media as immediate (light bulbs, guitars, TVs, analogue synthesizers) and digital media as more-or-less indirect. It can never have zero latency and we seem to, somewhat paradoxically, accept that changing the channel on a modern TV takes 10 times longer than it used to. If you know Swedish you can read more about those things here.

Other than that, thanks so much to Erik for sharing his thoughts on this. Let’s fix the future!

