Archive for the ‘theory’ Category

A retrospective on the stories and aesthetics of 8-bit music

January 26, 2015

Taken from the catalogue to Lu Yang’s exhibition ANTI-HUMANISM at the OK Corral gallery in Copenhagen. I was asked to write a free-floating essayoid text about 8-bit music, and I came up with this. I added some links here too, for further reading/watching/listening.

When practitioners of 8-bit music like me write about the genre, it is hard to ignore the skills and effort needed to make the music. To play 8-bit music you need to master a not-so-intuitive software interface in order to communicate with a computer chip that in turn produces bleeping sounds from cheap digital logic. On or off, increase or decrease. These inputs are the basics of digital technologies, making it as if there is something timeless about 8-bit music, although it might seem really old: 30 years in digital terms is the equivalent of something like 1001011001010101011111101011 years.

8-bit music can be understood as a low-level cultural technique of music hacking, where different stories can be told. The sceptic might tell a story of nostalgia for videogames, where the composer makes simplistic music because the tool used doesn’t allow anything complex to be made. Indeed, that would be a normal story to tell if we believe that newer is better, and that new expressions require new technologies. It’s an almost logical story in a society that values quantitative increases over quality.

The most common story about 8-bit music among academics, artists and journalists, however, puts the human at the centre of attention. It sometimes has a similar narrative to an old monster movie. There is a hero who learns how to manipulate and finally control some sort of wild beast. Instead of a monster, the Obsolete Computer is a mysterious relic of old school digital consumerism that is nowadays hard to understand, both in terms of purpose and function. A young white male hero appears and tames a frightening thing with rational choices, and probably kills it with physical or symbolic violence. He achieves freedom and love and/or emancipation from capitalism or modernism or something. The end.

I should know, for I too have told this kind of story. Many times. I started making music with 8-bit machines as a kid in the early 1990’s, when that was (almost) the normal thing to do. Thing is, I never stopped using them. Throughout the 2000’s, as 8-bit music started to intertwine with mass culture again because of the current retromania, people like me had to start explaining what we were doing. Journalists started to ask questions, promoters wanted biographies that would spark an interest, art curators wanted the right concepts to work with, and so on. So during the noughties, a collective story started to emerge among those of us who were making 8-bit music in what I have called the chipscene: a movement of people making soundchip-related music for records and live performances (rather than making sounds for games and demos as was done during the 80’s and 90’s).

The stories circulated around Commodore 64s, Gameboys, Amigas and Ataris, Nintendo Entertainment Systems, and other computers and game consoles from the 1980’s. We were haunted by the question “Why do you use these machines?” and although I never really felt like I had a good answer, we were at least pretty happy to talk about our passion for these machines. For a while anyway.

In comparison to many other music movements we spoke out about the role of technology, and we did it at the expense of music. We didn’t care much about the style and aesthetics of the music we made, because 8-bit music could be cute pop or brutal noise, droney ambient or complex jazz. We didn’t care about the clothes we wore, or which drugs we took, or which artists we listened to. We formed a subculture based on a digital technology that uses 8 bits instead of 32 or 64, as modern machines do. Defining our music movement as “8-bit music” was a simplified way of explaining what we did. It was a way of thinking about medium and technology intrinsic to some modern discourses on art. Like, anything you do with a camera is photography. Simple, but slightly … pointless?

The music somehow came in second. Or maybe third. Sometimes the music we made almost became irrelevant. The idea of seeing someone on a huge stage with a Gameboy was sometimes enough. The primal screams of digital culture roaring on an oversized sound system in a small techno club were what we needed to get us going, even if it sounded terrible. Some of us were more famous than others, sure, but there weren’t the same celebrity and status cults as in some of the “too serious” 1990’s-style electronic music scenes. For us, the machines were the protagonists of the stories. Sometimes it was almost as if we – the artists who made the music – had been reduced to objects. It was as if the machines were playing us, and not the other way around. Yeah... very anti-human!

To be honest, not many people are willing to give up their human agency and identity, step back, and give full credit to the machine. Or even worse – have someone else do that for them. Well, I didn’t feel comfortable with it, at least. People came up to us when we performed live to interrupt and ask what games we were playing. Or perhaps to request some old song from a game. But for many of us, the entire movement of 8-bit music was not about the games of the 1980s. It was about the foundational computational technologies and their expressions manifested as sounds. Or something like that, anyway.

It’s quite interesting how this came to be. How did 8-bit music become so dehumanized, when it involves quite a lot of human skills, techniques, knowledge and determination? I think an important factor was when the chipscene was threatened by outsider perspectives. In 2003, Malcolm McLaren, known for creating spectacles such as the Sex Pistols in the 1970’s, discovered 8-bit music. For him, this was the New Punk, and he wrote a piece in Wired magazine about how the movement was against capitalism, hi-tech, karaoke, sex, and mass culture in general: through the appropriation of discarded commodities, the DIY spirit, the raw and unadulterated aesthetics, and so on. On McLaren’s command, mainstream media started to report about 8-bit music, at least for 15 minutes or so.

To be fair, it was a good story – when Malcolm met 8-bit. But it pissed off plenty of people in the scene, because of its misunderstandings, exaggerations and non-truths. It did, however, play an important role in how the scene came to understand itself. McLaren’s story had stirred a controversy that made us ask ourselves “Well, if he’s wrong, then who’s right?”. We didn’t really know, at least not collectively. McLaren pushed the chipscene into puberty, and it began to search for an identity.

I was somewhere in the midst of this, and contributed to the techno-humanist story that started to emerge. It was basically this: We use obsolete technologies in unintended ways to make new music that has never been done before. Voilà. The machine was at the centre, but it was we, the humans, who brought the goods. We were machine-romantic geniuses who figured out how to make “The New Stuff” despite the limitations of 8-bit technologies. It was machine fetishism combined with originality and the classic suffering of the author. It was very cyber-romantic, but with humans as subjects, machines as objects, and pop cultural progress at the heart of it. It could be a story of fighting capitalist media. All in all: pretty good fluff for promotional material!

Over time, I became increasingly uncomfortable with the narratives forming around 8-bit. In 2007, I was asked to write a chapter for Karen Collins’ book From Pac-Man to Pop Music. I researched the history of 8-bit music and realized the current techno-centric view of 8-bit music was a rather new idea. In the 1980’s there wasn’t any popular word for 8-bit music. Basically all home computer music was 8-bit, so there was no need to differentiate between 8, 32 and 64 bits as there is today. That changed in the 1990’s, when the increase of hi-tech machines created a need for popular culture to differentiate between different forms of home computer systems and the music they made.

The term chipmusic appeared to describe music that sounded like the 1980’s computer music. It mimicked not only the technical traits of the sound chips, but also the aesthetics and compositional techniques of the 1980’s computer composers. So 1990’s chipmusic wasn’t made with 8-bit machines. The term was mostly used for music made with contemporary machines (Amigas and IBM PCs) that mimicked music from the past. It wasn’t about taking something old and making something new. It was more like taking something new and making something old. In other words: not very good promotional fluff.

I realised something. The techno-determinist story of “anything made with soundchips is chipmusic” was ahistorical, anti-cultural, and ultimately: anti-human. Sure, there was something very emancipating about saying “I can do whatever I want and still fit into this scene that I’m part of”. That’s quite ideal in many ways, when you think about it.

Problem is – it wasn’t exactly like that. Plenty of people made 8-bit or soundchip music that wasn’t understood as such. The digital hardcore music of the 1990’s that used Amigas. The General MIDI heroes of the 1990’s web. The keyboard rockers around the world, who were actually using soundchips. So for me it became important to explore chipmusic as a genre, rather than just a consequence of technology. If it’s not just a consequence of technology, then what is it? How were these conventions created, and how do they relate to politics, economics and culture?

This is what I tried to give answers to in my master’s thesis in 2010. Looking back at it now, what I found was that it was actually quite easy to not make chipmusic with 8-bit technology. I mean, if you hooked a monkey straight up to an 8-bit soundchip, it’s not like there would be chipmusic. It would be more like noise glitch wtf. Stuff. Art. I don’t know. But not chipmusic. Chipmusic was more about how you used the software that interfaced between you and the hardware soundchip. So I tried to figure out how this worked for me, and more importantly, for the people I interviewed for my thesis. How and why do we adapt to this cultural concept of what non-human “raw computer music” sounds like?

I am still recovering from this process. During this time my music became increasingly abstract and theoretical. I started to move away completely from danceable and melodic music, and got more interested in structures and the process of composing music, rather than the results of it. I wanted to rebel against the conventions that I was researching, and find something less human, less boring, less predictable.

But at the same time, I wanted to prove that we don’t need hi-tech machines to make non-boring music. I despise the idea that we need new technologies to make new things. And I am super conservative in that I, in some way, believe in things like craft, quality, and originality. In some way.

So I was trying to find my own synthesis between me and the machine. Since I am not a programmer, I didn’t work with generative systems like many posthuman composers do. I kept a firm focus on the craft of making music. For example, I started to make completely improvised live sets without any preparations. I got up on stage, turned on a Commodore 64, showed it on a beamer, loaded the defMON software, and made all the instruments and composition in front of the eyes and ears of the audience.

I like this a lot because it’s hard work (for me) and it gives surprising results (for me). It’s a bit similar to live coding, if you’ve heard of that, but with a less sophisticated approach, I suppose. It’s more like manual labour than coding. Typing hundreds of numbers and letters by hand, instead of telling the computer to do it. Doing it “by hand” opens up different kinds of mistakes than when it’s automated. Which leads to surprises, which leads to new approaches.
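If that sounds abstract, here is a rough sketch of the difference, written in Python and in a generic tracker-like notation that I’m making up for this text (defMON’s actual interface looks nothing like this):

# By hand: every note, instrument number and volume is typed cell by cell.
hand_typed = [
    "C-3 01 0F",
    "E-3 01 0C",
    "G-3 01 0A",
    "C-4 01 08",
]

# Automated: a few lines of code produce the same kind of rows without the
# labour – and without the slips of the hand that make a live set surprising.
notes = ["C-3", "E-3", "G-3", "C-4"]
volumes = [0x0F, 0x0C, 0x0A, 0x08]
generated = [f"{note} 01 {vol:02X}" for note, vol in zip(notes, volumes)]

assert generated == hand_typed

The manual version is pure typing; the automated one is a small program. The first is slower and error-prone, and that is exactly the point.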

I am not in full control, nor do I want to be. Or, more correctly, I don’t think I can be. I agree with the media theorist Friedrich Kittler’s ideas that we can never fully grasp or relate to what a computer is, and how it works. It is a thing on its own, and it deserves respect for what it is. We should not say that it has certain intended uses – like a “game computer” – because that is just semantic violence that in the long run reinforces the material censorship of Turing complete machines into crippled computers, like smartphones.

I think that whatever we use these things we call computers for, is okay. And most of us have odd solutions to make technology do what we want, even if we are not programmers. Olia Lialina calls it a Turing complete user – s/he who figures out how to copy-paste text in Windows through Notepad to remove the formatting, or perhaps how to make Microsoft Word not fuck up your whole text.

What I mean is: even if I make sounds that people say “go beyond the capabilities of the machine”, I don’t see myself as the inventor of those sounds, nor do I think that they go beyond the machine. They were always there, just like Heidegger would say that the statue was already inside the stone before the stone carver brought it forth.

Yeah, I suppose it’s some sort of super-essentialist point of view, and I’m not sure what to make of it to be honest. But I like how it mystifies technology, rather than mystifying human “creativity”. The re-mystification of technology is great, and the demystification of the author is important. What if the author is just doing stuff, and not fantastic art? What if it’s just work?

My Dataslav performance plays with this question. I sit in a gallery, and people tell me what kind of song they want, and I have to fulfil their wish in no more than 15 minutes. I turn myself into a medium, or perhaps more correctly: a medium worker. I mediate what other people want, but it takes skills and effort to do it. It’s perhaps craft, not art. Or maybe it’s just work. Work that I don’t get paid to do, like so many other “cultural” workers in the digital arts sector.

If the potentials are already present in the technology, and we humans are there to bring them forth, that kind of changes things, doesn’t it? We don’t really produce things by adding more stuff to them. We are more like removing things. Subtraction rather than addition.

And if that’s the case, then it’s obviously much better to use something where we don’t need to subtract so much to make something that most people haven’t already done. If everything is possible, which some people still believe to be the case with some technologies, then that’s a whole lot of stuff to delete to get to the good stuff!

So, start deleting. It’s our only hope.

A Rant on Limitations

August 26, 2014

In the lo-fi arts, it is common to say that limitations serve as a source of inspiration. It’s such a common phrase that it’s become nearly as hollow as saying that less is more. This paradoxical expression basically means that less can be good despite not being more. Less is good only if it’s like more. If you flip the expression around into more is less, which Barry Schwartz does when arguing against freedom of choice, it actually means the same thing. Less is always worse.

It’s a truism to point out the ideological connection with a capitalist focus on eternal growth. Much less obvious is how this belief permeates so many artistic, scientific and journalistic accounts of lo-fi computing. There’s a strong focus on limitations when it comes to lo-fi computers, but not when it comes to hi-tech stuff. There’s a fetish with lo-fi limitations that I think we can all recognize, and therefore there is also a fetish with the hi-fi unlimited. Right?

***

We should probably talk about verbs rather than nouns. Saying that certain characteristics are limitations or not… well… says who? All systems have limitations, depending on how/who/when/where you ask the question. Can you imagine something that is actually unlimited? Invisible? Isn’t it actually the “limitations” that give character to something? A piano without the limitation of discrete notes? Well, now that’s just not a piano anymore, is it?

But anyway – the real question is: how are those characteristics limiting? Can it be limiting to only have squarewaves and 3 oscillators? Yes, of course. And can it be limiting to have 3 million custom waveforms and 12 million super oscillators and a frictionless interface between man and machine? Yes, that too can be limiting. It can be too much. It can push us into making the familiar, because it requires a mega fresh brain to get out of the path dependence. It’s much easier with an interface that suggests unfamiliar ways.

A lot of artists show love and respect for the technologies they use. In the digital arts, not so much. For digital artists, the tools are mostly commercial products, and it’s not exactly arty to celebrate a commodity (unless, you know, you have a conceptual reason to do so). Computers are hidden in art galleries, screens are turned away from the audience at laptop gigs, and so on.

***

We’re also quite obsessed with critical and transgressive uses of these technologies. We imagine that we’re doing something that we’re not supposed to do, and call it critical uses or hacking or appropriation or something like that.

Smells like humanist spirit. As if we’re in control, eh?

8-bit artists, on the other hand, are often positioned in a much more posthuman way. As slaves of technology. Underdogs. We often portray ourselves as suffering artists – or even handicapped – who make stuff despite technology. And yet, that once again reinforces the idea that hi-fi tech is somehow less limiting than old tech. But here are a few reasons why a lot of old tech is superior:

* Fast, reliable, sturdy. It’s not your work laptop that you have to be super careful with. It doesn’t take 1 minute to boot or shut down. It doesn’t break if you check your luggage in. It’s fixable and still cheap to buy.

* Super control. For me as a musician, I can do almost anything that the platform allows me to do. That’s not at all the case with hi-fi platforms, which hide most of their potential.

* Aesthetically, you can work with instant genrefication. If you keep it simple, your song/picture/animation is instantly recognized as 8-bit/retro. This can be negative, but also positive. No need to worry about aesthetics. Just let the machine provide it for you.

(this post was revived from the 2012-archives)

New Media is More Obsolete than Old Media

May 18, 2014

Cory Arcangel, Golan Levin and others have done some great work retrieving old Amiga graphics that Andy Warhol made back in the day. And I think it’s great that the Amiga gets some attention in terms of computer creativity instead of the constant Apple-ism. But... what kind of attention is it?

Many artists, media scholars and journalists have a special way of talking about old media. The term hacking usually pops up. Even if you just download software and use it in a very normal way – like most chip music is made for example – we still love to call it hacking. But why? There are several possible explanations. First – we love to believe that humans are in control of technology and that fantasy can flourish with these old and supposedly non-user-friendly machines. Human intelligence can tame even this uncivilized digital beast! Secondly – the term hacking oozes creativity and innovation and has become an omnipotent term used for almost everything.

Obsolescence is another popular word. I’ve written about this many times before, for example in relation to zombie media. Let’s put it like this: new media is permeated with planned obsolescence. Old media is not. Amigas were not designed to be obsolete after a few years like so many modern platforms, systems and programs are. So from our current perspective it seems totally incredible that these old floppy disks and file formats can still be used. Because we’re not used to that anymore. Most people don’t know how easy it is to copy that floppy to a flash card and view the images with UAE or even Photoshop.

It’s also common to think of old media as fragile. But then why do nuclear missiles rely on 8″ floppies? Why do so many airports use DOS, matrix printers and Hi8 video? Why did Sony sell 12 million 3.5″ floppies in 2009? Why did so many gabber/noise people use the Amiga for live shows? Because these things are stable, sturdy and built to last. And because it’s expensive to change them, sure, but the point is: old media is clearly not as fragile as many people seem to think.

To summarize this discourse we can say that 8-bit users are hacking media that is fragile and obsolete. While there is obviously some truth to that statement, a general adoption of it rests on some pretty problematic ideological assumptions that we all need to relate to in order to get by in a consumer culture. For example:

“New media is better than old media because in technology, change = progress”.

I think we can all be more careful with how we discuss old media in order to move away from this dangerous misunderstanding. I know that there are many contexts where that is not suitable, possible or meaningful. But technological change oozes with politics and it doesn’t have to be conservative or retro-cool to criticize or reject the new. So bring it on, hipster!

 

Wider Screen: Authenticity in Chipmusic

April 16, 2014

Yesterday I wrote about the new scene issue in Wider Screen, where several noteworthy scholars write on chipmusic, demoscene and warez culture. Today I return to that, to discuss the ethnographic study of authenticity in the chipscene. Chipmusic, Fakebit and the Discourse of Authenticity in the Chipscene was written by Marilou Polymeropoulou, whom I’ve met a few times around Europe when she’s been doing field studies for her dissertation. Her article is refreshing because it deals with technology in a non-technological way, so to speak. It takes a critical look at the ideologies of chipmusic (which I also tried to do in my master’s thesis) and she doesn’t get caught up in boring discussions about what chipmusic actually is (which, uhm, I have done a lot).

Polymeropoulou divides the chipscene into three generations. The first generation is described as a demoscene-inspired striving to be an original elite, challenging the limitations of original 8-bit hardware from the 1980’s. As I understand it, this generation is everything that happened before the internet went mainstream. The second generation is internet-based and focused on mobility (read: Gameboy), learning by copying and making more mainstream-ish chipmusic. The third generation is characterized as “chipsters” who are more interested in sounds and timbres than in methods and technologies.

The first generation of chipmusicians would be a very diverse bunch of people, activities and machines. Perhaps even more diverse than the chipscene is now. Back then there were not as many established norms to relate to. I mean, we hardly knew what computers or computer music were. The terms chipmusic or chiptune didn’t exist, and I doubt that it was relevant to talk about 8-bit music as a general concept. It was computer music, game music, SID-music, Nintendo-music, etcetera. People were using these 8-bit home computers to make music for school, for games, for art, for their garage band, for themselves, for Compunet, for bulletin boards, for the demoscene, for crack-intros, etcetera. However, looking back through the eyes of “chipscene 2014” it makes sense to zoom in on only the demoscene during this period, as it is normally considered one of the most important precursors.

Chip Music Festival, 1990

In the demoscene there were many people who ripped songs to copy the samples, look at their tracker tricks, or just use the song for their own demo. Copying was common, but it wasn’t exactly elite to do it. There was certainly a romantic ideology of originality at work. But I’m not so sure about ascribing a technological purism to the demoscene of that time. Sure, people loved their machines. But most sceners eventually moved on to new platforms (see Reunanen & Silvast). So I’m not sure that this generation would be the antithesis to fakebit. In fact, when the chipmusic term first appeared around 1990 it referred to sample-based Amiga music that mimicked the timbres of the PSG soundchips and the aesthetics of game music.

So, in a sense, the Amiga/PC chip-generation of the 1990’s (when the 8-bit demoscenes were very small) was actually not so far from what is called fakebit today. And that’s obviously why this big and important moment, with tens of thousands of open source chip-modules, is so often ignored in histories of chipmusic. It just doesn’t fit in. (It’s also worth noting here that many if not most 8-bit demoscene people today use emulators such as VICE or UAE to make music, and use the original hardware more like a media player.)

My theory is that the hardware fetish of the chipscene is a more recent phenomenon, established sometime in the mid-2000’s, and I think that Malcolm McLaren’s PR spree had something to do with it, regardless of the scene’s reaction. If you listen to the early releases at micromusic.net and 8bitpeoples today, you could call it fakebit if you wanted to. Just like with the Amiga chipmusic of the 1990’s. So it seems to me that this generation didn’t build much on what had been done in the demoscene, other than perhaps using tools developed there. Games, on the other hand, were a popular reference. So to me, the post-2000 generation of chipmusicians feels more like a rupture than a continuation of the previous generation (something like hobbyism->crackerscene->demoscene->trackerscene->netlabels).

At this time I was still a purist demoscene snob, and I thought that this new kind of bleepy music was low quality party/arty stuff. Still, I decided to gradually engage in it and I don’t regret it. But I was one of very few demosceners who did that. Because this was, in short, something very different from the previous chipmusic that was characterized by lots of techné and home consumption. Micromusic was more for the lulz and not so serious, which was quite refreshing not only compared to the demoscene but compared to electronic music in general (you know, IDM and drum n’ bass and techno = BE SERIOUS).

It’s funny, but when Polymeropoulou describes the third generation of the chipscene (the chipsters) it actually reminds me a bit of the early demoscene people, perhaps even during the 1980’s.

Chipsters compose chipmusic – and of course, fakebit – on a variety of platforms, including modern computers, applying different criteria, based on popular music aesthetics rather than materialist approaches. [..] Chipsters find creative ways combining avant-garde and subcultural elements in order to break through to mainstream audiences, a practice which is criticised by purists.

In the 1980’s they used modern computers to try to make something that sounded like the “real” music in the mainstream. They borrowed extensively from contemporaries such as Iron Maiden, Laserdance and Madonna and tried to make acid house, new beat, synth pop, etc. There was definitely some freaky stuff being made (“art”), along with something like comedy shows (Budbrain), music videos (State of the Art) and later on so-called design demos (Melon Dezign), and those demos appealed to people who were not sceners. And the megamixes! Here’s one from 1990:

Okay… how did we end up here? Oh yeah — my point is, I suppose, that the demoscene is not as purist as people think, and never was. At least that’s my impression of it. But even if I disagree with the generational categorization of Polymeropoulou’s text, I consider this article an important contribution to the field of techno-subcultures. Also, I am even quoted a few times, both as a researcher and as an anonymous informant. Maybe you can guess which quotes are mine, hehe.

Rewiring the History of the Demoscene: Wider Screen

April 15, 2014


Wider Screen has just released a themed issue on scene research, including scientific articles on the demoscene and the chipscene. The texts seem to be very good, although I’ve only read one so far. So let’s talk about that one!

Markku Reunanen gives a long-awaited critical examination of the history of the demoscene in How Those Crackers Became Us Demosceners. He notes that the traditional story is basically that people cracked games, made intros for them, and then started to make demos. He problematizes this boring story by describing different overlaps between the worlds of games, demos and cracks. The first time I really reflected on this issue was in Daniel Botz’ dissertation. It is indeed obvious that this is a complex story full of conflicting narratives, and we can assume that (as always) The History is based on the current dominant discourses.

What do I mean by that? Well, take Sweden as an example, where the scene was always quite large. These days the scene is usually, when it is mentioned at all, described as a precursor to games, digital arts and other computer-related parts of “the creative industries“. When Fairlight’s 25-year anniversary was reported in the Swedish mainstream media, cracking was portrayed as a legal grey area that contributed to the GDP. The forthcoming Swedish book Generation 64 seems to be telling a similar story. The scene was a bunch of kids who might have done some questionable things, but since these people are now found in Swedish House Mafia, Spotify and DICE it seems like all is forgiven. But it’s not.

Look at what the other sceners are doing today. The ones who didn’t get caught up in IT, advertising and academia. Piratbyrån, The Pirate Bay and Megaupload all involved scene people and, from the previous story, appear as a darker side of the scene. The data hippies, the copyists, the out-of-space artists, the dissidents, the fuck-ups. The people who don’t have much to gain from their scene history. But the BBS-nazis (one of them lives close to me) are also interesting to consider today, when far-right discussion boards are frequently mentioned in the media. The info-libertarians at Flashback also remind me of the scene’s (in a very broad sense) spirit of “illegal information” and VHS snuff movies that I mention in The Forgotten Pioneers of Creative Hacking and Social Networking (2009). Something else I mention there, as does Reunanen, is the swappers and traders whose sole function was to copy software around the world. But they are not really part of the history since they weren’t doing that Creative and Original work that we seem to value so dearly today.

No, the scene wasn’t a harmless place for boys-2-men, from geeks to CEOs. And also – there were plenty of people making weird stuff with home computers who were not part of the scene. People at Compunet were making audiovisual programs that looked really similar to the demoscene’s, but are usually not regarded as part of the scene. Possibly because of its apparent disconnection from the cracker scene. I’ve sometimes seen STE argue about this with sceners at CSDb. Jeff Minter did demo-like things, and people had been doing demo-like computer works for decades already. And all the hobbyists who wrote simple or strange sonic and visual experiments on their 8-bit home computers, but never released them in the scene? Well, they are effectively being distanced and erased from the history of the demoscene by not being included in archives like CSDb and HVSC that exclude “irrelevant” things.

So yeah – thumbs up to Markku for this article! Let’s not forget the provocative and subversive elements of the scene (read more about that in the 2009-article I link to above) because they might become very relevant sooner than we think.

When Misuse of Technology is a Bad Thing

March 25, 2014

I found myself in an interesting discussion a few days ago about the term hacking. We all had different perspectives on it – art, piracy, demoscene, textiles – and it was quite obvious that this term can mean maaaany different things.

It can refer to a misuse of a system. I’ve written before about how appropriation reinforces the idea of a normative use and therefore demonizes other uses, which in the long run, I argue, is dangerous. Because then we learn to accept that software has to be approved by one company before it’s made public, or that it’s ok to fine some acne-generating teenage geek billions of dollars because he used the internet “the wrong way”.

Hacking can also refer to a new use of a system. Something that hasn’t been done before. That’s often but not always the same thing as appropriation. This striving for the new is built into pop culture, but also into things like urban planning, party politics and science. Or, you know, capitalism. It has to be new and fresh! Creative! Groundbreaking! Share-holder-fantabulastic! Cooool!


But new is not always new. Retromania and remix culture mean that it’s ok to just combine or tweak two old things, and then it’s new. In fact, that’s the only thing we can do according to all these artistic and corporate views of creativity. Romantic geniuses and ideas that are not based on focus groups and “public opinions” are out of style. Steve Jobs is dead.

But these things all put the emphasis on two things: humans and results. We can also look at something else instead, which I think brings us closer to the oldschool meaning of hacking with model trains & telephone lines. The interplay between the person and the medium. Man machine. The process. I don’t mean that in some Buddhist digi-hippie kind of way, I think. No, I mean it more in a media-materialist, OOO kind of way.


Then we can say things like:

• Originality is when something is made without too many presets, samples, macros, algorithms and automated processes. The results are irrelevant, it’s the process that matters. Hm.

• It is possible to disrespect the machine much like you disrespect a person. By making it look like something it’s not. Pretending that you know it can’t do better than it actually can. Machine bullying. Human arrogance. Hm.

• Machines don’t have intended purposes per se, and we can never fully understand how they work and what they can do. To say that this is zombie media or this is unlimited computing is, from a strict materialist perspective, equally irrelevant. It is what it is. Hm.

So: Imagine if a future view of creativity or hacking would be to make the medium act as well as it can, from some sort of “medium-emic” understanding. The role of the human artist would just be to make digital media look as good as possible, sort of like a court painter. Computers understand/define human culture, humans glorify computers for computers.

Finding new combinations of ideas seems like a kind of machinic way of making stuff anyway. Completely automated book publishers might just produce trash so far, but bots are already invading peer-reviewed science (!). Pop music has been computer generated since 1956, and classical music for a few years now. But in a way, the music itself is not so important anymore because computers can put garbage in the charts anyway.

Disrespectful uses of technology are already illegal, or make you lose your warranty, or lock the console, or make it impossible to start the car, etc. Fast forward this perspective, and we have a world where artistic uses of technology might be punishable too. By death! Human arrogance leads to electric shock. Bad coding will lead to deadly explosions. Syntax error – cyberbullying detected!

So be nice to your machine. It’s the new cyberkawaii!


Text-mode Can Show Everything That Pixels Can, So…

July 16, 2013

Handmade carpet by Faig Ahmed, 2011

To say that all digital graphics consist of pixels is a bad case of essentialism that gets us stuck in a loop. Here’s the full story!

On a perfect screen, pixels are the most basic element of digital graphics. Everything that is shown on that screen can be described perfectly by pixels. Obviously. But that is just the level of appearance. If we look beyond that, there are other kinds of information and quite possibly more information, like here.

The pixel is a metaphor much like the atom is (see this). It’s useful for many purposes, but it’s a model that doesn’t reveal the whole story. The same pixel looks different depending on context. It’s changed by the screen’s colour calibration, aspect ratio and settings, and it looks different on a CRT, a beamer and a retina screen. The data of an image is not the same as the light it produces.

It would be easy to claim that the lowest common element of digital graphics is text. Anything digital can be described perfectly in text as data, code, content, algorithms, etc. After all, it’s not real computation. But it’s not that simple. As you can see in this video, it’s possible to write code by pixeling in Photoshop. So, pixels and text can be interchangeable and neither is necessarily more “low-level” than the other. Another nice example is this page, where you create “pixels” by marking the text.


From koalastothemax.com. Click and play!

In my work with text-mode.tumblr.com I’ve thought a lot about this. One conclusion is that text-mode can show everything that pixels can. By using the full block text character (█), text art works like pixeling or digital photography – as long as the resolution is high and the palette is big enough.

In other words: any digital movie or image can be perfectly converted into text-mode as long as it’s “zoomed out” enough. This sort of watch-from-a-distance style applies to many other things of course, like the printing technique halftone. Halftone is pretty textmodey, especially when you can overlay several layers of text, like on a typewriter or on the forgotten PLATO computer.
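Here’s a minimal sketch of that full-block idea in Python. It assumes the Pillow library and an “input.png” lying around – both are placeholders of mine, not part of the original text-mode workflow: shrink the picture far enough, and every pixel becomes one coloured █ character in the terminal.

from PIL import Image

def image_to_textmode(path, width=60):
    img = Image.open(path).convert("RGB")
    # Halve the height to compensate for character cells being taller than wide.
    height = max(1, int(img.height / img.width * width / 2))
    img = img.resize((width, height))
    lines = []
    for y in range(height):
        row = ""
        for x in range(width):
            r, g, b = img.getpixel((x, y))
            row += f"\x1b[38;2;{r};{g};{b}m█"   # 24-bit ANSI colour, then a full block
        lines.append(row + "\x1b[0m")            # reset colours at the end of each line
    return "\n".join(lines)

print(image_to_textmode("input.png"))

The more you “zoom out” (smaller width), the more obviously it is text; crank the width up on a big terminal and it starts to pass for a photo.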

Alright, so. Normal thinking => images consist of pixels. Abnormal thinking => pixels consist of text characters (both literally and figuratively). The carpet by Faig Ahmed above is a traditional carpet design whose top half has been pixelized into a typical “retro” pattern. The bottom shows the original, which has many similarities to other ancient crafts. And to (non-typical) text-mode works using e.g. PETSCII or ANSI.

So: digital imagery pretends to be analogue film, but it actually shares more with e.g. textiles and mosaics, which have looked digital for thousands of years. To replace the pixel metaphor with the text-mode metaphor is to bring forth the medium and its history, instead of obscuring it. It’s also a way to put more emphasis on the decoding process, since we all accept that a text looks different depending on font, character encoding, screen, etc. And that’s pretty rare in times of media convergence psychosis.

Text-mode acknowledges that its building blocks (text characters) are not some kind of essential lowest level entity, but something that always consists of something else. And that’ll have to be the moral of this story.

Media Convergence as Bubble-Bubble

April 22, 2013

I’ve complained about Bruce Sterling before, and now I’m about to do it again. The reason is this chart of platform convergence by Gary Hayes that he posted on Wired. It argues that we’re moving towards one device that can play everything. But here’s the thing:

No device can play everything. That’s just common sense, right? You can digitize a VHS tape and convert it into a format that modern media players can understand. But then it’s not a VHS tape anymore. Everything that is special about VHS has been removed. It’s a bleak imitation, at best. Sure, the difference is less if you discuss, uhm, Real Audio or executable files. But it’s still the same principle. The juicy materiality (in hardware or software) has been stripped away.

Emulators are not the same thing as the original machine. They are not worse or better – they are just different. One example is the C64 emulator for iPhone that wasn’t allowed to include BASIC. Coding is not something that the iPhone should support. So the C64 became yet another boring gaming device, in iWorld. Btw, that follows the logic of the chart, which places the C64 just before … XBOX! Lol! The point is: every remediation & convergence both adds and subtracts. Things disappear. For good and bad.

Media convergence is obviously something that’s going on, in many different ways. And when I think about it – perhaps Sterling and his crew are right. There will be a machine in the future that can do everything. Yeah. I’m pretty sure there will be. Because we’ve already had that machine so many times before. The magical device that can delete the material constraints and make your dreams come true instantly and without friction. Remember virtual reality in the 1990’s? Or home computers in the 1980’s? Or … I don’t know, beamers and wheelchairs and jet packs?

Silly comparison? Maybe a little. But we have to accept that these interface fantasies are cultural constructions that were as “real” or relevant in the 80’s as they are today. In 30 years people will patronize our fantasies just like we do today.

And when you think about it… A touch screen that you can use some fingers on? No keyboard? Unprogrammable systems, automatic surveillance, distribution monopolies… I mean. Eh?

This convergence is just a bubble-bubble. It’s not some unavoidable teleological future. Seems more like a temporary phase before we move towards divergence and paint that in terms of progress and optimism. Just like we did with the 1980’s computer market, for example. Seems pretty likely to me.

Delete Or Die #1 – Why Subtraction Beats Production

April 15, 2013


Everything that we do is to delete things. We don’t create or add, we subtract and remove. Anyone who reads this text deletes my original intentions. The choice to read this text is a choice that excludes gazillions of other options. So thanks for staying!

Science agrees. In quantum theory, the world is a sea of virtual potentials and whatever happens is not much compared to what did not happen. It is only a drop in the ocean. According to some economists, capitalism thrives on destroying the past. It deletes previous economic orders and the current value of existing products, in order to create new wealth. Among some posthuman philosophers, humans are no longer thought of as creators, but as sculptors or signal filters. We receive signals and filter them according to the taste of our system. If it doesn’t make sense, it gets deleted. I guess cybernetic theorists and cognitive psychologists might agree on that one?

One of the phreshest cyberd00dz, senor Nick Land, once wrote that organization is suppression. Any kind of organization – imposed on anything from cells to humans – deletes more than it produces. This of course includes modern technologies like search engines and augmented reality – more about that in a minute.

So: the most productive thing you can do is to increase the desire to delete. One easy way of doing that is to use sedatives. These are the drugs o’ the times – a reaction to the cocaine-induced individualism of the 1980s that was caused by the psychedelic ecologism of the 1960s. Nowadays we don’t tune in and turn on, we turn off and drop out. Artists do it like this while most people do it by watching TV or using “smart technologies” that delete decisions for you. We need censorship, even if we think it’s wrong. Delete or die!


Let’s look at a few very different examples that relate to this. If this all seems very confusing to you, first consider that the only way to be creative today is to be non-creative by e.g. stealing & organizing instead of “creating original content”. From plunderphonics in the 80’s to the mainstream copyright infringement known as social media — now the next step is to start removing things.

Nice idea, but how useful would that be? Well, I experimented for a while with filling the memory with crap, loading a music program, and then starting to remove the crap. Like a sculptor. And the idea was to make “real music” and not only noise, of course. Both challenging and fun! But anyway, let’s back up a bit:

Subtraction is all around us all the time. It’s how light and colour work, and how some forms of sound work. Our own brains are really good at it too. We perceive and process only a fraction of all the input our senses can take in.

Another almost-naturalized form of subtraction, but in the arts, is the removal of content to reveal the form (uh, or was it the other way around?). I guess that’s what a lot of art of the 1900s was about? Abstractionism and minimalism, space and non-space, figure-ground oscillations, and so on. Take things out to reveal something we didn’t know before. Two unexpected examples: Film the Blanks and Some Bullshit Happening Somewhere.

Another rather recent thing is Reverse Graffiti. It doesn’t add paint, but removes e.g. dirt & dust instead. Graffiti can also be removed by adding paint over it, which some people jokingly call art. Or perhaps doing graffiti by carving the walls is more relevant?

Censorship is another topic. Here is a silly one where naked bodies are censored and the black boxes form new shapes and stuff. I suppose censorship could also include net art things such as Facebook Demetricator and Text Free Browsing. Also, Intimidad Romero does art by pixelizing faces.

On the more techy side, Diminished Reality is the opposite of augmented reality, and seems to be very controversial to people. More so than augmented reality, probably because we think we’ll “miss out” on stuff instead of getting “more” like augmented reality promises. Whitespace is, I guess, a tongue-in-cheek project: a programming language that ignores normal text and only uses space, tab and newline instead. A favourite of mine is the game Lose/lose, where you play for the survival of your hard drive’s files.

Some more examples:

 

For me these examples show how rich the field of DELETE actually is. And there is plenty more to say. In fact, there was a rather big plan for this project once. But instead of letting it decay away and be unrealized (?) I decided to undelete it. Oh n0ez, teh paradox! Or maybe a blog post doesn’t count as being realized? Well I think it’s pretty obvious that the ████████ was ████ ███ ████████ ████ because ████████ ███ █ so ████████ ██████████ █ ████████.

Some useful slogans:

Progress = deleting alternatives

Any thing is a reduction of some thing

Understanding = organizing = deleting

Creativity spots the ugly and deletes it

Anything that happens is nothing compared to what could have happened.

 

80% listening, 20% improvisation. A Modern Composer?

January 20, 2013

I just watched a Norwegian documentary about noise music from 2001 (ubuweb). It featured mostly Norwegian and Japanese artists, and it struck me how differently they talked about music. While the Norwegians got tangled up in complex and opposing ideas about concepts, tools and artistic freedom, the Japanese gave shorter answers with more clarity. Straight to the point.

It made me wonder (again) how human-machine relationships are thought of in Japan. Over here, it’s very controversial to say that the machine does the work. Deadmau5 did that, in a way, and I doubt he will do it again.

In the documentary, the Japanese artists said things like “When I am on stage I spend 80% of the time listening, and 20% improvising”. A very refreshing statement, and electronic musicians can learn a lot from it. Shut up and listen to what the surroundings have to offer!

There are many similar ideas in the West, especially after cybernetics and John Cage. The sound tech and the author melting together in a system of feedback. Machines are extensions of man (à la McLuhan) and we can exist together in harmony.

In the documentary, one Japanese artist turns against this idea. He doesn’t believe that the sounds and the author work closely together at all. For him, they are separated, with only occasional feedback between the two. Hmmm!


It’s an intriguing idea. When I first started reading about cybernetics, it was in the context of the dead author. Negative feedback loops that take away power from the human. I felt that my musical ideas were heavily conditioned by the tools that I used, and there was something annoying about that. How could there be harmony from that?

Maybe it’s better to think of it as a conflict. The computer is trying to steer your work in a certain way. And you want to do it another way. Like two monologues at the same time. It’s a reasonable idea, especially if you consider computers to be essentially non-graspable for humans – worthy of our respect.

However, that’s not how we think of computers. We’ve come to know them as our friends and slaves at the same time. Fun and productive! Neutral tools that can fulfill our fantasies. As long as the user is in control, it’s all good. No conflict. Just democracy and entertainment, hehe.

As much criticism as this anti-historical approach has received over the years, I think it’s still alive and kicking. Maybe especially so in the West. Computer musicians want to work in harmony with their tools. Not a conflict. “I just have to buy [some bullshit] and then I’ll finally have the perfect studio”. You heard that before? The dream lives on, right?

It’s almost like 1990’s virtual reality talk. Humans setting themselves free in an immaterial world where “only your imagination is the limit”. Seems like a pretty Christian idea, when you think of it. I doubt that it’s popular in Japan, anyway.

To conclude – it’s of course silly to generalize about Japan, judging only from a few dudes in a documentary. But I think there is still something important going on here. If anyone has reading suggestions about authorship/technology in Japan, please comment.

