This week marked a genuine milestone in human history: scientists successfully synthesised an entire bacterial genome and implanted it into a host cell, creating synthetic life.
The reaction has been entirely predictable: idealists (myself included) dreaming of a utopian future on one side, and pessimists fearful of the dangers of this type of science on the other. The potential benefits of this technology are vast, since DNA is the software of life. That we are now working out how to program in it, and how to generate synthetic protein pathways, is one of the most genuinely exciting moments of the last two hundred years.
The problem is that the naysayers might have a point. What they miss is that this is not a reason to stop the research; it is a reason to continue it with more fervour and fanaticism than ever before. Computer scientists have a phrase for the type of reasoning behind calls to stop this research: “security through obscurity.”
The argument goes that things that aren’t known are protected from malicious agency by virtue of their obscurity, whereas information that is widely distributed becomes exploitable. This ethos has rarely proven effective, since curiosity sits at the very core of the human condition. That which is obscured becomes not hidden, but a target to reach for. As the lessons of prohibition have repeatedly taught us, you can’t stop something by making it illegal. And I’d much rather have ethical, conscientious and incredibly intelligent scientists working on this research, with much thought given to how to effectively neutralise the dangers, than have it solely funded by the more villainous entities on the planet for their own (potentially unimaginable) gains.
Other things to note: I’ve decided on a potential future life plan, depending on the results of my artificial music investigation. I’ve worked out a way that, given the component algorithms I’m using, you can set emotional cue-points such that the generation incorporates specific changes to its feel at specific times. Doing so would allow the program to generate time-synced music for video. I now plan to analyse a corpus of epic film music for its database, and write an interface to allow specification of these cue-points. The ideal end aim from there is starting a company and licensing the software out to film studios for film music.
I recently discovered the musician Kattoo, and his album is one of the most stunning I’ve ever listened to. Instantly a favorite up there with This Binary Universe.
In the past year, I’ve tried to explain to both Dave and Adam this little pet theory of mine, but have stumbled on the difficulties of distilling the concept into language. So I don’t forget the idea, this is my attempt to formalise it a little bit.
The idea of consciousness has always fascinated me. What is it about our particular neuronal configuration that gives rise, through the vast incalculable complexities of chemical interaction, to what we perceive as subjective, internal existence situated within our bodies? This idea of sentience is rooted in the way we assume everyone “sees” as if watching a giant TV screen behind their eyes, that we are aware of and shape our internal monologue, and that we are situated thusly in our own body and never in anyone else’s.
So what gives rise to the subjectivity of experience? Why did “I” start perceiving as such in this body when I was born and not someone else’s, and why do I stay being the one in this body and not someone else’s?
To keep the logic a little clearer, I’m going to borrow and subvert the word “soul.” Its connotations accurately describe the subjectivity of situated perception that I want to discuss. For the sake of argument, the question then becomes “why is our soul in this body and why does it stay there?”
When it comes down to it, we know almost nothing about such questions. My theory just looks at the facts we do know: consciousness arises out of the myriad interactions of neuronal processes within our brain, leading to us perceiving our consciousness as being situated inside our heads, seeing through our eyes and thinking internally. Once such a situation occurs, our “soul” as such stays situated in that body until the brain no longer has the ability to generate it, and we become brain-dead. Or just dead.
Given these two statements, I can think of no reason that such a situated consciousness couldn’t arise twice. When a new person is born, what could possibly block “me” from being the “soul” perceiving in that body? Only that I am already perceiving in my body, and that precludes any other subjective situated perception. But when I’m dead, that lock on perception is gone, so surely “I” could perceive in a new body?
Clearly, it wouldn’t be “me.” It would be an entirely new character, with its own personality and memories and no possible link between new and old bodies, characters, or memories. But in some sense, that specific perception of situated consciousness, that seeing of the TV screen of the world (as it were…), could happen in another body once I leave this one. In a sense, I see it as a form of reincarnation.
This theory is entirely speculative, but I quite like it. Jumping through the ages without any knowledge of those jumps…What else would I have seen and what else will I see?
One of the most interesting facets of human thought (at least, for me) is its ability to shape our perception of the world. Perception and thought are intrinsically, somewhat inexplicably linked, since our perception is derived from thought and vice versa. It’s all too easy to see our perception of the world as raw input from our senses upon which our thought acts, but more and more research seems to be showing that our view of the world is intensely subjective, influenced by a vast number of processes assigning meaning and structure to the world. I had a chat with Wills the other day about a philosophical model of perception that seems congruent with this view; it can be summed up as: we perceive, we model, and then we think. Our senses push all their raw data into some form of mental buffer, this information is then processed to create an internal model of the world, and it is this model which we “see” and think about.
So when I find myself thinking and learning about a certain area a lot, I find it shapes my world view. I first noticed this when I was fourteen (or so), and playing far too many computer games for my own good. Every so often when I was out and about in the real world, I’d marvel at the amazing graphics of reality. I’d wonder about the tactics inherent to the area I was in. When I got interested in 3D modelling, I’d start seeing architecture as a series of polygons. Nowadays, this shaping seems to come from evolution.
Since my PhD investigates evolutionary dynamics, I’m constantly thinking about it, and I’ve started realising that almost everything can be described evolutionarily: it’s all a question of finding the right fitness function and the right mutation function. Take memes as an example, a term coined by Richard Dawkins to describe an idea or structured thought that spreads throughout society. Those memes that are more memorable, that make us think in some way or appreciate them more strongly, are the ones that get passed on. If a meme is distorted or mutated (perhaps through a Chinese-whispers-style mechanism), it may become more or less fit to be passed on. Evolution is occurring.
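To make the fitness-function/mutation-function framing concrete, here’s a toy simulation (entirely my own sketch, nothing from the literature): each meme is reduced to a single “memorability” score, retelling is fitness-proportional selection, and Chinese-whispers garbling is Gaussian mutation. All the names and parameter values are made up for illustration.

```python
import random

def evolve_memes(pop_size=100, generations=50, mutation_rate=0.1, seed=42):
    """Toy meme evolution: each meme is just a memorability score in [0, 1].
    More memorable memes get retold more often (selection); each retelling
    garbles the meme slightly (mutation), for better or worse."""
    rng = random.Random(seed)
    population = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness-proportional selection: memorable memes spread more.
        population = rng.choices(population, weights=population, k=pop_size)
        # Chinese-whispers mutation: small random distortion on retelling.
        population = [min(1.0, max(0.0, m + rng.gauss(0, mutation_rate)))
                      for m in population]
    return sum(population) / pop_size
```

Run it and the mean memorability climbs well above the 0.5 you’d expect from the initial random population: evolution is occurring, even in a model this crude.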
In my mind, this fits neatly as an explanation of religion. Although I believe personal faith is “fair enough,” I do disapprove of it, as I think it displays a dangerous lack of critical analysis. Religion as a level above personal faith is abhorrent. I suppose this makes me a militant atheist, as far as that phrase actually describes someone who rationally comes to the conclusion that religion is the most potent cause of human suffering and inhibition of human progress that has ever existed. Yet despite its stranglehold on intelligent discourse in vast swathes of society, it has remained powerful and influential.
Looking at it in a similar way to the evolution of memes, it’s easy to see why. The human condition has at its core wonderment and curiosity about the whys and wherefores of the magnificence of reality. It’s only recently we’ve had the mathematical descriptions of the universe that allow us to even start scratching the surface, and so older generations and civilisations necessarily had to resort to less powerful and predictive explanations for these questions.
At some point, belief and inquiry met up with social hierarchy and authoritative structure. The ideas meld into an interesting way to promote cooperation and altruistic behaviour in an instinctively selfish species, but they only work and give power if everyone adheres to them. Religions push a vast number of mental tricks and manipulations into our minds: the fear of eternal punishment, the comfort of a benevolent powerful being, the reward for good behaviour, and a set of rules to live by that don’t have to be personally derived, questioned or doubted. As our cognitive abilities and social intelligence have evolved, so has religion. The idea of hell and the devil manifest has fallen behind as our credulity is stretched, and less vulnerable ideas such as “spirituality” and “community” have become more entrenched in the ideology.

The evolution of religion is clear to anyone who looks at the simple differences between now and then, but modern-day faiths are a hodgepodge of marketable ideas so good at manipulating the critical thought of believers that they still cause immeasurable harm. Look at the anti-homosexual agenda, the paedophilia scandal, the thousands of people in Africa infected with HIV because they believed the pope when he said condoms wouldn’t help, the nine-year-old girl raped by her stepfather but not allowed an abortion because it’s “murder.”

This last one angers me most; anyone who argues that abortion from the moment of conception is murder is guilty of genocide: all the billions of bacteria they kill each day by cleaning and cooking could one day evolve into sentient beings, with just as much right to life as the thousand or so dividing cells in a uterus. Now, I’ll happily admit this argument is slightly spurious, and the question of where to draw the line on abortion is tricky.
The argument that it should be drawn at conception is pathetic, however: in what way is killing a few thousand cells which merely have the potential to become a human as bad as killing an actual human? The legal system, and our moral frameworks, cannot work on predictive judgements.
That tangent aside, I feel there’s one interesting corollary of this argument. Evolution doesn’t guarantee an evolving entity will always find a way to survive a changing environment. That atheism is on the rise worldwide is encouraging; I’ve always thought an interesting litmus test is the level of fundamentalism. Fundamentalist societies in the Middle East and fundamentalist Christians in America show this well: over the last decades they have become increasingly vocal, increasingly fundamentalist, and increasingly in the minority, as they have attempted to defend flawed and damaging world views. Fundamentalism is the last gasp of a dying entity that has no way of surviving the paradigm shift of the modern age. Religion served its purpose in the past, potentially, but at a huge cost to our species. I can only hope it’s on its way out.
Today I have mostly been listening to:
Flying Lotus: I just got his new album after seeing him play one of the most interesting, innovative, unique DJ sets I’ve ever witnessed at Bloc. The new album is similarly amazing, bizarre, and a definite progression. The best way to describe it, I think, is a chilled-out Aphex Twin making jazz. Mmmm.
Take: The “Only Mountain” EP is chilled-out, dubsteppy electronica, and it flows along really nicely. Interesting stuff.
I read a bit more about ideas for musical computational creativity the other day. Must put ideas down. I’m so unmotivated to do anything at all today though, and it’s pissing me off.
"Shadow of the future" is a term used in game theory to describe a possible explanation for altruistic behaviour: if I don’t cooperate (at some cost to myself) then others in the future will not cooperate with me, and I will be isolated.
It also quite neatly describes my feelings about any potential career I have; I simply don’t know what I want to do with my life. I have a number of criteria:
1) The obvious, but also least important one. I need money. I’m not too bothered about having lots of it, but enough to be comfortable is definitely up there.
2) It has to be significant. If I can’t in some way contribute to the wealth of human progress, if I just work as another cog in the corporate machine, doing what someone tells me when they tell me, unable to strike that key balance of creativity and critical analysis with hard work and hands-on activity, then I simply won’t be happy.
3) I have to be able to grow within it. I don’t want my job to define me, but I’m a big believer that happiness in life, and the best way to get the most out of our inevitably finite time, comes from attempting to make the best of yourself within that time: learning should be done for the sake of learning as much as anything else, and the thought of being stuck in a dead-end job (and I reckon I class a lot more jobs in that category than most people) for the rest of my life is pretty horrifying.
Based on these, I’ve been having some broad thoughts about where to go after my PhD.
I definitely want to take a gap year, in an unconventional sense. I don’t really want to go travelling; it’s not something that has ever interested me that much. But I do want to devote time to my creative hobbies. I want to take a year away from academia partially because I never have: I’ve never had time outside the educational system, and this will be my first chance. I figure if I move to London, I could get a temp job to live on (teaching people how to use computers at £15+/hr strikes me as a pretty fulfilling, easy way to do this) and then spend as much time as possible working on music production and DJing, attempting to get gigs, residencies, and released music. I don’t mind if nothing comes of it in the commercial sense: if I were to make something of myself in the music industry I would probably go for it and see where it takes me, but if it doesn’t work out I’ll be a year’s worth of practice better at these hobbies, and that will stay with me the rest of my life.
After that, however, it’s murky. I could go into academia, but exposure to the nuts and bolts of the process has given me doubts. While academic research is always presented as peer-reviewed and impartial, without any element of luck or peer selection involved, the unglamorous reality is years of drudge work attempting to churn out papers to build up your record until a professorship comes along. I’m sure it would be interesting and fulfilling, and I’d probably be happy, but I’m not sure it’s necessarily what I want for my entire career. I do like the idea of coming back to it after a hiatus somewhere else, but whether that would be possible is unknown: would I miss my chance?
I’ve looked at some post-PhD jobs, and some of them sound interesting. The starting salaries are 30-50k, so that fulfills point one, and the application domains are definitely significant: CO2 emission research, medical software, policy consultancy. I’m sure I’ll investigate these sorts of things further down the line.
I also toy with the idea of starting a company. I’ve got a lot of ideas I think could be marketable: my mix suggestion, algorithmic composition (think how many companies license music for phone systems, lobbies, lifts, corporate events and media: what if that could all be algorithmically generated, using technology my company licenses?), and general consultancy (perhaps relating to AI/agents?). The trouble is, I really don’t have a nose for business: I’d need to find a business partner to handle that side and let me do the research side. And really that’s just becoming a plan to let me indulge my desires without too much hassle, and the risks of this idea are quite significant.
I really have no idea, and it’s starting to get to me.
Netflix ran a competition last year to find a new algorithm for its recommendations. This class of algorithm, also known as collaborative filtering, is used on websites everywhere. When Amazon recommends DVDs or CDs based on your purchase history, it’s effectively saying “this other person also bought what you bought, and they liked <X>, which you haven’t bought.” The process of coming to this result is complex, and an extremely tricky problem in algorithmic analysis.
The solutions to the prize were invariably highly mathematical, and combined the results of several different techniques in a weighted manner.
One of the things I’m really interested in for my PhD is finding the simple, low space-time complexity, beautiful and elegant solutions to problems. One example of this is trust and reputation. Trust and reputation algorithms exist to allow an intelligent agent to judge how likely another agent is to fulfill its promises of performing a certain task. The most successful ones (given a certain domain) involve multi-dimensional trust, wherein multiple metrics and models (the FIRE system uses five) are combined to give an evaluation.
While these systems work and do encourage cooperation, there exists another approach that does just as well (and indeed better). It’s called tag-based cooperation, and involves each agent maintaining a set of tags, generally a vector of real numbers in the range [0,1]. When two agents meet, they decide to interact if the distance between their two tag vectors is below a threshold value. If they interact and it is successful, they move their tag values closer together, increasing the probability of future interactions between those two agents. If it is unsuccessful, they move their tags apart, and are less likely to interact subsequently. Computationally, this algorithm is trivial: processors could perform the basic test for interaction billions of times every second. It doesn’t require interaction history, or encryption, or data structures, or any of the other components of a complex trust and reputation mechanism.
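The whole mechanism fits in a few lines. This is a minimal sketch of tag-based cooperation as described above; the function names, threshold and step size are my own arbitrary choices, not values from any particular paper.

```python
import math

def distance(a, b):
    """Euclidean distance between two tag vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def maybe_interact(tags_a, tags_b, success, threshold=0.5, step=0.1):
    """Agents interact only if their tags are within `threshold` of each
    other. A successful interaction pulls the tags together; a failed one
    pushes them apart (clipped to [0, 1]). Returns the updated tag pair,
    or None if the agents were too dissimilar to interact at all."""
    if distance(tags_a, tags_b) > threshold:
        return None
    direction = 1 if success else -1
    new_a = [min(1.0, max(0.0, x + direction * step * (y - x)))
             for x, y in zip(tags_a, tags_b)]
    new_b = [min(1.0, max(0.0, y + direction * step * (x - y)))
             for x, y in zip(tags_a, tags_b)]
    return new_a, new_b
```

That really is the entire system: one distance test and one vector nudge per encounter, with no history to store and nothing to encrypt.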
Can such a simplified system be found for recommendation algorithms? I think so: just apply tags. Initially when I thought of this, I envisioned a person’s tags being their purchase history. If two people’s histories (i.e. their tag vectors) are below a certain distance apart, then recommend the set of things person A bought to person B, since they probably like similar things. This struck me as a fairly naive way of doing it, though, and how to represent a purchase history as a tag would have to be carefully considered.
The twist on this idea is the addition of a self-organising map (or Kohonen map, after Teuvo Kohonen, who invented them). These are a form of neural network that can learn a training set without supervision: the weights of the synapses automatically change to categorise the input onto a specific neuron group by fuzzy similarity. The update rule is deterministic: given the same initial neuron weights and the same input, the result will always be the same. Given the same initial neuron weights and similar input (i.e. two people who like similar things), the resultant weights will be similar, but representative of a more “fuzzy,” pattern-based view of the purchase history.
If you then set each person’s tags to be the weights of that neural network, two people will “interact” (i.e. share their likes and dislikes) if their neural networks are similar enough; but random outliers (e.g. someone who mostly likes action films but has bought a couple of romantic comedies as birthday presents) will be ironed out in the fuzzy process, whereas the previous idea would probably push them out of interaction range.
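Here’s a rough pure-Python sketch of the whole idea: train a tiny 1-D Kohonen map on each person’s purchase-history vectors, flatten its weights into that person’s tag, and compare tags by Euclidean distance. The grid size, epoch count, decay schedules and names are all illustrative guesses, not tuned values.

```python
import math
import random

def train_som(data, grid_size=4, epochs=30, seed=0):
    """Minimal 1-D self-organising map, pure Python. `data` is a list of
    purchase-history vectors; returns the flattened grid weights, which
    serve as that person's tag vector."""
    rng = random.Random(seed)
    dim = len(data[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(grid_size)]
    for epoch in range(epochs):
        # Learning rate and neighbourhood radius both decay over time.
        lr = 0.5 * (1 - epoch / epochs)
        radius = max(1.0, (grid_size / 2) * (1 - epoch / epochs))
        for x in data:
            # Best-matching unit: the neuron closest to this input.
            bmu = min(range(grid_size),
                      key=lambda i: sum((w - v) ** 2
                                        for w, v in zip(weights[i], x)))
            for i in range(grid_size):
                # Neighbours of the BMU get pulled towards x too, less so.
                influence = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                weights[i] = [w + lr * influence * (v - w)
                              for w, v in zip(weights[i], x)]
    return [w for neuron in weights for w in neuron]

def tag_distance(t1, t2):
    """Euclidean distance between two flattened tag vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(t1, t2)))
```

Because the update rule is deterministic and the initial weights come from a fixed seed, two people with similar histories end up with similar tags, while a one-off oddball purchase only nudges the fuzzy weight pattern rather than dominating a raw history comparison.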
This system would be much simpler than those proposed, but unfortunately I don’t really have time to test it. One day maybe :)
The last few days since I’ve started this blog have been pretty interesting. I’ve always been fascinated by the brain, how we learn, and how we direct our thoughts at a problem, and as usual I’ve got a good few entirely speculative and unresearched ideas on the issue.
We know fairly conclusively that we learn through reinforcement. Given any action and a set of perceptions, our brain generates a signal back to the area that generated the action effectively saying how successful we were. The problem is, the reinforcement signal is delayed, so it takes a few tries to calibrate effectively. This is why when we learn, for example, to ride a bike, we’ll fall off one way, then maybe overcompensate and go too far the other way, and slowly oscillate around the “correct” way to do it with a smaller and smaller amplitude. We’ve modelled this effect quite convincingly in reinforcement learning artificial neural networks, and the convergence effect occurs in them as well.
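The damped oscillation is easy to caricature in code. This is purely illustrative (my own toy, not a model of any real neural mechanism): each attempt we correct by slightly more than the observed error, so the estimate overshoots alternately either side of the target with shrinking amplitude.

```python
def practice(target=1.0, attempts=8, correction=1.4):
    """Each attempt we observe our error and overcorrect a little
    (correction > 1), like wobbling past vertical on a bike. The
    estimates oscillate around the target, converging geometrically."""
    estimate = 0.0
    history = []
    for _ in range(attempts):
        history.append(estimate)
        estimate += correction * (target - estimate)
    return history
```

The history comes out as 0, 1.4, 0.84, 1.064, 0.9744, …: too far one way, then too far the other, by less each time, just like the bike.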
This clearly works on a low-level: coordination, balance, that sort of thing. I also think that it occurs at a higher, more cognitive level, that our brains can reinforce (or not) more complex, multi-faceted actions and reasoning.
There are two major examples in this past week that I feel show this to me.
Firstly, I’ve been trying to teach myself to scratch on my vinyl decks. This is a ridiculously tricky endeavour, and one of the major hurdles I’ve always come up against when I’ve tried before is the lack of an easy entry point. Learning guitar or piano, there are easy exercises you can do before you get to the good bits, giving you a grounding skill to build on, but I’ve never found anything like that for scratching. Because of this, I’ve been trying an experiment: I’ve simply been scratching entirely randomly. At the start, this sounded arrhythmic and atonal, and nothing like what I’d want to show anyone. But as reinforcement learning kicked in, and my brain started subconsciously working out which movements corresponded to “that sounded pretty cool,” I’ve noticed more and more of my scratches coming out well.
The second example involves this blog. The more I write on here, the more I feel like I’m becoming “aware” of my own internal ideas. It’s teaching me how to organise and distill my thoughts in an entirely novel fashion, and it’s something that’s happening subconsciously. The more I try to think about my ideas in a way that lets me write them down clearly, the more my brain learns how to do it, to think critically about them.
Another thing I’ve been thinking about with the brain relates to this subconscious capacity. I’ve always been a big believer in the view that our consciousness is not the huge thing we think it is: barely a percent of our actual thought process. Think of what’s involved just in hearing a conversation. Your ears pick up a load of soundwaves, which are converted internally into representations of timbre, frequency and rhythm; then another layer works on that data to establish the acoustic space, derive syllables from the speech in the signal, and create higher-level representations. Then another part of the brain processes that and derives words, sentence structure, and meaning. Then more of your brain starts formulating responses; and all of this has happened by the time your conscious brain “hears” what has been said. It has always been known that the brain is massively parallel, but just how much so is the important point here.
My favorite thought experiment on this involves hearing your name in another conversation. Next time you’re talking to someone and you suddenly hear your name across the room, think about what else you heard. You’ll probably find you actually heard the two or three seconds of conversation that led up to your name being said too; but how, when it was your name that caused you to listen in? It’s because some little sub-processor in the auditory processing centre has one job: listen for your name in the background hubbub, and when it hears it, present it along with all the information in your “audio buffer” from before that moment. Because our internal audio buffer is two or three seconds long, we can pick out the information before our name was said and know what they were talking about.
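The sub-processor idea maps naturally onto a ring buffer. This is a toy illustration only (the names and the word-per-tick framing are mine, and real auditory processing is obviously nothing this tidy):

```python
from collections import deque

def listen(stream, name, buffer_len=3):
    """Keep a rolling buffer of the last few heard words; when the target
    name appears, surface the whole buffer (the context that led up to
    it) to 'conscious' attention."""
    buffer = deque(maxlen=buffer_len)
    for word in stream:
        buffer.append(word)
        if word == name:
            return list(buffer)  # the name plus its preceding context
    return None  # name never heard; nothing surfaces
```

The detector never “listens” consciously; it just keeps the last few words around, and only when the trigger fires does the buffered context get handed upward.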
In our brains, we have billions of these sub-processors. I believe they’re responsible for so much more than we give them credit for, so for the last couple of weeks I’ve been trying an experiment (of an entirely non-scientific nature). I’ve wondered for a long time where epiphanies come from, and relatedly, how intuition works. It occurred to me that an epiphany is just one (or many) of those subconscious processors finishing its evaluation and saying “hello, try this idea out for size!”
It doesn’t matter that we weren’t consciously thinking about it, because we’d set a little part of our brain thinking about it in the background. Computers do it all the time…
The question I was wondering about was this: can you direct this thought? Is it possible to run specific queries, as it were? For the last week or two, whenever I’ve had a serious problem I can’t get my head around, I’ve put my headphones on with ambient, chilled music at a low level, and then spent half an hour with my eyes closed thinking about every possible facet of the problem, exploring every avenue that could potentially shed light on it, trying to attack it from as many angles as possible. The idea is to set a load of subconscious processors running on many different potentially fruitful ideas. Then, after that half an hour, I move onto something else completely and try to forget about it.
So far, I’ve known the answer to my problem when I wake up the next day almost every time. It’s an interesting effect; much like I sometimes wake up with random songs going round my head again and again, this week I’ve been waking up with bits of my brain shouting “just do this, it’s so simple!”
If I had some time between working, pretending to scratch, and writing speculative theories on this blog, I’d definitely look up whether this is established in the psychological literature. For now though, I’m pretty content with my little discovery.
Evolution is one of the scientific sub-fields at which a large part of my curiosity and passion is directed. Darwin’s elegant theory was empirical, derived from observations of the effects of self-organising complexity in the real world. Its true power, however, is as a characterisation of complex systems in general, whether that system is life as defined by DNA sequences or a more artificial, prescribed domain.
There is a lot of direct evidence for evolution in the biological sphere, but evolution steps on a lot of toes and has come up against vocal, if entirely moronic, opposition. A great example of this (though I must stress that it appears even most creationists try to distance themselves from this individual), which I stumbled across the other day, is the PhD thesis of Dr. Kent Hovind. You can find the thesis at http://sebso.de/kent-hovind-doctoral-dissertation.pdf if you have a spare hour and want a bit of a chuckle at his expense. If I were handed this by an undergraduate, or even an A-level student, I would probably fail them. Rarely in my life have I seen such a poorly argued excuse for serious discourse. I think the thesis speaks for itself; if you want to read a good summary, check out http://www.noanswersingenesis.org.au/bartelt_dissertation_on_hovind_thesis.htm .
The reason for the controversy with evolution is, I feel, mainly because of two key facts.
Firstly, it is a simple and elegant theory. It can be expressed simply, it can be understood without significant scientific training (though much of its proof requires understanding of more advanced concepts), and yet it explains all the diversity, complexity and wonder of the world we find ourselves in. Its power is undeniable, and anyone with half a brain can be taught its basis in its entirety.
Secondly, it pushes the boundaries of science fully into contact with religion. Richard Dawkins argues, and I agree, that evolution as a theory does away with the need for religious explanation; evolution can explain the creation of life without the need for an overarching controlling or designing intelligence, and this is anathema (if you’ll excuse the irony of that phrase) to religion.
Most of the logical argument I’ve seen that attempts to discredit evolution focuses on the physical evidence, and I’ve always thought people who argue for evolution (i.e. anyone with more than 2 neurons firing) have missed a trick here.
Evolutionary algorithms and evolutionary game theory study evolution in the abstract, as a mathematical construct in which a set of entities defined by some genome (whether it be a set of real numbers, program code, etc.) are iteratively selected by a fitness function and bred together with some probability of mutation. In this sense, evolution belongs to the class of search algorithms. Analysed along these lines, evolutionary algorithms show some interesting and highly desirable characteristics: robustness to local minima and fast traversal of a wide search space. I don’t think any creationist would dispute that evolution works in theory; just whether it is what actually happened to life on earth.
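For the curious, the whole abstract construct fits in a few lines. This is a generic sketch of one common variant (tournament selection, one-point crossover, per-bit mutation; every parameter value is an arbitrary choice of mine), here maximising the trivial “count the ones” fitness function:

```python
import random

def evolve(fitness, genome_len=20, pop_size=30, generations=60,
           mutation_rate=0.02, seed=1):
    """Generic evolutionary algorithm: tournament selection, one-point
    crossover, per-bit mutation. `fitness` maps a genome (list of bits)
    to a number; higher is fitter."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # Tournament selection: the fitter of two random individuals.
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            cut = rng.randrange(1, genome_len)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in child]              # mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# "One-max": fitness is simply the number of ones in the genome.
best = evolve(sum)
```

Starting from random bitstrings averaging ten ones out of twenty, a few dozen generations of selection and mutation reliably push the best genome well above that baseline: search, not chance.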
But to dispute that evolution applies to life on earth is to dispute that any complex system, whether mathematically defined or not, will abide by such rules. Consider this thought experiment: imagine early, prehistoric, just-formed earth. There’s a huge amount going on; early earth was a pretty hostile place. Volcanoes are erupting; sulphurous seas and poisoned air predominate. Somewhere in this world, the confluence of temperature and materials being thrown against each other happens to produce a molecule that occupies a special niche in chemistry: when it reacts with a certain other molecule, which might be a copy of itself or something entirely different, another copy of the original molecule is created. This happens more and more, so that this molecule becomes abundant through self-replication. Except somewhere among these billions upon billions of molecules, something happens: it reacts with another molecule, or something goes wrong during the replication, or whatever, and a different molecule is created. This happens billions of times, and mostly these new molecules don’t self-replicate at all. But somewhere among all those billions of botched self-replications, a new molecule is created that not only self-replicates, but does so better, or faster. Slowly the numbers of this new molecule increase, and it eventually becomes dominant. Imagine this happening billions of times (and considering how many molecules there are, this isn’t an improbable assumption), with billions of errors. Slowly complexity increases, as molecules with more complex behaviours form and dominate in number, by virtue of faster replication, an ability to resist certain temperatures, or an ability to take in energy to react faster, and structures begin to form. Slowly, the molecules become cells, then multi-cellular life, and so on down the millennia until today.
I feel that when you think about it like this, evolution becomes not just another theory, but a core inevitability of the type of complex environment our universe provides. It is not something that happens by chance; it is a fundamental law of self-replication that can’t NOT happen given some system that favours certain characteristics over others.
The argument over evolution shouldn’t be having to prove that it happens; to truly disprove evolution as an explanation for life one would have to prove some reason that this process did NOT happen, that the one true inevitability of complex systems (the domination of self-replicating entities) is somehow not applicable to the world we live in. I think that argument is unwinnable.
It seems Ireland has added itself to the list of countries considering censorship of the internet (http://yro.slashdot.org/story/10/04/16/1211212/Ireland-May-Be-Next-To-Censor-the-Internet).
I think this links neatly into my previous post about the lack of understanding of modern technology holding back politics and competent governance.
The argument progresses easily. The internet is designed as a fault-tolerant network. It is (and this happens rarely in the course of human history) a design that perfectly hits its goals: if a node goes dark, the network transparently and automatically routes around it. The problem with censorship is that it causes exactly that: a node goes dark. Attempting to censor the internet means fighting against exactly what it was designed for, and it is unlikely to work. China, which has invested god knows how many billions of yuan in its great firewall, is still unable to stifle dissent. Technologies such as Tor allow encrypted routing of data. Censorship simply doesn't work; it just drives the information underground.
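The route-around-damage behaviour is easy to demonstrate on a toy network (the five-node topology below is entirely made up, and real internet routing uses protocols like BGP rather than a breadth-first search, but the principle is the same):

```python
from collections import deque

# A toy network: each node lists its directly connected neighbours.
links = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}

def route(src, dst, dark=()):
    """Breadth-first search for a path, treating 'dark' nodes as gone."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in sorted(links[node] - seen):  # sorted for determinism
            if nxt not in dark:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # destination unreachable

print(route("A", "E"))              # ['A', 'B', 'D', 'E']
print(route("A", "E", dark={"B"}))  # B censored: ['A', 'C', 'D', 'E']
```

Censoring node B changes nothing for the user at A; traffic just flows via C instead. Only by darkening a chokepoint every path depends on (here, D) can you actually cut E off, and the real internet has very few such chokepoints.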
There are strong parallels with prohibition here. It's a commonly accepted truism nowadays that prohibition simply doesn't work, whether it concerns drugs, alcohol, or in this case information. When the government makes something illegal that people want, people find other ways of obtaining it, and censorship is no different. Encrypting your travels across the internet is unbelievably easy, and when everyone decides to do so - so they can access whatever site the incompetents pretending to represent the populace have deemed unsuitable - it suddenly makes everything worse: the site is still accessible, but now no one knows who's accessing it, and any data that IS important is lost to encryption (and I'm sure the govt. will have a lot to say about terrorist communication with respect to this, leading to an interesting if slightly spurious argument: is a govt. that pushes censorship encouraging terrorism?). It's lose-lose, really.
That the maliciously ignorant authorities in countries trying to censor the internet (Australia, UK, Ireland, France - not China as I feel their govt. is more sinister than incompetent) haven’t realised the futility of such mandated information filtering is worrying, and angers me more than I could practically express in words.
I’m going to preface this by saying I am in no way educated on political theory, psychology, or any information whatsoever that would inform and probably discredit what I’m going to say next.
I watched the leaders' debate last night, along with (I'm hoping) a large proportion of the country. It was illuminating in many ways, but what really stood out to me was the difference in political style between Nick Clegg and the other two. My political views are fairly congruent with the Lib Dems' anyway, so when I say I thought Clegg was the clear winner I'm probably not adding much of value to the debate. There were, however, some rather sinister overtones that reshaped my views of the blue and red inputs to the political system.
I think Clegg said it best when he asserted “the more I listen to these two attack each other, the more they sound the same.” Indeed, that was more prescient than I think he realised. The continuous bleatings of the same tired sob stories (though Clegg wasn’t entirely innocent on that front) and populace-pandering platitudes struck me as the last gasps of a pre-technological mindset that can’t really grasp the realities of today.
Look at the evidence: 180,000 tweets during the debate (though I am loath to use Twitter statistics in any sort of serious discussion), live streams on the internet, and the BBC alone had upwards of 500 comments on its feedback page for the topic by the time the leaders had finished their summing-up speeches. Technology is not just another single-issue topic for the political mindset of the now; it is a core, underlying paradigm shift that the Labservatives simply don't understand. Nor am I suggesting that the Lib Dems are fully utilising the benefits of the switched-on generation, but their promises on civil rights and repeal of the draconian Digital Economy Bill are a step in the right direction.
I feel that my generation, probably a bit of the generation above me, and most certainly the generation after (although they probably can't vote yet) have grown up with a communications medium that exposes us to hundreds, even thousands, of viewpoints very quickly. We've been forced to learn how to think critically about the information we're exposed to in a way that our parents and grandparents didn't - I can't imagine getting my news from only one source, be it the radio or the TV. In any one day, I'll read, several times over, stories and opinion pieces from the 18 different news sites in my bookmarks, as well as those chosen for me by sophisticated (and hopefully unbiased) algorithms that know what kind of things I like to read about. The opinions not just of those paid to form opinions in a professional manner but also of those who simply want to say their piece are displayed across the world in that never-ending stream of 0s and 1s, and if you didn't learn how to find the diamonds in the rough there would be no way of assimilating it all. Various media outlets, looking for a sensationalist story, have proclaimed the fear of "information overload." Invariably I've found the authors' biographies give a clear indication of the source of that fear: they're over thirty.
Now, this is a remarkably strong, and possibly arrogant, assertion. I don't deny that, nor that it is a generalisation: there will be many over thirty who are exceptions, and many in my generation who have not been exposed to such an overload of information. My point is this: technology today facilitates an unbelievable torrent of data, and by virtue of growing up with it my generation has most likely learned the best ways to assimilate it.
This point isn’t as tangential as it seems. Given a markedly stronger ability to identify worthless information, or, indeed, outright lies, I have no doubt in my mind that this generation is (because of this reason and others unrelated to technology) the most politically skeptical there has ever been.
That Brown and Cameron nonetheless kept pushing their overly polished, substanceless non-policies, and resorted more often than not to personal attack, just shows how out of touch they are, and I have no doubt that many, many people saw right through them. The learnt skill of critical information evaluation must play a part in this. This is where I found Clegg at his most refreshing: laying out simple point-by-point policies without resorting to personal attacks, layers of fluff or (for the most part) emotional hooks. Up until this debate I strongly agreed with Lib Dem policies but had reservations about their ability to "act" the governing party. After watching it, Clegg is the only one I can bear to think of as representing our country.
The snap polls after the debate almost universally put Clegg as the winner, in some cases by a margin as wide as 20%. Almost every comment I have seen online (and remember, the majority of people online, especially on the sites I read, are of the internet generation) did indeed see right through the vapid commentary of the two biggest parties. Whether this would have happened without the effect described above I don't know, but I like to think it played a part. I'm a massive idealist, and the thought that my generation might be informed enough to disregard the negative effects of tradition, dogma and uninformed allegiance, and rely instead on reasoned analysis of empirical evidence, is an encouraging one.
I think one of the most interesting statistics to come out of this election will be how many young people vote. The demographic I sit in by virtue of my age is unbelievably apathetic on this front, often coming last in the proportion that turns out. I think that will change this election - enough people are talking about it on Facebook that you can't help but notice something is going on, and enough people are so disillusioned with the death throes of a corrupt, moronic old guard that they want something new. I live in hope :)