Thursday 21 November 2013

Radical Technology and Institutional Pathology

I find myself in a curiously paradoxical situation - more about that in a minute. But first, I think we're currently in an "it's all been done" moment in technology: we've got the internet! social software! big data! mobile apps!... what more could we want? It's been done. Let's settle down and get used to it!

But we've been here so many times before, as I pointed out here: http://dailyimprovisation.blogspot.com/2013/10/towards-2023-education-rediscovered.html. It's the '1992' moment: then people said: Hey, we've got Windows! and Macs! and MS Office!... what more could we need? Patterns repeat. But they repeat in such a way that whatever comes next will be completely unlike whatever was 'next' last time.

And what's next?? I would call it 'emotional computing', 'the feeling computer', 'aesthetic computing', and so on. It's a moment of convergence between the arts and the sciences: where the information systems by which we come to know the world are indistinguishable from the world we wish to come to know. Practically, this means a different kind of symbolic encoding of what we expect. Currently, expectations are encoded online in the form of simple transactions like "I order a book from Amazon... the book arrives in a couple of days". Financial transactions have always worked like this: "I promise to pay the bearer...". However, in most human contact, transactions of this simplistic kind are in the minority. Trust builds between individuals because of deep reflexive knowledge about each other. Trust breaks down where expectations are not met.

Individuals to whom we are attached are people we know deeply: the baby knows the mother and the mother knows the baby knowing the mother. The extent to which our own identity is constituted by these networks of attachment which themselves are the product of deep reflexive processes is something which an individualistic psychology alone cannot fathom. It's the patterns of communication which constitute individuals. It is this realisation that will drive a new kind of symbolic encoding of expectations, and a new kind of technology.

The basic issue is that increasingly rich pictures of communications are available for analysis. It's as if, without really knowing it, we are all walking around with MRI scanners examining the contents of our brains. Except that the neuroscientists may need have looked no further than the deep analysis of communications to understand the many mysteries of the psyche (MRI pictures are in fact a kind of communication - although I think visualised bowel movements rather than brains would tell us much more!). There is enough that is revealed through communicative action which can tell us about structures of expectations in individuals. There is enough that is revealed that tells us about departures from norms of behaviour which will be associated with particular feelings. The mass data that is collected about everyone gives those who have the tools to manipulate it the means to understand (and predict!) the behaviour of individuals in ways which we never imagined would be possible.

Think about it. Imagine that every character I type on this blog is fed to Google. Not only each word and phrase, but the rhythm of my typing, the pauses between words as I think, the corrections I make, and so on. How much does that reveal about me? Well, imagine that the database of every single other post I have made is similarly recorded in minuscule detail and can be compared and contrasted in nanoseconds in immediate response to what I do. It can see patterns where I have done things before; it can see where I deviate from something I have done before. Most importantly, it can see not only the unique and imaginative things I might do (occasionally!) but also grasp the routine and mundane things. Those are the things that really matter, because they are the ground upon which innovation arises.
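To make this concrete: the kind of comparison I'm imagining could, at its crudest, look something like the sketch below. This is purely illustrative - the function names and the millisecond figures are my own invention, not anything Google actually does - but it shows how a baseline of routine typing rhythm lets a machine flag a session that departs from the norm.

```python
import statistics

def keystroke_baseline(interval_samples):
    """Build a simple baseline (mean and standard deviation) from
    past inter-keystroke intervals, measured in milliseconds."""
    return statistics.mean(interval_samples), statistics.stdev(interval_samples)

def deviation_score(baseline, new_intervals):
    """Score how far a new typing session departs from the baseline,
    expressed as a z-score of the session's mean interval."""
    mean, stdev = baseline
    session_mean = statistics.mean(new_intervals)
    return abs(session_mean - mean) / stdev

# Hypothetical data: a history of routine typing...
routine = [120, 135, 128, 140, 122, 131, 138, 125]
baseline = keystroke_baseline(routine)

# ...a session that matches the routine, and a hesitant, deviant one
print(deviation_score(baseline, [124, 133, 129, 137]))  # small score: routine
print(deviation_score(baseline, [310, 280, 295, 340]))  # large score: deviation
```

Real systems would of course use far richer models than a single z-score, but the principle is the same: the mundane, repeated pattern is what makes the deviation visible.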

Could a machine work out how to respond to stimulate my creativity? Could a machine work out novelties that I might have missed? Could it imagine what I'm thinking? Could it read my mind? Or could it appear to read my mind?

When the first AI pioneers imagined 'intelligent machines' they were countered by sensible people who argued that a brain isn't a computer, that the Cartesian view which privileged thinking over being was short-sighted and produced what C.S. Lewis called "men without chests". Computers are communication machines, as Winograd and Flores, and many others, have insisted. And indeed they were right. But here's the twist. The AI pioneers were indeed wrong to think of brains as computers, but the communication-oriented people were wrong to think that communications were not constitutive of what we assumed to be "brains". They may have been the proper AI specialists!

Of course, right now we're not there yet. Things are still slow. But they will get quicker. And quicker. And faster is different, never merely faster.

So what's the paradox?? Well, how do we research and develop this technology? Answer: you go to a University. How does a University organise itself for the conduct of this research? Answer: through rigid institutional structures, evaluations, PhD programmes, etc. Why does the institution do this? Answer: to keep itself economically viable in its provision of educational products. Why can't/won't it deploy the radical technologies and practices it researches to make the research process flexible and adaptable? Answer: because it fears radical change to its operations will threaten its viability.

So once again, here's the future of technology. And you will find it everywhere in the next 5 years I reckon. Except (not for the first time) in your University!

The question I ask myself is "How the hell do we fix this?"
