Sunday, 21 May 2017

New technologies and Pathological Self-Organisation Dynamics

Because new communications technologies liberate individuals from the prevailing constraints of communication, it is often assumed that the new forces of self-organisation are beneficent. The historical evidence from massive liberations of the means of communication tells a different story. Mechanisms of suppression and unforeseeable consequences of liberation - including incitement to revolt, revolution, war and institutional disestablishment - followed the invention of printing; propaganda, anti-intellectualism, untrammelled commercialism and totalitarianism followed the telephone, cinema and TV; and the effects of the unforeseeable self-organising dynamics caused by the internet are only beginning to be apparent to us. It isn't just trolling, Trump and porn; it's the vulnerabilities that over-rationalised technical systems with no redundancy expose to malevolent forces that would seek their collapse (which we saw in the NHS last week).

What are these dynamics?

It's an obvious point that the invention of a new means of communication - be it printing or Twitter - presents new options for making utterances. Social systems are established on the basis of shared expectations about not only the nature of utterances (their content and meaning) but on the means by which they are made. The academic journal system, for example, relies on shared expectations for what an "academic" paper looks like, the process of review, citations, the community of scholars, etc. It has maintained these expectations supported by the institutional fabric of universities which continues to fetishise the journal, even when other media for intellectual debate and organisation become available. Journalism too relies on expectations of truth commensurate with the agency responsible for the journalism (some agencies are more trusted than others), and it again has resisted new self-organising dynamics presented by individuals who make different selections of their communication media: Trump.

But what happens then?

The introduction of new means of communication is the introduction of new uncertainties into the system. It increases entropy across institutional structures. What then appears to happen is a frantic dash to "bring things back under control". That is, reduce the entropy by quickly establishing new norms of practice.

Mark Carrigan spoke in some detail about this last week on a visit to my University. He criticised the increasing trend for universities to demand engagement with social media by academics as a criterion for "intellectual impact". What are the effects of this? The rich possibilities of the new media are attenuated to the few which amplify messages and "sell" intellectual positions. Carrigan rightly points out that this is to miss some of the really productive things that social media can do - not least in encouraging academics in the practice of keeping an open "commonplace book".

I'm wondering if there's a more general rule to be established relating the increase in options for communicating to the ensuing increase in uncertainty in communication. In the typical Shannon communication diagram (and indeed in Ashby's Law of Requisite Variety), there is no assumption that increasing the bandwidth of the channel affects either the sender or the receiver. The channel is there to illustrate the impact of noise on the communication, the things that the sender must do to counter noise, and the significance of having sufficient bandwidth to convey the complexity of the messages. Surplus bandwidth beyond what is necessary does not affect the sender.

But of course, it does. The communications sent from A to B are not just communications like the Twitter message "I am eating breakfast". They are also communications that "I am using Twitter". Indeed, the selection of the medium is also a selection of receiver (audience). This introduces a more complex dynamic which needs more than a blog post to unfold. But it means that as the means of communicating increase, so does the entropy of messages, and so do the levels of uncertainty in communicating systems.
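As a toy illustration of this (the messages and media below are invented), counting the choice of medium as part of what is uttered can only increase entropy: the joint entropy H(message, medium) is never less than H(message).

```python
from math import log2
from collections import Counter

def entropy(events):
    """Shannon entropy (in bits) of a list of observed events."""
    counts = Counter(events)
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Hypothetical utterances: the same messages counted alone,
# then counted together with the medium chosen to send them.
messages = ["breakfast", "breakfast", "politics", "weather"]
media    = ["twitter",   "blog",      "twitter",  "email"]

h_msg  = entropy(messages)
h_both = entropy(list(zip(messages, media)))

print(h_msg, h_both)
assert h_both >= h_msg  # the joint entropy is never smaller
```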

This is what's blown up education, and it's what blew up the Catholic church in 1517. It's also what's enabled Trump's tweeting to move around conventional journalism and the political system as if they were the Maginot Line. As the levels of uncertainty increase, the self-organisation dynamics lead to a solidification (almost a balkanisation - particularly in the case of Trump) of message-medium entities which become impervious to existing techniques for establishing rational dialogue. Government, because it cannot understand what is happening, is powerless to intervene in these self-organising processes (though it should). Instead, it participates in the pathology.

We need a better theory and we need better analysis of what's happening.

Saturday, 13 May 2017

The Evaluation of Adaptive Comparative Judgement as an Information Theoretical Problem

Adaptive Comparative Judgement is an assessment technique which has fascinated me for a long time. Only recently, however, have I had the opportunity to try it properly... and its application is not in education, but in medicine (higher education, for some reason, has been remarkably conservative about experimenting with its traditional methods of assessment!).

ACJ is a technique of pair-wise assessment where individuals are asked to compare two examples of work, or (in my case) two medical scans. They are asked a simple question: Which is better? Which is more pathological? etc. The combination of many judges and many judgements produces a ranking from which a grading can be produced. ACJ inverts the traditional educational model of grading work to produce a ranking; it ranks work to produce a grading.

In medicine, ACJ has fascinated the doctors I am working with, but it also creates some confusion because it is so different from traditional pharmacological assessment. In the traditional assessment of the efficacy of drugs (for example), data is examined to see if the administration of the drug is an independent variable in producing the patient's improvement (the dependent variable). The efficacy of the drug is assessed against its administration to a wide variety of patients (whose individual differences are usually averaged-out in the statistical evaluation). In other words, traditional clinical evaluation assumes a linear relation:
P(patient) + X(drug) = O(outcome)
where outcome and drug are shown to be correlated across a variety of patients (or cases).

ACJ is not linear, but circular. The outcome from ACJ is what is hoped to be a reliable ranking: that is, a ranking which accords with the judgements of the best experts. But it is not ACJ which does this - ACJ is not an independent variable. It is a technique for coordinating the judgements of many individuals. Technically, there is no need for more than one expert judge to produce a perfect ranking. But the burden of producing consistent expert rankings for any single judge (however good they are) will be too great, and consistency will suffer. ACJ works by enlisting many experts in making many judgements to reduce the burden on a single expert, and to coordinate differences between experts in a kind of automatic arbitration.
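A minimal sketch of this principle, under invented assumptions (ten cases with a hidden "true" severity, and judges whose comparisons are correct 85% of the time), shows how redundancy of judgements can still yield a reliable ranking:

```python
import random

random.seed(0)

# Hypothetical setup: ten cases with a hidden "true" severity,
# and judges who answer correctly most of the time.
true_severity = list(range(10))   # item i has severity i
JUDGE_ACCURACY = 0.85             # assumed probability of a correct call

def judge(a, b):
    """Return the item judged 'more pathological' of the pair."""
    correct = a if true_severity[a] > true_severity[b] else b
    wrong = b if correct == a else a
    return correct if random.random() < JUDGE_ACCURACY else wrong

# Many judgements over random pairs - the redundancy ACJ relies on.
wins = [0] * 10
trials = [0] * 10
for _ in range(5000):
    a, b = random.sample(range(10), 2)
    w = judge(a, b)
    wins[w] += 1
    trials[a] += 1
    trials[b] += 1

# Rank items by the fraction of comparisons they "won".
ranking = sorted(range(10), key=lambda i: wins[i] / trials[i])
print(ranking)  # with enough judgements this converges to [0, 1, ..., 9]
```

Real ACJ implementations are more sophisticated - for instance, fitting a Bradley-Terry or Rasch model to the judgements and choosing pairs adaptively - but the principle of redundancy compensating for the noise of individual judges is the same.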

Simply because it cannot be seen to be an independent variable does not mean that its efficacy cannot be evaluated. There are no independent variables in education - but we have a pretty good idea of what does and doesn't work.

What is happening in the ACJ process is that a ranking is communicated through the presentation of pairs of images to the collective judgements of those using the system. The process of communication occurs within a number of constraints:

  1. The ability of individual judges to make effective judgements
  2. The ease with which an individual judgement might be made (i.e. the degree of difference between the pairs)
  3. The quality of presentation of each case (if they are images, for example, the quality is important)

An individual's inability to make the right judgement amounts to the introduction of "noise" into the ranking process. With too much "noise" the ranking will be inaccurate.

The ease of making a judgement depends on the degree of difference, which in turn can be measured as the relative entropy between the two examples: if they are identical, the relative entropy between them will be zero. Equally, if images are the same, the mutual information between them will be high, calculated as:
H(a) + H(b) - H(ab)
If the features of each item to be compared can be identified, and each of those features belongs to a set i, then the entropy of each case can be measured simply as a value for H across all the values of x in the set i:
H = -Σ p(x) log p(x), for x in i
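These calculations can be sketched as follows, with invented feature codes standing in for the features read off two scans:

```python
from math import log2
from collections import Counter

def entropy(samples):
    """Shannon entropy H = -sum p(x) log2 p(x) over observed values."""
    counts = Counter(samples)
    n = sum(counts.values())
    return -sum((c / n) * log2(c / n) for c in counts.values())

def mutual_information(x, y):
    """H(a) + H(b) - H(ab), estimated from paired observations."""
    return entropy(x) + entropy(y) - entropy(list(zip(x, y)))

# Hypothetical feature codes read off two scans at the same locations.
a      = ["vessel", "lesion", "vessel", "disc", "lesion", "vessel"]
b_same = list(a)                                             # identical scan
b_diff = ["disc", "vessel", "lesion", "vessel", "disc", "lesion"]

print(mutual_information(a, b_same))  # equals H(a): identical scans share everything
print(mutual_information(a, b_diff))  # lower: the scans differ
```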

The ability to make distinctions between the different features will depend partly on the quality of images. This may introduce uncertainty in the identification of values of x in i.

What ACJ does is deal with issues 1 and 2. Issue 3 is more complex because it introduces uncertainty as to how features might be distinguished. ACJ deals with 1 and 2 in the same way as any information theoretical problem deals with problems of transmission: it introduces redundancy.

That means that the number of comparisons needed to be made by each judge depends on the quality and consistency of the ranking which is produced. This can be measured by determining the distance between the ranking produced by the system and the ranking determined by experts. Ranking comparisons can be made for the system as a whole, or for each judge. Through this process, individual judges may be removed or others added. Equally, new images may be introduced whose ranking is known relative to the existing ranking.
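One simple measure of the distance between two rankings is the Kendall tau distance - the number of item pairs the two rankings order differently. A sketch, with hypothetical scan identifiers:

```python
from itertools import combinations

def kendall_tau_distance(rank_a, rank_b):
    """Count the item pairs that the two rankings order differently."""
    pos_a = {item: i for i, item in enumerate(rank_a)}
    pos_b = {item: i for i, item in enumerate(rank_b)}
    return sum(
        1
        for x, y in combinations(rank_a, 2)
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0
    )

# Hypothetical rankings: the expert ordering and the system's output.
expert = ["scan1", "scan2", "scan3", "scan4", "scan5"]
system = ["scan1", "scan3", "scan2", "scan4", "scan5"]

print(kendall_tau_distance(expert, system))  # 1: one pair swapped
print(kendall_tau_distance(expert, expert))  # 0: perfect agreement
```

The same measure applied per judge (comparing each judge's implied ordering against the consensus) is one way of identifying the unreliable judges mentioned below.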

The evaluation of ACJ is a control problem, not a problem of identifying it as an independent variable. Fundamentally, if ACJ doesn't work, it will not be capable of producing a stable and consistent ranking - and this will be seen empirically. That means that the complexity of the judges performing ranking will not be as great as the complexity of the ranking which is input. The complexity of the input will depend on the number of features in each image, and the distance between each pair of images.

In training, we can reduce this complexity by having clear delineations of complexity between different images. This is the pedagogical approach. As the reliability of the trainee's judgements increases, so the complexity of the images can be increased.

In the clinical evaluation of ACJ, it is possible to produce a stabilised ranking by:

  1. removing noise by removing unreliable judges
  2. increasing redundancy by increasing the number of comparisons
  3. introducing new (more reliable) judges
  4. focusing judgements on particular areas of the ranking (so particular examples) where inconsistencies remain

As a control problem, what matters are the levers of control within the system.

It's worth thinking about what this would mean in the broader educational context. What if ACJ was a standard method of assessment? What if the judgement by peers was itself open to judgement? In what ways might a system like this assess the stability and reliability of the rankings that arise? In what ways might it seek to identify "semantic noise"? In what ways might such a system adjust itself so to manipulate its control levers to produce reliability and to gradually improve the performance of those whose judgements might not be so good? 

The really interesting thing is that everything in ACJ is a short transaction. But it is a transaction which is entirely flexible and not constrained by the absurd forces of timetables and cohorts of students.

Wednesday, 10 May 2017

The Managerial Destruction of Universities... but what do we do about it?

As I arrived at the University of Highlands and Islands for a conference on the "porous university", there was a picket line outside the college. Lecturers were striking over a deal agreed with the Scottish Government to establish equal pay among teaching staff across Scotland, on which college managements had reneged. The regional salary difference can be as much as £12,000, so this clearly matters to a lot of people. It was a good turnout for the picket line (always an indication of how much things matter) - similar to the one when the University of Bolton sacked their UCU rep and his wife, which made the national press.

It is right to protest, and it is right to strike. But sadly, none of this seems to work very well. Managements seem to be indestructible, and pay and conditions continually get worse.

At UHI, the porous university event was an opportunity to take the temperature of the effects of over 5 years of managerial pathology in universities across the country. The collective existential cry of pain by the group was alarming. The optimism, hope, passion and faith which is the hallmark of any vocation, and was certainly the hallmark of most who worked in education, has evaporated. It's been replaced with fear and dejection. Of course, an outside observer might remark "well, you've still got jobs!" - but that's to miss the point. People might still be being paid (some of them) but something has been broken in the covenant between education and society which has destroyed the fabric of a core part of the personal identities of those who work in education. It's the same kind of breaking of covenant and breaking of spirit that might be associated with a once healthy marriage which is destroyed by a breakdown of trust: indeed, one of my former Bolton colleagues described the spirit of those working for the institution as being like "the victims of an abusive relationship".

Lots of people have written about this. Stefan Collini has just published his latest collection of essays on Universities, "Speaking of Universities", which I was reading on the way up to Scotland. It's beautifully written. But what good does it do?

In the perverse monetised world of universities, the writing and publishing (in a high ranking journal) of a critique of the education system is absorbed and rewarded by the monetised education system. In its own way, it's "impact" (something Collini is very critical of). Weirdly, those who peddle the critique inadvertently support the managerial game. The university neutralises and sanitises criticism of itself and parades it as evidence of its 'inclusivity' and the embrace of opposing views, all the time continuing to screw lecturers and students into the ground.

A good example of this is provided by the University of Bolton, which has established what it calls a "centre for opposition studies". There are no Molotov cocktails on the front page - but a picture of the House of Commons. This is sanitised opposition - neutralised, harmless. The message is "Opposition is about controlled debate" rather than genuine anger and struggle. Fuck off! This isn't a progressive way forwards: it is the result of a cynical and endemic conservatism.

I wouldn't want to accuse Collini of conservatism in the same way - and yet the symptoms of conservatism are there, in the way that they exist in the kind of radical "history man" characters that pepper critical discourse. The main features of this?
  • A failure to grasp the potential of technology for changing the dimensions of the debate
  • A failure to reconcile deep scholarship with new possibilities for human organisation
  • A failure to suggest any constructive way of redesigning the system
If I were to be cynical, I would say that this is because of what Collini himself admits as the "comfortable chair in Cambridge" being a safe place to chuck bricks at the system. It is not really wishing to disrupt itself to the point that the chair is less comfortable.

The disruption and transformation of the system will not come from within it. It will come from outside. There's quite a cocktail brewing outside the institution. One of the highlights of the UHI conference was the presentation by Alex Dunedin of the Ragged University. Alex's scholarly contribution was powerful, but he himself is an inspiration. He exemplifies insightful scholarship without ever having set a "formal" foot inside a university. His life has been a far richer tapestry of chaos and redemption than any professor I know. Meeting Alex, you realise that "knowledge is everywhere" really means something if you want to think. You might then be tempted to think "University is redundant". But that might be going too far. However, the corporate managerialist "nasty university" will not, I think, hold sway for ever. People like Alex burn far brighter.

Another bright note: Just look at our tools! The thing is, we have to use them differently and creatively. I did my bit for this effort. I suggested to one group I was chairing that instead of holding up their flipchart paper with incomprehensible scribbles on it, and talking quickly in a way that few take in, they instead passed the phone over the paper and made a video drawing attention to the different things on their paper. So paper became a video. And it's great!

Monday, 8 May 2017

Educational Technologists: Who are we? What is our discipline?

I am an educational technologist. What does that mean?

I think we are at a key moment in the history of technology in education and there is a radical choice facing us.

We can either:

  • Use technology to uphold and reinforce the traditional distinctions of the institution. This means VLEs, MOOCs, Turnitin, etc. This enslaves individual brains and isolates them; 
The consequences of this are well summarised by Ivan Illich:
"Observations of the sickening effect of programmed environments show that people in them become indolent, impotent, narcissistic and apolitical. The political process breaks down, because people cease to be able to govern themselves; they demand to be managed."
The alternative?
  • We use technology to organise multiple human brains in institutions and outside them so that many brains think as one brain.

To do the latter, we need to think about what our discipline really is. I am going to argue that our discipline is one that crosses boundaries: it is the discipline of study into how distinctions are made, and what they do.

For many academics, the educational technologist looks after the VLE or does cool videos on MOOCs. They are also the person academics seek help from when the techno-administrative burden of modern universities becomes overwhelming: how do I submit my marks, get my students on this course, and so on. For some academics, the educational technologist is a kind of secretary - the equivalent of the secretary who would have done the academic's typing in the 1970s, when typing was not considered to be an academic activity. Some academics blame the educational technologist for the overwhelming techno-administrative nightmare that constitutes so much of academic life today.

Certainly there is a boundary between the academy and the educational technologist. Like all boundaries, it has two sides. On the one hand, the academy pushes back on the technologists: it generally treats them with suspicion (if not disdain) - partly because it (rightly) sees a threat to its current practices in the technology. The educational technologists have tried to push back on the academy to get it to change, embrace open practice, realise the potential of the technology, etc. Right now, the academy is winning and educational technologists are rather despondent, reduced to producing "learning content" in packages like "storyline" which often reproduce work which already exists on YouTube.

This situation has partly arisen because of a lack of identity among learning technologists. In trying to ape the academy, they established themselves in groups like ALT or AACE as a "discipline". What discipline? What do they read? In reality, there is not much reading going on. There is quite a lot of writing and citing... but (I'll upset a few people here) most of this stuff is unreadable and confused (I include my own papers in this). In defence of the educational technologist, this is partly because what they are really trying to talk about is so very difficult.

I believe we should admit our confusion and start from there. Then we realise that what we are doing is making distinctions. We make distinctions about learning like this:

or we might make cybernetic distinctions like this:
What is this? There are lines and boxes (or circles). 

What are the lines and boxes doing?

What are the lines around the boxes doing? (these are the most interesting)

Scientific communication is about coordinating distinctions. In coordinating distinctions, we also coordinate expectations. The academy, in its various disciplines, upholds its distinctions. However, as physicist David Bohm realised, scientists don't really communicate.

For Bohm, dialogue is a way of exploring the different distinctions we make. The demand of Bohmian scientific dialogue is to continually recalibrate distinctions in the light of the distinctions of others. More importantly, it is to embrace uncertainty as the operating scientific principle.

Scientific Dialogue is about communicating uncertainty.

If we are recalibrating, then we are continually drawing and redrawing our boundaries. But this process is controlled by more fundamental organising principles which underlie the processes of a viable organism. It's perhaps a bit like this:

Here we see the death of boundaries, and the reorganisation of the organism. Much of what goes on here remains a mystery to biologists. Some are exploring the frontiers, however. Deep down, it seems to be about information...
or ecology:

Information, semiotics and ecology all concern the making of distinctions. There are a variety of mathematical approaches which underpin this. In fact, Charles Peirce, the founder of semiotics, also founded a mathematical approach to studying distinctions. This is Peirce's attempt to fathom out a logic of distinctions:

And this is the very closely-related work of Cybernetician George Spencer-Brown:

When we talk about education, or technology, or biology... or anything... we are making distinctions.

A distinction has an inside and an outside. We usually forget the outside because we want to communicate the inside. We only know the outside of the distinction by listening.

It is the same in physics - particularly Quantum physics. 

And this is becoming important for the future of computing. Quantum computers are programmed using a kind of "musical score" - like this from the IBM Quantum Experience computer:

So what does all this mean?

Well, it means that science has to embrace uncertainty as an operating principle. Yet science in the academy is still tied to traditional ways of communicating. The academic paper does not communicate uncertainty.

To communicate uncertainty, we need to listen to the outside of our distinctions.

Our scientific institutions need to reconfigure their practices so that the distinction between education and society is realigned to progress society's scientific knowledge.

It is not to say that distinction between education and society should be removed. But that a discipline of examining ecologies of distinctions is essential for a new science of uncertainty to prosper.

It also means that new media should be deployed to communicate uncertainty and understanding on a much wider basis than can be achieved with academic papers. Where we have struggled is in being able to listen to large numbers of people in a coherent way.

This is one way in which we might do this...

It involves doctors and learners using Adaptive Comparative Judgement tools as a means of making diagnoses of retinal scans in examining Diabetic Retinopathy. Adaptive Comparative Judgement is a technique of getting lots of people to make simple comparisons in order to arrive at a ranking of those scans, with the most pathological at one end, and the normal at the other. In addition to this, there is a simple way in which learners can be trained to do this themselves:

Other technological means of getting many brains to act as one brain include BitCoin and the BlockChain that sits behind it...

The MIT Digital Certificates project is exploring ways in which a blockchain might decentralise education...

What about the distinctions between education and society? How might they be better managed?

What about the distinctions between critique and functionalism and phenomenology in education?

Well, the critique only exists because it has something to push against. The thing it pushed against exists partly because of the existence of the critique (and indeed, it embraces the critique!)... We have a knot.

We should understand how it works....

Saturday, 6 May 2017

@siobhandavies and Double Description at @WhitworthArt ... and reflections on Music and Education

Living around the corner from the Whitworth Art gallery means that I often make serendipitous discoveries. I popped into the gallery on my way into the city centre this morning and found Siobhan Davies and Helka Kaski doing this as part of their work "Material / Rearranged / to / be" - a dance work inspired by photographs from the Warburg Institute collection:

There's something very cybernetic about what they are doing - indeed, the whole installation's emphasis on action and reflection is very similar to the theme of the American Society for Cybernetics conference in 2013. This is rather better than we managed in Bolton!

Even if the cybernetician Gregory Bateson wasn't the first thinker to have considered the importance of 'multiple descriptions of the world' - particularly in the distinction between connotation and denotation - he certainly thought more analytically about it than anyone else. We live with multiple descriptions of the same thing. In cybernetic, information-theoretic terms, we are immersed in redundancy. Why does Siobhan Davies have two dancers mimicking each other? Because the dual presentation is more powerful - perhaps (and this is tricky) more real - than the single description.

In a world of austerity, what gets stripped away is redundancy. We streamline, introduce efficiencies, 'de-layer' (a horrible phrase that was used to sack a load of people in my former university), get rid of the dead wood (blind to the fact that the really dead wood is usually making the decisions!). The arts are fundamentally about generating multiple descriptions - redundancies. It's hardly surprising that governments see them as surplus to requirements under austerity.  But it spells a slow death of civilisation.

Warren McCulloch - one of the founders of Cybernetics and the inventor of Neural Networks - took particular interest in naval history as well as brains. He was fascinated by how Nelson organised his navy. Of course, there were the flag signals from ship to ship. But what if it was foggy? Nelson ensured that each captain of each ship was trained to act on their own initiative understanding the heuristics of how to effectively self-organise even if they couldn't communicate with other ships. McCulloch called this Redundancy of Potential Command, pointing out that the ultra-plastic brain appeared to work on the same principles. This was not command and control - it was generating sufficient redundancy so as to facilitate the emergence of effective self-organisation. In effect, Nelson organised the many brains of his naval captains to act as one brain.

That's what Davies does here: two brains act as one brain.

This also happens in music... but it hardly ever happens in education. In education, each brain is examined as if it is separate from every other brain. The stupidity of this is becoming more and more apparent and the desperate attempts of the education system to scale-up to meet the needs of society stretch its traditional ways of operating to breaking point. Yet it doesn't have to be like this.

In a project with the China Medical Association, at Liverpool University we are exploring how technologies might facilitate the making of collective judgements about medical conditions. Using an assessment technology called "Adaptive Comparative Judgement" each brain is asked to make simple comparisons like "which of these scans displays a condition in more urgent need of treatment?". With enough people making the judgements and each person making enough judgements, many brains act as one brain in producing a ranking of the various scans which can then be used to prioritise treatment. In practice, it feels like a kind of orchestration. It is the most intelligent use of technology in education I have ever been involved with.

Orchestration is of course a musical term. Musicians are traditionally orchestrated using a score, but there is much more going on. The fine degrees of self-coordination between players are heuristic at a deep level (much like Davies's dance). The performance and the document which describes the manner of the performance are all descriptions of the same thing too. It's redundancy all the way down.

I was mindful of this as I put together this video of my score for a piece I wrote 10 years ago called "The Governor's Veil" with a recording of its performance. In video, with the score following the sound, the double description and the redundancies become much more noticeable.


Thursday, 4 May 2017

Teaching, Music and the life of Emotions: a response to distinctions between thinking and knowing

Music makes tangible aspects of emotional life which underpin conscious processes of being – within which one might include learning, thinking, reflecting, teaching, acting, and so on. In education, we place so much emphasis on knowledge because knowledge can be turned into an object. People make absurd and indefensible distinctions between “thinking” and “knowing”, “reflecting” and “acting”, “creating” and “copying”, partly because there is no framework for thinking beyond objects; equally, nobody challenges them because they are left only with feelings of doubt or alienation that they can barely articulate. The emotional life cannot be objectified: it presents itself “through a glass, darkly”. Only the arts, and particularly music, succeed in “painting the glass”.

In Susanne Langer’s view, composers and performers are epistemologists of the emotions: in their abstract sonic constructions they articulate what they know about what it is to feel. What they construct is a passage of time over which, they hope, the feelings of listeners and performers will somehow be coordinated to the point that one person might look at another and know that they are feeling the same thing. It is a coordination of the inner world of the many; a moment where the many brains think as one brain. This is the most fundamental essence of social existence.

We each have something of the composer in us in the sense that we (sometimes) express our feelings. But composers do more than this. They articulate what they know about what it is to feel, and their expression is a set of instructions for the reproduction of a temporal form. In mathematics, this kind of expression through a set of instructions is called “E-Prime”. It’s a bit like the kind of games that people sometimes play: “think of a number between 1 and 10; double it; divide by …”. But similar in kind though such games are, they have nothing of the sophistication of music.

Great teachers do something similar to composers. To begin with, they work within an immensely complex domain. Broadly, the teacher’s job is to express their understanding of a subject. But when we inquire as to what it is to "express understanding", we are left with the same thing as in music: it is to express what it feels like to know their subject. In great hands, the subject they express and the feelings they reveal are coordinated to the point that what is conveyed is their knowledge of what it is to feel knowing what they do.

Talking about emotions is difficult. It is much easier to talk of knowledge, or to talk about creativity, or thinking in loose rhetorical terms, avoiding any specifics. It is easy to point to pictures of brain scans and make assertions about correlations between neural structures and experiences - which somehow takes the soul out of it and gives licence to bullies to tell everyone else how to teach based on the brutal "evidence" of neuroscience. Any child will know they are lying.

We can talk about emotion more intelligently. Wise heads in the past - some from cybernetics - made important progress in this. Bateson's concept of Bio-entropy is, I think, the closest description we have of what happens (I had a great chat to Ambjörn Naeve about this yesterday). We should start with music: it is the essence of connotation. It presents the richness of the interaction of multiple descriptions of the world which was at the heart of Bateson's message. It is ecological, and its ecology is so explicitly ruled by redundancies. And perhaps the most hopeful sign is that the very idea of counterpoint is beginning to take centre stage not just in the way that we analyse ecologies, but in the way that the quantum physicists are programming their remarkable computers.

Tuesday, 2 May 2017

Relative Entropy in the Analysis of Educational Video

Relative entropy is a calculation much used by quantum physicists to measure degrees of entanglement between subatomic particles. Its formal form is the Kullback-Leibler divergence:

D(P || Q) = Σ P(x) log( P(x) / Q(x) ), summed over all outcomes x

It isn't as scary as it looks (information theory rarely is!) - it's basically a measure of the distance between a probability distribution P and a distribution Q. If two subatomic particles are entangled (in other words, their behaviour will be coordinated), then the distance between the probability distributions of their behaviour (their expected states) will be zero.
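The calculation itself is only a few lines. A minimal sketch (the function name is mine; this is the standard textbook definition, in bits):

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(P || Q) in bits.

    p and q are probability distributions over the same outcomes.
    Outcomes where p is zero contribute nothing; q must be non-zero
    wherever p is non-zero."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# Identical distributions ("entangled" behaviour in the analogy above):
print(kl_divergence([0.5, 0.5], [0.5, 0.5]))   # 0.0
# Diverging distributions give a positive distance:
print(kl_divergence([0.9, 0.1], [0.5, 0.5]))
```

Strictly speaking it is not a metric (it isn't symmetric: D(P||Q) ≠ D(Q||P)), but for detecting coordination between two distributions the zero/non-zero distinction is what matters.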

That quantum physics tells us something we already know about nature and social life is reflected in the various fluffy uses of "entanglement" (e.g. Latour, Barad, etc) in the social science literature. But this is rarely done with any real insight into what it actually means. It basically seems to say "it's complex, innit!".

I'm grateful to Loet Leydesdorff for pointing me in the direction of Kullback-Leibler after I requested some measure of the synergy between the entropy values of different variables. My inspiration for asking this was in thinking about music. Music presents many descriptions to us: rhythm, melody, harmony, timbre, dynamics, etc. Something happens in music when a change in any of these dimensions is accompanied by a similar change in another dimension: so the rhythm changes with the melody, for example. At these moments, we often detect some new idea or motif - it's at these moments that things grow. Basically, I'm drawing on a musical experiment I did a few years ago.
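One simple way (not necessarily Leydesdorff's own synergy measure) to quantify how far two such descriptions "move together" is mutual information, I(X;Y) = H(X) + H(Y) - H(X,Y). A sketch with hypothetical per-bar event streams:

```python
from collections import Counter
from math import log2

def entropy(seq):
    """Shannon entropy (bits) of the distribution of symbols in seq."""
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in Counter(seq).values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y): how coordinated two descriptions are."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# Hypothetical states per bar: the melody changes exactly when the rhythm does
rhythm = ['a', 'a', 'b', 'b', 'a', 'a', 'b', 'b']
melody = ['x', 'x', 'y', 'y', 'x', 'x', 'y', 'y']
print(mutual_information(rhythm, melody))  # 1.0 bit: fully coordinated
```

If the two streams varied independently, the mutual information would fall to zero; the moments where it rises are candidates for the "new idea or motif" moments described above.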

The same kind of technique can be applied to the analysis of video. Like music, video presents many different descriptions of things. 

I've been looking at Vi Hart's wonderful video on Fibonacci numbers and spirals. 

There is a rich range of descriptions contained in this video, and I was wondering how the probability distribution of each description relates to the distributions of the others. So I've been doing some analysis, using Kinovea for video analysis, Puredata for analysis of the pitch and rhythm of speech, and YouTube to produce a transcript of the video from which I can do some entropy calculations.

After munching on the data and converting it into a form I can deal with, I've imported it all into a Jupyter notebook using Python's Pandas dataframes, queried it using SQL (via the pysqldf library), and done entropy calculations on the whole thing.
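The notebook itself isn't shown, but the shape of such a pipeline might look something like this. All column names and values here are my invention, and I use a plain Pandas groupby where pysqldf would run an SQL GROUP BY:

```python
import pandas as pd

# Hypothetical shape of the merged data: one row per observation, with a
# timestamp in seconds, a channel (text / event / pitch) and a value.
df = pd.DataFrame({
    'time':    [0.5, 1.2, 3.9, 4.1, 6.0, 7.5, 8.8, 9.9],
    'channel': ['text', 'pitch', 'text', 'event', 'pitch', 'text', 'event', 'pitch'],
    'value':   ['so', 440, 'here', 'spiral', 452, 'is', 'spiral', 440],
})

# Bucket observations into 5-second windows, as in the analysis below
df['window'] = (df['time'] // 5).astype(int)

# Equivalent of SELECT window, channel, COUNT(*) ... GROUP BY window, channel
counts = df.groupby(['window', 'channel']).size()
print(counts)
```

From these per-window counts one can build the probability distributions that the entropy calculations need.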

My code is still a bit rubbish, but it's beginning to tell me things. For example, I can look at the changes in entropy of the transcribed text over window periods as the video progresses. So here is a list of the first 20 seconds in 5-second chunks:

0-5: -0.25206419825534054
5-10: -0.24292065819269668
10-15: -0.3868528072345415
15-20: -0.3333333333333334
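A minimal sketch of such a windowed entropy calculation (the transcript windows here are invented; note that the figures above are negative, which suggests a different sign or normalisation convention was used in the notebook, since the textbook Shannon entropy below is non-negative):

```python
from collections import Counter
from math import log2

def shannon_entropy(tokens):
    """H = -sum p*log2(p) over the distribution of tokens in one window."""
    n = len(tokens)
    return -sum((c / n) * log2(c / n) for c in Counter(tokens).values())

# Hypothetical 5-second transcript windows
windows = [
    "so here is a spiral".split(),
    "and another spiral another spiral".split(),
    "spiral spiral spiral spiral".split(),
]
for i, w in enumerate(windows):
    print(f"{i * 5}-{i * 5 + 5}: {shannon_entropy(w)}")
```

The third window, containing nothing but "spiral", comes out at exactly 0 - which is precisely the behaviour of the 'events' channel discussed next.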

Now I can do the same for the 'events' which occur in the video. Here I was a bit stuck for how to describe things, so when she drew a spiral, I wrote "spiral". She draws a lot of spirals, so the entropy is uninteresting...

0-5: 0
5-10: 0
10-15: 0
15-20: 0

What? Well, maybe there's an error in my coding - but no: she keeps on drawing spirals, and a stream containing only one kind of event has an entropy of 0. I might go back and add some more detail to my analysis.

What about the pitch of her voice? That's the interesting one... I used PD to do this with fiddle~ (I first played with fiddle~ in PD years ago in improvisation - it just goes to show the importance of documenting everything that we do!)
Now the pitches are more interesting than the video events:

0-5: -0.4533324434922346
5-10: -0.366932572935196
15-20: -0.6913119495075026

Is there a correlation there? Well, the range of pitches in the voice increases with the variety of vocabulary used in the text. Perhaps that isn't surprising. But it's not surprising for a reason which has everything to do with relative entropy: the entropy of the use of words is likely to be coupled with the pitch, because with more words, there are more syllables and potentially more opportunities for variety in the pitch. Over a more extended period of time, and taking into account that events do occur in the video which increase its entropy, we can start to examine the relationship between the different aspects of what happens. 
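To put a number on "is there a correlation there?", one could correlate the two windowed entropy series directly. The values below are the post's own figures; since the 10-15s pitch window is missing, that window is dropped from both series, leaving only three points - far too few for statistical meaning, but it illustrates the method:

```python
import numpy as np

# Windowed entropies from the analysis above (10-15s dropped: no pitch value)
text_entropy  = [-0.25206419825534054, -0.24292065819269668, -0.3333333333333334]
pitch_entropy = [-0.4533324434922346, -0.366932572935196, -0.6913119495075026]

# Pearson correlation between the two series
r = np.corrcoef(text_entropy, pitch_entropy)[0, 1]
print(r)
```

With more windows (and the event channel included once it has finer-grained coding), this becomes a cheap first test of whether the descriptions are coupled.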

The fact that there is a kind of stable ritual of drawing spirals which runs alongside an increase in the variety of words spoken and pitches used suggests that the actions in the video are a kind of 'accompaniment' to the words that are spoken. To begin with, the ritual of drawing spirals is a kind of 'drone' against which other things happen. As in music, the drone maintains the coherence of the piece.

Imagine if she started differently: if she started by doing the maths straight away... then it would have a very different dynamic. The entropies would also be very different.

Saturday, 29 April 2017

@TedXUoBolton, Science and the Managerial Craving for Academic Celebrity

What has actually happened to Universities in the last 20 years? We only see indicators that things aren't what they used to be, but since those whose job it is to commentate on how things are changing are themselves enmeshed in Universities which are in the throes of these transformations, there appears to be no position from which one can gauge how far our institutions are straying from their historic origins.

So here's the latest sign: the second TEDx event to be held at the University of Bolton. For those students and some of the more junior academic staff taking part in this, it is a great opportunity, and on the face of it, a great idea. But the weird thing is that three senior managers plus a couple of professors from Bolton have been instrumental in creating a platform for themselves.

Heading the bill is Bolton's esteemed Vice-Chancellor, Professor George Holmes DL - cue dancing girls! If that's not enough of senior management (he's enough for most, including the former UCU rep), then you can listen to the Deputy Vice-Chancellor, Professor Patrick McGhee! Wait... Yes! I know you want more! So, here's what you've been waiting for - the University's one-and-only Pro-Vice-Chancellor, Kondal Reddy Khandadi (cue lots of whooping and cheers).

These illustrious speakers are mixed with academics from other institutions, including Steve Fuller - who gave a nice talk about democratising HE and science, which is something he's been on about for some time.

What is this? It looks like a kind of 'academic washing': a way of manufacturing academic credibility and bestowing it upon members of the management team. It runs alongside the 'Royal washing' which Bolton has also been engaged in recently, and the seriously odd "political washing" of the "Centre for Opposition Studies" (I find the picture of the House of Commons curious - there's something they don't understand about opposition... this is entirely sanitised! No reference to struggle, peaceful or violent, whatsoever!)

Then, I'm reminded about TED itself, and the particularly unfortunate episode with Rupert Sheldrake, whose talk at TEDx Whitechapel on "The Science Delusion" was banned. Sheldrake is one of my favourite scientists because he has the courage to ask difficult questions of people who call themselves scientists but choose to ignore those questions because it would make peer review more troublesome.

Why, then, does TED ban Sheldrake and willingly host Vice Chancellors, Pro-Vice chancellors, and Deputy Vice Chancellors who haven't got anything remotely as interesting to talk about?

So this is the barometer of where things have got to. Most scientists find Sheldrake's "morphogenetic field" idea too esoteric an explanation for the phenomenon of the simultaneous formation of new crystal structures at different points in the world. But even physics used to be more inquiring and accepting of weird ideas.

At my University, the first head of the physics department was Oliver Lodge, who did pioneering work on electromagnetic radiation in the late 19th and early 20th centuries, and was also a passionate communicator of science. Fuller's appeal for democratised science is not new, and we should examine those who did it long before. (I'm deeply grateful to Liverpool physicist Peter Rowlands for pointing me towards this.) The football at the beginning of this video is a curious distraction!

What Lodge says in this video is not a million miles away from what Sheldrake says. Lodge was not only a physicist, but a spiritualist. It's the kind of combination that would get you sacked from Universities these days. But instead of getting sacked, Lodge went on to become Vice Chancellor of the University of Birmingham.

So here's the barometer. Lodge exemplifies what scientific inquiry looks and sounds like. He epitomises the spirit of inquiry and communication which suffused the university of his time. He wasn't alone in Liverpool: Charles Sherrington was down the road exploring the neural structures in monkeys' brains (in the attic above my office!); he was a contemporary of, and studied alongside, William Bateson, who invented the term "genetics" and was father to Gregory Bateson; he worked with other key thinkers in philosophy, including Whitehead, with whom he no doubt found much in common.

This simply doesn't happen any more, and we are all the poorer for it. Instead we have managers parading themselves as academic celebrities, making pronouncements about education - about which they understand very little (as we all do). Why? Because we have turned education into a business, where status is money, and money gives status.

There are still people like Lodge around. Sheldrake is one, and so is Peter Rowlands. But they are on the fringes, many clinging to the academy in adjunct positions which save the managers money, and help to fund their yachts, and (no doubt) TedXBolton 2018!

Wednesday, 26 April 2017

Educational Content and Quantum Physics

One of the most difficult issues to understand in education is the role of content and its relation to conversation. There are the material aspects of content - physical books, e-books, webpages, interactive apps and tools... What's the difference? There are the many different ways in which teachers can coordinate conversations around the content. And there are the fundamental differences between disciplines. Tools like Maple or Matlab are great for getting students to do virtual empirical work in maths or physics. But in sociology and philosophy?

The phenomenology of these tools is radically different. It's not simply about the rather shallow view on "affordances" which was popular a few years ago. It's a much deeper ecological process (which perhaps is where Gibson's original work on affordance should - but didn't - lead us). We can say books afford "flicking through", but in effect that is to reduce the richness of experience into a function. The problem is that the systems designers, with their functionalist bent, will then try to reproduce the function in another form. We only have as many functions as we can give names to. Yet each function is implicitly dependent on every other function, and on aspects of the phenomenology which we cannot articulate.

I'm thinking about books a lot at the moment, partly because I've been learning quantum mechanics using Leonard Susskind's "Theoretical Minimum" book in conjunction with the videos of the lectures he gave at Stanford. This is the first time I've found any kind of MOOC-like experience actually worth it: a book + online lectures. Of course the objection would be that it is so expensive and resource-hungry to produce a book that this isn't practical unless you are famous like Susskind. But this isn't true any longer.

Book printing machines are extraordinary things. Combined with high quality typesetting using LaTeX-based tools like Overleaf, the results are as good as anything that Penguin can produce for Susskind. And it's cheap - with most of the self-publishers, the equivalent of Susskind's book could be less than the £9.99 charged by Penguin. All universities can now do the Open University thing at a fraction of the cost.

But what about conversation? In my case, my interest in quantum physics is being driven by a conversation with one of the physicists in Liverpool about the use of ideas of 'entanglement' in the social sciences (i.e. sociomaterial stuff, Latour, Barad, etc). Without wanting to "do a Sokal", it does seem that quantum theoretical terms are being used without deep understanding of what they refer to. Equally, it may be the case that the physics and its mathematical techniques do indeed reflect a wider reality which is already known to our common sense. I think both propositions may be true, and that one way of exploring it is to make a deep and clear connection between the physicists and the social scientists.

Might I pursue the interest in Susskind without my physicist friend? Maybe... but there'll always be a conversation somewhere where I can process this stuff. But it may not be online.

That is the crucial point - that conversations about matters of curiosity do not necessarily happen online. The current online education model saw that conversation had to happen online because otherwise the education could not be coordinated. But with a good book, and a set of video resources, we can do our own coordination independently of any central authority.

The reason why the online education model forced conversation into forums was I think because it confused learning conversation with assessment processes. In order to assess learners, obviously there has to be some record of the transactions between learners and teachers which reveals their understanding. It might also indicate to teachers new kinds of interventions which might be necessary to steer student learning in particular ways. But if the learning is left to self-organisation processes, and free choice is given to use a variety of different resources (books, webpages, etc), then what needs to be focused on is a flexible and reliable method of tracking (or assessing) development.

But it's not as simple as separating assessment from learning. Assessment is a key moment of learning - it is the moment when somebody else reveals their understanding in relation to the learner's revealing of understanding. That is a key aspect of conversation. In formal education, it can also be a formal transaction - particularly where marks are involved.

This is perhaps where the interaction with online content can be developed. Could it be an explicitly formal interaction of exchanging different understandings of things, and passing judgements about each other's understanding? In the emerging world of learning analytics, there is already something like this going on - but its lack of focus and theoretical clarity exacerbates confusion rather than deepening understanding.

Tuesday, 25 April 2017

Revisiting Cybernetic Musical Analysis

I had a nice email yesterday from a composer who had seen a video of an analysis of music by Helmut Lachenmann which I did in 2009 using cybernetic modelling. I'd forgotten about it - partly embarrassed by what I thought of as a crude attempt to make sense of difficult music, but also because it was closely related to my PhD, which I'm also slightly embarrassed by. In the intervening time, I became dissatisfied with some of the cybernetic underpinnings, and became more interested in critical aspects of theory. Being embarrassed about stuff can be a block to taking things further: I have so many almost-finished unpublished papers - often discouraged from publishing them because of the frequent nastiness of peer review. So it's nice to receive an appreciative comment 8 years after something was done.

It's made me want to collate the set of music analysis videos that I made in 2009. They are on Schumann, Haydn, Ravel and Lachenmann. In each I pursue the same basic theory about "prolongation" - basically, what is it that creates the sense of coherence and continuity of experience in the listener. The basic theory was inspired by Beer's Viable System Model - that coherence and continuity is a combination of different kinds of manipulation of the sound as "disruptive" - sound that interrupts and surprises; "coercive" - sound that reinforces and confirms expectations; "exhortational" - sound that transforms one thing into another.

What do I think about this 8 years later?

First of all, what do I now think about the Viable System Model which was the foundation for this music analysis? There is a tendency in the VSM to refer to the different regulating layers allegorically: this is kind-of what I have done with coercion, exhortation, etc. But now I think the VSM is more basic than this: it is simply a way in which a system might organise itself so as to maintain a critical level of diversity in the distinctions it makes. So it not that there is coercion, or disruption or exhortation per se... it is that the system can distinguish between them and can maintain the possibility that any of them might occur.

Furthermore, each distinction (coercion, exhortation, etc) results from a transduction. That is, the conversion of one set of signals into another. Particular transductions attenuate descriptions on one side to a particular type which the transduced system can deal with: so the environment is attenuated by the skin. But equally, any transducer is held in place by the descriptions which arise from the existence of the transducer on its other side. It's a bit like this:

Each transducer attenuates complexity from the left and generates it to the right. This is where Beer's regulating levels come from in the Viable System Model. 

The trick of a viable system - and any "viable music"  (if it makes sense to talk of that) - is to ensure that the richness of possible transductions and descriptions is maintained. In my music videos I call this richness "Coercion", "exhortation", "disruption" - but the point is not what each is, but that each is different from the others, and that they are maintained together.

Understanding transduction in this way gives scope for saying more about analysis. The precursor to a transducer forming is an emerging coherence between different descriptions of the world. A way of measuring this coherence is to use the information-theoretical calculation of relative entropy. I've become very curious about relative entropy since I learnt that it is the measure used in quantum mechanics for measuring the entanglement between subatomic particles. Given that quantum computers are programmed using a kind of musical score (see IBM's Quantum Experience interface), this coherence between descriptions expressed as relative entropy makes a lot of sense to me.

So in making the distinctions that I make in these music videos, I would now put more emphasis on the degrees of emerging relative entropy between descriptions. Effectively, coherence can be seen as repetition in what I called "coercive" moments, and this produces descriptions of what isn't repeated - of what is surprising. Surprises on larger structural layers such as harmony or tonality amount to transformations - but this is also a higher-level transduction.

The viable system which makes these distinctions is, of course, the listener (I was right about this in the Lachenmann analysis). The listener's system has to continually recalibrate itself in the light of events. It performs this recalibration so as to maintain the richness of the possible descriptions which it can generate. 

The world is fucked at the moment because our institutions cannot do this. They cannot recalibrate effectively and they lose overall complexity and variety - and consequently they lose the ability to adapt to changing environments. 

Here are the videos:





Saturday, 22 April 2017

Porous Boundaries and the Constraints that separate the Education System and Society

I'm taking part in a conference on the "Porous University" early next month, and all participants have to prepare a position statement about the conference theme. In my statement, I'm going to focus on the issue of the "boundary" and what the nature of "porosity" means in terms of a boundary between education and society.

We often think of formal education as a sieve: it filters out the wheat from the chaff in recognising the attainment and achievement of students. Sieves are porous boundaries - but they are the antithesis of the kind of porosity which is envisaged by the conference, which - to my understanding - is to make education more accessible, socially progressive, engaged in the community, focused on making practical interventions in the problems of daily life. The "education sieve" is a porous boundary which upholds and reinforces the boundary between education and society; many progressive thinkers in education want to dissolve those boundaries in some way - but how? More porosity in the sieve? Bigger holes? Is a sieve with enormous holes which lets anything through still a sieve? Is education still education without its boundary?

Part of the problem with these questions is that we focus on one boundary when there are many. The education system emerges through the interaction of multiple constraints within society - it's not just the need for disseminating knowledge and skill, but the need for keeping people off the streets (or the unemployment register, or out of their parents' houses!), or the need to maintain viable institutions of education and their local economies, or the need to be occupied in the early years of adult life, or the desire to pursue intellectual interests, or the need to gain status. These multiple constraints are constantly manipulated by government. The need to pay fees, the social exclusion which results from not having a degree (which is partly the consequence of everyone having them!), or the need for professionals like nurses to maintain accreditation are only recent examples of continual tweaking and political manipulation. Now we even have the prospect of official "chartered scientists"! Much of this is highly destructive.

Widening participation, outreach, open learning, open access resources are as much symptoms of the current pathology as they appear to be efforts to address it: it's something of an auto-immune response by a system in crisis. Widening participation? Find us more paying customers! Open Access Resources? Amplify our approved forms of communication so everyone can learn "how to fit the system" (whilst enabling academics to boost their citation statistics) - and then we can enrol them!

A deep problem lies within universities; a deeper problem lies within science. Universities are powerful and deeply confused institutions. They establish and maintain themselves on the reputations of scholars and scientists from the past - many of whom would no longer be employable in the modern institution (and many who had difficult careers in their own time!) - and make promises to students which, in many cases, they don't (and cannot possibly) keep. The University now sees itself as a business, run by business people, often behaving in irrational ways: making decisions about future strategy on a whim, or behaving cruelly towards the people they employ. There is nobody who isn't confused by education. Yet the freedom one has to express this confusion disappears in the corridors of power.

Boundaries are made to maintain viability of an organism in its environment: the cell wall or the skin is created to maintain the cell or the animal. These boundaries can be seen as transducers: they convert one set of signals from one context into another for a different context. Education, like an organism, has to maintain its transducers.

Transduction can be seen as a process of attenuating and amplifying descriptions across a boundary. The environment presents many, many descriptions to us. Our skin only concerns itself with those descriptions that are deemed to be of importance to our survival: these are presented as "information" to our biological systems. Equally, a university department acquires its own building, a sign, courses (all transductions) when a particular kind of attenuation of signals from the environment can be distilled into a set of information which the department can deal with. Importantly, both the skin and the departmental identity are established from two sides: there is the distilling of information from the environment, and there are the sets of descriptions which arise from the boundary having been formed. The liver and kidneys require the skin as much as the skin attenuates the environment.

Pathology in organisations results where organisations reconfigure their transducers so that too much complexity is attenuated. Healthy organisations maintain a rich ecology of varied distinctions. Pathological organisations destroy this variety in the name of some simple metric (like money - this is what happens in financialisation). This is dangerous because if too much complexity is attenuated, the institution becomes too rigid to adapt to a changing environment: it loses overall complexity. Equally if no attenuation occurs, the institution loses the capability of making any distinctions - in biology, this is what happens in cancer.

If we want to address the pathology of the distinction between education and society, we must address the problem of boundaries in institutions and in society. Removing boundaries is not the answer. Becoming professionally and scientifically committed to monitoring the ecology of the educational and social system is the way forwards. Since this is a scientific job, Universities should lead the way.

Tuesday, 11 April 2017

Scarcity and Abundance on Social Media and Formal Education

Education declares knowledge to be scarce. That it shouldn't do this is the fundamental message in Illich's work on education. Illich attacked "regimes of scarcity" wherever he saw them: in health, energy, employment, religion and in the relations between the sexes.

Illich's recipe for avoiding scarcity in education is what he calls "institutional inversion", where he (apparently presciently) visualised "learning webs". When we got social media and Wikipedia, it seemed to fit Illich's description. But does it?

I wrote about the passage in Deschooling Society a few years ago where Illich speaks of his "educational webs", but then qualifies it with "which heighten the opportunity for each one to transform each moment of his living into one of learning, sharing, and caring". Learning, sharing and caring. Is this Facebook?

Despite Illich's ambivalent attitude towards the church, he remained on the one hand deeply Catholic and on the other communitarian. Like other Catholic thinkers (Jacques Ellul, Marshall McLuhan, Jean Vanier), there is a deep sense of what it means for people to be together. It's the togetherness of the Mass which influences these people: the experience of being and acting together, singing together, sharing communion, and so on. The ontology of community is not reducible to the exchange of messages. It is the ontology which interests Illich, not the mechanics.

So really we have to go further and explore the ontology. Illich's "institutional inversion" needs unpacking. "Institution" is a problematic concept. The sociological definition typically sees it as a complex of norms and practices. New Institutionalism sees it as a focus of transactions which are conducted through it by its members. At some level, these descriptions are related. But Facebook and Twitter are institutions, and the principal existential mechanism whereby social media has come into being is the facilitation of transactions with customers. The trick for social media corporations is to drive their mechanisms for maintaining and increasing transactions with customers by harvesting the transactions that customers have already made.

In more traditional institutions, the work of attracting and maintaining transactions is separate from the transactions of customers. It is the marketing and manufacturing departments which create the opportunities for customer transactions. The marketing and manufacturing departments engage in their own kind of internal transaction, but this is separate from those produced by customers: one is a cost, the other is income.

The mechanism of driving up the number of transactions is a process of creating scarcity. Being on Twitter has to be seen to be better than not being on it; only by being on Facebook can one hope to remain "in the loop" (Dave Elder-Vass writes well about this in his recent book "Profit and Gift in the Digital Economy"). Formal education drives its customer transactions not only by declaring knowledge to be scarce, but by declaring status to be tied to certification from prestigious institutions. At the root of these mechanisms is the creation of the risk of not being on Twitter, not having a degree, and so on. At the root of this risk is existential fear about the future. The other side of the risk equation is the supposed trust in institutional qualifications.

Illich didn't go this far. But we should now - partly because it's more obvious what is happening. The issue of scarcity is tied-up with risk and worries about a future which nobody can be sure about. That this has become a fundamental mechanism of capitalism is a pathology which should worry all of us.

Monday, 3 April 2017

Lakatos on History and the Reconstruction and Analysis of Accidents

"Fake news" and Brexit have inspired a reaction from Universities, anxious that their status is threatened, that they must be the bastions of facts, truth and trust. The consequences of this are likely to reinforce the already conservative agenda in education. Universities have been post-truth for many years - particularly as they chased markets, closed unpopular departments (like philosophy), replaced full-time faculty with adjuncts, became status-focused and chased league table rankings, appointed business people to run them, became property developers, and reinforced the idea that knowledge is scarce. On top of that, they protected celebrity academics - even in the face of blatant abuse of privilege and power by some. The allegations against John Searle are shocking but not surprising - the scale of the sexual harassment/abuse problem (historical and present) in universities is frightening - just as the compensation claims will be crippling. Current students and society will pay for it.

What is true news? I picked up an interesting book on Lakatos by John Kadvany at the weekend (it was in the bookshop that I learnt of the Searle problem). Lakatos was interested in rationality in science, maths and history. Along with Popper, Feyerabend and Kuhn, he was part of an intellectual movement in the philosophy of science in the 1960s and 70s from which few sacred cows escaped unscathed.

Kadvany quotes Lakatos's joke that:
"the history of science is frequently a caricature of its rational reconstructions; that rational reconstructions are frequently caricatures of actual history; and that some histories of science are caricatures both of actual history and of its rational reconstructions" ("The History of Science and its rational reconstructions")
In practical life we meet this problem with history directly in the analysis of risk and accidents in institutions. In the flow of time in a hospital, for example, things happen, none of which - in the moment in which they happen - appear untoward. A serious accident emerges as a crisis whose shock catches everyone out - suddenly the patient is dying, suddenly the catastrophic error is revealed and blame is assigned, when in the flow of time in which events unfolded, nothing was noticed.

The reconstruction is reinforced with the investigation process. The narrative of causal events establishes its own reality, scapegoats, etc. Processes are 'tightened up', management strategies are reinforced, and.... nothing changes.

Lakatos's position was that historical reconstruction was "theory-laden": "History without some theoretical bias is impossible. [...] History of science is a history of events which are selected and interpreted in a normative way"

In this way, all histories are "philosophies fabricating examples... equally, all physics or any kind of empirical assertion (i.e. theory) is 'philosophy fabricating examples'"

Is it just philosophy? In organisational risk, for example, there is a philosophy of naive causal successionism, and obscure selection processes which weed out descriptions which don't fit the narrative. But the purpose of all of this is to reinforce institutional structures which themselves exist around historical narratives.

Where does Lakatos go with this? He wants to be able to distinguish "progressive" and "degenerative" research programmes. A research programme is the sequence of theories which arise within a domain (like the successive theories of physics): changes in theoretical standpoint are what he calls "problem shifts". The difference between progressive and degenerative research programmes rests on the generative power of a theory. Theories generate descriptions of observable phenomena. In order to be progressive, each problem shift needs to be theoretically progressive (it generates more descriptions) and occasionally empirically progressive. If these conditions are not met, the research programme is degenerative.

I agree with this to a point. However, the structure of institutions is an important element in the generative power of the institution's ideas about itself. Lakatos is really talking about "recalibration" of theory and practice. But recalibration is a structural change in the way things are organised.

That there is rarely any fundamental recalibration in the organisation and management of health in the light of accidents is the principal reason why their investigations are ineffective. 

Thursday, 30 March 2017

Giddens on Trust

Giddens's criticism of Luhmann, which I discussed in my last post, leads to a 10-point definition of trust. I'm finding this really interesting - not least because it was written in the early 90s, but now seems incredibly prescient as we are increasingly coming to trust technological systems, and do less of what Giddens calls "facework" (something which he took from Goffman, who in turn took it from Schutz's intersubjectivity). Whether he's right on every detail here is beside the point. I find the level of inquiry impressive.

Giddens writes:
"I shall set out the elements involved [in trust] as a series of ten points which include a definition of trust but also develop a range of related observations:
  1. Trust is related to absence in time and in space. There would be no need to trust anyone whose activities were continually visible and whose thought processes were transparent, or to trust any system whose workings were wholly known and understood. It has been said that trust is "a device for coping with the freedom of others," but the prime condition of requirements for trust is not lack of power but lack of full information.
  2. Trust is basically bound up, not with risk, but with contingency. Trust always carries the connotation of reliability in the face of contingent outcomes, whether these concern the actions of individuals or the operation of systems. In the case of trust in human agents, the presumption of reliability involves the attribution of "probity" (honour) or love. This is why trust in persons is psychologically consequential for the individual who trusts: a moral hostage to fortune is given.
  3. Trust is not the same as faith in the reliability of a person or system; it is what derives from that faith. Trust is precisely the link between faith and confidence, and it is this which distinguishes it from "weak inductive knowledge". The latter is confidence based upon some sort of mastery of the circumstances in which confidence is justified. All trust is in a certain sense blind trust!
  4. We can speak of trust in symbolic tokens or expert systems, but this rests upon faith in the correctness of principles of which one is ignorant, not upon faith in the "moral uprightness" (good intentions) of others. Of course, trust in persons is always to some degree relevant to faith in systems, but concerns their proper working rather than their operation as such.
  5. At this point we reach a definition of trust. Trust may be defined as confidence in the reliability of a person or system, regarding a given set of outcomes or events, where that confidence expresses a faith in the probity or love of another, or in the correctness of abstract principles (technical knowledge).
  6. In conditions of modernity, trust exists in the context of (a) the general awareness that human activity - including within this phrase the impact of technology upon the material world - is socially created, rather than given in the nature of things or by divine influence; (b) the vastly increased transformative scope of human action, brought about by the dynamic character of modern social institutions. The concept of risk replaces that of fortuna, but this is not because agents in pre-modern times could not distinguish between risk and danger. Rather it represents an alteration in the perception of determination and contingency, such that human moral imperatives, natural causes, and chance reign in place of religious cosmologies. The idea of chance, in its modern senses, emerges at the same time as that of risk.
  7. Danger and risk are closely related but are not the same. The difference does not depend upon whether or not an individual consciously weighs alternatives in contemplating or undertaking a particular course of action. What risk presumes is precisely danger (not necessarily awareness of danger). A person who risks something courts danger, where danger is understood as a threat to desired outcomes. Anyone who takes a "calculated risk" is aware of the threat or threats which a specific course of action brings into play. But it is certainly possible to undertake actions or to be subject to situations which are inherently risky without the individuals involved being aware how risky they are. In other words, they are unaware of the dangers they run.
  8. Risk and trust intertwine, trust normally serving to reduce or minimise the dangers to which particular types of activity are subject. There are some circumstances in which patterns of risk are institutionalised, within surrounding frameworks of trust (stock-market investment, physically dangerous sports). Here skill and chance are limiting factors upon risk, but normally risk is consciously calculated. In all trust settings, acceptable risk falls under the heading of "weak inductive knowledge" and there is virtually always a balance between trust and the calculation of risk in this sense. What is seen as "acceptable" risk - the minimising of danger - varies in different contexts, but is usually central in sustaining trust. Thus traveling by air might seem an inherently dangerous activity, given that aircraft appear to defy the laws of gravity. Those concerned with running airlines counter this by demonstrating statistically how low the risks of air travel are, as measured by the number of deaths per passenger mile.
  9. Risk is not just a matter of individual action. There are "environments of risk" that collectively affect large masses of individuals - in some instances, potentially everyone on the face of the earth, as in the case of the risk of ecological disaster or nuclear war. We may define "security" as a situation in which a specific set of dangers is counteracted or minimised. The experience of security usually rests upon a balance of trust and acceptable risk. In both its factual and its experiential sense, security may refer to large aggregates or collectivities of people - up to and including global security - or to individuals.
  10. The foregoing observations say nothing about what constitutes the opposite of trust - which is not, I shall argue later, simply mistrust. Nor do these points offer much concerning the conditions under which trust is generated or dissolved."

Tuesday, 28 March 2017

Trust and Risk (Giddens and Luhmann)

In The Consequences of Modernity Giddens critiques Luhmann's idea of trust and its relation to risk and danger. I find what he has to say about Luhmann very interesting, as I am currently exploring Luhmann's book on Risk. Giddens says:

Trust, he [Luhmann] says, should be understood specifically in relation to risk, a term which only comes into being in the modern period. The notion  originated with the understanding that unanticipated results may be a consequence of our activities or decisions, rather than expressing hidden meanings of nature or ineffable intentions of the Deity. "Risk" largely replaces what was previously thought of as fortuna (fortune or fate) and becomes separated from cosmologies.  Trust presupposes awareness of circumstances of risk, whereas confidence does not. Trust and confidence both refer to expectations which can be frustrated or cast down. Confidence, as Luhmann uses it, refers to a more or less taken-for-granted attitude that familiar things will remain stable: 
"The normal case is that of confidence. You are confident that your expectations will not be disappointed: that politicians will try to avoid war, that cars will not break down or suddenly leave the street and hit you on your Sunday afternoon walk. You cannot live without forming expectations with respect to contingent events and you have to neglect, more or less, the possibility of disappointment. You neglect this because it is a very rare possibility, but also because you do not know what else to do. The alternative is to live in a state of permanent uncertainty and to withdraw expectations without having anything with which to replace them."
Trust, in Luhmann's view, involves alternatives which are consciously borne in mind by the individual in deciding to follow a particular course of action. Someone who buys a used car, instead of a new one, risks purchasing a dud. He or she places trust in the salesperson or the reputation of the firm to try to avoid this occurrence. Thus, an individual who does not consider alternatives is in a situation of confidence, whereas someone who does recognise those alternatives and tries to counter the risks thus acknowledged, engages in trust. In a situation of confidence, a person reacts to disappointment by blaming others; in circumstances of trust she or he must partly shoulder the blame and may regret having placed trust in someone or something. The distinction between trust and confidence depends upon whether the possibility of frustration is influenced by one's own previous behaviour and hence upon a correlate discrimination between risk and danger. Because the notion of risk is relatively recent in origin, Luhmann holds, the possibility of separating risk and danger must derive from social characteristics of modernity.
Essentially, it comes from a grasp of the fact that most of the contingencies which affect human activity are humanly created, rather than merely given by God or nature.
Giddens disagrees with Luhmann, and explores the concept of trust from a different aspect than Luhmann's double-contingency-related view. The argument is important though. Trust is going to become one of the most important features of the next wave of technology: BitCoin, Blockchain, etc. are all technologies of trust. Conceptualising what this means is a major challenge for social theory.

It's worth noting that Luhmann comments on Giddens's position in his Risk book with regard to the distinction between risk and danger. Giddens rejects the distinction, but Luhmann says "we must differentiate between whether a loss would occur even without a decision being taken or not - whoever it is that makes this causal attribution"

However, Luhmann throws something into the "risk pot" which I find fascinating. He calls it "time-binding" - time, for Luhmann, is at the centre of risk (another blog post needed there). Time-binding looks very much like sociomateriality + time to me.