Wednesday, 12 July 2017

Winograd and Flores on Computers and Conversation

Winograd and Flores wrote this in 1984. Have things changed much?
Computers do not exist, in the sense of things possessing objective features and functions, outside of language. They are created in the conversations human beings engage in when they cope with and anticipate breakdown. Our central claim in this book is that the current theoretical discourse about computers is based on a misinterpretation of the nature of human cognition and language. Computers designed on the basis of this misconception provide only impoverished possibilities for modelling and enlarging the scope of human understanding. They are restricted to representing knowledge as the acquisition and manipulation of facts, and communication as the transferring of information. As a result, we are now witnessing a major breakdown in the design of computer technology - a breakdown that reveals the rationalistically oriented background of discourse in which our current understanding is embedded. 

[...] Computers are not only designed in language but are themselves equipment for language. They will not just reflect our understanding of language, but will at the same time create new possibilities for the speaking and listening that we do - for creating ourselves in language. (Understanding Computers and Cognition, p78)

Later on Winograd and Flores defend their argument that computers are tools for keeping track of commitments that people make to each other through recording speech acts. They argue:

New computer-based communication technology can help anticipate and avoid breakdowns. It is impossible to completely avoid breakdowns by design, since it is in the nature of any design process that it must select a finite set of anticipations from the situation. But we can partially anticipate situations where breakdowns are likely to occur (by noting their recurrence) and we can provide people with the tools and procedures they need to cope with them. Moreover, new conversational networks can be designed that give the organisation the ability to recognise and realise new possibilities.   (p158)
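
A toy sketch of what "recording speech acts" to track commitments might look like in code. The states, acts and transitions below are my own simplification for illustration, not Winograd and Flores' specification; the point is that a "breakdown" becomes visible as an act which the conversation's current state does not permit:

```python
from dataclasses import dataclass, field

# Possible next speech acts for each conversation state (a simplification).
VALID = {
    "opened":    {"request"},
    "requested": {"promise", "decline"},
    "promised":  {"report_done", "renege"},
    "done":      {"accept", "reject"},
}

# Which state each speech act leads to.
NEXT = {
    "request": "requested", "promise": "promised", "decline": "closed",
    "report_done": "done", "renege": "closed",
    "accept": "accepted", "reject": "promised",
}

@dataclass
class Conversation:
    requester: str
    performer: str
    action: str
    state: str = "opened"
    history: list = field(default_factory=list)

    def move(self, act: str) -> None:
        # A breakdown: the act is not possible in the current state.
        if act not in VALID.get(self.state, set()):
            raise ValueError(f"breakdown: '{act}' in state '{self.state}'")
        self.history.append(act)
        self.state = NEXT[act]

c = Conversation("teacher", "student", "submit essay")
for act in ("request", "promise", "report_done", "accept"):
    c.move(act)
print(c.state)  # accepted
```

Recording every `move` gives exactly the archive of commitments the quotation above imagines: recurrent breakdowns show up as recurrent failed transitions.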

I'm curious about this because it resonates with many of the aims of big data today. Winograd and Flores were anti-AI, but clearly the mass storage of speech acts does serve to reveal patterns of recurrence and breakdown, and so provides anticipatory intelligence (which is what Google Now does).

I think the real issue concerns a deeper understanding of language and conversation, and particularly the inter-subjective nature of conversation - that is, the con-versare nature of it (dancing). 

Saturday, 8 July 2017

Interoperability and the Attenuation of Technological Possibility: Towards Socially Responsible Hacking?

I owe at least 10 years of my career directly or indirectly to generous funding from JISC in the UK and the EU commission. The underpinning rationale which attracted this research money was interoperability in educational technology. It was the presence of the Centre for Educational Technology and Interoperability Standards (CETIS) at the University of Bolton which created the conditions for engagement in a wide range of projects. The University of Bolton, of all places, had the greatest concentration of technical experts on e-learning in the world (something continually reinforced to me when I meet colleagues from overseas: "Bolton? You were a world-leader!").

Most of the project funding opportunities have now gone. JISC survives in a very different form, on a mission to keep itself going on a commercial footing which has become problematic; the EU closed its Technology Enhanced Learning strand a couple of years ago (hardly surprising, since there were rather too many very expensive projects which delivered little - even for the EU!); and CETIS survives as an independent Limited Liability Partnership (LLP), albeit in a role of general IT consultancy for education rather than a focused mission to foster interoperability. The international agency for interoperability in education, IMS, seems to have largely ceded the debate to big commercial players like Blackboard, who talk the language of interoperability as a sales pitch but have little interest in making it happen.

Now that I am settled elsewhere, and, I'm pleased to say, soon to be joined by a former CETIS colleague, it seems like a good time to think about interoperability again. In my current role, interoperability is a huge issue. It is because of interoperability problems that my faculty (just the faculty!) runs four different e-portfolio systems. It is because of a lack of interoperability that the aggregation and analysis of data from all our e-learning platforms is practically impossible (unless you do something clever with web automation and scraping, which is my current obsession). It is because of interoperability problems that individual parts of the faculty will seek new software solutions to problems which ought merely to require front-end adjustments to existing systems, and interoperability problems, coupled with pathological data-security worries, create barriers to systems innovation and integration. Eventually, this becomes unsustainable.
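
For what it's worth, the "something clever with web automation and scraping" need not be exotic. Here is a minimal sketch using only the Python standard library; the page structure below is invented for illustration, and a real platform would need its own parsing rules (and permission to scrape it):

```python
from html.parser import HTMLParser

# Pull assessment rows out of an e-portfolio results page that offers no API.
class TableScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.rows, self.current = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False
        elif tag == "tr" and self.current:
            self.rows.append(self.current)
            self.current = []

    def handle_data(self, data):
        if self.in_cell and data.strip():
            self.current.append(data.strip())

# An invented fragment of the kind of HTML a platform might serve.
page = """<table>
<tr><td>Anna</td><td>Portfolio 1</td><td>Pass</td></tr>
<tr><td>Ben</td><td>Portfolio 1</td><td>Refer</td></tr>
</table>"""

s = TableScraper()
s.feed(page)
print(s.rows)  # [['Anna', 'Portfolio 1', 'Pass'], ['Ben', 'Portfolio 1', 'Refer']]
```

The serious point is that this is the workaround of last resort: everything the parser knows about the page breaks the moment the vendor changes their HTML, which is precisely what an API is supposed to protect against.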

So given all the effort that went into interoperability (my first JISC project was an investigation of interoperable web services in E-portfolio in 2004 - the project concluded that the available interoperability models didn't work and that something should be done about it), how have we got here?

Any new technology creates new possibilities for action. The ways of acting with a new tool may be very different from the ways of acting with existing tools. This means that if there is overlap in the functionality of one tool with another, users can be left with a bewildering choice: do I use X to do a, b and c, or do I use Y to do a, c and z? The effect of new technologies is always to increase the amount of uncertainty. The question is how institutions should manage this uncertainty.

CETIS was a government-funded institutional attempt to manage the uncertainty caused by technology. It served as an expert service for JISC, identifying areas for innovation and recommending where calls for funding should be focused. CETIS is no longer funded by government because government believes the uncertainties created by technology in education can be managed within institutions - so my university ends up with four e-portfolio systems in one faculty (we are not alone). This is clearly bad for institutions, but not bad in terms of a libertarian philosophy to support competition between multiple providers of systems. Having said this, the interoperability battle was lost even when CETIS was flourishing. The dream of creating an educational equivalent of MIDI (which remains the golden child of systems interoperability) quickly disappeared as committees set about developing complex specifications for e-portfolio (LEAP, LEAP2, LEAP2a - see http://cetis.org.uk/leap2/terms/index), the packaging of e-learning content (SCORM, IMS Content Packaging), the sequencing of learning activities (IMS Learning Design, IMS Simple Sequencing), and more recently, learning analytics (xAPI).

All of this activity is bureaucratic. Like all bureaucratic processes, the ultimate result is a slowing down of innovation (importantly, this is NOT what happened with MIDI). Whilst technology creates new possibilities, it also creates new uncertainties, and bureaucratic processes act as a kind of weir to stem the flow of uncertainties. Institutions hate uncertainty. In the standards world, this is achieved by agreeing different shaped boxes into which different things can be placed. Sometimes the boxes are useful: we can ask a vendor of an e-portfolio system, "Does it support LEAP2a?" They might say "yes", meaning that there is an import routine which will suck in data from another system. However, much more important is the question "Does it have an API?" - i.e. can we interact with the data without going through the interface, and do new things which you haven't thought about yet? The answer to this is almost always no! The API problem has become apparent with social media services too: APIs have become increasingly difficult to engage with, and less forthcoming in the data they provide. The reason is simple: each of the clever things you might want to do with the data, the company wants to provide as a new "premium service".

An alternative to the institutional bureaucratic approach to the interoperability problem would manage the uncertainties created by technology in a different way: embrace new uncertainties rather than attenuate them, and create situations within institutions where processes of technical exploration and play are supported by a wide range of stakeholders. One of the problems with current institutionally attenuative approaches to technology is that the potential of technology is underexplored. This is partly because we are bad at quantifying the new possibilities of any new tool. However, in working with most institutional tools, we quickly hit barriers which dictate "We can't do that", and that's the end of the story. Yet there are usually ways of overcoming most technical problems. This is what might be called the "socially responsible hacking" approach to interoperability. With the failure of bureaucratic interoperability approaches, this may be the most productive way forwards.

Socially Responsible Hacking addresses the uncertainty of new technology in dialogue among the various stakeholders in education: programmers who see new ways of dealing with new and existing tools, teachers who seek new ways of organising learning, managers who seek new opportunities for institutional development, learners who seek new ways of overcoming the traditional constraints of institutions, and society within which educational institutions increasingly operate as something apart, rather than as an integral component. 

Saturday, 1 July 2017

Ivory Towers and the Grenfell Tower: The problem with Evidence

The Grenfell Tower fire represents a collapse of trust in expertise and evidence, and will bring about a reawakening of scepticism. Newsnight's report on "How flammable cladding gets approved" - http://www.bbc.co.uk/news/uk-40465399 - raises questions about the role of evidence beyond fire safety. In health, education, welfare, economics and housing policy, evidence is the principal aid to decision-making. What Enid Mumford calls "dangerous decisions" are supported by studies which demonstrate x or y to be the best course of action. The effect of these studies is to attenuate the range of options available to be decided between. Of course, in that attenuation, many of the competing descriptions of a phenomenon or subject are simplified: many descriptions are left out; some voices are silenced. Usually, the voices that are silenced are those "on the edge": the poor, immigrants and the occasional "mad professor". From Galileo to Linus Pauling, history tells us that these people are often right.

Understanding "evidence" as "attenuation" helps us to see how easily "evidence-based policy" can become "policy-based evidence". Evidence can be bent to support the will of the powerful. The manifestations of this exist at all levels - from the use of econometrics to produce evidence to support austerity, to the abuse of educational theory in support of educational interventions (of which so many educational researchers, including me, are guilty). It helps academics to get published and to raise our status in the crazy academic game - and, once established in the sphere of the University, the habit sticks. Effective decision-making is intrinsic to effective organisation. If organisational pathology creeps in, decision-making within the pathological organisation will be constrained in ways which obscure real existent problems.

The deeper problems concern academia's and society's allergy to uncertainty. We hold to an enlightenment model of scientific inquiry, with closed-system experiments and the identification of causal relations through the production of event regularities. Too often we pretend that the open systems with which we engage are closed systems whose event regularities are no longer physical events but statistical patterns. Stafford Beer's joke that "70% of car accidents are caused by people who are sober" - entailing that we should all drink and drive - highlights the danger of any statistical measure: it is an attenuation of descriptions, and often an arbitrary one at that.
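
The joke turns on a base rate, and the arithmetic is worth making explicit. With invented but plausible numbers:

```python
# Suppose 95% of drivers are sober, and sober drivers cause 70% of accidents
# (the figures are invented to match the shape of Beer's joke).
sober_share, drunk_share = 0.95, 0.05
sober_accidents, drunk_accidents = 0.70, 0.30

# Per-driver risk: share of accidents divided by share of drivers.
sober_risk = sober_accidents / sober_share   # ~0.74
drunk_risk = drunk_accidents / drunk_share   # ~6.0

print(round(drunk_risk / sober_risk, 1))  # 8.1
```

The attenuated description ("sober drivers cause most accidents") is perfectly true, and wholly misleading: the drunk minority is roughly eight times riskier per driver.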

The computer has changed the way we do science, and in almost all areas of inquiry from the humanities to physics, probabilities are what we look at. These are maps of uncertainty, not pointers to a likely successful outcome, or a statistically proven relation between an independent variable and a probability distribution. What is an independent variable, after all? It is a single description chosen out of many. But its very existence is shaped by the many other descriptions which are excluded by its isolation. And we don't seem to care about it! I review endless depressing papers on statistical approaches to education and technology, and I see these assertions being made without the slightest whiff of doubt - simply because that is how so many other papers which are published do it. I reject them all (although always gently - I hate horrible reviews - but always inviting authors to think harder about what they are doing).

Uncertainty is very difficult (probably impossible) to communicate through the medium of the academic journal article. The journal article format was devised in 1665 for an enlightenment science which is radically different from our own. Of course, in its time, the journal was radical. The effect of printing on a new way of conducting and communicating science was only just opening up. Printing was doing to the academic establishment what it had done to the Catholic church a century before. Enlightenment scholars embraced the latest technology to harness their radical new practices.

We should be doing the same. The experiments on building cladding are easily demonstrable on YouTube. Equally, uncertainties about scientific findings can be expressed in rich ways using new media which are practically impossible in the journal. The scientists should learn from the artists. Furthermore, technology provides the means to democratise the making of descriptions of events. No longer is the description of an event the preserve of those with the linguistic skill to convey a compelling account in print. The smartphone levels the playing field of testimony.

Our decisions would be better if we became accustomed to living with uncertainty, and more comfortable living with a plurality of descriptions. The idea of "evidence" cuts against this. We - whether in government or academia - do not need to attenuate descriptions. Uncertainties find their own equilibrium. Our new media provide the space where this can occur. Universities, as the home of scholarly practice in science, should be working to facilitate this.


Friday, 30 June 2017

The Paradox of Institutional Change in Universities: The Strategic Need for a Pincer-Movement

The last 10 years have seen most universities in the UK undergo significant restructuring. These processes, which are still ongoing - most terribly at Manchester and the OU at the moment - are intended to transform institutions' financial viability, their "market appeal", the student experience, and their competitiveness in research and teaching.

The results from the last 10 years of restructuring tell us quite clearly that NONE of this actually occurs. Departments may be closed, and salaries saved, but within a few years the salary bill creeps up to exceed what it was before. Staff morale is damaged through the autocratic processes by which friends, colleagues and (most importantly) conversations are broken up. The atmosphere in institutions whilst restructuring occurs is dismal, and this has an impact on students.

The recruitment of new (cheaper, younger) staff can also be highly problematic. Some of these will be adjuncts, paid very little, and struggling to survive, let alone teach their large (and highly profitable) Masters class of overseas students in the Information Systems department. These people are clinging on to the academy in the hope that something better comes up. But things continue to get worse. Other new staff will be recruited on a kind of "metric" basis - those with the most papers win! Never mind what they are like as people, how collegial they are, or how much they care about their students. And often they are appointed by a few senior colleagues, because the junior staff who keep the department going are all at risk of redundancy.

The spirit of despondency turns out to be highly contagious. The new staff - particularly the good ones - leave. The students complain - although they continue to attend in sufficient numbers to keep the thing on the road because almost everywhere else is the same.

Who benefits from restructuring? Usually, only the person who thought the thing up.  There is a real and deep question about institutional change which needs to be addressed.

Organisms change their structure when the structure of their environment changes. What is the environment of the University? With the student-as-customer rhetoric, are students cast in the role of the "environment" of universities? Universities seem to believe this, because they attempt to adapt to meet student expectations.

But many would argue that society at large is the environment of the university. What is the relation between the University and society? Well, it is one of circular causation. The university produces important aspects of society (its knowledge), and society produces the university through society's requirement to think about itself and produce new components like doctors, teachers and government ministers.  Of course, society includes learners... and teachers, administrators, tax payers, voters, Brexiteers, Remainers, banks, Corbynites and Theresa May.

History tells us that universities do change over time. Like biological organisms, change comes about through adaptation to changes in the environment - to changes in society. Francis Bacon's 1605 "The Advancement of Learning" was a wake-up call to universities, just as the Reformation was a wake-up call to the Catholic church. Curiously though, the members of the 17th century "invisible college" beavering away at scientific experiments outside the university had all been through the academic establishment at some point. The early IT pioneers like Gates and Jobs, the military developers of the internet and the Whole Earth Catalog existed on the periphery of the institution in a counter-cultural bubble. The same might be said of the off-piste development of Bitcoin in the late 2000s.

University change takes the form of organic absorption of the counter-culture. Jazz improvisation, for example, moves from seedy strip joints to the university classroom (with its professors of jazz) in the space of 90 years.  The only counter-cultural development which has resisted this seems to be the sex industry, and yet its adoption and development of technology has paved the way to the iPlayer and lecture capture!

What can we learn from this?

  • Institutional restructuring is institutional self-harm. 
  • If institutions change in response to changes in their environment, perhaps they should consider nurturing environmental changes which they might find challenging in the short-term, but to which their adaptation will be fruitful in the future. 
  • The obvious thing here is to develop feasible free personal certificated learning - but this is NOT a MOOC and it is not a marketing exercise. The institution doesn't need to make its presence felt, but to support social movements. 
  • Institutional change is likely to result from a pincer-movement: Constructive internal initiatives to help an institutional culture thrive are good, but they go hand-in-hand with initiatives to develop challenging things in the environment. 

Wednesday, 28 June 2017

Viable Institutions

Still a lot to do here, but it's taking shape...

Saturday, 24 June 2017

Government as Steering: Cybernetics and the Coming Labour Government

The joy surrounding Jeremy Corbyn's success in the election masks a need to do some very difficult work if a left-wing Labour government is going to deliver on the promise to transform society. There is muddle-headedness about the practicalities of government, the way events can overtake good intentions (no politician would have wanted a Grenfell on their watch), and the sheer challenge of keeping a political machine together which always seems hell-bent on self-destruction (all political parties seem to have this tendency).

Now is a golden opportunity to do this. Corbyn has the luxury of opposition where his grip on the party has been strengthened, and public expectation of a Corbyn victory (unthinkable before the election) has shifted significantly. These are real achievements.

Labour, and Corbyn, have got here because the Tories don't know how to govern. They see the world in a linear and hierarchical way, where simple "strong and stable" solutions can solve intractable problems. When things don't work out the way they wished (like the deficit coming down), the Tories tend to carry on regardless: strong and stable. This isn't government. It is ideological extremism.

"Government" and "governor" come from the same Latin root, gubernator. The Watt governor is the simplest idea of governing:

The Watt governor 'steers' the engine, increasing the flow of steam if the engine runs too slow and decreasing it if it runs too fast. The Greek word for governor is kybernetes, from which we get cybernetics. The kybernetes was the steersman on a ship, so cybernetics is about steering. And so is government.
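
The loop is easy to simulate. A toy sketch, with the gains, target speed and engine response invented purely for illustration:

```python
# A toy of the Watt governor's negative feedback: open the steam valve
# when the engine runs too slow, close it when it runs too fast.
target = 100.0     # desired speed
speed = 60.0       # engine starts too slow
valve = 0.5        # steam valve position, 0..1

for _ in range(200):
    error = target - speed                   # positive when running slow
    valve += 0.001 * error                   # governor nudges the valve
    valve = max(0.0, min(1.0, valve))        # a real valve has limits
    speed += 0.2 * (200.0 * valve - speed)   # engine follows the steam

print(round(speed, 1))  # 100.0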


Stafford Beer is the cybernetic thinker who considers the problems of government (and its related problem, management) in most detail. I have thought about the Viable System Model (see https://en.wikipedia.org/wiki/Viable_system_model) for many years, and the Cybersyn experiment in Chile of 1971-3 (see https://en.wikipedia.org/wiki/Project_Cybersyn) remains the most significant attempt to rethink government (apart from some promising experiments in the Soviet Union which didn't get off the ground properly - see http://dailyimprovisation.blogspot.co.uk/2014/11/social-ecology-and-soviet-cybernetics.html).

There is a fundamental problem that the VSM addresses: the problem of attenuating descriptions of the world. In hierarchical power structures like governments, or bosses of universities, hospitals or any institution for that matter, the "top" relies on filters to give them the most important information from the ground. This is where the pathology starts, because the filter entails removing most of the other descriptions which are not considered important. This is why the election opinion polls got it so wrong - because they didn't listen to the variety of description that was out there. Technology has made the situation worse - it can filter more effectively than anything else - although this is a stupid way to use technology!

The VSM is a set of nested loops within which there is attenuation of description (there has to be), but at the same time the attenuated descriptions are organised into the production of a generative model whose engagements with the organisation (or country) being managed are continually monitored. The circular loop continually asks "Are we right?", "In what ways are we wrong?", "What have we learnt about the world that we didn't know before?", "How should the model be changed?". In other words, there is attenuation, and there is amplification of the abstracted model, in a continual process of organic adaptation (Beer described his model using the metaphor of the human body). This is steering.
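
The shape of that loop - attenuate, build a model, monitor it against what actually happens, revise - can be sketched in a few lines. This illustrates the loop only, not Beer's model itself; the figures and the "surprise" threshold are invented:

```python
import statistics

def attenuate(descriptions):
    # The filter: collapse many ground-level reports into one summary.
    return statistics.mean(descriptions)

model = attenuate([5.1, 4.9, 5.0, 5.2])   # the abstracted model

for observed in [5.0, 5.1, 9.8]:          # reality keeps arriving
    surprise = abs(observed - model)
    if surprise > 2.0:
        # "In what ways are we wrong?" - revise the model, don't ignore the world.
        model = attenuate([model, observed])

print(f"{model:.3f}")  # 7.425 - the model has adapted to the surprise
```

The pathology described above is exactly what happens when the `if` branch is missing: the model keeps being used, and the surprising descriptions are thrown away.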

In theory, this is fine, and the VSM is often used in management consultancy to help heal organisational pathology: I'm hosting a conference in November at Liverpool on this very topic: http://healingorganisations2017.org.

But apart from Cybersyn, there has been no real-time empirical attempt to exploit this thinking in government or management. We should do it, because our existing models of government cannot deal with the obvious circular causality which is endemic in our world, from overseas wars and local terrorism to austerity and burning tower blocks. We have to have a practical way of dealing with circular causation, and I worry that Corbyn's Labour isn't prepared.

Beer's Cybersyn was a data-driven operation in a world where data was hard to come by (they transmitted it with Telex machines). Today we have data everywhere - but we don't know how to use it. Most approaches to "big data" seek to amplify automatic "filters" of complexity - this is basically what machine learning does. That's fine up to a point, but whatever filters are produced are used to create a model which must then be tested and improved. The human thinking about the rightness of these models doesn't appear to happen. All "big data" results are opportunities for humans to produce new descriptions of the world, and for these new descriptions to feed into higher-level steering processes. But it doesn't happen. Consequently, we allow the "big data" to dictate how the world should become without thinking about what we've missed.

One of the critical signs that any government or management should worry about is a decrease in the variety of description about something. This is usually the harbinger of catastrophe. Our Universities are heading straight for this, because they are removing vast chunks of variety in the conversations and descriptions which are made within them as they close departments, sack staff, become fixated on metrics of academic performance which mean nothing, or chase government targets for "teaching excellence" in the hope of getting more money. Nobody is monitoring the richness of conversation in Universities. Yet, the true strength of any university is the richness of the conversations which it maintains.

The same goes for a healthy society. The urgency of thinking about this was impressed upon me a couple of days ago when I received a text message from a bright and brilliant academic and friend in my old institution (one of only a few in that awful place). It's a dismal reminder of how much trouble we are in: "I've just been told I'm being made redundant". So that's another conversation killed.


Monday, 19 June 2017

Technology, Forms and the Loss of Description

When rich descriptions are difficult to bear, methods of attenuating description become attractive. They restrict the mode of expression to that which is permitted by whatever medium is devised for conveying 'standard' messages. We have become so used to this that we barely even notice it. Paul Fussell identified in "The Great War and Modern Memory" that the means by which descriptions are attenuated emerged from the most brutal and traumatic of events, where it was barely possible to articulate how people felt. Before the First World War, there were no "forms" to fill in.

The military authorities did their best to ensure that richer descriptions of the soldier's experiences were not conveyed home, lest it lead to unrest or loss of morale. Fussell describes a letter sent by a young boy in a platoon which went:

Dear Mum and Dad, and dear loving sisters Rosie, Letty, and our Gladys, -
I am very pleased to write you another welcome letter as this leaves me. Dear Mum and Dad and loving sisters, I hope you keeps the home fires burning. Not arf. The boys are in the pink. Not arf. Dear Loving sisters Rosie, Letty, and our Gladys, keep merry and bright. Not arf. 

Today our whole lives are ruled by forms, and even the scope for protesting the restrictions of the medium is curtailed. The best one can do is not fill it in. Such 'data gathering' processes have become part of normal life. We even conduct social research like this.

Fussell describes "Form A. 2042", the Field Service Post Card shown above. The card was sent with everything crossed out except "I am quite well" - immediately after a battle which relatives might suspect their soldiers had been in. Such were the hazards of occupying newly blown mine-craters that, according to George Coppard, "Before starting a twelve-hour shift in a crater, each man had to complete a field postcard for his next of kin, leaving the terse message 'I am quite well' undeleted."
Soldiers found ways of using the medium to convey messages that the cards were not meant to convey. Fussell notes:
the implicit optimism of the post card is worth noting - the way it offers no provision for transmitting news like "I have lost my left leg" or "I have been admitted into hospital wounded and do not expect to recover". Because it provided no way of saying "I am going up the line again" its users had to improvise. Wilfred Owen had an understanding with his mother that when he used a double line to cross out "I am being sent down to base" he meant he was at the front again. (Fussell, "The Great War and Modern Memory", p185)
Fussell claims that the Field Service Post Card is the first "form": "It is the progenitor of all modern forms on which you fill in things or cross out things or check off things, from police traffic summonses to "questionnaires" and income-tax blanks. When the Field Service Post Card was devised, the novelty of its brassy self-sufficiency, as well as its implications about the uniform identity of human creatures, amused the sophisticated and the gentle alike, and they delighted to parody it..."

Today we have video, which has, in many ways, levelled the playing field of testimony: one does not have to be a great poet or writer to convey the complex reality of a situation - anyone can do it. Yet the form remains. How could one summarise the complexity were it not for the tick-boxes?

There is a better answer to this question than tick boxes. The form amplifies a particular set of descriptions as a series of choices. Whatever actual descriptions might be made by individuals, these somehow have to fit the provided descriptions. The interpretation of the fit to the provided descriptions adds a further layer of attenuation.
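
A toy sketch makes the double attenuation concrete. The boxes, the keyword rule and the responses below are all invented; the keyword rule stands in for the interpreter's extra layer of attenuation:

```python
BOXES = ["satisfied", "dissatisfied", "no opinion"]

def fit_to_form(free_text):
    # Crude keyword matching: the interpreter's layer of attenuation.
    t = free_text.lower()
    if "not" in t:
        return "dissatisfied"
    if "good" in t or "excellent" in t:
        return "satisfied"
    return "no opinion"

responses = [
    "Good in parts, but the assessment criteria changed halfway through",
    "Excellent teaching, terrible building",
    "I could not hear anything from the back row",
]
print([fit_to_form(r) for r in responses])
# ['satisfied', 'satisfied', 'dissatisfied']
```

Three very different descriptions collapse into two boxes, and everything specific in them is gone.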

Institutions and governments fail because they fail to listen to the rich variety of descriptions made within the organisations they oversee. Instead, they collect "data" which they attenuate into "preferred descriptions", and implement policy according to their conclusions. Crisis emerges when the effects of policy are the production of more descriptions which are also ignored. 

Tuesday, 13 June 2017

Open Educational Resources and Book Printing Machines

"Being open" has been a major theme in educational technology for many years. It goes to the heart of why many have been drawn to education technology in the first place: "let's transform education, make it available to all, liberate ourselves from constraints", and so on. There is an associated economic narrative which speaks of "re-use" and highlights the apparent ridiculousness in the redundancy of so much content - why have 100 videos about the eye when one would do?

The opportunity of technology is always to present people with new options for acting: blogging presents new options for publishing, for example. In effect, new options for acting are new ways of overcoming existing constraints. When looking at any innovation, it is useful to examine the new options it provides, and the constraints it overcomes. Sometimes new technologies introduce new constraints.

What new options does OER provide? What constraints does it overcome?

These are not easy questions to answer - and perhaps because of this, there is much confusion about OER. However, these are important questions to ask, and by exploring them more fully, some insight can be gained into how OER might be transformative.

Enormous amounts of money have been spent on repositories of stuff presented as lego bricks from which teachers could assemble their teaching. Remember learning objects? Remember widgets? Remember JORUM? The rationale behind much of this was that educational content could be assembled by teachers and incorporated as ready-made chunks of knowledge into new courses. So the constraint was the labour of teachers? Or the cost of resources? OER to the rescue!?

But actually none of this addressed the deep constraint: the course. Who cares about courses? Well, universities do... Courses = Assessment = Money.

Of course, away from courses, there are Open Educational Resources on YouTube, Facebook, Twitter, Wikipedia, Stackoverflow, Listservs, blogs, wikis, and various other specialised disciplinary forums. Moreover, the tools for searching and retrieving this stuff have got better and better. Email histories are now a major resource of information, thanks to the vast storage of Google and the capabilities of its search tools. All of these things have circumvented the constraint of the course.

Universities care about courses. Open Educational Resources cut the costs of setting courses up. And of course, the skill requirements of the teacher might be seen to be lowered to those of a curator - with cost-saving implications that are attractive to university managers: we don't need teachers - we can get this stuff for free, and pay cheap adjuncts to deliver it! So the constraints are financial and organisational.

But... nobody really understands what happens in teaching and learning. Whilst there are ways in which a video on YouTube might be said to "teach", generally teaching happens within courses. But what does the teacher do?

What happens in teaching and learning is conversation. Ideally, in that conversation, teachers reveal their understanding of something, and learners expose their curiosity. This can happen away from formal courses - most notably on email listservs, where (perhaps) somebody posts a video or a paper, and then people discuss it, but it is something that clearly is meant to happen in formal education.

"Revealing understanding" of something is not the same as presenting somebody with ready-made resources and activities (although someone can reveal their understanding of a subject in a video or a book - or indeed, a blog post!). Teachers have always used textbooks, but conversations usually revolve around a negotiation of the teacher's understanding of the textbook. Most textbooks are sufficiently rich in their content to throw up interesting questions for discussion.

Ready-made resources represent someone else's understanding. They can sometimes present an unwelcome extra barrier for the teacher: the teacher is trying to reveal their understanding of the subject, but is instead caught trying to reveal their understanding of somebody else's understanding.

Teachers produce resources to help them articulate their understanding. Some very experienced teachers may even write books about their understanding of a subject. When resources are publishable at this level, things get interesting and a new set of constraints emerges. The big constraint is the publishers.

Let's say a teacher writes a book. They send it to a publisher and sign away their rights to it. In signing away their rights to the content, they are restricted in what they can do with that content in future. The book might be very expensive, so the people the teacher wants to read it cannot afford it. There may be chunks of text which they might want to extract and republish for a different audience. They can't.

I think this is about to change. One of the exciting developments in recent years has been print-on-demand self-publishing. Alongside this, professional typesetting has come within easy reach of anyone: LaTeX-driven tools like Overleaf (http://overleaf.com) make a once-esoteric skill accessible to all. And book printing machines like the Espresso Book Machine are, in effect, the 3D printers of publishing:
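To give a flavour, here is a minimal self-publishable book skeleton of the kind Overleaf makes easy to work with - the class and trim size are illustrative choices, not a recommendation:

```latex
\documentclass[10pt,twoside]{book}  % 'memoir' is a popular alternative
% a common trade-paperback trim size, suitable for print-on-demand
\usepackage[paperwidth=5.5in,paperheight=8.5in,margin=0.75in]{geometry}
\title{An Open Book}
\author{A. N. Academic}
\begin{document}
\maketitle
\tableofcontents
\chapter{Conversation}
Text that others may fork, reframe and reprint.
\end{document}
```

Kept in a public repository as plain text, a file like this is exactly what can be forked, rebuilt and reprinted by anyone.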



Why will academics exploit this? Because, whilst publishing with a respectable publishing house is often seen as a 'status marker', it also constrains the freedom of the academic to manage their own resources and engagement with their academic community.

A self-published open book can exist on GitHub as a LaTeX file, which an academic can fork, republish, reframe, etc. And why not allow others to do the same? And if copies can be printed for very little, why not do your own print run and distribute your book to your academic community yourself? For all teachers, and for all academics, the point of the exercise is conversation.

More importantly, with the production of high-quality resources that can be exploited in different ways, the teacher is able to express their understanding of not just one but potentially many subjects. What is the difference between a book on methodology in education research and a book on methodology in health research? Might not the same person have something to say about both? Why shouldn't the resources or the book produced for one be exploited for the other?

Saturday, 10 June 2017

Albert Atkin on Peirce

I have always been a little hesitant about Peirce's semiotics. It has become another kind of theoretical 'coat-hanger' on which media theorists, communication scholars, educationalists, information scholars, musicologists, and much postmodern theory have draped 'explanations' which, it seems to me, don't explain very much. My suspicion, as with many social and psychological theories, is that the clergy are a pale imitation of the high priests. It's the same story with James Gibson and affordance theory. And whilst believing that there's much more to Peirce than meets the eye of someone surveying this academic noise, I hadn't yet found a way into it. Until now.

I'm reading Albert Atkin's recent book on Peirce. He articulates exactly how I feel about the sign theory when he points out, first of all, that philosophy has largely ignored it - partly due to unreflective criticisms from analytic philosophers (most notably Quine) - whereas:

"Interest is much livelier outside of philosophy, but a similar problem lurks nearby. One finds interest in and mention of Peirce's sign theories in such wide-ranging disciplines as art history, literary theory, psychology and linguistics. There are even entire disciplinary approaches and sub-fields - semiotics, bio-semiotics, cognitive semiotics - which rest squarely on Peirce's work. Whilst this greater appreciation of Peirce's semiotic marks a happier state of affairs than that which we find in philosophy, there is still a worry that, as the leading scholar of Peirce's sign theory, T.L. Short, puts it, 'Peirce's semiotics has gotten in amongst the wrong crowd'. Short's complaint may be a little hyperbolic, but his concern is well founded considering the piecemeal and selective use of Peirce's ideas in certain areas. From a cursory reading of much work in these areas, one might think Peirce had only ever identified his early tripartite division of signs into icons, indexes and symbols." (Atkin, "Peirce", p126)
Peirce's biography, which Atkin covers elegantly, is extremely important in understanding how Peirce's logic, mathematics, sign theory and metaphysics fit together. A combination of intellectual isolation - he lost his university position in 1884 and never gained another - a unique inheritance from his mathematician father Benjamin Peirce, and a powerful intellectual life in the family home set the scene for a radical redescription of logic, mathematics, cognition and science. The simple fact is that the extent to which this redescription is truly radical remains underappreciated - not helped by noisy dismissals from the academic establishment, not only of Peirce himself, but also of some of the foundational work Peirce built on (he gained his interest in Hamilton's quaternions from his father; Hamilton's work too suffered some careless dismissals).

If people think they know Peirce, or they know the semiotics, they should think again. I strongly suspect the time for this true original is yet to come. 

Tuesday, 6 June 2017

Distinctions

This is gradually coming together... It helps me to post on here - it's a multiple description!

Monday, 5 June 2017

Peirce on Quaternions

Had it not been for my discussions with Peter Rowlands at Liverpool University, I wouldn't know what a quaternion was. I took it seriously because it plays a vital role in Rowlands's physical theory, which unites quantum and classical mechanics, and my interest in this has evolved through a desire to tackle the nonsense that is talked in the social sciences about sociomateriality, entanglement, etc. But then there is another coincidence (actually, I'm increasingly convinced there is no such thing - these are aspects of some kind of cosmic symmetry). I got to know Rowlands because he is a friend of Lou Kauffman, who has been one of the champions of Spencer-Brown's Laws of Form.

One of the precursors to Spencer-Brown's visual calculus is contained in the existential graphs of Charles Sanders Peirce. So on Saturday I went looking in the collected writings of Peirce for more detail on his existential graphs. Then I stumbled on a table of what looked like the kind of quaternion matrix which dominates Rowlands's work. Sure enough, a quick check in the index confirmed that Peirce's work is full of quaternions - and this is a very neglected aspect of his work.

To be honest, I've never been entirely satisfied with the semiotics. But the mathematical foundation to the semiotics makes this make sense. It situates the semiotics as a kind of non-commutative algebra (i.e. quaternion algebra) - and in fact what Peirce does is very similar intellectually to what Rowlands does. It means that Peirce's triads are more than a kind of convention or convenience: the three dimensions are precisely the kind of rotational non-commutative symmetry that was described by Hamilton. I'm really excited about this!

So here's Peirce on the "Logic of Quantity" in the collected papers (vol. IV), p110:

The idea of multiplication has been widely generalized by mathematicians in the interest of the science of quantity itself. In quaternions, and more generally in all linear associative algebra, which is the same as the theory of matrices, it is not commutative. The general idea which is found in all of these is that the product of two units is the pair of units regarded as a new unit. Now there are two senses in which a "pair" may be understood, according as BA is, or is not, regarded as the same as AB. Ordinary arithmetic makes them the same. Hence, in 2 x 3, the pairs consisting of one unit of a set of 2, say I, J, and another unit of a set of 3, say X, Y, Z - the pairs IX, IY, IZ, JX, JY, JZ - are the same as the pairs formed by taking a unit of the set of 3 first, followed by a unit of the set of 2. So when we say that the area of a rectangle is equal to its length multiplied by its breadth, we mean that the area consists of all the units derived from coupling a unit of length with a unit of breadth. But in the multiplication of matrices, each unit in the Pth row and Qth column, which I write P:Q, of the multiplier, coupled with a unit in the Qth row and Rth column, or Q:R, gives:
      (P:Q)(Q:R) = P:R
or a unit of the Pth row and Rth column of the multiplicand. If their order be reversed,
      (Q:R)(P:Q) = 0
unless it happens that R = P.
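Peirce's rule can be checked directly with elementary matrix units; a minimal sketch in Python (the helper names are mine):

```python
# Elementary matrix units illustrate Peirce's (P:Q)(Q:R) = P:R rule.
# unit(p, q) is the 3x3 matrix with a 1 in row p, column q, 0 elsewhere.

def unit(p, q, n=3):
    return [[1 if (r, c) == (p, q) else 0 for c in range(n)] for r in range(n)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[r][k] * b[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

zero = [[0] * 3 for _ in range(3)]

# (P:Q)(Q:R) = P:R -- the middle index must match
assert matmul(unit(0, 1), unit(1, 2)) == unit(0, 2)

# (Q:R)(P:Q) = 0 unless R = P -- reversing the order annihilates the product
assert matmul(unit(1, 2), unit(0, 1)) == zero
assert matmul(unit(1, 0), unit(0, 1)) == unit(1, 1)   # here R = P
```

The second pair of assertions is exactly Peirce's point: the product depends on the order of the factors.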

Monday, 29 May 2017

Symmetry, Learning and Sociomateriality

A number of ideas are bombarding me at the moment. The best thing that has happened to me at Liverpool University is the encounter with the theoretical physics of Peter Rowlands. This is what universities are really about: creating the possibility of encounter with radically new ideas. Peter is interested in reconfiguring the relation between classical and quantum mechanics. At the root of his approach are three simple concepts: the 'nilpotent' - an imaginary number when raised to a power produces zero; quarternions - 3-dimensional imaginary numbers invented by Hamilton which have peculiar properties; and, most importantly, symmetry.

In grappling with this (and I am still grappling with it... this blog is part of the process!) both symmetry and the nilpotent resonate with me in my thinking about cybernetics, learning, music and emotional life. The nilpotent puts the focus on nothing. The universe is about nothing. Now compare this to the importance of absence in  the work of Terry Deacon, or Bhaskar, or Lawson, or the apophatic in the ecological work of Ulanowicz. There is also the category theoretical work of Badiou who places particular emphasis on nothing in his graphs. Is absence nothing? In the sense that absenting is about something "not there"... zero is clearly not there. From Newton's third law (Rowlands has published a number of books about Newton), an obvious point to make is that the resultant force in the Universe is zero. More importantly, the somethings that we see in the universe are the product of constraints of things which we can't see (dark matter/energy). There is a nothingness about dark matter. But there is also nothing in the resultant totality of what we can see and what we can't. The real question is how something emerges from nothing.

In cybernetics, the concept of constraint takes the place of absence - although they are considered to be the same thing (Deacon, Ulanowicz, Lawson are all in agreement about this). The nilpotent idea seems to be mirrored in the tautology of Ashby's Law: a complex system can only be controlled by a system of equal or greater complexity. Something emerges from nothing, through the fact that at any particular level, systems are unstable: The complexity of system a and system b might be greatly different, thus necessitating systems at higher levels to participate in balancing the variety. This is a dynamic process. In terms of understanding it, this is a process that relies on broken symmetry.

Having a fundamental way of describing symmetry-breaking is something which mathematics struggles with. Perhaps the closest we get is in fractal geometry, or in the dynamics of Conway's game of life. But these are the result of heuristics and recursive functions rather than fundamental mathematical properties. The quaternions present a way of thinking about broken symmetry which is fundamental. i, j and k are all square roots of -1, so ii = jj = kk = -1. But ij is not equal to ji. This anti-commutative property gives quaternions the potential to articulate complex matrix structures which have a kind of 'handed-ness'. This abstract property becomes useful to describe the apparent handedness that we see in the universe, from subatomic particles to DNA to the Fibonacci structures in biology.
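This anti-commutativity is easy to verify in a few lines; a minimal sketch using the standard Hamilton product, with quaternions held as (w, x, y, z) tuples:

```python
# A quaternion (w, x, y, z) encodes w + xi + yj + zk. The units i, j, k
# all square to -1, yet ij = k while ji = -k: multiplication has a
# 'handedness' -- it is anti-commutative.

def hamilton(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
minus_one = (-1, 0, 0, 0)

assert hamilton(i, i) == hamilton(j, j) == hamilton(k, k) == minus_one
assert hamilton(i, j) == k                # ij = k
assert hamilton(j, i) == (0, 0, 0, -1)    # ji = -k: the broken symmetry
```

The last two assertions are the broken symmetry in miniature: swapping the order of the factors flips the sign.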

What's fascinating about this is that nilpotency and broken symmetry combined have remarkable generative properties. For Rowlands, one of the key things is the bridging of the gap between classical mechanics and quantum mechanics. In much social science writing (and in educational technology) it has become fashionable to cite quantum phenomena like "superposition" and "entanglement" as a way of articulating the complex 'sociomateriality' of social life. Many realists object to the woolly language. Scientists like Sokal object to the lack of understanding of physics - although some who promote sociomateriality do so from a scientific perspective - like Karen Barad. Part of the problem lies within physics. Classical mechanics and quantum mechanics are generally considered (not just by Barad) to be different kinds of thing. Rowlands argues that they are the same kind of thing - and in fact, quantum mechanics can be seen to be an entirely logical and consistent extension of classical mechanics. So Newton was more profoundly right about the universe than is widely accepted. Through Rowlands's ideas of broken symmetry, issues like superposition and entanglement emerge as logical consequences of the conservation of mass and energy, and the non-conservation of time and space.

So here's a tantalising question: could learning be seen as a product of broken symmetry and nilpotency? My first instinct with this kind of question is to ask it of something like learning but more objectively observable: music. Can music be the result of broken symmetry and nilpotency? There have been many studies of the Fibonacci sequence in music - notably in Bartók and Debussy. I strongly suspect the answer to this question is yes. So learning? What is the symmetry of understanding or thinking? Is there a way of answering this question? At a deep level, these are questions about information - and by taking them as such, methods can be devised for exploring them. The way Rowlands is able to explain the emergence of something from nothing immediately suggests a new approach to one of the fundamental questions in the theory of information - the "symbol grounding problem" (see https://en.wikipedia.org/wiki/Symbol_grounding_problem).

A nilpotent broken symmetry of learning would have to entail a nilpotent broken symmetry of education and other social structures. Might they be investigated in the same way? What about a nilpotent broken symmetry of politics? (Is that dialectic?) Are these too questions about information?

Yes...

Sunday, 21 May 2017

New technologies and Pathological Self-Organisation Dynamics

Because new communications technologies liberate individuals from the prevailing constraints of communication, it is often assumed that the new forces of self-organisation are beneficent. The historical evidence from previous mass liberations of the means of communication tells a different story. Mechanisms of suppression and unforeseeable consequences of liberation - including incitement to revolt, revolution, war and institutional disestablishment - followed the invention of printing; propaganda, anti-intellectualism, untrammelled commercialism and totalitarianism followed the telephone, cinema and TV; and the unforeseeable self-organising dynamics caused by the internet are only beginning to be apparent to us. It isn't just trolling, Trump and porn; it's the vulnerabilities that over-rationalised technical systems with no redundancy expose to malevolent forces that would seek their collapse (as we saw in the NHS last week).

What are these dynamics?

It's an obvious point that the invention of a new means of communication - be it printing or Twitter - presents new options for making utterances. Social systems are established on the basis of shared expectations about not only the nature of utterances (their content and meaning) but on the means by which they are made. The academic journal system, for example, relies on shared expectations for what an "academic" paper looks like, the process of review, citations, the community of scholars, etc. It has maintained these expectations supported by the institutional fabric of universities which continues to fetishise the journal, even when other media for intellectual debate and organisation become available. Journalism too relies on expectations of truth commensurate with the agency responsible for the journalism (some agencies are more trusted than others), and it again has resisted new self-organising dynamics presented by individuals who make different selections of their communication media: Trump.

But what happens then?

The introduction of new means of communication is the introduction of new uncertainties into the system. It increases entropy across institutional structures. What then appears to happen is a frantic dash to "bring things back under control". That is, reduce the entropy by quickly establishing new norms of practice.

Mark Carrigan spoke in some detail about this last week in a visit to my University. He criticised the increasing trend for universities to demand engagement with social media by academics as a criterion for "intellectual impact". What are the effects of this? The rich possibilities of the new media are attenuated to those few which amplify messages and "sell" intellectual positions. Carrigan rightly points out that this is to miss some of the really productive things that social media can do - not least in encouraging academics in the practice of keeping an open "commonplace book" (see http://dailyimprovisation.blogspot.co.uk/2011/07/commonplacing-and-blogging.html)

I'm wondering if there's a more general rule to be established relating to the increase in options for communicating, and its ensuing increase in uncertainty in communication. In the typical Shannon communication diagram (and indeed in Ashby's Law of Requisite Variety), there is no assumption that increasing the bandwidth of the channel affects either the sender or the receiver. The channel is there to illustrate the impact of noise on the communication, the things that the sender must do to counter noise, and the significance of having sufficient bandwidth to convey the complexity of the messages. Surplus bandwidth beyond what is necessary does not affect the sender.

But of course, it does. The communications sent from A to B are not just communications like the Twitter message "I am eating breakfast". They are also communications that "I am using Twitter". Indeed, the selection of the medium is also a selection of receiver (audience). This introduces a more complex dynamic which needs more than a blog post to unfold. But it means that as the means of communicating multiply, so does the entropy of messages, and so does the level of uncertainty in communicating systems.
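The claim can be made concrete with a toy entropy calculation - a sketch which assumes, purely for illustration, that every message and every medium is selected with equal probability:

```python
# Shannon entropy of a uniform choice among n options is log2(n) bits.
# If an utterance is a (message, medium) selection, then adding media
# multiplies the option space, and the entropy grows additively.
from math import log2

def entropy(n_options):
    # uniform distribution assumed, purely for illustration
    return log2(n_options)

messages = 8
old_media = 2   # say: letter, telephone
new_media = 4   # add: Twitter, blog

h_old = entropy(messages * old_media)   # 4 bits
h_new = entropy(messages * new_media)   # 5 bits

# doubling the media adds exactly one bit of uncertainty about
# "what was selected", regardless of the number of messages
assert h_new - h_old == entropy(new_media) - entropy(old_media) == 1.0
```

The numbers are trivial, but they show why the uncertainty introduced by a new medium is a property of the selection of media itself, not of any particular message.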

This is what's blown up education, and it's what blew up the Catholic church in 1517. It's also what's enabled Trump's tweeting to move around conventional journalism and the political system as if they were the Maginot Line. As the levels of uncertainty increase, the self-organisation dynamics lead to a solidification (almost a balkanisation - particularly in the case of Trump) of message-medium entities which become impervious to existing techniques for establishing rational dialogue. Government, because it cannot understand what is happening, is powerless to intervene in these self-organising processes (though it should). Instead, it participates in the pathology.

We need a better theory and we need better analysis of what's happening.

Saturday, 13 May 2017

The Evaluation of Adaptive Comparative Judgement as an Information Theoretical Problem

Adaptive Comparative Judgement is an assessment technique which has fascinated me for a long time (see http://dailyimprovisation.blogspot.co.uk/2011/11/adaptive-comparative-judgement-and.html). Only recently, however, have I had the opportunity to try it properly... and its application is not in education, but in medicine (higher education, for some reason, has remained remarkably wedded to its traditional methods of assessment!).

ACJ is a technique of pair-wise assessment where individuals are asked to compare two examples of work, or (in my case) two medical scans. They are asked a simple question: Which is better? Which is more pathological? etc. The combination of many judges and many judgements produces a ranking from which a grading can be produced. ACJ inverts the traditional educational model of grading work to produce a ranking; it ranks work to produce a grading.

In medicine, ACJ has fascinated the doctors I am working with, but it also creates some confusion because it is so different from traditional pharmacological assessment. In the traditional assessment of the efficacy of a drug (for example), data is examined to see if the administration of the drug is an independent variable in the production of the patient getting better (the dependent variable). The efficacy of the drug is assessed against its administration to a wide variety of patients (whose individual differences are usually averaged out in the statistical evaluation). In other words, traditional clinical evaluation assumes a linear relation:
P(patient) + X(drug) = O(outcome)
where outcome and drug are shown to be correlated across a variety of patients (or cases).

ACJ is not linear, but circular. The outcome of ACJ is, one hopes, a reliable ranking: that is, a ranking which accords with the judgements of the best experts. But it is not ACJ which does this - it is not an independent variable. It is a technique for coordinating the judgements of many individuals. Technically, there is no need for more than one expert judge to produce a perfect ranking. But the burden of producing consistent rankings alone would be too great for any single judge (however good they are), and consistency would suffer. ACJ works by enlisting many experts in making many judgements, reducing the burden on any single expert and coordinating differences between experts in a kind of automatic arbitration.
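The coordination of many judgements into a single ranking can be sketched minimally. Real ACJ implementations fit a statistical model (Bradley-Terry/Rasch) and select pairs adaptively, so this plain win-count version is only illustrative, and the scan names are invented:

```python
# A minimal sketch of deriving a ranking from pairwise judgements.
# Each judgement is a (winner, loser) pair from some judge; items are
# ranked by how often they win. Real ACJ fits a Bradley-Terry/Rasch
# model and chooses the next pair adaptively.
from collections import defaultdict

def rank(items, judgements):
    wins = defaultdict(int)
    for winner, _loser in judgements:
        wins[winner] += 1
    return sorted(items, key=lambda x: wins[x], reverse=True)

# three scans, judged several times by (simulated) judges
scans = ["scan_a", "scan_b", "scan_c"]
judgements = [("scan_a", "scan_b"), ("scan_a", "scan_c"),
              ("scan_b", "scan_c"), ("scan_a", "scan_b")]

assert rank(scans, judgements) == ["scan_a", "scan_b", "scan_c"]
```

Notice that no single judge had to rank everything: the ranking emerges from the coordination of many small judgements.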

The fact that ACJ cannot be treated as an independent variable does not mean that its efficacy cannot be evaluated. There are no independent variables in education - but we have a pretty good idea of what does and doesn't work.

What is happening in the ACJ process is that a ranking is communicated through the presentation of pairs of images to the collective judgements of those using the system. The process of communication occurs within a number of constraints:


  1. The ability of individual judges to make effective judgements
  2. The ease with which an individual judgement might be made (i.e. the degree of difference between the pairs)
  3. The quality of presentation of each case (if they are images, for example, the quality is important)

An individual's inability to make the right judgement amounts to the introduction of "noise" into the ranking process. With too much "noise" the ranking will be inaccurate.

The ease of making a judgement depends on the degree of difference, which in turn can be measured as the relative entropy between the two examples: if they are identical, the relative entropy will be zero. Equally, if two images are the same, the mutual information between them will be high, calculated as:
I(a;b) = H(a) + H(b) - H(ab)
If the features of each item to be compared can be identified, and each of those features belongs to a set i, then the entropy of each case can be measured simply as a value of H across all the values of x in the set i:

H = -Σ p(x) log p(x), for x in i
The ability to make distinctions between the different features will depend partly on the quality of the images. This may introduce uncertainty in identifying the values of x in i.
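These quantities can be computed directly from feature observations; a sketch in which the feature labels and data are invented purely for illustration:

```python
# Entropy and mutual information over discrete feature sets, following
# I(a;b) = H(a) + H(b) - H(ab). The feature labels are hypothetical.
from collections import Counter
from math import log2

def H(samples):
    counts = Counter(samples)
    total = len(samples)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# paired feature observations from two images (invented data)
a = ["edge", "edge", "mass", "mass"]
b = ["edge", "edge", "mass", "edge"]

mutual_information = H(a) + H(b) - H(list(zip(a, b)))

# identical images share all their information: I(a;a) = H(a)
assert abs((H(a) + H(a) - H(list(zip(a, a)))) - H(a)) < 1e-9
```

The final assertion is the case described in the text: when the two items are the same, the mutual information is as high as it can be, namely the entropy of either item.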

What ACJ does is deal with issues 1 and 2. Issue 3 is more complex, because it introduces uncertainty as to how features might be distinguished. ACJ deals with 1 and 2 in the same way that any information-theoretical treatment deals with problems of transmission: it introduces redundancy.

That means that the number of comparisons needed from each judge depends on the quality and consistency of the ranking which is produced. This can be measured by determining the distance between the ranking produced by the system and the ranking determined by experts. Ranking comparisons can be made for the system as a whole, or for each judge. Through this process, individual judges may be removed or others added. Equally, new images may be introduced whose ranking is known relative to the existing ranking.
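One simple way to measure the distance between the system's ranking and an expert ranking is to count pairwise disagreements (the Kendall tau distance); a sketch, with hypothetical rankings:

```python
# Kendall tau distance: the number of pairs of items that the two
# rankings place in opposite orders. Zero means perfect agreement.
from itertools import combinations

def kendall_distance(rank_a, rank_b):
    pos_a = {item: i for i, item in enumerate(rank_a)}
    pos_b = {item: i for i, item in enumerate(rank_b)}
    return sum(1 for x, y in combinations(rank_a, 2)
               if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0)

system = ["s1", "s2", "s3", "s4"]   # ranking produced by the judges
expert = ["s1", "s3", "s2", "s4"]   # reference ranking from experts

assert kendall_distance(system, expert) == 1   # one swapped pair
assert kendall_distance(system, system) == 0
```

Computed per judge, the same measure identifies which judges are introducing "noise" into the ranking and might be removed.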

The evaluation of ACJ is a control problem, not a problem of identifying it as an independent variable. Fundamentally, if ACJ doesn't work, it will not be capable of producing a stable and consistent ranking - and this will be seen empirically. That means that the complexity of the judges performing ranking will not be as great as the complexity of the ranking which is input. The complexity of the input will depend on the number of features in each image, and the distance between each pair of images.

In training, we can reduce this complexity by having clear delineations of complexity between different images. This is the pedagogical approach. As the reliability of the trainee's judgements increase, so the complexity of the images can be increased.

In the clinical evaluation of ACJ, it is possible to produce a stabilised ranking by:

  1. removing noise by removing unreliable judges
  2. increasing redundancy by increasing the number of comparisons
  3. introducing new (more reliable) judges
  4. focusing judgements on particular areas of the ranking (so particular examples) where inconsistencies remain

As a control problem, what matters are the levers of control within the system. 

It's worth thinking about what this would mean in the broader educational context. What if ACJ was a standard method of assessment? What if the judgement by peers was itself open to judgement? In what ways might a system like this assess the stability and reliability of the rankings that arise? In what ways might it seek to identify "semantic noise"? In what ways might such a system adjust itself so to manipulate its control levers to produce reliability and to gradually improve the performance of those whose judgements might not be so good? 

The really interesting thing is that everything in ACJ is a short transaction. But it is a transaction which is entirely flexible and not constrained by the absurd forces of timetables and cohorts of students.



Wednesday, 10 May 2017

The Managerial Destruction of Universities... but what do we do about it?

As I arrived at the University of the Highlands and Islands for a conference on the "porous university", there was a picket line outside the college. Lecturers were striking over a deal agreed with the Scottish Government to establish equal pay among teaching staff across Scotland, which college managements had reneged on. The regional salary difference can be as much as £12,000, so this clearly matters to a lot of people. It was a good turnout for the picket line (always an indication of how much things matter) - similar to the one when the University of Bolton sacked their UCU rep and his wife, which made the national press (http://www.dailymail.co.uk/news/article-3013860/Lecturer-wife-sacked-failing-University-Bolton-blowing-whistle-100-000-jolly.html)

It is right to protest, and it is right to strike. But sadly, none of this seems to work very well. Bad management seems to be unassailable, and pay and conditions continually seem to get worse.

At UHI, the porous university event was an opportunity to take the temperature after more than five years of managerial pathology in universities across the country. The collective existential cry of pain from the group was alarming. The optimism, hope, passion and faith which are the hallmarks of any vocation - and were certainly the hallmarks of most who worked in education - have evaporated, replaced with fear and dejection. Of course, an outside observer might remark "well, you've still got jobs!" - but that misses the point. People might still be being paid (some of them), but something has broken in the covenant between education and society, destroying a core part of the personal identities of those who work in education. It's the same kind of breaking of covenant and breaking of spirit associated with a once-healthy marriage destroyed by a breakdown of trust: indeed, one of my former Bolton colleagues described the spirit of those working for the institution as being like that of "the victims of an abusive relationship".

Lots of people have written about this. Stefan Collini has just published his latest collection of essays on Universities, "Speaking of Universities", which I was reading on the way up to Scotland. It's beautifully written. But what good does it do?

In the perverse monetised world of universities, the writing and publishing (in a high ranking journal) of a critique of the education system is absorbed and rewarded by the monetised education system. In its own way, it's "impact" (something Collini is very critical of). Weirdly, those who peddle the critique inadvertently support the managerial game. The university neutralises and sanitises criticism of itself and parades it as evidence of its 'inclusivity' and the embrace of opposing views, all the time continuing to screw lecturers and students into the ground.

A good example of this is provided by the University of Bolton, who have established what they call a "centre for opposition studies" (http://www.bolton.ac.uk/OppositionStudies/Home.aspx). There are no Molotov cocktails on the front page - but a picture of the House of Commons. This is sanitised opposition - neutralised, harmless. The message is "Opposition is about controlled debate" rather than genuine anger and struggle. Fuck off! This isn't a progressive way forwards: it is the result of a cynical and endemic conservatism.

I wouldn't want to accuse Collini of conservatism in the same way - and yet the symptoms of conservatism are there, in the way that they exist in the kind of radical "history man" characters that pepper critical discourse. The main features of this?
  • A failure to grasp the potential of technology for changing the dimensions of the debate
  • A failure to reconcile deep scholarship with new possibilities for human organisation
  • A failure to suggest any constructive way of redesigning the system
If I were to be cynical, I would say that this is because of what Collini himself admits is the "comfortable chair in Cambridge" being a safe place from which to chuck bricks at the system. There is no real wish to disrupt the system to the point that the chair becomes less comfortable.

The disruption and transformation of the system will not come from within it. It will come from outside. There's quite a cocktail brewing outside the institution. One of the highlights of the UHI conference was the presentation by Alex Dunedin of https://www.raggeduniversity.co.uk/. Alex's scholarly contribution was powerful, but he himself is an inspiration. He exemplifies insightful scholarship without ever having set a "formal" foot inside a university. His life has been a far richer tapestry of chaos and redemption than that of any professor I know. Meeting Alex, you realise that "knowledge is everywhere" really means something if you want to think. You might then be tempted to think "the university is redundant" - but that might be going too far. I do think, however, that the corporate managerialist "nasty university" will not hold sway forever. People like Alex burn far brighter.

Another bright note: just look at our tools! The thing is, we have to use them differently and creatively. I did my bit for this effort. I suggested to one group I was chairing that instead of holding up their flipchart paper with incomprehensible scribbles on it, and talking quickly in a way that few take in, they instead pass a phone over the paper and make a video drawing attention to the different things on it. So paper became video. And it's great!

Monday, 8 May 2017

Educational Technologists: Who are we? What is our discipline?

I am an educational technologist. What does that mean?

I think we are at a key moment in the history of technology in education and there is a radical choice facing us.

We can either:

  • Use technology to uphold and reinforce the traditional distinctions of the institution. This means VLEs, MOOCs, Turnitin, etc. This enslaves individual brains and isolates them.
The consequences of this are well summarised by Ivan Illich:
"Observations of the sickening effect of programmed environments show that people in them become indolent, impotent, narcissistic and apolitical. The political process breaks down, because people cease to be able to govern themselves; they demand to be managed."
The alternative?
  • We use technology to organise multiple human brains in institutions and outside them so that many brains think as one brain.

To do the latter, we need to think about what our discipline really is. I am going to argue that our discipline is one that crosses boundaries: it is the discipline of study into how distinctions are made, and what they do.

For many academics, the educational technologist looks after the VLE or makes cool videos for MOOCs. They are also the person academics seek help from when the techno-administrative burden of modern universities becomes overwhelming: how do I submit my marks, how do I get my students on this course, and so on. For some academics, the educational technologist is a kind of secretary - the equivalent of the secretary who would have done the academic's typing in the 1970s, when typing was not considered to be an academic activity. Some academics blame the educational technologist for the overwhelming techno-administrative nightmare that constitutes so much of academic life today.

Certainly there is a boundary between the academy and the educational technologist. Like all boundaries, it has two sides. On the one hand, the academy pushes back on the technologists: it generally treats them with suspicion (if not disdain) - partly because it (rightly) sees a threat to its current practices in the technology. The educational technologists have tried to push back on the academy to get it to change, embrace open practice, realise the potential of the technology, etc. Right now, the academy is winning, and educational technologists are rather despondent, reduced to producing "learning content" in packages like "Storyline" which often reproduces work that already exists on YouTube.

This situation has partly arisen because of a lack of identity among learning technologists. In trying to ape the academy, they established themselves in groups like ALT or AACE as a "discipline". What discipline? What do they read? In reality, there is not much reading going on. There is quite a lot of writing and citing... but (I'll upset a few people here) most of this stuff is unreadable and confused (I include my own papers in this). In the educational technologist's defence, this is partly because what they are really trying to talk about is so very difficult.

I believe we should admit our confusion and start from there. Then we realise that what we are doing is making distinctions. We make distinctions about learning like this:

or we might make cybernetic distinctions like this:
What is this? There are lines and boxes (or circles). 

What are the lines and boxes doing?

What are the lines around the boxes doing? (these are the most interesting)

Scientific communication is about coordinating distinctions. In coordinating distinctions, we also coordinate expectations. The academy, in its various disciplines, upholds its distinctions. However, as physicist David Bohm realised, scientists don't really communicate.


For Bohm, dialogue is a way of exploring the different distinctions we make. The demand of Bohmian scientific dialogue is to continually recalibrate distinctions in the light of the distinctions of others. More importantly, it is to embrace uncertainty as the operating scientific principle.

Scientific Dialogue is about communicating uncertainty.

If we are recalibrating, then we are continually drawing and redrawing our boundaries. But this process is controlled by more fundamental organising principles which underlie the processes of a viable organism. It's perhaps a bit like this:


Here we see the death of boundaries, and the reorganisation of the organism. Much of what goes on here remains a mystery to biologists. Some are exploring the frontiers, however. Deep down, it seems to be about information...
or ecology:

Information, semiotics and ecology all concern the making of distinctions. There are a variety of mathematical approaches which underpin this. In fact, Charles Peirce, the founder of semiotics, also founded a mathematical approach to studying distinctions. These are Peirce's attempts to fathom out a logic of distinctions:





And this is the very closely related work of the mathematician and cybernetician George Spencer-Brown:


When we talk about education, or technology, or biology... or anything... we are making distinctions.

A distinction has an inside and an outside. We usually forget the outside because we want to communicate the inside. We only know the outside of the distinction by listening.

It is the same in physics - particularly Quantum physics. 


And this is becoming important for the future of computing. Quantum computers are programmed using a kind of "musical score" - like this from the IBM Quantum Experience computer:
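Whatever the real IBM composer looks like, the "score" idea itself - gates applied in time order, read left to right like bars of music - can be sketched in plain Python. This is my own toy single-qubit illustration (no quantum SDK assumed; the particular three-bar "score" H, X, H is an arbitrary illustrative choice):

```python
import math

# A qubit is a pair of complex amplitudes; a gate is a 2x2 matrix.
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]   # Hadamard gate
X = [[0, 1], [1, 0]]                          # NOT gate

def apply(gate, state):
    """Apply a 2x2 gate to a single-qubit state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

score = [H, X, H]            # the 'bars' of the score, played in order
state = [1.0, 0.0]           # the qubit starts in |0>
for gate in score:
    state = apply(gate, state)

# Measurement probabilities for |0> and |1>
probs = [abs(a) ** 2 for a in state]
print([round(p, 3) for p in probs])
```

Read left to right, the order of the bars is everything: H followed by X followed by H behaves quite differently from X alone, just as reordering the bars of a piece changes the music.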



So what does all this mean?

Well, it means that science has to embrace uncertainty as an operating principle. Yet science in the academy is still tied to traditional ways of communicating. The academic paper does not communicate uncertainty.

To communicate uncertainty, we need to listen to the outside of our distinctions.

Our scientific institutions need to reconfigure their practices so that the distinction between education and society is realigned to progress society's scientific knowledge.

This is not to say that the distinction between education and society should be removed - rather, that a discipline of examining ecologies of distinctions is essential if a new science of uncertainty is to prosper.

It also means that new media should be deployed to communicate uncertainty and understanding on a much wider basis than can be achieved with academic papers. Where we have struggled is in being able to listen to large numbers of people in a coherent way.

This is one way in which we might do this...

It involves doctors and learners using Adaptive Comparative Judgement tools as a means of making diagnoses of retinal scans in examining Diabetic Retinopathy. Adaptive Comparative Judgement is a technique of getting lots of people to make simple comparisons in order to arrive at a ranking of those scans, with the most pathological at one end, and the normal at the other. In addition to this, there is a simple way in which learners can be trained to do this themselves:
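As a sketch of how a ranking might emerge from many simple comparisons - this is my own illustration, not the actual tool: the scan names, the 80% judge accuracy, and the Bradley-Terry-style aggregation are all assumptions:

```python
import random
from collections import defaultdict

def rank_by_comparisons(items, judgements):
    """Estimate item 'strengths' from pairwise judgements (winner, loser)
    using a Bradley-Terry-style fixed-point iteration, then rank by them."""
    wins = defaultdict(int)
    games = defaultdict(lambda: defaultdict(int))
    for w, l in judgements:
        wins[w] += 1
        games[w][l] += 1          # games[i][j]: total i-vs-j comparisons
        games[l][w] += 1
    strength = {i: 1.0 for i in items}
    for _ in range(100):
        new = {}
        for i in items:
            denom = sum(n / (strength[i] + strength[j])
                        for j, n in games[i].items())
            new[i] = wins[i] / denom if denom else strength[i]
        total = sum(new.values())
        strength = {i: s / total for i, s in new.items()}
    return sorted(items, key=strength.get, reverse=True)

# Simulate many judges who mostly (not always) prefer the more
# pathological of two scans.
scans = ["scan_a", "scan_b", "scan_c", "scan_d"]
severity = {"scan_a": 4, "scan_b": 3, "scan_c": 2, "scan_d": 1}
random.seed(1)
judgements = []
for _ in range(400):
    x, y = random.sample(scans, 2)
    if random.random() < 0.8:     # the judge is right 80% of the time
        winner = x if severity[x] > severity[y] else y
    else:
        winner = y if severity[x] > severity[y] else x
    judgements.append((winner, x if winner == y else y))

print(rank_by_comparisons(scans, judgements))
```

The point is that no single judge needs to produce the ranking: each only answers "which of these two?", and the aggregation turns many fallible local comparisons into one reliable global ordering.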

Other technological means of getting many brains to act as one brain include Bitcoin and the blockchain that sits behind it...
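The property of a blockchain that matters for decentralising records like certificates can be shown with a toy hash-chain - purely illustrative (real Bitcoin adds proof-of-work, a peer-to-peer network, and much else), with made-up field names:

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Create a block whose hash commits to its data AND to the
    previous block's hash - so every block commits to all history."""
    body = {"data": data, "prev": prev_hash}
    block = dict(body)
    block["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return block

def valid_chain(chain):
    """A chain is valid if every hash recomputes and every link matches."""
    for i, block in enumerate(chain):
        if block["hash"] != make_block(block["data"], block["prev"])["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("certificate for Alice", chain[-1]["hash"]))
chain.append(make_block("certificate for Bob", chain[-1]["hash"]))

print(valid_chain(chain))                      # the intact chain verifies
chain[1]["data"] = "certificate for Mallory"   # tamper with history...
print(valid_chain(chain))                      # ...and verification fails
```

Because each block commits to everything before it, many independent parties can check the same ledger and agree on it without trusting any single record-keeper - which is exactly what makes it interesting for decentralising educational certificates.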

The MIT Digital Certificates project is exploring ways in which a blockchain might decentralise education...



What about the distinctions between education and society? How might they be better managed?

What about the distinctions between critique and functionalism and phenomenology in education?

Well, the critique only exists because it has something to push against. The thing it pushes against exists partly because of the existence of the critique (and indeed, it embraces the critique!)... We have a knot.

We should understand how it works....



Saturday, 6 May 2017

@siobhandavies and Double Description at @WhitworthArt ... and reflections on Music and Education

Living around the corner from the Whitworth Art Gallery means that I often make serendipitous discoveries. I popped into the gallery on my way into the city centre this morning and found Siobhan Davies and Helka Kaski doing this as part of their work "Material / Rearranged / to / be" - a dance work inspired by photographs from the Warburg Institute collection:



There's something very cybernetic about what they are doing - indeed, the whole installation's emphasis on action and reflection is very similar to the theme of the American Society for Cybernetics conference in 2013 (see https://www.youtube.com/watch?v=bjGcrEl0fJg). This is rather better than we managed in Bolton!

If the cybernetician Gregory Bateson wasn't the first thinker to have considered the importance of 'multiple descriptions of the world' - particularly in the distinction between connotation and denotation - he certainly thought more analytically about it than anyone else. We live with multiple descriptions of the same thing. In cybernetic, information-theoretic terms, we are immersed in redundancy. Why does Siobhan Davies have two dancers mimicking each other? Because the dual presentation is more powerful - perhaps (and this is tricky) more real - than the single description.
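Redundancy here can be given Shannon's precise sense: the gap between a message's actual entropy and the maximum entropy its alphabet would allow. A small sketch (the example strings, and the simple symbol-frequency model, are mine):

```python
import math
from collections import Counter

def redundancy(text):
    """Shannon redundancy of a string under a symbol-frequency model:
    1 - (actual entropy / maximum entropy for this alphabet)."""
    counts = Counter(text)
    n = len(text)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    max_entropy = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return 1 - entropy / max_entropy

# English-like text repeats some symbols far more than others...
print(round(redundancy("the rain in spain falls mainly on the plain"), 3))
# ...while a string in which every symbol occurs exactly once has
# no redundancy at all under this model.
print(round(redundancy("abcdefghij"), 3))
```

The symbol-frequency model understates the real redundancy of language (it ignores the structure between symbols), but it makes the point: the repetitions are not waste - they are what lets a message survive noise.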

In a world of austerity, what gets stripped away is redundancy. We streamline, introduce efficiencies, 'de-layer' (a horrible phrase that was used to sack a load of people in my former university), get rid of the dead wood (blind to the fact that the really dead wood is usually making the decisions!). The arts are fundamentally about generating multiple descriptions - redundancies. It's hardly surprising that governments see them as surplus to requirements under austerity.  But it spells a slow death of civilisation.

Warren McCulloch - one of the founders of cybernetics and, with Walter Pitts, of the first formal model of neural networks - took particular interest in naval history as well as brains. He was fascinated by how Nelson organised his navy. Of course, there were the flag signals from ship to ship. But what if it was foggy? Nelson ensured that every captain was trained to act on their own initiative, understanding the heuristics of effective self-organisation even when they couldn't communicate with other ships. McCulloch called this Redundancy of Potential Command, pointing out that the ultra-plastic brain appeared to work on the same principles. This was not command and control - it was the generation of sufficient redundancy to facilitate the emergence of effective self-organisation. In effect, Nelson organised the many brains of his naval captains to act as one brain.

That's what Davies does here: two brains act as one brain.

This also happens in music... but it hardly ever happens in education. In education, each brain is examined as if it were separate from every other brain. The stupidity of this is becoming more and more apparent, and the desperate attempts of the education system to scale up to meet the needs of society stretch its traditional ways of operating to breaking point. Yet it doesn't have to be like this.

In a project with the China Medical Association at Liverpool University, we are exploring how technologies might facilitate the making of collective judgements about medical conditions. Using an assessment technology called "Adaptive Comparative Judgement", each brain is asked to make simple comparisons like "which of these scans displays a condition in more urgent need of treatment?". With enough people making judgements, and each person making enough judgements, many brains act as one brain in producing a ranking of the scans which can then be used to prioritise treatment. In practice, it feels like a kind of orchestration. It is the most intelligent use of technology in education I have ever been involved with.

Orchestration is of course a musical term. Musicians are traditionally orchestrated using a score, but there is much more going on. The fine degrees of self-coordination between players are heuristic at a deep level (much like Davies's dance). The performance and the document which describes the manner of the performance are descriptions of the same thing too. It's redundancy all the way down.

I was mindful of this as I put together this video of my score for a piece I wrote 10 years ago called "The Governor's Veil" with a recording of its performance. In video, with the score following the sound, the double description and the redundancies become much more noticeable.

 

Thursday, 4 May 2017

Teaching, Music and the life of Emotions: a response to distinctions between thinking and knowing

Music makes tangible aspects of emotional life which underpin conscious processes of being – within which one might include learning, thinking, reflecting, teaching, acting, and so on. In education, we place so much emphasis on knowledge because knowledge can be turned into an object. People make absurd and indefensible distinctions between “thinking” and “knowing”, “reflecting” and “acting”, “creating” and “copying”, partly because there is no framework for thinking beyond objects; equally, nobody challenges them because they are left only with feelings of doubt or alienation that they can barely articulate. The emotional life cannot be objectified: it presents itself “through a glass, darkly”. Only the arts, and particularly music, succeed in “painting the glass”.

In Susanne Langer’s view, composers and performers are epistemologists of the emotions: in their abstract sonic constructions they articulate what they know about what it is to feel. What they construct is a passage of time over which, they hope, the feelings of listeners and performers will somehow be coordinated to the point that one person might look at another and know that they are feeling the same thing. It is a coordination of the inner world of the many; a moment where the many brains think as one brain. This is the most fundamental essence of social existence.

We each have something of the composer in us, in the sense that we (sometimes) express our feelings. But composers do more than this. They articulate what they know about what it is to feel, and their expression is a set of instructions for the reproduction of a temporal form - a little like “E-Prime” (https://en.wikipedia.org/wiki/E-Prime), a discipline of language which forces everything to be expressed operationally. It’s a bit like the kind of games that people sometimes play: “think of a number between 1 and 10; double it; divide by …”. But similar in kind though such games are, they have nothing of the sophistication of music.

Great teachers do something similar to composers. To begin with, they work within an immensely complex domain. Broadly, the teacher’s job is to express their understanding of a subject. But when we inquire as to what it is to "express understanding", we are left with the same thing as in music: it is to express what it feels like to know their subject. In great hands, the subject they express and the feelings they reveal are coordinated to the point that what is conveyed is their knowledge of what it is to feel knowing what they do.

Talking about emotions is difficult. It is much easier to talk of knowledge, or to talk about creativity or thinking in loose rhetorical terms, avoiding any specifics. It is easy to point to pictures of brain scans and make assertions about correlations between neural structures and experiences - which somehow takes the soul out of it, and gives licence to bullies to tell everyone else how to teach based on the brutal "evidence" of neuroscience. Any child will know they are lying.

We can talk about emotion more intelligently. Wise heads in the past - some from cybernetics - made important progress on this. Bateson's concept of bio-entropy is, I think, the closest description we have of what happens (I had a great chat with Ambjörn Naeve about this yesterday). We should start with music: it is the essence of connotation. It presents the richness of the interaction of multiple descriptions of the world which was at the heart of Bateson's message. It is ecological, and its ecology is explicitly ruled by redundancies. And perhaps the most hopeful sign is that the very idea of counterpoint is beginning to take centre stage, not just in the way we analyse ecologies, but in the way quantum physicists are programming their remarkable computers.