Saturday, 26 September 2015

Explaining Explaining... and Knowledge: Reflections on Christian Smith's Realist Personalism

Between realists and constructivists there are competing explanations of the world. The nature of explanation itself, however, remains unexplored. In his book "What Is a Person?", Christian Smith targets the kind of explanation which appears in what he calls 'variables social science'. He says:
"variables social science typically breaks down the complex reality of human social life into 'independent' and 'dependent variables,' whose answer categories are assigned numeric values representing some apparent variation in the world. Dependent and independent variables are then mathematically correlated, usually "net of" the possible effects of other variables, in order to establish independent statistical associations between them. Important independent variables are identified as related to the dependent variable through calculations of statistical significance, and statistical models are produced purporting to represent how certain social processes operate and produce the observed human social world"
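The procedure Smith describes can be sketched numerically. This is a toy example with invented data, using plain numpy; the variables and numbers are my own illustration, not drawn from Smith's book:

```python
import numpy as np

# Hypothetical data: does 'hours of tutoring' (x1) predict 'test score' (y),
# "net of" a control such as family income (x2)? All values are invented.
rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)                        # "independent variable"
x2 = rng.normal(size=n)                        # control variable
y = 2.0 * x1 + 1.0 * x2 + rng.normal(size=n)   # "dependent variable"

# Ordinary least squares: regressing y on x1 and x2 jointly means the x1
# coefficient is the association "net of" x2 - exactly the move Smith
# describes as establishing "independent statistical associations".
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # roughly [0., 2., 1.]: intercept, x1 effect, x2 effect
```

The sketch makes Smith's point concrete: nothing in `beta` is a person; it is a summary of associations between single dimensions abstracted from persons.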
Smith's fundamental objection might be succinctly summarised as the problem of the "mereological fallacy", about which Rom Harré has recently been arguing with Peter Hacker in the journal Philosophy. Mereological fallacies are explanations of whole phenomena expressed in terms of their parts. Smith's concern in his book is the person, and he is right to point out that the person gets lost among the variables. He says:
"Persons hardly seem to exist in variables social science - they are rarely actually studied. What are studied instead are variables - which, when it comes to humans, are usually only single aspects or dimensions of persons or human social arrangements."
But then we come to realist explanations. Are these any better? From his Critical Realist standpoint, Smith argues that realism entails a change in perspective on the variable, not a rejection of it. This means seeing causation not as the result of event regularities, but as the "operation of often nonobservable yet real powers and mechanisms that naturally exist at different levels of reality and operate (or not) under certain conditions and in particular combinations to tend to produce characteristic results". Variables are not excluded from the investigation of conditions, but a realist perspective would:
"take seriously the fact that variables are not causal actors. Variables do not make things happen in the social world. Human persons do. Persons are shaped by the enabling and constraining influences of their social structures - even as social structures are always emergent entities produced by human activity. And variables can represent aspects of social structures and relevant features of human actors. But variables do not cause outcomes. Nor do variables lead to increases and decreases in the values of other variables. Real persons acting in particular contexts in the real world do." 
Fine. But we still end up with an explanation, and yet there is no consideration of what an explanation is. Moreover, Smith's realism seems concerned with proscribing certain 'erroneous' ways of thinking among social scientists. Without an understanding of explanation itself, this kind of proscription looks like a constraint on the imagination, where the warrant for the proscription is itself an explanation (i.e. Bhaskar's ontology). This is ironic because the book is about the person, and persons and explanations seem to be deeply entwined. Smith says, when criticising evolutionary explanations of persons: "The very nature of human personhood [...] - its reflexivity, self-transcendence, moral commitments, causal agency, responsibility, and freedom - means that persons can live, move and have their being in ways and for reasons not strictly tied to an evolutionist's monotonic explanation of everything." And yet he implies there is an explanation somehow within the reflexivity, self-transcendence and so on. I suspect there are many. It might be a mistake to believe variables to be causal actors, but equally it may be energizing and generative of powerful new ideas, initiatives, interventions and so on - many of which we are grateful for. Po-faced critical realism seems to lack a sense of fun and freedom: the erroneous imagination can produce remarkable and good things.

Explanations entail some concept of causation: the difference between realists, constructivists and positivists lies in the conception of causation as either being real, inherent in nature and discoverable, or being social constructions in the light of event regularities. But explanation itself is the elephant in the room in the debate between realists and constructivists (a debate which seems to me to be a bit worn-out now anyway).

An explanation is a kind of social constraint. Father Christmas and gravitation are both explanations (or, as Bateson would put it, "explanatory principles"), and each of them serves to coordinate social behaviour. We might think of "gravity" as having depended on event regularities for its establishment, or think of Santa as manifesting through 'normative behaviour' - although in both cases it is more multi-layered: each case has aspects of constraint materially, psychologically, socially, biologically, politically and so on. Both Father Christmas and gravity, however, are ideas which generate new practices, innovations, new forms of social organisation and so on - whether they are 'right' or not. Many of the ideas that they generate are infeasible. But by the fact that the ideas are there, we gain knowledge about the world in discovering the difference between what we might imagine and what we can bring into reality. We might imagine a world where everyone believes in Santa; but we know there is an age that our children (and we) reach when we realise it's a kind of game.

The confusion is at the interface between explanation and knowledge. It is the fault of the education system for conflating knowledge with the production of explanations. But knowledge usually emerges at the point where we realise our explanations don't work.

Thursday, 24 September 2015

Engeström's contradictions, Searle's Status Functions and Kauffman's Knots

In an analysis of a children's hospital, Engeström discusses the contradictions between different aspects of the institution:
(from "Activity Theory as a Framework for analyzing and redesigning work", 2000)

The contradictions are shown by the 'lightning arrows' between the different aspects of the activity theory diagram. For example, between objects and instruments (at the top), there is a contradiction in assuming that patients have simple conditions, for which a 'critical pathway' of care can be provided. Many patients however present multiple conditions: how does the critical pathway cope with this? Similarly, if a patient has multiple problems, and each of those problems is attended to by a different provider, then contradictions arise in the ways that those providers coordinate with each other. Engeström points out that "traditional rules of the hospital organization emphasize that each physician is alone responsible for the care of his or her patients". Similarly, multiple diagnosis patients cause problems between the different professionals attending to them within the organisation (the division of labour). 

I'm interested in the idea of contradiction in activity theory because it paints a picture of complex constraint-relations in organisational culture. Engeström seeks to find ways of working through contradictions in what he calls 'boundary-crossing', 'knotworking' and 'expansive learning'. These are fundamentally communicative engagements which seek to articulate, bring together and transform different perspectives on organisational problems. This works, I suspect, because the sources of the organisational conflict are communicative in the first place. 

In Searle's social ontology, "rules", "divisions of labour", "critical pathways", "hospitals", "doctors" and "patients" all result from what he calls "status functions". From Engeström's perspective that perhaps doesn't add very much to his notion of organisations as activity systems. However, I think Searle's status functions are more usefully examined as "scarcity functions": to declare x as a rule for conduct y in the hospital is to say that conduct other than y is not legitimate within the hospital; to declare a person to be a doctor means that nurses cannot do what the doctor does. "Doctoring" and "acting legitimately" both become scarce through the declarations of those with deontic power within the hospital (managers). Engeström appears to want to change the status functions (or scarcity functions) declared in the hospital so that the contradictions are addressed. However, for people positioned at different points in the Activity Theory diagram, there will always be issues of scarcity, role and position, rights, obligations, duties and responsibilities. In all organisations, behaviour occurs within constraints which are declared through status (or scarcity) functions. The danger is to blindly reconfigure the constraints of behaviour without fully understanding the deeper dynamics of constraint. The problem is that constraints go far beyond contradictions between divisions of labour and rules of engagement (say). They operate historically, organisationally, socially, personally, materially, ontogenetically and intersubjectively. Moreover, social behaviour frequently serves to uncover constraints. Engeström's focus on overcoming contradiction may be misplaced: it may be more important to act methodically so as to identify, more comprehensively, the constraints within which one works.
Effective communication may depend more on the mutual understanding of constraints between practitioners than it does on the codification of new practices within the organisation.

More recently I've been reading a brilliant paper by cybernetician Lou Kauffman on Category Theory and knots. Kauffman has a simple idea. Category theory is all about 'mappings' from x to y (the posh word is a 'morphism'). Status functions also are a kind of mapping. So to say "this paper counts as money in social context c" is a mapping from the paper to money within a category (context). Kauffman argues that there can be a meta-mapping z which maps onto a mapping from x to y. In other words, it looks like this:
It seems to me that we could say that the mapping from x->y constrains z. In less formal terms, talk of markets is a mapping onto the mapping about money, or the talk of markets is constrained by the concept of money. It becomes more obvious when Kauffman does this kind of thing:

These, Kauffman explains, are reflexive examples, because each mapping constrains the other: "every morphism in this category is a morphism of morphisms" (Kauffman, L., "Categorical Pairs and the Indicative Shift", Applied Mathematics and Computation, vol. 218, 2012)
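In programming terms, a morphism of morphisms is simply a higher-order function. This is my own toy illustration, not Kauffman's notation: the status function is an ordinary mapping, and the meta-mapping z takes that mapping itself as its argument:

```python
# A status function is a mapping: in some social context c, X counts as Y.
def counts_as_money(paper: str) -> str:
    """Status function: this paper counts as money (in context c)."""
    return f"money({paper})"

# A meta-mapping z: a function whose *argument is the mapping itself*.
# 'Talk of markets' operates on the money-mapping, and so is constrained
# by it: market_talk can only work with whatever counts_as_money delivers.
def market_talk(status_function) -> str:
    priced = status_function("banknote")
    return f"markets trade in {priced}"

print(market_talk(counts_as_money))   # markets trade in money(banknote)
```

The point of the sketch is only that the meta-mapping cannot be evaluated except through the mapping it operates upon: talk of markets is constrained by the concept of money.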

My question, on reflecting on all this, is whether Engeström's communicative methods can be enhanced by a deeper characterisation of constraints represented as the 'mappings' of 'status functions'. Indeed, the relationship between the status function and the scarcity function is the difference between the mapping x->y (the status function) and the mapping z -> (x->y) (constraint or scarcity). This is to dimension the inter-relations between the different aspects of the activity theory contradictions at different levels of recursion (or meta-levels). The recursive layering gives extra dimension to the rather monovalent contradictions described by Engeström.

Sunday, 20 September 2015

E-learning Failure? The OECD report into Technology in Schools and a Scientific Problem

A recent OECD report, “Students, Computers and Learning”, into the impact of technology in schools argues that computers have had little effect on children’s learning. In comparing results across Europe, there appears to be little noticeable educational advantage despite the enormous sums of money that have been spent on educational technology. You can read the report here: There is a more general air of disappointment in educational technology circles these days, much in contrast to the optimism of 10 or 15 years ago. It perhaps wasn’t a coincidence that “Edtech” was hit by the double-whammy of the financial crisis together with a sense of having reached a cul-de-sac; the financial crisis is itself a labyrinthine cul-de-sac – and we have yet to find our way out. In Edtech, we have seen the withdrawal of funding from educational technology, with government agencies shut down (like BECTA) or turned into charities dependent on institutional sponsorship (like JISC). These agencies have been accused of lacking impact, and it has been claimed that too much funding supported academics and others experimenting with technologies which weren’t sustainable. At the same time, the technological maturity of social software, MOOCs and mobile learning appears to have settled the educational questions, thus requiring no further research and development, despite the manifest shortcomings of those technologies. Diana Laurillard’s comment that the “reality falls far short of the promise” in educational technology, and that “empirical work on what is actually happening” does not reflect theoretical predictions, reflects not only the mood but a scientific problem in the way educational technology has proceeded. Laurillard doesn’t address the scientific problem but, like many others in educational technology, perpetuates it. However, education is in the state it is because of a scientific and methodological incoherence which has its roots in history long before the recent impact of the web.
The early 2000s were characterised by an explosion of ideas about education stimulated by web technologies. Underpinning both the technologies and the ways in which those technologies were envisaged to transform society were a range of ideas drawn from theories about learning, technology and organisation which had a common foundation in systems theory and cybernetics. In order to understand what has happened recently, it is important to understand the deeper historical context within which these ideas were established. Over the course of its development, and particularly in the early phases, there were many more articulations of what Ron Barnett calls ‘feasible utopias’ than could be explored in practice. Many of these ideas sprang from the 1950s, long before the web, in an age of pedagogical experimentation as governments grappled with the challenge of mass secondary schooling. By the time of the appearance of the web, the organisational power of the technology was already beginning to obstruct some paths of development in favour of others: the choice of feasible utopias became restricted. The OECD’s judgement about computers and learning is a judgement about a particular subset of ideas and activities which attracted funding, some of which delivered their promised impact, while others didn’t. Political forces drove the constraining of the research effort, and it is now political forces that determine that the phase of experimentation and research in education and technology should end in the light of ‘evidence’ produced through blinkers.
To be disappointed by failure requires an expectation about the validity of the scientific methodology that is pursued. Ministers and funders hope for the discovery of certain key ‘independent’ variables which can be manipulated to directly improve things, or implemented in policy. Theories about learning generate new ideas about education and new possibilities for intervention. Theories are the generators of promises. They make predictions as to what might happen, or what innovators might wish to happen. When predictions aren’t met or promised results do not materialise, disappointment results. The scientific problem lies in mistaken thinking about the causal connection between theory and practice in education. But what causes "successful learning"? Ministerial hopes for independent variables are misplaced, and whilst evidence will be sought to defend one position or another, in the real world, declarations of the causal power (or lack of it) of intervention x or y are nothing short of political manipulation. Such declarations blind themselves to contingencies in the name of pragmatism or expedience. More significantly, the pursuit of the independent variables of educational technology is blind to the constraints that bear upon education, theories of education, the methodologies of research, and the personal motivations of researchers, teachers, managers and politicians. The causal obsession loses sight of the constraints that frame it.
Whilst the OECD’s declaration of failure of Educational Technology betrays a simplistic causal logic, it isn’t really their fault. The distinction between causes and constraints goes back to the relationship between the two disciplines which underpin so much of the theoretical background of education: General Systems Theory and Cybernetics. Confusion between these apparently closely-related theoretical enterprises has, I argue, been partly responsible for the current malaise in education and technology. Whilst both traditions of “systems thinking” sit uneasily with the pursuit of independent variables, quasi experiments, or evidence-based policy, their underpinning empirical approaches are fundamentally distinct. The internal differences between General Systems Theory and Cybernetics have produced a central incoherence in thinking about education and educational research where Laurillard’s “disappointment” and the OECD’s rejection are both symptoms - and indicators of a possible way forwards.
Systems theory and Cybernetics
It is hard to find a book that champions educational technology which does not hang its pitch on some variety of the concept of ‘system’. From Seymour Papert, who worked with Piaget, through Diana Laurillard, whose work drew heavily on the cybernetic conversation theory of Gordon Pask, to Sugata Mitra, whose background in physics made him familiar with theories of self-organisation and cellular automata, each has defended their ideas by redescribing the complexities of education in “system” terms. In each case, the theories that educational technologists attached themselves to were universalising and transdisciplinary, seeking to account for the richness of learning and educational organisation. Connections were made between a wide variety of educational perspectives, most notably with critical approaches to pedagogy, seizing on the educational critiques of Illich, Freire and others in arguing that new means of communication would deliver new ways of organising people. System theories were the generators of educational optimism, buoyed by profound technological changes which would transform the learning situation in schools and in the home.
The tendency of most theories of learning, and particularly “systems” theories, is to be grandiose. New concepts are used to redescribe and critique existing practices. The manifest problems inherent in received knowledge and practice become clarified within the new framework, conditional upon the implementation of new practices, new ways of organising education and the adoption of technologies. The transdisciplinary, all-embracing nature of the systems descriptions presents what Gordon Pask called ‘defensible metaphors’ (Pask’s definition of cybernetics was “the art and science of manipulating defensible metaphors”) of transformed scenarios. In practice, when putting real teachers and learners and the real messy situations of education into the equation, things didn’t look so simple: redescription produced absences; certain things were overlooked; the contingencies always outweighed the planned futures. Generally there was a failure to account for concrete persons, real politics, or ethics: modelled social emancipation, however well-meaning, rarely manifests in reality. The history of cybernetics presents plenty of examples of this kind of failure, not just in education. The response to these kinds of failures takes a variety of forms. On the one hand, there is a response that criticises the intervention situation for not sufficiently supporting the initiative: with “management support” or without “management interference”, better training, more time and so on, things would have worked – it is frequently asserted. Then there is a response which looks at deficiencies of the intervention in its material constitution: the interface could have been easier, the tools more accessible, and so on. There might also be a reflection on the possibility that the underpinning theory upon which interventions were based was deficient in some way. This latter approach can lead to the rejection of one theory, and the adoption of a new one.
If there is a common thread between these different ways of responding, it is that they each focus on causes: the lack of management support caused the intervention to fail; the interface meant the tools were unusable; the deficiencies of theory caused inappropriate interventions to be made. However, an approach based on constraint rather than causality changes the emphasis in such assessments. Constraints are discovered when theorised possibilities do not manifest themselves in nature. The constraint perspective is more consistent with a Cybernetics than it is with a Systems perspective. In order to understand the constraint perspective, we have to reinterpret the diagnosis of failure. If computers fail to help children learn in the way that the marketeers argue they might, it is because there is a mismatch between the constraints that bear upon those generating ideas about what might be possible in educational reality (theorists, technologists, and so on), and the actual constraints that are revealed by the world when new interventions are attempted. We might only consider it failure if our purpose was to determine causal patterns between interventions and results: the identification of ‘independent variables’ in the implementation of technology. But what if our scientific approach was geared towards identifying constraints? Then we would have learnt a lot through our intervention: most particularly that a set of ideas and designs which in an imagined world lead to beneficial outcomes, in reality do not. What might that tell us about the constraints on thought and the generation of new ideas? How might thinking change in the light of new constraints we have discovered? By this approach, knowledge emerges in a reflexive process of theoretical exploration, and the discovery of which theoretically-generated possibilities can and cannot be found in reality.
There are significant constraints that bear upon intellectual engagement with educational technology. The identification of constraints in reality (where things don’t work as we intended) did not send theorists back to think about why they thought it might work in the first place. Whilst many of these constraints are political: “we needed the project funding to keep our jobs, and this was what the funders were asking for…”, or “this complies with current government or EU policy”, other constraints on thought emerged from the contrast between causal thinking and constraint-thinking. To put it more bluntly, they stem from confusion between constraint-oriented Cybernetics and cause-oriented General Systems Theory, to the point that, in the justification of interventions or the sales-pitch for pieces of software, explanations that attenuated the complexities of reality were produced to attract research funding, rather than being suggested as possibilities to be empirically explored.

The OECD's judgement could be an interesting step along the way; instead, there is a risk it will be seen to slam the door shut.

Saturday, 19 September 2015

Gregory Bateson on Educational Management and Knowledge

At the end of Gregory Bateson's book, Mind and Nature, there is an essay called "Time is Out of Joint", in which he talks about the relationship between knowledge, science and educational management. It was written as a memorandum to the Regents of the University of California in 1978. He says:
"While much that universities teach today is new and up-to-date, the presupposition or premises of thought upon which all our teaching is based are ancient and, I assert, obsolete. I refer to such notions as:
a. The Cartesian dualism separating "mind" and "matter"
b. The strange physicalism of the metaphors which we use to describe and explain mental phenomena - "power", "tension", "energy", "social forces", etc
c. Our anti-aesthetic assumption, borrowed from the emphasis which Bacon, Locke and Newton long ago gave to the physical sciences, viz that all phenomena (including the mental) can and shall be studied and evaluated in quantitative terms. 
The view of the world - the latent and partly unconscious epistemology - which such ideas together generate is out of date in three different ways:
a. pragmatically, it is clear that these premises and their corollaries lead to greed, monstrous over-growth, war, tyranny, and pollution. In this sense, our premises are daily demonstrated false, and the students are half aware of this.
b. Intellectually, the premises are obsolete in that systems theory, cybernetics, holistic medicine, and gestalt psychology offer demonstrably better ways of understanding the world of biology and behaviour.
c. As a base for religion, such premises as I have mentioned became clearly intolerable and therefore obsolete about 100 years ago. In the aftermath of Darwinian evolution, this was stated rather clearly by such thinkers as Samuel Butler and Prince Kropotkin. But already in the eighteenth century, William Blake saw that the philosophy of Locke and Newton could only generate "dark Satanic mills"
Bateson's work fundamentally had been about justifying these claims in an ecological science which rested on cybernetics (it's interesting that he doesn't make the distinction with systems theory). I'd always felt that Bateson appeared to lack a political edge in his writing, preferring to argue the case for his "explanatory principles" rather than fighting to get things to change. However, the following passage contains some powerful political rhetoric, although unfortunately, his prediction that the "facts of deep obsolescence will command attention" has not yet come to pass:
"So, in this world of 1978, we try to run a university and to maintain standards of "excellence" in the face of growing distrust, vulgarity, insanity, exploitation of resources, victimization of persons, and quick commercialism. The screaming voices of greed, frustration, fear and hate.
It is understandable that the Board of Regents concentrates attention upon matters which can be handled at a superficial level, avoiding the swamps of all sorts of extremism. But I still think that the facts of deep obsolescence will, in the end, compel attention."
What would he say about our universities today, where the screaming voices of greed, frustration, fear and hate have taken over the halls of learning themselves, not just the world outside the ivory tower? What would he make of the new managerial class of administrator today... particularly after having had so many difficulties himself in 'fitting in' to the academic establishment? My guess is he would find it far worse - although perhaps he would not be surprised. He sarcastically remarks that this is "only 1978" and that by 1979,
"we shall know a little more by dint of rigour and imagination, the two great contraries of mental process, either of which by itself is lethal. Rigour alone is paralytic death, but imagination alone is insanity."
Perhaps there's a side-swipe here at the 'hippy' community who embraced Bateson in his last years, when the scientific establishment of which he was clearly part had shunned him. The hippies were lovely people, only too happy to talk about ecology and consciousness, but they were all imagination with no rigour. A similar criticism might be made of today's radicals: it is not enough for Occupy, the Greens or even Jeremy Corbyn's Labour to have bold dreams; there needs to be hard analysis too - more rigorous than anything attempted by those they oppose.

Which leads on to his comment about the student uprising in 1968:
"I believe that the students were right in the sixties: There was something very wrong in their education and indeed in almost the whole culture. But I believe that they were wrong in their diagnosis of where the trouble lay. They fought for "representation" and "power". On the whole, they won their battles and now we have student representation on the Board of the Regents and elsewhere. But it becomes increasingly clear that the winning of these battles for "power" has made no difference in the educational process. The obsolescence to which I referred is unchanged and, no doubt, in a few years we shall see the same battles, fought over the same phony issues, all over again."
Well, in reality it took over 40 years, but I suggest Bateson has been proved right! What he then articulates is a theory about education's relation to science and to politics. He introduces this theory by saying "I must now ask you to do some thinking more technical and more theoretical than is usually demanded of general boards in their perception of their own place in history. I see no reason why the regents of a great university should share in the anti-intellectual preferences of the press or media." - anti-intellectual? University management?! Heaven forbid!!!

Fundamentally, the theory reflects on what he sees as "two components in evolutionary process, and that mental process similarly has a double structure". He basically argues for a conservative inner logic that demands compatibility and conformance; at the same time there is an imaginative, adaptive response by nurture in order to survive in a changing world. His point about time being "out of joint" is that the conservative and imaginative forces are mutually out-of-step: "Imagination has gone too far ahead of rigour and the result looks to conservative elderly persons like me, remarkably like insanity or perhaps like nightmare, the sister of insanity." He points out that this process is common in many fields: the law lags behind technology, for example. However, he argues for a dialectical necessity relating "conservatives" to "liberals", "radicals" and so on:
"behind these epithets lies epistemological truth which will insist that the poles of contrast dividing the persons are indeed dialectical necessities of the living world."
He argues that the purpose of university management is to maintain the balance between conservative and imaginative forces:
"if the Board of Regents has any nontrivial duty it is that of statesmanship in precisely this sense - the duty of rising above partisanship with any component or particular fad in university politics."
Perfect! But Bateson sees the dangers too. He argues that there is a reason why "acquired characteristics" cannot be inherited in biology: it protects the gene system from too rapid change. In universities, there is no such barrier:
"Innovations become irreversibly adopted into the on-going system without being tested for long-time viability; and necessary changes are resisted by the core of conservative individuals without any assurance that these particular changes are the ones to resist."
The problem is that universities can become corrupt, where the forces of conservatism take over:
"It is not so much "power" that corrupts as the myth of "power". It was noted above that "power", like "energy", "tension", and the rest of the quasi-physical metaphors are to be distrusted and, among them, "power" is one of the most dangerous. He who covets a mythical abstraction must always be insatiable!"
I don't think Bateson would be encouraged by what he would see today. We have gone nowhere, and the world of Universities, just as the world outside, is in dire straits. If Bateson's analysis is right, it is because of the "out-of-jointedness" of the two forces. In my own experience, I believe the problem lies with conservatism allying itself to mystifying allusions to 'learning' which are not rigorously inspected. His question to the Board of Regents is simple:
"Do we, as a Board, foster whatever will promote in students, in faculty, and around the boardroom table those wider perspectives which will bring our system back into an appropriate synchrony or harmony between rigour and imagination?"
We have no information as to how the Californian Regents responded. Our problem today is that there are too many (overpaid) Vice-Chancellors who wouldn't even understand the question.

Tuesday, 15 September 2015

Spam and Schutz and Meaningless Snapchats

There's a brilliant exhibition at Manchester's @HOME_mcr at the moment. "I must first apologise..." by Joana Hadjithomas and Khalil Joreige. Having collected spam emails for a number of years, Hadjithomas and Joreige put the words of those emails into the mouths of actors who were filmed reading pleas for money to be sent in various desperate circumstances. In the exhibition you first walk into a darkened room full of TV screens, each one showing a talking head. It's a cacophony of voices, and you have to get close to hear what each individual is saying. You look into their eyes, you hear words which we all have read in our inboxes every day, and somehow it all seems very different.

The closeness of the piece is what interests me, or rather the difference between reading text in an email, and staring into someone's eyes reading that text (albeit on a computer screen). They also try very innovative forms of projection where characters are projected onto a gauze-like translucent material which makes them appear to 'stand out'. It's impressive.

Since I've been thinking about intersubjectivity so much recently, the difference between text exchanges on a screen, time-based voice and video experiences, and real face-to-face contact resonates with deeper changes in the ways we interact. The online revolution has meant that our intimate face-to-face contact has diminished (it's time-consuming and time-dependent and inefficient!), and remote text exchange has increased. But, beyond recent calls to limit the use of email at work, we haven't really been able to articulate how these forms of communication are different.

Alfred Schutz made a distinction between the 'face-to-face' situation as what he called the "pure we-relation" and the more remote "world of contemporaries". With regard to the former, he says:
"I experience a fellow-man directly if and when he shares with me a common sector of time and space. The sharing of a common sector of time implies a genuine simultaneity of our two streams of consciousness: my fellow-man and I grow older together. The sharing of a common sector of space implies that my fellow-man appears to me in person as he himself and none other. His body appears to me as a unified field of expressions, that is, of concrete symptoms through which his conscious life manifests itself to me vividly. This temporal and spatial immediacy are essential characteristics of the face-to-face situation." 
With regard to the latter he comments that:
"The stratification of attitudes by degrees of intimacy and intensity extends into the world of mere contemporaries, i.e., of Others who are not face-to-face with me, but who co-exist with me in time. The gradations of experiential directness outside the face-to-face situation are characterized by a decrease in the wealth of symptoms by which I apprehend the Other and by the fact that the perspectives in which I experience the Other are progressively narrower."
It would appear, then, that there is simply 'more information' in the face-to-face encounter. But what does that mean exactly? After all, information itself is an intersubjective phenomenon. The "variables" that we might identify as distinctions between face-to-face and remote communications - for example, body language, gaze, tone of voice and so on - are all themselves categories of experience only accessible to us because we live in a world of others.

Recently, I've begun to look at this differently. Schutz's phrase "the wealth of symptoms" is carefully chosen because it doesn't implicate information directly. Rather, it says there are distinctions that we might agree between us. In that process of agreeing distinctions, something constrains us to the point that we can say "this is the gaze... this is the body language..." and so on. The constraint is the "not-variable" - the thing which lies outside the identified property. "Not-variables" have a different kind of combinatorial logic to variables. I suspect that Schutz's intersubjective differences between the face-to-face situation and the world of contemporaries arise from an interaction of different constraints or "not-variables". In face-to-face settings, there are more constraints than there are in remote settings. The way we tune into each other depends on the way we recognise the constraints bearing upon each other.

Schutz's insight had one of its most powerful expressions in his love of music. His paper "Making Music Together" is a remarkable account of the way that music and musicians communicate without any kind of reference. I am fascinated by the analytical component of this, which is why I'm messing around exploring the interactions of different redundancies in musical performance at the moment. The relationship between redundancy and constraint is fascinating because it involves not only the redundancies of expression in face-to-face communication (gazes, body movements, voice tone, etc), but also the redundancies of repetition and habit. Whilst most social media provides a narrow form of communication, it is also built for redundancy and the expression of habit (think of endless Twitter messages about what you're having for breakfast, or streams of Snapchats with little content).

As we communicate more remotely with one another, so we find new ways of generating redundancy in those communications where the redundancies of face-to-face would have once taken much greater precedence. What this might mean is that 'wasting time' online with endless Tweets about very little is much more significant than those who criticise it might think.

Sunday, 13 September 2015

Jeremy Corbyn's ideas: the Arts and Science

The most exciting thing about Jeremy Corbyn's election is the imaginative energy which it has unleashed. The mere fact of his landslide election is an eschewing of constraints which we all believed we were imprisoned by and could do nothing about. What brought this about? The power of the imagination! We should never underestimate the power of exploring new, fresh ideas. It seems to me that those bewailing his victory as "the end of days" are bereft of ideas (they're now reduced to a single idea: get rid of him!). The mealy-mouthed, cynical politics of New Labour always had a very limited repertoire of ideas; and the Tories manage to carry off this rather meagre oeuvre with more voter appeal than Labour ever could. New Labour had their moment with Blair's 'clause 4' - that too was about shedding constraint - and they won power. Unfortunately, government - and Iraq - put the shackles on them again. Blair is clanking around like Marley's ghost - and his protégés have never been allowed to think for themselves (blame Mandelson).

The most eye-catching aspect of Corbyn's campaign was his very clear statement about the arts. It is a statement about ideas and creativity. Note that there is no comparable statement for science. There are policy documents for business, green technologies and his 'national education service', but in terms of something that drives the intellectual innovative climate in the country, Corbyn's statement about the arts goes the furthest:
"The key to enriching Britain is to guarantee a broad cultural education for all (through arts skills acquisition, participation in arts and cultural events and enhanced appreciation), an education and a curriculum that is infused with multi-disciplinarity, creativity and enterprise which identifies, nurtures and trains tomorrow’s creative and cultural talent."
I can only ask: why did it take a renegade politician to make such a powerful statement about developing the intellectual capital of the country? Why have we had to suffer interminable blathering from arts ministers desperately trying to defend their budgets with no clear vision as to what a national climate of imagination can do? Why has there been continual talk about 'aspiration' from the government and opposition tied to a market-oriented logic that dulls hope and reduces everything to monetary terms? Corbyn's document defends a cultural environment to encourage the growth of ideas, even if some of them aren't any good. Indeed, it supports the ideas that don't work as much as those few that do. It recognises that a cultural system is an ecology:
"Culture is unpredictable. It often bites the hand that feeds it. It produces more dross than glory. Of course it does. But the dross must be defended because the road to glory lies always over the bogs of dross."
I've often wondered whether our market-driven shackles have been determined not by greedy capitalists, but by a particularly blinkered approach to creativity and science which has been delivered by the education system (which certainly breeds greedy capitalists!). What we think science is, how scientific knowledge arises, what role creativity and the imagination play in the process, are all fundamental questions which have shaped the way we organise society. In today's world, entrepreneurial technological innovation is seen as the "way forwards": and yet, as the behemoths of the technological world become bloated, gradually they too run out of ideas. At the same time, our society starves those without the independent means to explore their ideas by marketising and commodifying education.

In our school science classes, we reproduce the great experiments of the past. We are taught empirical procedures set out in textbooks which mechanise the whole thing, and constrain all the possible things we might think of doing with a Bunsen burner to very few 'acceptable' things (measured by learning outcomes) - partly out of fear of exploring the dangerous things. No intellectual advance ever was produced through such constraints: they only induce fear and reinforce an authoritarian environment for science.

The arts are fundamentally about play, and play is missing from virtually all aspects of intellectual development within school. As crude metrics, learning outcomes, exams, monotonous coursework and (more than anything else) league tables and managerial pathology have taken hold, minds are dulled into a 'professionalised' conformance. At an academic conference I was at recently, I was struck by how similar many of the academic presentations were. A friend muttered to me "professional academics - it didn't used to be like that..." Damn right it didn't!

The deep question is that in our understanding of how science works, and how scientists work, there is a huge gap concerning how scientists play, and the conditions within which they are enabled to play. That, I think, is what Corbyn's arts strategy is about. Scientists and artists are both in the business of generating lots of ideas. They know not all of them will work. Knowledge comes from testing ideas in reality and working out which ones work and which ones don't. If we taught our kids like that, rather than getting them to ape successful experiments, we would have a much more invigorating science education which genuinely embraced curiosity. Scientists would become more like artists. And our economic prospects would be invigorated by the sudden removal of constraints on imagination which we thought would be with us for ever.

Saturday, 12 September 2015

Entropy and Aesthetics: Some musical improvisation experiments

I've been doing some weird musical experiments recently. What started out as a very simple Javascript program to illustrate the calculation of entropy on-the-fly turned into doing a similar thing for musical notes recorded as MIDI signals. With text, the problem of calculating entropy is quite simple: you can look at individual letters and calculate their entropy (basically the negative sum, over each letter, of its probability times the log of its probability) - it's a measure of the average 'surprisingness' of the text. You can also look at 'digrams', or successions of two symbols, and continue with trigrams and so on.
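
As a rough illustration (in Javascript, since that's what my program is written in), letter and digram entropy can be computed like this - a toy sketch, not the actual program:

```javascript
// Shannon entropy, in bits per symbol, of a string (or array of symbols):
// H = -sum over each distinct symbol s of p(s) * log2(p(s)).
function entropy(symbols) {
  const counts = new Map();
  for (const s of symbols) counts.set(s, (counts.get(s) || 0) + 1);
  let h = 0;
  for (const c of counts.values()) {
    const p = c / symbols.length;
    h -= p * Math.log2(p);
  }
  return h;
}

// Overlapping n-grams of a string: ngrams("abc", 2) gives ["ab", "bc"].
function ngrams(text, n) {
  const out = [];
  for (let i = 0; i <= text.length - n; i++) out.push(text.slice(i, i + n));
  return out;
}

const text = "abababab";
console.log(entropy(text));            // 1 bit: "a" and "b" are equally likely
console.log(entropy(ngrams(text, 2))); // digram entropy of the same text
```

The digram entropy is lower than you might expect for this example because "ab" so reliably follows itself - which is exactly the kind of surprisingness-reduction that the n-gram view captures.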

I initially started by looking at the notes sent down the MIDI cable to do the same thing as I had with text. But then there are so many aspects to music: the harmony, rhythm, intervals (which are perhaps more interesting and important than the MIDI value of the notes), dynamics, etc. Somehow all of this stuff works together to produce the aesthetic effect. But each of these elements has its own entropy value: there's rhythmic entropy, harmonic (or bass-line) entropy, and so on. So I elaborated my program to take these measurements as well. Then it struck me that some way was needed of relating each of these elements to one another. I feel somehow there ought to be a way of calculating what Shannon calls "mutual information"... although doing this with music is not at all like doing entropy calculations with text. Short of doing this, the most obvious thing to do is to indicate the differences between the different levels of entropy at different points in time. So that is what I've done in the first instance. It's very crude.
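
A sketch of the per-dimension idea, where the note format ({pitch, duration}) and the choice of features are simplifying assumptions for illustration rather than what my program actually reads off the MIDI cable:

```javascript
// Shannon entropy, in bits, of the symbols in an array.
function entropy(symbols) {
  const counts = new Map();
  for (const s of symbols) counts.set(s, (counts.get(s) || 0) + 1);
  let h = 0;
  for (const c of counts.values()) {
    const p = c / symbols.length;
    h -= p * Math.log2(p);
  }
  return h;
}

// A toy note stream; {pitch, duration} is an assumed format.
const notes = [
  { pitch: 60, duration: 1 }, { pitch: 62, duration: 1 },
  { pitch: 64, duration: 2 }, { pitch: 62, duration: 1 },
];

const pitches = notes.map(n => n.pitch);
const durations = notes.map(n => n.duration);
// Intervals: differences between successive pitches.
const intervals = pitches.slice(1).map((p, i) => p - pitches[i]);

// Each musical dimension gets its own entropy value.
console.log(entropy(pitches), entropy(intervals), entropy(durations));
```

Printing the three values side by side at each moment is the "crude" comparison of levels described above; a proper mutual information calculation would need joint distributions across these dimensions.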

When I improvise, I watch the values changing and the relationships between the different elements of entropy shift. It's quite fascinating. What I am looking for, basically, is a correlate for how I feel as I play. But I've reached a stage now where I think this structure, with its oscillating connections, needs to grow over time: because that's what happens to me as I play. What is this growth?

Drawing on my previous post about 'learning gain', I think this growth is the emergence of order. But as Von Foerster said with regard to this, there is a process of growth that occurs within constraints, and there is a growth which occurs by expanding or changing the constraints. How would my model of improvisation identify its constraints?

In music analysis, one of the principal tasks is 'segmentation' of the music into parts that can be analysed: the architectural units of a symphony, for example. It is a bit of a mystery how musicians identify these segments. It may be that the identification of boundaries where one set of constraints gives way to another is what is going on here. Is it possible to write a computer program that does that? Currently, I haven't got anything that measures constraint at all: I am measuring entropy which is (kind of) the opposite - so that is my next task.
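
One naive starting point for such a program: slide a window along a symbol stream, compute the entropy of each window, and flag positions where it jumps sharply. The window size and threshold here are arbitrary assumptions - a toy sketch, not a serious segmentation algorithm:

```javascript
// Shannon entropy, in bits, of the symbols in an array (or string).
function entropy(symbols) {
  const counts = new Map();
  for (const s of symbols) counts.set(s, (counts.get(s) || 0) + 1);
  let h = 0;
  for (const c of counts.values()) {
    const p = c / symbols.length;
    h -= p * Math.log2(p);
  }
  return h;
}

// Flag window positions where the entropy jumps by more than `threshold`.
function boundaries(stream, windowSize, threshold) {
  const marks = [];
  let prev = null;
  for (let i = 0; i + windowSize <= stream.length; i++) {
    const h = entropy(stream.slice(i, i + windowSize));
    if (prev !== null && Math.abs(h - prev) > threshold) marks.push(i);
    prev = h;
  }
  return marks;
}

// An ordered run followed by a scrambled one: the entropy jump near the
// join is flagged as a possible boundary.
const stream = ["a", "a", "a", "a", "a", "a", "c", "b", "d", "a", "c", "b"];
console.log(boundaries(stream, 4, 0.5));
```

The interesting research question is whether boundaries found this way line up with where musicians themselves hear the segments.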

Perhaps it's worth saying why I think this is important. The measurement of different levels of entropy is a useful technique applicable to all kinds of experiences. Imagine being able to measure the moves that a player makes in a computer game, for example; or the drawing process of an artist; or the behaviour of individuals when interacting with an artwork, or the use of a new website, etc. In each case there is emergent order within constraints that change. Music is a powerful domain for beginning with this kind of research because it has such a powerful emotional component.

It makes me think that the most powerful research domains are those where analytical endeavour is most entangled with human emotional responses.

Tuesday, 1 September 2015

Learning Gain and the Measurement of Order

I think the biggest problem with HEFCE's current obsession with 'Learning Gain' is its motivation. One suspects that behind their wish to identify the "causal factors" behind "successful" learning there is the wish that particular practices or metrics might be identified which can then be used by management, the HEA or other agencies who see their role in controlling and monitoring the behaviour of teachers. In other words, Learning Gain = Big Management. The funding calls for research look very much like a desire to seek 'policy-based evidence'. I would be happy to be proved wrong, but I think it unlikely that HEFCE wants a scientific investigation of learning to highlight the pathology of management itself: that, whatever the relationship between teaching and student success, interfering management driven by targets and greed has a generally detrimental effect on staff morale, creates inflexibility in educational practice and damages the overall educational ecology of institutions.

But what if we were to consider a proper scientific study of learning and teaching? How would that study look different from what HEFCE appear to be doing?

First of all, it would focus not on management, but organisation. The problem with HEFCE's approach, it seems to me, is that the scientific problem is misconceived as a problem of management. This is understandable since HEFCE is a regulatory organisation and clearly they would wish for research that justifies their position - and the position of every senior management team in UK Universities (whose sponsorship HEFCE and the HEA crave with collapsing central government funding). But it's the wrong question. How could it be better?

Deep down, Learning Gain is an effort to find a measurement of emergent order. It is closely related to the biologist's study of growth or the ecologist's study of viability. But measuring order is a well-known and difficult problem. In ecology, only recently have statistical techniques been effectively used. Epigenesis remains highly contested in biology. And the scale of the problem is greater for education, because at least the biologists and ecologists can see growth processes over time. Neither HEFCE nor anyone else can look into the heads of learners and teachers: learning itself is metaphysical; only behaviour and communication are available for inspection. However, not all is impossible - and there's good reason not to give up.

Heinz von Foerster attempted to address the problem of measuring order in a paper "On Self-Organizing Systems and Their Environment" in 1959. Despite its age, what he says is profound and deeply relevant. In measuring order, von Foerster says:
"we wish to describe by this term [a measurement of order] two states of affairs. First, we may wish to account for apparent relationships between elements of a set which would impose some constraints as to the possible arrangements of the elements of this system. As the organisation of the system grows, more and more of these relations should become apparent. Second, it seems to me that order has a relative connotation, rather than an absolute one, namely, with respect to the maximum disorder the elements of the set may be able to display. This suggests that it would be convenient if the measure of order would assume values between zero and unity, accounting in the first case for maximum disorder and, in the second case, for maximum order." 
He goes on to say: "What we expect from a self-organizing system is, of course, that, given some initial value of order in the system, this order is going to increase as time goes on." Most teachers will hope for this in their students. The emergence of skilled performance, new vocabularies, mastery of concepts and so on are all indicative of increasing order. This process occurs within sets of constraints. As the system increases in order, both the ordered relations and the constraints within which they emerge become more explicit: every parent of every teenager knows precisely what this is! However, the order of the system is relative to the constraints within which it develops. Von Foerster uses Shannon's equation for redundancy to indicate this. Now, redundancy is the 'background' of information: it constrains messages - typically it takes the form of repetitions, superfluous ways of doing or saying the same thing, surplus stuff. In information theory, information (H) is a measure of the uncertainty of a particular event. Redundancy is 1 minus the ratio of the uncertainty (H) of a particular event to the maximum possible uncertainty (Hm) of any event. Mathematically then, redundancy is:

R = 1 - H/Hm
Von Foerster then argues that this redundancy can serve as a measurement of order: if there is total disorder then H equals Hm and the constraint, or redundancy, is 0. So no rules, no constraints = maximum disorder - most headteachers would agree, and learners will feel overwhelmed by complexity. Conversely, if order is high, then uncertainty is low, and redundancy is close to 1: 'total constraint'. Disciplinarian classrooms would be more like this: with low uncertainty, learners will get bored or feel oppressed.
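
Von Foerster's measure is easy to sketch in code. Here Hm is taken as the log of the number of available symbols - an assumption about how the maximum uncertainty is fixed, which in real educational settings is exactly the contested part:

```javascript
// Shannon entropy, in bits, of the symbols in an array.
function entropy(symbols) {
  const counts = new Map();
  for (const s of symbols) counts.set(s, (counts.get(s) || 0) + 1);
  let h = 0;
  for (const c of counts.values()) {
    const p = c / symbols.length;
    h -= p * Math.log2(p);
  }
  return h;
}

// Von Foerster's order measure: redundancy R = 1 - H/Hm, where Hm is the
// maximum possible entropy given the number of available symbols.
function order(symbols, alphabetSize) {
  const hMax = Math.log2(alphabetSize);
  return 1 - entropy(symbols) / hMax;
}

// With four symbols available: a uniform stream is maximum disorder (R = 0);
// a constant stream is total constraint (R = 1).
console.log(order(["a", "b", "c", "d"], 4)); // → 0
console.log(order(["a", "a", "a", "a"], 4)); // → 1
```

The two limiting cases are the chaotic classroom (R near 0) and the disciplinarian one (R near 1); anything interesting happens in between.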

The really important thing that HEFCE should take note of here is that if we see Learning Gain as a measurement of order, it must focus on the constraints (or redundancies) within which order develops. It is this simple shift in focus which is crucial. And what are the constraints of educational practice? Well, dominant among them is... the management and their regulatory instruments!

However, as increasing order makes the constraints more explicit (think teenagers and parents), something else happens which at once decreases the order in the system, but which presents new possibilities for increasing it: this is the increase in the maximum uncertainty. Von Foerster explains that the easiest way to do this is to increase the number of elements that make up the system. In simple terms, the acquisition of new skills, concepts, social connections and so on is an increase in the number of possibilities: it is an increase in the maximum uncertainty. Order involves a relationship between uncertainties about events (or actions) and maximum uncertainties: intellectual development then is a kind of dance. We can imagine two cases: a. increasing order relative to a fixed maximum disorder; b. maintaining order against a changing maximum disorder. In simpler terms, there is development within a stable environment, and there is resilience within a changing environment. Teachers apply constraints to learners, and are subject to constraints imposed by managers. Management constraints can demand that constraints applied to learners are adjusted (at the extreme end, "don't fail any learners - we need their fees!"); learners equally can apply constraints on teachers. Von Foerster's analysis of order reveals the organisationally miserable chaos of education. But it also provides analytical tools for thinking about how we might analyse and deal with it.

These processes are processes of order arising through self-organising dynamics operating within constraints. However, Von Foerster also makes the case for order arising out of noise. Noise is important in information theory because it constrains the information transmission process, necessitating the injection of redundancies (e.g. "can you say that again?") into the communication. The general hubbub and atmosphere of educational institutions can greatly contribute to the quality of the experiences (for which read the "increase in order") for everyone there. "Noise" is a constraint that applies to everybody; it is contingent on the actions of everybody - but particularly the actions of management, who have the power to suppress it. Get things wrong and the atmosphere will be killed.

So, to summarise, being cynical about HEFCE's Learning Gain plans should not distract from the potential of scientific insight into the nature of the organisation of education. However, that insight involves the study of emergent order, where order emerges within constraints. Our measurement of order is the measurement of those constraints. In simple terms, we need to know how the constraints in education, from management diktats to learning outcomes, targets, assessment criteria and timetables, interact. Moreover, the relationship between constraints is ecological - probably in a similar way to the biological dynamics studied by ecological statisticians. There is little doubt that there are synergies between constraining forces which in some institutions produce remarkable results, just as there are combinations of constraining forces in other institutions which do not produce the emergent order that one would hope for from a University - indeed, they may even cause a collapse of order. The real challenge is to get the question right.