Thursday, 11 February 2016

Online Intersubjectivity and Constraint

Face-to-face conversations are quite different from online conversations. Today I had an interesting conversation about conversation. I could look into the eyes of the person I was talking to and try to gauge what was happening in their 'inner world'. Each utterance I made carried with it a set of expectations as to how the person I was talking to would respond. If a response surprised me too much, I found it difficult to formulate another utterance - there were a couple of awkward silences! These moments were the most interesting for me. I found that I had to reflexively adjust my set of expectations in quite a fundamental way. I needed space to do this, and the intensity of a face-to-face exchange is not always the most conducive to this kind of reflexivity.

Online, now, writing my blog, I think I can do it better. Moreover, I feel the need to do it. There's something about blogging which helps to resolve intellectual tension: it may be a kind of masturbation (which worries me sometimes).

Part of the discussion today concerned the we-relations that Schutz talks about. Fundamentally, the question is: does being part of an online community constitute a 'we-relation'? In Schutz, the pure we-relation is exclusively focused on face-to-face interaction. Central to the pure we-relation is the shared flow of time: through this flow, “the reciprocal sharing of the other’s flux of experiences in inner time, by living through a vivid present together” occurs.

Modern technology provides new ways of accessing a 'vivid simultaneity': video is the most obvious example. New real-time technologies are continually emerging on the back of websocket technology. There is also a kind of vivid simultaneity in Facebook and Twitter - except that it is extended across a span of time. What is the common factor between these different kinds of activity? And what are the phenomenological differences between them?

Facebook and Twitter are not quite the same as Schutz's rather remote "world of contemporaries" (the term he uses to describe human relations with people who are real and alive but not present in front of us). In exploring the common ground of these phenomena, I am driven to the conclusion that it is the constraints of communication which matter. And yet much of the conversation today concerned the nature of constraint and absence, and whether constraint was merely a kind of causal power.

I once believed that constraint was a causal power, and argued about this with cyberneticians who disagreed. I now see constraints and causes as fundamentally different orientations in looking at the world - and the constraint orientation is helpful because it bypasses the problems of causal thinking including the inevitable tendency towards reductionism.

Face-to-face discussions are profoundly constrained by the myriad sensory perceptions which are exchanged as we talk and look at each other. Choices of utterances, as with choices about anything, emerge through what can't be thought: forgetting and ignorance are often the drivers of decision. So to talk of a pure we-relation is to talk of a highly constrained situation where the constraints are simultaneous - one might call them synchronic. It is not necessarily the case that there are fewer constraints in an online conversation. But it is true that those constraints are less synchronic and more diachronic. A Twitter feed is the gradual exposure of diachronic constraint.

So can a we-relation exist online? Reframing the question, I would say the issue is whether a we-relation which rests on synchronic constraints is the same as a relation which rests on diachronic constraints. This is complicated because a synchronic we-relation is an inescapable constraint for both parties. A diachronic we-relation (if such a thing is possible) depends on the different parties reading similar constraints in the flow of communications between them. There are fewer guarantees of effective 'tuning in'. However, if different kinds of media are used, including video, then there is more chance that exchanges can be meaningful and insightful into the other's inner world.

Part of the condition for this relates to the constraints which contextualise the conversation in the first place. People's bodies are constraints; their life histories are constraints; their emotions, socio-economic conditions, political views, and so on are all constraining. An online support group for people with particularly similar histories, personal tendencies, enthusiasms and so on contextualises interactions which are diachronically constraining, but which - because of shared common constraints beyond the immediate interaction - can (maybe) provide the conditions for mutual tuning-in.

Sunday, 7 February 2016

Critical Realism and Cybernetics: A struggle for a perspective on education

I've been trying to finish a book chapter on cybernetics and education. I've got stuck. The reasons why I've got stuck have to do with changes in my orientation not just towards cybernetics, but also to the philosophical perspective which I found valuable in conjunction with cybernetics, Critical Realism.

Both critical realism and cybernetics are, on the surface at least, concerned with mechanism. Critical realism asserts that what exists in the world are causal mechanisms which are, through scientific inquiry, discoverable. There are intransitive mechanisms which exist independently of human agency (the mechanisms of physics, for example), and there are transitive mechanisms which exist through human agency. Transitive and intransitive mechanisms are interconnected at different levels, where the discovery of mechanisms connects empirically derived knowledge of the transitive and intransitive domains with deep mechanisms of human flourishing: critical realism connects science to politics.

Cybernetics, by comparison, is a scientific practice of building models to explore mechanisms. It is often argued in cybernetics and systems thinking that the relation between a model and reality is one of isomorphism. Cybernetic science (it is argued) can proceed by exploring the isomorphisms, improving the models and making social interventions consistent with the models. Critical realism often criticises cybernetics for considering only 'closed systems' (with feedback), for idealism and scepticism about reality, for ignoring what critical realism sees as the polyvalent structure of mechanisms (intransitive mechanisms of biology and physics interacting with transitive mechanisms of norms, politics, positions, rights, obligations, power and so on), and for an elision between the key domains (as critical realism sees it) of human agency and social structure.

Despite all this, many good scholars have found the connection between critical realism and cybernetics useful, including John Mingers, Soren Brier, Loet Leydesdorff, John Shotter and Terry Deacon (to name a few). Personally, I found the conjunction of cybernetics with the critical realist methodology of Realistic Evaluation (Pawson and Tilley) particularly useful.

For my part, I have been more sympathetic to the critical realist position than the cybernetic position - despite the fact that I identify with the cybernetic practice of building models. I tended to see cybernetics as a variety of functionalism which, whilst useful for clarifying concepts, also held within it a pathological blindness to its political implications (something indicated in critiques by Horkheimer, Fromm, Ilyenkov and numerous others). At the same time I was also disturbed by the dogmatism of many critical realists, and their lack of criticality concerning their own theories.

Over the last year or so - a period which also saw various personal crises - I've arrived at a different position which tends to be more favourable to the cyberneticians - or at least, to key early figures like Ross Ashby, Warren McCulloch, Heinz von Foerster and Stafford Beer. Two things changed for me.

Firstly, I began to explore Hume's epistemology more carefully - spurred on by Quentin Meillassoux's wonderful "After Finitude" and some careful critique in Christian Smith's "What is a Person?". Hume is very important to critical realism because the CR endeavour is founded on a critique of his epistemology. Now I think Bhaskar's critique is ill-founded, and Hume's scepticism stands up more powerfully (now that we are clearing the layers of varnish off it) than it has done for 200 years. So it is not a given that mechanisms have to be real because scientific laws found in the laboratory are seen to work in open systems (by sending a rocket to the moon, for example). It may be that Bhaskar's right - but we are perhaps safer with Bohr: "physics is the study of what we can say about nature". The assertion of the reality of causes is unsafe for reasons that Hume basically got right.

At the same time, the cybernetic ideas about modelling and isomorphism are equally vulnerable (partly for reasons which critical realists like Tony Lawson identify). Modelling causal mechanisms is a dangerous business because it inevitably creates a fictional ideal world which is a poor guide to the actual environment within which we live. There are many pathologies in the use of models and statistical analysis which have led us into repeated economic and other social crises.

However, some cyberneticians were more clear-sighted in what they were doing. Ross Ashby, in particular, saw the relationship between his theorising, model building and his practical experiments as a process of identifying constraints, not causes. If our scientific knowledge is always going to be speculative (because there can be no objectivity for the cyberneticians), the scientific process is one of generating logical possibilities from models, and then exploring which of them cannot be found in nature. Knowledge emerges through the identification of constraint. Reflecting on my own practice, I think this is a better description of what I do. I also think it helps shed light on a range of phenomena in education about which we have been silent - most importantly, human relations and intersubjectivity.
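Ashby's procedure can be sketched in code. The following Python fragment is my own toy illustration (the variables and 'observations' are invented): a model generates the full set of logical possibilities, and what we learn from observation is the constraint - the difference between what is possible and what is actual.

```python
from itertools import product

# A 'model' of a system with three binary variables: 2^3 = 8 logical possibilities.
variables = ["A", "B", "C"]
logical_possibilities = set(product([0, 1], repeat=len(variables)))

# Hypothetical observations of the system 'in nature'.
observed_states = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}

# Knowledge, on Ashby's view, is the constraint: the states the model
# permits but nature never exhibits.
constraint = logical_possibilities - observed_states

print(len(logical_possibilities), len(observed_states), len(constraint))  # 8 4 4

# Inspecting the observed states reveals the regularity hidden in the
# constraint: C is always the exclusive-or of A and B.
assert all(a ^ b == c for (a, b, c) in observed_states)
```

The point of the sketch is that nothing is asserted about causal mechanisms: the regularity is discovered negatively, by eliminating possibilities.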

This is where I now am - but it has been a struggle, and my chapter is only slowly emerging. Hume remains a key figure. He was right that scientific knowledge lay in discourse. He was wrong in thinking that it was 'causes' which were agreed (he can be forgiven because it was the scholastic idea of causation which he was attacking). To say that it is constraints which are agreed opens many new doors - particularly onto the enormously complex, multi-variate (although variables have causal connotations!) and obscure phenomena we see in education.

Tuesday, 2 February 2016

Learning, Design and Pedagogy: Slickness and Constraints

After years of talking about ‘Learning Design’ (both the ill-fated IMS variety and the ‘softer’ version typified by Laurillard’s equally ill-starred Pedagogic Planner), it’s curious to think how we thought learning and pedagogy were the important and difficult topics, and not for one moment did we think that ‘design’ was more problematic a concept than either of them. The dream of ‘design formulae’ as a kind of recipe for effective learning online hasn’t gone away – “might analytics help?” we now ask… (probably not)

There are perhaps some obvious points to make about the ‘slickness’ of interfaces. The experience of tapping and swiping through apps on an iPad creates the kind of ‘designed experience’ that was drilled into Apple developers by Steve Jobs. This is one kind of ‘design’ – a variety which produced interfaces which present few barriers, which appear ‘natural’ or ‘intuitive’. This natural intuitiveness has become the goal of (some) instructional designers who believe that “the slicker the better” – wheeling out various cognitive speculations to support their claim. The interesting question concerns what can be defensibly said about slickness.

If an artefact is produced to a kind of design which intends that “user” behaviour falls within particular parameters to which the design relates, and that without such a design, user behaviour would be more chaotic, then it is reasonable to say that such a design has had a constraining influence on behaviour. The Highway Code is a designed set of constraints to encourage (if not force) drivers into strict parameters of behaviour. When graphic designers talk of ‘eye-catching’ designs, what they mean is that the otherwise wandering gaze is attracted and held by a particular image: that its options for looking at other things are attenuated. Indeed, the different uses of the word “design” – from the design of Apple Macs to the UML diagrams of an IT system – refer to different kinds of constraint. The IT system design attenuates by enforcing rules of behaviour on systems programmers, and eventually, users. The beautifully designed product attenuates not by enforcing rules, but often through paradoxical displays making the seemingly impossible possible. But one way or another, design is the manipulation of constraint.

Learning design then presents a curious problem because learning is not like crossing a bridge in the same way as everyone else. With a range of resources at their disposal, learners meander, take unusual turns and explore. As they do so, they encounter constraints in the world – not just the constraints pertaining to a particular domain of study, or the constraints of a particular learning resource, but (most importantly) the constraints of other people. Having said this, learners left to wander freely will often become lost or overwhelmed within a world of constraint to which they cannot adapt. Institutions apply constraints in terms of curricula, assessments and deadlines. Teachers apply constraints when they are necessary: “do it like this”, “concentrate on this subject” and so on; they will also sometimes find it necessary to remove constraints from the learner (“do whatever you think is interesting”). Teachers also have to operate within the constraints of the institution. As they apply other constraints in their teaching, they reveal the particular constraints operating on themselves. The constrained learner sees the constrained teacher negotiating barriers, some of which they recognise, and perhaps others which they can’t yet fathom. Learners do not learn content: they learn the constraints which operate on the minds of those who present content – even when those minds are hidden behind the mask of a textbook. Good teachers will make themselves objects for inspection and do what they can to reveal their own constraints. If there is a ‘zone of proximal development’, it lies in the relationship between the constraints of the learner and those of the teacher.

The teacher designs constraints through pedagogic instruction as a way of opening themselves up to learners. This kind of design is not simply a manifestation in the external world – although it may look like that (say, a textbook or a video). It is a strategy for laying a path for an imagined learner to come to understand the inner world of the teacher who carries the knowledge they might wish to gain. Such strategies are experiments: attempts to try things out, see what happens, and adjust one’s approach. In a face-to-face setting, constraints are more available for inspection: the sheer number of different signals and senses flowing through time provides a powerful way in which constraints can be discovered. If the learner is one of a crowd in a lecture room, then the learner’s constraints are less visible to the teacher. Even fewer constraints are available for inspection if the interactions are online, although in this case there is a common constraint: the technology itself.

In considering these intersubjective situations of coordinating constraints, designing is not just done by a teacher: learners design too. Their designs take the form of actions which attempt to articulate their understanding as a way of testing the response of teachers. Although this sounds like Pask’s conversation model, it is different in one important way: the gradual acquisition of concepts and the coordination of utterances is not the root of understanding. Understanding lies instead in the shared articulation of concepts – the constraints which provide insight into the constraints of a particular teacher or learner.

Teachers, technologists and learners all engage in design. We all formulate half-baked plans, and set out to explore what happens when we enact them. We learn about constraints as we do this. What is typically called ‘design’ in learning design circles is a kind of constraint bearing upon the design processes of the learner. Success is measured in terms of the predictable behaviour of learners with designed resources where the more effective the constraint applied in learning, the more “successful” is deemed the design. There are few ways (particularly with technology) for learners to meander: they cannot incorporate new kinds of tool and are forced to participate by posting text in forums. If they all post text, then the course is deemed successful. Shouldn’t this kind of attenuation be seen as a failure, not a success? 

Friday, 29 January 2016

The AI Goblin and the coming Internet of Trust

There's been a lot of talk about AI recently - which always makes me suspect that things are about to change drastically in technology. The 'AI Goblin' is a herald of change - but the change is always something other than AI! In the late 80s and early 90s, there was similar talk: "the future is in expert systems, AI, data processing!" was the cry. Only more thoughtful people like Fernando Flores (plus quite a few other cyberneticians at the time) knew this was nonsense. The future of technology, they said (in 1986), was communication. By 1993 with the launch of the World Wide Web, we knew that they were right.

This is not to say that there haven't been advances in AI, and clearly "big data" is a big - if conceptually confused - thing. But Google Translate and speech recognition remain the most impressive examples - hardly the kind of "wicked problems" that human intelligence is required to sort out. More modest data analyses and visualisations (for example, by Danny Dorling or Hans Rosling) are far more powerful than any amount of automated sentiment analysis, topic modelling, and so on. Where to from here? Intelligent x, y, and z - we are told. But to what purpose? In whose interests?

But just as the 1990s AI goblin heralded a drastic technological change, so it does now - and just as in 1993, it's got little to do with AI.

We have learnt that the communicative internet has many wonderful properties, and it has led to us all becoming wired into the network. We've also learnt that it is possible to lie to people within this complex communication network and (very often) get away with it. Sometimes the lies are very big and very expensive: lies about economic performance particularly. Transparency on today's internet is just a mask: underneath the mask everything can be changed by powerful people, and nobody would be any the wiser. But there is more to communication than exchanging text messages and documents. Where the communicative internet has failed is in establishing trust.

This is why Bitcoin is likely to be seen by future historians (if not economists) as the real revolutionary technology of the age. Because in order to have a functioning currency people have to trust the authority that backs up the currency. We trust the Bank of England when they write on bits of paper "I promise to pay the bearer" and so the currency has value. We also trust the bank and national governments to manage the money supply so that the bits of paper (which are really IOUs) are sufficiently scarce (but not too scarce) to maintain the value of the currency and keep the social mechanisms of exchange going. But Bitcoin has no bank and no central authority. There is no single person who declares "I promise to pay the bearer", there is no authority to control the money supply. There is a distributed ledger which everyone can see, and a mechanism for managing the money supply linked to the activity of verifying the contents of the ledger. So what we have here is a technology of trust: a radically different way of establishing faith in something without putting some social institution or individual in a position of authority.
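The core of the mechanism can be sketched very simply. The following Python fragment illustrates the hash-linked ledger idea only - real Bitcoin adds proof-of-work mining, digital signatures and peer-to-peer consensus, all omitted here. Each block commits to the hash of its predecessor, so no entry can be quietly rewritten without invalidating everything that follows.

```python
import hashlib
import json

def block_hash(transactions, prev_hash):
    # Hash the block's contents together with the previous block's hash.
    payload = json.dumps({"transactions": transactions, "prev_hash": prev_hash},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash):
    return {"transactions": transactions, "prev_hash": prev_hash,
            "hash": block_hash(transactions, prev_hash)}

def verify_chain(chain):
    # Anyone holding a copy of the ledger can audit it independently:
    # every block's hash must match its contents, and every link must hold.
    links_ok = all(curr["prev_hash"] == prev["hash"]
                   for prev, curr in zip(chain, chain[1:]))
    hashes_ok = all(b["hash"] == block_hash(b["transactions"], b["prev_hash"])
                    for b in chain)
    return links_ok and hashes_ok

genesis = make_block(["Alice pays Bob 5"], prev_hash="0" * 64)
chain = [genesis, make_block(["Bob pays Carol 2"], prev_hash=genesis["hash"])]
assert verify_chain(chain)

genesis["transactions"] = ["Alice pays Bob 500"]  # attempt to rewrite history
assert not verify_chain(chain)  # the tampering is immediately detectable
```

Trust here rests on the structure of the ledger itself, not on any institution standing behind it.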

Key to it is the way the ledger works and this - the Blockchain - is the thing that has got technologists excited.

There is no reason why the internet should work in the way that it does. We have servers and domain names because of a particular set of historical conditions and design decisions which led to the World Wide Web. But it has many drawbacks: it's centralised, it's impermanent, and it exposes everyone to surveillance which is mostly used for trying to sell us things. The blockchain represents a different way of organising the internet: one which is decentralised (peer-to-peer) and permanent (every file, every version of a file, web page and so on is stored). Whilst it provides a transparent audit of every single transaction on the web, the actual contents of those transactions remain private - so the web doesn't simply become a vehicle for selling you things, while illicit activity would be flagged up more readily.

Of course, the banks are interested because they have been in the firing line of breaches of trust which have exploited the internet thus far. National governments are interested because they don't trust each others' economic data. Radicals are interested because they believe the internet can be a better place than a giant panopticon. Educationalists should be interested because a better way of organising technology allows us to rethink the relationship between technology, education and society. 

Sunday, 24 January 2016

Why does educational theory get 'stuck'?

An educational theory is an attempt to calibrate understanding about educational process with experiences gained from educational practice. Yet since education is a moving target, changed by government policies, technologies, new teachers, new students, and attempts to theorise it, the calibration process must be a continual effort. Yet so often, as with the social sciences more broadly, it isn't. Instead we see educational theories "solidified" (or reified) as a kind of blueprint for educational practice. This is particularly surprising given that most educational theories have grown from the system sciences (either General Systems Theory - Piaget, Vygotsky - or Cybernetics - Kolb, Pask, von Glasersfeld... maybe critical pedagogy theorists too) which are fundamentally concerned with circular and continually emergent processes. Why does this happen?

One symptom of the "stuckness" of educational theory is the fact that few educational theories are explored for their explanatory and predictive failures rather than their successes. In cybernetics, failure, or error, is the driving force behind the dynamic process of calibration. Bateson summarised this with his 'zig-zag' diagrams in Mind and Nature, where he explains, among other things, the relationship between a thermostat managing room temperature and the person who calibrates its setting.
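The zig-zag can be sketched in code. In the following Python fragment (the numbers are invented, not Bateson's), the thermostat operates at one level, correcting error against a setpoint, while the person operates at a higher logical level, recalibrating the setpoint itself when the lower-level correction fails to satisfy.

```python
def thermostat_step(temperature, setpoint):
    # Lower-level feedback: reduce the error between temperature and setpoint.
    error = setpoint - temperature
    return temperature + 0.5 * error  # move halfway toward the setpoint

setpoint = 18.0
temperature = 12.0

for hour in range(24):
    temperature = thermostat_step(temperature, setpoint)
    # Higher-level calibration: at midday the person still feels cold,
    # so they adjust the bias the thermostat works against.
    if hour == 12 and temperature < 20.0:
        setpoint = 21.0

print(round(temperature, 1))  # → 21.0
```

Error at one level drives correction; dissatisfaction with the corrected state drives recalibration at the level above - the zig-zag between the two levels is where calibration, and learning, happens.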

A theory is a way of logically generating possibilities which provide a lens for exploring reality. Ross Ashby, a cybernetician whose attempts to model the brain (and indeed, unsuccessfully, to build an artificial one) led him to think deeply about his methodology, argued that his knowledge arose from seeing what wasn't accounted for in any model, or where the model was wrong. Of course, models will always be wrong at some level. But seeing knowledge as emerging from where things don't fit rather than where they do is something of a Copernican turn in epistemology. And there are some obvious things that educational models cannot model: for example, the process of modelling cannot account for the factors that affect modellers (and everyone else): there are constraints bearing upon the modeller's brain - not just biology, but discourse, economics, physics, chemistry and education itself. How could Ashby's brain know a brain? To be fair to him, Ashby knew the question well, and his efforts were really about exploring the parameters of his lack of understanding. Following his example, we might ask how a brain can understand education. How could we pursue our lack of understanding of education scientifically?

To explore the lack of understanding of something, or the contours of understanding/non-understanding, first demands that the separation between knowledge and practice is dissolved. Ryle's distinction between knowing-how and knowing-that doesn't make much sense to a cybernetician. The only thing to say about practical knowledge as opposed to theoretical knowledge is that the balance of constraints bearing upon each is slightly different: academic knowledge is constrained by social forces like discourse and social norms; practical knowledge, whilst also being situated in discourse, is perhaps more directly constrained by individual bodies and psychologies. The two are entwined: the utterances of theoretical knowledge are the results of skilled performances which would fall under Ryle's 'knowing how'.

Having said all this, the fact is that we don't explore constraints in education. Instead we look for causal mechanisms. In the process, theory gets stuck. Why is this, even when it appears that educational theories are deficient in some important aspect or other?

As a critic of this situation, I am in the position of a 'modeller' whose model of the complexities of education and the role of theory as calibration is not borne out by the experience of observing the dynamics of educational theory. However, if I am true to the principles that I have outlined, I am not seeking to identify mechanisms in my model which explain reality. I am looking for the deficiencies in my model in the light of experience, for those deficiencies give me more information about the boundaries of constraint in education. The pursuit of these boundaries drives intellectual excitement and curiosity: in the case of theories of physics or chemistry which successfully predict natural events, the intellectual excitement is the pursuit of the boundaries within which given explanations hold good, and beyond which they break down. So often, it is this intellectual excitement which appears to be missing from education: curiosity gives way to dogma.

The constraints bearing on education (and theory about education) cover every discipline: physics, chemistry, biology, art and aesthetics, sociology, etc. The pathology of stuckness seems to set in when obvious constraints are ignored. For example, positivistic attitudes to education which argue for the measurability of learning through continuous monitoring and testing not only overlook the psychological, sociological and biological constraints which bear upon educational development, but also overlook the many new constraints which their own approach (and the agents who uphold it) imposes upon educational practice, changing the nature of education. But the positivists will defend their position as 'scientific', its interventions causally defensible. Do they defend it more strongly because they know (deep down) it ignores so much? Is there a connection between the deficiencies of a model and the vehemence with which it is defended?

There is something odd about the social mechanisms which reward sticking to an established educational theory even in the light of it not being entirely effective. Here we are asking something about the different kinds of constraint that are at work and the ways they interfere with each other. There are plenty of examples of deficient explanatory ideas which are vehemently maintained in the absence of any justification. Religions, for example, are much more 'sticky' than educational theories!

In cybernetics, this interference between different constraints at different levels was studied by Gregory Bateson. Bateson based his theories on Ashby's identification of levels of coordination, in conjunction with Russell and Whitehead's idea of 'logical levels'. In Bateson's language, the 'stuckness' of educational theory is the result of a kind of 'knot' between different logical levels of constraint. A particular variety of this dynamic he called the 'double bind'. For example, we might imagine the constraints of a discourse, with its journals, reviewers, and a host of codified expectations about education. Then we can imagine the constraints within an institution which bear upon individual academics who wish to maintain their careers - which they can do by publishing. And we can imagine the constraints that bear upon individual teachers and learners. A paper identifying a deficiency in a theory - one which perhaps spoke of the realities of classroom experience - would be very difficult to get published if the reviewers had a stake in maintaining a particular theoretical pitch which was blind to particular issues. Given that the institution rewards publication, and the individual academic wants to progress in their career, the natural solution to the tension is to maintain existing theory. By doing so, the conditions are created where the tensions produced in the dissonance between theory and practice make it increasingly likely that the existing theory is safely maintained.

Knots like these are born of fear. The academic ego which resists evaluation of its theory is born of fear, and since fear is a bigger part of university culture today than ever before, the state of knowledge within universities is a matter of serious concern.

Bateson was interested in how knots can be untied. The challenge, he argued, was to help people step outside the 'double-bind' situation they are in and understand its dynamic. This involves invoking a 'higher power' in the hierarchy which can observe the situation as a whole. Sometimes a "higher power" simply emerges by virtue of growing older. An interesting example of this is the concept of 'Santa Claus'. For very young children, Santa serves as an explanation for where their presents come from at Christmas, and some children can be genuinely anxious whether they have been 'good enough'. As children grow older, the concept of "Santa" cannot explain the obvious evidence that presents under the Christmas tree appear to be bought by mum and dad. 'Santa' persists as a concept, not only as an explanatory principle for the very young about presents, but also as an explanation for the older children about child psychology - that telling silly stories to young children is a playful and loving thing to do. This is knowledge emerging through the revelation of constraint, where the constraint revealed by the concept of Santa is the cognitive difference between the very young and the not so very young. Understanding this constraint is an important stage in the process of growing up.

Education, by contrast, doesn't have the luxury of simply being able to 'grow up'. The higher power which could address its double-binds will have to come in the form of a new way in which education thinks about itself.

Wednesday, 20 January 2016

Bateson on Numbers, Quantity and Pattern

I was looking in Bateson's Mind and Nature for something other than that which eventually caught my attention (I was looking for what he said about calibration and feedback - more about that later). But the book opened on page 49 with the heading "Number is different from quantity". Since I have been thinking a lot about counting and probability recently, this resonated very strongly - Bateson's position with regard to probability is unclear, but his focus on counting here is important and relevant to the way we look at information and data today. Big data, for example, is fundamentally a counting exercise, not a quantifying one. Bateson says:
"Numbers are the product of counting. Quantities are the product of measurement."
So here we are going to get a distinction between counting and measurement...

"This means that numbers can conceivably be accurate because there is a discontinuity between each integer and the next. Between two and three, there is a jump. In the case of quantity, there is no such jump; and because jump is missing in the world of quantity, it is impossible for any quantity to be exact. You can have exactly three tomatoes. You can never have exactly three gallons of water. Always quantity is approximate."
Given that I was looking for his ideas about calibration, I found this distinction between measurement as an approximation (a calibration?) and counting insightful.
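Bateson's distinction is, incidentally, built into the way computers represent numbers - integers are exact, while measured quantities are stored as approximations. A small Python sketch (my illustration, not Bateson's) makes the point:

```python
# Counting: integers are exact. There is a 'jump' between each integer,
# so three tomatoes plus three tomatoes is exactly six.
tomatoes = 3 + 3
print(tomatoes == 6)  # True

# Measuring: floating-point quantities are approximations. Adding three
# 'tenths of a gallon' does not give exactly 0.3.
gallons = 0.1 + 0.1 + 0.1
print(gallons == 0.3)  # False - quantity is always approximate
# With quantities we can only ever ask 'is it close enough?'
print(abs(gallons - 0.3) < 1e-9)  # True
```

The floating-point behaviour is not a bug but a consequence of representing a continuum in finite memory - which is Bateson's point exactly: "Always quantity is approximate."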

"Even when number and quantity are clearly discriminated, there is another concept that must be recognized and distinguished from both number and quantity. For this other concept, there is, I think, no English word, so we have to be content with remembering that there is a subset of patterns whose members are commonly called "numbers". Not all numbers are the products of counting. Indeed, it is the smaller, and therefore commoner, numbers that are often not counted but recognised as patterns at a single glance. Cardplayers do not stop to count the pips in the eight of spades and can even recognize the characteristic patterning of pips up to 'ten'. 
In other words, number is of the world of pattern, gestalt and digital computation; quantity is the world of analogic and probabilistic computation."
He then goes on to talk about Otto Koehler's experiments with jackdaws which appear to count pieces of meat placed in cups. Bateson argues that the jackdaws "count"; I'm not sure (how could we be sure??!) - but there appear to be similarities between the manifest behaviour of the bird and human behaviour in counting.

He then goes on to say that "Quantity does not determine pattern". This one is particularly important for Big Data enthusiasts.

"It is impossible, in principle, to explain any pattern by invoking a single quantity. But note that a ratio between two quantities is already the beginning of pattern. In other words, quantity and pattern are of different logical type and do not readily fit together in the same thinking. 
What appears to be a genesis of pattern by quantity arises where the pattern was latent before the quantity had impact on the system. The familiar case is that of the tension which will break a chain at its weakest link. Under change of quantity, tension, a latent difference is made manifest or, as the photographers would say, developed. The development of a photographic negative is precisely the making manifest of latent differences laid down in the photographic emulsion by previous differential exposure to light. 
Imagine an island with two mountains on it. A quantitative change, a rise, in the level of the ocean may convert this single island into two islands. This will happen at the point where the level of the ocean rises higher than the saddle between the two mountains. Again, the qualitative pattern was latent before the quantity had impact on it; and when the pattern changed, the change was sudden and discontinuous.
There is a strong tendency in explanatory prose to invoke quantities of tension, energy, and whatnot to explain the genesis of pattern. I believe that such explanations are inappropriate or wrong. From the point of view of any agent who imposes a quantitative change, any change of pattern which may occur will be unpredictable or divergent. "

Pattern is closely tied up with the idea of constraint. In the example of the mountains, quantity might be thought of as an index of change which reveals latent patterning. "More water" is a kind of construct based on the similarity between "a bit of water" and "a lot of water". In reality the difference between "a bit" and "a lot" is revealed through the constraints of nature that it encounters.
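Bateson's island can be sketched computationally. In this hypothetical model (the terrain and sea levels are my invention), the quantity - sea level - varies smoothly, but the pattern - the number of islands - jumps discontinuously at the saddle:

```python
def count_islands(heights, sea_level):
    """Count maximal runs of terrain strictly above the water line."""
    islands, above = 0, False
    for h in heights:
        if h > sea_level and not above:
            islands += 1  # a new island begins where land breaks the surface
        above = h > sea_level
    return islands

# Two mountains (peaks at 100 and 90) joined by a saddle at height 40.
terrain = [0, 60, 100, 60, 40, 55, 90, 55, 0]

for level in [0, 20, 39, 41, 60]:
    print(level, count_islands(terrain, level))
# For any sea level up to the saddle (40) there is one island;
# once the water rises past 40, the single island suddenly becomes two.
```

Raising the level from 0 to 39 changes nothing in the pattern; the rise from 39 to 41 changes everything. The pattern of two islands was latent in the terrain all along - the quantity only reveals it.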

There are some similarities between this thinking and the talk about the "tendencies" and "causal powers" of things in Critical Realism. I've always found the Critical Realist approach confused on this (or found myself easily confused by it). In a few paragraphs, Bateson cuts through the urge to make declarations about the nature of a mind-independent reality without losing sight of scientific integrity. 

Sunday, 17 January 2016

Doing Cybernetics (and talking about it)

Talking about cybernetics is not the same as doing it. Doing cybernetics is a way of speculating about nature whilst actively engaging in a methodical way with nature. Talking about cybernetics is a largely descriptive and analytical exercise. Academia, on the whole, encourages people to talk about things. The kind of speculative activity that cybernetics engages in does not fit academic expectations. There are some areas of academia where doing cybernetics is possible - most notably the performing arts, and also the practice of education itself. However, cybernetic speculations struggle to establish academic legitimacy - particularly when their codification is divorced from the practice that gave rise to their findings. There seems to be an unseemly rush to codify cybernetic speculations, and then to turn them into objects of discursive inquiry, or (worse) blueprints for interventions or analysis. As a result, speculations get reified. The most appalling example of this is the reification of the various speculations about learning contained within constructivism: speculation becomes dogma.

This has led me to think that maybe a meta-description of what 'doing cybernetics' means might help. I think we need a kind of meta-model of cybernetic activity which defends the highly speculative activities of cybernetics within the context of scientific practice recognised by institutions.

I start by thinking about modelling. Modelling is often associated with prediction and control, and there are many second-rate descriptions of cybernetics which commit this mistake. In fact, modelling is fundamentally about coordinating understanding through mapping a set of idealised constraints. A model, or a machine or software which articulates a model, provides an opportunity for cyberneticians to point at abstract mechanisms and reach shared understanding. Moreover, models are generative of many emergent possibilities. In agreeing about fundamental mechanisms and components, cyberneticians can also agree about the logical emergent potential of a model and its fundamental properties. It is through this logical emergent potential that modelling acquires the connotation of prediction. But this is really to get the wrong end of the stick.

If a model presents logical possibilities, reality (or nature) produces events which may be mapped and measured in the model. The most important process in modelling is the 'indexing' of real events with particular behaviours or emergent processes in the model. The indexing of events is not a trivial exercise: it is at the very heart of the cybernetic process. To index something is to determine similarities and differences between events: it is to reduce perceptions to manageable and measurable components. Although this is a reduction, it is not necessarily reductionist. It would only become reductionist if a particular reduction became dogma, washing over emerging differences which did not fit the model. In actual cybernetic practice, reductions are useful precisely because they reveal their deficiencies in the light of actual experiences.

To measure events is to index them, and then to count similar events and their relations. In this process, the key feature is distinguishing between unsurprising events and surprising ones. In measuring surprise among events, the degree of surprise in nature can be compared to the logical model of how such surprise might be generated. Knowledge is gained by recognising how the constraints of nature differ from the constraints contained within the abstract model. Usually this leads to the identification of new indexes in nature (new things to count), or to the critique of indexes of things that were thought to be the same but are in fact distinct. Models consequently have to be adapted to take account of new indexes, and in being adapted they generate new sets of possibilities.
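One standard way to make "surprise" measurable is Shannon's surprisal (-log2 of an event's probability). The sketch below (the event categories and probabilities are invented for illustration) compares the average surprise of observed events under a model with the surprise the model expects of itself - a mismatch between the two is precisely a sign that the model's constraints differ from nature's:

```python
import math
from collections import Counter

def surprisal_bits(p):
    """Shannon surprisal: improbable events carry more surprise."""
    return -math.log2(p)

# A model that indexes events into two categories and assigns probabilities.
model = {"ordinary": 0.9, "rare": 0.1}

# Observed (indexed) events from 'nature'.
observed = ["ordinary"] * 80 + ["rare"] * 20
counts = Counter(observed)
n = len(observed)

# Average surprise of nature's events under the model (cross-entropy),
# against the average surprise the model predicts for itself (entropy).
cross_entropy = sum(counts[e] / n * surprisal_bits(model[e]) for e in counts)
model_entropy = sum(p * surprisal_bits(p) for p in model.values())
print(round(cross_entropy, 3), round(model_entropy, 3))
```

Here nature produces 'rare' events twice as often as the model allows, so the observed average surprise exceeds the model's own entropy - a prompt to revise the indexes or the probabilities attached to them.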

The disparity between constraints discovered in nature and constraints within a model is only part of the story. There are also many constraints which bear upon the minds of those who produce models. Among the most serious constraints which appear to bear on all humans is the constraint which encourages individuals to insist on the veracity of particular models even in the face of evidence that they are deficient. Cybernetics requires more than a logical approach to the examination of models and participation with nature. It also requires a high degree of self-examination, reflection and a letting-go of ego in the continual maintenance of a speculative attitude.

In the philosophy of science, the social identification of causal paradigms or explanations has been the central focus. Doing cybernetics brings a different social focus: the coordination of a discourse through the shared participation in nature, and social agreement about indexes within both nature and in models. Agreeing similarities is to determine constraints. To measure the surprisingness of events against a model is to learn about new constraints.

Perhaps the surprising thing is that I think good teachers do this all the time, and always have done!