Forms of knowledge
[Note: this paper is more than usually half-baked. Epistemology is not my field (although it increasingly interests me), and so it is rather simplistic. I beg forgiveness from people who know more about it than I do: it is written from a pragmatic professional position, rather than a properly rigorous academic one.]
"OK—so you've got a Ph.D. Now, don't touch anything!" (Source unknown)

"It's all these NVQs (National Vocational Qualifications) nowadays—20 years' experience doesn't count for anything!" (One of our campus gardeners, about to be made "redundant" and required to re-apply for his present job, August 2004; he was re-appointed.)
There are many ways of slicing the idea of “knowledge”:

| Theoretical knowledge | Practical knowledge (Aristotle, Oakeshott) |
|---|---|
| Knowing that (Ryle) | Knowing how (Ryle) |
| Explicit knowledge | Tacit knowledge (Polanyi) |
| Comprehension (knowledge about) | Apprehension (knowledge by direct acquaintance) (Kolb) |
There are problems with all these constructs, and most knowledge in the real world is a combination of many forms, but the distinctions are far from sterile.
For present purposes the major problem is that our educational system values the left-hand column much more than the right, for many reasons, including:
- Such valuation being implicit in the establishment of dedicated educational institutions, and the transformation or reduction of knowledge into "that which can be taught". (Becker, 1963; Lave and Wenger, 1991)
- The technology of assessment, in the face of constraints such as validity, reliability and fairness, not coping well with the right-hand column. (Which is strange, given that practical etc. knowledge is more clearly demonstrable through a "product" or a "performance" than is propositional knowledge where the assessment can never be more than a proxy. Pace the gardener above, the NVQ system in the UK does emphasise assessment by direct observation of performance.)
Hence, in a world obsessed with accountability and qualifications, the left-hand column rules. (No, I am not going to make any dubious connections with the left and right hemispheres of the brain.)
The real separation based on this opposition can be seen, for example, in the provisions of the 1944 Education Act (UK), under which those who succeeded in the 11+ examination went to the grammar schools, whilst the rest went to the secondary technical and modern schools for a more “practical” or “vocational” training.
Hirst (1974), however, argued that all knowledge is of the “knowing that” variety, and that the distinction is therefore spurious: “knowing how” knowledge consists of “knowing that”, together with direct experience. Is that the case? If so, is direct experience best thought of as “knowledge” or not?
When I discussed the “knowledge about” versus “knowledge by acquaintance” distinction with a group of doctors, in the context of Kolb’s learning cycle, they related readily to it in terms of their patients’ knowledge of the illness (by acquaintance) as opposed to their own (about).
- Direct experience is cognition, but it may not be knowledge, in the sense that one cannot do anything with it until it is integrated into some kind of mental model.
Tulving (1985) makes a useful distinction between “episodic memory” (the remembered narrative of our lives) and “semantic memory” (our acquired conceptual understanding): direct experience may be known in the sense of having been laid down as part of episodic memory, but it cannot be said to have been “learned from”, until it has been integrated into semantic memory. One could argue that the process of learning is this very transformation and integration. Others have suggested the category of "procedural memory" for "know-how" to bridge the two.
But don't let's get carried away! It is all too easy to mistake command of theoretical knowledge for the whole thing. Lave and Wenger, and Becker, cited above, argue that this is because theoretical (left-hand side of the table) knowledge is what schools are good at teaching:
Let me take a couple of examples from my own reflective journal:
“Today I stood in for S. to do a teaching observation of M., since she had to get it in before the end of term. ... She works part-time at a College of Further Education, and she was teaching a group of nursery nurses ... Their syllabus required them to have studied team-working, particularly in multi-disciplinary teams. ... At a technical level, M. is a good teacher: she tried to draw information and ideas out of the students (all but about two of them about eighteen), but, since they had little experience to draw on, she could not get the “right” answers from them. Some of them were making potentially insightful anecdotal points based on their work placements, but they were slightly off-track. At last, M. put up on the whiteboard the three essential components of good team-working ... I thought at first, that was interesting. Then I put myself in the position of the students, who were dutifully making notes, and thought, they have to remember these points for their exam: there is a lot more to studying in this area than I thought. Perhaps I ought to make a note of these points for my own ...

Then I “woke up”. I started my working life in management and organisational development: I have been involved in team-working for the past twenty years, working in teams myself, and conducting training and consultancy on it. There was nothing “wrong” with the three points on the board, but I had never conceptualised the issue to myself in that way, and I saw no particular advantage in doing so. They did not even represent a particular school of theory, which could be contrasted with other perspectives. They seemed to represent the outcome of the text-book author’s search for three simple headings under which to organise his required thousand words on team-working. But for these students, this was now the definitive knowledge on the subject, to which their experience had to be subordinated ... As M. said afterwards, it was what they were expected to “know” ...”
And five years later, with a similar group of students...
‘I have just been to observe H. She was teaching a class on “key skills” on “communication”. The requirement to be addressed was “conducting a discussion”. After some discussion in the class [note that], and a word-shower exercise (apparently “brainstorm” is no longer PC), she embarked on a “show and tell” exercise. This was interesting in its own right, but it was meant to provide (and, I suppose, did provide) “evidence” that they could “discuss”. They then had to write this up as part of a portfolio for assessment. There was no problem with H’s competence at all, but she was lumbered with a stupid syllabus, and I asked myself—and her—where the learning was in all this. The students had amply demonstrated their ability to “discuss” in the first part of the session, whether or not they had come up with the approved answers, but that somehow did not count. We have reached the situation where self-evident skills have been devalued to the point that they do not count until someone has “assessed” them.’
In both cases—and it is easy to multiply examples—the left-hand column has been elevated to the only kind of knowledge which matters. No wonder the students play "surface learning" games.
But this is not merely a tirade against conventional educational wisdom (see the "heterodoxy" section for that kind of thing). It is a real problem.
- Go back to Riesman for a moment. He predicted the growth of the service economy. The "products" of the service economy are not physical objects (such as cars or wheat) but evanescent processes, such as "education", "health care", "financial services" and even "entertainment".
- How might the apprentice in such an economy produce a "master-piece" which would be available for the scrutiny of all to judge his or her achievement?
If you are in a formal educational setting, and
if you are subject to a rigorous and sceptical regime of inspection and quality assurance (and hence of "objectivity"),
then how do you assess performance with no permanent product?
Answer: you fall back on the readily-assessable: knowledge (left-hand column knowledge).
One of the courses I teach on is at Master's level. All the modules within the course are assessed at that level—except one. A couple of years ago, a professional body conducted an evaluation of the course for its formal endorsement. Keen to raise the status of the profession, they wanted everything at "M" (Master's) level. They wanted to know why this module was at "Level 3" (the final undergraduate level, below Master's).
My answer was that I did not know how to assess "Teaching in Practice" at that level. Master's level implies a degree of expertise, rather than mere competence or proficiency. But this was a course of teacher training: most of the participants were just starting out on teaching. Assessing them on the so-called "scholarship of teaching" was no problem: it was about what they knew about teaching. But if we were to assess them at the same level on their practice, there would be no chance until they had practised for at least five years (and exhibited a "flair"—whatever that is—for it). We could assess them unproblematically on writing about it, but actually doing it? The argument was grudgingly accepted.
But a few years on, the university has surreptitiously sneaked a "practical", allegedly Master's-level module into a new degree. Those responsible claim that nowadays all credit on a Master's degree has to be at post-graduate level. Credit, perhaps, but not the skill which it is supposed to evidence. Presumably one is being awarded credit for being able to write very well and eruditely about why one's actual practice is rubbish.
(I am wilfully ignoring the "competence-based training" issue here, which maintains that it is possible to specify and list these "higher-level" competences at a purely practical level. My argument with that is that its reductionism has no way of allowing for context, and higher-level performance always has to take context into account. The use of terms such as "appropriate" does not get round the problem.)
Where is this taking us?
It is going in the direction of making a practical distinction between academic (left side) and professional (right side) knowledge for educational purposes. I am forced to accept the discourse of "discourse" to carry the argument further. Sorry.
In short, there is a fit between the social technology of education and assessment and the discourse of "knowledge about". That discourse is, in the jargon, "privileged".
There is no such fit with "know-how". "Competence" just about works, in a strangulated fashion, at the "lower" levels, but we have great difficulty beyond the craft skills. In some areas of education, such as art and design or performing arts, or even management, much convoluted and fudged effort is put into seeking general criteria even for assessment. We may recognise exceptional excellence (with a tolerable degree of consensus) when it confronts us, but we have little idea of how to operationalise that state-of-the-art good practice which is not mould-breaking.
The classic cop-out, not invented but legitimised by Donald Schön, is "reflection". That is the cross-over category which is claimed to translate practice into assessable theory. It has been latched onto for its utility, but it does not really work: it presupposes articulacy, or a dual competence in doing something and in talking or writing about it, and the evidence from "expert systems" (whose builders found that experts could rarely articulate the rules behind their own practice) is that the two do not reliably go together. However, it meets the needs of academe, so we insist on "reflective journals" which we (think we) can assess.
It used to be thought that Howard Gardner's "multiple intelligences" might break the mould of conventional assessment, but the notion has proved less useful, and indeed less accurate, than previously thought.
More immediately important than the intractable assessment question, however, is that of how we teach on the right-hand side of the table. Learners learn, clearly, but probably in spite of, rather than because of, their teaching. Craft teachers who respect their own skills and have peer recognition, if not that of the establishment, can help; but they are not equipped to act as advocates within the discourses of academe.
As far as teaching is concerned, read this piece from the New York Times (31 August 09) on experienced teachers' judgements on their training. And then this follow-up on a blog. But what strikes me about both pieces is the lack of actual content in practically all the courses referred to.
The solution? I haven't got one.
It gets worse!
Sorry! The distinction at the top of this page is over-simplified. Different disciplines have different criteria for what constitutes knowledge. The "left-hand"—"right-hand" distinction is far from pure.
- In pure and applied sciences (what are sciences? We'll leave that on one side for the moment) —advanced knowledge presupposes a considerable underpinning of accepted and "given" knowledge. Notionally, "normal" science (Kuhn, 1970) builds on what has gone before, but while it might question its findings on occasion, it does not question its questions. I therefore expect a Master's or research student to exhibit a comprehensive command of previous work in her field, accompanied by a necessary skill in constructing and conducting experimental methods. I do not necessarily expect her to analyse the presuppositions of previous work.
- The same goes for sophisticated disciplines investigating human artefacts, whether physical (archaeology) or social (law, politics). In these areas, on the whole, knowledge accumulates.
- History? Knowledge certainly accumulates. Some historical scholarship and research is about unearthing previously unknown or neglected sources, and assimilating them into our picture of the past. But some is about accommodating that picture to the evidence: Collingwood pointed out years ago (I'm sorry, it is so far in my past that I haven't got the reference) that history—or more accurately, historiography—says as much about the preoccupations and values of the present as about the past. How much critical analysis do we look for at a given academic level?
- Social "sciences"? Sociology and anthropology are so wracked by problems of the interactions between current and previous thought that they are almost paralysed in some sectors. It's not surprising—society is changing fast. Any empirical work over ten years old is suspect, but it is not just that. The assumptions and hence the questions underpinning that empirical work have become questionable in a "post-modern" age: uncritical citation is a sin beyond "A" level.
- Literature? Cultural studies? Substance seems to matter less than the critical stance (or "gaze" to follow Foucault: why did I put that bit in? Discuss.)
- And so to professional studies. It obviously depends on the discipline and its evidence base. However, the prescriptive element of professional studies (and it is the presence of the prescriptive element which makes them "professional") exists in an uneasy relationship with its descriptive (or "evidence") base. Even medicine has a problem with this, and those professional areas which have a less established base are in even more trouble. Social work (about which I know a little more than the others) is in thrall to political correctness. Management is pragmatic but prone to the vagaries of fashion, often preaching doctrines which have little (respectable) research base, but which "seem to work"; or did a couple of decades ago in a different economic climate... What counts as "advanced knowledge" in such a shifting "Red Queen" environment?
I am not a polymath: I cannot pronounce with authority on most of the disciplines above. But I can testify to the problems of comparing the levels of study/ achievement/ scholarship. The Quality Assurance Agency has a well-meaning (the QAA—well-meaning?) project of "subject benchmarking" to establish what constitute expected levels of knowledge and competence at undergraduate level, but this is written from within the subject disciplines.
Perhaps it is futile to attempt comparisons, but it is a real problem. I expect critical evaluation and questioning of assumptions from my Master's students: I get dissertations from science-based candidates, who already hold a doctorate, which are scrupulous but positivistic in their approach, and I feel obliged to say that they are not good enough. By what right do I say that? Simply on the grounds that "education" is a shaky discipline blown hither and thither by the winds of fashion?
The new interest in skill acquisition
Recently, however, there has been more interest in the kinds of knowledge and skill identified on the right-hand side of the table, particularly in the form of craft and performance skills.
Ericsson and others (2006) have emphasised the role of deliberate practice in skill acquisition, and were the first to come up with the now well-known figure of 10,000 hours of practice. (See here for a comment and the main reference).
Richard Sennett has discussed the attributes and the acquisition of craft skill—albeit in rather rarefied contexts and a rambling manner. My take here.
And Matthew Crawford (2010) The Case for Working with Your Hands: or why office work is bad for us and fixing things feels good (London: Penguin/Viking), published in the USA the previous year as Shop Class as Soulcraft: an inquiry into the value of work, does what it says on the tin. Everyone involved in vocational and professional education should read it. Read more on that here. His 2015 successor The World Beyond Your Head: How to Flourish in an Age of Distraction is heavier going, but still repays the effort, and the discussion of organ-building at the end brings it all together.
(Up-dated 11 October 2015)