The Education Forum

Exam factories


Guest


As far as my subject (history) is concerned, I believe that it is important that students develop skills that enable them to function effectively as citizens. This of course includes the skills of analysing the material being produced to shape their views on the world. Though the emphasis should be on skills, I believe that the content you select to study is vitally important. Content should be selected that helps them understand the current situation that they find themselves in. For example, see this thread on the History Forum where Andy Walker and I argued against the teaching of Jack the Ripper.

http://www.schoolhistory.co.uk/forum/index...?showtopic=2125

This view of education makes me highly critical of the current exam system. I think it is vitally important for teachers (and parents) to be aware of the progress being made in the intellectual development of the student. However, we spend too much time in schools measuring things that are not important. In other words, we tend to measure what we can easily measure rather than what is important to measure. For example, as a historian, one of the most important things I try to teach is empathy. This is of course very difficult to measure in a scientifically acceptable way. I have been involved in an experiment where twenty teachers marked an empathy assignment. The range of marks we gave individual students showed that it was clearly a subjective, rather than objective, exercise.

Successful measurement involves government officials, parents, students, etc. trusting the judgement of the teacher. Our government does not trust teachers and therefore devalues those things that only a skilled professional can measure. On the other hand, it is very easy to measure factual recall (it is also much cheaper, especially if it involves ticking boxes). These figures can then be compared with those of other schools in different parts of the country. This is great for governments trying to convince the public that standards in schools are improving, but it has nothing to do with educating people to live in the modern world.



There is no question that we overtest in schools at the moment – and the tests are often meaningless: they don’t help the kids, they don’t help the parents, they don’t help the teachers. As I’ve indicated before, in my subject area (modern foreign languages) we usually test four discrete skills, but it’s a time-consuming process to get a clear picture of how someone is performing across these four skills, and most of the tests that I’ve seen don’t give accurate results. In the case of speaking and listening skills the only reliable test is a face-to-face, one-to-one interview. It’s very labour-intensive, but it gives an accurate picture of the testee’s language proficiency. My HE institution used to interview nearly all prospective language students in the languages that they intended to study. Our impression was often at odds with the exam results that they had achieved, and we found the impression that we gained in the interview was generally a more accurate predictor of future performance than GCSE or A-Level results.

Where there is a large intake of new MFL students into HE from secondary education a placement test is needed to ensure that they end up in groups appropriate to their level of proficiency. There is a quick-and-dirty solution, however: a vocab test. “Eh?” I hear you gasp. It depends on the type of vocab test, however, and under what circumstances it is administered. Paul Meara (University of Wales Swansea) devised a very effective vocab test for the Dialang website to ensure that the testee is not thrown into a subsequent series of tests in three discrete skills that are too hard or too easy. The testee is confronted with a large list of words, some of which are genuine and some of which are made up. The test is to identify the real words as opposed to the fake words. It sounds like a crazy idea but it works! It's the quickest way that I have seen of acquiring a rough (and fairly accurate) estimate of a language learner's level of proficiency. Try it yourself: http://www.dialang.org
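For readers who want a feel for how such a yes/no vocabulary test can be scored, here is a minimal sketch in Python. The word lists, the guessing correction and the band thresholds are all invented for illustration; this is not Dialang's actual data or algorithm.

```python
# Minimal sketch of a yes/no vocabulary test of the kind described above.
# The word lists, scoring formula and band thresholds are illustrative
# assumptions, not Dialang's actual data or algorithm.

REAL_WORDS = {"house", "quickly", "bargain", "threshold", "quaint"}
PSEUDO_WORDS = {"plarm", "wuggle", "brastion", "clomfort", "snage"}

def score_yes_no_test(answers):
    """answers maps each presented word to True ('this is a real word')
    or False. Returns a guessing-corrected score between -1 and 1:
    hit rate on real words minus false-alarm rate on pseudowords."""
    hits = sum(answers[w] for w in REAL_WORDS)
    false_alarms = sum(answers[w] for w in PSEUDO_WORDS)
    hit_rate = hits / len(REAL_WORDS)
    false_alarm_rate = false_alarms / len(PSEUDO_WORDS)
    return hit_rate - false_alarm_rate

def placement_band(score):
    """Map the corrected score onto a coarse placement band
    (thresholds are made up for illustration)."""
    if score >= 0.8:
        return "advanced"
    if score >= 0.4:
        return "intermediate"
    return "beginner"

# A testee who says "yes" to everything scores 1 - 1 = 0: accepting the
# pseudowords cancels out the apparent knowledge of the real words.
yes_to_everything = {w: True for w in REAL_WORDS | PSEUDO_WORDS}
print(score_yes_no_test(yes_to_everything))                  # 0.0
print(placement_band(score_yes_no_test(yes_to_everything)))  # beginner
```

The fake words are the crucial design feature: without them, indiscriminate guessing would be indistinguishable from genuine vocabulary knowledge.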

As Prof Wilfried Decoo (Brigham Young University) put it in a recent email to the EUROCALL discussion list:

"Our experience, backed by experimental research, is that the lexical component [of a language] is the most representative in determining levels, and also the easiest and most comprehensible to define and to implement. It is therefore no surprise that the Dialang testing uses the lexical criterium as its first point of entry to determine a level."

Theoretically, one could obtain a high score in a vocab test simply by sitting down and learning large lists of vocab without paying attention to grammar or the four discrete skills: i.e. raw knowledge and no skills. In practice, however, most language learners don't do this. They acquire vocab in context and over a longish period. The more they are exposed to the language, the more active vocab they acquire, along with the grammar and proficiency in the four skills. This is why the vocab test can be a fairly accurate predictor. However, if you know in advance that you will be assessed on the basis of a vocab test, then you can cram vocab specifically for the test. The test is then invalid as an assessment of your language skills overall. Here, therefore, is a case where "teaching to the test" does not make any sense. A more sophisticated test such as WebCAPE is in this case a better placement test: http://webcape.byu.edu/Docs/cover.html

Testing is a can of worms – which I found out by working alongside an international team of experts in testing when I was called in by Dialang as a CALL consultant. See the ICT4LT module on Computer Aided Assessment and language learning at: http://www.ict4lt.org/en/en_mod4-1.htm


John writes:

Our government does not trust teachers and therefore devalues those things that only a skilled professional can measure. On the other hand, it is very easy to measure factual recall (it is also much cheaper, especially if it involves ticking boxes). These figures can then be compared with those of other schools in different parts of the country. This is great for governments trying to convince the public that standards in schools are improving, but it has nothing to do with educating people to live in the modern world.

John has got to the nub of the matter here. If education were only about getting kids to jump through pre-defined hoops, then it could easily be measured and "managed". It is, however, a much nobler project than that, charged as it is with developing the minds of young people to think about, understand and challenge the world in which they live.

The reductionism which equates success with quantitative test data is understandable in bureaucrats but unforgivable in educators.

The exam system as it stands does not encourage a love of learning amongst its victims (the students). Rather, it encourages them to see learning as "work". Like work, it is "only worth doing" if there is an external reward: wages, or in the case of exams, the certificate. We are in danger of turning a generation off learning for good. A cynic might suggest that this is exactly the aim of the current system: conditioning young people to meaningless toil for an external reward.


I've got to tell you about the 'American Language Test' we used to use in Kuwait with the Kuwaiti Army. It was administered in the language lab by a military attaché from the US Embassy and was about as ludicrous as you can imagine. If the students passed, they got put almost immediately on a plane to Austin, Texas, where they could go to bars and meet girls.

The questions were largely multiple-choice, of the form: The captain had to postpone the meeting. Postpone: a) put off; b) call off; c) call in; d) call on.

The students spent their preparation time going through unofficial lists, "Call off, same same cancel, put off, same same postpone".

Some of the 'social skills' pages were totally surreal (and we were required to 'teach' every single page in each of the manuals). The most surreal was the 'dialog with a US serviceman' (and bear in mind that this course was written for servicemen from Arab countries and Latin America):

Visiting soldier: Say look at that girl in the bar over there. She's mighty pretty.

American: Yes, that's my sister, she's popular with the boys.

Visiting soldier: I'd sure like to go out with her.

American: Why don't you give it a try? Go and buy her a drink.

Visiting soldier: But I don't have any money.

American: That's OK, I'll lend you $10.

Needless to say, we all skated through that page in double-quick time, and didn't take any questions!

After someone had passed, it'd often happen that he'd come round a corner, and a teacher would say "Hi, how's it going?". The poor soul would then go off and find another teacher and ask, "Sir, what's 'how's it going?'?" We'd then say, "same same how are you", and the student would rush back and say, "Fine, thanks".

The Americans said this test was a sure diagnoser of the Kuwaitis' ability to be trained in English by US instructors …

Another of my colleagues had once had a job in the Army teaching Puerto Ricans to speak English. If they passed the tests, they got drafted. If they failed, they could go home again …

Yes, by all means let's use mechanical tests devised largely by non-teachers - they're much more entertaining than the real thing!

Edited by David Richardson

I did a bit of consultancy work for the British Army some years ago, familiarising language instructors with the basics of computer assisted language learning. I got the impression that they set quite high standards in the British Army – but their materials were not nearly as entertaining as those that David describes. Thanks for brightening up this miserable, rainy weekend!


What upsets me about this time of year is that the external and internal pressures of our bizarre testing regime, and of the regimes for school and teacher management, inevitably lead me away from the needs of the students I teach (who may or may not be ready for the nonsense that QCA or the particular examining board has deemed, in its undeniable insanity, a proper test of their achievement), and towards becoming more and more focussed on the exams, the results, and the league tables.

I am sure that this does lasting damage to many students and certainly does serious damage to my motivation and commitment.

If I were a member of a confident profession, surely this would not be the case?


I sometimes play with the idea of returning to teach in the UK. There are two factors which always bring me back down to earth: the high cost of living, and the attitude to teaching and learning that Andy Walker describes.

The last teaching post I had in the UK school sector was in Dartford in 1980, and even then I was constantly troubled by what I saw as the anti-educational attitude of the local authority and the government - and that was in the 'good old days' before the National Curriculum and SATs.

It's a dilemma - someone has to teach the children … but how can you teach and encourage learning in a system which seems designed to work against you at every step?


Guest Adrian Dingle

David

Just to give the other perspective, one of the strongest draws that I feel to come BACK to the UK to teach, is the structure, organization and objectivity that the examination system gives. Long live testing, long live exams!


Guest Adrian Dingle

I get the feeling that some of our differences in this debate come from our subject areas. Allow me to elaborate.

If you are learning chemistry at the high school level there are certain "knowledge gates" that you HAVE to pass through before you can move on to more complex topics. For example, at the high school level you HAVE to "know" that the atom is considered to be a dense nucleus containing positively charged protons and neutral neutrons, with negatively charged electrons existing outside the nucleus in well-defined regions of space. That's it (as they say in America) - PERIOD! There is no room for debate about this; it is not a matter for exploration, discussion or experiential learning. If you don't get this, accept it as fact and move on, you'll never be able to achieve anything in the subject. These facts are easy to test as well. So, given that they are fundamental to the subject and easy to test, let's test them. The reason? Because testing that knowledge is fundamentally worthwhile.

Chemistry ain't English or History!


While it's true to say that there are differences in the way different subjects are tested, it's wrong to imply that science subjects, e.g. chemistry, have a monopoly on "knowledge gates".

You won't get far in German (my subject), i.e. communicating and understanding with a reasonable level of confidence, until you have committed to memory a solid body of knowledge - around 2000-3000 words of vocab for starters. You also need to know how to use the words in the correct context and to be aware of fundamental differences between English and German, e.g. that German distinguishes between a male cat (Kater) and female cat (Katze), as well as knowing that Kater is colloquial for hangover.

As for grammar, while I would not expect a student of German to be able to articulate a (fairly rigid) rule of syntax (for example, that a subordinating conjunction at the beginning of a clause sends the finite verb to the end of the clause), I would expect the student to be able to apply the rule instinctively and correctly in spoken and written German. There's no room for debate about this either: you get it right or you get it wrong. If you put the verb in the wrong position you may be understood, but you are then classed as a foreigner who has not properly mastered the language.

There are, in fact, numerous knowledge gates through which one has to pass while learning German. Most people - students at school and adult learners alike - only pass through the first set of knowledge gates and then they give up. A Higher GCSE covers just the basics. It's a useful starting point for serious study of the language.


I have no problem with exams and tests as such - provided that they do the job they are supposed to do.

Sometimes they're just incredibly badly designed, like the American Language Course tests. Sometimes the examiners have hit upon some feature of their subject which is testable … without thinking of what the learner needs to be able to show about her knowledge.

Here's an example from my subject:

What's the difference between these two sentences?

a) She had a good time at the party.

b) She had a green dress at the party.

… well, this is a distinction between two types of transitive verb in English which is of great interest to the academic linguist. Someone learning English, on the other hand, is very unlikely ever to even notice the distinction, let alone make a mistake with it.

My impression of a good many tests I've seen in all sorts of countries is that they test things "because they're there". It's that kind of test which I think there's far too much of.

Edited by David Richardson

David writes:

I have no problem with exams and tests as such - provided that they do the job they are supposed to do.

I have no problem either. I have the reputation of being quite a tough examiner. I agree with David that some language tests just do not do a proper job. This is true of the current GCSE exams in England. The examiners have also allowed standards to fall, as those of us who have taught in higher education have been observing for years. This was clear to us during the interviews (in German, French etc) that we conducted with new applicants for degree courses.


I also often see what I would describe as a failure to achieve a balance between reliability and validity in tests. I apologise for telling you things you already know, if that's what I'm doing, but 'reliability' means that the same answer to a question from two different candidates will receive the same mark, whether from the same examiner or from two different examiners. 'Validity' is the degree to which the test corresponds to the real world.

Here's a typical 'objective' test question from English language tests in Sweden:

Some or any? Which of these words fits in the gap in this sentence?

Would you like ***** tea?

The answer which would be marked (reliably) correct is 'any', since the course has taught the 'rule' that you use some with statements and any with questions or negatives. There's a question mark after 'tea', so the correct answer must be 'any'.

In the real world, however, the correct answer is most likely to be 'some', since this utterance is used to invite people to drink tea with you, so you have to have some tea to offer them (because the real rule is some when you think there is some; any when you don't know or think there isn't).

'Any' is also possible, of course, but it tends to be used among people who already have some connection with each other.

This area comes up in English tests in Sweden, since it highlights an area where there's a difference from Swedish.

I've read academic arguments that say that there's an *inverse* proportion between reliability and validity in language testing. In other words, the more 'objective' the test, the less connection it has to the real world. However, tests with a high degree of validity a) generally cost more to administer; b) rely heavily on the judgements of professional teachers; and c) deliver results which are usually extremely difficult to put into league tables to make simplistic comparisons.

This doesn't make them 'unscientific', though. I'm very impressed with the International English Language Testing System (IELTS), for example, through which one person can be tested in Beijing and have her command of English accurately measured against that of another person tested in Stockholm.

This may be 'too expensive' to do in schools … but perhaps we need to ask ourselves what the real cost of the inadequate testing systems that most schools use actually is.


Re: David's latest comments:

"Would you like any tea?" is more likely to be used by a waiter in a restaurant. Cf. "Any tea or coffee, sir?"

"Would you like some tea?" is likely to be used if someone is inviting someone to have tea (drink or food) in their home. Cf. "Would you like some more tea?"

The students have therefore been taught a rule which does not work. This is where a KWIC concordancer and a decent corpus would prove that "some" is more common than "any" in this context. Type both sentences into Google and see what you get! (I often use Google as a concordancer.)
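For anyone curious about what a KWIC (Key Word In Context) concordancer actually does, here is a minimal sketch in Python. The three-sentence "corpus" is invented for illustration; a real investigation of "some" vs "any" would run over a large reference corpus such as the British National Corpus.

```python
# A minimal KWIC concordancer: show every occurrence of a keyword with a
# fixed window of context on either side, keyword column-aligned.
# The tiny corpus below is invented for illustration only.

def kwic(corpus, keyword, width=30):
    """Return one aligned concordance line per occurrence of keyword.
    Note: naive substring matching; a real tool would respect word
    boundaries (so 'any' would not match inside 'many')."""
    text = " ".join(corpus.split())  # normalise whitespace
    lower, key = text.lower(), keyword.lower()
    lines, start = [], 0
    while True:
        i = lower.find(key, start)
        if i == -1:
            break
        left = text[max(0, i - width):i]
        right = text[i + len(key):i + len(key) + width]
        lines.append(f"{left:>{width}} [{keyword}] {right}")
        start = i + len(key)
    return lines

corpus = ("Would you like some tea? "
          "Would you like any tea or coffee, sir? "
          "I don't have any money.")

for line in kwic(corpus, "any"):
    print(line)
```

Lining the keyword up in a single column like this is precisely what makes usage patterns (statement vs question, invitation vs negation) jump out at the reader in a way that a plain search listing does not.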

I remember Tim Johns (of Data-Driven Learning fame) often used to highlight the absurdities of grammar and usage reference works, focusing particularly on the usage of "some" and "any".

I'm out of email contact for the next two weeks, as from 1pm today.

