In the light of this, the Independent Examinations Board (IEB) commissioned a benchmarking study undertaken by the United Kingdom’s National Agency (NARIC), which provides comparison information on international education and qualifications.
The report, published in May 2010, found that ‘the features of the NSC indicate a qualification with an underlying level that is both robust and fit for the purposes of examination at senior secondary school level’ and that ‘[i]n terms of the qualification’s comparability, the report concludes that the National Senior Certificate at Grade 12 is broadly comparable to the GCE AS level’.
This is certainly good news and has been greeted with a generally positive response. Umalusi (the statutory body which sets and monitors standards for the GET and FET in South Africa) stated that it is to be welcomed that this report, along with one of their own, reflects ‘positively on the new South African matric qualification’. Professor Crain Soudien, Deputy Vice Chancellor of UCT and Chairman of the IEB, said that he was ‘pleased with the outcome of the evaluation … as it re-assures South Africans that the new National Senior Certificate and its underpinning curriculum are recognized to be of an international standard by a credible organization’.
Not all, however, are convinced. In the first place, it has been pointed out that the report does not necessarily show, as the media release accompanying its publication claimed, that ‘SA’s National Senior Certificate compares favourably with international standards’: it shows only that it compares with the UK’s GCE AS level.
Secondly, as one critic pointed out, the report is ‘misleading’ because it implies that the NSC is a good preparation for higher education, whereas in fact the National Benchmarking Test, used by many of South Africa’s universities and universities of technology to assess prospective students, shows that there is no correlation between good marks achieved by NSC candidates and the success of those students at university.
Thirdly, there was a great deal of scepticism about what happened during the marking process in 2008. One can have a paper of good standard, but if the marking is lenient or ‘cooked’, it can skew the results. At the time, I tried to get the then Department of Education (DoE) to respond to some of the queries, but without success. So I published an article entitled ‘More Questions than Answers’. Here are some of the unanswered questions:
- The DA claimed that they had been approached by two teachers, one of whom had informed them that colleagues who were marking Maths and Science papers had been told to increase the marks of weaker Matric candidates – for example, boosting a mark of 20% to 30%, which is a pass. Is this true?
- Besides the allegations of boosting marks attained, questions were also raised about marking memos which seemed to favour the weaker candidates. For example, it was reported that a candidate who answered a question in History could not, in terms of the marking grid, get 0 – even if he/she had written nonsense. This is not how the official grid works, so how did this happen? Where was the decision made – at national, provincial, or marking centre level? If not national or provincial, are these authorities aware of the decision?
- We were similarly informed by markers in one province that, in English Home Language, a candidate could not get below 6 out of 25 (i.e. 24%) for a setwork essay. This would mean that a candidate, already guaranteed 6 marks’ worth (24%), actually had to produce only a further 16% of real content to achieve a pass of 40%. We were also informed that, when marking the letter in the written work paper, markers were instructed to ignore the format and mark on content alone. While content is the key aspect, the format is also important, and this is not what the memo says. Is this true?
Many people are also concerned about the adjustments made at Umalusi level. Having experienced the process, I can say that this concern is misplaced, for the following reasons:
- Umalusi is an independent body – it does not answer to the education authorities.
- It operates within parameters (e.g. the range of adjustment that may be made).
- Adjustments are made according to previous norms (admittedly not directly available for the first round of the new qualification, but norms from the previous system could serve as guidelines).
- Adjustments have been made in all the years – not just now – for the simple reason that one cannot guarantee that the papers are of exactly the same standard from year to year.
- Downward adjustments are also made.
In the light of the above, one can feel confident that the papers are at least on a par with the British GCE AS level. But is this as positive as it seems?
Besides the unanswered queries about the marking, it also needs to be noted that there are critics in the UK who maintain that the British papers have been ‘dumbed down’ – in which case, while we can take comfort that our standards are not rock bottom, we can hardly be proud that our papers compare favourably with theirs.
It is clear that much more research needs to be done before we can boast of the high standard of our examinations and curriculum.