Ofsted is the Office for Standards in Education, Children’s Services and Skills. This function used to be carried out by Schools Inspectors. There is a particular type of obtuse thinking in bureaucracy which requires that everything be spelt out, lest a simple word not cover all eventualities. You can just imagine the Ofsted meeting at which someone said “Why not just call ourselves School Inspectors?” and someone immediately replied “What about outbuildings?”. So, it is only good luck that it is not called “Office of and for standards in education; children’s services of many sorts, particularly those not covered by education; outbuildings AND playgrounds; activities not otherwise specified; rounded personalities; holistic stuff; and skills; and key stage levels in the school situation.”
Anyway, they have worked out that bright children are not being well taught in non-selective state schools. http://www.ofsted.gov.uk/resources/most-able-students-are-they-doing-well-they-should-our-non-selective-secondary-schools
The code word for bright children is “more able”. Many schools do not know which students are more able, and which students are less able. How can any teacher not know that? Do schools even know which of their teachers are “less able”?
The report makes no mention of intelligence or intelligence testing, so presumably the identification of ability will be done by remote sensing. I get the feeling that these sorts of issues are not considered fit subjects in educational circles, and probably do not get much attention in teacher training. Contempt for intellect seems to be the order of the day. (France has intellectuals. England has “able” people desperate to show that they are not “too clever by half”. The results favour England.)
To boost plausibility, the report gives some numbers and percentages. Here is one useful snippet (from point 15 on page 16): “The proportions going on to higher education varied for different school sectors: 69% from non-selective state schools; 86.4% from selective state schools; and 75.5% from independent schools.” This suggests to me that the problem is not state schools but the difference between selective and non-selective state schools. No comment is provided, but non-selection has a case to answer, on that one factoid at least.
However, higher education now covers a broad range of activities, only some of them educational, so the report concentrates on the 30 best (and most selective) universities, but then changes the basis of the comparison metrics just as things are getting interesting.
16. The report found that independent school students were more than twice as likely as students in comprehensive schools or academies to be accepted into one of the 30 most highly selective universities: 48.2% of independent school students in England were accepted by these universities compared with 18% of students in non-selective state schools. One hundred schools, comprising 87 independent schools and 13 grammar schools (just 3% of schools with sixth forms and sixth form colleges in the United Kingdom), accounted for over a tenth (11.2%) of admissions to highly selective universities during the three-year period.
17. The study also indicated that the difference in the admission rates to highly selective universities could not be attributed solely to the schools’ average A-level or equivalent results. Fifty-eight per cent of higher education applicants from the 30 best comprehensive schools (with average scores for students exceeding three A grades at A level) were accepted into the 30 most highly selective universities. This compared with 87.1% of applicants from the 30 best independent schools and 74.1% from the 30 best grammar schools.
Points 16 and 17 are interesting, and also muddy the waters. Point 16 gives no figures for the crucial group of selective state schools which we already know outshine the independent schools. Where have they got to? The rest of the point suggests disproportionate entry rates among the top 100 selective schools. This is likely to be due to them selecting even more stringently for intelligence. (Top schools can find plenty of bright and wealthy parents to pay their fees, so can afford to be choosy about which pupils they accept, turning down those offspring who have regressed to the mean).
Point 17 offers an interpretation of A-level results which is illuminating. It seems that expectations in non-selective school systems are so low that three As at A level are seen as the apotheosis of scholarship. Most of the best universities see three As as the base entry level and would be looking for five As, including Maths and preferably Further Maths. The correct metric would be to look at the actual points achieved, but for some mysterious reason these are never given. The US gives grade point averages, and almost every country gives scores for every exam. Britain gives grades, which obscure the marks achieved. In the face of such institutional foolishness, can it be surprising that individual foolishness flourishes? Anyway, having degraded the scores into grades, the grades are granted university entrance points (140 for A*, 120 for A, 100 for B and so on) so as to reconstitute a grade point total, but crudely, in the British fashion. Even this feeble measure would allow us to test the propositions in Point 17. Absent detailed scholastic results, conspiracy theories can flourish.
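The grade-to-points reconstitution described above can be sketched in a few lines. The values for A* (140), A (120) and B (100) come from the text; extending the “and so on” down in steps of 20 for C, D and E is my assumption, as is the helper name.

```python
# Crude grade-point total from A-level grades, as described in the text:
# 140 for A*, 120 for A, 100 for B, and (assumed) 80, 60, 40 for C, D, E.
TARIFF = {"A*": 140, "A": 120, "B": 100, "C": 80, "D": 60, "E": 40}

def tariff_total(grades):
    """Sum the entrance points for a list of A-level grades."""
    return sum(TARIFF[g] for g in grades)

print(tariff_total(["A", "A", "A"]))               # three As -> 360
print(tariff_total(["A*", "A*", "A", "A", "A"]))   # five top grades -> 640
```

Even this blunt total would separate a bare three-As candidate (360 points) from a five-As candidate (640 points), which is exactly the comparison Point 17 cannot make from grades alone.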
As even Ofsted admits, compared with other countries the United Kingdom is not covering itself in scholastic glory. On PISA results in Maths, Shanghai gets 600 and the UK 492; in Science, Shanghai gets 575 and the UK 514. Ideally, immigrants should be stripped out of the results, and I will return to PISA results in a later post.
I don’t do policy, but it would be great if educational policy makers developed an interest in human intelligence.