Data Informed, Not Data Driven

    September 2010 

    “Data Informed, Not Data Driven.”  These words come from American writer Diane Ravitch in her book, “The Death and Life of the Great American School System.”  They warrant some serious consideration.

    As I write this, the Australian Curriculum, Assessment and Reporting Authority (ACARA) is about to release the 2010 NAPLAN results.  In recent weeks these tests have come under severe public scrutiny, with allegations of cheating and suggestions that the tests themselves contain errors.

    The two allegations, whether right or wrong, point to some fundamental problems with the way these tests are being used.  NAPLAN tests are used to compare schools directly (via the MySchool website), and these comparisons are intended to assist parents in choosing a school for their children and to determine funding priorities.  Both are deeply flawed purposes.

    I am certainly not arguing that we should abandon tests such as NAPLAN, but I do seriously question the use to which governments and the wider society are putting them.

    Tests are a critical part of education.  They should (and do) provide firm data on which professional teachers can base their work.  They can (and do) tell schools how they are going, where they need to improve, and what improvements can be noted over time.  They can be (and are) used to improve teaching at school and classroom level.  But when they are used to compare schools, by governments and by parents, their usefulness is seriously compromised, even damaged.  If they are used to compare individual teachers (heaven help us if that occurs!), their usefulness is further harmed.

    Let us consider the two allegations made about the tests: that there were errors in the tests themselves and that cheating had occurred.

    Firstly, does it really matter if there are minor mistakes in the test papers?  Are not all students exposed to identical errors?  On one level, it may not matter, but on a more fundamental level, these errors point clearly to the fact that the tests are an imperfect means of measuring student performance.  They are developed by imperfect human beings, in limited cultural contexts; they are completed by imperfect students in quite different cultural contexts; the results are interpreted by imperfect human beings in different cultural contexts again.  Margin for error piles upon margin for error.  It is not like measuring the length of a piece of string or counting the number of widgets a factory produces in a given time.  The education process is about persons, not products, and persons are notoriously difficult to “measure.”

    Education is also about far more than literacy and numeracy.  It is about developing full, grounded human persons who can live happy, fulfilling lives and make a contribution to a better society.

    In many ways we are merely copying the British system.  We might well see NAPLAN test scores go up in coming years, but the British evidence is that this happens because teachers, whose performance is being compared, are forced to teach to the tests, while the broader aspects of real education are left to languish.

    We need the tests, certainly, but for the use of schools themselves, so that schools can see how they are going in these limited areas.  The tests have great usefulness, but when we use them for purposes of comparison we will see the results improve while running the very real risk of seeing the quality of education, in its truest sense, fall.

    It is interesting to note, for example, that while test results in the UK have improved, quality of life on so many other scales has deteriorated.  Violence, substance abuse, teenage pregnancy, abortion and the like are all on the increase in the UK.  It might also be noted that the UK economy is not thriving.

    The other allegation, that cheating was occurring, points to a different, though related, set of issues.  Of course, if funding depends on test scores, or if school reputation and prestige are determined by test scores, it is almost to be expected that schools and teachers will be tempted.

    That “cheating” can be of the blatant sort that sees teachers actively helping students during the tests themselves (and there is no evidence of that occurring in our schools), but it can also be of the sort that encourages certain students to stay at home on the day of the tests.  It may even be of the sort that sees a school refuse enrolment to students who might pull scores down!  That would be in direct conflict with our Catholic school mission, and it is never an action taken by our schools.  However, Principals remain justifiably concerned that other schools, with whom the MySchool website forces them into heightened competition, may not share their view of such matters.

    Tests such as NAPLAN are important.  They have their place, and schools embrace that.  But as a means of comparing schools in a way that determines funding or reputation, they are fundamentally flawed and are likely to lead to a reduction in the overall quality of education in the country, even if the test scores themselves seem to show improvement.

    That has been shown so clearly in the countries Australia seems to be following.  Why can our decision makers not learn from the mistakes of others?  Could it be just for a quick political fix?

    Our education processes need to be data informed – not data driven.