
Many have played a role in FSA tests losing their way

If you have a child in grades 4 or 7, welcome to the FSA club. The FSA, or Foundation Skills Assessment, has been around for many years, with the most recent update being in 2008.

According to the Ministry of Education website, the purpose of the exams is to "evaluate how foundation skills are being addressed and make plans for improvement."

According to the Delta Teachers' Association, the tests are "invalid and unreliable and provide no new information for either teachers or parents in coming to a better understanding of their child's progress at school."

Oddly, both arguments are true. And both are flawed.

The government - and we as taxpayers - need to understand how our education system is working and where improvements can be made. We need some mechanism for measuring performance before we can make changes that will have any positive effect. Measurement, accountability and improvement - sounds good.

We live in an increasingly competitive world. Our children need to be well prepared when they finish school, and that means setting a strong learning foundation.

The DTA encourages parents to keep their kids home the day of the test, citing the fact that it doesn't count toward grades anyway. Doing this undermines the validity of the very results that are then used to claim the process is ineffective.

I've spoken with teachers who are concerned about being evaluated based on a student's result from a single test. Maybe the child gets nervous writing tests; maybe they are just having a bad day.

Concentrating on individual student or even classroom results is not an accurate reflection of a teacher's ability to teach.

The problem here is focusing on the wrong outcome. Individual results are not the purpose of this test. The purpose is to evaluate the system, not the individual or small group. The information only becomes statistically reliable when looking at a large number of tests.

From the ministry standpoint, the problem lies in how the results are reported. The ministry website shows results as the percentage of students meeting or exceeding expectations, and that percentage is based on all children, regardless of whether they wrote the test. So if 60 per cent meet the criteria, it could be that 25 per cent didn't meet them and 15 per cent didn't write.

Because they report based on all children instead of the number who actually write, there is no valid means to compare the information year to year when participation varies. This is where the DTA's strategy to keep kids home works - if a child doesn't write, their score of zero still counts!
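The arithmetic behind this reporting gap can be sketched with the hypothetical numbers from the example above (60 per cent meeting expectations, 15 per cent not writing); the figures are illustrative, not actual FSA data:

```python
# Hypothetical cohort illustrating how the choice of denominator
# changes the reported pass rate.
enrolled = 100          # all eligible students
did_not_write = 15      # students kept home on test day
met_expectations = 60   # students who wrote and met/exceeded expectations

wrote = enrolled - did_not_write  # 85 students actually wrote

# Ministry-style rate: denominator is every enrolled child,
# so each non-writer counts the same as a score of zero.
rate_all_children = met_expectations / enrolled

# Participation-adjusted rate: denominator is only those who wrote.
rate_writers_only = met_expectations / wrote

print(f"Rate over all children: {rate_all_children:.1%}")   # 60.0%
print(f"Rate over writers only: {rate_writers_only:.1%}")   # 70.6%
```

The same 60 students look like a 60 per cent result one way and roughly 71 per cent the other, which is why year-to-year comparisons break down whenever participation shifts.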

To make matters worse, the Fraser Institute jumps on the data and ranks schools. Teachers, seeing the variability of results class by class, don't think this is fair.

They are right, it's not fair - it's a micro view that is statistically unreliable.

The result pits schools against each other, both public and private, and doesn't provide an accurate view of our children's learning outcomes. Yet parents will walk into the principal's office with the rankings from the newspaper and use them as the basis for choosing a school.

Many hands have broken this system. Let's hope together they can find a way to fix it. Our kids' future depends on it.