The Northwestern this morning ran this article on the front page, about the report about failure rates among minorities in gateway courses. The A-T ran a story on it a few weeks ago.
The data presented to the board of regents showed a rate of D, F, or W grades among minority students that was significantly higher than among other segments of our student population.
First, does anyone know what the actual data looks like? It is hard to judge this report without more context. Does anyone know whether this pattern is similar at other institutions?
This data and its reporting raise a lot of questions for us. Is there bias in the way we teach our gateway courses? Why are the relatively few minority students on campus doing poorly in these classes?
There are many other questions that need to be asked, and I hope we can begin thinking about the implications of this data.
4 comments:
My experience has been that whenever *any* student, irrespective of ethnicity, does poorly, a common root cause is inadequate preparation.
As such, I do not think that simply tallying up our students' ethnicity will give us enough info to solve the problem. It would be much better to look at *all* of our students in light of their high school and personal backgrounds. What school districts did they attend? How did they do? What was the home situation like?
It is quite possible that there are factors other than ethnicity that would better predict who will perform suboptimally. It has long been recognized that being the first in a family to attend college puts a student at risk, regardless of ethnicity. It could be that *anyone*, irrespective of ethnicity, who graduated with less than a 3.5 GPA from a big urban high school is at risk. It may be that *anyone* from a single-parent home with a median income under $20,000 is at risk.
Such things may have much more to do with preparation for college and ability to succeed there than where your ancestors lived.
Big thumbs up to Lammers on that one.
To expand upon Tom's point, one resource that is available but almost never used is a report from ACT that tracks students from high schools and measures their success in college. It gives universities (and high schools) a summary of how students from one high school with a particular rank and a particular ACT score did once they got to college. What you see locally is that Winneconne does a great job, and kids from North Fond du Lac have trouble. Presumably it could be used to give colleges a heads-up about which students are likely to have trouble when they get to college.
That article was so telling:
"We have to find out from the data exactly what's going on," Manzi said.
From the data? You could ask the people who quit or failed directly what their problem was. It has to be an open-ended effort, not a questionnaire limited to the responses the institution wants to see. For example, whether they had to work excessive hours to pay their way compared to other students.
I'd love to see blind grading. In the current system there is ample opportunity for TAs and professors to exercise bias in grading. Why not eliminate that possibility? The article automatically assumed there was no bias, with absolutely no proof that bias was not at least partially responsible. The entire article assumed a priori that the problem was the students not understanding the material. Given such a bias at the outset, I doubt they will find anything other than what they wish to find.
And lastly, why make gateway classes into gatekeeper classes? Why not make intro classes the best in the field instead? Hook people on the major before you inflict those hellacious weed-out requirements.