Today, higher education IT still has a long way to go. According to Susan Grajek, writing in EDUCAUSE Review, many researchers rely on convenience samples, collecting data from whichever institutions are willing to participate in their studies. This can skew the results so that they are neither consistent across studies nor representative of the larger population the researchers are trying to describe. For example, doctoral institutions are more willing to participate in IT studies than community colleges are, so the results more closely reflect the former's experiences. Furthermore, participation in these studies has been shrinking over time. Researchers have also been working in isolation from one another (which leads to redundant research), testing only one or two variables at a time, and emphasizing the statistical significance of their findings over their practical significance.
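To see how a convenience sample can mislead, here is a minimal sketch; every number in it (the institution counts, volunteering rates, and satisfaction scores) is invented purely for illustration. When doctoral institutions are far more likely to volunteer, the sample mean drifts away from the true population mean:

```python
import random

random.seed(0)

# Hypothetical population: community colleges outnumber doctoral
# institutions but report lower IT satisfaction on average.
# (All figures here are invented for illustration.)
population = (
    [{"type": "doctoral", "satisfaction": random.gauss(7.5, 1.0)} for _ in range(300)]
    + [{"type": "community", "satisfaction": random.gauss(5.5, 1.0)} for _ in range(700)]
)

def mean_satisfaction(sample):
    return sum(s["satisfaction"] for s in sample) / len(sample)

# A random sample tracks the true population mean fairly well...
representative = random.sample(population, 100)

# ...while a convenience sample in which doctoral institutions are far
# more likely to volunteer skews the estimate upward.
convenience = [s for s in population
               if random.random() < (0.30 if s["type"] == "doctoral" else 0.05)]

print(f"population mean:    {mean_satisfaction(population):.2f}")
print(f"random sample mean: {mean_satisfaction(representative):.2f}")
print(f"convenience mean:   {mean_satisfaction(convenience):.2f}")
```

The convenience estimate lands well above the population mean, not because anything changed in the population, but because of who chose to respond.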
A huge amount of data is discarded once a study ends, apart from a few studies that are repeated year after year to compare "trends." Yet as the sample keeps shrinking and its composition keeps changing, these "trends" may not exist at all. IT researchers have no large, established data stores, nor any standardized way to measure concepts like availability, disaster recovery, or user satisfaction.
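Here is a minimal sketch of how such a phantom trend can arise (the scores and respondent counts are invented): each type of institution reports exactly the same score every year, but community colleges drop out of the sample faster, so the overall average appears to rise even though nothing actually changed.

```python
# Each institution type's score is held constant across years.
SCORES = {"doctoral": 7.5, "community": 5.5}

# Hypothetical respondent counts: the total shrinks and the mix shifts
# toward doctoral institutions as community colleges stop participating.
samples = {
    2011: {"doctoral": 100, "community": 300},
    2012: {"doctoral": 90,  "community": 180},
    2013: {"doctoral": 80,  "community": 80},
}

for year, counts in samples.items():
    total = sum(counts.values())
    avg = sum(SCORES[t] * n for t, n in counts.items()) / total
    print(f"{year}: n={total:3d}, average satisfaction = {avg:.2f}")

# Prints a rising "trend" (6.00 -> 6.17 -> 6.50) even though no group's
# score moved: the apparent change is pure sample composition.
```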
Worst of all, there is no mature system of analytics for higher education IT. Before one can be built, IT professionals need agreed-upon outcomes, uniform definitions for key concepts, consistent data collection, and a large, centralized data store. Since analytics is essential if this branch of IT is to develop and stay relevant, one can only hope that higher education can bring about these conditions.
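As a purely illustrative sketch of what one such uniform definition might look like (none of this reflects an actual EDUCAUSE standard), consider availability computed the same way by every institution, so results could finally be compared across a shared data store:

```python
from dataclasses import dataclass

@dataclass
class ServiceWindow:
    scheduled_minutes: int  # minutes the service was supposed to be up
    downtime_minutes: int   # unplanned minutes it was actually down

def availability(w: ServiceWindow) -> float:
    """Fraction of scheduled time the service was up (0.0 to 1.0)."""
    return (w.scheduled_minutes - w.downtime_minutes) / w.scheduled_minutes

# Example: a 30-day month with 90 minutes of unplanned downtime.
month = ServiceWindow(scheduled_minutes=30 * 24 * 60, downtime_minutes=90)
print(f"availability = {availability(month):.4%}")  # -> 99.7917%
```

The point is not this particular formula but the agreement itself: without one shared definition, two campuses reporting "99.8% availability" may be measuring different things.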