In 2001, the testing company brought together an international consortium of educators, technology specialists and government representatives to begin defining the core characteristics of information consumption at the college level.
Knowing where and how to find information, they agreed, was just the beginning. Interpreting, sorting, evaluating, manipulating and repackaging information in dozens of forms from thousands of
sources - as well as having a fundamental understanding of the legal and ethical uses of digital materials - are also important components.
Leaving aside the problem of how one tests this sort of thing, I totally agree that these skills are critical. They have always been critical, but now the volume of easily acquired information is suddenly so high, and it is packaged in new forms about which we have no settled associations, one way or the other, as far as reliability goes.
Apropos of which, one wonders what will become of the traditional exhortation against judging a book by its cover. Will it be updated? Will we be told not to judge a site by its domain? Or by its web design? Or will the saying remain, becoming ever more opaque to succeeding generations? Ah, I am doomsaying again. I don't actually think that books will disappear.
But more on-topic: the flood of data and how to sort it. I must say that I don't think Google's engine is helping matters here. As I understand it, the program ranks a search result more highly (that is, lists it higher up, where you are more likely to see it) the more other people performing the same search have themselves clicked on it. This is a very clever feedback loop for generating popularity. In fact, it may be the perfect simulation of a pack of thirteen-year-old girls opining on their peers. But for vetting and evaluating information, I am not so sure it is ideal. It generates obscurity as well as popularity. The information that is not found early is then not found often, and, as it is found neither early nor often, it becomes progressively less accessible to everyone.
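If it helps to see the dynamic rather than just describe it, here is a toy simulation, in Python, of the kind of feedback loop I mean. Everything in it is invented for illustration - the click model, the numbers, the fall-off in attention by position - and real search ranking involves far more signals than this. The point is only that feeding clicks back into the ordering tends to concentrate attention on a few results and starve the rest.

```python
import random

# Toy model of the feedback loop described above: results that get clicked
# rise in the ranking, which makes them more likely to be clicked again.
# All numbers are made up for illustration.

NUM_RESULTS = 20       # hypothetical pool of pages answering one query
NUM_SEARCHES = 10_000  # hypothetical number of identical searches

clicks = [0] * NUM_RESULTS

def click_probability(position: int) -> float:
    """Assume attention falls off sharply with rank position."""
    return 0.5 ** position

for _ in range(NUM_SEARCHES):
    # Re-rank by accumulated clicks (ties keep their original order).
    ranking = sorted(range(NUM_RESULTS), key=lambda r: -clicks[r])
    for position, result in enumerate(ranking):
        if random.random() < click_probability(position):
            clicks[result] += 1
            break  # the searcher clicks one result and stops looking

print(sorted(clicks, reverse=True))
# Typically a handful of results absorb nearly all the clicks while the
# rest sit near zero: popularity and obscurity generated by the same loop.
```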
How can that be good? The thought makes me long for the physicality of the library. There, the book that hasn't been opened for some months doesn't spontaneously slide down the shelf into the dusty shadows, or worse. True, volumes less frequently checked out may end up being stored off-site or even deaccessioned, but it doesn't happen automatically through the workings of software. Actual humans and their judgement are involved.
In fact, the metaphor of books moving about on their own is not even wholly apt, for in the case of a Google search of the web, it is not the texts, the objects of the search, that move in relation to each other as a result of searchers' actions; they are all ostensibly still on the same servers and at the same addresses as before. It is the index, the entries in the card catalogue, that is affected. Can you imagine? It is not that the less frequently consulted book moves off-site, so that you must endure the hassle of requesting it. It is rather that the very information that such a book exists becomes harder and harder to find. Perversely, the Google engine, designed to make the information within documents more accessible, ends up marginalizing the meta-information about the existence of some documents.
It is all rather troubling. I long, as already noted, for the library, and I worry about the future. Clearly other people do as well, and some of them have made this Flash piece, which presents itself as a history of media told from the future. I recommend it as a thoughtful piece of speculative fiction.