Dorothea Salo posed this question on FriendFeed:
Crowdsourced abstracting and indexing service. The next big thing, or a really dumb pipe dream?
MPOW has a contract to provide indexing and abstracts for TRIS, the main transportation database for North America. The quality of the indexing and abstracting varies, because most of the people doing the work are contractors, all with different levels of expertise, training, and interaction with users. I mention users because all too often they are overlooked in the discussion. Especially now that most article databases are readily available online, even if only by subscription, and are not restricted to something like Dialog, indexing for information retrieval by an average user matters more than ever.
So how could crowdsourcing for indexing and abstracting work?
- It needs to be for a small community. I could easily see this working in a field where everybody really knows everybody, like transportation. I don’t think this would work for JSTOR or Web of Science, but it could possibly work for TRIS. The volume isn’t too great, and it’s a very narrow subject.
- There need to be adequate incentives. Money is the greatest incentive for people to take on such a task. This could be actual payment, or access to services, such as the database or other resources, that would help defray the cost of indexing. Perhaps even the cost of subscriptions for the titles indexed? Other incentives could be prestige, as in YPOW is contributing to the greater community and could be considered a leader. My real incentive would be better representation for the user. I personally hate trying to negotiate a messy record for a confused user when they did everything right. They didn’t use Google, they tried the database, and it’s letting them down. That’s not their fault; it’s ours.
- There needs to be adequate training. It’s foolish to think that people can be good indexers or write decent abstracts for any database without training. Sure, library school can give a person a strong foundation, but every application is different. Is there a controlled vocabulary? How is it used? How will people be searching the database, and how do searches pick up the index terms or abstracts? Indexers need to know these things in order to provide terms that will help with retrieval. (I guess this goes back to the users.)
- There needs to be some quality-control mechanism. This could be done by allowing people to edit records regardless of who entered them, or by having a review process where people could flag errant entries and discuss them. I would probably prefer the former, but I also have concerns about people misusing index terms. That’s not to say it would actually be a problem, though. (A rough sketch of what a flag-and-review setup might look like follows this list.)
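To make the flag-and-review idea a bit more concrete, here is a minimal sketch of what the underlying data model could look like. This is purely illustrative: the names (IndexRecord, flag, resolve_flag) and fields are my own invention, and nothing here reflects how TRIS or MPOW actually stores records.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Flag:
    flagged_by: str        # who raised the concern
    reason: str            # e.g. "term misapplied" or "abstract doesn't match article"
    resolved: bool = False


@dataclass
class IndexRecord:
    record_id: str
    title: str
    terms: List[str]                              # controlled-vocabulary terms
    abstract: str = ""
    flags: List[Flag] = field(default_factory=list)

    def flag(self, user: str, reason: str) -> None:
        """Any community member can flag a record for discussion."""
        self.flags.append(Flag(flagged_by=user, reason=reason))

    def resolve_flag(self, i: int, new_terms: Optional[List[str]] = None) -> None:
        """After discussion, a reviewer corrects the terms and closes the flag."""
        if new_terms is not None:
            self.terms = new_terms
        self.flags[i].resolved = True

    @property
    def needs_review(self) -> bool:
        return any(not f.resolved for f in self.flags)


if __name__ == "__main__":
    rec = IndexRecord("TRIS-0001", "Bridge deck deterioration",
                      ["Bridge decks", "Corrosion"])
    rec.flag("user42", "Missing the term 'Deterioration'")
    print(rec.needs_review)                       # True
    rec.resolve_flag(0, new_terms=rec.terms + ["Deterioration"])
    print(rec.needs_review)                       # False
```

The point of a structure like this is simply that flags stay attached to the record and nothing gets silently overwritten, which is the trade-off against letting anyone edit anything directly.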
Ultimately, for any sort of crowdsourcing to work with indexing and abstracting, there needs to be a fair amount of trust within the community. Trust that the controlled vocabulary, if there is one, will be used properly, and trust that the quality of work will be consistent. There will also need to be communication between all parties involved. Users, those using the database to find information, should be able to provide some feedback. Those providing their services and expertise need a forum to discuss issues they may have and to actually engage with the work.
I personally would love to see this happen, not just for MPOW, but any community small enough that it’s actually feasible.