
Summarizing technical support documents for search: Expert and user studies (19 Sep 2004)
Users start with an information need; in our study this was described in the scenario and already translated into a query, although in a real-world situation the user would select the query terms. On the search-result page, users often read only the titles of hitlist items, looking for a potential match to their information need. For example, when they have a specific problem to solve, they may look for symptoms of the problem in the title. This may explain why the Titles Only approach was not significantly worse than the other summary types. Users may also initially look for boldface search terms in the summary. If it appears that the item may satisfy the information need, users then read (or scan) the summary; if not, they go on to the next hitlist item. If the summary appears to indicate a document containing the information required to complete the task, users then look at the corresponding document to determine whether it indeed contains the necessary and sufficient information. In fact, the summary itself may contain the necessary and sufficient information; for example, the user may be looking for an e-mail address or phone number that appears in the summary, in which case viewing the document itself is unnecessary.
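
Read as a decision procedure, this scanning model can be sketched in a few lines of code. The sketch below is only an illustration of the behaviour described above and is not part of the study itself; the HitlistItem fields and the looks_promising and satisfies_need judgements are hypothetical stand-ins for the user's own relevance decisions, written here in Python.

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class HitlistItem:
        title: str      # title shown on the search-result page
        summary: str    # summary shown beneath the title
        document: str   # full text of the corresponding document

    def scan_hitlist(
        hitlist: list[HitlistItem],
        looks_promising: Callable[[str], bool],  # quick judgement of a title or summary
        satisfies_need: Callable[[str], bool],   # judgement that a text answers the need
    ) -> Optional[str]:
        """Walk the hitlist the way the model describes users doing it."""
        for item in hitlist:
            # Read (only) the title, looking for a potential match to the
            # information need, e.g. symptoms of the problem to be solved.
            if not looks_promising(item.title):
                continue  # not promising; go on to the next hitlist item

            # Read or scan the summary (often cued by boldface search terms).
            if satisfies_need(item.summary):
                # The summary itself may contain the answer (for example an
                # e-mail address or phone number), so the document is never opened.
                return item.summary
            if not looks_promising(item.summary):
                continue  # summary does not suggest the document will help

            # Open the document to determine whether it really contains the
            # necessary and sufficient information for the task.
            if satisfies_need(item.document):
                return item.document

        return None  # no hitlist item satisfied the information need

    # Example: crude keyword judgements standing in for the user.
    terms = ["printer", "error 41"]
    result = scan_hitlist(
        hitlist=[],  # would be the search-result hitlist
        looks_promising=lambda text: any(t in text.lower() for t in terms),
        satisfies_need=lambda text: all(t in text.lower() for t in terms),
    )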

Of course, there are other variations on this basic process. For example, some participants read several titles, assessing the relative likelihood that the documents had the desired information, before deciding which summaries to read; others read the title and then scanned the summary, skipping the “look for boldface terms only” step; and so on. Some variants of this process are also omitted from the diagram for clarity; for example, a user may go directly from box 10 to box 30 or 40 in Figure 9.

An important aspect of the model is that participants typically read only those summaries that look promising, based on titles and, perhaps, the presence and textual context of boldface search terms. The differences among the summary types tested in this study may therefore be second-order effects. This might explain why, although the Abstract group had the most favorable values on a number of measures, the differences usually did not reach statistical significance. Although there are many differences between this experiment and real-life situations, we believe that the model captures realistic behavior regarding the reading of summaries in search-result hitlists.
Article URL: http://www.research.ibm.com/journal/sj/433/wolf.html
