HOW PEOPLE GO ABOUT SEARCHING VIDEO COLLECTIONS:
LESSONS LEARNED
presented by
Barbara M. Wildemuth
School of Information & Library Science
University of North Carolina at Chapel Hill
VIDEO versus TEXT
• Searching video collections is just like searching collections of text documents.
• Searching video collections is nothing like searching collections of text documents.
The research model

[Figure: the research model. GOALS, TASKS, VIDEO CHARACTERISTICS, INDIVIDUAL CHARACTERISTICS, and SURROGATES shape the user's EFFORT (time, mental load, physical load) and OUTCOMES (performance, perceptions).]
Tasks associated with video retrieval

• Select items from the collection
• Select particular clips or frames
• Evaluate the style of the video
• All oriented toward re-use of the video retrieved
Which surrogates are most effective?
• Storyboard with text keywords
• Storyboard with audio keywords
• Slide show with text keywords
• Slide show with audio keywords
• Fast forward
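A minimal sketch of how a fast-forward surrogate might be assembled, assuming the clip is available as a simple frame sequence (the frame list and helper name here are illustrative, not the Open Video implementation):

```python
def fast_forward(frames, speed):
    """Keep every `speed`-th frame, compressing the clip to roughly
    1/speed of its length; at speed 64, one kept frame stands in
    for about two seconds of 30 fps video."""
    if speed < 1:
        raise ValueError("speed must be >= 1")
    return frames[::speed]

# Illustrative: a 256-frame clip at the four speeds tested in the study.
clip = list(range(256))
for speed in (32, 64, 128, 256):
    print(speed, len(fast_forward(clip, speed)))  # 8, 4, 2, 1 frames kept
```

Higher speeds keep fewer frames, which is why the comprehension measures below degrade as the speed increases.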
How fast is too fast?
[Figure: mean scores at surrogate speeds of 32, 64, 128, and 256, for gist comprehension (full text), visual gist, object recognition (graphical), and action recognition.]

• Visual gist at 32 is better than at other speeds
• Object recognition (graphical) at 32 and 64 is better than at 256
• Gist comprehension (full text) at 32 and 64 is better than at 128 and 256
• Action recognition at 32 is better than at 128 or 256
AgileViews framework
• Users should be able to move from one view to another quickly and easily:
  – Overview
  – Preview
  – History view
  – Peripheral view
  – Shared view
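One way to model the five AgileViews view types and quick switching between them (a hypothetical sketch; the framework's actual interfaces are not specified in this talk):

```python
from enum import Enum, auto

class View(Enum):
    OVERVIEW = auto()
    PREVIEW = auto()
    HISTORY = auto()
    PERIPHERAL = auto()
    SHARED = auto()

class AgileViewer:
    """Tracks the current view plus the trail of views visited,
    so a history view can be rendered from `trail`."""
    def __init__(self):
        self.current = View.OVERVIEW
        self.trail = [View.OVERVIEW]

    def switch(self, view: View) -> None:
        # Switching is a single cheap state change, keeping the
        # move between views quick and easy.
        self.current = view
        self.trail.append(view)

viewer = AgileViewer()
viewer.switch(View.PREVIEW)
viewer.switch(View.SHARED)
print(viewer.current.name)  # SHARED
```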
The role of video structure
• What is narrativity?
  – Cause and effect across scenes
  – Persistent characters
  – BOTH
• The next step: use video structure to “tune” the video retrieval system
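As an illustration of such "tuning", a toy narrativity score could combine the two cues named above, character persistence across consecutive scenes and cause-and-effect links (the scene representation and equal weighting are assumptions for illustration, not the authors' measure):

```python
def narrativity_score(scenes):
    """scenes: list of dicts with a 'characters' set and a
    'causes_next' flag linking the scene to its successor.
    Returns the fraction of scene transitions showing each cue,
    averaged, so videos with BOTH cues score highest."""
    transitions = len(scenes) - 1
    if transitions < 1:
        return 0.0
    persistent = sum(
        1 for a, b in zip(scenes, scenes[1:])
        if a["characters"] & b["characters"]  # shared characters
    )
    causal = sum(1 for s in scenes[:-1] if s["causes_next"])
    return (persistent + causal) / (2 * transitions)

scenes = [
    {"characters": {"Ann"}, "causes_next": True},
    {"characters": {"Ann", "Bo"}, "causes_next": False},
    {"characters": {"Cy"}, "causes_next": False},
]
print(narrativity_score(scenes))  # 0.5
```

A retrieval system could then weight surrogate choice or ranking by scores like this one.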
Methodological contributions
• Measures of performance
  – Object recognition (text, graphical)
  – Action recognition
  – Linguistic gist comprehension (full text, multiple choice)
  – Visual gist comprehension (multi-faceted)
• Measures of user perceptions
  – Perceived ease of use
  – Perceived usefulness
  – Flow (concentration, enjoyment)
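A simple way a recognition measure might be computed, e.g. object recognition as the proportion of target objects a participant correctly identifies (a hedged sketch; the studies' actual scoring rubrics are in the cited SILS technical report):

```python
def recognition_score(identified, targets):
    """Fraction of target items correctly identified,
    ignoring extra (non-target) answers."""
    targets = set(targets)
    if not targets:
        raise ValueError("need at least one target")
    return len(set(identified) & targets) / len(targets)

# Participant named 3 objects, 2 of which were among the 4 targets.
print(recognition_score({"truck", "dog", "tree"},
                        {"truck", "dog", "house", "road"}))  # 0.5
```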
Next research questions
• Do the current findings hold up outside the lab?
• What role might sound/audio play in helping people to select useful videos from a collection? How can attributes of the sound track be represented in a surrogate?
• What tools are needed to support re-use of the video retrieved?
Your questions?
• More details at: http://www.open-video.org/
• Acknowledgements to the Open Video team:
  – Gary Marchionini
  – Gary Geisler
  – Xiangming Mu
  – Meng Yang
  – Michael Nelson
  – Beth Fowler
Jon Elsas, Rich Gruss, Anthony Hughes, Jie Luo, Sanghee Oh, Amy Pattee, Terrell Russell, Laura Slaughter, Richard Spinks, Christine Stachowitz, Tom Tolleson, TJ Ward, Curtis Webster, Todd Wilkens
Studies cited today

• Geisler, G., Marchionini, G., Nelson, M., Spinks, R., & Yang, M. (2001). Interface concepts for the Open Video Project. Proceedings of the Annual Conference of the American Society for Information Science & Technology, 58-75. http://www.ischool.utexas.edu/~geisler/info/p514-geisler.pdf
• Hughes, A., Wilkens, T., Wildemuth, B., & Marchionini, G. (2003). Text or pictures? An eyetracking study of how people view digital video surrogates. Proceedings of International Conference on Image and Video Retrieval (CIVR), 271-280. http://www.open-video.org/papers/hughes_civr_2003.pdf
• Wildemuth, B. M., Marchionini, G., Wilkens, T., Yang, M., Geisler, G., Fowler, B., Hughes, A., & Mu, X. (2002). Alternative surrogates for video objects in a digital library: Users’ perspectives on their relative usability. Proceedings of the 6th European Conference on Digital Libraries, September 16 - 18, 2002, Rome, Italy. http://www.open-video.org/papers/ECDL2002.020620.pdf
• Wildemuth, B. M., Marchionini, G., Yang, M., Geisler, G., Wilkens, T., Hughes, A., & Gruss, R. (2003). How fast is too fast? Evaluating fast forward surrogates for digital video. Proceedings of the 3rd ACM/IEEE-CS Joint Conference on Digital Libraries (JCDL 2003), pp. 221-230. http://www.open-video.org/papers/p221-wildemuth.pdf
• Wilkens, T., Hughes, A., Wildemuth, B. M., & Marchionini, G. (2003). The role of narrative in understanding digital video: An exploratory analysis. Proceedings of the Annual Meeting of the American Society for Information Science & Technology, 40, 323-329. http://www.open-video.org/papers/Wilkens_Asist_2003.pdf
• Yang, M., & Marchionini, G. (2005). Deciphering visual gist and its implications for video retrieval and interface design. Conference on Human Factors in Computing Systems (CHI) (Portland, OR. Apr. 2-7, 2005), 1877-1880. http://www.open-video.org/papers/MengYang_050205_CHI.pdf
• Yang, M., Wildemuth, B. M., Marchionini, G., Wilkens, T., Geisler, G., Hughes, A., Gruss, R., & Webster, C. (2003). Measures of user performance in video retrieval research. UNC School of Information and Library Science (SILS) Technical Report TR-2003-02. http://sils.unc.edu/research/publications/reports/TR-2003-02.pdf