Category:User-centric evaluation

From RecSysWiki
Whereas the field of recommender systems originally focused heavily on offline evaluation, awareness has recently grown that the [[usability]] and [[user experience]] of recommender systems should be tested in online evaluations with real users. User-centric evaluation methods can be broadly categorized into [[qualitative user-studies]] and [[quantitative user experiments or field trials]]. User studies typically combine [[objective evaluation measures]] with [[subjective evaluation measures]], often in the form of [[design critiques]], [[interviews]], and [[questionnaires]].
 
User-centric evaluation has had difficulty gaining popularity as an evaluation method, because it is often hard to test new algorithms or systems with real users, and early examples in the recommender systems literature have been of questionable quality. There have been a few suggestions for the [[standardization|standardization of user-centric evaluation metrics]] and the [[simplification|simplification of the user-centric evaluation process]].
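The combination of objective and subjective measures mentioned above can be sketched in code. The following is a minimal, hypothetical illustration (the questionnaire items, the "perceived quality" construct, and all data are invented for the example): Likert-scale questionnaire items are averaged into a subjective construct score per participant, which is then correlated with an objective measure logged by the system.

```python
# Hypothetical sketch of a user-centric evaluation analysis.
# Item names, the construct, and the data below are illustrative only.
from statistics import mean

# Each participant: three 5-point Likert items measuring a subjective
# "perceived quality" construct, plus one objective measure logged by
# the system (here: number of recommendations the user accepted).
participants = [
    {"q1": 4, "q2": 5, "q3": 4, "accepted": 7},
    {"q1": 2, "q2": 3, "q3": 2, "accepted": 1},
    {"q1": 5, "q2": 4, "q3": 5, "accepted": 9},
]

def construct_score(p):
    """Average the Likert items into a single subjective construct score."""
    return mean([p["q1"], p["q2"], p["q3"]])

def pearson(xs, ys):
    """Pearson correlation between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

subjective = [construct_score(p) for p in participants]
objective = [p["accepted"] for p in participants]

# Correlating the subjective construct with the objective measure shows
# how well users' perceptions track their observed behavior.
r = pearson(subjective, objective)
```

In a real study the construct would first be validated (e.g. by checking internal consistency of the items) before being correlated with behavioral measures.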
  
  
 
[[Category:Evaluation]]

Revision as of 22:56, 28 February 2011
