Eval Model class
EvalModel(pred_list, truth_list, metric_list)
Class for evaluating a recommender system.
The Evaluation module needs the following parameters:
- A list of computed rank/predictions (in case multiple splits must be evaluated)
- A list of truths (in case multiple splits must be evaluated)
- List of metrics to compute
Naturally, the list of computed rank/predictions and the list of truths must have the same length: the rank/prediction at position \(i\) will be compared with the truth at position \(i\).
Examples:
>>> import clayrs.evaluation as eva
>>>
>>> em = eva.EvalModel(
>>>     pred_list=rank_list,
>>>     truth_list=truth_list,
>>>     metric_list=[
>>>         eva.NDCG(),
>>>         eva.Precision(),
>>>         eva.RecallAtK(k=5, sys_average='micro')
>>>     ]
>>> )
Then call the fit() method of the instantiated EvalModel to perform the actual evaluation.
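For instance, continuing the example above, the evaluation could then be triggered as follows (a minimal sketch; the two objects returned are pandas DataFrames, as detailed in the fit() documentation below):
>>> sys_result, users_result = em.fit()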
PARAMETER | DESCRIPTION |
---|---|
`pred_list` | Recommendation lists to evaluate. It's a list in case multiple splits must be evaluated. Both Rank objects (where items are ordered and the score is not relevant) and Prediction objects (where the predicted score is the predicted rating of the user for a certain item) can be evaluated. TYPE: `Union[List[Prediction], List[Rank]]` |
`truth_list` | Ground truth lists used to compare recommendations. It's a list in case multiple splits must be evaluated. TYPE: `List[Ratings]` |
`metric_list` | List of metrics that will be used to evaluate the recommendation lists specified. TYPE: `List[Metric]` |
RAISES | DESCRIPTION |
---|---|
`ValueError` | Raised in case pred_list and truth_list are empty or have different lengths |
Source code in clayrs/evaluation/eval_model.py
metric_list: List[Metric]
property
List of metrics used to evaluate recommendation lists

RETURNS | DESCRIPTION |
---|---|
`List[Metric]` | The list containing all metrics |
pred_list: Union[List[Prediction], List[Rank]]
property
truth_list: List[Ratings]
property
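As an illustration (a minimal sketch, assuming the em instance built in the constructor example above), the three properties simply expose the objects passed to the constructor:
>>> em.metric_list  # e.g. [NDCG(), Precision(), RecallAtK(k=5, sys_average='micro')]
>>> em.pred_list    # the Rank/Prediction lists to evaluate
>>> em.truth_list   # the Ratings lists used as ground truth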
append_metric(metric)
Append a metric to the metric list that will be used to evaluate recommendation lists
PARAMETER | DESCRIPTION |
---|---|
`metric` | Metric to append to the metric list. TYPE: `Metric` |
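For instance (a minimal sketch, assuming pred_list and truth_list have already been computed as in the examples above):
>>> import clayrs.evaluation as eva
>>>
>>> em = eva.EvalModel(pred_list, truth_list, metric_list=[eva.Precision()])
>>> em.append_metric(eva.Recall())  # metric_list now contains Precision() and Recall()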
Source code in clayrs/evaluation/eval_model.py
fit(user_id_list=None)
This method performs the actual evaluation of the recommendation frames passed as input in the constructor of the class.
In case you want to perform the evaluation only for selected users, pass their ids via the user_id_list parameter of this method. Otherwise, all users in the recommendation frames will be considered in the evaluation process.
Examples:
>>> import clayrs.evaluation as eva
>>> selected_users = ['u1', 'u22', 'u3']  # only these users will be considered in the evaluation
>>> em = eva.EvalModel(
>>>     pred_list,
>>>     truth_list,
>>>     metric_list=[eva.Precision(), eva.Recall()]
>>> )
>>> em.fit(selected_users)
The method returns two pandas DataFrames: one containing system results for every metric in the metric list, and one containing per-user results for every eligible metric.
PARAMETER | DESCRIPTION |
---|---|
`user_id_list` | List of string ids of the users to consider in the evaluation (note that only string ids are accepted, not their mapped integers). DEFAULT: `None` |
RETURNS | DESCRIPTION |
---|---|
`pd.DataFrame` | The first DataFrame contains the system result for every metric inside the metric_list |
`pd.DataFrame` | The second DataFrame contains every user result for every eligible metric inside the metric_list |
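For instance, the two result frames could be captured and previewed with standard pandas calls (a minimal sketch; the 'u1' and 'u22' ids are purely illustrative):
>>> sys_result, users_result = em.fit(user_id_list=['u1', 'u22'])
>>> sys_result.head()    # system results for every metric in metric_list
>>> users_result.head()  # per-user results for every eligible metric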
Source code in clayrs/evaluation/eval_model.py