A "good" MAE (Mean Absolute Error) score is relative to the specific dataset and context of the problem, but generally, a lower MAE indicates a better model. Comparing the MAE to the magnitude of the target variable is crucial to understanding its significance.
Here's a more detailed breakdown:
- Understanding MAE: MAE represents the average absolute difference between the predicted and actual values. It provides a linear average of the errors: each error contributes in proportion to its size, unlike squared-error metrics such as RMSE, which penalize large errors more heavily.
- Context is Key: There's no universally "good" MAE value. An MAE of 10 might be excellent for predicting house prices in the millions but terrible for predicting exam scores out of 100.
- Benchmarking:
- Compare to the target variable's scale: Calculate the average value of your target variable (actual values). Then, express the MAE as a percentage of this average. This gives you a sense of the error relative to the data's magnitude.
- Compare to baseline models: Evaluate a simple baseline model (e.g., always predicting the mean or median). If your MAE is significantly lower than the baseline, your model is adding value.
- Compare to other models: Compare your MAE to the MAEs of other models trained on the same dataset. This gives you a sense of how your model performs relative to others in the field.
- Guidelines (MAE expressed relative to the target mean, analogous to interpreting a MAPE): These are very general and should be interpreted with caution.
- Below 10%: Often considered excellent, suggesting a high degree of accuracy.
- 10% to 20%: Still good, indicating a reasonable level of accuracy.
- 20% to 50%: Reasonable, though accuracy is noticeably degrading; whether this is acceptable depends on the application.
- Above 50%: Suggests that the model's predictions are often significantly off, and it might not be very useful. Re-evaluate your features, model, and data.
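The ideas above (computing MAE, scaling it by the target mean, and comparing against a mean-predicting baseline) can be sketched in a few lines of NumPy. The house-price values here are made-up sample data for illustration only:

```python
import numpy as np

# Hypothetical actual and predicted values (house prices in dollars)
y_true = np.array([450_000, 520_000, 480_000, 610_000, 390_000])
y_pred = np.array([440_000, 515_000, 500_000, 580_000, 410_000])

# MAE: average absolute difference between predictions and actuals
mae = np.mean(np.abs(y_true - y_pred))

# Express MAE as a percentage of the target's mean to judge its scale
mae_pct = 100 * mae / np.mean(y_true)

# Baseline: a "model" that always predicts the mean of the actuals
baseline_mae = np.mean(np.abs(y_true - np.mean(y_true)))

print(f"MAE:          ${mae:,.0f} ({mae_pct:.1f}% of mean price)")
print(f"Baseline MAE: ${baseline_mae:,.0f}")
```

If your model's MAE is well below the baseline's, the model is capturing real signal; if the two are close, the model is adding little beyond predicting the average.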
Example:
Suppose you're predicting house prices in a city, and the average house price is $500,000.
- MAE = $5,000: ($5,000 / $500,000) * 100% = 1%. This would likely be considered a very good MAE.
- MAE = $50,000: ($50,000 / $500,000) * 100% = 10%. This is still generally considered good.
- MAE = $250,000: ($250,000 / $500,000) * 100% = 50%. This would be a very poor MAE, indicating that your model's predictions are significantly off on average.
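The arithmetic in the example above is just MAE divided by the average target value; a minimal snippet makes the calculation explicit:

```python
# Verify the percentages from the worked house-price example:
# MAE as a percentage of the average house price ($500,000)
avg_price = 500_000
for mae in (5_000, 50_000, 250_000):
    pct = 100 * mae / avg_price
    print(f"MAE = ${mae:,}: {pct:.0f}% of the average price")
```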
In conclusion, evaluating an MAE score requires understanding the context of the problem, comparing it to the scale of the target variable, and benchmarking it against baseline or alternative models.