What is Recall?

Recall, also known as sensitivity or the true positive rate, is a performance metric used in classification tasks to measure a model's ability to identify all the actual positive instances. It is the ratio of true positive predictions (correctly identified positive instances) to the total number of actual positives, which is the sum of true positives and false negatives (positive instances incorrectly classified as negative).

Recall is defined as:

Recall = True Positives / (True Positives + False Negatives)
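The formula can be computed directly from a model's predictions. Below is a minimal sketch using hypothetical toy labels (1 = positive, 0 = negative); the true-positive and false-negative counts are tallied by hand so the arithmetic is visible:

```python
# Hypothetical toy labels: 1 = positive class, 0 = negative class.
y_true = [1, 1, 1, 1, 0, 0, 0, 1]
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]

# True positives: actual positives the model predicted as positive.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
# False negatives: actual positives the model predicted as negative.
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

recall = tp / (tp + fn)
print(recall)  # 3 true positives / (3 + 2 false negatives) = 0.6
```

Note that only the actual positives matter here: the false positive at index 5 does not enter the calculation at all, which is precisely why recall alone says nothing about how many negatives the model mislabels.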

Why is Recall important?

Recall is important in situations where the cost of false negatives is high. In a medical diagnosis setting, for example, high recall indicates that the model identifies most patients who have a certain disease, minimizing the risk of a missed diagnosis. Low recall, on the other hand, means the model often fails to detect the disease in patients who actually have it, which can lead to delayed treatment and worse health outcomes.

Balancing Precision and Recall

In many cases, there is a trade-off between precision and recall. A model that is highly precise might have a low recall, and vice versa. The optimal balance between precision and recall depends on the specific problem and the relative costs of false positives and false negatives. One common metric used to balance precision and recall is the F1 score, which is the harmonic mean of precision and recall:

F1 Score = 2 * (Precision * Recall) / (Precision + Recall)

The F1 score ranges between 0 and 1, with higher values indicating better performance. It is particularly useful when dealing with imbalanced datasets, where one class is much more common than the other.
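A small helper makes the harmonic-mean formula concrete. This is a minimal sketch, not a library implementation; the guard for the zero-denominator case (precision and recall both 0) is a common convention rather than part of the formula itself:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall.

    Returns 0.0 when both inputs are 0, where the formula
    would otherwise divide by zero.
    """
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A model with precision 0.75 and recall 0.6:
print(f1_score(0.75, 0.6))  # 0.666...
```

Because it is a harmonic mean, the F1 score is pulled toward the smaller of the two values: a model with precision 1.0 but recall 0.1 scores only about 0.18, so neither metric can be sacrificed entirely to inflate the other.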

Resources for Recall: