Let's update the Prediction and Performance section to include details on how the confusion matrix threshold is applied. From the code you can see that we predict the positive class (1, True, or the categorical label that is second in the lexicographical ordering of the labels) when the predicted probability is *greater than or equal to* the threshold that produces the best F1 score.
In the Confusion Matrix section, below the paragraph and image, we should add:
All predicted probabilities greater than or equal to the F1 Max threshold are labeled with the positive class (e.g., 1, True, or the second label in lexicographical order). The F1 Max threshold is selected to maximize the F1 score calculated from confusion matrix values (true positives, true negatives, false positives, and false negatives).
For multiclass problems, prediction class labels are based on the class with the highest predicted probability.
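To make the proposed wording concrete, here is a minimal sketch of how an F1 Max threshold could be selected and then applied. This is illustrative only: the function names (`f1`, `f1_max_threshold`) and the toy arrays are assumptions, not taken from the actual codebase, which may search candidate thresholds differently.

```python
import numpy as np

def f1(y_true, y_pred):
    """F1 score computed from confusion-matrix counts (TP, FP, FN)."""
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def f1_max_threshold(y_true, y_prob):
    """Return the candidate threshold that maximizes the F1 score.

    Every unique predicted probability is tried as a cutoff; at each
    cutoff, probabilities >= the cutoff are labeled positive.
    """
    thresholds = np.unique(y_prob)
    scores = [f1(y_true, (y_prob >= t).astype(int)) for t in thresholds]
    return float(thresholds[int(np.argmax(scores))])

# Toy data for illustration
y_true = np.array([0, 0, 1, 1, 1, 0])
y_prob = np.array([0.10, 0.40, 0.35, 0.80, 0.70, 0.20])

best = f1_max_threshold(y_true, y_prob)
# Probabilities greater than or equal to the F1 Max threshold
# are labeled with the positive class.
y_pred = (y_prob >= best).astype(int)
```

Note the `>=` comparison: a probability exactly equal to the threshold is labeled positive, matching the "greater than or equal to" wording proposed above.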