Questions

Step 2, Out[6]: What are the predicted value results in terms of correct and incorrect predictions?
Step 2, In[7]: How do the results of the confusion matrix compare with those in the previous step (explain the coloring)?
Step 2, In[9]: Based on the results achieved, is the sensitivity good enough for this scenario?
Step 2, In[10]: Is the specificity given at an acceptable level, or would you want to see a different percentage value here?
Step 2, In[16]: What values are used in determining the overall accuracy of your model?
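The metric cells that follow assume the confusion-matrix counts TP, FN, FP, and TN are already defined. As a reference only, here is a minimal sketch of one way they might be obtained; it assumes the test labels y_test and the model predictions y_pred from the earlier steps are binary, with 1 = abnormal and 0 = normal (these names and that encoding are assumptions, not output from this lab).

# Assumed sketch: derive the confusion-matrix counts used by the metric cells below.
# Assumes y_test and y_pred are binary arrays with 1 = abnormal, 0 = normal.
from sklearn.metrics import confusion_matrix

TN, FP, FN, TP = confusion_matrix(y_test, y_pred).ravel()
print(f"TP={TP}, FN={FN}, FP={FP}, TN={TN}")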
Sensitivity is also known as hit rate, recall, or true positive rate (TPR). It measures the proportion of the actual positives that are correctly identified. In this example, the sensitivity is the probability of detecting an abnormality for patients who have an abnormality.

In [9]:
# Sensitivity, hit rate, recall, or true positive rate
Sensitivity = float(TP)/(TP+FN)*100
print(f"Sensitivity or TPR: {Sensitivity}%")
print(f"There is a {Sensitivity}% chance of detecting an abnormality for patients who have an abnormality.")

Sensitivity or TPR: 90.47619047619048%
There is a 90.47619047619048% chance of detecting an abnormality for patients who have an abnormality.

Step 2, In[9]: Based on the results achieved, is the sensitivity good enough for this scenario?

Specificity

The next statistic is specificity, which is also known as the true negative rate (TNR). It measures the proportion of the actual negatives that are correctly identified. In this example, the specificity is the probability of detecting normal for patients who are normal.

In [10]:
# Specificity or true negative rate
Specificity = float(TN)/(TN+FP)*100
print(f"Specificity or TNR: {Specificity}%")
print(f"There is a {Specificity}% chance of detecting that normal patients are normal.")

Specificity or TNR: 70.0%
There is a 70.0% chance of detecting that normal patients are normal.

Step 2, In[10]: Is the specificity given at an acceptable level, or would you want to see a different percentage value here?
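As an optional cross-check (not part of the lab cells), the same sensitivity and specificity values can be obtained directly from scikit-learn: sensitivity is the recall of the positive (abnormal) class, and specificity is the recall of the negative (normal) class. This sketch again assumes the y_test and y_pred arrays from the earlier steps with 1 = abnormal.

# Assumed cross-check: recall of each class reproduces the manual calculations above.
from sklearn.metrics import recall_score

sens_check = recall_score(y_test, y_pred, pos_label=1) * 100
spec_check = recall_score(y_test, y_pred, pos_label=0) * 100
print(f"Sensitivity check: {sens_check}%")
print(f"Specificity check: {spec_check}%")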
Step 2, Out[6]: What are the predicted value results in terms of correct and incorrect predictions? Your results will vary, but you should have results that are similar to this example:
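If you want to tally the correct and incorrect predictions yourself rather than reading them from the confusion matrix, a simple sketch is shown below; it assumes y_test and y_pred are the label and prediction arrays from the earlier steps (this is not one of the lab's own cells).

# Assumed sketch: count how many predictions match the actual labels.
import numpy as np

correct = int(np.sum(np.asarray(y_test) == np.asarray(y_pred)))
incorrect = len(np.asarray(y_pred)) - correct
print(f"Correct predictions: {correct}, incorrect predictions: {incorrect}")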
False discovery rate

In this example, the false discovery rate is the probability of predicting an abnormality when the patient doesn't have one.

In [15]:
# False discovery rate
FDR = float(FP)/(TP+FP)*100
print(f"False Discovery Rate: {FDR}%")
print(f"You have an abnormality, but there is a {FDR}% chance this is incorrect.")

False Discovery Rate: 13.636363636363635%
You have an abnormality, but there is a 13.636363636363635% chance this is incorrect.

Overall accuracy

How accurate is your model?

In [16]:
# Overall accuracy
ACC = float(TP+TN)/(TP+FP+FN+TN)*100
print(f"Accuracy: {ACC}%")

Accuracy: 83.87096774193549%

Step 2, In[16]: What values are used in determining the overall accuracy of your model?

In summary, you calculated the following metrics from your model:

In [17]:
print(f"Sensitivity or TPR: {Sensitivity}%")
print(f"Specificity or TNR: {Specificity}%")
print(f"Precision: {Precision}%")
print(f"Negative Predictive Value: {NPV}%")
print(f"False Positive Rate: {FPR}%")
print(f"False Negative Rate: {FNR}%")
print(f"False Discovery Rate: {FDR}%")
print(f"Accuracy: {ACC}%")
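For an independent check of the summary above (not required by the lab), scikit-learn can recompute the overall accuracy and the per-class precision and recall from the same predictions. As before, this sketch assumes the y_test and y_pred arrays from the earlier steps.

# Assumed cross-check: accuracy_score should match ACC/100, and
# classification_report summarizes precision and recall for each class.
from sklearn.metrics import accuracy_score, classification_report

print(f"Accuracy check: {accuracy_score(y_test, y_pred) * 100}%")
print(classification_report(y_test, y_pred))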