Draft (Not Horrible) Models

The following are metrics for draft (not horrible) models based on different runs of the labeled data. We consider a model not to be horrible when all four of the following conditions are met:

1. its accuracy beats that of always guessing "yes" or always guessing "no" (the majority-class baseline);
2. its recall is greater than 0.5;
3. its precision is greater than 0.5; and
4. its AUC is greater than 0.5.

Currently, the data below represent the results of a single 80%/20% train-test split per label. Please keep in mind that these models are just a proof of concept to show that we can get some signal out of the labeled data. Consequently, they are straightforward in their construction and lack any real optimization.
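The four checks above can be sketched in plain Python. This is an illustrative helper, not part of the actual pipeline: the function name and its inputs (`y_true` as 0/1 labels, `y_pred` as 0/1 predictions, `y_score` as probability-like scores from the held-out 20% split) are assumptions for the sketch.

```python
def not_horrible(y_true, y_pred, y_score):
    """Return True if a model clears all four 'not horrible' bars
    on a held-out test set (all inputs are parallel lists)."""
    n = len(y_true)

    # Bar 1 baseline: accuracy of always guessing the majority class
    # (always "yes" or always "no").
    pos = sum(y_true)
    baseline = max(pos, n - pos) / n

    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / n
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0

    # AUC via the Mann-Whitney interpretation: the fraction of
    # (positive, negative) pairs where the positive gets the higher
    # score, counting ties as half a win.
    pairs = wins = 0.0
    for t_i, s_i in zip(y_true, y_score):
        if t_i != 1:
            continue
        for t_j, s_j in zip(y_true, y_score):
            if t_j != 0:
                continue
            pairs += 1
            if s_i > s_j:
                wins += 1
            elif s_i == s_j:
                wins += 0.5
    auc = wins / pairs if pairs else 0.0

    return (accuracy > baseline and recall > 0.5
            and precision > 0.5 and auc > 0.5)
```

For example, a degenerate model that always predicts "yes" fails the first bar, since its accuracy can never exceed the majority-class baseline.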


Model Performance Summary