Are you: (1) a subject matter expert who would like to work with us creating lists of legal issues or lists of service providers; (2) someone who would like to review our work; or (3) someone with a dataset of people talking about their legal and life problems that you might be willing to share with us?
Sign up/tell us more using this form.
Draft (Not Horrible) Models
The following are metrics for draft (not horrible) models based on different runs of the labeled data. We consider a model not to be horrible when all of the following conditions are met: (1) its accuracy beats that of always guessing "yes" or always guessing "no;" (2) its recall is greater than 0.5; (3) its precision is greater than 0.5; and (4) its AUC is greater than 0.5. Currently, the data below represent the results of a single 80%-20% training-test split per label. Please keep in mind that these models are just a proof of concept meant to show that we can get some signal out of the labeled data. Consequently, they are pretty straightforward in their construction and lack any real optimization.
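The four conditions above can be sketched as a simple check. This is a hypothetical helper, not the project's actual code: the function name, arguments, and example numbers are illustrative, and the baseline is the accuracy of always guessing the majority answer, as described in condition (1).

```python
def is_not_horrible(accuracy, recall, precision, auc, y_true):
    """Return True when a model clears all four 'not horrible' checks.

    y_true: binary test-set labels (1 = "yes", 0 = "no"), used to compute
    the accuracy of always guessing "yes" or always guessing "no".
    """
    share_yes = sum(y_true) / len(y_true)
    # The better of the two constant guesses sets the accuracy baseline.
    baseline = max(share_yes, 1 - share_yes)
    return (
        accuracy > baseline   # (1) beats always guessing "yes"/"no"
        and recall > 0.5      # (2)
        and precision > 0.5   # (3)
        and auc > 0.5         # (4)
    )

# Example: a test set that is 70% "no", so the baseline accuracy is 0.7.
labels = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
print(is_not_horrible(0.85, 0.60, 0.66, 0.80, labels))  # True
print(is_not_horrible(0.65, 0.60, 0.66, 0.80, labels))  # False: loses to the baseline
```

Note that with an imbalanced label, condition (1) is the hardest to meet: a 90%-"no" label means the model must beat 0.9 accuracy before it counts as not horrible.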
Model Performance Summary