I am currently working with the Labelbox Python SDK and trying to streamline the process of identifying and exporting samples where my model seems unsure. I would like to use prediction confidence / metadata to tag and group these assets for further annotation cycles, but I am unsure of the best way to structure this in Labelbox via the SDK.
Has anyone here implemented a pipeline that programmatically filters out low-confidence predictions (perhaps below a set threshold) and then exports those as a new dataset or project? I am particularly interested in how the SDK handles metadata updates or tagging in bulk, and whether there are any best practices for this. For reference, I have already checked the Export Annotations guide in the Label Studio documentation.
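For context, here is the rough shape of what I have been sketching so far. It is only a sketch and makes several assumptions on my part: that the predictions were imported with a confidence value that actually shows up in the export payload, that the reserved "tag" metadata field is acceptable for flagging rows, and that a recent 3.x SDK with `project.export_v2()` is in use (call signatures and payload paths move around between releases):

```python
# Rough sketch, not tested end to end. Threshold, IDs, and the export parsing
# are placeholders / assumptions about how the predictions were uploaded.
import labelbox as lb
from labelbox.schema.data_row_metadata import DataRowMetadata, DataRowMetadataField

CONFIDENCE_THRESHOLD = 0.6  # placeholder threshold

client = lb.Client(api_key="YOUR_API_KEY")
project = client.get_project("SOURCE_PROJECT_ID")

# Export labels for the project (runs as an async task on Labelbox's side).
export_task = project.export_v2(params={"data_row_details": True, "label_details": True})
export_task.wait_till_done()
rows = export_task.result

# Collect data rows whose highest prediction confidence is below the threshold.
# The exact path to the confidence value depends on how the predictions were
# imported, so treat this parsing as illustrative.
low_conf_ids = []
for row in rows:
    labels = row.get("projects", {}).get(project.uid, {}).get("labels", [])
    confidences = [
        obj["confidence"]
        for label in labels
        for obj in label.get("annotations", {}).get("objects", [])
        if obj.get("confidence") is not None
    ]
    if confidences and max(confidences) < CONFIDENCE_THRESHOLD:
        low_conf_ids.append(row["data_row"]["id"])

# Bulk-tag the low-confidence data rows via metadata so they can be grouped later.
mdo = client.get_data_row_metadata_ontology()
tag_schema_id = mdo.reserved_by_name["tag"].uid
mdo.bulk_upsert([
    DataRowMetadata(
        data_row_id=dr_id,
        fields=[DataRowMetadataField(schema_id=tag_schema_id, value="low_confidence")],
    )
    for dr_id in low_conf_ids
])
```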
I also came across the idea of using model uncertainty techniques, which led me to explore related concepts such as Perplexity AI, as it relates to understanding ambiguous or complex model decisions.
Wondering if others here are applying similar ideas to guide labeling strategies in Labelbox?
If the confidence is stored as data row metadata, you can export it, retrieve the value, and send the matching data rows to a project if you want to. I do need to clarify, though, that predictions can only be imported to either a project or a model run.
So, for your pipeline to work, it would look something like this:
→ Import data rows with a set of metadata (I would first determine, based on that metadata, whether the data actually needs to be sent, to avoid uploading too much data).
→ Export and parse the confidence metadata
→ Create a project with those data rows and send the predictions as pre-annotations (see the sketch after this list).
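Here is a minimal sketch of the last two steps, assuming the low-confidence data row IDs have already been collected from the export and that an ontology already exists. The project name, ontology ID, class name, and the bounding-box geometry are all placeholders, and some call names (e.g. `setup_editor` vs. `connect_ontology`) vary by SDK version, so treat this as a starting point rather than a drop-in implementation:

```python
# Minimal sketch of "create a project + send predictions as pre-annotations".
# IDs, names, and the bounding-box geometry below are placeholders.
import uuid
import labelbox as lb
import labelbox.types as lb_types

client = lb.Client(api_key="YOUR_API_KEY")

# Data row IDs gathered from the earlier export/filter step (placeholders here).
low_conf_ids = ["DATA_ROW_ID_1", "DATA_ROW_ID_2"]

# 1. Create the review project and attach the existing ontology.
#    (Depending on SDK version this is setup_editor() or connect_ontology().)
project = client.create_project(
    name="low-confidence-review",
    media_type=lb.MediaType.Image,
)
project.setup_editor(client.get_ontology("ONTOLOGY_ID"))

# 2. Queue the filtered data rows into the project as a batch.
project.create_batch(
    name=f"low-confidence-batch-{uuid.uuid4().hex[:8]}",
    data_rows=low_conf_ids,
    priority=1,
)

# 3. Build pre-annotation labels. A single bounding box per data row is shown
#    purely for illustration; replace with the predictions parsed from your export.
labels = [
    lb_types.Label(
        data=lb_types.ImageData(uid=dr_id),  # or global_key=... if you key rows that way
        annotations=[
            lb_types.ObjectAnnotation(
                name="object_class",  # must match a feature name in the ontology
                value=lb_types.Rectangle(
                    start=lb_types.Point(x=10, y=10),
                    end=lb_types.Point(x=100, y=100),
                ),
            )
        ],
    )
    for dr_id in low_conf_ids
]

# 4. Import the labels as Model-Assisted Labeling (MAL) pre-annotations.
upload_job = lb.MALPredictionImport.create_from_objects(
    client=client,
    project_id=project.uid,
    name=f"mal-import-{uuid.uuid4().hex[:8]}",
    predictions=labels,
)
upload_job.wait_until_done()
print("Import errors:", upload_job.errors)
```

One more note: as far as I know, confidence scores are primarily preserved when predictions are uploaded to a model run rather than a project, so if you want to keep the scores attached to the predictions for later comparison, the model run route mentioned above may be worth considering.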