Multiple labelers annotating the same images

Perhaps I am missing something simple in the setup or can’t find it in the documentation, but I seem to be having issues with multiple labelers. We have a small group of undergraduates (around 5) helping us label underwater imagery. We meet every week to label together so people can ask questions etc., but during our first session everybody was sent to the same image after clicking ‘Start Labeling’. There then seemed to be issues with submission and reviewing, because I know that multiple people submitted, but I only see one person’s image to review. Is there a setting so that if people are labeling at the same time they aren’t sent to the same images? Hope this makes sense!

Hi @meredith.mcpherson, if you have not already, I would recommend checking the Quality settings for the project in question. You can learn more about our available quality assurance options in our documentation here.

Depending on your quality settings, it could be expected behavior that multiple labelers are served the same asset.

Can you please share with me the method you are using to review these labels? For example, are you selecting a data row from the Data Rows tab, using a Workflow step, or some alternative method?

Hi @Zeke, Thanks for the info. I am using the simple Workflow to review (shown in the screenshot below). I have currently reviewed everything, returned some labels to be reworked, and approved others. I am also using the benchmark method, so I have labeled about 4 images as benchmarks at this point. When reviewing I have also been using the issues and comments tool. Again, maybe I am missing something, but I feel that much of the documentation focuses on reviewing, while my confusion stems from the fact that all the labelers were sent the same image to label but then their labels didn’t all show up to be reviewed once submitted. Is there a way to randomize the image that gets sent to a labeler when they hit ‘Start Labeling’? Thanks!

@meredith.mcpherson

Once you open a data row to be reviewed, you can press the right-facing arrow to view all of the labels that have been made on the specific data row.

The labels are grouped by data row because of our move towards a data-row-centric paradigm: rather than approving or rejecting each individual label made on the same asset, the work performed on the data row is reviewed as a whole. In the case of benchmarks, since the benchmarked label should be the gold standard, that is the label that should be used for model training, while the additional labels are used to evaluate labeler performance.
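
If it helps to sanity-check this outside the UI, here is a minimal sketch using the Python SDK. It assumes the legacy `project.export_labels(download=True)` export and its `"DataRow ID"` / `"Created By"` field names, which may differ in newer export versions; the API key and project ID placeholders are hypothetical.

```python
import labelbox as lb
from collections import defaultdict

# Placeholders -- replace with your own credentials and project ID.
client = lb.Client(api_key="YOUR_API_KEY")
project = client.get_project("YOUR_PROJECT_ID")

# export_labels(download=True) returns one record per submitted label.
# Field names below follow the legacy export format and may differ
# in newer export versions.
labels = project.export_labels(download=True)

# Group the exported labels by data row so you can see every
# labeler's submission on the same asset.
labels_by_data_row = defaultdict(list)
for record in labels:
    labels_by_data_row[record["DataRow ID"]].append(record["Created By"])

# Data rows with more than one label are the ones where several
# labelers worked on the same asset (e.g. benchmarks or consensus).
for data_row_id, labelers in labels_by_data_row.items():
    if len(labelers) > 1:
        print(data_row_id, labelers)
```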

All labelers must label each benchmark, which is why all labelers received the same asset in their queue. The order in which benchmarks are received should be randomized, but if there are very few benchmarks then they may be near the front of each labeler’s queue. This is by design, as it is helpful to receive feedback about the team’s performance as soon as possible.
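
Along the same lines, here is a hedged sketch for checking benchmark coverage: given a list of benchmark data-row IDs (hypothetical here; you would fill them in from your project), it counts how many benchmarks each labeler has completed so far. It again assumes the legacy `export_labels` format and its `"DataRow ID"` / `"Created By"` fields.

```python
import labelbox as lb
from collections import defaultdict

client = lb.Client(api_key="YOUR_API_KEY")
project = client.get_project("YOUR_PROJECT_ID")

# Hypothetical: the data-row IDs you marked as benchmarks in the UI.
benchmark_data_row_ids = {"<benchmark-data-row-id-1>", "<benchmark-data-row-id-2>"}

# Count, per labeler, how many benchmark data rows they have labeled.
completed = defaultdict(set)
for record in project.export_labels(download=True):
    if record["DataRow ID"] in benchmark_data_row_ids:
        completed[record["Created By"]].add(record["DataRow ID"])

for labeler, done in completed.items():
    print(f"{labeler}: {len(done)}/{len(benchmark_data_row_ids)} benchmarks labeled")
```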

Hi @Zeke, Thanks for your response. I ended up troubleshooting the issue with one of my colleagues, and we figured out that the benchmark protocol was causing our confusion.