Importing image annotations "Running" forever

This is my code to upload annotations for a new labeling project, “Test2”, which already has images uploaded to it. The ontology is ‘canned goods’ and the class is ‘Argentina corned beef’.

I am using GroundingDINO to do some pre-labeling; when I plot the detections with supervision (Roboflow), the boxes do show up.
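
For context, the sanity-check plot is roughly the following — a minimal sketch assuming supervision’s BoxAnnotator API (annotator names and signatures have shifted between supervision releases):

import cv2
import supervision as sv

# Draw the GroundingDINO detections onto a copy of the image to confirm
# the boxes look sensible before converting them to Labelbox annotations
box_annotator = sv.BoxAnnotator()
annotated_image = box_annotator.annotate(scene=image.copy(), detections=detections)
cv2.imwrite("annotated_preview.jpg", annotated_image)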

import os

import cv2
import labelbox as lb
import labelbox.data.annotation_types as lb_types
from labelbox import Client, LabelingFrontend, Project
from labelbox.schema.ontology import OntologyBuilder, Tool, Classification
from labelbox.schema.annotation_import import MALPredictionImport
# Create a subfolder for the current SKU
sku = run_skus[0]
subfolder_path = os.path.join(IMAGE_DIR_PATH, sku)
os.makedirs(subfolder_path, exist_ok=True)

# List to store the Labelbox formatted data
labelbox_annotations = []

# Loop through each image, run predictions, and format for Labelbox
for img_path in image_paths:
    # Load the image
    image = cv2.imread(img_path)

    # Detect objects
    detections = grounding_dino_model.predict_with_classes(
        image=image,
        classes=CLASSES,
        box_threshold=BOX_TRESHOLD,
        text_threshold=TEXT_TRESHOLD
    )

    # Convert detections to Labelbox format
    for detection in detections:
        # Get the bounding box (xyxy) and confidence
        bbox = detection[0]
        confidence = detection[1]
        # Use just the filename as the global_key
        filename = os.path.basename(img_path)
        # Create an ObjectAnnotation for each detection
        bbox_annotation = lb_types.ObjectAnnotation(
            name=sku,
            value=lb_types.Rectangle(
                start=lb_types.Point(x=int(bbox[0]), y=int(bbox[1])),
                end=lb_types.Point(x=int(bbox[2]), y=int(bbox[3]))
            ),
            confidence=confidence
        )

        # Append the annotation to the list
        labelbox_annotations.append(
            lb_types.Label(data=lb_types.ImageData(global_key=filename), annotations=[bbox_annotation])
        )



# Get the project by its name (assuming you've already created it on Labelbox)
project_name = "Test2"
project = next(client.get_projects(where=Project.name == project_name))

# Upload the annotations to Labelbox
upload_job = lb.MALPredictionImport.create_from_objects(
    client=client,
    project_id=project.uid,
    name="Pre-labels2",
    predictions=labelbox_annotations
)
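
One way to see whether the job actually finishes (and to surface any per-row failures) is to block on it after submitting — a minimal sketch using the SDK’s status accessors:

# Block until the import completes, then inspect any failures
upload_job.wait_until_done()
print("Errors:", upload_job.errors)      # per-annotation failures, if any
print("Statuses:", upload_job.statuses)  # per-data-row import status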

labelbox_annotations[-3:] looks like this:

[Label(uid=None, data=ImageData(im_bytes=None,file_path=None,url=None,arr=None), annotations=[ObjectAnnotation(confidence=0.2374183088541031, name='Argentina corned beef', feature_schema_id=None, extra={}, value=Rectangle(extra={}, start=Point(extra={}, x=1977.0, y=977.0), end=Point(extra={}, x=2274.0, y=1450.0)), classifications=[])], extra={}),
Label(uid=None, data=ImageData(im_bytes=None,file_path=None,url=None,arr=None), annotations=[ObjectAnnotation(confidence=0.1572323888540268, name='Argentina corned beef', feature_schema_id=None, extra={}, value=Rectangle(extra={}, start=Point(extra={}, x=1974.0, y=758.0), end=Point(extra={}, x=2267.0, y=1027.0)), classifications=[])], extra={}),
Label(uid=None, data=ImageData(im_bytes=None,file_path=None,url=None,arr=None), annotations=[ObjectAnnotation(confidence=0.16410909593105316, name='Argentina corned beef', feature_schema_id=None, extra={}, value=Rectangle(extra={}, start=Point(extra={}, x=2005.0, y=523.0), end=Point(extra={}, x=2283.0, y=831.0)), classifications=[])], extra={})]

For reference, detections looks like:
Detections(xyxy=array([[1463.5146 , 479.05798 , 2296.8342 , 1460.3906 ], [1501.7441 , 995.58765 , 1987.998 , 1433.1663 ], [1468.9159 , 747.9865 , 1941.9889 , 1110.7316 ], [ 19.76123 , 54.78015 , 4135.61 , 2656.079 ], [ 22.023193, 43.441895, 4134.799 , 3043.9595 ], [ 13.544434, 2586.5571 , 4145.923 , 3043.7246 ], [ 12.916138, 2585.3928 , 2277.3843 , 3026.8557 ], [2280.704 , 2621.322 , 4144.2935 , 3034.153 ], [1731.3396 , 483.58582 , 2003.3721 , 917.5128 ], [1977.0553 , 977.9269 , 2274.1853 , 1450.3502 ], [1974.5818 , 758.5367 , 2267.9158 , 1027.406 ], [2005.4254 , 523.9257 , 2283.1843 , 831.9123 ]], dtype=float32), class_id=array([None, None, None, None, None, None, None, None, None, None, None, None], dtype=object), confidence=array([0.7441298 , 0.34596214, 0.29213998, 0.2530078 , 0.3171391 , 0.28089854, 0.20802084, 0.23411071, 0.17634948, 0.23741831, 0.15723239, 0.1641091 ], dtype=float32), tracker_id=None)
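
(Note: the tuple layout you get from iterating a Detections object has changed across supervision releases, so indexing detection[0] and detection[1] is version-dependent. A version-stable alternative is to zip the public arrays directly — a minimal sketch:

# xyxy and confidence are public numpy arrays on the Detections object,
# so this iteration does not depend on the tuple layout of __iter__
for bbox, confidence in zip(detections.xyxy, detections.confidence):
    x_min, y_min, x_max, y_max = bbox
    # ... build the ObjectAnnotation as above ...
)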

In the Labelbox UI, under Annotate - Test2 - Automation, my import has been stuck in the “Running” state for a very long time, even though this is a very small set of annotations and images. I have confirmed that the global_key (i.e. the image file id) is the same on Labelbox as in labelbox_annotations.

Hi @kevin.jeswani, can you please share the ID of the project to which you are attempting to upload these predictions?

clmexfdch09nm073abeqm6j4s

Thanks, @kevin.jeswani.

I see that you are including a confidence score in your ObjectAnnotation objects. Confidence scores should not be used when uploading MAL predictions to a project, as they only apply to model runs.

I suspect you may have been viewing (or combining) the docs for uploading image predictions. If you are looking to upload predictions to a project for human review and editing in the labeling queue, you should instead focus on importing image annotations.

I’d suggest attempting to upload your MAL job without the confidence parameter on each annotation and seeing if that allows the annotations to appear in the labeling queue as expected.
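
Concretely, the annotation construction would just drop the confidence kwarg — a minimal sketch of the loop body above (if you do want to keep confidence scores, upload them to a model run instead, e.g. via MEAPredictionImport):

# Same rectangle as before, but without the confidence kwarg,
# since confidence is not accepted on MAL project imports
bbox_annotation = lb_types.ObjectAnnotation(
    name=sku,
    value=lb_types.Rectangle(
        start=lb_types.Point(x=int(bbox[0]), y=int(bbox[1])),
        end=lb_types.Point(x=int(bbox[2]), y=int(bbox[3]))
    )
)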