How to: MAL Imports - Convert YOLOV8 Image Annotations to Labelbox Annotations

Hello Community,

This post will show a few methods for getting YOLO annotations from Ultralytics into Labelbox. To accomplish this, we will use the Ultralytics Python package along with Labelbox MAL image imports.

Please feel free to modify these scripts to your needs, but use them at your own risk.

Review this article on how to get Labelbox annotations to YOLO format.

Setup

Using Ultralytics, you can run a YOLOv8 model to pre-label images or even upload ground truths to Labelbox. A typical workflow begins by running your images through YOLOv8 to obtain results. Below is a script demonstrating this process.

import ultralytics

# Load a YOLOv8 segmentation model and run inference on an image URL
model = ultralytics.YOLO("yolov8n-seg.pt")
results: list[ultralytics.engine.results.Results] = model(["https://storage.googleapis.com/labelbox-datasets/image_sample_data/2560px-Kitano_Street_Kobe01s5s4110.jpeg"])

The particular YOLO model used here includes segment masks, but if you only need bounding boxes, you can switch to a different model to improve performance. See the Ultralytics documentation for more information.

Inside Labelbox, you must create a matching ontology and a project with the data rows you are trying to label. Setting that up is outside the scope of this post; review the Labelbox Developer Docs for more information.

Once you have your Labelbox ontology and project set up, for this tutorial, you must create a mapping from your YOLO class names to your Labelbox feature names. Below is a Python dictionary demonstrating an example of what this could look like:

# {<yolo_class_name>: <labelbox_feature_name>}
class_mapping = {
    "person": "Person",
    "bus": "Vehicle",
    "truck": "Vehicle"
}
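The conversion scripts below skip any detection whose YOLO class name is missing from this mapping. A minimal sketch of that lookup, using a hypothetical stand-in for model.names (YOLO's class-id-to-name dictionary; the ids and the "dog" class here are made up for illustration):

```python
# Hypothetical stand-in for model.names (class id -> class name):
yolo_names = {0: "person", 5: "bus", 7: "truck", 16: "dog"}

class_mapping = {
    "person": "Person",
    "bus": "Vehicle",
    "truck": "Vehicle"
}

# Classes without an entry in the mapping (e.g. "dog") are skipped on import.
detected_cls_ids = [0, 5, 16]
feature_names = [
    class_mapping[yolo_names[c]]
    for c in detected_cls_ids
    if yolo_names[c] in class_mapping
]
# feature_names -> ["Person", "Vehicle"]
```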

Import YOLO Annotations

Now that you are set up, you can use the scripts below to create Labelbox labels for bounding box, segment mask, or polygon annotations. You will need a list of global_keys in the same order as your results list so Labelbox can identify the corresponding data row for each label.
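Since model(...) returns results in the same order as its inputs, one simple convention (assumed here for illustration, not required by Labelbox) is to derive the global keys from the image sources themselves:

```python
# Images passed to the model, in order:
image_urls = [
    "https://storage.googleapis.com/labelbox-datasets/image_sample_data/2560px-Kitano_Street_Kobe01s5s4110.jpeg",
]

# Hypothetical convention: use the file name as the data row's global key.
# Any scheme works as long as global_keys[i] identifies the data row for results[i].
global_keys = [url.split("/")[-1] for url in image_urls]
```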

Bounding Box

from ultralytics.engine.model import Model
from ultralytics.engine.results import Results
import labelbox.types as lb_types

def get_yolo_bbox_annotation_predictions(yolo_results: list[Results], model: Model, ontology_mapping: dict[str, str], global_keys: list[str]) -> list[lb_types.Label]:
    """Convert YOLOv8 bounding box prediction results to Labelbox annotation format

    Args:
        yolo_results (list[ultralytics.engine.results.Results]): YOLOv8 prediction results.
        model (ultralytics.engine.model.Model): YOLOv8 model.
        ontology_mapping (dict[str, str]): Dictionary mapping YOLOv8 class names to Labelbox feature names.
        global_keys (list[str]): List of Labelbox global keys for the images. Must be in the same order as the results.
    Returns:
        list[lb_types.Label]
    """
    labels = []

    for i, result in enumerate(yolo_results):
        annotations = []
        for bbox in result.boxes:
            class_name = model.names[int(bbox.cls)]

            # Skip classes that are not in the ontology mapping
            if class_name not in ontology_mapping:
                continue

            # bbox.xyxy is a 1x4 tensor: [x_min, y_min, x_max, y_max]
            start_x, start_y, end_x, end_y = bbox.xyxy.tolist()[0]

            bbox_source = lb_types.ObjectAnnotation(
                name=ontology_mapping[class_name],
                value=lb_types.Rectangle(
                    start=lb_types.Point(x=start_x, y=start_y),
                    end=lb_types.Point(x=end_x, y=end_y)
                ))

            annotations.append(bbox_source)

        labels.append(
            lb_types.Label(data=lb_types.ImageData(global_key=global_keys[i]), annotations=annotations)
        )

    return labels

Segment Mask

import io

from PIL import Image
from ultralytics.engine.model import Model
from ultralytics.engine.results import Results
import labelbox.types as lb_types

def get_yolo_segment_annotation_predictions(yolo_results: list[Results], model: Model, ontology_map: dict[str, str], global_keys: list[str]) -> list[lb_types.Label]:
    """Convert YOLOv8 segment mask prediction results to Labelbox annotation format

    Args:
        yolo_results (list[ultralytics.engine.results.Results]): YOLOv8 prediction results.
        model (ultralytics.engine.model.Model): YOLOv8 model.
        ontology_map (dict[str, str]): Dictionary mapping YOLOv8 class names to Labelbox feature names.
        global_keys (list[str]): List of Labelbox global keys for the images. Must be in the same order as the results.
    Returns:
        list[lb_types.Label]
    """
    labels = []

    for x, result in enumerate(yolo_results):
        annotations = []
        for i, mask in enumerate(result.masks.data):
            class_name = model.names[int(result.boxes[i].cls)]

            # Skip classes that are not in the ontology mapping
            if class_name not in ontology_map:
                continue

            # Convert the binary mask to PNG bytes. The mask must be resized
            # to match the original image dimensions.
            mask = (mask.numpy() * 255).astype("uint8")
            img = Image.fromarray(mask, "L")
            img = img.resize((result.orig_shape[1], result.orig_shape[0]))
            img_byte_arr = io.BytesIO()
            img.save(img_byte_arr, format="PNG")
            encoded_image_bytes = img_byte_arr.getvalue()

            mask_data = lb_types.MaskData(im_bytes=encoded_image_bytes)
            mask_annotation = lb_types.ObjectAnnotation(
                name=ontology_map[class_name],
                value=lb_types.Mask(
                    mask=mask_data,
                    color=(255, 255, 255))
            )
            annotations.append(mask_annotation)
        labels.append(
            lb_types.Label(data=lb_types.ImageData(global_key=global_keys[x]), annotations=annotations)
        )

    return labels
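The mask-encoding step above can be tried in isolation with a small synthetic mask (only NumPy and Pillow needed; the 4x4 array and the 8x6 target size are made-up stand-ins for result.masks.data and the original image shape):

```python
import io

import numpy as np
from PIL import Image

# Synthetic 4x4 binary mask with a 2x2 "object" region
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1

# Scale to 0/255 grayscale, resize to the original image size (width, height),
# then encode as PNG bytes for lb_types.MaskData(im_bytes=...)
img = Image.fromarray(mask * 255, "L")
img = img.resize((8, 6))
buf = io.BytesIO()
img.save(buf, format="PNG")
png_bytes = buf.getvalue()
```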

Polygon

from ultralytics.engine.model import Model
from ultralytics.engine.results import Results
import labelbox.types as lb_types

def get_yolo_polygon_annotation_predictions(yolo_results: list[Results], model: Model, ontology_map: dict[str, str], global_keys: list[str]) -> list[lb_types.Label]:
    """Convert YOLOv8 model results to Labelbox polygon annotation format

    Args:
        yolo_results (list[ultralytics.engine.results.Results]): YOLOv8 prediction results.
        model (ultralytics.engine.model.Model): YOLOv8 model.
        ontology_map (dict[str, str]): Dictionary mapping YOLOv8 class names to Labelbox feature names.
        global_keys (list[str]): List of Labelbox global keys for the images. Must be in the same order as the results.
    Returns:
        list[lb_types.Label]
    """

    labels = []

    for x, result in enumerate(yolo_results):
        annotations = []
        for i, coordinates in enumerate(result.masks.xy):
            class_name = model.names[int(result.boxes[i].cls)]

            # Skip classes that are not in the ontology mapping
            if class_name not in ontology_map:
                continue

            polygon_annotation = lb_types.ObjectAnnotation(
                name=ontology_map[class_name],
                value=lb_types.Polygon(
                    points=[lb_types.Point(x=coordinate[0], y=coordinate[1]) for coordinate in coordinates]
                )
            )
            annotations.append(polygon_annotation)
        labels.append(
            lb_types.Label(data=lb_types.ImageData(global_key=global_keys[x]), annotations=annotations)
        )
    
    return labels

Conclusion

Once you have your list of labels, you can import them into Labelbox as either ground truths or pre-labels. Please reference the Labelbox import documentation for more information. Also, note that these functions apply only to image annotations, not video annotations.
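As a rough sketch of that final upload step (assuming the labelbox SDK; the API key and project id are placeholders you must fill in, and labels is the list produced by the functions above), a MAL pre-label import looks like this:

```python
import uuid

import labelbox as lb

client = lb.Client(api_key="YOUR_API_KEY")  # placeholder API key

# labels: list[lb_types.Label] produced by the conversion functions above
upload_job = lb.MALPredictionImport.create_from_objects(
    client=client,
    project_id="YOUR_PROJECT_ID",  # placeholder project id
    name=f"mal_import_{uuid.uuid4()}",
    predictions=labels,
)
upload_job.wait_until_done()
print(upload_job.errors)
```

For ground truths, swap in lb.LabelImport.create_from_objects with the same arguments.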
