---
license: agpl-3.0
language:
- en
base_model:
- Ultralytics/YOLO11
pipeline_tag: object-detection
tags:
- pytorch
- YOLO11
- Ultralytics
- Stereograph
- Stereographs
- Stereoscope
- Underwood & Underwood
library_name: ultralytics
---

VintageReality Model
====================

# Purpose

This model can be used to separate the left and right half images of a stereograph card. These cards were historically viewed with a [stereoscope](https://en.wikipedia.org/wiki/Stereoscope). Keep in mind that several similar systems existed in the 19th and 20th centuries; currently this model is only trained to work with the [Underwood & Underwood](https://en.wikipedia.org/wiki/Underwood_%26_Underwood) stereoscope (sometimes also known as the Holmes stereoscope). Other types aren't handled yet.

# Description and use case

Have a look at these two blog posts for an introduction and use cases:

* [AI segmentation for stereoscopic cards](https://christianmahnke.de/en/post/vintagereality-ai/)
* [Historical stereograms on modern hardware](https://christianmahnke.de/en/post/vintagereality-apple-spatial/)

# Basic usage

The model can be used with Python.

## Required dependencies

```bash
pip install ultralytics
```

## Usage

```python
from ultralytics import YOLO
from PIL import Image

pil_image = Image.open("stereograph.jpg")  # path to your stereograph scan
model = YOLO("model.pt")                   # path to the downloaded model weights

results = model(pil_image, verbose=False, retina_masks=True)

r = results[0]
if not r.boxes or len(r.boxes) != 2:
    raise Exception(f"YOLO detected {len(r.boxes) if r.boxes else 0} regions, expected exactly 2.")

boxes = r.boxes.xyxy.cpu().numpy()
# Sort the boxes by their left x coordinate to get the left/right order right
indexed_boxes = sorted(enumerate(boxes), key=lambda x: x[1][0])
left_idx, left_box = indexed_boxes[0]
right_idx, right_box = indexed_boxes[1]
masks = r.masks.data.cpu().numpy()
```

Afterwards you can use `left_box` and `right_box` to crop out the half images. You can also use the masks to extract only the half images, without the arch at the top.
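The extraction step can be sketched as follows. This is a minimal example, not part of the model card's official code: the image and the `left_box`/`right_box` values are hypothetical stand-ins for the YOLO output from the snippet above, and `crop_half`/`apply_mask` are illustrative helper names.

```python
import numpy as np
from PIL import Image

# Hypothetical stand-ins for the YOLO results from the snippet above:
# a 200x100 "stereograph" and two detected boxes in xyxy format.
pil_image = Image.new("RGB", (200, 100), "gray")
left_box = np.array([0.0, 0.0, 100.0, 100.0])
right_box = np.array([100.0, 0.0, 200.0, 100.0])

def crop_half(image, box):
    """Crop one half image using a bounding box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = (int(round(v)) for v in box)
    return image.crop((x1, y1, x2, y2))

def apply_mask(image, mask):
    """Black out everything outside a segmentation mask (e.g. the arch)."""
    mask_img = Image.fromarray((mask * 255).astype(np.uint8)).resize(image.size)
    black = Image.new("RGB", image.size)
    # Keep pixels from `image` where the mask is set, black elsewhere
    return Image.composite(image, black, mask_img)

left_img = crop_half(pil_image, left_box)
right_img = crop_half(pil_image, right_box)
```

With `retina_masks=True` each entry of `masks` already matches the input resolution, so `apply_mask(pil_image, masks[left_idx])` would keep only the segmented half image before cropping.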