Doods2 - API for detecting objects in images and video streams using TensorFlow

DOODS2 - Return of DOODS

Dedicated Open Object Detection Service - Yes, it's a backronym...

DOODS is a REST service that detects objects in images or video streams. It's designed to be very easy to use, to run as a container, and to be available remotely. It also supports GPU and EdgeTPU hardware acceleration.

DOODS2 is a rewrite of DOODS in Python. It supports the exact same REST API endpoints as the original DOODS, but it also includes endpoints for handling streaming feeds with real-time feedback as annotated video and websocket JSON detection data.

Why Python, you may ask... Well, lots of machine learning stuff is in Python, and there is pretty good support for object detection and helpers in Python. Maintaining the code in Go was a huge pain. DOODS2 is designed to have a compatible API specification with DOODS, as well as adding some additional features. It's my hope that in Python I might get a little more interest from the community in maintaining it and adding features.

DOODS2 drops support for gRPC as I doubt very much anyone used it anyways.

Quickstart in Docker

On your local machine run: docker run -it -p 8080:8080 snowzach/doods2:latest and open a browser to http://localhost:8080. Try uploading an image file or passing it an RTSP video stream. You can make changes to the specification by referencing the Detect Request payload.

Two detectors are included with the base image that you can try.

  • default - coco_ssd_mobilenet_v1_1.0_quant.tflite - A decent and fast TensorFlow Lite object detector.
  • tensorflow - faster_rcnn_inception_v2_coco_2018_01_28.pb - A much slower but more accurate TensorFlow object detector.
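
Once the container is up, you can also exercise the REST API directly. Below is a minimal Python sketch that posts a local image to the /detect endpoint; the requests package, the test.jpg filename, and the 50% threshold are assumptions for illustration, not part of DOODS itself.

import base64
import requests

# Read a local image and base64 encode it for the "data" field.
with open("test.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

# Build a Detect Request (see the DETECT REQUEST section below).
payload = {
    "id": "quickstart",
    "detector_name": "default",
    "data": image_b64,
    "detect": {"*": 50},  # match any label at >= 50% confidence
}

response = requests.post("http://localhost:8080/detect", json=payload)
for detection in response.json()["detections"]:
    print(detection["label"], detection["confidence"])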

Docker Images

DOODS2 is distributed in a docker image. There are several tags you can reference to pick the image you like.

  • armv7l - 32 bit ARM devices with a v7 CPU like the Raspberry Pi
  • aarch64 - 64 bit ARM devices with a v8 CPU (Raspberry Pi 64 bit, ODroid, etc)
  • noavx - 64 bit x86_64 architecture WITHOUT avx support. This should run on just about everything.
  • latest - The latest tag references the above three tags, so if you pick latest it should work on just about everything.

Additional, more optimized tags are available:

  • amd64 - 64 bit x86_64 architecture WITH avx support. This should be faster than the noavx image on newer processors.
  • gpu - 64 bit x86_64 architecture with NVIDIA GPU support. See the section below on how to run this.

REST API

The REST API has several endpoints for detecting objects in images as well as streams. Details of the payloads and endpoints are below.

DETECT REQUEST

Every request to DOODS involves the Detect Request JSON object, which looks like this:

{
  // This ID field is user-definable; the response will include the same ID that was passed in.
  "id": "whatever",
  // This is the name of the detector to be used for this detection. If not specified, 'default' will be used if it exists.
  "detector_name": "default",
  // Data is either base64 encoded image data for a single image, or it may be a URL to an image.
  // For a stream it's expected to be a URL that can be read by ffmpeg. `rtsp://..` or `http://..` is typical.
  // You can also provide a video URL to detect a single image. It will grab a single frame from the source to 
  // run detection on. (It may be kinda slow though)
  "data": "b64 or url",
  // The image option determines, for API calls that return an image, what format the image should be.
  // Supported options currently are "jpeg" and "png"
  "image": "jpeg",
  // The throttle option determines, for streaming API calls only, how often it should return results
  // in seconds. For example, 5 means return 1 result about every 5 seconds. A value of 0 indicates
  // it should return results as fast as it can. 
  "throttle": 5,
  // This is an optional list of strings of preprocessing functions to apply to the images. Each supported
  // option is listed below.
  "preprocess": [
    // grayscale = changes the image to grayscale before processing  
    "grayscale"
  ],
  // detect is an object of label->confidence matches that will be applied to the entire image
  // The "*" for the label name indicates it can match any label. If a specific label is listed
  // then it cannot be matched by the wildcard. This example matches any label at 50% confidence
  // and only car if its confidence is over 60%.
  "detect": {
    "*": 50,
    "car": 60
  },
  // The regions array is a list of specific matches for areas within your image/video stream.
  // When processing rules, the first detection rule to match wins. 
  "regions": [
    // The top,left,bottom and right are float values from 0..1 that indicate a bounding box to look
    // for object detections. They are based on the image size. A 400x300 image with a bounding box
    // as shown in the example below would look for objects inside the box of
    // {top: 300*0.1 = 30, left: 400*0.1 = 40, bottom: 300*0.9 = 270, right: 400*0.9 = 360}
    // The detect field is exactly how it's described above in the global detection option for you
    // to specify the labels that you wish to match. 
    // The covers boolean indicates if this region must completely cover the detected object or 
    // not. If covers = true, then the detected object must be completely inside of this region to match.
    // If covers = false, then if any part of this object is inside of this region, it will match.
    {"top": 0.1, "left": 0.1, "bottom": 0.9, "right": 0.9, "detect": {"*":50}, "covers": false}
    ...
  ]
}  
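
To make the covers semantics concrete, here is a small Python sketch of the region matching rule described above. It is an illustration of the documented behavior only, not the actual DOODS implementation.

def region_matches(region, det):
    """Check a detection bounding box against a region per the rules above."""
    # Both region and det use top/left/bottom/right floats from 0..1.
    # covers = true: the detected object must lie completely inside the region.
    inside = (det["top"] >= region["top"] and det["left"] >= region["left"]
              and det["bottom"] <= region["bottom"] and det["right"] <= region["right"])
    # covers = false: any overlap between the object and the region matches.
    overlaps = (det["left"] < region["right"] and det["right"] > region["left"]
                and det["top"] < region["bottom"] and det["bottom"] > region["top"])
    return inside if region["covers"] else overlaps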

DETECT RESPONSE

{
  // This is the ID passed in the detect request.
  "id": "whatever",
  // If you specified a value for image in the detect request, this is the base64 encoded image
  // returned from the detection. It has all of the detections' bounding boxes marked with label and 
  // confidence.
  "image": "b64 data...",
  // Detections is a list of all of the objects detected in the image after being passed through 
  // all of the filters. The top,left,bottom and right values are floats from 0..1 describing a 
  // bounding box of the object in the image. The label of the object and the confidence from 0..100
  // are also provided.
  "detections": [
    {"top": 0.1, "left": 0.1, "bottom": 0.9, "right": 0.9, "label": "dog", "confidence": 90.0 }
    ...
  ],
  // Any errors detected in the processing
  "error": "any errors"
}

API Endpoints

GET - /

If you just browse to the DOODS2 endpoint you will be presented with a very simple UI for testing and working with DOODS. It allows you to upload an image and test settings, as well as kick off streaming video processes to monitor results in real time as you tune your settings.

GET - /detectors

This API call returns the configured detectors on DOODS and includes the list of labels that each detector supports.

POST - /detect

This API call takes a JSON Detect Request in the POST body and returns a JSON Detect Response with the detections.

POST /image

This API call takes a JSON Detect Request in the POST body and returns an image, as specified in the image property of the Detect Request, with all of the bounding boxes drawn with labels and confidence. It is the equivalent of calling the POST /detect endpoint but returning only the annotated image rather than all of the detection information.
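
A short Python sketch of fetching the annotated image (it assumes the endpoint returns the raw image bytes in the response body; file names are placeholders):

import base64
import requests

with open("test.jpg", "rb") as f:
    payload = {
        "detector_name": "default",
        "data": base64.b64encode(f.read()).decode("ascii"),
        "image": "jpeg",  # ask for an annotated JPEG back
        "detect": {"*": 50},
    }

# Assumption: POST /image returns raw image bytes rather than JSON.
resp = requests.post("http://localhost:8080/image", json=payload)
with open("annotated.jpg", "wb") as out:
    out.write(resp.content)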

GET /stream?detect_request=

This endpoint takes a URL encoded JSON Detect Request document as the detect_request query parameter. It expects the data value of the Detect Request to be a streaming video URL (like rtsp://...). It will connect to the stream, continuously process detections as fast as it can (or as dictated by the throttle parameter), and return an MJPEG video stream suitable for viewing in most browsers. It's useful for testing.
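
A short Python sketch of building that URL (the RTSP address is a placeholder):

import json
import urllib.parse

detect_request = {
    "detector_name": "default",
    "data": "rtsp://example.local/stream",  # placeholder stream URL
    "image": "jpeg",
    "throttle": 5,
    "detect": {"*": 50},
}

# URL-encode the JSON document into the detect_request query parameter.
url = ("http://localhost:8080/stream?detect_request="
       + urllib.parse.quote(json.dumps(detect_request)))
print(url)  # open this URL in a browser to view the MJPEG stream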

WS /stream

This is a websocket endpoint that, once connected, expects you to send a single JSON Detect Request. The data parameter of the request is expected to be a streaming video URL (like rtsp://...). It will connect to the stream and continuously process detections as fast as it can (or as dictated by the throttle parameter), returning a JSON Detect Response every time it processes a frame. Additionally, if you specified a value for the image parameter, it will include the base64 encoded image in the image field of the response with the bounding boxes, labels and confidence marked.
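
A minimal Python client sketch, assuming the third-party websockets package (any websocket client should work; the stream URL is a placeholder):

import asyncio
import json

import websockets  # pip install websockets

async def watch():
    async with websockets.connect("ws://localhost:8080/stream") as ws:
        # Send a single Detect Request; data must be a streaming video URL.
        await ws.send(json.dumps({
            "detector_name": "default",
            "data": "rtsp://example.local/stream",  # placeholder
            "throttle": 5,
            "detect": {"*": 50},
        }))
        # Each processed frame produces one JSON Detect Response.
        async for message in ws:
            response = json.loads(message)
            print(response.get("detections"))

asyncio.run(watch())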

Configuration Format

DOODS requires a YAML configuration file to operate. There is a built-in default configuration in the docker image that references the built-in default models. By default the configuration file looks like this:

server:
  host: 0.0.0.0
  port: 8080
logging:
  level: info
doods:
  detectors:
    - name: default
      type: tflite
      modelFile: models/coco_ssd_mobilenet_v1_1.0_quant.tflite
      labelFile: models/coco_labels0.txt
      hwAccel: false
    - name: tensorflow
      type: tensorflow
      modelFile: models/faster_rcnn_inception_v2_coco_2018_01_28.pb
      labelFile: models/coco_labels1.txt
      hwAccel: false

You can pass a new configuration file using the CONFIG_FILE environment variable. There is also a --config (or -c) command line option for passing a configuration file. The environment variable takes precedence if set. Otherwise, DOODS defaults to looking for a config.yaml in the current directory.
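
For example, a typical invocation mounting a custom config into the container (the in-container path /opt/doods/config.yaml is an assumption, not something the README specifies):

docker run -it -p 8080:8080 \
  -v $(pwd)/config.yaml:/opt/doods/config.yaml \
  -e CONFIG_FILE=/opt/doods/config.yaml \
  snowzach/doods2:latest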

Configuration options can also be set with environment variables, using the option path in all caps separated by underscores. For example, you can set SERVER_HOST=127.0.0.1 to only listen on localhost. The doods detectors section, however, must be set with a config file.

EdgeTPU

DOODS2 supports the EdgeTPU hardware accelerator. This requires TensorFlow Lite edgetpu.tflite models. In the config you need to set the hwAccel boolean to true for the model, and DOODS will load the EdgeTPU driver and model. You will also need to pass the EdgeTPU device through to DOODS. This is typically done with the docker flag --device=/dev/bus/usb or in a docker-compose file with:

version: '3.2'
services:
  doods:
    image: snowzach/doods2:latest
    ports:
      - "8080:8080"
    devices:
      - /dev/bus/usb

You can download EdgeTPU models here: https://coral.ai/models/object-detection

GPU Support

NVIDIA GPU support is available in the :gpu tagged image. This requires that the host machine has NVIDIA CUDA installed, as well as Docker 19.03 or above with the nvidia-container-toolkit.

See this page on how to install the CUDA drivers and the container toolkit: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html

You need to tell docker to pass the GPU through for DOODS to use. With the docker run command, add --gpus all. You can also do this with docker-compose by adding this to the DOODS container specification:

version: '3.2'
services:
  doods:
    image: snowzach/doods2:gpu
    ports:
      - "8080:8080"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

Supported Detectors / Models

There are currently three supported detector formats:

  • tflite - TensorFlow Lite .tflite models
  • tensorflow - Original TensorFlow frozen graph models (usually ending in .pb)
  • tensorflow2 - TensorFlow 2 object detection models (points to a directory containing the model)

TensorFlow Lite - .tflite

Just download the file, make it available to DOODS, set the modelFile config option to the path of the .tflite model file, and set the labelFile option to the path of the text labels file. You can also set hwAccel to true if it's an edgetpu.tflite model and you actually have an EdgeTPU connected.
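
For example, a detector entry along these lines (the file paths are placeholders):

doods:
  detectors:
    - name: mymodel
      type: tflite
      modelFile: models/my_model_edgetpu.tflite  # placeholder path
      labelFile: models/my_labels.txt            # placeholder path
      hwAccel: true  # only for edgetpu.tflite models with an EdgeTPU attached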

TensorFlow 1 - .pb

These are protobuf files that end in .pb. You just need to download them, usually extract the .tgz archive to get the .pb file, and provide it to DOODS along with the labels file.

There's a good list of these here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md

TensorFlow 2 - Model Directory

TensorFlow 2 models are a little more complex and consist of a directory of files that you must pass into DOODS. Download the archive and extract it to its own directory. For the modelFile option, pass the path to the directory. You will also need to download the labels file and provide its path in the labelFile option.
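
For example (the model directory and label file paths are placeholders):

doods:
  detectors:
    - name: tf2model
      type: tensorflow2
      modelFile: models/my_tf2_model_dir  # path to the extracted model directory
      labelFile: models/my_labels.txt     # placeholder path
      hwAccel: false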

This is a model zoo for Tensorflow 2 models: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md

I think they are better, but they are generally much slower and probably require a GPU to run in a reasonable amount of time.

Comments
  • Deepstack model error

    Love the new rewrite! I'd been using v1 for some time and just moved over to v2 container recently. Great stuff!

    The recent addition of the deepstack models does not seem to be working. I see in issue #28, after it was closed, a report that trying to get a deepstack model working resulted in an error. I'm getting the same error with two different .pt models.

    Error:

    2022-03-12 21:30:35,452 - uvicorn.access - INFO - 172.17.0.1:36888 - "POST /detect HTTP/1.1" 500
    2022-03-12 21:30:35,452 - uvicorn.error - ERROR - Exception in ASGI application
    Traceback (most recent call last):
      File "/usr/local/lib/python3.8/dist-packages/uvicorn/protocols/http/h11_impl.py", line 373, in run_asgi
        result = await app(self.scope, self.receive, self.send)
      File "/usr/local/lib/python3.8/dist-packages/uvicorn/middleware/proxy_headers.py", line 75, in __call__
        return await self.app(scope, receive, send)
      File "/usr/local/lib/python3.8/dist-packages/fastapi/applications.py", line 208, in __call__
        await super().__call__(scope, receive, send)
      File "/usr/local/lib/python3.8/dist-packages/starlette/applications.py", line 112, in __call__
        await self.middleware_stack(scope, receive, send)
      File "/usr/local/lib/python3.8/dist-packages/starlette/middleware/errors.py", line 181, in __call__
        raise exc
      File "/usr/local/lib/python3.8/dist-packages/starlette/middleware/errors.py", line 159, in __call__
        await self.app(scope, receive, _send)
      File "/usr/local/lib/python3.8/dist-packages/starlette/middleware/base.py", line 57, in __call__
        task_group.cancel_scope.cancel()
      File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 574, in __aexit__
        raise exceptions[0]
      File "/usr/local/lib/python3.8/dist-packages/starlette/middleware/base.py", line 30, in coro
        await self.app(scope, request.receive, send_stream.send)
      File "/usr/local/lib/python3.8/dist-packages/starlette/exceptions.py", line 82, in __call__
        raise exc
      File "/usr/local/lib/python3.8/dist-packages/starlette/exceptions.py", line 71, in __call__
        await self.app(scope, receive, sender)
      File "/usr/local/lib/python3.8/dist-packages/starlette/routing.py", line 656, in __call__
        await route.handle(scope, receive, send)
      File "/usr/local/lib/python3.8/dist-packages/starlette/routing.py", line 259, in handle
        await self.app(scope, receive, send)
      File "/usr/local/lib/python3.8/dist-packages/starlette/routing.py", line 61, in app
        response = await func(request)
      File "/usr/local/lib/python3.8/dist-packages/fastapi/routing.py", line 226, in app
        raw_response = await run_endpoint_function(
      File "/usr/local/lib/python3.8/dist-packages/fastapi/routing.py", line 159, in run_endpoint_function
        return await dependant.call(**values)
      File "/opt/doods/api.py", line 40, in detect
        detect_response = self.doods.detect(detect_request)
      File "/opt/doods/doods.py", line 127, in detect
        ret = detector.detect(image)
      File "/opt/doods/detectors/deepstack.py", line 45, in detect
        results = self.torch_model(image, augment=False)[0]
      File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/root/.cache/torch/hub/ultralytics_yolov5_master/models/yolo.py", line 126, in forward
        return self._forward_once(x, profile, visualize)  # single-scale inference, train
      File "/root/.cache/torch/hub/ultralytics_yolov5_master/models/yolo.py", line 149, in _forward_once
        x = m(x)  # run
      File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/root/.cache/torch/hub/ultralytics_yolov5_master/models/yolo.py", line 61, in forward
        if self.inplace:
      File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1177, in __getattr__
        raise AttributeError("'{}' object has no attribute '{}'".format(
    AttributeError: 'Detect' object has no attribute 'inplace'
    

    Both of the models work with the deepstack service itself, and doods properly reports the labels embedded in the models

        {
          "name": "dark",
          "type": "deepstack",
          "model": "external/models/dark.pt",
          "labels": [
            "Bicycle",
            "Boat",
            "Bottle",
            "Bus",
            "Car",
            "Cat",
            "Chair",
            "Cup",
            "Dog",
            "Motorbike",
            "People",
            "Table"
          ],
          "width": 0,
          "height": 0
        },
        {
          "name": "combined",
          "type": "deepstack",
          "model": "external/models/combined.pt",
          "labels": [
            "person",
            "bicycle",
            "car",
            "motorcycle",
            "bus",
            "truck",
            "bird",
            "cat",
            "dog",
            "horse",
            "sheep",
            "cow",
            "bear",
            "deer",
            "rabbit",
            "raccoon",
            "fox",
            "coyote",
            "possum",
            "skunk",
            "squirrel",
            "pig",
            ""
          ],
          "width": 0,
          "height": 0
        }
    

    so the models seem intact.

    My config for the two models is minimal:

        - name: dark
          type: deepstack
          modelFile: external/models/dark.pt
        - name: combined
          type: deepstack
          modelFile: external/models/combined.pt
    

    Do I need more than that, or is there some issue with the deepstack integration at the moment?

    opened by JustinGeorgi 42
  • Bird-Model with index-error. How to resolve?

    Great happiness for the new doods2!! I hoped that it would solve my problem with the bird model.

    https://tfhub.dev/google/lite-model/aiy/vision/classifier/birds_V1/3

    Just as I had spent the whole day getting it running successfully in a bad Python script, the new doods2 arrived. Great. 🌹

    doods2 loads the model and the label file (CSV format with commas, but the same error occurs without commas), but throws an index error on detections.

    Some googling said: convert the model to a "newer" TF Lite version?

    INFO:     Started server process [1]
    INFO:     Waiting for application startup.
    INFO:     Application startup complete.
    INFO:     Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
    INFO:     192.168.14.135:56005 - "GET / HTTP/1.1" 304 Not Modified
    INFO:     192.168.14.135:56005 - "POST /image HTTP/1.1" 500 Internal Server Error
    ERROR:    Exception in ASGI application
    Traceback (most recent call last):
      File "/usr/local/lib/python3.8/dist-packages/uvicorn/protocols/http/h11_impl.py", line 373, in run_asgi
        result = await app(self.scope, self.receive, self.send)
      File "/usr/local/lib/python3.8/dist-packages/uvicorn/middleware/proxy_headers.py", line 75, in __call__
        return await self.app(scope, receive, send)
      File "/usr/local/lib/python3.8/dist-packages/fastapi/applications.py", line 208, in __call__
        await super().__call__(scope, receive, send)
      File "/usr/local/lib/python3.8/dist-packages/starlette/applications.py", line 112, in __call__
        await self.middleware_stack(scope, receive, send)
      File "/usr/local/lib/python3.8/dist-packages/starlette/middleware/errors.py", line 181, in __call__
        raise exc
      File "/usr/local/lib/python3.8/dist-packages/starlette/middleware/errors.py", line 159, in __call__
        await self.app(scope, receive, _send)
      File "/usr/local/lib/python3.8/dist-packages/starlette/exceptions.py", line 82, in __call__
        raise exc
      File "/usr/local/lib/python3.8/dist-packages/starlette/exceptions.py", line 71, in __call__
        await self.app(scope, receive, sender)
      File "/usr/local/lib/python3.8/dist-packages/starlette/routing.py", line 656, in __call__
        await route.handle(scope, receive, send)
      File "/usr/local/lib/python3.8/dist-packages/starlette/routing.py", line 259, in handle
        await self.app(scope, receive, send)
      File "/usr/local/lib/python3.8/dist-packages/starlette/routing.py", line 61, in app
        response = await func(request)
      File "/usr/local/lib/python3.8/dist-packages/fastapi/routing.py", line 226, in app
        raw_response = await run_endpoint_function(
      File "/usr/local/lib/python3.8/dist-packages/fastapi/routing.py", line 159, in run_endpoint_function
        return await dependant.call(**values)
      File "/opt/doods/api.py", line 38, in image
        detect_response = self.doods.detect(detect_request)
      File "/opt/doods/doods.py", line 108, in detect
        ret = detector.detect(image)
      File "/opt/doods/detectors/tflite.py", line 64, in detect
        classes = self.interpreter.get_tensor(self.output_details[1]['index'])[0] # Class index of detected objects
    IndexError: list index out of range
    

    Both label files don't work.

    opened by ozett 38
  • Doods server causes RPi to freeze

    From time to time the Doods server (running in docker) will use up all memory on my RPi, which is also running my hass server, and subsequently freeze the whole machine. Only a reboot will unfreeze it. Any suggestions on how to track memory usage of the Doods server and restart it if it uses up too much memory? Hass logs from the most recent freeze are below:

    Traceback (most recent call last):
      File "/srv/homeassistant/lib/python3.8/site-packages/homeassistant/helpers/entity.py", line 487, in async_update_ha_state
        await self.async_device_update()
      File "/srv/homeassistant/lib/python3.8/site-packages/homeassistant/helpers/entity.py", line 691, in async_device_update
        raise exc
      File "/srv/homeassistant/lib/python3.8/site-packages/homeassistant/components/image_processing/__init__.py", line 138, in async_update
        await self.async_process_image(image.content)
      File "/srv/homeassistant/lib/python3.8/site-packages/homeassistant/components/image_processing/__init__.py", line 118, in async_process_image
        return await self.hass.async_add_executor_job(self.process_image, image)
      File "/usr/lib/python3.8/concurrent/futures/thread.py", line 57, in run
        result = self.fn(*self.args, **self.kwargs)
      File "/srv/homeassistant/lib/python3.8/site-packages/homeassistant/components/doods/image_processing.py", line 295, in process_image
        response = self._doods.detect(
      File "/srv/homeassistant/lib/python3.8/site-packages/pydoods/__init__.py", line 29, in detect
        response = requests.post(
      File "/srv/homeassistant/lib/python3.8/site-packages/requests/api.py", line 117, in post
        return request('post', url, data=data, json=json, **kwargs)
      File "/srv/homeassistant/lib/python3.8/site-packages/requests/api.py", line 61, in request
        return session.request(method=method, url=url, **kwargs)
      File "/srv/homeassistant/lib/python3.8/site-packages/requests/sessions.py", line 542, in request
        resp = self.send(prep, **send_kwargs)
      File "/srv/homeassistant/lib/python3.8/site-packages/requests/sessions.py", line 655, in send
        r = adapter.send(request, **kwargs)
      File "/srv/homeassistant/lib/python3.8/site-packages/requests/adapters.py", line 516, in send
        raise ConnectionError(e, request=request)
    requests.exceptions.ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=8080): Max retries exceeded with url: /detect (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x5eb0b028>: Failed to establish a new connection: [Errno 24] Too many open files'))

    opened by duffyd 20
  • [FR]: Openlogo-model: logo-detection with pytorch

    Maybe this is a pytorch model like yolov5 and can be integrated? It is not on the torch hub, so this must be done manually?

    Modell: https://github.com/OlafenwaMoses/DeepStack_OpenLogo/releases/download/v1/openlogo.pt

    Site: https://github.com/OlafenwaMoses/DeepStack_OpenLogo


    opened by ozett 15
  • MQTT Integration

    This project is exactly what I'm looking for to work on some home automation projects (Frigate is pretty heavy-weight, and I'm not looking for security video recording, just automation based on location, posture, etc.). The tools I'm using so far often support MQTT as a way to trigger events, so one way or another I'd like to pipe the results out to MQTT.

    It wouldn't be too difficult to write a separate service that just connects to the /stream WS and forwards all the returned JSON to MQTT, but it would mean a separate service to launch, monitor, etc. Alternatively, I could try to add some MQTT info to the DetectRequest (sorry if I'm getting some of these names wrong, on a phone so it's hard to tab around) so that DOODS could just optionally send the JSON itself (at the expense of having to include MQTT libraries).

    This definitely falls outside the scope of "REST service", so this isn't really a feature request. I just figured before I fork and start playing around with things I would ask and see if you had any thoughts on which approach would work best since you know the DOODS code.

    opened by Sammy1Am 15
  • Installation with Coral Edge USB (report with issues)

    Hello,

    Did the installation and ran the docker image with the additional » --device /dev/bus/usb « flag. You have a typo in your instructions about this parameter.

    Then I changed config.yaml in my container and copied model and label file for Edge Coral.

    doods:
      detectors:
        - name: default
          type: tflite
          modelFile: models/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
          labelFile: models/coco_labels_0.txt
          hwAccel: true

    1. http://192.168.xxx.xxx:8080/detectors shows:

    {"detectors":[{"name":"default","type":"tensorflow2","model":"models/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite","labels":["person","bicycle","car","motorcycle","airplane","bus","train","truck","boat","traffic light","fire hydrant","n/a","stop sign","parking meter","bench","bird","cat","dog","horse","sheep","cow","elephant","bear","zebra","giraffe","n/a","backpack","umbrella","n/a","n/a","handbag","tie","suitcase","frisbee","skis","snowboard","sports ball","kite","baseball bat","baseball glove","skateboard","surfboard","tennis racket","bottle","n/a","wine glass","cup","fork","knife","spoon","bowl","banana","apple","sandwich","orange","broccoli","carrot","hot dog","pizza","donut","cake","chair","couch","potted plant","bed","n/a","dining table","n/a","n/a","toilet","n/a","tv","laptop","mouse","remote","keyboard","cell phone","microwave","oven","toaster","sink","refrigerator","n/a","book","clock","vase","scissors","teddy bear","hair drier","toothbrush"],"width":0,"height":0}]}

    2. Additional check upon startup of the container

    $ sudo docker start 6710fb8a10e2
    6710fb8a10e2
    $ sudo docker attach 6710fb8a10e2
    /usr/local/lib/python3.8/dist-packages/tensorflow_io/python/ops/__init__.py:98: UserWarning: unable to load libtensorflow_io_plugins.so: unable to open file: libtensorflow_io_plugins.so, from paths: ['/usr/local/lib/python3.8/dist-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so']
    caused by: ["[Errno 2] The file to load file system plugin from does not exist.: '/usr/local/lib/python3.8/dist-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so'"]
      warnings.warn(f"unable to load libtensorflow_io_plugins.so: {e}")
    /usr/local/lib/python3.8/dist-packages/tensorflow_io/python/ops/__init__.py:104: UserWarning: file system plugins are not loaded: unable to open file: libtensorflow_io.so, from paths: ['/usr/local/lib/python3.8/dist-packages/tensorflow_io/python/ops/libtensorflow_io.so']
    caused by: ['/usr/local/lib/python3.8/dist-packages/tensorflow_io/python/ops/libtensorflow_io.so: cannot open shared object file: No such file or directory']
      warnings.warn(f"file system plugins are not loaded: {e}")
    Failed to load delegate from libedgetpu.so.1.0

    INFO: Started server process [1]
    INFO: Waiting for application startup.
    INFO: Application startup complete.
    INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)

    Question: Is this container related, or related to the Raspberry Pi Bullseye OS?

    opened by mmatesic01 13
  • ERROR - Could not create detector pytorch/pytorch: No module named 'IPython'

    Tried Doods2 by running,

    docker run -it -p 8080:8080 snowzach/doods2:latest

    The container ran successfully, but of the 3 models, only 'default' and 'tensorflow' show in the dropdown at http://localhost:8080/. The pytorch (Yolo5s) detector is missing. The following error is seen in the logs:

    ERROR - Could not create detector pytorch/pytorch: No module named 'IPython'

    opened by SuperJuke 10
  • Coral USB error: Could not load EdgeTPU detector

    Hi there,

    I need help with my doods2 setup. I'm trying to start doods2 with my coral usb device and it's failing to create the tflite detector.

    $ docker run -it -p 7070:8080 --privileged --device /dev/bus/usb -v /path_example/doods2/config.yaml:/opt/doods/config.yaml -v /path_example/doods2/models:/opt/doods/models snowzach/doods2:latest
    2022-07-18 00:37:58,763 - doods.doods - ERROR - Could not create detector tflite/default: Could not load EdgeTPU detector
    2022-07-18 00:37:59,030 - uvicorn.error - INFO - Started server process [1]
    2022-07-18 00:37:59,031 - uvicorn.error - INFO - Waiting for application startup.
    2022-07-18 00:37:59,033 - uvicorn.error - INFO - Application startup complete.
    2022-07-18 00:37:59,034 - uvicorn.error - INFO - Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
    

    config.yaml

    server:
      host: 0.0.0.0
      port: 8080
      metrics: true
    logging:
      level: all
    doods:
      log: detections
      boxes:
        enabled: True
        boxColor: [0, 255, 0]
        boxThickness: 1
        fontScale: 1.2
        fontColor: [0, 255, 0]
        fontThickness: 1
      regions:
        enabled: True
        boxColor: [255, 0, 255]
        boxThickness: 1
        fontScale: 1.2
        fontColor: [255, 0, 255]
        fontThickness: 1
      globals:
        enabled: True
        fontScale: 1.2
        fontColor: [255, 255, 0]
        fontThickness: 1
      detectors:
        - name: default
          type: tflite
          modelFile: /opt/doods/models/ssdlite_mobiledet_coco_qat_postprocess_edgetpu.tflite
          labelFile: /opt/doods/models/coco_labels.txt
          hwAccel: true
    
    $ lsusb
    Bus 002 Device 011: ID 18d1:9302 Google Inc. 
    

    I'm running on a Raspberry Pi 4B. I've tried connecting the Coral device straight into the Pi's USB 2.0 and USB 3.0 slots, as well as through a powered USB 3.0 hub.

    When I attach to the shell of the container and check for libedgetpu.so.1.0, I can see it is present.

    # ls /usr/lib/aarch64-linux-gnu/libedgetpu.so.1.0 -ltr
    -rw-r--r-- 1 root root 1135880 Jul  9  2021 /usr/lib/aarch64-linux-gnu/libedgetpu.so.1.0
    
    opened by TrungLam 10
  • Tensorflow1 models support

    First of all thank you for moving Doods to Python!! I fully agree with you that Python is a good and well known environment for machine learning and computer vision tasks.

    I'm experiencing some problems with a MobilenetV2 model that is derived from the TF1 model provided by Google Coral and trained with my dataset.

    It seems there is a problem in managing the output: the log reports an error on line 90 in tflite.py (omitting the full traceback):

    2022-01-07 19:03:35,450 - uvicorn.access - INFO - 192.168.1.100:53992 - "POST /detect HTTP/1.1" 500
    2022-01-07 19:03:35,450 - uvicorn.error - ERROR - Exception in ASGI application
    Traceback (most recent call last):
      File "/usr/local/lib/python3.8/dist-packages/uvicorn/protocols/http/h11_impl.py", line 373, in run_asgi
       result = await app(self.scope, self.receive, self.send)
    …
    …
      File "/opt/doods/detectors/tflite.py", line 90, in detect
        if int(classes[i]) in self.labels:
    ValueError: cannot convert float NaN to integer
    

    In my opinion this could be related to the fact that my model is TF1 and not TF2, as I can normally manage the output by treating the result as tensors, with code similar to this:

    import numpy as np

    def output_tensor(interpreter, i):
      """Returns output tensor view."""
      tensor = interpreter.tensor(interpreter.get_output_details()[i]['index'])()
      return np.squeeze(tensor)

    def get_output(interpreter, score_threshold, image_scale=1.0):
      """Returns list of detected objects."""
      boxes = output_tensor(interpreter, 0)
      class_ids = output_tensor(interpreter, 1)
      scores = output_tensor(interpreter, 2)
      count = int(output_tensor(interpreter, 3))
      width, height = input_size(interpreter)  # input_size() is defined elsewhere in my code
      sx, sy = width / image_scale, height / image_scale
    

    while I noted that the code in doods2 (tflite.py) is quite different:

    boxes = self.interpreter.get_tensor(self.output_details[0]['index'])[0] # Bounding box coordinates of detected objects
    classes = self.interpreter.get_tensor(self.output_details[1]['index'])[0] # Class index of detected objects
    scores = self.interpreter.get_tensor(self.output_details[2]['index'])[0] # Confidence of detected objects
    

    Maybe some change should be made in order to also accept TF1 models in doods. If this is the case, could you consider accepting TF1 models in doods2 by providing a special flag in the configuration (e.g. tf_Version: 1 or 2)? Thanks in advance.

    opened by eugemaf 9
  • How to enable/add yolov5s

    The documentation says that yolov5s is "A fast and accurate detector" and I can see that the files are included in the container but for some reason the default config doesn't include an entry for this detector.

    Is there a reason it isn't included by default in the config but is included in the filesystem?

    How can I add this detector in the config file?

    Many thanks

    opened by diagonali 8
  • Label filtering not working with pytorch

    I switched over from doods1 to doods2 using the Home Assistant plugin. I also decided to check out pytorch.

    I noticed after scanning the image, though, that it detected a "bench" in the image, when that isn't in the list of labels configured in the doods image processing configuration.

    opened by nvx 8
  • Pytorch->Deepstack detector not working with recently trained models

    Hi, sorry to annoy you again, but I think yolov5 changed something in their stuff again, because I'm no longer able to run doods2 with recently trained models. I already tried pulling the latest image of doods2 while also creating a new volume, but to no avail. That's the error I'm getting while doods2 is starting with a new model:

    $ sudo docker run -it -p 8080:8080 --restart unless-stopped -v Test:/opt/doods snowzach/doods2:latest
    2022-12-27 21:23:55,911 - doods.doods - ERROR - Could not create detector deepstack/pytorch: Can't get attribute '_rebuild_parameter_v2' on <module 'torch._utils' from '/usr/local/lib/python3.8/dist-packages/torch/_utils.py'>
    2022-12-27 21:23:56,050 - uvicorn.error - INFO - Started server process [1]
    2022-12-27 21:23:56,051 - uvicorn.error - INFO - Waiting for application startup.
    2022-12-27 21:23:56,052 - uvicorn.error - INFO - Application startup complete.
    2022-12-27 21:23:56,054 - uvicorn.error - INFO - Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)

    Putting the predecessor model back, the error disappears. Below I've also attached the command line output of doods starting for the first time after pulling the image:

    Status: Downloaded newer image for snowzach/doods2:latest
    2022-12-27 21:39:37,548 - doods.doods - INFO - Registered detector type:tflite name:default
    2022-12-27 21:39:44,975 - doods.doods - INFO - Registered detector type:tensorflow name:tensorflow
    /usr/local/lib/python3.8/dist-packages/torch/hub.py:267: UserWarning: You are about to download and run code from an untrusted repository. In a future release, this won't be allowed. To add the repository to your trusted list, change the command to {calling_fn}(..., trust_repo=False) and a command prompt will appear asking for an explicit confirmation of trust, or load(..., trust_repo=True), which will assume that the prompt is to be answered with 'yes'. You can also use load(..., trust_repo='check') which will only prompt for confirmation if the repo is not already trusted. This will eventually be the default behaviour
      warnings.warn(
    Downloading: "https://github.com/ultralytics/yolov5/zipball/master" to /root/.cache/torch/hub/master.zip
    requirements: YOLOv5 requirements "gitpython" "scipy>=1.4.1" not found, attempting AutoUpdate...
    WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
    WARNING: You are using pip version 21.3.1; however, version 22.3.1 is available.
    You should consider upgrading via the '/usr/bin/python3 -m pip install --upgrade pip' command.
    Collecting gitpython
      Downloading GitPython-3.1.29-py3-none-any.whl (182 kB)
    Collecting scipy>=1.4.1
      Downloading scipy-1.9.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (33.8 MB)
    Collecting gitdb<5,>=4.0.1
      Downloading gitdb-4.0.10-py3-none-any.whl (62 kB)
    Requirement already satisfied: numpy<1.26.0,>=1.18.5 in /usr/local/lib/python3.8/dist-packages (from scipy>=1.4.1) (1.21.5)
    Collecting smmap<6,>=3.0.1
      Downloading smmap-5.0.0-py3-none-any.whl (24 kB)
    Installing collected packages: smmap, gitdb, scipy, gitpython
    Successfully installed gitdb-4.0.10 gitpython-3.1.29 scipy-1.9.3 smmap-5.0.0

    requirements: 2 packages updated per /root/.cache/torch/hub/ultralytics_yolov5_master/requirements.txt
    requirements: ⚠️ Restart runtime or rerun command for updates to take effect

    YOLOv5 🚀 2022-12-27 Python-3.8.10 torch-1.13.0+cu117 CPU

    Downloading https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt to yolov5s.pt...
    100%|██████████████████████████████████████| 14.1M/14.1M [00:00<00:00, 30.2MB/s]

    Fusing layers...
    YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients
    Adding AutoShape...
    2022-12-27 21:40:19,227 - doods.doods - INFO - Registered detector type:pytorch name:pytorch
    2022-12-27 21:40:19,367 - uvicorn.error - INFO - Started server process [1]
    2022-12-27 21:40:19,368 - uvicorn.error - INFO - Waiting for application startup.
    2022-12-27 21:40:19,369 - uvicorn.error - INFO - Application startup complete.
    2022-12-27 21:40:19,370 - uvicorn.error - INFO - Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)

    As before, for reference I'm attaching a freshly trained Coco128 model which is also not working:

    best_coco128_yolov5s.zip

    Would appreciate it if you could take a look again :-)

    opened by Idefix0496 2
  • undefined symbol: cublasLtGetStatusString, version libcublasLt.so.11

    Running the latest docker image and getting the following error:

    doods_1  | Traceback (most recent call last):
    doods_1  |   File "main.py", line 8, in <module>
    doods_1  |     from doods import Doods
    doods_1  |   File "/opt/doods/doods.py", line 20, in <module>
    doods_1  |     from detectors.pytorch import PyTorch
    doods_1  |   File "/opt/doods/detectors/pytorch.py", line 7, in <module>
    doods_1  |     import torch
    doods_1  |   File "/usr/local/lib/python3.8/dist-packages/torch/__init__.py", line 191, in <module>
    doods_1  |     _load_global_deps()
    doods_1  |   File "/usr/local/lib/python3.8/dist-packages/torch/__init__.py", line 153, in _load_global_deps
    doods_1  |     ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
    doods_1  |   File "/usr/lib/python3.8/ctypes/__init__.py", line 373, in __init__
    doods_1  |     self._handle = _dlopen(self._name, mode)
    doods_1  | OSError: /usr/local/lib/python3.8/dist-packages/torch/lib/../../nvidia/cublas/lib/libcublas.so.11: undefined symbol: cublasLtGetStatusString, version libcublasLt.so.11
    

    Docker compose file:

    version: '3.2'
    
    services:
      doods:
        image: snowzach/doods2:amd64-gpu
        ports:
          - "8088:8080"
        restart: unless-stopped
        deploy:
          resources:
            reservations:
              devices:
                - driver: nvidia
                  count: 1
                  capabilities: [gpu]
    

    Output of nvidia-smi/nvcc

    $ nvidia-smi
    Mon Dec 19 11:23:18 2022
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 450.216.04   Driver Version: 450.216.04   CUDA Version: 11.0     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |                               |                      |               MIG M. |
    |===============================+======================+======================|
    |   0  Quadro P600         Off  | 00000000:01:00.0 Off |                  N/A |
    | 27%   42C    P0    N/A /  N/A |      0MiB /  2000MiB |      2%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
    
    +-----------------------------------------------------------------------------+
    | Processes:                                                                  |
    |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
    |        ID   ID                                                   Usage      |
    |=============================================================================|
    |  No running processes found                                                 |
    +-----------------------------------------------------------------------------+
    
    $ nvcc --version
    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2021 NVIDIA Corporation
    Built on Sun_Feb_14_21:12:58_PST_2021
    Cuda compilation tools, release 11.2, V11.2.152
    Build cuda_11.2.r11.2/compiler.29618528_0
    
    
    opened by scottgrobinson 1
  • Pytorch not available: "No module named git"

    Hey, I have tried to run the latest image on my NUC (amd64), and it seems that pytorch registration is failing because git is missing. I am attaching logs:

    2022-11-19 10:09:01,373 - doods.doods - INFO - Registered detector type:tflite name:default
    2022-11-19 10:09:04,072 - doods.doods - INFO - Registered detector type:tensorflow name:tensorflow
    2022-11-19 10:09:06,574 - doods.doods - ERROR - Could not create detector pytorch/pytorch: No module named 'git'
    2022-11-19 10:09:06,628 - uvicorn.error - INFO - Started server process [1]

    I checked the docker image, and the apt install for git is indeed missing, while it is present for arm. Could this be added in the next release? Thank you!

    opened by pevecyan 1
  • Troubleshooting/Documenting EdgeTPU usage with DOODS2 running in a Proxmox LXC

    Looking through the documentation in the README section, it seems like if you want to use the EdgeTPU, you need to create a docker-compose.yaml like the example provided so you can pass the device to the DOODS container. It also looks like you need to create a config.yaml to add the EdgeTPU detector.

    There is mention that it will look for the config.yaml in the current working directory and if it does not see one it will use the built-in config in the image. Is that correct?

    I have this folder structure:

    ~/doods2$ ls -lah
    total 20K
    drwxr-xr-x 3 doods doods 4.0K Nov  7 10:50 .
    drwxr-xr-x 4 doods doods 4.0K Nov  7 09:48 ..
    -rw-r--r-- 1 doods doods 1.1K Nov  4 11:15 config.yaml
    -rw-r--r-- 1 doods doods  144 Nov  7 09:52 docker-compose.yaml
    drwxr-xr-x 2 doods doods 4.0K Nov  4 11:14 models

    In my config.yaml, I have added a third detector called edgeTPU and it references the models folder for the model and labels.

    detectors:
      - name: default
        type: tflite
        modelFile: models/coco_ssd_mobilenet_v1_1.0_quant.tflite
        labelFile: models/coco_labels0.txt
        hwAccel: false
        numThreads: 4
      - name: tensorflow
        type: tensorflow
        modelFile: models/faster_rcnn_inception_v2_coco_2018_01_28.pb
        labelFile: models/coco_labels1.txt
        hwAccel: false
      - name: edgeTPU
        type: tflite
        modelFile: models/ssd_mobilenet_v1_coco_quant_postprocess_edgetpu.tflite
        labelFile: models/coco_labels.txt
        hwAccel: true
        numThreads: 4

    And in my docker-compose.yaml, I have the USB device passed:

    version: '3.2'
    services:
      doods:
        image: snowzach/doods2:amd64-gpu
        ports:
          - "8080:8080"
        devices:
          - /dev/bus/usb/002/002

    For some reason I am only getting the two default detectors as options. I'm not sure if I'm missing a step that may not be documented, or if it is just ignoring my config.yaml and using the built-in one.


    Any ideas?

    TIA!

    opened by tokenwizard 12
  • error while decoding MB

    Hi, sometimes (after hours) this error is shown:

    [h264 @ 0x7fa5f4d5eb40] error while decoding MB 66 35, bytestream -11

    I read https://stackoverflow.com/questions/64464169/error-while-decoding-mb-171-1-bytestream-27 and think that the camera produced a single faulty image...

    Can you add the workaround (link) to doods, or do you have another idea?

    Thanks for doods!

    More info: https://www.anycodings.com/1questions/2653121/opencv-read-errorh264-0x8f915e0-error-while-decoding-mb-53-20-bytestream-7

    This problem is created when you use the captured frames in the further processing and you create a delay in the pipeline while the rtsp is still streaming.
    
    The solution is to put the capture on a different thread and the frames you use on another thread.
    

    But my python is very bad....

    opened by MarkStephan89 1