Easy to use Python camera interface for NVIDIA Jetson

Overview

JetCam

JetCam is an easy to use Python camera interface for NVIDIA Jetson.

  • Works with various USB and CSI cameras using Jetson's Accelerated GStreamer Plugins

  • Easily read images as numpy arrays with image = camera.read()

  • Set the camera to running = True to attach callbacks to new frames

JetCam makes it easy to prototype AI projects in Python, especially within the Jupyter Lab programming environment installed in JetCard.

If you find an issue, please let us know!

Setup

git clone https://github.com/NVIDIA-AI-IOT/jetcam
cd jetcam
sudo python3 setup.py install

JetCam is tested against a system configured with the JetCard setup. Different system configurations may require additional steps.
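
After installing, a quick way to confirm the package is importable (a minimal sanity check; it only uses the import paths shown in the usage examples below) is:

python3 -c "from jetcam.csi_camera import CSICamera; from jetcam.usb_camera import USBCamera; print('jetcam OK')"

If this raises ModuleNotFoundError, make sure the Python interpreter you are running matches the one used for the install.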

Usage

Below we show some usage examples. You can find more in the notebooks.

Create CSI camera

Call CSICamera to use a compatible CSI camera. capture_width, capture_height, and capture_fps control the resolution and frame rate at which images are acquired, while width and height control the final output shape of the image returned by the read function.

from jetcam.csi_camera import CSICamera

camera = CSICamera(width=224, height=224, capture_width=1080, capture_height=720, capture_fps=30)

Create USB camera

Call USBCamera to use a compatible USB camera. The same parameters as CSICamera apply, along with a parameter capture_device that indicates the device index. You can check the device index by running ls /dev/video*.

from jetcam.usb_camera import USBCamera

camera = USBCamera(capture_device=1)
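
The same resolution parameters as for CSICamera also apply here; for example (the device index and capture resolution below are illustrative and depend on your camera):

camera = USBCamera(width=224, height=224, capture_width=640, capture_height=480, capture_device=0)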

Read

Call read() to read the latest image as a numpy.ndarray of data type np.uint8 and shape (224, 224, 3). The color format is BGR8.

image = camera.read()

The read function also updates the camera's internal value attribute.

camera.read()
image = camera.value
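
You can check that the returned array matches the documented format, and convert it for display (a sketch; it assumes OpenCV, which JetCam itself uses, is available as cv2):

import cv2

image = camera.read()
print(image.shape, image.dtype)  # expected: (224, 224, 3) uint8

# JetCam returns BGR8, so convert to RGB before displaying with e.g. matplotlib
rgb_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)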

Callback

You can also set the camera to running = True, which will spawn a thread that acquires images from the camera and updates the camera's value attribute automatically. You can attach a callback to the value using the traitlets library; the callback will be called with both the new and the old camera value.

camera.running = True

def callback(change):
    new_image = change['new']
    # do some processing...

camera.observe(callback, names='value')
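
To stop receiving updates, detach the callback with traitlets' unobserve and set running back to False to stop the acquisition thread (a sketch based on the traitlets API and the running flag described above):

camera.unobserve(callback, names='value')  # detach the traitlets callback
camera.running = False  # stop the background acquisition thread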

Cameras

CSI Cameras

These cameras work with the CSICamera class. Try them out by following the example notebook.

Model                            Infrared   FOV (deg)   Resolution   Cost
Raspberry Pi Camera V2                      62.2        3280x2464    $25
Raspberry Pi Camera V2 (NOIR)    x          62.2        3280x2464    $31
Arducam IMX219 CS lens mount                            3280x2464    $65
Arducam IMX219 M12 lens mount                           3280x2464    $60
LI-IMX219-MIPI-FF-NANO                                  3280x2464    $29
WaveShare IMX219-77                         77          3280x2464    $19
WaveShare IMX219-77IR            x          77          3280x2464    $21
WaveShare IMX219-120                        120         3280x2464    $20
WaveShare IMX219-160                        160         3280x2464    $23
WaveShare IMX219-160IR           x          160         3280x2464    $25
WaveShare IMX219-200                        200         3280x2464    $27

USB Cameras

These cameras work with the USBCamera class. Try them out by following the example notebook.

Model            Infrared   FOV (deg)   Resolution   Cost
Logitech C270               60          1280x720     $18

See also

  • JetBot - An educational AI robot based on NVIDIA Jetson Nano

  • JetRacer - An educational AI racecar using NVIDIA Jetson Nano

  • JetCard - An SD card image for web programming AI projects with NVIDIA Jetson Nano

  • torch2trt - An easy to use PyTorch to TensorRT converter

Comments
  • Camera works, Jetcam does not

    I am trying to get a Raspberry Pi v2 camera module working on a Jetson Xavier NX with Jetpack 4.4 installed.

    (Specifically, I want to use Jetcam because one of your other projects, https://github.com/NVIDIA-AI-IOT/trt_pose uses Jetcam in its live demo.)

    I know my camera is connected properly and working because if I run:

    gst-launch-1.0 nvarguscamerasrc ! nvoverlaysink
    

    ... I get a video image on screen immediately, no problem.

    However, running even the most basic example (csi_camera notebook), I always get errors:

    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    /usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/csi_camera.py in __init__(self, *args, **kwargs)
         23             if not re:
    ---> 24                 raise RuntimeError('Could not read image from camera.')
         25         except:
    
    RuntimeError: Could not read image from camera.
    
    During handling of the above exception, another exception occurred:
    
    RuntimeError                              Traceback (most recent call last)
    <ipython-input-2-4d23bcae2fae> in <module>
          1 from jetcam.csi_camera import CSICamera
          2 
    ----> 3 camera = CSICamera(width=224, height=224, capture_width=1980, capture_height=1080, capture_fps=30)
    
    /usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/csi_camera.py in __init__(self, *args, **kwargs)
         25         except:
         26             raise RuntimeError(
    ---> 27                 'Could not initialize camera.  Please see error trace.')
         28 
         29         atexit.register(self.cap.release)
    
    RuntimeError: Could not initialize camera.  Please see error trace
    

    I've even tried the fix (hack?) suggested in https://github.com/NVIDIA-AI-IOT/jetcam/issues/12 but this makes no difference.

    Any advice on what to look for or what the issue could be?

    opened by anselanza 3
  • remove duplicate comma

    This duplicate comma causes an error on Jetpack 4.3 (OpenCV 4). error opening bin: could not parse caps "video/x-raw, , format=(string)BGR" Fix #17

    opened by borongyuan 3
  • camera can not initial

    Python 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from jetcam.csi_camera import CSICamera
    >>> camera = CSICamera(width=224, height=224, capture_width=1080, capture_height=720, capture_fps=30)
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/csi_camera.py", line 24, in __init__
    RuntimeError: Could not read image from camera.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/csi_camera.py", line 27, in __init__
    RuntimeError: Could not initialize camera. Please see error trace.

    opened by wangnan31415926 1
  • cv2.cpython-36m-aarch64-linux-gnu.so: undefined symbol

    [email protected]:/usr/lib$ python3
    Python 3.6.8 (default, Jan 14 2019, 11:02:34)
    [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from jetcam.usb_camera import USBCamera
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/usb_camera.py", line 3, in <module>
    ImportError: /usr/local/lib/python3.6/dist-packages/cv2.cpython-36m-aarch64-linux-gnu.so: undefined symbol: _ZTIN2cv3dnn14dnn4_v201809175LayerE
    >>>
    
    
    

    My HW is jetson nano and SW env is

    [email protected]:/usr/lib$ jetson-release
     - NVIDIA Jetson NANO/TX1
       * Jetpack 4.2 [L4T 32.1.0]
       * CUDA GPU architecture 5.3
     - Libraries:
       * CUDA 10.0.166
       * cuDNN 7.3.1.28-1+cuda10.0
       * TensorRT 5.0.6.3-1+cuda10.0
       * Visionworks 1.6.0.500n
       * OpenCV 4.0.1 compiled CUDA: YES
     - Jetson Performance: active
    [email protected]:/usr/lib$
    
    
    opened by hgnan 0
  • Install failure

    I run sudo python3 setup.py install

    I get the following:

    /usr/local/lib/python3.8/dist-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
      warnings.warn(
    /usr/local/lib/python3.8/dist-packages/setuptools/command/easy_install.py:144: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
      warnings.warn(
    /usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py:123: PkgResourcesDeprecationWarning: 0.1.36ubuntu1 is an invalid version and will not be supported in a future release
      warnings.warn(
    /usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py:123: PkgResourcesDeprecationWarning: 0.23ubuntu1 is an invalid version and will not be supported in a future release
      warnings.warn(
    running bdist_egg
    running egg_info
    writing jetcam.egg-info/PKG-INFO
    writing dependency_links to jetcam.egg-info/dependency_links.txt
    writing top-level names to jetcam.egg-info/top_level.txt
    reading manifest file 'jetcam.egg-info/SOURCES.txt'
    adding license file 'LICENSE.md'
    writing manifest file 'jetcam.egg-info/SOURCES.txt'
    installing library code to build/bdist.linux-aarch64/egg
    running install_lib
    running build_py
    creating build/bdist.linux-aarch64/egg
    creating build/bdist.linux-aarch64/egg/jetcam
    copying build/lib/jetcam/csi_camera.py -> build/bdist.linux-aarch64/egg/jetcam
    copying build/lib/jetcam/__init__.py -> build/bdist.linux-aarch64/egg/jetcam
    copying build/lib/jetcam/usb_camera.py -> build/bdist.linux-aarch64/egg/jetcam
    copying build/lib/jetcam/camera.py -> build/bdist.linux-aarch64/egg/jetcam
    copying build/lib/jetcam/utils.py -> build/bdist.linux-aarch64/egg/jetcam
    byte-compiling build/bdist.linux-aarch64/egg/jetcam/csi_camera.py to csi_camera.cpython-38.pyc
    byte-compiling build/bdist.linux-aarch64/egg/jetcam/__init__.py to __init__.cpython-38.pyc
    byte-compiling build/bdist.linux-aarch64/egg/jetcam/usb_camera.py to usb_camera.cpython-38.pyc
    byte-compiling build/bdist.linux-aarch64/egg/jetcam/camera.py to camera.cpython-38.pyc
    byte-compiling build/bdist.linux-aarch64/egg/jetcam/utils.py to utils.cpython-38.pyc
    creating build/bdist.linux-aarch64/egg/EGG-INFO
    copying jetcam.egg-info/PKG-INFO -> build/bdist.linux-aarch64/egg/EGG-INFO
    copying jetcam.egg-info/SOURCES.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
    copying jetcam.egg-info/dependency_links.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
    copying jetcam.egg-info/top_level.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
    zip_safe flag not set; analyzing archive contents...
    creating 'dist/jetcam-0.0.0-py3.8.egg' and adding 'build/bdist.linux-aarch64/egg' to it
    removing 'build/bdist.linux-aarch64/egg' (and everything under it)
    Processing jetcam-0.0.0-py3.8.egg
    Removing /usr/lib/python3.8/site-packages/jetcam-0.0.0-py3.8.egg
    Copying jetcam-0.0.0-py3.8.egg to /usr/lib/python3.8/site-packages
    jetcam 0.0.0 is already the active version in easy-install.pth
    
    Installed /usr/lib/python3.8/site-packages/jetcam-0.0.0-py3.8.egg
    Processing dependencies for jetcam==0.0.0
    Finished processing dependencies for jetcam==0.0.0
    

    import jetcam returns ModuleNotFoundError: No module named 'jetcam'

    What am I doing wrong?

    opened by master0v 1
  • Jetbot Camera Not Working - RuntimeError: Could not initialize camera. Please see error trace.

    Hello, for some reason I can't get my camera to work again. For context, I tried to use a custom dataset from Roboflow, but then my kernel kept dying after installing roboflow. I reconfigured the right numpy version and edited my .bashrc as suggested on NVIDIA's forum, but now the camera won't initialize. I know the camera itself works because it used to work before; I am also able to save a short video with it and call it from the terminal. But whenever I try to run a cell in Jupyter that requires the camera, it fails. I've tried restarting the camera too, but no luck :( Any help would be appreciated!

    opened by niiita 1
  • Camera ON LED continues to be on unless I restart the OS.

    Hi,

    How can I close the camera after camera.unobserve(update_image, names='value') ? The camera ON LED continues to be on unless I restart the OS. I am using Logitech C270 USB camera. Is there a command to close the camera?

    opened by jam244 0
  • jetcam thread race - read thread and processing thread

    With camera.running = True, jetcam spawns a thread which reads into camera.value.

    Now let's say we do new_image = change['new'] and do some processing. I guess Python does shallow copying and only assigns a reference to the original image array to the new_image variable, so effectively new_image and camera.value point to the same memory region. Let's say my processing thread takes a very long time; in the meantime, camera.value is updated by the jetcam thread. This can cause a thread race. Is that right?
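
    If that is a concern, one mitigation (a sketch only; it assumes the value is a numpy array, as documented in the Read section above) is to copy the frame inside the callback before any long-running processing:

    def callback(change):
        # copy the array so later updates to camera.value don't modify this frame
        new_image = change['new'].copy()
        # ... long-running processing on new_image ...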

    opened by PhilipsKoshy 0
  • Cannot query video position: status=0, value=-1, duration=-1

    I tried camera = USBCamera(width=224, height=224, capture_width=640, capture_height=480, capture_device=0) and the reply is:

    [ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1

    opened by Chenhait 0
  • AttributeError: 'directional_link' object has no attribute 'link'

    The beginning steps are OK, but camera_link.link() fails to execute and I get an error:

    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    ----> 1 camera_link.link()

    AttributeError: 'directional_link' object has no attribute 'link'

    I don't know what the reason is.

    opened by watershade 0
Releases (v0.0.0)

Owner

NVIDIA AI IOT