Easy to use Python camera interface for NVIDIA Jetson


JetCam

JetCam is an easy to use Python camera interface for NVIDIA Jetson.

  • Works with various USB and CSI cameras using Jetson's Accelerated GStreamer Plugins

  • Easily read images as numpy arrays with image = camera.read()

  • Set the camera to running = True to attach callbacks to new frames

JetCam makes it easy to prototype AI projects in Python, especially within the Jupyter Lab programming environment installed in JetCard.

If you find an issue, please let us know!

Setup

git clone https://github.com/NVIDIA-AI-IOT/jetcam
cd jetcam
sudo python3 setup.py install

JetCam is tested against a system configured with the JetCard setup. Different system configurations may require additional steps.
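
To confirm the install worked, you can import the package from the same Python interpreter you installed it with (a quick sanity check, not part of the official setup steps):

import jetcam
print(jetcam.__file__)  # should print a path under dist-packages for the installed egg

If this raises ModuleNotFoundError, double-check that the python3 you are running here is the same one you used for setup.py install.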

Usage

Below we show some usage examples. You can find more in the notebooks.

Create CSI camera

Call CSICamera to use a compatible CSI camera. capture_width, capture_height, and capture_fps control the shape and rate at which images are acquired from the sensor, while width and height control the final output shape of the image returned by the read function.

from jetcam.csi_camera import CSICamera

camera = CSICamera(width=224, height=224, capture_width=1080, capture_height=720, capture_fps=30)

Create USB camera

Call USBCamera to use a compatible USB camera. The same parameters as CSICamera apply, along with a capture_device parameter that selects the device index. You can check the available device indices by calling ls /dev/video*.

from jetcam.usb_camera import USBCamera

camera = USBCamera(capture_device=1)
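
For example, if ls /dev/video* lists /dev/video0, you would pass capture_device=0. Because the same parameters as CSICamera apply, you can also set the output shape at construction time (an illustrative sketch; adjust the device index to match your system):

from jetcam.usb_camera import USBCamera

camera = USBCamera(width=224, height=224, capture_device=0)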

Read

Call read() to read the latest image as a numpy.ndarray of data type np.uint8 and shape (height, width, 3); for the 224x224 cameras created above, that is (224, 224, 3). The color format is BGR8.

image = camera.read()

The read function also updates the camera's internal value attribute.

camera.read()
image = camera.value
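
Since the color format is BGR8, you may need to convert to RGB before displaying the image with libraries that expect RGB input (a minimal sketch using OpenCV's cvtColor; it assumes OpenCV is available, which JetCam already uses for capture):

import cv2

image = camera.read()                                # BGR8, shape (224, 224, 3), dtype uint8
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)   # reorder channels to RGB for display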

Callback

You can also set the camera to running = True, which spawns a thread that acquires images from the camera and updates the camera's value attribute automatically. You can attach a callback to value using the traitlets library; the callback receives a change dict containing both the new and the old camera value.

camera.running = True

def callback(change):
    new_image = change['new']
    # do some processing...

camera.observe(callback, names='value')
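
When you are done, you can detach the callback and stop frame acquisition. This is a sketch based on the traitlets API (unobserve takes the same handler and trait name passed to observe) and the running flag described above; setting running back to False is assumed to stop the background thread:

camera.unobserve(callback, names='value')  # stop receiving change notifications
camera.running = False                     # assumed to stop the acquisition thread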

Cameras

CSI Cameras

These cameras work with the CSICamera class. Try them out by following the example notebook.

Model                         | Infrared | FOV  | Resolution | Cost
Raspberry Pi Camera V2        |          | 62.2 | 3280x2464  | $25
Raspberry Pi Camera V2 (NOIR) | x        | 62.2 | 3280x2464  | $31
Arducam IMX219 CS lens mount  |          |      | 3280x2464  | $65
Arducam IMX219 M12 lens mount |          |      | 3280x2464  | $60
LI-IMX219-MIPI-FF-NANO        |          |      | 3280x2464  | $29
WaveShare IMX219-77           |          | 77   | 3280x2464  | $19
WaveShare IMX219-77IR         | x        | 77   | 3280x2464  | $21
WaveShare IMX219-120          |          | 120  | 3280x2464  | $20
WaveShare IMX219-160          |          | 160  | 3280x2464  | $23
WaveShare IMX219-160IR        | x        | 160  | 3280x2464  | $25
WaveShare IMX219-200          |          | 200  | 3280x2464  | $27

USB Cameras

These cameras work with the USBCamera class. Try them out by following the example notebook.

Model         | Infrared | FOV | Resolution | Cost
Logitech C270 |          | 60  | 1280x720   | $18

See also

  • JetBot - An educational AI robot based on NVIDIA Jetson Nano

  • JetRacer - An educational AI racecar using NVIDIA Jetson Nano

  • JetCard - An SD card image for web programming AI projects with NVIDIA Jetson Nano

  • torch2trt - An easy to use PyTorch to TensorRT converter

Comments
  • Camera works, Jetcam does not

    I am trying to get a Raspberry Pi v2 camera module working on a Jetson Xavier NX with Jetpack 4.4 installed.

    (Specifically, I want to use Jetcam because one of your other projects, https://github.com/NVIDIA-AI-IOT/trt_pose uses Jetcam in its live demo.)

    I know my camera is connected properly and working because if I run:

    gst-launch-1.0 nvarguscamerasrc ! nvoverlaysink
    

    ... I get a video image on screen immediately, no problem.

    However, running even the most basic example (csi_camera notebook), I always get errors:

    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    /usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/csi_camera.py in __init__(self, *args, **kwargs)
         23             if not re:
    ---> 24                 raise RuntimeError('Could not read image from camera.')
         25         except:
    
    RuntimeError: Could not read image from camera.
    
    During handling of the above exception, another exception occurred:
    
    RuntimeError                              Traceback (most recent call last)
    <ipython-input-2-4d23bcae2fae> in <module>
          1 from jetcam.csi_camera import CSICamera
          2 
    ----> 3 camera = CSICamera(width=224, height=224, capture_width=1980, capture_height=1080, capture_fps=30)
    
    /usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/csi_camera.py in __init__(self, *args, **kwargs)
         25         except:
         26             raise RuntimeError(
    ---> 27                 'Could not initialize camera.  Please see error trace.')
         28 
         29         atexit.register(self.cap.release)
    
    RuntimeError: Could not initialize camera.  Please see error trace
    

    I've even tried the fix (hack?) suggested in https://github.com/NVIDIA-AI-IOT/jetcam/issues/12 but this makes no difference.

    Any advice on what to look for or what the issue could be?

    opened by anselanza 3
  • remove duplicate comma

    This duplicate comma causes an error on Jetpack 4.3 (OpenCV 4): error opening bin: could not parse caps "video/x-raw, , format=(string)BGR". Fix #17

    opened by borongyuan 3
  • camera can not initial

    Python 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from jetcam.csi_camera import CSICamera
    >>> camera = CSICamera(width=224, height=224, capture_width=1080, capture_height=720, capture_fps=30)
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/csi_camera.py", line 24, in __init__
    RuntimeError: Could not read image from camera.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/csi_camera.py", line 27, in __init__
    RuntimeError: Could not initialize camera. Please see error trace.

    opened by wangnan31415926 1
  • cv2.cpython-36m-aarch64-linux-gnu.so: undefined symbol

    [email protected]:/usr/lib$ python3
    Python 3.6.8 (default, Jan 14 2019, 11:02:34)
    [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from jetcam.usb_camera import USBCamera
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/usb_camera.py", line 3, in <module>
    ImportError: /usr/local/lib/python3.6/dist-packages/cv2.cpython-36m-aarch64-linux-gnu.so: undefined symbol: _ZTIN2cv3dnn14dnn4_v201809175LayerE
    >>>
    
    
    

    My HW is jetson nano and SW env is

    [email protected]:/usr/lib$ jetson-release
     - NVIDIA Jetson NANO/TX1
       * Jetpack 4.2 [L4T 32.1.0]
       * CUDA GPU architecture 5.3
     - Libraries:
       * CUDA 10.0.166
       * cuDNN 7.3.1.28-1+cuda10.0
       * TensorRT 5.0.6.3-1+cuda10.0
       * Visionworks 1.6.0.500n
       * OpenCV 4.0.1 compiled CUDA: YES
     - Jetson Performance: active
    [email protected]:/usr/lib$
    
    
    opened by hgnan 0
  • Install failure

    I run sudo python3 setup.py install

    I get the following:

    /usr/local/lib/python3.8/dist-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
      warnings.warn(
    /usr/local/lib/python3.8/dist-packages/setuptools/command/easy_install.py:144: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
      warnings.warn(
    /usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py:123: PkgResourcesDeprecationWarning: 0.1.36ubuntu1 is an invalid version and will not be supported in a future release
      warnings.warn(
    /usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py:123: PkgResourcesDeprecationWarning: 0.23ubuntu1 is an invalid version and will not be supported in a future release
      warnings.warn(
    running bdist_egg
    running egg_info
    writing jetcam.egg-info/PKG-INFO
    writing dependency_links to jetcam.egg-info/dependency_links.txt
    writing top-level names to jetcam.egg-info/top_level.txt
    reading manifest file 'jetcam.egg-info/SOURCES.txt'
    adding license file 'LICENSE.md'
    writing manifest file 'jetcam.egg-info/SOURCES.txt'
    installing library code to build/bdist.linux-aarch64/egg
    running install_lib
    running build_py
    creating build/bdist.linux-aarch64/egg
    creating build/bdist.linux-aarch64/egg/jetcam
    copying build/lib/jetcam/csi_camera.py -> build/bdist.linux-aarch64/egg/jetcam
    copying build/lib/jetcam/__init__.py -> build/bdist.linux-aarch64/egg/jetcam
    copying build/lib/jetcam/usb_camera.py -> build/bdist.linux-aarch64/egg/jetcam
    copying build/lib/jetcam/camera.py -> build/bdist.linux-aarch64/egg/jetcam
    copying build/lib/jetcam/utils.py -> build/bdist.linux-aarch64/egg/jetcam
    byte-compiling build/bdist.linux-aarch64/egg/jetcam/csi_camera.py to csi_camera.cpython-38.pyc
    byte-compiling build/bdist.linux-aarch64/egg/jetcam/__init__.py to __init__.cpython-38.pyc
    byte-compiling build/bdist.linux-aarch64/egg/jetcam/usb_camera.py to usb_camera.cpython-38.pyc
    byte-compiling build/bdist.linux-aarch64/egg/jetcam/camera.py to camera.cpython-38.pyc
    byte-compiling build/bdist.linux-aarch64/egg/jetcam/utils.py to utils.cpython-38.pyc
    creating build/bdist.linux-aarch64/egg/EGG-INFO
    copying jetcam.egg-info/PKG-INFO -> build/bdist.linux-aarch64/egg/EGG-INFO
    copying jetcam.egg-info/SOURCES.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
    copying jetcam.egg-info/dependency_links.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
    copying jetcam.egg-info/top_level.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
    zip_safe flag not set; analyzing archive contents...
    creating 'dist/jetcam-0.0.0-py3.8.egg' and adding 'build/bdist.linux-aarch64/egg' to it
    removing 'build/bdist.linux-aarch64/egg' (and everything under it)
    Processing jetcam-0.0.0-py3.8.egg
    Removing /usr/lib/python3.8/site-packages/jetcam-0.0.0-py3.8.egg
    Copying jetcam-0.0.0-py3.8.egg to /usr/lib/python3.8/site-packages
    jetcam 0.0.0 is already the active version in easy-install.pth
    
    Installed /usr/lib/python3.8/site-packages/jetcam-0.0.0-py3.8.egg
    Processing dependencies for jetcam==0.0.0
    Finished processing dependencies for jetcam==0.0.0
    

    import jetcam returns ModuleNotFoundError: No module named 'jetcam'

    What am I doing wrong?

    opened by master0v 1
  • Jetbot Camera Not Working- RuntimeError: Could not initialize camera. Please see error trace.

    Hello, for some reason I can't get my camera to work again. For context, I tried to use a custom dataset from Roboflow, but then my kernel kept dying after installing roboflow. I reconfigured the right numpy version and edited my .bashrc as suggested on NVIDIA's forum, but now the camera won't initialize. I know the camera itself works because it worked before, I am able to save a short video with it, and I can call it from the terminal. But whenever I try to run a cell in Jupyter that requires the camera, it fails. I've tried restarting the camera too, but no luck :( Any help would be appreciated!

    opened by niiita 1
  • Camera ON LED continues to be on unless I restart the OS.

    Hi,

    How can I close the camera after camera.unobserve(update_image, names='value')? The camera ON LED stays on unless I restart the OS. I am using a Logitech C270 USB camera. Is there a command to close the camera?

    opened by jam244 0
  • jetcam thread race - read thread and processing thread

    With camera.running = True, jetcam spawns a thread which reads frames into camera.value.

    Now let's say we do new_image = change['new'] and do some processing. I guess Python only assigns a reference to the original image array to the new_image variable, so effectively new_image and camera.value point to the same memory region. Let's say my processing thread takes a very long time; in the meantime, camera.value is updated by the jetcam thread. This can cause a thread race. Is that right?

    opened by PhilipsKoshy 0
  • Cannot query video position: status=0, value=-1, duration=-1

    I tried:

    camera = USBCamera(width=224, height=224, capture_width=640, capture_height=480, capture_device=0)

    and the reply is:

    [ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1

    opened by Chenhait 0
  • AttributeError: 'directional_link' object has no attribute 'link'

    The beginning steps are OK, but camera_link.link() fails to execute and I get an error:

    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    ----> 1 camera_link.link()

    AttributeError: 'directional_link' object has no attribute 'link'

    I don't know what the reason is.

    opened by watershade 0
Releases(v0.0.0)
Owner
NVIDIA AI IOT