Simple renderer for use with MuJoCo (>=2.1.2) Python Bindings.

Overview

Viewer for MuJoCo in Python

Interactive renderer to use with the official Python bindings for MuJoCo.

Starting with version 2.1.2, MuJoCo comes with native Python bindings officially supported by the MuJoCo devs.

If you have been a user of mujoco-py, you might be looking to migrate.
Some pointers on migration are available here.

Install

$ git clone https://github.com/rohanpsingh/mujoco-python-viewer
$ cd mujoco-python-viewer
$ pip install -e .

Or, install via pip:

$ pip install mujoco-python-viewer

Usage

import mujoco
import mujoco_viewer

model = mujoco.MjModel.from_xml_path('humanoid.xml')
data = mujoco.MjData(model)

# create the viewer object
viewer = mujoco_viewer.MujocoViewer(model, data)

# simulate and render
for _ in range(100000):
    mujoco.mj_step(model, data)
    viewer.render()

# close
viewer.close()

The render window should pop up and the simulation should be running.
Double-click on a geom and hold Ctrl to apply forces (right-click) and torques (left-click).


Press ESC to quit.
Other key bindings are shown in the overlay menu (mostly similar to mujoco-py).
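
The viewer also exposes an is_alive flag, so instead of stepping for a fixed number of iterations you can loop until the window is closed. A minimal variant of the Usage snippet above:

import mujoco
import mujoco_viewer

model = mujoco.MjModel.from_xml_path('humanoid.xml')
data = mujoco.MjData(model)
viewer = mujoco_viewer.MujocoViewer(model, data)

# step and render until the window is closed (ESC or the window's close button)
while viewer.is_alive:
    mujoco.mj_step(model, data)
    viewer.render()

viewer.close()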

Comments
  • Not able to get view to render in real time.

    I am running a simulation with a time step of 0.001 and gravity of -9.8. My model isn't very tall, just 0.4 m. When I use your viewer it puts everything into slow motion. If I turn off the help overlay it gets faster, but it's still in slow motion. Pressing D seems to make it go faster than real life. Why is it not moving at the same rate it would in real life? How do I get it to render in real time?
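
    A minimal real-time pacing sketch (not part of the original report; it reuses model, data and viewer from the Usage snippet, assumes rendering itself keeps up with real time, and simply sleeps whenever simulated time data.time runs ahead of the wall clock):

    import time

    sim_start = data.time
    wall_start = time.time()
    while viewer.is_alive:
        mujoco.mj_step(model, data)
        viewer.render()
        # sleep if the simulation has run ahead of wall-clock time
        ahead = (data.time - sim_start) - (time.time() - wall_start)
        if ahead > 0:
            time.sleep(ahead)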

    opened by Robokan 5
  • Can I use mujoco-python-viewer using `dm_control` API?

    I'm new to mujoco and I'm trying to play with interactive visualization. mujoco-python-viewer seems really useful!

    I noticed though that I cannot use it with the dm_control.mujoco.Physics API (which is more convenient for named indexing, etc.).

    To clarify my intention, below is an example of the way I would like to use it:

    from dm_control import mujoco
    import mujoco_viewer
    
    physics = mujoco.Physics.from_xml_path('my_model.xml')
    model = physics.model
    data = physics.data
    
    viewer = mujoco_viewer.MujocoViewer(model, data)
    
    for _ in range(10000):
        if viewer.is_alive:
            physics.step()
            viewer.render()
        else:
            break
    
    viewer.close()
    

    Is there a way to do that?
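
    A possible workaround sketch (not confirmed by the maintainers; it assumes a dm_control release built on the native bindings, where the model/data wrappers expose the underlying mujoco.MjModel/MjData through a .ptr attribute):

    from dm_control import mujoco as dm_mujoco
    import mujoco_viewer

    physics = dm_mujoco.Physics.from_xml_path('my_model.xml')

    # hand the native structs (assumed to be reachable via .ptr) to the viewer
    viewer = mujoco_viewer.MujocoViewer(physics.model.ptr, physics.data.ptr)

    while viewer.is_alive:
        physics.step()
        viewer.render()

    viewer.close()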

    opened by omershalev 5
  • Quitting does not release ctx

    When ESC is pressed to terminate the viewer, the code will just:

    print("Prssed ESC")
    print("Quitting.")
    glfw.terminate()
    sys.exit(0)
    

    Is there a reason why this code doesn't just call self.close(), which does largely the same thing and additionally releases ctx?

    opened by rpapallas 5
  • Code simplification, kinematic loop example

    Added a kinematic loop example. Simplified mujoco_viewer.py by moving the callbacks into another file. Automatically creates root/tmp (unless root/tmp already exists) to save screen captures in.

    TODO: No.2 in #4

    opened by rohit-kumar-j 4
  • Added ability to toggle on/off the small bottom-left menu

    Sometimes, especially for experiments, it's good to have a clean window without any menus for taking screenshots. I added a small change that provides a toggle to turn the bottom-left stats menu on or off. I also added an optional parameter to turn it off when initializing the viewer (see the example below). By default nothing changes: the stats menu will be visible as before.

    I had to introduce two different names for these menus: help_menu for the previous menu and statistics_menu for the bottom-left one.
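
    Usage might then look like the line below (hide_menus is a placeholder keyword name for illustration, not necessarily the name used in the PR):

    # start the viewer with the bottom-left statistics overlay hidden
    viewer = mujoco_viewer.MujocoViewer(model, data, hide_menus=True)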

    opened by rpapallas 3
  • Feature: Extra examples (beyond the simple viewer)?

    I'm currently working with masses, torques, etc. (following this series), and was hoping to take the examples directory a bit further (although I'm not sure how much of this is practical) by creating a sort of tutorial/example with a simple pendulum: obtaining units of torque, tuning values of kp, kd, ki, etc., and graphing the PID error like the profiler/sensor section of the simulate viewer, which generates live graphs.

    Perhaps a wiki with these:

    Existing graphing: (screenshot of the simulate profiler)

    opened by rohit-kumar-j 3
  • How to display arrow when dragging?

    This is a nice repository. This code will help me a lot.

    But I have a question about displaying the arrow when dragging.

    In the example in readme.md, an arrow indicating the force is displayed like this.


    https://user-images.githubusercontent.com/53563180/185560247-a8f1c8f9-95a5-450d-bd3f-c6554323b6c6.mp4

    However, in my trial it shows a box, which prevents me from understanding the direction of the force. I also tried the left/right Ctrl keys.

    Do you have any idea to fix this?

    Thanks.

    This is my environment: Python 3.7.12, glfw 2.5.4, mujoco-python-viewer 0.1.1.

    opened by gyuta 2
  • Testing

    I tested the code with Python 3 on a Mac (Intel) and I had to make these changes to get it to work:

    1. Remove import imageio (the package is not needed, and I was not able to install it anyway).
    2. On lines 500 and 533 I had to change is to ==.
    opened by pab47 2
  • [Issue] Multi-instances for multiple view

    Thanks for the great work. I am trying to convert my script from mujoco-py to this library, but I realized that it seems incapable of creating multiple viewer instances: for example, a -1 observer view plus cameras 0 and 1 for stereo vision.

    I am wondering if there is any workaround in mind for this?

    Best,
    Jack

    opened by jaku-jaku 2
  • Bugs occur when using 'double click'  and 'ctrl and left click or right click' on mac

    Hello, I am using an M1 MacBook Pro to test this viewer with the MuJoCo Python bindings.

    I found that when I run the basic example with this viewer, the mouse actions are wrong on Mac.

    The bug is that "double click" does not select an object but instead toggles the contact force option (the C key on the keyboard still works). As a result, I cannot use Ctrl + left/right click to apply a torque or force to an object.

    I also tested this by importing an XML file into MuJoCo directly, likewise on Mac, and there "double click" worked and selected the object, so it's not a MuJoCo issue. Besides, I tested the same viewer version on Ubuntu, and it works very well.

    I suspect something is different on Mac. I checked the code but found nothing.

    Please have a look, many thanks!

    opened by KJaebye 1
  • Converted class to a context manager

    This allows a client to use the class in the following way:

    with MujocoViewer(model, data) as viewer:
        viewer.render()
    

    and it will call viewer.close() when the with block exits, so the client doesn't have to.
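
    A minimal sketch of what the context-manager hooks might look like (not necessarily the PR's exact code; it only assumes the existing close() method):

    class MujocoViewer:
        # ... existing __init__, render, close, etc. ...

        def __enter__(self):
            # the window is already created in __init__, nothing extra to do
            return self

        def __exit__(self, exc_type, exc_value, traceback):
            # always release the window and rendering context,
            # even if the with-body raised an exception
            self.close()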

    opened by rpapallas 1
  • How to record simulation movies?

    Hello,

    I am new to MuJoCo, but I was able to take a screenshot by referring to your program! Thank you very much.

    However, I could not figure out how to record a video and would like to know how to do so.

    I am sorry to trouble you with this, but thank you in advance for your time.
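
    One possible approach (a sketch only: it assumes the offscreen mode and a read_pixels()-style frame grab hinted at by the v0.1.0 offscreen rendering support and examples/offscreen_demo.py, and uses imageio with the imageio-ffmpeg backend to write the file):

    import imageio
    import mujoco
    import mujoco_viewer

    model = mujoco.MjModel.from_xml_path('humanoid.xml')
    data = mujoco.MjData(model)

    # 'offscreen' mode and read_pixels() are assumptions based on the
    # offscreen rendering support announced in v0.1.0
    viewer = mujoco_viewer.MujocoViewer(model, data, 'offscreen')

    with imageio.get_writer('simulation.mp4', fps=60) as writer:
        for _ in range(600):
            mujoco.mj_step(model, data)
            writer.append_data(viewer.read_pixels())

    viewer.close()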

    opened by miyukin73 1
  • Full Reload of Sim without closing window?

    This might be a breaking change:

    # pass the xml path to the viewer directly; upon KEY_BACKSPACE, reload the sim
    import mujoco
    import mujoco_viewer

    viewer = mujoco_viewer.MujocoViewer(xml_path="Projects/rjax_python/robots/humanoid/scene.xml")
    while True:
        mujoco.mj_step(viewer.model, viewer.data)
        viewer.render()
    

    Each time the model and data are accessed, it has to be done via viewer.model and viewer.data, so the examples, etc. need to change. Would this PR be okay? (Of course, the changes will be reflected in the README and examples.)

    Need for this/Use case:

    No relaunching of the Python script when tweaking the .xml, and no need to use simulate.cc for the same purpose.

    Implementation Example:

    https://user-images.githubusercontent.com/37873142/192229058-3711d7ab-b69c-46c0-b6b3-998364ce704f.mp4

    (If the video stops in the middle, kindly scrub manually to the end. The video may be corrupted)

    opened by rohit-kumar-j 2
  • Large/Small Font options with MjrContext?

    There are too many changes in #23, so I want to ask this here (perhaps there are too many config options): add font options within MjrContext?

    viewer.__init__(font="small")  # or "large"

    if font == "large":
        self.ctx = mujoco.MjrContext(
            self.model, mujoco.mjtFontScale.mjFONTSCALE_150.value)
    elif font == "small":
        self.ctx = mujoco.MjrContext(
            self.model, mujoco.mjtFontScale.mjFONTSCALE_100.value)
    

    (Side-by-side screenshots of the small and large fonts were attached here.)

    opened by rohit-kumar-j 0
  • Added graph rendering, Actuator force visualization[no sites], Sim reset method(backspace) and window positioning

    Graph preview (KEY: G)

    Unfortunately, the time axis at the bottom of the graph was not captured in the video. It gives a time-based graph; the red line is a random signal (a sine in this case).

    https://user-images.githubusercontent.com/37873142/190720262-22f09c46-363b-4dc3-8c37-b340ed66a69b.mp4

    Actuator Force visualization via graphs

    The sites are used to get the location and orientation of the body at the actuator location only.

    https://user-images.githubusercontent.com/37873142/190720353-2cf6e5d8-a1d5-4c67-8757-a5c2f8d6cbd6.mp4

    ... and added examples

    opened by rohit-kumar-j 5
  • User options

    Hello,

    I needed some way to get "user options". I have different MuJoCo data that I would like to visualize, so I wanted a way for the user to press "1" and have the client code switch the viewer's data/model to the first dataset, then press "2" to switch to the second, and so on.

    I have written this here: https://github.com/rpapallas/mujoco-python-viewer/commit/dc8679ee39623cd7d93b7576ed1d089d938beee7

    If you think something like this would be useful and could be implemented like this or differently, please let me know. It could be a generic "user options" feature that lets the client code react when a certain option key is pressed; currently this is limited to numeric keys, but it could be any key pressed while Shift is held. I understand that this might not be useful to everyone, though.

    opened by rpapallas 0
Releases(v0.1.2)
  • v0.1.2(Aug 23, 2022)

    New feature

    • Ctrl+S will save current camera configuration in config.yaml
    • Saved camera configuration will automatically be loaded on startup and applied (if possible)

    NOTE

    Not tested on Windows or MacOS

  • v0.1.1(Aug 7, 2022)

  • v0.1.0(Jul 26, 2022)

    Added

    • Support for offscreen rendering!
    • Sample program for offscreen: examples/offscreen_demo.py

    Changes

    • examples/markers_demo.py will now loop forever until the window is closed.

    Fixes

    • Fix thread crash behavior on ESC key.
  • v0.0.5(Jul 22, 2022)

Owner
Rohan P. Singh
PhD student at JRL, Japan.