A Simulation Environment to train Robots in Large Realistic Interactive Scenes

Overview

iGibson: A Simulation Environment to train Robots in Large Realistic Interactive Scenes

iGibson is a simulation environment providing fast visual rendering and physics simulation based on Bullet. iGibson is equipped with 15 fully interactive, high-quality scenes, hundreds of large 3D scenes reconstructed from real homes and offices, and compatibility with datasets like CubiCasa5K and 3D-Front, providing 8000+ additional interactive scenes. iGibson's features include domain randomization, integration with motion planners, and easy-to-use tools to collect human demonstrations. With these scenes and features, iGibson allows researchers to train and evaluate robotic agents that use visual signals to solve navigation and manipulation tasks such as opening doors, picking up and placing objects, or searching in cabinets.
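
Not part of the original release notes, the sketch below illustrates the Gym-style loop this enables. It assumes the iGibsonEnv wrapper from igibson.envs.igibson_env, that igibson.configs_path points at the bundled config folder, and uses turtlebot_nav.yaml as a placeholder config name:

    import os

    import igibson
    from igibson.envs.igibson_env import iGibsonEnv

    # Placeholder config: any YAML shipped with the package should work here;
    # igibson.configs_path pointing at the bundled configs is an assumption.
    config = os.path.join(igibson.configs_path, "turtlebot_nav.yaml")
    env = iGibsonEnv(config_file=config, mode="headless")
    try:
        for _ in range(10):
            env.reset()
            for _ in range(100):
                action = env.action_space.sample()            # random action from the Gym action space
                state, reward, done, info = env.step(action)  # standard Gym step
                if done:
                    break
    finally:
        env.close()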

Latest Updates

[8/9/2021] Major update to iGibson, reaching iGibson 2.0; for details, please refer to our arXiv preprint.

  • iGibson 2.0 supports object states, including temperature, wetness level, cleanliness level, and toggled and sliced states, necessary to cover a wider range of tasks.
  • iGibson 2.0 implements a set of predicate logic functions that map the simulator states to logic states like Cooked or Soaked (see the sketch after this list).
  • iGibson 2.0 includes a virtual reality (VR) interface to immerse humans in its scenes to collect demonstrations.
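
A minimal sketch of how such object states can be queried from Python, assuming the igibson.object_states module and the per-object states dictionary keep the names used here (Temperature, Cooked); check the documentation for the exact API:

    from igibson import object_states

    def heat_and_check_cooked(obj):
        # `obj` is any object already imported into the simulator that exposes a
        # `states` dict; the state classes below follow the feature list above
        # and are assumptions about the shipped API.
        if object_states.Temperature in obj.states:
            obj.states[object_states.Temperature].set_value(100.0)  # degrees Celsius
        if object_states.Cooked in obj.states:
            return obj.states[object_states.Cooked].get_value()     # logic state derived from the temperature history
        return None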

[12/1/2020] Major update to iGibson, reaching iGibson 1.0; for details, please refer to our arXiv preprint.

  • Release of the iGibson dataset, which includes 15 fully interactive scenes and 500+ object models annotated with materials and physical attributes on top of existing 3D articulated models.
  • Compatibility with CubiCasa5K and 3D-Front scene descriptions, adding more than 8000 extra interactive scenes!
  • New features in iGibson: physically based rendering, 1-beam and 16-beam LiDAR, domain randomization, motion-planning integration, tools to collect human demonstrations, and more! (A short scene-loading sketch follows this list.)
  • Code refactoring, better class structure and cleanup.
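
To make the scene-loading and domain-randomization features above concrete, here is a small sketch. It assumes the Simulator and InteractiveIndoorScene classes at the module paths shown, the Rs_int scene id, and the randomization keyword names; treat all of these as assumptions and check the documentation for your installed version:

    from igibson.simulator import Simulator
    from igibson.scenes.igibson_indoor_scene import InteractiveIndoorScene

    s = Simulator(mode="headless")
    try:
        # Load one of the fully interactive scenes with texture randomization enabled.
        scene = InteractiveIndoorScene("Rs_int", texture_randomization=True, object_randomization=False)
        s.import_scene(scene)
        for _ in range(100):
            s.step()  # advance physics (and rendering, in GUI modes)
    finally:
        s.disconnect()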

[05/14/2020] Added dynamic light support 🔦

[04/28/2020] Added support for Mac OSX 💻

Citation

If you use iGibson or its assets and models, consider citing the following publications:

@misc{li2021igibson,
      title={iGibson 2.0: Object-Centric Simulation for Robot Learning of Everyday Household Tasks}, 
      author={Chengshu Li and Fei Xia and Roberto Mart\'in-Mart\'in and Michael Lingelbach and Sanjana Srivastava and Bokui Shen and Kent Vainio and Cem Gokmen and Gokul Dharan and Tanish Jain and Andrey Kurenkov and Karen Liu and Hyowon Gweon and Jiajun Wu and Li Fei-Fei and Silvio Savarese},
      year={2021},
      eprint={2108.03272},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
}
@inproceedings{shen2021igibson,
      title={iGibson 1.0: a Simulation Environment for Interactive Tasks in Large Realistic Scenes}, 
      author={Bokui Shen and Fei Xia and Chengshu Li and Roberto Mart\'in-Mart\'in and Linxi Fan and Guanzhi Wang and Claudia P\'erez-D'Arpino and Shyamal Buch and Sanjana Srivastava and Lyne P. Tchapmi and Micael E. Tchapmi and Kent Vainio and Josiah Wong and Li Fei-Fei and Silvio Savarese},
      booktitle={2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
      year={2021},
      pages={accepted},
      organization={IEEE}
}

Documentation

The documentation for iGibson can be found here: iGibson Documentation. It includes an installation guide (with data download instructions), a quickstart guide, code examples, and API references.

If you want to know more about iGibson, you can also check out our webpage, the iGibson 2.0 arXiv preprint, and the iGibson 1.0 arXiv preprint.

Downloading the Dataset of 3D Scenes

For instructions on installing iGibson and downloading the dataset, please visit the installation guide and the dataset download guide.
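
As a rough sketch of the non-interactive part of that setup, the core assets and demo data can also be fetched from Python; the helper names below are assumptions about igibson.utils.assets_utils, and the full interactive scene dataset additionally requires agreeing to the license terms described in the dataset download guide:

    from igibson.utils.assets_utils import download_assets, download_demo_data

    download_assets()     # core robot and object assets
    download_demo_data()  # small demo scene (Rs) used by the quickstart examples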

There are other datasets we link to iGibson. We include support for CubiCasa5K and 3D-Front scenes, adding more than 10,000 extra interactive scenes to use in iGibson! Check our documentation on how to use them.

We also maintain compatibility with datasets of 3D reconstructed large real-world scenes (homes and offices) that you can download and use with iGibson. For Gibson Dataset and Stanford 2D-3D-Semantics Dataset, please fill out this form. For Matterport3D Dataset, please fill in this form and send it to [email protected]. Please put "use with iGibson simulator" in your email. Check our dataset download guide for more details.

Using iGibson with VR

If you want to use the iGibson VR interface, please visit the [VR guide (TBA)].

Contributing

This is the GitHub repository for the iGibson 2.0 release (pip package igibson). (For iGibson 1.0, please use the 1.0 branch.) Bug reports, suggestions for improvement, and community developments are encouraged and appreciated. Please consider creating an issue or sending us an email.

Support for our previous version of the environment, Gibson, can be found in the following repository.

Acknowledgments

iGibson uses code from a few open-source repositories. Without the efforts of these folks (and their willingness to release their implementations under permissible copyleft licenses), iGibson would not be possible. We thank these authors for their efforts!

Comments
  • Motion planning doesn't avoid obstacles


    Motion-planned arm movement will not avoid walls in an interactive scene. Do walls have a body ID like floors that should be appended to the MotionPlanningWrapper's obstacles list?

    opened by CharlesAverill 31
  • get_lidar_all


    Hello, https://github.com/StanfordVL/iGibson/blob/5f8d253694b23b41c53959774203ba5787578b74/igibson/render/mesh_renderer/mesh_renderer_cpu.py#L1390 The function get_lidar_all is not working. The camera does not turn during the 4 iterations, so the result of the readings is the same chair scene rotated 90 degrees, 4 times, and patched together. I am trying to reconstruct a 360-degree scene by transforming the 3D streams to the global coordinate system and patching them together, but nothing is working. Please help.

    opened by elhamAm 22
  •  Exception: floors.txt cannot be found in model: area1


    Hi, something goes wrong when I run roslaunch gibson2-ros turtlebot_rgbd.launch:
    it shows Exception: floors.txt cannot be found in model: area1. I have downloaded the entire gibson_v2 dataset, and the area1 subset does not contain the file floors.txt. How can I get floors.txt?

    opened by Jingjinganhao 18
  • ERROR: Unable to initialize EGL


    Hi team, thank you for maintaining this project.

    My iGibson installation went fine, but I am facing an issue that seems common among many iGibson beginners.

    (igib) ➜  ~ python -m igibson.examples.environments.env_nonint_example
    
     _   _____  _  _
    (_) / ____|(_)| |
     _ | |  __  _ | |__   ___   ___   _ __
    | || | |_ || || '_ \ / __| / _ \ | '_ \
    | || |__| || || |_) |\__ \| (_) || | | |
    |_| \_____||_||_.__/ |___/ \___/ |_| |_|
    
    ********************************************************************************
    Description:
        Creates an iGibson environment from a config file with a turtlebot in Rs (not interactive).
        It steps the environment 100 times with random actions sampled from the action space,
        using the Gym interface, resetting it 10 times.
        ********************************************************************************
    INFO:igibson.render.mesh_renderer.get_available_devices:Command '['/home/mukul/iGibson/igibson/render/mesh_renderer/build/test_device', '0']' returned non-zero exit status 1.
    INFO:igibson.render.mesh_renderer.get_available_devices:Device 0 is not available for rendering
    WARNING:igibson.render.mesh_renderer.mesh_renderer_cpu:Device index is larger than number of devices, falling back to use 0
    WARNING:igibson.render.mesh_renderer.mesh_renderer_cpu:If you have trouble using EGL, please visit our trouble shooting guideat http://svl.stanford.edu/igibson/docs/issues.html
    libEGL warning: DRI2: failed to create dri screen
    libEGL warning: DRI2: failed to create dri screen
    ERROR: Unable to initialize EGL
    

    I went through all the closed issues related to this, but nothing helped. I also went through the troubleshooting guide and things seemed fine to me. Here are the outputs of some commands I ran to check the EGL installation:

    • (igib) ➜  ~ ldconfig -p | grep EGL
      	libEGL_nvidia.so.0 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libEGL_nvidia.so.0
      	libEGL_mesa.so.0 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libEGL_mesa.so.0
      	libEGL.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libEGL.so.1
      	libEGL.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libEGL.so
      
    • (igib) ➜  ~ nvidia-smi
      Thu Mar 31 15:28:55 2022       
      +-----------------------------------------------------------------------------+
      | NVIDIA-SMI 510.47.03    Driver Version: 510.47.03    CUDA Version: 11.6     |
      |-------------------------------+----------------------+----------------------+
      | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
      | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
      |                               |                      |               MIG M. |
      |===============================+======================+======================|
      |   0  NVIDIA GeForce ...  On   | 00000000:01:00.0  On |                  N/A |
      | 41%   39C    P8    20W / 215W |    331MiB /  8192MiB |      2%      Default |
      |                               |                      |                  N/A |
      +-------------------------------+----------------------+----------------------+
      
    • Reinstalling after setting USE_GLAD to FALSE didn't work either.

    • (base) ➜  ~ ./iGibson/igibson/render/mesh_renderer/build/query_devices
      2
      
      (base) ➜  ~ ./iGibson/igibson/render/mesh_renderer/build/test_device 0
      libEGL warning: DRI2: failed to create dri screen
      libEGL warning: DRI2: failed to create dri screen
      INFO: Unable to initialize EGL
      
      
      (base) ➜  ~ ./iGibson/igibson/render/mesh_renderer/build/test_device 1
      INFO: Loaded EGL 1.5 after reload.
      INFO: GL_VENDOR=Mesa/X.org
      INFO: GL_RENDERER=llvmpipe (LLVM 12.0.0, 256 bits)
      INFO: GL_VERSION=3.1 Mesa 21.2.6
      INFO: GL_SHADING_LANGUAGE_VERSION=1.40
      

    Please let me know if I can share any more information that could be helpful in debugging this.

    Thanks!

    opened by mukulkhanna 16
  • GLSL 1.5.0 is not supported


    Hi,

    I followed the instructions for Gibson2 installation. When I run the demo code test:

    I get this error: GLSL 1.5.0 is not supported. Supported versions are ....

    I did retry the installation with USE_GLAD set to FALSE in CMakeLists, but this resulted in the installation crashing.

    Any ideas on the next steps I can take?

    opened by sanjeevkoppal 14
  • Could you please update your tutorial for ros integration?


    The demo uses ROS 1, TurtleBot 1, and Python 2.7, which are all out of date. With a miniconda environment based on Python 2.7, you cannot even properly install igibson2!

    opened by MRWANG995 13
  • 4 Questions for iGibson 2.0 / Behavior Challenge


    Thanks, your recent help was great! I am amazed by your support, thank you!

    Here a few more points:

    • I tried to use a different activity. Therefore I changed behavior_onboard_sensing.yaml by setting task: boxing_books_up_for_storage, but then I got an error message that the ...fixed_furniture file can't be found. So I activated online_sampling in the yaml file. Does this randomize which objects are loaded and where they are placed?

    But then I got:

    Traceback (most recent call last):
      File "stable_baselines3_behavior_example.py", line 202, in <module>
        main()
      File "stable_baselines3_behavior_example.py", line 137, in main
        env = make_env(0)()
      File "stable_baselines3_behavior_example.py", line 129, in _init
        physics_timestep=1 / 300.0,
      File "/is/sg2/jmeier/behaviorchallenge/iGibson/igibson/envs/behavior_mp_env.py", line 108, in __init__
        automatic_reset=automatic_reset,
      File "/is/sg2/jmeier/behaviorchallenge/iGibson/igibson/envs/behavior_env.py", line 64, in __init__
        render_to_tensor=render_to_tensor,
      File "/is/sg2/jmeier/behaviorchallenge/iGibson/igibson/envs/igibson_env.py", line 60, in __init__
        render_to_tensor=render_to_tensor,
      File "/is/sg2/jmeier/behaviorchallenge/iGibson/igibson/envs/env_base.py", line 78, in __init__
        self.load()
      File "/is/sg2/jmeier/behaviorchallenge/iGibson/igibson/envs/behavior_env.py", line 175, in load
        self.load_task_setup()
      File "/is/sg2/jmeier/behaviorchallenge/iGibson/igibson/envs/behavior_env.py", line 164, in load_task_setup
        self.load_behavior_task_setup()
      File "/is/sg2/jmeier/behaviorchallenge/iGibson/igibson/envs/behavior_env.py", line 132, in load_behavior_task_setup
        online_sampling=online_sampling,
      File "/is/sg2/jmeier/behaviorchallenge/iGibson/igibson/activity/activity_base.py", line 92, in initialize_simulator
        self.initial_state = self.save_scene()
      File "/is/sg2/jmeier/behaviorchallenge/iGibson/igibson/activity/activity_base.py", line 98, in save_scene
        self.state_history[snapshot_id] = save_internal_states(self.simulator)
      File "/is/sg2/jmeier/behaviorchallenge/iGibson/igibson/utils/checkpoint_utils.py", line 38, in save_internal_states
        for name, obj in simulator.scene.objects_by_name.items():
    AttributeError: 'NoneType' object has no attribute 'objects_by_name'
    

    Can you help me load other activities? Do I have to take additional steps to load my own activities besides placing them in bddl/activity_definitions/, or would you recommend placing them somewhere else?

    • I would like to use the BEHAVIOR Challenge editor to create a custom activity, but it seems inaccessible. Can you say when we will be able to use it again? https://behavior-annotations.herokuapp.com/. If support on the BEHAVIOR Challenge's GitHub is as quick as yours, I don't mind posting it there ;-)

    • A theoretical question: is it possible to transport an object that itself carries objects, e.g. to put an object into a bin and then carry the bin, including the object, in one hand?

    • Is it possible to do all 100 activities in the discrete action space? If so, how would I remove dust, for example?

    opened by meier-johannes94 13
  • Inverse Kinematics example is not up-to-date


    The Inverse Kinematics example script does not work out-of-the-box, and will error out with a message about control_freq being specified in Fetch's configuration file.

    When this error is bypassed by commenting out the assertion, errors still occur. Fetch does not have a "robot_body" attribute, so

    fetch.robot_body.reset_position([0, 0, 0])
    

    should become

    fetch.reset_position([0, 0, 0])
    

    which is the standard in the functioning examples.

    Similarly, it seems that

    fetch.get_end_effector_position()
    

    should become

    fetch.links["gripper_link"].get_position()
    

    RobotLink does not have a body_part_index, so

    robot_id, fetch.links["gripper_link"].body_part_index, [x, y, z], threshold, maxIter
    

    should become something like

    robot_id, fetch.links["gripper_link"].(body/link)_id, [x, y, z], threshold, maxIter
    

    After all of these changes, the example wildly flails Fetch's arm around, which I wouldn't imagine is the intended behavior of the example.

    This script is fairly important for outlining the usage of IK in iGibson. If I fix it, I will submit a PR. Just wanted to outline the issue here as well.

    opened by CharlesAverill 12
  • PointNav Task


    Hi, I was trying to train a PointNav agent using the given example 'stable_baselines3_example.py', but it gives me a memory error (attached). I solved this by reducing 'num_environments' from 8 to 1, but then it isn't converging. I also attached the TensorBoard logs. Do I need to change any other parameters (e.g., learning rate) to make it work with 1 environment?

    opened by asfandasfo 11
  • Cannot download the dataset from Gibson Database of 3D Spaces


    Hi @fxia22 and @ChengshuLi, I tried to download the Gibson2 Room Dataset from https://docs.google.com/forms/d/e/1FAIpQLScWlx5Z1DM1M-wTSXaa6zV8lTFkPmTHW1LqMsoCBDWsTDjBkQ/viewform, and I couldn't access the cloud storage because of the following issue.

    This XML file does not appear to have any style information associated with it. The document tree is shown below. UserProjectAccountProblem User project billing account not in good standing.

    The billing account for the owning project is disabled in state absent

    Could you please check if the payment was properly made?

    opened by jjanixe 11
  • docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]]

    Hi there,

    I am unable to get either the docker or the pip installation to run with a GUI on a remote server (Ubuntu 18.04.5 LTS). nvidia-smi shows NVIDIA-SMI 450.80.02, Driver Version 450.80.02, CUDA Version 11.0, with a GeForce RTX 2080 SUPER.

    After installing docker according to these directions: https://docs.docker.com/engine/install/ubuntu/
    sudo docker run hello-world runs successfully. I then cloned the repository:

    git clone [email protected]:StanfordVL/iGibson.git
    cd iGibson
    ./docker/pull-images.sh

    docker images shows that I have these repositories downloaded:
    igibson/igibson-gui latest f1609b44544a 6 days ago 8.11GB
    igibson/igibson latest e2d4fafb189b 6 days ago 7.48GB

    But sudo ./docker/headless-gui/run.sh elicits this error: Starting VNC server on port 5900 with password 112358 please run "python simulator_example.py" once you see the docker command prompt: docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].

    sudo ./docker/base/run.sh also elicits: docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].

    One guess is that something is wrong with OpenGL, but I don't know how to fix it. If I run glxinfo -B, I get name of display: localhost:12.0 libGL error: No matching fbConfigs or visuals found libGL error: failed to load driver: swrast display: localhost:12 screen: 0 direct rendering: No (If you want to find out why, try setting LIBGL_DEBUG=verbose) OpenGL vendor string: Intel Inc. OpenGL renderer string: Intel(R) Iris(TM) Plus Graphics 655 OpenGL version string: 1.4 (2.1 INTEL-14.7.8)

    Note: I can successfully run xeyes on the server and have it show up on my local machine. And glxgears shows the gears image but the gears are not rotating. (and returns this error: libGL error: No matching fbConfigs or visuals found libGL error: failed to load driver: swrast )

    I also tried the steps from the trouble shooting page: ldconfig -p | grep EGL yields libEGL_nvidia.so.0 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libEGL_nvidia.so.0 libEGL_nvidia.so.0 (libc6) => /usr/lib/i386-linux-gnu/libEGL_nvidia.so.0 libEGL_mesa.so.0 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libEGL_mesa.so.0 libEGL.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libEGL.so.1 libEGL.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libEGL.so And I checked that /usr/lib/x86_64-linux-gnu/libEGL.so -> libEGL.so.1.0.0

    I also do not appear to have any directories such as /usr/lib/nvidia-vvv (I only have /usr/lib/nvidia, /usr/lib/nvidia-cuda-toolkit, and /usr/lib/nvidia-visual-profiler)

    Any help would be very much appreciated! Thank you so much.

    opened by izkula 10
  • Angular velocity improperly calculated for TwoWheelRobot for proprioception dictionary


    The TwoWheelRobot seems to be incorrectly calculating the base angular velocity that is returned in the proprioception dictionary.

    $\omega$ = angular velocity, $\upsilon$ = linear velocity, $V_r$ = right wheel velocity, $V_l$ = left wheel velocity, $R$ = wheel radius, $l$ = wheel axle length.

    The incorrect formula can be found here and is

    $$\omega=\frac{V_r-V_l}{l}$$
    

    The equations to get wheel velocities from linear and angular velocities that are applied to a DD controller are here. These equations seem to be the source of truth that the proprioception calculation should match. The equations are the following:

    $$V_l = \frac{\upsilon - \omega \cdot l/2}{R}$$
    $$V_r = \frac{\upsilon + \omega \cdot l/2}{R}$$
    

    Solving for $\omega$ and $\upsilon$ results in the following equations:

    $$\omega = \frac{(V_l - V_r) \cdot R}{2 \cdot l}$$
    $$\upsilon = \frac{(V_l + V_r) \cdot R}{2}$$
    

    Ultimately, I think the angular velocity formula needs to be updated here to this $\omega = \frac{(V_l - V_r) \cdot R }{2 \cdot l}$

    opened by sujaygarlanka 0
  • Error in Mesh Renderer cpu file


    When running the ext_object scripts, I encountered an error in the mesh renderer. On line 1094 of mesh_renderer_cpu.py the code refers to an attribute of an InstanceGroup object called pose_rot. The actual attribute as defined in the InstanceGroup object is poses_rot. The line below is similarly affected, with the pose_trans call needing to be poses_trans. My code works when I fix the typo on line 1094, but I wanted to let you know so you can fix it for others.

    opened by mullenj 0
  • Vision sensor issue in VR environment


    When I put both a Fetch robot and a BehaviorRobot in a VR environment (the BehaviorRobot is the VR avatar) and have a vision sensor in the environment YAML file, I get the issue below. I believe this may be a bug in mesh_renderer_cpu.py, where it tries to get RGB data for all robots in the scene and fails when it reaches the BehaviorRobot. I think it needs to skip BehaviorRobots. Is this in fact a bug, or an issue on my end? Thanks.

    Traceback (most recent call last):
      File "main.py", line 79, in <module>
        main()
      File "main.py", line 73, in main
        state, reward, done, _ = env.step(action)
      File "C:\Users\icaro\513-final-project\igibson\envs\igibson_env.py", line 360, in step
        state = self.get_state()
      File "C:\Users\icaro\513-final-project\igibson\envs\igibson_env.py", line 279, in get_state
        vision_obs = self.sensors["vision"].get_obs(self)
      File "C:\Users\icaro\513-final-project\igibson\sensors\vision_sensor.py", line 155, in get_obs
        raw_vision_obs = env.simulator.renderer.render_robot_cameras(modes=self.raw_modalities)
      File "C:\Users\icaro\513-final-project\igibson\render\mesh_renderer\mesh_renderer_cpu.py", line 1256, in render_robot_cameras
        frames.extend(self.render_single_robot_camera(robot, modes=modes, cache=cache))
      File "C:\Users\icaro\513-final-project\igibson\render\mesh_renderer\mesh_renderer_cpu.py", line 1270, in render_single_robot_camera
        for item in self.render(modes=modes, hidden=hide_instances):
    TypeError: 'NoneType' object is not iterable
    
    opened by sujaygarlanka 0
  • I want to use the quadrotor in iGibson 1.0, but I can't find the corresponding yaml file


    As shown in the picture above, I want to use the example code igibson/examples/demo/robot_example.py to make the four robots have a fun cocktail party. But I want to replace one of them with a quadrotor, and I can't find the quadrotor's yaml file in /igibson/examples/configs. What should I do next?

    opened by YigaoWang 0
  • BehaviorRobot issue when use_tracked_body set to false


    The robot misaligns the body with the hands and head when the use_tracked_body parameter is false. Also, the body falls as if it is disconnected from the hands and head. Neither happens when use_tracked_body is true. The picture attached shows how the robot is rendered in the beginning. Do you know why this may be the case, or is it a bug?

    I am trying to have the robot move about the space using an Oculus joystick, so I assume that setting this parameter to false is required.


    bug 
    opened by sujaygarlanka 5
Releases(2.2.1)
  • 2.2.1(Oct 27, 2022)

    iGibson 2.2.1 is a new patch version with the below changes:

    Changelog:

    • Restores support for legacy BehaviorRobot proprioception dimensionality to match BEHAVIOR baselines, using the legacy_proprioception constructor flag.
    • Fixes setuptools build issues.
    • Remove references to non-dataset scenes.
    • Fix BehaviorRobot saving/loading bugs.

    Full Changelog: https://github.com/StanfordVL/iGibson/compare/2.2.0...2.2.1

  • 2.2.0(May 9, 2022)

    iGibson 2.2.0 is a new minor version with the below features:

    Changelog:

    • Fixes iGibson ROS integration
    • Adds the Tiago robot
    • Adds primitive action interface and a sample set of (work-in-progress) object-centric action primitives
    • Fixes some bugs around point nav task robot pose sampling
    • Fixes some bugs around occupancy maps

    Full Changelog: https://github.com/StanfordVL/iGibson/compare/2.1.0...2.2.0

  • 2.1.0(Mar 10, 2022)

    iGibson 2.1.0 is a bugfix release (that is numbered as a minor version because 2.0.6, which was a breaking change, was incorrectly numbered as a patch).

    Changelog:

    • Fixed performance regression in scenes with large numbers of markers (see #169)
    • Fixed broken iGibson logo
    • Fixed Docker images
    • Removed vendored OpenVR to drastically shrink package size
    • Add better dataset version checking

    Full Changelog: https://github.com/StanfordVL/iGibson/compare/2.0.6...2.1.0

  • 2.0.6(Feb 17, 2022)

    Bug-fixes

    • Fix texture randomization
    • Renderer updates object poses when the objects' islands are awake
    • Set ignore_visual_shape to True by default
    • EmptyScene render_floor_plane set to True by default
    • Fix shadow rendering for openGL 4.1
    • Fix VR demo scripts

    Improvements

    • Major refactoring of Scene saving and loading
    • Major refactoring of unifying Robots into Objects
    • Make BehaviorRobot inherit BaseRobot
    • Clean up robot demos
    • Add optical flow example
    • Improve AG (assistive grasping)
    • Support for multi-arm robots
    • Handle hidden instances for optimized renderer
    • Unify semantic class ID
    • Clean up ray examples
    • Move VR activation out of BehaviorRobot
    • Base motion planning using onboard sensing, global 2d map, or full observability
    • Add gripper to JR2
    • Add dataset / assets version validation

    Full Changelog: https://github.com/StanfordVL/iGibson/compare/2.0.5...2.0.6

  • 2.0.5(Jan 21, 2022)

    Re-release of iGibson 2.0.4 due to an issue in the PyPI distribution pipeline.

    Bug-fixes

    • Robot camera rendering where there is non-zero rotation in the x-axis (forward direction)
    • Rendering floor plane in StaticIndoorScene
    • BehaviorRobot assisted grasping ray-casting incorrect
    • BehaviorRobot head rotation incorrect (moving faster than it's supposed to)
    • URDFObject bounding box computation incorrect
    • EGL context error if pybullet GUI created before EGL context
    • Rendering on retina screens
    • Viewer breaks in planning mode when no robot
    • LiDAR rendering

    Improvements

    • Major refactoring of Simulator (including rendering mode), Task, Environment, Robot, sampling code, scene/object/robot importing logic, etc.
    • Better CI and automation
    • Add predicates of BehaviorTask to info of Env
    • Major updates of examples
    • Minor updates of docs

    New Features

    • Add Controller interface to all robots

    Full Changelog: https://github.com/StanfordVL/iGibson/compare/2.0.3...2.0.5

  • 2.0.3(Nov 10, 2021)

    Bug-fixes

    • pybullet restore state
    • adjacency ray casting
    • link CoM frame computation
    • semantic/instance segmentation rendering
    • simulator force_sync renderer
    • material id for objects without valid MTL
    • Open state checking for windows
    • BehaviorRobot trigger fraction out of bound
    • BehaviorRobot AG joint frame not at contact point

    Improvements

    • Refactor iG object inheritance
    • Improve documentation
    • Improve sampling
    • scene caches support FetchGripper robot
    • BehaviorRobot action space: delta action on top of actual pose, not "ghost" pose
    • Upgrade shader version to 460
    • Minify docker container size

    New Features

    • VR Linux support
    • GitHub action CI
  • 2.0.2(Oct 19, 2021)

  • 2.0.1(Sep 8, 2021)

  • 2.0.0(Aug 11, 2021)

    Major update to iGibson, reaching iGibson 2.0; for details, please refer to our arXiv preprint.

    • iGibson 2.0 supports object states, including temperature, wetness level, cleanliness level, and toggled and sliced states, necessary to cover a wider range of tasks.
    • iGibson 2.0 implements a set of predicate logic functions that map the simulator states to logic states like Cooked or Soaked.
    • iGibson 2.0 includes a virtual reality (VR) interface to immerse humans in its scenes to collect demonstrations.

    iGibson 2.0 is also the version to use for the BEHAVIOR Challenge. For more information, please visit: http://svl.stanford.edu/behavior/challenge.html

  • 2.0.0rc4(Jul 19, 2021)

  • 1.0.3(Jul 19, 2021)

  • 1.0.1(Dec 24, 2020)

    Changes:

    • Fix python2 compatibility issue.
    • Ship examples and config files with the pip package.
    • Fix shape caching issue.

    Note: if you need to download source code, please download from gibson2-1.0.1.tar.gz, instead of the one GitHub provides, since the latter doesn't include submodules.

    gibson2-1.0.1-cp27-cp27mu-manylinux1_x86_64.whl(23.74 MB)
    gibson2-1.0.1-cp35-cp35m-manylinux1_x86_64.whl(23.74 MB)
    gibson2-1.0.1-cp36-cp36m-manylinux1_x86_64.whl(23.74 MB)
    gibson2-1.0.1-cp37-cp37m-manylinux1_x86_64.whl(23.74 MB)
    gibson2-1.0.1-cp38-cp38-manylinux1_x86_64.whl(23.19 MB)
    gibson2-1.0.1.tar.gz(21.23 MB)
  • 1.0.0(Dec 8, 2020)

    Major update to iGibson, reaching iGibson v1.0; for details, please refer to our technical report.

    • Release of iGibson dataset, which consists of 15 fully interactive scenes and 500+ object models.
    • New features of the Simulator: Physically-based rendering; 1-beam and 16-beam lidar simulation; Domain randomization support.
    • Code refactoring and cleanup.
    gibson2-1.0.0-cp35-cp35m-manylinux1_x86_64.whl(15.45 MB)
    gibson2-1.0.0-cp36-cp36m-manylinux1_x86_64.whl(15.45 MB)
    gibson2-1.0.0-cp38-cp38-manylinux1_x86_64.whl(15.45 MB)
    gibson2-1.0.0.tar.gz(13.09 MB)
  • 0.0.4(Apr 7, 2020)

    iGibson, the Interactive Gibson Environment, is a simulation environment providing fast visual rendering and physics simulation (based on Bullet). It is packed with a dataset with hundreds of large 3D environments reconstructed from real homes and offices, and interactive objects that can be pushed and actuated. iGibson allows researchers to train and evaluate robotic agents that use RGB images and/or other visual sensors to solve indoor (interactive) navigation and manipulation tasks such as opening doors, picking and placing objects, or searching in cabinets.

    Major changes since original GibsonEnv:

    • Support for agent interaction with the environment
    • Faster rendering, with render-to-tensor support
    • Removed the PyOpenGL dependency; better support for headless rendering
    • Support for our latest version of assets.
    gibson2-0.0.4-cp27-cp27mu-manylinux1_x86_64.whl(3.41 MB)
    gibson2-0.0.4-cp35-cp35m-manylinux1_x86_64.whl(3.41 MB)