OpenSurfaces Segmentation UI

Overview

This repository contains the segmentation user interface from the OpenSurfaces project, extracted as a lightweight tool. A dummy server backend is included to run the demo.

You can also view the demo online.

There are two versions of the demo: one that uses django, and one with no framework. The django version runs a dummy django server and compiles the website on the fly as needed. The non-django version is a flat html file extracted from the django version.

If you find this tool helpful, please cite our project:

@inproceedings{bell13opensurfaces,
	author = "Sean Bell and Paul Upchurch and Noah Snavely and Kavita Bala",
	title = "OpenSurfaces: A Richly Annotated Catalog of Surface Appearance",
	booktitle = "SIGGRAPH Conf. Proc.",
	volume = "32",
	number = "4",
	year = "2013",
}

and report any bugs using the GitHub issue tracker. Also, please "star" this project on GitHub; it's nice to see how many people are using our code.

Version 1: Run with Django (Ubuntu Linux)

  1. Install dependencies (coffee-script, django, django-compressor, ua-parser, BeautifulSoup):

     Note: this will change your current django installation if it is not somewhere between 1.4.* and 1.6.*. I suggest looking into the virtualenv package if this is a problem for you.

     ./django-setup-demo.sh

  2. Start the local webserver:

     ./django-run-demo.sh

  3. Visit localhost:8000 in a web browser

To get the demo to work on Mac and Windows, you will have to look at the above scripts and run the equivalent commands for your system.

After drawing 6 polygons, the submit button will show you the POST data that would have been sent to the server.

Version 2: Run without Django (Linux or Mac)

  1. Install npm and node.js. On Ubuntu, this is:

     sudo apt-get install npm nodejs

  2. Install coffee-script:

     sudo npm install -g coffee-script

  3. Build static files (js, css, img) and then start a local python-based webserver:

     ./python-run-demo.sh

  4. Visit localhost:8000 in a web browser

To get the demo to work on Windows, you will have to look at the above scripts and run the equivalent commands for your system.

Project Notes

POST data

When a user submits, the client will POST the data to the same URL. On success, the client expects the JSON response {"message": "success", "result": "success"}. The client will then notify the MTurk server that the task is completed. For more details, see example_project/segmentation/views.py.
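For reference, here is a minimal sketch of such a handler (the view name and the storage step are placeholders, not the project's actual code; see example_project/segmentation/views.py for the real one):

	import json

	from django.http import HttpResponse

	def submit(request):
		if request.method != "POST":
			# a GET would render the segmentation UI; omitted here
			return HttpResponse(status=405)
		results = json.loads(request.POST["results"])  # polygons keyed by photo ID
		# ... store results, time_ms, action_log, etc. ...
		# the client expects exactly this JSON body on success:
		return HttpResponse(
			json.dumps({"message": "success", "result": "success"}),
			content_type="application/json")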

When a user submits, the POST will contain these fields:

results: a dictionary mapping from the photo ID (which is just "1" in
	this example) to a list of polygons.  Example:
	{"1": [[x1,y1,x2,y2,x3,y3,...], [x1,y1,x2,y2,...]]}.
	Coordinates are scaled with respect to the source photo dimensions, so both
	x and y are in the range 0 to 1.

time_ms: amount of time the user spent (whether or not they were active)

time_active_ms: amount of time that the user was active in the current window

action_log: a JSON-encoded log of user actions

screen_width: user screen width

screen_height: user screen height

version: always "1.0"

feedback: omitted if there is no feedback; JSON encoded dictionary of the form:
{
	'thoughts': user's response to "What did you think of this task?",
	'understand': user's response to "What parts didn't you understand?",
	'other': user's response to "Any other feedback, improvements, or suggestions?"
}
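To make the results format concrete, here is a hypothetical helper (not part of the project) that decodes the field and scales the normalized coordinates back into pixels:

	import json

	def polygons_in_pixels(results_json, photo_width, photo_height):
		# Decode the 'results' POST field and scale its 0..1 coordinates
		# to pixels in the source photo.
		results = json.loads(results_json)  # {"1": [[x1,y1,x2,y2,...], ...]}
		return {
			photo_id: [
				[(x * photo_width, y * photo_height)
				 for x, y in zip(poly[0::2], poly[1::2])]
				for poly in polys
			]
			for photo_id, polys in results.items()
		}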

Feedback survey

When the user finishes the task, a popup will ask for feedback. In the django version, disable this by setting ask_for_feedback to 'false' in the file example_project/segmentation/views.py. In the non-django version, update the window.ask_for_feedback variable in index.html.

I recommend asking for feedback after the 2nd or 3rd time a user has submitted, not the first time, and then not asking again (otherwise it gets annoying). Users usually don't have feedback until they have been working for a little while.
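For example, the gating logic can be as simple as this sketch (the function and its arguments are hypothetical; in the django version you would use something like it to set ask_for_feedback):

	# Hypothetical helper: ask for feedback only on a user's 2nd or 3rd
	# submission, and never again once they have answered.
	def should_ask_for_feedback(num_submissions, already_gave_feedback):
		return not already_gave_feedback and num_submissions in (2, 3)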

Compiling from coffeescript

The javascript for the tool is automatically compiled from coffeescript files by django-compressor and accessed by the client at a url of the form /static/cache/js/*.js. This is set up already if using django.

If not using django, the python-run-demo.sh script does this for you by manually compiling the coffeescript files and storing them in the /static/ folder.
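In Python, that manual step amounts to roughly the following (the source path is an assumption; see python-run-demo.sh for the actual commands):

	import glob
	import subprocess

	# compile each .coffee file in place; 'coffee --bare --compile'
	# writes foo.js next to foo.coffee
	for src in glob.glob("static/js/*.coffee"):
		subprocess.check_call(["coffee", "--bare", "--compile", src])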

Browser compatibility

This UI works in Chrome and Firefox only. The Django version includes a browser check that shows an error page if the user is not on Chrome or Firefox or is on a mobile device.
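With the ua-parser dependency, such a check might look roughly like this sketch (the actual check in the django version may differ, and the mobile heuristic here is only an assumption):

	from ua_parser import user_agent_parser

	def browser_supported(ua_string):
		parsed = user_agent_parser.Parse(ua_string)
		browser = parsed['user_agent']['family']
		# rough mobile heuristic based on OS family (an assumption)
		is_mobile = parsed['os']['family'] in ('Android', 'iOS')
		return browser in ('Chrome', 'Firefox') and not is_mobile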

Local /static/ folder

After you run the demo setup, the directory /static/ will contain compiled css and javascript files.

If you are using django and change any part of the static files (js, css, images, coffeescript), you will need to repopulate the static folder with this command:

example_project/manage.py collectstatic --noinput

If you are building on top of this repository:

In example_project/settings.py:

  1. Change SECRET_KEY to some random string.
  2. Fill in the rest of the values (admin name, database, etc).

If you want to add this demo to your own (separate) Django project:

In your settings.py file, make the following changes:

  1. Make sure STATIC_ROOT is set to an absolute writable path.

  2. Add this to the STATICFILES_FINDERS tuple:

	'compressor.finders.CompressorFinder',
  3. Add this to the INSTALLED_APPS tuple:

	'django.contrib.humanize',
	'compressor',
	'segmentation',

  4. Add this to settings.py (e.g. at the end):
	# Django Compressor
	COMPRESS_ENABLED = True
	COMPRESS_OUTPUT_DIR = 'cache'
	COMPRESS_PRECOMPILERS = (
		('text/coffeescript', 'coffee --bare --compile --stdio'),
		('text/less', 'lessc -x {infile} {outfile}'),
	)