Vignette is face tracking software for characters, built on osu!framework.

Overview



Vignette is face tracking software for characters, built on osu!framework. Unlike most solutions, Vignette is:

  • Made with osu!framework, the game framework that powers osu!lazer, the next iteration of osu!.
  • Open source, from the very core.
  • Always evolving - Vignette improves every update, and it tries to know you better too, literally.

Running

We provide releases on GitHub Releases and also on Visual Studio App Center. We release builds to a select few people before creating a release here, so keep an eye on both.

You can also run Vignette by cloning the repository and running this command in your terminal.

dotnet run --project Vignette.Desktop

Developing

Please make sure you meet the prerequisites.

Contributing

The style guide is defined in the .editorconfig file at the root of this repository, and capable editors will pick it up through IntelliSense. Please follow the provided style for consistency.
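For illustration, a typical `.editorconfig` fragment of the kind editors pick up looks like the following (these specific rules are hypothetical and not necessarily the repository's actual settings):

```ini
# Top-most EditorConfig file; editors stop searching parent directories.
root = true

# Rules applied to all C# source files.
[*.cs]
indent_style = space
indent_size = 4
charset = utf-8
trim_trailing_whitespace = true
```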

License

Vignette is Copyright © 2020 Ayane Satomi and the Vignette Authors, licensed under the GNU General Public License v3.0 with the SDK exception. For the full license text, please see the LICENSE file in this repository. Live2D components are additionally covered by a separate license, the Live2D Open Software License.

Commercial Use and Support

While Vignette is GPL-3.0, we do not provide commercial support. Nothing stops you from using it commercially, but if you want dedicated support from the Vignette engineers, we highly recommend the Enterprise tier on our Open Collective.

Comments
  • Refactor User Interface

    First and foremost, this is the awaited UI refresh, which now sports a sidebar instead of a full-screen menu. It also brings updated styling on several components and updates osu!framework and Fluent System Icons. Backdrops (backgrounds) get a significant update as well, now allowing both videos and images as a target.

    Under the hood, I have refactored theming and keybind management (UI to follow). Themes can now be edited on the fly, but only the export button works; applying themes live will follow. I've also laid the foundation for avatar, recognition, and camera settings, but only as hollow controls that don't do anything yet.

    priority:high area:user-interface 
    opened by LeNitrous 18
  • Refactor Vignette.Camera

    This PR fixes issue #234.

    The previous solution I implemented simply skipped adding duplicate items to the FluentDropdown and warned about them with a console write statement.

    Now the solution indexes the friendly names so that all options appear. We're now faced with a "can't open camera by index" bug.
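As a sketch of the indexing approach described above (a hypothetical helper, not the actual Vignette.Camera code), duplicate friendly names could be disambiguated like this:

```python
def index_friendly_names(names):
    """Disambiguate duplicate camera names by appending an index,
    so every physical device appears as a distinct dropdown option."""
    seen = {}
    result = []
    for name in names:
        count = seen.get(name, 0)
        seen[name] = count + 1
        result.append(name if count == 0 else f"{name} #{count + 1}")
    return result

index_friendly_names(["HD Webcam", "HD Webcam", "Virtual Cam"])
# → ["HD Webcam", "HD Webcam #2", "Virtual Cam"]
```

The dropdown then stays one-to-one with device indices, which is also where the "can't open camera by index" follow-up bug lives.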

    opened by Speykious 9
  • Allow osu!framework to not block compositing

    Desktop effects are killed globally while Vignette is running. Some parts, like disabling decorations, are fine, but transparency, wobbly windows, smooth animations for actions, etc. are all disabled for as long as Vignette is running.

    proposal 
    opened by Martmists-GH 8
  • [NeedHelp]It crashed

    It crashed the first time I opened it. I'm using Windows 7 Service Pack 1 with the x64 dotnet runtime 5.0.11.30524.

    In most cases it fails one way, and sometimes another (crash screenshots attached to the issue).

    As far as I know, no logs, crash reports, or dumps are created. Can you help?

    invalid:wont-fix 
    opened by huzpsb 7
  • Vignette bundles the dotnet runtime

    It seems the last issue went missing so I'm re-adding it.

    Reasons to bundle:

    • No need for end user to install it

    Reasons not to bundle:

    • User likely already has dotnet installed
    • Installer or install script can install it if missing
    • Prevent duplication of dependencies
    • Allow package manager (or user) to update dotnet with important fixes without the need for a new Vignette release
    • Some systems may need a custom patch to dotnet, which a bundled runtime would overwrite
    invalid:wont-fix 
    opened by Martmists-GH 6
  • Evaluate CNTK or Tensorflow for Tracking Backend

    Unfortunately, our tracking backend, FaceRecognitionDotNet (which uses DLib and OpenCV), didn't turn out as performant as expected. The delta is too high to produce significant data, and the models currently perform poorly. In light of that, we will have to build a backend we can control directly instead of relying on others' work whose quality we can't verify.

    Right now we're looking at CNTK and Tensorflow. While CNTK is from Microsoft, there is more groundwork around Tensorflow, so we'll have to decide on this.

    proposal priority:high 
    opened by sr229 6
  • Use FFmpeg instead of EmguCV

    Currently, EmguCV is being used only to handle webcam input. We've had various problems with runtimes not being in the right place and cameras not being detected.

    Thus I propose that we use FFmpeg for that task. I think it will be much easier to deal with, as we can use it as a system-installed binary. Not to mention the library is LGPL, which is perfect for our use case.

    priority:medium area:recognition 
    opened by Speykious 5
  • Lag Compensation for Prediction Data to Live2D

    As part of #28, we have discussed how raw data would result in jittery, rough output, even if the neural network used were theoretically as precise as a human eye at predicting the subject's facial movements. To compensate for jittery input, we will implement a form of lag-compensation algorithm.

    Background

    John Carmack's work on latency mitigation for virtual reality devices (source) explains that the latency between the user's physical head movement and what reaches the eyes is critical to the experience. While the document is aimed mainly at virtual reality, one can argue that the methodologies used to provide a seamless VR experience also apply to a face tracking application, since face tracking, like HMDs, is a very demanding "human-in-the-loop" interface.

    Byeong-Doo Choi et al.'s work on frame interpolation uses a novel motion-prediction algorithm, adaptive OBMC, to enhance a target video's temporal resolution. According to the paper, such frame interpolation techniques have been shown to give better results than the algorithms currently used for frame interpolation in the market.

    Strategy

    As stated in the background, there are many strategies for compensating for the raw, jittery prediction data coming from the neural network; here we limit ourselves to these two:

    Frame Interpolation by Motion Prediction

    Byeong-Doo Choi et al. achieve frame interpolation as follows:

    First, we propose the bilateral motion estimation scheme to obtain the motion field of an interpolated frame without yielding the hole and overlapping problems. Then, we partition a frame into several object regions by clustering motion vectors. We apply the variable-size block MC (VS-BMC) algorithm to object boundaries in order to reconstruct edge information with a higher quality. Finally, we use the adaptive overlapped block MC (OBMC), which adjusts the coefficients of overlapped windows based on the reliabilities of neighboring motion vectors. The adaptive OBMC (AOBMC) can overcome the limitations of the conventional OBMC, such as over-smoothing and poor de-blocking

    According to their experiments, this method produces better image quality for the interpolated frames, which would help our neural network's predictions; however, it comes at the cost of processing the video at runtime, since their experiments were performed only on pre-rendered video frames.
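The full AOBMC pipeline is out of scope for a sketch, but the core idea, synthesizing an in-between sample from two known ones, can be illustrated with simple linear interpolation of tracked landmarks (illustrative Python under that simplification, not project code):

```python
import numpy as np

def interpolate_landmarks(prev, curr, t):
    """Linearly interpolate (N, 2) landmark arrays from two tracking
    samples; t in [0, 1] selects how far between them to sample."""
    prev = np.asarray(prev, dtype=float)
    curr = np.asarray(curr, dtype=float)
    return (1.0 - t) * prev + t * curr

# Synthesize one frame halfway between two tracking samples.
a = [[0.0, 0.0], [10.0, 4.0]]
b = [[2.0, 2.0], [14.0, 8.0]]
mid = interpolate_landmarks(a, b, 0.5)  # halfway points: (1, 1) and (12, 6)
```

Motion-compensated interpolation replaces this straight-line assumption with per-block motion vectors, which is what AOBMC refines.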

    View Bypass/Time Warping

    John Carmack's work on reducing input latency for VR HMDs suggests a multitude of methods, one of which is View Bypass: a method achieved by taking a newer sampling of the input.

    To achieve this, the input is sampled once but used by both the simulation and the rendering task, reducing the total latency. However, the input and the game thread must run in parallel, and the programmer must be careful not to reference mutable game state, as that would cause a race condition.
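A minimal sketch of the view-bypass idea (toy `simulate` and `render` stand-ins, all names hypothetical): input is sampled exactly once per frame into an immutable snapshot that both the simulation and the renderer read, so neither touches live input state:

```python
def simulate(pose):
    """Toy simulation step: derive avatar state from the sampled pose."""
    return {"yaw": pose[0] * 0.5}

def render(pose, state):
    """Toy render step: consumes the same snapshot, never the live input."""
    return f"frame(yaw={state['yaw']:.2f}, pose={pose})"

def run_frame(read_device):
    snap = tuple(read_device())  # sample once, freeze as an immutable tuple
    state = simulate(snap)       # simulation reads the snapshot
    return render(snap, state)   # rendering reads the same snapshot

run_frame(lambda: [0.4, 0.1])  # → 'frame(yaw=0.20, pose=(0.4, 0.1))'
```

Because each frame works from its own frozen snapshot, the input loop can keep sampling concurrently without racing the game state.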

    Another method Carmack mentions is Time Warping:

    After drawing a frame with the best information at your disposal, possibly with bypassed view parameters, instead of displaying it directly, fetch the latest user input, generate updated view parameters, and calculate a transformation that warps the rendered image into a position that approximates where it would be with the updated parameters. Using that transform, warp the rendered image into an updated form on screen that reflects the new input. If there are two dimensional overlays present on the screen that need to remain fixed, they must be drawn or composited in after the warp operation, to prevent them from incorrectly moving as the view parameters change.

    There are different warping methods, forward warping and reverse warping, and they can be used together with View Bypass. The added complexity of handling input concurrently with the main loop is workable, as the input loop is entirely independent of the game state.
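As a toy 2D analogue of time warping (not the projective warp real HMDs use; every name below is hypothetical), an already-rendered frame can be shifted by the difference between the view parameters it was rendered with and a newer input sample:

```python
import numpy as np

def time_warp(frame, old_params, new_params, px_per_unit=100.0):
    """Shift a rendered frame so it approximates what would have been
    rendered with the newer view parameters (a pure 2D translation)."""
    delta = (np.asarray(new_params) - np.asarray(old_params)) * px_per_unit
    dx, dy = int(round(delta[0])), int(round(delta[1]))
    # Positive dy moves content down, positive dx moves it right.
    return np.roll(frame, shift=(dy, dx), axis=(0, 1))

frame = np.zeros((4, 4))
frame[0, 0] = 1.0  # a single bright pixel at the origin
warped = time_warp(frame, (0.0, 0.0), (0.01, 0.02))  # pixel moves to (2, 1)
```

Per the quoted passage, any fixed 2D overlays would be composited after this warp so they don't move with the view parameters.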

    Conclusion

    The strategies above would give us a smoother experience; however, based on my analysis, Carmack's solutions are more feasible for a project of our scale. We simply don't have the team or the technical resources for from-camera video interpolation, which would be too computationally expensive to implement with minimal overhead.

    area:documentation proposal priority:high 
    opened by sr229 5
  • Hook up Tracking Worker to Live2D

    As the final task for Milestone 1, we're going to hook up the tracking worker to Live2D and see if we can spot some bugs before we turn in our release.

    proposal priority:high 
    opened by sr229 5
  • User Interface

    We want to customize the Layout, and to do that we need to do the following:

    • Make the Live2D a draggable component
    • Custom Backgrounds (Green Screen default, white default background, or Image).
    • Persist this layout into a format (YAML, perhaps?)

    Todo

    • [ ] Draggable and resizable Live2D container.
    • [ ] Backgrounds support (White background, Green background, user-defined).

    Essentially, we're going to have a layout similar to the mockup attached to this issue.

    proposal priority:high 
    opened by sr229 5
  • Extension System

    Discussed in https://github.com/vignetteapp/vignette/discussions/216

    Originally posted by sr229 May 9, 2021: This has been requested by the community; however, it is fairly low priority as we focus on the core components. The way this works is the following:

    • Extensions can expose their settings in MainMenu.
    • They will be strictly conformant to the o!f model in order to load properly. This is considered the bare minimum required to make an extension.
    • They will be packaged as either a .dll or a .nupkg, which the program can "extract" or "compile" into a DLL, something we can do once we have a better idea of how to dynamically load assemblies.

    Anyone can propose a better design here; since this is an RFC, we appreciate alternative approaches.

    priority:high 
    opened by sr229 4
  • UI controls, sprites, containers, etc. as a NuGet package

    It would be nice if you could publish a separate library that includes all the UI controls, themeable sprites, containers, etc. as a NuGet package. It would allow other developers to integrate it into their projects and get access to a nice suite of UI controls and other components instead of writing them from scratch.

    priority:high area:user-interface 
    opened by Whatareyoulaughingat 6
  • VRM Support

    Here's a little backlog item while we're working on the rendering/scene/model API for extensions. Since this is the reference implementation for all 3D/2D model support extensions, VRM is going to be our flagship extension and will serve as an extension reference for model support.

    References

    proposal priority:high area-extensions 
    opened by sr229 0
  • Steamworks API integration

    As part of #251, we might want to include the Steamworks API in case people have a use for it in our Steam releases. It would be optional and hidden behind a build flag.

    proposal priority:medium 
    opened by sr229 2
  • First time user experience (OOBE)

    Design specifications have now been released for the first-time user experience. It will guide users through setting up the bare essentials so they can get up and running quickly.

    priority:medium area:user-interface 
    opened by sr229 0
  • Internationalization Support (i18n)

    We'll have to support multiple languages. A good start is looking at Crowdin as a source. We'll support languages by demand, but for starters I think we'll support English, Japanese, and Chinese (Simplified and Traditional), given we have people proficient in those languages.

    As for the implementation, that will be the second part of the investigation.

    good first issue priority:low 
    opened by LeNitrous 13
  • Documentation Tasks

    We'll have to document the more significant parts at some point. We'd want contributors to have an idea of how everything works in the back end, after all.

    For now we can direct them to osu!framework's Getting Started wiki pages.

    area:documentation good first issue priority:low 
    opened by LeNitrous 0
Releases (2021.1102.2)