Analyzes your GitHub Profile and presents you with a report on how likely you are to become the next MLH Fellow!

Overview

Fellowship Prediction

GitHub Profile Comparative Analysis Tool Built with BentoML

Fellowship Prediction Header Logo

Table of Contents:

  • Winner
  • Features
  • Disclaimer
  • Technologies Used
  • Contributing
  • Demo
  • Motivation
  • Team
  • License

Winner

This project won the MLH Fellowship Orientation Hackathon - Batch 4 along with other great projects by MLH Fellows. We highly suggest you check them out.

Features

Analyzes your GitHub Profile and presents you with a report on how likely you are to become the next MLH Fellow!

Try it now!

Demo Gif

Provides you with an extensive analysis of the following features of your profile:

Feature          Description
Commits          Number of total commits the user made
Contributions    Number of repositories where the user made contributions
Followers        Number of followers the user has
Forks            Number of forks the user has in their repositories
Issues           Number of issues the user has raised
Organizations    Number of organizations the user is a part of
Repos            Number of repositories the user has
Stars            Number of stars the user has on their repositories

And gives you a comprehensive score of how similar your GitHub Profile is to an average MLH Fellow's GitHub.

It also shows your statistics in a user-friendly data visualization format for you to gauge the range of your skills and become the next MLH Fellow!
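To make the idea of the comprehensive score concrete, here is a minimal sketch of one way such a comparison could be computed. The reference averages and the capped-ratio formula below are illustrative assumptions, not the exact scoring method the deployed service uses.

```python
# Hypothetical illustration of the comprehensive score, NOT the project's exact formula.
# The reference averages are made-up placeholders; the real service derives them from mined fellow data.
AVERAGE_FELLOW = {
    "commits": 500, "contributions": 15, "followers": 40, "forks": 20,
    "issues": 30, "organizations": 3, "repos": 25, "stars": 80,
}

def similarity_score(profile: dict, reference: dict = AVERAGE_FELLOW) -> float:
    """Return a 0-100 score comparing a profile to the reference, capping each feature ratio at 1."""
    ratios = [min(profile.get(feature, 0) / value, 1.0) for feature, value in reference.items()]
    return 100 * sum(ratios) / len(ratios)

print(round(similarity_score({"commits": 250, "repos": 10, "stars": 40}), 1))  # prints 17.5
```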

Disclaimer

Dear user, in building this application we tried our best to provide you with data-driven insights into things you can improve on your GitHub Profile. This is a hackathon project built by Open Source Fellows and is not directly affiliated with MLH in any capacity. A positive score does not guarantee that you will become a fellow, because factors beyond GitHub also affect the decision process.

We also hope that you understand that your GitHub Stats do not affect your value to the community as a developer. We all have different paths to success in our lives, and they do not necessarily involve high scores. Regardless of your numbers, you are going to succeed in your journey.

Technologies Used

Tech Stack Used

We used the following technologies:

  • BentoML along with Heroku to build an API endpoint that calculates the comprehensive score for the user based on a simple query.
  • Flask deployed to Heroku to set up a bridge between the frameworks and collect the input data.
  • React.js served on Firebase to provide user-friendly UI for future MLH fellows to use.
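As a rough picture of how a client talks to the scoring endpoint, the sketch below posts profile features to the BentoML service. The URL, route, and payload schema are placeholders and may differ from the actual deployment.

```python
import requests

# Placeholder endpoint and field names; the deployed service may expect a different schema.
features = {
    "commits": 320, "contributions": 12, "followers": 25, "forks": 10,
    "issues": 15, "organizations": 2, "repos": 18, "stars": 35,
}

response = requests.post(
    "https://example-fellowship-prediction.herokuapp.com/predict",  # hypothetical URL
    json=features,
    timeout=10,
)
response.raise_for_status()
print(response.json())  # expected: a comprehensive score for the given profile
```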

Contributing

To contribute to this open-source project, follow these steps:

  1. Fork the repository.
  2. Create a branch: git checkout -b <branch_name>.
  3. Make your changes and commit them: git commit -m '<commit_message>'.
  4. Push to your branch: git push origin <branch_name>.
  5. Create a pull request.

To work on BentoML:

  1. Go to model/bento_deploy to find necessary files.
  2. Read BentoML Start Guide to learn more about the files.
  3. Improve the BentoML Interface to provide our users with a more accurate score.
  4. Create the BentoML prediction service with python bento_packer.py and commit the saved class from bentoml get IrisClassifier:latest --print-location --quiet (see the sketch below).
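For orientation, here is a rough sketch of what the service definition and bento_packer.py can look like, following the BentoML 0.13-style quickstart the steps above reference. The scikit-learn model, file paths, and artifact name are assumptions; only the IrisClassifier service name and the CLI command come from the steps above.

```python
# bento_service.py (sketch): a BentoML 0.13-style prediction service.
import pandas as pd
from bentoml import BentoService, api, artifacts, env
from bentoml.adapters import DataframeInput
from bentoml.frameworks.sklearn import SklearnModelArtifact


@env(infer_pip_packages=True)
@artifacts([SklearnModelArtifact("model")])
class IrisClassifier(BentoService):
    @api(input=DataframeInput(), batch=True)
    def predict(self, df: pd.DataFrame):
        # Score each row of profile features with the packed model
        return self.artifacts.model.predict(df)
```

```python
# bento_packer.py (sketch): pack a trained model into the service and save it,
# producing the class you then locate with
#   bentoml get IrisClassifier:latest --print-location --quiet
import joblib  # assumed: any scikit-learn-compatible trained model works here

from bento_service import IrisClassifier

model = joblib.load("model.pkl")  # hypothetical path to a trained model

service = IrisClassifier()
service.pack("model", model)
saved_path = service.save()
print(saved_path)
```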

To work on the Back-End:

  1. Consult src/server and its README.
  2. Make contributions.

Alternatively: Reach out to one of the Project Contributors for questions.

Demo

YouTube Logo that Leads to our demo

Motivation

We built this project because we wanted to help prospective MLH Fellows work toward a better GitHub profile, with solid projects and a record of active work. We also wanted to give them some insight into what an average fellow at MLH looks like.

When we were just aspiring to become MLH Fellows, we would look for different sources of information to know what MLH is looking for in their fellows and better ways to prepare. So we tried to address this issue and hopefully support future fellows on their way to success.

However, we want to make an important note: your GitHub Profile does not define you as a developer. Our tool simply lets you look into the data for areas of potential improvement and keep working toward your goals. We do not consider things like:

  • Personal communication levels
  • Spot availability
  • Match in project interests

These points do affect your chances of becoming a fellow, but unfortunately there is no way for us to take them into consideration.

Team

Damir Temir


Damir Temir

Working on the project, I learned the basics of BentoML and of deploying the model server to a cloud platform like Heroku. I also gained some experience in Data Mining and Processing, which is an invaluable skill on my journey toward Machine Learning Engineering.

The contributions I made are:

  • Wrote Jupyter Notebooks where we showcase our work with the GitHub API.
  • Set up a Git repository with active GitHub Projects and proper infrastructure.
  • Mined data on more than 650 fellows in the MLH Fellowship organization.
  • Created a BentoML API node deployed to Heroku for querying.

Aymen Bennabi


Aymen Bennabi

During the hackathon I mainly worked on the Front-End part of the project. I created a friendly UI/UX to collect data and visualize the results. I also helped a little bit with the Back-End by creating a facade API to make working with GitHub easier. The new interface adds a level of abstraction that mainly focuses on the quantitative data we needed for the statistical analysis.

I really enjoyed the Orientation Hackathon. I now feel more confident working with Git/GitHub. I also started learning about building APIs with functional programming (OCaml/Dream).

Tasha Kim


Tasha Kim

Utilizing BentoML gave us a flexible, high-performance framework to serve, manage, and deploy our model for predicting MLH Fellowship status from users' GitHub profiles. In particular, I enjoyed working with libraries like Matplotlib, Seaborn, and Pandas, as well as cloud-native deployment services and API serving, all packaged into a single service.

Some of my contributions were:

  • Implemented an ANOVA model as an alternative, improved statistical comparison to the one we are using now. Our current one works fine, but we can use this when we want a more rigorous and detailed comparison (multiple pairwise, post hoc comparisons for all unplanned comparisons using Tukey's honestly significant difference (HSD) test); see the sketch after this list.
  • Built a CI (continuous integration) pipeline that builds, runs, and tests our Node app as well as our Python app using GitHub Actions.
  • Implemented a method to compute average statistics for the aggregated MLH Fellow data.
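The ANOVA and Tukey HSD comparison mentioned above could look roughly like the sketch below. The DataFrame layout (a group label plus one numeric feature) and the sample values are assumptions for illustration, not the project's actual data.

```python
# Illustrative one-way ANOVA followed by Tukey's HSD post hoc test.
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.DataFrame({
    "group": ["fellow"] * 5 + ["applicant"] * 5 + ["random"] * 5,
    "stars": [80, 95, 60, 120, 70, 30, 45, 25, 50, 40, 5, 10, 2, 8, 12],
})

# One-way ANOVA: is at least one group mean different?
samples = [g["stars"].values for _, g in df.groupby("group")]
f_stat, p_value = f_oneway(*samples)
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4f}")

# Tukey's HSD: which pairs of groups differ (unplanned pairwise comparisons)?
tukey = pairwise_tukeyhsd(endog=df["stars"], groups=df["group"], alpha=0.05)
print(tukey.summary())
```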

Shout out to everyone in our team!

Eyimofe Ogunbiyi


Eyimofe Ogunbiyi

I worked on the Back-End server for the project and the deployment pipeline on Heroku. I was able to use the Flask REST framework for the Back-End, which was a new experience for me.

License

This project is served under the MIT License.

MIT License

Copyright (c) 2021 Damir Temir

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.