Search Git commits in natural language

Overview

NaLCoS - NAtural Language COmmit Search

Search commit messages in your repository in natural language.



NaLCoS (NAtural Language COmmit Search) is a command-line tool for searching commit messages in your repository in natural language.

The key features are:

  • Search commit messages in both local and remote GitHub repositories.
  • Search for commits in a specific branch.
  • Restrict the number of commits to look back in history while searching.
  • Increase the number of retrieved results.


Internally, NaLCoS uses Sentence Transformers with pre-trained weights from multi-qa-MiniLM-L6-cos-v1. I chose this particular model because it offers a good performance vs. speed tradeoff. Since this model was designed for semantic search and has been pre-trained on 215M (question, answer) pairs from diverse sources, it is a good choice for tasks such as finding the similarity between two sentences.

NaLCoS encodes the query string and all the commit messages into their corresponding vector embeddings, computes the cosine similarity between the query and each commit, and uses these scores to rank the commits.
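
The ranking step can be sketched in a few lines with the sentence-transformers library. The following is a minimal illustration of the approach described above, not NaLCoS's actual code, and the commit messages are placeholders:

from sentence_transformers import SentenceTransformer, util

# Load the same pre-trained model NaLCoS uses (downloaded and cached on first use).
model = SentenceTransformer("multi-qa-MiniLM-L6-cos-v1")

query = "improve language"
commit_messages = [
    "fix deprecation label spelling",
    "English search sync",
    "Update diff limit to 500KB",
]

# Encode the query and the commit messages into dense vector embeddings.
query_embedding = model.encode(query, convert_to_tensor=True)
commit_embeddings = model.encode(commit_messages, convert_to_tensor=True)

# Cosine similarity between the query and every commit message, highest first.
scores = util.cos_sim(query_embedding, commit_embeddings)[0]
ranked = sorted(zip(commit_messages, scores.tolist()), key=lambda pair: pair[1], reverse=True)
for message, score in ranked:
    print(f"{score:.3f}  {message}")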

Why did I build this?

Most of my Machine Learning work so far has been done in dedicated environments such as Google Colab or Kaggle. I had been learning Natural Language Processing for a while and wanted to use transformers to build something different: something that is not very resource (read: GPU) intensive and can be used like an everyday tool.

Though many Transformer models are far from fitting this description, I found that distilled models are not as resource-hungry as their larger siblings are infamous for being. I could not find any pre-existing tool for searching Git commits using natural language, so I decided to give this a shot.

Though various improvements are still left, I'm happy with how this initially turned out, and I'm eager to see what further enhancements can make it more efficient and useful.

Requirements

NaLCoS depends on a small set of Python packages; see requirements.txt in the repository for the full list.

Installation

Installing with pip (Recommended)

Install with pip or your favourite Python package manager:

$ pip install nalcos

Run NaLCoS with the --help flag to see all the available options:

$ nalcos --help

Note: When you run the nalcos command for the first time, it will download the model, which is then cached and reused the next time you run NaLCoS.
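
If you prefer to trigger the download ahead of time (for example, before going offline), one way is to instantiate the model once from Python. This is a sketch that assumes the sentence-transformers package installed alongside NaLCoS is importable:

from sentence_transformers import SentenceTransformer

# Instantiating the model downloads the weights the first time and caches them
# locally, so subsequent nalcos runs can load them without re-downloading.
SentenceTransformer("multi-qa-MiniLM-L6-cos-v1")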

Installing bleeding edge from the GitHub repository

  • Clone the repository:
$ git clone https://github.com/thepushkarp/nalcos.git

This also downloads the model weights stored in the nalcos/models directory, so you don't have to download them when running NaLCoS for the first time.

  • Create a virtual environment:
$ virtualenv venv
  • Activate the virtual environment (Linux and macOS):
$ source ./venv/bin/activate
  • Activate the virtual environment (Windows):
$ cd venv/Scripts/
$ activate
  • Install the requirements:
$ pip install -r requirements.txt
  • Change directory to the nalcos directory:
$ cd nalcos/
  • Run NaLCoS with the --help flag to see all the available options:
$ python nalcos.py --help

Usage

Detailed information about the usage of NaLCoS can be found below:

usage: nalcos [-h] [-g] [-n N_MATCHES] [-b BRANCH] [-l LOOK_PAST] [-v] query location

Search a commit in your git repository using natural language.

positional arguments:
  query                 The query to search for similar commit messages.
  location              The repository path to search in. If `-g` flag is not passed, searches locally in the path specified, else
                        takes in a remote GitHub repository name in the format '{owner}/{repo_name}'

optional arguments:
  -h, --help            show this help message and exit
  -g, --github          Flag to search on GitHub instead of searching in a local repository. Due to API limits currently this
                        allows for around 15 lookups per hour from your IP.
  -n N_MATCHES, --n-matches N_MATCHES
                        The number of matching results to return. Default 10.
  -b BRANCH, --branch BRANCH
                        The branch to search in. If not specified, the current branch will be used by default.
  -l LOOK_PAST, --look-past LOOK_PAST
                        Look back this many commits. Default 100.
  -v, --version         show program's version number and exit

Examples

  • Input:
$ python nalcos.py "improve language" "github/docs" --github
  • Output:
Found 100 commits.

                                        Commits related to "improve language" in "github/docs"
┏━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┓
┃ No. ┃ Commit ID ┃ Commit Message                                                        ┃ Commit Author      ┃ Commit Date          ┃
┡━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━┩
│  1. │ 51bfdbb95 │ Merge branch 'main' into fatenhealy-fix-supportedlanguage             │ Faten Healy        │ 2021-09-12T22:26:31Z │
│  2. │ a9c2c8eea │ fix deprecation label spelling (#21474)                               │ Rachael Sewell     │ 2021-09-13T18:12:03Z │
│  3. │ 94e3c092d │ English search sync (#21446)                                          │ Rachael Sewell     │ 2021-09-13T17:30:08Z │
│  4. │ b048e27e9 │ Merge pull request #9909 from github/fatenhealy-fix-supportedlanguage │ Ramya Parimi       │ 2021-09-12T22:35:19Z │
│  5. │ 73c2717f7 │ Fix typo                                                              │ Adrian Mato        │ 2021-09-13T06:35:27Z │
│  6. │ 86b571982 │ Export changes to a branch for codespaces (#21462)                    │ Matthew Isabel     │ 2021-09-13T14:55:50Z │
│  7. │ 969288662 │ Update diff limit to 500KB (#20616)                                   │ jjkennedy3         │ 2021-09-11T09:12:38Z │
│  8. │ f28ee46d4 │ Update OpenAPI Descriptions (#21447)                                  │ github-openapi-bot │ 2021-09-11T09:22:28Z │
│  9. │ 92af3a469 │ update search indexes                                                 │ GitHub Actions     │ 2021-09-12T09:50:46Z │
│ 10. │ e6018f2aa │ update search indexes                                                 │ GitHub Actions     │ 2021-09-11T02:05:19Z │
└─────┴───────────┴───────────────────────────────────────────────────────────────────────┴────────────────────┴──────────────────────┘

Future plans

Please visit the NaLCoS To Do Project Board to see current status and future plans.

Known issues

Not all retrieved results are always relevant. I can think of two primary reasons for this:

  • The data the model was pre-trained on is not representative of how people write commit messages. Since commit messages usually contain technical jargon, merge commit messages, abbreviations and other uncommon terms, the model (which has a limited vocabulary) is not able to generalize well to this data.
  • Two commits may be related even when their commit messages are not similar, and conversely, two commits may be unrelated even when their commit messages are similar. We often need more metadata (such as lines changed, files changed, etc.) to make the predictions more accurate; a hypothetical sketch of this idea follows below.
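
As a purely hypothetical sketch of that second point (this is not something NaLCoS currently does), one could enrich each commit's text with metadata such as the changed file paths before encoding it, for example using GitPython; the repository path and the look-back count of 100 are arbitrary here:

import git  # GitPython

repo = git.Repo(".")
enriched_texts = []
for commit in repo.iter_commits(max_count=100):
    # Combine the commit title with the list of changed files so the embedding
    # reflects what was touched, not just how the message was worded.
    title = commit.message.splitlines()[0]
    changed_files = ", ".join(commit.stats.files)
    enriched_texts.append(f"{title} | files: {changed_files}")

# enriched_texts could then be encoded and ranked exactly like plain commit messages.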

Contributing

Any suggestions, improvements or bug reports are welcome.

Contributors

Thanks goes to these wonderful people (emoji key):


Pushkar Patel

💻 📖 🚧

This project follows the all-contributors specification. Contributions of any kind welcome!

License

This project is licensed under the terms of the MIT license.

Comments
  • Patches

    • :truck: Renames .cache directory to models and add Project Board link in README
    • Adds version.py
    • Adds contributing section in README
    • :lipstick: Add black code style
    • Adds back torch cuda support
    • :zap: Improve similarity computation
    • :tada: Upload to PyPi
    opened by thepushkarp 3
  • Version 0.2: Visual changes

    • Fix status message typo
    • :sparkles: Adds a flag to show similarity scores of the result
    • :sparkles: Adds a flag to display the entire commit message
    • Module error fixes
    • :sparkles: Adds commit links for results from GitHub
    • :art: Improve download progress bar display when loading for the first time
    • :bookmark: Bump version to 0.2
    opened by thepushkarp 1
  • Add a flag to download the model

    Currently, if the model is not downloaded, the program downloads it during the first run, in the middle of the "Retrieving the commits ..." status message.

    This can be improved by adding a flag through which the user can download/redownload the model when they need it.

    Additionally, the program should prompt the user when it is run without downloading the weights with a choice to download it now or to abort the program.

    opened by thepushkarp 1
  • Use a Python Wrapper to the GitHub API

    We can use a Python wrapper for the GitHub API, such as ghapi or PyGitHub, instead of using the requests library (a rough sketch of this idea appears after this comments list).

    Additional Reference: https://docs.github.com/en/rest/overview/libraries#python

    This can help with #11

    opened by thepushkarp 1
  • Visual improvements

    • :sparkles: Adds a flag to show similarity scores of the result
    • :sparkles: Adds a flag to display the entire commit message
    • Module error fixes
    • :sparkles: Adds commit links for results from GitHub
    opened by thepushkarp 0
  • Add Automated Testing

    Automate the testing of the module (preferably with GitHub Actions for CI).

    NOTE: Considering the large installation size and time of the Torch and HuggingFace modules, the resources allocated may go over the GH Actions limit. This is something we have to take care of.

    Follow up of #13

    opened by thepushkarp 0
  • Adds README and some bug fixes

    • Adds API limit exceeded warning
    • :zap: Reverts to using the whole commit message for search; displays only the title
    • :memo: Add README
    • :zap: Reduces default value of look_past from 1000 to 100
    • :bug: Retrieves all branch names for GitHub repos and add branch not found Exception
    opened by thepushkarp 0
  • Try out other models.

    Currently, we are using multi-qa-MiniLM-L6-cos-v1, which has a speed (sentences encoded/sec on 1 V100 GPU) of 14200 and a model size of 80 MB. We should try out other models to see if we can get better performance and speed out of them.

    Additionally, we can also try using other types of tokenizers.

    Further reading:

    • https://www.sbert.net/docs/pretrained_models.html
    • https://huggingface.co/sentence-transformers
    • https://huggingface.co/transformers/tokenizer_summary.html
    help wanted 
    opened by thepushkarp 0
  • Add personal API token support

    Do #28 before this

    Currently, the project is using an unauthenticated GH API which is capped to 60 requests per hour from an IP address.

    We can add the option to add a user's personal API access token to increase this limit.

    enhancement 
    opened by thepushkarp 0
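
As a rough sketch of the GitHub API wrapper idea from the "Use a Python Wrapper to the GitHub API" comment above (and of the personal access token suggestion), a remote commit lookup with PyGitHub could look roughly like this; the repository name and result count are placeholders:

from github import Github  # PyGitHub

# An unauthenticated client is capped at 60 requests per hour; passing a personal
# access token raises that limit considerably.
gh = Github()  # or Github("<personal-access-token>")

repo = gh.get_repo("github/docs")
for commit in repo.get_commits()[:10]:
    print(commit.sha[:9], commit.commit.message.splitlines()[0])
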
Releases(v0.2)
  • v0.2(Sep 18, 2021)

    Changelog

    • Adds an option of showing the similarity score for the results using the -s flag.
    • Adds option of viewing the entire commit message instead of just the commit title using the -v flag.
    • Commits in results retrieved from GitHub have links to the commits
    • Improved the model download progress bar display when loading model for the first time
  • v0.1.1(Sep 14, 2021)

    Changelog

    • Make similarity computation more efficient
    • Add support for computation on CUDA
    • Add Black Code style in requirements
    • Published to PyPi at https://pypi.org/project/nalcos/ 🥳
  • v0.1(Sep 13, 2021)

    Features ✨

    • Search commit messages in both local and remote GitHub repositories.
    • Search for commits in a specific branch.
    • Restrict the number of commits to look back in history while searching.
    • Increase the number of retrieved results.
Owner
Pushkar Patel
Research Intern at SPIRE Labs, IISC Bangalore | GitHub Campus Expert @iiitv