A Neural Language Style Transfer framework to transfer natural language text smoothly between fine-grained language styles like formal/casual, active/passive, and many more. Created by Prithiviraj Damodaran. Open to pull requests and other forms of collaboration.

Overview

Styleformer

A Neural Language Style Transfer framework to transfer natural language text smoothly between fine-grained language styles like formal/casual, active/passive, and many more. For instance, see What makes text formal or casual/informal.

Use cases for Styleformer

Area 1: Data Augmentation

  • Augment training datasets with various fine-grained language styles (see the sketch below).
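
For instance, a minimal augmentation sketch; the training list below is hypothetical, and only Styleformer(style=0) and sf.transfer() come from this library:

from styleformer import Styleformer

# Casual-to-formal transfer (style=0) used to grow a toy training set.
sf = Styleformer(style=0)

train_sentences = [               # hypothetical training data
    "btw - ur avatar looks familiar",
    "who gives a crap?",
]

augmented = list(train_sentences)
for sentence in train_sentences:
    formal = sf.transfer(sentence)
    if formal is not None:        # transfer() returns None when no good transfer is found
        augmented.append(formal)

print(augmented)                  # originals plus their formal variants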

Area 2: Post-processing

  • Apply style transfers to machine-generated text (see the sketch below).
  • e.g.
    • Refine summarised text to active voice + formal tone.
    • Refine translated text to a more casual tone to reach a younger audience.
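
A post-processing sketch along these lines; summarize() is a hypothetical stand-in for your own summarisation model, and the casual-to-formal direction is used because it is the only model currently in Beta (see the Models table below):

from styleformer import Styleformer

sf = Styleformer(style=0)  # casual to formal

def summarize(document):
    # Hypothetical: replace with your actual summarisation model.
    return "It's piece of cake, we can do it"

summary = summarize("...some long document...")
formal_summary = sf.transfer(summary)
# Fall back to the raw summary when no good-quality transfer is available.
print(formal_summary if formal_summary is not None else summary)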

Area 3: Controlled paraphrasing

  • Formal <=> Casual and Active <=> Passive style transfers add a notion of control over how we paraphrase, compared to free-form paraphrasing, where there is no control or guarantee over the paraphrases.

Area 4: Assisted writing

  • Integrate this into any human writing interface, such as email clients, messaging tools or social media post authoring tools. Your creativity is the only limit to the uses.
  • e.g.
    • Polish an email with a business tone for professional use.

Installation

pip install git+https://github.com/PrithivirajDamodaran/Styleformer.git

Quick Start

Casual to Formal (Available now!)

from styleformer import Styleformer
import torch
import warnings
warnings.filterwarnings("ignore")

'''
# Uncomment for reproducibility
def set_seed(seed):
  torch.manual_seed(seed)
  if torch.cuda.is_available():
    torch.cuda.manual_seed_all(seed)

set_seed(1234)
'''

# style = [0=Casual to Formal, 1=Formal to Casual, 2=Active to Passive, 3=Passive to Active etc..]
sf = Styleformer(style = 0) 

source_sentences = [
"I am quitting my job",
"Jimmy is on crack and can't trust him",
"What do guys do to show that they like a gal?",
"i loooooooooooooooooooooooove going to the movies.",
"That movie was fucking awesome",
"My mom is doing fine",
"That was funny LOL" , 
"It's piece of cake, we can do it",
"btw - ur avatar looks familiar",
"who gives a crap?",
"Howdy Lucy! been ages since we last met.",
"Dude, this car's dope!",
"She's my bestie from college",
"I kinda have a feeling that he has a crush on you.",
"OMG! It's finger-lickin' good.",
]   

for source_sentence in source_sentences:
    target_sentence = sf.transfer(source_sentence)
    print("-" *100)
    print("[Informal] ", source_sentence)
    print("-" *100)
    if target_sentence is not None:
        print("[Formal] ",target_sentence)
        print()
    else:
        print("No good quality transfers available !")
[Informal]  I am quitting my job
[Formal]  I will be stepping down from my job.
----------------------------------------------------------------------------------------------------
[Informal]  Jimmy is on crack and can't trust him
[Formal]  Jimmy is a crack addict I cannot trust him
----------------------------------------------------------------------------------------------------
[Informal]  What do guys do to show that they like a gal?
[Formal]  What do guys do to demonstrate their affinity for women?
----------------------------------------------------------------------------------------------------
[Informal]  i loooooooooooooooooooooooove going to the movies.
[Formal]  I really like to go to the movies.
----------------------------------------------------------------------------------------------------
[Informal]  That movie was fucking awesome
[Formal]  That movie was wonderful.
----------------------------------------------------------------------------------------------------
[Informal]  My mom is doing fine
[Formal]  My mother is doing well.
----------------------------------------------------------------------------------------------------
[Informal]  That was funny LOL
[Formal]  That was hilarious
----------------------------------------------------------------------------------------------------
[Informal]  It's piece of cake, we can do it
[Formal]  The whole process is simple and is possible.
----------------------------------------------------------------------------------------------------
[Informal]  btw - ur avatar looks familiar
[Formal]  Also, your avatar looks familiar.
----------------------------------------------------------------------------------------------------
[Informal]  who gives a crap?
[Formal]  Who cares?
----------------------------------------------------------------------------------------------------
[Informal]  Howdy Lucy! been ages since we last met.
[Formal]  Hello, Lucy It has been a long time since we last met.
----------------------------------------------------------------------------------------------------
[Informal]  Dude, this car's dope!
[Formal]  I find this car very appealing.
----------------------------------------------------------------------------------------------------
[Informal]  She's my bestie from college
[Formal]  She is my best friend from college.
----------------------------------------------------------------------------------------------------
[Informal]  I kinda have a feeling that he has a crush on you.
[Formal]  I have a feeling that he is attracted to you.
----------------------------------------------------------------------------------------------------
[Informal]  OMG! It's finger-lickin' good.
[Formal]  It is so good, it is delicious.
----------------------------------------------------------------------------------------------------

Knobs

# inference_on = [0=Regular model On CPU, 1= Regular model On GPU, 2=Quantized model On CPU]
target_sentence = sf.transfer(source_sentence, inference_on=0, quality_filter=0.95, max_candidates=5)
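
Reading these knobs from the quick start alone (inferred from usage, not separate documentation): inference_on selects which variant of the model runs and where, per the comment above; quality_filter appears to be a minimum quality threshold below which transfer() returns None, which is why the quick start checks for None; and max_candidates caps how many candidate transfers are generated before the best one is returned.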

Models

Model                                           Type     Status
prithivida/informal_to_formal_styletransfer    Seq2Seq  Beta
prithivida/formal_to_informal_styletransfer    Seq2Seq  WIP
prithivida/active_to_passive_styletransfer     Seq2Seq  WIP
prithivida/passive_to_active_styletransfer     Seq2Seq  WIP
prithivida/positive_to_negative_styletransfer  Seq2Seq  WIP
prithivida/negative_to_positive_styletransfer  Seq2Seq  WIP

Dataset

  • TBD
  • Fine-tuned T5 on a Tesla T4 GPU; each of the above models took ~2 hours to train with batch_size = 16 and epochs = 5. (Training args will be shared shortly; a rough sketch of such a setup follows.)
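
Until the actual training args are shared, here is a rough, hypothetical sketch of fine-tuning T5 for one style direction with Hugging Face transformers. Only the batch size and epoch count come from the note above; the checkpoint, dataset and column names are placeholders:

from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("t5-base")      # assumed checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# Hypothetical parallel corpus of (casual, formal) pairs.
pairs = Dataset.from_dict({
    "source": ["btw - ur avatar looks familiar"],
    "target": ["Also, your avatar looks familiar."],
})

def preprocess(batch):
    model_inputs = tokenizer(batch["source"], truncation=True)
    labels = tokenizer(text_target=batch["target"], truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = pairs.map(preprocess, batched=True, remove_columns=pairs.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="style_model",
    per_device_train_batch_size=16,  # batch_size = 16, as stated above
    num_train_epochs=5,              # epochs = 5, as stated above
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()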

Benchmark

  • TBD

Citation

  • TBD

Comments

  • added streamlit app

    The following points are covered in this PR:

    • Added Streamlit app. (CTF,FTC,ATP,PTA)
    • Fixed bug in PTA style transfer

    @PrithivirajDamodaran Attaching a screenshot of the Streamlit app for reference. Let me know your suggestions.

    opened by shashankdeshpande 6
  • Trimming long sentences

    Following is a code snippet for a better understanding of the problem I am facing.

    from styleformer import Styleformer
    import torch
    import warnings
    warnings.filterwarnings("ignore")
    
    # style = [0=Casual to Formal, 1=Formal to Casual, 2=Active to Passive, 3=Passive to Active etc..]
    sf = Styleformer(style = 0) 
    
    source_sentences = [
                       "Corruption in african countries hinders economic, political and social development. It is a major obstacle to economic growth, good governance and fundamental freedoms, such as freedom of speech or the right of citizens to hold governments accountable. In addition, corruption affects the lives of individuals, families and communities. The 10th global corruption barometer (gcb) program - in africa, shows that while many people in africa feel that corruption is on the rise in their country, many also feel confident that, as citizens, they can make a difference in the fight against corruption."
    ]
    
    for paragraph in source_sentences:
        # sentences = sent_tokenize(paragraph)
        sentences = paragraph.split('. ')
        for source_sentence in sentences:
            target_sentence = sf.transfer(source_sentence)
            print("-" *100)
            print("[Casual] ", source_sentence)
            print("-" *100)
            if target_sentence is not None:
                print("[Formal] ",target_sentence)
                print()
            else:
                print("No good quality transfers available !")
    

    Program Output


    [Casual] Corruption in african countries hinders economic, political and social development

    [Formal] In African countries, corruption affects economic, political, and social development.


    [Casual] It is a major obstacle to economic growth, good governance and fundamental freedoms, such as freedom of speech or the right of citizens to hold governments accountable

    [Formal] It's a major obstacle to economic growth, good governance, and fundamental freedoms, such as the freedom of speech or the right of citizens to


    [Casual] In addition, corruption affects the lives of individuals, families and communities

    [Formal] Additionally, corruption has a negative impact on individuals, families and communities.


    [Casual] The 10th global corruption barometer (gcb) program - in africa, shows that while many people in africa feel that corruption is on the rise in their country, many also feel confident that, as citizens, they can make a difference in the fight against corruption.

    [Formal] The tenth Global Corruptibility Barometer (GCB) program - in Africa - shows that while many people in Africa feel that corruption

    Please help to fix this for longer sentences. Thanks in advance!

    wontfix 
    opened by Nomiluks 4
  • OSError: Can't load config for 'prithivida/parrot_adequacy_on_BART'. Make sure that....

    Hi Prithviraj,

    Fantastic work you are doing.

    While testing your models, I intended to deploy the model wrapped in a flask app on EC2.

    Although the results work on Google Colab, I receive the following error on EC2 -

    OSError: Can't load config for 'prithivida/parrot_adequacy_on_BART'. Make sure that:
    
    - 'prithivida/parrot_adequacy_on_BART' is a correct model identifier listed on 'https://huggingface.co/models'
    
    - or 'prithivida/parrot_adequacy_on_BART' is the correct path to a directory containing a config.json file
    

    Can you guide me on how this can be resolved?

    Regards, Paritosh

    invalid 
    opened by katreparitosh 2
  • Issue with loading saved models

    Hi, I'm trying to save and load the tokenizer and model. I use the following to save them:

    tokenizer = AutoTokenizer.from_pretrained("prithivida/informal_to_formal_styletransfer")
    tokenizer.save_pretrained('./data/style_tokenizer')
    model = AutoModelForSeq2SeqLM.from_pretrained("prithivida/informal_to_formal_styletransfer")
    model.save_pretrained('./data/style_model')
    

    But when I try to load them, from the local path, I get the following error:

    OSError: Can't load config for '../data/style_tokenizer'. Make sure that:
    
    - '../data/style_tokenizer' is a correct model identifier listed on 'https://huggingface.co/models'
    
    - or '../data/style_tokenizer' is the correct path to a directory containing a config.json file
    

    This somehow makes sense, since no config.json is created when saving the tokenizer.

    Any idea how I can save/load the tokenizer and model?
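
    A minimal sketch of the usual Hugging Face save/load round trip may help here. Note that the snippet above saves to './data/style_tokenizer' while the error mentions '../data/style_tokenizer'; loading from the exact directory you saved to may already resolve this:

    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    # Save once, to a single well-known directory.
    tokenizer = AutoTokenizer.from_pretrained("prithivida/informal_to_formal_styletransfer")
    model = AutoModelForSeq2SeqLM.from_pretrained("prithivida/informal_to_formal_styletransfer")
    tokenizer.save_pretrained("./data/style_model")  # tokenizer files can live beside the model
    model.save_pretrained("./data/style_model")      # this call is what writes config.json

    # Load from exactly the same path.
    tokenizer = AutoTokenizer.from_pretrained("./data/style_model")
    model = AutoModelForSeq2SeqLM.from_pretrained("./data/style_model")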

    opened by Naviden 1
  • Code to train the model

    Hey, can you please share the code where you train the models? We have tasks similar to the ones you solve, but in other domains, so it might be very helpful for us. Do you fine-tune only T5, or do you make additional changes to the T5 fine-tuning? Thanks

    question 
    opened by ivan-bulka 1
  • Can't create a Styleformer(style=n)

    It keeps throwing the same error (with a different request ID each time): OSError: There was a specific connection error when trying to load prithivida/informal_to_formal_styletransfer: <class 'requests.exceptions.HTTPError'> (Request ID: K9-6-Ks5uMEai7cOcQ3gC)

    opened by TalSchiff 0
  • Unable to create the styleformer instance

    OSError: prithivida/parrot_adequacy_on_BART is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'

    I'm using the latest version and seeing the above issue. I was wondering if anything has changed on the Hugging Face models front?

    opened by ks2002119 0
  • How to do inferencing using multiple GPUs for Styleformer

    I am using this model to run inference on 1 million data points, using 4 A100 GPUs. I am launching an inference.py script using Google's Vertex AI container.

    How can I make the inference code utilise all 4 GPUs, so that inference is as fast as possible?

    Here is the code I use in inference.py:

    from styleformer import Styleformer
    import warnings
    warnings.filterwarnings("ignore")
    
    # style = [0=Casual to Formal, 1=Formal to Casual, 2=Active to Passive, 3=Passive to Active etc..]
    sf = Styleformer(style = 1) 
    import torch
    def set_seed(seed):
      torch.manual_seed(seed)
      if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
    
    set_seed(1212)
    
    source_sentences = [
    "I would love to meet attractive men in town",
    "Please leave the room now",
    "It is a delicious icecream",
    "I am not paying this kind of money for that nonsense",
    "He is on cocaine and he cannot be trusted with this",
    "He is a very nice man and has a charming personality",
    "Let us go out for dinner",
    "We went to Barcelona for the weekend. We have a lot of things to tell you.",
    ]   
    
    for source_sentence in source_sentences:
        # inference_on = [0=Regular model On CPU, 1= Regular model On GPU, 2=Quantized model On CPU]
        target_sentence = sf.transfer(source_sentence, inference_on=1, quality_filter=0.95, max_candidates=5)
        print("[Formal] ", source_sentence)
        if target_sentence is not None:
            print("[Casual] ",target_sentence)
        else:
            print("No good quality transfers available !")
        print("-" *100)     
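
    A rough data-parallel sketch (generic PyTorch multiprocessing, not a Styleformer feature): shard the sentences across one worker process per GPU and pin each worker to a single device via CUDA_VISIBLE_DEVICES before the model is created, so that inference_on=1 uses that worker's GPU.

    import os
    import torch.multiprocessing as mp

    def worker(gpu_id, shard, return_dict):
        # Pin this process to one GPU before any CUDA initialisation happens.
        os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
        from styleformer import Styleformer
        sf = Styleformer(style=1)
        return_dict[gpu_id] = [sf.transfer(s, inference_on=1) for s in shard]

    if __name__ == "__main__":
        mp.set_start_method("spawn", force=True)   # fresh CUDA state in each worker
        num_gpus = 4
        shards = [source_sentences[i::num_gpus] for i in range(num_gpus)]
        manager = mp.Manager()
        return_dict = manager.dict()
        procs = [mp.Process(target=worker, args=(i, shards[i], return_dict))
                 for i in range(num_gpus)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()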
    
    opened by pratikchhapolika 6
  • Sentiment Transfer

    Love the library!

    Was hoping to do sentiment transfer, but I see that has not yet been integrated. Any pointers towards off-the-shelf models that can do that?

    opened by JosephGatto 1