Applied Natural Language Processing in the Enterprise

This is the companion repo for Applied Natural Language Processing in the Enterprise, an O'Reilly Media publication by Ankur A. Patel and Ajay Uppili Arasanipalai. Here you will find all the source code from the book, published on GitHub for your convenience.

Follow the steps below to get started with setting up your environment and running the code examples.

Setup

To install all the required libraries and dependencies, run the following command:

pip install nlpbook

However, the recommended approach is to use conda, a cross-platform, language-agnostic package manager that automatically handles dependency conflicts.

If you have not done so already, install the Miniforge distribution of Python 3.8 for your operating system. If you are on Windows, you can use the Anaconda distribution of Python 3.8 instead of Miniforge.
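
After installation, you can confirm that conda is available on your PATH (a quick sanity check that prints the installed conda version):

conda --version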

Once conda is installed, run the following command:

conda install -c nlpbook nlpbook

Alternatively, if you'd like to keep your environment for this book isolated from the rest of your system (which we highly recommend), run the following commands:

conda create -n nlpbook
conda activate nlpbook
conda install -c nlpbook nlpbook

Then run conda activate nlpbook every time you want to return to your environment. To exit the environment, run conda deactivate.
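
If you created the isolated environment above, you can confirm that the package installed correctly by listing it from within the environment (a quick check, assuming the conda package is named nlpbook as shown above):

conda activate nlpbook
conda list nlpbook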

Next, install the spaCy models:

python -m spacy download en_core_web_sm
python -m spacy download en_core_web_lg
python -m spacy download en_core_web_trf
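
To verify that the models were installed, you can load one of them and run it on a short piece of text (a minimal check using spaCy's standard spacy.load API):

python -c "import spacy; nlp = spacy.load('en_core_web_sm'); print([t.text for t in nlp('Hello, NLP!')])"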

Set Up the Environment Directly

If you want to set up an environment and quickly get up and running with the code for this book, run the following commands from the root of this repo (see the "Getting the Code" section below for how to get the repo first):

conda env create --file environment.yml
conda activate nlpbook

You can also install all the dependencies via pip:

pip install -r requirements.txt
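
If you take the pip route but still want to keep this book's dependencies isolated from the rest of your system, a plain virtual environment works as well (a sketch using Python's built-in venv module; the environment name .venv is arbitrary):

python -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate
pip install -r requirements.txt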

Getting the Code

All publicly released code is in this repository. The simplest way to get started is via Git:

git clone https://github.com/nlpbook/nlpbook.git

If you're on Windows or another platform that doesn't already have git installed, you may need to obtain a Git client.

If you want a specific version of the code that matches your copy of the book (the code can occasionally change), you can find previous versions on the releases page.
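
For example, to pin your local copy to a tagged release (assuming a tag such as v1.0.0, as listed on the releases page), check it out with Git:

cd nlpbook
git fetch --tags
git checkout v1.0.0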

Getting the Data

Next, download the data from AWS S3 (the data files are too large to store and access on GitHub):

aws s3 cp s3://applied-nlp-book/data/ data --recursive --no-sign-request
aws s3 cp s3://applied-nlp-book/models/ag_dataset/ models/ag_dataset --recursive --no-sign-request
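
If a download fails, or you want to see what is available before copying, you can list the bucket contents with the same unsigned access (standard AWS CLI commands; the aws CLI must be installed):

aws s3 ls s3://applied-nlp-book/data/ --recursive --no-sign-request
aws s3 ls s3://applied-nlp-book/models/ --recursive --no-sign-request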

How This Repo is Organized

Each chapter in the book has a corresponding notebook in the root of this repository, named chXX.ipynb for chapter XX. The appendices are named apXX.ipynb.

Note: This repo only contains the code for the chapters, not the actual text in the book. For the complete text, please purchase a copy of the book. Chapters 1, 2, and 3 have been open-sourced, courtesy of O'Reilly and the authors.

Once you've navigated to the nlpbook project directory, you can launch a Jupyter client such as JupyterLab, Jupyter Notebook, or VS Code to view and run the notebooks.
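
For example, from the repo root with the nlpbook environment active (a minimal sketch, assuming JupyterLab is installed in that environment):

conda activate nlpbook
jupyter lab   # then open ch01.ipynb (or any other notebook) from the file browser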

Contributions and Errata

We welcome any suggestions, feedback, and errata from readers. If you notice anything that seems off in the book or could use improvement, we'd love to hear from you. Feel free to submit an issue here on GitHub or on our errata page.

Copyright Notice

This material is made available under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License.

Note: You are free to use the code in accordance with the MIT license, but you are not allowed to redistribute or sell any of the text presented in chapters 1, 2, and 3, which have been open-sourced for the benefit of the community. Please consider purchasing a copy of the book if you are interested in reading the text that accompanies the code presented in this repo.

Comments
  • Download failed for train_prepared.csv

    download failed: s3://applied-nlp-book/data/ag_dataset/prepared/train_prepared.csv to data/train_prepared.csv An error occurred (AccessDenied) when calling the GetObject operation: Access Denied

    opened by sharma-ji 2
  • Chapter 05: data contains no attribute "Field"

    In chapter 05 when setting up the fields for training an Embedding on IMDB data you propose:

    TEXT = data.Field(lower=True, include_lengths=True, \
    batch_first=False, tokenize='spacy')
    LABEL = data.LabelField()
    

    However, data has not been defined yet. The module data imported from torchtext.__all__ does not contain an attribute Field. In the sources of torchtext I couldn't find it either.

    Can you advise or define data ?

    My PyTorch version: 1.9.0. My torchtext version: 0.10.0.

    opened by iNLyze 1
  • No 'data' folder in Ch. 1

    Hello,

    I purchased your book and started reading Ch.1. Great book so far. I tried to emulate what is written in your book and ipynb. But there is no folder "data" that can retrieve Jeopardy questions. I guess this kind of incompleteness will not be the last even though I am reading your first chapter. Could you run your notebooks in a new environment and check what is missing? Thank you in advance. It would be an option to make your notebooks run in Colab. Then, you can write a setup file at the beginning of each chapter and users won't have issues running the scripts.

    opened by knslee07 1
Releases (v1.0.0)
  • v1.0.0 (May 29, 2021)

    This is the initial public release of the source code for "Applied Natural Language Processing in the Enterprise" by Ankur A. Patel and Ajay Uppili Arasanipalai.

    Source code (tar.gz)
    Source code (zip)