CLIPImageClassifier wraps the CLIP image model from transformers

Overview

CLIPImageClassifier

CLIPImageClassifier wraps the CLIP image model from transformers.

CLIPImageClassifier is initialized with the argument classes: the list of texts that we want to classify an image against. The executor receives Documents with a uri attribute, where each Document's uri is the path to an image. The executor reads the image and classifies it as one of the classes.

The result is saved inside a new tag called class on the original Document. The class tag is a dictionary that contains two entries:

  • label: the chosen class from classes.
  • score: the confidence score in the chosen class given by the model.
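
For illustration, a classified Document might end up with a tag like the one below. This is only a sketch: the label will be one of the configured classes, and the score shown here is a made-up value.

# illustrative contents of doc.tags['class'] after classification (values are examples only)
{'label': 'this is a dog', 'score': 0.97}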

Usage

Use the prebuilt image from Jina Hub in your Python code: add it to your Flow and classify your images according to the chosen classes:

from jina import Flow, Document, DocumentArray

classes = ['this is a cat', 'this is a dog', 'this is a person']

f = Flow().add(
    uses='jinahub+docker://CLIPImageClassifier',
    uses_with={'classes': classes},
)

docs = DocumentArray()
doc = Document(uri='/your/image/path')
docs.append(doc)

with f:
    f.post(
        on='/classify',
        inputs=docs,
        on_done=lambda resp: print(resp.docs[0].tags['class']['label']),
    )

Returns

A Document with a class tag. The class tag is a dict containing label (a str, the chosen class) and score (a float confidence score for the image).
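
For example, you can read both fields from the response. This is a minimal sketch that reuses the Flow and docs defined in the Usage section above:

with f:
    f.post(
        on='/classify',
        inputs=docs,
        on_done=lambda resp: print(
            resp.docs[0].tags['class']['label'],
            resp.docs[0].tags['class']['score'],
        ),
    )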

GPU Usage

This executor also offers a GPU version. To use it, pass device='cuda' as an initialization parameter (via uses_with), and gpus='all' when adding the containerized Executor to the Flow. See the Executor on GPU section of the Jina documentation for more details.

Here's how you would modify the example above to use a GPU:

from jina import Flow, Document, DocumentArray

classes = ['this is a cat', 'this is a dog', 'this is a person']

f = Flow().add(
    uses='jinahub+docker://CLIPImageClassifier',
    uses_with={
        'classes': classes,
        'device': 'cuda',
    },
    gpus='all',
)

docs = DocumentArray()
doc = Document(uri='/your/image/path')
docs.append(doc)

with f:
    f.post(
        on='/classify',
        inputs=docs,
        on_done=lambda resp: print(resp.docs[0].tags['class']['label']),
    )

Reference

CLIP Image model


Comments
  • CLIPImageClassifier error

    I tried to run the following Flow on "jinahub+sandbox", but I got the following error. Could you please share your insight with me? I am running the code from my Jupyter notebook.

    import warnings
    warnings.filterwarnings("ignore", category=DeprecationWarning)

    from jina import Flow

    classes = ['this is a cat', 'this is a dog', 'this is a person']
    doc = Document(uri='image/dog.jpg')
    docs = DocumentArray()
    docs.append(doc)
    f = Flow().add(
        uses='jinahub://CLIPImageClassifier', name="classifier",
        uses_with={'classes': classes})

    with f:
        f.post(on='/classify', inputs=docs, on_done=lambda resp: print(resp.docs[0].tags['class']['label']))

    -----------------------error------------------

    PkgResourcesDeprecationWarning: 1.1build1 is an invalid version and will not be supported in a future release (raised from /home/ubuntu/pyenv/lib/python3.10/site-packages/pkg_resources/init.py:116)
    PkgResourcesDeprecationWarning: 0.1.43ubuntu1 is an invalid version and will not be supported in a future release (raised from /home/ubuntu/pyenv/lib/python3.10/site-packages/pkg_resources/init.py:116)
    UserWarning: VersionConflict(torchvision 0.12.0+cpu (/usr/local/lib/python3.10/dist-packages), Requirement.parse('torchvision==0.10.0')) (raised from /home/ubuntu/pyenv/lib/python3.10/site-packages/jina/hubble/helper.py:483)
    ftfy or spacy is not installed using BERT BasicTokenizer instead of ftfy.

    🎉 Flow is ready to serve!
      Protocol  GRPC
      Local     0.0.0.0:55600
      Private   172.31.17.247:55600
      Public    34.221.179.218:55600

    ERROR classifier/[email protected] [07/06/22 16:34:35]
    AttributeError("'DocumentArrayInMemory' object has no attribute 'get_attributes'")
    add "--quiet-error" to suppress the exception details

    Traceback (most recent call last):
      File "/home/ubuntu/pyenv/lib/python3.10/site-packages/jina/serve/ru…", in process_data
        return await self._data_request_handler.…
      File "/home/ubuntu/pyenv/lib/python3.10/site-packages/jina/serve/ru…", in handle
        return_data = await self._executor.acall(…
      File "/home/ubuntu/pyenv/lib/python3.10/site-packages/jina/serve/ex…", in acall
        return await self.acall_endpoint(__defau…
      File "/home/ubuntu/pyenv/lib/python3.10/site-packages/jina/serve/ex…", in acall_endpoint
        return func(self, **kwargs)
      File "/home/ubuntu/pyenv/lib/python3.10/site-packages/jina/serve/ex…", in arg_wrapper
        return fn(executor_instance, *args, …
      File "/home/ubuntu/.jina/hub-package/9k3zudzu/clip_image_classifier…", line 59, in classify
        image_batch = docs_batch.get_attributes('blob…
    AttributeError: 'DocumentArrayInMemory' object has no attribute 'get_attributes'

    Exception in thread Thread-107:
    grpc.aio._call.AioRpcError: <AioRpcError of RPC that terminated with:
      status = StatusCode.UNKNOWN
      details = "Unexpected <class 'TypeError'>: format_exception() got an unexpected keyword argument 'etype'">

    The above exception was the direct cause of the following exception:

    jina.excepts.BadClient: gRPC error: StatusCode.UNKNOWN
    Unexpected <class 'grpc.aio._call.AioRpcError'>: <AioRpcError of RPC that terminated with:
      status = StatusCode.UNKNOWN
      details = "Unexpected <class 'TypeError'>: format_exception() got an unexpected keyword argument 'etype'">

    AttributeError: '_RunThread' object has no attribute 'result'

    During handling of the above exception, another exception occurred:

    BadClient: something wrong when running the eventloop, result can not be retrieved

    opened by sk-haghighi 4
Releases
  • v0.2

Owner
Jina AI
A Neural Search Company. We provide the cloud-native neural search solution powered by state-of-the-art AI technology.