A simple app that provides Django integration for RQ (Redis Queue)

Overview

Django-RQ

Django integration with RQ, a Redis-based Python queuing library. Django-RQ is a simple app that allows you to configure your queues in Django's settings.py and easily use them in your project.

Support Django-RQ

If you find django-rq useful, please consider supporting its development via Tidelift.

Requirements

  • Django (2.0+)
  • RQ (1.2+)

Installation

  • Install django-rq:

pip install django-rq
  • Add django_rq to INSTALLED_APPS in settings.py:
INSTALLED_APPS = (
    # other apps
    "django_rq",
)
  • Configure your queues in Django's settings.py (syntax based on Django's database config):
import os

RQ_QUEUES = {
    'default': {
        'HOST': 'localhost',
        'PORT': 6379,
        'DB': 0,
        'PASSWORD': 'some-password',
        'DEFAULT_TIMEOUT': 360,
    },
    'with-sentinel': {
        'SENTINELS': [('localhost', 26736), ('localhost', 26737)],
        'MASTER_NAME': 'redismaster',
        'DB': 0,
        'PASSWORD': 'secret',
        'SOCKET_TIMEOUT': None,
        'CONNECTION_KWARGS': {
            'socket_connect_timeout': 0.3
        },
    },
    'high': {
        'URL': os.getenv('REDISTOGO_URL', 'redis://localhost:6379/0'), # If you're on Heroku
        'DEFAULT_TIMEOUT': 500,
    },
    'low': {
        'HOST': 'localhost',
        'PORT': 6379,
        'DB': 0,
    }
}

RQ_EXCEPTION_HANDLERS = ['path.to.my.handler'] # If you need custom exception handlers (see the sketch below)
  • Include django_rq.urls in your urls.py:
# For Django < 2.0
from django.conf.urls import include, url

urlpatterns += [
    url(r'^django-rq/', include('django_rq.urls')),
]

# For Django >= 2.0
from django.urls import include, path

urlpatterns += [
    path('django-rq/', include('django_rq.urls')),
]
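
For reference, each entry in RQ_EXCEPTION_HANDLERS is a dotted path to a plain callable that receives the failed job and the exception info. A minimal sketch, assuming 'path.to.my.handler' above points at this function:

# Hypothetical module matching 'path.to.my.handler'
def handler(job, exc_type, exc_value, traceback):
    # Log the failure; returning True lets the next handler in the chain run
    print('Job %s failed: %s' % (job.id, exc_value))
    return True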

Usage

Putting jobs in the queue

Django-RQ allows you to easily put jobs into any of the queues defined in settings.py. It comes with a few utility functions:

  • enqueue - push a job to the default queue:
import django_rq
django_rq.enqueue(func, foo, bar=baz)
  • get_queue - returns a Queue instance.
import django_rq
queue = django_rq.get_queue('high')
queue.enqueue(func, foo, bar=baz)

In addition to the name argument, get_queue also accepts default_timeout, is_async, autocommit, connection, and queue_class arguments. For example:

queue = django_rq.get_queue('default', autocommit=True, is_async=True, default_timeout=360)
queue.enqueue(func, foo, bar=baz)

You can provide your own singleton Redis connection object to this function so that it will not create a new connection object for each queue definition. This will help you limit the number of connections to your Redis server. For example:

import django_rq
import redis
redis_cursor = redis.StrictRedis(host='localhost', port=6379, db=0, password='some-password')
high_queue = django_rq.get_queue('high', connection=redis_cursor)
low_queue = django_rq.get_queue('low', connection=redis_cursor)
  • get_connection - accepts a single queue name argument (defaults to "default") and returns a connection to the queue's Redis server:
import django_rq
redis_conn = django_rq.get_connection('high')
  • get_worker - accepts optional queue names and returns a new RQ Worker instance for the specified queues (or the default queue):
import django_rq
worker = django_rq.get_worker() # Returns a worker for "default" queue
worker.work()
worker = django_rq.get_worker('low', 'high') # Returns a worker for "low" and "high"

@job decorator

To easily turn a callable into an RQ task, you can also use the @job decorator that comes with django_rq:

from django_rq import job

@job
def long_running_func():
    pass
long_running_func.delay() # Enqueue function in "default" queue

@job('high')
def long_running_func():
    pass
long_running_func.delay() # Enqueue function in "high" queue

You can pass in any arguments that RQ's job decorator accepts:

@job('default', timeout=3600)
def long_running_func():
    pass
long_running_func.delay() # Enqueue function with a timeout of 3600 seconds.

It's possible to specify a default for the result_ttl decorator keyword argument via the DEFAULT_RESULT_TTL setting:

RQ = {
    'DEFAULT_RESULT_TTL': 5000,
}

With this setting, the job decorator will set result_ttl to 5000 unless it's specified explicitly.
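
An explicitly passed result_ttl still takes precedence over this default; for example:

@job('default', result_ttl=60)
def quick_func():
    pass
quick_func.delay() # Result kept for 60 seconds, not 5000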

Running workers

django_rq provides a management command that starts a worker for every queue specified as arguments:

python manage.py rqworker high default low

If you want to run rqworker in burst mode, you can pass in the --burst flag:

python manage.py rqworker high default low --burst

If you need to use custom worker, job, or queue classes, it is best to use global settings (see Custom queue classes and Custom job and worker classes). However, it is also possible to override such settings with command-line options as follows.

To use a custom worker class, you can pass in the --worker-class flag with the path to your worker:

python manage.py rqworker high default low --worker-class 'path.to.GeventWorker'

To use a custom queue class, you can pass in the --queue-class flag with the path to your queue class:

python manage.py rqworker high default low --queue-class 'path.to.CustomQueue'

To use a custom job class, provide the --job-class flag.

Support for RQ Scheduler

If you have RQ Scheduler installed, you can also use the get_scheduler function to return a Scheduler instance for queues defined in settings.py's RQ_QUEUES. For example:

from datetime import datetime

import django_rq
scheduler = django_rq.get_scheduler('default')
job = scheduler.enqueue_at(datetime(2020, 10, 10), func)

You can also use the management command rqscheduler to start the scheduler:

python manage.py rqscheduler
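
rq-scheduler also supports recurring jobs. A minimal sketch using its cron method (report_task and its import path are hypothetical):

import django_rq
from myapp.tasks import report_task  # hypothetical task

scheduler = django_rq.get_scheduler('default')

# Run report_task every day at 7:30 (standard crontab syntax)
scheduler.cron('30 7 * * *', func=report_task, queue_name='default')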

Support for django-redis and django-redis-cache

If you have django-redis or django-redis-cache installed, you can instruct django_rq to use the same connection information as your Redis cache. This has two advantages: it's DRY, and it takes advantage of any optimization in your cache setup (like connection pooling or Hiredis).

To configure it, use a dict with the key USE_REDIS_CACHE pointing to the name of the desired cache in your RQ_QUEUES dict. The chosen cache must exist and use the Redis backend; see your respective Redis cache package's docs for configuration instructions. Also note that because the django-redis-cache ShardedClient splits the cache over multiple Redis connections, it is not supported.

Here is an example settings fragment for django-redis:

CACHES = {
    'redis-cache': {
        'BACKEND': 'redis_cache.cache.RedisCache',
        'LOCATION': 'localhost:6379:1',
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
            'MAX_ENTRIES': 5000,
        },
    },
}

RQ_QUEUES = {
    'high': {
        'USE_REDIS_CACHE': 'redis-cache',
    },
    'low': {
        'USE_REDIS_CACHE': 'redis-cache',
    },
}

Queue Statistics

django_rq also provides a dashboard to monitor the status of your queues at /django-rq/ (or whatever URL you set in your urls.py during installation).

You can also add a link to this dashboard in /admin by adding RQ_SHOW_ADMIN_LINK = True to settings.py. Be careful, though: this overrides the default admin template, so it may interfere with other apps that modify the default admin template.

These statistics are also available in JSON format via /django-rq/stats.json, which is accessible to staff members. If you need to access this view via other HTTP clients (for monitoring purposes), you can define RQ_API_TOKEN and access it via /django-rq/stats.json/<API_TOKEN>.
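
For example, a monitoring script could poll the endpoint like this (the host and token are placeholders, and the payload layout is an assumption):

import requests

# Hypothetical deployment URL and RQ_API_TOKEN value
response = requests.get('https://example.com/django-rq/stats.json/my-api-token')
response.raise_for_status()
for queue in response.json().get('queues', []):  # assuming a 'queues' list
    print(queue)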

[Screenshot: demo-django-rq-json-dashboard.png]

Additionally, these statistics are accessible from the command line:

python manage.py rqstats
python manage.py rqstats --interval=1  # Refreshes every second
python manage.py rqstats --json  # Output as JSON
python manage.py rqstats --yaml  # Output as YAML

[Screenshot: demo-django-rq-cli-dashboard.gif]

Configuring Sentry

Django-RQ >= 2.0 uses sentry-sdk instead of the deprecated raven library. Sentry should be configured within the Django settings.py as described in the Sentry docs.

You can override the default Django Sentry configuration when running the rqworker command by passing the --sentry-dsn option:

./manage.py rqworker --sentry-dsn=https://*****@sentry.io/222222

This will override any existing Django configuration and reinitialise Sentry, setting the following Sentry options:

{
    'debug': options.get('sentry_debug'),
    'ca_certs': options.get('sentry_ca_certs'),
    'integrations': [RedisIntegration(), RqIntegration(), DjangoIntegration()]
}

Configuring Logging

Starting from version 0.3.3, RQ uses Python's logging module. This means you can easily configure rqworker's logging mechanism in Django's settings.py. For example:

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "rq_console": {
            "format": "%(asctime)s %(message)s",
            "datefmt": "%H:%M:%S",
        },
    },
    "handlers": {
        "rq_console": {
            "level": "DEBUG",
            "class": "rq.utils.ColorizingStreamHandler",
            "formatter": "rq_console",
            "exclude": ["%(asctime)s"],
        },
        # If you use sentry for logging
        'sentry': {
            'level': 'ERROR',
            'class': 'raven.contrib.django.handlers.SentryHandler',
        },
    },
    'loggers': {
        "rq.worker": {
            "handlers": ["rq_console", "sentry"],
            "level": "DEBUG"
        },
    }
}

Note: error logging to Sentry is known to be unreliable with RQ when using async transports (the default transport). Please configure Raven to use sync+https:// or requests+https:// transport in settings.py:

RAVEN_CONFIG = {
    'dsn': 'sync+https://public:[email protected]/1',
}

For more info, refer to Raven's documentation.

Custom Queue Classes

By default, every queue will use the DjangoRQ class. If you want to use a custom queue class, you can do so by adding a QUEUE_CLASS option on a per-queue basis in RQ_QUEUES:

RQ_QUEUES = {
    'default': {
        'HOST': 'localhost',
        'PORT': 6379,
        'DB': 0,
        'QUEUE_CLASS': 'module.path.CustomClass',
    }
}

or you can specify a custom queue class for all your queues in the RQ setting:

RQ = {
    'QUEUE_CLASS': 'module.path.CustomClass',
}

Custom queue classes should inherit from django_rq.queues.DjangoRQ.
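
A minimal sketch of such a subclass (the override shown is purely illustrative):

from django_rq.queues import DjangoRQ

class CustomClass(DjangoRQ):
    def enqueue_call(self, *args, **kwargs):
        # Hook point: add logging or default options before delegating
        return super().enqueue_call(*args, **kwargs)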

If you are using more than one queue class (not recommended), be sure to only run workers on queues with the same queue class. For example, if you have two queues defined in RQ_QUEUES and one has a custom class specified, you would have to run at least two separate workers, one for each queue.

Custom Job and Worker Classes

Similarly to custom queue classes, global custom job and worker classes can be configured using JOB_CLASS and WORKER_CLASS settings:

RQ = {
    'JOB_CLASS': 'module.path.CustomJobClass',
    'WORKER_CLASS': 'module.path.CustomWorkerClass',
}

A custom job class should inherit from rq.job.Job. It will be used for all jobs if configured.

A custom worker class should inherit from rq.worker.Worker. It will be used for running all workers unless overridden by the rqworker management command's --worker-class option.
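
Minimal sketches of both (the class names match the RQ settings example above; the bodies are illustrative):

from rq.job import Job
from rq.worker import Worker

class CustomJobClass(Job):
    # Override Job hooks here, e.g. serialization defaults
    pass

class CustomWorkerClass(Worker):
    # Override Worker behaviour here, e.g. logging around jobs
    pass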

Testing Tip

For an easier testing process, you can run a worker synchronously this way:

from django.test import TestCase
from django_rq import get_worker

class MyTest(TestCase):
    def test_something_that_creates_jobs(self):
        ...                      # Code that creates jobs.
        get_worker().work(burst=True)  # Process all enqueued jobs, then stop.
        ...                      # Assert the expected job side effects.

Synchronous Mode

You can set the option ASYNC to False to make synchronous operation the default for a given queue. This will cause jobs to execute immediately and on the same thread as they are dispatched, which is useful for testing and debugging. For example, you might add the following after your queue configuration in your settings file:

# ... Logic to set DEBUG and TESTING settings to True or False ...

# ... Regular RQ_QUEUES setup code ...

if DEBUG or TESTING:
    for queue_config in RQ_QUEUES.values():
        queue_config['ASYNC'] = False

Note that setting the is_async parameter explicitly when calling get_queue will override this setting.
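
For example, a minimal sketch (func stands in for any task callable):

import django_rq

def func():  # placeholder task
    pass

# Executes the job immediately, in-process, regardless of the ASYNC setting
queue = django_rq.get_queue('default', is_async=False)
queue.enqueue(func)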

Running Tests

To run django_rq's test suite:

`which django-admin.py` test django_rq --settings=django_rq.tests.settings --pythonpath=.

Deploying on Ubuntu

Create an rqworker service that runs the high, default, and low queues.

sudo vi /etc/systemd/system/rqworker.service

[Unit]
Description=Django-RQ Worker
After=network.target

[Service]
WorkingDirectory=<<path_to_your_project_folder>>
ExecStart=/home/ubuntu/.virtualenv/<<your_virtualenv>>/bin/python \
    <<path_to_your_project_folder>>/manage.py \
    rqworker high default low

[Install]
WantedBy=multi-user.target

Enable and start the service:

sudo systemctl enable rqworker
sudo systemctl start rqworker

Deploying on Heroku

Add django-rq to your requirements.txt file with:

pip freeze > requirements.txt

Update your Procfile to:

web: gunicorn --pythonpath="$PWD/your_app_name" config.wsgi:application

worker: python your_app_name/manage.py rqworker high default low

Commit and re-deploy. Then add your new worker with:

heroku scale worker=1

Django Suit Integration

You can use django-suit-rq to make your admin fit in with the django-suit styles.

Changelog

See CHANGELOG.md.

Comments
  • Passing timeout to queued job

    Is it possible to pass a timeout value to an enqueued job? I believe the default is 180 seconds -- which is short for some long-running jobs.

    Thanks for the great tool!

    opened by ErikEvenson 18
  • AttributeError: 'module' object has no attribute

    I am trying to create a background job with RQ:

    import django_rq

    def _send_password_reset_email_async(email):
        print(email)

    # Django admin action to send reset password emails
    def send_password_reset_email(modeladmin, request, queryset):
        for user in queryset:
            django_rq.enqueue(_send_password_reset_email_async, user.email)
    send_password_reset_email.short_description = 'Send password reset email'

    I keep getting this error:

    Traceback (most recent call last):
      File "/home/lee/Code/cas/venv/lib/python3.4/site-packages/rq/worker.py", line 568, in perform_job
        rv = job.perform()
      File "/home/lee/Code/cas/venv/lib/python3.4/site-packages/rq/job.py", line 495, in perform
        self._result = self.func(*self.args, **self.kwargs)
      File "/home/lee/Code/cas/venv/lib/python3.4/site-packages/rq/job.py", line 206, in func
        return import_attribute(self.func_name)
      File "/home/lee/Code/cas/venv/lib/python3.4/site-packages/rq/utils.py", line 151, in import_attribute
        return getattr(module, attribute)
    AttributeError: 'module' object has no attribute '_send_password_reset_email_async'

    I also posted it to SO earlier http://stackoverflow.com/questions/32733934/rq-attributeerror-module-object-has-no-attribute

    opened by lee-kagiso 16
  • OperationalError: SSL error: decryption failed or bad record mac

    Not sure if this is a django-rq issue or python-rq, so I figured I'd start here...

    My application was working perfectly under Django 1.7.x.

    I updated to Django 1.8.x and my workers blew up.

    WARNING 11:29:35 worker 16516 140021590959936 Moving job to u'failed' queue
    WARNING 2015-08-30 11:29:35,394 worker 16516 140021590959936 Moving job to u'failed' queue
    ERROR 11:29:35 worker 16518 140021590959936 OperationalError: SSL error: decryption failed or bad record mac
    
    Traceback (most recent call last):
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/rq/worker.py", line 568, in perform_job
        rv = job.perform()
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/rq/job.py", line 495, in perform
        self._result = self.func(*self.args, **self.kwargs)
      File "/tank/code/uitintranet/intranet/tasks.py", line 131, in backup_router
        router = Router.objects.get(pk=router_pk)
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/django/db/models/manager.py", line 127, in manager_method
        return getattr(self.get_queryset(), name)(*args, **kwargs)
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/django/db/models/query.py", line 328, in get
        num = len(clone)
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/django/db/models/query.py", line 144, in __len__
        self._fetch_all()
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/django/db/models/query.py", line 965, in _fetch_all
        self._result_cache = list(self.iterator())
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/django/db/models/query.py", line 238, in iterator
        results = compiler.execute_sql()
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 840, in execute_sql
        cursor.execute(sql, params)
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/django/db/backends/utils.py", line 79, in execute
        return super(CursorDebugWrapper, self).execute(sql, params)
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/django/db/backends/utils.py", line 64, in execute
        return self.cursor.execute(sql, params)
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/django/db/utils.py", line 97, in __exit__
        six.reraise(dj_exc_type, dj_exc_value, traceback)
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/django/db/backends/utils.py", line 64, in execute
        return self.cursor.execute(sql, params)
    OperationalError: SSL error: decryption failed or bad record mac
    

    I checked, and updated rq from 0.5.4 to 0.5.5 and the error continues. If I run a command from my Django shell_plus without '.delay()', it runs fine. If I run a command with '.delay()' or I run a command using the scheduler, I get the error above.

    I noticed a post on StackOverflow that basically says 'close the DB connection at the beginning of each job'. (http://stackoverflow.com/questions/17523912/django-python-rq-databaseerror-ssl-error-decryption-failed-or-bad-record-mac)

    I tested with one of my jobs, and it does fix the problem. But I'm worried about possible side effects, and having to add a few lines of code to hundreds of job definitions.

    opened by darkpixel 16
  • Adding scheduled jobs to UI

    Adds a Scheduled Jobs view and an associated page to show scheduled jobs.

    Most of the work was already done in: https://github.com/rq/django-rq/pull/162 so thank you to @sburns

    opened by quantumlink 13
  • If the worker loses the DB connection it is not able to reconnect and you must restart it

    Workers should adopt the same DB connection/reconnection policy that Django uses in its request/response cycle.

    Every time a django-rq worker performs a job, it should ensure the database ORM connections are in a valid state. This policy is enforced by Django in its request/response cycle via the function close_old_connections.

    Look at: https://github.com/django/django/blob/master/django/db/__init__.py

    Similarly, in the django-rq worker, close_old_connections should be called before and after the execution of each job.

    (see also #49 for some preliminary thoughts)
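
    A sketch of the suggested approach (assuming rq's Worker.perform_job(job, queue) hook; this is not django-rq's actual implementation):

    from django.db import close_old_connections
    from rq.worker import Worker

    class DatabaseSafeWorker(Worker):
        def perform_job(self, job, queue):
            # Mirror Django's request cycle: drop stale DB connections
            # before and after each job
            close_old_connections()
            try:
                return super().perform_job(job, queue)
            finally:
                close_old_connections()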

    opened by depaolim 12
  • Add view for scheduled jobs

    Attempt on #120

    This will probably break with a huge list of scheduled jobs since there's no pagination. Not sure how to approach that properly though.

    opened by marksteve 12
  • Added DEFAULT_TIMEOUT and RESULT_TTL in settings.RQ_QUEUES

    As stated in #57, this adds support for the following configuration:

    RQ_QUEUES = {
        'default': {
            'HOST': 'localhost',
            'PORT': 6379,
            'DB': 0,
            'PASSWORD': 'some-password',
            'DEFAULT_TIMEOUT': 60,
            'RESULT_TTL': 30,
        },
        'high': {
            'URL': os.getenv('REDISTOGO_URL', 'redis://localhost:6379'), # If you're on Heroku
            'DB': 0,
        },
    }
    

    You can close #57 and try to think of tests...

    opened by lechup 12
  • Test django-rq

    Hello,

    I'm trying to test an app that has some asynchronous tasks using django-rq, and it's not always as easy as it could or should be.

    For the moment I have two use cases, mail tests and signal tests, that work with synchronous jobs but don't work with asynchronous tasks.

    Mail test:

    # in jobs.py, for example
    from django.core.mail import get_connection, send_mail
    from django_rq import job

    @job
    def send_async_mail():
        print get_connection()  # returns an instance of locmem.EmailBackend
        send_mail('foo', 'bar', '[email protected]', ['[email protected]'], fail_silently=False)

    def send_custom_mail():
        send_async_mail.delay()

    # in tests.py
    import unittest

    from django.core import mail
    from django.core.mail import get_connection
    from django_rq import get_worker
    from jobs import send_custom_mail

    class AsyncSendMailTestCase(unittest.TestCase):
        def test_async_mail(self):
            send_custom_mail()
            get_worker().work(burst=True)
            print get_connection()  # returns an OTHER instance of locmem.EmailBackend
            print len(mail.outbox)  # returns 0, should return 1

    As you see above, the "problem" is that the instance of get_connection() (https://docs.djangoproject.com/en/dev/topics/email/?from=olddocs#django.core.mail.get_connection) is not the same.

    Also, I'm facing (I think) the same problem using mock_signal_receiver https://github.com/dcramer/mock-django/blob/master/mock_django/signals.py in asynchronous jobs. A reference to the receiver is lost with asynchronous jobs. I'm quite sure of it because when I remove the asynchronous part of my code, it works.

    Any ideas on how to deal with that kind of problem?

    Regards,

    opened by ouhouhsami 12
  • Add support for Django 3.0, drop Django 1.x, Python 2

    Fixes #382


    Summary

    • Add support for Django 3.0
    • Drop support for Django 1.x
    • Drop support for Python 2.x
    • Add Python 3.8 to test matrix

    Version support

    | Django     | 1.x | 2.x | 3.x |
    | :--------- | :-: | :-: | :-: |
    | Python 2.7 | ✖️ | ✖️ | ✖️ |
    | Python 3.5 | ✖️ | 👍 | ✖️^ |
    | Python 3.6 | ✖️ | 👍 | 👍 |
    | Python 3.7 | ✖️ | 👍 | 👍 |
    | Python 3.8 | ✖️ | 👍 | 👍 |

    ^ Django 3.0 does not support Python 3.5 (https://docs.djangoproject.com/en/3.0/faq/install/#what-python-version-can-i-use-with-django)

    Changes

    Aside from supporting Django 3.0, the dropping of Django 1.x and Python 2.x enables a number of changes across the project, including:

    • Replace django.conf.urls with django.urls (and use path, re_path instead of url)
    -from django.conf.urls import url
    +from django.urls import path
    ...
     urlpatterns = [
    -    url(r'^$', views.home, name='home'),
    -    url(r'^admin/', admin.site.urls),
    +    path('', views.home, name='home'),
    +    path('admin/', admin.site.urls),
     ]
    
    • Use of unittest.mock instead of importing the mock project
    • Replace admin_static template tags with static

    Additional changes (open to debate):

    1. Consolidate all local imports as relative
    2. Add a Pipfile to aid local development
    opened by hugorodgerbrown 11
  • add argument to disable sentry integration

    Allow disabling the Sentry integration in the case where the project uses sentry-sdk, as SENTRY_DSN is not compatible between sentry_sdk and raven (missing secret key, a.k.a. password).

    opened by Bolayniuss 11
  • More consistent usage of queue class / worker class overriding

    What has been changed:

    • Added RQ.WORKER_CLASS setting to complement RQ.QUEUE_CLASS;
    • Changed the rqworker management command to use defaults via get_worker_class/get_queue_class instead of defining its own;
    • in queues.py get_queue and get_queue_by_index now have **kwargs to allow passing any additional kwargs to queue class constructor;
    • in worker.py add get_worker_class and use it in other functions;
    • isort in all changed files.
    opened by skirsdeda 10
  • broken state after redis reset (restart)

    hi there, apologies in advance if this is not appropriate or not related to this project itself.

    Today we had a minor outage caused by a redis restart: all workers kept running after the disconnect (redis went down for a patch upgrade), but the workers seemingly did not re-register and stopped processing any entries from the queue after the restart. Killing it all and starting the workers again got things back on track.

    Is that by design? I'm not very familiar with this project, TBH; I found it very strange that this reconnection wasn't handled gracefully. I'm currently on django-rq = "==2.3.1" and rq = "==1.2.2".

    Should I be specifically handling this situation with a custom exception handler? I don't have any decent logs, but I assume the worker successfully reconnected to redis; it just did not process the queue anymore. Thanks in advance.

    Edit: before posting this, I found https://github.com/rq/rq/pull/1387. Would that fix the issue reported above? I'm keeping this issue open because I think it might help others facing the same problem.

    opened by enapupe 0
  • Worker sometimes loses the connection and needs to be restarted with psycopg2

    Similar to #216, my worker sometimes loses the connection and must be restarted:

    [...]
    cursor = self.connection.cursor()
    psycopg2.InterfaceError: connection already closed
    

    My env:

    python==3.8.13
    Django==3.2.16
    django-rq==2.6.0
    psycopg2==2.9.5
    redis==4.4.0rc4
    rq==1.11.1
    
    opened by Jeanbouvatt 2
  • “python_requires” should be set to “>=3.4”, as django-rq is not compatible with all Python versions.

    Currently, the keyword argument python_requires of setup() is not set, and thus it is assumed that this distribution is compatible with all Python versions. However, I found it is not compatible with Python 2. My local Python version is 2.7, and I encountered the following error when executing “pip install django-rq”:

    Collecting django-rq
      Downloading django_rq-2.5.1-py2.py3-none-any.whl (48 kB)
         |████████████████████████████████| 48 kB 424 kB/s 
    Collecting rq>=1.2
      Downloading rq-1.3.0-py2.py3-none-any.whl (59 kB)
         |████████████████████████████████| 59 kB 594 kB/s 
    ERROR: Could not find a version that satisfies the requirement django>=2.0 (from django-rq) (from versions: 1.1.3, 1.1.4, 1.2, 1.2.1, 1.2.2, 1.2.3, 1.2.4, 1.2.5, 1.2.6, 1.2.7, 1.3, 1.3.1, 1.3.2, 1.3.3, 1.3.4, 1.3.5, 1.3.6, 1.3.7, 1.4, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5, 1.4.6, 1.4.7, 1.4.8, 1.4.9, 1.4.10, 1.4.11, 1.4.12, 1.4.13, 1.4.14, 1.4.15, 1.4.16, 1.4.17, 1.4.18, 1.4.19, 1.4.20, 1.4.21, 1.4.22, 1.5, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.5.6, 1.5.7, 1.5.8, 1.5.9, 1.5.10, 1.5.11, 1.5.12, 1.6, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.6.6, 1.6.7, 1.6.8, 1.6.9, 1.6.10, 1.6.11, 1.7, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.7.8, 1.7.9, 1.7.10, 1.7.11, 1.8a1, 1.8b1, 1.8b2, 1.8rc1, 1.8, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.8.5, 1.8.6, 1.8.7, 1.8.8, 1.8.9, 1.8.10, 1.8.11, 1.8.12, 1.8.13, 1.8.14, 1.8.15, 1.8.16, 1.8.17, 1.8.18, 1.8.19, 1.9a1, 1.9b1, 1.9rc1, 1.9rc2, 1.9, 1.9.1, 1.9.2, 1.9.3, 1.9.4, 1.9.5, 1.9.6, 1.9.7, 1.9.8, 1.9.9, 1.9.10, 1.9.11, 1.9.12, 1.9.13, 1.10a1, 1.10b1, 1.10rc1, 1.10, 1.10.1, 1.10.2, 1.10.3, 1.10.4, 1.10.5, 1.10.6, 1.10.7, 1.10.8, 1.11a1, 1.11b1, 1.11rc1, 1.11, 1.11.1, 1.11.2, 1.11.3, 1.11.4, 1.11.5, 1.11.6, 1.11.7, 1.11.8, 1.11.9, 1.11.10, 1.11.11, 1.11.12, 1.11.13, 1.11.14, 1.11.15, 1.11.16, 1.11.17, 1.11.18, 1.11.20, 1.11.21, 1.11.22, 1.11.23, 1.11.24, 1.11.25, 1.11.26, 1.11.27, 1.11.28, 1.11.29)
    ERROR: No matching distribution found for django>=2.0 (from django-rq)
    
    

    Dependencies of this distribution are listed as follows:

    'django>=2.0', 
    'rq>=1.2',
    'redis>=3'
    

    I found that 'django>=2.0' requires Python>=3.4, which results in installation failure of django-rq in Python 2.7.

    Way to fix: modify setup() in setup.py to add the python_requires keyword argument:

    setup(…
         python_requires=">=3.4",
         …)
    

    Thanks for your attention. Best regards, PyVCEchecker

    opened by PyVCEchecker 0
  • Sporadic test failures with sqlite: no such table: django_rq_queue

    We have a big test suite in our django project. We use django-rq. Sometimes, when we run the tests (usually on our gitlab ci), the tests fail with the error: django.db.utils.OperationalError: no such table: django_rq_queue

    They don't always fail; it's more a 10% to 20% chance of failure.

    Any ideas what's going on?

    opened by finsterwalder 0
  • Job decorator ignored when using enqueue_in

    Hi! I have a little problem with the @job decorator: when using enqueue_in, it seems that the decorator is ignored. Below is a little example tasks.py:

    import time
    from datetime import timedelta

    from django_rq import get_scheduler, job

    @job("default", timeout=185)
    def test():
        time.sleep(182)
        print("Sleeped 190s")
        get_scheduler().enqueue_in(timedelta(seconds=1), test)
    

    Logs from the worker after calling delay() on the test job. TL;DR: after the first delay everything works fine and the timeout is applied, but when the next job is enqueued via enqueue_in, the timeout param declared in the job decorator is ignored.

    worker-1 | [2022-09-26 19:05:33,422] default: tasks.test() (bcb6b0e2-f30b-4de4-a2fb-219d05d38da5)
    worker-1 | default: tasks.test() (bcb6b0e2-f30b-4de4-a2fb-219d05d38da5)
    worker-1 | Sleeped 190s
    worker-1 | [2022-09-26 19:08:35,533] default: Job OK (bcb6b0e2-f30b-4de4-a2fb-219d05d38da5)
    worker-1 | default: Job OK (bcb6b0e2-f30b-4de4-a2fb-219d05d38da5)
    worker-1 | [2022-09-26 19:08:35,534] Result is kept for 500 seconds
    worker-1 | Result is kept for 500 seconds
    worker-1 | [2022-09-26 19:08:36,601] default: tasks.test() (ef8e854f-b412-4b50-8ba0-5d650b721cf0)
    worker-1 | default: tasks.test() (ef8e854f-b412-4b50-8ba0-5d650b721cf0)
    worker-1 | [2022-09-26 19:11:36,620] Traceback (most recent call last):
    worker-1 |   File "/venv/lib/python3.10/site-packages/rq/worker.py", line 1068, in perform_job
    worker-1 |     rv = job.perform()  
    worker-1 |   File "/venv/lib/python3.10/site-packages/rq/job.py", line 847, in perform
    worker-1 |     self._result = self._execute()  
    worker-1 |   File "/venv/lib/python3.10/site-packages/rq/job.py", line 870, in _execute     
    worker-1 |     result = self.func(*self.args, **self.kwargs)    
    worker-1 |   File "/code/app/pipelines/tasks.py", line 21, in test
    worker-1 |     time.sleep(182)
    worker-1 |   File "/venv/lib/python3.10/site-packages/rq/timeouts.py", line 61, in handle_death_penalty
    worker-1 |     raise self._exception('Task exceeded maximum timeout value '   
    worker-1 | rq.timeouts.JobTimeoutException: Task exceeded maximum timeout value (180 seconds)  
    worker-1 | Traceback (most recent call last):  
    worker-1 |   File "/venv/lib/python3.10/site-packages/rq/worker.py", line 1068, in perform_job
    worker-1 |     rv = job.perform()  
    worker-1 |   File "/venv/lib/python3.10/site-packages/rq/job.py", line 847, in perform    
    worker-1 |     self._result = self._execute()  
    worker-1 |   File "/venv/lib/python3.10/site-packages/rq/job.py", line 870, in _execute
    worker-1 |     result = self.func(*self.args, **self.kwargs)    
    worker-1 |   File "/code/app/pipelines/tasks.py", line 21, in test     
    worker-1 |     time.sleep(182)     
    worker-1 |   File "/venv/lib/python3.10/site-packages/rq/timeouts.py", line 61, in handle_death_penalty
    worker-1 |     raise self._exception('Task exceeded maximum timeout value '
    worker-1 | rq.timeouts.JobTimeoutException: Task exceeded maximum timeout value (180 seconds)
    worker-1 | Traceback (most recent call last):  
    worker-1 |   File "/venv/lib/python3.10/site-packages/rq/worker.py", line 1068, in perform_job  
    worker-1 |     rv = job.perform()  
    worker-1 |   File "/venv/lib/python3.10/site-packages/rq/job.py", line 847, in perform
    worker-1 |     self._result = self._execute()  
    worker-1 |   File "/venv/lib/python3.10/site-packages/rq/job.py", line 870, in _execute
    worker-1 |     result = self.func(*self.args, **self.kwargs)    
    worker-1 |   File "/code/app/pipelines/tasks.py", line 21, in test     
    worker-1 |     time.sleep(182)     
    worker-1 |   File "/venv/lib/python3.10/site-packages/rq/timeouts.py", line 61, in handle_death_penalty
    worker-1 |     raise self._exception('Task exceeded maximum timeout value '
    worker-1 | rq.timeouts.JobTimeoutException: Task exceeded maximum timeout value (180 seconds)
    worker-1 | Traceback (most recent call last):  
    worker-1 |   File "/venv/lib/python3.10/site-packages/rq/worker.py", line 1068, in perform_job
    worker-1 |     rv = job.perform()  
    worker-1 |   File "/venv/lib/python3.10/site-packages/rq/job.py", line 847, in perform
    worker-1 |     self._result = self._execute()  
    worker-1 |   File "/venv/lib/python3.10/site-packages/rq/job.py", line 870, in _execute      
    worker-1 |     result = self.func(*self.args, **self.kwargs)    
    worker-1 |   File "/code/app/pipelines/tasks.py", line 21, in test     
    worker-1 |     time.sleep(182)     
    worker-1 |   File "/venv/lib/python3.10/site-packages/rq/timeouts.py", line 61, in handle_death_penalty
    worker-1 |     raise self._exception('Task exceeded maximum timeout value '
    worker-1 | rq.timeouts.JobTimeoutException: Task exceeded maximum timeout value (180 seconds)
    
    opened by krzysieqq 0
Releases (v2.6.0)
  • v2.6.0(Nov 5, 2022)

    • Added --max-jobs argument to rqworker management command. Thanks @arpit-goel!
    • Remove job from ScheduledJobRegistry if a scheduled job is enqueued from admin. Thanks @robertaistleitner!
    • Minor code cleanup. Thanks @reybog90!
  • v2.5.1(Nov 22, 2021)

  • v2.5.0(Nov 17, 2021)

    • Better integration with Django admin, along with a new Access admin page permission that you can selectively grant to users. Thanks @haakenlid!
    • Worker count is now updated every time you view workers for that specific queue. Thanks @cgl!
    • Add the capability to pass arbitrary Redis client kwargs. Thanks @juanjgarcia!
    • Always escape text when rendering job arguments. Thanks @rhenanbartels!
    • Add @never_cache decorator to all Django-RQ views. Thanks @Cybernisk!
    • SSL_CERT_REQS argument should also be passed to Redis client even when Redis URL is used. Thanks @paltman!
  • v2.4.0(Nov 8, 2020)

  • v2.3.2(May 14, 2020)

  • v2.3.1(Apr 10, 2020)

    • Added --with-scheduler argument to rqworker management command. Thanks @stlk!
    • Fixed a bug where opening job detail would crash if job.dependency no longer exists. Thanks @selwin!
  • v2.3.0(Feb 9, 2020)

    • Support for RQ's new ScheduledJobRegistry. Thanks @Yolley!
    • Improve performance when displaying pages showing a large number of jobs by using Job.fetch_many(). Thanks @selwin!
    • django-rq will now automatically cleanup orphaned worker keys in job registries. Thanks @selwin!
    • Site name now properly displayed in Django-RQ admin pages. Thanks @tom-price!
    • NoSuchJobErrors are now handled properly when requeuing all jobs. Thanks @thomasmatecki!
    • Support for displaying jobs with names containing $. Thanks @gowthamk63!