Hue Editor: Open source SQL Query Assistant for Databases/Warehouses

Overview



Query. Explore. Share.

The Hue Editor is a mature open source SQL Assistant for querying databases and data warehouses.

Many companies and organizations use Hue to quickly answer questions via self-service querying:

  • 1000+ customers
  • Top Fortune 500

are executing 1000s of queries daily. Hue ships in data platforms like Cloudera, Google Dataproc, Amazon EMR, Open Data Hub, and more.

Hue is also ideal for building your own Cloud SQL Editor, and any contributions are welcome.

Read more on: gethue.com

Hue Editor

Getting Started

You can start Hue in the three ways described below. Once it is set up, configure Hue to point to the databases you want to query.
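
For example, pointing Hue at a MySQL database usually takes a single interpreter entry in desktop/conf/hue.ini (a minimal sketch based on the connector documentation; the host, credentials, and database name are placeholders):

[notebook]
  [[interpreters]]
    [[[mysql]]]
      name=MySQL
      interface=sqlalchemy
      options='{"url": "mysql://user:password@localhost:3306/mydb"}'

After a restart, the new dialect should show up in the editor's list of connectors.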

Quick Demos:

The Forum is here in case you are looking for help.

Docker

Start Hue in a single click with the Docker Guide or the video blog post.

docker run -it -p 8888:8888 gethue/hue:latest

Hue should now be up and running on your default Docker IP on port 8888: http://localhost:8888!

Next, read more about configurations.
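
To run with your own settings, a common pattern is mounting a configuration override into the container (a sketch following the Docker guide; the exact override path can vary by image version):

docker run -it -p 8888:8888 -v $PWD/hue.ini:/usr/share/hue/desktop/conf/z-hue.ini gethue/hue:latest

The z- prefix follows the usual convention that later-sorted ini files in the conf directory override earlier ones, so values in the mounted file take precedence over the default hue.ini.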

Kubernetes

helm repo add gethue https://helm.gethue.com
helm repo update
helm install hue gethue/hue

Read more about configurations at tools/kubernetes.
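
As with any Helm chart, the defaults can be inspected and overridden at install time (standard Helm workflow; the available keys are defined in the chart's values.yaml):

helm show values gethue/hue > values.yaml
# edit values.yaml to match your setup, then:
helm install hue gethue/hue -f values.yaml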

Development

First install the dependencies, then clone the Hue repo, build the apps, and start the development server.

git clone https://github.com/cloudera/hue.git
cd hue
make apps
build/env/bin/hue runserver

Now Hue should be running on http://localhost:8000!

Read more in the development documentation.
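
The development server is Django's runserver under the hood, so the usual host:port argument applies if you need to reach it from another machine (a sketch, not Hue-specific behavior):

build/env/bin/hue runserver 0.0.0.0:8000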

Note: For a very quick start that skips installing a dev environment, go with the Dev Docker.

Community

License

Apache License, Version 2.0

Comments
  • Building a debian package with Python 3.7 deps


    Describe the bug:

    Hi! This is more a request to figure out the best procedure to build Hue from source and then package it (in my case, in a deb package). From the documentation I see that make apps is needed to populate the build directory, and I thought that the Debian package only needed to copy that directory onto the target system, but then I realized that the procedure is more complex.

    I also tried with PREFIX=/some/path PYTHON_VER=python3.7 make install and ended up in:

      File "/home/vagrant/hue-release-4.7.1/desktop/core/ext-py/MySQL-python-1.2.5/setup_posix.py", line 2, in <module>
        from ConfigParser import SafeConfigParser
    ModuleNotFoundError: No module named 'ConfigParser'
    

    I checked the ext-py directories and some of them do not seem ready for Python 3, so I am wondering if I am doing the right steps.

    If I follow https://github.com/cloudera/hue#building-from-source everything works fine (so the dev server comes up without any issue).

    Steps to reproduce it?

    Hue source version 4.7.1, then:

    PYTHON_VER=python3.7 make apps
    PYTHON_VER=python3.7 PREFIX=/some/local/path make install
    

    Followed https://docs.gethue.com/administrator/installation/install/

    Ideally, what I want to achieve is something similar to the Debian package that Cloudera releases with CDH, but built for Python 3.7 and some other dependencies (like MariaDB dev instead of MySQL, etc.).

    Any help would be really appreciated :)

    opened by elukey 28
  • add Apache Flink connector to Hue


    Is the issue already present in https://github.com/cloudera/hue/issues or discussed in the forum https://discourse.gethue.com?

    no

    What is the Hue version or source? (e.g. open source 4.5, CDH 5.16, CDP 1.0...)

    n/a

    Is there a way to help reproduce it?

    n/a

    opened by bowenli86 26
  • Oozie job browser's graph does not show red/green bars for failed/successful subworkflows


    Describe the bug:

    While testing a failed coordinator run with the new Hue version, I noticed that the Oozie Job Browser graph no longer shows red/green bars under failed/completed subworkflows. It is more difficult to find what the problem was and debug it under these conditions :(

    To visualize:

    Current behavior: (screenshot)

    Expected/old behavior: (screenshot)

    I noticed the following JS error though:

    hue.errorcatcher.34bb8f5ecd32.js:31 Hue detected a Javascript error:  https://hue-next.wikimedia.org/static/desktop/js/bundles/hue/vendors~hue~notebook~tableBrowser-chunk-4d9d2c608a19e4e1ab7a.51951e55bebc.js 1705 2 Uncaught Error: Syntax error, unrecognized expression: [id^[email protected]_partition]
    window.onerror @ hue.errorcatcher.34bb8f5ecd32.js:31
    jquery.js:1677 Uncaught Error: Syntax error, unrecognized expression: [id^[email protected]_partition]
        at Function.Sizzle.error (jquery.js:1677)
        at Sizzle.tokenize (jquery.js:2377)
        at Sizzle.select (jquery.js:2838)
        at Function.Sizzle [as find] (jquery.js:894)
        at jQuery.fn.init.find (jquery.js:3095)
        at new jQuery.fn.init (jquery.js:3205)
        at jQuery (jquery.js:157)
        at <anonymous>:66:25
        at Object.D (knockout-latest.js:11)
        at Object.success (<anonymous>:59:26)
        at fire (jquery.js:3496)
        at Object.fireWith [as resolveWith] (jquery.js:3626)
        at done (jquery.js:9786)
        at XMLHttpRequest.<anonymous> (jquery.js:10047)
    

    Steps to reproduce it?

    Simply navigate to a URL like /hue/jobbrowser/jobs/0009826-200915132022208-oozie-oozi-W#!id=0009824-200915132022208-oozie-oozi-W.

    Hue version or source? (e.g. open source 4.5, CDH 5.16, CDP 1.0...). System info (e.g. OS, Browser...).

    4.7.1

    Stale 
    opened by elukey 25
  • Add a dask-sql parser and connector


    From Nils:

    Thanks to @romain for his nice comment on this Medium post on dask-sql, where he asks if it would be a good idea to add dask-sql as a parser and a connector to Hue. I would be very happy to collaborate on this! Thank you very much for proposing this.

    Looking at the code (and your very nice documentation), I have the feeling that adding a new "dask-sql" component should be rather straightforward - whereas adding the SQL parser would probably be a bit more work. I would probably start with the presto dialect and first remove everything that dask-sql does not implement (so far).

    Creating a correct parser seems like a really large task to me, and I would be happy if you have suggestions on how to properly sync Hue with the SQL that dask-sql understands, and on how to speed up the development.

    I would also be happy for any guidance on how to reasonably split this large task, and whether you think this is a good idea in general.

    Thanks!

    connector Stale 
    opened by romainr 21
  • Presto SQLAlchemy Interface over Https fails to connect to the server


    Is the issue already present in https://github.com/cloudera/hue/issues or discussed in the forum https://discourse.gethue.com?

    Describe the bug: For an SSL-enabled Presto cluster, Hue's SQLAlchemy interface fails to connect.

    Steps to reproduce it? Our Presto cluster is SSL/LDAP enabled, so we configured it as below in the notebook section:

    Case 1:

    [[[presto]]]
    interface=sqlalchemy
    name=Presto
    options='{"url": "presto://user:[email protected]:8443/hive/dwh"}'

    In this case, the exception in the Hue logs is:

    (exceptions.ValueError) Protocol must be https when passing a password [SQL: SHOW SCHEMAS]

    Case 2:

    [[[presto]]]
    interface=sqlalchemy
    name=Presto
    options='{"url": "presto://username:[email protected]://presto coordinator:8443/hive/dwh"}'

    In this case, the error is:

    ValueError: invalid literal for int() with base 10: ''
    [14/Aug/2020 10:59:28 -0700] decorators ERROR Error running autocomplete
    Traceback (most recent call last):
      File "/usr/share/hue/desktop/libs/notebook/src/notebook/decorators.py", line 114, in wrapper
        return f(*args, **kwargs)
      File "/usr/share/hue/desktop/libs/notebook/src/notebook/api.py", line 729, in autocomplete
        autocomplete_data = get_api(request, snippet).autocomplete(snippet, database, table, column, nested)
      File "/usr/share/hue/desktop/libs/notebook/src/notebook/connectors/sql_alchemy.py", line 113, in decorator
        raise QueryError(message)
    QueryError: invalid literal for int() with base 10: ''

    Hue version or source? Docker image of hue 4.7.1 deployed in Kubernetes

    opened by DRavikanth 20
  • HUE-2962 [desktop] - 'Config' object has no attribute 'get'


    Hello @spaztic1215

    I am trying to upgrade to the latest version of Hue, and I think commit 8c88233 (HUE-2962 [desktop] Add configuration property definition for HS2) introduced an error in "connectors/hiveserver2.py" when it tries to read the Hive configuration.

    I am running without Impala (app_blacklist=impala,security,search,rdbms,zookeeper)

    Have you experienced this?

    Traceback (most recent call last):
      File "/usr/local/hue.prod/desktop/core/src/desktop/lib/wsgiserver.py", line 1215, in communicate
        req.respond()
      File "/usr/local/hue.prod/desktop/core/src/desktop/lib/wsgiserver.py", line 576, in respond
        self._respond()
      File "/usr/local/hue.prod/desktop/core/src/desktop/lib/wsgiserver.py", line 588, in _respond
        response = self.wsgi_app(self.environ, self.start_response)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/handlers/wsgi.py", line 206, in __call__
        response = self.get_response(request)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/handlers/base.py", line 194, in get_response
        response = self.handle_uncaught_exception(request, resolver, sys.exc_info())
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/handlers/base.py", line 236, in handle_uncaught_exception
        return callback(request, **param_dict)
      File "/usr/local/hue.prod/desktop/core/src/desktop/views.py", line 331, in serve_500_error
        return render("500.mako", request, {'traceback': traceback.extract_tb(exc_info[2])})
      File "/usr/local/hue.prod/desktop/core/src/desktop/lib/django_util.py", line 227, in render
        **kwargs)
      File "/usr/local/hue.prod/desktop/core/src/desktop/lib/django_util.py", line 148, in _render_to_response
        return django_mako.render_to_response(template, *args, **kwargs)
      File "/usr/local/hue.prod/desktop/core/src/desktop/lib/django_mako.py", line 125, in render_to_response
        return HttpResponse(render_to_string(template_name, data_dictionary), **kwargs)
      File "/usr/local/hue.prod/desktop/core/src/desktop/lib/django_mako.py", line 114, in render_to_string_normal
        result = template.render(**data_dict)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/template.py", line 443, in render
        return runtime._render(self, self.callable_, args, data)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/runtime.py", line 786, in _render
        **_kwargs_for_callable(callable_, data))
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/runtime.py", line 818, in _render_context
        _exec_template(inherit, lclcontext, args=args, kwargs=kwargs)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/runtime.py", line 844, in _exec_template
        callable_(context, *args, **kwargs)
      File "/tmp/tmpkguRIt/desktop/500.mako.py", line 40, in render_body
        __M_writer(unicode( commonheader(_('500 - Server error'), "", user) ))
      File "/usr/local/hue.prod/desktop/core/src/desktop/views.py", line 409, in commonheader
        'is_ldap_setup': 'desktop.auth.backend.LdapBackend' in desktop.conf.AUTH.BACKEND.get()
      File "/usr/local/hue.prod/desktop/core/src/desktop/lib/django_mako.py", line 112, in render_to_string_normal
        template = lookup.get_template(template_name)
      File "/usr/local/hue.prod/desktop/core/src/desktop/lib/django_mako.py", line 89, in get_template
        return real_loader.get_template(uri)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/lookup.py", line 245, in get_template
        return self._load(srcfile, uri)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/lookup.py", line 311, in _load
        **self.template_args)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/template.py", line 321, in __init__
        module = self._compile_from_file(path, filename)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/template.py", line 379, in _compile_from_file
        module = compat.load_module(self.module_id, path)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/compat.py", line 55, in load_module
        return imp.load_source(module_id, path, fp)
      File "/tmp/tmpkguRIt/desktop/common_header.mako.py", line 25, in <module>
        home_url = url('desktop.views.home')
      File "/usr/local/hue.prod/desktop/core/src/desktop/lib/django_mako.py", line 131, in url
        return reverse(view_name, args=args, kwargs=view_args)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/urlresolvers.py", line 536, in reverse
        return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs))
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/urlresolvers.py", line 403, in _reverse_with_prefix
        self._populate()
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/urlresolvers.py", line 303, in _populate
        lookups.appendlist(pattern.callback, (bits, p_pattern, pattern.default_args))
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/urlresolvers.py", line 230, in callback
        self._callback = get_callable(self._callback_str)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/utils/functional.py", line 32, in wrapper
        result = func(*args)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/urlresolvers.py", line 97, in get_callable
        mod = import_module(mod_name)
      File "/usr/local/hue.prod/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/utils/importlib.py", line 40, in import_module
        __import__(name)
      File "/usr/local/hue.prod/desktop/core/src/desktop/configuration/api.py", line 29, in <module>
        from notebook.connectors.hiveserver2 import HiveConfiguration, ImpalaConfiguration
      File "/usr/local/hue.prod/desktop/libs/notebook/src/notebook/connectors/hiveserver2.py", line 77, in <module>
        class HiveConfiguration(object):
      File "/usr/local/hue.prod/desktop/libs/notebook/src/notebook/connectors/hiveserver2.py", line 103, in HiveConfiguration
        "options": [config.lower() for config in hive_settings.get()]
    AttributeError: 'Config' object has no attribute 'get'
    
    opened by syepes 20
  • Receiving Error "Incorrect string value: '\xC5\x9F\xC4\xB1k ...' for column 'query' at row 1" while running a query from the Hue Hive query editor

    We are running a simple select query with one condition. The query runs fine from the Hive CLI, Beeline, and Zeppelin, but when running the same query from the Hue Hive query editor, we received the error below:

    "Incorrect string value: '\xC5\x9F\xC4\xB1k ...' for column 'query' at row 1"

    SELECT * FROM <table_name> WHERE <column_name> = 'Turkish Characters';

    Hue version: 3.9.0

    Notes:
    1. The data has some special (Turkish) characters; not sure how Hue handles this type of character.
    2. Hue is running in a web browser which is in UTF-8.
    3. Looking for any configuration in Hue for Turkish character encoding.

    During initial research, this issue seems to be related to HUE-4889. Any help?

    Stale 
    opened by shyamshaw 18
  • Hue 3.8.0 - Map reduce job in workflow is not using the job properties


    A MapReduce job that is part of a workflow is not submitting the job properties to Oozie/YARN. When I submit the following as part of a workflow, I get an exception about the output dir. A similar job in Job Designer succeeds without any issues.

    Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.MapReduceMain], main() threw exception, Output directory not set in JobConf.
      org.apache.hadoop.mapred.InvalidJobConfException: Output directory not set in JobConf.
      at org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:118)
      at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:460)
      at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:343)
      at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
    


    On a side note, is there a way to export the Oozie workflow XML? I don't see it in 3.8.0, but I have it in my existing 3.7.1 installation.

    Also, the import feature is missing.

    opened by skkolli 18
  • CircleCI: Add build jobs that run on Linux ARM64


    What changes were proposed in this pull request?

    This is the third part of #2531! It is based on top of #2555, so it should be merged after #2555!

    This PR introduces the build jobs on Linux ARM64 and a new workflow that runs these jobs every Sunday at 1AM UTC.

    How was this patch tested?

    Only CircleCI's config.yml has been modified. CircleCI jobs should still work!

    feature tools roadmap 
    opened by martin-g 17
  • While creating the JDBC connection string, getting the error invalid syntax (jdbc.py, line 47)


    Please find below the error message received while accessing data via the JDBC connection string.

    JDBC connection string:

    [[[Avalanche]]]
    name=Avalanche
    interface=jdbc
    options='{"url": "jdbc:ingres://av-0pq13o8h1zrp.avdev.actiandatacloud.com:27839/db", "driver": "com.ingres.jdbc.IngresDriver", "user": "*", "password": "**"}'

    Error Message:

    exceptions_renderable ERROR Potential trace: [<FrameSummary file /home/actian/Hue_server/hue/desktop/libs/hadoop/src/hadoop/yarn/resource_manager_api.py, line 172 in _execute>, <FrameSummary file /home/actian/Hue_server/hue/desktop/core/src/desktop/lib/rest/resource.py, line 157 in get>, <FrameSummary file /home/actian/Hue_server/hue/desktop/core/src/desktop/lib/rest/resource.py, line 78 in invoke>, <FrameSummary file /home/actian/Hue_server/hue/desktop/core/src/desktop/lib/rest/resource.py, line 105 in _invoke>, <FrameSummary file /home/actian/Hue_server/hue/desktop/core/src/desktop/lib/rest/http_client.py, line 229 in execute>]
    [10/Dec/2020 06:26:27 -0800] cluster INFO Resource Manager not available, trying another RM: YARN RM returned a failed response: HTTPConnectionPool(host='localhost', port=8088): Max retries exceeded with url: /ws/v1/cluster/apps?doAs=admin&user.name=hue&user=admin&finalStatus=UNDEFINED&limit=1000&startedTimeBegin=1607005587000 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f98a04db5b0>: Failed to establish a new connection: [Errno 111] Connection refused')).

    Stale 
    opened by balajivenka 17
  • MockRequest instance has no attribute 'fs'


    Hue version 4.8.0 (docker)

    When I execute a script in the Pig editor or a Spark program in the Spark editor, I always get the message "MockRequest instance has no attribute 'fs'". There seems to be no impact on the behaviour. In the logs, there's this error:

    [09/Feb/2021 10:30:48 +0100] decorators   ERROR    Error running fetch_result_data
    Traceback (most recent call last):
      File "/usr/share/hue/desktop/libs/notebook/src/notebook/decorators.py", line 114, in wrapper
        return f(*args, **kwargs)
      File "/usr/share/hue/desktop/libs/notebook/src/notebook/api.py", line 318, in fetch_result_data
        response = _fetch_result_data(request.user, notebook, snippet, operation_id, rows=rows, start_over=start_over)
      File "/usr/share/hue/desktop/libs/notebook/src/notebook/api.py", line 329, in _fetch_result_data
        'result': get_api(request, snippet).fetch_result(notebook, snippet, rows, start_over)
      File "/usr/share/hue/desktop/libs/notebook/src/notebook/models.py", line 517, in get_api
        return ApiWrapper(request, snippet)
      File "/usr/share/hue/desktop/libs/notebook/src/notebook/models.py", line 503, in __init__
        self.api = _get_api(request, snippet)
      File "/usr/share/hue/desktop/libs/notebook/src/notebook/connectors/base.py", line 436, in get_api
        return OozieApi(user=request.user, request=request)
      File "/usr/share/hue/desktop/libs/notebook/src/notebook/connectors/oozie_batch.py", line 58, in __init__
        self.fs = self.request.fs
    AttributeError: MockRequest instance has no attribute 'fs'
    

    What is the reason for this message? Maybe a misconfiguration in Hue? Thanks in advance.

    opened by stephbat 16
  • Fix aws region attribute in values.yaml


    What changes were proposed in this pull request?

    • Fix aws region attribute in values.yaml

    How was this patch tested?

    • Manual

    There seems to be a typo in the Helm chart's values.yaml: the configmap-hue YAML references .Values.aws.region.

    Please review.

    opened by a1tair6 1
  • [frontend] fix multi session id mismatch error


    What changes were proposed in this pull request?

    https://github.com/cloudera/hue/issues/3145 I created this PR to fix the issue above, but I am not so familiar with JavaScript and the original session logic here; maybe there are better solutions.

    • Remove the code that updates an existing session's id after execution.

    How was this patch tested?

    manual tests

    Please review Hue Contributing Guide before opening a pull request.

    opened by sumtumn 1
  • notebook multisession error: Session matching query does not exist.


    Is the issue already present in https://github.com/cloudera/hue/issues or discussed in the forum https://discourse.gethue.com?

    No.

    Describe the bug: session ids mismatch in the notebook, which leads to the exception below in the Hue server.

    Traceback (most recent call last):
      File "/usr/lib/emr/current/hue/desktop/libs/notebook/src/notebook/decorators.py", line 119, in wrapper
        return f(*args, **kwargs)
      File "/usr/lib/emr/current/hue/desktop/libs/notebook/src/notebook/api.py", line 236, in execute
        response = _execute_notebook(request, notebook, snippet)
      File "/usr/lib/emr/current/hue/desktop/libs/notebook/src/notebook/api.py", line 161, in _execute_notebook
        response['handle'] = interpreter.execute(notebook, snippet)
      File "/usr/lib/emr/current/hue/desktop/libs/notebook/src/notebook/connectors/hiveserver2.py", line 99, in decorator
        return func(*args, **kwargs)
      File "/usr/lib/emr/current/hue/desktop/libs/notebook/src/notebook/connectors/hiveserver2.py", line 322, in execute
        _session = self._get_session_by_id(notebook, snippet['type'])
      File "/usr/lib/emr/current/hue/desktop/libs/notebook/src/notebook/connectors/hiveserver2.py", line 721, in _get_session_by_id
        return Session.objects.get(**filters)
      File "/usr/lib/emr/current/hue/build/env/lib/python2.7/site-packages/Django-1.11.29-py2.7.egg/django/db/models/manager.py", line 85, in manager_method
        return getattr(self.get_queryset(), name)(*args, **kwargs)
      File "/usr/lib/emr/current/hue/build/env/lib/python2.7/site-packages/Django-1.11.29-py2.7.egg/django/db/models/query.py", line 380, in get
        self.model._meta.object_name
    DoesNotExist: Session matching query does not exist.

    Steps to reproduce it?

    1. Create a blank notebook.
    2. Add a Hive snippet, which invokes the API /notebook/api/create_session; the response is:

    {"status": 0, "session": {"reuse_session": false, "session_id": "2b47d9a6457354c0:9f8030b890dbe8af", "properties": [{"multiple": true, "defaultValue": [], "value": [], "nice_name": "Files", "key": "files", "help_text": "Add one or more files, jars, or archives to the list of resources.", "type": "hdfs-files"}, {"multiple": true, "defaultValue": [], "value": [], "nice_name": "Functions", "key": "functions", "help_text": "Add one or more registered UDFs (requires function name and fully-qualified class name).", "type": "functions"}, {"nice_name": "Settings", "multiple": true, "key": "settings", "help_text": "Hive and Hadoop configuration properties.", "defaultValue": [], "type": "settings", "options": ["hive.map.aggr", "hive.exec.compress.output", "hive.exec.parallel", "hive.execution.engine", "mapreduce.job.queuename"], "value": []}], "configuration": {"hive.map.aggr": "true", "hive.execution.engine": "tez", "mapreduce.job.queuename": "default", "hive.exec.parallel": "true", "hive.exec.compress.output": "false", "hive.server2.thrift.resultset.default.fetch.size": "1000"}, "type": "hive", "id": 101}}

    The session id is 101.

    3. Execute the Hive SQL; it passes.
    4. Add a SparkSQL snippet, which also invokes the API /notebook/api/create_session; the response is:

    {"status": 0, "session": {"reuse_session": false, "session_id": "4745b846bfcd60a7:ab1a5701d2f35983", "properties": [{"multiple": true, "defaultValue": [], "value": [], "nice_name": "Files", "key": "files", "help_text": "Add one or more files, jars, or archives to the list of resources.", "type": "hdfs-files"}, {"multiple": true, "defaultValue": [], "value": [], "nice_name": "Functions", "key": "functions", "help_text": "Add one or more registered UDFs (requires function name and fully-qualified class name).", "type": "functions"}, {"nice_name": "Settings", "multiple": true, "key": "settings", "help_text": "Hive and Hadoop configuration properties.", "defaultValue": [], "type": "settings", "options": ["hive.map.aggr", "hive.exec.compress.output", "hive.exec.parallel", "hive.execution.engine", "mapreduce.job.queuename"], "value": []}], "configuration": {}, "type": "sparksql", "id": 103}}

    The session id is 103.

    5. Execute one SparkSQL query; it passes.
    6. Re-execute the Hive SQL; the error happens: Session matching query does not exist.

    I found that it happens because the session ids of Hive and SparkSQL mismatch. E.g., re-executing the Hive SQL invokes the API /notebook/api/execute/hive, and the notebook field in the payload is:

    {"id":null,"uuid":"05536e0a-016b-4b38-b547-b492733fe6c9","parentSavedQueryUuid":null,"isSaved":false,"sessions":[{"type":"hive","properties":[{"multiple":true,"defaultValue":[],"value":[],"nice_name":"Files","key":"files","help_text":"Add one or more files, jars, or archives to the list of resources.","type":"hdfs-files"},{"multiple":true,"defaultValue":[],"value":[],"nice_name":"Functions","key":"functions","help_text":"Add one or more registered UDFs (requires function name and fully-qualified class name).","type":"functions"},{"nice_name":"Settings","multiple":true,"key":"settings","help_text":"Hive and Hadoop configuration properties.","defaultValue":[],"type":"settings","options":["hive.map.aggr","hive.exec.compress.output","hive.exec.parallel","hive.execution.engine","mapreduce.job.queuename"],"value":[]}],"reuse_session":false,"session_id":"4745b846bfcd60a7:ab1a5701d2f35983","configuration":{"hive.map.aggr":"true","hive.execution.engine":"tez","mapreduce.job.queuename":"default","hive.exec.parallel":"true","hive.exec.compress.output":"false","hive.server2.thrift.resultset.default.fetch.size":"1000"},"id":103},{"type":"sparksql","properties":[{"multiple":true,"defaultValue":[],"value":[],"nice_name":"Files","key":"files","help_text":"Add one or more files, jars, or archives to the list of resources.","type":"hdfs-files"},{"multiple":true,"defaultValue":[],"value":[],"nice_name":"Functions","key":"functions","help_text":"Add one or more registered UDFs (requires function name and fully-qualified class name).","type":"functions"},{"nice_name":"Settings","multiple":true,"key":"settings","help_text":"Hive and Hadoop configuration properties.","defaultValue":[],"type":"settings","options":["hive.map.aggr","hive.exec.compress.output","hive.exec.parallel","hive.execution.engine","mapreduce.job.queuename"],"value":[]}],"reuse_session":false,"session_id":"4745b846bfcd60a7:ab1a5701d2f35983","configuration":{},"id":103}],"type":"notebook","name":"My Notebook"}
    

    We can see that the session ids of Hive and SparkSQL are both 103, instead of Hive 101 and SparkSQL 103.

    Hue version or source? (e.g. open source 4.5, CDH 5.16, CDP 1.0...). System info (e.g. OS, Browser...). open source 4.9.0

    editor 
    opened by sumtumn 1
  • [raz] Fix non-ascii directory creation in S3


    What changes were proposed in this pull request?

    • Creating a directory with non-ascii characters was throwing a signature mismatch error.

    • On further investigation, found out that there was a signature mismatch for the GET requests that go through the boto2 client implementation in Hue: the signed headers from RAZ were not matching what S3 was generating to verify them.

    • This is only happening for GET requests, so most probably all GET method operations are failing with this error.

    • The next step was to see why this signature mismatch was happening for the path: was it the path we sent to RAZ for signed-header generation, or the path in the S3-side request where S3 verifies the header?

    • After narrowing down the issue, found out that we need to fully unquote the path before sending it to RAZ, so that signed headers are generated for the correct path and S3 can verify them correctly. We didn't touch the path sent to the S3 side.

    How was this patch tested?

    Tested E2E in a live cluster.

    Please review Hue Contributing Guide before opening a pull request.

    opened by Harshg999 0
  • [docs] Update docs to use latest npm version v5


    What changes were proposed in this pull request?

    The gethue npm package was upgraded to v5; this PR updates the documentation to match.

    How was this patch tested?

    By running hugo server

    opened by sreenaths 1
Releases
  • release-4.10.0(Jun 11, 2021)

    This release is 4.10.

    Please see more at:

    • https://docs.gethue.com/releases/release-notes-4.10.0/
    • https://gethue.com/blog/hue-4-10-sql-scratchpad-component-rest-api-small-file-importer-slack-app/

    And feel free to send feedback!

    Hue Team

    Source code(tar.gz)
    Source code(zip)
  • release-4.9.0(Feb 2, 2021)

    This release is 4.9.

    Please see more at:

    • https://docs.gethue.com/releases/release-notes-4.9.0/
    • https://gethue.com/blog/hue-4-9-sql-dialects-phoenix-dasksql-flink-components/

    And feel free to send feedback!

    Hue Team

    Source code(tar.gz)
    Source code(zip)
  • release-4.8.0(Sep 23, 2020)

    This release is 4.8.

    Please see more at:

    • https://docs.gethue.com/releases/release-notes-4.8.0/
    • https://gethue.com/blog/hue-4-8-phoenix-flink-sparksql-components/

    And feel free to send feedback!

    Hue Team

    Source code(tar.gz)
    Source code(zip)
  • release-4.7.1(Jun 26, 2020)

    This release contains 4.7 and additional fixes.

    Please see more at:

    • https://docs.gethue.com/releases/release-notes-4.7.0/
    • https://gethue.com/blog/hue-4-7-and-its-improvements-are-out/

    And feel free to send feedback.

    Hue Team

    Source code(tar.gz)
    Source code(zip)