Open Data Cube analyses continental-scale Earth Observation data through time

Open Data Cube Core

Overview

The Open Data Cube Core provides an integrated gridded data analysis environment for decades of analysis-ready Earth observation satellite data and related data from multiple satellite and other acquisition systems.
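
For orientation, a minimal usage sketch is shown below; the product name and query extents are placeholders and will differ for each deployment.

import datacube

# Connect to the configured index and load a small cube of data through time.
# The product name and extents below are illustrative only.
dc = datacube.Datacube()
data = dc.load(
    product="ls8_nbar_albers",
    x=(149.0, 149.2),
    y=(-35.4, -35.2),
    time=("2018-01-01", "2018-12-31"),
)
print(data)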

Documentation

See the user guide for installation and usage of the datacube, and for documentation of the API.

Join our Slack if you need help setting up or using the Open Data Cube.

Please help us to keep the Open Data Cube community open and inclusive by reading and following our Code of Conduct.

Requirements

System

  • PostgreSQL 10+
  • Python 3.8+

Developer setup

  1. Clone:
    • git clone https://github.com/opendatacube/datacube-core.git
  2. Create a Python environment for using the ODC. We recommend conda as the easiest way to handle Python dependencies.
conda create -n odc -c conda-forge python=3.8 datacube pre_commit
conda activate odc
  3. Install a development (editable) version of datacube-core.
cd datacube-core
pip install --upgrade -e .
  4. Install the pre-commit hooks to help follow ODC coding conventions when committing with git.
pre-commit install
  5. Run unit tests + PyLint: ./check-code.sh

    (This script approximates what is run by Travis; you can alternatively run pytest yourself.) Some test dependencies may need to be installed; attempt to install these using:

    pip install --upgrade -e '.[test]'

    If installing these fails, please lodge an issue.

  6. (or) Run all tests, including integration tests.

    ./check-code.sh integration_tests

    • Assumes a password-less Postgres database running on localhost called agdcintegration.

    • Otherwise copy integration_tests/agdcintegration.conf to ~/.datacube_integration.conf and edit to customise.
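
    For reference, a customised ~/.datacube_integration.conf might look roughly like the sketch below; the values are illustrative only, so start from integration_tests/agdcintegration.conf and adjust to your own Postgres setup.

    [datacube]
    db_hostname: localhost
    db_database: agdcintegration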

Alternatively, you can use the opendatacube/datacube-tests Docker image to run tests. This image includes a database server pre-configured for running integration tests. Add the --with-docker option as the first argument to the ./check-code.sh script.

./check-code.sh --with-docker integration_tests

Developer setup on Ubuntu

Building a Python virtual environment on Ubuntu suitable for development work.

Install dependencies:

sudo apt-get update
sudo apt-get install -y \
  autoconf automake build-essential make cmake \
  graphviz \
  python3-venv \
  python3-dev \
  libpq-dev \
  libyaml-dev \
  libnetcdf-dev \
  libudunits2-dev

Build the Python virtual environment:

pyenv="${HOME}/.envs/odc"  # Change to suit your needs
mkdir -p "${pyenv}"
python3 -m venv "${pyenv}"
source "${pyenv}/bin/activate"
pip install -U pip wheel cython numpy
pip install -e '.[dev]'
pip install flake8 mypy pylint autoflake black
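
A quick smoke test for the new environment (run inside the activated venv): the snippet below only checks that datacube imports and reports its version.

import datacube

# If the environment is set up correctly, this prints the installed version.
print(datacube.__version__)
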
Comments
  • [proposal] Add support for 3D datasets

    There are soil and weather datasets that use a third height/Z dimension for storing data. It would be nice if ODC could optionally support datasets with this dimension.

    Is there interest in adding this behavior to ODC?
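
    To make the request concrete, here is a hypothetical sketch (variable names and coordinates are invented) of the kind of dataset being described, built directly with xarray:

    import numpy as np
    import xarray as xr

    # A hypothetical soil-moisture cube with an extra vertical (z) dimension,
    # in addition to the usual time/y/x dimensions ODC handles today.
    ds = xr.Dataset(
        {"soil_moisture": (("time", "z", "y", "x"), np.random.rand(2, 5, 10, 10))},
        coords={
            "time": np.array(["2020-01-01", "2020-01-02"], dtype="datetime64[ns]"),
            "z": np.linspace(0.0, 2.0, 5),  # depth levels in metres
            "y": np.linspace(-35.0, -34.0, 10),
            "x": np.linspace(148.0, 149.0, 10),
        },
    )
    print(ds)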

    @alfredoahds

    wontfix improvement proposal 
    opened by snowman2 67
  • Dockerfile and third-party libs

    There has been some discussion on Slack related to the Dockerfile and various choices that were made with respect to pre-compiled third-party libs. Since Slack message threads disappear, let's continue this discussion here.

    Some facts about the current system, in no particular order:

    • The Docker image is based on ubuntu:18.04
    • Docker uses ppa:nextgis/ppa to get a more recent libgdal
    • Docker builds the Python bindings for GDAL
    • Docker installs rasterio in binary mode, so rasterio ships its own version of libgdal
    • rasterio also ships its own version of libcurl, compiled on a Red Hat derivative, hence the symlink workaround for dealing with the ca-certificates location
    • Tests use requirements-test.txt to pin dependencies for third-party libs; this is to minimise the false-positive rate where the error is in environment setup and not in our code
    • Tests run on Travis, which uses Ubuntu 16.04 and also uses the nextgis/ppa to get a workable environment.

    @woodcockr reported on Slack that he has issues installing shapely in the default docker environment.

    enhancement discussion 
    opened by Kirill888 36
  • Amazon S3 support

    When running Open Data Cube in the cloud, I would like to have datasets in Amazon S3 buckets without having to store them in my EC2 instance. I have seen that in datacube-core release 1.5.2 the new features

    • Support for AWS S3 array storage
    • Driver Manager support for NetCDF, S3, S3-file drivers

    were added. I have read what little documentation there is on these features, but I am confused. Is there any documentation on what these features are capable of, or any examples of how to use them?

    help wanted improve docs 
    opened by adriaat 30
  • Datacube.load performance for multi band netCDF data

    Expected behaviour

    Something comparable to xarray.open_dataset('file_to_load.nc')

    Actual behaviour

    On the same infrastructure, the current datacube.load(...) call, which loads the same dataset/file, is significantly slower: xarray load time = ~8 ms, datacube load = ~28m.

    Simple comparison

    [screenshot: simple load-time comparison]
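
    The screenshot is not reproduced here; roughly, the comparison described above looks like the sketch below (the product name is a placeholder; the file path is the one from the metadata section further down):

    import time

    import xarray
    import datacube

    # Direct read with xarray.
    t0 = time.time()
    ds = xarray.open_dataset('/data/qtot/qtot_avg_1912.nc')
    print('xarray.open_dataset:', time.time() - t0, 's')

    # The same data through the datacube index ('qtot_avg' is a placeholder product name).
    dc = datacube.Datacube()
    t0 = time.time()
    data = dc.load(product='qtot_avg', time=('1912-01-01', '1912-12-31'))
    print('datacube.load:', time.time() - t0, 's')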

    Steps to reproduce the behaviour

    ... Include code, command line parameters as appropriate ...

    Environment information

    • Which datacube --version are you using? Open Data Cube core, version 1.7

    • What datacube deployment/environment are you running against? CSIRO (@woodcockr) internal deployment

    netCDF metadata

    gdalinfo (output is truncated as there are 366 bands)

    !gdalinfo /data/qtot/qtot_avg_1912.nc
    Warning 1: No UNIDATA NC_GLOBAL:Conventions attribute
    Driver: netCDF/Network Common Data Format
    Files: /data/qtot/qtot_avg_1912.nc
    Size is 841, 681
    Coordinate System is `'
    Origin = (111.974999999999994,-9.975000000000000)
    Pixel Size = (0.050000000000000,-0.050000000000000)
    Metadata:
      latitude#long_name=latitude
      latitude#name=latitude
      latitude#standard_name=latitude
      latitude#units=degrees_north
      longitude#long_name=longitude
      longitude#name=longitude
      longitude#standard_name=longitude
      longitude#units=degrees_east
      NC_GLOBAL#var_name=qtot_avg
      NETCDF_DIM_EXTRA={time}
      NETCDF_DIM_time_DEF={366,4}
      NETCDF_DIM_time_VALUES={4382,4383,4384,4385,4386,4387,4388,4389,4390,4391,4392,4393,4394,4395,4396,4397,4398,4399,4400,4401,4402,4403,4404,4405,4406,4407,4408,4409,4410,4411,4412,4413,4414,4415,4416,4417,4418,4419,4420,4421,4422,4423,4424,4425,4426,4427,4428,4429,4430,4431,4432,4433,4434,4435,4436,4437,4438,4439,4440,4441,4442,4443,4444,4445,4446,4447,4448,4449,4450,4451,4452,4453,4454,4455,4456,4457,4458,4459,4460,4461,4462,4463,4464,4465,4466,4467,4468,4469,4470,4471,4472,4473,4474,4475,4476,4477,4478,4479,4480,4481,4482,4483,4484,4485,4486,4487,4488,4489,4490,4491,4492,4493,4494,4495,4496,4497,4498,4499,4500,4501,4502,4503,4504,4505,4506,4507,4508,4509,4510,4511,4512,4513,4514,4515,4516,4517,4518,4519,4520,4521,4522,4523,4524,4525,4526,4527,4528,4529,4530,4531,4532,4533,4534,4535,4536,4537,4538,4539,4540,4541,4542,4543,4544,4545,4546,4547,4548,4549,4550,4551,4552,4553,4554,4555,4556,4557,4558,4559,4560,4561,4562,4563,4564,4565,4566,4567,4568,4569,4570,4571,4572,4573,4574,4575,4576,4577,4578,4579,4580,4581,4582,4583,4584,4585,4586,4587,4588,4589,4590,4591,4592,4593,4594,4595,4596,4597,4598,4599,4600,4601,4602,4603,4604,4605,4606,4607,4608,4609,4610,4611,4612,4613,4614,4615,4616,4617,4618,4619,4620,4621,4622,4623,4624,4625,4626,4627,4628,4629,4630,4631,4632,4633,4634,4635,4636,4637,4638,4639,4640,4641,4642,4643,4644,4645,4646,4647,4648,4649,4650,4651,4652,4653,4654,4655,4656,4657,4658,4659,4660,4661,4662,4663,4664,4665,4666,4667,4668,4669,4670,4671,4672,4673,4674,4675,4676,4677,4678,4679,4680,4681,4682,4683,4684,4685,4686,4687,4688,4689,4690,4691,4692,4693,4694,4695,4696,4697,4698,4699,4700,4701,4702,4703,4704,4705,4706,4707,4708,4709,4710,4711,4712,4713,4714,4715,4716,4717,4718,4719,4720,4721,4722,4723,4724,4725,4726,4727,4728,4729,4730,4731,4732,4733,4734,4735,4736,4737,4738,4739,4740,4741,4742,4743,4744,4745,4746,4747}
      qtot_avg#long_name=Total runoff: averaged across both HRUs (mm)
      qtot_avg#name=qtot_avg
      qtot_avg#standard_name=qtot_avg
      qtot_avg#units=mm
      qtot_avg#_FillValue=-999
      time#calendar=gregorian
      time#long_name=time
      time#name=time
      time#standard_name=time
      time#units=days since 1900-01-01
    Corner Coordinates:
    Upper Left  ( 111.9750000,  -9.9750000) 
    Lower Left  ( 111.9750000, -44.0250000) 
    Upper Right ( 154.0250000,  -9.9750000) 
    Lower Right ( 154.0250000, -44.0250000) 
    Center      ( 133.0000000, -27.0000000) 
    Band 1 Block=50x1 Type=Float32, ColorInterp=Undefined
      NoData Value=-999
      Unit Type: mm
      Metadata:
        long_name=Total runoff: averaged across both HRUs (mm)
        name=qtot_avg
        NETCDF_DIM_time=4382
        NETCDF_VARNAME=qtot_avg
        standard_name=qtot_avg
        units=mm
        _FillValue=-999
    

    ncdump -h

    netcdf qtot_avg_1912 {
    dimensions:
    	time = UNLIMITED ; // (366 currently)
    	latitude = 681 ;
    	longitude = 841 ;
    variables:
    	int time(time) ;
    		time:name = "time" ;
    		time:long_name = "time" ;
    		time:calendar = "gregorian" ;
    		time:units = "days since 1900-01-01" ;
    		time:standard_name = "time" ;
    	double latitude(latitude) ;
    		latitude:name = "latitude" ;
    		latitude:long_name = "latitude" ;
    		latitude:units = "degrees_north" ;
    		latitude:standard_name = "latitude" ;
    	double longitude(longitude) ;
    		longitude:name = "longitude" ;
    		longitude:long_name = "longitude" ;
    		longitude:units = "degrees_east" ;
    		longitude:standard_name = "longitude" ;
    	float qtot_avg(time, latitude, longitude) ;
    		qtot_avg:_FillValue = -999.f ;
    		qtot_avg:name = "qtot_avg" ;
    		qtot_avg:long_name = "Total runoff: averaged across both HRUs (mm)" ;
    		qtot_avg:units = "mm" ;
    		qtot_avg:standard_name = "qtot_avg" ;
    
    // global attributes:
    		:var_name = "qtot_avg" ;
    }
    
    opened by fre171csiro 23
  • ALOS-2 yaml

    Writing to see if you might be able to help us with a VNDC-related issue.

    RESTEC are having some issues around the choice of data types in the following files, which are used by Vietnam to ingest their ALOS-2 data.

    https://github.com/vndatacube/odc-config-files/blob/master/alos/alos2_tile_productdef.yaml

    https://github.com/vndatacube/odc-config-files/blob/master/alos/alos2_tile_wgs84_50m.yaml

    Okumura-san has confirmed that the data types used here match the data definition. With that said, when the RESTEC team try to run their Python notebook, the dc.load step produces the following error for the incidence angle and mask products:

    TypeError: Cannot cast scalar from dtype('float32') to dtype('uint8') according to the rule 'same_kind'
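
    For context, the same numpy casting rule can be reproduced in isolation; this is only a sketch of the failure mode, not the actual dc.load code path. Writing a float32 nodata value into a uint8 band is not a 'same_kind' cast:

    import numpy as np

    band = np.zeros((2, 2), dtype='uint8')
    # Raises: TypeError: Cannot cast scalar from dtype('float32') to dtype('uint8')
    # according to the rule 'same_kind'
    np.copyto(band, np.float32(-999), casting='same_kind')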

    FYI, they are running Python 3.5.

    They are able to work around the issue. Two workarounds exist: using int8 (issues with negative values, but this can be further worked around) or using int16 (works without errors, but uses more resources).

    To get this notebook running, there are three options that we see:

    1. Implement a workaround, but this is not ideal as it would have to be done in every application and (in the int16 case) uses more resources.
    2. Change the VN Cube yamls and re-ingest all of the VN Cube data using the data types that work. They would, however, prefer not to change these values, as they are consistent with the data definition. They also wish to avoid re-ingesting all the data.
    3. Edit the load() function to manage the data types correctly. They would need assistance from core developers to do this.

    Are you able to advise please? Let me know if you need any more info.

    Many thanks in advance.

    opened by matthewsteventon 22
  • Celery runner

    Overview

    New executor that uses Celery (with Redis as broker and data backend).

    This provides an alternative to the current setup (dask.distributed). The problem with using dask.distributed is that it requires tasks to be idempotent, since it will sometimes schedule the same task in parallel on different nodes. With many tasks doing I/O, this creates problems.

    Celery, in comparison, has a much simpler execution model and doesn't have the same constraints.

    Redis backend

    Celery supports a number of backends; of these, two are fully supported: RabbitMQ and Redis. I have picked Redis as it is the simplest to get running without root access (NCI environment).
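
    As a generic illustration of that pattern (this is not the datacube executor code itself), a Celery app wired to Redis as both broker and result backend looks roughly like:

    from celery import Celery

    # Redis serves as both the message broker and the result backend.
    app = Celery('odc_tasks',
                 broker='redis://localhost:6379/0',
                 backend='redis://localhost:6379/0')

    @app.task
    def process_tile(tile_id):
        # Placeholder task body.
        return tile_id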

    data_task_options

    A celery option is added to the --executor command line option, using the same host:port argument. The Celery executor will connect to a Redis instance at the given address; if the address is localhost and Redis is not running, it will be launched for the duration of the execution. Workers don't get launched, however, so in most cases the app will stall until workers are added to the processing pool (see datacube-worker).

    $HOME/.datacube-redis contains the Redis password; if this file doesn't exist, it will be created with a randomly generated password when launching the Redis server.

    Also adding the executor alias dask as a synonym for distributed. However, now that we have two distributed backends, we should probably favour dask as the name for the dask.distributed backend.

    datacube-worker

    New app datacube-worker was added to support launching workers in either celery or dask mode. It accepts the same --executor option as the task app.

    opened by Kirill888 22
  • Empty NetCDF file was created

    I tried to ingest a granule from Sentinel-2.

    Configuration:

    The file creation completes without problems. But when I read the bands contained in the NetCDF file (reading each band as an array), all values are -999 (due to my nodata configuration).

    I have verified that the file is not corrupt and that the values in the JP2 files aren't empty.

    Thanks

    opened by PaulMousset 22
  • Csiro/s3 driver

    Reason for this pull request

    Improvements to DataCube:

    • S3 storage backend
      • Windows is not supported.
    • driver manager to dynamically load/switch storage drivers (NetCDF, S3, S3-test).
    • Ingest now supports creation of nD Storage Units in the available storage drivers, e.g.:
      • NetCDF: datacube -v ingest -c ls5_nbar_albers.yaml --executor multiproc 8
      • S3: datacube --driver s3 -v ingest -c ls5_nbar_albers_s3.yaml --executor multiproc 8
      • S3-test: datacube --driver s3-test -v ingest -c ls5_nbar_albers_s3_test.yaml --executor multiproc 8
    • load and load_data have optional multi-threaded support via the use_threads flag.

    Improved testing:

    • tests are run for each driver, where possible.
    • tests corner values
    • tests md5 hash equality on load_data
    • tests multiple time slices
    • reduction in data usage
    • reduction in total number of concurrent db connections.

    Proposed changes

    • CLI driver parameter to select the driver; if None, it defaults to NetCDF.
    • support for generating n-dimension storage units on ingest.
      • example ingest yaml: docs/config_samples/ingester/ls5_nbar_nbar_s3.yaml
    • supported drivers:
      • NetCDF: based on existing driver.
      • S3: S3 backend for storage.
      • S3-test: Same as S3 but emulated on disk.
    • datacube.api.load_data - use_threads parameter to enable threaded _fuse_measurement with results stored in a shared memory array.
    • datacube.scripts.ingest - uses slightly modified GridWorkFlow to generate 3D tasks for 3D Storage Unit creation.
    • optional creation of s3 tables via "datacube -v system init -s3"

    Todo:

    • More tests.

    • [ ] Closes #xxxx

    • [ ] Tests added / passed

    • [ ] Fully documented, including docs/about/whats_new.rst for all changes

    opened by petewa 19
  • Trouble Ingesting USGS Landsat and MODIS data

    I am building a datacube on my server and have already installed datacube and initialised the database. How can I index Landsat data like LC80090452014008LGN00 with TIF files? I tried using 'usgslsprepare.py':

    python datacube-core/utils/usgslsprepare.py /home/tensorx/data/datacube/landsat/*/

    but it failed with the following error:

    2017-06-19 15:39:57,274 INFO Processing /home/tensorx/data/datacube/landsat/LC80090452014008LGN00
    Traceback (most recent call last):
      File "datacube-core/utils/usgslsprepare.py", line 265, in <module>
        main()
      File "/home/tensorx/miniconda2/envs/datacube3/lib/python3.5/site-packages/click/core.py", line 722, in __call__
        return self.main(*args, **kwargs)
      File "/home/tensorx/miniconda2/envs/datacube3/lib/python3.5/site-packages/click/core.py", line 697, in main
        rv = self.invoke(ctx)
      File "/home/tensorx/miniconda2/envs/datacube3/lib/python3.5/site-packages/click/core.py", line 895, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/home/tensorx/miniconda2/envs/datacube3/lib/python3.5/site-packages/click/core.py", line 535, in invoke
        return callback(*args, **kwargs)
      File "datacube-core/utils/usgslsprepare.py", line 256, in main
        documents = prepare_datasets(path)
      File "datacube-core/utils/usgslsprepare.py", line 241, in prepare_datasets
        nbar = prep_dataset(fields, nbar_path)
      File "datacube-core/utils/usgslsprepare.py", line 163, in prep_dataset
        with open(os.path.join(str(path), metafile)) as f:
    UnboundLocalError: local variable 'metafile' referenced before assignment
    

    How can I fix this?

    enhancement ingestion/data availability 
    opened by robeson1010 19
  • Better integration with notebook display for core data types

    Summary

    Many datacube users work mostly in notebooks, yet none of our classes take advantage of the rich display capabilities provided by the notebook environment. Here is an example of how easy it is to add a “display on a map” feature to the GeoBox class, leveraging the existing Jupyter ecosystem:

    [screenshot: GeoJSON display of a GeoBox in a notebook]

    The full notebook is here: https://gist.github.com/Kirill888/4ce2f64413e660d1638afa23eede6eb0
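
    As a rough illustration of the pattern (a hypothetical class, not the real datacube GeoBox), the method renders GeoJSON when notebook support is available and falls back to text otherwise:

    class MapDisplayable:
        """Hypothetical object with a GeoJSON footprint and a text fallback."""

        def __init__(self, geojson, text_repr):
            self._geojson = geojson  # GeoJSON-like dict describing the footprint
            self._text_repr = text_repr

        def _ipython_display_(self):
            try:
                from IPython.display import GeoJSON, display
                display(GeoJSON(self._geojson))
            except ImportError:
                print(self._text_repr)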

    Proposal

    1. Implement _ipython_display_ methods on important objects like GeoBox, Dataset
      • Take advantage of the GeoJSON module when available, falling back to a textual representation otherwise
    2. Update documentation/example notebooks with instructions on how to best take advantage of the rich display ecosystem available inside the Jupyter environment
    3. Update the various Dockerfiles we have sitting around to include the GeoJSON nbextension and possibly others
    wontfix 
    opened by Kirill888 18
  • 'DatasetType' object has no attribute '_all_measurements' with dask

    Hey all, I am receiving an error when using dask to perform some computations. I'm not entirely sure whether this is an ODC/Dask/xarray issue, though. Any help would be appreciated. The code example below was extracted from a Jupyter notebook.

    Expected behaviour

    Computed data and plot

    Actual behaviour

    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    <ipython-input-15-37a4078ad80d> in <module>
    ----> 1 resampled.compute().plot(size=10)
    
    /opt/conda/envs/datacube/lib/python3.6/site-packages/xarray/core/dataarray.py in compute(self, **kwargs)
        832         """
        833         new = self.copy(deep=False)
    --> 834         return new.load(**kwargs)
        835 
        836     def persist(self, **kwargs) -> "DataArray":
    
    /opt/conda/envs/datacube/lib/python3.6/site-packages/xarray/core/dataarray.py in load(self, **kwargs)
        806         dask.array.compute
        807         """
    --> 808         ds = self._to_temp_dataset().load(**kwargs)
        809         new = self._from_temp_dataset(ds)
        810         self._variable = new._variable
    
    /opt/conda/envs/datacube/lib/python3.6/site-packages/xarray/core/dataset.py in load(self, **kwargs)
        652 
        653             # evaluate all the dask arrays simultaneously
    --> 654             evaluated_data = da.compute(*lazy_data.values(), **kwargs)
        655 
        656             for k, data in zip(lazy_data, evaluated_data):
    
    /opt/conda/envs/datacube/lib/python3.6/site-packages/dask/base.py in compute(*args, **kwargs)
        450         postcomputes.append(x.__dask_postcompute__())
        451 
    --> 452     results = schedule(dsk, keys, **kwargs)
        453     return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
        454 
    
    /opt/conda/envs/datacube/lib/python3.6/site-packages/distributed/client.py in get(self, dsk, keys, restrictions, loose_restrictions, resources, sync, asynchronous, direct, retries, priority, fifo_timeout, actors, **kwargs)
       2712                     should_rejoin = False
       2713             try:
    -> 2714                 results = self.gather(packed, asynchronous=asynchronous, direct=direct)
       2715             finally:
       2716                 for f in futures.values():
    
    /opt/conda/envs/datacube/lib/python3.6/site-packages/distributed/client.py in gather(self, futures, errors, direct, asynchronous)
       1991                 direct=direct,
       1992                 local_worker=local_worker,
    -> 1993                 asynchronous=asynchronous,
       1994             )
       1995 
    
    /opt/conda/envs/datacube/lib/python3.6/site-packages/distributed/client.py in sync(self, func, asynchronous, callback_timeout, *args, **kwargs)
        832         else:
        833             return sync(
    --> 834                 self.loop, func, *args, callback_timeout=callback_timeout, **kwargs
        835             )
        836 
    
    /opt/conda/envs/datacube/lib/python3.6/site-packages/distributed/utils.py in sync(loop, func, callback_timeout, *args, **kwargs)
        337     if error[0]:
        338         typ, exc, tb = error[0]
    --> 339         raise exc.with_traceback(tb)
        340     else:
        341         return result[0]
    
    /opt/conda/envs/datacube/lib/python3.6/site-packages/distributed/utils.py in f()
        321             if callback_timeout is not None:
        322                 future = asyncio.wait_for(future, callback_timeout)
    --> 323             result[0] = yield future
        324         except Exception as exc:
        325             error[0] = sys.exc_info()
    
    /opt/conda/envs/datacube/lib/python3.6/site-packages/tornado/gen.py in run(self)
        733 
        734                     try:
    --> 735                         value = future.result()
        736                     except Exception:
        737                         exc_info = sys.exc_info()
    
    /opt/conda/envs/datacube/lib/python3.6/site-packages/distributed/client.py in _gather(self, futures, errors, direct, local_worker)
       1850                             exc = CancelledError(key)
       1851                         else:
    -> 1852                             raise exception.with_traceback(traceback)
       1853                         raise exc
       1854                     if errors == "skip":
    
    /home/ubuntu/.local/lib/python3.6/site-packages/datacube/api/core.py in fuse_lazy()
    
    /home/ubuntu/.local/lib/python3.6/site-packages/datacube/api/core.py in _fuse_measurement()
    
    /home/ubuntu/.local/lib/python3.6/site-packages/datacube/storage/_base.py in __init__()
    
    /home/ubuntu/.local/lib/python3.6/site-packages/datacube/model/__init__.py in lookup_measurements()
    
    /home/ubuntu/.local/lib/python3.6/site-packages/datacube/model/__init__.py in _resolve_aliases()
    
    AttributeError: 'DatasetType' object has no attribute '_all_measurements'
    

    Steps to reproduce the behaviour

    from dask.distributed import Client
    import datacube
    import xarray
    client = Client('cluster_address')
    dc = datacube.Datacube()
    query = {
        'lat': (48.15, 48.35),
        'lon': (16.3, 16.5),
        'time': ('2017-01-01', '2020-12-31')
    }
    data = dc.load(product='product', 
                   output_crs='EPSG:32633', 
                   resolution=(-10,10),
                   dask_chunks={'x': 250, 'y': 250, 'time':20},
                    **query)
    arr = data.band_1
    resampled = arr.resample(time='1w').mean().mean(axis=(1,2))
    resampled.compute().plot(size=10)
    

    Environment information

    There are some slight mismatches in the versions; I hope these aren't the issue:

    /opt/conda/envs/datacube/lib/python3.6/site-packages/distributed/client.py:1130: VersionMismatchWarning: Mismatched versions found
    
    +-------------+----------------+---------------+---------------+
    | Package     | client         | scheduler     | workers       |
    +-------------+----------------+---------------+---------------+
    | dask        | 2.27.0         | 2.27.0        | 2.28.0        |
    | distributed | 2.27.0         | 2.27.0        | 2.28.0        |
    | numpy       | 1.19.1         | 1.19.2        | 1.19.2        |
    | python      | 3.6.11.final.0 | 3.6.9.final.0 | 3.6.9.final.0 |
    | toolz       | 0.11.1         | 0.10.0        | 0.11.1        |
    +-------------+----------------+---------------+---------------+
      warnings.warn(version_module.VersionMismatchWarning(msg[0]["warning"]))
    
    • Which datacube --version are you using? 1.8.3
    • What datacube deployment/environment are you running against? JupyterHub running in Docker
    opened by jankovicgd 17
  • ShapelyDeprecationWarning: The 'type' attribute is deprecated -> use the 'geom_type' attribute instead.

    datacube/utils/geometry/_base.py:623: ShapelyDeprecationWarning: The 'type' attribute is deprecated, and will be removed in the future. You can use the 'geom_type' attribute instead.
        if geom.type in ['Point', 'MultiPoint']:
    
    datacube/utils/geometry/_base.py:626: ShapelyDeprecationWarning: The 'type' attribute is deprecated, and will be removed in the future. You can use the 'geom_type' attribute instead.
        if geom.type in ['GeometryCollection', 'MultiPolygon', 'MultiLineString']:
    
    datacube/utils/geometry/_base.py:629: ShapelyDeprecationWarning: The 'type' attribute is deprecated, and will be removed in the future. You can use the 'geom_type' attribute instead.
        if geom.type in ['LineString', 'LinearRing']:
    
    datacube/utils/geometry/_base.py:632: ShapelyDeprecationWarning: The 'type' attribute is deprecated, and will be removed in the future. You can use the 'geom_type' attribute instead.
        if geom.type == 'Polygon':
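
    The substitution the warning asks for is mechanical; on a standalone shapely geometry it is simply:

    from shapely.geometry import Point

    geom = Point(0, 0)
    # Use .geom_type instead of the deprecated .type attribute.
    if geom.geom_type in ['Point', 'MultiPoint']:
        print('point-like geometry')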
    
    opened by snowman2 0
  • Dataset loading by geometry is not working

    Expected behaviour

    Load available dataset with query

    Actual behaviour

    Loads 0 datasets

    Steps to reproduce the behaviour

    datasets = dc.find_datasets(product=SOURCE_PRODUCT,
                                    time=(start_date.strftime("%Y-%m-%d"),
                                          end_date.strftime("%Y-%m-%d")),
                                    )
    datasets2 = dc.find_datasets(product=SOURCE_PRODUCT,
                                    time=(start_date.strftime("%Y-%m-%d"),
                                          end_date.strftime("%Y-%m-%d")),
                                    geopolygon=datasets[0].extent,
                                    )
    

    datasets2 is empty

    Environment information

    • Which datacube --version are you using?
    • What datacube deployment/environment are you running against?

    datacube==1.8.9 Open Data Cube core, version 1.8.9

    Note: Stale issues will be automatically closed after a period of six months with no activity. To ensure critical issues are not closed, tag them with the Github pinned tag. If you are a community member and not a maintainer please escalate this issue to maintainers via GIS StackExchange or Slack.

    opened by uotamendi 1
  • Various issues when trying to install onto Ubuntu

    Expected behaviour

    Fewer errors.

    Actual behaviour

    [email protected] ~/proj/datacube-core % datacube system check 
    Version:       1.8.8
    Config files:  /home/dap/.datacube.conf
    Host:          localhost:5432
    Database:      datacube
    User:          dap
    Environment:   None
    Index Driver:  default
    
    Valid connection:	2022-12-10 23:24:51,360 510586 datacube.drivers.driver_cache WARNING Failed to resolve driver datacube.plugins.index::default
    2022-12-10 23:24:51,360 510586 datacube.drivers.driver_cache WARNING Error was: VersionConflict(rasterio 1.2.10 (/home/dap/anaconda3/envs/odc_3.8/lib/python3.8/site-packages), Requirement.parse('rasterio>=1.3.2'))
    2022-12-10 23:24:51,361 510586 datacube.drivers.driver_cache WARNING Failed to resolve driver datacube.plugins.index::memory
    2022-12-10 23:24:51,361 510586 datacube.drivers.driver_cache WARNING Error was: VersionConflict(rasterio 1.2.10 (/home/dap/anaconda3/envs/odc_3.8/lib/python3.8/site-packages), Requirement.parse('rasterio>=1.3.2'))
    2022-12-10 23:24:51,361 510586 datacube.drivers.driver_cache WARNING Failed to resolve driver datacube.plugins.index::null
    2022-12-10 23:24:51,361 510586 datacube.drivers.driver_cache WARNING Error was: VersionConflict(rasterio 1.2.10 (/home/dap/anaconda3/envs/odc_3.8/lib/python3.8/site-packages), Requirement.parse('rasterio>=1.3.2'))
    2022-12-10 23:24:51,361 510586 datacube.drivers.driver_cache WARNING Failed to resolve driver datacube.plugins.index::postgis
    2022-12-10 23:24:51,361 510586 datacube.drivers.driver_cache WARNING Error was: VersionConflict(rasterio 1.2.10 (/home/dap/anaconda3/envs/odc_3.8/lib/python3.8/site-packages), Requirement.parse('rasterio>=1.3.2'))
    

    Steps to reproduce the behaviour

    Installed using https://datacube-core.readthedocs.io/en/latest/installation/setup/ubuntu.html. Ran into lots of errors, but my setup script is now:

    set -x
    PYTHON=${1-3.8}
    ENV=odc_${PYTHON}
    conda env remove -n ${ENV} --yes
    conda config --append channels conda-forge
    conda update -n base --yes -c defaults conda
    conda create --name ${ENV} --yes python=${PYTHON} datacube
    conda install -n ${ENV} --yes pycodestyle
    conda install -n ${ENV} --yes pylint
    conda install -n ${ENV} --yes jupyter matplotlib scipy pytest-cov hypothesis
    conda install -n ${ENV} --yes geoalchemy2 moto 
    cat ~/.datacube_integration.conf
    cd ~/proj/datacube-core
    conda run -n ${ENV} ./check-code.sh integration_tests
    

    The integration tests fail with many errors, so I decided to try to initialise the database following https://datacube-core.readthedocs.io/en/latest/installation/database/setup.html

    I created a database:

    [email protected]:~$ createdb datacube
    

    I created another config file:

    [email protected] ~/proj/datacube-core % cat ~/.datacube.conf 
    [datacube]
    # One config file may contain multiple named sections providing multiple configuration environments.
    # The section named "datacube" (or "default") is used if no environment is specified.
    
    # index_driver is optional and defaults to "default" (the default Postgres index driver)
    index_driver: default
    
    # The remaining configuration entries are for the default Postgres index driver and
    # may not apply to other index drivers.
    db_database: datacube
    
    # A blank host will use a local socket. Specify a hostname (such as localhost) to use TCP.
    db_hostname: localhost
    
    # Credentials are optional: you might have other Postgres authentication configured.
    # The default username otherwise is the current user id.
    db_username: dap
    db_password: postgres4me
    
    [test]
    # A "test" environment that accesses a separate test database.
    index_driver: default
    db_database: datacube_test
    
    [null]
    # A "null" environment for working with no index.
    index_driver: null
    
    [local_memory]
    # A local non-persistent in-memory index.
    #   Compatible with the default index driver, but resides purely in memory with no persistent database.
    #   Note that each new invocation will receive a new, empty index.
    index_driver: memory
    [email protected] ~/proj/datacube-core % 
    

    I then tried to init it:

    [email protected] ~/proj/datacube-core % datacube -v system init
    2022-12-10 23:34:01,711 512304 datacube INFO Running datacube command: /home/dap/anaconda3/envs/odc_3.8/bin/datacube -v system init
    2022-12-10 23:34:01,795 512304 datacube.drivers.driver_cache WARNING Failed to resolve driver datacube.plugins.index::default
    2022-12-10 23:34:01,795 512304 datacube.drivers.driver_cache WARNING Error was: VersionConflict(rasterio 1.2.10 (/home/dap/anaconda3/envs/odc_3.8/lib/python3.8/site-packages), Requirement.parse('rasterio>=1.3.2'))
    2022-12-10 23:34:01,795 512304 datacube.drivers.driver_cache WARNING Failed to resolve driver datacube.plugins.index::memory
    2022-12-10 23:34:01,795 512304 datacube.drivers.driver_cache WARNING Error was: VersionConflict(rasterio 1.2.10 (/home/dap/anaconda3/envs/odc_3.8/lib/python3.8/site-packages), Requirement.parse('rasterio>=1.3.2'))
    2022-12-10 23:34:01,796 512304 datacube.drivers.driver_cache WARNING Failed to resolve driver datacube.plugins.index::null
    2022-12-10 23:34:01,796 512304 datacube.drivers.driver_cache WARNING Error was: VersionConflict(rasterio 1.2.10 (/home/dap/anaconda3/envs/odc_3.8/lib/python3.8/site-packages), Requirement.parse('rasterio>=1.3.2'))
    2022-12-10 23:34:01,796 512304 datacube.drivers.driver_cache WARNING Failed to resolve driver datacube.plugins.index::postgis
    2022-12-10 23:34:01,796 512304 datacube.drivers.driver_cache WARNING Error was: VersionConflict(rasterio 1.2.10 (/home/dap/anaconda3/envs/odc_3.8/lib/python3.8/site-packages), Requirement.parse('rasterio>=1.3.2'))
    Initialising database...
    2022-12-10 23:34:01,808 512304 datacube.drivers.postgres._core INFO Ensuring user roles.
    2022-12-10 23:34:01,811 512304 datacube.drivers.postgres._core INFO Adding role grants.
    2022-12-10 23:34:01,813 512304 datacube.drivers.postgres._core INFO No schema updates required.
    Updated.
    Checking indexes/views.
    2022-12-10 23:34:01,813 512304 datacube.drivers.postgres._api INFO Checking dynamic views/indexes. (rebuild views=True, indexes=False)
    Done.
    [email protected] ~/proj/datacube-core % 
    

    Due to the verbiage spewed as a result of this, I tried to check:

    [email protected] ~/proj/datacube-core % datacube system check 
    Version:       1.8.8
    Config files:  /home/dap/.datacube.conf
    Host:          localhost:5432
    Database:      datacube
    User:          dap
    Environment:   None
    Index Driver:  default
    
    Valid connection:	2022-12-10 23:35:49,773 512681 datacube.drivers.driver_cache WARNING Failed to resolve driver datacube.plugins.index::default
    2022-12-10 23:35:49,773 512681 datacube.drivers.driver_cache WARNING Error was: VersionConflict(rasterio 1.2.10 (/home/dap/anaconda3/envs/odc_3.8/lib/python3.8/site-packages), Requirement.parse('rasterio>=1.3.2'))
    2022-12-10 23:35:49,774 512681 datacube.drivers.driver_cache WARNING Failed to resolve driver datacube.plugins.index::memory
    2022-12-10 23:35:49,774 512681 datacube.drivers.driver_cache WARNING Error was: VersionConflict(rasterio 1.2.10 (/home/dap/anaconda3/envs/odc_3.8/lib/python3.8/site-packages), Requirement.parse('rasterio>=1.3.2'))
    2022-12-10 23:35:49,774 512681 datacube.drivers.driver_cache WARNING Failed to resolve driver datacube.plugins.index::null
    2022-12-10 23:35:49,774 512681 datacube.drivers.driver_cache WARNING Error was: VersionConflict(rasterio 1.2.10 (/home/dap/anaconda3/envs/odc_3.8/lib/python3.8/site-packages), Requirement.parse('rasterio>=1.3.2'))
    2022-12-10 23:35:49,774 512681 datacube.drivers.driver_cache WARNING Failed to resolve driver datacube.plugins.index::postgis
    2022-12-10 23:35:49,774 512681 datacube.drivers.driver_cache WARNING Error was: VersionConflict(rasterio 1.2.10 (/home/dap/anaconda3/envs/odc_3.8/lib/python3.8/site-packages), Requirement.parse('rasterio>=1.3.2'))
    YES
    [email protected] ~/proj/datacube-core % 
    

    Is this a success? It does say 'YES' at the end, but I am a bit concerned, mainly by all the self-test errors. Below is a small extract of the output emitted by running the ./check-code.sh integration_tests step.

    ---------- coverage: platform linux, python 3.8.15-final-0 -----------
    Name                                         Stmts   Miss  Cover
    ----------------------------------------------------------------
    datacube/__init__.py                             8      0   100%
    datacube/__main__.py                             0      0   100%
    datacube/api/__init__.py                         4      0   100%
    datacube/api/core.py                           384    107    72%
    datacube/api/grid_workflow.py                  137     10    93%
    datacube/api/query.py                          213     18    92%
    datacube/config.py                             126      3    98%
    datacube/drivers/__init__.py                     5      0   100%
    datacube/drivers/_tools.py                      14      0   100%
    datacube/drivers/_types.py                      46      0   100%
    datacube/drivers/datasource.py                  30      0   100%
    datacube/drivers/driver_cache.py                29     10    66%
    datacube/drivers/indexes.py                     24      0   100%
    datacube/drivers/netcdf/__init__.py              4      0   100%
    datacube/drivers/netcdf/_safestrings.py         41      2    95%
    datacube/drivers/netcdf/_write.py               55      0   100%
    datacube/drivers/netcdf/driver.py               36     11    69%
    datacube/drivers/netcdf/writer.py              168     14    92%
    datacube/drivers/postgis/__init__.py             4      0   100%
    datacube/drivers/postgis/_api.py               396    288    27%
    datacube/drivers/postgis/_connections.py       132     56    58%
    datacube/drivers/postgis/_core.py               99     65    34%
    datacube/drivers/postgis/_fields.py            266    134    50%
    datacube/drivers/postgis/_schema.py            107      1    99%
    datacube/drivers/postgis/_spatial.py            88     58    34%
    datacube/drivers/postgis/sql.py                 55     12    78%
    datacube/drivers/postgres/__init__.py            4      0   100%
    datacube/drivers/postgres/_api.py              302     10    97%
    datacube/drivers/postgres/_connections.py      105     12    89%
    datacube/drivers/postgres/_core.py             107      8    93%
    datacube/drivers/postgres/_dynamic.py           64      7    89%
    datacube/drivers/postgres/_fields.py           268     25    91%
    datacube/drivers/postgres/_schema.py            14      0   100%
    datacube/drivers/postgres/sql.py                55      1    98%
    datacube/drivers/readers.py                     41      7    83%
    datacube/drivers/rio/__init__.py                 1      0   100%
    datacube/drivers/rio/_reader.py                134      0   100%
    datacube/drivers/writers.py                     20      3    85%
    datacube/execution/__init__.py                   0      0   100%
    datacube/execution/worker.py                    30     20    33%
    datacube/executor.py                           169     59    65%
    datacube/helpers.py                             19     14    26%
    datacube/index/__init__.py                       6      0   100%
    datacube/index/_api.py                          14      1    93%
    datacube/index/abstract.py                     378     64    83%
    datacube/index/eo3.py                          101      1    99%
    datacube/index/exceptions.py                    10      0   100%
    datacube/index/fields.py                        31      1    97%
    datacube/index/hl.py                           158      1    99%
    datacube/index/memory/__init__.py                1      0   100%
    datacube/index/memory/_datasets.py             475    409    14%
    datacube/index/memory/_fields.py                11      5    55%
    datacube/index/memory/_metadata_types.py        69     46    33%
    datacube/index/memory/_products.py             107     85    21%
    datacube/index/memory/_users.py                 38     27    29%
    datacube/index/memory/index.py                  67     25    63%
    datacube/index/null/__init__.py                  1      0   100%
    datacube/index/null/_datasets.py                63     24    62%
    datacube/index/null/_metadata_types.py          16      4    75%
    datacube/index/null/_products.py                23      7    70%
    datacube/index/null/_users.py                   10      2    80%
    datacube/index/null/index.py                    62     22    65%
    datacube/index/postgis/__init__.py               0      0   100%
    datacube/index/postgis/_datasets.py            377    310    18%
    datacube/index/postgis/_metadata_types.py       82     58    29%
    datacube/index/postgis/_products.py            126    100    21%
    datacube/index/postgis/_transaction.py          27     12    56%
    datacube/index/postgis/_users.py                21     11    48%
    datacube/index/postgis/index.py                103     52    50%
    datacube/index/postgres/__init__.py              0      0   100%
    datacube/index/postgres/_datasets.py           366     20    95%
    datacube/index/postgres/_metadata_types.py      82      3    96%
    datacube/index/postgres/_products.py           123      6    95%
    datacube/index/postgres/_transaction.py         27      0   100%
    datacube/index/postgres/_users.py               21      0   100%
    datacube/index/postgres/index.py                90      1    99%
    datacube/model/__init__.py                     515     38    93%
    datacube/model/_base.py                          6      0   100%
    datacube/model/fields.py                        83      4    95%
    datacube/model/utils.py                        164     34    79%
    datacube/scripts/__init__.py                     0      0   100%
    datacube/scripts/cli_app.py                      8      0   100%
    datacube/scripts/dataset.py                    358     38    89%
    datacube/scripts/ingest.py                     265    188    29%
    datacube/scripts/metadata.py                    95     20    79%
    datacube/scripts/product.py                    131     24    82%
    datacube/scripts/search_tool.py                 75      2    97%
    datacube/scripts/system.py                      54      6    89%
    datacube/scripts/user.py                        62      7    89%
    datacube/storage/__init__.py                     5      0   100%
    datacube/storage/_base.py                       56      0   100%
    datacube/storage/_hdf5.py                        2      0   100%
    datacube/storage/_load.py                       86      0   100%
    datacube/storage/_read.py                      127      3    98%
    datacube/storage/_rio.py                       143     15    90%
    datacube/storage/masking.py                      3      0   100%
    datacube/testutils/__init__.py                 208      4    98%
    datacube/testutils/geom.py                      66      0   100%
    datacube/testutils/io.py                       204      7    97%
    datacube/testutils/iodriver.py                  31      0   100%
    datacube/testutils/threads.py                   15      0   100%
    datacube/ui/__init__.py                          5      0   100%
    datacube/ui/click.py                           163     26    84%
    datacube/ui/common.py                           52      1    98%
    datacube/ui/expression.py                       46     10    78%
    datacube/ui/task_app.py                        159     30    81%
    datacube/utils/__init__.py                      10      0   100%
    datacube/utils/_misc.py                          7      0   100%
    datacube/utils/aws/__init__.py                 180      0   100%
    datacube/utils/changes.py                       75      1    99%
    datacube/utils/cog.py                          104      0   100%
    datacube/utils/dask.py                          93      0   100%
    datacube/utils/dates.py                         69      8    88%
    datacube/utils/documents.py                    280      5    98%
    datacube/utils/generic.py                       39      0   100%
    datacube/utils/geometry/__init__.py              5      0   100%
    datacube/utils/geometry/_base.py               765     13    98%
    datacube/utils/geometry/_warp.py                47      0   100%
    datacube/utils/geometry/gbox.py                109      1    99%
    datacube/utils/geometry/tools.py               269      0   100%
    datacube/utils/io.py                            31      1    97%
    datacube/utils/masking.py                      118      0   100%
    datacube/utils/math.py                         116      0   100%
    datacube/utils/py.py                            33      0   100%
    datacube/utils/rio/__init__.py                   3      0   100%
    datacube/utils/rio/_rio.py                      65      0   100%
    datacube/utils/serialise.py                     44      0   100%
    datacube/utils/uris.py                         108      5    95%
    datacube/utils/xarray_geoextensions.py         102      0   100%
    datacube/virtual/__init__.py                    89     12    87%
    datacube/virtual/catalog.py                     37     11    70%
    datacube/virtual/expr.py                        47      3    94%
    datacube/virtual/impl.py                       449     72    84%
    datacube/virtual/transformations.py            213     41    81%
    datacube/virtual/utils.py                       31      4    87%
    ----------------------------------------------------------------
    TOTAL                                        13615   2886    79%
    
    ======================================================= slowest 5 durations ========================================================
    12.37s call     integration_tests/test_config_tool.py::test_add_example_dataset_types[datacube-US/Pacific]
    4.16s call     tests/test_utils_aws.py::test_s3_basics
    2.05s call     tests/test_concurrent_executor.py::test_concurrent_executor
    1.50s call     tests/test_utils_dask.py::test_pmap
    1.24s call     tests/test_utils_dask.py::test_compute_tasks
    ===================================================== short test summary info ======================================================
    SKIPPED [2] integration_tests/test_3d.py:26: could not import 'dcio_example.xarray_3d': No module named 'dcio_example'
    SKIPPED [1] ../../anaconda3/envs/odc_3.8/lib/python3.8/site-packages/_pytest/doctest.py:452: all tests skipped by +SKIP option
    XFAIL tests/test_geometry.py::test_lonalt_bounds_more_than_180
      Bounds computation for large geometries in safe mode is broken
    XFAIL tests/test_utils_docs.py::test_merge_with_nan
      Merging dictionaries with content of NaN doesn't work currently
    ERROR tests/test_utils_docs.py::test_read_docs_from_http
    ERROR tests/ui/test_common.py::test_ui_path_doc_stream
    ERROR integration_tests/test_cli_output.py::test_cli_product_subcommand[experimental-US/Pacific] - sqlalchemy.exc.OperationalErro...
    ERROR integration_tests/test_cli_output.py::test_cli_product_subcommand[experimental-UTC] - sqlalchemy.exc.OperationalError: (psy...
    ERROR integration_tests/test_cli_output.py::test_cli_metadata_subcommand[experimental-US/Pacific] - sqlalchemy.exc.OperationalErr...
    ERROR integration_tests/test_cli_output.py::test_cli_metadata_subcommand[experimental-UTC] - sqlalchemy.exc.OperationalError: (ps...
    ERROR integration_tests/test_cli_output.py::test_cli_dataset_subcommand[experimental-US/Pacific] - sqlalchemy.exc.OperationalErro...
    ERROR integration_tests/test_cli_output.py::test_cli_dataset_subcommand[experimental-UTC] - sqlalchemy.exc.OperationalError: (psy...
    ERROR integration_tests/test_cli_output.py::test_readd_and_update_metadata_product_dataset_command[experimental-US/Pacific] - sql...
    ERROR integration_tests/test_cli_output.py::test_readd_and_update_metadata_product_dataset_command[experimental-UTC] - sqlalchemy...
    ERROR integration_tests/test_config_tool.py::test_add_example_dataset_types[experimental-US/Pacific] - sqlalchemy.exc.Operational...
    ERROR integration_tests/test_config_tool.py::test_add_example_dataset_types[experimental-UTC] - sqlalchemy.exc.OperationalError: ...
    ERROR integration_tests/test_config_tool.py::test_error_returned_on_invalid[experimental-US/Pacific] - sqlalchemy.exc.Operational...
    ERROR integration_tests/test_config_tool.py::test_error_returned_on_invalid[experimental-UTC] - sqlalchemy.exc.OperationalError: ...
    ERROR integration_tests/test_config_tool.py::test_config_check[experimental-US/Pacific] - sqlalchemy.exc.OperationalError: (psyco...
    ERROR integration_tests/test_config_tool.py::test_config_check[experimental-UTC] - sqlalchemy.exc.OperationalError: (psycopg2.Ope...
    ERROR integration_tests/test_config_tool.py::test_list_users_does_not_fail[experimental-US/Pacific] - sqlalchemy.exc.OperationalE...
    ERROR integration_tests/test_config_tool.py::test_list_users_does_not_fail[experimental-UTC] - sqlalchemy.exc.OperationalError: (...
    ERROR integration_tests/test_config_tool.py::test_db_init_noop[experimental-US/Pacific] - sqlalchemy.exc.OperationalError: (psyco...
    ERROR integration_tests/test_config_tool.py::test_db_init_noop[experimental-UTC] - sqlalchemy.exc.OperationalError: (psycopg2.Ope...
    ERROR integration_tests/test_config_tool.py::test_db_init[experimental-US/Pacific] - sqlalchemy.exc.OperationalError: (psycopg2.O...
    ERROR integration_tests/test_config_tool.py::test_db_init[experimental-UTC] - sqlalchemy.exc.OperationalError: (psycopg2.Operatio...
    ERROR integration_tests/test_config_tool.py::test_add_no_such_product[experimental-US/Pacific] - sqlalchemy.exc.OperationalError:...
    ERROR integration_tests/test_config_tool.py::test_add_no_such_product[experimental-UTC] - sqlalchemy.exc.OperationalError: (psyco...
    ERROR integration_tests/test_config_tool.py::test_user_creation[experimental-example_user0-US/Pacific] - sqlalchemy.exc.Operation...
    ERROR integration_tests/test_config_tool.py::test_user_creation[experimental-example_user0-UTC] - sqlalchemy.exc.OperationalError...
    ERROR integration_tests/test_config_tool.py::test_user_creation[experimental-example_user1-US/Pacific] - sqlalchemy.exc.Operation...
    ERROR integration_tests/test_config_tool.py::test_user_creation[experimental-example_user1-UTC] - sqlalchemy.exc.OperationalError...
    ERROR integration_tests/test_config_tool.py::test_user_creation[experimental-example_user2-US/Pacific] - sqlalchemy.exc.Operation...
    ERROR integration_tests/test_config_tool.py::test_user_creation[experimental-example_user2-UTC] - sqlalchemy.exc.OperationalError...
    ERROR integration_tests/test_config_tool.py::test_user_creation[experimental-example_user3-US/Pacific] - sqlalchemy.exc.Operation...
    ERROR integration_tests/test_config_tool.py::test_user_creation[experimental-example_user3-UTC] - sqlalchemy.exc.OperationalError...
    ERROR integration_tests/test_dataset_add.py::test_dataset_add_http[US/Pacific-datacube]
    ERROR integration_tests/test_dataset_add.py::test_dataset_add_http[UTC-datacube]
    ERROR integration_tests/test_model.py::test_crs_parse[experimental-US/Pacific] - sqlalchemy.exc.OperationalError: (psycopg2.Opera...
    ERROR integration_tests/test_model.py::test_crs_parse[experimental-UTC] - sqlalchemy.exc.OperationalError: (psycopg2.OperationalE...
    ERROR integration_tests/test_validate_ingestion.py::test_invalid_ingestor_config[experimental-US/Pacific] - sqlalchemy.exc.Operat...
    ERROR integration_tests/test_validate_ingestion.py::test_invalid_ingestor_config[experimental-UTC] - sqlalchemy.exc.OperationalEr...
    ERROR integration_tests/index/test_config_docs.py::test_idempotent_add_dataset_type[experimental-US/Pacific] - sqlalchemy.exc.Ope...
    ERROR integration_tests/index/test_config_docs.py::test_idempotent_add_dataset_type[experimental-UTC] - sqlalchemy.exc.Operationa...
    ERROR integration_tests/index/test_config_docs.py::test_update_dataset[experimental-US/Pacific] - sqlalchemy.exc.OperationalError...
    ERROR integration_tests/index/test_config_docs.py::test_update_dataset[experimental-UTC] - sqlalchemy.exc.OperationalError: (psyc...
    ERROR integration_tests/index/test_config_docs.py::test_product_update_cli[experimental-US/Pacific] - sqlalchemy.exc.OperationalE...
    ERROR integration_tests/index/test_config_docs.py::test_product_update_cli[experimental-UTC] - sqlalchemy.exc.OperationalError: (...
    ERROR integration_tests/index/test_config_docs.py::test_update_metadata_type[experimental-US/Pacific] - sqlalchemy.exc.Operationa...
    ERROR integration_tests/index/test_config_docs.py::test_update_metadata_type[experimental-UTC] - sqlalchemy.exc.OperationalError:...
    ERROR integration_tests/index/test_config_docs.py::test_filter_types_by_fields[experimental-US/Pacific] - sqlalchemy.exc.Operatio...
    ERROR integration_tests/index/test_config_docs.py::test_filter_types_by_fields[experimental-UTC] - sqlalchemy.exc.OperationalErro...
    ERROR integration_tests/index/test_config_docs.py::test_filter_types_by_search[experimental-US/Pacific] - sqlalchemy.exc.Operatio...
    ERROR integration_tests/index/test_config_docs.py::test_filter_types_by_search[experimental-UTC] - sqlalchemy.exc.OperationalErro...
    ERROR integration_tests/index/test_index_data.py::test_archive_datasets[experimental-US/Pacific] - sqlalchemy.exc.OperationalErro...
    ERROR integration_tests/index/test_index_data.py::test_archive_datasets[experimental-UTC] - sqlalchemy.exc.OperationalError: (psy...
    ERROR integration_tests/index/test_index_data.py::test_purge_datasets[experimental-US/Pacific] - sqlalchemy.exc.OperationalError:...
    ERROR integration_tests/index/test_index_data.py::test_purge_datasets[experimental-UTC] - sqlalchemy.exc.OperationalError: (psyco...
    ERROR integration_tests/index/test_index_data.py::test_purge_datasets_cli[experimental-US/Pacific] - sqlalchemy.exc.OperationalEr...
    ERROR integration_tests/index/test_index_data.py::test_purge_datasets_cli[experimental-UTC] - sqlalchemy.exc.OperationalError: (p...
    ERROR integration_tests/index/test_index_data.py::test_purge_all_datasets_cli[experimental-US/Pacific] - sqlalchemy.exc.Operation...
    ERROR integration_tests/index/test_index_data.py::test_purge_all_datasets_cli[experimental-UTC] - sqlalchemy.exc.OperationalError...
    ERROR integration_tests/index/test_index_data.py::test_index_duplicate_dataset[experimental-US/Pacific] - sqlalchemy.exc.Operatio...
    ERROR integration_tests/index/test_index_data.py::test_index_duplicate_dataset[experimental-UTC] - sqlalchemy.exc.OperationalErro...
    ERROR integration_tests/index/test_index_data.py::test_has_dataset[experimental-US/Pacific] - sqlalchemy.exc.OperationalError: (p...
    ERROR integration_tests/index/test_index_data.py::test_has_dataset[experimental-UTC] - sqlalchemy.exc.OperationalError: (psycopg2...
    ERROR integration_tests/index/test_index_data.py::test_get_dataset[experimental-US/Pacific] - sqlalchemy.exc.OperationalError: (p...
    ERROR integration_tests/index/test_index_data.py::test_get_dataset[experimental-UTC] - sqlalchemy.exc.OperationalError: (psycopg2...
    ERROR integration_tests/index/test_index_data.py::test_transactions_api_ctx_mgr[experimental-US/Pacific] - sqlalchemy.exc.Operati...
    ERROR integration_tests/index/test_index_data.py::test_transactions_api_ctx_mgr[experimental-UTC] - sqlalchemy.exc.OperationalErr...
    ERROR integration_tests/index/test_index_data.py::test_transactions_api_manual[experimental-US/Pacific] - sqlalchemy.exc.Operatio...
    ERROR integration_tests/index/test_index_data.py::test_transactions_api_manual[experimental-UTC] - sqlalchemy.exc.OperationalErro...
    ERROR integration_tests/index/test_index_data.py::test_transactions_api_hybrid[experimental-US/Pacific] - sqlalchemy.exc.Operatio...
    ERROR integration_tests/index/test_index_data.py::test_transactions_api_hybrid[experimental-UTC] - sqlalchemy.exc.OperationalErro...
    ERROR integration_tests/index/test_index_data.py::test_get_missing_things[experimental-US/Pacific] - sqlalchemy.exc.OperationalEr...
    ERROR integration_tests/index/test_index_data.py::test_get_missing_things[experimental-UTC] - sqlalchemy.exc.OperationalError: (p...
    ERROR integration_tests/index/test_memory_index.py::test_mem_user_resource - RuntimeError: No index driver found for 'memory'. 2 ...
    ERROR integration_tests/index/test_memory_index.py::test_mem_metadatatype_resource - RuntimeError: No index driver found for 'mem...
    ERROR integration_tests/index/test_memory_index.py::test_mem_product_resource - RuntimeError: No index driver found for 'memory'....
    ERROR integration_tests/index/test_memory_index.py::test_mem_dataset_add_eo3 - RuntimeError: No index driver found for 'memory'. ...
    ERROR integration_tests/index/test_memory_index.py::test_mem_ds_lineage - RuntimeError: No index driver found for 'memory'. 2 ava...
    ERROR integration_tests/index/test_memory_index.py::test_mem_ds_search_dups - RuntimeError: No index driver found for 'memory'. 2...
    ERROR integration_tests/index/test_memory_index.py::test_mem_ds_locations - RuntimeError: No index driver found for 'memory'. 2 a...
    ERROR integration_tests/index/test_memory_index.py::test_mem_ds_updates - RuntimeError: No index driver found for 'memory'. 2 ava...
    ERROR integration_tests/index/test_memory_index.py::test_mem_ds_expand_periods - RuntimeError: No index driver found for 'memory'...
    ERROR integration_tests/index/test_memory_index.py::test_mem_prod_time_bounds - RuntimeError: No index driver found for 'memory'....
    ERROR integration_tests/index/test_memory_index.py::test_mem_ds_archive_purge - RuntimeError: No index driver found for 'memory'....
    ERROR integration_tests/index/test_memory_index.py::test_mem_ds_search_and_count - RuntimeError: No index driver found for 'memor...
    ERROR integration_tests/index/test_memory_index.py::test_mem_ds_search_and_count_by_product - RuntimeError: No index driver found...
    ERROR integration_tests/index/test_memory_index.py::test_mem_ds_search_returning - RuntimeError: No index driver found for 'memor...
    ERROR integration_tests/index/test_memory_index.py::test_mem_ds_search_summary - RuntimeError: No index driver found for 'memory'...
    ERROR integration_tests/index/test_memory_index.py::test_mem_ds_search_returning_datasets_light - RuntimeError: No index driver f...
    ERROR integration_tests/index/test_memory_index.py::test_mem_ds_search_by_metadata - RuntimeError: No index driver found for 'mem...
    ERROR integration_tests/index/test_memory_index.py::test_mem_ds_count_product_through_time - RuntimeError: No index driver found ...
    ERROR integration_tests/index/test_memory_index.py::test_memory_dataset_add - RuntimeError: No index driver found for 'memory'. 2...
    ERROR integration_tests/index/test_memory_index.py::test_mem_transactions - RuntimeError: No index driver found for 'memory'. 2 a...
    ERROR integration_tests/index/test_pluggable_indexes.py::test_with_standard_index[experimental-US/Pacific] - sqlalchemy.exc.Opera...
    ERROR integration_tests/index/test_pluggable_indexes.py::test_with_standard_index[experimental-UTC] - sqlalchemy.exc.OperationalE...
    ERROR integration_tests/index/test_pluggable_indexes.py::test_system_init[experimental-US/Pacific] - sqlalchemy.exc.OperationalEr...
    ERROR integration_tests/index/test_pluggable_indexes.py::test_system_init[experimental-UTC] - sqlalchemy.exc.OperationalError: (p...
    ERROR integration_tests/index/test_postgis_index.py::test_create_spatial_index[US/Pacific-experimental] - sqlalchemy.exc.Operatio...
    ERROR integration_tests/index/test_postgis_index.py::test_create_spatial_index[UTC-experimental] - sqlalchemy.exc.OperationalErro...
    ERROR integration_tests/index/test_postgis_index.py::test_spatial_index_maintain[US/Pacific-experimental] - sqlalchemy.exc.Operat...
    ERROR integration_tests/index/test_postgis_index.py::test_spatial_index_maintain[UTC-experimental] - sqlalchemy.exc.OperationalEr...
    ERROR integration_tests/index/test_postgis_index.py::test_spatial_index_populate[US/Pacific-experimental] - sqlalchemy.exc.Operat...
    ERROR integration_tests/index/test_postgis_index.py::test_spatial_index_populate[UTC-experimental] - sqlalchemy.exc.OperationalEr...
    ERROR integration_tests/index/test_postgis_index.py::test_spatial_index_crs_validity[US/Pacific-experimental] - sqlalchemy.exc.Op...
    ERROR integration_tests/index/test_postgis_index.py::test_spatial_index_crs_validity[UTC-experimental] - sqlalchemy.exc.Operation...
    ERROR integration_tests/index/test_postgis_index.py::test_spatial_extent[US/Pacific-experimental] - sqlalchemy.exc.OperationalErr...
    ERROR integration_tests/index/test_postgis_index.py::test_spatial_extent[UTC-experimental] - sqlalchemy.exc.OperationalError: (ps...
    ERROR integration_tests/index/test_postgis_index.py::test_spatial_search[US/Pacific-experimental] - sqlalchemy.exc.OperationalErr...
    ERROR integration_tests/index/test_postgis_index.py::test_spatial_search[UTC-experimental] - sqlalchemy.exc.OperationalError: (ps...
    ERROR integration_tests/index/test_search_eo3.py::test_search_by_metadata[experimental-US/Pacific] - sqlalchemy.exc.OperationalEr...
    ERROR integration_tests/index/test_search_eo3.py::test_search_by_metadata[experimental-UTC] - sqlalchemy.exc.OperationalError: (p...
    ERROR integration_tests/index/test_search_eo3.py::test_search_dataset_equals_eo3[experimental-US/Pacific] - sqlalchemy.exc.Operat...
    ERROR integration_tests/index/test_search_eo3.py::test_search_dataset_equals_eo3[experimental-UTC] - sqlalchemy.exc.OperationalEr...
    ERROR integration_tests/index/test_search_eo3.py::test_search_dataset_by_metadata_eo3[experimental-US/Pacific] - sqlalchemy.exc.O...
    ERROR integration_tests/index/test_search_eo3.py::test_search_dataset_by_metadata_eo3[experimental-UTC] - sqlalchemy.exc.Operatio...
    ERROR integration_tests/index/test_search_eo3.py::test_search_day_eo3[experimental-US/Pacific] - sqlalchemy.exc.OperationalError:...
    ERROR integration_tests/index/test_search_eo3.py::test_search_day_eo3[experimental-UTC] - sqlalchemy.exc.OperationalError: (psyco...
    ERROR integration_tests/index/test_search_eo3.py::test_search_dataset_ranges_eo3[experimental-US/Pacific] - sqlalchemy.exc.Operat...
    ERROR integration_tests/index/test_search_eo3.py::test_search_dataset_ranges_eo3[experimental-UTC] - sqlalchemy.exc.OperationalEr...
    ERROR integration_tests/index/test_search_eo3.py::test_zero_width_range_search[experimental-US/Pacific] - sqlalchemy.exc.Operatio...
    ERROR integration_tests/index/test_search_eo3.py::test_zero_width_range_search[experimental-UTC] - sqlalchemy.exc.OperationalErro...
    ERROR integration_tests/index/test_search_eo3.py::test_search_globally_eo3[experimental-US/Pacific] - sqlalchemy.exc.OperationalE...
    ERROR integration_tests/index/test_search_eo3.py::test_search_globally_eo3[experimental-UTC] - sqlalchemy.exc.OperationalError: (...
    ERROR integration_tests/index/test_search_eo3.py::test_search_by_product_eo3[experimental-US/Pacific] - sqlalchemy.exc.Operationa...
    ERROR integration_tests/index/test_search_eo3.py::test_search_by_product_eo3[experimental-UTC] - sqlalchemy.exc.OperationalError:...
    ERROR integration_tests/index/test_search_eo3.py::test_search_limit_eo3[experimental-US/Pacific] - sqlalchemy.exc.OperationalErro...
    ERROR integration_tests/index/test_search_eo3.py::test_search_limit_eo3[experimental-UTC] - sqlalchemy.exc.OperationalError: (psy...
    ERROR integration_tests/index/test_search_eo3.py::test_search_or_expressions_eo3[experimental-US/Pacific] - sqlalchemy.exc.Operat...
    ERROR integration_tests/index/test_search_eo3.py::test_search_or_expressions_eo3[experimental-UTC] - sqlalchemy.exc.OperationalEr...
    ERROR integration_tests/index/test_search_eo3.py::test_search_returning_eo3[experimental-US/Pacific] - sqlalchemy.exc.Operational...
    ERROR integration_tests/index/test_search_eo3.py::test_search_returning_eo3[experimental-UTC] - sqlalchemy.exc.OperationalError: ...
    ERROR integration_tests/index/test_search_eo3.py::test_search_returning_rows_eo3[experimental-US/Pacific] - sqlalchemy.exc.Operat...
    ERROR integration_tests/index/test_search_eo3.py::test_search_returning_rows_eo3[experimental-UTC] - sqlalchemy.exc.OperationalEr...
    ERROR integration_tests/index/test_search_eo3.py::test_searches_only_type_eo3[experimental-US/Pacific] - sqlalchemy.exc.Operation...
    ERROR integration_tests/index/test_search_eo3.py::test_searches_only_type_eo3[experimental-UTC] - sqlalchemy.exc.OperationalError...
    ERROR integration_tests/index/test_search_eo3.py::test_search_special_fields_eo3[experimental-US/Pacific] - sqlalchemy.exc.Operat...
    ERROR integration_tests/index/test_search_eo3.py::test_search_special_fields_eo3[experimental-UTC] - sqlalchemy.exc.OperationalEr...
    ERROR integration_tests/index/test_search_eo3.py::test_search_by_uri_eo3[experimental-US/Pacific] - sqlalchemy.exc.OperationalErr...
    ERROR integration_tests/index/test_search_eo3.py::test_search_by_uri_eo3[experimental-UTC] - sqlalchemy.exc.OperationalError: (ps...
    ERROR integration_tests/index/test_search_eo3.py::test_search_conflicting_types[experimental-US/Pacific] - sqlalchemy.exc.Operati...
    ERROR integration_tests/index/test_search_eo3.py::test_search_conflicting_types[experimental-UTC] - sqlalchemy.exc.OperationalErr...
    ERROR integration_tests/index/test_search_eo3.py::test_fetch_all_of_md_type[experimental-US/Pacific] - sqlalchemy.exc.Operational...
    ERROR integration_tests/index/test_search_eo3.py::test_fetch_all_of_md_type[experimental-UTC] - sqlalchemy.exc.OperationalError: ...
    ERROR integration_tests/index/test_search_eo3.py::test_count_searches[experimental-US/Pacific] - sqlalchemy.exc.OperationalError:...
    ERROR integration_tests/index/test_search_eo3.py::test_count_searches[experimental-UTC] - sqlalchemy.exc.OperationalError: (psyco...
    ERROR integration_tests/index/test_search_eo3.py::test_count_by_product_searches_eo3[experimental-US/Pacific] - sqlalchemy.exc.Op...
    ERROR integration_tests/index/test_search_eo3.py::test_count_by_product_searches_eo3[experimental-UTC] - sqlalchemy.exc.Operation...
    ERROR integration_tests/index/test_search_eo3.py::test_count_time_groups[experimental-US/Pacific] - sqlalchemy.exc.OperationalErr...
    ERROR integration_tests/index/test_search_eo3.py::test_count_time_groups[experimental-UTC] - sqlalchemy.exc.OperationalError: (ps...
    ERROR integration_tests/index/test_search_eo3.py::test_count_time_groups_cli[experimental-US/Pacific] - sqlalchemy.exc.Operationa...
    ERROR integration_tests/index/test_search_eo3.py::test_count_time_groups_cli[experimental-UTC] - sqlalchemy.exc.OperationalError:...
    ERROR integration_tests/index/test_search_eo3.py::test_search_cli_basic[experimental-US/Pacific] - sqlalchemy.exc.OperationalErro...
    ERROR integration_tests/index/test_search_eo3.py::test_search_cli_basic[experimental-UTC] - sqlalchemy.exc.OperationalError: (psy...
    ERROR integration_tests/index/test_search_eo3.py::test_cli_info_eo3[experimental-US/Pacific] - sqlalchemy.exc.OperationalError: (...
    ERROR integration_tests/index/test_search_eo3.py::test_cli_info_eo3[experimental-UTC] - sqlalchemy.exc.OperationalError: (psycopg...
    ERROR integration_tests/index/test_search_eo3.py::test_find_duplicates_eo3[experimental-US/Pacific] - sqlalchemy.exc.OperationalE...
    ERROR integration_tests/index/test_search_eo3.py::test_find_duplicates_eo3[experimental-UTC] - sqlalchemy.exc.OperationalError: (...
    ERROR integration_tests/index/test_search_eo3.py::test_csv_search_via_cli_eo3[experimental-US/Pacific] - sqlalchemy.exc.Operation...
    ERROR integration_tests/index/test_search_eo3.py::test_csv_search_via_cli_eo3[experimental-UTC] - sqlalchemy.exc.OperationalError...
    ERROR integration_tests/index/test_search_eo3.py::test_csv_structure_eo3[experimental-US/Pacific] - sqlalchemy.exc.OperationalErr...
    ERROR integration_tests/index/test_search_eo3.py::test_csv_structure_eo3[experimental-UTC] - sqlalchemy.exc.OperationalError: (ps...
    ERROR integration_tests/index/test_search_eo3.py::test_query_dataset_multi_product_eo3[experimental-US/Pacific] - sqlalchemy.exc....
    ERROR integration_tests/index/test_search_eo3.py::test_query_dataset_multi_product_eo3[experimental-UTC] - sqlalchemy.exc.Operati...
    FAILED tests/test_driver.py::test_writer_drivers - AssertionError: assert 'netcdf' in []
    FAILED tests/test_driver.py::test_index_drivers - AssertionError: assert 'null' in ['default', 'postgres']
    FAILED tests/test_geometry.py::test_ops - assert 25.0 == 25.000000000000004
    FAILED tests/test_geometry.py::test_crs_compat - assert None == 3577
    FAILED tests/api/test_grid_workflow.py::test_gridworkflow_with_time_depth - AssertionError
    FAILED tests/api/test_virtual.py::test_aggregate - ValueError: time already exists as coordinate or variable name.
    FAILED integration_tests/test_double_ingestion.py::test_double_ingestion[US/Pacific-datacube] - AssertionError: Error for ['inges...
    FAILED integration_tests/test_double_ingestion.py::test_double_ingestion[UTC-datacube] - AssertionError: Error for ['ingest', '--...
    FAILED integration_tests/test_end_to_end.py::test_end_to_end[US/Pacific-datacube] - AssertionError: Error for ['-v', 'ingest', '-...
    FAILED integration_tests/test_end_to_end.py::test_end_to_end[UTC-datacube] - AssertionError: Error for ['-v', 'ingest', '-c', '/t...
    FAILED integration_tests/test_full_ingestion.py::test_full_ingestion[US/Pacific-datacube] - AssertionError: Error for ['ingest', ...
    FAILED integration_tests/test_full_ingestion.py::test_full_ingestion[UTC-datacube] - AssertionError: Error for ['ingest', '--conf...
    FAILED integration_tests/test_full_ingestion.py::test_process_all_ingest_jobs[US/Pacific-datacube] - AssertionError: Error for ['...
    FAILED integration_tests/test_full_ingestion.py::test_process_all_ingest_jobs[UTC-datacube] - AssertionError: Error for ['ingest'...
    FAILED integration_tests/test_index_out_of_bound.py::test_index_out_of_bound_error[US/Pacific-datacube] - AssertionError: Error f...
    FAILED integration_tests/test_index_out_of_bound.py::test_index_out_of_bound_error[UTC-datacube] - AssertionError: Error for ['in...
    FAILED integration_tests/test_validate_ingestion.py::test_invalid_ingestor_config[datacube-US/Pacific] - AssertionError: assert '...
    FAILED integration_tests/test_validate_ingestion.py::test_invalid_ingestor_config[datacube-UTC] - AssertionError: assert 'No such...
    FAILED integration_tests/index/test_memory_index.py::test_init_memory - AssertionError: assert 'memory' in {'default': <datacube....
    FAILED integration_tests/index/test_null_index.py::test_init_null - AssertionError: assert 'null' in {'default': <datacube.index....
    FAILED integration_tests/index/test_null_index.py::test_null_user_resource - RuntimeError: No index driver found for 'null'. 2 av...
    FAILED integration_tests/index/test_null_index.py::test_null_metadata_types_resource - RuntimeError: No index driver found for 'n...
    FAILED integration_tests/index/test_null_index.py::test_null_product_resource - RuntimeError: No index driver found for 'null'. 2...
    FAILED integration_tests/index/test_null_index.py::test_null_dataset_resource - RuntimeError: No index driver found for 'null'. 2...
    FAILED integration_tests/index/test_null_index.py::test_null_transactions - RuntimeError: No index driver found for 'null'. 2 ava...
    FAILED integration_tests/index/test_search_eo3.py::test_cli_info_eo3[datacube-US/Pacific] - AssertionError: assert '    lat: {beg...
    FAILED integration_tests/index/test_search_eo3.py::test_cli_info_eo3[datacube-UTC] - AssertionError: assert '    lat: {begin: -38...
    ==================== 27 failed, 690 passed, 3 skipped, 2 xfailed, 21 warnings, 162 errors in 158.71s (0:02:38) =====================
    

    Environment information

    • Which datacube --version are you using?
    [email protected] ~/proj/datacube-core % datacube --version
    Open Data Cube core, version 1.8.8
    [email protected] ~/proj/datacube-core % conda info
    
         active environment : odc_3.8
        active env location : /home/dap/anaconda3/envs/odc_3.8
                shell level : 1
           user config file : /home/dap/.condarc
     populated config files : /home/dap/.condarc
              conda version : 22.11.1
        conda-build version : 3.22.0
             python version : 3.9.13.final.0
           virtual packages : __archspec=1=aarch64
                              __glibc=2.35=0
                              __linux=5.15.0=0
                              __unix=0=0
           base environment : /home/dap/anaconda3  (writable)
          conda av data dir : /home/dap/anaconda3/etc/conda
      conda av metadata url : None
               channel URLs : https://repo.anaconda.com/pkgs/main/linux-aarch64
                              https://repo.anaconda.com/pkgs/main/noarch
                              https://repo.anaconda.com/pkgs/r/linux-aarch64
                              https://repo.anaconda.com/pkgs/r/noarch
                              https://conda.anaconda.org/conda-forge/linux-aarch64
                              https://conda.anaconda.org/conda-forge/noarch
              package cache : /home/dap/anaconda3/pkgs
                              /home/dap/.conda/pkgs
           envs directories : /home/dap/anaconda3/envs
                              /home/dap/.conda/envs
                   platform : linux-aarch64
                 user-agent : conda/22.11.1 requests/2.28.1 CPython/3.9.13 Linux/5.15.0-56-generic ubuntu/22.04.1 glibc/2.35
                    UID:GID : 1027:1027
                 netrc file : None
               offline mode : False
    
    [email protected] ~/proj/datacube-core % git status
    On branch develop
    Your branch is up to date with 'origin/develop'.
    
    Untracked files:
      (use "git add <file>..." to include in what will be committed)
    	h.txt
    	my.py
    	redo.sh
    
    nothing added to commit but untracked files present (use "git add" to track)
    [email protected] ~/proj/datacube-core % 
    
    • What datacube deployment/environment are you running against?

    Note: Stale issues will be automatically closed after a period of six months with no activity. To ensure critical issues are not closed, tag them with the Github pinned tag. If you are a community member and not a maintainer please escalate this issue to maintainers via GIS StackExchange or Slack.

    opened by permezel 5
  • Need support for product grouping in core

    Need support for product grouping in core

    ASTER products are not part of DEA products, but due to alphabetical ordering they always appear first when querying dc.list_products()


    Core should add a feature to support pre-identified grouping, e.g.

    dc.list_products(group='c3')

    enhancement 
    opened by pindge 4
  • nodata for product definition doesn't match dtype

    nodata for product definition doesn't match dtype

    related to: https://github.com/GeoscienceAustralia/dea-config/issues/1110

    247 | ga_srtm_dem1sv1_0 | {"platform": {"code": "Space Shuttle Endeavour"}, "instrument": {"name": "SIR"}, "product_type": "DEM"} |                 1 | {"name": "ga_srtm_dem1sv1_0", "storage": {"crs": "EPSG:4326", "resolution": {"latitude": -0.00027777777778, "longitude": 0.00027777777778}}, "metadata": {"platform": {"code": "Space Shuttle Endeavour"}, "instrument": {"name": "SIR"}, "product_type": "DEM"}, "description": "DEM 1sec Version 1.0", "measurements": [{"name": "dem", "dtype": "float32", "units": "metre", "nodata": -340282350000000000000000000000000000000}, {"name": "dem_s", "dtype": "float32", "units": "metre", "nodata": -340282350000000000000000000000000000000}, {"name": "dem_h", "dtype": "float32", "units": "metre", "nodata": -340282350000000000000000000000000000000}], "metadata_type": "eo"} | 2020-09-25 04:29:16.760552+00 | ows      | 2021-07-01 01:07:29.571853+00
    (1 row)
    

    eo3-validate type handling needs update

    opened by pindge 14
  • Links in the Read the Docs footer lead to 404

    Links in the Read the Docs footer lead to 404

    https://github.com/opendatacube/datacube-core/blob/develop/docs/_templates/odc-footer.html renders at the footer of page https://datacube-core.readthedocs.io/en/latest/index.html

    documentation 
    opened by pindge 0
Releases(datacube-1.8.9)
  • datacube-1.8.9(Nov 17, 2022)

    Most notable changes:

    1. patch_url argument to dc.load() and dc.load_data() (introduced in v1.8.8) is now also supported for Dask loading.
    2. Fixed a day-zero bug affecting search over range-type search fields where the target and search field value is of zero-width.
    3. Performance improvements to CRS geometry class.
    4. Numerous improvements to documentation and github actions.

    Full list of changes:

    • Performance improvements to CRS geometry class (#1322)
    • Extend patch_url argument to dc.load() and dc.load_data() to Dask loading. (#1323)
    • Add sphinx.ext.autosectionlabel extension to readthedoc conf to support :ref: command (#1325)
    • Add pyspellcheck for .rst documentation files and fix typos (#1327)
    • Add rst documentation lint github action and apply best practices (#1328)
    • Follow PEP561_ to make type hints available to other packages (#1331)
    • Updated GitHub actions config to remove deprecated set-output (#1333)
    • Add what's new page link to menu and general doc fixes (#1335)
    • Add search_fields to required for metadata type schema and update doc (#1339)
    • Fix typo and update metadata documentation (#1340)
    • Add readthedoc preview github action (#1344)
    • Update nodata in readthedoc for products page (#1347)
    • Add eo-datasets to extensions & related software doc page (#1349)
    • Fix bug affecting searches against range types of zero width (#1352)
    • Add 1.8.9 release date and missing PR to whats_news.rst (#1353)

    Includes contributions from @SpacemanPaul, @omad, @pindge, @snowman2.

    With thanks and appreciation to all contributors, users and supporting organisations, especially Geoscience Australia.

    Source code(tar.gz)
    Source code(zip)
  • 1.8.8(Oct 5, 2022)

    Official release (same as 1.8.8rc1)

    Most notable new features are:

    1. the new database transaction API, as discussed in ODC-EP07 Database Transaction API. Simple example:
    with dc.index.transaction() as trans:
       # Archive old datasets and add new ones in single transaction
       dc.index.datasets.archive([old_ds1.id, old_ds2.id])
       dc.index.datasets.add(ds1)
       dc.index.datasets.add(ds2)
    
       # If execution gets to here, the transaction is committed.
       # If an exception was raised by any of the above methods, the transaction is rolled back.
    
    2. Add patch_url argument to dc.load and dc.load_data allowing signing of URIs as required by some commercial data providers (e.g. Microsoft Planetary Computer), as sketched below.
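    A minimal sketch of how patch_url might be used (the product name and the signing helper below are placeholders; with Microsoft Planetary Computer one would typically pass something like planetary_computer.sign_url instead):

    import datacube

    def sign(url: str) -> str:
        # Placeholder signing function: append a fake signature to the URI
        return url + "?sig=placeholder"

    dc = datacube.Datacube()
    data = dc.load(
        product="example_product",             # assumed product name
        measurements=["red", "green", "blue"],
        patch_url=sign,                        # called on each dataset URI before it is read
    )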

    Also includes an update of the main test docker build to Ubuntu 22.04 and Python 3.10, and significant progress on the new postgis index driver.

    Please note that the postgis index driver is still flagged as "experimental" and is missing several key features.

    The release is almost identical to 1.8.8rc1. Full list of changes since 1.8.7:

    • Migrate main test docker build to Ubuntu 22.04 and Python 3.10. (#1283)
    • Dynamically create tables to serve as spatial indexes in postgis driver. (#1312)
    • Populate spatial index tables, automatically and manually. (#1314)
    • Perform spatial queries against spatial index tables in postgis driver. (#1316)
    • EO3 data fixtures and tests. Fix SQLAlchemy bugs in postgis driver. (#1309)
    • Dependency updates. (#1308, #1313)
    • Remove several features that had been deprecated in previous releases. (#1275)
    • Fix broken paths in api docs. (#1277)
    • Fix readthedocs build. (#1269)
    • Add support for Jupyter Notebooks pages in documentation (#1279)
    • Add doc change comparison for tuple and list types with identical values (#1281)
    • Add flake8 to Github action workflow and correct code base per flake8 rules (#1285)
    • Add dataset id check to dataset doc resolve to prevent a uuid error when the id used is None (#1287)
    • Add how to run targeted single test case in docker guide to README (#1288)
    • Add help message for all dataset, product and metadata subcommands when required arg is not passed in (#1292)
    • Add error code 1 to all incomplete dataset, product and metadata subcommands (#1293)
    • Add exit_on_empty_file message to product and dataset subcommands instead of returning no output when file is empty (#1294)
    • Add flags to index drivers advertising what format datasets they support (eo/eo3/non-geo (e.g. telemetry only)) and validate in the high-level API. General refactor and cleanup of eo3.py and hl.py. (#1296)
    • Replace references to 'agdc' and 'dataset_type' in postgis driver with 'odc' and 'product'. (#1298)
    • Add warning message for product and metadata add when product and metadata is already in the database. (#1299)
    • Ensure SimpleDocNav.id is of type UUID, to improve lineage resolution (#1304)
    • Replace SQLAlchemy schema and query definitions in experimental postgis driver with newer "declarative" style ORM. Portions of API dealing with lineage handling, locations, and dynamic indexes are currently broken in the postgis driver. As per the warning message, the postgis driver is currently flagged as "experimental" and is not considered stable. (#1305)
    • Implement patch_url argument to dc.load() and dc.load_data() to provide a way to sign dataset URIs, as is required to access some commercial archives (e.g. Microsoft Planetary Computer). API is based on the odc-stac implementation. Only works for direct loading. More work required for deferred (i.e. Dask) loading. (#1317)
    • Implement public-facing index-driver-independent API for managing database transactions, as per Enhancement Proposal EP07 (#1318)
    • Update Conda environment to match dependencies in setup.py (#1319)
    • Final updates to whats_new.rst for release (#1320)

    Includes contributions from @SpacemanPaul @tijmenr @pindge and @omad

    Thanks to the ODC Steering Council and Geoscience Australia for their ongoing support of ODC development.

    Source code(tar.gz)
    Source code(zip)
  • 1.8.8rc1(Sep 29, 2022)

    RC release to facilitate development in downstream packages using the new transaction API.

    Most notable new feature is the new database transaction API, as discussed in ODC-EP07 Database Transaction API. API Example:

    with dc.index.transaction() as trans:
       # Archive old datasets and add new ones in single transaction
       dc.index.datasets.archive([old_ds1.id, old_ds2.id])
       dc.index.datasets.add(ds1)
       dc.index.datasets.add(ds2)
    
       # If execution gets to here, the transaction is committed.
       # If an exception was raised by any of the above methods, the transaction is rolled back.
    

    Also includes an update of the main test docker build to Ubuntu 22.04 and Python 3.10, and significant progress on the new postgis index driver.

    Please note that the postgis index driver is still flagged as "experimental" and is missing several key features.

    Full list of changes since 1.8.7:

    • Migrate main test docker build to Ubuntu 22.04 and Python 3.10. (#1283)
    • Dynamically create tables to serve as spatial indexes in postgis driver. (#1312)
    • Populate spatial index tables, automatically and manually. (#1314)
    • Perform spatial queries against spatial index tables in postgis driver. (#1316)
    • EO3 data fixtures and tests. Fix SQLAlchemy bugs in postgis driver. (#1309)
    • Dependency updates. (#1308, #1313)
    • Remove several features that had been deprecated in previous releases. (#1275)
    • Fix broken paths in api docs. (#1277)
    • Fix readthedocs build. (#1269)
    • Add doc change comparison for tuple and list types with identical values (#1281)
    • Add flake8 to Github action workflow and correct code base per flake8 rules (#1285)
    • Add dataset id check to dataset doc resolve to prevent a uuid error when the id used is None (#1287)
    • Add how to run targeted single test case in docker guide to README (#1288)
    • Add help message for all dataset, product and metadata subcommands when required arg is not passed in (#1292)
    • Add error code 1 to all incomplete dataset, product and metadata subcommands (#1293)
    • Add exit_on_empty_file message to product and dataset subcommands instead of returning no output when file is empty (#1294)
    • Add flags to index drivers advertising what format datasets they support (eo/eo3/non-geo (e.g. telemetry only)) and validate in the high-level API. General refactor and cleanup of eo3.py and hl.py. (#1296)
    • Replace references to 'agdc' and 'dataset_type' in postgis driver with 'odc' and 'product'. (#1298)
    • Add warning message for product and metadata add when product and metadata is already in the database. (#1299)
    • Ensure SimpleDocNav.id is of type UUID, to improve lineage resolution (#1304)
    • Replace SQLAlchemy schema and query definitions in experimental postgis driver with newer "declarative" style ORM. Portions of API dealing with lineage handling, locations, and dynamic indexes are currently broken in the postgis driver. As per the warning message, the postgis driver is currently flagged as "experimental" and is not considered stable. (#1305)
    • Implement patch_url argument to dc.load() and dc.load_data() to provide a way to sign dataset URIs, as is required to access some commercial archives (e.g. Microsoft Planetary Computer). API is based on the odc-stac implementation. Only works for direct loading. More work required for deferred (i.e. Dask) loading. (#1317)
    • Implement public-facing index-driver-independent API for managing database transactions, as per Enhancement Proposal EP07 (#1318)
    • Update Conda environment to match dependencies in setup.py (#1319)
    Source code(tar.gz)
    Source code(zip)
  • 1.8.7(Jun 7, 2022)

    • Cleanup mypy typechecking compliance. (#1266)
    • When dataset add operations fail due to lineage issues, the produced error message now clearly indicates that the problem was due to lineage issues. (#1260)
    • Added support for group-by financial years to virtual products. (#1257, #1261)
    • Remove reference to rasterio.path. (#1255)
    • Cleaner separation of (experimental) postgis and (stable) postgres drivers, and suppress SQLAlchemy cache warnings. (#1254)
    • Prevent Shapely deprecation warning. (#1253)
    • Fix DATACUBE_DB_URL parsing to understand syntax like: postgresql:///datacube?host=/var/run/postgresql (#1256)
    • Clearer error message when local metadata file does not exist. (#1252)
    • Address upstream security alerts and update upstream library versions. (#1250)
    • Clone postgres index driver as postgis, and flag as experimental. (#1248)
    • Implement a local non-persistent in-memory index driver, with maximal backwards-compatibility with default postgres index driver. Doesn't work with CLI interface, as every invocation will receive a new, empty index, but useful for testing and small scale proof-of-concept work. (#1247)
    • Performance and correctness fixes backported from odc-geo. (#1242)
    • Deprecate use of the celery executor. Update numpy pin in rtd-requirements.txt to suppress Dependabot warnings. (#1239)
    • Implement a minimal "null" index driver that provides an always-empty index. Mainly intended to validate the recent abstraction work around the index driver layer, but may be useful for some testing scenarios, and ODC use cases that do not require an index. (#1236)
    • Regularise some minor API inconsistencies and restore redis-server to Docker image. (#1234)
    • Move (default) postgres driver-specific files from datacube.index to datacube.index.postgres. datacube.index.Index is now an alias for the abstract base class index interface definition rather than postgres driver-specific implementation of that interface. (#1231)
    • Update numpy and netcdf4 version in docker build (#1229, #1227)
    • Migrate test docker image from datacube/geobase to osgeo/gdal. (#1233)
    • Separate index driver interface definition from default index driver implementation. (#1226)
    • Prefer WKT over EPSG when guessing CRS strings. (#1223, #1262)
    • Updates to documentation. (#1208, #1212, #1215, #1218, #1240, #1244)
    • Tweak to segmented in geometry to suppress Shapely warning. (#1207)
    • Fix to ensure skip_broken_datasets is correctly propagated in virtual products (#1259)
    • Deprecate Rename, Select and ToFloat built-in transforms in virtual products (#1263)

    Includes contributions from @whatnick, @alexgleith, @maawoo, @jeremyh, @iamtekson, @alfredoahds, @SpacemanPaul, @kirill888, @robbitbt, @tebadi, @uchchwhash, and @mpaget.

    Acknowledgements to the Open Datacube Steering Council and all supporting organisations, including Geoscience Australia, Digital Earth Africa, CSIRO, Frontier SI and Aerometrex.

    Source code(tar.gz)
    Source code(zip)
  • 1.8.6(Sep 30, 2021)

    • Added dataset purge command for hard deletion of archived datasets #1199
    • Trivial fixes to CLI help output #1197
    • Fix to enable searching for multiple products #1201
    Source code(tar.gz)
    Source code(zip)
  • 1.8.5(Aug 18, 2021)

    • Fix unguarded dependencies on boto libraries #1174 #1172
    • Various documentation fixes #1175
    • Address import problems on Windows due to use of Unix only functions #1176
    • Address numpy.bool deprecation warnings #1184
    Source code(tar.gz)
    Source code(zip)
  • 1.8.4(Aug 6, 2021)

    v1.8.4 (6 August 2021)

    • Removed example and contributed notebooks from the repository. Better notebook examples exist
    • Removed datacube_apps, as these are not used and not maintained
    • Add cloud_cover to EO3 metadata
    • Add erosion functionality to Virtual products' ApplyMask to supplement existing dilation functionality #1049
    • Fix numeric precision issues in compute_reproject_roi when pixel size is small #1047
    • Follow up fix to #1047 to round scale to nearest integer if very close
    • Add support for 3D Datasets #1099
    • New feature: search by URI from the command line datacube dataset uri-search
    • Added new "license" and "description" properties to DatasetType to enable easier access to product information #1143 #1144
    • Revised the Datacube.list_products function to produce a simpler and more useful product list table #1145
    • Refactor docs, making them more up to date and simpler #1137 #1128
    • Add new dataset_predicate param to dc.load and dc.find_datasets for more flexible temporal filtering (e.g. loading data for non-contiguous time ranges such as specific months or seasons over multiple years) #1148 #1156 (see the sketch after this list)
    • Fix to GroupBy to ensure output axes are correctly labelled when sorting observations using sort_key #1157
    • GroupBy is now its own class to allow easier custom grouping and sorting of data #1157
    • add support for IAM authentication for RDS databases in AWS #1168
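    A minimal sketch of the dataset_predicate parameter mentioned above (assumed semantics: the callable receives each candidate dataset and returns True to keep it; the product name is a placeholder):

    import datacube

    def summer_only(dataset):
        # Keep only observations acquired in December, January or February
        return dataset.center_time.month in (12, 1, 2)

    dc = datacube.Datacube()
    data = dc.load(
        product="example_product",
        time=("2015", "2020"),
        dataset_predicate=summer_only,
    )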
    Source code(tar.gz)
    Source code(zip)
  • datacube-1.8.3(Aug 18, 2020)

    • More efficient band alias handling
    • More documentation cleanups
    • Bug fixes in datacube.utils.aws, credentials handling when AWS_UNSIGNED is set
    • Product definition can now optionally include per-band scaling factors
    • Fix issue where new updated columns aren't created on a fresh database
    • Fix bug around adding updated columns locking up active databases
    Source code(tar.gz)
    Source code(zip)
  • datacube-1.8.2(Jul 10, 2020)

    • Fix regressions in .geobox (#982)
    • Expand list of supported dtypes to include complex values (#989)
    • Can now specify dataset location directly in the yaml document (#990, #989)
    • Better error reporting in datacube dataset update (#983)
    Source code(tar.gz)
    Source code(zip)
  • datacube-1.8.1(Jul 2, 2020)

    Summary

    This release contains mostly bug fixes and documentation improvements.

    Full List of Changes

    • Added an updated column for trigger based tracking of database row updates in PostgreSQL. (#951)
    • Changes to the writer driver API. The driver is now responsible for constructing output URIs from user configuration. (#960)
    • Added a datacube.utils.geometry.assign_crs method for better interoperability with other libraries (#967); see the sketch after this list
    • Better interoperability with xarray - the xarray.Dataset.to_netcdf function should work again (#972, #976)
    • Add support for unsigned access to public S3 resources from CLI apps (#976)
    • Usability fixes for indexing EO3 datasets (#958)
    • Fix CLI initialisation of the Dask Distributed Executor (#974)
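    A hedged sketch of the assign_crs helper mentioned above, attaching a CRS to an array produced outside of ODC so datacube tooling can interpret it (keyword arguments beyond the array and the CRS string are assumptions):

    import numpy as np
    import xarray as xr
    from datacube.utils.geometry import assign_crs

    da = xr.DataArray(
        np.zeros((100, 100), dtype="float32"),
        dims=("y", "x"),
        coords={"y": np.linspace(-30.0, -31.0, 100), "x": np.linspace(150.0, 151.0, 100)},
    )
    da = assign_crs(da, crs="EPSG:4326")  # CRS is recorded on a spatial_ref coordinate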
    Source code(tar.gz)
    Source code(zip)
  • datacube-1.8.0(May 21, 2020)

    Summary

    Lots of changes since the 1.7 release.

    The two primary changes that are most likely to have backward compatibility issues are:

    1. The internal details of how we store geo-registration information on xarray Datasets returned by dc.load have changed in a significant way (#837, #899).
    2. We no longer use GDAL native Python bindings (osgeo.{ogr,osr}) and instead rely on pyproj and shapely as a backend for Geometry and CRS classes (#880).

    We no longer store CRS as an object (datacube.utils.geometry.CRS) in an attribute dictionary of the DataArray, instead it is stored in a string format (WKT) in an attribute of a special spatial_ref coordinate. This change allows us to better interface with xarray IO libraries. One can now write data loaded by dc.load with xr.to_netcdf(..) directly and load back with xr.open_dataset(..), all while maintaining geo-registration information (i.e. .geobox property).

    Also, since CRS information is now stored on the Coordinate, and not on the DataArray itself, it survives a greater variety of mathematical operations. Attributes on the DataArray would often go missing when doing the most basic of operations, like changing the dtype of the loaded data; now CRS metadata is preserved in the majority of cases.

    Moving away from the native GDAL Python bindings is primarily motivated by the complexity of the installation of gdal python library. Both shapely and pyproj that replaced it, offer binary wheels, and are therefore much simpler to install.
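    As a sketch of the round-trip described above ("example_product" is a placeholder; importing datacube is what provides the .geobox property on xarray objects):

    import datacube
    import xarray as xr

    dc = datacube.Datacube()
    ds = dc.load(product="example_product", measurements=["red"])

    # Geo-registration travels with the 'spatial_ref' coordinate, so plain
    # xarray IO preserves it.
    ds.to_netcdf("example.nc")
    reloaded = xr.open_dataset("example.nc")
    print(reloaded.geobox)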

    Changes since 1.8.0rc1

    • Expanded EO3 support
    • Bug fixes in EO3 handling
    • Cleanup in docs
    • Better compatibility with other libraries for CRS construction
    • Removed ancient db migration code

    Full List of Changes

    • Changed geo-registration mechanics for arrays returned by dc.load
    • Migrate geometry and CRS backends from osgeo.ogr and osgeo.osr to shapely and pyproj respectively
    • Fixes for geometries crossing anti meridian
    • EO3 dataset metadata format is now understood by datacube dataset add
    • New virtual product combinator reproject for on-the-fly reprojection of rasters
    • Enhancements to the expressions transformation in virtual products
    • Support /vsi** style paths for dataset locations
    • Remove old Search Expressions and replace with a simpler implementation based on Lark Parser
    • Remove no longer required PyPEG2 dependency
    • Change development version numbers generation. Use setuptools_scm instead of versioneer
    • Deprecated datacube.helpers.write_geotiff, use datacube.utils.cog.write_cog for similar functionality
    • Deprecated datacube.storage.masking, moved to datacube.utils.masking
    • Remove S3AIO driver
    • Removed migration support from datacube releases before 1.1.5. If you still run a datacube before 1.1.5 (from 2016 or older), you will need to update it using ODC 1.7 first, before coming to 1.8.
    Source code(tar.gz)
    Source code(zip)
  • datacube-1.8.0rc1(May 6, 2020)

    Summary

    Lots of changes since the 1.7 release.

    The two primary changes that are most likely to have backward compatibility issues are:

    1. The internal details of how we store geo-registration information on xarray Datasets returned by dc.load have changed in a significant way (#837, #899).
    2. We no longer use GDAL native Python bindings (osgeo.{ogr,osr}) and instead rely on pyproj and shapely as a backend for Geometry and CRS classes (#880).

    We no longer store CRS as an object (datacube.utils.geometry.CRS) in an attribute dictionary of the DataArray, instead it is stored in a string format (WKT) in an attribute of a special spatial_ref coordinate. This change allows us to better interface with xarray IO libraries. One can now write data loaded by dc.load with xr.to_netcdf(..) directly and load back with xr.open_dataset(..), all while maintaining geo-registration information (i.e. .geobox property).

    Also, since CRS information is now stored on the Coordinate, and not on the DataArray itself, it survives a greater variety of mathematical operations. Attributes on the DataArray would often go missing when doing the most basic of operations, like changing the dtype of the loaded data; now CRS metadata is preserved in the majority of cases.

    Moving away from the native GDAL Python bindings is primarily motivated by the complexity of the installation of gdal python library. Both shapely and pyproj that replaced it, offer binary wheels, and are therefore much simpler to install.

    Full List of Changes

    • Changed geo-registration mechanics for arrays returned by dc.load
    • Migrate geometry and CRS backends from osgeo.ogr and osgeo.osr to shapely and pyproj respectively
    • Fixes for geometries crossing anti meridian
    • EO3 dataset metadata format is now understood by datacube dataset add
    • New virtual product combinator reproject for on-the-fly reprojection of rasters
    • Enhancements to the expressions transformation in virtual products
    • Support /vsi** style paths for dataset locations
    • Remove old Search Expressions and replace with a simpler implementation based on Lark Parser
    • Remove no longer required PyPEG2 dependency
    • Change development version numbers generation. Use setuptools_scm instead of versioneer
    • Deprecated datacube.helpers.write_geotiff, use datacube.utils.cog.write_cog for similar functionality
    • Deprecated datacube.storage.masking, moved to datacube.utils.masking
    • Remove S3AIO driver
    Source code(tar.gz)
    Source code(zip)
  • datacube-1.8.0b6(Apr 16, 2020)

    Summary

    Lots of changes since the 1.7 release.

    The two primary changes that are most likely to have backward compatibility issues are:

    1. The internal details of how we store geo-registration information on xarray Datasets returned by dc.load have changed in a significant way (#837, #899).
    2. We no longer use GDAL native Python bindings (osgeo.{ogr,osr}) and instead rely on pyproj and shapely as a backend for Geometry and CRS classes (#880).

    We no longer store CRS as an object (datacube.utils.geometry.CRS) in an attribute dictionary of the DataArray, instead it is stored in a string format (WKT) in an attribute of a special spatial_ref coordinate. This change allows us to better interface with xarray IO libraries. One can now write data loaded by dc.load with xr.to_netcdf(..) directly and load back with xr.open_dataset(..), all while maintaining geo-registration information (i.e. .geobox property).

    Also, since CRS information is now stored on the Coordinate, and not on the DataArray itself, it survives a greater variety of mathematical operations. Attributes on the DataArray would often go missing when doing the most basic of operations, like changing the dtype of the loaded data; now CRS metadata is preserved in the majority of cases.

    Moving away from the native GDAL Python bindings is primarily motivated by the complexity of the installation of gdal python library. Both shapely and pyproj that replaced it, offer binary wheels, and are therefore much simpler to install.

    Full List of Changes

    • Changed geo-registration mechanics for arrays returned by dc.load
    • Migrate geometry and CRS backends from osgeo.ogr and osgeo.osr to shapely and pyproj respectively
    • New virtual product combinator reproject for on-the-fly reprojection of rasters
    • Enhancements to the expressions transformation in virtual products
    • Support /vsi** style paths for dataset locations
    • Remove old Search Expressions and replace with a simpler implementation based on Lark Parser
    • Remove no longer required PyPEG2 dependency
    • Change development version numbers generation. Use setuptools_scm instead of versioneer
    • Deprecated datacube.helpers.write_geotiff, use datacube.utils.cog.write_cog for similar functionality
    • Deprecated datacube.storage.masking, moved to datacube.utils.masking
    • Remove S3AIO driver
    Source code(tar.gz)
    Source code(zip)
  • datacube-1.8.0b5(Apr 9, 2020)

  • datacube-1.7(May 16, 2019)

    Not a lot of changes since rc1.

    • Early exit from dc.load on KeyboardInterrupt, allows partial loads inside notebook.
    • Some bug fixes in geometry related code
    • Some cleanups in tests
    • Pre-commit hooks configuration for easier testing
    • Re-enable multi-threaded reads for s3aio driver (set use_threads=True in dc.load(..))
    Source code(tar.gz)
    Source code(zip)
  • datacube-1.7rc1(Apr 18, 2019)

    1.7rc1 (18 April 2019)

    Virtual Products

    Add Virtual Products for multi-product loading.

    (#522, #597, #601, #612, #644, #677, #699, #700)

    Changes to Data Loading

    The internal machinery used when loading and reprojecting data has been completely rewritten. The new code has been tested, but this is a complicated and fundamental part of the code and there is potential for breakage.

    When loading reprojected data, the new code will produce slightly different results. We don't believe that it is any less accurate than the old code, but you cannot expect exactly the same numeric results.

    Non-reprojected loads should be identical.

    This change has been made for two reasons:

    1. The reprojection is now core Data Cube, and is not the responsibility of the IO driver.
    2. When loading lower resolution data, DataCube can now take advantage of available overviews.
    • New futures based IO driver interface (#686)

    Other Changes

    • Allow specifying different resampling methods for different data variables of the same Product. (#551)
    • Allow all resampling methods supported by rasterio. (#622)
    • Bug fix (Index out of bounds causing ingestion failures)
    • Support indexing data directly from HTTP/HTTPS/S3 URLs (#607)
    • Renamed the command line tool datacube metadata_type to datacube metadata (#692)
    • More useful output from the command line datacube {product|metadata} {show|list}
    • Add optional progress_cbk to dc.load(_data) (#702), allowing the user to monitor data loading progress (see the sketch after this list).
    • Thread-safe netCDF access within dc.load (#705)
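    A hedged sketch of progress_cbk as mentioned above (the assumed call signature of (n_completed, n_total) and the product name are placeholders/assumptions):

    import datacube

    def report_progress(n_done, n_total):
        print(f"loaded {n_done} of {n_total}")

    dc = datacube.Datacube()
    data = dc.load(
        product="example_product",
        measurements=["red"],
        progress_cbk=report_progress,
    )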

    Performance Improvements

    • Use single pass over datasets when computing bounds (#660)
    • Bugfixes and improved performance of dask-backed arrays (#547, #664)

    Documentation Improvements

    Deprecations

    • From the command line, the old query syntax for searching within vague time ranges, eg: 2018-03 < time < 2018-04 has been removed. It is unclear exactly what that syntax should mean, whether to include or exclude the months specified. It is replaced by time in [2018-01, 2018-02] which has the same semantics as dc.load time queries. (#709)
    Source code(tar.gz)
    Source code(zip)
  • datacube-1.6.2(Jan 24, 2019)

  • datacube-1.6.1(Nov 27, 2018)

  • datacube-1.6.0(Aug 23, 2018)

    • Enable use of aliases when specifying band names
    • Fix ingestion failing after the first run #510
    • Docker images now know which version of ODC they contain #523
    • Fix data loading when nodata is NaN #531
    • Allow querying based on python datetime.datetime objects. #499 (see the sketch after this list)
    • Require rasterio 1.0.2 or higher, which fixes several critical bugs when loading and reprojecting from multi-band files.
    • Assume fixed paths for id and sources metadata fields #482
    • datacube.model.Measurement was put to use for loading in attributes and made to inherit from dict to preserve current behaviour. #502
    • Updates when indexing data with datacube dataset add (See #485, #451 and #480)
      • Allow indexing without lineage datacube dataset add --ignore-lineage
      • Removed the --sources-policy=skip|verify|ensure. Instead use --[no-]auto-add-lineage and --[no-]verify-lineage
      • New option datacube dataset add --exclude-product <name> allows excluding some products from auto-matching
    • Preliminary API for indexing datasets #511
    • Enable creation of MetadataTypes without having an active database connection #535
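    A minimal sketch of querying with datetime.datetime objects as mentioned above (the product name is a placeholder):

    import datetime
    import datacube

    dc = datacube.Datacube()
    data = dc.load(
        product="example_product",
        time=(datetime.datetime(2017, 1, 1), datetime.datetime(2017, 2, 1)),
    )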
    Source code(tar.gz)
    Source code(zip)
  • datacube-1.6rc2(Jun 30, 2018)

    Backwards Incompatible Changes

    • The helpers.write_geotiff() function has been updated to support files smaller than 256x256. It also no longer supports specifying the time index. Before passing data in, use xarray_data.isel(time=<my_time_index>). (#277)
    • Removed product matching options from datacube dataset update (#445). No matching is needed in this case as all datasets are already in the database and are associated to products.
    • Removed --match-rules option from datacube dataset add (#447)
    • The seldom-used stack keyword argument has been removed from Datacube.load. (#461)
    • The behaviour of the time range queries has changed to be compatible with standard Python searches (eg. time slice an xarray). Now the time range selection is inclusive of any unspecified time units. (#440)
      • Example 1:
        time=('2008-01', '2008-03') previously would have returned all data from the start of 1st January, 2008 to the end of 1st of March, 2008. Now, this query will return all data from the start of 1st January, 2008 up until 23:59:59.999 on 31st of March, 2008.

      • Example 2:
        To specify a search time between 1st of January and 29th of February, 2008 (inclusive), use a search query like time=('2008-01', '2008-02'). This query is equivalent to using any of the following in the second time element:

        ('2008-02-29')
        ('2008-02-29 23')
        ('2008-02-29 23:59')
        ('2008-02-29 23:59:59')
        ('2008-02-29 23:59:59.999')

    Changes

    • A --location-policy option has been added to the datacube dataset update command. Previously this command would always add a new location to the list of URIs associated with a dataset. It's now possible to specify archive and forget options, which will mark previous location as archived or remove them from the index altogether. The default behaviour is unchanged. (#469)

    • The masking related function describe_variable_flags() now returns a pandas DataFrame by default. This will display as a table in Jupyter Notebooks. (#422)

    • Usability improvements in datacube dataset [add|update] commands (#447, #448, #398)

      • Embedded documentation updates
      • Deprecated --auto-match (it was always on anyway)
      • Renamed --dtype to --product (the old name will still work, but with a warning)
      • Add option to skip lineage data when indexing (useful for saving time when testing) (#473)
    • Enable compression for metadata documents stored in NetCDFs generated by stacker and ingestor (#452)

    • Implement better handling of stacked NetCDF files (#415)

      • Record the slice index as part of the dataset location URI, using #part=<int> syntax, index is 0-based
      • Use this index when loading data instead of fuzzy searching by timestamp
      • Fall back to the old behaviour when #part=<int> is missing and the file is more than one time slice deep
    • Expose the following dataset fields and make them searchable:

      • indexed_time (when the dataset was indexed)
      • indexed_by (user who indexed the dataset)
      • creation_time (creation of dataset: when it was processed)
      • label (the label for a dataset)

      (See #432 for more details)

    Bug Fixes

    • The .dimensions property of a product no longer crashes when product is missing a grid_spec. It instead defaults to time,y,x
    • Fix a regression in v1.6rc1 which made it impossible to run datacube ingest to create products which were defined in 1.5.5 and earlier versions of ODC. (#423, #436)
    • Allow specifying the chunking for string variables when writing NetCDFs (#453)
    Source code(tar.gz)
    Source code(zip)
  • datacube-1.6rc1(Apr 11, 2018)

    v1.6rc1 Easter Bilby (10 April 2018)

    This is the first release in a while, so there are a lot of changes, including some significant refactoring, with the potential for issues when upgrading.

    Backwards Incompatible Fixes

    • Drop Support for Python 2. Python 3.5 is now the earliest supported Python version.
    • Removed the old ndexpr, analytics and execution engine code. There is work underway in the execution engine branch to replace these features.

    Enhancements

    • Support for third party drivers, for custom data storage and custom index implementations

    • The correct way to get an Index connection in code is to use datacube.index.index_connect().

    • Changes in ingestion configuration

      • Must now specify the Data Write Plug-ins to use. For s3 ingestion there was a top level container specified, which has been renamed and moved under storage. The entire storage section is passed through to the Data Write Plug-ins, so drivers requiring other configuration can include them here. eg:

        ...
        storage:
          ...
          driver: s3aio
          bucket: my_s3_bucket
        ...
        
    • Added a Dockerfile to enable automated builds for a reference Docker image.

    • Multiple environments can now be specified in one datacube config. See PR 298 and the Runtime Config

      • Allow specifying which index_driver should be used for an environment.
    • Command line tools can now output CSV or YAML. (Issue 206, PR 390)

    • Support for saving data to NetCDF using a Lambert Conformal Conic Projection (PR 329)

    • Lots of documentation updates:

      • Information about Bit Masking.
      • A description of how data is loaded.
      • Some higher level architecture documentation.
      • Updates on how to index new data.

    Bug Fixes

    • Allow creation of datacube.utils.geometry.Geometry objects from 3d representations. The Z axis is simply thrown away.
    • The datacube --config_file option has been renamed to datacube --config, which is shorter and more consistent with the other options. The old name can still be used for now.
    • Fix a severe performance regression when extracting and reprojecting a small region of data. (PR 393)
    • Fix for a somewhat rare bug causing read failures by attempt to read data from a negative index into a file. (PR 376)
    • Make CRS equality comparisons a little bit looser. Trust either a Proj.4 based comparison or a GDAL based comparison. (Closed issue 243)

    New Data Support

    • Added example prepare script for Collection 1 USGS data; improved band handling and downloads.
    • Add a product specification and prepare script for indexing Landsat L2 Surface Reflectance Data (PR 375)
    • Add a product specification for Sentinel 2 ARD Data (PR 342)
  • datacube-1.5.5 (Jan 18, 2018)

  • datacube-1.5.4 (Dec 14, 2017)

    • Minor features backported from 2.0:

      • Support for limit in searches

      • Alternative lazy search method find_lazy

    • Fixes:

      • Improve native field descriptions

      • Connection should not be held open between multi-product searches

      • Disable prefetch for celery workers

      • Support jsonify-ing decimals

  • datacube-1.5.3 (Oct 16, 2017)

    • Use cloudpickle as the celery serialiser

    • Allow celery tests to run without installing it

    • Move datacube-worker inside the main datacube package

    • Write metadata_type from the ingest configuration if available

    • Support config parsing limitations of Python 2

    • Fix #303: resolve GDAL build dependencies on Travis

    • Upgrade rasterio to newer version

  • datacube-1.5.2 (Sep 12, 2017)

    New Features

    • Support for AWS S3 array storage
    • Driver Manager support for NetCDF, S3, S3-file drivers.

    Usability Improvements

    • When datacube dataset add is unable to add a Dataset to the index, print out the entire Dataset to make it easier to debug the problem.
    • Give datacube system check prettier and more readable output.
    • Make celery and redis optional when installing.
    • Significantly reduced disk space usage for integration tests
    • Dataset objects now have an is_active field to mirror is_archived.
    • Added index.datasets.get_archived_location_times() to see when each location was archived.

    Bug Fixes

    • Fix a bug when reading data in the native projection but outside the source area, often hit when running datacube-stats
    • Fix error loading and fusing data using dask. (Fixes #276)
    • When reading data, implement skip_broken_datasets for the dask case too (a sketch follows this list)
    • Fix bug #261: unable to load Australian Rainfall Grid Data. This was a result of the CRS/Transformation override functionality being broken when using the latest rasterio version 1.0a9
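    A hedged sketch of the skip_broken_datasets behaviour with a dask-backed load (the product name and extent are illustrative, not from the release notes):

      # Sketch: lazy dask-backed load that skips unreadable source files.
      import datacube

      dc = datacube.Datacube(app='robust-load-example')
      data = dc.load(
          product='ls8_nbar_albers',      # placeholder product name
          x=(148.0, 148.2), y=(-35.3, -35.1),
          time=('2017-01', '2017-03'),
          dask_chunks={'time': 1},        # return lazy dask arrays
          skip_broken_datasets=True,      # ignore datasets whose files fail to read
      )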
  • datacube-1.4.1 (May 25, 2017)

    • Support for reading multiband HDF datasets, such as MODIS collection 6

    • Workaround for rasterio issue when reprojecting stacked data

    • Bug fixes for command line arg handling

  • datacube-1.4.0 (May 17, 2017)

    • Adds more convenient year/date range search expressions (see #226; a sketch follows this list)
    • Adds a simple replication utility (see #223)
    • Fixed issue reading products without embedded CRS info, such as bom_rainfall_grid (see #224)
    • Fixed issues with stacking and ncml creation for NetCDF files
    • Various documentation and bug fixes
    • Added CircleCI as a continuous build system, for previewing generated documentation on pull requests
    • Require xarray >= 0.9. Solves common problems caused by losing embedded flag_def and crs attributes.
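    A hedged sketch of the more convenient year-range search expressions (the product name and extent are illustrative, not from the release notes):

      # Sketch: query a whole-year range instead of full timestamps.
      import datacube

      dc = datacube.Datacube(app='year-range-example')
      data = dc.load(
          product='ls8_nbar_albers',      # placeholder product name
          x=(148.0, 148.2), y=(-35.3, -35.1),
          time=('2000', '2005'),          # inclusive year range
      )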
  • datacube-1.3.2 (Apr 20, 2017)

    • Docs now refer to "Open Data Cube".
    • Docs describe how to use conda to install datacube.
    • Bug fixes for the stacking process.
    • Minor model changes:
      • Support for remote (non-file) locations from dataset objects: see #219
      • Consistency improvements to the dataset properties: see #217
    • Various other bug fixes and document updates.
  • datacube-1.3.0 (Mar 28, 2017)

    This is the first release of datacube-core as part of the Open Data Cube community.

    • Updated the Postgres product views to include the whole dataset metadata document.

    • datacube system init now recreates the product views by default every time it is run, and now supports Postgres 9.6.

    • URI searches are now better supported from the command line: datacube dataset search uri = file:///some/uri/here

    • datacube user now supports a user description (via --description) when creating a user, and delete accepts multiple user arguments.

    • Platform-specific (Landsat) fields have been removed from the default eo metadata type in order to keep it minimal. Users & products can still add their own metadata types to use additional fields.

    • Dataset locations can now be archived, not just deleted.

    This release enforces that the URI index changes are applied: it will prompt you to rerun init as an administrator to update your existing cubes: datacube -v system init (this command can be run without affecting read-only users, but will briefly pause writes)

  • datacube-1.2.0 (Feb 15, 2017)

    • Implemented improvements to the dataset search and info CLI outputs
    • Can now specify a range of years to process to the ingest CLI (e.g. 2000-2005)
    • Fixed the metadata_type update CLI not creating indexes (running system init will create missing ones)
    • Enable indexing of datacube-generated NetCDF files, making it much easier to pull selected data into a private datacube index. Use by running datacube dataset add selected_netcdf.nc.
    • Switch versioning scheme to increment the second digit instead of the third.