- Ensure you have at least Python 3.14 installed; Python 3.13, 3.12, 3.11, and 3.10 are optional for multi-environment tests.

  This repo uses tox and by default tries to run tests against all supported versions. If you have only a subset of the supported Python interpreters installed, see the [Run tests](#Run tests) section for how to limit tests to your subset.
- Create your fork of the `gooddata-python-sdk` repository.
- Clone and set up the environment:

  ```shell
  git clone git@github.com:<your_user>/gooddata-python-sdk.git
  cd gooddata-python-sdk
  git remote add upstream git@github.com:gooddata/gooddata-python-sdk.git
  make dev
  # activate venv
  source .venv/bin/activate
  ```
The `make dev` command creates a new Python 3.14 virtual environment in the `.venv` directory, installs all third-party dependencies into it, and sets up git hooks.

Additionally, if you use direnv you can run `direnv allow .envrc` to enable automatic activation of the virtual environment previously created in `.venv`. If `direnv` is not your cup of tea, you may want to adopt the PYTHONPATH exports that are done as part of the script so that you can run custom Python code using the packages contained herein without installing them.

To make sure you have successfully set up your environment, run `make test` in the virtualenv in the root of the git repo. Please note that `make test` executes tests against all the supported Python versions. If you need to run only a subset of them, see the [Run tests](#Run tests) section.
- Develop the feature or the fix. Make sure your code follows the coding conventions. Create a pull request.
When adding a new distributable package to this monorepo, update release automation and PyPI configuration together:
- Add the package name to `COMPONENTS` in:
  - `.github/workflows/dev-release.yaml`
  - `.github/workflows/build-release.yaml`
- Verify the package is built by the release workflows and that artifacts are uploaded from its `dist/` directory.
- Configure the package on PyPI to use Trusted Publisher for this repository/workflow combination.
- Run/observe a release workflow and confirm publishing succeeds via OIDC (no `PYPI_API_TOKEN` required).
When adding support for a new Python version:
- Update all `pyproject.toml` files to include the new Python version classifier
- Update all `tox.ini` files to include the new Python version in `envlist`
- Update CI/CD workflows to test against the new Python version
- Run `uv lock --upgrade` to upgrade the lock file and re-resolve dependencies for all Python versions, including the newly added one
The lock file upgrade is crucial as it allows uv to pick package versions that are compatible with all supported Python versions.
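As an illustrative sketch only (the exact file contents differ per package, and `3.15` here is a hypothetical future version), the `pyproject.toml` and `tox.ini` changes typically look like:

```diff
 # pyproject.toml
 classifiers = [
     "Programming Language :: Python :: 3.13",
     "Programming Language :: Python :: 3.14",
+    "Programming Language :: Python :: 3.15",
 ]

 # tox.ini
 [tox]
-envlist = py314, py313, py312, py311, py310
+envlist = py315, py314, py313, py312, py311, py310
```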
This project uses ruff to ensure basic code sanity and for no-nonsense, consistent formatting.
ruff is part of the pre-commit hook that is automatically set up during make dev.
You can also run the lint and formatter manually:
- To run `ruff`, run: `make format-fix`
NOTE: If the pre-commit hook finds and auto-corrects some formatting errors, it will not auto-stage the updated files and will fail the commit operation. You have to re-drive the commit. This is a well-known and unlikely-to-change behavior of the pre-commit package that this repository uses to manage hooks.
The project documents code with docstrings in the Google style.
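For reference, a minimal Google-style docstring looks like this (the function and its data are made up for illustration, not part of the SDK):

```python
def list_permissions(workspace_id: str, limit: int = 10) -> list[str]:
    """List permission names for a workspace.

    Args:
        workspace_id: Identifier of the workspace to inspect.
        limit: Maximum number of permission names to return.

    Returns:
        Permission names, at most ``limit`` items.

    Raises:
        ValueError: If ``limit`` is not positive.
    """
    if limit <= 0:
        raise ValueError("limit must be positive")
    # Illustrative static data; a real implementation would call the API.
    permissions = ["MANAGE", "ANALYZE", "VIEW"]
    return permissions[:limit]
```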
The project documentation is built with Hugo. To contribute:

- Install Hugo
- Run `make new-docs`
- Open http://localhost:1313/latest/ in your browser to see the preview.
The documentation is deployed using manually triggered GitHub workflows.
Make one logical change per commit.
To document a new feature, create a new .md file in one of the subsections. These subsections represent the left navigation menu and are organized in hierarchical directories.
e.g.:

```
administration
├── organization
│   ├── create_or_update_jwk.md
│   ├── delete_jwk.md
│   └── ...
├── permissions
│   ├── list_available_assignees.md
│   ├── get_declarative_organization_permissions.md
│   └── ...
└── ...
```
Note that you not only need to add the new `.md` file but also edit the existing `_index.md` in the same directory to add a link to your new method.
Example:
Imagine you just created a new method named `super_permissions_method` and would like to document it.
Steps:
- You make sure to properly document your new method in the docstrings.
- You create a new file `super_permissions_method.md` in `docs/content/en/docs/administration/permissions/` with this content:

  ````markdown
  ---
  title: "super_permissions_method"
  linkTitle: "super_permissions_method"
  superheading: "catalog_permission."
  weight: 100
  ---

  {{< python "sdk.CatalogPermissionService.super_permissions_method" >}}

  ## Example

  ```python
  # This method does something very cool, trust me bro
  sdk.super_permissions_method(user_id = "demo_user", give_all_permission = True)
  ```
  ````

  The `{{< python "PATH" >}}` is a Hugo shortcode that renders the information about the method directly from the docstrings. The `PATH` parameter is a path in the API reference, with a dot `.` denoting each step. So in our example, this is the `sdk` package with the permissions service (`CatalogPermissionService`), followed by the actual method (`super_permissions_method`).
- You update the `_index.md` in the same folder:

  ```diff
   * [update_name](./update_name/)
   * [create_or_update_jwk](./create_or_update_jwk/)
   * [delete_jwk](./delete_jwk/)
   * [get_jwk](./get_jwk/)
   * [list_jwks](./list_jwks/)
  +* [super_permissions_method](./super_permissions_method/)
  ```
- Lastly, contact someone from the documentation team for a proofread and merge. Your changes should be visible in the preview in the PR job named `Netlify Deploy Preview`.
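Conceptually, the shortcode resolves the dotted `PATH` one attribute at a time and renders the docstring it finds. The sketch below mimics that lookup with stand-in classes (none of these names come from the real documentation renderer):

```python
from functools import reduce


class CatalogPermissionService:
    """Stand-in service class used only for this illustration."""

    def super_permissions_method(self):
        """Docstring that the shortcode would render."""


class sdk:
    """Stand-in for the top-level ``sdk`` package."""

    CatalogPermissionService = CatalogPermissionService


def resolve(root, dotted_path):
    # "a.b.c" -> getattr(getattr(root, "a"), "b") ... one step per dot
    return reduce(getattr, dotted_path.split("."), root)


method = resolve(sdk, "CatalogPermissionService.super_permissions_method")
print(method.__doc__)  # the docstring the docs page would show
```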
Tests use the tox and pytest libraries. Each project has its own `tox.ini`.
NOTE: Tests are not executed for OpenAPI client projects.
Here are the options for running the tests:
- run tests for one sub-project:
  - drill down to the sub-project's directory and use `make test` to trigger the tests:

    ```shell
    cd packages/gooddata-sdk
    make test
    ```
  - or execute the `tox` command with arguments of your choice:

    ```shell
    cd packages/gooddata-sdk
    tox -e py310
    ```
- run tests for all non-client projects using `make test` in the project root directory
Tests triggered by make can be controlled via these environment variables:
- `RECREATE_ENVS` - set the environment variable `RECREATE_ENVS` to 1 and make will add the `--recreate` flag; the `--recreate` flag is not used otherwise.

  ```shell
  RECREATE_ENVS=1 make test
  ```
- `TEST_ENVS` - define tox test environments (targets) as a comma-separated list; by default all tox default targets are executed.

  ```shell
  TEST_ENVS=py311,py310 make test
  ```
- `ADD_ARGS` - send additional arguments to the pytest tool, useful for pin-pointing just a part of the tests.

  ```shell
  ADD_ARGS="-k http_headers" make test
  ```
Some tests include HTTP call(s) to a GoodData instance. Those tests are executed through vcrpy, so a GoodData instance is needed only the first time or when a request changes. This has clear benefits:
- ability to run the tests without a GoodData instance
- request and response snapshot - it makes debugging of HTTP calls simple
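The record/replay idea behind these tests can be sketched with plain stdlib code. This is an illustrative stand-in for what vcrpy does, not its actual API or cassette format:

```python
import json
import tempfile
from pathlib import Path


def fetch_with_cassette(cassette: Path, url: str, do_request):
    """Replay a recorded response if the cassette exists; otherwise
    perform the real request and record it for future runs."""
    if cassette.exists():
        # Replay: no live GoodData instance needed.
        return json.loads(cassette.read_text())["response"]
    # Record: a live instance is needed only this first time.
    response = do_request(url)
    cassette.write_text(json.dumps({"url": url, "response": response}))
    return response


# Demo with a fake transport standing in for a real HTTP client:
calls = []

def fake_request(url):
    calls.append(url)
    return {"status": 200}

cassette = Path(tempfile.mkdtemp()) / "my_test.json"
first = fetch_with_cassette(cassette, "http://localhost:3000/api", fake_request)
second = fetch_with_cassette(cassette, "http://localhost:3000/api", fake_request)
print(len(calls))  # 1 -- the second call was replayed from the cassette
```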
But there is one disadvantage: to change the tests, you need a GoodData instance with the original setup. The docker-compose.yaml in the root of the repository is here to help.
- AWS ECR Login - the docker-compose uses ECR images:

  ```shell
  aws ecr get-login-password | docker login --username AWS --password-stdin 020413372491.dkr.ecr.us-east-1.amazonaws.com
  ```
- GoodData License Key - get it from the GoodData team and place it in the `./build/license` file:

  ```shell
  mkdir -p build
  echo "<your_license_key>" > build/license
  ```

  The auth-service reads the license from this mounted location.
The docker-compose starts a full GoodData microservices stack:
Infrastructure services:
- PostgreSQL (with demo databases: `md`, `dex`, `automation`, `gw`, `tiger`)
- Redis (caching)
- Apache Pulsar (messaging)
- Traefik (routing)
- Dex (OIDC authentication)
Core GoodData services:
- metadata-api, auth-service, calcique, sql-executor, result-cache
- afm-exec-api, scan-model, api-gateway, api-gw
- automation, export-controller, tabular-exporter
- quiver (data processing engine)
Bootstrap services (run once):
- `metadata-organization-bootstrap` - creates organization + admin user
- `data-loader` - loads demo data into PostgreSQL (with `--no-schema-versioning`)
- `create-ds` - registers data sources in metadata-api
- `layout-uploader` - uploads workspace hierarchy, analytics model, users, permissions
The data-loader uses the `--no-schema-versioning` flag to ensure:
- Schema names are consistent (e.g., `demo`, not `demo_abc123`)
- Fixture names don't have hash suffixes
- VCR cassette tests produce reproducible results
```shell
# Start all services
docker compose up -d

# Wait for bootstrap to complete (watch for "Layout upload completed successfully!")
docker compose logs -f metadata-organization-bootstrap data-loader create-ds layout-uploader

# Check service status
docker compose ps

# The GoodData API is available at http://localhost:3000
# Default credentials: demo@example.com / demo123
# API token: YWRtaW46Ym9vdHN0cmFwOmFkbWluMTIz
```

When a vcrpy-supported test needs to be updated:
- Start GoodData using the above docker-compose.yaml
- Wait for all bootstrap services to complete
- Delete the original vcrpy cassette with `make remove-cassettes`
- Execute the test
- Commit the newly generated cassette to git
```shell
# Stop all services
docker compose down

# Full cleanup (remove volumes - required for fresh start)
docker compose down -v
```

The FDW (Foreign Data Wrapper) tests require an additional service. Start it with:

```shell
docker compose --profile fdw up -d
```

This starts a PostgreSQL instance with the gooddata-fdw extension on port 2543.
Tests in a pull request (PR) are executed using docker. The following is done to make the test environment as close to reproducible as possible:
- each supported Python version has a defined Python base docker image
- the tox version installed into docker is frozen to a specific version
- all test dependencies specified in test-requirements.txt should be limited to some version range
These rules make it possible to execute tests on localhost in the same or a very similar environment as the one used in a PR.
Orchestration is driven by `make test-ci`. The `test-ci` target supports the same features as `make test`; see [Run tests](#Run tests) for details.
NOTE: docker tox tests and localhost tox tests use the same `.tox` directory. Virtual environments for the two test types are most likely incompatible due to different base Python versions. tox is able to recognize this and recreate the venvs automatically, so when docker tox tests are executed after localhost tests, or vice versa, the environments are recreated.
- run all tests for all supported Python environments

  ```shell
  make test-ci
  ```
- run all tests for all supported Python environments for one project

  ```shell
  cd packages/gooddata-sdk
  make test-ci
  ```
- run all tests containing `http_headers` in their name for py311 and py310 for all projects

  ```shell
  TEST_ENVS=py311,py310 ADD_ARGS="-k http_headers" make test-ci
  ```
- run tests on localhost against microservices started with docker-compose

  ```shell
  RECREATE_ENVS=1 HOST_NETWORK=1 make test-ci
  ```
Refer to our OpenAPI client README
There are several kinds of fixtures used in the tests. Keep this in mind when you're making changes or updating cassettes, as it can surprise you, especially if you want to add new attributes to be used across several tests.
`packages/tests-support/fixtures` is used as the default layout for tests and is uploaded by docker compose. It is also uploaded by the `upload_demo_layout.py` script.
These are the common places for fixtures used in the GoodData SDK:
- `packages/gooddata-sdk/tests/catalog/refresh`: this layout actually replaces the layout after some kinds of catalog tests, and hence overrides the layout from docker compose. You also have to make changes here if you want to change the default layout. It is a current TODO to merge this and `packages/tests-support/fixtures` into one layout.
- `packages/gooddata-sdk/tests/catalog/expected`: fixtures that various catalog tests compare against.
- `packages/gooddata-sdk/tests/catalog/store`, `packages/gooddata-sdk/tests/catalog/load`, `packages/gooddata-sdk/tests/catalog/load_with_locale`, ...: numerous fixtures used for load and put tests. You often need to change several of them.
  - Note that some of these contain `workspace` and `workspace_content` subfolders, depending on where the fixtures are loaded.