keboola / kbc-manage-api-php-client
Keboola Management API Client
Requires
- php: ^7.4|^8.0
- guzzlehttp/guzzle: ^7.0|^6.1
Requires (Dev)
- keboola/coding-standard: ^15.0
- keboola/phpunit-retry-annotations: ^0.3.0
- keboola/storage-api-client: ^12.0
- phpstan/phpdoc-parser: ^1.24.5
- phpstan/phpstan: ^1.10
- phpstan/phpstan-phpunit: ^1
- phpunit/phpunit: ^7.0|^8.0
Versions
- dev-master
- v7.1.1
- 7.1.0
- 7.0.0
- 6.1.0
- 6.0.0
- 5.2.0
- 5.1.0
- 5.0.0
- 4.0.0
- 3.4.0
- 3.3.1
- 3.3.0
- 3.2.1
- 3.2.0
- 3.1.0
- 3.0.0
- 2.4.0
- 2.3.1
- 2.3.0
- 2.2.0
- 2.1.0
- 2.0.0
- 1.14.0
- 1.13.0
- 1.12.0
- 1.11.0
- 1.10.0
- 1.9.0
- 1.8.0
- 1.7.0
- 1.6.0
- 1.5.0
- 1.4.0
- 1.3.0
- 1.2.0
- 1.1.0
- 1.0.0
- 0.0.2
- 0.0.1
README
Simple PHP wrapper library for Keboola Management REST API
Installation
The library is available as a Composer package. To start using Composer in your project, follow these steps:
Install composer
curl -s http://getcomposer.org/installer | php
mv ./composer.phar ~/bin/composer # or /usr/local/bin/composer
Create a composer.json file in your project root folder:
{ "require": { "php" : ">=5.4.0", "keboola/kbc-manage-api-php-client": "~0.0" } }
Install the package:
composer install
Add the autoloader to your bootstrap script:
require 'vendor/autoload.php';
Read more in Composer documentation
Usage examples
require 'vendor/autoload.php';

use Keboola\ManageApi\Client;

$client = new Client([
    'token' => getenv('MY_MANAGE_TOKEN'),
    'url' => 'https://connection.keboola.com',
]);

$project = $client->getProject(234);
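API-level failures (invalid token, nonexistent project, insufficient privileges) surface as exceptions, so it is usually worth wrapping calls in a try/catch. A minimal sketch, assuming the client throws Keboola\ManageApi\ClientException on API errors (verify the exception class name against the version you have installed):

require 'vendor/autoload.php';

use Keboola\ManageApi\Client;
use Keboola\ManageApi\ClientException; // assumed exception class; check your installed version

$client = new Client([
    'token' => getenv('MY_MANAGE_TOKEN'),
    'url' => 'https://connection.keboola.com',
]);

try {
    $project = $client->getProject(234);
    echo $project['name'] . PHP_EOL; // assumes the response array contains a "name" key
} catch (ClientException $e) {
    echo 'Request failed: ' . $e->getMessage() . PHP_EOL;
}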
Tests
The main purpose of these tests is "black box" test-driven development of Keboola Connection. The tests guard the API implementation. Run them only against non-production environments.
The tests require valid Keboola Management API tokens and the endpoint URL of the API test environment.
Note: Failing tests are retried up to three times by default (intended for automated runs). For local development this can be annoying, so you can disable retries by creating a new phpunit-retry.xml file from phpunit-retry.xml.dist.
Note: The test environment should be running a cronjob for token-expirator, otherwise the testTemporaryAccess test will fail.
Create a .env file with the following environment variables:
# REQUIRED - must be filled before running any test
KBC_MANAGE_API_URL=https://connection.keboola.com # URL where Keboola Connection is running
KBC_MANAGE_API_TOKEN=your_token # manage api token assigned to a user **with** superadmin privileges. Can be created in Account Settings under Personal Access Tokens. The user must have Multi-Factor Authentication disabled.
KBC_SUPER_API_TOKEN=your_token # can be created in manage-apps on the Tokens tab
KBC_MANAGE_API_SUPER_TOKEN_WITH_PROJECTS_READ_SCOPE=super_token_with_projects_read_scope # can be created in manage-apps on the Tokens tab. Token must have the "projects:read" scope
KBC_MANAGE_API_SUPER_TOKEN_WITHOUT_SCOPES=super_token_without_scopes
KBC_MANAGE_API_SUPER_TOKEN_WITH_DELETED_PROJECTS_READ_SCOPE=super_token_with_deleted_projects_read_scope # can be created in manage-apps on the Tokens tab. Token must have the "deleted-projects:read" scope
KBC_MANAGE_API_SUPER_TOKEN_WITH_UI_MANAGE_SCOPE=super_token_with_ui_manage_scope # can be created in manage-apps on the Tokens tab. Token must have the "connection:ui-manage" scope
KBC_MANAGE_API_SUPER_TOKEN_WITH_STORAGE_TOKENS_SCOPE=super_token_with_storage_tokens_scope # can be created in manage-apps on the Tokens tab. Token must have the "manage:storage-tokens" scope
KBC_TEST_MAINTAINER_ID=id # id of a maintainer. Please create a new maintainer dedicated to the test suite. All of the maintainer's organizations and projects are purged before the tests!
KBC_TEST_ADMIN_EMAIL=email_of_another_admin_having_mfa_disabled # email address of another user without any organizations
KBC_TEST_ADMIN_TOKEN=token_of_another_admin_having_mfa_disabled # also a Personal Access Token of a user **without** superadmin privileges, for a different user than the owner of KBC_MANAGE_API_TOKEN. The user must have Multi-Factor Authentication disabled.
KBC_TEST_ADMIN_WITH_MFA_EMAIL=email_of_another_admin_having_mfa_enabled # email address of another user without any organizations who has Multi-Factor Authentication enabled
KBC_TEST_ADMIN_WITH_MFA_TOKEN=token_of_another_admin_having_mfa_enabled # also a Personal Access Token of a user **without** superadmin privileges, for a different user than the owners of KBC_MANAGE_API_TOKEN and KBC_TEST_ADMIN_TOKEN

# OPTIONAL - required only for running testCreateStorageBackend; you need a new Snowflake backend and its credentials in the following variables
KBC_TEST_SNOWFLAKE_BACKEND_NAME=
KBC_TEST_SNOWFLAKE_BACKEND_PASSWORD=
KBC_TEST_SNOWFLAKE_HOST=
KBC_TEST_SNOWFLAKE_WAREHOUSE=
KBC_TEST_SNOWFLAKE_BACKEND_REGION=
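Before running the whole suite, it can help to sanity-check that the required variables are exported and that a client can be built from them. A minimal, illustrative sketch (not part of the test suite; it only uses the constructor options shown in the usage example above):

require 'vendor/autoload.php';

use Keboola\ManageApi\Client;

// fail fast if a required variable is missing or empty
foreach (['KBC_MANAGE_API_URL', 'KBC_MANAGE_API_TOKEN', 'KBC_TEST_MAINTAINER_ID'] as $name) {
    if (getenv($name) === false || getenv($name) === '') {
        throw new RuntimeException(sprintf('Missing required environment variable "%s"', $name));
    }
}

// build the client from the .env values (token + endpoint URL)
$client = new Client([
    'token' => getenv('KBC_MANAGE_API_TOKEN'),
    'url' => getenv('KBC_MANAGE_API_URL'),
]);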
Run tests
docker-compose run --rm dev composer tests
File Storage tests
Setup cloud resources for File Storage tests
Prerequisites:
- configured and logged-in az, aws and gcloud CLI tools:
az login
aws sso login --profile=<your_profile>
gcloud auth application-default login
- installed terraform (https://www.terraform.io) and jq (https://stedolan.github.io/jq) to set up the local env
# create terraform.tfvars file from terraform.tfvars.dist
cp ./provisioning/terraform.tfvars.dist ./provisioning/terraform.tfvars

# set terraform variables
name_prefix = "<your_nick>" # your name/nickname to make your resource unique & recognizable, allowed characters are [a-zA-Z0-9-]
gcp_storage_location = "<your_region>" # region of GCP resources
gcp_project_id = "<your_project_id>" # GCP project id
gcp_project_region = "<your_region>" # region of GCP project
azure_storage_location = "<your_region>" # region of Azure resources
azure_tenant_id = "<your_tenant_id>" # Azure tenant id
azure_subscription_id = "<your_subscription_id>" # Azure subscription id
aws_profile = "<your_profile>" # your aws profile name
aws_region = "<your_region>" # region of AWS resources
aws_account = "<your_account_id>" # your aws account id

# Initialize terraform
terraform -chdir=./provisioning init

# Create resources
terraform -chdir=./provisioning apply

# For destroying resources run
terraform -chdir=./provisioning apply -destroy

# Setup terraform variables to .env file (will be prepended to .env file)
# For Azure
./provisioning/update-env.sh azure
# For AWS
./provisioning/update-env.sh aws
# For GCP
./provisioning/update-env.sh gcp
Required variables for File Storage tests
These variables are used for testing file storage. Copy their values from the Azure portal, the AWS console, and the GCP console.
- TEST_ABS_ACCOUNT_KEY - First secret key for the Azure Storage account
- TEST_ABS_ACCOUNT_NAME - Name of the Azure Storage account
- TEST_ABS_CONTAINER_NAME - Name of the container created inside the Azure Storage account
- TEST_ABS_REGION - Name of the region where the Azure Storage account is located (Note: the AWS region list is used)
- TEST_ABS_ROTATE_ACCOUNT_KEY - Second secret key for the Azure Storage account
- TEST_S3_ROTATE_KEY - Second AWS key
- TEST_S3_ROTATE_SECRET - Second AWS secret
- TEST_S3_FILES_BUCKET - Name of the file bucket on S3
- TEST_S3_KEY - First AWS key
- TEST_S3_REGION - Region where your S3 bucket is located
- TEST_S3_SECRET - First AWS secret
- TEST_GCS_KEYFILE_JSON - First GCS key file contents as a JSON string
- TEST_GCS_KEYFILE_ROTATE_JSON - Second GCS key file contents as a JSON string, used for testing rotation
- TEST_GCS_FILES_BUCKET - Name of the file bucket on GCS
- TEST_GCS_REGION - Region where GCS is located
Variables prefixed with ROTATE are used for credential rotation and MUST be working credentials.
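Because the ROTATE values must be real credentials and the GCS values must be key file contents, a quick local check can catch obviously broken values before running the suite. An illustrative PHP sketch (not part of the test suite):

// each primary credential must have a working ROTATE counterpart
$pairs = [
    ['TEST_ABS_ACCOUNT_KEY', 'TEST_ABS_ROTATE_ACCOUNT_KEY'],
    ['TEST_S3_KEY', 'TEST_S3_ROTATE_KEY'],
    ['TEST_GCS_KEYFILE_JSON', 'TEST_GCS_KEYFILE_ROTATE_JSON'],
];
foreach ($pairs as [$primary, $rotate]) {
    foreach ([$primary, $rotate] as $name) {
        if (getenv($name) === false || getenv($name) === '') {
            throw new RuntimeException(sprintf('Missing file storage variable "%s"', $name));
        }
    }
}

// the GCS variables must hold the key file contents as a JSON string
foreach (['TEST_GCS_KEYFILE_JSON', 'TEST_GCS_KEYFILE_ROTATE_JSON'] as $name) {
    if (json_decode(getenv($name), true) === null) {
        throw new RuntimeException(sprintf('"%s" does not contain valid JSON', $name));
    }
}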
Run File Storage tests
docker-compose run --rm dev composer tests-file-storage
Build OpenAPI document
Currently, the API is mainly documented in the apiary.apib file, but we want to move to the OpenAPI format. The following commands translate apiary.apib into OpenAPI format and store it in openapi.yml, which you can then commit. We should eventually run this in CI.
You need to install the apib2swagger tool:
$ npm install -g apib2swagger
Then run the following commands:
$ cat apiary.apib | grep -v "X-KBC-ManageApiToken:" | apib2swagger -o openapi.yml -y --open-api-3 --info-title="Manage API"
$ php AdjustOpenApi.php
License
MIT licensed, see LICENSE file.