konfig / carbon-php-sdk
Connect external data to LLMs, no matter the source.
Requires
- php: ^8.0
- ext-curl: *
- ext-json: *
- ext-mbstring: *
- guzzlehttp/guzzle: ^7.3
- guzzlehttp/psr7: ^1.7 || ^2.0
Requires (Dev)
- friendsofphp/php-cs-fixer: ^3.5
- phpunit/phpunit: ^8.0 || ^9.0
This package is auto-updated.
Last update: 2024-11-19 00:22:59 UTC
README
Carbon
Connect external data to LLMs, no matter the source.
Table of Contents
- Installation & Usage
- Getting Started
- Reference
carbon.auth.getAccessToken
carbon.auth.getWhiteLabeling
carbon.cRM.getAccount
carbon.cRM.getAccounts
carbon.cRM.getContact
carbon.cRM.getContacts
carbon.cRM.getLead
carbon.cRM.getLeads
carbon.cRM.getOpportunities
carbon.cRM.getOpportunity
carbon.dataSources.addTags
carbon.dataSources.query
carbon.dataSources.queryUserDataSources
carbon.dataSources.removeTags
carbon.dataSources.revokeAccessToken
carbon.embeddings.all
carbon.embeddings.getDocuments
carbon.embeddings.getEmbeddingsAndChunks
carbon.embeddings.uploadChunksAndEmbeddings
carbon.files.createUserFileTags
carbon.files.delete
carbon.files.deleteFileTags
carbon.files.deleteMany
carbon.files.deleteV2
carbon.files.getParsedFile
carbon.files.getRawFile
carbon.files.modifyColdStorageParameters
carbon.files.moveToHotStorage
carbon.files.queryUserFiles
carbon.files.queryUserFilesDeprecated
carbon.files.resync
carbon.files.upload
carbon.files.uploadFromUrl
carbon.files.uploadText
carbon.github.getIssue
carbon.github.getIssues
carbon.github.getPr
carbon.github.getPrComments
carbon.github.getPrCommits
carbon.github.getPrFiles
carbon.github.getPullRequests
carbon.integrations.cancel
carbon.integrations.connectDataSource
carbon.integrations.connectDocument360
carbon.integrations.connectFreshdesk
carbon.integrations.connectGitbook
carbon.integrations.connectGuru
carbon.integrations.createAwsIamUser
carbon.integrations.getOauthUrl
carbon.integrations.listConfluencePages
carbon.integrations.listConversations
carbon.integrations.listDataSourceItems
carbon.integrations.listFolders
carbon.integrations.listGitbookSpaces
carbon.integrations.listLabels
carbon.integrations.listOutlookCategories
carbon.integrations.listRepos
carbon.integrations.listSharepointSites
carbon.integrations.syncAzureBlobFiles
carbon.integrations.syncAzureBlobStorage
carbon.integrations.syncConfluence
carbon.integrations.syncDataSourceItems
carbon.integrations.syncFiles
carbon.integrations.syncGitHub
carbon.integrations.syncGitbook
carbon.integrations.syncGmail
carbon.integrations.syncOutlook
carbon.integrations.syncRepos
carbon.integrations.syncRssFeed
carbon.integrations.syncS3Files
carbon.integrations.syncSlack
carbon.organizations.get
carbon.organizations.update
carbon.organizations.updateStats
carbon.users.all
carbon.users.delete
carbon.users.get
carbon.users.toggleUserFeatures
carbon.users.updateUsers
carbon.users.whoAmI
carbon.utilities.fetchUrls
carbon.utilities.fetchWebpage
carbon.utilities.fetchYoutubeTranscripts
carbon.utilities.processSitemap
carbon.utilities.scrapeSitemap
carbon.utilities.scrapeWeb
carbon.utilities.searchUrls
carbon.utilities.userWebpages
carbon.webhooks.addUrl
carbon.webhooks.deleteUrl
carbon.webhooks.urls
carbon.whiteLabel.all
carbon.whiteLabel.create
carbon.whiteLabel.delete
carbon.whiteLabel.update
Installation & Usage
Requirements
This library requires PHP ^8.0
Composer
To install the bindings via Composer, add the following to `composer.json`:

```json
{
  "repositories": [
    {
      "type": "vcs",
      "url": "https://github.com/Carbon-for-Developers/carbon-php-sdk.git"
    }
  ],
  "require": {
    "konfig/carbon-php-sdk": "0.2.51"
  }
}
```

Then run `composer install`.
Manual Installation
Download the files and include `autoload.php`:

```php
<?php
require_once('/path/to/carbon-php-sdk/vendor/autoload.php');
```
Getting Started
Please follow the installation procedure and then run the following:

```php
<?php
require_once(__DIR__ . '/vendor/autoload.php');

// 1) Get an access token for a customer
$carbon = new \Carbon\Client(
    apiKey: "API_KEY",
    customerId: "CUSTOMER_ID",
);
$result = $carbon->auth->getAccessToken();

// 2) Use the access token to authenticate moving forward
$carbon = new \Carbon\Client(accessToken: $result->getAccessToken());

// Use the SDK as usual
$whiteLabeling = $carbon->auth->getWhiteLabeling();
// etc.
```
Reference
carbon.auth.getAccessToken
Get Access Token
🛠️ Usage

```php
$result = $carbon->auth->getAccessToken();
```

🔄 Return

🌐 Endpoint

`/auth/v1/access_token` `GET`

🔙 Back to Table of Contents
carbon.auth.getWhiteLabeling
Returns whether or not the organization is white labeled and which integrations are white labeled.

🛠️ Usage

```php
$result = $carbon->auth->getWhiteLabeling();
```

🔄 Return

🌐 Endpoint

`/auth/v1/white_labeling` `GET`

🔙 Back to Table of Contents
carbon.cRM.getAccount
Get Account
🛠️ Usage

```php
$result = $carbon->cRM->getAccount(
    id: "id_example",
    data_source_id: 1,
    include_remote_data: false,
    includes: [ "string_example" ]
);
```

⚙️ Parameters

id: string
data_source_id: int
include_remote_data: bool
includes: []

🔄 Return

🌐 Endpoint

`/integrations/data/crm/accounts/{id}` `GET`

🔙 Back to Table of Contents
carbon.cRM.getAccounts
Get Accounts
🛠️ Usage

```php
$result = $carbon->cRM->getAccounts(
    data_source_id: 1,
    include_remote_data: false,
    next_cursor: "string_example",
    page_size: 1,
    order_dir: "asc",
    includes: [],
    filters: [],
    order_by: "created_at"
);
```

⚙️ Parameters

data_source_id: int
include_remote_data: bool
next_cursor: string
page_size: int
order_dir:
includes: []
filters: AccountFilters
order_by:

🔄 Return

🌐 Endpoint

`/integrations/data/crm/accounts` `POST`

🔙 Back to Table of Contents
carbon.cRM.getContact
Get Contact
🛠️ Usage

```php
$result = $carbon->cRM->getContact(
    id: "id_example",
    data_source_id: 1,
    include_remote_data: false,
    includes: [ "string_example" ]
);
```

⚙️ Parameters

id: string
data_source_id: int
include_remote_data: bool
includes: []

🔄 Return

🌐 Endpoint

`/integrations/data/crm/contacts/{id}` `GET`

🔙 Back to Table of Contents
carbon.cRM.getContacts
Get Contacts
🛠️ Usage

```php
$result = $carbon->cRM->getContacts(
    data_source_id: 1,
    include_remote_data: false,
    next_cursor: "string_example",
    page_size: 1,
    order_dir: "asc",
    includes: [],
    filters: [],
    order_by: "created_at"
);
```

⚙️ Parameters

data_source_id: int
include_remote_data: bool
next_cursor: string
page_size: int
order_dir:
includes: []
filters: ContactFilters
order_by:

🔄 Return

🌐 Endpoint

`/integrations/data/crm/contacts` `POST`

🔙 Back to Table of Contents
carbon.cRM.getLead
Get Lead
🛠️ Usage

```php
$result = $carbon->cRM->getLead(
    id: "id_example",
    data_source_id: 1,
    include_remote_data: false,
    includes: [ "string_example" ]
);
```

⚙️ Parameters

id: string
data_source_id: int
include_remote_data: bool
includes: []

🔄 Return

🌐 Endpoint

`/integrations/data/crm/leads/{id}` `GET`

🔙 Back to Table of Contents
carbon.cRM.getLeads
Get Leads
🛠️ Usage

```php
$result = $carbon->cRM->getLeads(
    data_source_id: 1,
    include_remote_data: false,
    next_cursor: "string_example",
    page_size: 1,
    order_dir: "asc",
    includes: [],
    filters: [],
    order_by: "created_at"
);
```

⚙️ Parameters

data_source_id: int
include_remote_data: bool
next_cursor: string
page_size: int
order_dir:
includes: []
filters: LeadFilters
order_by:

🔄 Return

🌐 Endpoint

`/integrations/data/crm/leads` `POST`

🔙 Back to Table of Contents
carbon.cRM.getOpportunities
Get Opportunities
🛠️ Usage

```php
$result = $carbon->cRM->getOpportunities(
    data_source_id: 1,
    include_remote_data: false,
    next_cursor: "string_example",
    page_size: 1,
    order_dir: "asc",
    includes: [],
    filters: [
        "status" => "WON",
    ],
    order_by: "created_at"
);
```

⚙️ Parameters

data_source_id: int
include_remote_data: bool
next_cursor: string
page_size: int
order_dir:
includes: []
filters: OpportunityFilters
order_by:

🔄 Return

🌐 Endpoint

`/integrations/data/crm/opportunities` `POST`

🔙 Back to Table of Contents
carbon.cRM.getOpportunity
Get Opportunity
🛠️ Usage

```php
$result = $carbon->cRM->getOpportunity(
    id: "id_example",
    data_source_id: 1,
    include_remote_data: false,
    includes: [ "string_example" ]
);
```

⚙️ Parameters

id: string
data_source_id: int
include_remote_data: bool
includes: []

🔄 Return

🌐 Endpoint

`/integrations/data/crm/opportunities/{id}` `GET`

🔙 Back to Table of Contents
carbon.dataSources.addTags
Add Data Source Tags
🛠️ Usage

```php
$result = $carbon->dataSources->addTags(
    tags: [],
    data_source_id: 1
);
```

⚙️ Parameters

tags: object
data_source_id: int

🔄 Return

🌐 Endpoint

`/data_sources/tags/add` `POST`

🔙 Back to Table of Contents
carbon.dataSources.query
Data Sources
🛠️ Usage

```php
$result = $carbon->dataSources->query(
    pagination: [
        "limit" => 10,
        "offset" => 0,
        "starting_id" => 0,
    ],
    order_by: "created_at",
    order_dir: "desc",
    filters: [
        "source" => "GOOGLE_CLOUD_STORAGE",
    ]
);
```

⚙️ Parameters

pagination: Pagination
order_by:
order_dir:
filters: OrganizationUserDataSourceFilters

🔄 Return

OrganizationUserDataSourceResponse

🌐 Endpoint

`/data_sources` `POST`

🔙 Back to Table of Contents
carbon.dataSources.queryUserDataSources
User Data Sources
🛠️ Usage

```php
$result = $carbon->dataSources->queryUserDataSources(
    pagination: [
        "limit" => 10,
        "offset" => 0,
        "starting_id" => 0,
    ],
    order_by: "created_at",
    order_dir: "desc",
    filters: [
        "source" => "GOOGLE_CLOUD_STORAGE",
    ]
);
```

⚙️ Parameters

pagination: Pagination
order_by:
order_dir:
filters: OrganizationUserDataSourceFilters

🔄 Return

OrganizationUserDataSourceResponse

🌐 Endpoint

`/user_data_sources` `POST`

🔙 Back to Table of Contents
carbon.dataSources.removeTags
Remove Data Source Tags
🛠️ Usage

```php
$result = $carbon->dataSources->removeTags(
    data_source_id: 1,
    tags_to_remove: [],
    remove_all_tags: false
);
```

⚙️ Parameters

data_source_id: int
tags_to_remove: string[]
remove_all_tags: bool

🔄 Return

🌐 Endpoint

`/data_sources/tags/remove` `POST`

🔙 Back to Table of Contents
carbon.dataSources.revokeAccessToken
Revoke Access Token
🛠️ Usage

```php
$result = $carbon->dataSources->revokeAccessToken( data_source_id: 1 );
```

⚙️ Parameters

data_source_id: int

🔄 Return

🌐 Endpoint

`/revoke_access_token` `POST`

🔙 Back to Table of Contents
carbon.embeddings.all
Retrieve Embeddings And Content V2
🛠️ Usage

```php
$result = $carbon->embeddings->all(
    filters: [
        "include_all_children" => false,
        "non_synced_only" => false,
    ],
    pagination: [
        "limit" => 10,
        "offset" => 0,
        "starting_id" => 0,
    ],
    order_by: "created_at",
    order_dir: "desc",
    include_vectors: false
);
```

⚙️ Parameters

filters: OrganizationUserFilesToSyncFilters
pagination: Pagination
order_by:
order_dir:
include_vectors: bool

🔄 Return

🌐 Endpoint

`/list_chunks_and_embeddings` `POST`

🔙 Back to Table of Contents
carbon.embeddings.getDocuments
For pre-filtering documents, using `tags_v2` is preferred to using `tags` (which is now deprecated). If both `tags_v2` and `tags` are specified, `tags` is ignored. `tags_v2` enables building complex filters through the use of "AND", "OR", and negation logic. Take the below input as an example:

```json
{
  "OR": [
    { "key": "subject", "value": "holy-bible", "negate": false },
    { "key": "person-of-interest", "value": "jesus christ", "negate": false },
    { "key": "genre", "value": "religion", "negate": true },
    {
      "AND": [
        { "key": "subject", "value": "tao-te-ching", "negate": false },
        { "key": "author", "value": "lao-tzu", "negate": false }
      ]
    }
  ]
}
```

In this case, files will be filtered such that:

- "subject" = "holy-bible" OR
- "person-of-interest" = "jesus christ" OR
- "genre" != "religion" OR
- "subject" = "tao-te-ching" AND "author" = "lao-tzu"

Note that the top level of the query must be either an "OR" or "AND" array. Currently, nesting is limited to 3. For tag blocks (those with "key", "value", and "negate" keys), the following typing rules apply:

- "key" isn't optional and must be a `string`
- "value" isn't optional and can be `any` or `list[any]`
- "negate" is optional and must be `true` or `false`. If present and `true`, then the filter block is negated in the resulting query. It is `false` by default.
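As a sketch, the same filter can be written as a PHP array and passed via the `tags_v2` parameter (the array below simply mirrors the JSON example above; the commented-out client call is an illustration, not part of the filter):

```php
<?php
// tags_v2 filter mirroring the JSON example:
// subject = holy-bible OR person-of-interest = jesus christ
// OR genre != religion OR (subject = tao-te-ching AND author = lao-tzu)
$tagsV2 = [
    "OR" => [
        ["key" => "subject", "value" => "holy-bible", "negate" => false],
        ["key" => "person-of-interest", "value" => "jesus christ", "negate" => false],
        ["key" => "genre", "value" => "religion", "negate" => true],
        [
            "AND" => [
                ["key" => "subject", "value" => "tao-te-ching", "negate" => false],
                ["key" => "author", "value" => "lao-tzu", "negate" => false],
            ],
        ],
    ],
];

// e.g. $result = $carbon->embeddings->getDocuments(query: "a", k: 1, tags_v2: $tagsV2);
```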
When querying embeddings, you can optionally specify the `media_type` parameter in your request. By default (if not set), it is equal to "TEXT". This means that the query will be performed over files that have been parsed as text (for now, this covers all files except image files). If it is equal to "IMAGE", the query will be performed over image files (for now, `.jpg` and `.png` files). You can think of this field as an additional filter on top of any filters set in `file_ids` and `parent_file_ids`.
When `hybrid_search` is set to true, a combination of keyword search and semantic search is used to rank and select candidate embeddings during information retrieval. By default, these search methods are weighted equally during the ranking process. To adjust the weight (or "importance") of each search method, you can use the `hybrid_search_tuning_parameters` property. The descriptions of the different tuning parameters are:

- `weight_a`: weight to assign to semantic search
- `weight_b`: weight to assign to keyword search

You must ensure that `sum(weight_a, weight_b, ..., weight_n)` for all n weights is equal to 1. The equality has an error tolerance of 0.001 to account for possible floating point issues.
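For instance, equal weighting can be expressed as below; the sum check simply restates the constraint described above (the validation snippet is an illustration, not an SDK feature):

```php
<?php
// Equal weighting between semantic (weight_a) and keyword (weight_b) search.
$tuning = [
    "weight_a" => 0.5,
    "weight_b" => 0.5,
];

// The weights must sum to 1, with an error tolerance of 0.001.
$sum = array_sum($tuning);
$valid = abs($sum - 1.0) <= 0.001;

// e.g. $carbon->embeddings->getDocuments(..., hybrid_search: true,
//      hybrid_search_tuning_parameters: $tuning);
```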
In order to use hybrid search for a customer across a set of documents, two flags need to be enabled:

- Use the `/modify_user_configuration` endpoint to enable `sparse_vectors` for the customer. The payload body for this request is below:

```json
{
  "configuration_key_name": "sparse_vectors",
  "value": {
    "enabled": true
  }
}
```

- Make sure hybrid search is enabled for the documents across which you want to perform the search. For the `/uploadfile` endpoint, this can be done by setting the following query parameter: `generate_sparse_vectors=true`
Carbon supports multiple models for use in generating embeddings for files. For images, we support Vertex AI's multimodal model; for text, we support OpenAI's `text-embedding-ada-002` and Cohere's `embed-multilingual-v3.0`. The model can be specified via the `embedding_model` parameter (in the POST body for `/embeddings`, and a query parameter in `/uploadfile`). If no model is supplied, `text-embedding-ada-002` is used by default. When performing embedding queries, only embeddings from files that used the specified model will be considered in the query. For example, if files A and B have embeddings generated with `OPENAI`, and files C and D have embeddings generated with `COHERE_MULTILINGUAL_V3`, then by default, queries will only consider files A and B. If `COHERE_MULTILINGUAL_V3` is specified as the `embedding_model` in `/embeddings`, then only files C and D will be considered. Make sure that the set of all files you want considered for a query have embeddings generated via the same model. For now, do not set `VERTEX_MULTIMODAL` as an `embedding_model`. This model is used automatically by Carbon when it detects an image file.
🛠️ Usage

```php
$result = $carbon->embeddings->getDocuments(
    query: "a",
    k: 1,
    tags: [
        "key" => "string_example",
    ],
    query_vector: [ 3.14 ],
    file_ids: [ 1 ],
    parent_file_ids: [ 1 ],
    include_all_children: false,
    tags_v2: [],
    include_tags: true,
    include_vectors: true,
    include_raw_file: true,
    hybrid_search: true,
    hybrid_search_tuning_parameters: [
        "weight_a" => 0.5,
        "weight_b" => 0.5,
    ],
    media_type: "TEXT",
    embedding_model: "OPENAI",
    include_file_level_metadata: false,
    high_accuracy: false,
    rerank: [
        "model" => "model_example",
    ],
    file_types_at_source: [ "string_example" ],
    exclude_cold_storage_files: false
);
```
⚙️ Parameters

query: string
Query for which to get related chunks and embeddings.

k: int
Number of related chunks to return.

tags: array<string, Tags1>
A set of tags to limit the search to. Deprecated and may be removed in the future.

query_vector: float[]
Optional query vector for which to get related chunks and embeddings. It must have been generated by the same model used to generate the embeddings across which the search is being conducted. Cannot provide both `query` and `query_vector`.

file_ids: int[]
Optional list of file IDs to limit the search to.

parent_file_ids: int[]
Optional list of parent file IDs to limit the search to. A parent file describes a file to which another file belongs (e.g. a folder).

include_all_children: bool
Flag to control whether or not to include all children of filtered files in the embedding search.

tags_v2: object
A set of tags to limit the search to. Use this instead of `tags`, which is deprecated.

include_tags: bool
Flag to control whether or not to include tags for each chunk in the response.

include_vectors: bool
Flag to control whether or not to include embedding vectors in the response.

include_raw_file: bool
Flag to control whether or not to include a signed URL to the raw file containing each chunk in the response.

hybrid_search: bool
Flag to control whether or not to perform hybrid search.

hybrid_search_tuning_parameters: HybridSearchTuningParamsNullable

media_type:

embedding_model:

include_file_level_metadata: bool
Flag to control whether or not to include file-level metadata in the response. This metadata will be included in the `content_metadata` field of each document along with chunk/embedding level metadata.

high_accuracy: bool
Flag to control whether or not to perform a high accuracy embedding search. By default, this is set to false. If true, the search may return more accurate results, but may take longer to complete.

rerank: RerankParamsNullable

file_types_at_source: AutoSyncedSourceTypesPropertyInner[]
Filter files based on their type at the source (for example help center tickets and articles).

exclude_cold_storage_files: bool
Flag to control whether or not to exclude files that are not in hot storage. If set to False, then an error will be returned if any filtered files are in cold storage.

🔄 Return

🌐 Endpoint

`/embeddings` `POST`

🔙 Back to Table of Contents
carbon.embeddings.getEmbeddingsAndChunks
Retrieve Embeddings And Content
🛠️ Usage

```php
$result = $carbon->embeddings->getEmbeddingsAndChunks(
    filters: [
        "user_file_id" => 1,
        "embedding_model" => "OPENAI",
    ],
    pagination: [
        "limit" => 10,
        "offset" => 0,
        "starting_id" => 0,
    ],
    order_by: "created_at",
    order_dir: "desc",
    include_vectors: false
);
```

⚙️ Parameters

filters: EmbeddingsAndChunksFilters
pagination: Pagination
order_by:
order_dir:
include_vectors: bool

🔄 Return

🌐 Endpoint

`/text_chunks` `POST`

🔙 Back to Table of Contents
carbon.embeddings.uploadChunksAndEmbeddings
Upload Chunks And Embeddings
🛠️ Usage

```php
$result = $carbon->embeddings->uploadChunksAndEmbeddings(
    embedding_model: "OPENAI",
    chunks_and_embeddings: [
        [
            "file_id" => 1,
            "chunks_and_embeddings" => [
                [
                    "chunk_number" => 1,
                    "chunk" => "chunk_example",
                ]
            ],
        ]
    ],
    overwrite_existing: false,
    chunks_only: false,
    custom_credentials: [
        "key" => [],
    ]
);
```

⚙️ Parameters

embedding_model:
chunks_and_embeddings: SingleChunksAndEmbeddingsUploadInput[]
overwrite_existing: bool
chunks_only: bool
custom_credentials: array<string, object>

🔄 Return

🌐 Endpoint

`/upload_chunks_and_embeddings` `POST`

🔙 Back to Table of Contents
carbon.files.createUserFileTags
A tag is a key-value pair that can be added to a file. This pair can then be used for searches (e.g. embedding searches) in order to narrow down the scope of the search. A file can have any number of tags. The following are reserved keys that cannot be used:

- db_embedding_id
- organization_id
- user_id
- organization_user_file_id

Carbon currently supports two data types for tag values - `string` and `list<string>`. Keys can only be `string`. If values other than `string` and `list<string>` are used, they're automatically converted to strings (e.g. 4 will become "4").
🛠️ Usage

```php
$result = $carbon->files->createUserFileTags(
    tags: [
        "key" => "string_example",
    ],
    organization_user_file_id: 1
);
```

⚙️ Parameters

tags: array<string, Tags1>
organization_user_file_id: int

🔄 Return

🌐 Endpoint

`/create_user_file_tags` `POST`

🔙 Back to Table of Contents
carbon.files.delete
Delete File Endpoint
🛠️ Usage

```php
$result = $carbon->files->delete( file_id: 1 );
```

⚙️ Parameters

file_id: int

🔄 Return

🌐 Endpoint

`/deletefile/{file_id}` `DELETE`

🔙 Back to Table of Contents
carbon.files.deleteFileTags
Delete File Tags
🛠️ Usage

```php
$result = $carbon->files->deleteFileTags(
    tags: [ "string_example" ],
    organization_user_file_id: 1
);
```

⚙️ Parameters

tags: string[]
organization_user_file_id: int

🔄 Return

🌐 Endpoint

`/delete_user_file_tags` `POST`

🔙 Back to Table of Contents
carbon.files.deleteMany
Delete Files Endpoint
🛠️ Usage

```php
$result = $carbon->files->deleteMany(
    file_ids: [ 1 ],
    sync_statuses: [ "string_example" ],
    delete_non_synced_only: false,
    send_webhook: false,
    delete_child_files: false
);
```

⚙️ Parameters

file_ids: int[]
sync_statuses: []
delete_non_synced_only: bool
send_webhook: bool
delete_child_files: bool

🔄 Return

🌐 Endpoint

`/delete_files` `POST`

🔙 Back to Table of Contents
carbon.files.deleteV2
Delete Files V2 Endpoint
🛠️ Usage

```php
$result = $carbon->files->deleteV2(
    filters: [
        "include_all_children" => false,
        "non_synced_only" => false,
    ],
    send_webhook: false,
    preserve_file_record: false
);
```

⚙️ Parameters

filters: OrganizationUserFilesToSyncFilters
send_webhook: bool
preserve_file_record: bool
Whether or not to delete all data related to the file from the database, BUT to preserve the file metadata, allowing for resyncs. By default `preserve_file_record` is false, which means that all data related to the file as well as its metadata will be deleted. Note that even if `preserve_file_record` is true, raw files uploaded via the `uploadfile` endpoint still cannot be resynced.

🔄 Return

🌐 Endpoint

`/delete_files_v2` `POST`

🔙 Back to Table of Contents
carbon.files.getParsedFile
This route is deprecated. Use `/user_files_v2` instead.

🛠️ Usage

```php
$result = $carbon->files->getParsedFile( file_id: 1 );
```

⚙️ Parameters

file_id: int

🔄 Return

🌐 Endpoint

`/parsed_file/{file_id}` `GET`

🔙 Back to Table of Contents
carbon.files.getRawFile
This route is deprecated. Use `/user_files_v2` instead.

🛠️ Usage

```php
$result = $carbon->files->getRawFile( file_id: 1 );
```

⚙️ Parameters

file_id: int

🔄 Return

🌐 Endpoint

`/raw_file/{file_id}` `GET`

🔙 Back to Table of Contents
carbon.files.modifyColdStorageParameters
Modify Cold Storage Parameters
🛠️ Usage

```php
$result = $carbon->files->modifyColdStorageParameters(
    filters: [
        "include_all_children" => false,
        "non_synced_only" => false,
    ],
    enable_cold_storage: true,
    hot_storage_time_to_live: 1
);
```

⚙️ Parameters

filters: OrganizationUserFilesToSyncFilters
enable_cold_storage: bool
hot_storage_time_to_live: int

🔄 Return

bool

🌐 Endpoint

`/modify_cold_storage_parameters` `POST`

🔙 Back to Table of Contents
carbon.files.moveToHotStorage
Move To Hot Storage
🛠️ Usage

```php
$result = $carbon->files->moveToHotStorage(
    filters: [
        "include_all_children" => false,
        "non_synced_only" => false,
    ]
);
```

⚙️ Parameters

filters: OrganizationUserFilesToSyncFilters

🔄 Return

bool

🌐 Endpoint

`/move_to_hot_storage` `POST`

🔙 Back to Table of Contents
carbon.files.queryUserFiles
For pre-filtering documents, using `tags_v2` is preferred to using `tags` (which is now deprecated). If both `tags_v2` and `tags` are specified, `tags` is ignored. `tags_v2` enables building complex filters through the use of "AND", "OR", and negation logic. Take the below input as an example:

```json
{
  "OR": [
    { "key": "subject", "value": "holy-bible", "negate": false },
    { "key": "person-of-interest", "value": "jesus christ", "negate": false },
    { "key": "genre", "value": "religion", "negate": true },
    {
      "AND": [
        { "key": "subject", "value": "tao-te-ching", "negate": false },
        { "key": "author", "value": "lao-tzu", "negate": false }
      ]
    }
  ]
}
```

In this case, files will be filtered such that:

- "subject" = "holy-bible" OR
- "person-of-interest" = "jesus christ" OR
- "genre" != "religion" OR
- "subject" = "tao-te-ching" AND "author" = "lao-tzu"

Note that the top level of the query must be either an "OR" or "AND" array. Currently, nesting is limited to 3. For tag blocks (those with "key", "value", and "negate" keys), the following typing rules apply:

- "key" isn't optional and must be a `string`
- "value" isn't optional and can be `any` or `list[any]`
- "negate" is optional and must be `true` or `false`. If present and `true`, then the filter block is negated in the resulting query. It is `false` by default.
🛠️ Usage

```php
$result = $carbon->files->queryUserFiles(
    pagination: [
        "limit" => 10,
        "offset" => 0,
        "starting_id" => 0,
    ],
    order_by: "created_at",
    order_dir: "desc",
    filters: [
        "include_all_children" => false,
        "non_synced_only" => false,
    ],
    include_raw_file: true,
    include_parsed_text_file: true,
    include_additional_files: true,
    presigned_url_expiry_time_seconds: 3600
);
```

⚙️ Parameters

pagination: Pagination
order_by:
order_dir:
filters: OrganizationUserFilesToSyncFilters
include_raw_file: bool
If true, the query will return presigned URLs for the raw file. Only relevant for the /user_files_v2 endpoint.
include_parsed_text_file: bool
If true, the query will return presigned URLs for the parsed text file. Only relevant for the /user_files_v2 endpoint.
include_additional_files: bool
If true, the query will return presigned URLs for additional files. Only relevant for the /user_files_v2 endpoint.
presigned_url_expiry_time_seconds: int
The expiry time for the presigned URLs. Only relevant for the /user_files_v2 endpoint.

🔄 Return

🌐 Endpoint

`/user_files_v2` `POST`

🔙 Back to Table of Contents
carbon.files.queryUserFilesDeprecated
This route is deprecated. Use `/user_files_v2` instead.

🛠️ Usage

```php
$result = $carbon->files->queryUserFilesDeprecated(
    pagination: [
        "limit" => 10,
        "offset" => 0,
        "starting_id" => 0,
    ],
    order_by: "created_at",
    order_dir: "desc",
    filters: [
        "include_all_children" => false,
        "non_synced_only" => false,
    ],
    include_raw_file: true,
    include_parsed_text_file: true,
    include_additional_files: true,
    presigned_url_expiry_time_seconds: 3600
);
```

⚙️ Parameters

pagination: Pagination
order_by:
order_dir:
filters: OrganizationUserFilesToSyncFilters
include_raw_file: bool
If true, the query will return presigned URLs for the raw file. Only relevant for the /user_files_v2 endpoint.
include_parsed_text_file: bool
If true, the query will return presigned URLs for the parsed text file. Only relevant for the /user_files_v2 endpoint.
include_additional_files: bool
If true, the query will return presigned URLs for additional files. Only relevant for the /user_files_v2 endpoint.
presigned_url_expiry_time_seconds: int
The expiry time for the presigned URLs. Only relevant for the /user_files_v2 endpoint.

🔄 Return

🌐 Endpoint

`/user_files` `POST`

🔙 Back to Table of Contents
carbon.files.resync
Resync File
🛠️ Usage

```php
$result = $carbon->files->resync(
    file_id: 1,
    chunk_size: 1,
    chunk_overlap: 1,
    force_embedding_generation: false,
    skip_file_processing: false
);
```

⚙️ Parameters

file_id: int
chunk_size: int
chunk_overlap: int
force_embedding_generation: bool
skip_file_processing: bool

🔄 Return

🌐 Endpoint

`/resync_file` `POST`

🔙 Back to Table of Contents
carbon.files.upload
This endpoint is used to directly upload local files to Carbon. The `POST` request should be a multipart form request. Note that the `set_page_as_boundary` query parameter is applicable only to PDFs for now. When this value is set, PDF chunks are at most one page long. Additional information can be retrieved for each chunk, however, namely the coordinates of the bounding box around the chunk (this can be used for things like text highlighting). Following is a description of all possible query parameters:

- `chunk_size`: the chunk size (in tokens) applied when splitting the document
- `chunk_overlap`: the chunk overlap (in tokens) applied when splitting the document
- `skip_embedding_generation`: whether or not to skip the generation of chunks and embeddings
- `set_page_as_boundary`: described above
- `embedding_model`: the model used to generate embeddings for the document chunks
- `use_ocr`: whether or not to use OCR as a preprocessing step prior to generating chunks. Valid for PDFs, JPEGs, and PNGs
- `generate_sparse_vectors`: whether or not to generate sparse vectors for the file. Required for hybrid search.
- `prepend_filename_to_chunks`: whether or not to prepend the filename to the chunk text
Carbon supports multiple models for use in generating embeddings for files. For images, we support Vertex AI's multimodal model; for text, we support OpenAI's `text-embedding-ada-002` and Cohere's `embed-multilingual-v3.0`. The model can be specified via the `embedding_model` parameter (in the POST body for `/embeddings`, and a query parameter in `/uploadfile`). If no model is supplied, `text-embedding-ada-002` is used by default. When performing embedding queries, only embeddings from files that used the specified model will be considered in the query. For example, if files A and B have embeddings generated with `OPENAI`, and files C and D have embeddings generated with `COHERE_MULTILINGUAL_V3`, then by default, queries will only consider files A and B. If `COHERE_MULTILINGUAL_V3` is specified as the `embedding_model` in `/embeddings`, then only files C and D will be considered. Make sure that the set of all files you want considered for a query have embeddings generated via the same model. For now, do not set `VERTEX_MULTIMODAL` as an `embedding_model`. This model is used automatically by Carbon when it detects an image file.
🛠️ Usage

```php
$result = $carbon->files->upload(
    file: new \SplFileObject('/path/to/file', 'rb'),
    chunk_size: 1,
    chunk_overlap: 1,
    skip_embedding_generation: false,
    set_page_as_boundary: false,
    embedding_model: "string_example",
    use_ocr: false,
    generate_sparse_vectors: false,
    prepend_filename_to_chunks: false,
    max_items_per_chunk: 1,
    parse_pdf_tables_with_ocr: false,
    detect_audio_language: false,
    transcription_service: "assemblyai",
    include_speaker_labels: false,
    media_type: "TEXT",
    split_rows: false,
    enable_cold_storage: false,
    hot_storage_time_to_live: 1,
    generate_chunks_only: false,
    store_file_only: false
);
```
βοΈ Parameters
file: \SplFileObject
chunk_size: int
Chunk size in tiktoken tokens to be used when processing file.
chunk_overlap: int
Chunk overlap in tiktoken tokens to be used when processing file.
skip_embedding_generation: bool
Flag to control whether or not embeddings should be generated and stored when processing file.
set_page_as_boundary: bool
Flag to control whether or not to set a page's worth of content as the maximum amount of content that can appear in a chunk. Only valid for PDFs. See the route description for more information.
embedding_model:
Embedding model that will be used to embed file chunks.
use_ocr: bool
Whether or not to use OCR when processing files. Valid for PDFs, JPEGs, and PNGs. Useful for documents with tables, images, and/or scanned text.
generate_sparse_vectors: bool
Whether or not to generate sparse vectors for the file. This is required for the file to be a candidate for hybrid search.
prepend_filename_to_chunks: bool
Whether or not to prepend the file's name to chunks.
max_items_per_chunk: int
Number of objects per chunk. For csv, tsv, xlsx, and json files only.
parse_pdf_tables_with_ocr: bool
Whether to use rich table parsing when use_ocr
is enabled.
detect_audio_language: bool
Whether to automatically detect the language of the uploaded audio file.
transcription_service:
The transcription service to use for audio files. If no service is specified, 'deepgram' will be used.
include_speaker_labels: bool
Detect multiple speakers and label segments of speech by speaker for audio files.
media_type:
The media type of the file. If not provided, it will be inferred from the file extension.
split_rows: bool
Whether to split tabular rows into chunks. Currently only valid for CSV, TSV, and XLSX files.
enable_cold_storage: bool
Enable cold storage for the file. If set to true, the file will be moved to cold storage after a certain period of inactivity. Default is false.
hot_storage_time_to_live: int
Time in days after which the file will be moved to cold storage. Must be one of [1, 3, 7, 14, 30].
generate_chunks_only: bool
If this flag is enabled, the file will be chunked and stored with Carbon, but no embeddings will be generated. This overrides the skip_embedding_generation flag.
store_file_only: bool
If this flag is enabled, the file will be stored with Carbon, but no processing will be done.
🔄 Return
🌐 Endpoint
/uploadfile
POST
🔙 Back to Table of Contents
carbon.files.uploadFromUrl
Create Upload File From Url
🛠️ Usage
$result = $carbon->files->uploadFromUrl( url: "string_example", file_name: "string_example", chunk_size: 1, chunk_overlap: 1, skip_embedding_generation: False, set_page_as_boundary: False, embedding_model: "OPENAI", generate_sparse_vectors: False, use_textract: False, prepend_filename_to_chunks: False, max_items_per_chunk: 1, parse_pdf_tables_with_ocr: False, detect_audio_language: False, transcription_service: "assemblyai", include_speaker_labels: False, media_type: "TEXT", split_rows: False, cold_storage_params: [ "enable_cold_storage" => False, ], generate_chunks_only: False, store_file_only: False );
⚙️ Parameters
url: string
file_name: string
chunk_size: int
chunk_overlap: int
skip_embedding_generation: bool
set_page_as_boundary: bool
embedding_model:
generate_sparse_vectors: bool
use_textract: bool
prepend_filename_to_chunks: bool
max_items_per_chunk: int
Number of objects per chunk. For csv, tsv, xlsx, and json files only.
parse_pdf_tables_with_ocr: bool
detect_audio_language: bool
transcription_service:
include_speaker_labels: bool
media_type:
split_rows: bool
cold_storage_params: ColdStorageProps
generate_chunks_only: bool
If this flag is enabled, the file will be chunked and stored with Carbon, but no embeddings will be generated. This overrides the skip_embedding_generation flag.
store_file_only: bool
If this flag is enabled, the file will be stored with Carbon, but no processing will be done.
🔄 Return
🌐 Endpoint
/upload_file_from_url
POST
🔙 Back to Table of Contents
carbon.files.uploadText
Carbon supports multiple models for generating embeddings for files. For images, we support Vertex AI's multimodal model; for text, we support OpenAI's text-embedding-ada-002 and Cohere's embed-multilingual-v3.0. The model can be specified via the embedding_model parameter (in the POST body for /embeddings, and as a query parameter in /uploadfile). If no model is supplied, text-embedding-ada-002 is used by default. When performing embedding queries, only embeddings from files that used the specified model are considered. For example, if files A and B have embeddings generated with OPENAI, and files C and D have embeddings generated with COHERE_MULTILINGUAL_V3, then by default queries will only consider files A and B. If COHERE_MULTILINGUAL_V3 is specified as the embedding_model in /embeddings, then only files C and D will be considered. Make sure that all the files you want considered for a query have embeddings generated via the same model. For now, do not set VERTEX_MULTIMODAL as an embedding_model; that model is used automatically by Carbon when it detects an image file.
🛠️ Usage
$result = $carbon->files->uploadText( contents: "aaaaa", name: "string_example", chunk_size: 1, chunk_overlap: 1, skip_embedding_generation: False, overwrite_file_id: 1, embedding_model: "OPENAI", generate_sparse_vectors: False, cold_storage_params: [ "enable_cold_storage" => False, ], generate_chunks_only: False, store_file_only: False );
⚙️ Parameters
contents: string
name: string
chunk_size: int
chunk_overlap: int
skip_embedding_generation: bool
overwrite_file_id: int
embedding_model:
generate_sparse_vectors: bool
cold_storage_params: ColdStorageProps
generate_chunks_only: bool
If this flag is enabled, the file will be chunked and stored with Carbon, but no embeddings will be generated. This overrides the skip_embedding_generation flag.
store_file_only: bool
If this flag is enabled, the file will be stored with Carbon, but no processing will be done.
🔄 Return
🌐 Endpoint
/upload_text
POST
🔙 Back to Table of Contents
carbon.github.getIssue
Issue
🛠️ Usage
$result = $carbon->github->getIssue( issue_number: 1, include_remote_data: False, data_source_id: 1, repository: "string_example" );
⚙️ Parameters
issue_number: int
include_remote_data: bool
data_source_id: int
repository: string
🔄 Return
🌐 Endpoint
/integrations/data/github/issues/{issue_number}
GET
🔙 Back to Table of Contents
carbon.github.getIssues
Issues
🛠️ Usage
$result = $carbon->github->getIssues( data_source_id: 1, repository: "string_example", include_remote_data: False, page: 1, page_size: 30, next_cursor: "string_example", filters: [ "state" => "closed", ], order_by: "created", order_dir: "asc" );
⚙️ Parameters
data_source_id: int
repository: string
Full name of the repository, denoted as {owner}/{repo}
include_remote_data: bool
page: int
page_size: int
next_cursor: string
filters: IssuesFilter
order_by:
order_dir:
🔄 Return
🌐 Endpoint
/integrations/data/github/issues
POST
🔙 Back to Table of Contents
carbon.github.getPr
Get Pr
🛠️ Usage
$result = $carbon->github->getPr( pull_number: 1, include_remote_data: False, data_source_id: 1, repository: "string_example" );
⚙️ Parameters
pull_number: int
include_remote_data: bool
data_source_id: int
repository: string
🔄 Return
🌐 Endpoint
/integrations/data/github/pull_requests/{pull_number}
GET
🔙 Back to Table of Contents
carbon.github.getPrComments
Pr Comments
🛠️ Usage
$result = $carbon->github->getPrComments( data_source_id: 1, repository: "string_example", pull_number: 1, include_remote_data: False, page: 1, page_size: 30, next_cursor: "string_example", order_by: "created", order_dir: "asc" );
⚙️ Parameters
data_source_id: int
repository: string
Full name of the repository, denoted as {owner}/{repo}
pull_number: int
include_remote_data: bool
page: int
page_size: int
next_cursor: string
order_by:
order_dir:
🔄 Return
🌐 Endpoint
/integrations/data/github/pull_requests/comments
POST
🔙 Back to Table of Contents
carbon.github.getPrCommits
Pr Commits
🛠️ Usage
$result = $carbon->github->getPrCommits( data_source_id: 1, repository: "string_example", pull_number: 1, include_remote_data: False, page: 1, page_size: 30, next_cursor: "string_example" );
⚙️ Parameters
data_source_id: int
repository: string
Full name of the repository, denoted as {owner}/{repo}
pull_number: int
include_remote_data: bool
page: int
page_size: int
next_cursor: string
🔄 Return
🌐 Endpoint
/integrations/data/github/pull_requests/commits
POST
🔙 Back to Table of Contents
carbon.github.getPrFiles
Pr Files
🛠️ Usage
$result = $carbon->github->getPrFiles( data_source_id: 1, repository: "string_example", pull_number: 1, include_remote_data: False, page: 1, page_size: 30, next_cursor: "string_example" );
⚙️ Parameters
data_source_id: int
repository: string
Full name of the repository, denoted as {owner}/{repo}
pull_number: int
include_remote_data: bool
page: int
page_size: int
next_cursor: string
🔄 Return
🌐 Endpoint
/integrations/data/github/pull_requests/files
POST
🔙 Back to Table of Contents
carbon.github.getPullRequests
Get Prs
🛠️ Usage
$result = $carbon->github->getPullRequests( data_source_id: 1, repository: "string_example", include_remote_data: False, page: 1, page_size: 30, next_cursor: "string_example", filters: [ "state" => "closed", ], order_by: "created", order_dir: "asc" );
⚙️ Parameters
data_source_id: int
repository: string
Full name of the repository, denoted as {owner}/{repo}
include_remote_data: bool
page: int
page_size: int
next_cursor: string
filters: PullRequestFilters
order_by:
order_dir:
🔄 Return
🌐 Endpoint
/integrations/data/github/pull_requests
POST
🔙 Back to Table of Contents
carbon.integrations.cancel
Cancel Data Source Items Sync
🛠️ Usage
$result = $carbon->integrations->cancel( data_source_id: 1 );
⚙️ Parameters
data_source_id: int
🔄 Return
🌐 Endpoint
/integrations/items/sync/cancel
POST
🔙 Back to Table of Contents
carbon.integrations.connectDataSource
Connect Data Source
🛠️ Usage
$result = $carbon->integrations->connectDataSource( authentication: [ "source" => "GOOGLE_DRIVE", "access_token" => "access_token_example", ], sync_options: [ "chunk_size" => 1500, "chunk_overlap" => 20, "skip_embedding_generation" => False, "embedding_model" => "OPENAI", "generate_sparse_vectors" => False, "prepend_filename_to_chunks" => False, "sync_files_on_connection" => True, "set_page_as_boundary" => False, "enable_file_picker" => True, "sync_source_items" => True, "incremental_sync" => False, ] );
⚙️ Parameters
authentication: AuthenticationProperty
sync_options: SyncOptions
🔄 Return
🌐 Endpoint
/integrations/connect
POST
🔙 Back to Table of Contents
carbon.integrations.connectDocument360
You will need an access token to connect your Document360 account. To obtain an access token, follow the steps highlighted here: https://apidocs.document360.com/apidocs/api-token.
🛠️ Usage
$result = $carbon->integrations->connectDocument360( account_email: "string_example", access_token: "string_example", tags: [], chunk_size: 1500, chunk_overlap: 20, skip_embedding_generation: False, embedding_model: "OPENAI", generate_sparse_vectors: False, prepend_filename_to_chunks: False, sync_files_on_connection: True, request_id: "string_example", sync_source_items: True, file_sync_config: [ "auto_synced_source_types" => ["ARTICLE"], "sync_attachments" => False, "detect_audio_language" => False, "transcription_service" => "assemblyai", "include_speaker_labels" => False, "split_rows" => False, "generate_chunks_only" => False, "store_file_only" => False, "skip_file_processing" => False, ], data_source_tags: [] );
⚙️ Parameters
account_email: string
This email will be used to identify your carbon data source. It should have access to the Document360 account you wish to connect.
access_token: string
tags: object
chunk_size: int
chunk_overlap: int
skip_embedding_generation: bool
embedding_model:
generate_sparse_vectors: bool
prepend_filename_to_chunks: bool
sync_files_on_connection: bool
request_id: string
sync_source_items: bool
Enabling this flag will fetch all available content from the source to be listed via list items endpoint
file_sync_config: FileSyncConfigNullable
data_source_tags: object
Tags to be associated with the data source. If the data source already has tags set, then an upsert will be performed.
🔄 Return
🌐 Endpoint
/integrations/document360
POST
🔙 Back to Table of Contents
carbon.integrations.connectFreshdesk
Refer to this article to obtain an API key: https://support.freshdesk.com/en/support/solutions/articles/215517. Make sure that your API key has permission to read solutions from your account and that you are on a paid plan. Once you have an API key, you can make a request to this endpoint along with your Freshdesk domain. This will trigger an automatic sync of the articles in your "solutions" tab. The additional parameters below can be used to associate data with the synced articles or modify the sync behavior.
🛠️ Usage
$result = $carbon->integrations->connectFreshdesk( domain: "string_example", api_key: "string_example", tags: [], chunk_size: 1500, chunk_overlap: 20, skip_embedding_generation: False, embedding_model: "OPENAI", generate_sparse_vectors: False, prepend_filename_to_chunks: False, sync_files_on_connection: True, request_id: "string_example", sync_source_items: True, file_sync_config: [ "auto_synced_source_types" => ["ARTICLE"], "sync_attachments" => False, "detect_audio_language" => False, "transcription_service" => "assemblyai", "include_speaker_labels" => False, "split_rows" => False, "generate_chunks_only" => False, "store_file_only" => False, "skip_file_processing" => False, ], data_source_tags: [] );
⚙️ Parameters
domain: string
api_key: string
tags: object
chunk_size: int
chunk_overlap: int
skip_embedding_generation: bool
embedding_model:
generate_sparse_vectors: bool
prepend_filename_to_chunks: bool
sync_files_on_connection: bool
request_id: string
sync_source_items: bool
Enabling this flag will fetch all available content from the source to be listed via list items endpoint
file_sync_config: FileSyncConfigNullable
data_source_tags: object
Tags to be associated with the data source. If the data source already has tags set, then an upsert will be performed.
🔄 Return
🌐 Endpoint
/integrations/freshdesk
POST
🔙 Back to Table of Contents
carbon.integrations.connectGitbook
You will need an access token to connect your Gitbook account. Note that permissions are defined by the user generating the access token, so make sure you have permission to access the spaces you will be syncing. Refer to this article for more details: https://developer.gitbook.com/gitbook-api/authentication. Additionally, you need to specify the name of the organization you will be syncing data from.
🛠️ Usage
$result = $carbon->integrations->connectGitbook( organization: "string_example", access_token: "string_example", tags: [], chunk_size: 1500, chunk_overlap: 20, skip_embedding_generation: False, embedding_model: "OPENAI", generate_sparse_vectors: False, prepend_filename_to_chunks: False, sync_files_on_connection: True, request_id: "string_example", sync_source_items: True, file_sync_config: [ "auto_synced_source_types" => ["ARTICLE"], "sync_attachments" => False, "detect_audio_language" => False, "transcription_service" => "assemblyai", "include_speaker_labels" => False, "split_rows" => False, "generate_chunks_only" => False, "store_file_only" => False, "skip_file_processing" => False, ], data_source_tags: [] );
⚙️ Parameters
organization: string
access_token: string
tags: object
chunk_size: int
chunk_overlap: int
skip_embedding_generation: bool
embedding_model:
generate_sparse_vectors: bool
prepend_filename_to_chunks: bool
sync_files_on_connection: bool
request_id: string
sync_source_items: bool
Enabling this flag will fetch all available content from the source to be listed via list items endpoint
file_sync_config: FileSyncConfigNullable
data_source_tags: object
Tags to be associated with the data source. If the data source already has tags set, then an upsert will be performed.
🔄 Return
🌐 Endpoint
/integrations/gitbook
POST
🔙 Back to Table of Contents
carbon.integrations.connectGuru
You will need an access token to connect your Guru account. To obtain an access token, follow the steps highlighted here https://help.getguru.com/docs/gurus-api#obtaining-a-user-token. The username should be your Guru username.
🛠️ Usage
$result = $carbon->integrations->connectGuru( username: "string_example", access_token: "string_example", tags: [], chunk_size: 1500, chunk_overlap: 20, skip_embedding_generation: False, embedding_model: "OPENAI", generate_sparse_vectors: False, prepend_filename_to_chunks: False, sync_files_on_connection: True, request_id: "string_example", sync_source_items: True, file_sync_config: [ "auto_synced_source_types" => ["ARTICLE"], "sync_attachments" => False, "detect_audio_language" => False, "transcription_service" => "assemblyai", "include_speaker_labels" => False, "split_rows" => False, "generate_chunks_only" => False, "store_file_only" => False, "skip_file_processing" => False, ], data_source_tags: [] );
⚙️ Parameters
username: string
access_token: string
tags: object
chunk_size: int
chunk_overlap: int
skip_embedding_generation: bool
embedding_model:
generate_sparse_vectors: bool
prepend_filename_to_chunks: bool
sync_files_on_connection: bool
request_id: string
sync_source_items: bool
Enabling this flag will fetch all available content from the source to be listed via list items endpoint
file_sync_config: FileSyncConfigNullable
data_source_tags: object
Tags to be associated with the data source. If the data source already has tags set, then an upsert will be performed.
🔄 Return
🌐 Endpoint
/integrations/guru
POST
🔙 Back to Table of Contents
carbon.integrations.createAwsIamUser
This endpoint can be used to connect S3 as well as DigitalOcean Spaces (S3-compatible).
For S3, create a new IAM user with permissions to:
- List all buckets.
- Read from the specific buckets and objects to sync with Carbon. Ensure any future buckets or objects carry the same permissions.
🛠️ Usage
$result = $carbon->integrations->createAwsIamUser( access_key: "string_example", access_key_secret: "string_example", sync_source_items: True, endpoint_url: "string_example", data_source_tags: [] );
⚙️ Parameters
access_key: string
access_key_secret: string
sync_source_items: bool
Enabling this flag will fetch all available content from the source to be listed via list items endpoint
endpoint_url: string
You can specify a DigitalOcean endpoint URL to connect a DigitalOcean Space through this endpoint. The URL should be of the format <region>.digitaloceanspaces.com. It is not required for S3 buckets.
data_source_tags: object
Tags to be associated with the data source. If the data source already has tags set, then an upsert will be performed.
🔄 Return
🌐 Endpoint
/integrations/s3
POST
🔙 Back to Table of Contents
carbon.integrations.getOauthUrl
This endpoint can be used to generate the following URLs
- An OAuth URL for OAuth based connectors
- A file syncing URL which skips the OAuth flow if the user already has a valid access token and takes them to the success state.
🛠️ Usage
$result = $carbon->integrations->getOauthUrl( service: "BOX", tags: null, scope: "string_example", scopes: [], chunk_size: 1500, chunk_overlap: 20, skip_embedding_generation: False, embedding_model: "OPENAI", zendesk_subdomain: "string_example", microsoft_tenant: "string_example", sharepoint_site_name: "string_example", confluence_subdomain: "string_example", generate_sparse_vectors: False, prepend_filename_to_chunks: False, max_items_per_chunk: 1, salesforce_domain: "string_example", sync_files_on_connection: True, set_page_as_boundary: False, data_source_id: 1, connecting_new_account: False, request_id: "string_example", use_ocr: False, parse_pdf_tables_with_ocr: False, enable_file_picker: True, sync_source_items: True, incremental_sync: False, file_sync_config: [ "auto_synced_source_types" => ["ARTICLE"], "sync_attachments" => False, "detect_audio_language" => False, "transcription_service" => "assemblyai", "include_speaker_labels" => False, "split_rows" => False, "generate_chunks_only" => False, "store_file_only" => False, "skip_file_processing" => False, ], automatically_open_file_picker: True, gong_account_email: "string_example", servicenow_credentials: [ "instance_subdomain" => "instance_subdomain_example", "client_id" => "client_id_example", "client_secret" => "client_secret_example", "redirect_uri" => "redirect_uri_example", ], data_source_tags: [] );
⚙️ Parameters
service:
tags:
scope: string
scopes: string[]
List of scopes to request from the OAuth provider. Note that the scopes will be used as-is, not combined with the defaults that Carbon uses. One scope should be one array element.
chunk_size: int
chunk_overlap: int
skip_embedding_generation: bool
embedding_model:
zendesk_subdomain: string
microsoft_tenant: string
sharepoint_site_name: string
confluence_subdomain: string
generate_sparse_vectors: bool
prepend_filename_to_chunks: bool
max_items_per_chunk: int
Number of objects per chunk. For csv, tsv, xlsx, and json files only.
salesforce_domain: string
sync_files_on_connection: bool
Used to specify whether Carbon should attempt to sync all your files automatically when authorization is complete. This is only supported for a subset of connectors and will be ignored for the rest. Supported connectors: Intercom, Zendesk, Gitbook, Confluence, Salesforce, Freshdesk
set_page_as_boundary: bool
data_source_id: int
Used to specify a data source to sync from if you have multiple connected. It can be skipped if you only have one data source of that type connected or are connecting a new account.
connecting_new_account: bool
Used to connect a new data source. If not specified, we will attempt to create a sync URL for an existing data source based on type and ID.
request_id: string
This request id will be added to all files that get synced using the generated OAuth URL
use_ocr: bool
Enable OCR for files that support it. Supported formats: pdf, png, jpg
parse_pdf_tables_with_ocr: bool
enable_file_picker: bool
Enable integration's file picker for sources that support it. Supported sources: BOX, DROPBOX, GOOGLE_DRIVE, ONEDRIVE, SHAREPOINT
sync_source_items: bool
Enabling this flag will fetch all available content from the source to be listed via list items endpoint
incremental_sync: bool
Only sync files if they have not already been synced or if the embedding properties have changed. This flag is currently supported by ONEDRIVE, GOOGLE_DRIVE, BOX, DROPBOX, INTERCOM, GMAIL, OUTLOOK, ZENDESK, CONFLUENCE, NOTION, SHAREPOINT, SERVICENOW. It will be ignored for other data sources.
file_sync_config: FileSyncConfigNullable
automatically_open_file_picker: bool
Automatically open source file picker after the OAuth flow is complete. This flag is currently supported by BOX, DROPBOX, GOOGLE_DRIVE, ONEDRIVE, SHAREPOINT. It will be ignored for other data sources.
gong_account_email: string
If you are connecting a Gong account, you need to input the email of the account you wish to connect. This email will be used to identify your carbon data source.
servicenow_credentials: ServiceNowCredentialsNullable
data_source_tags: object
Tags to be associated with the data source. If the data source already has tags set, then an upsert will be performed.
🔄 Return
🌐 Endpoint
/integrations/oauth_url
POST
🔙 Back to Table of Contents
carbon.integrations.listConfluencePages
This endpoint has been deprecated. Use /integrations/items/list instead.
To begin listing a user's Confluence pages, at least the data_source_id of a connected Confluence account must be specified. This base request returns a list of root pages for every space the user has access to in a Confluence instance. To traverse further down the user's page directory, additional requests to this endpoint can be made with the same data_source_id and with parent_id set to the ID of a page from a previous request. For convenience, the has_children property on each directory item in the response list flags which pages will return non-empty lists of pages when set as the parent_id.
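The traversal described above can be sketched like this (the response field names pages, id, and has_children follow the description but are assumptions about the exact payload shape):

```php
// Base request: root pages for every space in the Confluence instance.
$roots = $carbon->integrations->listConfluencePages(data_source_id: 123);

// Walk one level deeper only where the listing says children exist.
foreach ($roots['pages'] as $page) {
    if ($page['has_children']) {
        $children = $carbon->integrations->listConfluencePages(
            data_source_id: 123,    // same data source as before
            parent_id: $page['id'], // id of a page from the prior response
        );
        // ... inspect $children and recurse as needed ...
    }
}
```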
🛠️ Usage
$result = $carbon->integrations->listConfluencePages( data_source_id: 1, parent_id: "string_example" );
⚙️ Parameters
data_source_id: int
parent_id: string
🔄 Return
🌐 Endpoint
/integrations/confluence/list
POST
🔙 Back to Table of Contents
carbon.integrations.listConversations
List all of your public and private channels, DMs, and group DMs. The IDs from the response can be used as filters to sync messages to Carbon.
types: Comma-separated list of types. Available types are im (DMs), mpim (group DMs), public_channel, and private_channel. Defaults to public_channel.
cursor: Used for pagination. If next_cursor is returned in the response, you need to pass it as the cursor in the next request.
data_source_id: The data source needs to be specified if you have linked multiple Slack accounts.
exclude_archived: Whether archived conversations should be excluded. Defaults to true.
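A minimal pagination loop over this endpoint might look like the following sketch (where exactly next_cursor appears in the response array is an assumption):

```php
// Fetch all group DMs and private channels, following next_cursor
// until the API stops returning one.
$cursor = null;
do {
    $page = $carbon->integrations->listConversations(
        types: 'mpim,private_channel', // comma-separated list of types
        cursor: $cursor,
        exclude_archived: true,
    );
    // ... process the conversations in $page ...
    $cursor = $page['next_cursor'] ?? null;
} while ($cursor !== null);
```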
🛠️ Usage
$result = $carbon->integrations->listConversations( types: "public_channel", cursor: "string_example", data_source_id: 1, exclude_archived: True );
⚙️ Parameters
types: string
cursor: string
data_source_id: int
exclude_archived: bool
🔄 Return
object
🌐 Endpoint
/integrations/slack/conversations
GET
🔙 Back to Table of Contents
carbon.integrations.listDataSourceItems
List Data Source Items
🛠️ Usage
$result = $carbon->integrations->listDataSourceItems( data_source_id: 1, parent_id: "string_example", filters: [ ], pagination: [ "limit" => 10, "offset" => 0, "starting_id" => 0, ], order_by: "name", order_dir: "asc" );
⚙️ Parameters
data_source_id: int
parent_id: string
filters: ListItemsFiltersNullable
pagination: Pagination
order_by:
order_dir:
🔄 Return
🌐 Endpoint
/integrations/items/list
POST
🔙 Back to Table of Contents
carbon.integrations.listFolders
After connecting your Outlook account, you can use this endpoint to list all of your folders in Outlook. This includes both system folders like "Inbox" and user-created folders.
🛠️ Usage
$result = $carbon->integrations->listFolders( data_source_id: 1 );
⚙️ Parameters
data_source_id: int
🔄 Return
object
🌐 Endpoint
/integrations/outlook/user_folders
GET
🔙 Back to Table of Contents
carbon.integrations.listGitbookSpaces
After connecting your Gitbook account, you can use this endpoint to list all of your spaces under the current organization.
🛠️ Usage
$result = $carbon->integrations->listGitbookSpaces( data_source_id: 1 );
⚙️ Parameters
data_source_id: int
🔄 Return
object
🌐 Endpoint
/integrations/gitbook/spaces
GET
🔙 Back to Table of Contents
carbon.integrations.listLabels
After connecting your Gmail account, you can use this endpoint to list all of your labels. User-created labels will have the type "user" and Gmail's default labels will have the type "system".
🛠️ Usage
$result = $carbon->integrations->listLabels( data_source_id: 1 );
⚙️ Parameters
data_source_id: int
🔄 Return
object
🌐 Endpoint
/integrations/gmail/user_labels
GET
🔙 Back to Table of Contents
carbon.integrations.listOutlookCategories
After connecting your Outlook account, you can use this endpoint to list all of your categories in Outlook. We currently support listing up to 250 categories.
🛠️ Usage
$result = $carbon->integrations->listOutlookCategories( data_source_id: 1 );
⚙️ Parameters
data_source_id: int
🔄 Return
object
🌐 Endpoint
/integrations/outlook/user_categories
GET
🔙 Back to Table of Contents
carbon.integrations.listRepos
Once you have connected your GitHub account, you can use this endpoint to list the repositories your account has access to. You can use a data source ID or username to fetch from a specific account.
🛠️ Usage
$result = $carbon->integrations->listRepos( per_page: 30, page: 1, data_source_id: 1 );
⚙️ Parameters
per_page: int
page: int
data_source_id: int
🔄 Return
object
🌐 Endpoint
/integrations/github/repos
GET
🔙 Back to Table of Contents
carbon.integrations.listSharepointSites
List all Sharepoint sites in the connected tenant. The site names from the response can be used as the site name when connecting a Sharepoint site. If site name is null in the response, then site name should be left null when connecting to the site.
This endpoint requires an additional Sharepoint scope: "Sites.Read.All". Include this scope along with the default Sharepoint scopes to list Sharepoint sites, connect to a site, and finally sync files from the site. The default Sharepoint scopes are: ["openid", "offline_access", "User.Read", "Files.Read.All"].
data_source_id: The data source needs to be specified if you have linked multiple Sharepoint accounts.
cursor: Used for pagination. If next_cursor is returned in the response, you need to pass it as the cursor in the next request.
🛠️ Usage
$result = $carbon->integrations->listSharepointSites( data_source_id: 1, cursor: "string_example" );
⚙️ Parameters
data_source_id: int
cursor: string
🔄 Return
object
🌐 Endpoint
/integrations/sharepoint/sites/list
GET
🔙 Back to Table of Contents
carbon.integrations.syncAzureBlobFiles
After optionally loading the items via /integrations/items/sync and /integrations/items/list, use the container name and file name as the ID in this endpoint to sync them into Carbon. The additional parameters below can be used to associate data with the selected items or modify the sync behavior.
🛠️ Usage
$result = $carbon->integrations->syncAzureBlobFiles( ids: [ [ ] ], tags: [], chunk_size: 1500, chunk_overlap: 20, skip_embedding_generation: False, embedding_model: "OPENAI", generate_sparse_vectors: False, prepend_filename_to_chunks: False, max_items_per_chunk: 1, set_page_as_boundary: False, data_source_id: 1, request_id: "string_example", use_ocr: False, parse_pdf_tables_with_ocr: False, file_sync_config: [ "auto_synced_source_types" => ["ARTICLE"], "sync_attachments" => False, "detect_audio_language" => False, "transcription_service" => "assemblyai", "include_speaker_labels" => False, "split_rows" => False, "generate_chunks_only" => False, "store_file_only" => False, "skip_file_processing" => False, ] );
⚙️ Parameters
ids: AzureBlobGetFileInput[]
tags: object
chunk_size: int
chunk_overlap: int
skip_embedding_generation: bool
embedding_model:
generate_sparse_vectors: bool
prepend_filename_to_chunks: bool
max_items_per_chunk: int
Number of objects per chunk. For csv, tsv, xlsx, and json files only.
set_page_as_boundary: bool
data_source_id: int
request_id: string
use_ocr: bool
parse_pdf_tables_with_ocr: bool
file_sync_config: FileSyncConfigNullable
🔄 Return
🌐 Endpoint
/integrations/azure_blob_storage/files
POST
🔙 Back to Table of Contents
carbon.integrations.syncAzureBlobStorage
This endpoint can be used to connect Azure Blob Storage.
For Azure Blob Storage, follow these steps:
- Create a new Azure Storage account and grant the following permissions:
- List containers.
- Read from specific containers and blobs to sync with Carbon. Ensure any future containers or blobs carry the same permissions.
- Generate a shared access signature (SAS) token or an access key for the storage account.
Once created, provide us with the following details to generate the connection URL:
- Storage Account Key.
- Storage Account Name.
🛠️ Usage
$result = $carbon->integrations->syncAzureBlobStorage( account_name: "string_example", account_key: "string_example", sync_source_items: True, data_source_tags: [] );
⚙️ Parameters
account_name: string
account_key: string
sync_source_items: bool
data_source_tags: object
Tags to be associated with the data source. If the data source already has tags set, then an upsert will be performed.
🔄 Return
🌐 Endpoint
/integrations/azure_blob_storage
POST
🔙 Back to Table of Contents
carbon.integrations.syncConfluence
This endpoint has been deprecated. Use /integrations/files/sync instead.
After listing pages in a user's Confluence account, the set of selected page ids and the connected account's data_source_id can be passed into this endpoint to sync them into Carbon. The additional parameters listed below can be used to associate data with the selected pages or alter the behavior of the sync.
🛠️ Usage
$result = $carbon->integrations->syncConfluence( data_source_id: 1, ids: [ "string_example" ], tags: [], chunk_size: 1500, chunk_overlap: 20, skip_embedding_generation: False, embedding_model: "OPENAI", generate_sparse_vectors: False, prepend_filename_to_chunks: False, max_items_per_chunk: 1, set_page_as_boundary: False, request_id: "string_example", use_ocr: False, parse_pdf_tables_with_ocr: False, incremental_sync: False, file_sync_config: [ "auto_synced_source_types" => ["ARTICLE"], "sync_attachments" => False, "detect_audio_language" => False, "transcription_service" => "assemblyai", "include_speaker_labels" => False, "split_rows" => False, "generate_chunks_only" => False, "store_file_only" => False, "skip_file_processing" => False, ] );
⚙️ Parameters
data_source_id: int
ids: IdsProperty
tags: object
chunk_size: int
chunk_overlap: int
skip_embedding_generation: bool
embedding_model:
generate_sparse_vectors: bool
prepend_filename_to_chunks: bool
max_items_per_chunk: int
Number of objects per chunk. For csv, tsv, xlsx, and json files only.
set_page_as_boundary: bool
request_id: string
use_ocr: bool
parse_pdf_tables_with_ocr: bool
incremental_sync: bool
Only sync files if they have not already been synced or if the embedding properties have changed. This flag is currently supported by ONEDRIVE, GOOGLE_DRIVE, BOX, DROPBOX, INTERCOM, GMAIL, OUTLOOK, ZENDESK, CONFLUENCE, NOTION, SHAREPOINT, SERVICENOW. It will be ignored for other data sources.
file_sync_config: FileSyncConfigNullable
🔄 Return
🌐 Endpoint
/integrations/confluence/sync
POST
🔙 Back to Table of Contents
carbon.integrations.syncDataSourceItems
Sync Data Source Items
🛠️ Usage
$result = $carbon->integrations->syncDataSourceItems( data_source_id: 1 );
⚙️ Parameters
data_source_id: int
🔄 Return
🌐 Endpoint
/integrations/items/sync
POST
🔙 Back to Table of Contents
carbon.integrations.syncFiles
After listing files and folders via /integrations/items/sync and /integrations/items/list, use the selected items' external IDs as the ids in this endpoint to sync them into Carbon. SharePoint items take an additional parameter, root_id, which identifies the drive the file or folder is in and is stored in root_external_id. This additional parameter is optional; excluding it tells the sync to assume the item is stored in the default Documents drive.
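As a sketch of the SharePoint case, assuming a configured `$carbon` client: the `["id" => ..., "root_id" => ...]` entry shape below is an assumption based on the description above, and both IDs are hypothetical placeholders; check the IdsProperty model in the SDK to confirm the exact shape.

```php
<?php
// Hypothetical SharePoint sync: each id carries the drive it lives in.
// The ["id" => ..., "root_id" => ...] shape is an assumption; confirm
// against the SDK's IdsProperty model before relying on it.
$ids = [
    ["id" => "external_file_id_example", "root_id" => "root_external_id_example"],
];

// With a configured client this would be passed straight through:
// $result = $carbon->integrations->syncFiles(data_source_id: 1, ids: $ids);

echo json_encode($ids);
```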
🛠️ Usage
$result = $carbon->integrations->syncFiles( data_source_id: 1, ids: [ "string_example" ], tags: [], chunk_size: 1500, chunk_overlap: 20, skip_embedding_generation: False, embedding_model: "OPENAI", generate_sparse_vectors: False, prepend_filename_to_chunks: False, max_items_per_chunk: 1, set_page_as_boundary: False, request_id: "string_example", use_ocr: False, parse_pdf_tables_with_ocr: False, incremental_sync: False, file_sync_config: [ "auto_synced_source_types" => ["ARTICLE"], "sync_attachments" => False, "detect_audio_language" => False, "transcription_service" => "assemblyai", "include_speaker_labels" => False, "split_rows" => False, "generate_chunks_only" => False, "store_file_only" => False, "skip_file_processing" => False, ] );
⚙️ Parameters
data_source_id: int
ids: IdsProperty
tags: object
chunk_size: int
chunk_overlap: int
skip_embedding_generation: bool
embedding_model:
generate_sparse_vectors: bool
prepend_filename_to_chunks: bool
max_items_per_chunk: int
Number of objects per chunk. For csv, tsv, xlsx, and json files only.
set_page_as_boundary: bool
request_id: string
use_ocr: bool
parse_pdf_tables_with_ocr: bool
incremental_sync: bool
Only sync files if they have not already been synced or if the embedding properties have changed. This flag is currently supported by ONEDRIVE, GOOGLE_DRIVE, BOX, DROPBOX, INTERCOM, GMAIL, OUTLOOK, ZENDESK, CONFLUENCE, NOTION, SHAREPOINT, SERVICENOW. It will be ignored for other data sources.
file_sync_config: FileSyncConfigNullable
🔄 Return
🌐 Endpoint
/integrations/files/sync
POST
🔙 Back to Table of Contents
carbon.integrations.syncGitHub
Refer to this article to obtain an access token: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens. Make sure that your access token has permission to read content from your desired repos. Note that if your access token expires, you will need to manually update it through this endpoint.
🛠️ Usage
$result = $carbon->integrations->syncGitHub( username: "string_example", access_token: "string_example", sync_source_items: False, data_source_tags: [] );
⚙️ Parameters
username: string
access_token: string
sync_source_items: bool
Enabling this flag will fetch all available content from the source to be listed via the list items endpoint.
data_source_tags: object
Tags to be associated with the data source. If the data source already has tags set, then an upsert will be performed.
🔄 Return
🌐 Endpoint
/integrations/github
POST
🔙 Back to Table of Contents
carbon.integrations.syncGitbook
You can sync up to 20 GitBook spaces at a time using this endpoint. The additional parameters below can be used to associate data with the synced pages or modify the sync behavior.
🛠️ Usage
$result = $carbon->integrations->syncGitbook( space_ids: [ "string_example" ], data_source_id: 1, tags: [], chunk_size: 1500, chunk_overlap: 20, skip_embedding_generation: False, embedding_model: "OPENAI", generate_sparse_vectors: False, prepend_filename_to_chunks: False, request_id: "string_example", file_sync_config: [ "auto_synced_source_types" => ["ARTICLE"], "sync_attachments" => False, "detect_audio_language" => False, "transcription_service" => "assemblyai", "include_speaker_labels" => False, "split_rows" => False, "generate_chunks_only" => False, "store_file_only" => False, "skip_file_processing" => False, ] );
⚙️ Parameters
space_ids: string[]
data_source_id: int
tags: object
chunk_size: int
chunk_overlap: int
skip_embedding_generation: bool
embedding_model:
generate_sparse_vectors: bool
prepend_filename_to_chunks: bool
request_id: string
file_sync_config: FileSyncConfigNullable
🔄 Return
object
🌐 Endpoint
/integrations/gitbook/sync
POST
🔙 Back to Table of Contents
carbon.integrations.syncGmail
Once you have successfully connected your Gmail account, you can choose which emails to sync with us using the filters parameter. Filters is a JSON object with key-value pairs. It also supports AND and OR operations. For now, we support the limited set of keys listed below.
label: Inbuilt Gmail labels, for example "Important" or a custom label you created.
after or before: A date in YYYY/mm/dd format (example 2023/12/31). Gets emails after/before a certain date.
You can also use them in combination to get emails from a certain period.
is: Can have the following values - starred, important, snoozed, and unread
from: Email address of the sender
to: Email address of the recipient
in: Can have the following values - sent (sync emails sent by the user)
has: Can have the following values - attachment (sync emails that have attachments)
Using keys or values outside of those specified can lead to unexpected behavior.
A basic query with filters looks like this:
{ "filters": { "key": "label", "value": "Test" } }
This will list all emails that have the label "Test".
You can use AND and OR operations in the following way:
{ "filters": { "AND": [ { "key": "after", "value": "2024/01/07" }, { "OR": [ { "key": "label", "value": "Personal" }, { "key": "is", "value": "starred" } ] } ] } }
This will return emails after 7th of Jan that are either starred or have the label "Personal". Note that this is the highest level of nesting we support, i.e. you can't add more AND/OR filters within the OR filter in the above example.
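The nested filter above maps directly onto a PHP associative array. A minimal sketch, assuming a `$carbon` client is already configured (the actual call is shown commented out since it requires credentials and network access):

```php
<?php
// Nested Gmail filter: emails after 2024/01/07 that are either
// starred or labeled "Personal" (mirrors the JSON example above).
$filters = [
    "AND" => [
        ["key" => "after", "value" => "2024/01/07"],
        [
            "OR" => [
                ["key" => "label", "value" => "Personal"],
                ["key" => "is", "value" => "starred"],
            ],
        ],
    ],
];

// With a configured client:
// $result = $carbon->integrations->syncGmail(filters: $filters);

echo json_encode(["filters" => $filters]);
```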
🛠️ Usage
$result = $carbon->integrations->syncGmail( filters: [], tags: [], chunk_size: 1500, chunk_overlap: 20, skip_embedding_generation: False, embedding_model: "OPENAI", generate_sparse_vectors: False, prepend_filename_to_chunks: False, data_source_id: 1, request_id: "string_example", sync_attachments: False, file_sync_config: [ "auto_synced_source_types" => ["ARTICLE"], "sync_attachments" => False, "detect_audio_language" => False, "transcription_service" => "assemblyai", "include_speaker_labels" => False, "split_rows" => False, "generate_chunks_only" => False, "store_file_only" => False, "skip_file_processing" => False, ], incremental_sync: False );
⚙️ Parameters
filters: object
tags: object
chunk_size: int
chunk_overlap: int
skip_embedding_generation: bool
embedding_model:
generate_sparse_vectors: bool
prepend_filename_to_chunks: bool
data_source_id: int
request_id: string
sync_attachments: bool
file_sync_config: FileSyncConfigNullable
incremental_sync: bool
🔄 Return
🌐 Endpoint
/integrations/gmail/sync
POST
🔙 Back to Table of Contents
carbon.integrations.syncOutlook
Once you have successfully connected your Outlook account, you can choose which emails to sync with us using the filters and folder parameters. "folder" should be the folder you want to sync from Outlook; by default we get messages from your inbox folder.
Filters is a JSON object with key-value pairs. It also supports AND and OR operations. For now, we support the limited set of keys listed below.
category: Custom categories that you created in Outlook.
after or before: A date in YYYY/mm/dd format (example 2023/12/31). Gets emails after/before a certain date. You can also use them in combination to get emails from a certain period.
is: Can have the following values: flagged
from: Email address of the sender
A basic query with filters looks like this:
{ "filters": { "key": "category", "value": "Test" } }
This will list all emails that have the category "Test".
To specify a custom folder in the same query:
{ "folder": "Folder Name", "filters": { "key": "category", "value": "Test" } }
You can use AND and OR operations in the following way:
{ "filters": { "AND": [ { "key": "after", "value": "2024/01/07" }, { "OR": [ { "key": "category", "value": "Personal" }, { "key": "category", "value": "Test" } ] } ] } }
This will return emails after 7th of Jan that have either Personal or Test as category. Note that this is the highest level of nesting we support, i.e. you can't add more AND/OR filters within the OR filter in the above example.
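As with Gmail, the folder and filter payload is a plain PHP associative array. A minimal sketch, assuming a configured `$carbon` client (the call itself is commented out since it needs credentials):

```php
<?php
// Outlook sync payload: custom folder plus an AND/OR filter
// (mirrors the JSON examples above).
$folder = "Folder Name";
$filters = [
    "AND" => [
        ["key" => "after", "value" => "2024/01/07"],
        [
            "OR" => [
                ["key" => "category", "value" => "Personal"],
                ["key" => "category", "value" => "Test"],
            ],
        ],
    ],
];

// With a configured client:
// $result = $carbon->integrations->syncOutlook(folder: $folder, filters: $filters);

echo json_encode(["folder" => $folder, "filters" => $filters]);
```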
🛠️ Usage
$result = $carbon->integrations->syncOutlook( filters: [], tags: [], folder: "Inbox", chunk_size: 1500, chunk_overlap: 20, skip_embedding_generation: False, embedding_model: "OPENAI", generate_sparse_vectors: False, prepend_filename_to_chunks: False, data_source_id: 1, request_id: "string_example", sync_attachments: False, file_sync_config: [ "auto_synced_source_types" => ["ARTICLE"], "sync_attachments" => False, "detect_audio_language" => False, "transcription_service" => "assemblyai", "include_speaker_labels" => False, "split_rows" => False, "generate_chunks_only" => False, "store_file_only" => False, "skip_file_processing" => False, ], incremental_sync: False );
⚙️ Parameters
filters: object
tags: object
folder: string
chunk_size: int
chunk_overlap: int
skip_embedding_generation: bool
embedding_model:
generate_sparse_vectors: bool
prepend_filename_to_chunks: bool
data_source_id: int
request_id: string
sync_attachments: bool
file_sync_config: FileSyncConfigNullable
incremental_sync: bool
🔄 Return
🌐 Endpoint
/integrations/outlook/sync
POST
🔙 Back to Table of Contents
carbon.integrations.syncRepos
You can retrieve the repos your token has access to using /integrations/github/repos and sync their content. You can also pass the full name of any public repository (username/repo-name). This will store the repo content with Carbon, which can be accessed through the /integrations/items/list endpoint. A maximum of 25 repositories is accepted per request.
🛠️ Usage
$result = $carbon->integrations->syncRepos( repos: [ "string_example" ], data_source_id: 1 );
⚙️ Parameters
repos: string[]
data_source_id: int
🔄 Return
object
🌐 Endpoint
/integrations/github/sync_repos
POST
🔙 Back to Table of Contents
carbon.integrations.syncRssFeed
RSS Feed
🛠️ Usage
$result = $carbon->integrations->syncRssFeed( url: "string_example", tags: [], chunk_size: 1500, chunk_overlap: 20, skip_embedding_generation: False, embedding_model: "OPENAI", generate_sparse_vectors: False, prepend_filename_to_chunks: False, request_id: "string_example", data_source_tags: [] );
⚙️ Parameters
url: string
tags: object
chunk_size: int
chunk_overlap: int
skip_embedding_generation: bool
embedding_model:
generate_sparse_vectors: bool
prepend_filename_to_chunks: bool
request_id: string
data_source_tags: object
Tags to be associated with the data source. If the data source already has tags set, then an upsert will be performed.
🔄 Return
🌐 Endpoint
/integrations/rss_feed
POST
🔙 Back to Table of Contents
carbon.integrations.syncS3Files
After optionally loading the items via /integrations/items/sync and /integrations/items/list, use the bucket name and object key as the ID in this endpoint to sync them into Carbon. The additional parameters below can be used to associate data with the selected items or modify the sync behavior.
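The ids accepted here can name a whole bucket, a bucket plus a prefix, or a bucket plus an object key. A hypothetical sketch of those three shapes: the `bucket`/`prefix`/`key` field names are an assumption based on the parameter description, so confirm them against the S3GetFileInput model before use.

```php
<?php
// Hypothetical examples of the accepted id shapes for syncS3Files.
// Field names are assumptions; check the S3GetFileInput model.
$ids = [
    ["bucket" => "my-bucket"],                                 // whole bucket
    ["bucket" => "my-bucket", "prefix" => "reports/2024/"],    // common path; note trailing slash
    ["bucket" => "my-bucket", "key" => "reports/2024/q1.pdf"], // single object
];

// With a configured client:
// $result = $carbon->integrations->syncS3Files(ids: $ids, data_source_id: 1);

echo json_encode($ids);
```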
🛠️ Usage
$result = $carbon->integrations->syncS3Files( ids: [ [ ] ], tags: [], chunk_size: 1500, chunk_overlap: 20, skip_embedding_generation: False, embedding_model: "OPENAI", generate_sparse_vectors: False, prepend_filename_to_chunks: False, max_items_per_chunk: 1, set_page_as_boundary: False, data_source_id: 1, request_id: "string_example", use_ocr: False, parse_pdf_tables_with_ocr: False, file_sync_config: [ "auto_synced_source_types" => ["ARTICLE"], "sync_attachments" => False, "detect_audio_language" => False, "transcription_service" => "assemblyai", "include_speaker_labels" => False, "split_rows" => False, "generate_chunks_only" => False, "store_file_only" => False, "skip_file_processing" => False, ] );
⚙️ Parameters
ids: S3GetFileInput[]
Each input should be one of the following: a bucket name, a bucket name and a prefix, or a bucket name and an object key. A prefix is the common path for all objects you want to sync. Paths should end with a forward slash.
tags: object
chunk_size: int
chunk_overlap: int
skip_embedding_generation: bool
embedding_model:
generate_sparse_vectors: bool
prepend_filename_to_chunks: bool
max_items_per_chunk: int
Number of objects per chunk. For csv, tsv, xlsx, and json files only.
set_page_as_boundary: bool
data_source_id: int
request_id: string
use_ocr: bool
parse_pdf_tables_with_ocr: bool
file_sync_config: FileSyncConfigNullable
🔄 Return
🌐 Endpoint
/integrations/s3/files
POST
🔙 Back to Table of Contents
carbon.integrations.syncSlack
You can list all conversations using the /integrations/slack/conversations endpoint. The ID of the conversation will be used as an input for this endpoint, with timestamps as optional filters.
🛠️ Usage
$result = $carbon->integrations->syncSlack( filters: [ "conversation_id" => "conversation_id_example", ], tags: [], chunk_size: 1500, chunk_overlap: 20, skip_embedding_generation: False, embedding_model: "OPENAI", generate_sparse_vectors: False, prepend_filename_to_chunks: False, data_source_id: 1, request_id: "string_example" );
⚙️ Parameters
filters: SlackFilters
tags: object
chunk_size: int
chunk_overlap: int
skip_embedding_generation: bool
embedding_model:
generate_sparse_vectors: bool
prepend_filename_to_chunks: bool
data_source_id: int
request_id: string
🔄 Return
object
🌐 Endpoint
/integrations/slack/sync
POST
🔙 Back to Table of Contents
carbon.organizations.get
Get Organization
🛠️ Usage
$result = $carbon->organizations->get();
🔄 Return
🌐 Endpoint
/organization
GET
🔙 Back to Table of Contents
carbon.organizations.update
Update Organization
🛠️ Usage
$result = $carbon->organizations->update( global_user_config: [ ], data_source_configs: [ "key" => [ "allowed_file_formats" => [], ], ] );
⚙️ Parameters
global_user_config: UserConfigurationNullable
data_source_configs: array<string, DataSourceConfiguration>
Used to set organization level defaults for configuration related to data sources.
🔄 Return
🌐 Endpoint
/organization/update
POST
🔙 Back to Table of Contents
carbon.organizations.updateStats
Use this endpoint to reaggregate the statistics for an organization, for example aggregate_file_size. The reaggregation process is asynchronous, so a webhook with the event type FILE_STATISTICS_AGGREGATED will be sent to notify you when the process is complete. After the aggregation is complete, the updated statistics can be retrieved using the /organization endpoint. The response of /organization will also contain a timestamp of the last time the statistics were reaggregated.
🛠️ Usage
$result = $carbon->organizations->updateStats();
🔄 Return
🌐 Endpoint
/organization/statistics
POST
🔙 Back to Table of Contents
carbon.users.all
List users within an organization
🛠️ Usage
$result = $carbon->users->all( pagination: [ "limit" => 10, "offset" => 0, "starting_id" => 0, ], filters: [ ], order_by: "created_at", order_dir: "asc", include_count: False );
⚙️ Parameters
pagination: Pagination
filters: ListUsersFilters
order_by:
order_dir:
include_count: bool
🔄 Return
🌐 Endpoint
/list_users
POST
🔙 Back to Table of Contents
carbon.users.delete
Delete Users
🛠️ Usage
$result = $carbon->users->delete( customer_ids: [ "string_example" ] );
⚙️ Parameters
customer_ids: string[]
🔄 Return
🌐 Endpoint
/delete_users
POST
🔙 Back to Table of Contents
carbon.users.get
User Endpoint
🛠️ Usage
$result = $carbon->users->get( customer_id: "string_example" );
⚙️ Parameters
customer_id: string
🔄 Return
🌐 Endpoint
/user
POST
🔙 Back to Table of Contents
carbon.users.toggleUserFeatures
Toggle User Features
🛠️ Usage
$result = $carbon->users->toggleUserFeatures( configuration_key_name: "sparse_vectors", value: [] );
⚙️ Parameters
configuration_key_name:
value: object
🔄 Return
🌐 Endpoint
/modify_user_configuration
POST
🔙 Back to Table of Contents
carbon.users.updateUsers
Update Users
🛠️ Usage
$result = $carbon->users->updateUsers( customer_ids: [ "string_example" ], auto_sync_enabled_sources: [ "string_example" ], max_files: -1, max_files_per_upload: -1, max_characters: -1, max_characters_per_file: -1, max_characters_per_upload: -1, auto_sync_interval: -1 );
⚙️ Parameters
customer_ids: string[]
List of organization supplied user IDs
auto_sync_enabled_sources: AutoSyncEnabledSourcesProperty
max_files: int
Custom file upload limit for the user over all user's files across all uploads. If set, then the user will not be allowed to upload more files than this limit. If not set, or if set to -1, then the user will have no limit.
max_files_per_upload: int
Custom file upload limit for the user across a single upload. If set, then the user will not be allowed to upload more files than this limit in a single upload. If not set, or if set to -1, then the user will have no limit.
max_characters: int
Custom character upload limit for the user over all user's files across all uploads. If set, then the user will not be allowed to upload more characters than this limit. If not set, or if set to -1, then the user will have no limit.
max_characters_per_file: int
A single file upload from the user can not exceed this character limit. If set, then the file will not be synced if it exceeds this limit. If not set, or if set to -1, then the user will have no limit.
max_characters_per_upload: int
Custom character upload limit for the user across a single upload. If set, then the user won't be able to sync more than this many characters in one upload. If not set, or if set to -1, then the user will have no limit.
auto_sync_interval: int
The interval in hours at which the user's data sources should be synced. If not set or set to -1, the user will be synced at the organization level interval or default interval if that is also not set. Must be one of [3, 6, 12, 24]
🔄 Return
🌐 Endpoint
/update_users
POST
🔙 Back to Table of Contents
carbon.users.whoAmI
Me Endpoint
🛠️ Usage
$result = $carbon->users->whoAmI();
🔄 Return
🌐 Endpoint
/whoami
GET
🔙 Back to Table of Contents
carbon.utilities.fetchUrls
Extracts all URLs from a webpage.
Args: url (str): URL of the webpage
Returns: FetchURLsResponse: A response object with a list of URLs extracted from the webpage and the webpage content.
🛠️ Usage
$result = $carbon->utilities->fetchUrls( url: "url_example" );
⚙️ Parameters
url: string
🔄 Return
🌐 Endpoint
/fetch_urls
GET
🔙 Back to Table of Contents
carbon.utilities.fetchWebpage
Fetch Urls V2
🛠️ Usage
$result = $carbon->utilities->fetchWebpage( url: "string_example" );
⚙️ Parameters
url: string
🔄 Return
object
🌐 Endpoint
/fetch_webpage
POST
🔙 Back to Table of Contents
carbon.utilities.fetchYoutubeTranscripts
Fetches English transcripts from YouTube videos.
Args: id (str): The ID of the YouTube video. raw (bool): Whether to return the raw transcript or not. Defaults to False.
Returns: dict: A dictionary with the transcript of the YouTube video.
🛠️ Usage
$result = $carbon->utilities->fetchYoutubeTranscripts( id: "id_example", raw: False );
⚙️ Parameters
id: string
raw: bool
🔄 Return
🌐 Endpoint
/fetch_youtube_transcript
GET
🔙 Back to Table of Contents
carbon.utilities.processSitemap
Retrieves all URLs from a sitemap, which can subsequently be utilized with our web_scrape
endpoint.
🛠️ Usage
$result = $carbon->utilities->processSitemap( url: "url_example" );
⚙️ Parameters
url: string
🔄 Return
object
🌐 Endpoint
/process_sitemap
GET
🔙 Back to Table of Contents
carbon.utilities.scrapeSitemap
Extracts all URLs from a sitemap and performs a web scrape on each of them.
Args: sitemap_url (str): URL of the sitemap
Returns: dict: A response object with the status of the scraping job message.
🛠️ Usage
$result = $carbon->utilities->scrapeSitemap( url: "string_example", tags: [ "key" => "string_example", ], max_pages_to_scrape: 1, chunk_size: 1500, chunk_overlap: 20, skip_embedding_generation: False, enable_auto_sync: False, generate_sparse_vectors: False, prepend_filename_to_chunks: False, html_tags_to_skip: [], css_classes_to_skip: [], css_selectors_to_skip: [], embedding_model: "OPENAI", url_paths_to_include: [], url_paths_to_exclude: [], urls_to_scrape: [], download_css_and_media: False, generate_chunks_only: False, store_file_only: False, use_premium_proxies: False );
⚙️ Parameters
url: string
tags: array<string, Tags1>
max_pages_to_scrape: int
chunk_size: int
chunk_overlap: int
skip_embedding_generation: bool
enable_auto_sync: bool
generate_sparse_vectors: bool
prepend_filename_to_chunks: bool
html_tags_to_skip: string[]
css_classes_to_skip: string[]
css_selectors_to_skip: string[]
embedding_model:
url_paths_to_include: string[]
URL subpaths or directories that you want to include. For example, if you want to only include URLs that start with /questions in stackoverflow.com, you will add /questions/ in this input.
url_paths_to_exclude: string[]
URL subpaths or directories that you want to exclude. For example, if you want to exclude URLs that start with /questions in stackoverflow.com, you will add /questions/ in this input.
urls_to_scrape: string[]
You can submit a subset of URLs from the sitemap that should be scraped. To get the list of URLs, you can check out the /process_sitemap endpoint. If left empty, all URLs from the sitemap will be scraped.
download_css_and_media: bool
Whether the scraper should download CSS and media from the page (images, fonts, etc.). Scrapes might take longer to finish with this flag enabled, but the success rate is improved.
generate_chunks_only: bool
If this flag is enabled, the file will be chunked and stored with Carbon, but no embeddings will be generated. This overrides the skip_embedding_generation flag.
store_file_only: bool
If this flag is enabled, the file will be stored with Carbon, but no processing will be done.
use_premium_proxies: bool
If the default proxies are blocked and not returning results, this flag can be enabled to use alternate proxies (residential and office). Scrapes might take longer to finish with this flag enabled.
🔄 Return
object
🌐 Endpoint
/scrape_sitemap
POST
🔙 Back to Table of Contents
carbon.utilities.scrapeWeb
Conduct a web scrape on a given webpage URL. Our web scraper is fully compatible with JavaScript and supports recursion depth, enabling you to efficiently extract all content from the target website.
🛠️ Usage
$result = $carbon->utilities->scrapeWeb( body: [ [ "url" => "url_example", "recursion_depth" => 3, "max_pages_to_scrape" => 100, "chunk_size" => 1500, "chunk_overlap" => 20, "skip_embedding_generation" => False, "enable_auto_sync" => False, "generate_sparse_vectors" => False, "prepend_filename_to_chunks" => False, "html_tags_to_skip" => [], "css_classes_to_skip" => [], "css_selectors_to_skip" => [], "embedding_model" => "OPENAI", "url_paths_to_include" => [], "download_css_and_media" => False, "generate_chunks_only" => False, "store_file_only" => False, "use_premium_proxies" => False, ] ], );
⚙️ Request Body
🔄 Return
object
🌐 Endpoint
/web_scrape
POST
🔙 Back to Table of Contents
carbon.utilities.searchUrls
Perform a web search and obtain a list of relevant URLs.
As an illustration, when you perform a search for "content related to MRNA," you will receive a list of links such as the following:
- https://tomrenz.substack.com/p/mrna-and-why-it-matters
- https://www.statnews.com/2020/11/10/the-story-of-mrna-how-a-once-dismissed-idea-became-a-leading-technology-in-the-covid-vaccine-race/
- https://www.statnews.com/2022/11/16/covid-19-vaccines-were-a-success-but-mrna-still-has-a-delivery-problem/
- https://joomi.substack.com/p/were-still-being-misled-about-how
Subsequently, you can submit these links to the web_scrape endpoint in order to retrieve the content of the respective web pages.
Args: query (str): Query to search for
Returns: FetchURLsResponse: A response object with a list of URLs for a given search query.
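The search-then-scrape flow described above can be sketched in a few lines. This is a hypothetical example, assuming a configured `$carbon` client; the hardcoded URL list stands in for the searchUrls result, and the live calls are shown commented out since they need credentials and network access:

```php
<?php
// Stand-in for the URLs a searchUrls call would return.
$urls = [
    "https://tomrenz.substack.com/p/mrna-and-why-it-matters",
    "https://joomi.substack.com/p/were-still-being-misled-about-how",
];
// With a configured client, the list would come from:
// $response = $carbon->utilities->searchUrls(query: "content related to MRNA");

// Build one scrape request per URL for the /web_scrape endpoint.
$requests = array_map(
    fn (string $url) => ["url" => $url, "recursion_depth" => 0],
    $urls
);

// $result = $carbon->utilities->scrapeWeb(body: $requests);
echo json_encode($requests);
```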
🛠️ Usage
$result = $carbon->utilities->searchUrls( query: "query_example" );
⚙️ Parameters
query: string
🔄 Return
🌐 Endpoint
/search_urls
GET
🔙 Back to Table of Contents
carbon.utilities.userWebpages
User Web Pages
🛠️ Usage
$result = $carbon->utilities->userWebpages( filters: [ ], pagination: [ "limit" => 10, "offset" => 0, "starting_id" => 0, ], order_by: "created_at", order_dir: "asc" );
⚙️ Parameters
filters: UserWebPagesFilters
pagination: Pagination
order_by:
order_dir:
🔄 Return
object
🌐 Endpoint
/user_webpages
POST
🔙 Back to Table of Contents
carbon.webhooks.addUrl
Add Webhook Url
🛠️ Usage
$result = $carbon->webhooks->addUrl( url: "string_example" );
⚙️ Parameters
url: string
🔄 Return
🌐 Endpoint
/add_webhook
POST
🔙 Back to Table of Contents
carbon.webhooks.deleteUrl
Delete Webhook Url
🛠️ Usage
$result = $carbon->webhooks->deleteUrl( webhook_id: 1 );
⚙️ Parameters
webhook_id: int
🔄 Return
🌐 Endpoint
/delete_webhook/{webhook_id}
DELETE
🔙 Back to Table of Contents
carbon.webhooks.urls
Webhook Urls
🛠️ Usage
$result = $carbon->webhooks->urls( pagination: [ "limit" => 10, "offset" => 0, "starting_id" => 0, ], order_by: "created_at", order_dir: "desc", filters: [ "ids" => [], ] );
⚙️ Parameters
pagination: Pagination
order_by:
order_dir:
filters: WebhookFilters
🔄 Return
🌐 Endpoint
/webhooks
POST
🔙 Back to Table of Contents
carbon.whiteLabel.all
List White Labels
🛠️ Usage
$result = $carbon->whiteLabel->all( pagination: [ "limit" => 10, "offset" => 0, "starting_id" => 0, ], order_by: "created_at", order_dir: "desc", filters: [ "ids" => [], "data_source_type" => [], ] );
⚙️ Parameters
pagination: Pagination
order_by:
order_dir:
filters: WhiteLabelFilters
🔄 Return
object
🌐 Endpoint
/white_label/list
POST
🔙 Back to Table of Contents
carbon.whiteLabel.create
Create White Labels
🛠️ Usage
$result = $carbon->whiteLabel->create( body: [ [ "data_source_type" => "GOOGLE_DRIVE", "credentials" => [ "client_id" => "client_id_example", "redirect_uri" => "redirect_uri_example", ], ] ], );
⚙️ Request Body
WhiteLabelCreateRequestInner[]
🔄 Return
object
🌐 Endpoint
/white_label/create
POST
🔙 Back to Table of Contents
carbon.whiteLabel.delete
Delete White Labels
🛠️ Usage
$result = $carbon->whiteLabel->delete( ids: [ 1 ] );
⚙️ Parameters
ids: int[]
🔄 Return
object
🌐 Endpoint
/white_label/delete
POST
🔙 Back to Table of Contents
carbon.whiteLabel.update
Update White Label
🛠️ Usage
$result = $carbon->whiteLabel->update( body: [ "data_source_type" => "GOOGLE_DRIVE", "credentials" => [ "client_id" => "client_id_example", "redirect_uri" => "redirect_uri_example", ], ], data_source_type: "INTERCOM", credentials: [ "client_id" => "client_id_example", "redirect_uri" => "redirect_uri_example", ] );
⚙️ Parameters
data_source_type: string
credentials: Credentials
🔄 Return
object
🌐 Endpoint
/white_label/update
POST
🔙 Back to Table of Contents
Author
This PHP package is automatically generated by Konfig