Elevate your projects with the fastest & most powerful text to speech & voice API. Quickly generate AI voices in multiple languages for your chatbots, agents, LLMs, websites, apps and more. ElevenLabs's TypeScript SDK generated by Konfig (https://konfigthis.com/).

konfig-sdks/eleven-labs-typescript-sdk

Visit Elevenlabs

This is the documentation for the ElevenLabs API. You can use this API to access our service programmatically with your xi-api-key.
You can view your xi-api-key in the 'Profile' tab on https://elevenlabs.io. Our API is experimental, so all endpoints are subject to change.

Table of Contents

Installation

Getting Started

import { ElevenLabs } from "eleven-labs-typescript-sdk";

const elevenlabs = new ElevenLabs({
  // Defining the base path is optional and defaults to https://api.elevenlabs.io
  // basePath: "https://api.elevenlabs.io",
});

const createFromFileResponse =
  await elevenlabs.pronunciationDictionary.createFromFile({
    name: "name_example",
  });

console.log(createFromFileResponse);

Reference

elevenlabs.pronunciationDictionary.createFromFile

Creates a new pronunciation dictionary from a lexicon .PLS file

πŸ› οΈ Usage

const createFromFileResponse =
  await elevenlabs.pronunciationDictionary.createFromFile({
    name: "name_example",
  });

βš™οΈ Parameters

name: string

The name of the pronunciation dictionary, used for identification only.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

description: string

A description of the pronunciation dictionary, used for identification only.

file: Uint8Array | File | buffer.File

A lexicon .pls file used to initialize the project.
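
For reference, a minimal lexicon in the W3C Pronunciation Lexicon Specification (PLS) format looks like the sketch below; the lexeme entry is illustrative, not from the source.

```typescript
// A minimal .pls lexicon (W3C Pronunciation Lexicon Specification).
// The lexeme below is illustrative; replace it with your own entries.
const plsLexicon = `<?xml version="1.0" encoding="UTF-8"?>
<lexicon version="1.0"
  xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
  alphabet="ipa" xml:lang="en-US">
  <lexeme>
    <grapheme>ElevenLabs</grapheme>
    <alias>eleven labs</alias>
  </lexeme>
</lexicon>`;

// Encoded as bytes, this can be passed as the file parameter.
const plsBytes = new TextEncoder().encode(plsLexicon);
```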

πŸ”„ Return

AddPronunciationDictionaryResponseModel

🌐 Endpoint

/v1/pronunciation-dictionaries/add-from-file POST

πŸ”™ Back to Table of Contents


elevenlabs.pronunciationDictionary.getMetadata

Get metadata for a pronunciation dictionary

πŸ› οΈ Usage

const getMetadataResponse =
  await elevenlabs.pronunciationDictionary.getMetadata({
    pronunciationDictionaryId: "pronunciationDictionaryId_example",
  });

βš™οΈ Parameters

pronunciationDictionaryId: string

The id of the pronunciation dictionary

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

πŸ”„ Return

GetPronunciationDictionaryMetadataResponseModel

🌐 Endpoint

/v1/pronunciation-dictionaries/{pronunciation_dictionary_id} GET

πŸ”™ Back to Table of Contents


elevenlabs.admin.archiveCouponPromocodePost

Archive Coupon

πŸ› οΈ Usage

const archiveCouponPromocodePostResponse =
  await elevenlabs.admin.archiveCouponPromocodePost({
    promocode: "promocode_example",
  });

βš™οΈ Parameters

promocode: string

🌐 Endpoint

/admin/n8enylacgd/coupon/{promocode}/archive POST

πŸ”™ Back to Table of Contents


elevenlabs.admin.editVanityLink

Edit Vanity Link

πŸ› οΈ Usage

const editVanityLinkResponse = await elevenlabs.admin.editVanityLink({
  vanityLinkId: "vanityLinkId_example",
  vanity_slug: "vanity_slug_example",
  target_url: "target_url_example",
  comment: "comment_example",
});

βš™οΈ Parameters

vanity_slug: string

The new slug for the vanity link. For example, if you want the vanity link to be /blog/NEW_SLUG, enter NEW_SLUG.

target_url: string

The new URL that the vanity link should redirect to.

comment: string

A new comment or description for the vanity link.

vanityLinkId: string

🌐 Endpoint

/admin/n8enylacgd/vanity-link/{vanity_link_id}/update POST

πŸ”™ Back to Table of Contents


elevenlabs.admin.getAllCoupons

Get All Coupons

πŸ› οΈ Usage

const getAllCouponsResponse = await elevenlabs.admin.getAllCoupons();

🌐 Endpoint

/admin/n8enylacgd/coupons GET

πŸ”™ Back to Table of Contents


elevenlabs.admin.getAllVanityLinks

Get All Vanity Links

πŸ› οΈ Usage

const getAllVanityLinksResponse = await elevenlabs.admin.getAllVanityLinks();

🌐 Endpoint

/admin/n8enylacgd/vanity-links GET

πŸ”™ Back to Table of Contents


elevenlabs.admin.getVanityLink

Get Vanity Link

πŸ› οΈ Usage

const getVanityLinkResponse = await elevenlabs.admin.getVanityLink({
  slug: "slug_example",
});

βš™οΈ Parameters

slug: string

🌐 Endpoint

/admin/n8enylacgd/vanity-link/{slug} GET

πŸ”™ Back to Table of Contents


elevenlabs.admin.removeVanityLink

Delete Vanity Link

πŸ› οΈ Usage

const removeVanityLinkResponse = await elevenlabs.admin.removeVanityLink({
  vanityLinkId: "vanityLinkId_example",
});

βš™οΈ Parameters

vanityLinkId: string

🌐 Endpoint

/admin/n8enylacgd/vanity-link/{vanity_link_id}/delete POST

πŸ”™ Back to Table of Contents


elevenlabs.audioNative.createProjectWithEmbeddableHtml

Creates an AudioNative enabled project, optionally starts conversion, and returns the project id and an embeddable HTML snippet.

πŸ› οΈ Usage

import * as fs from "fs";

const createProjectWithEmbeddableHtmlResponse =
  await elevenlabs.audioNative.createProjectWithEmbeddableHtml({
    name: "name_example",
    small: false,
    sessionization: 0,
    file: fs.readFileSync("/path/to/file"),
    auto_convert: false,
  });

βš™οΈ Parameters

name: string

Project name.

file: Uint8Array | File | buffer.File

Either a .txt or HTML input file containing the article content. HTML should be formatted as follows: '<html><body><div><p>Your content</p><h5>More of your content</h5><p>Some more of your content</p></div></body></html>'
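
A small helper can produce the HTML shape described above from plain-text paragraphs; this is a hypothetical utility, not part of the SDK.

```typescript
// Wrap plain-text paragraphs into the HTML structure expected by the
// AudioNative file parameter. Hypothetical helper, not part of the SDK.
function toAudioNativeHtml(paragraphs: string[]): string {
  const body = paragraphs.map((p) => `<p>${p}</p>`).join("");
  return `<html><body><div>${body}</div></body></html>`;
}
```

The resulting string can then be encoded to bytes and passed as the file parameter.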

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

title: string

Title used in the player and inserted at the top of the uploaded article. If not provided, the default title set in the Player settings is used.

image: string

Image URL used in the player. If not provided, default image set in the Player settings is used.

author: string

Author used in the player and inserted at the start of the uploaded article. If not provided, the default author set in the Player settings is used.

small: boolean

Whether to use small player or not. If not provided, default value set in the Player settings is used.

textColor: string

Text color used in the player. If not provided, default text color set in the Player settings is used.

backgroundColor: string

Background color used in the player. If not provided, default background color set in the Player settings is used.

sessionization: number

Specifies for how many minutes to persist the session across page reloads. If not provided, default sessionization set in the Player settings is used.

voiceId: string

Voice ID used to voice the content. If not provided, default voice ID set in the Player settings is used.

modelId: string

TTS Model ID used in the player. If not provided, default model ID set in the Player settings is used.

autoConvert: boolean

Whether to auto convert the project to audio or not.

πŸ”„ Return

AudioNativeCreateProjectResponseModel

🌐 Endpoint

/v1/audio-native POST

πŸ”™ Back to Table of Contents


elevenlabs.dubbing.deleteProject

Deletes a dubbing project.

πŸ› οΈ Usage

const deleteProjectResponse = await elevenlabs.dubbing.deleteProject({
  dubbingId: "dubbingId_example",
});

βš™οΈ Parameters

dubbingId: string

ID of the dubbing project.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

🌐 Endpoint

/v1/dubbing/{dubbing_id} DELETE

πŸ”™ Back to Table of Contents


elevenlabs.dubbing.fileInLanguage

Dubs provided audio or video file into given language.

πŸ› οΈ Usage

const fileInLanguageResponse = await elevenlabs.dubbing.fileInLanguage({
  source_lang: "auto",
  target_lang: "target_lang_example",
  num_speakers: 0,
  watermark: false,
  highest_resolution: false,
  dubbing_studio: false,
});

βš™οΈ Parameters

targetLang: string

Target language.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

mode: string

Either 'automatic' or 'manual'.

file: Uint8Array | File | buffer.File

One or more audio files to clone the voice from

csvFile: Uint8Array | File | buffer.File

CSV file containing transcription/translation metadata

foregroundAudioFile: Uint8Array | File | buffer.File

For use only with csv input

backgroundAudioFile: Uint8Array | File | buffer.File

For use only with csv input

name: string

Name of the dubbing project.

sourceUrl: string

URL of the source video/audio file.

sourceLang: string

Source language.

numSpeakers: number

Number of speakers to use for the dubbing.

watermark: boolean

Whether to apply watermark to the output video.

startTime: number

Start time of the source video/audio file.

endTime: number

End time of the source video/audio file.

highestResolution: boolean

Whether to use the highest resolution available.

dubbingStudio: boolean

Whether to prepare dub for edits in dubbing studio.

πŸ”„ Return

DoDubbingResponseModel

🌐 Endpoint

/v1/dubbing POST

πŸ”™ Back to Table of Contents


elevenlabs.dubbing.getFile

Returns dubbed file as a streamed file. Videos will be returned in MP4 format and audio only dubs will be returned in MP3.

πŸ› οΈ Usage

const getFileResponse = await elevenlabs.dubbing.getFile({
  dubbingId: "dubbingId_example",
  languageCode: "languageCode_example",
});

βš™οΈ Parameters

dubbingId: string

ID of the dubbing project.

languageCode: string

ID of the language.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

🌐 Endpoint

/v1/dubbing/{dubbing_id}/audio/{language_code} GET

πŸ”™ Back to Table of Contents


elevenlabs.dubbing.getProjectMetadata

Returns metadata about a dubbing project, including whether it's still in progress.

πŸ› οΈ Usage

const getProjectMetadataResponse = await elevenlabs.dubbing.getProjectMetadata({
  dubbingId: "dubbingId_example",
});

βš™οΈ Parameters

dubbingId: string

ID of the dubbing project.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

πŸ”„ Return

DubbingMetadataResponse

🌐 Endpoint

/v1/dubbing/{dubbing_id} GET

πŸ”™ Back to Table of Contents


elevenlabs.models.listAvailableModels

Gets a list of available models.

πŸ› οΈ Usage

const listAvailableModelsResponse = await elevenlabs.models.listAvailableModels(
  {}
);

βš™οΈ Parameters

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

πŸ”„ Return

ModelResponseModel

🌐 Endpoint

/v1/models GET

πŸ”™ Back to Table of Contents


elevenlabs.projects.createNewProject

Creates a new project. It can be initialized as blank, from a document, or from a URL.

πŸ› οΈ Usage

const createNewProjectResponse = await elevenlabs.projects.createNewProject({
  name: "name_example",
  default_title_voice_id: "default_title_voice_id_example",
  default_paragraph_voice_id: "default_paragraph_voice_id_example",
  default_model_id: "default_model_id_example",
  quality_preset: "standard",
  acx_volume_normalization: false,
  volume_normalization: false,
  pronunciation_dictionary_locators: [
    "pronunciation_dictionary_locators_example",
  ],
});

βš™οΈ Parameters

name: string

The name of the project, used for identification only.

defaultTitleVoiceId: string

The voice_id that corresponds to the default voice used for new titles.

defaultParagraphVoiceId: string

The voice_id that corresponds to the default voice used for new paragraphs.

defaultModelId: string

The model_id of the model to be used for this project. You can query GET https://api.elevenlabs.io/v1/models to list all available models.

pronunciationDictionaryLocators: string[]

A list of pronunciation dictionary locators (id, version_id) encoded as a list of JSON strings, to be applied to the text. JSON-encoded strings are required because projects may be added through formData rather than a JSON body.
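
Because the locators travel inside form data, each one has to be stringified individually; a sketch, with field names following the (id, version_id) shape mentioned above:

```typescript
// Encode pronunciation dictionary locators as individual JSON strings,
// as required when the request is sent as form data.
interface DictionaryLocator {
  pronunciation_dictionary_id: string;
  version_id: string;
}

function encodeLocators(locators: DictionaryLocator[]): string[] {
  return locators.map((locator) => JSON.stringify(locator));
}
```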

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

title: string

An optional title of the project; this will be added as metadata to the mp3 file on project / chapter download.

fromUrl: string

An optional URL from which we will extract content to initialize the project. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the project as blank.

fromDocument: Uint8Array | File | buffer.File

An optional .epub, .pdf, .txt or similar file. If provided, we will initialize the project with its content. If this is set, 'from_url' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the project as blank.

qualityPreset: string

Output quality of the generated audio. Must be one of:
standard - standard output format, 128kbps with 44.1kHz sample rate.
high - high quality output format, 192kbps with 44.1kHz sample rate and major improvements on our side. Using this setting increases the character cost by 20%.
ultra - ultra quality output format, 192kbps with 44.1kHz sample rate and highest improvements on our side. Using this setting increases the character cost by 50%.
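
The character-cost multipliers above can be captured in a small estimator. The helper itself is hypothetical; the percentages come straight from the preset descriptions.

```typescript
// Character-cost multiplier per quality preset, per the descriptions above.
const COST_MULTIPLIER: Record<string, number> = {
  standard: 1.0,
  high: 1.2, // +20% character cost
  ultra: 1.5, // +50% character cost
};

// Estimated character cost for a text of `chars` characters.
function characterCost(chars: number, preset: string): number {
  return Math.round(chars * (COST_MULTIPLIER[preset] ?? 1.0));
}
```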

author: string

An optional name of the author of the project; this will be added as metadata to the mp3 file on project / chapter download.

isbnNumber: string

An optional ISBN number of the project you want to create; this will be added as metadata to the mp3 file on project / chapter download.

acxVolumeNormalization: boolean

[Deprecated] When the project is downloaded, should the returned audio have postprocessing applied to make it compliant with audiobook normalized volume requirements.

volumeNormalization: boolean

When the project is downloaded, should the returned audio have postprocessing applied to make it compliant with audiobook normalized volume requirements.

callbackUrl: string

A URL that will be called by our service when the project is converted, with a JSON payload containing the status of the conversion.

πŸ”„ Return

AddProjectResponseModel

🌐 Endpoint

/v1/projects/add POST

πŸ”™ Back to Table of Contents


elevenlabs.projects.deleteById

Delete a project by its project_id.

πŸ› οΈ Usage

const deleteByIdResponse = await elevenlabs.projects.deleteById({
  projectId: "projectId_example",
});

βš™οΈ Parameters

projectId: string

The project_id of the project. You can query GET https://api.elevenlabs.io/v1/projects to list all available projects.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

🌐 Endpoint

/v1/projects/{project_id} DELETE

πŸ”™ Back to Table of Contents


elevenlabs.projects.deleteChapterById

Delete a chapter by its chapter_id.

πŸ› οΈ Usage

const deleteChapterByIdResponse = await elevenlabs.projects.deleteChapterById({
  projectId: "projectId_example",
  chapterId: "chapterId_example",
});

βš™οΈ Parameters

projectId: string

The project_id of the project. You can query GET https://api.elevenlabs.io/v1/projects to list all available projects.

chapterId: string

The chapter_id of the chapter. You can query GET https://api.elevenlabs.io/v1/projects/{project_id}/chapters to list all available chapters for a project.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

🌐 Endpoint

/v1/projects/{project_id}/chapters/{chapter_id} DELETE

πŸ”™ Back to Table of Contents


elevenlabs.projects.getAllProjects

Returns a list of your projects together with their metadata.

πŸ› οΈ Usage

const getAllProjectsResponse = await elevenlabs.projects.getAllProjects({});

βš™οΈ Parameters

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

πŸ”„ Return

GetProjectsResponseModel

🌐 Endpoint

/v1/projects GET

πŸ”™ Back to Table of Contents


elevenlabs.projects.getById

Returns information about a specific project. This endpoint returns more detailed information about a project than GET api.elevenlabs.io/v1/projects.

πŸ› οΈ Usage

const getByIdResponse = await elevenlabs.projects.getById({
  projectId: "projectId_example",
});

βš™οΈ Parameters

projectId: string

The project_id of the project. You can query GET https://api.elevenlabs.io/v1/projects to list all available projects.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

πŸ”„ Return

ProjectExtendedResponseModel

🌐 Endpoint

/v1/projects/{project_id} GET

πŸ”™ Back to Table of Contents


elevenlabs.projects.getChapterById

Returns information about a specific chapter.

πŸ› οΈ Usage

const getChapterByIdResponse = await elevenlabs.projects.getChapterById({
  projectId: "projectId_example",
  chapterId: "chapterId_example",
});

βš™οΈ Parameters

projectId: string

The project_id of the project. You can query GET https://api.elevenlabs.io/v1/projects to list all available projects.

chapterId: string

The chapter_id of the chapter. You can query GET https://api.elevenlabs.io/v1/projects/{project_id}/chapters to list all available chapters for a project.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

πŸ”„ Return

ChapterResponseModel

🌐 Endpoint

/v1/projects/{project_id}/chapters/{chapter_id} GET

πŸ”™ Back to Table of Contents


elevenlabs.projects.getChapterSnapshots

Gets information about all the snapshots of a chapter; each snapshot can be downloaded as audio. Whenever a chapter is converted, a snapshot is automatically created.

πŸ› οΈ Usage

const getChapterSnapshotsResponse =
  await elevenlabs.projects.getChapterSnapshots({
    projectId: "projectId_example",
    chapterId: "chapterId_example",
  });

βš™οΈ Parameters

projectId: string

The project_id of the project. You can query GET https://api.elevenlabs.io/v1/projects to list all available projects.

chapterId: string

The chapter_id of the chapter. You can query GET https://api.elevenlabs.io/v1/projects/{project_id}/chapters to list all available chapters for a project.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

πŸ”„ Return

ChapterSnapshotsResponseModel

🌐 Endpoint

/v1/projects/{project_id}/chapters/{chapter_id}/snapshots GET

πŸ”™ Back to Table of Contents


elevenlabs.projects.listChapters

Returns a list of your chapters for a project together with their metadata.

πŸ› οΈ Usage

const listChaptersResponse = await elevenlabs.projects.listChapters({
  projectId: "projectId_example",
});

βš™οΈ Parameters

projectId: string

The project_id of the project. You can query GET https://api.elevenlabs.io/v1/projects to list all available projects.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

πŸ”„ Return

GetChaptersResponseModel

🌐 Endpoint

/v1/projects/{project_id}/chapters GET

πŸ”™ Back to Table of Contents


elevenlabs.projects.listSnapshots

Gets the snapshots of a project.

πŸ› οΈ Usage

const listSnapshotsResponse = await elevenlabs.projects.listSnapshots({
  projectId: "projectId_example",
});

βš™οΈ Parameters

projectId: string

The project_id of the project. You can query GET https://api.elevenlabs.io/v1/projects to list all available projects.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

πŸ”„ Return

ProjectSnapshotsResponseModel

🌐 Endpoint

/v1/projects/{project_id}/snapshots GET

πŸ”™ Back to Table of Contents


elevenlabs.projects.startChapterConversion

Starts conversion of a specific chapter.

πŸ› οΈ Usage

const startChapterConversionResponse =
  await elevenlabs.projects.startChapterConversion({
    projectId: "projectId_example",
    chapterId: "chapterId_example",
  });

βš™οΈ Parameters

projectId: string

The project_id of the project. You can query GET https://api.elevenlabs.io/v1/projects to list all available projects.

chapterId: string

The chapter_id of the chapter. You can query GET https://api.elevenlabs.io/v1/projects/{project_id}/chapters to list all available chapters for a project.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

🌐 Endpoint

/v1/projects/{project_id}/chapters/{chapter_id}/convert POST

πŸ”™ Back to Table of Contents


elevenlabs.projects.startConversion

Starts conversion of a project and all of its chapters.

πŸ› οΈ Usage

const startConversionResponse = await elevenlabs.projects.startConversion({
  projectId: "projectId_example",
});

βš™οΈ Parameters

projectId: string

The project_id of the project. You can query GET https://api.elevenlabs.io/v1/projects to list all available projects.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

🌐 Endpoint

/v1/projects/{project_id}/convert POST

πŸ”™ Back to Table of Contents


elevenlabs.projects.streamAudioFromSnapshot

Stream the audio from a project snapshot.

πŸ› οΈ Usage

const streamAudioFromSnapshotResponse =
  await elevenlabs.projects.streamAudioFromSnapshot({
    projectId: "projectId_example",
    projectSnapshotId: "projectSnapshotId_example",
  });

βš™οΈ Parameters

projectId: string

The project_id of the project. You can query GET https://api.elevenlabs.io/v1/projects to list all available projects.

projectSnapshotId: string

The project_snapshot_id of the project snapshot. You can query GET /v1/projects/{project_id}/snapshots to list all available snapshots for a project.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

🌐 Endpoint

/v1/projects/{project_id}/snapshots/{project_snapshot_id}/stream POST

πŸ”™ Back to Table of Contents


elevenlabs.projects.streamAudioFromSnapshotPost

Stream the audio from a chapter snapshot. Use GET /v1/projects/{project_id}/chapters/{chapter_id}/snapshots to return the chapter snapshots of a chapter.

πŸ› οΈ Usage

const streamAudioFromSnapshotPostResponse =
  await elevenlabs.projects.streamAudioFromSnapshotPost({
    projectId: "projectId_example",
    chapterId: "chapterId_example",
    chapterSnapshotId: "chapterSnapshotId_example",
  });

βš™οΈ Parameters

projectId: string

The project_id of the project. You can query GET https://api.elevenlabs.io/v1/projects to list all available projects.

chapterId: string

The chapter_id of the chapter. You can query GET https://api.elevenlabs.io/v1/projects/{project_id}/chapters to list all available chapters for a project.

chapterSnapshotId: string

The chapter_snapshot_id of the chapter snapshot. You can query GET /v1/projects/{project_id}/chapters/{chapter_id}/snapshots to list all available snapshots for a chapter.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

🌐 Endpoint

/v1/projects/{project_id}/chapters/{chapter_id}/snapshots/{chapter_snapshot_id}/stream POST

πŸ”™ Back to Table of Contents


elevenlabs.projects.updatePronunciationDictionaries

Updates the set of pronunciation dictionaries acting on a project. This will automatically mark text within this project as requiring reconverting where the new dictionary would apply or the old one no longer does.

πŸ› οΈ Usage

const updatePronunciationDictionariesResponse =
  await elevenlabs.projects.updatePronunciationDictionaries({
    projectId: "projectId_example",
    pronunciation_dictionary_locators: [
      {
        pronunciation_dictionary_id: "pronunciation_dictionary_id_example",
        version_id: "version_id_example",
      },
    ],
  });

βš™οΈ Parameters

pronunciation_dictionary_locators: PronunciationDictionaryVersionLocatorDBModel[]

A list of pronunciation dictionary locators (id, version_id) encoded as a list of JSON strings, to be applied to the text. JSON-encoded strings are required because projects may be added through formData rather than a JSON body.

projectId: string

The project_id of the project. You can query GET https://api.elevenlabs.io/v1/projects to list all available projects.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

🌐 Endpoint

/v1/projects/{project_id}/update-pronunciation-dictionaries POST

πŸ”™ Back to Table of Contents


elevenlabs.redirect.toMintlifyDocsGet

Redirect To Mintlify

πŸ› οΈ Usage

const toMintlifyDocsGetResponse = await elevenlabs.redirect.toMintlifyDocsGet();

🌐 Endpoint

/docs GET

πŸ”™ Back to Table of Contents


elevenlabs.samples.getAudioFromSample

Returns the audio corresponding to a sample attached to a voice.

πŸ› οΈ Usage

const getAudioFromSampleResponse = await elevenlabs.samples.getAudioFromSample({
  voiceId: "voiceId_example",
  sampleId: "sampleId_example",
});

βš™οΈ Parameters

voiceId: string

Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.

sampleId: string

Sample ID to be used. You can use GET https://api.elevenlabs.io/v1/voices/{voice_id} to list all the available samples for a voice.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

🌐 Endpoint

/v1/voices/{voice_id}/samples/{sample_id}/audio GET

πŸ”™ Back to Table of Contents


elevenlabs.samples.removeById

Removes a sample by its ID.

πŸ› οΈ Usage

const removeByIdResponse = await elevenlabs.samples.removeById({
  voiceId: "voiceId_example",
  sampleId: "sampleId_example",
});

βš™οΈ Parameters

voiceId: string

Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.

sampleId: string

Sample ID to be used. You can use GET https://api.elevenlabs.io/v1/voices/{voice_id} to list all the available samples for a voice.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

🌐 Endpoint

/v1/voices/{voice_id}/samples/{sample_id} DELETE

πŸ”™ Back to Table of Contents


elevenlabs.speechHistory.deleteHistoryItemById

Delete a history item by its ID

πŸ› οΈ Usage

const deleteHistoryItemByIdResponse =
  await elevenlabs.speechHistory.deleteHistoryItemById({
    historyItemId: "historyItemId_example",
  });

βš™οΈ Parameters

historyItemId: string

History item ID to be used. You can use GET https://api.elevenlabs.io/v1/history to receive a list of history items and their IDs.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

🌐 Endpoint

/v1/history/{history_item_id} DELETE

πŸ”™ Back to Table of Contents


elevenlabs.speechHistory.downloadHistoryItems

Download one or more history items. If one history item ID is provided, we will return a single audio file. If more than one history item ID is provided, we will return the history items packed into a .zip file.

πŸ› οΈ Usage

const downloadHistoryItemsResponse =
  await elevenlabs.speechHistory.downloadHistoryItems({
    history_item_ids: ["history_item_ids_example"],
  });

βš™οΈ Parameters

history_item_ids: string[]

A list of history items to download. You can get IDs of history items and other metadata using the GET https://api.elevenlabs.io/v1/history endpoint.
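
Since one ID yields a single audio file and several yield a .zip archive, a download helper might choose the output filename like this (the filenames themselves are hypothetical):

```typescript
// Choose an output filename based on how many history items are requested:
// one ID returns a single audio file, several return a .zip archive.
function downloadFilename(historyItemIds: string[]): string {
  return historyItemIds.length === 1
    ? `${historyItemIds[0]}.mp3`
    : "history-items.zip";
}
```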

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

🌐 Endpoint

/v1/history/download POST

πŸ”™ Back to Table of Contents


elevenlabs.speechHistory.getGeneratedAudioMetadata

Returns metadata about all your generated audio.

πŸ› οΈ Usage

const getGeneratedAudioMetadataResponse =
  await elevenlabs.speechHistory.getGeneratedAudioMetadata({
    pageSize: 100,
  });

βš™οΈ Parameters

pageSize: number

The maximum number of history items to return. Cannot exceed 1000; defaults to 100.

startAfterHistoryItemId: string

The ID after which to start fetching; use this parameter to paginate across a large collection of history items. If this parameter is not provided, history items are fetched starting from the most recently created one, ordered descending by creation date.
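
Pagination works by feeding the last item's ID of one page back in as startAfterHistoryItemId for the next request; a cursor helper might look like this (the history_item_id field name is an assumption about the response shape):

```typescript
// Given one page of history items, return the cursor for the next page,
// or undefined when the page is empty (no further items).
interface HistoryItem {
  history_item_id: string;
}

function nextPageCursor(page: HistoryItem[]): string | undefined {
  return page.length > 0 ? page[page.length - 1].history_item_id : undefined;
}
```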

voiceId: string

Voice ID to be filtered for. You can use GET https://api.elevenlabs.io/v1/voices to receive a list of voices and their IDs.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

πŸ”„ Return

GetSpeechHistoryResponseModel

🌐 Endpoint

/v1/history GET

πŸ”™ Back to Table of Contents


elevenlabs.speechHistory.getHistoryItemAudio

Returns the audio of a history item.

πŸ› οΈ Usage

const getHistoryItemAudioResponse =
  await elevenlabs.speechHistory.getHistoryItemAudio({
    historyItemId: "historyItemId_example",
  });

βš™οΈ Parameters

historyItemId: string

History item ID to be used. You can use GET https://api.elevenlabs.io/v1/history to receive a list of history items and their IDs.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

🌐 Endpoint

/v1/history/{history_item_id}/audio GET

πŸ”™ Back to Table of Contents


elevenlabs.speechHistory.getHistoryItemById

Returns information about a history item by its ID.

πŸ› οΈ Usage

const getHistoryItemByIdResponse =
  await elevenlabs.speechHistory.getHistoryItemById({
    historyItemId: "historyItemId_example",
  });

βš™οΈ Parameters

historyItemId: string

History item ID to be used. You can use GET https://api.elevenlabs.io/v1/history to receive a list of history items and their IDs.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

πŸ”„ Return

SpeechHistoryItemResponseModel

🌐 Endpoint

/v1/history/{history_item_id} GET

πŸ”™ Back to Table of Contents


elevenlabs.speechToSpeech.createWithVoice

Create speech by combining the content and emotion of the uploaded audio with a voice of your choice.

πŸ› οΈ Usage

import fs from "fs";

const createWithVoiceResponse = await elevenlabs.speechToSpeech.createWithVoice(
  {
    voiceId: "voiceId_example",
    optimizeStreamingLatency: 0,
    audio: fs.readFileSync("/path/to/file"),
    model_id: "eleven_english_sts_v2",
  }
);

βš™οΈ Parameters

voiceId: string

Voice ID to be used. You can call GET https://api.elevenlabs.io/v1/voices to list all the available voices.

audio: Uint8Array | File | buffer.File

The audio file which holds the content and emotion that will control the generated speech.

optimizeStreamingLatency: number

You can turn on latency optimizations at some cost of quality. The best possible final latency varies by model. Possible values:

0 - default mode (no latency optimizations)
1 - normal latency optimizations (about 50% of the possible latency improvement of option 3)
2 - strong latency optimizations (about 75% of the possible latency improvement of option 3)
3 - max latency optimizations
4 - max latency optimizations, but also with the text normalizer turned off for even more latency savings (best latency, but can mispronounce e.g. numbers and dates)

Defaults to 0.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

modelId: string

Identifier of the model to be used. You can query available models using GET /v1/models. The model must support speech to speech, which you can check via the can_do_voice_conversion property.

voiceSettings: string

Voice settings that override the stored settings for the given voice. They are applied only to the given request and must be sent as a JSON-encoded string.

🌐 Endpoint

/v1/speech-to-speech/{voice_id} POST
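Since voiceSettings must be a JSON-encoded string, it is usually built with JSON.stringify. A minimal sketch; the keys shown (stability, similarity_boost) are the voice settings fields documented elsewhere in this reference, but verify them against the model you use.

```typescript
// Per the parameter description above, voiceSettings is sent as a
// JSON-encoded string, not as an object.
const voiceSettings = JSON.stringify({
  stability: 0.5,
  similarity_boost: 0.75,
});

// It is then passed alongside the other request fields, e.g.:
// await elevenlabs.speechToSpeech.createWithVoice({
//   voiceId: "voiceId_example",
//   audio: fs.readFileSync("/path/to/file"),
//   model_id: "eleven_english_sts_v2",
//   voiceSettings,
// });
```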

πŸ”™ Back to Table of Contents


elevenlabs.speechToSpeech.createWithVoice_0

Create speech by combining the content and emotion of the uploaded audio with a voice of your choice, returning an audio stream.

πŸ› οΈ Usage

import fs from "fs";

const createWithVoice_0Response =
  await elevenlabs.speechToSpeech.createWithVoice_0({
    voiceId: "voiceId_example",
    optimizeStreamingLatency: 0,
    audio: fs.readFileSync("/path/to/file"),
    model_id: "eleven_english_sts_v2",
  });

βš™οΈ Parameters

voiceId: string

Voice ID to be used. You can call GET https://api.elevenlabs.io/v1/voices to list all the available voices.

audio: Uint8Array | File | buffer.File

The audio file which holds the content and emotion that will control the generated speech.

optimizeStreamingLatency: number

You can turn on latency optimizations at some cost of quality. The best possible final latency varies by model. Possible values:

0 - default mode (no latency optimizations)
1 - normal latency optimizations (about 50% of the possible latency improvement of option 3)
2 - strong latency optimizations (about 75% of the possible latency improvement of option 3)
3 - max latency optimizations
4 - max latency optimizations, but also with the text normalizer turned off for even more latency savings (best latency, but can mispronounce e.g. numbers and dates)

Defaults to 0.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

modelId: string

Identifier of the model to be used. You can query available models using GET /v1/models. The model must support speech to speech, which you can check via the can_do_voice_conversion property.

voiceSettings: string

Voice settings that override the stored settings for the given voice. They are applied only to the given request and must be sent as a JSON-encoded string.

🌐 Endpoint

/v1/speech-to-speech/{voice_id}/stream POST

πŸ”™ Back to Table of Contents


elevenlabs.textToSpeech.convertTextToSpeech

Converts text into speech using a voice of your choice and returns audio.

πŸ› οΈ Usage

const convertTextToSpeechResponse =
  await elevenlabs.textToSpeech.convertTextToSpeech({
    voiceId: "voiceId_example",
    optimizeStreamingLatency: 0,
    outputFormat: "mp3_44100_128",
    text: "text_example",
    model_id: "eleven_monolingual_v1",
    pronunciation_dictionary_locators: [],
  });

βš™οΈ Parameters

text: string

The text that will get converted into speech.

voiceId: string

Voice ID to be used. You can call GET https://api.elevenlabs.io/v1/voices to list all the available voices.

model_id: string

Identifier of the model to be used. You can query available models using GET /v1/models. The model must support text to speech, which you can check via the can_do_text_to_speech property.

voice_settings

Voice settings that override the stored settings for the given voice. They are applied only to the given request.

pronunciation_dictionary_locators: PronunciationDictionaryVersionLocatorDBModel[]

A list of pronunciation dictionary locators (id, version_id) to be applied to the text. They will be applied in order. You may have up to 3 locators per request.

optimizeStreamingLatency: number

You can turn on latency optimizations at some cost of quality. The best possible final latency varies by model. Possible values:

0 - default mode (no latency optimizations)
1 - normal latency optimizations (about 50% of the possible latency improvement of option 3)
2 - strong latency optimizations (about 75% of the possible latency improvement of option 3)
3 - max latency optimizations
4 - max latency optimizations, but also with the text normalizer turned off for even more latency savings (best latency, but can mispronounce e.g. numbers and dates)

Defaults to 0.

outputFormat: string

Output format of the generated audio. Must be one of:

mp3_22050_32 - mp3 with 22.05kHz sample rate at 32kbps.
mp3_44100_32 - mp3 with 44.1kHz sample rate at 32kbps.
mp3_44100_64 - mp3 with 44.1kHz sample rate at 64kbps.
mp3_44100_96 - mp3 with 44.1kHz sample rate at 96kbps.
mp3_44100_128 - default output format; mp3 with 44.1kHz sample rate at 128kbps.
mp3_44100_192 - mp3 with 44.1kHz sample rate at 192kbps. Requires a Creator tier subscription or above.
pcm_16000 - PCM format (S16LE) with 16kHz sample rate.
pcm_22050 - PCM format (S16LE) with 22.05kHz sample rate.
pcm_24000 - PCM format (S16LE) with 24kHz sample rate.
pcm_44100 - PCM format (S16LE) with 44.1kHz sample rate. Requires a Pro tier subscription or above.
ulaw_8000 - ΞΌ-law format (sometimes written mu-law, often approximated as u-law) with 8kHz sample rate. Note that this format is commonly used for Twilio audio inputs.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

🌐 Endpoint

/v1/text-to-speech/{voice_id} POST
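The outputFormat values above follow a codec_sampleRate[_bitrate] naming convention, which makes them easy to inspect programmatically. The parser below is an illustrative helper, not part of the SDK.

```typescript
// Split an output format string such as "mp3_44100_128" into its parts:
// codec ("mp3", "pcm", or "ulaw"), sample rate in Hz, and, for mp3 formats,
// the bitrate in kbps (null for the PCM and ulaw formats).
function parseOutputFormat(format: string): {
  codec: string;
  sampleRateHz: number;
  bitrateKbps: number | null;
} {
  const [codec, rate, bitrate] = format.split("_");
  return {
    codec,
    sampleRateHz: Number(rate),
    bitrateKbps: bitrate !== undefined ? Number(bitrate) : null,
  };
}

// parseOutputFormat("mp3_44100_128")
//   -> { codec: "mp3", sampleRateHz: 44100, bitrateKbps: 128 }
```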

πŸ”™ Back to Table of Contents


elevenlabs.textToSpeech.convertTextToSpeechStream

Converts text into speech using a voice of your choice and returns the audio as a stream.

πŸ› οΈ Usage

const convertTextToSpeechStreamResponse =
  await elevenlabs.textToSpeech.convertTextToSpeechStream({
    voiceId: "voiceId_example",
    optimizeStreamingLatency: 0,
    outputFormat: "mp3_44100_128",
    text: "text_example",
    model_id: "eleven_monolingual_v1",
    pronunciation_dictionary_locators: [],
  });

βš™οΈ Parameters

text: string

The text that will get converted into speech.

voiceId: string

Voice ID to be used. You can call GET https://api.elevenlabs.io/v1/voices to list all the available voices.

model_id: string

Identifier of the model to be used. You can query available models using GET /v1/models. The model must support text to speech, which you can check via the can_do_text_to_speech property.

voice_settings

Voice settings that override the stored settings for the given voice. They are applied only to the given request.

pronunciation_dictionary_locators: PronunciationDictionaryVersionLocatorDBModel[]

A list of pronunciation dictionary locators (id, version_id) to be applied to the text. They will be applied in order. You may have up to 3 locators per request.

optimizeStreamingLatency: number

You can turn on latency optimizations at some cost of quality. The best possible final latency varies by model. Possible values:

0 - default mode (no latency optimizations)
1 - normal latency optimizations (about 50% of the possible latency improvement of option 3)
2 - strong latency optimizations (about 75% of the possible latency improvement of option 3)
3 - max latency optimizations
4 - max latency optimizations, but also with the text normalizer turned off for even more latency savings (best latency, but can mispronounce e.g. numbers and dates)

Defaults to 0.

outputFormat: string

Output format of the generated audio. Must be one of:

mp3_22050_32 - mp3 with 22.05kHz sample rate at 32kbps.
mp3_44100_32 - mp3 with 44.1kHz sample rate at 32kbps.
mp3_44100_64 - mp3 with 44.1kHz sample rate at 64kbps.
mp3_44100_96 - mp3 with 44.1kHz sample rate at 96kbps.
mp3_44100_128 - default output format; mp3 with 44.1kHz sample rate at 128kbps.
mp3_44100_192 - mp3 with 44.1kHz sample rate at 192kbps. Requires a Creator tier subscription or above.
pcm_16000 - PCM format (S16LE) with 16kHz sample rate.
pcm_22050 - PCM format (S16LE) with 22.05kHz sample rate.
pcm_24000 - PCM format (S16LE) with 24kHz sample rate.
pcm_44100 - PCM format (S16LE) with 44.1kHz sample rate. Requires a Pro tier subscription or above.
ulaw_8000 - ΞΌ-law format (sometimes written mu-law, often approximated as u-law) with 8kHz sample rate. Note that this format is commonly used for Twilio audio inputs.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

🌐 Endpoint

/v1/text-to-speech/{voice_id}/stream POST

πŸ”™ Back to Table of Contents


elevenlabs.user.getInfo

Gets information about the user.

πŸ› οΈ Usage

const getInfoResponse = await elevenlabs.user.getInfo({});

βš™οΈ Parameters

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

πŸ”„ Return

UserResponseModel

🌐 Endpoint

/v1/user GET

πŸ”™ Back to Table of Contents


elevenlabs.user.getSubscriptionInfo

Gets extended information about the user's subscription.

πŸ› οΈ Usage

const getSubscriptionInfoResponse = await elevenlabs.user.getSubscriptionInfo(
  {}
);

βš™οΈ Parameters

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

πŸ”„ Return

ExtendedSubscriptionResponseModel

🌐 Endpoint

/v1/user/subscription GET

πŸ”™ Back to Table of Contents


elevenlabs.voiceGeneration.createVoice

Create a previously generated voice. This endpoint should be called after you have fetched a generated_voice_id using /v1/voice-generation/generate-voice.

πŸ› οΈ Usage

const createVoiceResponse = await elevenlabs.voiceGeneration.createVoice({
  voice_name: "voice_name_example",
  voice_description: "voice_description_example",
  generated_voice_id: "generated_voice_id_example",
});

βš™οΈ Parameters

voice_name: string

Name to use for the created voice.

voice_description: string

Description to use for the created voice.

generated_voice_id: string

The generated_voice_id to create. If you don't have one yet, call POST /v1/voice-generation/generate-voice and fetch the generated_voice_id from the response header.

labels: Record<string, string>

Optional metadata to add to the created voice. Defaults to none.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

πŸ”„ Return

VoiceResponseModel

🌐 Endpoint

/v1/voice-generation/create-voice POST

πŸ”™ Back to Table of Contents


elevenlabs.voiceGeneration.generateRandomVoice

Generate a random voice based on parameters. This method returns a generated_voice_id in the response header, and a sample of the voice in the body. If you like the generated voice, call /v1/voice-generation/create-voice with the generated_voice_id to create the voice.

πŸ› οΈ Usage

const generateRandomVoiceResponse =
  await elevenlabs.voiceGeneration.generateRandomVoice({
    gender: "female",
    accent: "accent_example",
    age: "young",
    accent_strength: 3.14,
    text: "text_example",
  });

βš™οΈ Parameters

gender: string

Category code corresponding to the gender of the generated voice. Possible values: female, male.

accent: string

Category code corresponding to the accent of the generated voice. Possible values: american, british, african, australian, indian.

age: string

Category code corresponding to the age of the generated voice. Possible values: young, middle_aged, old.

accent_strength: number

The strength of the accent of the generated voice. Has to be between 0.3 and 2.0.

text: string

Text to generate. Text length has to be between 100 and 1000 characters.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

🌐 Endpoint

/v1/voice-generation/generate-voice POST
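The constraints stated above (accent_strength between 0.3 and 2.0, text length between 100 and 1000) can be checked client-side before calling the endpoint. These are illustrative helpers, not part of the SDK.

```typescript
// Clamp accent_strength into the documented [0.3, 2.0] range.
function clampAccentStrength(value: number): number {
  return Math.min(2.0, Math.max(0.3, value));
}

// Check the documented 100-1000 character bound on the sample text.
function isValidGenerationText(text: string): boolean {
  return text.length >= 100 && text.length <= 1000;
}

// Usage sketch:
// await elevenlabs.voiceGeneration.generateRandomVoice({
//   gender: "female",
//   accent: "american",
//   age: "young",
//   accent_strength: clampAccentStrength(2.5), // clamped to 2.0
//   text,
// });
```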

πŸ”™ Back to Table of Contents


elevenlabs.voiceGeneration.getVoiceGenerationParameters

Get possible parameters for the /v1/voice-generation/generate-voice endpoint.

πŸ› οΈ Usage

const getVoiceGenerationParametersResponse =
  await elevenlabs.voiceGeneration.getVoiceGenerationParameters();

πŸ”„ Return

VoiceGenerationParameterResponseModel

🌐 Endpoint

/v1/voice-generation/generate-voice/parameters GET

πŸ”™ Back to Table of Contents


elevenlabs.voices.addToCollection

Add a sharing voice to your collection of voices in VoiceLab.

πŸ› οΈ Usage

const addToCollectionResponse = await elevenlabs.voices.addToCollection({
  publicUserId: "publicUserId_example",
  voiceId: "voiceId_example",
  new_name: "new_name_example",
});

βš™οΈ Parameters

new_name: string

The name that identifies this voice. This will be displayed in the dropdown of the website.

publicUserId: string

Public user ID used to publicly identify ElevenLabs users.

voiceId: string

Voice ID to be used. You can call GET https://api.elevenlabs.io/v1/voices to list all the available voices.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

πŸ”„ Return

AddVoiceResponseModel

🌐 Endpoint

/v1/voices/add/{public_user_id}/{voice_id} POST

πŸ”™ Back to Table of Contents


elevenlabs.voices.addVoiceToCollection

Add a new voice to your collection of voices in VoiceLab.

πŸ› οΈ Usage

import fs from "fs";

const addVoiceToCollectionResponse =
  await elevenlabs.voices.addVoiceToCollection({
    name: "name_example",
    files: [fs.readFileSync("/path/to/file")],
  });

βš™οΈ Parameters

name: string

The name that identifies this voice. This will be displayed in the dropdown of the website.

files: Uint8Array | File | buffer.File[]

One or more audio files to clone the voice from.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

description: string

How would you describe the voice?

labels: string

Serialized labels dictionary for the voice.

πŸ”„ Return

AddVoiceResponseModel

🌐 Endpoint

/v1/voices/add POST
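Because labels is a serialized dictionary (a string), it is typically built with JSON.stringify from a plain key/value object. A minimal sketch; the specific label keys shown are illustrative, not prescribed by the API.

```typescript
// labels is sent as a JSON-encoded string of a Record<string, string>.
const labels = JSON.stringify({
  accent: "british",
  use_case: "narration",
});

// Usage sketch:
// await elevenlabs.voices.addVoiceToCollection({
//   name: "name_example",
//   files: [fs.readFileSync("/path/to/file")],
//   labels,
// });
```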

πŸ”™ Back to Table of Contents


elevenlabs.voices.deleteById

Deletes a voice by its ID.

πŸ› οΈ Usage

const deleteByIdResponse = await elevenlabs.voices.deleteById({
  voiceId: "voiceId_example",
});

βš™οΈ Parameters

voiceId: string

Voice ID to be used. You can call GET https://api.elevenlabs.io/v1/voices to list all the available voices.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

🌐 Endpoint

/v1/voices/{voice_id} DELETE

πŸ”™ Back to Table of Contents


elevenlabs.voices.editSettingsPost

Edit your settings for a specific voice. "similarity_boost" corresponds to "Clarity + Similarity Enhancement" in the web app and "stability" corresponds to the "Stability" slider in the web app.

πŸ› οΈ Usage

const editSettingsPostResponse = await elevenlabs.voices.editSettingsPost({
  voiceId: "voiceId_example",
  stability: 3.14,
  similarity_boost: 3.14,
  style: 0,
  use_speaker_boost: true,
});

βš™οΈ Parameters

stability: number
similarity_boost: number
voiceId: string

Voice ID to be used. You can call GET https://api.elevenlabs.io/v1/voices to list all the available voices.

style: number
use_speaker_boost: boolean
xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

🌐 Endpoint

/v1/voices/{voice_id}/settings/edit POST

πŸ”™ Back to Table of Contents


elevenlabs.voices.getDefaultVoiceSettings

Gets the default settings for voices. "similarity_boost" corresponds to "Clarity + Similarity Enhancement" in the web app and "stability" corresponds to the "Stability" slider in the web app.

πŸ› οΈ Usage

const getDefaultVoiceSettingsResponse =
  await elevenlabs.voices.getDefaultVoiceSettings();

πŸ”„ Return

VoiceSettingsResponseModel

🌐 Endpoint

/v1/voices/settings/default GET

πŸ”™ Back to Table of Contents


elevenlabs.voices.getSettings

Returns the settings for a specific voice. "similarity_boost" corresponds to "Clarity + Similarity Enhancement" in the web app and "stability" corresponds to the "Stability" slider in the web app.

πŸ› οΈ Usage

const getSettingsResponse = await elevenlabs.voices.getSettings({
  voiceId: "voiceId_example",
});

βš™οΈ Parameters

voiceId: string

Voice ID to be used. You can call GET https://api.elevenlabs.io/v1/voices to list all the available voices.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

πŸ”„ Return

VoiceSettingsResponseModel

🌐 Endpoint

/v1/voices/{voice_id}/settings GET

πŸ”™ Back to Table of Contents


elevenlabs.voices.getSharedVoices

Gets a list of shared voices.

πŸ› οΈ Usage

const getSharedVoicesResponse = await elevenlabs.voices.getSharedVoices({
  pageSize: 30,
  featured: false,
  page: 0,
});

βš™οΈ Parameters

pageSize: number

How many shared voices to return at maximum. Cannot exceed 500; defaults to 30.

category: string

Voice category used for filtering.

gender: string

Gender used for filtering.

age: string

Age used for filtering.

accent: string

Accent used for filtering.

search: string

Search term used for filtering.

useCases: string[]

Use cases used for filtering.

descriptives: string[]

Descriptives used for filtering.

sort: string

Sort criteria.

featured: boolean

Filter for featured voices.

page: number
xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

πŸ”„ Return

GetLibraryVoicesResponseModel

🌐 Endpoint

/v1/shared-voices GET

πŸ”™ Back to Table of Contents


elevenlabs.voices.getVoiceMetadata

Returns metadata about a specific voice.

πŸ› οΈ Usage

const getVoiceMetadataResponse = await elevenlabs.voices.getVoiceMetadata({
  voiceId: "voiceId_example",
  withSettings: false,
});

βš™οΈ Parameters

voiceId: string

Voice ID to be used. You can call GET https://api.elevenlabs.io/v1/voices to list all the available voices.

withSettings: boolean

If set, the response will include settings information for the voice; this requires authorization.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

πŸ”„ Return

VoiceResponseModel

🌐 Endpoint

/v1/voices/{voice_id} GET

πŸ”™ Back to Table of Contents


elevenlabs.voices.listAllVoices

Gets a list of all available voices for a user.

πŸ› οΈ Usage

const listAllVoicesResponse = await elevenlabs.voices.listAllVoices({});

βš™οΈ Parameters

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

πŸ”„ Return

GetVoicesResponseModel

🌐 Endpoint

/v1/voices GET
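A common follow-up to listing all voices is resolving a voice ID from its display name client-side. The helper below assumes each entry in the response's voices array exposes voice_id and name fields; verify this against the SDK's `GetVoicesResponseModel` type.

```typescript
// Minimal assumed shape of a voice entry -- adjust if the real model differs.
interface VoiceSummary {
  voice_id: string;
  name: string;
}

// Case-insensitive lookup of a voice ID by display name; null when absent.
function findVoiceIdByName(voices: VoiceSummary[], name: string): string | null {
  const match = voices.find(
    (v) => v.name.toLowerCase() === name.toLowerCase()
  );
  return match ? match.voice_id : null;
}

// Usage sketch (response field name assumed):
// const { voices } = await elevenlabs.voices.listAllVoices({});
// const voiceId = findVoiceIdByName(voices, "Rachel");
```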

πŸ”™ Back to Table of Contents


elevenlabs.voices.updateVoiceById

Edit a voice created by you.

πŸ› οΈ Usage

const updateVoiceByIdResponse = await elevenlabs.voices.updateVoiceById({
  voiceId: "voiceId_example",
  name: "name_example",
});

βš™οΈ Parameters

voiceId: string

Voice ID to be used. You can call GET https://api.elevenlabs.io/v1/voices to list all the available voices.

name: string

The name that identifies this voice. This will be displayed in the dropdown of the website.

xiApiKey: string

Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.

description: string

How would you describe the voice?

files: Uint8Array | File | buffer.File[]

Audio files to add to the voice.

labels: string

Serialized labels dictionary for the voice.

🌐 Endpoint

/v1/voices/{voice_id}/edit POST

πŸ”™ Back to Table of Contents


elevenlabs.workspace.getSsoProviderAdmin

Get SSO Provider Admin

πŸ› οΈ Usage

const getSsoProviderAdminResponse =
  await elevenlabs.workspace.getSsoProviderAdmin({
    workspaceId: "workspaceId_example",
  });

βš™οΈ Parameters

workspaceId: string

πŸ”„ Return

SsoProviderDBModel

🌐 Endpoint

/admin/{admin_url_prefix}/sso-provider GET

πŸ”™ Back to Table of Contents


Author

This TypeScript package is automatically generated by Konfig
