This is the documentation for the ElevenLabs API. You can use this API to access our service programmatically by providing your xi-api-key. You can view your xi-api-key on the 'Profile' tab at https://elevenlabs.io. Our API is experimental, so all endpoints are subject to change.
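All requests authenticate with the xi-api-key header. A minimal sketch of attaching it to a raw REST call outside the SDK (the environment variable name here is an assumption, not part of the API):

```typescript
// Every ElevenLabs endpoint authenticates via the "xi-api-key" header.
// Hypothetical sketch for calling the REST API directly, without the SDK;
// ELEVENLABS_API_KEY is an assumed environment variable name.
const apiKey: string = process.env.ELEVENLABS_API_KEY ?? "YOUR_XI_API_KEY";

const headers: Record<string, string> = {
  "xi-api-key": apiKey,
  "Content-Type": "application/json",
};

// e.g. fetch("https://api.elevenlabs.io/v1/models", { headers })
```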
- Installation
- Getting Started
- Reference
elevenlabs.pronunciationDictionary.createFromFile
elevenlabs.pronunciationDictionary.getMetadata
elevenlabs.admin.archiveCouponPromocodePost
elevenlabs.admin.editVanityLink
elevenlabs.admin.getAllCoupons
elevenlabs.admin.getAllVanityLinks
elevenlabs.admin.getVanityLink
elevenlabs.admin.removeVanityLink
elevenlabs.audioNative.createProjectWithEmbeddableHtml
elevenlabs.dubbing.deleteProject
elevenlabs.dubbing.fileInLanguage
elevenlabs.dubbing.getFile
elevenlabs.dubbing.getProjectMetadata
elevenlabs.models.listAvailableModels
elevenlabs.projects.createNewProject
elevenlabs.projects.deleteById
elevenlabs.projects.deleteChapterById
elevenlabs.projects.getAllProjects
elevenlabs.projects.getById
elevenlabs.projects.getChapterById
elevenlabs.projects.getChapterSnapshots
elevenlabs.projects.listChapters
elevenlabs.projects.listSnapshots
elevenlabs.projects.startChapterConversion
elevenlabs.projects.startConversion
elevenlabs.projects.streamAudioFromSnapshot
elevenlabs.projects.streamAudioFromSnapshotPost
elevenlabs.projects.updatePronunciationDictionaries
elevenlabs.redirect.toMintlifyDocsGet
elevenlabs.samples.getAudioFromSample
elevenlabs.samples.removeById
elevenlabs.speechHistory.deleteHistoryItemById
elevenlabs.speechHistory.downloadHistoryItems
elevenlabs.speechHistory.getGeneratedAudioMetadata
elevenlabs.speechHistory.getHistoryItemAudio
elevenlabs.speechHistory.getHistoryItemById
elevenlabs.speechToSpeech.createWithVoice
elevenlabs.speechToSpeech.createWithVoice_0
elevenlabs.textToSpeech.convertTextToSpeech
elevenlabs.textToSpeech.convertTextToSpeechStream
elevenlabs.user.getInfo
elevenlabs.user.getSubscriptionInfo
elevenlabs.voiceGeneration.createVoice
elevenlabs.voiceGeneration.generateRandomVoice
elevenlabs.voiceGeneration.getVoiceGenerationParameters
elevenlabs.voices.addToCollection
elevenlabs.voices.addVoiceToCollection
elevenlabs.voices.deleteById
elevenlabs.voices.editSettingsPost
elevenlabs.voices.getDefaultVoiceSettings
elevenlabs.voices.getSettings
elevenlabs.voices.getSharedVoices
elevenlabs.voices.getVoiceMetadata
elevenlabs.voices.listAllVoices
elevenlabs.voices.updateVoiceById
elevenlabs.workspace.getSsoProviderAdmin
import { ElevenLabs } from "eleven-labs-typescript-sdk";
const elevenlabs = new ElevenLabs({
// Defining the base path is optional and defaults to https://api.elevenlabs.io
// basePath: "https://api.elevenlabs.io",
});
const createFromFileResponse =
await elevenlabs.pronunciationDictionary.createFromFile({
name: "name_example",
});
console.log(createFromFileResponse);
Creates a new pronunciation dictionary from a lexicon .PLS file
const createFromFileResponse =
await elevenlabs.pronunciationDictionary.createFromFile({
name: "name_example",
});
The name of the pronunciation dictionary, used for identification only.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
A description of the pronunciation dictionary, used for identification only.
A lexicon .pls file which we will use to initialize the project with.
AddPronunciationDictionaryResponseModel
/v1/pronunciation-dictionaries/add-from-file
POST
Get metadata for a pronunciation dictionary
const getMetadataResponse =
await elevenlabs.pronunciationDictionary.getMetadata({
pronunciationDictionaryId: "pronunciationDictionaryId_example",
});
The id of the pronunciation dictionary
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
GetPronunciationDictionaryMetadataResponseModel
/v1/pronunciation-dictionaries/{pronunciation_dictionary_id}
GET
Archive Coupon
const archiveCouponPromocodePostResponse =
await elevenlabs.admin.archiveCouponPromocodePost({
promocode: "promocode_example",
});
/admin/n8enylacgd/coupon/{promocode}/archive
POST
Edit Vanity Link
const editVanityLinkResponse = await elevenlabs.admin.editVanityLink({
vanityLinkId: "vanityLinkId_example",
vanity_slug: "vanity_slug_example",
target_url: "target_url_example",
comment: "comment_example",
});
The new slug for the vanity link. For example, if you want the vanity link to be /blog/NEW_SLUG, enter NEW_SLUG.
The new URL that the vanity link should redirect to.
A new comment or description for the vanity link.
/admin/n8enylacgd/vanity-link/{vanity_link_id}/update
POST
Get All Coupons
const getAllCouponsResponse = await elevenlabs.admin.getAllCoupons();
/admin/n8enylacgd/coupons
GET
Get All Vanity Links
const getAllVanityLinksResponse = await elevenlabs.admin.getAllVanityLinks();
/admin/n8enylacgd/vanity-links
GET
Get Vanity Link
const getVanityLinkResponse = await elevenlabs.admin.getVanityLink({
slug: "slug_example",
});
/admin/n8enylacgd/vanity-link/{slug}
GET
Delete Vanity Link
const removeVanityLinkResponse = await elevenlabs.admin.removeVanityLink({
vanityLinkId: "vanityLinkId_example",
});
/admin/n8enylacgd/vanity-link/{vanity_link_id}/delete
POST
Creates an Audio Native enabled project, optionally starts conversion, and returns the project ID and an embeddable HTML snippet.
import fs from "fs";

const createProjectWithEmbeddableHtmlResponse =
await elevenlabs.audioNative.createProjectWithEmbeddableHtml({
name: "name_example",
small: false,
sessionization: 0,
file: fs.readFileSync("/path/to/file"),
auto_convert: false,
});
Project name.
Either a .txt or HTML input file containing the article content. HTML should be formatted as follows: '<html><body><div><p>Your content</p><h5>More of your content</h5><p>Some more of your content</p></div></body></html>'.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
Title used in the player and inserted at the top of the uploaded article. If not provided, the default title set in the Player settings is used.
Image URL used in the player. If not provided, default image set in the Player settings is used.
Author used in the player and inserted at the start of the uploaded article. If not provided, the default author set in the Player settings is used.
Whether to use small player or not. If not provided, default value set in the Player settings is used.
Text color used in the player. If not provided, default text color set in the Player settings is used.
Background color used in the player. If not provided, default background color set in the Player settings is used.
Specifies for how many minutes to persist the session across page reloads. If not provided, default sessionization set in the Player settings is used.
Voice ID used to voice the content. If not provided, default voice ID set in the Player settings is used.
TTS Model ID used in the player. If not provided, default model ID set in the Player settings is used.
Whether to auto convert the project to audio or not.
AudioNativeCreateProjectResponseModel
/v1/audio-native
POST
Deletes a dubbing project.
const deleteProjectResponse = await elevenlabs.dubbing.deleteProject({
dubbingId: "dubbingId_example",
});
ID of the dubbing project.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/dubbing/{dubbing_id}
DELETE
Dubs a provided audio or video file into a given language.
const fileInLanguageResponse = await elevenlabs.dubbing.fileInLanguage({
source_lang: "auto",
target_lang: "target_lang_example",
num_speakers: 0,
watermark: false,
highest_resolution: false,
dubbing_studio: false,
});
Target language.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
automatic or manual.
One or more audio files to clone the voice from
CSV file containing transcription/translation metadata
For use only with csv input
For use only with csv input
Name of the dubbing project.
URL of the source video/audio file.
Source language.
Number of speakers to use for the dubbing.
Whether to apply watermark to the output video.
Start time of the source video/audio file.
End time of the source video/audio file.
Whether to use the highest resolution available.
Whether to prepare dub for edits in dubbing studio.
/v1/dubbing
POST
Returns dubbed file as a streamed file. Videos will be returned in MP4 format and audio only dubs will be returned in MP3.
const getFileResponse = await elevenlabs.dubbing.getFile({
dubbingId: "dubbingId_example",
languageCode: "languageCode_example",
});
ID of the dubbing project.
ID of the language.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/dubbing/{dubbing_id}/audio/{language_code}
GET
Returns metadata about a dubbing project, including whether it's still in progress or not
const getProjectMetadataResponse = await elevenlabs.dubbing.getProjectMetadata({
dubbingId: "dubbingId_example",
});
ID of the dubbing project.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/dubbing/{dubbing_id}
GET
Gets a list of available models.
const listAvailableModelsResponse = await elevenlabs.models.listAvailableModels(
{}
);
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/models
GET
Creates a new project. It can be initialized as blank, from a document, or from a URL.
const createNewProjectResponse = await elevenlabs.projects.createNewProject({
name: "name_example",
default_title_voice_id: "default_title_voice_id_example",
default_paragraph_voice_id: "default_paragraph_voice_id_example",
default_model_id: "default_model_id_example",
quality_preset: "standard",
acx_volume_normalization: false,
volume_normalization: false,
pronunciation_dictionary_locators: [
"pronunciation_dictionary_locators_example",
],
});
The name of the project, used for identification only.
The voice_id that corresponds to the default voice used for new titles.
The voice_id that corresponds to the default voice used for new paragraphs.
The model_id of the model to be used for this project; you can query GET https://api.elevenlabs.io/v1/models to list all available models.
A list of pronunciation dictionary locators (id, version_id) encoded as a list of JSON strings for pronunciation dictionaries to be applied to the text. A list of JSON-encoded strings is required because adding projects may occur through formData as opposed to jsonBody.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
An optional name of the author of the project; this will be added as metadata to the mp3 file on project / chapter download.
An optional URL from which we will extract content to initialize the project. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the project as blank.
An optional .epub, .pdf, .txt or similar file can be provided. If provided, we will initialize the project with its content. If this is set, 'from_url' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the project as blank.
Output quality of the generated audio. Must be one of: standard - standard output format, 128kbps with 44.1kHz sample rate; high - high quality output format, 192kbps with 44.1kHz sample rate and major improvements on our side, increasing the character cost by 20%; ultra - ultra quality output format, 192kbps with 44.1kHz sample rate and the highest improvements on our side, increasing the character cost by 50%.
An optional name of the author of the project; this will be added as metadata to the mp3 file on project / chapter download.
An optional ISBN number of the project you want to create; this will be added as metadata to the mp3 file on project / chapter download.
[Deprecated] When the project is downloaded, should the returned audio have postprocessing applied in order to make it compliant with audiobook normalized volume requirements.
When the project is downloaded, should the returned audio have postprocessing applied in order to make it compliant with audiobook normalized volume requirements.
A URL that will be called by our service when the project is converted, with a JSON payload containing the status of the conversion.
/v1/projects/add
POST
Delete a project by its project_id.
const deleteByIdResponse = await elevenlabs.projects.deleteById({
projectId: "projectId_example",
});
The project_id of the project; you can query GET https://api.elevenlabs.io/v1/projects to list all available projects.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/projects/{project_id}
DELETE
Delete a chapter by its chapter_id.
const deleteChapterByIdResponse = await elevenlabs.projects.deleteChapterById({
projectId: "projectId_example",
chapterId: "chapterId_example",
});
The project_id of the project; you can query GET https://api.elevenlabs.io/v1/projects to list all available projects.
The chapter_id of the chapter. You can query GET https://api.elevenlabs.io/v1/projects/{project_id}/chapters to list all available chapters for a project.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/projects/{project_id}/chapters/{chapter_id}
DELETE
Returns a list of your projects together with their metadata.
const getAllProjectsResponse = await elevenlabs.projects.getAllProjects({});
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/projects
GET
Returns information about a specific project. This endpoint returns more detailed information about a project than GET api.elevenlabs.io/v1/projects.
const getByIdResponse = await elevenlabs.projects.getById({
projectId: "projectId_example",
});
The project_id of the project; you can query GET https://api.elevenlabs.io/v1/projects to list all available projects.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/projects/{project_id}
GET
Returns information about a specific chapter.
const getChapterByIdResponse = await elevenlabs.projects.getChapterById({
projectId: "projectId_example",
chapterId: "chapterId_example",
});
The project_id of the project; you can query GET https://api.elevenlabs.io/v1/projects to list all available projects.
The chapter_id of the chapter. You can query GET https://api.elevenlabs.io/v1/projects/{project_id}/chapters to list all available chapters for a project.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/projects/{project_id}/chapters/{chapter_id}
GET
Gets information about all the snapshots of a chapter; each snapshot can be downloaded as audio. Whenever a chapter is converted, a snapshot is automatically created.
const getChapterSnapshotsResponse =
await elevenlabs.projects.getChapterSnapshots({
projectId: "projectId_example",
chapterId: "chapterId_example",
});
The project_id of the project; you can query GET https://api.elevenlabs.io/v1/projects to list all available projects.
The chapter_id of the chapter. You can query GET https://api.elevenlabs.io/v1/projects/{project_id}/chapters to list all available chapters for a project.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/projects/{project_id}/chapters/{chapter_id}/snapshots
GET
Returns a list of your chapters for a project together with their metadata.
const listChaptersResponse = await elevenlabs.projects.listChapters({
projectId: "projectId_example",
});
The project_id of the project; you can query GET https://api.elevenlabs.io/v1/projects to list all available projects.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/projects/{project_id}/chapters
GET
Gets the snapshots of a project.
const listSnapshotsResponse = await elevenlabs.projects.listSnapshots({
projectId: "projectId_example",
});
The project_id of the project; you can query GET https://api.elevenlabs.io/v1/projects to list all available projects.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/projects/{project_id}/snapshots
GET
Starts conversion of a specific chapter.
const startChapterConversionResponse =
await elevenlabs.projects.startChapterConversion({
projectId: "projectId_example",
chapterId: "chapterId_example",
});
The project_id of the project; you can query GET https://api.elevenlabs.io/v1/projects to list all available projects.
The chapter_id of the chapter. You can query GET https://api.elevenlabs.io/v1/projects/{project_id}/chapters to list all available chapters for a project.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/projects/{project_id}/chapters/{chapter_id}/convert
POST
Starts conversion of a project and all of its chapters.
const startConversionResponse = await elevenlabs.projects.startConversion({
projectId: "projectId_example",
});
The project_id of the project; you can query GET https://api.elevenlabs.io/v1/projects to list all available projects.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/projects/{project_id}/convert
POST
Stream the audio from a project snapshot.
const streamAudioFromSnapshotResponse =
await elevenlabs.projects.streamAudioFromSnapshot({
projectId: "projectId_example",
projectSnapshotId: "projectSnapshotId_example",
});
The project_id of the project; you can query GET https://api.elevenlabs.io/v1/projects to list all available projects.
The project_snapshot_id of the project snapshot. You can query GET /v1/projects/{project_id}/snapshots to list all available snapshots for a project.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/projects/{project_id}/snapshots/{project_snapshot_id}/stream
POST
Stream the audio from a chapter snapshot. Use GET /v1/projects/{project_id}/chapters/{chapter_id}/snapshots to return the chapter snapshots of a chapter.
const streamAudioFromSnapshotPostResponse =
await elevenlabs.projects.streamAudioFromSnapshotPost({
projectId: "projectId_example",
chapterId: "chapterId_example",
chapterSnapshotId: "chapterSnapshotId_example",
});
The project_id of the project; you can query GET https://api.elevenlabs.io/v1/projects to list all available projects.
The chapter_id of the chapter. You can query GET https://api.elevenlabs.io/v1/projects/{project_id}/chapters to list all available chapters for a project.
The chapter_snapshot_id of the chapter snapshot. You can query GET /v1/projects/{project_id}/chapters/{chapter_id}/snapshots to list all available snapshots for a chapter.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/projects/{project_id}/chapters/{chapter_id}/snapshots/{chapter_snapshot_id}/stream
POST
Updates the set of pronunciation dictionaries acting on a project. This will automatically mark text within this project as requiring reconverting where the new dictionary would apply or the old one no longer does.
const updatePronunciationDictionariesResponse =
await elevenlabs.projects.updatePronunciationDictionaries({
projectId: "projectId_example",
pronunciation_dictionary_locators: [
{
pronunciation_dictionary_id: "pronunciation_dictionary_id_example",
version_id: "version_id_example",
},
],
});
pronunciation_dictionary_locators: PronunciationDictionaryVersionLocatorDBModel[]
A list of pronunciation dictionary locators (id, version_id) encoded as a list of JSON strings for pronunciation dictionaries to be applied to the text. A list of JSON-encoded strings is required because adding projects may occur through formData as opposed to jsonBody.
The project_id of the project; you can query GET https://api.elevenlabs.io/v1/projects to list all available projects.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/projects/{project_id}/update-pronunciation-dictionaries
POST
Redirect To Mintlify
const toMintlifyDocsGetResponse = await elevenlabs.redirect.toMintlifyDocsGet();
/docs
GET
Returns the audio corresponding to a sample attached to a voice.
const getAudioFromSampleResponse = await elevenlabs.samples.getAudioFromSample({
voiceId: "voiceId_example",
sampleId: "sampleId_example",
});
Voice ID to be used; you can use https://api.elevenlabs.io/v1/voices to list all the available voices.
Sample ID to be used; you can use GET https://api.elevenlabs.io/v1/voices/{voice_id} to list all the available samples for a voice.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/voices/{voice_id}/samples/{sample_id}/audio
GET
Removes a sample by its ID.
const removeByIdResponse = await elevenlabs.samples.removeById({
voiceId: "voiceId_example",
sampleId: "sampleId_example",
});
Voice ID to be used; you can use https://api.elevenlabs.io/v1/voices to list all the available voices.
Sample ID to be used; you can use GET https://api.elevenlabs.io/v1/voices/{voice_id} to list all the available samples for a voice.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/voices/{voice_id}/samples/{sample_id}
DELETE
Delete a history item by its ID
const deleteHistoryItemByIdResponse =
await elevenlabs.speechHistory.deleteHistoryItemById({
historyItemId: "historyItemId_example",
});
History item ID to be used; you can use GET https://api.elevenlabs.io/v1/history to receive a list of history items and their IDs.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/history/{history_item_id}
DELETE
Download one or more history items. If one history item ID is provided, we will return a single audio file. If more than one history item ID is provided, the history items will be packed into a .zip file.
const downloadHistoryItemsResponse =
await elevenlabs.speechHistory.downloadHistoryItems({
history_item_ids: ["history_item_ids_example"],
});
A list of history items to download; you can get IDs of history items and other metadata using the GET https://api.elevenlabs.io/v1/history endpoint.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/history/download
POST
Returns metadata about all your generated audio.
const getGeneratedAudioMetadataResponse =
await elevenlabs.speechHistory.getGeneratedAudioMetadata({
pageSize: 100,
});
How many history items to return at maximum. Cannot exceed 1000; defaults to 100.
The ID after which to start fetching; use this parameter to paginate across a large collection of history items. If this parameter is not provided, history items will be fetched starting from the most recently created one, ordered descending by creation date.
Voice ID to be filtered for; you can use GET https://api.elevenlabs.io/v1/voices to receive a list of voices and their IDs.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/history
GET
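The pagination described above (a page size plus a start-after cursor) can be sketched as follows; `fetchPage`, `HistoryPage`, and the field names are illustrative stand-ins for the SDK's actual types, not its documented API:

```typescript
// Hypothetical sketch of cursor pagination over history items. `fetchPage`
// stands in for elevenlabs.speechHistory.getGeneratedAudioMetadata; the
// `history`/`has_more` field names are assumptions for illustration.
type HistoryItem = { history_item_id: string };
type HistoryPage = { history: HistoryItem[]; has_more: boolean };

// Collects every history item by repeatedly requesting pages, passing the
// last item's ID of the previous page as the "start after" cursor.
async function fetchAllHistory(
  fetchPage: (startAfterId?: string) => Promise<HistoryPage>
): Promise<HistoryItem[]> {
  const items: HistoryItem[] = [];
  let cursor: string | undefined;
  for (;;) {
    const page = await fetchPage(cursor);
    items.push(...page.history);
    if (!page.has_more || page.history.length === 0) break;
    cursor = page.history[page.history.length - 1].history_item_id;
  }
  return items;
}
```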
Returns the audio of a history item.
const getHistoryItemAudioResponse =
await elevenlabs.speechHistory.getHistoryItemAudio({
historyItemId: "historyItemId_example",
});
History item ID to be used; you can use GET https://api.elevenlabs.io/v1/history to receive a list of history items and their IDs.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/history/{history_item_id}/audio
GET
Returns information about a history item by its ID.
const getHistoryItemByIdResponse =
await elevenlabs.speechHistory.getHistoryItemById({
historyItemId: "historyItemId_example",
});
History item ID to be used; you can use GET https://api.elevenlabs.io/v1/history to receive a list of history items and their IDs.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
SpeechHistoryItemResponseModel
/v1/history/{history_item_id}
GET
Create speech by combining the content and emotion of the uploaded audio with a voice of your choice.
import fs from "fs";

const createWithVoiceResponse = await elevenlabs.speechToSpeech.createWithVoice(
{
voiceId: "voiceId_example",
optimizeStreamingLatency: 0,
audio: fs.readFileSync("/path/to/file"),
model_id: "eleven_english_sts_v2",
}
);
Voice ID to be used; you can use https://api.elevenlabs.io/v1/voices to list all the available voices.
The audio file which holds the content and emotion that will control the generated speech.
You can turn on latency optimizations at some cost of quality. The best possible final latency varies by model. Possible values: 0 - default mode (no latency optimizations); 1 - normal latency optimizations (about 50% of the possible latency improvement of option 3); 2 - strong latency optimizations (about 75% of the possible latency improvement of option 3); 3 - max latency optimizations; 4 - max latency optimizations, but also with the text normalizer turned off for even more latency savings (best latency, but can mispronounce e.g. numbers and dates). Defaults to 0.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
Identifier of the model that will be used; you can query the available models using GET /v1/models. The model needs to support speech-to-speech; you can check this using the can_do_voice_conversion property.
Voice settings overriding the stored settings for the given voice. They are applied only on the given request. Needs to be sent as a JSON-encoded string.
/v1/speech-to-speech/{voice_id}
POST
Create speech by combining the content and emotion of the uploaded audio with a voice of your choice and returns an audio stream.
import fs from "fs";

const createWithVoice_0Response =
await elevenlabs.speechToSpeech.createWithVoice_0({
voiceId: "voiceId_example",
optimizeStreamingLatency: 0,
audio: fs.readFileSync("/path/to/file"),
model_id: "eleven_english_sts_v2",
});
Voice ID to be used; you can use https://api.elevenlabs.io/v1/voices to list all the available voices.
The audio file which holds the content and emotion that will control the generated speech.
You can turn on latency optimizations at some cost of quality. The best possible final latency varies by model. Possible values: 0 - default mode (no latency optimizations); 1 - normal latency optimizations (about 50% of the possible latency improvement of option 3); 2 - strong latency optimizations (about 75% of the possible latency improvement of option 3); 3 - max latency optimizations; 4 - max latency optimizations, but also with the text normalizer turned off for even more latency savings (best latency, but can mispronounce e.g. numbers and dates). Defaults to 0.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
Identifier of the model that will be used; you can query the available models using GET /v1/models. The model needs to support speech-to-speech; you can check this using the can_do_voice_conversion property.
Voice settings overriding the stored settings for the given voice. They are applied only on the given request. Needs to be sent as a JSON-encoded string.
/v1/speech-to-speech/{voice_id}/stream
POST
Converts text into speech using a voice of your choice and returns audio.
const convertTextToSpeechResponse =
await elevenlabs.textToSpeech.convertTextToSpeech({
voiceId: "voiceId_example",
optimizeStreamingLatency: 0,
outputFormat: "mp3_44100_128",
text: "text_example",
model_id: "eleven_monolingual_v1",
pronunciation_dictionary_locators: [],
});
The text that will get converted into speech.
Voice ID to be used, you can use https://api.elevenlabs.io/v1/voices to list all the available voices.
Identifier of the model to use. You can query available models using GET /v1/models. The model needs to support text to speech, which you can check using its can_do_text_to_speech property.
voice_settings: VoiceSettingsResponseModel
Voice settings that override the stored settings for the given voice. They are applied only to the given request.
pronunciation_dictionary_locators: PronunciationDictionaryVersionLocatorDBModel[]
A list of pronunciation dictionary locators (id, version_id) to be applied to the text. They will be applied in order. You may have up to 3 locators per request.
You can turn on latency optimizations at some cost of quality. The best possible final latency varies by model. Possible values:
- 0: default mode (no latency optimizations)
- 1: normal latency optimizations (about 50% of the possible latency improvement of option 3)
- 2: strong latency optimizations (about 75% of the possible latency improvement of option 3)
- 3: max latency optimizations
- 4: max latency optimizations, but with the text normalizer also turned off for even more latency savings (best latency, but can mispronounce e.g. numbers and dates)
Defaults to 0.
Output format of the generated audio. Must be one of:
- mp3_22050_32: mp3 with 22.05kHz sample rate at 32kbps
- mp3_44100_32: mp3 with 44.1kHz sample rate at 32kbps
- mp3_44100_64: mp3 with 44.1kHz sample rate at 64kbps
- mp3_44100_96: mp3 with 44.1kHz sample rate at 96kbps
- mp3_44100_128: mp3 with 44.1kHz sample rate at 128kbps (default)
- mp3_44100_192: mp3 with 44.1kHz sample rate at 192kbps (requires Creator tier or above)
- pcm_16000: PCM format (S16LE) with 16kHz sample rate
- pcm_22050: PCM format (S16LE) with 22.05kHz sample rate
- pcm_24000: PCM format (S16LE) with 24kHz sample rate
- pcm_44100: PCM format (S16LE) with 44.1kHz sample rate (requires Pro tier or above)
- ulaw_8000: μ-law format (sometimes written mu-law, often approximated as u-law) with 8kHz sample rate; commonly used for Twilio audio inputs
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/text-to-speech/{voice_id}
POST
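As an illustration (this helper is not part of the SDK), the output_format codes follow a codec_sampleRate[_bitrate] naming scheme that can be unpacked like this:

```typescript
// Hypothetical helper: splits an output_format code such as "mp3_44100_128"
// or "pcm_24000" into its codec, sample rate, and (for mp3) bitrate.
interface OutputFormat {
  codec: string;        // "mp3", "pcm", or "ulaw"
  sampleRateHz: number;
  bitrateKbps?: number; // only present for mp3 formats
}

function parseOutputFormat(code: string): OutputFormat {
  const parts = code.split("_");
  const codec = parts[0];
  const sampleRateHz = Number(parts[1]);
  if (codec === "mp3") {
    return { codec, sampleRateHz, bitrateKbps: Number(parts[2]) };
  }
  return { codec, sampleRateHz }; // pcm_* and ulaw_8000 carry no bitrate suffix
}
```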
Back to Table of Contents
Converts text into speech using a voice of your choice and returns the audio as a stream.
const convertTextToSpeechStreamResponse =
  await elevenlabs.textToSpeech.convertTextToSpeechStream({
    voiceId: "voiceId_example",
    optimizeStreamingLatency: 0,
    outputFormat: "mp3_44100_128",
    text: "text_example",
    model_id: "eleven_monolingual_v1",
    pronunciation_dictionary_locators: [],
  });
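The stream returned above can be consumed chunk by chunk. A minimal sketch, assuming the response can be iterated as an AsyncIterable<Uint8Array> (the exact return shape depends on the SDK version):

```typescript
// Hypothetical helper: collects an async iterable of audio chunks into one
// Buffer, e.g. for writing the full clip to disk afterwards.
async function collectAudio(
  chunks: AsyncIterable<Uint8Array>
): Promise<Buffer> {
  const parts: Uint8Array[] = [];
  for await (const chunk of chunks) {
    parts.push(chunk);
  }
  return Buffer.concat(parts);
}
```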
The text that will get converted into speech.
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
Identifier of the model to use. You can query available models using GET /v1/models. The model needs to support text to speech, which you can check using its can_do_text_to_speech property.
voice_settings: VoiceSettingsResponseModel
Voice settings that override the stored settings for the given voice. They are applied only to the given request.
pronunciation_dictionary_locators: PronunciationDictionaryVersionLocatorDBModel[]
A list of pronunciation dictionary locators (id, version_id) to be applied to the text. They will be applied in order. You may have up to 3 locators per request.
You can turn on latency optimizations at some cost of quality. The best possible final latency varies by model. Possible values:
- 0: default mode (no latency optimizations)
- 1: normal latency optimizations (about 50% of the possible latency improvement of option 3)
- 2: strong latency optimizations (about 75% of the possible latency improvement of option 3)
- 3: max latency optimizations
- 4: max latency optimizations, but with the text normalizer also turned off for even more latency savings (best latency, but can mispronounce e.g. numbers and dates)
Defaults to 0.
Output format of the generated audio. Must be one of:
- mp3_22050_32: mp3 with 22.05kHz sample rate at 32kbps
- mp3_44100_32: mp3 with 44.1kHz sample rate at 32kbps
- mp3_44100_64: mp3 with 44.1kHz sample rate at 64kbps
- mp3_44100_96: mp3 with 44.1kHz sample rate at 96kbps
- mp3_44100_128: mp3 with 44.1kHz sample rate at 128kbps (default)
- mp3_44100_192: mp3 with 44.1kHz sample rate at 192kbps (requires Creator tier or above)
- pcm_16000: PCM format (S16LE) with 16kHz sample rate
- pcm_22050: PCM format (S16LE) with 22.05kHz sample rate
- pcm_24000: PCM format (S16LE) with 24kHz sample rate
- pcm_44100: PCM format (S16LE) with 44.1kHz sample rate (requires Pro tier or above)
- ulaw_8000: μ-law format (sometimes written mu-law, often approximated as u-law) with 8kHz sample rate; commonly used for Twilio audio inputs
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/text-to-speech/{voice_id}/stream
POST
Back to Table of Contents
Gets information about the user.
const getInfoResponse = await elevenlabs.user.getInfo({});
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/user
GET
Back to Table of Contents
Gets extended information about the user's subscription.
const getSubscriptionInfoResponse =
  await elevenlabs.user.getSubscriptionInfo({});
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
ExtendedSubscriptionResponseModel
/v1/user/subscription
GET
Back to Table of Contents
Create a previously generated voice. This endpoint should be called after you have fetched a generated_voice_id using /v1/voice-generation/generate-voice.
const createVoiceResponse = await elevenlabs.voiceGeneration.createVoice({
  voice_name: "voice_name_example",
  voice_description: "voice_description_example",
  generated_voice_id: "generated_voice_id_example",
});
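Since the generated_voice_id comes from a response header of the generate-voice call, a small hypothetical helper for reading it might look like this (the header name generated_voice_id and the plain-object headers shape are assumptions; HTTP header casing varies between clients, so the lookup is case-insensitive):

```typescript
// Hypothetical helper: finds the generated_voice_id header in a response's
// headers object, matching the header name case-insensitively.
function pickGeneratedVoiceId(
  headers: Record<string, string>
): string | undefined {
  for (const [name, value] of Object.entries(headers)) {
    if (name.toLowerCase() === "generated_voice_id") {
      return value;
    }
  }
  return undefined; // header absent
}
```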
Name to use for the created voice.
Description to use for the created voice.
The generated_voice_id to create. If you don't have one yet, call POST /v1/voice-generation/generate-voice and fetch the generated_voice_id from the response header.
Optional metadata to add to the created voice. Defaults to None.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/voice-generation/create-voice
POST
Back to Table of Contents
Generate a random voice based on parameters. This method returns a generated_voice_id in the response header and a sample of the voice in the body. If you like the generated voice, call /v1/voice-generation/create-voice with the generated_voice_id to create the voice.
const generateRandomVoiceResponse =
  await elevenlabs.voiceGeneration.generateRandomVoice({
    gender: "female",
    accent: "accent_example",
    age: "young",
    accent_strength: 1.0,
    text: "text_example",
  });
Category code corresponding to the gender of the generated voice. Possible values: female, male.
Category code corresponding to the accent of the generated voice. Possible values: american, british, african, australian, indian.
Category code corresponding to the age of the generated voice. Possible values: young, middle_aged, old.
The strength of the accent of the generated voice. Has to be between 0.3 and 2.0.
Text to generate. The text length has to be between 100 and 1000 characters.
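The two numeric constraints above (accent_strength in [0.3, 2.0], text length in [100, 1000]) can be checked client-side before calling the endpoint. A hypothetical validation helper, not part of the SDK:

```typescript
// Hypothetical client-side check mirroring the documented constraints for
// /v1/voice-generation/generate-voice. Returns a list of violation messages
// (empty when the parameters are valid).
function validateGenerateVoiceParams(
  accentStrength: number,
  text: string
): string[] {
  const errors: string[] = [];
  if (accentStrength < 0.3 || accentStrength > 2.0) {
    errors.push("accent_strength must be between 0.3 and 2.0");
  }
  if (text.length < 100 || text.length > 1000) {
    errors.push("text length must be between 100 and 1000 characters");
  }
  return errors;
}
```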
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/voice-generation/generate-voice
POST
Back to Table of Contents
Get possible parameters for the /v1/voice-generation/generate-voice endpoint.
const getVoiceGenerationParametersResponse =
  await elevenlabs.voiceGeneration.getVoiceGenerationParameters();
VoiceGenerationParameterResponseModel
/v1/voice-generation/generate-voice/parameters
GET
Back to Table of Contents
Add a sharing voice to your collection of voices in VoiceLab.
const addToCollectionResponse = await elevenlabs.voices.addToCollection({
  publicUserId: "publicUserId_example",
  voiceId: "voiceId_example",
  new_name: "new_name_example",
});
The name that identifies this voice. This will be displayed in the dropdown of the website.
Public user ID used to publicly identify ElevenLabs users.
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/voices/add/{public_user_id}/{voice_id}
POST
Back to Table of Contents
Add a new voice to your collection of voices in VoiceLab.
import fs from "fs";

const addVoiceToCollectionResponse =
  await elevenlabs.voices.addVoiceToCollection({
    name: "name_example",
    files: [fs.readFileSync("/path/to/file")],
  });
The name that identifies this voice. This will be displayed in the dropdown of the website.
One or more audio files to clone the voice from.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
How would you describe the voice?
Serialized labels dictionary for the voice.
/v1/voices/add
POST
Back to Table of Contents
Deletes a voice by its ID.
const deleteByIdResponse = await elevenlabs.voices.deleteById({
  voiceId: "voiceId_example",
});
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/voices/{voice_id}
DELETE
Back to Table of Contents
Edit your settings for a specific voice. "similarity_boost" corresponds to "Clarity + Similarity Enhancement" in the web app, and "stability" corresponds to the "Stability" slider in the web app.
const editSettingsPostResponse = await elevenlabs.voices.editSettingsPost({
  voiceId: "voiceId_example",
  stability: 0.5,
  similarity_boost: 0.75,
  style: 0,
  use_speaker_boost: true,
});
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/voices/{voice_id}/settings/edit
POST
Back to Table of Contents
Gets the default settings for voices. "similarity_boost" corresponds to "Clarity + Similarity Enhancement" in the web app, and "stability" corresponds to the "Stability" slider in the web app.
const getDefaultVoiceSettingsResponse =
  await elevenlabs.voices.getDefaultVoiceSettings();
/v1/voices/settings/default
GET
Back to Table of Contents
Returns the settings for a specific voice. "similarity_boost" corresponds to "Clarity + Similarity Enhancement" in the web app, and "stability" corresponds to the "Stability" slider in the web app.
const getSettingsResponse = await elevenlabs.voices.getSettings({
  voiceId: "voiceId_example",
});
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/voices/{voice_id}/settings
GET
Back to Table of Contents
Gets a list of shared voices.
const getSharedVoicesResponse = await elevenlabs.voices.getSharedVoices({
  pageSize: 30,
  featured: false,
  page: 0,
});
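The pageSize parameter is capped at 500 and defaults to 30 (see below). A hypothetical client-side helper enforcing those documented limits:

```typescript
// Hypothetical helper: normalizes a requested shared-voices page size to the
// documented bounds (maximum 500, default 30 when no value is given).
function normalizePageSize(requested?: number): number {
  if (requested === undefined) return 30; // documented default
  // Clamp to [1, 500]; 500 is the documented maximum.
  return Math.min(Math.max(1, Math.floor(requested)), 500);
}
```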
How many shared voices to return at most. Cannot exceed 500; defaults to 30.
Voice category used for filtering.
Gender used for filtering.
Age used for filtering.
Accent used for filtering.
Search term used for filtering.
Use-case used for filtering.
Search term used for filtering.
Sort criteria.
Filter featured voices.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/shared-voices
GET
Back to Table of Contents
Returns metadata about a specific voice.
const getVoiceMetadataResponse = await elevenlabs.voices.getVoiceMetadata({
  voiceId: "voiceId_example",
  withSettings: false,
});
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
If set, will return settings information corresponding to the voice; requires authorization.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/voices/{voice_id}
GET
Back to Table of Contents
Gets a list of all available voices for a user.
const listAllVoicesResponse = await elevenlabs.voices.listAllVoices({});
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
/v1/voices
GET
Back to Table of Contents
Edit a voice created by you.
const updateVoiceByIdResponse = await elevenlabs.voices.updateVoiceById({
  voiceId: "voiceId_example",
  name: "name_example",
});
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
The name that identifies this voice. This will be displayed in the dropdown of the website.
Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
How would you describe the voice?
Audio files to add to the voice
Serialized labels dictionary for the voice.
/v1/voices/{voice_id}/edit
POST
Back to Table of Contents
Get Sso Provider Admin
const getSsoProviderAdminResponse =
  await elevenlabs.workspace.getSsoProviderAdmin({
    workspaceId: "workspaceId_example",
  });
/admin/{admin_url_prefix}/sso-provider
GET
Back to Table of Contents
This TypeScript package is automatically generated by Konfig.