diff --git a/docs/_includes/news/20241012.md b/docs/_includes/news/20241012.md
new file mode 100644
index 0000000000..868970974a
--- /dev/null
+++ b/docs/_includes/news/20241012.md
@@ -0,0 +1,74 @@
+### [12th October 2024](/news/20241012)
+
+We have another amazing developer case study online by the incredible Alex Raccuglia!
+
+You can read all about **Transcriber AI Metadata** [here](/developer-case-studies/transcriber-ai-metadata).
+
+---
+
+Let's talk about the future of editing!
+
+Join the amazing [Jenn Jager](https://www.youtube.com/@JennJagerPro), Final Cut Pro Guru Mark Spencer from [Ripple Training](https://www.rippletraining.com), Writer/Director and Squirrel Me Bad creator [Daniel Cohn](https://www.youtube.com/@squirrelmebad9684), Director of Product Marketing for Continuum at [Boris FX](https://borisfx.com) Nick Harauz, and myself (Chris Hocking) as we chat about all things related to the upcoming Final Cut Pro Creative Summit!
+
+You can watch on YouTube Live here:
+
+[![](/static/fcp-creative-summit-youtube-preview.jpeg)](https://ltnt.tv/fcpcs-2024)
+
+---
+
+**Dylan Bates** (The Final Cut Bro) has released an update to his **Pro Zooms** plugin.
+
+He explains:
+
+> I have added a zoom offset slider, which will now let you adjust the size of the zoom without using the on-screen controls. This is definitely a hacked-together solution, but I was getting tired of waiting for an update from Apple to completely resolve this issue.
+>
+> Just know, if you use the zoom offset slider, the on-screen controls will no longer accurately show exactly what you will be zooming into.
+>
+> If you were having no issues with Pro Zooms in the past, there is no need to update. This is only to help the few that can't seem to adjust the controls for whatever reason.
+
+You can learn more [here](https://thefinalcutbro.com/products/pro-zooms-for-final-cut-pro?variant=41021167370274&currency=USD&utm_campaign=sag_organic&srsltid=AfmBOorxoV44-E_4yF6JcsL_aqxIxqfW6TnHnzDCOR3DpYUO1KjkDamsVGw&utm_content=YT3-tWvxgW-YCS0YHKpHZ98oGHDu-NVcdznbxBSoAdiSMxTQ9E16MGnj6O4QpB_z6C4kCzKv7JxN9pCFSFjo_Bbt_n1LEreg-xMEZ2K7vSAjYg&utm_term=UCYlZLHOzom9-MryCEodaoXg&utm_medium=product_shelf&utm_source=youtube).
+
+---
+
+**iodyne** has just released a major firmware update for their **Pro Data** hardware.
+
+> Our second major firmware update of the year, Pro Data 1.5, has been released for all existing customers. We're very excited about the Multi-Reader Sharing feature, which many of you, particularly in post, have been anticipating.
+
+You can read more [here](https://iodyne.com/multi-reader-sharing-brings-data-discipline-to-collaborative-workflows/).
+
+---
+
+**Color Finale** `v2.10.1` is out now!
+
+It contains the following bug fixes and improvements:
+
+- Resolved a color discrepancy issue with the published mask Cutout Mode in HDR projects.
+- No longer offering the Adjustment Layer Motion Template installation. Please use your own template.
+- Enabled new film emulation tools to function within HDR video workflows.
+- Updated the Color Finale app interface to include license information and an option to deactivate the application directly from the computer.
+
+You can learn more [here](https://colorfinale.com/release-notes#26).
+
+---
+
+**Captionator for Final Cut** `v2.1.0` is out now!
+
+Whisper Turbo now runs substantially faster and more accurately: just choose the Whisper Large v3 Turbo model.
+
+Plenty of new changes:
+
+- Rewritten text editor for editing captions
+- Total removal of CoreML
+- Addition of Large v3 Turbo (a much faster model)
+- Set a default frame rate if Final Cut doesn't provide one
+- Multi-threading for even faster transcription
+- Many fixes for the custom motion templates
+- Validated downloader to ensure model downloads aren't corrupt
+
+You can download on the Mac App Store [here](https://apps.apple.com/au/app/captionator-for-final-cut/id1627843786?mt=12).
+
+---
+
+**Worx4 X** `v1.3.11` is out now with added support for FCPXML `v1.12`.
+
+You can download on the Mac App Store [here](https://apps.apple.com/au/app/worx4-x/id1195903030?mt=12).
\ No newline at end of file
diff --git a/docs/developer-case-studies/transcriber-ai-metadata.md b/docs/developer-case-studies/transcriber-ai-metadata.md
new file mode 100644
index 0000000000..aa67cf2386
--- /dev/null
+++ b/docs/developer-case-studies/transcriber-ai-metadata.md
@@ -0,0 +1,289 @@
+# Transcriber with AI Metadata
+
+This is a story that starts not so long ago, about what I call the hardest job of all after being a dad: being a good husband...
+
+Hi, I'm **Alex Raccuglia**. I'm a filmmaker and editor from Milan (in Italy, in case European geography isn't your thing), but I trained as a software engineer, so over the past few years I started writing tools to help me in my profession. It has now become a real second job: I have a software house called **[Ulti.Media](https://ulti.media)**, and I produce apps designed for people doing my (first) job.
+
+---
+
+### 2023-2024: The years of Artificial Intelligence
+
+I think that unless you have been living under a rock, or hiding your head in the sand like an ostrich, you will agree with me that the last 18 months have been really crazy when it comes to certain application areas of artificial intelligence.
+
+We have been talking about artificial intelligence for a long time, often a bit loosely, but never before has it been so apparent that it is possible to do something genuinely creative with these tools that have more technology than soul.
+
+Let me be clear: I can't draw anything that isn't a scribble, so platforms that let you create artwork simply by typing text have appealed to me from the very beginning.
+
+![](/static/transcriber-ai-metadata-Firefly.png)
+
+Then Goliath came along: ChatGPT. It was a game changer for everybody, creating new needs and, quite literally, revolutionizing more than one industry.
+
+If you've come this far, I guess there's nothing new in my words, right?
+
+So *why the preamble?*
+
+---
+
+### The favors a husband does for his wife
+
+Meanwhile, my wife started collaborating on a project for [a new company here in Milan](https://www.atlaswinestudio.com).
+
+I would love to tell you what they do, but I'll first show you the video I made for them:
+
+[![](/static/transcriber-ai-metadata-youtube-01.jpg)](https://youtu.be/B8FnQ5VeSkE)
+
+Just for some publicity, here is the video of the launch evening:
+
+[![](/static/transcriber-ai-metadata-youtube-02.jpg)](https://youtu.be/GKCax0cv-Kk)
+
+In the months that followed, my wife had to create several pieces of content for social platforms, and she often found herself with blank-page syndrome, so she would come to me for help.
+
+Busy with a thousand other things, I have to be honest: I was not particularly creative. However, I had learned how to write rather complex, rich prompts, so that I could quickly get back the text of Instagram posts, complete with hashtags and emoji.
+
+My wife would then use these texts as a starting point, reworking and rewriting them, but no longer afraid of not knowing where to start.
+
+And maybe that's the point: not having a machine do all the creative work, but leveraging the underlying language model for a starting point to think from.
+
+---
+
+### The Morning Rant
+
+![](/static/transcriber-ai-metadata-TheMorningRant.jpg)
+
+For the past year in English, and for the past three years in Italian, I have been producing **[The Morning Rant](https://www.youtube.com/playlist?list=PLrDR4S9nie2YG87vJe_AjZRloVOucr7zk)**, a series of videos that I record in my car while driving from home to the office. As if it were a podcast, I narrate what happens to me, both in my work as a filmmaker and in my work as a software developer, trying to keep the storytelling effective and, especially in Italian, entertaining.
+
+For the last two years I've been using an internal tool I wrote myself, called **SciattaGPT** (the literal translation would be "*dull*, *sloppy*, *scrappy* GPT"), to create each episode's summary and title suggestions. It has always been powered by ChatGPT: first the 3.5 model, then GPT-4, and now GPT-4o-mini.
+
+![](/static/transcriber-ai-metadata-SciattaGPT.png)
+
+In SciattaGPT, all the prompts are predefined, rather statically.
+
+At some point, though, all these ingredients came together in my head, and I said to myself: I have sufficient (not excellent, but sufficient) experience with OpenAI prompts and APIs, and I've been doing this for myself for a very long time...
+
+*Can't I just put it all together?*
+
+---
+
+### NQR
+
+I started out developing a very simple application that would act as a front end to a relatively complex underlying system, which I called **NQR** (it stands for **Natural Query Responses**, though I picked the letters first, because I liked the way they sounded, and only found a meaning for the acronym later).
+
+![](/static/transcriber-ai-metadata-NQR.png)
+
+NQR is, in conception and also somewhat in implementation, relatively simple: a system for managing prompts that generate content from other content. Given a rather long text, which could very well be the transcript of a video, I prepared several prompts that generate a summary, an ideal title, a list of bullet points... in short, things like that.
+
+To make applying these prompts fast and usable, I developed a grouping system that organizes prompts into sets: a set for **YouTube**, a set for **social media**, a set for **metadata**, and so on. This way a user can execute several prompts at once just by selecting a single set.
+
+![](/static/transcriber-ai-metadata-NQR2.png)
+
+Perhaps this may sound a bit "*pompous*" or "*self-praising*," but I tried very hard to think from the end user's point of view: organizing prompts into sets lets you generate an immense amount of content with just two clicks, first selecting the set and then running the analysis.
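+
+To make the idea concrete, here is a minimal sketch of what a prompt-set structure like this might look like. Everything here is hypothetical and invented for illustration (NQR's actual implementation isn't public); the model call is injected as a closure so the sketch stays self-contained:
+
+```swift
+// Hypothetical sketch of an NQR-style prompt-set structure.
+// All type and property names are invented for illustration.
+struct Prompt: Codable {
+    let title: String      // e.g. "Episode summary"
+    let template: String   // the instruction sent to the model
+}
+
+struct PromptSet: Codable {
+    let name: String       // e.g. "YouTube", "Social Media", "Metadata"
+    let prompts: [Prompt]
+}
+
+// Running a set just applies every prompt in it to the same transcript,
+// so selecting one set fans out into many generations.
+func run(_ set: PromptSet,
+         on transcript: String,
+         askModel: (_ instruction: String, _ text: String) async throws -> String
+) async throws -> [String: String] {
+    var results: [String: String] = [:]
+    for prompt in set.prompts {
+        // Language forcing, as described in the next section: ask the model
+        // to reply in the content's language even though the prompt is English.
+        let instruction = prompt.template +
+            "\nAnswer using the same language as the text, even though this question is in English."
+        results[prompt.title] = try await askModel(instruction, transcript)
+    }
+    return results
+}
+```
+
+Because the sets are plain data rather than code, they could be shipped and refreshed independently of the app binary, which would allow prompts to be updated without updating the application, as described below.
+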
+My competitor with this application, [MacWhisper](https://goodsnooze.gumroad.com/l/macwhisper), an application that I greatly value and respect (let me be clear!), has been offering for several months now the ability to query ChatGPT with prompts, but those prompts must be written by hand.
+
+My system, on the other hand, is about *doing the work once* and then applying it automatically every time after that; the more common prompts come ready-made, implemented by yours truly.
+
+The system, by the way, allows prompts and sets to be updated continuously, without having to update the application.
+
+---
+
+### One small planet, many different languages
+
+![](/static/transcriber-ai-metadata-OneSmallPlanet.jpg)
+
+When I started writing the first prompts (which, recall, are the "*questions*" posed to the artificial intelligence), I thought I would have to create numerous prompts in all the languages I wanted to support.
+
+Of course, I started with English, thinking that I would then be able to translate all these prompts more or less automatically, specifying the language as well.
+
+Later, almost by accident, I submitted an Italian text for analysis using English prompts. To my surprise, I noticed that in most cases the response was still given in Italian. Repeating the processing several times, I would sometimes get answers in Italian, sometimes in English.
+
+It is important to know that, unless you intentionally force them, the responses of all these artificial intelligence systems, not just ChatGPT, have a component of probabilistic variability. This means that, using the exact same prompt, you can get very different results.
+
+These systems do not really understand languages; rather, they are formidable imitators of human language. They do not even know the words themselves, but rather the particles that make them up (called *tokens* in jargon). They respond by trying to find the best solution to the question as formulated.
+
+If most of the text is in Italian, even if the question is asked in English, there is a good chance they will answer in Italian.
+
+So I went a step further: in all the system prompts I specified that the answer must use the same language as the content, even if the question is in English. In this way I achieved a very high success rate: to date, out of more than 200 prompts, in only one case was the answer given in English for a text in Italian. Repeating the request produced the desired result.
+
+In conclusion, with a margin of error of less than 1% (and probably much less), NQR provides results in the same language as the content being analyzed. Text quality is definitely higher for widely used languages such as English, Spanish, and French, but also Portuguese, German, Italian, Russian, and others.
+
+It can be said that the quality of the response is comparable to what you would get by asking the question directly in ChatGPT.
+
+---
+
+### Transcriber
+
+I released Transcriber, perhaps my most successful application, a little over a year ago.
+
+![](/static/transcriber-ai-metadata-PakSideSite_Transcriber_00000.jpg)
+
+I've talked about it **[here](/developer-case-studies/transcriber/)**, but it's okay to repeat a little, right?
+
+Transcriber is a tool that transcribes audio and video to create subtitles, with a strong skew toward direct export to Final Cut Pro.
+It has at its core a text editor that I think is pretty good, especially for text that needs to stay in sync with video.
+
+So I thought I would add NQR to Transcriber so that I could do analysis and content generation from the transcribed text.
+
+*Brand name*? **AI Metadata** (always use "*AI*" because it increases perceived value...).
+
+Something useful for me, for The Morning Rant, but which I later realized was perfect for a lot of other people...
+
+---
+
+### But in simple terms, how does this stuff work?
+
+Yeah, how does it work? I'd rather show you with a visual example than describe it in words.
+
+[![](/static/transcriber-ai-metadata-youtube-03.jpg)](https://youtu.be/Z-W3vGPxDVE)
+
+Now I will say something unusual for someone in my position: *this system is not perfect*. The user interface could be improved, and as development continues, I will release updates to make the whole workflow easier and more intuitive.
+
+However, I am quite satisfied with the result, especially because all this information, or rather, this meta-information, is well organized. The various versions are managed so that nothing is lost and everything is saved. It is relatively easy to add new features, not only from my point of view as a developer, but also from the end users' point of view.
+
+---
+
+### A couple of things I learned from using this class of artificial intelligence
+
+Ever since I started developing my applications, their main purpose has been to automate repetitive tasks as much as possible, so that I would be free to focus on the creative component and on solving new problems.
+
+When ChatGPT came along, I was dazzled by the potential of the tool, as I'm sure you all were. And we were still talking about GPT-3 a couple of years ago. I had seen with my very own eyes, finally, *a machine pass the Turing test* brilliantly.
+
+![](/static/transcriber-ai-metadata-TuringTest.jpg)
+
+But then, as with all things, I dug deeper, gained my own experience, and realized which things LLMs do excellently and which they still struggle to do even adequately.
+
+Given that these tools are continually improving, at a breakneck pace (though at some point the power consumption will be so high that they cannot go beyond a certain ceiling), I have learned to use them for what they are: a great help, an excellent kick-start, something that can get you out of the deadlock of the blank page, but definitely not a final solution.
+
+When I record an episode of my podcast, I quickly take advantage of these tools to get a description, tags, and a title. But then, for the most important information, I get on the keyboard, correct the errors, take out what is not needed, and add what's missing.
+
+My philosophy is to take this generated metadata and rework it, so that from a "just enough" level you get to "more than satisfactory."
+
+You have to look at the responses of artificial intelligence a little bit like the work of a newly hired intern: you always have to check what it is doing, because, in the end, we are the ones putting our name on the result...
+
+---
+
+### Discovering hidden things
+
+When I handed over the final version of the application to the guys at [**FxFactory**](https://fxfactory.com/) for them to publish on the store, I turned my attention to the prompts.
+Since these can be added and changed without updating the application, I thought it would be a nice gift for all users to have as much functionality as possible ready "*out of the box*".
+
+Always starting from the content I create, particularly the wine podcast, I developed some prompts that gave me a fresh look at the content itself.
+
+Initially I started with URLs: I wanted to know what links were being cited in the broadcast, so I created this prompt:
+
+![](/static/transcriber-ai-metadata-LinksPrompt.png)
+
+A relatively simple thing that, when I ran it, made me discover that there were many more references in an episode than I remembered:
+
+![](/static/transcriber-ai-metadata-LinksResult.png)
+
+For me it was really a revelation: the clever "*stupidity*" of LLMs had made me discover something I had forgotten.
+
+Now, if you think about it for a moment, this is one of the most important and relevant things in this whole article, and indeed about everything we can think about this class of tools!
+
+These objects do a cold analysis, without love, without passion, without even understanding what they are doing; however, they do it rather precisely. I emphasize: "*rather precisely*," not "*infallibly*," let me be clear.
+
+Simply put, they make us discover or, better yet, rediscover something about the content, so that we can really have metadata that also has semantic meaning!
+
+I went ahead and developed other prompts, such as this one that identifies brands:
+
+![](/static/transcriber-ai-metadata-BrandsResult.png)
+
+Or this one that tries to figure out who the participants are, if they are mentioned:
+
+![](/static/transcriber-ai-metadata-PeopleResult.png)
+
+I realize that I have only begun to scratch the surface of what can be done. In the coming weeks, either at the request of app users or on my own initiative, I will be developing more prompts like these.
+
+---
+
+### Image Generation
+
+For my experiments, for my podcast and YouTube show, since I am a subscriber, I use [Adobe's Firefly](https://firefly.adobe.com/) for image generation, knowing full well that the level of quality is far below that of other systems, foremost among them [Midjourney](https://www.midjourney.com/).
+
+So I generated a prompt that generates a prompt... Basically, instead of writing what I needed myself, as I always did, I asked the artificial intelligence, again via one of NQR's prompts, to write the prompt for generating the image, which I would then copy and paste into Firefly.
+
+![](/static/transcriber-ai-metadata-ImagePrompt.png)
+
+This first level of recursion is interesting: one prompt generating another prompt...
+
+But then, since OpenAI has an API to generate images directly with the DALL-E model, I thought it would be nice to bypass this whole round trip. At a not insignificant cost (we're talking a few cents, not a few thousandths of a cent), I decided to go ahead with direct generation.
+
+That said, as of now images can be generated directly from within Transcriber!
+
+![](/static/transcriber-ai-metadata-ImageGeneration.png)
+
+You can choose the model, DALL-E 2 or DALL-E 3 (DALL-E 2's quality is frankly dreadful; I think they only keep it around because some applications still use it). For DALL-E 3 you can choose to generate a square or 16:9 image, either horizontal or vertical.
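+
+For the curious, here is roughly what a direct DALL-E 3 call looks like against OpenAI's public `/v1/images/generations` endpoint. This is a minimal sketch (error handling omitted), not Transcriber's actual code:
+
+```swift
+import Foundation
+
+// Minimal sketch of a DALL-E 3 request to OpenAI's public
+// /v1/images/generations endpoint. Illustrative only; not
+// Transcriber's actual implementation.
+func generateImage(prompt: String, apiKey: String) async throws -> URL? {
+    var request = URLRequest(url: URL(string: "https://api.openai.com/v1/images/generations")!)
+    request.httpMethod = "POST"
+    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
+    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
+    let body: [String: Any] = [
+        "model": "dall-e-3",
+        "prompt": prompt,
+        "n": 1,
+        "size": "1792x1024",  // 16:9-ish landscape; "1024x1792" for vertical
+        "quality": "hd",      // the higher-quality (and pricier) tier
+        "style": "vivid"      // or "natural" for a less stock-photo look
+    ]
+    request.httpBody = try JSONSerialization.data(withJSONObject: body)
+    let (data, _) = try await URLSession.shared.data(for: request)
+    // The response carries a "data" array of generated images, each with a URL.
+    guard let json = try JSONSerialization.jsonObject(with: data) as? [String: Any],
+          let urlString = (json["data"] as? [[String: Any]])?.first?["url"] as? String
+    else { return nil }
+    return URL(string: urlString)
+}
+```
+
+The `size`, `quality`, and `style` values map onto the options in the settings panel shown below; the price depends on the tier, which is how a 16:9 image ends up at the cost mentioned a little further on.
+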
+![](/static/transcriber-ai-metadata-ImageGeneratorSettings.png)
+
+You can also choose between a standard image or one in a "vivid" style, which creates more aesthetically pleasing results that look more like stock photos than regular photos.
+
+Generating a 16:9 image comes in at a cost of $0.12.
+
+---
+
+### Money, Money, Money!
+
+![](/static/transcriber-ai-metadata-Costs.jpg)
+
+But how much does this stuff cost?
+
+Probably by the time you read this article I will have updated Transcriber with a new version of the system that also shows how much you paid per generation.
+
+The costs, using the GPT-4o-mini model, are really low: we're talking about **$0.15 per million prompt tokens** (**$0.60 per million response tokens**), which, given that a 30-minute transcript runs to several thousand tokens, translates roughly into something like **$0.003-0.004** for a single prompt and its response on a 30-minute video.
+
+Simply put, running a dozen or so prompts, image included, you end up spending something in the range of $0.15-0.25.
+
+If you are not interested in image generation, the costs drop into the range of $0.01-0.03!
+
+*A lot? A little?* I'll leave it up to you to decide; as far as I'm concerned, these are really small prices.
+
+Just to do the math: for what a coffee at the bar costs here in Italy (which, despite being the home of coffee, is one of the places where coffee, although *the best in the world*, costs the least), you can run hundreds of prompts.
+
+In August, when I tested this system far and wide with all my content, even generating dozens of images, I spent only $4 with OpenAI: the cost of a cappuccino and a brioche...
+
+You may argue that the model I use, GPT-4o-mini, is much cheaper than GPT-4, which, according to OpenAI, is more refined. However, in my experience the difference in quality is unnoticeable.
+
+---
+
+### Before the conclusion
+
+I don't know what kind of article you were expecting. Probably anyone else in my place, with the opportunity of such an important showcase (*I cannot thank you enough, Chris*), would have opted for a much more promotional text.
+
+Since I do not know how to sell myself well (the bane and delight of my life), I thought I would tell you about my journey of discovery, and share what I figured out while developing this application and, in this case, the infrastructure that allows you to analyze text.
+
+At this point, before the final mentions, the ball is in your court. Let me know what you think, whether you have tried the app or simply read these lines. I am really curious and hungry for interaction and conversation!
+
+---
+
+### Try it out, it's (kind of) free!
+
+If you have already purchased Transcriber, **this update is free** and you are all set.
+
+On the other hand, if you are curious, pop over to [**FxFactory**'s website](https://fxfactory.com/info/transcriber/) and download the application: the trial version allows you to work on the first 45 seconds of a video (I know, it's not much, but it's enough to get an idea).
+
+I would like to take this opportunity to thank FxFactory for helping to introduce Transcriber to a much wider audience than I ever could have reached on my own. In addition, they encouraged me to create a better app: more performant, more intuitive, and more pleasant to use and look at.
+
+Is there anything else to add?
+
+*Another thanks to Chris for this space*.
+
+If you have any questions, please leave a comment on this article!
+
+---
+
+### About Alex
+
+![](/static/transcriber-ai-metadata-alexraccuglia.jpg)
+
+Alex Raccuglia, 50, from Milan, Italy, studied computer engineering but, fortunately for him, ended up as a director of TV commercials and promotional videos, accumulating a fair amount of experience in the field of visual effects.
+
+Over the years he continued to develop software and, at some point, decided to start selling his apps on [**Ulti.Media**](https://ulti.media).
+
+He also has a [**YouTube channel**](https://www.youtube.com/@ulti_media), which he runs somewhat haphazardly.
+
+He is a founding member of [**Runtime Radio**](https://runtimeradio.it), an Italian podcast network.
+
+*He hates the traffic in Milan and wrote this article by dictating it to Siri during his commute to work.*
\ No newline at end of file
diff --git a/docs/developer-case-studies/transcriber-ai-metadata.yml b/docs/developer-case-studies/transcriber-ai-metadata.yml
new file mode 100644
index 0000000000..f2956aaba3
--- /dev/null
+++ b/docs/developer-case-studies/transcriber-ai-metadata.yml
@@ -0,0 +1,4 @@
+label: Transcriber AI Metadata
+icon: beaker
+order: 2
+image: /static/thumbnail.jpg
\ No newline at end of file
diff --git a/docs/fcp-creative-summit.md b/docs/fcp-creative-summit.md
index 1132b487ba..01d3835578 100644
--- a/docs/fcp-creative-summit.md
+++ b/docs/fcp-creative-summit.md
@@ -21,6 +21,20 @@ You can view the program [here](https://fcpcreativesummits.com/program/).
---
+### Live Stream
+
+Let's talk about the future of editing!
+
+Join the amazing [Jenn Jager](https://www.youtube.com/@JennJagerPro), Final Cut Pro Guru Mark Spencer from [Ripple Training](https://www.rippletraining.com), Writer/Director and Squirrel Me Bad creator [Daniel Cohn](https://www.youtube.com/@squirrelmebad9684), Director of Product Marketing for Continuum at [Boris FX](https://borisfx.com) Nick Harauz, and myself (Chris Hocking) as we chat about all things related to the upcoming Final Cut Pro Creative Summit!
+
+You can watch on YouTube Live here:
+
+[![](/static/fcp-creative-summit-youtube-preview.jpeg)](https://ltnt.tv/fcpcs-2024)
+
+---
+
+### Intro Video
+
The official **FCP Creative Summit 2024** promotional video is out now, created by [Iain Anderson](https://iain-anderson.com).

Spot the crazy person (yes, it's me)!

You can watch on Vimeo here:
@@ -31,6 +45,8 @@
---
+### Final Cut Pro Radio
+
**Richard Taylor** has announced that the **Final Cut Pro Creative Summit** will be happening **13th to 15th November** at **Juniper Hotel Cupertino**.

You can learn more on Richard's Final Cut TV & Coffee YouTube Channel [here](https://www.youtube.com/watch?v=AhZNBV7vcpA).
diff --git a/docs/static/fcp-creative-summit-youtube-preview.jpeg b/docs/static/fcp-creative-summit-youtube-preview.jpeg new file mode 100644 index 0000000000..e6cfdfa0bd Binary files /dev/null and b/docs/static/fcp-creative-summit-youtube-preview.jpeg differ diff --git a/docs/static/transcriber-ai-metadata-BrandsResult.png b/docs/static/transcriber-ai-metadata-BrandsResult.png new file mode 100644 index 0000000000..d46bbc313e Binary files /dev/null and b/docs/static/transcriber-ai-metadata-BrandsResult.png differ diff --git a/docs/static/transcriber-ai-metadata-Costs.jpg b/docs/static/transcriber-ai-metadata-Costs.jpg new file mode 100644 index 0000000000..11d457cab7 Binary files /dev/null and b/docs/static/transcriber-ai-metadata-Costs.jpg differ diff --git a/docs/static/transcriber-ai-metadata-Firefly.png b/docs/static/transcriber-ai-metadata-Firefly.png new file mode 100644 index 0000000000..153b628559 Binary files /dev/null and b/docs/static/transcriber-ai-metadata-Firefly.png differ diff --git a/docs/static/transcriber-ai-metadata-ImageGeneration.png b/docs/static/transcriber-ai-metadata-ImageGeneration.png new file mode 100644 index 0000000000..935496374c Binary files /dev/null and b/docs/static/transcriber-ai-metadata-ImageGeneration.png differ diff --git a/docs/static/transcriber-ai-metadata-ImageGeneratorSettings.png b/docs/static/transcriber-ai-metadata-ImageGeneratorSettings.png new file mode 100644 index 0000000000..ec449f93ae Binary files /dev/null and b/docs/static/transcriber-ai-metadata-ImageGeneratorSettings.png differ diff --git a/docs/static/transcriber-ai-metadata-ImagePrompt.png b/docs/static/transcriber-ai-metadata-ImagePrompt.png new file mode 100644 index 0000000000..8390a52403 Binary files /dev/null and b/docs/static/transcriber-ai-metadata-ImagePrompt.png differ diff --git a/docs/static/transcriber-ai-metadata-LinksPrompt.png b/docs/static/transcriber-ai-metadata-LinksPrompt.png new file mode 100644 index 0000000000..9b7000dfd6 Binary files /dev/null and b/docs/static/transcriber-ai-metadata-LinksPrompt.png differ diff --git a/docs/static/transcriber-ai-metadata-LinksResult.png b/docs/static/transcriber-ai-metadata-LinksResult.png new file mode 100644 index 0000000000..79026132f6 Binary files /dev/null and b/docs/static/transcriber-ai-metadata-LinksResult.png differ diff --git a/docs/static/transcriber-ai-metadata-NQR.png b/docs/static/transcriber-ai-metadata-NQR.png new file mode 100644 index 0000000000..d186438ad0 Binary files /dev/null and b/docs/static/transcriber-ai-metadata-NQR.png differ diff --git a/docs/static/transcriber-ai-metadata-NQR2.png b/docs/static/transcriber-ai-metadata-NQR2.png new file mode 100644 index 0000000000..7a363c7220 Binary files /dev/null and b/docs/static/transcriber-ai-metadata-NQR2.png differ diff --git a/docs/static/transcriber-ai-metadata-OneSmallPlanet.jpg b/docs/static/transcriber-ai-metadata-OneSmallPlanet.jpg new file mode 100644 index 0000000000..7ad52247c2 Binary files /dev/null and b/docs/static/transcriber-ai-metadata-OneSmallPlanet.jpg differ diff --git a/docs/static/transcriber-ai-metadata-PakSideSite_Transcriber_00000.jpg b/docs/static/transcriber-ai-metadata-PakSideSite_Transcriber_00000.jpg new file mode 100644 index 0000000000..f7278c238e Binary files /dev/null and b/docs/static/transcriber-ai-metadata-PakSideSite_Transcriber_00000.jpg differ diff --git a/docs/static/transcriber-ai-metadata-PeopleResult.png b/docs/static/transcriber-ai-metadata-PeopleResult.png new file mode 100644 index 0000000000..f08b71b244 
Binary files /dev/null and b/docs/static/transcriber-ai-metadata-PeopleResult.png differ diff --git a/docs/static/transcriber-ai-metadata-SciattaGPT.png b/docs/static/transcriber-ai-metadata-SciattaGPT.png new file mode 100644 index 0000000000..76ebf984bf Binary files /dev/null and b/docs/static/transcriber-ai-metadata-SciattaGPT.png differ diff --git a/docs/static/transcriber-ai-metadata-TheMorningRant.jpg b/docs/static/transcriber-ai-metadata-TheMorningRant.jpg new file mode 100644 index 0000000000..38cd42206e Binary files /dev/null and b/docs/static/transcriber-ai-metadata-TheMorningRant.jpg differ diff --git a/docs/static/transcriber-ai-metadata-TuringTest.jpg b/docs/static/transcriber-ai-metadata-TuringTest.jpg new file mode 100644 index 0000000000..91ff636eaf Binary files /dev/null and b/docs/static/transcriber-ai-metadata-TuringTest.jpg differ diff --git a/docs/static/transcriber-ai-metadata-alexraccuglia.jpg b/docs/static/transcriber-ai-metadata-alexraccuglia.jpg new file mode 100644 index 0000000000..e5198334ce Binary files /dev/null and b/docs/static/transcriber-ai-metadata-alexraccuglia.jpg differ diff --git a/docs/static/transcriber-ai-metadata-youtube-01.jpg b/docs/static/transcriber-ai-metadata-youtube-01.jpg new file mode 100644 index 0000000000..fdb807e59c Binary files /dev/null and b/docs/static/transcriber-ai-metadata-youtube-01.jpg differ diff --git a/docs/static/transcriber-ai-metadata-youtube-02.jpg b/docs/static/transcriber-ai-metadata-youtube-02.jpg new file mode 100644 index 0000000000..4f72a7dc9c Binary files /dev/null and b/docs/static/transcriber-ai-metadata-youtube-02.jpg differ diff --git a/docs/static/transcriber-ai-metadata-youtube-03.jpg b/docs/static/transcriber-ai-metadata-youtube-03.jpg new file mode 100644 index 0000000000..842531b7c0 Binary files /dev/null and b/docs/static/transcriber-ai-metadata-youtube-03.jpg differ diff --git a/docs/update-guide.md b/docs/update-guide.md index 4bc1ad74f0..4993a9a3ef 100644 --- a/docs/update-guide.md +++ b/docs/update-guide.md @@ -46,6 +46,8 @@ However, as macOS Sequoia 15.0 is brand new - take care, and don't rush to updat Sweetwater also has an incredibly detailed [macOS Sequoia Compatibility Guide](https://www.sweetwater.com/sweetcare/articles/macos-sequoia-compatibility-guide/) for editors that also use DAWs like Logic Pro. +We have had no issues with macOS Sequoia 15.0.0 or 15.0.1. + --- ## macOS Sonoma 14