Commit d503f89

Fix links broken in projects renaming
1 parent ffb663b commit d503f89

6 files changed: +8 −8 lines

README.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -266,7 +266,7 @@ And finally, here are some other examples and use cases for inspiration:
 
 1. [E2E Batch Inference](examples/e2e/): Feature engineering, training, and inference pipelines for tabular machine learning.
 2. [Basic NLP with BERT](examples/e2e_nlp/): Feature engineering, training, and inference focused on NLP.
-3. [LLM RAG Pipeline with Langchain and OpenAI](https://github.com/zenml-io/zenml-projects/tree/main/llm-agents): Using Langchain to create a simple RAG pipeline.
+3. [LLM RAG Pipeline with Langchain and OpenAI](https://github.com/zenml-io/zenml-projects/tree/main/zenml-support-agent): Using Langchain to create a simple RAG pipeline.
 4. [Huggingface Model to Sagemaker Endpoint](https://github.com/zenml-io/zenml-projects/tree/main/huggingface-sagemaker): Automated MLOps on Amazon Sagemaker and HuggingFace
 5. [LLMops](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide): Complete guide to do LLM with ZenML
 
@@ -341,8 +341,8 @@ our GitHub repo.
 ## 📚 LLM-focused Learning Resources
 
 1. [LL Complete Guide - Full RAG Pipeline](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) - Document ingestion, embedding management, and query serving
-2. [LLM Fine-Tuning Pipeline](https://github.com/zenml-io/zenml-projects/tree/main/llm-finetuning) - From data prep to deployed model
-3. [LLM Agents Example](https://github.com/zenml-io/zenml-projects/tree/main/llm-agents) - Track conversation quality and tool usage
+2. [LLM Fine-Tuning Pipeline](https://github.com/zenml-io/zenml-projects/tree/main/zencoder) - From data prep to deployed model
+3. [LLM Agents Example](https://github.com/zenml-io/zenml-projects/tree/main/zenml-support-agent) - Track conversation quality and tool usage
 
 ## 🤖 AI-Friendly Documentation with llms.txt
 
```

docs/book/how-to/pipeline-development/training-with-gpus/accelerate-distributed-training.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -49,7 +49,7 @@ The `run_with_accelerate` decorator accepts various arguments to configure your
 3. If `run_with_accelerate` is misused, it will raise a `RuntimeError` with a helpful message explaining the correct usage.
 
 {% hint style="info" %}
-To see a full example where Accelerate is used within a ZenML pipeline, check out our [llm-lora-finetuning](https://github.com/zenml-io/zenml-projects/blob/main/llm-lora-finetuning/README.md) project which leverages the distributed training functionalities while finetuning an LLM.
+To see a full example where Accelerate is used within a ZenML pipeline, check out our [llm-lora-finetuning](https://github.com/zenml-io/zenml-projects/blob/main/gamesense/README.md) project which leverages the distributed training functionalities while finetuning an LLM.
 {% endhint %}
 
 ## Ensure your container is Accelerate-ready
```
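The "raise a `RuntimeError` on misuse" behavior this hunk describes can be mimicked with a minimal stand-in decorator. This is a hypothetical sketch: `run_with_accelerate_stub`, its parameters, and its error messages are illustrative only, not ZenML's actual implementation.

```python
import functools


def run_with_accelerate_stub(num_processes: int = 1):
    """Hypothetical stand-in for a decorator that validates its
    configuration up front and raises RuntimeError with a helpful
    message when misused."""
    if callable(num_processes):
        # Reached when applied bare, i.e. @run_with_accelerate_stub
        # without parentheses, so the function lands in this slot.
        raise RuntimeError(
            "run_with_accelerate_stub must be called with arguments, "
            "e.g. @run_with_accelerate_stub(num_processes=2)."
        )
    if num_processes < 1:
        raise RuntimeError("num_processes must be >= 1.")

    def decorator(step_fn):
        @functools.wraps(step_fn)
        def wrapper(*args, **kwargs):
            # A real implementation would launch num_processes workers;
            # here we simply run the wrapped step once.
            return step_fn(*args, **kwargs)
        return wrapper
    return decorator


@run_with_accelerate_stub(num_processes=2)
def train_step(lr: float) -> str:
    return f"trained with lr={lr}"


print(train_step(0.01))  # -> trained with lr=0.01
```

Validating eagerly at decoration time, rather than when the step first runs, is what makes the error message actionable: the traceback points at the misconfigured decorator line itself.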

docs/book/user-guide/llmops-guide/finetuning-embeddings/evaluating-finetuned-embeddings.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -120,7 +120,7 @@ LLM-powered evaluations.
 The next section will cover [LLM finetuning and deployment](../finetuning-llms/) as the\
 final part of our LLMops guide. (This section is currently still a work in\
 progress, but if you're eager to try out LLM finetuning with ZenML, you can use[our LoRA\
-project](https://github.com/zenml-io/zenml-projects/blob/main/llm-lora-finetuning/README.md)\
+project](https://github.com/zenml-io/zenml-projects/blob/main/gamesense/README.md)\
 to get started. We also have [a\
 blogpost](https://www.zenml.io/blog/how-to-finetune-llama-3-1-with-zenml) guide which\
 takes you through[all the steps you need to finetune Llama 3.1](https://www.zenml.io/blog/how-to-finetune-llama-3-1-with-zenml) using GCP's Vertex AI with ZenML,\
```

docs/book/user-guide/llmops-guide/finetuning-llms/README.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -41,6 +41,6 @@ when you might need to finetune an LLM, how to evaluate the performance of what\
 you do as well as decisions around what data to use and so on.
 
 To follow along with the example explained in this guide, please follow the\
-instructions in [the `llm-lora-finetuning` repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-lora-finetuning) where the full code is also\
+instructions in [the `llm-lora-finetuning` repository](https://github.com/zenml-io/zenml-projects/tree/main/gamesense) where the full code is also\
 available. This code can be run locally (if you have a GPU attached to your\
 machine) or using cloud compute as you prefer.
```

docs/book/user-guide/llmops-guide/finetuning-llms/evaluation-for-finetuning.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -94,7 +94,7 @@ When using Generalized evals, it's important to consider their limitations and c
 - [nervaluate](https://github.com/MantisAI/nervaluate) (for NER)
 
 It's easy to build in one of these frameworks into your ZenML pipeline. The
-implementation of evaluation in [the `llm-lora-finetuning` project](https://github.com/zenml-io/zenml-projects/tree/main/llm-lora-finetuning) is a good
+implementation of evaluation in [the `llm-lora-finetuning` project](https://github.com/zenml-io/zenml-projects/tree/main/gamesense) is a good
 example of how to do this. We used the `evaluate` library for ROUGE evaluation,
 but you could easily swap this out for another framework if you prefer. See [the previous section](finetuning-with-accelerate.md#implementation-details) for more details.
```
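The ROUGE evaluation this hunk mentions can be illustrated with a toy unigram-overlap score. This is a simplified sketch of what a ROUGE-1 F1 score measures; a real pipeline would call the `evaluate` library's `rouge` metric rather than hand-rolling it like this.

```python
from collections import Counter


def rouge1_f1(candidate: str, reference: str) -> float:
    """Toy ROUGE-1 F1: harmonic mean of unigram precision and recall
    between a candidate summary and a reference. Illustrative only."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # shared tokens, min counts
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


# 5 of 6 tokens overlap on each side, so precision = recall = 5/6.
print(round(rouge1_f1("the cat sat on the mat",
                      "the cat lay on the mat"), 3))  # -> 0.833
```

Swapping in another metric framework, as the text suggests, only changes this scoring function; the surrounding pipeline step stays the same.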

docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-with-accelerate.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -19,7 +19,7 @@ structured meaning representations.
 For a
 full walkthrough of how to run the LLM finetuning yourself, visit [the LLM Lora
 Finetuning
-project](https://github.com/zenml-io/zenml-projects/tree/main/llm-lora-finetuning)
+project](https://github.com/zenml-io/zenml-projects/tree/main/gamesense)
 where you'll find instructions and the code.
 {% endhint %}
 
```
