
whisper : allocate encoder results in dedicated buffer #1964

Closed · wants to merge 2 commits

Conversation

ggerganov (Owner)

fix #1959

Allocate the encoder result tensors (embd_conv and embd_enc) in a dedicated backend buffer. Copy the results into these tensors and use them in follow-up graphs.

Comment on lines +1708 to +1709
// TODO: this still triggers the assert:
//struct ggml_tensor * cur = ggml_view_tensor(ctx0, wstate.embd_conv);
ggerganov (Owner, Author)

With the wstate.embd_conv now pre-allocated, this view still triggers the assert:

Assertion failed: (tensor_alloc->offset == SIZE_MAX), function ggml_gallocr_init_tensor, file ggml-alloc.c, line 739.

Not sure why.

slaren (Collaborator)

The code that stores the allocation location of leafs didn't take pre-allocated leafs into account. This should fix it:

diff --git a/ggml-alloc.c b/ggml-alloc.c
index 8ac1d3e..60b86c2 100644
--- a/ggml-alloc.c
+++ b/ggml-alloc.c
@@ -701,8 +701,13 @@ bool ggml_gallocr_reserve_n(ggml_gallocr_t galloc, struct ggml_cgraph * graph, c
         struct ggml_tensor * leaf = graph->leafs[i];
         struct hash_node * hn = ggml_gallocr_hash_get(galloc, leaf);
         galloc->leaf_allocs[i].buffer_id = hn->buffer_id;
-        galloc->leaf_allocs[i].leaf.offset = hn->offset;
-        galloc->leaf_allocs[i].leaf.size_max = ggml_backend_buft_get_alloc_size(galloc->bufts[hn->buffer_id], leaf);
+        if (leaf->view_src || leaf->data) {
+            galloc->leaf_allocs[i].leaf.offset = SIZE_MAX;
+            galloc->leaf_allocs[i].leaf.size_max = 0;
+        } else {
+            galloc->leaf_allocs[i].leaf.offset = hn->offset;
+            galloc->leaf_allocs[i].leaf.size_max = ggml_backend_buft_get_alloc_size(galloc->bufts[hn->buffer_id], leaf);
+        }
     }

     // reallocate buffers if needed

ggerganov (Owner, Author)

With just this change applied on master the code also works (i.e. without pre-allocating the tensors). But if I understand correctly, it is technically not correct, because nothing guarantees that the data in these tensors would not be overwritten by some ops. Since these tensors are currently at the end of the computation graphs it seems to produce correct results, but the concern is that this is not future-proof if we expand the graphs later - is that correct?

What if I use ggml_set_output(ctx0, wstate.embd_conv); in whisper_build_graph_conv() to guarantee that the data would not be overwritten?

slaren (Collaborator)

Yes, with ggml_set_output it is guaranteed that the tensor is never overwritten.

ggerganov (Owner, Author)

Would pre-allocating the tensors as proposed in this PR have any advantage over the ggml_set_output option? With this PR we now have to perform an extra copy of the data between the graph calls, which is otherwise not needed.

slaren (Collaborator), Mar 16, 2024

I don't think there would be any significant advantage. Maybe it would allow using ggml_gallocr_reserve here instead of ggml_gallocr_alloc_graph, but the difference would be minimal:

whisper.cpp/whisper.cpp

Lines 496 to 502 in 66df44b

// since there are dependencies between the different graphs,
// we need to allocate them instead of only reserving to get the correct compute buffer size
if (!ggml_gallocr_alloc_graph(alloc, get_graph())) {
// failed to allocate the compute buffer
WHISPER_LOG_ERROR("%s: failed to allocate the compute buffer\n", __func__);
return false;
}

ggerganov (Owner, Author)

Picked the ggml-alloc commit directly onto master.

@ggerganov ggerganov closed this Mar 16, 2024
Successfully merging this pull request may close these issues.

whisper : adapt to latest ggml changes
2 participants