
Update README.md #790

Open · wants to merge 2 commits into main

Conversation

singhravipratap

Paper Title: Attention is all you need

Paper Year: 2017

Reasons for including paper:

  • The paper introduces the attention mechanism and the Transformer architecture built on it.

DarrenN (Contributor) commented Apr 22, 2024

👋🏽 Thanks for the PR.

It looks like there's a small issue with the formatting: the `*` is inside a `>` blockquote, which incorrectly indents the entry. And there's an extra `)` after the markdown link.

Also, "Attention" is misspelled in the link.
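For reference, a rough sketch of the fixed line, keeping the link from this PR and assuming the plain bullet style used by the other README entries:

```markdown
<!-- no "> " prefix, and a single closing parenthesis after the URL -->
* [Attention is all you need](http://papers.neurips.cc/paper/7181-attention-is-all-you-need.pdf) by Ashish Vaswani et al.
```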

zeeshanlakhani (Member) commented:

@singhravipratap please fix when possible, and we'll get this in.

Commit: updated the paper name and formatting.
singhravipratap (Author) commented:

@DarrenN, @zeeshanlakhani, thanks for the review. I've updated the record.

> This paper proposes a method for translating music across musical instruments, genres, and styles. It is based on a multi-domain wavenet autoencoder, with a shared encoder and a disentangled latent space that is trained end-to-end on waveforms. Employing a diverse training dataset and large net capacity, the domain-independent encoder enables translation even from musical domains that were not seen during training. The method is unsupervised and does not rely on supervision in the form of matched samples between domains or musical transcriptions. This method is evaluated on NSynth, as well as on a dataset collected from professional musicians, and achieves convincing translations, even when translating from whistling, potentially enabling the creation of instrumental music by untrained humans.

* [Attention is all you need](http://papers.neurips.cc/paper/7181-attention-is-all-you-need.pdf)) by Ashish Vaswani et al.

Member commented on the diff:

@singhravipratap

Super minor, but there's an extra `)`. Can you remove it?

> This paper proposes a method for translating music across musical instruments, genres, and styles. It is based on a multi-domain wavenet autoencoder, with a shared encoder and a disentangled latent space that is trained end-to-end on waveforms. Employing a diverse training dataset and large net capacity, the domain-independent encoder enables translation even from musical domains that were not seen during training. The method is unsupervised and does not rely on supervision in the form of matched samples between domains or musical transcriptions. This method is evaluated on NSynth, as well as on a dataset collected from professional musicians, and achieves convincing translations, even when translating from whistling, potentially enabling the creation of instrumental music by untrained humans.

Member commented on the diff:

Does this need the extra space?
