
Commit

Merge pull request #87 from gujiaxi/website
[license] add license reference of TikTokDataset
gujiaxi authored Sep 11, 2024
2 parents 34cb360 + 5e36130 commit 0a420c1
Showing 1 changed file with 5 additions and 5 deletions.
10 changes: 5 additions & 5 deletions index.html
@@ -672,18 +672,18 @@ <h2 class="title is-3">Comparison to state-of-the-art methods</h2>
<div class="content has-text-justified">
<h3 class="title is-4">Quantitative evaluation</h3>
<p>
-Our method achieves better hand generation quality, and more accurately adheres to the reference pose. Note that our method is not trained on the TikTok dataset.
+Our method achieves better hand generation quality, and more accurately adheres to the reference pose. Note that our method is not trained on <a href="https://www.kaggle.com/datasets/yasaminjafarian/tiktokdataset">TikTokDataset</a>.
</p>
</div>
<div class="columns is-centered">
<img src="assets/images/experiments/cmp1.jpg" style="max-width: 800px" />
</div>
<h2 class="subtitle has-text-centered">
-Qualitative comparison to the state-of-the-art methods on TikTok test set.
+Qualitative comparison to the state-of-the-art methods on the test set of <a href="https://www.kaggle.com/datasets/yasaminjafarian/tiktokdataset">TikTokDataset</a>.
</h2>
<div class="content has-text-justified">
<p>
-We visualize the 106th frame from seq 338 of TikTok dataset and the pixel-wise difference between consecutive frames. MagicPose exhibits abrupt transitions, while Moore and MuseV show instability in texture and text. In contrast, our method demonstrates stable inter-frame differences and better temporal smoothness.
+We visualize the 106th frame from seq 338 of <a href="https://www.kaggle.com/datasets/yasaminjafarian/tiktokdataset">TikTokDataset</a> and the pixel-wise difference between consecutive frames. MagicPose exhibits abrupt transitions, while Moore and MuseV show instability in texture and text. In contrast, our method demonstrates stable inter-frame differences and better temporal smoothness.
</p>
</div>
<div class="columns is-centered">
@@ -737,7 +737,7 @@ <h3 class="title is-4">User study</h3>
<img src="assets/images/experiments/userstudy_export.svg" style="max-width: 480px; width: 100%;" />
</div>
<h2 class="subtitle has-text-centered">
-Preference of MimicMotion (ours) over baseline methods on the TikTok dataset test split. Users prefer MimicMotion over other methods.
+Preference of MimicMotion (ours) over baseline methods on the test split of <a href="https://www.kaggle.com/datasets/yasaminjafarian/tiktokdataset">TikTokDataset</a>. Users prefer MimicMotion over other methods.
</h2>
</div>
</section>
@@ -827,7 +827,7 @@ <h2 class="title">BibTeX</h2>
<p>
This page was built using the <a href="https://github.com/eliahuhorwitz/Academic-project-page-template" target="_blank">Academic Project Page Template</a> which was adopted from the <a href="https://nerfies.github.io" target="_blank">Nerfies</a> project page.
You are free to borrow the source code of this website, we just ask that you link back to this page in the footer. <br> This website is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/" target="_blank">Creative
-Commons Attribution-ShareAlike 4.0 International License</a>.
+Commons Attribution-ShareAlike 4.0 International License</a>. <br> <a href="https://www.kaggle.com/datasets/yasaminjafarian/tiktokdataset/data">TikTokDataset</a> authored by Yasamin Jafarian in “Learning High Fidelity Depths of Dressed Humans by Watching Social Media Dance Videos” is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-nc/4.0/" target="_blank">Creative Commons Attribution-NonCommercial 4.0 International License</a>.
</p>

</div>
