diff --git a/ChatReviewerAndResponse/README.md b/ChatReviewerAndResponse/README.md
new file mode 100644
index 0000000..1c66433
--- /dev/null
+++ b/ChatReviewerAndResponse/README.md
@@ -0,0 +1,3 @@
+First, after downloading the whole chatpaper project, open the ChatReviewerAndResponse folder on its own when you open the project.
+
+The two projects are independent of each other; if you open the chatpaper folder instead, the relative paths will be wrong!
\ No newline at end of file
diff --git a/ReviewFormat.txt b/ChatReviewerAndResponse/ReviewFormat.txt
similarity index 100%
rename from ReviewFormat.txt
rename to ChatReviewerAndResponse/ReviewFormat.txt
diff --git a/chat_response.py b/ChatReviewerAndResponse/chat_response.py
similarity index 100%
rename from chat_response.py
rename to ChatReviewerAndResponse/chat_response.py
diff --git a/chat_reviewer.py b/ChatReviewerAndResponse/chat_reviewer.py
similarity index 100%
rename from chat_reviewer.py
rename to ChatReviewerAndResponse/chat_reviewer.py
diff --git a/get_paper.py b/ChatReviewerAndResponse/get_paper.py
similarity index 100%
rename from get_paper.py
rename to ChatReviewerAndResponse/get_paper.py
diff --git a/review_comments.txt b/ChatReviewerAndResponse/review_comments.txt
similarity index 98%
rename from review_comments.txt
rename to ChatReviewerAndResponse/review_comments.txt
index e6e8ee7..f5ff149 100644
--- a/review_comments.txt
+++ b/ChatReviewerAndResponse/review_comments.txt
@@ -1,68 +1,68 @@
-#1 Reviewer
-
-Overall Review:
-The paper proposes a novel Coarse-to-fine Cascaded Evidence-Distillation (CofCED) neural network for explainable fake news detection. The proposed model selects the most explainable sentences for verdicts based on raw reports, thereby reducing the dependency on fact-checked reports. The paper presents two explainable fake news datasets and experimental results demonstrating that the proposed model outperforms state-of-the-art detection baselines and generates high-quality explanations.
-
-Paper Strength:
-(1) The paper addresses an important and timely problem of fake news detection and provides insights into the limitations of existing methods.
-(2) The proposed CofCED model is innovative and utilizes a hierarchical encoder and cascaded selectors for selecting explainable sentences.
-(3) The paper contributes to the research community by presenting two publicly available datasets for explainable fake news detection.
-
-Paper Weakness:
-(1) The paper could benefit from more detailed clarification of the proposed model's architecture and implementation details.
-(2) The paper lacks comparison with more relevant and widely-used baseline methods in the field.
-(3) Although the paper constructs two explainable fake news datasets, it does not describe the process and criteria for creating them.
-
-Questions To Authors And Suggestions For Rebuttal:
-(1) Can the authors provide additional information on the proposed model's architecture and implementation details?
-(2) Can the authors compare their proposed method with additional relevant and widely-used baseline methods in the field?
-(3) Can the authors provide more details on the process and criteria for creating the two constructed explainable fake news datasets?
-
-Overall score (1-5): 4
-The paper provides an innovative approach to fake news detection using a cascade of selectors and presents two publicly available datasets for the research community. However, the paper could benefit from additional architectural and implementation details and comparisons with more relevant baselines.
-
-#2 Reviewer
-
-Overall Review:
-The paper proposes a novel Coarse-to-fine Cascaded Evidence-Distillation (CofCED) neural network for explainable fake news detection. The proposed model selects the most explainable sentences for verdicts based on raw reports, thereby reducing the dependency on fact-checked reports. The paper presents two explainable fake news datasets and experimental results demonstrating that the proposed model outperforms state-of-the-art detection baselines and generates high-quality explanations.
-
-Paper Strength:
-(1) The paper addresses an important and timely problem of fake news detection and provides insights into the limitations of existing methods.
-(2) The proposed CofCED model is innovative and utilizes a hierarchical encoder and cascaded selectors for selecting explainable sentences.
-(3) The paper contributes to the research community by presenting two publicly available datasets for explainable fake news detection.
-
-Paper Weakness:
-(1) The paper could benefit from more detailed clarification of the proposed model's architecture and implementation details.
-(2) The paper lacks comparison with more relevant and widely-used baseline methods in the field.
-(3) Although the paper constructs two explainable fake news datasets, it does not describe the process and criteria for creating them.
-
-Questions To Authors And Suggestions For Rebuttal:
-(1) Can the authors provide additional information on the proposed model's architecture and implementation details?
-(2) Can the authors compare their proposed method with additional relevant and widely-used baseline methods in the field?
-(3) Can the authors provide more details on the process and criteria for creating the two constructed explainable fake news datasets?
-
-Overall score (1-5): 4
-The paper provides an innovative approach to fake news detection using a cascade of selectors and presents two publicly available datasets for the research community. However, the paper could benefit from additional architectural and implementation details and comparisons with more relevant baselines.
-
-#3 Reviewer
-
-Overall Review:
-The paper proposes a novel Coarse-to-fine Cascaded Evidence-Distillation (CofCED) neural network for explainable fake news detection. The proposed model selects the most explainable sentences for verdicts based on raw reports, thereby reducing the dependency on fact-checked reports. The paper presents two explainable fake news datasets and experimental results demonstrating that the proposed model outperforms state-of-the-art detection baselines and generates high-quality explanations.
-
-Paper Strength:
-(1) The paper addresses an important and timely problem of fake news detection and provides insights into the limitations of existing methods.
-(2) The proposed CofCED model is innovative and utilizes a hierarchical encoder and cascaded selectors for selecting explainable sentences.
-(3) The paper contributes to the research community by presenting two publicly available datasets for explainable fake news detection.
-
-Paper Weakness:
-(1) The paper could benefit from more detailed clarification of the proposed model's architecture and implementation details.
-(2) The paper lacks comparison with more relevant and widely-used baseline methods in the field.
-(3) Although the paper constructs two explainable fake news datasets, it does not describe the process and criteria for creating them.
-
-Questions To Authors And Suggestions For Rebuttal:
-(1) Can the authors provide additional information on the proposed model's architecture and implementation details?
-(2) Can the authors compare their proposed method with additional relevant and widely-used baseline methods in the field?
-(3) Can the authors provide more details on the process and criteria for creating the two constructed explainable fake news datasets?
-
-Overall score (1-5): 4
-The paper provides an innovative approach to fake news detection using a cascade of selectors and presents two publicly available datasets for the research community. However, the paper could benefit from additional architectural and implementation details and comparisons with more relevant baselines.
\ No newline at end of file
+#1 Reviewer
+
+Overall Review:
+The paper proposes a novel Coarse-to-fine Cascaded Evidence-Distillation (CofCED) neural network for explainable fake news detection. The proposed model selects the most explainable sentences for verdicts based on raw reports, thereby reducing the dependency on fact-checked reports. The paper presents two explainable fake news datasets and experimental results demonstrating that the proposed model outperforms state-of-the-art detection baselines and generates high-quality explanations.
+
+Paper Strength:
+(1) The paper addresses an important and timely problem of fake news detection and provides insights into the limitations of existing methods.
+(2) The proposed CofCED model is innovative and utilizes a hierarchical encoder and cascaded selectors for selecting explainable sentences.
+(3) The paper contributes to the research community by presenting two publicly available datasets for explainable fake news detection.
+
+Paper Weakness:
+(1) The paper could benefit from more detailed clarification of the proposed model's architecture and implementation details.
+(2) The paper lacks comparison with more relevant and widely-used baseline methods in the field.
+(3) Although the paper constructs two explainable fake news datasets, it does not describe the process and criteria for creating them.
+
+Questions To Authors And Suggestions For Rebuttal:
+(1) Can the authors provide additional information on the proposed model's architecture and implementation details?
+(2) Can the authors compare their proposed method with additional relevant and widely-used baseline methods in the field?
+(3) Can the authors provide more details on the process and criteria for creating the two constructed explainable fake news datasets?
+
+Overall score (1-5): 4
+The paper provides an innovative approach to fake news detection using a cascade of selectors and presents two publicly available datasets for the research community. However, the paper could benefit from additional architectural and implementation details and comparisons with more relevant baselines.
+
+#2 Reviewer
+
+Overall Review:
+The paper proposes a novel Coarse-to-fine Cascaded Evidence-Distillation (CofCED) neural network for explainable fake news detection. The proposed model selects the most explainable sentences for verdicts based on raw reports, thereby reducing the dependency on fact-checked reports. The paper presents two explainable fake news datasets and experimental results demonstrating that the proposed model outperforms state-of-the-art detection baselines and generates high-quality explanations.
+
+Paper Strength:
+(1) The paper addresses an important and timely problem of fake news detection and provides insights into the limitations of existing methods.
+(2) The proposed CofCED model is innovative and utilizes a hierarchical encoder and cascaded selectors for selecting explainable sentences.
+(3) The paper contributes to the research community by presenting two publicly available datasets for explainable fake news detection.
+
+Paper Weakness:
+(1) The paper could benefit from more detailed clarification of the proposed model's architecture and implementation details.
+(2) The paper lacks comparison with more relevant and widely-used baseline methods in the field.
+(3) Although the paper constructs two explainable fake news datasets, it does not describe the process and criteria for creating them.
+
+Questions To Authors And Suggestions For Rebuttal:
+(1) Can the authors provide additional information on the proposed model's architecture and implementation details?
+(2) Can the authors compare their proposed method with additional relevant and widely-used baseline methods in the field?
+(3) Can the authors provide more details on the process and criteria for creating the two constructed explainable fake news datasets?
+
+Overall score (1-5): 4
+The paper provides an innovative approach to fake news detection using a cascade of selectors and presents two publicly available datasets for the research community. However, the paper could benefit from additional architectural and implementation details and comparisons with more relevant baselines.
+
+#3 Reviewer
+
+Overall Review:
+The paper proposes a novel Coarse-to-fine Cascaded Evidence-Distillation (CofCED) neural network for explainable fake news detection. The proposed model selects the most explainable sentences for verdicts based on raw reports, thereby reducing the dependency on fact-checked reports. The paper presents two explainable fake news datasets and experimental results demonstrating that the proposed model outperforms state-of-the-art detection baselines and generates high-quality explanations.
+
+Paper Strength:
+(1) The paper addresses an important and timely problem of fake news detection and provides insights into the limitations of existing methods.
+(2) The proposed CofCED model is innovative and utilizes a hierarchical encoder and cascaded selectors for selecting explainable sentences.
+(3) The paper contributes to the research community by presenting two publicly available datasets for explainable fake news detection.
+
+Paper Weakness:
+(1) The paper could benefit from more detailed clarification of the proposed model's architecture and implementation details.
+(2) The paper lacks comparison with more relevant and widely-used baseline methods in the field.
+(3) Although the paper constructs two explainable fake news datasets, it does not describe the process and criteria for creating them.
+
+Questions To Authors And Suggestions For Rebuttal:
+(1) Can the authors provide additional information on the proposed model's architecture and implementation details?
+(2) Can the authors compare their proposed method with additional relevant and widely-used baseline methods in the field?
+(3) Can the authors provide more details on the process and criteria for creating the two constructed explainable fake news datasets?
+
+Overall score (1-5): 4
+The paper provides an innovative approach to fake news detection using a cascade of selectors and presents two publicly available datasets for the research community. However, the paper could benefit from additional architectural and implementation details and comparisons with more relevant baselines.
\ No newline at end of file
diff --git a/deploy/Private/README.md b/HuggingFaceDeploy/Private/README.md
similarity index 100%
rename from deploy/Private/README.md
rename to HuggingFaceDeploy/Private/README.md
diff --git a/deploy/Private/apikey.ini b/HuggingFaceDeploy/Private/apikey.ini
similarity index 100%
rename from deploy/Private/apikey.ini
rename to HuggingFaceDeploy/Private/apikey.ini
diff --git a/deploy/Private/app.py b/HuggingFaceDeploy/Private/app.py
similarity index 100%
rename from deploy/Private/app.py
rename to HuggingFaceDeploy/Private/app.py
diff --git a/deploy/Private/image.jpeg b/HuggingFaceDeploy/Private/image.jpeg
similarity index 100%
rename from deploy/Private/image.jpeg
rename to HuggingFaceDeploy/Private/image.jpeg
diff --git a/deploy/Private/optimizeOpenAI.py b/HuggingFaceDeploy/Private/optimizeOpenAI.py
similarity index 100%
rename from deploy/Private/optimizeOpenAI.py
rename to HuggingFaceDeploy/Private/optimizeOpenAI.py
diff --git a/deploy/Private/requirements.txt b/HuggingFaceDeploy/Private/requirements.txt
similarity index 100%
rename from deploy/Private/requirements.txt
rename to HuggingFaceDeploy/Private/requirements.txt
diff --git a/deploy/Public/app.py b/HuggingFaceDeploy/Public/app.py
similarity index 100%
rename from deploy/Public/app.py
rename to HuggingFaceDeploy/Public/app.py
diff --git a/deploy/Public/optimizeOpenAI.py b/HuggingFaceDeploy/Public/optimizeOpenAI.py
similarity index 100%
rename from deploy/Public/optimizeOpenAI.py
rename to HuggingFaceDeploy/Public/optimizeOpenAI.py
diff --git a/deploy/Public/requirements.txt b/HuggingFaceDeploy/Public/requirements.txt
similarity index 100%
rename from deploy/Public/requirements.txt
rename to HuggingFaceDeploy/Public/requirements.txt
diff --git a/HuggingFaceDeploy/README.md b/HuggingFaceDeploy/README.md
new file mode 100644
index 0000000..7c92568
--- /dev/null
+++ b/HuggingFaceDeploy/README.md
@@ -0,0 +1,3 @@
+Similar to the Docker setup, the current version is basically a single Python file, so deploying on Hugging Face is not really necessary anymore.
+
+If you do need it, you can simply use our website, chatwithpaper.org, which works much the same way.
\ No newline at end of file
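For local use, the same single file can be run directly instead of being deployed. A minimal sketch, assuming `HuggingFaceDeploy/app.py` starts a Gradio app and that the OpenAI key is read from `apikey.ini` (both the Gradio assumption and the exact steps are inferred from the folder contents above, not confirmed by this diff):

```bash
# Sketch only: run the deploy app locally instead of on Hugging Face.
cd HuggingFaceDeploy/Private
pip install -r requirements.txt   # install the deploy dependencies
# put your OpenAI API key into apikey.ini first (assumed config location)
python app.py                     # then open the local URL the app prints
```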
diff --git a/app.py b/HuggingFaceDeploy/app.py
similarity index 100%
rename from app.py
rename to HuggingFaceDeploy/app.py
diff --git a/README-old.md b/README-old.md
deleted file mode 100644
index deb9b58..0000000
--- a/README-old.md
+++ /dev/null
@@ -1,283 +0,0 @@
-# ChatPaper
-
-To meet the needs of the many users who are not computer-science students, our team is working flat out on a web version. Stay tuned! We also welcome anyone with web front-end/back-end and server experience to contact us!
-
-Our vision: use AI to accelerate human research on all fronts.
-We hope to gather the remaining 5% of human researchers who still surpass GPT-4.0 and evolve together.
-
-
-**Tomorrow we will ship code auto-generated by the new Bing that correctly fetches the latest arxiv papers for a given keyword**, instead of the current mismatch between the keyword and the official site's search results!
-
-
-**Only after the GPT-4 API opens up can ChatPaper evolve into ChatPaperPlus!**
-
-To keep up with the flood of arxiv papers and AI's rapid progress, we humans need to evolve too. ChatPaper downloads the latest papers from arxiv based on user keywords and uses the ChatGPT-3.5 API's powerful summarization to condense them into a fixed format with minimal text and easy readability, providing the most information possible so everyone can choose which papers to read in depth.
-
-## TODO list:
-1. Switch the prompts to English -- done
-2. Parse the Method section with a more robust approach -- use an interactive mode to decide
-3. Package everything as an exe for novice users -- dropped; all effort now goes into the web version
-4. Cooperate with anyone willing to build the website -- cooperation under way
-5. Implement a ChatReview version as a reference for reviewing (though it may raise academic-ethics issues) -- being attempted
-6. Other improvements being added: batch summarization of local PDFs; automatic token estimation --- completed!
-7. Thanks to [AK](https://twitter.com/_akhaliq) for recommending ChatPaper! Next we will set up an English output mode. --- completed!
-8. **To thank everyone for 2k stars, our team previews the following updates: 1. a Colab version, fixing the fabricated author affiliations; 2. better prompts for more reliable output.** -- the Colab version is released; collaborators are debugging the other improvements, stay tuned.
-
-## A few words from the author:
-1. Errors in the Colab version are mostly network problems. Please Google them before opening an issue, because I am not familiar with Google's network issues either.
-2. One major open problem: when searching arxiv for the latest papers, the query keywords correlate poorly with the papers actually returned. Does anyone have a good solution?
-
-
-We provide a web GUI for ChatPaper. You can deploy ChatPaper in a private or public environment, or [try it online](https://huggingface.co/spaces/wangrongsheng/ChatPaper) via the public service we host on Hugging Face.
-
-**This feature is free and the code is open source, so use it with confidence!**
-
-To obtain an API key, you first need a ChatGPT account that has not been banned, then generate a key by following this link: [How to get an API key](https://chatgpt.cn.obiscr.com/blog/posts/2023/How-to-get-api-key/)
-
-![233](https://github.com/kaixindelele/ChatPaper/blob/main/images/chatpaper_0317.png)
-
-> For [private deployment](./deploy/Private/README.md) or public deployment, we recommend simply using the Hugging Face [online demo](https://huggingface.co/spaces/wangrongsheng/ChatPaper).
-
-
-## Motivation
-
-Facing the flood of daily arxiv papers and the breakneck evolution of AI, we humans must evolve along with it or be left behind.
-
-As a PhD student in reinforcement learning at USTC, I feel deeply anxious: AI now evolves faster than I can even imagine.
-
-So I built ChatPaper, an attempt to fight magic with magic.
-
-ChatPaper is a paper-summarization tool: the AI summarizes a paper in one minute, and you read the AI's summary in one minute.
-
-Given your keywords, it automatically downloads the latest papers from arxiv, then uses the powerful summarization of the ChatGPT-3.5 API to condense each paper into a fixed format, offering the most information with the least text and the lowest reading barrier, so you can decide which papers deserve a close read.
-
-It can also take the path of a local PDF and process it directly.
-
-One evening is usually enough to speed-run the latest papers of a small field. I have been testing it myself for two days.
-
-May you all evolve together with AI in this fast-changing era!
-
-The code is not long, but getting the whole pipeline to work still took me nearly a week; today I am sharing it with everyone.
-
-I wonder whether this tool can realize my childhood dream: **if every person in China gave me one yuan, I would be rich** haha~
-
-Joking aside, payment is not compulsory, but I sincerely hope that every graduate student who finds it saves them time will, while spending a few yuan on the API, tip me one yuan as well. Many thanks!
-
-Your support is what keeps me updating! Anyone willing to give more is of course very welcome!
-
-Welcome, everyone, to the glorious evolution!
-
-Title: Diffusion Policy: Visuomotor Policy Learning via Action Diffusion<br>
-Authors: Haonan Lu, Yufeng Yuan, Daohua Xie, Kai Wang, Baoxiong Jia, Shuaijun Chen<br>
-Affiliation: Central South University<br>
-Keywords: Diffusion Policy, Visuomotor Policy, robot learning, denoising diffusion process<br>
-Urls: http://arxiv.org/abs/2303.04137v1, Github: None<br>
-Summary:<br>
-(1): This paper studies robot visuomotor policy learning, i.e., producing the appropriate robot motor actions from observations, which is a complex and challenging task.<br>
-(2): Previous methods used various action representations, such as Gaussian mixture models, categorical representations, or switching policy representations, but still suffer from challenges such as multimodal distributions and high-dimensional output spaces. This paper proposes a new robot visuomotor policy model, Diffusion Policy, which draws on the expressiveness of diffusion models to overcome the limitations of traditional methods: it can represent arbitrary distributions and supports high-dimensional spaces. The model learns the gradient of a cost function and iteratively optimizes it with a stochastic Langevin dynamics algorithm, finally outputting the robot actions.<br>
-(3): The proposed visuomotor policy, Diffusion Policy, represents robot actions as a conditional denoising diffusion process. The model overcomes problems such as multimodal distributions and high-dimensional output spaces and improves the expressiveness of policy learning. The paper further strengthens the diffusion policy with techniques such as receding-horizon control, visual conditioning, and a time-series diffusion transformer.<br>
-(4): The method is evaluated on 11 tasks, including 4 robot manipulation benchmarks. The experiments show that Diffusion Policy is clearly superior to and more stable than existing robot-learning methods, with an average performance gain of 46.9%.<br>
-
-7.Methods:
-The proposed visuomotor policy learning method, Diffusion Policy, consists of the following steps:<br>
-(1) Build a conditional denoising diffusion process: robot actions are represented as a conditional stochastic diffusion process with a Gaussian-noise source. The robot state acts as the source, i.e., the input, and the diffusion process outputs the robot's motor actions. To turn this into a conditional stochastic diffusion model, a cost function is added, acting as the condition in the path integral.<br>
-(2) Introduce stochastic Langevin dynamics: learning the gradient of the cost function is recast as an iterative optimization problem based on stochastic Langevin dynamics. This avoids computing the diffusion process explicitly, satisfies the requirements of derivative-free optimizers, and benefits from asymptotic Gaussianity and global-convergence properties.<br>
-(3) Introduce diffusion-policy enhancements: receding-horizon control combined with a decision network adjusts the actions produced by diffusion, strengthening the policy's performance, while visual conditioning and a time-series diffusion transformer further improve the diffusion policy's expressiveness.<br>
-(4) Evaluate on 11 tasks: the results show that, on the robot manipulation benchmarks, the method is clearly superior to and more stable than existing robot-learning methods, with an average performance gain of 46.9%.<br>
-7.Conclusion:<br>
-(1): This work studies visuomotor policy learning and proposes a new visuomotor policy model, Diffusion Policy, which uses the expressiveness of diffusion models to overcome the limitations of traditional methods, representing arbitrary distributions and supporting high-dimensional spaces. Experiments show clear superiority and stability on all 11 tasks, with a 46.9% average performance improvement over existing robot-learning methods, which makes this research highly significant.<br>
-(2): Although this paper proposes a new visuomotor policy learning method that performs well in experiments, its optimization process can be time-consuming. In addition, the method's performance is affected by many factors, including the quality and quantity of demonstrations, the robot's physical capabilities, and the policy architecture, all of which must be considered in real application scenarios.<br>
-(3): If it were up to me, I would give this paper 9 points. The proposed Diffusion Policy offers strong interpretability, good performance, and stable experimental results, and it can greatly inspire work on visuomotor policy learning and related areas. Its only shortcoming may be that its optimization process takes more time and effort.<br>
-
-## Starchart
-
-[![Star History Chart](https://api.star-history.com/svg?repos=kaixindelele/ChatPaper&type=Date)](https://star-history.com/#kaixindelele/ChatPaper&Date)
-
-## Contributors
-
-
diff --git a/README.md b/README.md
index 86c8d7f..ebf83b0 100644
--- a/README.md
+++ b/README.md
@@ -4,6 +4,11 @@
 
+
+💥💥💥 7.21: The repository files have been reorganized; some paths may be broken and bugs are being fixed.
+In addition, I have updated locally a full-text summarization script and a script for translating entire local PDFs, and I am considering whether to open-source them.
+
+
 💥💥💥 On 7.9, my junior colleague [red-tie](https://github.com/red-tie) built an optimized [one-click literature survey](https://github.com/kaixindelele/ChatPaper/tree/main/auto_survey) feature on top of [auto-draft](https://github.com/CCCBora/auto-draft).
 
 It helps you quickly get up to speed on a specific field and supports directly generating a literature-survey report in Chinese. Configuration is simple; everyone is welcome to use it and give feedback!
@@ -16,22 +21,15 @@
 
-💥💥💥 5.10: Our web version is about to be updated. The current summarization quality is shown here: [12 recent Sergey Levine papers summarized - ChatPaperDaily6](https://zhuanlan.zhihu.com/p/628338077). The summaries are more comprehensive and accurate, with more details, more steps, and more experimental results, while keeping fabrication as low as possible.
-
-
 💥💥💥 **The only official website:** [https://chatpaper.org/](https://chatpaper.org/), plus a beginner tutorial [ChatPaper web version beginner tutorial - bilibili] https://b23.tv/HpDkcBU, and third-party docs: https://chatpaper.readthedocs.io .
 
-💥💥💥 4.22: To celebrate ChatPaper reaching 10k ⭐, together with two fellow students we will release two AI-assisted literature-summary tools. The first is [auto-draft](https://github.com/CCCBora/auto-draft), where the AI automatically collects and organizes a literature summary! The second digests survey articles and will go live on our web version later. Stay tuned.
+💥💥💥 4.22: To celebrate ChatPaper reaching 10k ⭐, together with two fellow students we will release two AI-assisted literature-summary tools. The first is [auto-draft](https://github.com/CCCBora/auto-draft), where the AI automatically collects and organizes a literature summary!
 
 💥💥💥 To reduce academic-ethics risks, we added complex text injection to Chat_Reviewer, as shown here: [example image](https://github.com/kaixindelele/ChatPaper/blob/main/images/reviews.jpg). Please mind academic ethics and your academic reputation when using it, and do not abuse the tool. If anyone has a better way to limit the few irregular users, please leave a comment and contribute to the research community.
 
-💥💥💥 We are currently crowdsourcing an open-source fine-tuning project based on OpenReview; everyone is welcome to join in: [ChatOpenReview](https://github.com/kaixindelele/ChatOpenReview)
-
-
 🌿🌿🌿 Laggy to use? Fork it to your own Space for smooth access:
 
 💥💥💥 Rongsheng released a very interesting project today, [ChatGenTitle](https://github.com/WangRongsheng/ChatGenTitle): generating titles from abstracts, fine-tuned on data from 2.2M arXiv papers!
@@ -97,7 +95,6 @@
 
 - 🌟*2023.03.23*: chat_arxiv.py can now crawl the latest papers in a field straight from the arxiv website, by keyword, recency in days, and number of papers! This fixes the inaccurate search of the previous arxiv package!
 - 🌟*2023.03.23*: ChatPaper is finally complete! It now offers paper summarization + paper polishing + paper analysis with improvement suggestions + review responses!
-**Added ChatReviewer (analyzes a paper's strengths and weaknesses and suggests improvements; ⭐️ never copy the generated content into an actual paper review! Mind reviewing ethics and responsibility! This feature is for reference only!) and ChatResponse (automatically extracts reviewers' questions and generates one-to-one replies). The code for this part comes entirely from the [ChatReviewer](https://github.com/nishiwen1214/ChatReviewer) project by [nishiwen1214](https://github.com/nishiwen1214).** See that project for usage tips!
 
 ## Development motivation
@@ -239,8 +236,6 @@ python google_scholar_spider.py --kw "deep learning" --nresults 30 --csvpath "./
 Tutorial article: https://zhuanlan.zhihu.com/p/644326031
 
-
-
 ---
 
 Also note that this does not currently support **survey**-style articles.
@@ -341,10 +336,7 @@
 python3 app.py
 
+ All run results are saved in Docker volumes. If you want to deploy this as a long-running service, you can map those directories out. By default they live under /var/lib/docker/volumes/; go there to inspect the results in the four related folders: chatpaper_log, chatpaper_export, chatpaper_pdf_files, and chatpaper_response_file. For a detailed explanation of Docker volumes, see this link: http://docker.baoshu.red/data_management/volume.html.
-
-
-
-
+
 ## Online deployment
 
 1. Create your personal account on [Hugging Face](https://huggingface.co/) and log in;
diff --git a/auto_survey/README.md b/auto_survey/README.md
index a40be0d..7e02052 100644
--- a/auto_survey/README.md
+++ b/auto_survey/README.md
@@ -13,6 +13,10 @@
 python_version: 3.10.10
 
 # Deployment
+First, after downloading the whole chatpaper project, open the auto_survey folder itself when you open the project.
+
+The two projects are independent of each other; if you open the chatpaper folder instead, the relative paths will be wrong!
+
 1. Install the dependencies:
 ```angular2html
 pip install -r requirements.txt
 ```
diff --git a/chat_pubmed.py b/chat_pubmed.py
deleted file mode 100644
index 3eed3ac..0000000
--- a/chat_pubmed.py
+++ /dev/null
@@ -1,22 +0,0 @@
-## A PubMed crawler in progress: it only grabs titles so far, parked here as a marker. Anyone with time is welcome to follow the arxiv logic and finish the crawlers for PubMed and the other preprint servers~
-
-import requests
-from bs4 import BeautifulSoup
-
-def crawl_pubmed_top_ten_papers_by_keywords(keywords):
-    url = f"https://pubmed.ncbi.nlm.nih.gov/?term={'+'.join(keywords.split())}"
-    response = requests.get(url)
-    soup = BeautifulSoup(response.content, "html.parser")
-    articles = soup.find_all("article", {"class": "full-docsum"})
-    articles.sort(key=lambda x: x.find("span", {"class": "date"}).text.strip() if x.find("span", {"class": "date"}) else "")
-    top_ten_articles = articles[:10]
-    return top_ten_articles
-
-if __name__ == "__main__":
-    keywords = "cancer"
-    top_ten_articles = crawl_pubmed_top_ten_papers_by_keywords(keywords)
-    for i, article in enumerate(top_ten_articles):
-        title = article.find("a", {"class": "docsum-title"}).text.strip()
-        authors = article.find("span", {"class": "docsum-authors full-authors"}).text.strip() if article.find("span", {"class": "docsum-authors full-authors"}) else ""
-        date = article.find("span", {"class": "date"}).text.strip() if article.find("span", {"class": "date"}) else ""
-        print(f"{i+1}. {title}\n   {authors}\n   {date}\n")
diff --git a/deploy/Private/__pycache__/optimizeOpenAI.cpython-39.pyc b/deploy/Private/__pycache__/optimizeOpenAI.cpython-39.pyc
deleted file mode 100644
index a2d0d88..0000000
Binary files a/deploy/Private/__pycache__/optimizeOpenAI.cpython-39.pyc and /dev/null differ
diff --git a/Makefile b/docker/Makefile
similarity index 100%
rename from Makefile
rename to docker/Makefile
diff --git a/docker/README.md b/docker/README.md
new file mode 100644
index 0000000..eea990b
--- /dev/null
+++ b/docker/README.md
@@ -0,0 +1,3 @@
+The docker folder may no longer work as-is, because I have changed the paths it relied on!
+
+I still recommend working directly from the command line; since there is only one Python file, Docker is not really necessary.
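As a concrete picture of that command-line workflow, a minimal sketch follows. The script name and flags (`chat_paper.py`, `--query`, `--pdf_path`) are illustrative assumptions based on the features the README describes (keyword search on arxiv and local PDF processing); check the script's argument parser for the real names:

```bash
# Sketch only: plain command-line usage instead of Docker.
pip install -r requirements.txt

# Summarize the latest arxiv papers for a keyword (assumed flag name):
python chat_paper.py --query "reinforcement learning"

# Or summarize a local PDF directly (assumed flag name):
python chat_paper.py --pdf_path ./demo.pdf
```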
diff --git a/build.sh b/docker/build.sh
old mode 100755
new mode 100644
similarity index 100%
rename from build.sh
rename to docker/build.sh
diff --git a/dev.sh b/docker/dev.sh
old mode 100755
new mode 100644
similarity index 100%
rename from dev.sh
rename to docker/dev.sh
diff --git a/docker-compose.yaml b/docker/docker-compose.yaml
similarity index 100%
rename from docker-compose.yaml
rename to docker/docker-compose.yaml
diff --git a/make.bat b/docker/make.bat
similarity index 95%
rename from make.bat
rename to docker/make.bat
index 747ffb7..dc1312a 100644
--- a/make.bat
+++ b/docker/make.bat
@@ -1,35 +1,35 @@
-@ECHO OFF
-
-pushd %~dp0
-
-REM Command file for Sphinx documentation
-
-if "%SPHINXBUILD%" == "" (
-	set SPHINXBUILD=sphinx-build
-)
-set SOURCEDIR=source
-set BUILDDIR=build
-
-%SPHINXBUILD% >NUL 2>NUL
-if errorlevel 9009 (
-	echo.
-	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
-	echo.installed, then set the SPHINXBUILD environment variable to point
-	echo.to the full path of the 'sphinx-build' executable. Alternatively you
-	echo.may add the Sphinx directory to PATH.
-	echo.
-	echo.If you don't have Sphinx installed, grab it from
-	echo.https://www.sphinx-doc.org/
-	exit /b 1
-)
-
-if "%1" == "" goto help
-
-%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
-goto end
-
-:help
-%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
-
-:end
-popd
+@ECHO OFF
+
+pushd %~dp0
+
+REM Command file for Sphinx documentation
+
+if "%SPHINXBUILD%" == "" (
+	set SPHINXBUILD=sphinx-build
+)
+set SOURCEDIR=source
+set BUILDDIR=build
+
+%SPHINXBUILD% >NUL 2>NUL
+if errorlevel 9009 (
+	echo.
+	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
+	echo.installed, then set the SPHINXBUILD environment variable to point
+	echo.to the full path of the 'sphinx-build' executable. Alternatively you
+	echo.may add the Sphinx directory to PATH.
+	echo.
+	echo.If you don't have Sphinx installed, grab it from
+	echo.https://www.sphinx-doc.org/
+	exit /b 1
+)
+
+if "%1" == "" goto help
+
+%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
+goto end
+
+:help
+%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
+
+:end
+popd
diff --git a/tagpush.sh b/docker/tagpush.sh
old mode 100755
new mode 100644
similarity index 100%
rename from tagpush.sh
rename to docker/tagpush.sh
diff --git a/get_paper_from_pdf.py b/get_paper_from_pdf.py
deleted file mode 100644
index 2117a58..0000000
--- a/get_paper_from_pdf.py
+++ /dev/null
@@ -1,274 +0,0 @@
-import fitz, io, os
-from PIL import Image
-
-
-class Paper:
-    def __init__(self, path, title='', url='', abs='', authers=[]):
-        # Initializer: build a Paper object from a pdf path
-        self.url = url                # paper link
-        self.path = path              # pdf path
-        self.section_names = []       # section titles
-        self.section_texts = {}       # section contents
-        self.abs = abs
-        self.title_page = 0
-        if title == '':
-            self.pdf = fitz.open(self.path)  # pdf document
-            self.title = self.get_title()
-            self.parse_pdf()
-        else:
-            self.title = title
-        self.authers = authers
-        self.roman_num = ["I", "II", 'III', "IV", "V", "VI", "VII", "VIII", "IIX", "IX", "X"]
-        self.digit_num = [str(d+1) for d in range(10)]
-        self.first_image = ''
-
-    def parse_pdf(self):
-        self.pdf = fitz.open(self.path)  # pdf document
-        self.text_list = [page.get_text() for page in self.pdf]
-        self.all_text = ' '.join(self.text_list)
-        self.section_page_dict = self._get_all_page_index()  # map from section name to page number
-        print("section_page_dict", self.section_page_dict)
-        self.section_text_dict = self._get_all_page()  # map from section name to section text
-        self.section_text_dict.update({"title": self.title})
-        self.section_text_dict.update({"paper_info": self.get_paper_info()})
-        self.pdf.close()
-
-    def get_paper_info(self):
-        first_page_text = self.pdf[self.title_page].get_text()
-        if "Abstract" in self.section_text_dict.keys():
-            abstract_text = self.section_text_dict['Abstract']
-        else:
-            abstract_text = self.abs
-        first_page_text = first_page_text.replace(abstract_text, "")
-        return first_page_text
-
-    def get_image_path(self, image_path=''):
-        """
-        Save the largest image in the PDF as image.png in a local directory and return the file name, for gitee to read.
-        :param filename: path of the pdf, e.g. "C:\\Users\\Administrator\\Desktop\\nwd.pdf"
-        :param image_path: directory where the extracted image is saved
-        :return:
-        """
-        # open file
-        max_size = 0
-        image_list = []
-        with fitz.Document(self.path) as my_pdf_file:
-            # iterate over all pages
-            for page_number in range(1, len(my_pdf_file) + 1):
-                # look at each page separately
-                page = my_pdf_file[page_number - 1]
-                # all images on the current page
-                images = page.get_images()
-                # iterate over the images on this page
-                for image_number, image in enumerate(page.get_images(), start=1):
-                    # the image's xref
-                    xref_value = image[0]
-                    # extract the image info
-                    base_image = my_pdf_file.extract_image(xref_value)
-                    # raw image bytes
-                    image_bytes = base_image["image"]
-                    # image extension
-                    ext = base_image["ext"]
-                    # load the image
-                    image = Image.open(io.BytesIO(image_bytes))
-                    image_size = image.size[0] * image.size[1]
-                    if image_size > max_size:
-                        max_size = image_size
-                    image_list.append(image)
-            for image in image_list:
-                image_size = image.size[0] * image.size[1]
-                if image_size == max_size:
-                    image_name = f"image.{ext}"
-                    im_path = os.path.join(image_path, image_name)
-                    print("im_path:", im_path)
-
-                    max_pix = 480
-                    origin_min_pix = min(image.size[0], image.size[1])
-
-                    if image.size[0] > image.size[1]:
-                        min_pix = int(image.size[1] * (max_pix/image.size[0]))
-                        newsize = (max_pix, min_pix)
-                    else:
-                        min_pix = int(image.size[0] * (max_pix/image.size[1]))
-                        newsize = (min_pix, max_pix)
-                    image = image.resize(newsize)
-
-                    image.save(open(im_path, "wb"))
-                    return im_path, ext
-        return None, None
-
-    # Identify each section name by font size and return the names as a list
-    def get_chapter_names(self,):
-        # open a pdf file
-        doc = fitz.open(self.path)  # pdf document
-        text_list = [page.get_text() for page in doc]
-        all_text = ''
-        for text in text_list:
-            all_text += text
-        # empty list for the section names
-        chapter_names = []
-        for line in all_text.split('\n'):
-            line_list = line.split(' ')
-            if '.' in line:
-                point_split_list = line.split('.')
-                space_split_list = line.split(' ')
-                if 1 < len(space_split_list) < 5:
-                    if 1 < len(point_split_list) < 5 and (point_split_list[0] in self.roman_num or point_split_list[0] in self.digit_num):
-                        print("line:", line)
-                        chapter_names.append(line)
-                    # This branch may introduce new bugs; it was meant to handle the "Introduction" case!
-                    elif 1 < len(point_split_list) < 5:
-                        print("line:", line)
-                        chapter_names.append(line)
-
-        return chapter_names
-
-    def get_title(self):
-        doc = self.pdf  # open the pdf file
-        max_font_size = 0  # largest font size seen so far
-        max_string = ""    # string rendered in the largest font
-        max_font_sizes = [0]
-        for page_index, page in enumerate(doc):  # iterate over pages
-            text = page.get_text("dict")  # text info on the page
-            blocks = text["blocks"]       # list of text blocks
-            for block in blocks:          # iterate over blocks
-                if block["type"] == 0 and len(block['lines']):  # text blocks only
-                    if len(block["lines"][0]["spans"]):
-                        font_size = block["lines"][0]["spans"][0]["size"]  # font size of the first span of the first line
-                        max_font_sizes.append(font_size)
-                        if font_size > max_font_size:  # new maximum found
-                            max_font_size = font_size  # update the maximum
-                            max_string = block["lines"][0]["spans"][0]["text"]  # remember its text
-        max_font_sizes.sort()
-        print("max_font_sizes", max_font_sizes[-10:])
-        cur_title = ''
-        for page_index, page in enumerate(doc):  # iterate over pages
-            text = page.get_text("dict")  # text info on the page
-            blocks = text["blocks"]       # list of text blocks
-            for block in blocks:          # iterate over blocks
-                if block["type"] == 0 and len(block['lines']):  # text blocks only
-                    if len(block["lines"][0]["spans"]):
-                        cur_string = block["lines"][0]["spans"][0]["text"]   # candidate title text
-                        font_flags = block["lines"][0]["spans"][0]["flags"]  # font flags of the first span
-                        font_size = block["lines"][0]["spans"][0]["size"]    # font size of the first span
-                        # print(font_size)
-                        if abs(font_size - max_font_sizes[-1]) < 0.3 or abs(font_size - max_font_sizes[-2]) < 0.3:
-                            # print("The string is bold.", max_string, "font_size:", font_size, "font_flags:", font_flags)
-                            if len(cur_string) > 4 and "arXiv" not in cur_string:
-                                # print("The string is bold.", max_string, "font_size:", font_size, "font_flags:", font_flags)
-                                if cur_title == '' :
-                                    cur_title += cur_string
-                                else:
-                                    cur_title += ' ' + cur_string
-                                self.title_page = page_index
-                                # break
-        title = cur_title.replace('\n', ' ')
-        return title
-
-
-    def _get_all_page_index(self):
-        # section names to look for
-        section_list = ["Abstract",
-                        'Introduction', 'Related Work', 'Background',
-                        "Preliminary", "Problem Formulation",
-                        'Methods', 'Methodology', "Method", 'Approach', 'Approaches',
-                        # exp
-                        "Materials and Methods", "Experiment Settings",
-                        'Experiment', "Experimental Results", "Evaluation", "Experiments",
-                        "Results", 'Findings', 'Data Analysis',
-                        "Discussion", "Results and Discussion", "Conclusion",
-                        'References']
-        # dict of found sections and the pages on which they appear
-        section_page_dict = {}
-        # iterate over the pages of the document
-        for page_index, page in enumerate(self.pdf):
-            # text of the current page
-            cur_text = page.get_text()
-            # check every candidate section name
-            for section_name in section_list:
-                # uppercase form of the section name
-                section_name_upper = section_name.upper()
-                # special-case the "Abstract" keyword
-                if "Abstract" == section_name and section_name in cur_text:
-                    # record "Abstract" with its page number
-                    section_page_dict[section_name] = page_index
-                # otherwise record the section name if the page contains it
-                else:
-                    if section_name + '\n' in cur_text:
-                        section_page_dict[section_name] = page_index
-                    elif section_name_upper + '\n' in cur_text:
-                        section_page_dict[section_name] = page_index
-        # return all found section names with the pages on which they appear
-        return section_page_dict
-
-    def _get_all_page(self):
-        """
-        Get the text of every page of the PDF and organize it into a per-section dict.
-
-        Returns:
-            section_dict (dict): per-section text, keyed by section name with the section text as value.
-        """
-        text = ''
-        text_list = []
-        section_dict = {}
-
-        # now process the individual sections:
-        text_list = [page.get_text() for page in self.pdf]
-        for sec_index, sec_name in enumerate(self.section_page_dict):
-            print(sec_index, sec_name, self.section_page_dict[sec_name])
-            if sec_index <= 0 and self.abs:
-                continue
-            else:
-                # take the content from this section onward:
-                start_page = self.section_page_dict[sec_name]
-                if sec_index < len(list(self.section_page_dict.keys()))-1:
-                    end_page = self.section_page_dict[list(self.section_page_dict.keys())[sec_index+1]]
-                else:
-                    end_page = len(text_list)
-                print("start_page, end_page:", start_page, end_page)
-                cur_sec_text = ''
-                if end_page - start_page == 0:
-                    if sec_index < len(list(self.section_page_dict.keys()))-1:
-                        next_sec = list(self.section_page_dict.keys())[sec_index+1]
-                        if text_list[start_page].find(sec_name) == -1:
-                            start_i = text_list[start_page].find(sec_name.upper())
-                        else:
-                            start_i = text_list[start_page].find(sec_name)
-                        if text_list[start_page].find(next_sec) == -1:
-                            end_i = text_list[start_page].find(next_sec.upper())
-                        else:
-                            end_i = text_list[start_page].find(next_sec)
-                        cur_sec_text += text_list[start_page][start_i:end_i]
-                else:
-                    for page_i in range(start_page, end_page):
-#                         print("page_i:", page_i)
-                        if page_i == start_page:
-                            if text_list[start_page].find(sec_name) == -1:
-                                start_i = text_list[start_page].find(sec_name.upper())
-                            else:
-                                start_i = text_list[start_page].find(sec_name)
-                            cur_sec_text += text_list[page_i][start_i:]
-                        elif page_i < end_page:
-                            cur_sec_text += text_list[page_i]
-                        elif page_i == end_page:
-                            if sec_index < len(list(self.section_page_dict.keys()))-1:
-                                next_sec = list(self.section_page_dict.keys())[sec_index+1]
-                                if text_list[start_page].find(next_sec) == -1:
-                                    end_i = text_list[start_page].find(next_sec.upper())
-                                else:
-                                    end_i = text_list[start_page].find(next_sec)
-                                cur_sec_text += text_list[page_i][:end_i]
-                section_dict[sec_name] = cur_sec_text.replace('-\n', '').replace('\n', ' ')
-        return section_dict
-
-def main():
-    path = r'demo.pdf'
-    paper = Paper(path=path)
-    paper.parse_pdf()
-    for key, value in paper.section_text_dict.items():
-        print(key, value)
-        print("*"*40)
-
-
-if __name__ == '__main__':
-    main()
diff --git a/ChatPaper.ipynb b/others/ChatPaper.ipynb
similarity index 100%
rename from ChatPaper.ipynb
rename to others/ChatPaper.ipynb
diff --git 
a/chat_arxiv_maomao.py b/others/chat_arxiv_maomao.py similarity index 100% rename from chat_arxiv_maomao.py rename to others/chat_arxiv_maomao.py diff --git a/google_scholar_spider.py b/others/google_scholar_spider.py similarity index 100% rename from google_scholar_spider.py rename to others/google_scholar_spider.py diff --git a/others/machine_learning.csv b/others/machine_learning.csv new file mode 100644 index 0000000..d9fab5d --- /dev/null +++ b/others/machine_learning.csv @@ -0,0 +1,51 @@ +Rank,Author,Title,Citations,Year,Publisher,Venue,Source,cit/year +440," Bishop, NM Nasrabadi",Pattern recognition and machine learning,65423,2006, Springer,,https://link.springer.com/book/9780387310732,3635 +410, Murphy,Machine learning: a probabilistic perspective,13922,2012, books.google.com,,https://books.google.com/books?hl=en&lr=&id=RC43AgAAQBAJ&oi=fnd&pg=PR7&dq=machine+learning&ots=umou8zRxZ6&sig=Yt4k1SbH83Yoaefkx6C0lzerP6c,1160 +20," Jordan, TM Mitchell","Machine learning: Trends, perspectives, and prospects",6373,2015, science.org, Science,https://www.science.org/doi/abs/10.1126/science.aaa8415,708 +240,Shale,Understanding machine learning: From theory to algorithms,6371,2014, books.google.com,,https://books.google.com/books?hl=en&lr=&id=Hf6QAwAAQBAJ&oi=fnd&pg=PR15&dq=machine+learning&ots=2IyfLknQK-&sig=0FaXB-Y1uBej-f0TGukldQjCjqQ,637 +200,"Mohri, A Rostamizadeh, A Talwalkar",Foundations of machine learning,5377,2018, books.google.com,,https://books.google.com/books?hl=en&lr=&id=dWB9DwAAQBAJ&oi=fnd&pg=PR5&dq=machine+learning&ots=AywPTRw5j5&sig=gDH_EE9DckSxR1-ldLaeBzpnP2c,896 +480, King,Dlib-ml: A machine learning toolkit,3556,2009, jmlr.org, The Journal of Machine Learning Research,https://www.jmlr.org/papers/volume10/king09a/king09a.pdf,237 +460," Butler, DW Davies, H Cartwright, O Isayev, A Walsh",Machine learning for molecular and materials science,2542,2018, nature.com, Nature,https://www.nature.com/articles/s41586-018-0337-2,424 +380, Dietterich,Machine-learning research,2121,1997, ojs.aaai.org, AI magazine,https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/1324,79 +130,"Sammut, GI Webb",Encyclopedia of machine learning,1877,2011, books.google.com,,https://books.google.com/books?hl=en&lr=&id=i8hQhp1a62UC&oi=fnd&pg=PT29&dq=machine+learning&ots=91r7wtiH6Q&sig=AHa5z1TSiO_oCiGOL7GKIcbmzLc,144 +340," Liakos, P Busato, D Moshou, S Pearson, D Bochtis",Machine learning in agriculture: A review,1831,2018, mdpi.com, Sensors,https://www.mdpi.com/1424-8220/18/8/2674,305 +280,"Carleo, I Cirac, K Cranmer, L Daudet, M Schuld…",Machine learning and the physical sciences,1655,2019, APS, Reviews of Modern …,https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.91.045002,331 +60,"Wang, Z Lei, X Zhang, B Zhou, J Peng",Machine learning basics,1619,2016, whdeng.cn, Deep learning,http://whdeng.cn/Teaching/PPT_01_Machine%20learning%20Basics.pdf,202 +10, Zhou,Machine learning,1613,2021, books.google.com,,https://books.google.com/books?hl=en&lr=&id=ctM-EAAAQBAJ&oi=fnd&pg=PR6&dq=machine+learning&ots=oZRhS3WzYs&sig=eYf8c9ZHOUx0vYceVoUcNWlnUWE,538 +30,Mahesh,Machine learning algorithms-a review,1455,2020, researchgate.net, International Journal of Science and Research (IJSR) …,https://www.researchgate.net/profile/Batta-Mahesh/publication/344717762_Machine_Learning_Algorithms_-A_Review/links/5f8b2365299bf1b53e2d243a/Machine-Learning-Algorithms-A-Review.pdf?eid=5082902844932096,364 +310,Raschka,Python machine learning,1369,2015, 
books.google.com,,https://books.google.com/books?hl=en&lr=&id=GOVOCwAAQBAJ&oi=fnd&pg=PP1&dq=machine+learning&ots=NdgvGcWXUE&sig=zcVIzg9Fr4KP4eRtU0FRKjO75CI,152 +140,Harrington,Machine learning in action,1205,2012, books.google.com,,https://books.google.com/books?hl=en&lr=&id=XTozEAAAQBAJ&oi=fnd&pg=PT18&dq=machine+learning&ots=pw4cI3NRbp&sig=BJiIhWUSg-CH6QVNLCTuqB8ksXA,100 +260,Langley,Elements of machine learning,942,1996, books.google.com,,https://books.google.com/books?hl=en&lr=&id=TNg5qVoqRtUC&oi=fnd&pg=PR9&dq=machine+learning&ots=Q4tmWtv1Kj&sig=uD85WO3spUWAJLb5uNXTgkru0HY,34 +150,"Sra, S Nowozin, SJ Wright",Optimization for machine learning,890,2012, books.google.com,,https://books.google.com/books?hl=en&lr=&id=JPQx7s2L1A8C&oi=fnd&pg=PR5&dq=machine+learning&ots=vel6ugncBg&sig=G8Jv0hOnac1oGD8BLAupTCG_IxU,74 +300, Mitchell,The discipline of machine learning,885,2006, cs.cmu.edu,,https://www.cs.cmu.edu/afs/cs/usr/mitchell/ftp/pubs/MachineLearningTR.pdf,49 +220, Ayodele,Types of machine learning algorithms,867,2010, books.google.com, New advances in machine learning,https://books.google.com/books?hl=en&lr=&id=XAqhDwAAQBAJ&oi=fnd&pg=PA19&dq=machine+learning&ots=r2Oi6UDmIk&sig=vyuLuQXQG82JB1PKGDbfNPwjPAA,62 +40,"El Naqa, MJ Murphy",What is machine learning?,861,2015, Springer,,https://link.springer.com/chapter/10.1007/978-3-319-18305-3_1,96 +190,Burkov,The hundred-page machine learning book,781,0,papers.com,,https://order-papers.com/sites/default/files/tmp/webform/order_download/pdf-the-hundred-page-machine-learning-book-andriy-burkov-pdf-download-free-book-d835289.pdf,0 +160,Athey,The impact of machine learning on economics,750,2018, nber.org, The economics of artificial intelligence: An agenda,https://www.nber.org/system/files/chapters/c14009/c14009.pdf,125 +350,"Janiesch, P Zschech, K Heinrich",Machine learning and deep learning,750,2021, Springer, Electronic Markets,https://link.springer.com/article/10.1007/s12525-021-00475-2,250 +370," Tarca, VJ Carey, X Chen, R Romero…",Machine learning and its applications to biology,689,2007, journals.plos.org, PLoS computational …,https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.0030116,41 +230,Surden,Machine learning and law,650,2014, HeinOnline, Wash. L. 
Rev.,https://heinonline.org/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/washlr89&section=7,65
+470,Ray,A quick review of machine learning algorithms,606,2019, ieeexplore.ieee.org, 2019 International conference on machine learning …,https://ieeexplore.ieee.org/abstract/document/8862451/,121
+360,"Mohammed, MB Khan, EBM Bashier",Machine learning: algorithms and applications,578,2016, books.google.com,,https://books.google.com/books?hl=en&lr=&id=X8LBDAAAQBAJ&oi=fnd&pg=PP1&dq=machine+learning&ots=qQHqwrKdxD&sig=WLSpodFeOX3K5XdZ39bXnsYztuk,72
+180,Alpaydin,Machine learning: the new AI,565,2016, books.google.com,,https://books.google.com/books?hl=en&lr=&id=ylE4DQAAQBAJ&oi=fnd&pg=PR5&dq=machine+learning&ots=S7kG0qqCTQ&sig=bqxKlF7oZPDtGuCjRiuRwnC30xM,71
+120,Bonaccorso,Machine learning algorithms,549,2017, books.google.com,,https://books.google.com/books?hl=en&lr=&id=_-ZDDwAAQBAJ&oi=fnd&pg=PP1&dq=machine+learning&ots=epmyw0IG1J&sig=P0kb9Im4Ktz1Um7h7tFy8-8_LIA,78
+110," Shavlik, TG Dietterich",Readings in machine learning,536,1990, books.google.com,,https://books.google.com/books?hl=en&lr=&id=UgC33U2KMCsC&oi=fnd&pg=PA1&dq=machine+learning&ots=Thodeg8Lma&sig=FvnUKCsN9oMubqxbsNhc0qJfURk,16
+390,"Alzubi, A Nayyar, A Kumar",Machine learning from theory to algorithms: an overview,482,2018, iopscience.iop.org, Journal of physics: conference …,https://iopscience.iop.org/article/10.1088/1742-6596/1142/1/012012/meta,80
+90," Greener, SM Kandathil, L Moffat…",A guide to machine learning for biologists,434,2022, nature.com, Nature Reviews Molecular …,https://www.nature.com/articles/s41580-021-00407-0,217
+290,"Wei, X Chu, XY Sun, K Xu, HX Deng, J Chen, Z Wei…",Machine learning in materials science,383,2019, Wiley Online Library, InfoMat,https://onlinelibrary.wiley.com/doi/abs/10.1002/inf2.12028,77
+330,Dangeti,Statistics for machine learning,363,2017, books.google.com,,https://books.google.com/books?hl=en&lr=&id=C-dDDwAAQBAJ&oi=fnd&pg=PP1&dq=machine+learning&ots=j2brZqt4Xp&sig=xr-iInyZ0efVuBWnLf70GbaWpbU,52
+170,Wagstaff,Machine learning that matters,347,2012, arxiv.org, arXiv preprint arXiv:1206.4656,https://arxiv.org/abs/1206.4656,29
+80, Mitchell,Machine learning,334,1997, ds.amu.edu.et,,https://ds.amu.edu.et/xmlui/bitstream/handle/123456789/14637/Machine_Learning%20-%20421%20pages.pdf?sequence=1&isAllowed=y,12
+270,"Ribeiro, K Grolinger…",Mlaas: Machine learning as a service,328,2015, ieeexplore.ieee.org, … on machine learning and …,https://ieeexplore.ieee.org/abstract/document/7424435/,36
+70,"Bi, KE Goodman, J Kaminsky…",What is machine learning? 
A primer for the epidemiologist,313,2019, academic.oup.com, American journal of …,https://academic.oup.com/aje/article-abstract/188/12/2222/5567515,63 +100,"Provost, R Kohavi",On applied research in machine learning,296,1998, ai.stanford.edu,,https://ai.stanford.edu/~ronnyk/editorial.pdf,11 +420, Bishop,Model-based machine learning,252,2013, royalsocietypublishing.org, … Transactions of the Royal Society A …,https://royalsocietypublishing.org/doi/abs/10.1098/rsta.2012.0222,23 +490,"Vartak, H Subramanyam, WE Lee…",ModelDB: a system for machine learning model management,221,2016, dl.acm.org, Proceedings of the …,https://dl.acm.org/doi/abs/10.1145/2939502.2939516,28 +50,Alpaydin,Machine learning,216,2021, books.google.com,,https://books.google.com/books?hl=en&lr=&id=2nQJEAAAQBAJ&oi=fnd&pg=PR7&dq=machine+learning&ots=fH62O5ZGhs&sig=FrqykiQWufPDLZbZp0Gc8WqyxyU,72 +320,"Wang, C Ma, L Zhou",A brief review of machine learning and its application,170,2009, ieeexplore.ieee.org, 2009 international conference on …,https://ieeexplore.ieee.org/abstract/document/5362936/,11 +500,Daumé,A course in machine learning,169,2017, academia.edu,,https://www.academia.edu/download/37276995/Course_in_Machine_Learning.pdf,24 +210,Gollapudi,Practical machine learning,162,2016, books.google.com,,https://books.google.com/books?hl=en&lr=&id=WmsdDAAAQBAJ&oi=fnd&pg=PP1&dq=machine+learning&ots=1AD1xuPo5S&sig=o_dmiuADBZd5Gj38Tsv0to44s7k,20 +430," Wilson, NV Sahinidis",The ALAMO approach to machine learning,160,2017, Elsevier, Computers & Chemical Engineering,https://www.sciencedirect.com/science/article/pii/S0098135417300662,23 +250, Zhou,Learnware: on the future of machine learning.,132,2016, lamda.nju.edu.cn, Frontiers Comput. Sci.,https://www.lamda.nju.edu.cn/publication/fcs16learnware.pdf,16 +400,"Paluszek, S Thomas",MATLAB machine learning,127,2016, books.google.com,,https://books.google.com/books?hl=en&lr=&id=3kXODQAAQBAJ&oi=fnd&pg=PR6&dq=machine+learning&ots=ZMPqTJbhkK&sig=QC7mMx0eNpIiipWtXZsT79pTrBQ,16 +450,"Graves, V Nagisetty, V Ganesh",Amnesiac machine learning,59,2021, ojs.aaai.org, … of the AAAI Conference on Artificial …,https://ojs.aaai.org/index.php/AAAI/article/view/17371,20 diff --git a/project_analysis.md b/others/project_analysis.md similarity index 100% rename from project_analysis.md rename to others/project_analysis.md
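For orientation, this is the directory layout implied by the renames in this diff; paths are taken verbatim from the hunks above, and files untouched by the diff are omitted:

```text
ChatPaper/
├── ChatReviewerAndResponse/   # chat_reviewer.py, chat_response.py, get_paper.py, ReviewFormat.txt, review_comments.txt
├── HuggingFaceDeploy/         # app.py, plus the Private/ and Public/ deploy variants
├── auto_survey/               # the one-click literature survey
├── docker/                    # Makefile, build.sh, dev.sh, docker-compose.yaml, make.bat, tagpush.sh
└── others/                    # ChatPaper.ipynb, chat_arxiv_maomao.py, google_scholar_spider.py, machine_learning.csv, project_analysis.md
```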