
How should a relatively large dataset be handled? #311

Open
winwill2012 opened this issue Aug 27, 2019 · 1 comment

@winwill2012

A question: how should a relatively large dataset be handled? I have 1,000,000 users, 20,000 items, and 9,000,000 rating rows (matrix density about 0.05%).
Currently I am using BiasedMFRecommender, but even 25 GB of memory is not enough and the computation cannot finish.
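For context, a rough back-of-the-envelope estimate (not from the project; the sizes are taken from the issue text above) shows why anything that materializes a dense user-item matrix cannot fit in 25 GB, while a sparse representation easily does:

```python
# Back-of-the-envelope memory estimate using the numbers from this issue.
users, items, ratings = 1_000_000, 20_000, 9_000_000

# Dense float64 user-item matrix: far beyond 25 GB.
print(f"dense matrix:    {users * items * 8 / 1e9:.0f} GB")        # ~160 GB

# The same 9M ratings stored as sparse (user, item, rating) triples.
print(f"sparse triples:  {ratings * (4 + 4 + 8) / 1e9:.2f} GB")     # ~0.14 GB

# Biased-MF factor matrices with k = 50 latent factors (float64).
k = 50
print(f"factors (k={k}):  {(users + items) * k * 8 / 1e9:.2f} GB")  # ~0.41 GB
```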

@hklgit

hklgit commented Aug 4, 2020

This project is just a demo. Your dataset is not small, and 25 GB is definitely not going to be enough. With 20,000 items, even computing the item-item similarities would strain 25 GB. Reduce the data size so that the pipeline can at least run end to end.
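As an illustration of why a demo-sized subset (or an implementation that avoids dense intermediates) is the practical answer, here is a minimal sketch of biased matrix factorization trained with SGD over sparse (user, item, rating) triples. It is not code from this repository; the function name `train_biased_mf` and all hyperparameters are hypothetical. Memory stays proportional to the rating triples plus the two factor matrices, so 25 GB is not the bottleneck, although a pure-Python inner loop over 9M ratings will be slow.

```python
import numpy as np

def train_biased_mf(triples, n_users, n_items, k=50, lr=0.005, reg=0.02,
                    epochs=20, seed=0):
    """SGD training of biased MF: r_ui ≈ mu + b_u + b_i + p_u · q_i.

    `triples` is a float array of (user_idx, item_idx, rating) rows; memory is
    proportional to the ratings plus the factor matrices, never the dense matrix.
    """
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
    Q = 0.1 * rng.standard_normal((n_items, k))   # item latent factors
    bu = np.zeros(n_users)                        # user biases
    bi = np.zeros(n_items)                        # item biases
    mu = triples[:, 2].mean()                     # global mean rating

    for _ in range(epochs):
        rng.shuffle(triples)                      # shuffle rows in place each epoch
        for u, i, r in triples:
            u, i = int(u), int(i)
            err = r - (mu + bu[u] + bi[i] + P[u] @ Q[i])
            bu[u] += lr * (err - reg * bu[u])
            bi[i] += lr * (err - reg * bi[i])
            pu = P[u].copy()                      # keep old P[u] for the Q update
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * pu - reg * Q[i])
    return mu, bu, bi, P, Q
```

With 1M users, 20k items, and k = 50 this keeps the model well under 1 GB. In practice one would reach for a compiled implementation of the same idea (for example the SVD model in the Surprise library) or train on a sampled subset first, as suggested above.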
