
How did you decide the number of hidden layers to be 32? Can you please explain this? #35

Open
sidharthgurbani opened this issue Apr 2, 2020 · 1 comment

Comments

@sidharthgurbani

You have 128 readings in every window. How will these input values go into the LSTM network? I am trying to follow similar logic for my own generated dataset, so I need to understand the logic to implement it for my dataset.

@guillaume-chevalier
Owner

> How did you decide the number of hidden layers to be 32? Can you please explain this?

Having tried ANNs and RNNs on other datasets before, I knew that with this quantity of data, this number of hidden units would work well. (Note that the 32 here is the number of hidden units per LSTM layer, not the number of layers.) In practice, I tried a few values before settling on one. You could use hyperparameter tuning to speed up the process of finding the best parameters: https://www.neuraxle.org/stable/hyperparameter_tuning.html
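
For illustration only, here is a minimal sketch (not the repository's code) of the manual search described above: comparing a few candidate hidden sizes, assuming a Keras LSTM on UCI-HAR-shaped windows of 128 time steps × 9 features with 6 classes. The candidate sizes and the random placeholder data are assumptions.

```python
import numpy as np
import tensorflow as tf

def build_model(n_hidden):
    # One LSTM layer over windows of 128 time steps x 9 features, 6 classes.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(128, 9)),
        tf.keras.layers.LSTM(n_hidden),  # n_hidden is the size being tuned
        tf.keras.layers.Dense(6, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Placeholder data; substitute the real train/validation split.
x_train = np.random.randn(64, 128, 9).astype("float32")
y_train = np.random.randint(0, 6, size=64)

for n_hidden in (16, 32, 64):  # candidate hidden sizes to compare
    model = build_model(n_hidden)
    hist = model.fit(x_train, y_train, epochs=1, verbose=0)
    print(n_hidden, hist.history["loss"][-1])
```

A tool like Neuraxle automates this same loop, searching the hyperparameter space instead of iterating over a hand-picked tuple.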

> You have 128 readings in every window. How will these input values go into the LSTM network?

The input is a 3D tensor (a cube): one axis for the batch, one for time, and one for the features. Values are normalized and fed into the LSTM's input weights (its perceptrons), one time step at a time. The time axis is kept as its own dimension of the 3D tensor, so the 128 readings in a window become 128 consecutive time steps for the LSTM.
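
A minimal sketch of that shape convention, assuming the UCI HAR setup (9 features per reading) and an arbitrary batch size; this is not the repository's code, just an illustration of how a window of 128 readings occupies the time axis of the 3D tensor:

```python
import numpy as np
import tensorflow as tf

batch_size, n_steps, n_features = 4, 128, 9
x = np.random.randn(batch_size, n_steps, n_features).astype("float32")

# Normalize per feature (the real dataset is pre-normalized; this stands in).
x = (x - x.mean(axis=(0, 1))) / (x.std(axis=(0, 1)) + 1e-8)

lstm = tf.keras.layers.LSTM(32)  # 32 hidden units, as discussed above
h = lstm(x)                      # the LSTM consumes the time axis (axis 1)
print(h.shape)                   # -> (4, 32): one 32-dim state per window
```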
