Multi-Output Regression Models #16431
I need some clarification here. If I have one set of predictors, I want to be able to build a model that predicts, say, dependent outcome 1, outcome 2, and outcome 3. If we are thinking about using GLM, are you saying that we want to have three sets of parameters for the 3 outcomes? Thanks, Wendy
@narasimhard please correct me if I'm wrong, but I think the point could be joint estimation of the parameters (e.g. the betas). @wendycwong GLM is a little bit special in this area, since there is a generalization of GLM that tries to do exactly that (VGLM). It still uses N sets of "betas", but they differ from the betas estimated independently by N separate GLMs, i.e., the N sets of betas can take advantage of the correlation structure in the data and yield a better estimator. For DeepLearning, I think the solution would simply be more neurons in the output layer. Am I correct @narasimhard?
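For context, a minimal sketch of the closest workaround available today: fitting one independent H2O GLM per target column. The frame, file name, and column names (`x1`, `x2`, `y1`, `y2`, `y3`) are hypothetical placeholders, and this shows exactly the independent-betas case the VGLM idea would improve on.

```python
# Minimal sketch: N independent GLMs -> N independent sets of betas.
# A VGLM-style joint estimator would instead share correlation structure
# across targets. Column and file names here are illustrative assumptions.
import h2o
from h2o.estimators import H2OGeneralizedLinearEstimator

h2o.init()
train = h2o.import_file("train.csv")  # hypothetical frame with predictors and 3 targets

predictors = ["x1", "x2"]
targets = ["y1", "y2", "y3"]

models = {}
for y in targets:
    glm = H2OGeneralizedLinearEstimator(family="gaussian")
    glm.train(x=predictors, y=y, training_frame=train)
    models[y] = glm  # one fitted model (one set of betas) per target
```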
Yes, that's what I am thinking as well. When using DL models, there would be more neurons in the output layer that handle the predictions for the N target variables. @tomasfryda @wendycwong
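A minimal PyTorch sketch of the "wider output layer" idea, since H2O's DeepLearning estimator currently takes a single response column: one shared network with 3 output neurons trained jointly on 3 regression targets. Layer sizes and the synthetic data are illustrative assumptions, not H2O's implementation.

```python
# Sketch: one shared network, 3 output neurons -> 3 regression targets per row,
# trained jointly with a single MSE loss. Data and sizes are synthetic.
import torch
import torch.nn as nn

n_features, n_targets = 10, 3
model = nn.Sequential(
    nn.Linear(n_features, 32),
    nn.ReLU(),
    nn.Linear(32, n_targets),   # 3 output neurons -> 3 predicted targets
)

X = torch.randn(100, n_features)   # dummy predictors
Y = torch.randn(100, n_targets)    # dummy targets

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()             # joint loss over all targets

for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), Y)
    loss.backward()
    optimizer.step()
```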
Thank you @tomasfryda and @narasimhard: from reading your notes, this means that we are doing the following:
Please let me know.
It would be fantastic to have models that can handle multi-target regression problems, for instance a scenario where the same set of independent variables is used to learn and predict multiple dependent outcomes. This would simplify training by using just one model and also make deployment easier.
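For comparison, a minimal scikit-learn sketch of the requested behavior: a single model trained on a shared predictor matrix to predict three targets at once (some sklearn estimators, such as random forests, support multi-output regression natively). The data is synthetic and purely illustrative.

```python
# Sketch of single-model multi-target regression: one fit, one predict,
# three target columns. Synthetic data for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))           # shared independent variables
Y = np.column_stack([                     # three dependent outcomes
    X[:, 0] + rng.normal(size=500),
    X[:, 1] - X[:, 2] + rng.normal(size=500),
    X.sum(axis=1) + rng.normal(size=500),
])

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, Y)                           # single model, 3 targets
preds = model.predict(X[:5])              # shape (5, 3): one column per target
print(preds.shape)
```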