Currently, experimental results with the TimeGrad model show poor adherence to control inputs when using covariate conditioning. The issue is that the previous sequences in the context window dominate the autoregressive component, so past inputs exert a stronger influence than the desired control inputs.
In the TimeGrad paper, both the covariates and the previous sequences are fed into the RNN, whose hidden state the model is then conditioned on. I propose separating the conditioning information from the input data and instead concatenating it with the autoregressive output to form the condition.
Additionally, applying per-frame dropout to the inputs before they enter the RNN can reduce the model's reliance on past sequences, allowing more precise conditioning on the control inputs.
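To make the proposal concrete, here is a minimal PyTorch sketch of both changes. All names (`SeparatedConditioning`, `frame_dropout`, etc.) are hypothetical and not part of the actual TimeGrad/pytorch-ts code; this is a sketch of the idea, not a drop-in implementation:

```python
import torch
import torch.nn as nn

class SeparatedConditioning(nn.Module):
    """Hypothetical sketch: keep covariates out of the RNN input and
    concatenate them with the RNN output to form the diffusion condition."""

    def __init__(self, target_dim, covariate_dim, hidden_dim, frame_dropout=0.3):
        super().__init__()
        # The RNN sees only the (dropped-out) past target sequence.
        self.rnn = nn.GRU(target_dim, hidden_dim, batch_first=True)
        self.frame_dropout = frame_dropout  # probability of zeroing a whole frame

    def forward(self, past_targets, covariates):
        # past_targets: (batch, time, target_dim)
        # covariates:   (batch, time, covariate_dim) -- the control inputs
        if self.training and self.frame_dropout > 0:
            # Per-frame dropout: zero entire time steps so the model cannot
            # lean too heavily on the autoregressive context.
            keep = (torch.rand(past_targets.shape[:2],
                               device=past_targets.device)
                    > self.frame_dropout).unsqueeze(-1)
            past_targets = past_targets * keep
        rnn_out, _ = self.rnn(past_targets)
        # Condition for the denoising network: autoregressive summary
        # concatenated with the untouched covariates.
        cond = torch.cat([rnn_out, covariates], dim=-1)
        return cond  # (batch, time, hidden_dim + covariate_dim)
```

Because the covariates bypass the RNN entirely, the denoising network receives them undiluted at every step, which should strengthen their influence relative to the past-sequence summary.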
If there are any questions or concerns about the proposed solution, I'm happy to chat about it.
TimeGrad predicts the future autoregressively. I did some work replacing this with a TCN-like method, but the prediction performance on the solar and taxi datasets is much worse than TimeGrad's. I suspect this is because my method doesn't use covariate conditioning; however, no matter how I incorporated covariates into my method, the predictions did not improve. If possible, we could explore the impact of covariates on prediction performance in a non-autoregressive setting.
By the way, TimeGrad extracts features not only from the historical sequences but also from the future prediction steps to generate its conditional embeddings. How can this be adapted to a non-autoregressive setting? This may also be why my method underperforms.
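One possible adjustment, sketched below under the assumption that the future covariates (e.g. time features) are known at prediction time, as they are in TimeGrad: encode the history causally, then combine the final context state with a per-step embedding of the future covariates to get a condition for every horizon step at once. All names here are hypothetical, and the single causal conv stands in for a full TCN:

```python
import torch
import torch.nn as nn

class NonARConditioner(nn.Module):
    """Hypothetical sketch: build a per-step condition for a non-autoregressive
    (TCN-style) head from encoded history plus known future covariates."""

    def __init__(self, target_dim, covariate_dim, hidden_dim, pred_len):
        super().__init__()
        self.pred_len = pred_len
        # A single causal conv standing in for a full TCN history encoder.
        self.history_encoder = nn.Conv1d(target_dim, hidden_dim,
                                         kernel_size=3, padding=2)
        self.future_proj = nn.Linear(covariate_dim, hidden_dim)

    def forward(self, past_targets, future_covariates):
        # past_targets:      (batch, ctx_len, target_dim)
        # future_covariates: (batch, pred_len, covariate_dim), known in advance
        x = past_targets.transpose(1, 2)                # (batch, target_dim, ctx_len)
        h = self.history_encoder(x)[..., :x.size(-1)]   # trim right pad -> causal
        summary = h[..., -1]                            # last causal state of the context
        # Broadcast the history summary across the horizon and add a per-step
        # embedding of the future covariates, so each predicted step is
        # conditioned on what is known about that specific step.
        cond = summary.unsqueeze(1).expand(-1, self.pred_len, -1) \
               + self.future_proj(future_covariates)
        return cond  # (batch, pred_len, hidden_dim)
```

This keeps the "features of the future steps" idea from TimeGrad without autoregression: the covariate embedding plays the role that the previously generated sample plays in the autoregressive loop.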