
Urban-computing-papers

I no longer have the energy to maintain this project. If you are interested in taking it over, please contact me.

Introduction

This project is a collection of recent research on new infrastructure and urban computing, including white papers, academic papers, AI labs, and datasets.

Related project: spatio-temporal-papers

Contribution

Contributions are always welcome! Make an individual pull request for each suggestion.

Content


Method summary


Relevant papers

[1] What is new infrastructure

[2] Baidu AI new infrastructure layout

[3] Inventory of new infrastructure projects

[4] Map of new infrastructure enterprises

[1] Baidu City Brain White Paper

[2] White Paper on Blockchain-Empowered New Smart Cities

[3] JD Cloud Intelligent City White Paper 2019

[4] Research on China's Intelligent City Development Strategy and Tactics

[5] White Paper on the Digital Transformation of Urban Transportation

[6] CCID White Paper: 2020 Urban New Infrastructure Layout and Development

[7] CCID Report: Research Report on the Development of China's "New Infrastructure"

[8] New Infrastructure, New Opportunities: White Paper on China's Intelligent Economy Development

[9] New Infrastructure Policy White Paper

[10] New Infrastructure Development White Paper

[11] White Paper on the New Infrastructure Development Potential of China's Provinces and Cities

[12] China Urban Artificial Intelligence Development Index Report

[13] Research Report on the Integration of Artificial Intelligence and Industry

[14] 2020 China Smart City Development Research Report

[15] The Rise of Data Productivity: New Momentum, New Governance

[1] Yu Zheng: link

[2] Yanhua Li: link

[3] Xun Zhou: link

[4] Yaguang Li: link

[5] Zhenhui Jessie Li: link

[6] David S. Rosenblum: link

[7] Huaiyu Wan: link

[8] Junbo Zhang: link

[9] Shiming Xiang: link

[1] iFLYTEK: link

[2] JD City: link

[3] Alibaba: link

[4] Huawei: link

[5] ByteDance: link

[6] Alibaba DAMO Academy: link

[7] Tencent: link

[8] Microsoft: link

[9] Intel: link

[10] Facebook: link

[11] Google: link

[12] National Laboratory of Pattern Recognition: link

[13] Baidu: link

[14] JD cloud: link

[15] Urban Computing Foundation Interactive Landscape: link

[1] GAIA Open Dataset: link

[2] 智慧足迹 (Smart Steps): link

[1] UK traffic flow datasets: link

[2] Illinois traffic flow datasets: link

[3] PeMS: link, Baidu Netdisk password:jutw | PeMS Guide

[1] Chengdu: link

[2] Xi'an: link

[1] Weather and events data: link

[2] Weather and climate data: link

[3] NSW POI data: link

[4] Road network data: link

[5] NYC OpenData: link

[6] METR-LA: link, Baidu Netdisk password:xsz5

[7] TaxiBJ: link, Baidu Netdisk password:sg4n

[8] BikeNYC: link, Baidu Netdisk password:lmwj

[9] NYC-Taxi: link, Baidu Netdisk password:022y

[10] NYC-Bike: link

[11] San Francisco taxi: link

[12] Chicago bike: link

[13] BikeDC: link

Mind map (Chinese version)

urban-computing

| Reference | Modules | Description | Architecture |
| --- | --- | --- | --- |
| [3.1] | CNN | First convert the city into grid-shaped data, then use a CNN to capture spatial dependencies. The receptive field is expanded by stacking convolutional layers. | STResnet |
| [3.1] | GCN | A traffic network is naturally organized as a graph, so it is reasonable to formulate road networks as graphs mathematically. Graph convolution is applied directly to the graph-structured data to extract meaningful patterns and features in the spatial domain. | GCN |
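As a concrete illustration of the graph convolution step in the table above, here is a minimal PyTorch sketch of a single GCN layer with symmetric normalization (the standard H' = σ(ÂHW) form). The layer sizes and the toy adjacency in the usage lines are made up for illustration and do not come from any specific paper listed here.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution step: H' = relu(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (num_nodes, in_dim), adj: (num_nodes, num_nodes) raw adjacency
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)  # add self-loops
        deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)                # D^{-1/2}
        a_norm = deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)
        return torch.relu(a_norm @ self.linear(x))               # propagate, then transform

# Illustrative usage: 207 sensors with 2 input features each.
x = torch.randn(207, 2)
adj = (torch.rand(207, 207) > 0.9).float()
h = GCNLayer(2, 64)(x, adj)   # (207, 64)
```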
| Reference | Modules | Description | Architecture |
| --- | --- | --- | --- |
| [1.1] | Causal convolution | Predict the likely future value $y$ from past observations. The temporal order is respected during convolution. Modeling a long sequence requires stacking more convolutional layers. | causal |
| [1.5] | Dilated causal convolution | Addresses the vanishing gradients, exploding gradients, and training difficulty that causal convolution suffers from on long sequences. Dilated causal convolution achieves a larger receptive field with fewer convolutional layers by skipping part of the input. | dilatedCasual |
| [2.2] | LSTM | Use a Long Short-Term Memory (LSTM) network to capture the sequential temporal dependency; LSTM was proposed to address the exploding and vanishing gradient issues of the traditional Recurrent Neural Network (RNN). | dilatedCasual |
| [1.7] | GRU | Gated Recurrent Units (GRU), a simple yet powerful variant of RNNs. | dilatedCasual |
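The dilated causal convolution described above can be sketched in a few lines of PyTorch: left-padding keeps the output at time t from seeing future inputs, and doubling the dilation at each layer grows the receptive field exponentially. The channel count and sequence length below are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedCausalConv1d(nn.Module):
    """Causal 1-D convolution: the output at time t only sees inputs <= t."""
    def __init__(self, channels, kernel_size=2, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation          # amount of left padding
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):
        # x: (batch, channels, time)
        x = F.pad(x, (self.pad, 0))                      # pad only on the left (the past)
        return self.conv(x)

# Stacking layers with dilations 1, 2, 4, 8 grows the receptive field exponentially.
stack = nn.Sequential(*[DilatedCausalConv1d(16, dilation=2 ** i) for i in range(4)])
out = stack(torch.randn(8, 16, 12))                      # (8, 16, 12), still causal
```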
| Reference | Modules | Description | Architecture |
| --- | --- | --- | --- |
| [3.1] | External Component | Mainly considers weather, holiday events, and metadata (i.e., day of week, weekday/weekend). To predict flows at time interval $t$, the holiday event and metadata can be obtained directly; for weather, either the forecast for interval $t$ or the observed weather at interval $t-1$ can be used. | STResnet |
| [2.4] | External Factor Fusion | First incorporates temporal factors including time features, meteorological features, and the SensorID that specifies the target sensor; the forecast weather at interval $t$ can be used. Most of these factors are categorical and cannot be fed to neural networks directly, so each categorical attribute is transformed into a low-dimensional vector by a separate embedding layer. | GeoMAN |
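A minimal sketch of the external-factor embedding idea from the table above: each categorical factor gets its own embedding layer, and the resulting vectors are concatenated and projected. The factor names, vocabulary sizes, and dimensions here are hypothetical.

```python
import torch
import torch.nn as nn

class ExternalFactorEncoder(nn.Module):
    """Embed categorical external factors and fuse them into one vector."""
    def __init__(self):
        super().__init__()
        self.day_of_week = nn.Embedding(7, 4)    # hypothetical vocabulary sizes and dims
        self.holiday = nn.Embedding(2, 2)
        self.weather = nn.Embedding(10, 4)       # e.g. 10 coarse weather types
        self.proj = nn.Linear(4 + 2 + 4, 16)

    def forward(self, day, holiday, weather):
        z = torch.cat([self.day_of_week(day), self.holiday(holiday),
                       self.weather(weather)], dim=-1)
        return torch.relu(self.proj(z))          # fused external representation

enc = ExternalFactorEncoder()
vec = enc(torch.tensor([2]), torch.tensor([0]), torch.tensor([5]))  # (1, 16)
```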
| Reference | Modules | Description | Architecture |
| --- | --- | --- | --- |
| [3.1] | Residual connection | As a plain network deepens, training accuracy eventually degrades, and this is not caused by overfitting (an overfitted model would still fit the training set well). To address this, the authors proposed the deep residual network, which allows the network to be made much deeper. | STResnet |
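A generic residual unit in the spirit of the description above (y = x + F(x)): the identity shortcut lets gradients bypass the convolutional stack. This is a plain sketch, not ST-ResNet's exact block.

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """y = x + F(x): the skip connection keeps very deep stacks trainable."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)   # identity shortcut added to the conv branch
```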
Attention

Relevant papers

All papers have been sorted into folders. If a paper cannot be downloaded from its original source, you can get it here instead. ➡ Link (Code: HC8C)

[1] Urban Computing: Concepts, Methodologies, and Applications. ACM Transactions on Intelligent Systems and Technology 2014. paper

Yu Zheng, Licia Capra, Ouri Wolfson, Hai Yang


[2] A Comprehensive Survey on Graph Neural Networks. IEEE Transactions on Neural Networks and Learning Systems 2020. paper

Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, Philip S. Yu


[3] Batman or the Joker? The Powerful Urban Computing and its Ethics Issues. SIGSPATIAL 2019. paper

Kaiqun Fu, Abdulaziz Alhamadani, Taoran Ji, Chang-Tien Lu


[4] Deep Learning for Spatio-Temporal Data Mining: A Survey. arXiv paper

Senzhang Wang, Jiannong Cao, Philip S. Yu


[5] Urban flow prediction from spatiotemporal data using machine learning: A survey. Information Fusion 2020. paper

Peng Xie, Tianrui Li, Jia Liu, Shengdong Du, Xin Yang, Junbo Zhang


[6] How to Build a Graph-Based Deep Learning Architecture in Traffic Domain: A Survey. arXiv paper

translation

Jiexia Ye, Juanjuan Zhao, Kejiang Ye, Chengzhong Xu


[7] A Survey on Modern Deep Neural Network for Traffic Prediction: Trends, Methods and Challenges. TKDE 2020. paper

David Alexander Tedjopurnomo, Zhifeng Bao, Baihua Zheng, Farhana Murtaza Choudhury, Kai Qin


[8] A Survey of Hybrid Deep Learning Methods for Traffic Flow Prediction. ICAIP 2019. paper

Yan Shi, Haoran Feng, Xiongfei Geng, Xingui Tang, Yongcai Wang

[1] Graph Attention Networks. ICLR 2018. paper

Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, Yoshua Bengio


[2] AM-GCN: Adaptive Multi-channel Graph Convolutional Networks. SIGKDD 2020. paper

Xiao Wang, Meiqi Zhu, Deyu Bo, Peng Cui, Chuan Shi, Jian Pei


[3] Heterogeneous Graph Neural Network. SIGKDD 2019. paper

Chuxu Zhang, Dongjin Song, Chao Huang, Ananthram Swami, Nitesh V. Chawla


[4] Adaptive Graph Convolutional Neural Networks. AAAI 2018. paper

Ruoyu Li, Sheng Wang, Feiyun Zhu, Junzhou Huang


[5] Temporal Graph Networks for Deep Learning on Dynamic Graphs. arXiv 2020. paper

Emanuele Rossi, Ben Chamberlain, Fabrizio Frasca, Davide Eynard, Federico Monti, Michael Bronstein


[6] Geom-GCN: Geometric Graph Convolutional Networks. ICLR 2020. paper

Hongbin Pei, Bingzhe Wei, Kevin Chen-Chuan Chang, Yu Lei, Bo Yang


[7] Investigating and Mitigating Degree-Related Biases in Graph Convolutional Networks. CIKM 2020. paper

Xianfeng Tang, Huaxiu Yao, Yiwei Sun, Yiqi Wang, Jiliang Tang, Charu Aggarwal, Prasenjit Mitra, Suhang Wang


[8] TinyGNN: Learning Efficient Graph Neural Networks. KDD 2020. paper

Bencheng Yan, Chaokun Wang, Gaoyang Guo, Yunkai Lou


[9] Graph Neural Architecture Search. IJCAI 2020. paper

Yang Gao, Hong Yang, Peng Zhang, Chuan Zhou, Yue Hu


[10] A Practical Guide to Graph Neural Networks. arXiv 2020. paper

Isaac Ronald Ward, Jack Joyner, Casey Lickfold, Stash Rowe, Yulan Guo, Mohammed Bennamoun

[1] Learning Long-term Dependencies Using Cognitive Inductive Biases in Self-attention RNNs. PMLR 2020. paper

Giancarlo Kerg, Bhargav Kanuparthi, Anirudh Goyal, Kyle Goyette, Yoshua Bengio, Guillaume Lajoie


[2] Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. AAAI 2021. note, paper, github

| Models | Modules | Architecture | Conclusion |
| --- | --- | --- | --- |
| Informer | | | |

Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, Wancai Zhang


[1.1]

Spatio-Temporal Graph Convolutional Networks: A Deep Learning Framework for Traffic Forecasting. IJCAI 2018. note, paper, github, code (password: j6ak).

| Models | Modules | Architecture | Conclusion |
| --- | --- | --- | --- |
| STGCN | GCN, Gated CNN | STGCN | This paper uses GCN to model spatial dependence and causal convolution with a GLU gating mechanism to model temporal dependence. A bottleneck strategy is used in the structure to compress features. This paper is also the first application of GCN in the traffic domain. (See the sketch below the author list.) |

Bing Yu, Haoteng Yin, Zhanxing Zhu
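A minimal sketch of a gated temporal convolution with a GLU, the mechanism the STGCN summary above refers to. This is a simplification for illustration, not the authors' exact layer, and the tensor shapes in the usage line are assumptions.

```python
import torch
import torch.nn as nn

class GatedTemporalConv(nn.Module):
    """Temporal convolution with a GLU gate: output = P * sigmoid(Q)."""
    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        # Produce 2*out_channels so the result can be split into value and gate halves.
        self.conv = nn.Conv2d(in_channels, 2 * out_channels,
                              kernel_size=(kernel_size, 1))

    def forward(self, x):
        # x: (batch, channels, time, num_nodes); the convolution shrinks the time axis.
        p, q = self.conv(x).chunk(2, dim=1)
        return p * torch.sigmoid(q)               # GLU: the gate decides what passes

out = GatedTemporalConv(1, 32)(torch.randn(4, 1, 12, 207))  # (4, 32, 10, 207)
```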


[1.2]

Dynamic Graph Convolution Network for Traffic Forecasting Based on Latent Network of Laplace Matrix Estimation. TITS 2020. paper

Models Modules Architecture conclusion

Kan Guo, Yongli Hu, Zhen Qian, Yanfeng Sun, Junbin Gao, Baocai Yin


[1.3]

Spatio-Temporal Graph Structure Learning for Traffic Forecasting. AAAI 2020. paper.

| Models | Modules | Architecture | Conclusion |
| --- | --- | --- | --- |
| SLC | SLCNN, P3D | SLC | This paper proposes a new graph convolution formulation. It argues that one should learn not only the feature information on the graph but also the structure of the graph itself, which changes dynamically. P3D is used to model the temporal dependence. |

Qi Zhang, Jianlong Chang, Gaofeng Meng, Shiming Xiang, Chunhong Pan


[1.4]

GMAN: A Graph Multi-Attention Network for Traffic Prediction. AAAI 2020. paper, github, code (password: 4fdh).

| Models | Modules | Architecture | Conclusion |
| --- | --- | --- | --- |
| GMAN | Encoder-Decoder, ST-Attention, Trans Attention | GMAN | This paper proposes a spatio-temporal attention mechanism with gated fusion to model complex spatio-temporal correlations. (See the sketch below the author list.) |

Chuanpan Zheng, Xiaoliang Fan, Cheng Wang, Jianzhong Qi
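A hedged sketch of the gated fusion idea mentioned in the GMAN summary above: a learned gate z blends the spatial and temporal representations. The exact parametrization in the paper may differ from this simplification.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuse spatial (h_s) and temporal (h_t) representations with a learned gate."""
    def __init__(self, dim):
        super().__init__()
        self.w_s = nn.Linear(dim, dim, bias=False)
        self.w_t = nn.Linear(dim, dim)

    def forward(self, h_s, h_t):
        z = torch.sigmoid(self.w_s(h_s) + self.w_t(h_t))   # gate values in [0, 1]
        return z * h_s + (1.0 - z) * h_t                    # convex combination

fused = GatedFusion(64)(torch.randn(8, 64), torch.randn(8, 64))  # (8, 64)
```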


[1.5]

Graph WaveNet for Deep Spatial-Temporal Graph Modeling. IJCAI 2019. paper, github, code (password: acfw).

| Models | Modules | Architecture | Conclusion |
| --- | --- | --- | --- |
| GWN | GCN with adaptive adjacency matrix, Gated TCN | GWN | Building on DCRNN, this paper proposes a diffusion convolution formulation with a self-adaptive adjacency matrix, again emphasizing that the graph structure changes dynamically. Two node embedding vectors are used to learn the graph structure, and causal convolution models the temporal dependence. The overall structure is similar to WaveNet. (See the sketch below the author list.) |

Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, Chengqi Zhang
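A sketch of the self-adaptive adjacency matrix described in the Graph WaveNet summary above, built from two learnable node-embedding tables as A_adp = softmax(relu(E1 E2ᵀ)); the embedding size and node count below are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveAdjacency(nn.Module):
    """Self-adaptive adjacency learned from two node-embedding tables."""
    def __init__(self, num_nodes, emb_dim=10):
        super().__init__()
        self.e1 = nn.Parameter(torch.randn(num_nodes, emb_dim))
        self.e2 = nn.Parameter(torch.randn(num_nodes, emb_dim))

    def forward(self):
        # A_adp = softmax(relu(E1 @ E2^T)); trained end to end, no prior graph required.
        return F.softmax(F.relu(self.e1 @ self.e2.t()), dim=1)

adj = AdaptiveAdjacency(num_nodes=207)()   # (207, 207) row-normalized weights
```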


[1.6]

Spatial-Temporal Synchronous Graph Convolutional Networks: A New Framework for Spatial-Temporal Network Data Forecasting. AAAI 2020. paper, github, code (password: 3jkd).

| Models | Modules | Architecture | Conclusion |
| --- | --- | --- | --- |
| STSGCN | Spatial-Temporal Embedding, STSGCM | STSGCN | This paper proposes a localized spatio-temporal graph: by combining the graph structures of adjacent time slices into one local spatio-temporal graph, a new adjacency matrix is constructed that captures spatial and temporal dependencies simultaneously. (See the sketch below the author list.) |

Chao Song, Youfang Lin, Shengnan Guo, Huaiyu Wan
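A simplified sketch of the localized spatio-temporal graph construction summarized above for STSGCN: the adjacency matrices of consecutive time slices sit on the block diagonal, and each node is linked to itself in the neighboring slices. The real model adds further components (embeddings, masks) not shown here.

```python
import torch

def local_st_adjacency(adj: torch.Tensor, steps: int = 3) -> torch.Tensor:
    """Build a (steps*N, steps*N) adjacency linking consecutive time slices."""
    n = adj.size(0)
    big = torch.zeros(steps * n, steps * n)
    eye = torch.eye(n)
    for t in range(steps):
        big[t * n:(t + 1) * n, t * n:(t + 1) * n] = adj      # spatial edges within slice t
        if t + 1 < steps:
            # connect each node to itself in the adjacent slices
            big[t * n:(t + 1) * n, (t + 1) * n:(t + 2) * n] = eye
            big[(t + 1) * n:(t + 2) * n, t * n:(t + 1) * n] = eye
    return big

A_local = local_st_adjacency(torch.rand(5, 5).round())       # (15, 15) for a toy 5-node graph
```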


[1.7]

Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting. ICLR 2018. paper, github, code (password: ba0q).

| Models | Modules | Architecture | Conclusion |
| --- | --- | --- | --- |
| DCRNN | Diffusion Convolutional Layer, Encoder-Decoder, GRU | DCRNN | This paper proposes a random-walk-based diffusion convolution to model spatial dependence and uses GRU to model temporal dependence. (See the sketch below the author list.) |

Yaguang Li, Rose Yu, Cyrus Shahabi, Yan Liu
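A hedged sketch of the random-walk diffusion convolution summarized above for DCRNN: the input is diffused K steps along forward and backward transition matrices, with a separate weight per step. Bias terms and other details of the paper's layer are omitted.

```python
import torch
import torch.nn as nn

class DiffusionConv(nn.Module):
    """K-step diffusion convolution with forward/backward random-walk matrices."""
    def __init__(self, in_dim, out_dim, k=2):
        super().__init__()
        self.k = k
        # one weight per diffusion step and direction, plus the identity (0-step) term
        self.weights = nn.ModuleList(
            [nn.Linear(in_dim, out_dim, bias=False) for _ in range(2 * k + 1)])

    def forward(self, x, adj):
        # x: (num_nodes, in_dim); adj: (num_nodes, num_nodes)
        p_fwd = adj / adj.sum(dim=1, keepdim=True).clamp(min=1e-6)       # D_O^{-1} A
        p_bwd = adj.t() / adj.t().sum(dim=1, keepdim=True).clamp(min=1e-6)
        out, idx = self.weights[0](x), 1
        for p in (p_fwd, p_bwd):
            h = x
            for _ in range(self.k):
                h = p @ h                        # one more diffusion step
                out = out + self.weights[idx](h)
                idx += 1
        return out

y = DiffusionConv(2, 32)(torch.randn(207, 2), (torch.rand(207, 207) > 0.9).float())
```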


[1.8]

Attention Based Spatial-Temporal Graph Convolutional Networks for Traffic Flow Forecasting. AAAI 2019. paper, github, code (password: nbje).

| Models | Modules | Architecture | Conclusion |
| --- | --- | --- | --- |
| ASTGCN | Spatial attention, Temporal attention, GCN, TCN | ASTGCN | The model combines a spatial-temporal attention mechanism with spatial-temporal convolution, including graph convolutions in the spatial dimension and standard convolutions in the temporal dimension, to simultaneously capture the dynamic spatial-temporal characteristics of traffic data. |

Shengnan Guo, Youfang Lin, Ning Feng, Chao Song, Huaiyu Wan


[1.9]

ST-GRAT: A Novel Spatio-temporal Graph Attention Network for Accurately Forecasting Dynamically Changing Road Speed. CIKM 2020. paper

| Models | Modules | Architecture | Conclusion |
| --- | --- | --- | --- |
| ST-GRAT | Encoder-Decoder, Embedding, Spatial Attention, Temporal Attention | ST-GRAT | This paper presents ST-GRAT, a novel spatial and temporal attention model for accurate traffic speed prediction. Spatial attention captures the spatial correlation among roads using graph structure information, while temporal attention captures the temporal dynamics of the road network by attending directly to features in long sequences. |

Cheonbok Park, Chunggi Lee, Hyojin Bahng, Yunwon Tae, Seungmin Jin, Kihwan Kim, Sungahn Ko, Jaegul Choo


[1.10]

Temporal Multi-Graph Convolutional Network for Traffic Flow Prediction. TITS 2020. paper

Models Modules Architecture conclusion

Mingqi Lv, Zhaoxiong Hong, Ling Chen, Tieming Chen, Tiantian Zhu, Shouling Ji


[1.11]

Hybrid Spatio-Temporal Graph Convolutional Network: Improving Traffic Prediction with Navigation Data. SIGKDD 2020. paper

Models Modules Architecture conclusion

Rui Dai, Shenkun Xu, Qian Gu, Chenguang Ji, Kaikui Liu


[1.12]

Multi-Range Attentive Bicomponent Graph Convolutional Network for Traffic Forecasting. AAAI 2020. paper

Models Modules Architecture conclusion

Weiqi Chen, Ling Chen, Yu Xie, Wei Cao, Yusong Gao, Xiaojie Feng


[1.13]

LSGCN: Long Short-Term Traffic Prediction with Graph Convolutional Networks. IJCAI 2020. paper

Models Modules Architecture conclusion

Rongzhou Huang, Chuyin Huang, Yubao Liu, Genan Dai, Weiyang Kong


[1.14]

Optimized Graph Convolution Recurrent Neural Network for Traffic Prediction. TITS 2020. paper

Models Modules Architecture conclusion

Kan Guo, Yongli Hu, Zhen Qian, Hao Liu, Ke Zhang, Yanfeng Sun, Junbin Gao, Baocai Yin


[1.15]

Dynamic Graph Convolution Network for Traffic Forecasting Based on Latent Network of Laplace Matrix Estimation. TITS 2020. paper

Models Modules Architecture conclusion

Kan Guo, Yongli Hu, Zhen Qian, Yanfeng Sun, Junbin Gao, Baocai Yin


[1.16]

GSTNet: Global Spatial-Temporal Network for Traffic Flow Prediction. IJCAI 2019. paper

Models Modules Architecture conclusion

Shen Fang, Qi Zhang, Gaofeng Meng, Shiming Xiang, Chunhong Pan


[1.17]

Short-Term Traffic Flow Forecasting Method With M-B-LSTM Hybrid Network. TITS 2020. paper

Models Modules Architecture conclusion

Qu Zhaowei, Li Haitao, Li Zhihui, Zhong Tao


[1.18]

Traffic Graph Convolutional Recurrent Neural Network: A Deep Learning Framework for Network-Scale Traffic Learning and Forecasting. TITS 2019. paper

Models Modules Architecture conclusion

Zhiyong Cui, Kristian Henrickson, Ruimin Ke, Ziyuan Pu, Yinhai Wang


[2.1]

Urban Traffic Prediction from Spatio-Temporal Data Using Deep Meta Learning. SIGKDD 2019. paper

Models Modules Architecture conclusion

Zheyi Pan, Yuxuan Liang, Weifeng Wang, Yong Yu, Yu Zheng, Junbo Zhang


[2.2]

Revisiting Spatial-Temporal Similarity: A Deep Learning Framework for Traffic Prediction. AAAI 2019. paper, github, code (password: 7hu9)

| Models | Modules | Architecture | Conclusion |
| --- | --- | --- | --- |
| STDN | CNN, LSTM, Attention, FGM | STDN | This paper operates on grid data, using CNN to model spatial dependence, LSTM to model temporal dependence, and an attention mechanism to model periodic shifts in time. |

Huaxiu Yao, Xianfeng Tang, Hua Wei, Guanjie Zheng, Zhenhui Li


[2.3]

Deep Spatial–Temporal 3D Convolutional Neural Networks for Traffic Data Forecasting. TITS 2019. paper

Models Modules Architecture conclusion

Shengnan Guo, Youfang Lin, Shijie Li, Zhaoming Chen, and Huaiyu Wan


[2.4]

GeoMAN: Multi-level Attention Networks for Geo-sensory Time Series Prediction. IJCAI 2018. paper, github, code (password: vavc)

| Models | Modules | Architecture | Conclusion |
| --- | --- | --- | --- |
| GeoMAN | Spatial Attention, Temporal Attention, LSTM, Encoder-Decoder | GeoMAN | This paper applies local and global spatial attention to capture dynamic correlations between sensors, and temporal attention to adaptively select the relevant time steps for prediction. In addition, the model accounts for external factors through a fusion module. (See the sketch below the author list.) |

Yuxuan Liang, Songyu Ke, Junbo Zhang, Xiuwen Yi, Yu Zheng
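A simplified sketch of the temporal-attention step summarized above for GeoMAN: encoder hidden states are scored against the decoder state with additive attention and combined into a context vector. Dimensions are illustrative and the paper's exact scoring function may differ.

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Score encoder states against the decoder state and return a context vector."""
    def __init__(self, enc_dim, dec_dim, attn_dim=32):
        super().__init__()
        self.w_enc = nn.Linear(enc_dim, attn_dim, bias=False)
        self.w_dec = nn.Linear(dec_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, enc_states, dec_state):
        # enc_states: (batch, T, enc_dim); dec_state: (batch, dec_dim)
        scores = self.v(torch.tanh(self.w_enc(enc_states) +
                                   self.w_dec(dec_state).unsqueeze(1)))  # (batch, T, 1)
        alpha = torch.softmax(scores, dim=1)      # attention weights over time steps
        return (alpha * enc_states).sum(dim=1)    # (batch, enc_dim) context vector

ctx = TemporalAttention(64, 64)(torch.randn(8, 12, 64), torch.randn(8, 64))  # (8, 64)
```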


[2.5]

Preserving Dynamic Attention for Long-Term Spatial-Temporal Prediction. SIGKDD 2020. paper

Models Modules Architecture conclusion

Haoxing Lin, Rufan Bai, Weijia Jia, Xinyu Yang, Yongjian You


[2.6]

Self-Attention ConvLSTM for Spatiotemporal Prediction. AAAI 2020. paper

| Models | Modules | Architecture | Conclusion |
| --- | --- | --- | --- |
| SA-ConvLSTM | Self-attention, ConvLSTM, Self-Attention Memory Module | SAM | This paper captures long-term spatial and temporal dependence with a self-attention memory module, which is combined with ConvLSTM for spatiotemporal prediction. |

Zhihui Lin, Maomao Li, Zhuobin Zheng, Yangyang Cheng, Chun Yuan


[3.1]

Deep Spatio-Temporal Residual Networks for Citywide Crowd Flows Prediction. AAAI 2017. paper

| Models | Modules | Architecture | Conclusion |
| --- | --- | --- | --- |
| ST-ResNet | CNN, Key frame extraction | STResnet | This paper operates on grid data, using CNN with residual connections to model spatial dependence, extracting key frames to capture trend, periodicity, and closeness in the time dimension, and also considering external factors. (See the sketch below the author list.) |

Junbo Zhang, Yu Zheng, Dekang Qi
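A minimal sketch of the parametric fusion described above for ST-ResNet, where learnable element-wise weights blend the closeness, period, and trend branches before the external component and a tanh are applied; the grid size and channel count are assumptions.

```python
import torch
import torch.nn as nn

class ParametricFusion(nn.Module):
    """Element-wise weighted fusion of closeness / period / trend outputs."""
    def __init__(self, shape):
        super().__init__()
        # one learnable weight map per temporal component
        self.w_c = nn.Parameter(torch.rand(*shape))
        self.w_p = nn.Parameter(torch.rand(*shape))
        self.w_q = nn.Parameter(torch.rand(*shape))

    def forward(self, x_close, x_period, x_trend, x_ext):
        fused = self.w_c * x_close + self.w_p * x_period + self.w_q * x_trend
        return torch.tanh(fused + x_ext)           # external component added before tanh

fuse = ParametricFusion(shape=(2, 32, 32))         # 2 flow channels on a 32x32 grid (illustrative)
y = fuse(*[torch.randn(4, 2, 32, 32) for _ in range(4)])   # (4, 2, 32, 32)
```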


[3.2]

UrbanFM: Inferring Fine-Grained Urban Flows. SIGKDD 2019. paper

| Models | Modules | Architecture | Conclusion |
| --- | --- | --- | --- |
| UrbanFM | CNN, Upsampling, SubPixel Block | UrbanFM | Analogous to image super-resolution, this paper aims to infer fine-grained urban flow maps from coarse-grained observations. (See the sketch below the author list.) |

Yuxuan Liang, Kun Ouyang, Lin Jing, Sijie Ruan, Ye Liu, Junbo Zhang, David S. Rosenblum, Yu Zheng
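A small sketch of a sub-pixel upsampling block of the kind the UrbanFM summary above refers to, using PyTorch's PixelShuffle to turn a coarse grid into a finer one; the upscaling factor and grid sizes are illustrative, and the additional normalization UrbanFM applies on top is omitted.

```python
import torch
import torch.nn as nn

# Upscale a coarse 16x16 flow map to 64x64 (factor 4) with a sub-pixel block.
upscale = 4
subpixel = nn.Sequential(
    nn.Conv2d(1, upscale ** 2, kernel_size=3, padding=1),  # predict r^2 channels per cell
    nn.PixelShuffle(upscale),                              # rearrange channels into space
    nn.ReLU(),
)
fine = subpixel(torch.randn(8, 1, 16, 16))                 # (8, 1, 64, 64)
```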


[3.3]

DeepSTD: Mining Spatio-Temporal Disturbances of Multiple Context Factors for Citywide Traffic Flow Prediction. TITS 2019. paper

Models Modules Architecture conclusion

Chuanpan Zheng, Xiaoliang Fan, Chenglu Wen, Longbiao Chen, Cheng Wang, Jonathan Li


[3.4]

Dynamic Spatial-Temporal Representation Learning for Traffic Flow Prediction. TITS 2020. paper

Models Modules Architecture conclusion

Lingbo Liu, Jiajie Zhen, Guanbin Li, Geng Zhan, Zhaocheng He, Bowen Du, Liang Lin


[3.5]

AutoST: Efficient Neural Architecture Search for Spatio-Temporal Prediction. SIGKDD 2020. paper

Models Modules Architecture conclusion

Ting Li, Junbo Zhang, Kainan Bao, Yuxuan Liang, Yexin Li, Yu Zheng


[3.6]

Flow Prediction in Spatio-Temporal Networks Based on Multitask Deep Learning. TKDE 2020. paper

Models Modules Architecture conclusion

Junbo Zhang, Yu Zheng, Junkai Sun, Dekang Qi


[3.7]

Multi-Graph Convolutional Network for Short-Term Passenger Flow Forecasting in Urban Rail Transit. IET Intelligent Transport Systems 2020. paper

Models Modules Architecture conclusion

Jinlei Zhang, Feng Chen, Yinan Guo, Xiaohong Li


[3.8]

Revisiting Convolutional Neural Networks for Citywide Crowd Flow Analytics. arXiv 2020. paper

Models Modules Architecture conclusion

Yuxuan Liang, Kun Ouyang, Yiwei Wang, Ye Liu, Junbo Zhang, Yu Zheng, David S. Rosenblum


[3.9]

Citywide Traffic Flow Prediction Based on Multiple Gated Spatio-temporal Convolutional Neural Networks. TKDD 2020. paper

Models Modules Architecture conclusion

Cen Chen, Kenli Li, Sin G. Teo, Xiaofeng Zou, Keqin Li, Zeng Zeng


[3.10]

Physical-Virtual Collaboration Modeling for Intra-and Inter-Station Metro Ridership Prediction. arXiv 2020. paper

Models Modules Architecture conclusion

Lingbo Liu, Jingwen Chen, Hefeng Wu, Jiajie Zhen, Guanbin Li, Liang Lin


[3.11]

Predicting Citywide Crowd Flows in Irregular Regions Using Multi-View Graph Convolutional Networks. TKDE 2020. paper

Models Modules Architecture conclusion

Junkai Sun, Junbo Zhang, Qiaofei Li, Xiuwen Yi, Yuxuan Liang, Yu Zheng


[3.12]

Spatial-Temporal Convolutional Graph Attention Networks for Citywide Traffic Flow Forecasting. CIKM 2020. note, paper, code

| Models | Modules | Architecture | Conclusion |
| --- | --- | --- | --- |
| ST-CGA | | | The framework i) captures the multiple granularity-aware temporal factors that govern the dynamic transition regularities of traffic flow; ii) models the high-order spatial relation structures with a channel-aware convolutional graph learning model; iii) integrates the collaborative signals from the spatial, temporal, and semantic dimensions. |

Xiyue Zhang, Chao Huang, Yong Xu, Lianghao Xia


[4.1]

Deep Multi-View Spatial-Temporal Network for Taxi Demand Prediction. AAAI 2018. paper

Models Modules Architecture conclusion

Huaxiu Yao, Fei Wu, Jintao Ke, Xianfeng Tang, Yitian Jia, Siyu Lu, Pinghua Gong, Jieping Ye, Zhenhui Li


[4.2]

Origin-Destination Matrix Prediction via Graph Convolution: a New Perspective of Passenger Demand Modeling. SIGKDD 2019. paper

Models Modules Architecture conclusion

Yuandong Wang, Hongzhi Yin, Hongxu Chen, Tianyu Wo, Jie Xu, Kai Zheng


[4.3]

STG2Seq: Spatial-temporal Graph to Sequence Model for Multi-step Passenger Demand Forecasting. IJCAI 2019. paper

Models Modules Architecture conclusion

Lei Bai, Lina Yao, Salil S. Kanhere, Xianzhi Wang, Quan Z. Sheng


[4.4]

Taxi Demand Prediction Using Parallel Multi-Task Learning Model. TITS 2020. paper

Models Modules Architecture conclusion

Chizhan Zhang, Fenghua Zhu, Xiao Wang, Leilei Sun, Haina Tang, Yisheng Lv


[4.5]

Traffic Demand Prediction Based on Dynamic Transition Convolutional Neural Network. TITS 2020. paper

Models Modules Architecture conclusion

Bowen Du, Xiao Hu, Leilei Sun, Junming Liu, Yanan Qiao, Weifeng Lv

[5.1]

HetETA: Heterogeneous Information Network Embedding for Estimating Time of Arrival. SIGKDD 2020. paper, github, code (password: eag2)

| Models | Modules | Architecture | Conclusion |
| --- | --- | --- | --- |
| HetETA | Gated CNNs, GCN | HetETA | This paper constructs a heterogeneous traffic graph to exploit deeper semantic information of the road network, and HetETA combines gated convolutional neural networks and graph neural networks to capture correlations in spatio-temporal information. |

Huiting Hong, Yucheng Lin, Xiaoqing Yang, Zang Li, Kun Fu, Zheng Wang, Xiaohu Qie, Jieping Ye


[5.2]

CompactETA: A Fast Inference System for Travel Time Prediction. KDD 2020. paper

Models Modules Architecture conclusion

Kun Fu, Fanlin Meng, Jieping Ye, Zheng Wang


[5.3]

Spatiotemporal Multi-Graph Convolution Network for Ride-hailing Demand Forecasting. AAAI 2019. paper.

| Models | Modules | Architecture | Conclusion |
| --- | --- | --- | --- |
| STMGCN | GCN, CGRNN | STMGCN | This paper models complex spatial relationships by constructing multiple graphs, captures temporal dependencies with a contextual gated RNN, and captures spatial dependencies with GCN. (See the sketch below the author list.) |

Xu Geng, Yaguang Li, Leye Wang, Lingyu Zhang, Qiang Yang, Jieping Ye, Yan Liu
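A hedged sketch of the multi-graph convolution idea summarized above for ST-MGCN: the same input is propagated over several relationship graphs (e.g. neighborhood, functional similarity, connectivity) and the results are aggregated. The graph choices and the simple row normalization here are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class MultiGraphConv(nn.Module):
    """Apply one graph convolution per relationship graph and sum the outputs."""
    def __init__(self, in_dim, out_dim, num_graphs):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Linear(in_dim, out_dim) for _ in range(num_graphs)])

    def forward(self, x, adjs):
        # x: (num_nodes, in_dim); adjs: list of (num_nodes, num_nodes) adjacency matrices
        out = 0
        for conv, adj in zip(self.convs, adjs):
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1e-6)
            out = out + torch.relu((adj / deg) @ conv(x))   # row-normalized propagation
        return out

# e.g. neighborhood, functional-similarity, and connectivity graphs (illustrative)
mgc = MultiGraphConv(16, 32, num_graphs=3)
h = mgc(torch.randn(100, 16), [(torch.rand(100, 100) > 0.9).float() for _ in range(3)])
```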


Contributors

Stargazers over time


LICENSE

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.