Commit: Added solutions to problem 1, 2, 3 & 4
Showing 9 changed files with 1,530 additions and 0 deletions.
@@ -0,0 +1,13 @@
# GitHeat
# Projects for summer
This program uses urllib, BeautifulSoup, pandas, and Selenium to look up a train's schedule from the train number the user enters. The schedule includes the following information:
1) Train name
2) Operation dates
3) Arrival time
4) Departure time
5) Station name with code
6) Route
7) Day of running

The information above is printed both as JSON and in a table format.
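The URL-building and JSON-records steps described above can be sketched offline with only the standard library; the train number and station rows below are made-up examples, not data from the site:

```python
import json

BASE_URL = 'https://www.cleartrip.com/trains/'

def build_schedule_url(train_number: str) -> str:
    # The program appends the train number and a trailing slash to the base URL.
    return BASE_URL + train_number + '/'

def to_json_records(rows):
    # Mirrors pandas' to_json(orient='records'): one dict per timetable row.
    return json.dumps(rows)

# Hypothetical sample rows; the real ones come from the scraped timetable.
rows = [
    {'Station name (code)': 'New Delhi (NDLS)', 'Arrives': 'Source', 'Departs': '16:55', 'Day': 1},
    {'Station name (code)': 'Kanpur Central (CNB)', 'Arrives': '21:43', 'Departs': '21:48', 'Day': 1},
]

print(build_schedule_url('12002'))  # e.g. https://www.cleartrip.com/trains/12002/
print(to_json_records(rows))
```

This keeps the URL and output shape identical to the scraper's without any network access, which makes the formatting easy to check.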
@@ -0,0 +1,36 @@
# coding: utf-8

import bs4 as bs
import urllib.request
import pandas as pd
from selenium import webdriver

url = 'https://www.cleartrip.com/trains/'
x = input('Enter train number: ')  # takes the train number as input
finalUrl = url + x + '/'

sauce = urllib.request.urlopen(finalUrl).read()

soup = bs.BeautifulSoup(sauce, 'lxml')
trainDetails = soup.h1.text  # the page's <h1> holds the train name and number

print('Train Details: ', trainDetails)

for spans in soup.find_all('span', class_='days-op'):
    print('Operational Days: ', spans.text)

df1 = pd.read_html(finalUrl, header=0)[0]  # first HTML table = the timetable
out = df1.to_json(orient='records')

print(out, '\n\n\n\n\n')
print(df1[['Station name (code)', 'Arrives', 'Departs', 'Stop time',
           'Distance travelled', 'Day', 'Route']])

driver = webdriver.Firefox()  # opens the source page in a Firefox window
driver.get(finalUrl)
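The pandas steps in the script (load the timetable into a DataFrame, emit JSON records, print a column subset) can be tried on a hand-built frame; the column names follow the script, but the row values here are invented for illustration:

```python
import pandas as pd

# Hypothetical stop rows using the same column names the script selects.
df1 = pd.DataFrame([
    {'Station name (code)': 'New Delhi (NDLS)', 'Arrives': 'Source', 'Departs': '16:55', 'Day': 1},
    {'Station name (code)': 'Kanpur Central (CNB)', 'Arrives': '21:43', 'Departs': '21:48', 'Day': 1},
])

out = df1.to_json(orient='records')  # same JSON shape the script prints
print(out)
print(df1[['Station name (code)', 'Arrives', 'Departs']])  # column subset, as in the script
```

Selecting with a double-bracketed list of column names, as the script does, returns a new DataFrame restricted to those columns, so the printed table keeps only the fields of interest.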