Python Web Scraping / Automation: Connecting to Firefox with Selenium

Selenium is a Python package that allows you to control web browsers through Python. In this tutorial (and the following tutorials), we will be connecting to Mozilla's Firefox browser; Selenium does work with other browsers as well.

First you will need to install Selenium. You can use one of the following commands, depending on your Python distribution:

c:\> pip install selenium

c:\> conda install selenium

If you are on a work computer or dealing with a restrictive VPN, the offline install option may help you: Selenium_Install_Offline

Next you need to download the driver that lets you control Firefox through Python.

Start by determining what version of Firefox you have on your computer

Click the three horizontal lines in the upper right corner > Help > About Firefox

Search for geckodriver and download the file that matches your Firefox version. (Note: this is something you will need to do every time Firefox is updated, so get used to it.)

Open up the zip file you downloaded; inside you will find a file called geckodriver.exe

Put it somewhere you can find it, then use the following code to let Python know where it is.

from selenium import webdriver

opts = webdriver.FirefoxOptions()
# point executable_path at the geckodriver.exe you downloaded
# (newer Selenium versions pass the driver path through a Service object instead)
dr = webdriver.Firefox(executable_path='C:/Users/larsobe/Desktop/geckodriver.exe', options=opts)

Now to see if this works, use the following line (you can try another website if you choose):
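Something like this, for example (the URL is just an illustration):

dr.get('https://analytics4all.org')   # opens the page in the automated Firefox window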

Note the message "Firefox is being controlled by automated test software."

You are now running a web browser via Python.

Python: Working with Dictionaries

A dictionary is an advanced data structure in Python. It holds data as key/value pairs.

Basic Syntax for a dictionary:

dictionary = { <key> : <value>,
               <key> : <value>,
               <key> : <value> }

I think the key/value relationship is easiest to explain by thinking about a car. A car has a few key values that describe it: make, model, color, year.

If I want to make a dictionary to describe a car, I could build it like this:

car = { 'Make': 'Ford',
        'Model': 'Mustang',
        'Color': 'Candy Apple Red',
        'Year': 1965 }

You can also use the dict() function to create a dictionary

car = dict([ ('Make', 'Ford'), 
             ('Model', 'Mustang'),
             ('Color', 'Candy Apple Red'),
             ('Year', 1965)])

Now, how do we access the values in a dictionary? I know in lists you can just refer to the index, but in dictionaries it doesn’t work that way.


Instead, we refer to dictionaries by their key:
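For example, using the car dictionary from above:

car['Make']           # returns 'Ford'
print(car['Year'])    # prints 1965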

You can add a key/value pair:
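For example (the 'Engine' entry below is just an illustrative addition):

car['Engine'] = 'V8'   # 'Engine'/'V8' are illustrative values; assigning to a new key adds the pair
print(car)             # the new pair now appears in the dictionary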

You can change a dictionary value:
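For example (the new color below is just an illustrative value):

car['Color'] = 'Wimbledon White'   # assigning to an existing key overwrites its value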

To remove values from a dictionary, use the del statement:
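For example:

del car['Year']    # removes the 'Year' key and its value
print(car)         # 'Year' no longer appears in the dictionary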


Previous Lesson: Working with lists

Next Lesson: Tuples and Sets

Back to Python Course: Course

Python: Webscraping using Requests and BeautifulSoup to identify website content

When learning any new skill, it is always helpful to see a practical application. I currently work as a data scientist in the Cyber-security department. One of our concerns is fake websites set up to look like a real website for my company. We search recently registered Domain names for names that might mimic our company brand or name. We need to know if these are possibly malicious sites looking to steal our employee or customer information.

The problem is, we get hundreds of new domains registered a day that fit our search criteria. Instead of looking at each one manually, I created a Python script that scans the list of new domains for key words. To do this, all you need is Requests and BeautifulSoup.

In this example, I am going to look at a couple of websites. I want to know: do they discuss Python, podcasts, or neither?

Let us start by looking at my site: Analytics4All.org

I imported requests and BeautifulSoup, then ran requests.get() on my website to pull the HTML code.

import requests as re      # note: this alias shadows Python's built-in re (regex) module
from bs4 import BeautifulSoup

r = re.get("https://analytics4all.org")   # pull the HTML behind the page
print(r.text)

If you are not sure how to do this, check out my earlier tutorials on this topic here:

Intro to Requests

Requests and BeautifulSoup

Now let us search the HTML for our keywords

Using the .find() method, Python returns the index of the first occurrence of the keyword in the HTML. Notice 'podcast' returned -1; that means podcasts are not mentioned on my site. But 'Python' is found at 35595, so I can label this site as mentioning Python, but not podcasts.

r.text.find('Python')
r.text.find('podcast')

Let’s try another site, and use BeautifulSoup to search the code

In this example, we will look at iHeartRadio’s website: iheart.com

Using the same .find() as last time, we see that Podcasts are mentioned on this website.

r1 = re.get("https://www.iheart.com/")
soup1 = BeautifulSoup(r1.content, 'html.parser')
print(soup1.prettify())  

r1.text.find('Podcasts')

Using BeautifulSoup we can limit our search to a more targeted element of the website, which is useful in reducing false positives, as sometimes you will find some weird stuff buried in the HTML that is honestly irrelevant to the website.

Below we pull just the title from the website and look for Podcasts, and we find it.

a = soup1.title.string
print(a)
a = str(a)                  # convert the NavigableString to a plain Python string
print(a.find('Podcasts'))

Finally, let us inspect the login page for Macy’s

Notice we are getting Access Denied when searching this site. Unfortunately, Requests doesn’t work in all cases. I will be explaining how to get around this problem in my lessons on Selenium.

r1 = re.get("https://www.macys.com/account/signin")
soup1 = BeautifulSoup(r1.content, 'html.parser')
print(soup1.prettify())    

But for now, just know Requests does work for most websites, and it provides a simple way to automate scanning websites for keywords that can be used to categorize a site and, in my case, find bad operators and shut them down.

Python: Webscraping using BeautifulSoup and Requests

I covered an introduction to webscraping with Requests in an earlier post. You can check it out here: Requests

As a quick refresher, the requests module allows you to call on a website through Python and retrieve the HTML behind the website. In this lesson we are going to add to this functionality with the module BeautifulSoup.

BeautifulSoup

BeautifulSoup provides a useful HTML parser that makes it much easier to work with the HTML results from Requests. Let’s start by importing the libraries we will need for this lesson.
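That amounts to just two imports (a minimal version; the aliases in the original screenshots may differ):

import requests
from bs4 import BeautifulSoup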

Next we are going to use requests to call on my website.
We will then pass the HTML code to BeautifulSoup

The syntax is BeautifulSoup(HTML, ‘html.parser’)

The HTML I am sending to BeautifulSoup comes from my requests.get() call. In the last lesson, I used r.text to print out the HTML to view; here I am passing r.content to BeautifulSoup and printing out the results.

Note I am also using the soup.prettify() command to make my printout easier for humans to read.
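Put together, it looks something like this (using my site from the earlier lesson as the example URL):

r = requests.get('https://analytics4all.org')     # pull the HTML for the page
soup = BeautifulSoup(r.content, 'html.parser')    # pass the HTML to BeautifulSoup
print(soup.prettify())                            # print an easier-to-read version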

BeautifulSoup makes parsing the HTML code easier. Below I am asking to see soup.title – this returns the HTML code with the “title” markup.

To take it even another step, we can add soup.title.string to just get the string without the markup tags

soup.get_text() returns all the text in the HTML code without the markups or other code
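For example, continuing with the soup object from above:

print(soup.title)          # the <title> tag, markup included
print(soup.title.string)   # just the title text, without the tags
print(soup.get_text())     # all of the page text with the markup stripped out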

In HTML, the ‘a’ tag and its ‘href’ attribute signify a link.

We can use that to build a for loop that reads all the links on the webpage.
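A sketch of that loop:

for link in soup.find_all('a'):   # every <a> tag on the page
    print(link.get('href'))       # the URL each link points to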

Python: Create a Word Cloud

Word Clouds are a simple way of visualizing word frequency in a corpus of text. Word Clouds typically work by displaying frequently used words in a text corpus, with the most frequent words appearing in larger text.

Here is the data file I will be using in this example if you want to follow along:

As far as libraries go, you will need pandas, matplotlib, os, and wordcloud. If you are using the Anaconda Python distribution, you should have all the libraries except wordcloud. You can install it using pip or conda install.

Let's start by loading the data.

import pandas as pd
import matplotlib.pyplot as plt
from wordcloud import WordCloud
import os

#Set working directory
os.chdir('C:\\Users\\blars\\Documents')

#Import CSV
df = pd.read_csv("movies.csv")

#First look at the Data
df.head()

** Note: if you are using Jupyter notebooks to run this, add %matplotlib inline on the line after the matplotlib import, otherwise you will not be able to see the word cloud

import matplotlib.pyplot as plt
%matplotlib inline

We can use df.info() to look a little closer at the data

We have to decide what column we want to build our word cloud from. In this example I will be using the title column, but feel free to use any text column you would like.

Let's look at the title column.
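A quick way to view it:

df['title']    # the column we will build the word cloud from (df.title also works)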

As you can see, we have 20 movie titles in our data set. The next thing we have to do is merge these 20 rows into one large string.

corpus = " ".join(tl for tl in df.title)

The code above is basically a one-line for loop: for every row in the df.title column, join it with the next row, separated by a space (" ").
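If it helps, here is the same merge written out as an explicit loop (the join() version above is the more idiomatic way):

corpus = ""
for tl in df.title:
    corpus = corpus + tl + " "   # append each title, separated by a space
corpus = corpus.strip()          # drop the trailing space the loop leaves behind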

Now build the word cloud

wordcloud = WordCloud(width=640, height=480, max_words=20).generate(corpus)

You can change the width, the height, and the number of words that will appear. Play around with the numbers and see how it changes your output.

Finally, let’s chart it, so we can see the cloud

plt.imshow(wordcloud,interpolation="bilinear")
plt.axis("off")
plt.show()

interpolation="bilinear" smooths how matplotlib renders the word cloud image

plt.axis("off") gets rid of the axis markers (see below)

You can also go back to the word cloud and change the background color
wordcloud = WordCloud(width=640, height=480, background_color = 'white', max_words=25).generate(corpus)
plt.imshow(wordcloud,interpolation="bilinear")
plt.axis("off")
plt.show()

Python: Rename Pandas Dataframe Columns

Renaming columns is easy using pandas. First, let's build a quick dataframe:

import pandas as pd
x= {'Job Title' :['Manager', 'Tech', 'Supervisor'],
    'Employee' : ['Jill', 'Will', 'Phil']}

df = pd.DataFrame(x)

Now, to rename, we have a few options:

df.rename(columns={'oldName1': 'newName1', 'oldName2': 'newName2'}, inplace=True)
#or you can move it to a new dataframe if you want to keep the original intact
df1 = df.rename(columns={'oldName1': 'newName1', 'oldName2': 'newName2'})
#note I left off the inplace=True argument on the second since I didn't want to 
#overwrite the original

Here are a few other ways to do it, each will give you the same results

df2 = df.rename({'Job Title': 'Job_title', 'Employee': 'Emp'}, axis=1)  
df3 = df.rename({'Job Title': 'Job_title', 'Employee': 'Emp'}, axis='columns')
df4 = df.rename(columns={'Job Title': 'Job_title', 'Employee': 'Emp'})   

Last Lesson: Pandas Working with DataFrames

Next Lesson: Python: Working with Rows in DataFrames

Return to: Python for Data Science Course

Follow this link for more Python content: Python

Python: Convert Datetime to Date using Pandas

To convert a datetime column in a pandas dataframe to a date, use the .dt.date accessor:

df['column'] = pd.to_datetime(df['column']).dt.date

To demonstrate, first let’s build a dataframe

import pandas as pd
df = pd.DataFrame({'Job_Start': ['Demolition','Construction', 'Cleanup'],
                   'time': ['2022-05-20 08:07:22', '2022-05-27 07:34:01', 
                   '2022-06-01 09:12:11']})

Now let's convert the “time” column to date instead of datetime

df['time'] = pd.to_datetime(df['time']).dt.date

Python: Read all files in a folder

The os.listdir() function will easily give you a list of all files in a folder.

So for this exercise I created a folder and threw a few files in it.

Using the following code, you can iterate through the file list:

import os
for files in os.listdir("C:/Users/blars/Documents/test_folder"):
    print(files)

Now if I wanted to read the files, I could use the Pandas command pd.read_excel to read each file in the loop

***Note: I made this folder with only Excel files on purpose, for ease of demonstration. You could do this with multiple file types in a folder; it would, however, require some conditional logic to handle the different file types (a sketch of that follows the Excel example below).

To read all the Excel files in the folder:

import pandas as pd
import os

os.chdir('C:/Users/blars/Documents/test_folder')
for files in os.listdir("C:/Users/blars/Documents/test_folder"):
    print(files)
    file = pd.read_excel(files)
    print(file)
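And here is a sketch of the conditional-logic idea mentioned above for a folder with mixed file types (the extensions handled are just examples):

import os
import pandas as pd

folder = 'C:/Users/blars/Documents/test_folder'
for name in os.listdir(folder):
    path = os.path.join(folder, name)
    if name.endswith(('.xlsx', '.xls')):     # Excel files
        print(pd.read_excel(path))
    elif name.endswith('.csv'):              # another file type, handled differently
        print(pd.read_csv(path))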

Python Web Scraping: Get HTML code easily with the Requests module

The Requests module for Python makes capturing and working with HTML code from any website simple.

Requests comes installed with many of the Python distributions. You can test if it is installed on your machine by running the command: import requests

If that command fails, then you’ll need to install the module using Conda or Pip

import requests
t = requests.get('http://aiwithai.com')
print(t.text)

As you can see, using just 3 lines of code you can return the HTML from any website

You can see that all the text on the web page appears in the HTML code, so parsing through the HTML allows you to scrape information off of a website

Requests has plenty more features; here are a couple I use commonly

t.status_code returns the status of your GET request. If all goes well, it will return 200; otherwise you will get error codes like 404

t.headers returns the HTTP headers the server sent back

You can also extract your results as JSON

t.json()
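Putting those together (note that .json() only works when the response body is actually JSON; calling it on an HTML page like this one would raise an error):

print(t.status_code)    # 200 if the request succeeded
print(t.headers)        # the HTTP headers the server sent back
# t.json() would parse the body as JSON; this page returns HTML, so it would fail here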

Python: Selenium – Setting Chrome browser size

There are times when running automation on a web browser that you will want to adjust the window size of the browser. The most obvious reason I can think of is that some websites (mine included) display differently based on the window size.

For example: Full Size

Reduced size

Notice in the minimized window, my menu list is replaced by an accordion button. For the purposes of automation and webscraping, the accordion button is actually easier to navigate than my multi-layered menu.

The code for opening the browser in full-screen mode is below; note the argument --start-maximized
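A minimal sketch of that setup (assuming a recent Selenium install that can locate chromedriver on its own):

from selenium import webdriver

opts = webdriver.ChromeOptions()
opts.add_argument('--start-maximized')    # open the browser window maximized
driver = webdriver.Chrome(options=opts)
driver.get('https://analytics4all.org')   # any page works; this is just an example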

To open the window at a smaller size, try --window-size=width,height. Play around with the values to get one that works for your screen.
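For example (the 800,600 values are just a starting point):

opts = webdriver.ChromeOptions()
opts.add_argument('--window-size=800,600')   # width,height in pixels; adjust for your screen
driver = webdriver.Chrome(options=opts)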