Gabriel Rowan

Web scraping and analysing foreign language data

Recently I decided that I would like to do a quick web scraping and data analysis project. Because my brain likes to come up with big ideas that would take lots of time, I decided to challenge myself to come up with something simple that could viably be done in a few hours.

Here's what I came up with:

As my undergrad degree was originally in Foreign Languages (French and Spanish), I thought it'd be fun to web scrape some language-related data. I wanted to use the BeautifulSoup library, which can parse static html but isn't able to deal with dynamic web pages that need click events to reveal the whole dataset (i.e. clicking onto the next page of data if the page is paginated).

I decided on this Wikipedia page of the most commonly spoken languages.

Table of most commonly spoken languages from wikipedia

I wanted to do the following:

  • Get the html for the page and output it to a .txt file
  • Use Beautiful Soup to parse the html file and extract the table data
  • Write the table to a .csv file
  • Come up with 10 questions I wanted to answer for this dataset using data analysis
  • Answer those questions using pandas and a Jupyter Notebook

I decided to split the project into these steps for separation of concerns, but also because I wanted to avoid making multiple unnecessary requests to Wikipedia by rerunning the script. Saving the html to a file and then working with it in a separate script means you don't need to keep re-requesting the data, as you already have it.

Project Link

The link to my github repo for this project is: https://github.com/gabrielrowan/Foreign-Languages-Analysis

Getting the html

First, I retrieved the html and wrote it out to a file. After working with C# and C++, it's always a novelty to me how short and concise Python code is 😄


import requests

url = 'https://en.wikipedia.org/wiki/List_of_languages_by_number_of_native_speakers'

response = requests.get(url)
html = response.text

# save the raw html so it can be re-parsed later without re-requesting the page
with open("languages_html.txt", "w", encoding="utf-8") as file:
    file.write(html)

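One thing worth noting: if the request fails, the script would save Wikipedia's error page rather than the article, so a small optional guard is to raise on a bad status code before writing the file:

response = requests.get(url)
response.raise_for_status()  # raises an HTTPError for 4xx/5xx responses
html = response.text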

Parsing the html

To parse the html with Beautiful Soup and select the table I was interested in, I did:

from bs4 import BeautifulSoup

with open("languages_html.txt", "r", encoding="utf-8") as file:
    soup = BeautifulSoup(file, 'html.parser')

# get the languages table by its CSS classes
top_languages_table = soup.select_one('.wikitable.sortable.static-row-numbers')



Then, I got the table header text to get the column names for my pandas dataframe:


# get column names
columns = top_languages_table.find_all("th")
column_titles = [column.text.strip() for column in columns]


After that, I created the dataframe, set the column names, retrieved each table row and wrote each row to the dataframe:

import pandas as pd

# get table rows
table_data = top_languages_table.find_all("tr")

# define dataframe with the scraped column names
df = pd.DataFrame(columns=column_titles)

# get table data: skip the header row, then append each row's cell text to the dataframe
for row in table_data[1:]:
    row_data = row.find_all('td')
    row_data_txt = [cell.text.strip() for cell in row_data]
    print(row_data_txt)
    df.loc[len(df)] = row_data_txt



Note: without strip(), the text contained \n characters which weren't needed.

Last, I wrote the dataframe to a .csv.
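The actual call is in the repo, but that step boils down to something like this (the filename here is a placeholder of my own choosing):

df.to_csv("languages.csv", index=False)  # filename is a placeholder; index=False leaves out the row index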

Analysing the data

In advance, I'd come up with these questions that I wanted to answer from the data:

  1. What is the total number of native speakers across all languages in the dataset?
  2. How many different types of language family are there?
  3. What is the total number of native speakers per language family?
  4. What are the top 3 most common language families?
  5. Create a pie chart showing the top 3 most common language families
  6. What is the most commonly occurring Language family - branch pair?
  7. Which languages are Sino-Tibetan in the table?
  8. Display a bar chart of the native speakers of all Romance and Germanic languages
  9. What percentage of total native speakers is represented by the top 5 languages?
  10. Which branch has the most native speakers, and which has the least?
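To give a flavour of the pandas involved, here's a rough sketch of how question 1 could be answered (the real answers live in the notebook; the column name comes from the scraped header, and I'm assuming the speaker counts need converting from strings to numbers first):

# speaker counts arrive as strings from the scrape, so convert them to numbers first (assumed cleaning step)
df['Native speakers(in millions)'] = pd.to_numeric(
    df['Native speakers(in millions)'], errors='coerce')

total_speakers = df['Native speakers(in millions)'].sum()
print(f"Total native speakers: {total_speakers:.0f} million")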

The Results

While I won't go into the code to answer all of these questions, I will go into the two that involved charts.

Display a bar chart of the native speakers of all Romance and Germanic languages

First, I created a dataframe that only included rows where the branch name was 'Romance' or 'Germanic':

romance_lang = df[(df['Branch'] == 'Romance') | (df['Branch'] == 'Germanic')]

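An equivalent, slightly more compact way to write that filter is with pandas' isin() method:

romance_lang = df[df['Branch'].isin(['Romance', 'Germanic'])]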

Then I specified the x axis, y axis and the colour of the bars that I wanted for the chart:

romance_lang.plot.bar(x='Language', y='Native speakers(in millions)',
                      color='#26C6DA')

This created:

bar chart of speakers of romance and germanic languages

Create a pie chart showing the top 3 most common language families

To create the pie chart, I retrieved the top 3 most common language families and put these in a pandas Series.

This code gets the total sum of native speakers per language family, sorts the totals in descending order, and extracts the top 3 entries.

top_3_lang_families = (
    df.groupby('Language family')['Native speakers(in millions)']
    .sum()
    .sort_values(ascending=False)[:3]
)


Then I plotted the data as a pie chart, specifying 'Native speakers(in millions)' as the y value and adding a legend, which creates colour-coded labels for each language family shown in the chart.


pie_chart_top_3_lang_families = top_3_lang_families.plot.pie(y='Native speakers(in millions)')
pie_chart_top_3_lang_families.legend(
    labels=top_3_lang_families.index,  
    title="Language Families",        
    loc="upper right"                        
)



pie chart of most common language families
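If you wanted the percentage share shown on each slice as well, matplotlib's autopct argument can be passed straight through plot.pie(), along these lines:

top_3_lang_families.plot.pie(autopct='%1.1f%%')  # label each slice with its percentage to 1 decimal place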

The code and responses for the rest of the questions can be found here. I used markdown in the notebook to write the questions and their answers.

Next Time:

For my next iteration of a web scraping & data analysis project, I'd like to make things more complicated with:

  • Web scraping a dynamic page where more data is revealed on click/scroll
  • Analysing a much bigger dataset, potentially one that needs some data cleaning work before analysis


Final thoughts

Even though it was a quick one, I enjoyed doing this project. It reminded me how useful short, manageable projects can be for getting the practice reps in 💪 Plus, extracting data from the internet and creating charts from it, even with a small dataset, is fun 😊
