Lab 01 - Hello R!

R is the name of the programming language itself, and Posit Cloud is a convenient interface for working with it.

The main goal of this lab is to introduce you to R and Posit, which we will use throughout the course both to learn the statistical concepts we discuss and to analyze real data and come to informed conclusions.

Git is a version control system (like the “Track Changes” feature in Microsoft Word, on steroids) and GitHub is the home for your Git-based projects on the internet (like Dropbox but much, much better).

An additional goal is to introduce you to Git and GitHub, the version control and collaboration tools that we will be using throughout the course.

As the labs progress, you are encouraged to explore beyond what the labs dictate; a willingness to experiment will make you a much better programmer. Before we get to that stage, however, you need to build some basic fluency in R. Today we begin with the fundamental building blocks of R and Posit: the interface, reading in data, and basic commands.

To keep versioning simple, this is a solo lab. Additionally, we want to make sure everyone gets a significant amount of time in the driver’s seat. In future labs you’ll learn about collaborating on GitHub and produce a single lab report for your team.

Connecting GitHub and Posit Cloud

You should have already received an invitation to join the GitHub organization for this course. You need to accept the invitation before moving on to the next step.

Connect your Posit and GitHub accounts by following the steps below:

Getting started

Each of your assignments will begin with the following steps. You saw these once in class yesterday; they’re outlined in detail here again. Going forward, each lab will start with a “Getting started” section, but the details will be sparser than they are here. You can always refer back to this lab for a detailed list of the steps involved in getting started with an assignment.

Introducing yourself to Git: setting up your personal access token

Once you’ve opened Posit Cloud, we want to introduce Posit to Git and set up our personal access token (tokens replaced passwords for GitHub authentication in 2021). If you have already done this in the homework, skip ahead to Step 6. Our personal access token will connect our project to our GitHub account. This is perhaps the most tedious part of the process, as you often have to (re)introduce yourself to Git when you make a new project.

In the console (lower left panel in Posit), run the following code:

install.packages("usethis") # Install the package
library(usethis) # Load the package
git_sitrep() # Get a "situation report" on your current Git/GitHub setup

Note that before Git is connected, the report will show that the personal access token is <unset>.

Step 1: Open the page to create a personal access token

To add a token, run the following code in the console:

create_github_token() # Opens GitHub's token creation page in your browser

Step 2: Add description and update expiry date

Add a description (e.g. the course name) and set the expiration to never expire, so the token will not need to be regenerated during the course.

Step 4: Generate the token

Aside from the description and expiry date, you can leave the other default settings.

Step 5: Copy and save generated token

Copy and save your generated token by clicking on the clipboard symbol. Keep your token somewhere you can find it later, like in a text file in your course folder. Note that for more sensitive projects you might consider using a password manager.

Step 6: Set GitHub credentials

Now we’re ready to set our GitHub credentials.
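
One common way to do this, and likely what this step intends (an assumption; your course materials may show a slightly different command), is with the gitcreds package, which comes along with the usethis installation. Run the following in the console and paste the token you saved in Step 5 when prompted:

library(gitcreds) # installed along with usethis; run install.packages("gitcreds") if it is missing
gitcreds_set()    # Paste your personal access token when prompted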

Step 7: Recheck Git credentials

Re-run the situation report from earlier; you’ll notice that it now says the personal access token has been discovered.
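
That is, run the following in the console again:

git_sitrep() # The personal access token line should now read <discovered>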

Note that for future projects you can skip directly to Step 6. You do not need to generate a personal access token each time.

Warm up

Before we introduce the data, let’s warm up with some simple exercises.

YAML

The top portion of your R Markdown file (between the three dashed lines) is called YAML. It stands for “YAML Ain’t Markup Language”. It is a human-friendly data serialization standard for all programming languages. All you need to know is that this area is called the YAML (we will refer to it as such) and that it contains meta information about your document.

Open the R Markdown (Rmd) file in your project, change the author name to your name, and knit the document.
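
For reference, the YAML of an R Markdown document looks something like the following; the exact fields and values in your file will differ (the ones below are placeholders):

---
title: "Lab 01 - Hello R!"
author: "Your Name"
output: html_document
---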

Committing changes

Then, go to the Git pane in Posit.

If you have made changes to your Rmd file, you should see it listed here. Click on it to select it in this list and then click on Diff. This shows you the difference between the last committed state of the document and its current state, which includes your changes. If you’re happy with these changes, write “Update author name” in the Commit message box and hit Commit.

You don’t have to commit after every change; this would get quite cumbersome. You should consider committing states that are meaningful to you for inspection, comparison, or restoration. In the first few assignments we will tell you exactly when to commit and, in some cases, what commit message to use. As the semester progresses we will let you make these decisions.

Pushing changes

Now that you have made an update and committed this change, it’s time to push these changes to the web! Or more specifically, to your repo on GitHub. Why? So that others can see your changes. And by others, we mean the course teaching team (your repos in this course are private and visible only to you and us).

In order to push your changes to GitHub, click on Push. This will prompt a dialogue box where you first need to enter your user name, and then your personal access token in place of a password.

Packages

In this lab we will work with two packages: datasauRus, which contains the dataset we’ll be using, and tidyverse, which is a collection of packages for doing data analysis in a “tidy” way. You can install these packages by running the following code once in the Console:

install.packages("tidyverse")
install.packages("datasauRus")

You can load the packages by running the following in the Console.

library(tidyverse) 
library(datasauRus)

Note that the packages are also loaded with the same commands in your R Markdown document.

Data

The data frame we will be working with today is called datasaurus_dozen and it’s in the datasauRus package. Actually, this single data frame contains 13 datasets, designed to show us why data visualisation is important and how summary statistics alone can be misleading. The different datasets are marked by the dataset variable.

If it’s confusing that the data frame is called datasaurus_dozen when it contains 13 datasets, you’re not alone! Have you heard of a baker’s dozen?

To find out more about the dataset, type the following in your Console: ?datasaurus_dozen. A question mark before the name of an object will always bring up its help file. This command must be run in the Console.
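
If you’d also like a quick look at the data itself in the Console (optional, not required for the lab), glimpse() from the tidyverse, which is already loaded above, prints the dimensions, variable names, and types at a glance:

glimpse(datasaurus_dozen) # rows, columns, and variable types at a glance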

Exercises

The ✏️ symbol is a reminder to write a written response discussing the questions in the exercises.

  1. ✏️ Based on the help file, how many rows and how many columns does the datasaurus_dozen data frame have? What are the variables included in the data frame? Add your responses to your lab report.

Let’s take a look at what these datasets are. To do so we can make a frequency table of the dataset variable:

datasaurus_dozen |>
  count(dataset) |>
  print(n = 13)
## # A tibble: 13 × 2
##    dataset        n
##    <chr>      <int>
##  1 away         142
##  2 bullseye     142
##  3 circle       142
##  4 dino         142
##  5 dots         142
##  6 h_lines      142
##  7 high_lines   142
##  8 slant_down   142
##  9 slant_up     142
## 10 star         142
## 11 v_lines      142
## 12 wide_lines   142
## 13 x_shape      142

The original Datasaurus (dino) was created by Alberto Cairo in this great blog post. The other Dozen were generated using simulated annealing, and the process is described in the paper Same Stats, Different Graphs: Generating Datasets with Varied Appearance and Identical Statistics through Simulated Annealing by Justin Matejka and George Fitzmaurice. In the paper, the authors simulate a variety of datasets that have the same summary statistics as the Datasaurus but have very different distributions.

Matejka, Justin, and George Fitzmaurice. “Same stats, different graphs: Generating datasets with varied appearance and identical statistics through simulated annealing.” Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 2017.

🧶 ✅ ⬆️ Knit, commit, and push your changes to GitHub with the commit message “Add answer for Ex 1”. Make sure to commit and push all changed files so that your Git pane is cleared up afterwards.

  2. Plot y vs. x for the dino dataset. Then, calculate the correlation coefficient between x and y for this dataset.

Below is the code you will need to complete this exercise. The answer is essentially already given; you just need to include the relevant bits in your Rmd document, knit it successfully, and view the results.

Start with the datasaurus_dozen and pipe it into the filter function to filter for observations where dataset == "dino". Store the resulting filtered data frame as a new data frame called dino_data.

dino_data <- datasaurus_dozen |>
  filter(dataset == "dino")

There is a lot going on here, so let’s slow down and unpack it a bit.

First, the pipe operator: |>, takes what comes before it and sends it as the first argument to what comes after it. So here, we’re saying filter the datasaurus_dozen data frame for observations where dataset == "dino".

Second, the assignment operator: <-, assigns the name dino_data to the filtered data frame.
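
To make the pipe concrete: the piped version above is equivalent to calling filter() directly with the data frame as its first argument. This small illustration is not something you need to add to your report:

# These two lines do exactly the same thing:
dino_data <- datasaurus_dozen |> filter(dataset == "dino")
dino_data <- filter(datasaurus_dozen, dataset == "dino")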

Next, we need to visualize these data. We will use the ggplot function for this. Its first argument is the data you’re visualizing. Next we define the aesthetic mappings: in other words, which columns of the data get mapped to which aesthetic features of the plot, e.g. the x axis will represent the variable called x and the y axis will represent the variable called y. Then, we add another layer to this plot where we define which geometric shapes we want to use to represent each observation in the data. In this case we want these to be points, hence geom_point.

ggplot(data = dino_data, mapping = aes(x = x, y = y)) +
  geom_point()

If this seems like a lot, it is. You will learn about the philosophy of building data visualizations in layers in detail next week. For now, follow along with the code that is provided.

For the second part of this exercise, we need to calculate a summary statistic: the correlation coefficient. The correlation coefficient, often referred to as \(r\) in statistics, measures the linear association between two variables. You will see that some of the pairs of variables we plot do not have a linear relationship between them. This is exactly why we want to visualize first: visualize to assess the form of the relationship, and calculate \(r\) only if relevant. In this case, calculating a correlation coefficient really doesn’t make sense since the relationship between x and y is definitely not linear: it’s dinosaurial!
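
For reference, given observations \((x_1, y_1), \dots, (x_n, y_n)\), the (Pearson) correlation coefficient is defined as

\[
r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2} \, \sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}},
\]

where \(\bar{x}\) and \(\bar{y}\) are the sample means. This is what R’s cor() function computes by default, and it always falls between -1 and 1.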

But, for illustrative purposes, let’s calculate the correlation coefficient between x and y.

Start with dino_data and calculate a summary statistic that we will call r as the correlation between x and y.

dino_data |>
  summarize(r = cor(x, y))
## # A tibble: 1 × 1
##         r
##     <dbl>
## 1 -0.0645

🧶 ✅ ⬆️ Knit, commit, and push your changes to GitHub with the commit message “Add answer for Ex 2”. Make sure to commit and push all changed files so that your Git pane is cleared up afterwards.

  3. ✏️ Plot y vs. x for the star dataset. You can (and should) reuse code we introduced above, just replace the dataset name with the desired dataset. Then, calculate the correlation coefficient between x and y for this dataset. How does this value compare to the r of dino? Write a sentence based on your comparison.

🧶 ✅ ⬆️ This is another good place to pause, knit, commit changes with the commit message “Add answer for Ex 3”, and push. Make sure to commit and push all changed files so that your Git pane is cleared up afterwards.

  4. ✏️ Plot y vs. x for the circle dataset. You can (and should) reuse code we introduced above, just replace the dataset name with the desired dataset. Then, calculate the correlation coefficient between x and y for this dataset. How does this value compare to the r of dino? Write a sentence based on your comparison.

🧶 ✅ ⬆️ You should pause again, commit changes with the commit message “Add answer for Ex 4”, and push. Make sure to commit and push all changed files so that your Git pane is cleared up afterwards.

  5. Finally, let’s plot all datasets at once. In order to do this we will make use of faceting: facet by the dataset variable, placing the plots in a 3 column grid, and don’t add a legend.

ggplot(datasaurus_dozen, aes(x = x, y = y, color = dataset)) +
  geom_point() +
  facet_wrap(~ dataset, ncol = 3) +
  theme(legend.position = "none")

And we can use the group_by function to calculate the correlation coefficient for each dataset at once.

datasaurus_dozen |>
  group_by(dataset) |>
  summarize(r = cor(x, y)) |>
  print(n = 13)
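
If you’re curious, the same group_by() pattern can be extended to other summary statistics. This optional check (not required for the lab) shows that the means and standard deviations of x and y are also nearly identical across all 13 datasets, which is the whole point of the Datasaurus Dozen:

datasaurus_dozen |>
  group_by(dataset) |>
  summarize(
    mean_x = mean(x), mean_y = mean(y), # means of x and y per dataset
    sd_x   = sd(x),   sd_y   = sd(y)    # standard deviations per dataset
  )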


🧶 ✅ ⬆️ Yay, you’re done! Commit all remaining changes, use the commit message “Finish Lab 1! 💪”, and push. Make sure to commit and push all changed files so that your Git pane is cleared up afterwards. Before you wrap up the assignment, make sure all documents are updated on your GitHub repo. Use the self-check list in the README to review.