Advent of Code 2017 in #rstats: Day 4

Whew, much easier than day 3.  Probably would be a great day to learn the tidytext package but I’ll be faster just muddling through with my current tools.

I wrote a function that compares the number of input tokens to the number of unique() input tokens.  Then for part 2, I added an argument controlling whether anagrams should be permitted; if not, each token's letters are first sorted alphabetically into an alphagram (a term borrowed from competitive Scrabble).
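To make the idea concrete, here's a minimal base-R sketch (the passphrase is made up for illustration, not from the puzzle input):

```r
# Tokens from a hypothetical passphrase; "abcde" repeats, so it's invalid
tokens <- strsplit("abcde fghij abcde", " ")[[1]]
length(unique(tokens)) == length(tokens)  # FALSE: a duplicate token exists

# For part 2, sort each token's letters first so anagrams become identical
alphagrams <- sapply(tokens, function(x) paste(sort(strsplit(x, "")[[1]]), collapse = ""))
```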

Parts 1 & 2 together

I borrowed the string-sorting snippet from StackOverflow – I cited it below and upvoted it as well 🙂

library(pacman) # p_load() comes from the pacman package
p_load(dplyr, stringr, testthat)

check_valid <- function(string, anagrams_okay = TRUE){
  subs <- unlist(str_split(string, " "))
  if (!anagrams_okay) {
    subs <- unlist(lapply(subs, make_alphagram)) # for part 2
  }
  length(unique(subs)) == length(subs)
}

# thanks StackOverflow!

make_alphagram <- function(x){
  paste(sort(unlist(strsplit(x, ""))), collapse = "")
}
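A quick sanity check on the alphagram trick (self-contained, with my own example words): two anagrams should map to the same sorted string.

```r
# Redefine the helper so this snippet runs on its own
make_alphagram <- function(x) paste(sort(unlist(strsplit(x, ""))), collapse = "")

make_alphagram("listen")                            # "eilnst"
make_alphagram("listen") == make_alphagram("silent") # TRUE: anagrams collide
```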

Now it’s just a matter of testing and running:

# Tests
expect_equal(check_valid("aa bb cc dd ee aa", anagrams_okay = TRUE), FALSE)
expect_equal(check_valid("ba bb cc dd ee ab", anagrams_okay = TRUE), TRUE)
expect_equal(check_valid("ba bb cc dd ee ab", anagrams_okay = FALSE), FALSE)

# Part 1
raw <- read.delim("04_1_dat.txt", header = FALSE, stringsAsFactors = FALSE)[[1]] # maybe an inelegant way to read in as a vector...
lapply(raw, check_valid) %>%
  unlist %>%
  sum
# Part 2
lapply(raw, check_valid, anagrams_okay = FALSE) %>% unlist %>% sum
