Whew, much easier than day 3. Probably would be a great day to learn the tidytext package but I’ll be faster just muddling through with my current tools.
I wrote a function that compares the length of the input tokens to the length of the unique() input tokens. Then for part 2, I added an argument controlling whether anagrams are permitted; if not, each token is first sorted alphabetically into an alphagram (a word borrowed from competitive Scrabble).
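To make the alphagram idea concrete, a quick illustration using the make_alphagram() helper defined in the code below (outputs shown as I'd expect them, not pasted from a session):

make_alphagram("listen")  # "eilnst"
make_alphagram("silent")  # "eilnst" -- anagrams collapse to the same alphagram, so unique() catches them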
Parts 1 & 2 together
I borrowed the string sorting algorithm from StackOverflow – I cited it below and upvoted it as well 🙂
library(pacman)
p_load(dplyr, stringr, testthat)

check_valid <- function(string, anagrams_okay = TRUE){  # default lets part 1 omit the argument
  subs <- unlist(str_split(string, " "))
  if(!anagrams_okay){
    subs <- unlist(lapply(subs, make_alphagram)) # for part 2
  }
  length(unique(subs)) == length(subs)
}

# thanks StackOverflow! https://stackoverflow.com/questions/5904797/how-to-sort-letters-in-a-string
make_alphagram <- function(x){
  paste(sort(unlist(strsplit(x, ""))), collapse = "")
}
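(Side note: the length(unique(...)) comparison could also be written with base R's anyDuplicated(), which returns 0 when nothing repeats. Just a stylistic alternative with a made-up helper name, not what I actually used above.)

# equivalent uniqueness check, shown for comparison only
all_unique <- function(subs) anyDuplicated(subs) == 0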
Now it’s just a matter of testing & running:
# Tests
expect_equal(check_valid("aa bb cc dd ee aa", anagrams_okay = TRUE), FALSE)
expect_equal(check_valid("ba bb cc dd ee ab", anagrams_okay = TRUE), TRUE)
expect_equal(check_valid("ba bb cc dd ee ab", anagrams_okay = FALSE), FALSE)

# Part 1
raw <- read.delim("04_1_dat.txt", header = FALSE, stringsAsFactors = FALSE)[[1]] # maybe an inelegant way to read in as a vector...
lapply(raw, check_valid) %>% unlist %>% sum

# Part 2
lapply(raw, check_valid, anagrams_okay = FALSE) %>% unlist %>% sum
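On that "inelegant way to read in" comment: readLines() would give the same one-passphrase-per-element character vector a bit more directly. A sketch, assuming the same input file:

# one element per line of the input, no data.frame detour needed
raw <- readLines("04_1_dat.txt")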