I liked the twist in the Day 13 puzzle. Implementing a literal solution to part 1, where I had the scanners walk back and forth, was straightforward. And Part 2 looked easy enough. Then I peeked at how slowly my algorithm was proceeding and realized I would need to refactor.

### Part 1

This function has been modified in two places for Part 2. I added the first line of the function body, which uses the modulo operator `%%` to trim the time down to a single back-and-forth pass. That made the commented-out direction-change line obsolete.

I worked out the `time %% ((size-1)*2)` bit by counting examples on my fingers 🙂

```r
# Time corresponds to "depth", size = "range"
move <- function(size, time){
  time <- time %% ((size-1)*2) # added this line for part 2 to be O(n) not O(n^2)
  pos <- 1
  increment <- 1
  for(i in seq_len(time)){ # with the modulo line this could be done w/o loop
    # if(pos == 1) { increment <- 1 } # Don't need this b/c of adding the modulo
    if(pos == size) { increment <- -1 }
    pos <- pos + increment
  }
  pos
}

library(testthat)
expect_equal(move(3, 0), 1)
expect_equal(move(2, 1), 2)
expect_equal(move(4, 4), 3)
expect_equal(move(4, 6), 1)

library(dplyr) # for %>%

dat <- read.csv("13_dat.txt", sep = ":", header = FALSE) %>%
  setNames(c("depth", "size"))

sum((dat$size * dat$depth)[with(dat, mapply(move, size, depth)) == 1])
# 1728
```
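As the comment in the loop hints, once `time` has been reduced modulo the cycle length, the position can be computed without any loop at all. Here's a loop-free sketch of the same function; `move2` is my name for it, not part of the original solution:

```r
# Loop-free variant of `move` (hypothetical `move2`): after reducing
# `time` modulo the cycle length (size-1)*2, the scanner is still on
# its way down if t < size, otherwise it is on the way back up.
move2 <- function(size, time){
  t <- time %% ((size - 1) * 2)
  if (t < size) t + 1 else 2 * size - t - 1
}

# Same spot checks as the loop version
stopifnot(move2(3, 0) == 1,
          move2(2, 1) == 2,
          move2(4, 4) == 3,
          move2(4, 6) == 1)
```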

### Part 2

Without the modulo line above, my loop will walk the scanner back and forth for `time` steps. As the delay increases, so does the time. The answer to this problem is a delay of approximately 4 million time units; at that point, each scanner is walked 4 million times… it's been a long time since CS 201, but I think that makes the algorithm O(n²). In practice, I realized I had a problem when progress slowed dramatically.

Realizing this, I eliminated the unnecessary walking. Now, this could still be much more efficient: because I'm recycling code from Part 1, I test all scanner depths for collisions at each iteration, whereas a faster solution would move on to the next delay value as soon as a single scanner collision is found. But hey, that is a mere ~20x slowdown or so, and it's still O(n).
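For what it's worth, the per-step walking can be dropped entirely: a scanner at `depth` with range `size` catches the packet at a given delay exactly when `(depth + delay)` is a multiple of its cycle length `(size - 1) * 2`. A vectorized sketch of that idea (`find_delay` is a name I'm inventing here, not code from my solution):

```r
# Hypothetical faster search: no walking, just cycle arithmetic.
# A scanner catches the packet iff (depth + delay) %% period == 0.
find_delay <- function(depth, size){
  period <- (size - 1) * 2
  delay <- 0
  while (any((depth + delay) %% period == 0)) {
    delay <- delay + 1
  }
  delay
}

# Small example from the puzzle statement:
# scanners at depths 0, 1, 4, 6 with ranges 3, 2, 4, 4
find_delay(c(0, 1, 4, 6), c(3, 2, 4, 4))
#> [1] 10
```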

Having started with R in 2014, I rarely feel like an old-timer. I learned dplyr from the beginning, not `tapply`. But I had that feeling here… being somewhat comfortable with `mapply` gets in the way of sitting down to learn the `map2` functions from purrr, which I suspect will be useful to know.
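In that spirit, the `mapply` call from Part 1 translates to purrr almost word for word. A sketch, assuming purrr is installed (`move` is repeated from Part 1 so the snippet stands alone, and the depths/ranges are just the puzzle's small example):

```r
library(purrr)

# `move` from Part 1, copied so this snippet runs on its own
move <- function(size, time){
  time <- time %% ((size - 1) * 2)
  pos <- 1
  increment <- 1
  for (i in seq_len(time)) {
    if (pos == size) increment <- -1
    pos <- pos + increment
  }
  pos
}

sizes  <- c(3, 2, 4, 4)
depths <- c(0, 1, 4, 6)

# map2_dbl() iterates over two vectors in parallel, like mapply(),
# and always returns a double vector
stopifnot(all(map2_dbl(sizes, depths, move) ==
              mapply(move, sizes, depths)))
```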

```r
# Part 2
system.time({
  delay <- 0
  hits <- sum(with(dat, mapply(move, size, depth)) == 1)
  while( hits != 0 ){
    dat$depth <- dat$depth + 1
    delay <- delay + 1
    hits <- sum(with(dat, mapply(move, size, depth)) == 1)
  }
})
#>    user  system elapsed
#> 906.512   2.804 909.319

delay
#> [1] 3946838
```

My runtime was about 15 minutes.