Day 14
It was a bit of a tough one today - a sequel to Day 6's lanternfish!
I've included just my p2 code above because I'm pretty sure everyone would have done p1 the same [wrong] way! To get the Part 1 answer, just switch out the 40 for a 10. For Part 2, we again needed to keep a frequency map/counter of every pair and evolve that, instead of simulating a string that would end up trillions and trillions of characters long. My code first parses the insertion rules, then creates a dictionary holding the current count of every pair in the polymer.

It gets more complicated in the loop. The idea is that every current pair goes away when it is split by the inserted character, so its count is reset. We also need to count every new pair: for every occurrence of the current pair, two new pairs are created, because the new element is inserted between the original two characters (the rule AB -> C turns every AB into an AC and a CB).

Once we have our counts for each pair, we need to find the count of each element as per the question - doing this was just a matter of having a new dictionary and adding up the numbers from the pair dictionary. However, these counts are too high, because the pairs overlap: every element (except the two end elements) appears in two pairs and hence gets counted twice. To fix this, we divide each count by two. But another issue arises: the two end elements of the original input (which always stay on the ends) only belong to one pair, so their counts come out as (n*2)-1 rather than n*2. The fix is to divide every count by two and then round the result up. Doing this yields the answer! A sketch of the whole approach is below.
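Here's a minimal sketch of that pair-counting approach, not my exact code from above: it assumes the puzzle input lives in a file called input.txt (template line, blank line, then rules like "CH -> B") and that the rules cover every pair that can appear, which is true for the real input. I've written the halve-and-round step as a ceiling to keep it unambiguous.

```python
from collections import Counter
import math

# Sketch of the Day 14 pair-counting approach. Assumes input.txt holds
# the template, a blank line, then rules of the form "CH -> B".
with open("input.txt") as f:
    template, rule_block = f.read().split("\n\n")

rules = dict(line.split(" -> ") for line in rule_block.strip().splitlines())

# Frequency map of every overlapping pair in the starting polymer.
pairs = Counter(template[i:i + 2] for i in range(len(template) - 1))

for _ in range(40):  # use 10 here for Part 1
    new_pairs = Counter()
    for pair, count in pairs.items():
        inserted = rules[pair]
        # Every occurrence of this pair is split by the inserted element,
        # so it disappears and produces two new pairs instead.
        new_pairs[pair[0] + inserted] += count
        new_pairs[inserted + pair[1]] += count
    pairs = new_pairs

# Tally elements from the pairs. Each element sits in two overlapping
# pairs except the two ends, so halving and rounding up gives the true count.
elements = Counter()
for pair, count in pairs.items():
    elements[pair[0]] += count
    elements[pair[1]] += count

counts = [math.ceil(c / 2) for c in elements.values()]
print(max(counts) - min(counts))
```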
Excuse the long and confusing explanation...
That was a computationally difficult challenge - I think it's time for an implementation one now :)