# Posts by Philip Pham

The only question that really stumped me during my Google interviews was The Skyline Problem. I remember only being able to write up a solution in pseudocode, after being given many hints, before my time was up.

It's been banned for some time now, so I thought I'd dump the solution here. Maybe, I will elaborate and clean up the code some other time. It's one of the cleverest uses of an ordered map (usually implemented as a tree map) that I've seen.

```
#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

using namespace std;

namespace {
struct Wall {
  enum Type : int {
    LEFT = 1,
    RIGHT = 0
  };

  int position;
  int height;
  Type type;

  Wall(int position, int height, Wall::Type type)
      : position(position), height(height), type(type) {}

  bool operator<(const Wall& other) const {
    return position < other.position;
  }
};

ostream& operator<<(ostream& stream, const Wall& w) {
  return stream << "Position: " << to_string(w.position) << ';'
                << " Height: " << to_string(w.height) << ';'
                << " Type: " << (w.type == Wall::Type::LEFT ? "Left" : "Right");
}
}  // namespace

class Solution {
 public:
  vector<vector<int>> getSkyline(vector<vector<int>>& buildings) {
    vector<Wall> walls;
    for (const vector<int>& building : buildings) {
      walls.emplace_back(building[0], building[2], Wall::Type::LEFT);
      walls.emplace_back(building[1], building[2], Wall::Type::RIGHT);
    }
    sort(walls.begin(), walls.end());
    vector<vector<int>> skyline;
    // Ordered map from height to the number of walls currently at that height.
    map<int, int> heightCount;
    for (vector<Wall>::const_iterator wallPtr = walls.cbegin();
         wallPtr != walls.cend();) {
      int currentPosition = wallPtr->position;
      // Process every wall at this position before emitting a point.
      do {
        if (wallPtr->type == Wall::Type::LEFT) {
          ++heightCount[wallPtr->height];
        } else if (wallPtr->type == Wall::Type::RIGHT) {
          if (--heightCount[wallPtr->height] == 0) {
            heightCount.erase(wallPtr->height);
          }
        }
        ++wallPtr;
      } while (wallPtr != walls.cend() && wallPtr->position == currentPosition);
      // Emit a point only when the maximum height changes.
      if (skyline.empty() || heightCount.empty() ||
          heightCount.crbegin()->first != skyline.back()[1]) {
        skyline.emplace_back(vector<int>{
            currentPosition,
            heightCount.empty() ? 0 : heightCount.crbegin()->first});
      }
    }
    return skyline;
  }
};
```

The easiest way to detect cycles in a linked list is to put all the seen nodes into a set and check that you don't have a repeat as you traverse the list. Unfortunately, this can blow up in memory for large lists.
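For comparison, the set-based approach might look something like this in C++ (a sketch with a hypothetical `Node` struct; HackerRank supplies its own node type):

```
#include <unordered_set>

struct Node {
  int data;
  Node* next;
};

// Naive cycle detection: O(n) time but also O(n) extra memory,
// since every visited node is stored in the set.
bool hasCycleWithSet(const Node* head) {
  std::unordered_set<const Node*> seen;
  while (head != nullptr) {
    if (!seen.insert(head).second) return true;  // Already visited: cycle.
    head = head->next;
  }
  return false;
}
```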

Floyd's Tortoise and Hare algorithm gets around this by using two pointers that iterate through the list at different speeds. It's not immediately obvious why this should work.

```
/*
 * For your reference:
 *
 * SinglyLinkedListNode {
 *     int data;
 *     SinglyLinkedListNode* next;
 * };
 *
 */
namespace {
template <typename Node>
bool has_cycle(const Node* const tortoise, const Node* const hare) {
  if (tortoise == hare) return true;
  if (hare->next == nullptr || hare->next->next == nullptr) return false;
  return has_cycle(tortoise->next, hare->next->next);
}
}  // namespace

bool has_cycle(SinglyLinkedListNode* head) {
  if (head == nullptr ||
      head->next == nullptr ||
      head->next->next == nullptr) return false;
  return has_cycle(head, head->next->next);
}
```

The above algorithm solves HackerRank's Cycle Detection.

To see why this works, consider a cycle that starts at index $\mu$ and has length $l$. If there is a cycle, we should have $x_i = x_j$ for some $i,j \geq \mu$ and $i \neq j$. This should occur when \begin{equation} i - \mu \equiv j - \mu \pmod{l}. \label{eqn:cond} \end{equation}

In the tortoise and hare algorithm, the tortoise moves with speed 1, and the hare moves with speed 2. Let $i$ be the location of the tortoise. Let $j$ be the location of the hare.

The cycle starts at $\mu$, so the earliest that we could see a cycle is when $i = \mu$. Then, $j = 2\mu$. Let $k$ be the number of steps we take after $i = \mu$. We'll satisfy Equation \ref{eqn:cond} when \begin{align*} i - \mu \equiv j - \mu \pmod{l} &\Leftrightarrow \left(\mu + k\right) - \mu \equiv \left(2\mu + 2k\right) - \mu \pmod{l} \\ &\Leftrightarrow k \equiv \mu + 2k \pmod{l} \\ &\Leftrightarrow 0 \equiv \mu + k \pmod{l}. \end{align*}

This will happen for some $k \leq l$, so the algorithm terminates within $\mu + k$ steps if there is a cycle. Otherwise, if there is no cycle, the algorithm terminates when it reaches the end of the list.
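As a quick sanity check on this bound, here's a toy sketch (my own, with a made-up `stepsUntilMeeting` helper) that encodes a list as an array of successor indices:

```
#include <vector>

// Runs the tortoise (speed 1) and hare (speed 2) on a list encoded as
// next[i], assuming a cycle exists, and returns the number of steps
// until the two pointers meet.
int stepsUntilMeeting(const std::vector<int>& next) {
  int tortoise = 0, hare = 0, steps = 0;
  do {
    tortoise = next[tortoise];
    hare = next[next[hare]];
    ++steps;
  } while (tortoise != hare);
  return steps;
}
```

With `next = {1, 2, 3, 4, 2}`, nodes 0 and 1 form the tail ($\mu = 2$) and the cycle is 2 → 3 → 4 → 2 ($l = 3$); the pointers meet after 3 steps, within the $\mu + l = 5$ bound.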

Thomas Chatterton Williams quotes Ryszard Kapuściński:

> Only one thing interests him: how to defeat communism. On this subject he can discourse with energy and passion for hours, concoct schemes, present proposals and plans, unaware that as he does so he becomes for a second time communism’s victim: the first time he was a victim by force, imprisoned by the system, and now he has become a victim voluntarily, for he has allowed himself to be imprisoned in the web of communism’s problems. For such is the demonic nature of great evil—that without our knowledge and consent, it manages to blind us and force us into its straitjacket.

He notes that *communism* could easily be replaced by *racism* and applied to today's anti-racist movement. In his memoir, Williams recounts how he escapes the anxiety and fatalism brought about by chiefly seeing himself as a black man.

Currently, I find myself in a straitjacket. My restraint is the conception of myself as a yellow male. Williams freely admits that freeing himself and unlearning race may be easier for him because he is light-skinned. He even questions:

> Can "black" women, or for that matter "Asian" men—both of whom are, in contrast to the opposite sexes of their groups, statistically far less able to find partners of any race—meaningfully renounce their racial identity?

I hope to answer this question.

To start, why would a smart, rational person like me even take part in this insidious game of race? Williams consistently hammers home that the racial categories constructed by the US Census are meaningless. Statistically, "race" along the axes of Asian, White, Black, Hispanic, American Indian, or Pacific Islander means hardly anything. By nearly every metric (IQ, income, how well one can dance), intragroup (individual) variance overwhelms the intergroup variance. And each group contains subgroups with wildly diverging outcomes: for example, Nigerian Americans versus American Descendants of Slavery or Japanese Americans versus Hmong Americans. That is, $P(X \mid \text{Race}, E) \approx P(X \mid E)$ for most $X$, where $E$ is the environment.

At the same time, one cannot really say that group differences aren't real or that they can't be significant beyond different shades of brown skin. Indeed, these differences may even extend beyond social factors. See How Genetics Is Changing Our Understanding of ‘Race’ for a more nuanced discussion. Most stereotypes lack any scientific basis, and for the few we do have evidence for, we don't understand their causal mechanisms well enough to draw definitive conclusions. Thus, the null hypothesis should still be that individual differences dominate group differences.

And yet, I came to believe in "race" for the same reason I believe in math and physics. Race as a concept really does create a coherent narrative of the world that is consistent with my experiences. I share a lot with Asian Americans who fought with their parents about speaking their mother tongue at home. I succumbed to peer pressure to be less "Asian" so I could win acceptance from my white friends. I lift weights to compensate for my supposed lack of masculinity. I am good at math. I wonder if I didn't get into my desired college or PhD programs of choice because Asians are judged more harshly in admissions. I struggle with dating: my own female cousins have mentioned a "no Asians" policy. I've had many social interactions that left me baffled (see We Out Here). Truthfully, I feel injured and want redress for these wrongs. The sloppy engineer in me quickly takes hold of the first and easiest solution to the problem, even if it only approximately fits the data. At times, the most obvious solution is explicit race-based policy to counter racism, that is, *anti-racism*: I would add an ever-growing number of `if` statements to the program.

```
if race in OPPRESSED_MINORITY:
    # Do something to fix the disparity.
```

The better mathematician in me has to contend that this race-essentialist view gets us nowhere closer to the axioms that **diversity should be tolerated, even embraced** and **individuals should be accorded the same freedoms and opportunities regardless of their differences.** Williams writes:

> "Woke" anti-racism proceeds from the premise that race is real—if not biological, then socially constructed and therefore equally if not more significant still—putting it in sync with toxic presumptions of white supremacism that would also like to insist on the fundamentality of racial difference.

Thus, our ideals run into a fundamental inconsistency with anti-racism when implemented as policies explicitly targeting race.

Firstly, if we accept that the intergroup variance is small compared to individual differences, encouraging *racial diversity* does nothing for actual diversity. In fact, one could argue that prioritizing racial diversity may flatten differences and reduce diversity. Indeed, we may be prioritizing racial diversity over economic diversity.

Secondly, it's almost a tautology that racial balancing does not treat individuals fairly. Of course, it's unfair to the individual who happens to be the "wrong" race. But even those of the "right" race may not benefit because it turns out that race doesn't tell you a whole lot about what hardships the person has actually encountered.

I don't think I could ever add enough hacks to fix the program: there will always be another marginalized group. Such a system naturally grows fragile and impossible to maintain. At best such a system merely oppresses, and at worst, you get Sectarianism. In particular, sectarianism tends to devolve into corruption and incompetence by creating the wrong incentives.

For engineers, when a system can no longer satisfy the fundamental goals it has set out to achieve, one needs to rewrite the system. We should deprecate the anti-racist agenda. But what do we replace it with?

Williams does not propose "infantile colorblindness" and pretending racism does not exist. We must acknowledge race to the extent that we must fight against explicit racist policies like those from the Jim Crow era.

Denying that racism still persists would deny my own personal experience. But racial essentialism also contradicts my personal experience. I do have free will. Every day, I manage to work with a diverse group of colleagues in ways that aren't colored by race. Despite seeing dating profiles that explicitly say "no Asians", I have found satisfying romantic relationships, where my partner treats me as an individual. There is also the personal tax that Wesley Yang describes in We Out Here:

> Or maybe the nameless Asian man came away from that incident inwardly torn, uncertain whether he had encountered subtle racism, his own social ineptitude, or the intrinsic hardness of the world. Maybe he suspected that all these things were factors — knowing all the while that to make an issue of it would seem an excessive response to an easily deniable claim about an event of small importance with many possible explanations.

And so the process of freeing myself from this straitjacket begins. I must take the nuanced view that some people and even institutions are racist and harbor biases, but there is nothing so real about race to say that it is the underlying force that determines my life. It is not easy or comfortable: Williams remarks on "the terror involved in imagining the total absence of race." There is real power in the explanation "race" provides and the comfort in belonging to a group, but

> Real dignity, as he saw it, the kind that can never be stripped from you because only you have the power to bestow it upon yourself, comes from accepting and playing the hand you are dealt as best you can. It also comes from understanding and accepting that no one else has a perfect hand, whatever the appearances.

One has to ignore the peer pressure and the media onslaught that traffics in outrage.

But there is a universality that binds all of us together. Indeed, one doesn't have to be a certain "race" to appreciate mathematics or jazz; as Henry Louis Gates, Jr. states, "All human culture is available and knowable to all human beings." Our future depends on us moving past race. Solving the global problems of income inequality and climate change will require a level of human ingenuity and coordination that demands it.

Consider the problem Overrandomized. Intuitively, one can see something like Benford's law. Indeed, counting the leading digit works:

```
#include <algorithm>
#include <iostream>
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <utility>
#include <vector>

using namespace std;

string Decode() {
  unordered_map<char, int> char_counts;
  unordered_set<char> chars;
  for (int i = 0; i < 10000; ++i) {
    long long Q; string R;
    cin >> Q >> R;
    char_counts[R[0]]++;  // Count leading characters.
    for (char c : R) chars.insert(c);
  }
  // Sort characters by how often they lead: rarer leading characters
  // correspond to larger digits.
  vector<pair<int, char>> count_chars;
  for (const pair<char, int>& char_count : char_counts) {
    count_chars.emplace_back(char_count.second, char_count.first);
  }
  sort(count_chars.begin(), count_chars.end());
  string code;
  for (const pair<int, char>& count_char : count_chars) {
    code += count_char.second;
    chars.erase(count_char.second);
  }
  code += *chars.begin();  // The remaining character never leads, so it maps to 0.
  reverse(code.begin(), code.end());
  return code;
}

int main(int argc, char *argv[]) {
  ios::sync_with_stdio(false); cin.tie(NULL);
  int T; cin >> T;
  for (int t = 1; t <= T; ++t) {
    int U; cin >> U;
    cout << "Case #" << t << ": " << Decode() << '\n';
  }
  cout << flush;
  return 0;
}
```

Take care to read `Q` as a `long long` because it can be large.

It occurred to me that there's no reason the logarithms of the randomly generated numbers should be uniformly distributed, so I decided to look into this probability distribution closer. Let $R$ be the random variable representing the return value of a query.

\begin{align*} P(R = r) &= \sum_{m = r}^{10^U - 1} P(M = m, R = r) \\ &= \sum_{m = r}^{10^U - 1} P(R = r \mid M = m)P(M = m) \\ &= \frac{1}{10^U - 1}\sum_{m = r}^{10^U - 1} \frac{1}{m}. \end{align*} since $P(M = m) = 1/(10^U - 1)$ for all $m$.

The probability that we get a $k$ digit number that starts with a digit $d$ is then \begin{align*} P(d \times 10^{k-1} \leq R < (d + 1) \times 10^{k-1}) &= \frac{1}{10^U - 1} \sum_{r = d \times 10^{k-1}}^{(d + 1) \times 10^{k-1} - 1} \sum_{m = r}^{10^U - 1} \frac{1}{m}. \end{align*}

Here, you can already see that for a fixed $k$, smaller $d$s will have more terms, so they should occur as leading digits with higher probability. It's interesting to try to figure out how much more frequently this should happen, though. To get rid of the summation, we can use integrals! This will make the computation tractable for large $k$ and $U$. Here, I start dropping the $-1$s in the approximations.

\begin{align*} P\left(d \times 10^{k-1} \leq R < (d + 1) \times 10^{k-1}\right) &= \frac{1}{10^U - 1} \sum_{r = d \times 10^{k-1}}^{(d + 1) \times 10^{k-1} - 1} \sum_{m = r}^{10^U - 1} \frac{1}{m} \\ &\approx \frac{1}{10^U} \sum_{r = d \times 10^{k-1}}^{(d + 1) \times 10^{k-1} - 1} \left[\log 10^U - \log r \right] \\ &=\frac{10^{k - 1}}{10^{U}}\left[ U\log 10 - \frac{1}{10^{k - 1}}\sum_{r = d \times 10^{k-1}}^{(d + 1) \times 10^{k-1} - 1} \log r \right]. \end{align*}

Again, we can apply integration. Using integration by parts, we have $\int_a^b \log x \,dx = b\log b - b - \left(a\log a - a\right)$, so \begin{align*} \sum_{r = d \times 10^{k-1}}^{(d + 1) \times 10^{k-1} - 1} \log r &\approx 10^{k-1}\left[ (k - 1)\log 10 + (d + 1) \log (d + 1) - d \log d - 1 \right]. \end{align*}

Substituting, we end up with \begin{align*} P&\left(d \times 10^{k-1} \leq R < (d + 1) \times 10^{k-1}\right) \approx \\ &\frac{1}{10^{U - k + 1}}\left[ 1 + (U - k + 1)\log 10 - \left[(d + 1) \log(d+1) - d\log d\right] \right]. \end{align*}

We can make a few observations. Numbers with lots of digits are more likely to occur since for larger $k$, the denominator is much smaller. This makes sense: there are many more large numbers than small numbers. Independent of $k$, if $d$ is larger, the quantity inside the inner brackets is larger since $x \log x$ is convex, so the probability decreases with $d$. Thus, smaller digits occur more frequently. While this follows the spirit of Benford's law, the exact formula is not quite the same.
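To convince myself the approximation is reasonable, here's a quick numeric check (my own sketch; `exactProb` and `approxProb` are names I made up) comparing the direct double summation against the closed form for small $U$:

```
#include <cmath>

// P(d * 10^(k-1) <= R < (d+1) * 10^(k-1)) by direct double summation,
// where M is uniform on [1, 10^U - 1] and R is uniform on [1, M].
double exactProb(int d, int k, int U) {
  const long long upper = std::llround(std::pow(10.0, U)) - 1;
  const long long lo = std::llround(std::pow(10.0, k - 1)) * d;
  const long long hi = lo + std::llround(std::pow(10.0, k - 1));
  double total = 0.0;
  for (long long r = lo; r < hi; ++r) {
    for (long long m = r; m <= upper; ++m) total += 1.0 / m;
  }
  return total / upper;
}

// The closed-form approximation derived above.
double approxProb(int d, int k, int U) {
  return (1.0 + (U - k + 1) * std::log(10.0) -
          ((d + 1) * std::log(d + 1.0) -
           d * std::log(static_cast<double>(d)))) /
         std::pow(10.0, U - k + 1);
}
```

For $U = 3$, $k = 3$, $d = 1$, the exact sum and the approximation agree to a couple of decimal places, and the approximation is indeed decreasing in $d$.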

This was the first time I had to use integrals for a competitive programming problem!

One of my favorite things about personal coding projects is that you're free to over-engineer and prematurely optimize your code to your heart's content. Production code written in a shared codebase needs to be maintained, and hence, should favor simplicity and readability. For personal projects, I optimize for fun, and what could be more fun than elaborate abstractions, unnecessary optimizations, and abusing recursion?

To that end, I present my solution to the Google Code Jam 2019 Round 1A problem, Alien Rhyme.

In this problem, we maximize the number of pairs of words that could possibly rhyme. I guess this problem has some element of realism as it's similar in spirit to using frequency analysis to decode or identify a language.

After reversing the strings, this problem reduces to greedily taking pairs of words with the longest common prefix. Each time we select a prefix, we update the sizes of the remaining prefixes. If there are $N$ words, this algorithm is $O\left(N^2\right)$ and can be implemented with a linked list in C++:

```
// Reverses and sorts suffixes to make finding the longest common suffix easier.
vector<string> NormalizeSuffixes(const vector<string>& words) {
  vector<string> suffixes; suffixes.reserve(words.size());
  for (const string& word : words) {
    suffixes.push_back(word);
    reverse(suffixes.back().begin(), suffixes.back().end());
  }
  sort(suffixes.begin(), suffixes.end());
  return suffixes;
}

int CountPrefix(const string& a, const string& b) {
  int size = 0;
  for (size_t i = 0; i < min(a.length(), b.length()); ++i) {
    if (a[i] == b[i]) { ++size; } else { break; }
  }
  return size;
}

int MaximizePairs(const vector<string>& words) {
  const vector<string> suffixes = NormalizeSuffixes(words);
  // Pad with zeros: pretend there are empty strings at the beginning and end.
  list<int> prefix_sizes{0};
  for (size_t i = 1; i < suffixes.size(); ++i) {
    prefix_sizes.push_back(CountPrefix(suffixes[i - 1], suffixes[i]));
  }
  prefix_sizes.push_back(0);
  // Count the pairs by continually finding the longest common prefix.
  list<int>::iterator max_prefix_size;
  while ((max_prefix_size =
              max_element(prefix_sizes.begin(), prefix_sizes.end())) !=
         prefix_sizes.begin()) {
    // Claim this prefix and shorten the other matches.
    while (*next(max_prefix_size) == *max_prefix_size) {
      --(*max_prefix_size);
      ++max_prefix_size;
    }
    // Use transitivity to update the common prefix size.
    *next(max_prefix_size) = min(*prev(max_prefix_size), *next(max_prefix_size));
    prefix_sizes.erase(prefix_sizes.erase(prev(max_prefix_size)));
  }
  return suffixes.size() - (prefix_sizes.size() - 1);
}
```

A single file example can be found on GitHub. Since $N \leq 1000$ in this problem, this solution is more than adequate.

## Asymptotically Optimal Solution

We can use the fact that the number of characters in each word $W$ is at most 50 and obtain a $O\left(N\max\left(\log N, W\right)\right)$ solution.

### Induction

Suppose we have a tree where each node is a prefix (sometimes called a trie). In the worst case, each prefix will have a single character. The title image shows such a tree for the words: PREFIX, PRELIM, PROF, SUFFER, SUFFIX, SUM, SWIFT, SWIFTER, SWOLE.

Associated with each node is a count of how many words have that prefix as a maximal prefix. The depth of each node is the sum of the traversed prefix sizes.

The core observation is that, at any given node, any two words in the subtree have a common prefix of length at least the depth of the node. Greedily selecting the longest common prefixes corresponds to pairing all possible prefixes in a subtree with length greater than the depth of the parent. The unused words can then be used higher up in the tree to make additional prefixes. Tree algorithms are best expressed recursively. Here's the Swift code.

```
func maximizeTreePairs<T: Collection>(
    root: Node<T>, depth: Int, minPairWordCount: Int) -> (used: Int, unused: Int)
    where T.Element: Hashable {
  let (used, unused) = root.children.reduce(
    (used: 0, unused: root.count),
    {
      (state: (used: Int, unused: Int), child) -> (used: Int, unused: Int) in
      let childState = maximizeTreePairs(
        root: child.value, depth: child.key.count + depth, minPairWordCount: depth)
      return (state.used + childState.used, state.unused + childState.unused)
    })
  let shortPairUsed = min(2 * (depth - minPairWordCount), (unused / 2) * 2)
  return (used + shortPairUsed, unused - shortPairUsed)
}

func maximizePairs(_ words: [String]) -> Int {
  let suffixes = normalizeSuffixes(words)
  let prefixTree = compress(makePrefixTree(suffixes))
  return prefixTree.children.reduce(
    0, { $0 + maximizeTreePairs(
           root: $1.value, depth: $1.key.count, minPairWordCount: 0).used })
}
```

Since the tree has maximum depth $W$ and there are $N$ words, recursing through the tree is $O\left(NW\right)$.

### Making the Prefix Tree

The simplest way to construct a prefix tree is to start at the root for each word and character-by-character descend into the tree, creating any nodes necessary. Update the count of the node when reaching the end of the word. This is $O\left(NW\right)$.
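As a sketch, the naive construction might look like this in C++ (with a hypothetical `TrieNode`; the tree actually used in this solution is the Swift `Node` type below):

```
#include <map>
#include <memory>
#include <string>
#include <vector>

struct TrieNode {
  std::map<char, std::unique_ptr<TrieNode>> children;
  int count = 0;  // Number of words ending exactly at this node.
};

// Builds the prefix tree by descending character-by-character for each
// word, creating nodes as needed: O(NW) for N words of length at most W.
TrieNode makePrefixTree(const std::vector<std::string>& words) {
  TrieNode root;
  for (const std::string& word : words) {
    TrieNode* node = &root;
    for (char c : word) {
      auto& child = node->children[c];
      if (!child) child = std::make_unique<TrieNode>();
      node = child.get();
    }
    ++node->count;  // The word ends here.
  }
  return root;
}
```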

As far as I know, the worst case will always be $O\left(NW\right)$. In practice, though, if there are many words with lengthy shared common prefixes, we can avoid retracing paths through the tree. In our example, consider SWIFT and SWIFTER. If we naively construct a tree, we will need to traverse through $5 + 7 = 12$ nodes. But if we insert our words in lexicographic order, we don't need to retrace the first 5 characters and only need to traverse 7 nodes.

Swift has somewhat tricky value semantics. `structs` are always copied, so we need to construct this tree recursively.

```
func makePrefixTree<T: StringProtocol>(_ words: [T]) -> Node<T.Element> {
  let prefixCounts = words.reduce(
    into: (counts: [0], word: "" as T),
    {
      $0.counts.append(countPrefix($0.word, $1))
      $0.word = $1
    }).counts
  let minimumPrefixCount = MinimumRange(prefixCounts)
  let words = [""] + words
  /// Inserts `words[i]` into a rooted tree.
  ///
  /// - Parameters:
  ///   - root: The root node of the tree.
  ///   - state: The index of the word for the current path and depth of `root`.
  ///   - i: The index of the word to be inserted.
  /// - Returns: The index of the next word to be inserted.
  func insert(_ root: inout Node<T.Element>,
              _ state: (node: Int, depth: Int),
              _ i: Int) -> Int {
    // Start inserting only for valid indices and at the right depth.
    if i >= words.count { return i }
    // Max number of nodes that can be reused for `words[i]`.
    let prefixCount = state.node == i ?
      prefixCounts[i] : minimumPrefixCount.query(from: state.node + 1, through: i)
    // Either (a) inserting can be done more efficiently at a deeper node;
    // or (b) we're too deep in the wrong state.
    if prefixCount > state.depth || (prefixCount < state.depth && state.node != i) { return i }
    // Start insertion process! If we're at the right depth, insert and move on.
    if state.depth == words[i].count {
      root.count += 1
      return insert(&root, (i, state.depth), i + 1)
    }
    // Otherwise, possibly create a node and traverse deeper.
    let key = words[i][words[i].index(words[i].startIndex, offsetBy: state.depth)]
    if root.children[key] == nil {
      root.children[key] = Node<T.Element>(children: [:], count: 0)
    }
    // After finishing traversal insert the next word.
    return insert(
      &root, state, insert(&root.children[key]!, (i, state.depth + 1), i))
  }
  var root = Node<T.Element>(children: [:], count: 0)
  let _ = insert(&root, (0, 0), 1)
  return root
}
```

While the naive implementation of constructing a trie would involve $48$ visits to a node (the sum over the lengths of each word), this algorithm does it in $28$ visits, as seen in the title image, where each word insertion has its edges colored separately.

Now, for this algorithm to work efficiently, it's necessary to start inserting the next word at the right depth, which is the size of the longest prefix that the words share.

### Minimum Range Query

Computing the longest common prefix of any two words reduces to a minimum range query. If we order the words lexicographically, we can compute the longest common prefix size between adjacent words. The longest common prefix size of two words $i$ and $j$, where $i < j$, is then:

\begin{equation} \textrm{LCP}(i, j) = \min\left\{\textrm{LCP}(i, i + 1), \textrm{LCP}(i + 1, i + 2), \ldots, \textrm{LCP}(j - 1, j)\right\}. \end{equation}

A nice dynamic programming $O\left(N\log N\right)$ algorithm exists to precompute such queries that makes each query $O\left(1\right)$.

We'll $0$-index to make the math easier to translate into code. Given an array $A$ of size $N$, let

\begin{equation} P_{i,j} = \min\left\{A_k : i \leq k < i + 2^{j}\right\}. \end{equation}

Then, we can write $\mathrm{LCP}\left(i, j - 1\right) = \min\left(P_{i, l}, P_{j - 2^l, l}\right)$, where $l = \max\left\{l : l \in \mathbb{Z}, 2^l \leq j - i\right\}$ since $\left([i, i + 2^l) \cup [j - 2^l, j)\right) \cap \mathbb{Z} = \left\{i, i + 1, \ldots , j - 1\right\}$.

$P_{i,0}$ can be initialized as $P_{i,0} = A_{i}$, and for $j > 0$, we have

\begin{equation} P_{i,j} = \begin{cases} \min\left(P_{i, j - 1}, P_{i + 2^{j - 1}, j - 1}\right) & i + 2^{j -1} < N; \\ P_{i, j - 1} & \text{otherwise}. \\ \end{cases} \end{equation}

Here's a Swift implementation.

```
struct MinimumRange<T: Collection> where T.Element: Comparable {
  private let memo: [[T.Element]]
  private let reduce: (T.Element, T.Element) -> T.Element

  init(_ collection: T,
       reducer reduce: @escaping (T.Element, T.Element) -> T.Element = min) {
    let k = collection.count
    var memo: [[T.Element]] = Array(repeating: [], count: k)
    for (i, element) in collection.enumerated() { memo[i].append(element) }
    for j in 1..<(k.bitWidth - k.leadingZeroBitCount) {
      let offset = 1 << (j - 1)
      for i in 0..<memo.count {
        memo[i].append(
          i + offset < k ?
            reduce(memo[i][j - 1], memo[i + offset][j - 1]) : memo[i][j - 1])
      }
    }
    self.memo = memo
    self.reduce = reduce
  }

  func query(from: Int, to: Int) -> T.Element {
    let (from, to) = (max(from, 0), min(to, memo.count))
    let rangeCount = to - from
    let bitShift = rangeCount.bitWidth - rangeCount.leadingZeroBitCount - 1
    let offset = 1 << bitShift
    return self.reduce(self.memo[from][bitShift], self.memo[to - offset][bitShift])
  }

  func query(from: Int, through: Int) -> T.Element {
    return query(from: from, to: through + 1)
  }
}
```

### Path Compression

Perhaps my most unnecessary optimization is path compression, especially since we only traverse the tree once in the induction step. If we were to traverse the tree multiple times, it might be worth it, however. This optimization collapses nodes that have count $0$ and only $1$ child into their parents.

```
/// Use path compression. Not necessary, but it's fun!
func compress(_ uncompressedRoot: Node<Character>) -> Node<String> {
  var root = Node<String>(
    children: [:], count: uncompressedRoot.count)
  for (key, node) in uncompressedRoot.children {
    let newChild = compress(node)
    if newChild.children.count == 1, newChild.count == 0,
       let (childKey, grandChild) = newChild.children.first {
      root.children[String(key) + childKey] = grandChild
    } else {
      root.children[String(key)] = newChild
    }
  }
  return root
}
```

### Full Code Example

A full example with everything wired together can be found on GitHub.

## GraphViz

Also, if you're interested in the graph in the title image, I used GraphViz. It's pretty neat. About a year ago, I made a trivial commit to the project: https://gitlab.com/graphviz/graphviz/commit/1cc99f32bb1317995fb36f215fb1e69f96ce9fed.

```
digraph {
  rankdir=LR;
  // SUFFER
  S1 [label="S"];
  F3 [label="F"];
  F4 [label="F"];
  E2 [label="E"];
  R2 [label="R"];
  S1 -> U [label=12, color="#984ea3"];
  U -> F3 [label=13, color="#984ea3"];
  F3 -> F4 [label=14, color="#984ea3"];
  F4 -> E2 [label=15, color="#984ea3"];
  E2 -> R2 [label=16, color="#984ea3"];
  // PREFIX
  E1 [label="E"];
  R1 [label="R"];
  F1 [label="F"];
  I1 [label="I"];
  X1 [label="X"];
  P -> R1 [label=1, color="#e41a1c"];
  R1 -> E1 [label=2, color="#e41a1c"];
  E1 -> F1 [label=3, color="#e41a1c"];
  F1 -> I1 [label=4, color="#e41a1c"];
  I1 -> X1 [label=5, color="#e41a1c"];
  // PRELIM
  L1 [label="L"];
  I2 [label="I"];
  M1 [label="M"];
  E1 -> L1 [label=6, color="#377eb8"];
  L1 -> I2 [label=7, color="#377eb8"];
  I2 -> M1 [label=9, color="#377eb8"];
  // PROF
  O1 [label="O"];
  F2 [label="F"];
  R1 -> O1 [label=10, color="#4daf4a"];
  O1 -> F2 [label=11, color="#4daf4a"];
  // SUFFIX
  I3 [label="I"];
  X2 [label="X"];
  F4 -> I3 [label=17, color="#ff7f00"];
  I3 -> X2 [label=18, color="#ff7f00"];
  // SUM
  M2 [label="M"];
  U -> M2 [label=19, color="#ffff33"];
  // SWIFT
  I4 [label="I"];
  F5 [label="F"];
  T1 [label="T"];
  S1 -> W [label=20, color="#a65628"];
  W -> I4 [label=21, color="#a65628"];
  I4 -> F5 [label=22, color="#a65628"];
  F5 -> T1 [label=23, color="#a65628"];
  // SWIFTER
  E3 [label="E"];
  R3 [label="R"];
  T1 -> E3 [label=24, color="#f781bf"];
  E3 -> R3 [label=25, color="#f781bf"];
  // SWOLE
  O2 [label="O"];
  L2 [label="L"];
  E4 [label="E"];
  W -> O2 [label=26, color="#999999"];
  O2 -> L2 [label=27, color="#999999"];
  L2 -> E4 [label=28, color="#999999"];
}
```

In general, the Hamiltonian path problem is NP-complete. But in some special cases, polynomial-time algorithms exist.

One such case is in Pylons, the Google Code Jam 2019 Round 1A problem. In this problem, we are presented with a grid graph. Each cell is a node, and a node is connected to every other node except those along its diagonals, in the same column, or in the same row. In the example, from the blue cell, we can move to any other cell except the red cells.

If there are $N$ cells, an $O\left(N^2\right)$ algorithm is to visit the next available cell with the most unavailable, unvisited cells. Why? We should visit those cells early because if we wait too long, we will become stuck at those cells when we inevitably need to visit them.

I've implemented the solution in `findSequence` (see the full Swift solution on GitHub):

```
func findSequence(N: Int, M: Int) -> [(row: Int, column: Int)]? {
  // Other cells to which we are not allowed to jump.
  var badNeighbors: [Set<Int>] = Array(repeating: Set(), count: N * M)
  for i in 0..<(N * M) {
    let (ri, ci) = (i / M, i % M)
    for j in 0..<(N * M) {
      let (rj, cj) = (j / M, j % M)
      if ri == rj || ci == cj || ri - ci == rj - cj || ri + ci == rj + cj {
        badNeighbors[i].insert(j)
        badNeighbors[j].insert(i)
      }
    }
  }
  // Greedily select the cell which has the most unallowable cells.
  var sequence: [(row: Int, column: Int)] = []
  var visited: Set<Int> = Set()
  while sequence.count < N * M {
    guard let i = (badNeighbors.enumerated().filter {
      if visited.contains($0.offset) { return false }
      guard let (rj, cj) = sequence.last else { return true }
      let (ri, ci) = ($0.offset / M, $0.offset % M)
      return rj != ri && cj != ci && rj + cj != ri + ci && rj - cj != ri - ci
    }.reduce(nil) {
      (state: (i: Int, count: Int)?, value) -> (i: Int, count: Int)? in
      if let count = state?.count, count > value.element.count { return state }
      return (i: value.offset, count: value.element.count)
    }?.i) else { return nil }
    sequence.append((row: i / M, column: i % M))
    visited.insert(i)
    for j in badNeighbors[i] { badNeighbors[j].remove(i) }
  }
  return sequence
}
```

The solution returns `nil` when no path exists.

Why this works hinges on the fact that no path is possible if there are $N_t$ remaining nodes and you visit a non-terminal node $x$ which has $N_t - 1$ unavailable neighbors. There must have been some node $y$ that was visited that put node $x$ in this bad state. This strategy guarantees that you visit $x$ before visiting $y$.

With that said, a path is not possible when there is an initial node that has $N - 1$ unavailable neighbors. This situation describes the $2 \times 2$, $2 \times 3$, and $3 \times 3$ cases. A path is also not possible if it's impossible to swap the order of $y$ and $x$ because $x$ is in the unavailable set of some node $z$ that must come before both. This describes the $2 \times 4$ case.

Unfortunately, I haven't quite figured out the conditions for this $O\left(N^2\right)$ algorithm to work in general. Certainly, it works in larger grids since we can apply the Bondy-Chvátal theorem in that case.