A monorepo misconception - atomic cross-project commits

Posted on 2021-07-21 in General

In articles and discussions about monorepos, there's one frequently alleged key benefit: atomic commits across the whole tree let you change both a library's implementation and all of its clients in a single commit. Many authors even go so far as to claim that this is the only benefit of monorepos.

I like monorepos, but that particular claim makes no sense! It's not how you'd actually make backwards incompatible changes, such as interface refactorings, in a large monorepo. Instead the process would be highly incremental, and more like the following:

  1. Push one commit to change the library, such that it supports both the old and new behavior with different interfaces (a rough sketch of this stage follows the list).
  2. Once you're sure the commit from stage 1 won't be reverted, push N commits to switch each of the N clients to use the new interface.
  3. Once you're sure the commits from stage 2 won't be reverted, push one commit to remove the old implementation and interface from the library.
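
As a concrete illustration of stage 1, here's what a hypothetical C++ library header might look like mid-migration (the names are invented purely for this example, not taken from any real codebase): the new interface is added alongside the old one, and the old one becomes a thin wrapper that stage 3 eventually deletes.

    // frobnicate.h -- stage 1: old and new interfaces coexist.
    // (Hypothetical library; names invented purely for illustration.)

    struct FrobnicateOptions {
        int strength = 1;
        bool dry_run = false;
    };

    // New interface. Clients are migrated to this one by one in stage 2.
    int Frobnicate(const FrobnicateOptions& options);

    // Old interface, kept as a thin wrapper so existing clients keep building.
    // Deleted in stage 3 once all clients have been migrated.
    [[deprecated("use Frobnicate(const FrobnicateOptions&)")]]
    inline int Frobnicate(int strength) {
        FrobnicateOptions options;
        options.strength = strength;
        return Frobnicate(options);
    }

Each client switch in stage 2 is then a small, independently revertable commit, which is exactly why the giant atomic commit isn't needed.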

... Continue reading ...

Writing a procedural puzzle generator

Posted on 2019-05-14 in Games

This blog post describes the level generator for my puzzle game Linjat. The post is standalone, but might be a bit easier to digest if you play through a few levels. The source code is available; anything discussed below is in src/main.cc.

A rough outline of this post:

  • Linjat is a logic game of covering all the numbers and dots on a grid with lines.
  • The puzzles are procedurally generated by a combination of a solver, a generator, and an optimizer.
  • The solver tries to solve puzzles the way a human would, and assigns a score for how interesting a given puzzle is.
  • The puzzle generator is designed such that it's easy to change one part of the puzzle (the numbers) and have other parts of the puzzle (the dots) get re-organized such that the puzzle remains solvable.
  • A puzzle optimizer repeatedly solves levels and generates new variations from the most interesting ones that have been found so far.
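
As a rough sketch of that last point, the optimizer's outer loop is essentially a solve / score / mutate cycle. The code below is not the actual code from src/main.cc (the names and stub bodies are made up, and the real optimizer keeps a pool of interesting puzzles rather than a single best one); it just shows the shape of the loop:

    #include <cstdlib>

    struct Puzzle { /* grid, numbers, dots ... */ };

    // Stand-ins for the real generator, dot placement, and human-like solver.
    Puzzle RandomPuzzle() { return Puzzle{}; }
    Puzzle MutateNumbers(const Puzzle& p) { return p; }   // change one part: the numbers
    bool ReplaceDots(Puzzle*) { return true; }            // re-derive the dots; false if impossible
    int HumanlikeSolveScore(const Puzzle&) { return std::rand() % 100; }  // "interestingness"

    Puzzle OptimizePuzzle(int iterations) {
        Puzzle best = RandomPuzzle();
        int best_score = HumanlikeSolveScore(best);
        for (int i = 0; i < iterations; ++i) {
            // Change the numbers, then let the dots get re-organized so the
            // puzzle remains solvable.
            Puzzle candidate = MutateNumbers(best);
            if (!ReplaceDots(&candidate)) continue;
            int score = HumanlikeSolveScore(candidate);
            if (score > best_score) {
                best = candidate;
                best_score = score;
            }
        }
        return best;
    }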

... Continue reading ...

Optimizing a breadth-first search

Posted on 2018-07-23 in Games

A couple of months ago I finally had to admit I wasn't smart enough to solve a few of the levels in Snakebird, a puzzle game. The only way to salvage some pride was to write a solver, and pretend that writing a program to do the solving is basically as good as having solved the problem myself. The C++ code for the resulting program is on Github. Most of what's discussed in the post is implemented in search.h and compress.h. This post deals mainly with optimizing a breadth-first search that's estimated to use 50-100GB of memory to run on a memory budget of 4GB.

There will be a follow-up post that deals with the specifics of the game. For this post, all you need to know is that I could not see any good alternatives to the brute force approach, since none of the usual tricks worked. There are a lot of states, since there are multiple movable or pushable objects, and the shape of some of them matters and changes during the game. There were no viable conservative heuristics for algorithms like A* to narrow down the search space. The search graph was directed and implicit, so searching both forward and backward simultaneously was not possible. And a single move could cause the state to change in a lot of unrelated ways, so nothing like Zobrist hashing was going to be viable.

A back-of-the-envelope calculation suggested that the biggest puzzle was going to have on the order of 10 billion states after eliminating all symmetries. Even after packing the state representation as tightly as possible, the state size was on the order of 8-10 bytes depending on the puzzle. 100GB of memory would be trivial at work, but this was my home machine with 16GB of RAM. And since Chrome needs 12GB of that, my actual memory budget was more like 4GB. Anything in excess of that would have to go to disk (the spinning rust kind).
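
To make those numbers concrete, 10 billion states at 8-10 bytes each is 80-100GB just for remembering which states have already been seen. The textbook breadth-first search below (a generic sketch, not the code from search.h or compress.h) shows where that memory goes: every distinct state ends up in the visited set, and an std::unordered_set adds considerable per-entry overhead on top of the raw packed state.

    #include <cstdint>
    #include <deque>
    #include <unordered_set>
    #include <vector>

    using State = uint64_t;  // assume a tightly packed ~8-byte state

    // Stand-ins for the game rules, which are not the point here.
    std::vector<State> Successors(State) { return {}; }
    bool IsWin(State) { return false; }

    bool Solve(State start) {
        std::unordered_set<State> visited;  // grows to one entry per reachable state
        std::deque<State> frontier;
        visited.insert(start);
        frontier.push_back(start);
        while (!frontier.empty()) {
            State s = frontier.front();
            frontier.pop_front();
            if (IsWin(s)) return true;
            for (State next : Successors(s)) {
                if (visited.insert(next).second) {  // true if not seen before
                    frontier.push_back(next);
                }
            }
        }
        return false;
    }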

... Continue reading ...

Numbers and tagged pointers in early Lisp implementations

Posted on 2017-09-04 in Lisp, History

There was a bit of discussion on HN about data representations in dynamic languages, and specifically having values that are either pointers or immediate data, with the two cases being distinguished by use of tag bits in the pointer value:

If there's one takeaway/point of interest that I'd recommend looking at, it's the novel way that Ruby shares a pointer value between actual pointers to memory and special "immediate" values that simply occupy the pointer value itself [1].
This is usual in Lisp (compilers/implementations) and i wouldn't be surprised if it was invented on the seventies once large (i.e. 36-bit long) registers were available.

I was going to nitpick a bit with the following:

The core claim here is correct; embedding small immediates inside pointers is not a novel technique. It's a good guess that it was first used in Lisp systems. But it can't be the case that its invention was tied to the availability of large word sizes; those were in wide use well before Lisp existed. (The early Lisps mostly ran on 36-bit computers.)

It seems more likely that this was tied into the general migration from word-addressing to byte-addressing. Due to alignment constraints, byte-addressed pointers to word-sized objects will always have unused bits around. It's harder to arrange for that with a word-addressed system.

But the latter part of that was speculation; maybe I should check the facts first before being tediously pedantic? Good call, since that speculation was wrong. Let's take a tour through some early Lisp implementations, and look at how they represented data in general, and numbers in particular.
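
For readers who haven't run into the technique the thread is discussing, here's a minimal sketch of low-bit pointer tagging in modern C++ terms. It's purely illustrative and doesn't match any particular Lisp or Ruby scheme; the point is just that aligned heap pointers always have zero low bits, so one of those bits can mark a value as an immediate small integer (a fixnum) rather than a pointer.

    #include <cassert>
    #include <cstdint>

    // A Value is either a real pointer (low bit 0, guaranteed by alignment)
    // or an immediate fixnum (low bit 1, payload in the remaining bits).
    using Value = uintptr_t;
    constexpr uintptr_t kFixnumTag = 1;

    inline bool is_fixnum(Value v) { return (v & kFixnumTag) != 0; }

    inline Value make_fixnum(intptr_t n) {
        return (static_cast<uintptr_t>(n) << 1) | kFixnumTag;
    }

    inline intptr_t fixnum_value(Value v) {
        assert(is_fixnum(v));
        return static_cast<intptr_t>(v) >> 1;  // arithmetic shift restores the sign on typical targets
    }

    inline Value make_pointer(void* p) {
        assert((reinterpret_cast<uintptr_t>(p) & kFixnumTag) == 0);  // alignment gives us this
        return reinterpret_cast<uintptr_t>(p);
    }

    inline void* pointer_value(Value v) {
        assert(!is_fixnum(v));
        return reinterpret_cast<void*>(v);
    }

The post goes on to look at how the early, word-addressed Lisps actually handled this.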

... Continue reading ...

Why PS4 downloads are so slow

Posted on 2017-08-19 in Networking, Games

Game downloads on PS4 have a reputation of being very slow, with many people reporting downloads being an order of magnitude faster on Steam or Xbox. This had long been on my list of things to look into, but at a pretty low priority. After all, the PS4 operating system is based on a reasonably modern FreeBSD (9.0), so there should not be any crippling issues in the TCP stack. The implication is that the problem is something boring, like an inadequately dimensioned CDN.

But then I heard that people were successfully using local HTTP proxies as a workaround. It should be pretty rare for that to actually help with download speeds, which made this sound like a much more interesting problem.

... Continue reading ...

The mystery of the hanging S3 downloads

Posted on 2017-07-20 in Networking

A coworker was experiencing a strange problem with their Internet connection at home. Large downloads from most sites worked fine. The exception was that downloads from Amazon S3 would get up to a good speed (500Mbps), stall completely for a few seconds, restart for a while, stall again, and eventually hang completely. The problem seemed to be specific to S3; downloads from generic AWS VMs were ok.

What could be going on? It shouldn't be a problem with the ISP, or anything south of that: after all, connections to other sites were working. It should not be a problem between the ISP and Amazon, or there would have been problems with AWS too. But it also seems very unlikely that S3 would have a trivially reproducible problem causing large downloads to hang. It's not like this is some minor use case of the service.

If it had been a problem with e.g. viewing Netflix, one might suspect some kind of targeted traffic shaping. But an ISP throttling or forcibly closing connections to S3 but not to AWS in general? That's just silly talk.

The normal troubleshooting tips like reducing the MTU didn't help either. This sounded like a fascinating networking whodunit, so I couldn't resist butting in after hearing about it through the grapevine.

... Continue reading ...

I don't want no 'wantarray'

Posted on 2017-07-18 in Perl

A while back, I got a bug report for json-to-multicsv. The user was getting the following error for any input file, including the one used as an example in the documentation:

    , or } expected while parsing object/hash, at character offset 2 (before "n")

The full facts of the matter were:

  • The JSON parser was failing on the third character of the file.
  • That was also the end of the first line in the file. (I.e. the first line of the JSON file contained just the opening bracket).
  • The user was running it on Windows.
  • The same input file worked fine for me on Linux.

... Continue reading ...

The origins of XXX as FIXME

Posted on 2017-04-17 in History

The token XXX is frequently used in source code comments as a way of marking some code as needing attention. (Similar to a FIXME or TODO, though at least to me XXX signals something far to the hacky end of the spectrum, and perhaps even outright broken).

It's a bit of an odd and non-obvious string though, unlike FIXME and TODO. Where did this convention come from? I did a little bit of light software archaeology to try to find out. To start with, my guesses in order were:

  • MIT (since it sometimes feels like that's the source of 90% of ancient hacker shibboleths)
  • Early Unix (probably the most influential codebase that's ever existed)
  • Some kind of DEC thing (because really, all the world was a PDP)

... Continue reading ...

Computing multiple hash values in parallel with AVX2

Posted on 2017-03-19 in General

I wanted to compute some hash values in a very particular way, and couldn't find any existing implementations. The special circumstances were:

  • The keys are short (not sure exactly what size they'll end up, but almost certainly in the 12-40 byte range).
  • The keys are all of the same length.
  • I know the length at compile time.
  • I have a batch of keys to process at once.

Given the above constraints, it seems obvious that doing multiple keys in a batch with SIMD could speed things up over computing each one individually. Now, typically small data sizes aren't a good sign for SIMD. But that's not the case here, since the core problem parallelizes so neatly.

After a couple of false starts, I ended up with a version of xxHash32 that computes hash values for 8 keys at the same time using AVX2. The code is at parallel-xxhash.
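
To give a flavor of what "eight keys at a time" means, here is one xxHash32 accumulation round done across 8 lanes with AVX2 intrinsics. This is an illustrative fragment rather than the actual code in parallel-xxhash; each 32-bit lane of the registers belongs to a different key.

    #include <cstdint>
    #include <immintrin.h>

    static const uint32_t PRIME32_1 = 2654435761u;
    static const uint32_t PRIME32_2 = 2246822519u;

    // One xxHash32 round, applied to 8 independent keys at once. 'acc' and
    // 'input' each hold one 32-bit word per key.
    static inline __m256i round8(__m256i acc, __m256i input) {
        // acc += input * PRIME32_2
        acc = _mm256_add_epi32(
            acc, _mm256_mullo_epi32(input, _mm256_set1_epi32((int)PRIME32_2)));
        // acc = rotl32(acc, 13)
        acc = _mm256_or_si256(_mm256_slli_epi32(acc, 13),
                              _mm256_srli_epi32(acc, 32 - 13));
        // acc *= PRIME32_1
        return _mm256_mullo_epi32(acc, _mm256_set1_epi32((int)PRIME32_1));
    }

The fiddlier part is arranging the input so that lane i sees the words of key i, which is where knowing the key length at compile time and having a whole batch of keys helps.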

... Continue reading ...

I've been writing ring buffers wrong all these years

Posted on 2016-12-13 in General

So there I was, implementing a one-element ring buffer. Which, I'm sure you'll agree, is a perfectly reasonable data structure.

It was just surprisingly annoying to write, due to reasons we'll get to in a bit. After giving it a bit of thought, I realized I'd always been writing ring buffers "wrong", and there was a better way.

... Continue reading ...

Archives

For earlier posts, head over to the archives.