Why does the Linux load average include processes that are blocked on swapping? (Never realized it did; I thought it used the classical definition.) You know it's good software archaeology when it deals with something that's still relevant today, and the search bottoms out in MACRO-10 code.
> font-size is the worst.
Just how hard can it be to determine which font size should be used for an element based on the CSS? Pretty damn hard, it turns out.
> To recap, we are now at four different notions of font size being inherited: ...
Why and how to deprecate a programming language.
The thesis here is that the Linux kernel isn't a monorepo but a monotree with multiple repositories: the main one maintained by Linus, subsystem-specific ones, and so on. Hence not a monorepo. But all of those repositories are rooted in the same tree, with changes flowing between the repos arbitrarily (so they're not polyrepos either, which would generally be totally independent of each other). Hence the need for a new term.
Unsurprisingly, GitHub doesn't support this rather unusual workflow.
Computer science paper recommendations from Fabien Giesen, with long summaries of exactly why these papers are particularly useful/interesting.
A HN comment from 2015 explaining why the 6502 instruction set encouraged a SOA layout over AOS.
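A rough sketch of the two layouts the comment contrasts, in Python for brevity (the names are hypothetical; on the 6502, SoA tables map directly onto indexed addressing like `LDA hp_table,X`, while AoS needs base + index * stride arithmetic that a CPU with no multiply instruction and 8-bit index registers makes painful):

```python
# Array-of-structs: each record's fields stored together. On the 6502,
# reaching record i requires computing an address offset (i * stride).
aos = [{"hp": 10, "atk": 5}, {"hp": 12, "atk": 7}]

# Struct-of-arrays: one parallel table per field. On the 6502 a single
# indexed load ("LDA hp_table,X" with X = i) reaches element i directly.
hp_table = [10, 12]
atk_table = [5, 7]

def hp_aos(i):
    # AoS access: index into the record list, then pick the field.
    return aos[i]["hp"]

def hp_soa(i):
    # SoA access: index straight into the field's own table.
    return hp_table[i]
```

Both return the same data; the difference is purely in how cheaply the hardware's addressing modes can reach it.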
> But CSS wouldn’t be introduced for five years, and wouldn’t be fully implemented for ten. This was a period of intense work and innovation which resulted in more than a few competing styling methods that just as easily could have become the standard.
A survey of the early history of HTML styling languages.
Reverse engineering all the Pokemon games, to extract the full list of Pokemon stats and graphics. The particularly interesting bit here is the evolution of how the games store their data.
Mike Hearn on the hard lessons about user account authentication learned at Google. I think I disagree with the ultimate conclusion that it's futile to implement your own system and you should just use OAuth to piggyback on Google/FB auth, or that the only good alternative is session-token-generating email links. As a user I don't think I'd like either of those. But it's still super important to be aware of the actual issues.
On the social implications of rating systems in games. What happens when the output of a rating system stops being used as a prediction, and instead becomes a status symbol? (And an argument for keeping MMRs purely hidden, while making the public "ranks" something you can advance through with sufficient grinding.)
Packing functions in memory such that caller and callee are more likely to be on the same cache line / same page. (Surprising to see the "same cache line" part actually happens 5% of the time; the I-TLB improvements make a lot more intuitive sense.) This is done using call-graph information collected continuously from production machines.
They then use the same mechanism for keeping the very hottest code in huge pages. This can't be done universally, due to the tiny number of huge-page I-TLB entries.
5-10% improvements across a selection of Facebook's services.
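A toy illustration of the core idea (not Facebook's actual algorithm; just a hedged sketch of greedily clustering hot caller/callee pairs from hypothetical profile counts so they end up adjacent in the final layout):

```python
def pack(edges):
    """edges: list of (caller, callee, call_count) from a profiled call graph.
    Returns a function layout order that keeps hot pairs adjacent."""
    cluster_of = {}  # function name -> its cluster (a shared list)
    for a, b, _ in edges:
        for f in (a, b):
            if f not in cluster_of:
                cluster_of[f] = [f]
    # Merge clusters starting from the hottest call edges, so the
    # highest-count caller/callee pairs land next to each other first.
    for a, b, _ in sorted(edges, key=lambda e: -e[2]):
        ca, cb = cluster_of[a], cluster_of[b]
        if ca is cb:
            continue  # already laid out together
        ca.extend(cb)  # place the callee's cluster right after the caller's
        for f in cb:
            cluster_of[f] = ca
    # Emit each cluster once, in discovery order.
    seen, layout = set(), []
    for c in cluster_of.values():
        if id(c) not in seen:
            seen.add(id(c))
            layout.extend(c)
    return layout

# Hypothetical profile: main calls hot_fn far more often than cold_fn,
# so hot_fn should be placed immediately after main.
order = pack([("main", "hot_fn", 9000), ("main", "cold_fn", 3)])
```

The real system has to respect function sizes and page boundaries on top of this, but the greedy merge-by-edge-weight shape is the recognizable core.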
(Excluding FreeBSD on their CDN servers, of course.) Asked in the context of Gregg being an ex-Solaris hacker.
It's very easy for people to underestimate how big the cumulative effect of 20 years of even a slightly faster rate of improvement ends up being. E.g. were there any major enhancements to the Illumos TCP stack in this decade? If there were, it's at least not obvious. Or (since I dug this post out due to a "Why would people run Linux instead of OpenBSD" discussion), anyone wanting to run a major Internet service on OpenBSD would probably need to hire 1-2 fulltime hackers to modernize the TCP implementation.
That's just the bit of operating systems I'm familiar with. But it's hard to believe it would somehow be a unique problem area.
An investigation of HTTP middleboxes all over the internet. How do they behave, and how do you fool them into doing things they weren't meant to do?
The Gamecube had a GPU with some programmable parts, rather than being purely fixed-function. For Dolphin to emulate that, they need to compile the Gamecube GPU programs to modern GPU shaders. But this compilation takes time, and they don't know the set of needed shaders up front (it's fully dynamic). How do you solve that?
> But what if we don't have to rely on specialized shaders? The crazy idea was born to emulate the rendering pipeline itself with an interpreter that runs directly on the GPU as a set of monstrous flexible shaders.
The great thing about Dolphin updates is that they don't just explain what a new feature is; they explain what other solutions have been tried or proposed, and why those solutions don't actually work.
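A rough sketch of the contrast in Python (the real thing is shader code running on the host GPU; `blend_mode` and these functions are hypothetical stand-ins for pipeline configuration state):

```python
# Approach 1 -- specialized shaders: generate and compile one function
# per pipeline configuration. Fast per pixel, but compiling a config
# seen for the first time mid-game causes a stutter.
def make_specialized(blend_mode):
    if blend_mode == "add":
        return lambda src, dst: min(src + dst, 255)
    return lambda src, dst: src  # "replace" mode: ignore the destination

# Approach 2 -- "ubershader"-style interpreter: one flexible function
# that reads the configuration at runtime. Slower per pixel, but it
# never needs a compile at draw time, so no stutter.
def ubershader(blend_mode, src, dst):
    if blend_mode == "add":
        return min(src + dst, 255)
    return src
```

Dolphin's hybrid approach uses the interpreter to cover a configuration while the specialized shader for it compiles in the background.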
It's the contract programming language for Ethereum, where bugs in contracts lead to $10M cyber-heists. And if you read the details, the quoted bit is actually a fair summary...
Also: I've been playing way too much Dead Cells this weekend; it's pretty good. Not really a Metroidvania though, despite what the title of this blog post says.
This is incredibly cool, and I'd love to read similar work on other data structures. Unfortunately I rarely have a need for immutable sorted sequences.
Surprisingly expensive:
> In practice it can generate several levels over the course of a day depending on how much CPU power it is given and how large the desired levels are.
Need to reread to see if I really understand what they're doing. But this might be the approach I'll try for the project I'm reading this stuff for.
Catchy soundtrack, too.
I want to refer to this one roughly once a year, and it always takes me half an hour to find it. (Though usually I stumble onto other interesting papers during that search, so it's not too bad).
Amazing ITS restoration project. A lot of other cool stuff under /PDP-10/ in general.