1. 2

I recently worked with Csmith, which generates random C programs. It found bugs in several of my company's compilers (some serious, some not so serious).

“But I Want Fuzzing My Code to be Harder, Not Easier”

It depends on the type of project, but it seems perfectly reasonable to compile a ‘debug’ version that shows detailed information, but to release a version that just crashes with a generic error message. This way, you can fuzz your own program easily, but it will be hard for people looking for security holes in it.

1. 2

I don’t think this is buying you as much “security” as you might think it does.

1. 1

So you don’t think that detailed error messages make fuzzing easier? Or you don’t think that fuzzing will show the existence of security problems? I’m eager to hear your argument.

1.

You won’t really make fuzzing harder by stripping symbols, removing backtrace generation, etc. There are easy reverse-engineering/black-box analysis tools like afl-unicorn that will do the job just fine.

1.

Cool! I wasn’t aware of that.

1. 3

Camping in the rain!

1. 2

The nice thing about KaTeX is that you can use it to render math statically with node.js. This is also how stackedit.io works when you export to ‘styled HTML’.

1. 10

Creating a guix image that runs on my chromebook pixel.

1. 2

That seems interesting. If you happen to do a writeup I’d love to hear about it!

1. 1

guix

TIL this is a thing that exists. Interesting.

1. 9

Working on my book proposal after spending some time this week gauging how much work it will be to find relevant materials. I also have a wedding party to attend, and another party for a friend who just moved back to Canada from the UK. He was happy to get out before Brexit turns into an even bigger dumpster fire.

1. 2

I think I made a comment like this before, but working on a book seems like a nice project! I suppose the proposal is meant to pitch the book to a publisher?

1. 3

Even if you don’t pitch to a publisher (something I’ll probably do) writing a proposal is recommended so that you can focus your effort better.

1. 9

I’m a mess! Pretty tired (training and working a lot, and sleeping badly), but I also want to do so many things while having no time.

On Friday evening and Saturday: participating in a running competition (I am running the 150m and 300m), going to a party, picking up new contact lenses, picking up my racing bicycle, which has been repaired, and athletics training. I also want to write an article about the conjugate gradient method, build a minimal Linux live CD (see this submission), and see if I can make an install script (probably based on the Arch Linux installation guide). I also have two running projects: making a nixie clock and making a path tracer. But I also have a lot of chores to do on Sunday (groceries, cleaning, washing), and my GF is coming over…

I don’t understand how people get to do any coding while working full-time :(

1. 21

I don’t understand how people get to do any coding while working full-time

A lot of us don’t.

1. 3

I code outside of work for maybe 20 hours a year, in bursts over a few weekends. It’s not much.

1. 3

Yeah, it’s just not a thing I get to do much any more, between work that is non-coding, and home, where there are two little people who demand basically all of my time and energy. I am idly kicking around the idea of building a basic iTunes replacement (because word on the street is that the new “Music” app will no longer support my use-cases), as a way to get back into Apple platform development; but realistically, coding time will likely be supplanted by “sitting on the sofa with a glass of wine decompressing time” until the girls are less demanding.

1. 2

Swinsian might be able to replace iTunes for you. I certainly don’t mean to discourage you from scratching your own itch, though; that’s bound to be more fun and rewarding :-)

2. 7

I don’t understand how people get to do any coding while working full-time :(

Slowly. And with a massive TODO file.

1. 6

I don’t understand how people get to do any coding while working full-time :(

There’s a reason why I rarely code for the FOSS projects I’m involved in as a hobbyist. I just can’t code for 8 hours and then 2 more at a reliable pace. I can be helpful, answer questions and triage, though.

1. 3

making a nixie clock

That’s a very fun project! How much are you building yourself? If you are buying most of the parts it’s definitely the type of thing you can knock out in an afternoon after work.

1. 3

I’ve bought the actual nixie tubes, but I still have to buy the driver parts. I’m mostly following EEVblog’s approach, but I don’t want to have a wifi-connected thing, so I’ll probably buy a high-accuracy RTC chip or breakout board.

I have no experience with high-voltage stuff, so I’ll probably just buy a boost converter module that boosts the voltage enough (though it would be fun to design a switch-mode boost converter myself… and I still might decide to go down that rabbithole). Then I still have to design the PCB itself, and get it manufactured. So I’m thinking more in terms of months than in terms of afternoons ;)

If you made one and have some tips for me, I’d be happy to hear them!

1. 2

I ended up following something pretty close to https://www.instructables.com/id/Arduino-Nixie-Clock-for-Absolute-Beginners/ and I just didn’t go through with the last step of soldering it all down. This was pretty early into my EE tinkering phase, so I tried not to overcomplicate it.

The engineering of it can be made pretty simple so I’m not sure I have any tips on that front, but if you want to show it off I’d put some extra time into thinking about your casing. I got much better feedback on mine once I put it in a 3D printed box to hide the components.

2. 3

path tracer

Ooh! What for?

I don’t understand how people get to do any coding while working full-time

Magic. For me, I see it as I’m either learning something new I can then use for work, or working on something that’s different enough that it doesn’t feel like work.

Hence HW projects.

1. 1

It’s cliched but you’ve got to make the time (and you’ve got to want to do that, and fair enough if you don’t!). I like to go to a cafe for breakfast on at least one day each weekend, take the laptop, and tap out a bit of code. I don’t get a lot done, but it’s more than nothing, and it adds up.

1. 6

I have also thought about this from another perspective. If you have an application you make for yourself, it can be worth it to improve the performance. If you do this, and it costs you 10 minutes, but it saves you 10 seconds every time you use the program, you need to use it 60 times before it pays off.

Now, I don’t claim that this is anywhere near as important as saving lives, but it’s still a baffling figure. If you work on an application that is used repeatedly by even some users, it’s often worth it to improve the performance (of course, it’s often not worth it to optimize it if the delay isn’t noticeable or if there are other steps in the process which take way longer). If you think about things this way it’s ridiculous how much time is wasted by not optimizing performance in some systems.

1. 1

I have also thought about this from another perspective. If you have an application you make for yourself, it can be worth it to improve the performance. If you do this, and it costs you 10 minutes, but it saves you 10 seconds every time you use the program, you need to use it 60 times before it pays off.

In this situation performance is also important, but what is usually more important is that the task is automated and that you can free up a human for other things.

I usually stick to the rule of thumb that I do things manually the first three times, but if something pops up a fourth time, I start automating it. And if I get that far, I usually also start optimizing, because I will reach those 60 uses without question.

1. 3

In this situation performance is also important, but what is usually more important is that the task is automated and that you can free up a human for other things.

Two other bonuses:

• We can do a bunch of sanity checks, etc. in the automated solution, which would seem like too much hassle when doing it manually. Counterpoint: automated scripts might do stupid things that a person wouldn’t, e.g. rm -rf $DIR/* when $DIR hasn’t been set.
• Once something’s automated, we can work at a higher level of abstraction, e.g. using a script can ensure that generated files follow some particular naming pattern and directory structure; that makes it easier to make tools for browsing or querying those files. Counterpoint: easy to end up with a house of cards, which does bad things if some underlying job breaks.
1. 2

Counterpoint: easy to end up with a house of cards, which does bad things if some underlying job breaks.

Another rule of thumb: only use shell scripting at the top level of your processing (for example, to start things up), or use shell scripting all the way down to the second-lowest level, like Slackware does. My present self regularly thanks my past self for sticking to this.

1. 4

I’ve been using this for about a month and installed it on top of vanilla Ubuntu 19.04 rather than using the Regolith distribution (I’m not sure why it needs to be a new distribution). Long story short, I ended up changing the color scheme, switching from st to kitty, broke the pretty i3 bar with a dist-upgrade, and probably would have been better off just doing all of this myself.

1. 1

But that kind of makes it interesting for me, as I have no desire to fiddle around on my work machine. I did have to fiddle a little when I installed it, though. (I’ve been using i3 for a while, but I also wanted some DE integration, so I’m running KDE Plasma as well, and it’s not 100% perfect.)

So I might give this a shot and live with “only 90% of what I love, but at least it works in a non-wonky way”.

1. 1

What does the integration with Gnome’s control center look like?

The best thing about Gnome is all the things working together: the screen lock, the keyboard layouts, the printers, the plug-and-play displays. I prefer using tiling window managers, but I invariably end up re-creating a lot of these things, half-baked.

1. 4

I really wish the major DEs would add a tiling mode. I really like tiling window managers, but I also really like Gnome. I’ve tried a few of the tiling extensions, but they all kinda suck, feel kludgey, and really don’t like that I have one of my monitors oriented vertically. If Gnome had an option to force windows to tile instead of float, I’d be a happy person.

1. 2

Myself, I find that “corner snapping” is all the tiling I want in my life, and MATE does that just fine.

1. 1

I use XFCE + i3, works pretty well. You can just disable the window manager and start i3 and it works. Not sure if that would work for you (see https://github.com/benoncoffee/archlinuxconfig for the installation, dotfiles, etc.).

2. 2

Excellent, actually. Everything works, from screen lock to displays. No arandr or half-baked scripts. This is a really redeeming point that I didn’t even think about; the fact that I missed it illustrates how seamless it is. I suppose I also haven’t had to write any scripts for the bar either, which is nice.

1. 1
• 2nd July, 2019: CVE-2019-13143 reserved
• No response from vendor
• 2nd August 2019: Public disclosure

So… This is essentially an invitation to start using the exploit in the wild?

1. 3

No? What about it reads that way to you?

1. 2

Nothing (and of course you don’t mean it that way), but I’m just not sure if it’s a good idea to disclose an exploit if it’s not fixed yet. At the same time, I’m not in security and I don’t know the common accepted practices. I was hoping that someone would patronizingly explain why it’s OK to do so in response to my deliberately provoking comment ;)

1. 8

I believe the general idea is that you give the vendor time to fix it before you disclose it publicly, but if the vendor does nothing at all, then it is better for the user to know about the vulnerability, so they can attempt to mitigate it (e.g. in this case by not using the ‘smart’ ‘lock’).

1. 5

u/Thra11 has answered this, but to add to that, public disclosure acts as a last-ditch effort to get the vendor to notice the issue and, hopefully, address it. It’s a hard choice to make, especially when releasing exploits that could compromise thousands of user accounts. But at the same time, at least a few users will learn of it and stop using the lock.

1. 3

For any vulnerability that’s disclosed, there’s a good likelihood that some people are already aware of it and are presently exploiting it. Disclosure allows people to avoid being exploited and takes power away from those who hope to exploit it.

1. 1

I do wonder how you “know” whether you’ve built a properly secure device. I guess something like TLA+ could help you write a spec saying something like “in order to change ownership you need to have existing login access”?

The modeling is probably the most difficult part, but I don’t know what tool would be best to ingest a model with some constraints here

1. 2

That’s the crux of the issue. We know exactly how regular locks work, how lock picking works and how to mitigate that with modern locks. As the final line says, “Do not buy a smart lock”, the attack surface is enormous and a lot of these companies are only “security experts” on the surface.

1. 1

I get the impression that it’s pretty hard indeed. There are always things you don’t model (with timings as an obvious example). Of course, this thing seems to be broken in a more obvious way.

1. 2

I think the underlying cause is that we usually assign precedence to operators, and concatenation is not a well-defined operator (just a convention that means ‘some common operator, dependent on context’). Now, since concatenation places its two operands together, it is natural to assume that it has a higher precedence than other operators.

In the same spirit: Does 1½*2 equal (1+½)*2=3 or 1+(½*2)=2? Strictly speaking, following PEMDAS here means that you should evaluate these expressions as 1+(½*2)=2.

1. 34

My answer is “the question makes no sense.”

Why do we have PEMDAS? Because we need to standardize on something. It’s a convention to make communication easier. We break it all the time. If I give you 1/2x, is that (1/2)*x or 1/(2x)? Most would say the former, but I know a lot of physicists read the latter. If you meant (1/2)*x, why did you write 1/2x and not x/2? It’s still a bit ambiguous, though. But if I gave you y/2x, everybody would read it as y/(2x). So order of operations is actually contextual!

PEMDAS is a contextual convention. A question designed to mess with the convention is outside that context and PEMDAS doesn’t automatically apply. If you asked “according to PEMDAS, what is 8÷2(2+2)”, then that’s 16. Without that frame, though, the answer is “wtf”.

1. 18

For reference, Cédric Villani’s (2010 Fields Medal) take on this is: “The right reaction is not to give the result, but to say that the expression is poorly written and that the ambiguity must be removed by adding brackets, for example. Better go on holidays without worrying about this non-problem.” [1]

I find it pretty baffling that this post reached so high on the front page.

1. 3

I find it pretty baffling that this post reached so high on the front page.

This phenomenon exists on ‘hacker’ ‘news’ too, where articles related to ‘math’ are upvoted highly but there are scant comments. My personal opinion is that folks on both sites embellish the idea of ‘math’ but don’t actually understand enough about the post to contribute. E.g. ‘upvote the idea, but any discussion escapes me.’

1. 4

Ironically, this exact submission on HN has very low engagement: https://news.ycombinator.com/item?id=20613244

2. 8

Agreed. People usually use the convention that concatenation represents some common operation, and this ‘common operation’ is then pretty arbitrary: 2x represents 2 * x, 1½ represents 1 + ½, and D f(x) represents the derivative of f at the point x. There are lots of ambiguities and weird things, but it mostly works out because of context and conventions. And note that these notational ambiguities are examples from math, where things are defined quite rigorously compared to other fields.

Resists the urge to rant about questions on assessments

1. 8

but I know a lot of physicists read the latter.

They would probably write it as

     1
    ----
     2x


though.

1. 5

Maybe there is a different convention at play here. My mental grouping when I read 1/2x or y/2x is different from when I read 1/2*x or y/2*x. I imagine y/2x to be similar to \frac{y}{2x}, while y/2*x is not read that way.

1. 4

Indeed…

In at least a handful of respected academic journals, textbooks, and lectures, multiplication denoted by juxtaposition (also known as implied multiplication) is interpreted as having higher precedence than division. This makes sense intuitively, but most decent calculators have no truck for it, and doggedly follow the left-to-right order for division and multiplication.

So journals and textbooks say to do one thing, the intuitive thing even, but because some calculator lacks a button for it, we just ignore all that? How sad the state of mathematics that the expression of ideas is limited by such mechanical devices.

1. 3

Sad or not, it is how things have always been. If you can’t express an idea in your computation systems, you are unlikely to express it outside of those systems. We use calculators and computers because they’re useful, but they come with a cost nonetheless.

1. 2

So basically there were no mathematical ideas before computers and calculators, and there is not a single mathematician who does math without them? I really wonder how I did nearly all my studies in math without using calculators or computers.

1. 2

More broadly speaking, the tools we use to do math shape the math we choose to do and how we do it. This was true of the slide rule, the abacus, Napier’s bones, compass and straightedge, chalk and slate, and the clay tablet.

2. 2

These problems (and others, like ‘which bucket fills first’) are created specifically to generate arguments, because the more comments/reactions a post gets, the more algorithmic reach its creator gets on Facebook/Twitter.

1. 15

Apart from buggy syntax highlighting, broken scrolling and others

It is explicitly advertised as “Pre-alpha - not yet usable!”, so picking on bugs doesn’t seem especially fair to me.

Want to contribute to Onivim? Don’t. They make a profit out of your contributions.

Vim was used to write Google, and they make billions and billions off that. Is that not worse than spending a few bucks for someone’s time?

I don’t really have any opinion about this OniVim thing. Perhaps it’s great, perhaps it’s not. But it’s clearly people spending time writing code. What’s wrong with paying them?

We really need to get away from this “zomg making profit from code is bad” attitude. The “please please please donate”-model doesn’t work very well, and it’s time for some new options. The “time-delayed license” doesn’t strike me as a good option for various reasons, but the article doesn’t state any of them. It just goes “profit bad!” Not very insightful.

If you want to write really good software you need to spend time. Quite a lot of it. Right now writing free software is often like a job, except that you don’t get paid.

Imagine if the supermarket worked this way: “this bread is €2, but you can also take it for free, if you want”. That would be an unthinkable business model: people still need to actually make the bread, and they’re not going to do it in the evening after their day job. Software can be distributed for free – so it’s not exactly like bread – but people still do actually need to make the software.

1. 3

I think the argument might be made that Vim was created as a hobby project and then transformed into a charitable one: the popularity of Vim is used as a vehicle to raise awareness and increase donations for a charity. In many ways, then, Vim is a charity project before it is an open source project.

With that being said, piggybacking off of Vim is “worse” than piggybacking off of other software. Hobbyist software exists to be used, to raise the profile of the author(s), etc. The “mission” of that software is to be used. The “mission” of Vim is to raise money for needy children in Uganda. There’s nothing wrong with wanting to be paid for your software if that’s what you want, but that’s not what Vim’s authors wanted. If the authors of Onivim were to, say, donate 10% or so of their proceeds to those children, I would be 100% on board with this…but it at least appears as though they are taking a project designed to help charity and making a profit from it.

(Note that I’m playing Devil’s Advocate here. I’m not particularly invested in either side of the debate.)

1. 3

Vim was used to write Google, and they make billions and billions off that. Is that not worse than spending a few bucks for someone’s time?

I don’t understand this argument.

1. 3

I don’t think Vim was created for the Uganda charity; it just so happens that Bram cares about both. But you’ll have to ask Bram to be sure.

I’m also not so sure Onivim would really take away a significant chunk of the donations. It’s not that they get that many donations anyway (I did a detailed summary a while ago).

Either way, the linked post doesn’t make any of these arguments; it merely asserts that profit==bad.

1. 2

Either way, the linked post doesn’t make any of these arguments; it merely asserts that profit==bad.

Profit is bad. The only way people get rich is off the unpaid wages of the workers.

1. 2

Whenever my wife asks what I want to do today, I always say “help realize class consciousness and establish a dictatorship of the proletariat.”

We’re still married after 11 years so I’m assuming she either agrees with me or has given up.

2. 1

In many ways, then, Vim is a charity project before it is an open source project.

Before I read this, I had never seen Vim as “a project designed to help charity” and I hadn’t even heard of the Uganda thing. And I’ve used Vim for … many years. (So, arguably, the existence of Onivim brings more attention to this charity.)

There’s nothing wrong with wanting to be paid for your software if that’s what you want, but that’s not what Vim’s authors wanted.

Is this actually in Vim’s license? To me, it just seems as though Vim’s authors don’t want to be paid themselves for their work; there’s no indication that they think this should apply to everyone else.

3. 3

I think you misunderstand me. By not supporting Onivim, I mean, not making contributions in the form of issues or pull requests. I never mentioned anything about the profit they make from distributing Onivim because you can make your own free (as in price) builds.

What you are suggesting is, “Open source software doesn’t make money, go proprietary instead!”. This is simply not the way to go, there is no sense of community here. Devs should try to sell the service and not the product. This is a tried and tested model, followed by the likes of RedHat and IBM. Please do look at business models of open source projects, they exist.

I also want readers to realize that, Onivim was born out of free (as in freedom) and open source projects like neovim (oni1 was a gui for neovim) and vim.

A couple of other popular misconceptions in your post:

1. There is nothing stopping devs from earning from free (as in freedom) software. Donations aren’t the only source of income.
2. Their proprietary license prevents other devs from contributing, and goes against the spirit of open source. If my pull request doesn’t get merged for some reason, there is no way for me to share my version with others!
3. The bread analogy does not work. Software is different from bread: you can make copies of software. So the supermarket would say, “here are the ingredients (source code), make it yourself, or purchase one for $2; feel free to add new ingredients and share it with others!”

I didn’t quite understand this:

Vim was used to write Google, and they make billions and billions off that.

1. 6

Devs should try to sell the service and not the product. This is a tried and tested model,

Let’s say I write a super-secure PNG decoder library. It is faster than libpng, is a drop-in replacement for libpng, and has zero security flaws. How do I sell that as a service? There is a real funding problem for software infrastructure that cannot easily be made into a service.

As someone interested in bootstrapping (e.g. making a product and selling it on the side), I’ve gradually realized that programmers in general are an awful target market. They’re averse to change, don’t understand the value of their money WRT time, don’t always have purchasing power, and heavily favor low-quality/free solutions (e.g. OSS).

FWIW I’d pay for a copy of Vim that I didn’t have to screw around with for hours in order for it to be pleasant. This is coming from someone who has used Vim for a long time. With each passing year I detest the “infinite configurability as long as your time is free!” idea, because my time is never free, and I’d rather be actually making things instead of configuring software to help me write software.

1. 1

heavily favor low-quality/free solutions (e.g. OSS).

If you think that a FOSS project and a closed-source program are comparable goods in any meaningful way, then you are failing to understand the products in question. Perhaps before you blame your consumers, you should evaluate what their incentives are, and what the product provides differently other than “free as in money”. You can’t meaningfully break into any market with the attitude of “the consumers are wrong”; instead, you need to actually evaluate why they hold the opinions they do and what shapes their preferences.

1. 1

That’s why I’m not actually a bootstrapper.

But I do see these threads, and there’s a crabs-in-a-bucket mentality where people get all weird at the idea that they’d have to pay for things, especially around dev tools.

1. 1

I think the fear of selling the product isn’t rooted in having to pay for something, but rather in the product and the code that gets run on your machine becoming a trade secret. The other aspect is that when code is locked down, if the business owner goes away or sells the company, I cannot rely on that tool anymore. If the tool were instead open, theoretically I could get many more years out of it. Emacs is 43 years old, and I would not be surprised if, 43 years later, it still has a bustling community.

1. 1

That’s fair. How do I sell dev tools that aren’t cloud based, then?

1. 2

Service does not mean “cloud”. Service can be support. Service can be a tailored solution. There are a lot of ways you can go. Red Hat for example is not strictly speaking cloud based.

1. 1

There are a few successful (that is, money-making) products in this category which have a ‘source available’ pro edition (React on Rails, Sidekiq).

2. 1

You lost me at “drop-in replacement”. It needs to require, or at least warrant, some service, and be good enough to be worth it.

1. 3

I’m sure imgur.com might appreciate it, as their business relies on accepting potentially malicious input.

Why wouldn’t we pay money for good software components? What’s the difference between charging money for access to an API and integrating a paid-for component into a larger system?

1. 1

Ok, sure, if your lib is closed-source, works well and is api-compatible. Then you sell licenses, not a service.

And I’m ok with paying money for good things :)

2. 1

Offer a support contract. I think many businesses would go for a PNG decoder with paid support over an otherwise-identical PNG decoder with a license fee.

2. 1

Devs should try to sell the service and not the product. This is a tried and tested model, followed by the likes of RedHat and IBM. Please do look at business models of open source projects, they exist.

Super profitable model for big corporations.

3. 1

Heh. Using bread for your example is quite pertinent.

1. 1

I can’t see how it’s unfair. There’s a difference between releasing a buggy product that everyone can contribute to and benefit from, and selling a buggy proprietary product. If it’s not usable, why are you even selling it to begin with?

1. 1

Well, using an open source tool the way it’s supposed to be used is something else than extending it and selling it. That being said, if Bram wanted to avoid this, he should have used the GPL or something similar (which basically states that you’re free to use, modify, and distribute, as long as you publish your code under the same license).

1. 1

Genuine question: Was calling to order a pizza not an option?

1. 6

No* it wasn’t.

*Yes, but it would have cost more. They had an online only discount.

1. -1

I would think so, however it’s drastically less convenient.

1. 7

Been a while since I posted in one of these threads :)

I burnt out and stopped working last summer, and now it’s looking like I should probably start again. If anyone needs low latency C++ or graphics programmers, hit me up!

In the meantime, I’ve been working on a game with a friend. We forked from an existing game (itself a fork of Quake 2) because we thought it would be easier to start from a complete game, but the engine was in a very bad state so most of my work has been spent modernising it and cleaning things up. So far I’ve:

• Deleted nearly 200k lines of code, or about 60% of the codebase
• Set up CI for everything, including prebuilt third party libs which I store in the engine repo and statically link
• Replaced cmake with a far simpler Lua/ninja build system. The build system is in the repo, so you can clone and build on any PC with no fuss
• Added support for easy ASAN builds
• Reduced compile times by about 80%
• Converted the codebase from C to C++11. I don’t like most of C++ and I’m certainly not going full C++ best practices, but things like operator overloading/constexpr/safe array count/sane strings/simple templates are nice to have
• Upgraded the renderer from OpenGL 1 and extensions (for real) to OpenGL 3 with a modern GL loader
• Downgraded the renderer to GL2.1 plus most of the extensions from GL3 because some DX10 level Intel GPUs claim to not support GL3
• Added profiling infrastructure based on microprofile
• Deleted the UI, which was a libRocket (buggy, slow CSS2/XHTML library) and AngelScript (meh scripting language) abomination, and replaced it with imgui. -25k LOC of engine code, and removed a gigantic third-party lib
• Rewrote the sound system. 11k LOC down to 400 LOC, and dropped some deps. The new sound system has fancy HRTF directional audio, which everyone loves. It had a nasty bug which took a long time to figure out (and two lines to fix): if you remove-swap elements from an array, sometimes you need to special-case removing the last element, which gets swapped with itself :(
• Added support for glTF. The old model formats (MD3/IQM) are difficult to work with, especially IQM because absolutely nothing supports it, so glTF support means we can actually get content into the game now. Weapon models are still in MD3 and are hard to convert. MD3 doesn’t have skeletal animation, so they are split into a “hand” model, which is empty except for one animated joint and is used by the engine to play firing animations and such, and the actual weapon model. Some guns also have a “barrel” model for any moving parts, and the joints for those animations are defined by hand in a text file. It’s a big mess and very difficult for us to work with! glTF has lots of annoying things and is difficult to implement, but it’s widely supported by tools, and the only other realistic option is FBX, which sucks more. cgltf is really good.
• Wrote a new memory allocation system. I did a tracking allocator (basically std::unordered_map< void *, AllocInfo >) and a temporary allocator. The temporary allocator allocates two big blocks of memory at startup, which I then use as memory arenas on odd/even frames. So I can allocate temp stuff and it will automatically get freed at the end of the next frame. Simple to add ASAN support to it too
• Replaced the animation system. The old one was terribly complicated so I just deleted it, the new one is very simple: Classify model pose for the given frame (running/jumping/etc) -> allocate memory for pose on the frame arena -> sample animation into temp memory -> blend multiple poses together as needed (e.g. walking and also holding a gun) -> apply pose adjustments for looking up/down and leaning -> send pose to the renderer -> pose is automatically freed.
• Tons of random gameplay tweaks and fixes

And over the next few months I hope to:

• Drop MD3 support and use glTF for everything
• Replace the text renderer. The current one is a lazy atlas generator, so whenever you use a new font size or a new glyph it has to go render it and upload it to the GPU. The implementation is also a huge mess. I’m in the middle of writing a new text renderer based on signed distance fields, so we only need one atlas and can do nice effects like adding borders. It’s been hard because SDF was a meme technology some years ago so most of the implementations and info you can find online are complete garbage, and msdfgen is really difficult to use. But I got my test renderer working and now I’m working on properly integrating it with the engine
• Replace the whole renderer. It’s a huge mess and very difficult to work with. I nearly gave up on the whole project like 3 days in when I saw how bad it is. It’s been slowing us down since the beginning, but I’m nearly at the point where I can delete it and start fresh
• New fancy pants rendering techniques like teammate outlines and decals that don’t need you to clip polys against the world and uploading things to the GPU and drawing them without ridiculous boilerplate
• New asset system. The current one is very complicated and slow and supports things like streaming (i.e. random hitching) and reference counting, which is all very pointless because the assets folder is 50MB. I want to rewrite it to load everything at startup in < 1 second, and then hotload things when they change on disk to make development easier. I also want to refer to assets by hash rather than by name, so we can use compile time string hashing and simplify some server->client communication (right now the server sends a big list of asset paths it plans to use then sends indices into that array, easier to just send hashes)
• Modernise the engine’s standard library. typedef vec_t vec4_t[4]; and 500 macros to operate on it is not cool
• Steam and/or EGS release :)
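The compile-time string hashing mentioned for the asset system could look roughly like this. FNV-1a is just one common choice, and the asset paths are made up; C++11 constexpr forces the recursive form:

```cpp
#include <cstdint>

// Compile-time FNV-1a over a null-terminated string. Because the hash is a
// constant expression, asset references can be baked into the binary as
// 8-byte hashes, and server->client messages can send hashes instead of a
// big list of asset paths plus indices into it.
constexpr uint64_t fnv1a(const char *s,
                         uint64_t h = 14695981039346656037ull) {
    return *s ? fnv1a(s + 1, (h ^ (uint8_t)*s) * 1099511628211ull) : h;
}

// evaluated entirely at compile time
static_assert(fnv1a("models/gun.glb") != fnv1a("models/hand.glb"),
              "distinct paths should hash differently");
```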

It’s a lot of work and everything takes longer than I would like it to, but what I’ve done so far has been the result of however many years of learning and working on engines for fun. It’s been much easier to stay motivated working on an actual game rather than one of my toy engines which will never become a game, and what I’ve done so far has mostly been very high quality. After burning out a year ago I’m relieved and proud to see that I still got it!

1. 1

Your summer project sounds like it would burn me out ;) On a more serious note: I’ve never been burnt out, but I imagine it’s pretty shitty. I’m happy you’ve (almost?) recovered.

Sounds like a cool project. Looking forward to seeing more of it!

1. 2

Athletics training, groceries, cleaning, and canoeing with my girlfriend. Maybe some coding.

1. 9

Installation: make

Delightful! Never heard of Frank Denis before, but he seems like a kind of Batman to the open source community. Shooting fashion photography by day, making crypto software at night.

1. 2

He is also on Twitter 24/7 sharing great stuff. I have been following him for years.

1. 13

Someone for whom it’s equally easy to shift the building up and put a new foundation down as it is to change the colour of the door. Someone who can easily reach past the layers of wood and insulation to add a new room below.

If you don’t utilize the advantages of software (easy to delete, easy to redo, easy to reuse) when building software, then that’s like treating your helicopter like a car and only flying over roads, and waiting for the cars on the road to move before you move. The helicopter is not a car. It doesn’t have to follow the road. Software is not a house. You can fix the insulation directly.

In software, a near-complete spec could be the program itself.

1. 7

I think that’s only true for small projects. Facebook found it easier to extend PHP than to rewrite their legacy code, and most banks are still stuck on mainframes for the same reason.

1. 3

Having a vast cabinet full of blueprints will merely mean that before you can change anything, you first have to find and change the blueprint, a task which is as hard as, if not harder than, changing the code.

Especially as, odds on, the blueprints are out of date since nobody does roundtripping.

Or you can look at the problem differently….

The most precise and concise and readable and understandable representation of software is the code.

Of course, if you start with a crap representation (aka PHP) and find yourself at the bottom of a hole…

What’s the rule about getting out of holes you have dug yourself into?

STOP DIGGING!

1. 1

The most precise and concise and readable and understandable representation of software is the code.

That’s like saying “The most precise and concise and readable and understandable representation of a novel is its text”. It’s precise and complete, but usually not concise. And sadly, it is often the only way to find out how it works. In many projects I would have killed for a 10-line description of the architecture and how it maps to source files.

1. 1

You need to learn to skim the data structures… and find where the I/O end points are.

Give me those things and I will navigate via the declaration/reference graph around the system like a monkey swinging on a jungle gym.

OOP gets a fair bit of hate, but I can read the instance variables and immediately say, aha, I have a clue what all this code in this class is doing.

And I will be faster and more accurate than somebody with a cabinet full of blueprints.

2. 2

Software is a lot worse than a house. The space a house lives in is 3D. The space of a software program is infinite-dimensional. You can hide all the bugs in one corner and never notice.

1. 2

Metaphors are imperfect by definition. I think it’s more constructive to look at the complete analogy and metaphor and see which aspects map nicely and which don’t. Obviously, a programmer intimately familiar with a particular codebase might indeed be able to change or implement certain features quickly. In other cases, he might not be able to. I think so far the metaphor holds pretty closely. I would distinguish between tasks that require no deep understanding of the architecture (painting a wall, changing a message), and tasks that do (removing a wall – is it a load-bearing wall? – changing a method in the common path – does it break an invariant?).

One difference, I think, is that progress is a lot more visible in construction. In software, you might think you’re almost done with something, but when you have a closer look it might turn out to be impossible to do after all. It’s also a lot harder to mess things up permanently, thanks to source control. This is a big reason why construction requires more planning: If you seriously mess it up, the house might collapse. If you mess up your software, you get crappy software. The difference is that people can live with crappy software (facebook notifications don’t go away on mobile, I can’t disable linkedin job alerts, twitter fails at loading pretty much anything), but they often die when they are in a house that collapses. Indeed, in areas where it matters (cars, aviation, aeronautics, security, large-scale manufacturing), people tend to be a bit more conservative about ‘moving fast’ (and they certainly won’t encourage you to ‘break things’). Indeed, these are areas where formal methods are more often applied than in the rest of the industry.

Another difference between construction and software engineering is that (I think) it is easier to spot obvious mistakes in construction. In software, you might have an obvious mistake, but it might be hidden in a big codebase.

I think the metaphor holds pretty closely when you compare projects of similar size (of course, comparing the ‘size’ of a construction project and a software project is terribly ill-defined). The bigger and the more mission-critical your project, the more you need to plan ahead and ensure safety (which sounds surprisingly much like common sense).

1. 1

That is true until physics gets in the way. When a system is concurrent, distributed and runs mission-critical logic, the cost of moving/shifting/changing the software of the system may be considerably higher than using formal methods to specify it.

Mr. Lamport mentions an interesting example of a chip designed by Intel for the Xbox. It had a bug that was discovered with formal methods, prior to production. Imagine you had 1 million chips deployed all around the world with a bug. So yeah, software systems are subject to physics too, one way or another.

1. 4

Nice article. The point is conveyed very well; it helps that the article is aptly named. However, I still miss the answers to the “why?” and “how?” questions.

It is mentioned that this behavior is not implemented to annoy the user. I believe that, but I still can’t come up with any other reason. It seems like the compiler is able to detect access to uninitialized memory, but instead of outputting an error or a warning it implements all the logic on uninitialized values as “hah, I can do whatever I want here”. I expect the reality to be more nuanced, but I don’t see how from this article. Now, I am not the most careful reader in the world, and I may have missed something.

On the other hand, you’d expect C compilers to at least have an option to insert checks to avoid undefined behavior as well. For example, insert assert(x < 255); before the return in

#include <stdbool.h>
#include <stdint.h>

bool foo(uint8_t x) {
return x < x + 1;
}


But C also chooses to not warn the user of undefined behavior. Again, there are probably reasons for this, but it doesn’t feel very sane to do this. Now you just can’t be sure that your program does not rely on undefined behavior to function correctly.

1. 3

I actually think an uninitialized value should be a random bit pattern. But here is an example of the complications. Let’s call the uninitialized value “poison”, and the random bit pattern “undef”. One way to implement poison is to set the output to poison if any of the inputs is poison. But you can’t do that with undef! If undef is a random bit pattern, you are forced to do bit-level tracking. undef << 1 is not undef, because its LSB is 0, and (undef << 1) & 1 must be 0, not undef.

These days, all optimizing compilers do bit-level tracking anyway, but it was a significant burden in the past.
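The bit-level tracking being described is essentially a “known bits” analysis (similar in spirit to what LLVM’s computeKnownBits does, though this is only a toy sketch over 8-bit values). It shows concretely why (undef << 1) & 1 is provably 0 rather than undef:

```cpp
#include <cstdint>

// Each abstract value tracks which bits are known and what they are.
// A fully undef value has no known bits at all.
struct KnownBits {
    uint8_t known; // mask of bits whose value we know
    uint8_t value; // the values of the known bits (only meaningful where known)
};

KnownBits shl(KnownBits x, unsigned n) {
    // shifting left makes the vacated low n bits known zeros
    return { (uint8_t)((x.known << n) | ((1u << n) - 1)),
             (uint8_t)(x.value << n) };
}

KnownBits and_(KnownBits a, KnownBits b) {
    // a result bit is known if either input has a known 0 there,
    // or both inputs are known
    uint8_t zeros = (a.known & ~a.value) | (b.known & ~b.value);
    uint8_t ones  = (a.known & a.value) & (b.known & b.value);
    return { (uint8_t)(zeros | ones), ones };
}
```

With this, an all-undef input shifted left by 1 has a known-zero LSB, so ANDing with 1 yields a known 0, exactly as the comment argues.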

1. 2

The “how?” is easy: C compilers track extra metadata about values when analyzing the program.

The “why?” is optimization. Compilers first generate a naive unoptimized version of the program, and then optimize it in passes. Generating fast code from the start has been tried, but multiple passes simplify the compiler and give faster output.

So if we agreed that uninitialized memory isn’t “undefined”, but is some specific garbage pattern, then this:

int x, y, z;


in the first pass would have to generate some code that saves these random-but-specific values to make them behave non-magically later (e.g. allocates 3 registers or pushes 3 things to the stack). That’s obviously wasteful! So we’d like this to be optimized out (or moved to be an “accident” later), even if these values are not assigned to in all paths in the function. And the easiest way to explain that to the optimizer is to give them a magic “undef” value.

Remember that when the code is being optimized, the machine code hasn’t been generated yet. Program sections haven’t been laid out yet. Registers haven’t been allocated yet. The only thing that exists is your program’s intermediate representation, so you can’t make the optimizer do “whatever the machine does”, because it’s in the process of deciding what the machine will be told to do.

1. 1

What if we agree that “unspecified” means that the memory is initialized to some unknown bit pattern and the compiler doesn’t need to know?

1. 1

Thanks, that helps a lot. Still, I don’t understand why a compiler would happily compile

fn always_returns_true(x: u8) -> bool {
x < 150 || x > 120
}


to a function that always returns true. Why would it be fine with comparing an uninitialized/undefined value? I still don’t see a sensible reason to not output an error or at least a warning. Unless I’m missing something, this is exactly the kind of reasoning that has led to the abominable situation with undefined behavior in C.

1. 2

It’s an effect of the optimizer having only a local view of the problem. Optimizing anything-you-want < 150 makes sense on its own. Why generate a comparison instruction if you’re allowed to hardcode it to false?

Such nonsense code may happen in legitimate situations. For example, it could have been if status < OK to do error handling in a function, and the function has been inlined in a place where the compiler can see the error won’t happen, so it can remove the whole error handling code.

There’s potential UB in pretty much every correct C program. For example, int + int is UB if the values overflow. The compiler can’t warn about every use of +! But it does take advantage of the UB to avoid generating sign-extend instructions, and to simplify arithmetic and loop end conditions.

https://gist.github.com/rygorous/e0f055bfb74e3d5f0af20690759de5a7
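A minimal sketch of the int overflow point: because signed overflow is UB, an optimizing compiler is typically free to fold the comparison below to true, skipping both the addition and the compare. This illustrates the general principle, not a guarantee about any particular compiler:

```cpp
// For signed int, x + 1 can only fail to be greater than x if the addition
// overflows, and signed overflow is UB, so the compiler may assume it never
// happens and fold the whole body to `return true`. With unsigned (wrapping)
// arithmetic the same fold would be wrong: UINT_MAX + 1 wraps to 0.
bool always_greater(int x) {
    return x + 1 > x;
}
```

The same assumption is what lets compilers simplify loop end conditions like `for (int i = 0; i <= n; i++)` without worrying about `i` wrapping around.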

1. 2

Thanks for your answer. I gave it some time but I’m still not convinced that this is useful behavior.

It’s an effect of the optimizer having only a local view of the problem. Optimizing anything-you-want < 150 makes sense on its own. Why generate a comparison instruction if you’re allowed to hardcode it to false?

I can see that in general, undefined behavior allows some optimizations. The particular case of comparing with a variable that is guaranteed to be uninitialized just seems to be something that deserves a hard error. I can’t imagine this being useful in any way.

1. 10

While many experienced programmers can write correct systems-level code, it’s clear that no matter the amount of mitigations put in place, it is near impossible to write memory-safe code using traditional systems-level programming languages at scale.

This is actually something I can agree with. I’m glad to finally hear a more nuanced view than “Hurr, durr, it’s impossible to write correct software”.