The weekly thread to discuss what you have done recently and are working on this week.
Be descriptive, and don’t hesitate to ask for help!
Workwise it’s Thanksgiving week so mostly going to be reviewing documentation and architecture proposals.
I’ve got some really exciting news: I just signed a contract with Apress! If everything goes well we’ll be publishing Practical TLA+ by the end of 2018. I plan to spend this week fleshing out the outline, getting familiar with their tooling, and writing the introduction. I’m incredibly excited about this chance to make TLA+ just a little easier to learn :)
Holy crap, dude, that’s awesome! There are a few Lobsters learning TLA+ right now thanks to your prior work. Perhaps they might be a sounding board or even have ideas for parts of your book. Especially in terms of examples of real-world issues TLA+ could knock out.
You might even consider collecting any you see published out there with authors’ permission to put in an online collection/appendix that goes in the book. The best, relatively-short examples would be in the book itself.
I’ve been experimenting (in the context of making a low-level Windows front-end for xi-editor) with making a desktop UI that sends layers to the compositor, rather than rendering complete frames. Based on my experiments so far, I’m encouraged that the performance (latency and smoothness) will be qualitatively better than anything else out there. I think performant desktop UI is something of a lost art; all the attention seems to be in other areas.
I’m a little uncertain how much public noise to make, or whether I should just crank on this on my own until it’s a usable client (might be a while, since I’m interleaving this with lots of other things).
I don’t think it’s totally lost, but I really only hear noise about it from the video games space, which has mostly settled on the immediate-mode model for UI. There are a handful of projects like imgui but I think it’s mostly hand-rolled.
Make at least a little noise; marketing doesn’t have to be all-or-nothing. And if you start small, you’ll mostly only attract early adopters happy with incomplete experiments while planting those “oh yeah, I’ve heard of xi, I should check that out” seeds.
Agreeing with “make a little noise while attracting early adopters”. Check out how @crazyloglad has been writing up his work on Arcan.
Still churning away on my speedrun timer app for macOS. (last week) It’s already functional, just fleshing out the last pieces of functionality. I’m building a history browser now, still need a global shortcut kinda thing to ‘split’, and I want it to sync with a service I’ve built.
The whole masterplan behind this was to provide a better testbed for the Twitch video overlay I’ve built. There also don’t seem to be many apps in this category targeting macOS, which is a nice plus. :)
As for Cocoa, I stand by my earlier statement that it’s a decade regression. I basically find myself subclassing everything, and writing tiny methods that poke at view objects to update UI. There’s quite a bit of hacky overriding of methods. I now have >10 classes for what I feel should be a relatively small app.
On the upside, Cocoa does take care of the burden of document handling. The autosaving system on macOS is quite nice, and you basically get it for free, all the way up to iCloud support.
All in all progress is steady, and I enjoy working on it.
Released an update to Transmitter recently. It’s a WebExtension that talks to the Transmission torrent client.
Working on adding FreeBSD support to u2f-hid-rs, it already works but not reliably :(
I took the week off for Thanksgiving to get three things done:
The last item is in the process of driving me slightly crazy. Due to the way that objects are defined, it’s turning into a bit of a puzzle as to how I can get a representation of the source that allows for easy code generation.
When you started the Haskell book, did you have no experience with it, or already some? And if little experience, how did that work out for you in terms of understanding enough to read or contribute to a FOSS codebase written in Haskell? I try to get feedback on these things to identify which books are worth passing on to others who ask me for beginner resources. A few people said they liked that one, but without much detail.
I had a moderate amount of experience with Haskell. My alma mater’s comparative programming class used it for the majority of the assignments.
Oh ok. So, do you think it takes things a piece at a time enough for beginners, or should they probably start with a different book?
I do think it would be an ok book for beginners.
Still working to get a decent CD chain setup for $CLIENT.
Last week introduced an interesting interaction with @andyc, trying the shell script builder under Oil Shell. This week I’m hoping to get a small improvement landed and then ideally get the packages published.
Several admin tools build on the framework the aforementioned shell script builder is part of. Hopefully this week I’ll be able to get some of them closer to finished.
Still car-less, now waiting on $MECHANIC2 to analyse the damage and identify/quote parts needed.
I’ve put my Blog and Configuration Language on the shelf and tinkered a bit on a small kernel.
UEFI really makes it easy to write a small kernel: you boot straight into paging-enabled 64-bit kernel space. Very neat. If this ever takes off I’m going to steer it into modular-kernel territory (a microkernel, but with the drivers running as processes in ring 0 instead of ring 3, eliminating syscall/IPC overhead) and make mostly boring choices (capabilities, a simple task scheduler, pragmatism over being fancy).
Otherwise I’ve been thinking about how to redesign my Blogging Engine since the current efforts are turning into a churn. I’m thinking about using git as a database so I can simply push blog entries and manage comment moderation via pull requests/issues.
I’ve been building delimited continuations in Mu, my Basic-like statement-oriented language. Now I’m trying to find bugs in it by using continuations in programs. I’m not fluent with continuations, so any suggestions for programs to write are most appreciated. Currently I’m building coroutines, using Simon Tatham’s famous post as a guide.
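For anyone following along outside Mu: the producer/consumer pair from Tatham’s post makes a nice first test case. Here’s a rough Python sketch of that shape, with generators standing in for continuations (the toy run-length encoding and all names here are invented for illustration, not Mu code):

```python
# Producer: a decompressor that yields characters one at a time as it
# expands a toy run-length encoding of (count, char) pairs.
def decompressor(data):
    it = iter(data)
    for count in it:
        char = next(it)
        for _ in range(count):
            yield char

# Consumer: a parser that pulls characters from the producer on demand
# and groups them into words split on spaces.
def parser(chars):
    words, current = [], []
    for c in chars:
        if c == ' ':
            if current:
                words.append(''.join(current))
            current = []
        else:
            current.append(c)
    if current:
        words.append(''.join(current))
    return words

# RLE stream: 3 x 'a', 1 x ' ', 2 x 'b'  ->  "aaa bb"
print(parser(decompressor([3, 'a', 1, ' ', 2, 'b'])))  # ['aaa', 'bb']
```

The nice property (and the thing worth stress-testing with continuations) is that neither side owns the main loop: control bounces between producer and consumer at each yield.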
Kind of unrelated, but out of curiosity, what’s the message scheme you’re using for your commit messages? Edit: Also, mu sounds quite interesting.
Thanks! When I see interesting projects on Github I tend to scan their logs to get a sense for activity, coding practices, etc. So this approach started out as a way to help people who might be like me. (For most people, the log for a random project on the internet is just not on the radar.)
I like being able to refer to commits in later commits, thereby threading a conversation. Using a number rather than a hash allows me to get a sense for how distant a reference is. And since I’m developing alone and on a single branch, non-linearity hasn’t been a concern. If that ever changed and multiple people started doing parallel development, I wouldn’t be too concerned about dropping commit numbers.
Many of my commits are pretty low-level details. Saying nothing on the subject line is a way to make them less salient, so that the eye focuses on commits like “starting work on delimited continuations” or “done with delimited continuations”.
I like not having to come up with commit messages for trivialities like renaming a variable, or regenerating browseable versions of the sources. Since I’m mostly building for myself, I don’t have to put up with arbitrary rules about mandatory commit messages :)
I really like those points—and thank you for taking the time to explain it.
It’s bothered me a bit too that hashes don’t really give away any information by themselves, so I think I might start adopting a similar pattern. Causing more focus to be given to other commits is an interesting point! I have found it overwhelming sometimes when there are commit messages for the sake of commit messages, and it can take longer to parse a log.
I stumbled onto your blog post about ‘Habitability’ and found it also quite interesting so far! Just subscribed to your RSS feed.
Things I have to do this week - mostly coursework:
I’ve been speaking to some of my friends about further embedded stuff, but it needs to take a bit of a back seat for this week. Between uni work, extracurricular work, and my non-tech stuff, I’m struggling a bit to keep my head above water.
Trying to get F# Azure Functions working. Then the plan is to switch back to learning TLA+.
Turns out some of the stuff in the clipmaps paper I thought was unnecessary is actually necessary. Specifically, the two-square-wide tiles are needed to prevent big tiles from overlapping (they can be one square wide, but that makes trim placement ever so slightly more complicated), and the L-shaped trim tiles are needed because if you try to do stretchy seams you end up with missing triangles in the corners.
I’m quite annoyed that I had to find out the hard way so I’ll probably end up rage blogging a clipmaps overview/tutorial when I’m done.
I still need to fix a few bugs and fill in the seams, but the seams should really be easy now the trim is in place. It’s looking a lot better now the holes are mostly gone: https://i.imgur.com/ssyXO94.jpg
I got the go ahead to sell the compression algorithm I’ve been working on in the hackathons at work. The product is good and valuable but I really have no idea how to do the business side of things. Do I need to start a company to be able to sell it? Would it be better to start the company in the UK or here in Finland?
Time to start reading barnacl.es I guess.
Reading to understand the TCP/IP stack before trying to develop a user-mode stack. Also continuing with sockets programming, which I started last week; this time I’m going to look at the server side. It’s also helping me with the TCP/IP reading.
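For the server side, the core loop is just bind/listen/accept/recv/sendall. A minimal self-contained sketch, with a throwaway client in the same process so it runs standalone (the port-0 trick and all names here are just for the example):

```python
import socket
import threading

def serve_once(srv):
    # Accept a single connection, echo whatever arrives, then close.
    conn, _addr = srv.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Listening socket; binding to port 0 asks the OS for any free port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(('127.0.0.1', 0))
srv.listen(1)
port = srv.getsockname()[1]

t = threading.Thread(target=serve_once, args=(srv,))
t.start()

# Client side: connect, send, read the echo back.
with socket.create_connection(('127.0.0.1', port)) as c:
    c.sendall(b'hello')
    reply = c.recv(1024)
t.join()
srv.close()
print(reply)  # b'hello'
```

A real server would wrap the accept in a loop (and eventually use `select`/`epoll` or threads per client), but this is the skeleton everything else hangs off.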
For TCP I liked the blog posts that Julia Evans did about them.
You probably already know that but a classic for Socket programming is Beej’s Networking Guide.
Workwise: only in the office a single day this week; I am going to try to close some issues that can’t wait till next week (essentially tweaking some xpaths) and then think about adding a feature our customer success team might be able to use.
Otherwise: travel and leisure, essentially. Lots of eating and down time. I’ll do some reading (I am working through Category Theory for Programmers and the three most recent issues of Poetry) and perhaps some coding. I want to write a simple CMS that serves the site through IPFS, and I am thinking about the best way to do that.
I implemented a three-part file transfer/storage system, with custom encryption per storage communication and a custom protocol.
The initialization handshake is done over UDP, agreeing on whether the storage encryption is trusted, and the file transfer is done over TCP.
There are three programs/parts:
This was challenging and fun to do.
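For the curious, the handshake-then-transfer flow can be sketched roughly like this. To be clear, this is not the actual protocol: the cipher name, messages, and port handling are invented for illustration.

```python
import socket
import threading

# Server: answer one UDP handshake, then receive one payload over TCP.
def server(results):
    u = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    u.bind(('127.0.0.1', 0))
    t = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    t.bind(('127.0.0.1', 0))
    t.listen(1)
    results['udp_port'] = u.getsockname()[1]
    results['tcp_port'] = t.getsockname()[1]
    results['ready'].set()

    msg, addr = u.recvfrom(64)            # handshake: client proposes a cipher
    trusted = msg == b'HELLO xchacha20'   # accept only encryption we trust
    u.sendto(b'OK' if trusted else b'NO', addr)

    conn, _ = t.accept()                  # the transfer itself runs over TCP
    with conn:
        chunks = []
        while True:
            data = conn.recv(4096)
            if not data:
                break
            chunks.append(data)
        results['file'] = b''.join(chunks)
    u.close()
    t.close()

results = {'ready': threading.Event()}
th = threading.Thread(target=server, args=(results,))
th.start()
results['ready'].wait()

# Client: UDP handshake first, then stream the payload over TCP.
u = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
u.sendto(b'HELLO xchacha20', ('127.0.0.1', results['udp_port']))
answer, _ = u.recvfrom(64)
u.close()
if answer == b'OK':
    with socket.create_connection(('127.0.0.1', results['tcp_port'])) as c:
        c.sendall(b'file contents here')
th.join()
print(results['file'])  # b'file contents here'
```

Splitting negotiation (cheap, connectionless UDP) from the bulk transfer (reliable TCP) is a common pattern, since the handshake fits in one datagram round trip.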
This week I’m hoping to finish the final major missing feature before launching our Peergos alpha: the ability to limit a single user’s storage space on our server (whilst still using native ipfs calls for writes). Then things start to get exciting. It’s gotten much faster in recent weeks by optimising an ipfs call, fixing our UI code, and now moving the crypto to a web worker so it doesn’t fight with UI events for CPU.
Last week I found some interesting DoS potential in the cbor parser. Kinda similar to a zip bomb, it’s very easy to encode something in a very small object (which isn’t valid cbor) that will explode the decoder by trying to allocate loads of memory. There’s a relatively easy way to remove this possibility: check in advance the maximum size of anything you’re about to allocate.
Working on Helmspoint, a tool to deploy Keras ML models to the web. Wasn’t able to get to what I said I was going to do last week.
This week, I’ll be:
Trying to finally port the Oil lexer to re2c. The lexer shows up at the very top of the profile.
The very first version of the code, written in C++ in April of last year, successfully used re2c! That was one of the things that led me to think that writing a bash-compatible shell would be possible in a reasonable amount of time. (Although it’s taking more work than expected, it doesn’t seem unreasonable yet :) )
Before doing that, I have to shave some yaks regarding the AST enum types. As of this post last year, there were 219 IDs in 21 Kinds. Now there are 233 distinct IDs in 23 Kinds. That’s a pretty good measure of how large the language is.
These IDs have to be shared between Python and C in order for the lexer to be in C, hence the yak shaving. The code is getting cleaner though.
Nearly forgot the less technical work for this week:
The LSHTM MSc Med Stats inference assignment went out today, so mostly that.
Otherwise, this week is inference, regression, more epi, more clinical trials, more intro bayes.
Oh, and the poor artifex.org server may finally have working spam filtering again, now that it’s back in a colo of sorts.
Trying to convince the G’MIC developer(s) that it’s a good idea to allow distro packagers like me to link all the binaries to a common shared library instead of compiling and shipping the common code for each and every one of the six programs that can be installed by my Gentoo ebuild. The shared library already exists, but it is not used.
So far, I’m failing: https://github.com/dtschump/gmic/pull/29
The funny thing is that I expected resistance so I double checked that the default behaviour is not altered in any way. Didn’t matter.
“Work” has mostly been trying to finish the prepaid SANS cert from my last employer before it expires next month. In between I’ve been seduced by Firefox Developer and its CSS tools to redesign my personal website.
Still wanting to fiddle more with Inferno once I put those to bed.
Also, with moving to a new apartment in an old building along with this discussion, I want to try simulating a 4G connection to see if it’d be acceptable to me for home internet, as I’m tempted to donate to Calyx to avoid having to deal with Comcast.