This is the weekly thread to discuss what you have done recently and are working on this week.
Please be descriptive and don’t hesitate to champion your accomplishments or ask for help, advice or other guidance.
I don’t quite feel like sharing my article with the general link-hungry public yet but it describes the area. Feedback welcome.
In short: Go, X11, and the two combined.
You might like this: https://venam.nixers.net/blog/unix/2018/09/02/fonts-xcb.html.
What a coincidence. It looks like I’ll have to mention xcb_renderutil.h.
Nice blog, I’ll make sure to read it all.
I’m working on a compiler for the X11 protocol. It handles all the extensions and infers enough information to (hopefully) provide a more idiomatic interface than the other binding generators. At the moment I only have a somewhat working OCaml frontend, but I’m hoping to output a rich enough IR to be able to compile to other languages too when it’s done; maybe Go would be a good fit!
That is the approach taken by the guile bindings to XCB.
I know, guile-xcb is actually what got me to start this! There are a few other binding generators for other languages too (I keep a list in the documentation for the one I’m working on), but they’re either unfinished or they don’t output an API that’s quite as usable as I’d like. And, of course, there’s none for OCaml yet.
Back from a bit of a summer hiatus, including a work trip to Phoenix with a short vacation (with hiking!) at Grand Canyon.
With work, I’m writing some Arm assembly in interrupt handlers, which is rather fun. We’re working on something that requires minimal space and cycle usage, so you try to pull out all the hackery you can.
At home, I’m back at the debugging reviews. I’ve been struggling for the last couple of months to put together something on “retro debugging”, and I have read, or am reading, nearly seven books for the review. Each book on its own is mostly uninteresting, but in aggregate they’re much better; the difficulty is the disparity of details and a general lack of familiarity with things from the ’70s and ’80s.
Re: interrupt handlers. Interesting coincidence, given I was just reading this and wondering whether the priority tactic they use is common or clever. I periodically look at interrupt-handling approaches. Some kernels had to turn interrupts off entirely, whereas some of these RTOSes claim to be able to keep them on without losing critical work, thanks to low-latency instructions, prioritized interrupts, and so on. I find it interesting, and it might even be useful for servers.
I’m hacking on NewBusinessMonitor again this week. I’ve just shipped the latest feature, which is custom reports. You go into the dashboard, set your query filters to the types of companies you want to sell your products/services to, save that search, and then NewBusinessMonitor emails you every morning with the latest search results that match your filters.
I’ve had an amazing first week at Y Combinator Startup School, and the people there have given me excellent feedback. One thing I’ll need to do is completely overhaul the website design and the sales copy. I’ll hire professionals to do that — I should stick to programming!
My goal for this week is to send one physical letter automatically through NewBusinessMonitor. How cool would that be? Every morning, personalised sales letters are sent to your target audience automatically. I’m going to use this feature myself to actually grow my customer base.
Tech: Haskell, Elm, PostgreSQL, Redis, NixOps/AWS.
I’ll be working on libp2p integration testing (I work for Protocol Labs). Excited to work towards fully automated interoperability testing between libp2p implementations in different languages.
Heading up to Seattle for ElixirConf.
I’m building an application to catalog my book collection. It’s going to use a webcam to scan ISBN barcodes, look them up at isbndb.com, and save the info to a SQLite database. For books without a barcode I want to scan and OCR the title page and look that up.
The database interface and isbndb.com lookup are implemented and now I’m figuring out the barcode scanning.
The code is on GitHub here and here, but it’s still a WIP.
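If it helps anyone doing something similar: before spending an isbndb.com request on a scan, you can sanity-check the barcode locally, since ISBN-13 is just EAN-13 with a weighted checksum. A minimal sketch in Rust (not from the linked repos, purely illustrative):

```rust
// Validate an ISBN-13 (EAN-13) barcode before querying isbndb.com.
// Hypothetical helper, not the poster's actual code.
fn isbn13_is_valid(digits: &str) -> bool {
    if digits.len() != 13 || !digits.chars().all(|c| c.is_ascii_digit()) {
        return false;
    }
    // Weighted sum: digits at even indices count once, odd indices three times.
    let sum: u32 = digits
        .chars()
        .enumerate()
        .map(|(i, c)| {
            let d = c.to_digit(10).unwrap();
            if i % 2 == 0 { d } else { 3 * d }
        })
        .sum();
    // Valid codes make the weighted sum a multiple of ten.
    sum % 10 == 0
}

fn main() {
    assert!(isbn13_is_valid("9781593278281")); // a real ISBN-13
    assert!(!isbn13_is_valid("9781593278282")); // checksum digit off by one
    println!("checksum ok");
}
```

This also catches most single-digit misreads from a noisy webcam frame before they ever hit the network.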
Finally signed the paperwork to create Ferrous Systems, my new Rust company. Now it’s time for business development :). It’ll take a somewhat different route than my existing company, Asquera.
We already have our first clients, and my existing Rust training business is doing well, too.
Also, back to developing a database access layer (for the fifth time? I’ve thrown away a lot of prototypes).
$work: Start of the academic year for our customers, so expecting a couple of weeks of madness at work. Also looking into using MariaDB with Galera for a multi-primary database layer, and what tradeoffs we’d have to make to use it. (And how it breaks when one node reboots or vanishes entirely, what recovery from backup to a new cluster looks like, etc.)
$personal: Buying a new interior for the Mini so the front seats have lumbar support, so I can drive it for more than an hour without getting backache. Hopefully collecting some more bookcases from a colleague for home too, one can never have too many books.
…one can never have too many books.
Once you have 20 years worth of them and you have to move (again!), you start to think differently. ;)
Moving with a small library definitely does that…
Filing a patent.
While keeping the specifics a trade secret until it’s filed. Classic!
You know how it is! Going public before the application would invalidate the claims. All I can say is that it’s acoustics-related.
Good luck on it!
I’m mostly going to be working on infrastructural pieces (in Rust) for my music game. That includes lock-free queues suitable for embedding in VST’s, some Windows-based GUI logic, and spectrogram generation.
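For anyone curious what “suitable for embedding in a VST” means in practice: the audio thread can never block, so the queue has to get by on atomics alone. The classic shape is a single-producer/single-consumer ring buffer with one index per side. A minimal std-only sketch (illustrative, not the actual project code):

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

// Minimal SPSC ring buffer: one atomic index per side, no locks, so
// neither push nor pop can ever block the audio thread. Capacity must
// be a power of two so masking replaces modulo.
struct Spsc<T> {
    buf: Vec<UnsafeCell<Option<T>>>,
    head: AtomicUsize, // next slot to pop (consumer-owned)
    tail: AtomicUsize, // next slot to push (producer-owned)
}

// Safe to share because each slot is touched by exactly one side at a time.
unsafe impl<T: Send> Sync for Spsc<T> {}

impl<T> Spsc<T> {
    fn new(cap: usize) -> Self {
        assert!(cap.is_power_of_two());
        Spsc {
            buf: (0..cap).map(|_| UnsafeCell::new(None)).collect(),
            head: AtomicUsize::new(0),
            tail: AtomicUsize::new(0),
        }
    }

    // Producer side: hands the value back when the ring is full.
    fn push(&self, v: T) -> Result<(), T> {
        let tail = self.tail.load(Ordering::Relaxed);
        let head = self.head.load(Ordering::Acquire);
        if tail.wrapping_sub(head) == self.buf.len() {
            return Err(v); // full
        }
        unsafe { *self.buf[tail & (self.buf.len() - 1)].get() = Some(v) };
        self.tail.store(tail.wrapping_add(1), Ordering::Release);
        Ok(())
    }

    // Consumer side: None when the ring is empty.
    fn pop(&self) -> Option<T> {
        let head = self.head.load(Ordering::Relaxed);
        let tail = self.tail.load(Ordering::Acquire);
        if head == tail {
            return None; // empty
        }
        let v = unsafe { (*self.buf[head & (self.buf.len() - 1)].get()).take() };
        self.head.store(head.wrapping_add(1), Ordering::Release);
        v
    }
}

fn main() {
    let q = Arc::new(Spsc::new(8));
    let p = Arc::clone(&q);
    let producer = std::thread::spawn(move || {
        for i in 0..100u32 {
            let mut v = i;
            loop {
                match p.push(v) {
                    Ok(()) => break,
                    Err(back) => v = back, // spin until there's room
                }
            }
        }
    });
    let mut got = Vec::new();
    while got.len() < 100 {
        if let Some(v) = q.pop() {
            got.push(v);
        }
    }
    producer.join().unwrap();
    assert_eq!(got, (0..100).collect::<Vec<u32>>());
    println!("spsc ok");
}
```

The acquire/release pairing on the indices is what makes the written slot visible before the other side reads it; a production version would also pad the two indices onto separate cache lines.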
My to-do list for Seaglass (native Matrix client for macOS) is not getting any shorter. I’m going to try to get pagination and a few other things working and see if I can check some items off.
Backpacking in Taiwan with my girlfriend. Have plenty of ideas and things to work on, but my laptop gave up the ghost a few days back in Taitung. Probably gonna head to the Apple store in Taipei to pick up a new one.
I love how you can pick up a SIM card here for cheap with unlimited LTE! Also interesting are the iPass/EasyCard things that you use for public transport, paying for taxis, in supermarkets and so on. I’m thinking of taking mine apart after this trip to tinker with it.
Yesterday, I wrapped up the first version of cargo-cdo, a small utility that checks dependencies across the members of a Cargo workspace and reports any version conflicts. Our workspace is growing, and this should hopefully help keep the dependencies in sync. This week, I’d like to improve its docs a bit, push it to crates.io, and perhaps write a Nix expression for the build.
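The core check is small: collect every (crate, version requirement) pair declared across the workspace members, then flag any crate requested at more than one version. A simplified sketch of that idea (the real tool parses the members’ Cargo.toml files, of course):

```rust
use std::collections::{BTreeMap, BTreeSet};

// Group version requirements by crate name and report crates that are
// requested at more than one version. Illustrative sketch only; the
// (name, requirement) pairs would come from parsing each Cargo.toml.
fn conflicts(deps: &[(&str, &str)]) -> Vec<(String, Vec<String>)> {
    let mut by_name: BTreeMap<&str, BTreeSet<&str>> = BTreeMap::new();
    for &(name, req) in deps {
        by_name.entry(name).or_default().insert(req);
    }
    by_name
        .into_iter()
        .filter(|(_, reqs)| reqs.len() > 1)
        .map(|(name, reqs)| {
            (name.to_string(), reqs.into_iter().map(String::from).collect())
        })
        .collect()
}

fn main() {
    // Pretend these came from two different workspace members.
    let deps = [
        ("serde", "1.0"),
        ("serde", "1.0"),
        ("rand", "0.5"),
        ("rand", "0.6"),
    ];
    let found = conflicts(&deps);
    // Only `rand` disagrees across members.
    assert_eq!(
        found,
        vec![("rand".to_string(), vec!["0.5".to_string(), "0.6".to_string()])]
    );
    println!("conflicts: {:?}", found);
}
```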
Otherwise, I’ve been reading about the Rust macro system, and I think I’m becoming more and more comfortable with its ways. I find macros neat for abstracting away repetitive pieces of code, although I wonder how much they hurt the code’s comprehensibility. Could anyone with a bit more Rust experience shed some light on where use ends and abuse begins?
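The kind of repetition I mean, for the curious: stamping out the same trait impl for a family of types. A toy example (the trait and the types chosen are made up for illustration):

```rust
// A macro_rules! macro expanding one pattern into several near-identical
// trait impls. Toy example; the trait is invented for illustration.
trait ByteSize {
    fn byte_size() -> usize;
}

macro_rules! impl_byte_size {
    ($($t:ty),*) => {
        $(
            impl ByteSize for $t {
                fn byte_size() -> usize {
                    std::mem::size_of::<$t>()
                }
            }
        )*
    };
}

// One line replaces four near-identical impl blocks.
impl_byte_size!(u8, u16, u32, u64);

fn main() {
    assert_eq!(u8::byte_size(), 1);
    assert_eq!(u32::byte_size(), 4);
    println!("macro ok");
}
```

This is roughly the break-even point for me: the macro saves real duplication, but a reader now has to expand it mentally to see the impls, which is where the comprehensibility question starts.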
Working on Israel’s Indie Hackers Chapter, indietlv.com (pretty basic right now), and a DNS-over-TLS library and client for Node.js.
I’m doing a lot of writing for Merit. Last week we published an article about the performance of our new Proof-of-Growth algorithm and the numbers look really good.
My primary goal is to write a high-level design for a feature that will allow building communities and custom tokens with Merit. I usually write these things in LaTeX, and will likely do the same this time. I start by creating an idea graph in FreeMind, though I never publish those.
Nose applied directly to grindstone.
At $work, keep making our in-house Go-based Graphite stack more resilient. It’d be good to get the ops work around write-limiting done, and to cancel work properly on client timeout. I also hope I can help a couple of junior developers get their patches for gogo/protobuf submitted upstream, with benchmarks proving their work doesn’t impact CPU use. They picked up some work I did on optimizing memory allocations when deserializing packed fields, and handled the general case of varints instead of just the fixed-length types I needed.
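For context on the varint part: protobuf encodes varints as little-endian base-128, with the top bit of each byte acting as a continuation flag, so the packed-field fast path has to run a decode loop per element. A sketch of that loop (gogo/protobuf itself is Go; this Rust version is purely for illustration):

```rust
// Decode one protobuf varint from the front of a buffer.
// Little-endian base-128: each byte contributes 7 payload bits, and the
// MSB says whether another byte follows. Illustration, not gogo/protobuf code.
fn decode_varint(buf: &[u8]) -> Option<(u64, usize)> {
    let mut value: u64 = 0;
    for (i, &b) in buf.iter().enumerate().take(10) {
        value |= u64::from(b & 0x7f) << (7 * i);
        if b & 0x80 == 0 {
            return Some((value, i + 1)); // decoded value and bytes consumed
        }
    }
    None // truncated or over-long encoding
}

fn main() {
    // 300 = 0b1_0010_1100 encodes as [0xAC, 0x02].
    assert_eq!(decode_varint(&[0xac, 0x02]), Some((300, 2)));
    assert_eq!(decode_varint(&[0x01]), Some((1, 1)));
    assert_eq!(decode_varint(&[0x80]), None); // continuation bit set, no next byte
    println!("varint ok");
}
```

The fixed-length types skip this loop entirely (always 4 or 8 bytes), which is why the general varint case needed separate treatment.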
At $home, … uhh… maybe carve enough time out to get a much needed haircut? Having a family turns out to be quite time-consuming. :)
At work: Continuing infrastructure improvements & automation.
At home: I started a side-project last week that I’m super excited about. It’s a file-synchronization thing written in Rust based on the notify crate. The idea is to watch a local directory in real-time and sync file updates to a remote server, which can then fan those updates out to other servers near it. I feel like I’ve gotten 90% of the fiddly bits working in my prototype. Hopefully this week I’ll have the skeleton working and then I’ll upload it somewhere public.
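The notify crate does the actual watching through OS facilities (inotify, FSEvents, and friends); the sync loop on top amounts to diffing snapshots and queueing changed paths for upload. A std-only polling sketch of that idea (not the prototype’s code, just the shape of it):

```rust
use std::collections::HashMap;
use std::fs;
use std::path::PathBuf;
use std::time::SystemTime;

// Snapshot a directory: map each regular file to its modification time.
// (notify gets this pushed by the OS; polling is the portable fallback.)
fn scan(dir: &std::path::Path) -> std::io::Result<HashMap<PathBuf, SystemTime>> {
    let mut seen = HashMap::new();
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        if entry.file_type()?.is_file() {
            seen.insert(entry.path(), entry.metadata()?.modified()?);
        }
    }
    Ok(seen)
}

// Paths that are new or whose mtime moved since the previous scan;
// these are what would be queued for upload to the remote server.
fn changed(
    prev: &HashMap<PathBuf, SystemTime>,
    next: &HashMap<PathBuf, SystemTime>,
) -> Vec<PathBuf> {
    next.iter()
        .filter(|&(path, mtime)| prev.get(path) != Some(mtime))
        .map(|(path, _)| path.clone())
        .collect()
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("sync-sketch");
    fs::create_dir_all(&dir)?;
    let before = scan(&dir)?;
    fs::write(dir.join("hello.txt"), b"hi")?;
    let after = scan(&dir)?;
    let diff = changed(&before, &after);
    assert!(diff.iter().any(|p| p.ends_with("hello.txt")));
    println!("changed: {:?}", diff);
    Ok(())
}
```

The fiddly 90% is everything this sketch skips: deletions, renames, recursion, and debouncing the burst of events a single save can generate.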
I am messing around with a hand-built parser. I’ve only ever used parser generators (yacc, menhir, lalrpop, etc) so this is my first time ever attempting to understand how parsing actually works. I finally wrote a working tokenizer, though, so that’s progress.
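In case anyone else is taking the same detour: the heart of a hand-written tokenizer is surprisingly small; peek at the next character and dispatch, consuming greedily. A minimal sketch (purely illustrative, not tied to any particular grammar):

```rust
// A tiny hand-written tokenizer: peek, dispatch on the character class,
// consume greedily. Handles integers, identifiers, and single-char operators.
#[derive(Debug, PartialEq)]
enum Token {
    Int(i64),
    Ident(String),
    Op(char),
}

fn tokenize(src: &str) -> Vec<Token> {
    let mut tokens = Vec::new();
    let mut chars = src.chars().peekable();
    while let Some(&c) = chars.peek() {
        if c.is_whitespace() {
            chars.next(); // skip whitespace between tokens
        } else if c.is_ascii_digit() {
            // Accumulate a run of digits into one integer literal.
            let mut n = 0i64;
            while let Some(d) = chars.peek().and_then(|c| c.to_digit(10)) {
                n = n * 10 + i64::from(d);
                chars.next();
            }
            tokens.push(Token::Int(n));
        } else if c.is_alphabetic() || c == '_' {
            // Identifiers: letter or underscore, then alphanumerics.
            let mut s = String::new();
            while let Some(&c) = chars.peek() {
                if c.is_alphanumeric() || c == '_' {
                    s.push(c);
                    chars.next();
                } else {
                    break;
                }
            }
            tokens.push(Token::Ident(s));
        } else {
            tokens.push(Token::Op(c)); // anything else is a one-char operator
            chars.next();
        }
    }
    tokens
}

fn main() {
    let toks = tokenize("x1 + 42");
    assert_eq!(
        toks,
        vec![Token::Ident("x1".into()), Token::Op('+'), Token::Int(42)]
    );
    println!("{:?}", toks);
}
```

The parser proper then consumes this token stream the same way the tokenizer consumes characters, which is where the generators you listed earn their keep.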
Still working through Japanese; I’ve signed up at the official language school.
At work, rushing the release of a product that should be done in 1 to 1.5 months.
I was on vacation last week in Wisconsin.
Work: I spent much of yesterday and today extracting a collection of .tar archives to a case-insensitive, case-preserving file system (Windows). There were a number of files and directories with names that differed only in case, e.g. files named “filename.txt” and “Filename.txt”. I expect to spend tomorrow and Friday on a number of ETL tasks that I received while I was on vacation.
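A quick way to find those clashes up front is to group entry names by their lowercased form and flag any group with more than one member. A sketch of that check (illustrative; real code would read the names from the tar headers before extracting anything):

```rust
use std::collections::HashMap;

// On a case-insensitive file system, "filename.txt" and "Filename.txt"
// collide. Group names by their lowercased form and report any group
// with more than one spelling.
fn case_collisions(names: &[&str]) -> Vec<Vec<String>> {
    let mut groups: HashMap<String, Vec<String>> = HashMap::new();
    for name in names {
        groups
            .entry(name.to_lowercase())
            .or_default()
            .push((*name).to_string());
    }
    let mut collisions: Vec<Vec<String>> =
        groups.into_values().filter(|g| g.len() > 1).collect();
    collisions.sort(); // deterministic output order
    collisions
}

fn main() {
    let names = ["filename.txt", "Filename.txt", "readme.md"];
    let found = case_collisions(&names);
    assert_eq!(
        found,
        vec![vec!["filename.txt".to_string(), "Filename.txt".to_string()]]
    );
    println!("{:?}", found);
}
```

Knowing the collisions in advance lets you decide on a rename scheme once, instead of discovering each clash mid-extraction.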