The weekly thread to discuss what you have done recently and are working on this week.
Be descriptive, and don’t hesitate to ask for help!
Working out the kinks of the new log-structured storage architecture for a Rust Bw-tree! After studying the LFS paper, I specced out a similar architecture that splits the storage file into segments and manages them like a generational GC, trying to group similarly aged data together across increasingly rare copies. One advantage this has over LSM trees in terms of write and space amplification: we can atomically compact and reclaim space from old fragments anywhere in the storage file, rather than only merging and deduplicating updates within adjacent merged SSTable files, because compaction can reorder things in our log as long as recovery semantics are preserved.
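The segment-selection piece can be sketched like so (in Python for brevity, with made-up names; the actual project is Rust and its policy will be more nuanced). Compaction copies the few live bytes out of the least-utilized segment, which is the classic greedy LFS policy:

```python
# Sketch of segment selection for compaction (illustrative names only):
# segments track live vs. total bytes, and the greedy LFS policy compacts
# the least-utilized segment first, since copying its few live bytes out
# reclaims the most space for the least write amplification.

from dataclasses import dataclass

@dataclass
class Segment:
    generation: int   # 0 = young, higher = older
    live_bytes: int
    total_bytes: int

    def utilization(self) -> float:
        return self.live_bytes / self.total_bytes

def pick_victim(segments):
    """Choose the next compaction victim: lowest utilization wins."""
    return min(range(len(segments)), key=lambda i: segments[i].utilization())

segments = [
    Segment(generation=0, live_bytes=900, total_bytes=1000),
    Segment(generation=2, live_bytes=100, total_bytes=1000),
]
print(pick_victim(segments))  # 1, the mostly-dead old segment
```

The LFS paper's cost-benefit variant also weighs segment age into the choice, which is where grouping similarly aged data into generations pays off.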
Anyway, once this effort is complete, it’s time for the “big burn”: more or less constantly running torturous workloads, identifying performance hiccups, and smoothing out documentation and other ergonomics, getting ready for an alpha release :)
Last week I got to have some great conversations with curious folks who read my update here, so I’ll repeat my call to arms! If anyone is curious about database engineering, Rust, lock-free algorithms, performance tuning, reliable systems, or formal methods, there is a ton of juicy stuff to get through before this is a real thing, and I have a ton of time on my hands that I’m really happy to spend mentoring and collaborating with interested folks. Most of the interesting features lie in the future, like MVCC, snapshots, transactions, and merge operators, and there’s a ton of performance still left to gain (currently it’s at around 7 million reads/s on a MBP, and 200-300k writes/s, although this should be far, far higher).
I am writing an MUA with ncurses, inspired by mutt. I wanted my first project in Rust to be a deep dive into the language. My goals for now are:
The UI is pretty rudimentary so far; just an index and pager (not shown). ncurses is a pain to work with, I must say. I might look into the panel library since I think it has good platform support.
I’m still working on the library part. I’ve implemented mail backends as traits, but only have Maildir for now. I believe that once I finish the ‘check for new mail on a different thread’ part, the only necessary feature the library will have left unimplemented is mail composition. I wrote my own parsers with nom, which I might publish separately later on.
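The backends-as-traits design might look roughly like this (sketched in Python with hypothetical names; the real library is Rust, so it would be a trait plus impls rather than an abstract base class):

```python
# Hypothetical shape of the backends-as-traits idea: every backend
# implements a common interface, Maildir being the first concrete one.
# Names are illustrative, not the actual library's API.

from abc import ABC, abstractmethod

class MailBackend(ABC):
    @abstractmethod
    def envelopes(self):
        """Return the (subject, sender) pairs currently in the store."""

class Maildir(MailBackend):
    def __init__(self, messages):
        # stands in for message files parsed out of new/ and cur/
        self._messages = messages

    def envelopes(self):
        return list(self._messages)

backend = Maildir([("hello", "a@example.com")])
print(backend.envelopes())
```

The payoff is that the UI and threading code only ever see the interface, so adding an IMAP or mbox backend later doesn’t touch them.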
I still haven’t made my repo public because it is not close to being finished; it’s a lot of work. I don’t think having an unfinished repo public is a good idea; what do you think? (I also use my own emails for testing, which I think should be replaced with something more public.)
Nice project! I’d definitely like to take a look at your code base if you make it public :)
I don’t see why developing in the open would be problematic, especially if the software is to be eventually released as “open source” or free software. Though as you mentioned you should definitely remove your private emails before putting it out there.
Thanks a lot! It will be GPLv3, but I’d like to release it when it’s ready for an alpha or beta version. For now there’s no basic functionality other than reading e-mails, and since I’m still in the design process, I guess it’s not ready for collaborative development. So publishing it would only offer read-only, in-progress code.
Glad to hear that you’ll GPLv3 it!
And you have a reasonable point about being in the design process. It’d be great if you could document your design decisions. We need more docs :) Good luck!
At work I’m switching off my (supposedly) main project for the second time to start re-writing the integration for █████, because they’re releasing a new API and we need a box checked to finalize some agreement, and they’ll only check that box if we use the new API instead of the old one. (Although AFAIK the old API was super unreliable, so this really is a good thing.)
For kinda-work, I’m writing a Rust library to interface with TP-Link smart devices, working out the right way to do everything in tokio and how to properly test it automatically against real hardware.
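For what it’s worth, the wire obfuscation these devices use is widely documented from reverse engineering: an XOR “autokey” cipher with initial key 171 applied to the JSON payload. A sketch of just that step, independent of the tokio plumbing (shown in Python for brevity; the real library is Rust):

```python
# The widely reverse-engineered TP-Link smart home protocol obfuscates
# its JSON payloads with an XOR autokey cipher whose initial key is 171.
# Each ciphertext byte becomes the key for the next byte.

def encrypt(plain: bytes) -> bytes:
    key, out = 171, bytearray()
    for b in plain:
        c = b ^ key
        key = c          # next key is the ciphertext byte just produced
        out.append(c)
    return bytes(out)

def decrypt(cipher: bytes) -> bytes:
    key, out = 171, bytearray()
    for b in cipher:
        out.append(b ^ key)
        key = b          # next key is the ciphertext byte just consumed
    return bytes(out)

msg = b'{"system":{"get_sysinfo":{}}}'
assert decrypt(encrypt(msg)) == msg
```

Having the cipher as pure functions like this also makes the hardware-free half of the test suite easy, whatever the async transport ends up looking like.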
Although the first part of this week will be me not doing much of anything because my laptop just died. That will then be followed by setting up NixOS on the Dell XPS 13 that I have coming tomorrow. It’ll be my first time using NixOS for real; I’ve played around with it in a VM the last few days, but have never tried to use it for a real system before.
This week I’m going to merge a big change to Peergos which fixes a potential data loss bug under concurrent writes by the same user to the same directory/file from different machines. More excitingly, I’m hoping to get streaming end-to-end encrypted video working in Peergos.
$work: used Java 8’s streams to reduce a slightly-buggy implementation of a combinatoric search to about 30 lines of code, finishing out a large portion of the backend work.
$hobby: more work on my PL. Testing out different formulations of imperative and functional programming. (The imposter syndrome is real.) The question I’m trying to answer is: how do we best guide developers to write software with a functional core and an imperative shell? Education is part of it, but I think language also plays an important role! This is still a personal moonshot, but I’m sticking to the Wirth school of thought: keep the language as simple as possible and be very deliberate about features. Also playing with whether it’s worth it for a hobby project to have a formal semantics (this is partially a function of having lots of reading time and less implementation time).
Fixed a bug involving sequencing where the old environment was incorrectly applied to the second expression (ha). Ported the lexer to Alex and got line numbers in syntax errors. My Alex fear was largely unfounded! This week I want to add a few more runtime types (there is no type system yet) and start studying a basic type system to add, which would finish a vertical slice (grammar + type system + compiler + VM).
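For readers following along, that class of sequencing bug looks like this in miniature (an illustrative toy interpreter in Python, not the actual Haskell code):

```python
# Toy interpreter: evaluating a sequence must thread the environment
# produced by the first expression into the second. The bug described
# above is reusing the original environment for the second expression.

def evaluate(expr, env):
    """Return (value, new_env). Exprs: ('let', name, v), ('var', name),
    ('seq', e1, e2)."""
    kind = expr[0]
    if kind == "let":
        _, name, v = expr
        return v, {**env, name: v}
    if kind == "var":
        return env[expr[1]], env
    if kind == "seq":
        _, e1, e2 = expr
        _, env1 = evaluate(e1, env)
        return evaluate(e2, env1)   # env1, NOT the original env
    raise ValueError(kind)

prog = ("seq", ("let", "x", 42), ("var", "x"))
value, _ = evaluate(prog, {})
print(value)  # 42
```

With the bug, the `var` lookup would run under the empty starting environment and fail to see the binding from the first expression.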
$life: got a personal trainer to get back on the workout train, and taking 5000 IU of vitamin D before fall hits (both highly recommended)
I’m finishing up my first website: coupizza. It’s a pizza coupon finder that’s filterable by zip code. It uses the Twitter/FB APIs to find codes from merchants, then matches them to users based on zip.
I’m still figuring out how to increase the number of codes found by the DB API. I’m also trying to find a better way of deleting expired coupons (currently on a fixed timer of 3 days from when the code was inserted).
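One direction for the expiry problem, sketched with hypothetical field names: store an explicit expiry per coupon when the offer states one, fall back to the 3-day default otherwise, and sweep on that rather than a blanket timer:

```python
# Sketch of per-coupon expiry instead of a blanket 3-day timer.
# Field names ("inserted_at", "expires") are hypothetical.

from datetime import datetime, timedelta

DEFAULT_TTL = timedelta(days=3)

def expires_at(inserted_at, stated_expiry=None):
    """Prefer the expiry stated in the offer; else the 3-day default."""
    return stated_expiry if stated_expiry is not None else inserted_at + DEFAULT_TTL

def sweep(coupons, now):
    """Keep only coupons whose expiry has not yet passed."""
    return [c for c in coupons if expires_at(c["inserted_at"], c.get("expires")) > now]

now = datetime(2017, 9, 5)
coupons = [
    {"code": "FRESH", "inserted_at": now - timedelta(days=1)},
    {"code": "STALE", "inserted_at": now - timedelta(days=4)},
]
print([c["code"] for c in sweep(coupons, now)])  # ['FRESH']
```

The same predicate could live in the database as a `WHERE expires_at > now()` sweep so no application-side timer is needed at all.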
Is it still under construction? It isn’t loading properly for me.
It should be up now.
For work, I’m continuing to work on the mobile web app version of our product in VueJS.
I’ve been doing some assorted work on PISC, like preparing for an IRC eval bot, but probably the most interesting thing that happened in the last week was getting a basic vim-pisc syntax highlighter written. That marks the second editor with PISC highlighting support, the first being Sublime Text 3.
A couple of things:
I’m wrapping up a v2 release, with a substantially better API, of my Elixir library Hammer. Hopefully that’ll be done this week.
I’m also trying to put together a lineup of co-hosts for a podcast project, analyzing and generally gabbing about technology and tech-work from a far-left perspective, and looking at left theory through the lens of tech. It’s proving pretty hard to find people who are interested in collaborating though, and who have the right mix of political theory and tech experience, so we’ll see if that pans out.
Update: just landed a third co-host and we’re already on to planning the first few episodes!
Working on Helmspoint. It deploys machine learning models to the web; currently only Keras is supported. You just upload the weights and the architecture, and we take care of provisioning the servers, the web app, and the load balancing.
Within Helmspoint, I’m currently working on making chainable background jobs. As in previous work, when a user makes a web request, we want to do the work in a background job, since it takes too long to execute inline. However, these jobs can be complex, and might change due to changing business requirements.
Taking a cue from effects managers in functional programming, I’m separating control flow (‘and_then’ and ‘map’ semantics) from the side effects in the jobs. That way, I have a small DSL for deploying and provisioning servers that’s flexible and modular. The unwieldy thing right now is that I need to serialize both functions and their context to make them queueable as a job. If anyone has suggestions or pitfalls when it comes to serializing functions, I’d be happy to hear them.
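One pattern that sidesteps serializing functions entirely (sketched in Python with illustrative names; the project itself is Elixir): enqueue a plain-data job spec and keep the functions in a registry on the worker side, so only data ever crosses the queue boundary:

```python
# Jobs as data: only a {name, arg} spec is serialized and enqueued;
# the functions themselves live in a registry on the worker.
# All names here are illustrative, not Helmspoint's actual DSL.

import json

REGISTRY = {}

def job(fn):
    """Register a function under its name so a worker can look it up."""
    REGISTRY[fn.__name__] = fn
    return fn

@job
def provision(host):
    return f"provisioned {host}"

@job
def deploy(host):
    return f"deployed {host}"

def enqueue(name, arg):
    """Only data crosses the queue boundary: trivially serializable."""
    return json.dumps({"name": name, "arg": arg})

def run(payload):
    spec = json.loads(payload)
    return REGISTRY[spec["name"]](spec["arg"])

steps = [enqueue("provision", "web-1"), enqueue("deploy", "web-1")]
results = [run(p) for p in steps]   # 'and_then'-style chaining over data
print(results)
```

In Elixir the registry entries would typically be module/function/args tuples, which serialize safely, unlike closures with captured context.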
Haven’t finished that blog post from last week, but will do so this week.
Helping out in gearing up for the third edition of RustFest.
Finishing up the last 10% of a house move that takes 90% of the time. (There’s a parallel to software development there for sure.) Then on to prepping the old house for sale! (More 90%/90% work, no doubt?)
A Python script to log work into a self-hosted JIRA instance.
The final step remains: integrate it into my workflow as a post-commit git hook that asks the user (me :-) ) how much time to log on the JIRA task (using the issue key of the JIRA task from the commit description).
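The key-extraction step of that hook might look like this (in Python, matching the script; the regex is an assumption based on JIRA’s usual KEY-123 convention):

```python
# Pull the first JIRA-style issue key (e.g. PROJ-123) out of a commit
# message, as a post-commit hook would before prompting for time to log.

import re

# Assumed key shape: 2+ uppercase chars/digits, a dash, then digits.
ISSUE_KEY = re.compile(r"\b([A-Z][A-Z0-9]+-\d+)\b")

def find_issue_key(commit_message: str):
    m = ISSUE_KEY.search(commit_message)
    return m.group(1) if m else None

print(find_issue_key("PROJ-42: fix the widget"))  # PROJ-42
```

In the hook itself you’d feed this the output of `git log -1 --format=%B`, then prompt on the terminal for the time to log.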
That’s nice. You could also log TODOs and ask the user whether to update existing TODOs or add new ones, or even parse TODOs out of the committed code.
Just mundane work stuff: finishing up containerizing a few coin daemons for use with our mining pool software, and setting up rancher-nfs as a remote volume store for the individual blockchains of the various coin client versions. Other than that, I’m hashing out which instructions to include in my VM’s instruction set, trying to find the sweet spot between minimal and featureful.
I’m working on integrating JIRA into org-mode so I can log tasks more rapidly than the nightmarish horrorshow of Atlassian’s web apps allows; also helping out with process. We are pretty ad-hoc in my group at work, and getting a little more structure, even Agile structure, is useful. I’m no big fan of all the Agile dogma (I don’t like the metaphor of a sprint, &c.), but having regularly scheduled activities that involve the entire team is a good way to up our intra-team bandwidth.
Otherwise, I’m test-driving Firefox as a replacement for Chrome; test-driving NixOS as a full-time desktop; and really regretting not going to see my friend Adam’s band play in Chicago this weekend.
Continuing development on theft:
Moving functionality that was implemented as C control flow (for loops, switch/case, conditionals) into a separate API, called “planner”, which coordinates concurrent work done by one or more processes in a worker pool, mediated through a supervisor communicating via async IO. (Did somebody mention Erlang?) The code is a pretty simple state machine, but it has a couple of points where it needs to decide between multiple potentially useful options. It also lets me test quite a bit of the overall logic in isolation from all the multi-process Unix syscall stuff. I wrote about some of the details on the randomized-testing Google group yesterday.
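As a rough illustration of the shape (purely hypothetical states and events, sketched in Python; theft is C and its planner differs), the planner-as-state-machine idea is what makes the logic testable without spawning any processes:

```python
# Toy planner: a pure transition function that decides the next step,
# while a separate supervisor layer performs the actual process/IO work.
# States and events here are invented for illustration.

IDLE, RUNNING, SHRINKING, DONE = "idle", "running", "shrinking", "done"

def step(state, event):
    transitions = {
        (IDLE, "start"): RUNNING,
        (RUNNING, "pass"): RUNNING,       # trial passed, keep generating
        (RUNNING, "fail"): SHRINKING,     # counterexample found, shrink it
        (SHRINKING, "shrunk"): SHRINKING, # smaller failure, keep going
        (SHRINKING, "minimal"): DONE,     # can't shrink further
        (RUNNING, "limit"): DONE,         # trial budget exhausted
    }
    return transitions.get((state, event), state)

state = IDLE
for ev in ["start", "pass", "fail", "minimal"]:
    state = step(state, ev)
print(state)  # done
```

Because the transition function is pure, every decision point can be exercised in unit tests, with the fork/exec and async IO mocked out entirely.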
If any of you are using theft, I’d love to get usability feedback, or hear more about how you’re using it.
Otherwise, trying to catch up on addressing fairly small PRs against other projects, since pretty much all my personal project time has gone into theft for a while now.
Work: Fuzzing, and helping to fix stuff I’ve found. (deliberately vague)
I’m officially becoming a manager this week as I make my first hires.
Trying to figure out whether I can avoid using Salt as a configuration management tool, if at all possible. What a poorly designed, poorly laid out, and poorly documented thing. I’m hoping to do the same without learning another language, using some reverse SSH and well-written, maintained, and understood scripts.
In the other job, I’m going to see how much pain it will be to use Qt’s QtWebEngine to replace our currently obsolete QtWebKit. Feels like it will be painful. Do you think Electron or CEF would be better for our app, which is a mixture of native code backend with an HTML UI?
So it took two weeks longer than I thought it would, but the new owners of the company I’m working for have decided to cut funding for the project I was contracting on. (I was so close to completing it, too…)
So this week is mostly about pivoting and finding new stuff to do. Yesterday I brewed my first-ever beer (a stout), so I’ll be keeping an eye on that fermentor. I found a new part-time contract to build an MVP for a startup, but that won’t be enough to sustain me for more than a month. Also, I used to get my daily exercise by running from home to work, so now I’ll have to adapt that.
I want to get started practising troubleshooting with Cisco Packet Tracer. I plan to practice more IPv4 VLSM exercises, and maybe IPv6 as well.
Other than that, playing around with my home server and setting up an RSS reader, and maybe an OpenVPN server as well.