1. 12

This is the weekly thread to discuss what you have done recently and are working on this week.

Please be descriptive and don’t hesitate to champion your accomplishments or ask for help, advice or other guidance.

  1. 9

    Currently working on a GTK frontend to Alacritty.

    Also recently switched my libweston bindings to foreign-types instead of a hand-rolled mess (I wish I’d known about foreign-types earlier; I only discovered it accidentally via a very lucky Google search). Also made a list of GTK applications you might not know about.

    1. 6

      I’ll continue experimenting with the “12 week year” idea: setting a small number of goals and working towards them for 12 weeks. Selecting just a few goals and devising strategies to accomplish them is pretty darn useful. It feels really good when you get to the end of the week and can see the progress you made.

      One of my goals is to read the “Designing Data-Intensive Applications” book in this period. I’ll keep up with my daily reading this week.

      1. 6

        I spent the weekend working on a project just for fun: a CHIP-8 emulator. I’ve been meaning to write an emulator for a while and thought I’d give it a go. Of course, being a hardware nut, I had to build my own hardware to run it on. My hope is to get it all working on an ATmega 1284 or 644, with an SD card reader, keypad, beeper and an OLED screen, then 3D print a case so I have it in a nice handheld form factor. After I’ve finished this I’ll look at adding Superchip support, which makes full use of the OLED. I’m using a Haltec OLED for now, which is, erm… not very good, but I’m waiting on parts from western suppliers (probably an SSD1306) so people can more easily build their own.

        So far I have most of the opcodes working, the screen running and a keypad working (but I haven’t integrated it into the emulator yet). I’ve got it loading ROMs off the SD card, and I was up all night last night trying to fix bugs in the screen display code. It’s been a heck of an experience, and thanks to the long weekend here in the UK and an extra day off, I’m going to spend a bit more time on it before going back to work.

        Obligatory weekend pictures:

        I’ll post updates and progress on stevelord@mastodon.social. Once I’ve got something stable I’ll post a writeup and the code here.

        1. 5

          I’m keeping an eye on the latest restic change which allows using rclone as a backend. That would make my operations much simpler (I could simply do an OVH > Cache > Crypt setup with a cache size of about 30GiB and get decent performance). I’m planning to switch once this feature is included in a release, and to start preparing that setup now.

          Secondly, I’m reworking my YouTube hoard; the number of files got a bit too large for Nextcloud to handle and caused server outages during sync (~16000 files in a single folder isn’t a great idea).

          Next, I’m writing a federated link aggregator. It’s a bit of a toy project, but I think I’ve got most of it figured out; I’ll have the server use drivers for interacting with remote communities (such as subreddits), which are simply separate binaries that get called and speak HTTP over stdio (roughly as sketched below). That way I can simply plug in an adapter for, say, lobste.rs or reddit and it’ll be handled natively. I’m not sure about all the interfaces yet; I’ll have to finish most of the basic design before I can set the entire interface in stone. (The reason is that ActivityPub is complicated and implementing it in Go is a bit of a PITA, so I’ll implement it in something else and simply communicate with that.)
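
          Roughly what I have in mind for calling a driver, as a sketch (the driver path and route are placeholders; nothing about the interface is final yet):

          // sketch: the server spawns a driver binary and speaks plain HTTP over its stdin/stdout
          package main

          import (
              "bufio"
              "fmt"
              "net/http"
              "os/exec"
          )

          func main() {
              // hypothetical driver binary for one remote community
              cmd := exec.Command("./drivers/lobsters")
              stdin, _ := cmd.StdinPipe()
              stdout, _ := cmd.StdoutPipe()
              if err := cmd.Start(); err != nil {
                  panic(err)
              }

              // write an ordinary HTTP request to the driver's stdin...
              req, _ := http.NewRequest("GET", "http://driver/frontpage", nil)
              if err := req.Write(stdin); err != nil {
                  panic(err)
              }

              // ...and read an ordinary HTTP response back from its stdout
              resp, err := http.ReadResponse(bufio.NewReader(stdout), req)
              if err != nil {
                  panic(err)
              }
              defer resp.Body.Close()
              fmt.Println("driver replied:", resp.Status)
          }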

          The idea is to have something that behaves like Mastodon (and ideally allows Mastodon users to interact with threads as if they were toots, if they’re shorter than 500 characters) and can federate into the non-federated internet. I feel like enabling people to follow and participate in normal Reddit communities as well as federated ones would drive greater adoption.

          I’ll see how it turns out; it’s a bit of a slow tinkering project, which gives me hope that I won’t have been distracted away from it by the end of the week.

          Lastly, I ordered a logic analyzer so I can analyze the wire protocol I mentioned last time and get an overloaded Arduino to communicate (slowly) with other Arduinos. And possibly fuck around with a logic analyzer (yay!)

          1. 5

            Over the long Easter weekend, I flew back to the UK to see my family. I’m using the extra time travelling and not working to make progress reading Rhythms of the Brain (I’m currently about 1/4 of the way through, and it’s very good so far), and continue with Coursera courses on Maths for ML and Computational Neuroscience.

            I’ll be working from Manchester next week & will likely be doing lots of social events in the evening, before going back to Munich next weekend.

            1. 5

              Released nuster v1.7.9.9, a caching proxy server.

              Added cache stats functionality, fixed a security bug.

              1. 5

                Last week I set up the Lobsters April Fools Day gag. Later today I’ll be taking it down again. I wanted to let it run a few hours into Monday so people who only check from work would see it - there are some nice reactions on AIM. Otherwise I’m finishing up folding the site for my talks + long-form code writing into my blog.

                1. 5

                  I’ve been doing some go/sqlite insert performance testing with different journaling modes. For grins I’m benchmarking against postgres. I started by testing serial inserts (no concurrency); sqlite can easily beat postgres in this case (with the right settings). I also set up a simple http server to test concurrent inserts; sqlite suffers in this case, as you might expect. I welcome critique of my methodology if anyone wants to take a look:

                  https://github.com/mtayl0r/sqlite-test
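
                  For context, the serial case is essentially a tight loop of single-row inserts. Here’s a simplified sketch of the idea (not the exact code from the repo; the _journal_mode/_synchronous options in the DSN are mattn/go-sqlite3 specifics):

                  // simplified sketch of the serial insert case (not the exact repo code)
                  package main

                  import (
                      "database/sql"
                      "fmt"
                      "time"

                      _ "github.com/mattn/go-sqlite3"
                  )

                  func main() {
                      // WAL + relaxed sync is roughly where sqlite starts winning for serial inserts
                      db, err := sql.Open("sqlite3", "bench.db?_journal_mode=WAL&_synchronous=NORMAL")
                      if err != nil {
                          panic(err)
                      }
                      defer db.Close()

                      if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, val TEXT)`); err != nil {
                          panic(err)
                      }

                      start := time.Now()
                      for i := 0; i < 10000; i++ {
                          if _, err := db.Exec(`INSERT INTO t (val) VALUES (?)`, fmt.Sprintf("row-%d", i)); err != nil {
                              panic(err)
                          }
                      }
                      fmt.Printf("10000 serial inserts in %v\n", time.Since(start))
                  }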

                  1. 6

                    Increasing the page cache size can help with bigger transactions. And read workloads of course.

                    -- e.g.
                    PRAGMA cache_size = -131072; -- 128mb
                    -- or
                    PRAGMA cache_size = -1048576; -- 1gb
                    

                    If you’re okay with relaxing durability, running with synchronous in NORMAL mode can help with perf.

                    PRAGMA synchronous = NORMAL;
                    

                    It will run a sync on every checkpoint, which by default is every 1000 pages of writes. You can control this on your own by disabling automatic checkpoints and running a separate thread to periodically checkpoint.

                    PRAGMA wal_autocheckpoint=0;
                    
                    -- sync WAL and write back as much data as possible without blocking
                    PRAGMA wal_checkpoint(PASSIVE);
                    
                    -- sync WAL and write back entire WAL even if writers must be blocked
                    PRAGMA wal_checkpoint(TRUNCATE);
                    

                    That’s what I do. Specifically I run a passive checkpoint every second, and switch to running truncate checkpoints every second if the log file exceeds 1gb. The transactions are all pretty small so that doesn’t really happen, it’s there as a failsafe against excessive disk use.
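
                    In Go, for example, that checkpointer loop could look roughly like this (a sketch; it assumes automatic checkpoints were already disabled with the pragma above):

                    // rough sketch of the once-a-second checkpointer described above
                    package dbutil

                    import (
                        "database/sql"
                        "log"
                        "os"
                        "time"
                    )

                    func checkpointLoop(db *sql.DB, dbPath string) {
                        for range time.Tick(time.Second) {
                            mode := "PASSIVE"
                            // failsafe: if the WAL has grown past ~1gb, truncate it even if writers must block
                            if fi, err := os.Stat(dbPath + "-wal"); err == nil && fi.Size() > 1<<30 {
                                mode = "TRUNCATE"
                            }
                            if _, err := db.Exec("PRAGMA wal_checkpoint(" + mode + ")"); err != nil {
                                log.Println("wal_checkpoint failed:", err)
                            }
                        }
                    }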

                    Running a checkpoint every second ensures that the WAL will be fsynced every second, which might not otherwise happen in synchronous=NORMAL mode with pagecount-based automatic checkpoints. If you really want to retain every write then you should run synchronous=FULL, but honestly this is a pipe dream for normal use cases. You can’t rely on that guarantee without undertaking significant effort to ensure your hardware will actually respect the sync in the case of a power loss. That means battery-backed storage, monitoring the status of your storage controllers and disks, and stopping your application at any sign of trouble. I think it’s easier to block at the application level during important transactions until the change is visible on a geo-redundant replica.

                    More application authors need to be aware that their data isn’t likely to be durable unless they take great pains to ensure it actually is. Loads of people assume SQL database = durable, when in reality 1 second of data loss maximum is probably a lot better than most people have. And for most use cases complete durability isn’t even a valuable feature. If a tweet gets lost, or a match win doesn’t increase your ELO, it doesn’t really matter. You probably have worse bugs. As long as your database is consistent after a power loss, you probably won’t even notice 1 second of data loss.

                    So there’s my soapbox rant about why you shouldn’t bother running with synchronous=FULL if it’s going to hurt your perf.

                    1. 2

                      Thanks for the feedback. For concurrent writes, to start I’m just trying to avoid failures due to the database being locked, so I’m trying to “serialize” writes in the http version with a goroutine. I figured putting all writes through a single goroutine (blocking, over an unbuffered channel) using a single “write” database connection would ensure all writes are serial, but that doesn’t appear to be the case, as I’m still getting database locked errors.

                      I’m new to go so I’m still trying to wrap my head around what’s going on.

                      1. 2

                        You can’t do that with Go, because database/sql does connection pooling no matter what you try to do.

                        You shouldn’t do that anyway, since SQLite in WAL mode is fully capable of handling concurrent writes. You should handle SQLITE_BUSY with a short sleep and a retry; in WAL mode that should only happen during tx.Commit().
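
                        With mattn’s driver that looks roughly like this (an untested sketch; note you have to redo the whole transaction, since a *sql.Tx can’t be committed twice):

                        // sketch: redo the whole write transaction when sqlite reports SQLITE_BUSY at commit
                        package dbutil

                        import (
                            "database/sql"
                            "errors"
                            "time"

                            sqlite3 "github.com/mattn/go-sqlite3"
                        )

                        func withWriteTx(db *sql.DB, fn func(*sql.Tx) error) error {
                            for attempt := 0; attempt < 10; attempt++ {
                                tx, err := db.Begin()
                                if err != nil {
                                    return err
                                }
                                if err := fn(tx); err != nil {
                                    tx.Rollback()
                                    return err
                                }
                                err = tx.Commit()
                                if err == nil {
                                    return nil
                                }
                                if e, ok := err.(sqlite3.Error); ok && e.Code == sqlite3.ErrBusy {
                                    // another writer held the lock at commit time: back off briefly and retry
                                    time.Sleep(10 * time.Millisecond)
                                    continue
                                }
                                return err
                            }
                            return errors.New("write transaction still busy after retries")
                        }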

                        1. 1

                          Actually, I’ve been reading the code for mattn’s sqlite wrapper, and it doesn’t do any connection pooling.

                          1. 2

                            I know. But the interface provided by the database/sql package does. Those SQLite bindings only implement the low level Driver interface from database/sql/driver, which gets wrapped and managed by sql.DB.

                            From the database/sql docs:

                            DB is a database handle representing a pool of zero or more underlying connections. It’s safe for concurrent use by multiple goroutines. […] The sql package creates and frees connections automatically; it also maintains a free pool of idle connections.

                            But it looks like it’s no longer impossible to get an individual connection: as of ~7 months ago, Go 1.9 added the DB.Conn method for exactly that.
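
                            Something like this, just a sketch (the table is a placeholder):

                            // sketch: pin writes to one connection checked out of the pool via Go 1.9's DB.Conn
                            package dbutil

                            import (
                                "context"
                                "database/sql"
                            )

                            func writeOnOneConn(ctx context.Context, db *sql.DB) error {
                                conn, err := db.Conn(ctx) // takes a single connection out of the pool
                                if err != nil {
                                    return err
                                }
                                defer conn.Close() // hands it back to the pool when done

                                // everything run through this *sql.Conn uses the same underlying sqlite connection
                                _, err = conn.ExecContext(ctx, `INSERT INTO t (val) VALUES (?)`, "hello")
                                return err
                            }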

                            I can’t imagine how a serial writer would get contention, even with a pool, but I wouldn’t be surprised to see it happen. Connection pools tend to have pretty mysterious behavior. If your http version is hitting locks but your ordinary version is not, I believe your write serialization code must be wrong. Your latest code on GitHub doesn’t appear to attempt any write serialization so I can’t currently review.

                    2. 2

                      hah, for fun (as always) I started a pure Go sqlite reader over the weekend (https://github.com/alicebob/sqlittle). I don’t plan to add write support, though.

                      For now it’s all low-level routines to read the tables and indexes. I’ll try to add some very basic SQL support.

                    3. 4

                      Haskell beginner here.

                      Working on a Haskell CLI tool for managing my contract working hours on a daily basis and keeping track of how many hours were worked for whom, etc, and doing so in plain text files, similar to https://github.com/ginatrapani/todo.txt-cli.

                      Here is the planned API, and I’ve only really gotten started on the ls feature:

                      https://github.com/rpearce/timetrack-cli

                      Any feedback on how to approach this is <3 and welcome.

                      1. 4

                        I have continued to hack on ISETL. My next two tasks will be to make all of the switch expressions exhaustive and to clean up the numerous cases of unwarranted chumminess with the compiler.

                        1. 4

                          I am nearly done with the MVP of a GTD application that runs on Nextcloud, and I am really hoping to get it to a place where I can just import my massive inbox and work from there. Along the way, I am aiming to get the Nextcloud documentation improved.

                          I have been toying with the idea of starting a Nextcloud hosting service, and I have set aside some time this week to go through a complete accounting of what that would entail, and whether or not it would be something that could at least cover expenses. (I already host more than one Nextcloud instance, so I have the expertise required.)

                          I am still really struggling to get my personal computing environment in a state where I can be productive again, with the constraints I’m placing on myself. I really do not want to give up the constraints, but I also want things to work well enough that I can make forward progress in a reasonable way.

                          1. 1

                            Oh this sounds excellent! Please email me when you’re ready; I’d love to test this!

                          2. 4

                            I’m hoping to get v2.2 of the Shell Script Library pushed out tonight.

                            That will facilitate following through with the v1.0 release of the Proxy Manager and the Failover Manager.

                            Then onto a mix of direct client work and hopefully PXC Manager similar to the above tools.

                            1. 2

                              I forgot to mention of course I’ll also be working on ripping my eyeballs out so I don’t have to see this abomination of an April fools joke any longer.

                                1. 4

                                  I’m continuing to read The Pragmatic Programmer (p. 100), though I got sidetracked by reading The Checklist Manifesto and The Culture Map. After TPP I’ll continue Clean Code (p. 180), hopefully finishing them both this month.

                                  Last Saturday was very busy physically, with my flatmate we spent basically the whole day moving the house around, going to IKEA, installing new lights in our rooms/offices, painting a wall and just general cleanup. So now I have an adjustable standing desk from IKEA and nice passive lighting along with a desk lamp 〜( ̄▽ ̄〜)

                                  Also, this weekend I reinstalled Arch Linux and set it up, this time going with the Budgie Desktop to see how that goes (good so far). I want to get it ready for some Java development, as I’m learning that.

                                  This week will be about starting work back up after the extended Easter weekend, and I also want to spend some time learning Java, doing something real with it.

                                  1. 4

                                    Last week:

                                    This week:

                                    There are two more things I could do to speed up guitktk. But it’s a complete redesign and rewrite of some parts, and it would take up more memory and might make other use cases slower.

                                    On the other hand, if this works, it may be the last speed up I’ll need for a while. I’m still undecided if I should do this.

                                    Maybe take a short break on projects. Efficiency is dropping.

                                    As per the blog post, look into making a stack and memory visualizer for Flpc in guitktk. All of memory is probably too big and slow to render (about 10000 cells) so maybe only view part of it.

                                    This means picking a ptrace backend from among lldb, gdb and direct ptrace. Does anyone know of a more Pythonic ptrace library?

                                    Both lldb and gdb are pretty heavy. lldb’s console can drop into a Python console so I can test things out directly. Both extension interfaces are a bit clunky but beats rewriting something from scratch.

                                    1. 4

                                      Continuing the home automation front with a xiaofang wifi camera that arrived over the weekend, so I need to get that reflashed and hooked into the house. Should probably redeploy the MQTT server and HASS now that I’m leaning on them seriously. (Currently they were stood up by hand as prototypes. We all know how long prototypes live for…)

                                      For the first time in my life, I now own three cars as well. Luckily we also have a three-car drive, so this isn’t an issue yet. The more amusing thing is that the new-to-me car isn’t actually new to me; I last owned it in 2011, and it only has 10k more miles on it since then. Since it was given to me for free and may well die in the next 12 months, I’m treating it as a cheap crap car to do whatever I like to. Currently in the middle of bodging DRLs onto it (these from Amazon are E4 certified! And bright! Most excellent.) Also have a tune box on the way, which should take it from 112bhp to 130 or so, and hopefully add around 80Nm of torque. Debating going the whole hog and fitting a light pod to the bonnet, because rally car.

                                      1. 4

                                        Hoping to get my project Amiga 2500 back together again so I can use the desk space for the next repair project, which may or may not be a Mac SE FDHD.

                                        1. 3

                                          I’m going to end up spending a large amount of my time this week dealing with the aftermath of trying to switch phone carriers. My old device doesn’t work on their network, so I need to go buy a new one and return the rose gold loaner phone. Once all of that is taken care of, I’m going to work on getting my mastodon integration branch rebased on top of the rubocop work that pushcx merged in.

                                          1. 3

                                            At work I’ll be going on my third two-week sprint, working on UI cleanup and other such things for the product I’m working on.

                                            Away from work, rather enjoying the April Fool’s joke we have going on right now, and enjoying some Tradewinds Legends.

                                            1. 2

                                              Learning how things are done at my new job, continue attempting to fix my sleep schedule so I don’t fall asleep at the aforementioned new job, continue attempting to move from Bash to Zsh…

                                              btw, I finished that mandolin strap I mentioned a few weeks ago: https://www.instagram.com/p/BgjgrF9g5vN/

                                              1. 2

                                                Nice work on that strap. Super unique

                                                1. 1

                                                  I’m working on making IndieMark something people can get a score on without too much hassle. I’ve been eager to get more into the space to get a better understanding of what it is. They encourage the practice of self-dogfooding, so I’ll leverage that.