Hello everyone, what side project are you working on? I’m currently developing my old project further, adding new features to it.
It’s a project that allows you to send SMS without using any third-party APIs; it uses plain SMTP and SMS gateways. So I’m developing a frontend for that project.
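The SMTP-to-SMS trick relies on carriers running email-to-SMS gateways: mail sent to `<number>@<gateway-domain>` arrives as a text. A minimal sketch of the idea; the gateway domains below are common examples, not an exhaustive or guaranteed-current list, and this isn’t the project’s actual code.

```python
# Build a message addressed to a carrier's email-to-SMS gateway.
from email.message import EmailMessage

GATEWAYS = {
    "verizon": "vtext.com",      # example gateway domains; verify for
    "att": "txt.att.net",        # your carrier before relying on them
    "tmobile": "tmomail.net",
}

def sms_message(number: str, carrier: str, body: str) -> EmailMessage:
    msg = EmailMessage()
    msg["To"] = f"{number}@{GATEWAYS[carrier]}"
    msg["Subject"] = ""          # most gateways ignore or prepend the subject
    msg.set_content(body[:160])  # classic single-SMS length limit
    return msg

# Sending is then ordinary SMTP, e.g.:
#   import smtplib
#   with smtplib.SMTP("localhost") as s:
#       s.send_message(sms_message("5551234567", "verizon", "hello"))
```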
For the past year I’ve been converting archive.org CD/disk rips into modern formats.
I love that archive.org makes these available. My goal is to make them accessible. Lots of art, music and animations are locked away in ancient formats that few can access today.
I extract the original CD/disk and recursively extract and convert all sub files. Archives (ZIP/ZOO/SIT) I extract. Images (PCX/TGA/PICT/TIFF) to PNG. Music/Audio (MOD/MID/S3M/AIFF/AU) to MP3. Video (FLC/FLI/AVI/SMK) to MP4. Documents (DOC/WP5/WRI) to PDF. Fonts to OTF/TTF & PNG preview. A single CD ISO can balloon to over 100,000 files.
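The recursive extract-and-convert loop can be sketched in a few lines. This is a hand-wavy illustration of the idea, not dexvert’s actual format registry; a real run would shell out to the right tool for each action and recurse into anything extracted.

```python
# Walk everything extracted from a disk image and pick an action per
# extension: extract-and-recurse for archives, convert for known
# media formats, keep everything else as-is.
from pathlib import Path

ARCHIVES = {".zip", ".zoo", ".sit"}
CONVERT_TO = {
    ".pcx": ".png", ".tga": ".png", ".pict": ".png", ".tiff": ".png",
    ".mod": ".mp3", ".mid": ".mp3", ".s3m": ".mp3", ".aiff": ".mp3",
    ".flc": ".mp4", ".avi": ".mp4",
    ".doc": ".pdf", ".wp5": ".pdf",
}

def plan(root: Path) -> list[tuple[Path, str]]:
    """Return (file, action) pairs for everything under root."""
    actions = []
    for p in sorted(root.rglob("*")):
        if not p.is_file():
            continue
        ext = p.suffix.lower()
        if ext in ARCHIVES:
            actions.append((p, "extract-and-recurse"))
        elif ext in CONVERT_TO:
            actions.append((p, f"convert to {CONVERT_TO[ext]}"))
        else:
            actions.append((p, "keep as-is"))
    return actions
```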
The converter part currently supports 608 formats: https://github.com/Sembiance/dexvert/blob/master/SUPPORTED.md
The website part is a work in progress. Likely another year of work before I put it live so folks can explore all this content. It has faceted search with a full text index including OCR’ed text from images.
The website does not require JS/CSS/HTML5/HTTPS so it works well in text based and vintage browsers. This allows retro system users like Amiga/Win95/AtariST to directly access the website and download useful shareware/freeware.
If you mean side projects with the intention of making money: none. My side projects are for fun and for things that are useful to me.
Currently I am solving the last bugs in GTE: Getting Things Email. A todo-app/task management system based on IMAP. That is, all the tasks are stored as emails. Basically, I got fed up with the million todo-apps that are already out there, because exactly zero of them can meet the (I think) reasonable requirement that I can use it with clients that are native to my devices. If you work in the terminal and you have a phone that does not run Android or iOS, then… nothing is available. And that is just the first requirement I have.
So the idea is to have something that runs on email, because email is supported on every platform and I can immediately use it from everywhere and later, if I want to, I can build dedicated clients on top of that. But it already works terrific for me, so maybe I’ll never get to that last part.
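The tasks-as-emails idea is simple to sketch: each task is just a message in a dedicated IMAP folder, with task fields carried in ordinary headers, so any mail client can show, move, or delete it. The header names below are hypothetical, not GTE’s actual schema.

```python
# A task is an RFC 822 message; storing one is a single IMAP APPEND.
from email.message import EmailMessage
from email.parser import BytesParser
from email.policy import default

def task_message(title: str, due: str, project: str) -> bytes:
    msg = EmailMessage()
    msg["Subject"] = title
    msg["X-Task-Due"] = due          # hypothetical header names
    msg["X-Task-Project"] = project
    msg.set_content("")
    return bytes(msg)

def read_task(raw: bytes) -> dict:
    msg = BytesParser(policy=default).parsebytes(raw)
    return {"title": msg["Subject"], "due": msg["X-Task-Due"],
            "project": msg["X-Task-Project"]}

# Storing it on the server, e.g.:
#   import imaplib
#   conn = imaplib.IMAP4_SSL("mail.example.com")
#   conn.login(user, password)
#   conn.append("Tasks", None, None, task_message("buy milk", "2021-03-20", "home"))
```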
Are you intending on releasing that? Because it sounds very interesting, mostly because I was thinking something like that a while back (see https://lobste.rs/s/8aiw6g/what_software_do_you_dream_about_do_not#c_6bpbbx) but never got around to doing anything about it.
Currently I have no plans to release it. Mainly because… I haven’t really thought about that yet. I am not sure if this is interesting for other people. It would need some serious polishing and idiot-proofing for that, I guess. With “solving the last bugs” I actually meant “fix the things that still annoy me on a weekly basis” :) The things you mention in your comment are possible, in theory. It is all very basic at the moment.
I did plan to write some posts about it, but I haven’t gotten around to that yet. If you want to have a look, here is the source code: https://git.sr.ht/~ewintr/gte
This is absolutely interesting and very much in line with what I’ve been looking into as well – a collection of tools that utilize email for the heavy lifting and perhaps are able to work under any client (implying the tool is run as a service or in a recurring fashion). Things like:
The code you’ve shared looks awesome, I’ll give it a try. But seriously, email infrastructure and semantics solve a lot of the issues inherent with these sorts of tasks, and there’s not much in terms of self-hosted tools that utilize email in solving them, so kudos to you.
Thanks! If you (or anyone else) have questions or suggestions, feel free to send me a message.
Working on making a programming language called Garnet. The goal is basically “what if Rust, but small?”. Take an OCaml/Haskell-ish type system, add move semantics and a borrow checker, and see if it can be made small and powerful enough to be usable as a lingua franca similar to how C is used: The ABI works anywhere (one way or another), there are compilers that work even in places they probably shouldn’t, building a simple compiler from scratch can be a medium-sized one-person project, etc.
Currently it looks doable, with your basic language primitives being functions, data types, and namespaces. I want to try to do something about some of the icky/weird parts of Rust (`FnMut()`, oh my, how do you tell the differences between these things and where does it end…), which may or may not be possible. I want to nail things down to have as little Undefined Behavior as possible, which may be a fool’s errand but I’m fine with trying. And I want to try to preserve the Totally Awesome things such as macros/derives.
So far the hard part is dealing with generics, one way or another; I want to explore the design space between monomorphic and polymorphic generics a bit more. It looks like you can find a nice middle point of performance, simplicity and generality with them… until you start adding type bounds of one kind or another, at least. Swift provides some interesting inspiration there, but not all of it seems suitable for a systems language, so I have work to do. We’ll see how it goes; I haven’t even started figuring out how to write a borrow checker yet. But it can compile and run a Fibonacci function, so how much harder could the rest of it be?
I really like the idea of compiler as a library, particularly for a bootstrapped language. Anders Hejlsberg talked about how modern compilers have moved from a batch design to a sort of query based design to support IDEs with interactive workflows. A compiler as a library approach would be a strong basis for this.
Yeah, it’s really weird how primitive “conventional” AOT compilers feel sometimes. “Input a file, exec a process, get the output as a file, do stuff with it”; even `rustc` does things this way. Why can’t I just give it an AST to read, call a library function, and get a function object back? That’s more or less what Rust’s proc-macros do, but you need to make a whole ‘nother compilation unit to do them in. Or worst case, can I give it source code and get a dynamic library back, and then just call `dlopen()` on it myself and call the functions I want? Apparently not. Why not? All the machinery is there, and has been for decades. Any programming language with a JIT has been doing exactly this for decades as well, except their problem is even harder. I like my immutable statically-linked binaries with no dependencies and no built-in runtime as much as anyone else, but it seems very limiting for certain things. If your compiler is a library that can run fast, consumes a few megs of space, and produces code with a known ABI, then what’s holding us back?
I think part of this is the issue of ABIs. It’s a complicated topic that I don’t actually know a whole lot about, but something as conceptually simple as loading and running a DLL can be fraught with weird, platform-specific edge cases. Raymond Chen’s writings are a rich source of examples. But I think even a simplified version of the full power of that sort of interface can do a lot of interesting stuff.
TCC, the Tiny C Compiler, can be used as a library, and it can compile code directly into RAM and have it linked with running code in the process. To see an example, I wrapped TCC into a Lua module and used it to load Lua modules directly from C source.
With that said, I don’t use this in production, but it was a fun project to do.
Julia maybe does this? It uses LLVM to do compilation on demand, and you can easily get the output of different levels of the compiler for a given function.
In April, I’ll have hacked on HardenedBSD for eight years. It consumes 100% of my spare time. It’s a heck of a lot of fun. There’s so much to do, so it keeps me busy and there’s always something to work on.
My open source side project for almost a year now is GitUI: https://github.com/extrawurst/gitui A fast Terminal-UI for git written in Rust.
It got especially rewarding when people started contributing❤️
Personally been hacking a bit on misc tooling for my secure boot stuff.
sbctl - secure boot control. Essentially aims to provide a nice abstraction over managing your self-enrolled secure boot keys.
go-uefi - efivarfs bindings written in Golang. Been slacking a bit but the goal is to have integration tests with ovmf/edk2 and qemu using vmtest
EDIT: I almost forgot I had a talk about this on FOSDEM last month :) link
Apart from that I have been implementing support for debug packages and debuginfod in Arch Linux. Debian got it before us, so I have been working consistently on this the past week to ensure I don’t bring shame upon my future children for generations to come.
This is a very real problem! Secure boot is great but the tooling around it is painful.
Yes! I’m amazed that nobody has really taken a good look at the tooling. The good news is that people are working on it. systemd-boot is looking at enabling better support which is great!
I have been working on a personal replacement for ncurses, for use in my own projects. It does not have a lot of features yet (colour, for example), but I am continually improving it.
I’ve been doing something similar for TUI windowing/controls on Windows, in the hope that with a good library for it, I’d write useful TUI programs. Unfortunately that second part has proven hard - I haven’t found many problems where TUI seems better suited than CLI. Do you have any TUI applications in mind that you’d like to see?
Personally, I’m writing the quintessential rogue-clone for my library. After that, I’ll probably start work on a text editor. Those are just my ideas though; I do understand if you want to do something innovative, but in that case I can’t help you.
Admin panels of various sorts, IMO. Things like the Services admin window, where I can open/close things. Site checkers that report up/down.
Those are the things I’d write as TUIs
Wow, I wrote a curses replacement in the ’80s to run on DOS, definitely a lot of fun.
Edit: remembered some details. I wrote it in Turbo C and used a telnet library, but the library I had was for the small memory model, so the program had a maximum of 64KB. I could only implement a subset of curses because the program exceeded 64K; the large model library cost money, so 64K it was. It was used quite a bit, IIRC.
I am trying to find new ways to draw geometric shapes. No commercial intentions.
Do you have a repo available? That sounds interesting!
Sorry for the long delay in answering. Things are scattered across a couple of repos that I keep changing.
The main one is this: https://github.com/HugoDaniel/shape-the-pixel (still disorganised and not ready to be followed I think).
The most recent experimentation is https://github.com/HugoDaniel/OnlyLines, which will be merged soon(ish) into the shape-the-pixel one.
In summary I am trying to do the most simple drawing app possible. In order to achieve that I work with a set of limitations and rules that are in opposition to what is commonly done:
Dragging is the main action, and infinite lines are the base shape. When dragging in an open space, a new line is formed. When lines intersect, a point is made; those points can then be dragged to form circles. Then there are a lot of other small things that come and go as experiments prove them more or less useful.
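The “lines intersect into points” rule reduces to a small bit of linear algebra: an infinite line is a point plus a direction, and two non-parallel lines meet in exactly one point (solve a 2×2 system). A sketch of the geometry, not the app’s actual code:

```python
# Lines are given as p + t*d (point p, direction d).
def intersect(p1, d1, p2, d2):
    """Return the intersection point of two infinite lines,
    or None if they are parallel."""
    # Solve p1 + t*d1 = p2 + s*d2, i.e. [d1 -d2][t s]^T = p2 - p1.
    det = d1[0] * -d2[1] - d1[1] * -d2[0]
    if abs(det) < 1e-12:
        return None  # parallel (or identical) lines
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * -d2[1] - ry * -d2[0]) / det  # Cramer's rule for t
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```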
I am trying to avoid overthinking things and to keep it as simple as possible. This is very hard for me to achieve, and it is the fruit of many experiments and failed attempts throughout the years.
Here is the most recent video with the current state of it:
I am not sure why I am doing this, sunk cost maybe, but yeah :) thanks for asking about these, it feels great :D
An ad/junk blocker that:
Good luck. I’d love a “junk blocker” that goes beyond ads to remove things that users (obviously) don’t want: synchronous popups to join mailing lists; synchronous popups where devs brag about the latest features they’ve added; consent for tracking cookies; background JS that’s not driven by user interaction, etc. Unfortunately I don’t know how to build one that doesn’t devolve to a site-by-site, element-by-element breakdown of which components are obnoxious.
I’m getting a lot of use out of “reader view” these days. Do you know how those work? Do they end up encoding known elements of major sites, or is it purely heuristic? It’s not perfect, but there’s a good chance that if a site does a synchronous popup, reader view will fix it.
I think almost all the browser-based reader views work like the original Readability bookmarklet code.
Don’t know if this is based on that code, but it should do what you want.
It will not go into that.
…but, by blocking third-party js by default, most of that goes away like magic.
Only per-site config available will be a domain blocklist. Or an allowlist that allows more stuff through.
I mentioned it on the “what are you doing this weekend?” thread last week, but it’s the plain-text-file-as-a-datastore lobsters/HN/reddit “clone” here
Yes it is as cursed as it sounds, but it has been quite enjoyable to hack on so far
edit: also building a little web app that will allow me and my wife to keep track of the words my little boy says, because he’s in speech therapy
Hahah, I’m doing a similar thing on https://littr.me. I’ve written so far about 4 storage backends for the thing including one that saves the objects as json files in a directory structure. Do you have any tips for text as a datastore usage (improving caching, searching, etc)?
Mine is cursed, far from production-ready. Regarding searching, I’d say index things. I did this on another unfinished project, a search engine that indexes the JSON files its scraper creates: https://git.sr.ht/~ols/veri-index
Most recently I’ve published a basic web app called Vizor for interacting with Google’s Vision API to extract text from images. I have a small backlog of public domain books I’d like to digitize with it, and give them the hypertext treatment on llll.ro.
Cool project. Are you uploading the books to archive.org?
By the way, I’ve found that the Microsoft Vision API is just a tad bit better at properly identifying text in images: https://azure.microsoft.com/en-us/services/cognitive-services/computer-vision/
Don’t even bother with Amazon Rekognition; it’s horrible at properly OCRing text.
Eventually, I’d like to. But I need to figure out a rig to take proper photographs — the kinds I’m producing right now are merely for OCR purposes.
Thanks, I’ll check it out! I’m doing OCR on Romanian texts that use older orthography and diacritical marks, so Google’s offering was the only one producing remotely adequate results among the options I explored. It’d be great if I could squeeze some extra accuracy out of the process.
You could always just upload the OCR text results and not worry about the photos. As for Microsoft Vision, I only tested with English, so not sure how it’ll perform with your use case. Good luck!
I mentioned it in the weekend post, but I suppose it’s ok to mention here again. I am working on building a clone of Lobste.rs in Elixir/Phoenix. The application is complex enough to force me to really learn web development, but not so big that I’ll never finish. It has been a lot of fun so far!
Part of my motivation for this is that I’d also like to create some developer tools, but that’s hard to do without dogfooding. I have a basic log ingestion and search service written, and I need logs to test it! I also have a few ideas around CI I’d like to build, such as auto rollback based on metrics.
P.S. If anyone is interested in checking out the events site when it’s released, shoot me a PM with your email. No mailing list BS, I’m just gonna send an email with a link once it’s ready.
runson.cloud, a service that aggregates many of the public IP address range lists published by major cloud providers and CDNs, and provides two query interfaces over that data.
The inspiration was icanhazip.com, in the sense that the site exists primarily to serve the CLI and other lightweight integrations.
The Go libraries for parsing the data for the big 3 clouds are available and come with a CLI interface of their own for local use:
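The core query (“which provider announces this IP?”) is just a membership check over the published CIDR ranges; Python’s `ipaddress` module is enough for a sketch. The ranges below are made up for illustration, not real provider data, and the site’s actual implementation is Go.

```python
# Map an IP address to the provider whose published ranges contain it.
import ipaddress

RANGES = {
    "aws": ["52.0.0.0/8", "54.240.0.0/12"],   # illustrative, not real data
    "gcp": ["35.190.0.0/17"],
}

def provider_of(ip: str):
    """Return the provider name whose ranges contain ip, or None."""
    addr = ipaddress.ip_address(ip)
    for provider, cidrs in RANGES.items():
        if any(addr in ipaddress.ip_network(c) for c in cidrs):
            return provider
    return None
```

A production version would precompile the ranges into a prefix trie for fast lookup, but the semantics are the same.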
My utter frustration with Helm led me to write dy, so that one can use the filesystem to sanely construct YAML documents.
Kapo was written in a similar fashion in order to abuse (at the time) ELBs to health check and autoscale non-HTTP services.
5 years or so ago I wrote a benchmark and functional test harness for ELK stacks.
Currently working on something that can be used to apply Linux traffic control setups, with plans to eventually build it out into some kind of QoS control plane daemon. The hardest part has been figuring out how this tool should operate so that it’s actually easier than a shell script full of plain `tc` commands (since that would be the tool’s main competition). It’s been a good exercise in stopping myself from overthinking certain things and just writing them to see what happens, since I find myself reworking the same functionality often just because I think it could be done better.
Progress is slow though: sometimes lack of motivation, other times I get nerdsniped into spending the weekends resolving weird bugs and problems in other projects.
Are you using XDP for this purpose?
I am working on GPS analysis for rowers: it takes a GPS file and computes metrics that are useful to track performance. I’m doing this specifically for rowers in Cambridge UK who are rowing on a narrow river. The feature set is similar to what Strava does. Currently it is a command-line application that generates an HTML report. An example is here: https://lindig.github.io/tmp/example.html
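The basic metric extraction from a GPS trace: haversine distance between successive fixes gives speed, and rowers usually care about the split (time per 500 m) rather than raw m/s. A sketch of the arithmetic, not the report generator’s actual code:

```python
# Distance between GPS fixes and the rower's "split" metric.
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    R = 6371000.0  # mean Earth radius in metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * R * asin(sqrt(a))

def split_per_500m(distance_m, seconds):
    """Seconds per 500 m at this average speed."""
    return 500.0 * seconds / distance_m
```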
The ideas are not necessarily tied to rowing. They could be also used for running or cycling but these domains are much better served by existing tools than rowing. Main ideas are:
I have been working on (and currently extending) these:
RTFM - a terminal file manager with a ton of features: https://github.com/isene/RTFM
Astropanel - a terminal dashboard for amateur astronomers that shows weather forecasts, Sun/Moon/planets rise & sets graphically as well as positions in the sky, a starchart, etc to help decide when to bring out your telescope: https://github.com/isene/astropanel
T-REX - a terminal RPN calculator similar to the range of HP calculators from when they made quality calcs (70s & early 80s): https://github.com/isene/T-REX
I’ve been slowly making progress on a tool for passively monitoring TCP RTTs using eBPF and golang. The basic PoC is done and it just outputs a single RTT derived from SYN/SYN-ACK to stdout, but the plan is to eventually have continuous RTTs for a TCP connection inserted into a timeseries database. https://github.com/MarkPash/flowlat
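The SYN/SYN-ACK RTT idea in miniature: remember when each outgoing SYN was seen, then match the returning SYN-ACK by the connection 4-tuple. flowlat does this in-kernel with eBPF; this is just the bookkeeping half, sketched in Python.

```python
# Pair SYNs with SYN-ACKs by flow and report the handshake RTT.
def make_tracker():
    pending = {}  # 4-tuple (src, sport, dst, dport) -> SYN timestamp

    def on_syn(flow, ts):
        pending[flow] = ts

    def on_synack(flow, ts):
        """Return the handshake RTT in seconds, or None if unmatched."""
        t0 = pending.pop(flow, None)
        return None if t0 is None else ts - t0

    return on_syn, on_synack
```

A continuous-RTT version would also track TSval/TSecr or seq/ack pairs after the handshake, which is where it gets harder.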
There are a couple of other small things but the code isn’t open source yet so it’s barely worth mentioning until I make the repos public.
That’s really cool!
I am working on a SaaS startup to provide tools for boards of directors that will help improve the quality of the information they need to govern well, while reducing the amount of work for staff and management to provide that information. The first piece implemented is a simple risk registry to replace the spreadsheet we were using (I am on the board of a small CU here in the Fraser Valley). Others are planned. Working on the marketing site for it, and then the TOS and privacy policies.
My most “serious” side project is probably swaylock-effects, though I don’t work that actively on it currently because it does what I want it to do. I mainly just get in there and fix some bugs sometimes.
I’m making a log analyzer called lograt, because that’s something I needed and I didn’t like the existing alternatives I found - plus learning GUI programming is nice.
I’m also working on a programming language, currently under the working title lang2. It’s my first attempt at making a language which compiles source code into bytecode and then interprets the bytecode; previous languages have just built a syntax tree and interpreted the tree directly. I think the syntax of lang2 is pretty cool, and one interesting thing is that it doesn’t ever build a parse tree; the parser just emits bytecode as it goes along. The VM is stack based, so with some careful design, it works out.
Outside of programming, I’m playing with making music. Here’s my current music project. I’m fairly pleased with how it’s turning out.
I’m also working on a game, which I won’t link to right now but it might eventually become something serious.
Here’s my comment about compiling directly to bytecode. One of my earlier languages did this. The advantages were that there wasn’t much code, the compiler was super fast, and the interpreter was fast. But once the transformation from syntax to bytecode becomes complicated enough, it turns into a liability. I gave up the approach after my code for compound locatives (e.g., the left side of this assignment: a.b[i] = x) turned into an unmaintainable mess. So you need to limit the complexity of your language to make it work.
My current language generates a concrete parse tree, then transforms that into an executable tree that is then interpreted. That has worked really well for several years. But now I’m at the point where I am trying to optimize and do more ambitious code generation. Once again, I’ve run into data structure limitations, so I’m going to need another pass: parse tree -> IR -> executable code.
I actually think it’s possible to write a formal proof that any language which can be represented as a parse tree can also be compiled to bytecode directly, if the bytecode is carefully designed and the VM is stack based. My observation was that a stack over time represents a tree: the call stack over time becomes the parse tree, and the parser outputs bytecode which makes the VM’s stack reflect the parser’s stack.
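That observation can be sketched in a toy: a recursive-descent parser whose call stack *is* the tree, emitting stack-machine bytecode as it goes, with no AST in between. Prefix expressions only, in Python rather than the project’s C, to keep it short:

```python
# One-pass compiler: the recursion structure mirrors the parse tree,
# and the emitted code makes the VM's stack replay that structure.
def compile_prefix(tokens):
    """'+ 1 * 2 3' -> [('PUSH',1),('PUSH',2),('PUSH',3),('MUL',),('ADD',)]"""
    code = []
    def expr():
        tok = next(tokens)
        if tok in ("+", "*"):
            expr(); expr()          # recursion is the "tree"
            code.append(("ADD",) if tok == "+" else ("MUL",))
        else:
            code.append(("PUSH", int(tok)))
    expr()
    return code

def run(code):
    """A stack-based VM for the bytecode above."""
    stack = []
    for op, *arg in code:
        if op == "PUSH":
            stack.append(arg[0])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == "ADD" else a * b)
    return stack.pop()
```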
My language’s equivalent of `a.b[i] = x` would be `a.b.(i) = x` (dot syntax is used for all lookups; `arr.10` looks up index 10 in an array, `arr.(idx)` looks up the index given by the expression `idx`). Here’s the bytecode it will output:

`STACK_FRAME_LOOKUP 0x00000007` – Look up name `a` (numeric ID 7) in the current stack frame. After this instruction, a variable reference is on the stack.

`NAMESPACE_LOOKUP 0x00000008` – Pop the variable reference on the stack, look up name `b` (numeric ID 8) in that variable. After this instruction, a reference to the variable given by `a.b` is on the stack.

`STACK_FRAME_LOOKUP 0x00000009` – Look up the name `i` (numeric ID 9) in the current stack frame and push the variable reference to the stack.

`STACK_FRAME_LOOKUP 0x0000000a` – Look up the name `x` (numeric ID 10) in the current stack frame and push the variable reference to the stack.

`DYNAMIC_SET` – Pop the stack, treat the variable reference as the value to be assigned. Pop the stack again, treat the variable reference as the key. Pop the stack again, treat the variable reference as the array which will be assigned to.

It’s able to handle all kinds of combinations; `a.b.c.(+ 10 20)().(foo()) = 11` works just fine. It’ll find `a.b.c`, look up the index given by the sub-expression `+ 10 20`, call the function (assuming index 30 is a function), then assign the value 11 to the name or index given by the sub-expression `foo()`.
The main thing I’m really missing by not doing an AST is the ability to do AST-based optimizations. If I want to do constant folding, for example, I have to do that on the generated bytecode with the current design, which is much harder than doing it on a tree structure. I would probably have used an AST if it were a serious language project rather than a toy, but I wanted to experiment with a different approach. As it is, the parser/compiler generates the entire program in a single pass, without producing an AST, and it writes as it goes to a write stream, so it can’t even make tables for things like functions and string literals; it has to put those things inline with the rest of the code.
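Constant folding on flat bytecode is still doable as a peephole pass when the constant operands end up adjacent; it only gets painful once they don’t, which is the difficulty being pointed at. A sketch with made-up opcode shapes (tuples, not lang2’s encoding):

```python
# Peephole constant folding: PUSH a, PUSH b, ADD  ->  PUSH (a+b).
# Cascades naturally because folded results land back on `out`.
def fold_constants(code):
    out = []
    for instr in code:
        if (instr[0] in ("ADD", "MUL") and len(out) >= 2
                and out[-1][0] == "PUSH" and out[-2][0] == "PUSH"):
            b, a = out.pop()[1], out.pop()[1]
            out.append(("PUSH", a + b if instr[0] == "ADD" else a * b))
        else:
            out.append(instr)
    return out
```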
I think there is an additional constraint: you have to design the syntax of the language so that you can generate code blindly, without knowing what is coming later on in the token stream. In my language, the token `x` could be followed by `:=`, the assignment operator, in which case `x` is a locative. Or `x` could be followed by `+ 1`, in which case this is an expression and not part of an assignment statement. Or `x` could be followed by `->`, in which case this is a lambda expression, and `x` is the formal parameter of a new function. I can’t generate the same code for `x` in all 3 cases without knowing the context in which `x` appears, and this problem generalizes to the case where `x` is a more complex expression. But given that constraint, your approach has the benefit of extreme simplicity and a fast compiler.
I notice that you use `a.1` as the syntax for array indexing. I briefly considered this idea, but then I backed off because it conflicts with the syntax for floating point numerals. My lexical analyser will tokenize this expression as `a` followed by `.1` (2 tokens, not 3). You have either overcome this problem, or you don’t have floating point numerals.
I do support things like `a.b.(i) = x := 10` just fine. My main invariant is that every expression leaves exactly one new variable reference at the top of the stack. The `a.b` part leaves one variable at the top of the stack, then the expression `i` pushes another variable reference to the stack, then when the parser sees the `=`, it generates code for the expression `x := 10`, which results in a third variable reference being pushed to the stack, then it generates an assignment instruction which pops the value, pops the index, pops the thing being indexed, and pushes the value. Therefore, it distinguishes between `... = x + 10`, `... = x` and `... = x := 10` in the exact same way it would have if it were a top-level expression rather than the expression denoting the value in an assignment. (I don’t have infix operators, but I’m fairly sure I could implement them with my current scheme. Could be an interesting experiment.)
Here’s the exact code which handles the `<expression> <dot> <open paren> <expression> <close paren>` case: https://git.mort.coffee/mort/lang2/src/branch/master/lib/parse/parse.c#L330-L356 – if the token after the close paren is an equals sign, it calls the `parse_expression` function, which parses a top-level expression, then generates a dynamic_set instruction which uses the three top variable references on the stack.
The `.1` syntax for indexing: It works for me because I don’t support floating point numbers with the leading digit omitted. My language would interpret `.1` as a dot and a 1; to write the number 0.1, you would just write `0.1`. Maybe it’s an American thing, but I never liked the leading dot syntax.
(In reality, I don’t actually have a float syntax at all, because I didn’t feel like writing a floating point parser and `strtod` is locale and system dependent. I’ll get around to writing a float parser …eventually.)
If I absolutely wanted to support floats with leading dots though, I could totally do that, I think. I would take the float parser out of the lexer and into the parser proper; the lexer would see `0.1`, generate a number token, a dot token and a number token, and the parser would interpret that token sequence as a float literal. The parser could then also interpret a dot token followed by a number token as a float literal in the places where it makes sense; `a.10` would still mean index 10 of `a`, while `.10` by itself would be a float literal. This would mean that I have to change some other syntax to disambiguate, because currently `[a .10 20]` would be interpreted as an array with two elements, `a.10` and `20`. Adding a comma between array members (and between function arguments) would fix that. I don’t think I want to do that though, since I like the separator-less syntax (and I don’t like the leading dot float syntax).
One challenge I actually have, regardless of leading dots, is what to do with things like `a.10.20`. If I supported float parsing and just let the lexer loose, it would generate the token sequence `<identifier "a"> <dot> <float 10.2>`, which obviously isn’t right. My solution is that if the lexer sees a dot followed by a digit, it consumes the dot, reads the following digits, and emits a `dot-number` token. That means `a.10.20` is actually not `<identifier "a"> <dot> <number 10> <dot> <number 20>`, but rather `<identifier "a"> <dot-number 10> <dot-number 20>`. This doesn’t change anything though; the parser could interpret a dot-number token as a float literal.
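The dot-number trick described above can be sketched in a few lines (Python here rather than the project’s C, and simplified to just identifiers and dot-numbers):

```python
# When the lexer sees '.' followed by a digit, it eats both and emits
# a single dot-number token, so 'a.10.20' never risks lexing as the
# float 10.2.
def lex(src):
    tokens, i = [], 0
    while i < len(src):
        c = src[i]
        if c == "." and i + 1 < len(src) and src[i + 1].isdigit():
            j = i + 1
            while j < len(src) and src[j].isdigit():
                j += 1
            tokens.append(("dot-number", int(src[i + 1:j])))
            i = j
        elif c.isalpha():
            j = i
            while j < len(src) and src[j].isalnum():
                j += 1
            tokens.append(("identifier", src[i:j]))
            i = j
        else:
            tokens.append((c, None))
            i += 1
    return tokens
```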
Thanks for pointing out that strtod is locale dependent. That’s a bug in my code. I plan to fix it by using the StringToDoubleConverter in Google’s `double-conversion` library. As a bonus, this will allow me to add support for the use of optional `_` separators in numerals.
Well, this discussion made me finally get around to writing a float parser! It’s not the greatest code, it could do with some better factoring, but it works: https://git.mort.coffee/mort/lang2/src/commit/d8239aaef42f78d677d8b18cabc5160d8850a252/lib/parse/lex.c#L138-L310
I support the common bases (`0o755`), I support arbitrary bases (`36rhello` for base 36), I support fractions (also in different bases, `630.908`), and I support apostrophes as separators (`0b0001'1111''0001'0110`). And obviously, leading zeroes even in decimal numbers are supported. I currently don’t support exponents, and won’t support leading or trailing periods (all numbers are doubles, so the thing where you use a trailing dot to change the type isn’t necessary).
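The described number syntax can be sketched roughly like this (Python rather than the linked C lexer, and simplified: no exponents, no error handling, and without the correctly-rounded parsing discussed below):

```python
# Base prefixes, arbitrary-base <base>r<digits>, apostrophe
# separators, and fractional digits interpreted in the chosen base.
def parse_number(text):
    """Parse "0o755", "36rhello", "0b0001'1111", "630.908" to a float."""
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    text = text.replace("'", "").lower()
    base = 10
    if text[:2] in ("0x", "0o", "0b"):
        base = {"x": 16, "o": 8, "b": 2}[text[1]]
        text = text[2:]
    elif "r" in text:
        base_str, text = text.split("r", 1)
        base = int(base_str)
    int_part, _, frac_part = text.partition(".")
    value = 0.0
    for ch in int_part:
        value = value * base + digits.index(ch)
    scale = 1.0
    for ch in frac_part:          # fractional digits, also in `base`
        scale /= base
        value += digits.index(ch) * scale
    return value
```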
Anyways, thanks for the conversation. I like talking about this stuff, and I have way too few language nerds in my social circle.
Writing a floating point parser that returns correct answers is extremely difficult, and only a handful of people have succeeded. It can’t be done using the obvious algorithm that you have used. The author of Google’s double-conversion library got a job with Google on the basis of being the first person to figure out an efficient algorithm for parsing floating point numbers correctly. Previous attempts were much slower and allocated more memory. The paper to read is “How to read floating point numbers accurately”. Printing floating point numbers accurately, and with the fewest number of digits, is also extremely difficult, and can’t be done using ‘printf’. So I use double-conversion for this task as well. See the paper “How to print floating point numbers accurately”. There is an even more recent library called Ryu that claims to be even more efficient than double-conversion, but I haven’t investigated it.
Both of the side coding projects that I’m actively working on these days are related to amateur radio.
1: An open-source ionospheric mapping website thingy that takes real-time data from ionosondes (think radars pointed straight up that can measure the structure of the ionosphere) and combines it with the IRI-2016 model, by way of some interesting math, to let radio amateurs predict the best times and frequencies to make a contact with a given location. This has been my main focus for a little over the past two years, and it’s a polyglot of mainly Python, Fortran, and Perl. It’s hosted on a machine I bought off of eBay and pay to colo.
Currently I’m working up a poster presentation for the HamSCI Workshop in a little under two weeks, talking to the HamSCI folks about ways we might collaborate, and generally trying to attract contributors and make this less of a solo project.
2: Tools to support the Flex 6000 series of SDR radios on Linux. Flex radios are some of the coolest toys available to the discerning hobbyist (i.e. those with cash to burn) and they do everything over Ethernet, using (more-or-less acceptably) documented protocols. But the only officially supported platforms are Windows, macOS, and iOS, and there was nothing available to use them with Linux, which was a bit problematic for me. So I wrote one app that makes their TCP API accessible to any app that uses the ubiquitous “hamlib” for CAT control (setting frequencies and levels, switching between transmit and receive, etc.), and another one that makes their UDP-based digital audio appear as PulseAudio devices. Together, they’re enough to run the most common “digital mode” apps with a Flex and have it work like a conventional radio plugged directly into your computer.
This one is seeing less active development at the moment, as it’s stable and does most of what I want it to do, but there are a bunch of things to add if I ever get the ambition, like:
I have three main experimental projects going that will likely never move past the “fun to hack on” stage:
A template for building “rapid community response websites” to encourage mutual support and resource sharing during natural disasters, social unrest, or even just large gatherings and events. This one is designed to be deployed on a Raspberry Pi or cheap micro-VPS, run for days or weeks, and then get spun down.
A sci-fi game that borrows from interactive fiction and classic MUDs and MOOs to let players build influence and resources through dialogue and task automation (so really it’s just a modern workplace simulator wrapped in a game setting)
Community discussion and link sharing driven by strong (cryptographic) identity and subjective moderation: I subscribe not just to your posts but also your moderation feed, which gives me a filtered view of other posts flowing through the network.
There are also a couple of half-finished robots and handheld open source “communicator” (p2p messaging) hardware projects sitting in my home office that will hopefully see more progress when we move and I have a workbench that isn’t the desk I also use for my day job.
All of them are learning and creative expression projects first and foremost; while I’d love to actually see them out in the world, I’m trying to be realistic about what I can meaningfully commit to during a pandemic when I have two small children at home and my own mental health to maintain.
I’m considering building some kitchen management software. During the pandemic I did a lot more cooking at home and so far all the recipe/menu/inventory/shopping software has been a very poor fit.
I am slowly creating a routing stack in Minecraft.
Here’s a low effort demo I put together over the last holiday season. https://user.fm/files/v2-edf060e617295e76693bf0edc6f4a5e7/carts.mp4 - (carts bouncing around corners slowly is an artifact of how tightly packed I constructed the tracks, it’s not a bug I think)
So far you can statically define layer 3 routing tables and carts will traverse as many hops as necessary to get there. Physical layer is taken care of. I don’t see a need for a data link layer at this point.
This draws inspiration from Craftbook and other plugins that make minecarts more interesting to deal with, but have a fairly inadequate train station system compared to something more packet switched and redundant.
I expect to be reimplementing sad versions of DNS, NAT, OSPF, BGP out of necessity for a robust train network and also just as a learning experience.
Curious to hear if there is interest. Unreleased so far, not sure if it will get off the ground outside my friends’ server. Source at https://git.sr.ht/~phroa/intercart
I made phase.city recently, which is about the level of non-employment technical accomplishment I work up the ambition for these days: An HTML template and a cron job.
I maintain a few small things on the side. There’s a tiny blog engine I make substantial changes to every year or two. Sometimes I push mildly useful stuff to a dotfile repo. Mostly of late I find myself slowly iterating on a collection of notes and the hacky scaffolding around them.
Sometimes I remember a time when I thought my software work might be a way to build some form of lasting value, and daydream about the sort of world in which it might be. Generally though, I try not to kid myself.
I feel this, so much.
I have a little script that shows the current moon phase too - I show it in the status line of my tmux.
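For anyone who wants the same, here's a rough Python sketch of how such a script can work. It's the mean-synodic-month approximation (good to within a day or so of the true phase), counting forward from a known new moon on 2000-01-06 18:14 UTC:

```python
import datetime

SYNODIC_MONTH = 29.530588853  # mean lunar cycle length, in days
REFERENCE_NEW_MOON = datetime.datetime(2000, 1, 6, 18, 14)  # a known new moon (UTC)

PHASES = ["new", "waxing crescent", "first quarter", "waxing gibbous",
          "full", "waning gibbous", "last quarter", "waning crescent"]

def moon_phase(when):
    """Approximate the phase name: compute the fraction of the synodic
    cycle elapsed since the reference new moon, then bucket it into
    one of eight phases."""
    days = (when - REFERENCE_NEW_MOON).total_seconds() / 86400
    frac = (days / SYNODIC_MONTH) % 1.0
    return PHASES[int(frac * 8 + 0.5) % 8]
```

Printing `moon_phase(datetime.datetime.utcnow())` from a tmux `status-right` shell command would be enough for a status-line display.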
Latest web “project” was inspired by Eternal March: http://gerikson.com/cgi-bin/eternal.cgi
I set out to put it in my xmobar and got sidetracked - thanks for the reminder. :)
This is lovely.
I’m working on Piccle, a static site generator for photographers. It uses EXIF metadata embedded in your photos to build an explorable web portfolio. (You can see an example gallery on my website, though some pictures are NSFW.)
As far as I know I’m the only user so far (hence its prerelease tag!) but I’ve been working on it for a while and it’s stable/complete enough for my main uses. Over the past month I’ve focused more on editing photos/tidying my metadata, but I still want to add some extra features soon. I’m also planning to experiment with client-side rendering support – not least because the current rendering pipeline can be drastically simplified if that turns out to be a bad idea.
I really liked your pictures, great work!
Thanks! I’m glad you like them. :)
This is relevant to my interests - I’ve been searching for something I can selfhost as opposed to Flickr, and I like the idea of static generation.
Many moons ago I used something similar that was supposed to generate a portfolio but it lacked the metadata integration.
I have got a few projects lined up:
These are the ones I can think of, I probably have more I have forgotten.
I’m currently working on a little tool to help me (and any team) keep track of the issuing and expiration of all types of certificates, domains, keys, etc. When possible it should also try to alert when something has been misused and/or created/changed without consent.
As part of my work on an (unreleased) data interchange language, I recently dug into string literals. Exciting, right? Turns out it is… If you spend a lot of time writing in a plain text format, having a language/format that does string literals well is quite nice.
So, I’m all ears if you have comments about string literals you use: what you like, what you don’t, and so on.
Since I am deep-diving into Raku, string literals are one of the subjects by which I judge other programming languages in comparison to Raku. It has so many layers of possibilities and choices of quoting constructs.
Raku/Perl provides a lot of variety, indeed.
Currently, with my language-in-progress, there is one basic string form: "...". It has six default behaviors. Each behavior can be toggled off by adding a flag prefix. For example, -e"..." disables escaping. My rationale is that only six flags need to be remembered to get 2 ^ 6 = 64 varieties.
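To make the combinatorics concrete, here's a toy Python sketch of that flag-prefix scheme. Only the "e" (escaping) flag comes from the description above; the other five letters are placeholders I invented:

```python
from itertools import combinations

# Six default behaviors, named by single letters. Only "e" (escaping)
# is from the actual language; "abcdf" are placeholders.
DEFAULTS = frozenset("abcdef")

def behaviors(flag_prefix):
    """Return the behaviors still enabled after applying a flag prefix
    such as "-e" (or "-ab" to disable two at once)."""
    enabled = set(DEFAULTS)
    for letter in flag_prefix.lstrip("-"):
        enabled.discard(letter)
    return frozenset(enabled)

# Every subset of the six flags yields a distinct variety: 2^6 = 64.
varieties = {behaviors("-" + "".join(subset))
             for r in range(7)
             for subset in combinations(sorted(DEFAULTS), r)}
```

So a reader only needs to remember six flags, yet gets 64 distinct string forms.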
Here is the string literal syntax for my language: https://github.com/curv3d/curv/blob/master/docs/language/Strings.rst
Multiline strings can be indented without adding the initial white space to the string data: | characters are metacharacters that separate the initial white space from the data.

You can interpolate a variable or an expression into a string using $. A literal $ may be included in string data by escaping it like this: $_. This lightweight escape syntax has some technical advantages which are described here: https://github.com/curv3d/curv/blob/master/docs/language/rationale/Char_Escape.rst
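As a toy illustration of that escape rule, here's a Python sketch covering only simple $name interpolation and the $_ escape (Curv's real grammar has more forms, e.g. parenthesized expressions, which I've skipped):

```python
import re

def interpolate(template, variables):
    """Expand "$name" to the variable's value; "$_" becomes a literal "$"."""
    def repl(match):
        name = match.group(1)
        if name == "_":
            return "$"          # the lightweight escape for a literal $
        return str(variables[name])
    # "_" is tried before identifiers, so "$_10" yields "$10".
    return re.sub(r"\$(_|[A-Za-z]\w*)", repl, template)
```

The nice property is that $ is the single metacharacter, so only one escape form has to be remembered.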
Thanks for sharing this. I appreciate the written rationale for character escaping.
I’m working on a minimalist blogging engine. It isn’t a static site generator, because I dislike most of those for reasons I have trouble putting into words. It will be very “web 1.0”, usable from absolutely anything with a browser. It will have an RPC API to make it easy to add frontends. I’ll possibly add support to allow posting and commenting via email.
Essentially it will be a tool to “enable Chris to write and publish frictionlessly.” I’ll put the source up somewhere once it’s usable, on the off-chance that someone else wants to use it.
Tentatively I’m calling it FWB, for “fight writer’s block” or “frictionlessly write blog”. I like overloaded TLAs and double entendre.
This sounds interesting! I personally use a static site generator to publish my blog (< 1 article a month). I would love to hear your gripes with static hosting, though
I’m not opposed on principle to static site generators. They’re a great solution for a lot of folks, and they have some real advantages, like being able to host a site on just about anything with a web server.
I’m sure I could get everything I want from a static site generator, but I’d end up writing a bunch of tooling to do it. Some of that tooling is going to cancel out the nice properties of a static site, so I may as well make one monolithic thing that is self-consistent.
Static site generators force me to think about things orthogonal to writing, like “what am I going to call this file, where am I going to put it, etc.” Ideally, I’d like to be able to type “M-x new-post” in emacs and have emacs spawn an empty buffer. Write some stuff, and then hit C-c to post a draft. I don’t have to deal with a “build” step or an upload step.
While I do the vast majority of my writing in emacs, I also want some device independence. When I’ve used static site generators in the past, they’ve tended to tie me to a Unix environment.
Finally, if I wanted to add comments to a static site, I’d need to either use a third-party service like Disqus (extremely unpalatable), find an existing static comment system, or roll my own.
I’m making a bunch of excuses for NIH.
The slow-burn project is a desktop-sized PnP for electronics. I’m already aware of OpenPnP and LitePlacer, but from a UX PoV, a miniaturized version of an industrial machine isn’t what I’m looking to make or use.
Naturally, it’s spun off something like a dozen or two sub-projects for things like precision, control, mechanics, optics, fabrication, etc…
Working on a self-hosted music server since I’m unhappy with the discoverability and organizational capabilities of the existing offerings.
I’m not in a rush to get a stable release (read: when I stop editing the migration files) out, so the project turned into a sandbox for tooling and practices experimentation. This is my first $modern_frontend_js_framework project as well, so I fiddled around a lot there.
I’ve enjoyed having a project with no need to execute and deploy. Work is a constant balance between delivery and QC/tooling/practices/whatnot, with a priority on the former, but I can mess around as much as I want here!
I’ve been working on Aft, an indie backend-as-a-service. It’s got an integrated database, with automatic prisma-style REST APIs, fine-grained access controls, user login, and scriptable RPCs.
Most of my software productivity outside of work is for a community for software professionals called Code & Supply that I co-run with a friend.
Hey! Curious if there’s still intention to upload videos from the first Abstractions conference?
Unfortunately, no. There’s only one video that wasn’t affected by the severe technical problems that our AV vendor didn’t catch before it was too late: Anatomy of a Great Pull Request by Sean Griffin. Everything else had bad video, bad audio, or both.
I am currently writing a Tumblr proxy a la Nitter, only for Tumblr rather than Twitter. It’s basically invite-only for now, because I don’t have the experience to host public-facing infrastructure and am afraid both for my small server’s life (CPU load) and livelihood (traffic & susceptibility to being hacked).
It’s pretty fun, has enough bugs to keep me on my toes when I am in the mood, and I am using it daily to read my various feeds on tumblr and other sites.
And because I had this architecture in place, it has turned into a more generic feed reader, now supporting Twitter, Instagram, RSS and even AO3 all potentially in one feed.
Oh, and for my next project I want to convert my RaspberryPi 4 into both a pipe organ and a piano, using aeolus and Pianoteq because I recently bought a MIDI keyboard/controller and have had much fun playing it, mostly learning to play it.
I’m working on memcached server rewrite in Rust. I’m doing this because:
I have all binary commands working (except stat) and preliminary performance benchmarks look promising.
At the moment I’m struggling with choosing a license for the project. The problem I have is that I’d like all improvements made by anyone to be contributed back to the project, even if someone only uses it internally. GPLv2 seems good, but on the other hand, if someone makes improvements for their own internal purposes, they don’t need to release those improvements to the public. Frankly speaking, I’d be happy to “outsource” this problem to someone else.
I have a little cluster of Raspberry Pi 4s in a little tower thingy, and I’ve been working on setting up some self-hosting stuff. And, I have this old Mac G4 Cube case… so this weekend I decided to put them together.
I give you… Apple Pi: https://imgur.com/a/2TyR7Dg
Now I need another project.
https://mkws.sh/ A small, no bloat, minimalist static site generator using sh as a templating language.
I’m working on hanami.run, an email forwarding service. Every time I want to build a new side project, I get a domain, then set up email. After doing that 20 times, I said I’m going to solve this email thing once and for all. So I simply created a service to handle emails. All you have to do is point MX records to mx1.hanami.run and mx2.hanami.run
Right now I’m trying to build a team feature, so a team with multiple members can share and manage the same domain.
Hopefully finding the time to add some more features to my rusty daemon controller. It has been working for ~2 years, but I need to test some stuff like hot-reloading before throwing it at my server. (I dread testing stuff that can’t be automated easily; it’s just a pain, and if you don’t do it correctly you’ll end up with a false sense of security..)
And hopefully contributing a little bit more to one of the crates I started helping to maintain.
And if I still have time after this, maybe finally release a newer version of the app on F-Droid that I don’t use at all, but people seem to like based on GitHub interactions. Feels bad to just not do anything for a year. But the project got big enough that I’m in “should I rewrite this in Kotlin/Flutter” hell. No links on purpose to avoid any advertising.
But ultimately I was told that I’d have to do more for myself and stop doing so much PC work. Covid seems to have been the tipping point for my health regarding sitting in front of a PC all day.
I’m building a 1on1 solution for SSL and domain monitoring called https://siteguardian.dev. The main goal for me with this project, outside of the actual product, was to build and validate an Elixir/Phoenix stack for quickly developing micro-SaaS projects.
Been working on a webapp for managing my financial stuff. The data part is the tedious part because my bank only spews out the transaction details as a .xls file, in an unorganised manner with the account details and all at the top. I had to manually clean that file and convert it to CSV, which again has to be manually cleaned (removing unwanted commas and spaces) before it’s fed to a Postgres db. It’s a lot of work but I get to improve my SQL knowledge with it. It uses Go for the most part and Python for the data processing.
Really wished that the bank could have an API or a better transaction details system.
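The preamble-stripping and cell-cleaning steps can be scripted rather than done by hand. Here's a Python sketch against a made-up miniature of such an export (the real file's layout and header names will differ, so treat the sample and the "Date" marker as assumptions):

```python
import csv
import io

# Hypothetical miniature of a bank export: account details up top,
# then the real header row, then transactions with stray whitespace.
RAW = """Account Details,,
Name: J. Doe,,
,,
Date,Description,Amount
2021-03-01, COFFEE SHOP  ,-3.50
2021-03-02,SALARY , 2500.00
"""

def clean(raw, header_field="Date"):
    rows = list(csv.reader(io.StringIO(raw)))
    # Drop the account-details preamble: keep everything starting from
    # the row whose first field is the known header name.
    start = next(i for i, r in enumerate(rows)
                 if r and r[0].strip() == header_field)
    # Strip stray whitespace from every cell; drop fully-empty rows.
    return [[cell.strip() for cell in row]
            for row in rows[start:]
            if any(cell.strip() for cell in row)]
```

The cleaned rows can then be written back out with `csv.writer` before loading into Postgres (e.g. via `COPY`).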
Partially related, but have you also considered using Beancount for the second part of your task? You’ll still need to clean up the .xls, but the rest can be handled quite elegantly using Beancount.
Ignore if you’re aware of Beancount already.
I have been interested in dynamic malware analysis lately and found that Cuckoo Sandbox is too difficult to work with. It’s still on Python 2, with many dependencies, many configuration files, and software bugs that are not easily understood.
I’m attempting to solve those issues in my own project, we’ll see how it goes.
Hmm, there are two main things I’m poking around with at the moment.
Making scheduled backups of my various git repos. Indexing and applying text summarization to various Google docs that I’ve created over time.
My current project is an application that lets you upload audiobooks, and create a private podcast feed that you can download and stream from.
The basics works fairly well. There are a couple of features that I’m currently working on.
Trying to port GNAT, AdaCore’s Ada compiler, to pkgsrc.
It was born as a toy project for me (and a colleague) to scratch my own itch of getting rid of Google Analytics and getting to play with edge workers.
I really have to build a UI for these: https://adi.tilde.institute/cl/, https://adi.tilde.institute/cbl/, https://adi.tilde.institute/fl/.
I am experimenting with blazor server using sqlite. A friend uses a spreadsheet for doing beer recipes commercially, and I would rather like to convert parts of it to be easier to extend and improve.
I’m very slowly working away on a Scala SQL abstraction. I used Slick for several years at my old job, and was both incredibly grateful for the type safety it offered for a large codebase using a large schema, and very frustrated with some of its abstractions, especially around transactions, connections, and sessions. https://github.com/stephenjudkins/sqlitis
So I’m trying to replicate the good parts, while simply using the already battle-tested Doobie and the up-and-coming Skunk as backends to handle the parts that Slick didn’t do well. Additionally, it includes a SQL generator and parser that could be useful even without the typesafe schema mapping, and it should make for a much smaller, simpler, and more hackable codebase than Slick.
I was making decent progress, until I had twins and now I don’t have as much time! But it’s been fun to spend some of my free time improving it. If Scala sticks around as a moderately popular language, and I keep writing Scala, I hope someday this could be a valuable tool that others can enjoy.
I’m writing something that will sync my tweets over to Mastodon. A lot of the solutions out there didn’t work so I’m just writing something quick that’ll do it for me.
So far I have it working - and had the classic “post all recent tweets until I hit ctrl-c” happen. So that was embarrassing, but it’s fun either way!
Still geeking out on a Rust IRC bot with a built-in web server to help make stuff searchable. The rewrite to async/await and the latest Tokio was moderately successful. I bought into a server-side templating library that I wanna get rid of. Maybe this will be my “excuse” to learn Vue.js?
I can definitely recommend vuejs, but if you’re already starting a new frontend, I can also recommend svelte. I’ve done one side project in each of them now ;)
I am working on a yet-nameless open-source RPG in Zelda style just for the fun of it, together with a handful of friends. Currently in the planning phase; if you are interested, drop me a PM or an e-mail.
I haven’t written much code, but I’ve been scaffolding out a browser-based decentralized data-stream idea. The concept would function like BitTorrent but for streaming data on top of WebRTC. There’d be no storage, so I believe the only thing required is fast routing and packet signing.
The idea is almost exclusively motivated by not wanting to spin up servers every time I build a silly little interactive app.
Images of VMs and bootable live USB sticks running two different FAT32-capable versions of DOS. Partly because fiddling with DOS is quite fun in this era of vastly-complex multi-everything OSes, partly to see if it can be done, partly because I do have a sort of eventual notional product in mind.
I am using PC DOS 7.1 (not 7.01) and DR OpenDOS for this. I am not a big fan of FreeDOS – partly because it’s a little too different from real 1980s DOS for my preferences, and partly because I find the developer community rather unfriendly and unwelcoming. So, now that IBM offers PC DOS 7.1 as a free download, and Lineo made DR-DOS 7.01 FOSS before changing their minds and closing it again with DR-DOS 7.02, I have some alternatives to play with.
WIP links can be found here and in previous posts. https://liam-on-linux.livejournal.com/78306.html
working on some rust code for scrabble move generation. the hope is for it to be a library that people writing AI code can just use, rather than having to do all the standard board representation and move generation bits from scratch (as is currently the case).
this is all well-trodden ground from an algorithmic standpoint; i’m pretty sure everyone is using the move generator from this paper. i just want to do it in a way that is fast and reasonably easy to reuse in other projects (hence rust). step 2 will be to define some protocols and networking code to let AIs play each other with minimal extra code.
I’m working on a task management/small data federation/scriptable CMS server for a program called
It’s still very early stages, and only gets a fraction of my attention on a given week, but it’s proving an interesting exercise, and has prompted some interesting Janet writing.
I’m working on nachomemes: a declarative meme-generator made for Discord. There are a couple other declarative meme generators for Discord, but this bot aims to be dead simple to use and light + performant. It’s been an interesting exercise in many regards:
I’m working on https://littr.me
It’s a link aggregator and discussion platform similar to lobste.rs and old reddit, much more barebones in the features it offers though. The attraction point is that it is written from the bottom up to be a part of the ActivityPub based fediverse.
Its purpose is to independently host small to medium communities that focus on specific topics (I’ve done away with the concept of “subreddit”) and then allow their users to interact with other communities through the federation mechanism so there is somewhat a “networking effect” in play.
Most active side project that needs some love is a Hugo theme for self hosting recipes.
Other than that, I’ve got a backlog of tasks for my home server setup:
I’m mostly working on trying to create an alternative game client for an old classic MMORPG that was part of more than a decade of my life: Ragnarök Online. You can hardly call it a client though, since I’ve only just finished parsing GRF files and loading and rendering sprites. The work is directed more towards what will potentially be a GRF Explorer app in the more realistic and near future.
Also, during some spare time I still work on creating an app like GitHub and SourceHut, but that’s also still in its very early stages. Not sure it would really be interesting to anyone.
And last, I have been streaming again, and during those streams I worked on both projects, but mostly the first one.