What are you doing this week? Feel free to share!
Keep in mind it’s OK to do nothing at all, too.
Ha, love the Stormlight Archive :D! I was reading the books all summer, and then more of the other books he wrote.
Btw that is one ambitious plan for a week!
Yes, it was a bit slow at first but now I’ve really bought into the story and the world. I like how one of the main characters starts his journey not at zero but in a worse state than that. (Not spoiling anything, I hope).
Luckily the plans are not carved in stone, but they still lead to better time use than having no plans. At least for me.
TIL about Algernon. I’ve been looking for a simple way to render markdown for personal notes/blog posts, will definitely try it out and see what I could come up with.
Trying to finish WIP tasks in ugit. Still not ready for a v1.0 release.
I decided to participate in Ludum Dare 48 this weekend, hoping to finish in time for the Comp (that is, today at 3AM). I missed that deadline spectacularly so I switched to the Jam which ends tomorrow at 3AM, and I’m pretty sure I’ll make it; just a few more textures to draw and a little more coding and I should be done.
The game’s written in MonoGame and (really terrible, not very functional) F#. I’ll be publishing the source code and a Windows build (though you should be able to compile it for other platforms without issues) on GitHub in this repo later today, so check back later if you wanna look at the code (I wouldn’t) or try the game!
Other than that, I’m planning to play Nier Replicant √1.5, get an MVP for the project I’m doing at work done, and with any luck meet my friends IRL for the first time in months on the weekend, provided the weather’s not too bad.
P.S. if there’s any other crustaceans who made games for the jam I’d be happy to play them!
Getting ready to move to another province and working on my AlpineConf talk about systemd.
My Beagle V beta board arrived, so I’ll be playing around with it after work.
I got my hands on an AMD 5950X CPU this weekend, so I’m building a new computer with the parts I spontaneously bought to go with it. Visiting a Micro Center for the first time ever, I was a kid in a candy store.
Not exactly programming, but: fitting my new PinePhone motherboard so I can try to make the switch from Android as a daily driver. Will also be trying out convergence mode, with the USB-C hub that I ordered at the same time.
Fun fact: the hub and motherboard shipped to Australia from China, from an OSH project, faster than some big name Australian retailers ship domestically :)
I wish China would stop having concentration camps.
So do I. But - reading between the lines here - I’m not sure that a boycott of Chinese goods is the right approach either.
Which action do you think would help?
To be honest, I’m not sure. Looking at oppressive regimes that were overthrown, it was always either their own people who did it - e.g. South Africa, Russia, Czechoslovakia - or outright war, e.g. Germany and Japan.
(And even then things didn’t always work out so well. South Africa still has racial quotas, Russia has Putin, and Europe got Stalin as a result of WW2).
Coming from NZ with a strongly anti-Apartheid ex-South African mother, I was quite aware of a lot of the tactics applied to SA at the time. Trade boycotts, trade embargoes, sporting boycotts (or not; e.g. the highly controversial 1981 Springbok Tour). As far as I can tell, none of them worked; they only served to penalise the rank and file citizenry of both countries.
But this is very far from my area of specialisation. Perhaps I’m wrong, and the boycotts and embargoes had an impact that I’m not seeing.
I’ll probably spend a few hours in the evenings continuing to build the high-level prototyping environment for Mu, the computing stack bootstrapped up from machine code to type- and memory-safety. My goal is to make the environment convenient to use (because lower levels are not) but also encourage people to write tests, so that it’s tractable to rewrite projects in lower levels when they inevitably get too complex/slow. I think today’s scripting languages get it wrong by allowing projects to grow increasingly complex on shaky foundations. There should be a counter-force to nudge people when the tool isn’t a good fit for a task.
Done right, projects should go through a prototyping phase and then throw the prototype away. Our society fails to throw prototypes away on a huge scale, because successful prototypes don’t naturally start to include tests past some level of success.
Anyways, not sure why I’m ranting in this comment box. Right now the environment can draw lines and circles. I think I’ll support pretty-printing definitions next.
This looks really neat! I like the fact that you can see the underlying stuff generated by the lisp code and step through it; it seems really useful.
I’m tinkering around with a cryptographic offline IOU / credit system for tracking transactions without a centralized bank. Ideally it would allow people to continue trading resources either off-grid or in the event of a natural disaster.
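A minimal sketch of the signed-IOU idea. To stay stdlib-only it uses HMAC with a shared secret; a real decentralized system would want public-key signatures (e.g. Ed25519) so anyone can verify an IOU without holding the signer’s secret. All names here are made up for illustration:

```python
import hashlib
import hmac
import json
import time

def sign_iou(secret: bytes, debtor: str, creditor: str, amount: int, ts=None) -> dict:
    """Create an IOU record and attach an authentication tag to it."""
    iou = {
        "debtor": debtor,
        "creditor": creditor,
        "amount": amount,
        "ts": ts if ts is not None else int(time.time()),
    }
    # Canonical serialization so signer and verifier hash identical bytes.
    payload = json.dumps(iou, sort_keys=True).encode()
    iou["sig"] = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return iou

def verify_iou(secret: bytes, iou: dict) -> bool:
    """Recompute the tag over everything except the signature itself."""
    body = {k: v for k, v in iou.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, iou["sig"])
```

Tampering with any field (say, the amount) invalidates the tag, which is the property that lets parties exchange IOUs offline and settle up later.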
Working on a Minecraft bot in python. After a lot of work I’ve finally got it mining at a cobble generator (sort of).
Very cool! Do you recommend any resources for this?
Well, if you are interested in writing it in Python, I have been using pyCraft as a library/reference. It does not implement the entire Minecraft protocol though, so you’ll have to add to it/write your own.
More generally, https://wiki.vg is the place for Minecraft protocol information. There is no better resource besides working at Mojang AFAIK.
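For a taste of what wiki.vg documents: nearly every field in the Minecraft protocol leans on the VarInt encoding, 7 bits per byte with a continuation bit, and each packet is prefixed with its length as a VarInt. A sketch of that encoding in Python:

```python
def encode_varint(value: int) -> bytes:
    """Encode a signed 32-bit int as a Minecraft-protocol VarInt."""
    value &= 0xFFFFFFFF  # protocol VarInts are 32-bit two's complement
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # set continuation bit: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def decode_varint(data: bytes) -> tuple[int, int]:
    """Return (value, number of bytes consumed)."""
    result = shift = 0
    for i, byte in enumerate(data):
        result |= (byte & 0x7F) << shift
        if not byte & 0x80:  # continuation bit clear: last byte
            if result >= 1 << 31:  # reinterpret as signed 32-bit
                result -= 1 << 32
            return result, i + 1
        shift += 7
        if shift >= 35:
            raise ValueError("VarInt too long")
    raise ValueError("truncated VarInt")
```

Getting this right early saves a lot of confusion, since a bot that misreads one length prefix desynchronizes from every packet after it.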
I’m working on allowing guests to mark their attendance without signing up on eventlandr, i.e. without providing an email or phone number. Right now I’m thinking of just creating a secure random string and placing it in local storage, which should be a bit more resilient than just a cookie. This doesn’t work for noscript users, but I can’t think of any other way that could work.
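The server side of that idea is small; here’s a rough Python sketch (all names are hypothetical, not eventlandr’s actual code). The client would stash the issued token in localStorage and send it back on later visits:

```python
import secrets

# Hypothetical in-memory store: token -> guest's RSVP status.
ATTENDANCE: dict[str, str] = {}

def issue_guest_token() -> str:
    # 32 bytes (256 bits) from the CSPRNG, URL-safe so it can
    # travel in a request body or query string without escaping.
    return secrets.token_urlsafe(32)

def mark_attendance(token: str, status: str) -> None:
    # The token is the guest's only identity; no email or phone needed.
    ATTENDANCE[token] = status
```

`secrets` (rather than `random`) matters here: the token is effectively a credential, so it has to be unguessable.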
On my log search tool I might try to create some tests that benchmark the volume of data uploaded during ingest and downloaded during search. I have zero optimizations on either path, so it’ll be nice to have a baseline to view my progress.
Did you consider bookmarkable tokens in URLs? Might be more user friendly and cross device but naturally brings all the issues that secrets in URLs have. The W3C has a good finding on capability URLs https://w3ctag.github.io/capability-urls/
Hmm interesting, thanks for the link! I’m wondering if it actually is more friendly for the average user because it means they now have to keep track of that link. The local storage case means you don’t have cross device, but you also don’t have to keep track of it. I expect most people will just access the site from their phone.
After having finished the main development roadmap on my new Haskell static site generator, I’m now focused on finishing the documentation website (based on https://diataxis.fr/) and making the release announcement.
Sounds cool, Haskell dev here! The landing page seems cool and I like that it approaches documentation in such a systematized manner, but it also feels daunting with all the “theory” and a lot of text. Is there a way for a visitor of the web page to get a taste of it in a matter of seconds? Maybe a code example? Or some simple diagram depicting what it is about and how it could be used? Before a visitor is ready to commit more time to reading a bunch of text or watching a video, there should be an “appetizer” that is easy to swallow and gets them interested.
Btw, one thing I would love to see in a docs tool is a compile error when an internal link becomes invalid.
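That check doesn’t need much machinery. A rough, tool-agnostic sketch of the idea in Python, simplified for illustration (plain inline markdown links only; no anchors or reference-style links):

```python
import re
from pathlib import Path

# Matches [text](target), capturing the target up to any ')' or '#'.
LINK = re.compile(r"\[[^\]]*\]\(([^)#]+)\)")

def broken_internal_links(root: Path) -> list[tuple[Path, str]]:
    """Scan a directory of markdown pages for links to files that don't exist."""
    broken = []
    for page in root.rglob("*.md"):
        for target in LINK.findall(page.read_text()):
            if target.startswith(("http://", "https://", "mailto:")):
                continue  # external links are out of scope here
            if not (page.parent / target).exists():
                broken.append((page, target))
    return broken
```

A static site generator can run this during the build and fail (the “compile error”) when the list is non-empty.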
Ah uff, I realized only now that this is a theoretical framework! I thought it was a tool like Docusaurus that implements the theory described in the video. That makes my feedback above misplaced, sorry about that!
Yea, I was talking about two separate things: a) the Haskell static site generator, and b) creating a website / documentation for it.
Diataxis is what I roughly use, as a concept-model, for the latter.
The repository is here, if you’re curious. I plan to announce it here in lobste.rs once docs are done and the final blog post is ready.
As for “compile error when an internal link becomes invalid”, I implemented that just a couple days ago!
Ah cool, I get it now! Although Ema seems to be relatively general -> not just for documentation? So more like Gatsby than Docusaurus, right?
internal links -> ha, super cool!
Yes, it is more like Gatsby (but with the type-safety of Haskell) … although it was SvelteKit (its hot reload feature) that inspired me in part to create it.
I use Ema to render my org-mode daily notes: https://github.com/srid/orgself - and I plan to use it to create a RSS reader app: https://github.com/srid/ema/issues/10
Gonna start writing a thing that reads your e-mail inbox and turns Patreon e-mail notifications into a bunch of RSS feeds, one per creator.
Doing it via e-mail because apparently Patreon’s API isn’t actively developed and sucks, and I want compatibility with other things like SubscribeStar that don’t offer it as an API period.
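The grouping step at the heart of that is simple enough to sketch. This assumes the creator’s name can be pulled from the notification (here faked as an `X-Creator` header, which is purely hypothetical; real Patreon mail would need parsing of the From/Subject/body):

```python
from collections import defaultdict
from email.message import EmailMessage
from xml.sax.saxutils import escape

def feeds_from_messages(messages: list[EmailMessage]) -> dict[str, str]:
    """Bucket notification emails by creator, render one minimal RSS feed each."""
    by_creator = defaultdict(list)
    for msg in messages:
        creator = msg["X-Creator"] or "unknown"  # hypothetical header
        by_creator[creator].append(msg)

    feeds = {}
    for creator, msgs in by_creator.items():
        items = "".join(
            f"<item><title>{escape(msg['Subject'])}</title>"
            f"<description>{escape(msg.get_content().strip())}</description></item>"
            for msg in msgs
        )
        feeds[creator] = (
            '<?xml version="1.0"?><rss version="2.0"><channel>'
            f"<title>{escape(creator)}</title>{items}</channel></rss>"
        )
    return feeds
```

One feed per creator falls out of the grouping for free; the hard part in practice is reliably extracting the creator and post content from the email HTML.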
Amusingly I have been thinking about a project idea that does it the other way - i.e., summarize RSS feeds into emails, filtered to organized mailboxes. Then I can use one tool, like himalaya, to script and automate my reading workflows.
Trying to process around 300k events per second in Azure Stream Analytics, pinpoint where upstream delivery delays are coming from and waiting for responses to my 4 open Azure tickets 🙄
I haven’t worked with Azure since … 2011? … I think. I’d be keen to hear whether you have opinions on how it stacks up to alternatives like AWS.
Really miss AWS to be honest. The role based access control is really immature and rudimentary. Infrastructure as code deployments often have strange issues, but had plenty of them with AWS too (I’m using Pulumi). Seems like 90% of the users on Azure are just using VMs, VNets and SQL Server. A lot of the other services still feel like there’s a lack of production workloads on them (including Stream Analytics). I think the biggest thing I miss is AWS’s approach to serverless type of architectures. On AWS Lambda can be used to connect almost anything together and is pretty damn reliable. Azure Functions just seem like a thin veneer over virtual machines where you end up thinking far too much about how the VM beneath is working … so it’s no longer serverless at all.
Yeah that was my impression of Azure back then too … good for a lift & shift of Windows technologies (e.g. VMs and SQL Server), but for anything else, go with AWS.
Wasp is interesting in concept.
I’ve been interested in frameworks (in any a type-safe language) for writing full-stack apps. So far I’ve explored Haskell, F# and Rust, with the latter two in particular looking to be more promising.
Wasp is different from what you listed by being a DSL -> we are aiming for great ergonomics and ease of use while still utilizing the JS/TS ecosystem (and possibly even other languages like Python or Go in the future).
At first I thought you were looking into Haskell for the server side, but I see now you are looking into a language you will use on both sides. What about GHCJS, or PureScript? As for the framework, IHP might be interesting?
My criterion is simple: use a type-safe FP’ish language on both backend and frontend.
That rules out JS, Python (not statically typed), and TS, Go (not fully type-safe). Haskell’s GHCJS satisfies it, with some downsides (noted in the linked post), as does .NET Blazor via F# (and Bolero). Rust too, via Wasm support (see yew.rs). PureScript relies on node.js, which I’d rather not use for backend. IHP is interesting on its own, but it doesn’t satisfy my criteria (it doesn’t use GHCJS for frontend AFAIK).
Rewriting my own mailing list manager with microservices. The monolith approach became impossible to manage.
I’ve been working on a library for creating AWS step functions directly from function code. So you have a property on your function which says what S3 buckets you read/write from, what DB permissions you need, as well as input/output shapes and fan out/in information. All that information is then taken to build a Step Function pipeline in CDK.
We have a fairly extensive and branching data pipeline, and it’s been difficult to manage the body of each function separately from its infrastructure/permissions. Putting them together also allows our tests to know what permissions were declared and ensure that each step has the permissions it needs, as well as throw a warning if you don’t use certain permissions.
We’ve already rolled out some basic functionality for this, specifically I/O shapes and bucket permissions, and it’s been working great. This week I’ll hopefully get DB permissions in place as well.
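A rough sketch of what declaring permissions on the function itself might look like (all names here are made up for illustration; the CDK build step that consumes the metadata is not shown):

```python
def step(reads=(), writes=(), db_permissions=()):
    """Decorator that attaches declared permissions to a pipeline step.

    A separate build step can later walk these functions, read
    `step_meta`, and emit the Step Function + IAM policies in CDK.
    """
    def wrap(fn):
        fn.step_meta = {
            "reads": list(reads),                    # S3 buckets read from
            "writes": list(writes),                  # S3 buckets written to
            "db_permissions": list(db_permissions),  # DB-level grants needed
        }
        return fn
    return wrap

@step(reads=["raw-events"], writes=["clean-events"])
def normalize(batch):
    # The function body stays plain code; the infra lives in the decorator.
    return [event for event in batch if event.get("valid")]
```

Because the declaration is attached to the function object, tests can compare `step_meta` against the permissions the body actually exercises, which is the warning mechanism described above.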