Finishing off my first paper, on how to evaluate and compare AI systems for generating mathematical conjectures.
It’s been far too long in the making. Partly due to imposter syndrome, but also because I’m loath to follow the IMHO low standards of the field (lack of meaningful statistics, no error bars, ambiguous data and little thought given to reproducibility).
The technical solution to this has been laborious, but straightforward: end-to-end automation, all the way from input data (verified by SHA256), through to the subsequent analysis, graphing and LaTeX rendering. Many thanks to Nix for making this rather painless :)
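For what it’s worth, the input-verification step can be as small as the following sketch (the filename and digest here are made up for illustration; the actual pipeline presumably wires this into Nix):

```python
import hashlib

def verify_input(path: str, expected_sha256: str) -> None:
    """Refuse to run the analysis if the input data has changed."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large data files don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    if h.hexdigest() != expected_sha256:
        raise ValueError(f"{path}: SHA256 mismatch, refusing to proceed")
```

Failing loudly at the start of the pipeline is what makes the rest of the end-to-end automation trustworthy.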
On the other hand, trying to “do the statistics right” seems to be a path to madness. I started off without much confidence in my statistical abilities, since I’ve only encountered rather basic statistics in my education and I know that it can be tricky. Since then I’ve done a whole bunch of research on the topic, spoken about it with academics, and even dropped in to a series of undergraduate lectures on research methods and experimental design. Unfortunately I now have enough knowledge and experience to confidently predict that I’ll probably mess something up, fail to take something into account and likely misinterpret a bunch of stuff!
I feel like statisticians must be in a constant state of existential crisis :P
work: turned in my notice, will be leaving the company at the end of December. I hope to spend the rest of my time on documentation and technical debt, but I’ve just been assigned a new feature, so… we’ll see. I’m doing the TLA+ workshop this Thursday.
home: been chatting with the developers of Alloy, who are cool people, and we’ve been dissecting some example specs. It’s a lot of fun and I’m going to hopefully have a couple of writeups soon.
Also, finalizing a big TLA+ project. Details soon. Foreshadowing.
Home: fixing the long tail of bugs introduced by compressing my terrain as BC5. I had a typo in my BC5 decoder -> you fall through the world randomly. I was generating the collision quadtree with the original terrain and not the BC5 terrain (BC5 is lossy!) -> you fall through the world randomly. Trees were placed on the original terrain and I also missed a .0f in my code so they were often intersecting with the ground. Once that was all done I pushed out a release, which was immediately followed by a second release with fixes for AMD GPUs and Windows UAC.
I think the wireframe view looks quite cool now. You can see the finely tessellated terrain below the camera and the coarse terrain in the distance, you can see the engine switch to 2 triangle tiles when they don’t overlap the terrain at all, and you can see the skirt extending to the horizon.
The only thing missing now is the seams. I keep trying to convince myself they’ll be easy and then when I start working on them it’s just so obviously not true. In particular the worst case looks like this. Maybe the 2 triangle tiles are stupid? It does only save like 150k triangles at best. Maybe they should have full res borders?
Work: I asked if I can take my hackathon project and try to sell it outside the company. I have no idea if it’s something people would actually spend money on (a faster LZ4) but I guess it would be nice to go through the whole marketing/sales pipe by myself, even for just a single sale.
Workwise: focusing on adding CI to some of our other workflows and doing some code cleanup by replacing ad hoc caching with a more systematic approach.
Otherwise: I am going to try to strip ME out of an old x61s I have; if that succeeds I may try to install Middleton’s BIOS on it. It’s too bad Coreboot doesn’t work on x61s, because they really are great machines.
Interesting that the x61s doesn’t work, since I’m currently using LibreBoot on an x60s from Minifree (née GLUGLUG).
TBH I don’t completely grok Libre/CoreBoot: it’s booting into GRUB 2, but I haven’t worked out how to alter the config; the documentation says it can be overridden by a libreboot_grub.cfg file, but that doesn’t work (maybe my install is too old?). I’m too chicken to reflash it myself, in case I brick the machine :)
It’s annoying to manually select stuff in GRUB on every boot (it tries to boot Trisquel by default, which I swapped out for NixOS years ago; so I have to scan for and chainload another GRUB from /dev/sda1 instead), but at least I can access GRUB’s CLI when I want to. I had my first experience with EFI recently, when trying to boot Linux from a USB drive; after about 20 attempts to navigate the boot menu I gave up. I have no idea how any tech-savvy person can put up with it.
Sadder that I have to keep telling people it existed at all in all the big discussions on MEs and alternatives. The RISC workstations died hard, apparently. You can still get it with PPC Macs on eBay, alongside no backdoors in the hardware. They’re pretty usable if you’re doing native apps on Linux instead of the Web.
Ah, I forgot that Apple used OFW in the PPC era. I’ve never owned a Mac, although I briefly had an x86 Macbook as a work machine (I used it to run Linux in a VM; someone promptly asked if I’d swap with their Thinkpad ;) ).
Before getting my current x60s I used an OLPC XO-1, which ran OpenFirmware. I still use it semi-regularly due to the decent battery life and sunlight-readable screen :)
Yeah, it’s just another PPC box so it used Open Firmware like a lot of them. The SPARCs were the other ones doing it, IIRC. The laptop was pretty decent for $80. How was using the XO-1? I thought the OLPCs would’ve been a crippling user experience. Never even tried them.
The GTK UI is a bit clunky, and I instinctively want to reach for the underlying filesystem rather than the “journal” abstraction, but those aren’t a problem when I spend most of my time in the terminal :)
Some of the bundled applications are a bit gimmicky, the “view source” concept was never fully realised and the bundled Mozilla browser is far too heavyweight to run comfortably with 256MB RAM. Dillo and Netsurf run perfectly well though. One thing that’s very nice is the bundled Squeak/EToys system, although some of my projects were ambitious enough to hit a memory limit, such that trying to save to disk seems to hang the machine :(
A USB drive or SD card is almost required, for extra storage and swap (although the latter will hasten the death of any flash memory). The OS still gets occasional updates, which are eating away more and more of the 1GB onboard storage. As I understand it, most hardware-specific tweaks have been pushed upstream, although booting into a separate OS like Debian does have a noticeable impact on things like battery life.
The sunlight-readable screen makes it really great for reading from, especially when twisted into “tablet mode” (extra cursor and “game” keys are provided, which allow scrolling and navigation).
Thanks for the detailed write-up. Interesting. I also keep looking at Squeak recently, since (a) it comes from the LISP/Smalltalk machine tradition of doing everything in one consistent language that’s productive and memory-safe, and (b) it’s an easy-to-use language with quite a bit of tooling and apps for something we don’t see posted often. Given I’m semi-disabled in remembering new things, I did consider taking a break from my focus on static typing or DbC to see what the Smalltalk experience is like. Might bite me having dodged learning OOP all this time, though. I’m not sure if I’d get the full value out of it if I emulated structured, functional-ish programming in it using objects. What do you think?
Note: The other route I’m considering is How to Design Programs with Racket. Probably one I’ll do. This is more interim, where I’m deciding whether I should relearn Python for quick prototyping or try Squeak/Smalltalk.
Despite reading a lot about it (e.g. from Viewpoints Research Institute, among others) I’ve actually written very little “real” Smalltalk.
The EToys system (which I used on my XO) is a drag&drop programming system built in Squeak, much like the early versions of Scratch (I believe Scratch now uses JS), although it more closely follows OOP and IMHO it’s a ‘richer’ approach than Squeak’s.
If you’re going down the OOP rabbit hole for the first time, I think the most important thing to keep in mind is the difference between Smalltalk-style systems, which we might characterise as:
Everything is an object, including numbers, code, classes, etc
Method calls act like “message passing”, i.e. extremely late binding; we don’t know what will happen until the call is made; the behaviour may be dynamic (e.g. messageNotUnderstood)
Control flow constructs, like if/then/else, for, etc., are just method calls (e.g. on boolean or array objects). Users can make their own control flow in the same way.
And Java-style systems:
Classes, methods, etc. are mostly a static convenience for organising code; they’re not first-class values. Data relies heavily on a few special “primitive types” like booleans and ints, which are also not objects.
Method calls act like jump instructions; their behaviour is constrained statically, e.g. using annotations and final; dynamic behaviour like messageNotUnderstood is discouraged and, rather than being default, requires heavy wizardry to pull off.
Since code isn’t first class (Java recently gained lambdas, but they’re still a distinct language construct), and “primitive data” aren’t objects, lots of control flow is just structured programming. if/then/else, for, etc. are special syntactic keywords which compile down to certain instructions (e.g. branches). Adding new ones, or shadowing/overloading existing ones, requires hacking the compiler.
The merits of each can be argued at length, but the most important aspect is how this affects the style of code written in each language. For example, there might be “OO practices” which build up elaborate design patterns, towers of reflection, inversions of control, etc. which are actually just workarounds for one style of language, and a different language might just e.g. pass in a continuation.
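To make the contrast above concrete, here’s a tiny sketch (in Python, which sits somewhere between the two styles; all names are made up for illustration): a doesNotUnderstand-style fallback for unknown messages, and control flow as an ordinary method call rather than a keyword.

```python
class Sloppy:
    """Smalltalk-ish: unknown messages hit a catch-all hook,
    much like messageNotUnderstood."""
    def greet(self):
        return "hello"
    def __getattr__(self, name):
        # Only called when normal lookup fails -- the late-bound fallback.
        return lambda *args: f"no method {name!r}, args={args}"

class MyBool:
    """Control flow as a method call, like Smalltalk's ifTrue:ifFalse:."""
    def __init__(self, value):
        self.value = value
    def if_true_else(self, then_thunk, else_thunk):
        return then_thunk() if self.value else else_thunk()

obj = Sloppy()
print(obj.greet())     # found normally
print(obj.fly(1, 2))   # handled by the fallback, not a compile error
print(MyBool(3 > 2).if_true_else(lambda: "yes", lambda: "no"))
```

In a Java-style language the `obj.fly(1, 2)` call simply wouldn’t compile, and `if` could not be replaced by a user-defined method; that difference is what drives the divergent “OO practices” mentioned above.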
Other nuances include e.g. classes (Smalltalk, Newspeak) versus prototypes (Self); reflection vs introspection; etc.
I’ve actually been playing quite a bit with Racket recently; it’s a nice system, if a little slow. It’s a shame that the PLaneT packaging system has recently been replaced by raco; it forces code to depend on ambient, OS-controlled, shell-scripted environments :(
Whilst I’ve written and used plenty of macros, I’ve not yet delved into call/cc or defining my own languages :)
Great writeup. Thanks! Far as Racket, you have to get on creating DSL’s to appreciate real power of the language per what its users tell me. Maybe do some HTML-like web programming, state machine stuff like recent Haskell article, or a low-level one that extracts to C or C++ similar to Ivory language.
I’m on site with a client this week, but more in a project management/leadership role. Delivery worker bee is more of my comfort zone, and I still think it’s a bit funny they trust me to run engagements.
Interesting work though! Penetration test of a (running) metro train system, including train/signal/power control and supervision. It is the culmination of over a year of off and on engagement with this client.
I’m working on a little Cocoa/Swift app in my spare time, coming from mostly web and server dev. It’s a simple speedrunning timer app, where a run can be split up into named ‘splits’, and some history is kept.
It feels a lot like unlearning a decade of techniques learned as a web dev: declarative ui, state management, etc. My first attempts were to try and fit that in Cocoa, and looking around for tools that may help. But macOS is a barren wasteland, with everyone focusing on iOS apparently.
So I’m trying to learn it more or less properly and the hard way. I’m not using Interface Builder, because I find it helps to learn how things actually work. (And xibs seem more a convenience any way.)
I’m still figuring out structure, splitting up classes that were implementing too many protocols, etc. Mostly have a document-based app up with working models and views, but need to start hooking up behaviour.
There’s some of that declarative reactiveness alive within the Swift community, in the ReactiveCocoa and RxSwift communities. Each time I’ve tried to get into ReactiveCocoa (I’ve tried for each major version number) the lack of beginner documentation does me in. React has nailed this with a quick example app that introduces all the major concepts; I’m not sure why this doesn’t exist in ReactiveCocoa.
You can get pretty far with code driven UIs, but there’s definitely a large segment of developers that swear by Interface Builder and Storyboards. I’ve never been able to get into them myself.
ReactiveSwift seems equal parts awesome and daunting. I think it’d be very interesting to take a deep dive, but not sure if I’ll ever take the time. :)
I ended up working on my hardware projects. I put together the Monarch, and started working on a Game Boy art project (shameless plug).
I’ve got a couple of tasks this week:
Replacing my Mikrotik router with a Ubiquiti Security Gateway. I’ve been unable to convince the Mikrotik developers that they have a bug in their IPv6 Prefix Delegation support that prevents me from getting a v6 pool from my ISP.
I ordered a Jarvis adjustable frame to replace my IKEA hacked “desk”. I’ll continue to use the butcher block desk top that I have from the IKEA desk, since I’ve already got it, and it’s pretty awesome. This should arrive Thursday.
I need to replace all the capacitors on my Game Boy, as mentioned in the aforementioned blog post.
I need to update my Chrome extension to use newer APIs, since some I’m using are deprecated. I’ll probably use this opportunity to finally fully support Firefox Quantum and set up CI.
Probably won’t happen this week, but I’d like to get Joyent’s Triton running in KVM so I can see if I can’t shim the pieces needed to run Kubernetes natively, since I think the environment has much of what’s needed (with Crossbow for networking and Manta for storage). The official guides for Kubernetes on Triton are just running it in KVM.
The shell script linker/compiler took a little detour on a “hey, this crazy idea might work better” experiment. That was wound back and it’s pretty close to done now. Trialling it on a handful of internal tools to make sure it covers the functionality of the two tools it replaces and doesn’t introduce any regressions.
CD for the client is coming along nicely, will be utilising the newfound deployment freedom to get a bunch of features & fixes out shortly.
And on the car front, it turns out we need a new engine. Just heard today the mechanics are looking at a shipment of refurbished ones that came in from Japan today, so hopefully just a few more days and it’ll be back.
I have thought about this problem for the Oil shell. One problem is that you can’t statically determine which tools and scripts another script uses (at least not 100% correctly). There are probably some practical ways around that though.
It’s pretty simple in the grand scheme of things - it will replace relative source (dot) statements with either an absolute reference according to the rules passed (e.g. --link ./=/usr/share/foo/) or with the contents of the target file (with recursive processing).
As a bonus it will do m4-style word replacements (--define LIB_DIR=/usr/share/bar)
It’s written in shell (aiming for POSIX-compatible, but with allowed known-safe exceptions for in-use /bin/sh implementations, like local variables) and is essentially a bunch of argument processing that generates a series of sed rules, and a bit more shell to post-process the resulting stream.
You’re right, it’s not foolproof - my earlier solution relied on sh -v (and relied on all includes being outside of any functions).
This is a static approach (it’s just recursive pattern matching) and, for my purposes, it works because it removes the need for the fancy tricks to make relative includes work - you have a real relative include in the source script and then build one with either the inclusion inline or as an absolute reference - so it doesn’t need to understand "${0%/*}" or "$(cd $(dirname $0) && pwd -P)" type constructs.
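For the curious, the rewrite rule can be sketched like this (the real tool is shell + sed; the rule syntax below is my reading of the --link example, so treat it as illustrative):

```python
import re

def link_sources(script_text: str, rules: dict[str, str]) -> str:
    """Rewrite relative `.`/`source` statements to absolute references.

    E.g. rules={"./": "/usr/share/foo/"} turns `. ./lib.sh` into
    `. /usr/share/foo/lib.sh`. Purely textual, like the sed-based original;
    inlining and recursion are left out of this sketch.
    """
    out = []
    for line in script_text.splitlines():
        m = re.match(r'^(\s*)(\.|source)\s+(\S+)(.*)$', line)
        if m:
            indent, dot, path, rest = m.groups()
            for prefix, target in rules.items():
                if path.startswith(prefix):
                    path = target + path[len(prefix):]
                    break
            line = f"{indent}{dot} {path}{rest}"
        out.append(line)
    return "\n".join(out)
```

Like the shell version, this happily ignores anything that doesn’t look like a source statement, which is exactly why it can’t be 100% correct on adversarial scripts.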
OK I think it’s like zipapp then – it collects a bunch of shell source files into a single file?
I forget how py2exe works, but it might only do that. It doesn’t actually compile Python to native code AFAIK – it just puts in a single file with the Python interpreter.
Since your program is written in shell, I’m curious if OSH will run it? I just released a new version:
One “carrot” for trying it is that you will get better syntax checking with osh -n foo.sh. It would be interesting to combine the functionality – osh -n --recursive foo.sh could check the syntax of foo.sh and also every module sourced by foo.sh. That would involve basically the same module-walking algorithm that you implemented, as far as I understand.
Does your program use sed or something else? I’d be curious to see it. Technically I think you could do a better job with the OSH parser as a library, but I haven’t exposed it as a library yet.
@work: getting our Kafka ducks in a row before the real traffic surges start later this week; getting back into Scala (ugh, but… ugh); taking over a bunch of team lead/management responsibilities before my official end-of-probation date, which strikes me as a positive.
@home: chasing that SST/Warner Bob Mould guitar tone; replacing the dead motherboard in my adjectival big computer; getting ready for a big True Thanksgiving dinner for 50 of our friends.
Hobby project that I hack on a little every day when everyone has gone to sleep: a simple space flight simulator. When I have version 0.0 out I hope to post it here. It doesn’t have a fancy tech stack (C++ and OpenGL), but in my head it’s cool for what I want to do (explore questions related to space flight within our solar system).
Set up a Mastodon instance and am using it. It’s neat, but the Anglophone side already has a culture defined for it, if you know what I mean. Also, the UX is sometimes clumsy as a result of federation not being aggressive enough, IME.
Also doodling around on a client for it, because I love NIH and none of the clients hit my AESTHETIC preferences.
What do you mean? Especially as it’s your instance.
It’s federated, you see, so you get everyone else (well, there’s UX issues there, and possible social, but that’s for a blog post I may [never] write) - recently, Socialist Hot Take (and comics) Twitter seems to have adopted it overwhelmingly, and if you’re not part of the hivemind there, it might be a bit hard. (I’m not, so I just focus on technical stuff.)
Then you have the GNU Social crowd (which seems to include a LOT of 4chan types) vs. the new Mastodon people… this article is a decent summary from what I’ve read.
Working on implementing the ideas in the D-Expressions: Lisp Power, Dylan Style paper.
The gist of it is that non-Lispy languages can benefit from macros by creating a skeleton syntax that is slightly above the level of lexer tokens, but below the level of an AST, in terms of meaning. I think the next step after this is to eval() the possibly-macro-expanded skeleton syntax and derive the real meaning from it. Very cool stuff.
Thanks for pointing this out; I wasn’t aware of the phrase “skeleton syntax”, but the idea is something I’ve been ranting about for a while now, especially in Haskell. It’s really useful to have a “trivially parseable” syntax, like s-expressions or JSON or even XML or whatever, even if everyone interacts with a custom, “human-readable” language. There’s no reason why, say, a docstring extractor should require every sub-expression in an entire file to parse correctly; or why things like shorthand in new language versions should break all existing tooling.
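As a toy illustration of the “trivially parseable” idea: a skeleton reader only needs to know which delimiters nest, not what anything means. This sketch (made up for illustration, with no error handling for unbalanced input) groups a flat token stream into nested lists that a macro expander could rewrite before a real parser ever runs:

```python
def read_skeleton(tokens):
    """Group tokens into nested 'skeleton' lists by matching delimiters
    only -- above lexer tokens, below an AST."""
    stack = [[]]
    opens = {"(", "[", "{"}
    closes = {")", "]", "}"}
    for tok in tokens:
        if tok in opens:
            stack.append([tok])        # start a new group
        elif tok in closes:
            group = stack.pop()
            group.append(tok)          # keep the delimiters for round-tripping
            stack[-1].append(group)
        else:
            stack[-1].append(tok)
    return stack[0]

# "f(x, {y})" as tokens:
print(read_skeleton(["f", "(", "x", ",", "{", "y", "}", ")"]))
# -> ['f', ['(', 'x', ',', ['{', 'y', '}'], ')']]
```

Note that a docstring extractor walking this structure never has to fully parse any sub-expression, which is exactly the property argued for above.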
Hm interesting, do you know how this scheme compares to what Julia and Elixir do? Those languages have both Lisp-like macro systems and non-Lisp syntax.
I know there are many different levels of power and strictness to macro systems, but it’s not an area I’m very familiar with.
I have a wiki page here with links to different languages: https://github.com/oilshell/oil/wiki/Metaprogramming
(Feel free to edit.)
The Dylan paper is from 1999, and at the time this was probably true:
Lisp is the only language family (cf [9], [10]) that has succeeded in providing integrated macro systems along with simple and powerful syntax manipulation tools like these.
But I think that’s no longer true? Maybe due to their work or due to the work of others?
I am not sure how those two languages do it. In Julia’s case, macros must start with the @ character and are close to function calls. In Elixir’s case, I think they more resemble function calls (which don’t need parentheses) with lazy arguments.
If I am understanding this paper, the proposed system is more powerful than these two languages.
What language are you implementing the macro system in?
Does it have anything in common with Template Haskell and scala.meta? I think those systems also add some type-safety constraints which Lisp-like systems don’t have.
I’m testing it out in Python right now. Final language will probably be Rust or Haskell.
Type safety would be really nice, especially for error messages. However, I need to get this working acceptably first :)
@home: Preparing to release version 0.12 of Dramatiq. I want to improve some parts of the docs and some of the supporting material. I may even do a little intro screencast.
EDIT: I ended up recording that screencast 🎉
Nice job on the recording! I have tried a few in the past and was never satisfied with the result. What recording software/hardware do you use?
Thanks! I used ScreenFlow to record the video and Moom to size the windows beforehand to 1280x720pt (2560x1440px), plus many, many takes. When exporting the video I chose to do it at 1440p to avoid scaling artefacts. For the audio I used my Apple earbuds in a fairly tall, echoey room, and it shows :D
I put together https://websocket.email/ using OpenBSD and Google Cloud to help people automate email-based integration tests and other things.
For work: I’m re-implementing some abandoned cart handling in the CMS we’re migrating to.
For me: I’m continuing work on siftrss, fleshing out my unit test suite to better cover edge cases concerning feeds with odd characters, encodings, namespaces, etc. Ahhh, the joys of working with XML for no money. :)
I used your project as inspo for some Rust code I hacked on while streaming.
Hey, it’s you! I saw that a while back when I happened to search for mentions of siftrss on Twitter. I’m glad I could lend some inspiration for your project!
I love how clean your code is. I don’t know much about Rust, but your code makes me want to give it a go. I’ve been meaning to get around to writing a serious project in a functional language.
Thank you very much!
My main thing (day job and personal projects) is Haskell, which may have something to do with the style of my Rust code. There’s a lot of idioms I ignore/wrap in Rust.
Case in point, I got irritated with the useless (IMO) information hiding in the rss crate, so I forked and patched it: https://github.com/bitemyapp/shiftrss/blob/master/Cargo.toml#L15
Didn’t get upstreamed; they special-cased a setter or whatever for the thing I was doing.
I use Rust because it’s a reasonably type-safe language with sweet spots (super perf-sensitive, no GC) that are complementary to Haskell’s (everything else).
Right on! I’ve poked at Haskell a couple times, but never used it to write anything of substance. Of course, my list of languages to try out grows faster than my list of projects… which is often longer than I have time for in the first place, haha. I’ve been meaning to try out Elm, as I wanted something new to play around with on the front end.
Rust sounds very interesting. Consider my interest piqued!
Working on a native Slack client for macOS, in Swift 4.
Preparing to interview for a compiler engineer position.
holy cow this needs to be a thing! I’d love to see how you get on.
It’s not quite ready for public beta and doesn’t have a proper website yet, but you can get notified when a release is ready, if you want. :) https://taut.netlify.com
Great, thanks! Signed up!
What kind of compiler you going to be working on if it’s not NDA’d or anything?
I did sign an NDA, unfortunately. It’s a very domain-specific thing.
All good. Least you might get to do some interesting work.
Still working on an in-browser code/markdown editor:
https://www.dropbox.com/s/ixsgpq3wn35dfmj/editor%201.mp4?dl=0
It individually places all lines of text using SVG, so I got to write all the cursor movement and text wrapping code from scratch.
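The wrapping half of that is essentially greedy word wrap; a minimal sketch (assuming a fixed-width `measure`; the actual editor presumably measures real glyph widths for its SVG layout):

```python
def wrap_line(text: str, max_width: int, measure=len) -> list[str]:
    """Greedy word wrap: pack words onto a line until the next word
    would exceed max_width, then start a new line. `measure` stands in
    for real glyph measurement (here: character count)."""
    lines, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}" if current else word
        # `not current` lets an over-long single word occupy its own line.
        if measure(candidate) <= max_width or not current:
            current = candidate
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines

print(wrap_line("all the cursor movement and text wrapping code", 16))
# -> ['all the cursor', 'movement and', 'text wrapping', 'code']
```

Cursor movement then reduces to mapping a (line, column) pair through the same wrapped layout, which is where most of the from-scratch work tends to hide.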
Home: fixing the long tail of bugs introduced by compressing my terrain as BC5. I had a typo in my BC5 decoder -> you fall through the world randomly. I was generating the collision quadtree with the original terrain and not the BC5 terrain (BC5 is lossy!) -> you fall through the world randomly. Trees were placed on the original terrain, and I also missed a `.0` in my code, so they were often intersecting with the ground. Once that was all done I pushed out a release, which was immediately followed by a second release with fixes for AMD GPUs and Windows UAC.

I think the wireframe view looks quite cool now. You can see the finely tessellated terrain below the camera and the coarse terrain in the distance, you can see the engine switch to 2-triangle tiles when they don't overlap the terrain at all, and you can see the skirt extending to the horizon.
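The collision-vs-visuals mismatch generalises: with any lossy codec, physics has to consume the round-tripped data, not the original. A toy Python illustration, with simple quantisation standing in for BC5 (assumed, purely for demonstration):

```python
def lossy_roundtrip(height, step=0.5):
    """Stand-in for a lossy codec like BC5: snap heights to a grid.
    (Real BC5 is block-compressed; quantisation just models the loss.)"""
    return round(height / step) * step

original = [1.23, 4.56, 7.89]
rendered = [lossy_roundtrip(h) for h in original]

# Wrong: build the collision quadtree from `original` while the GPU
# renders `rendered` -- the two surfaces disagree and the player can
# fall through the world where they diverge.
# Right: build collision from the same round-tripped data you render.
collision = rendered
```

The same reasoning applies to anything sampled from the terrain (tree placement, spawn points): it all has to read the decoded heights.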
The only thing missing now is the seams. I keep trying to convince myself they'll be easy, and then when I start working on them it's just so obviously not true. In particular, the worst case looks like this. Maybe the 2-triangle tiles are stupid? They only save like 150k triangles at best. Maybe they should have full-res borders?
Work: I asked if I can take my hackathon project and try to sell it outside the company. I have no idea if it’s something people would actually spend money on (a faster LZ4) but I guess it would be nice to go through the whole marketing/sales pipe by myself, even for just a single sale.
Workwise: focusing on adding CI to some of our other workflows and doing some code cleanup by replacing ad hoc caching with a more systematic approach.
Otherwise: I am going to try to strip the Intel ME out of an old x61s I have; if that succeeds, I may try to install Middleton's BIOS on it. It's too bad Coreboot doesn't work on the x61s, because they really are great machines.
Interesting that the x61s doesn’t work, since I’m currently using LibreBoot on an x60s from Minifree (née GLUGLUG).
TBH I don’t completely grok Libre/CoreBoot: it’s booting into GRUB 2, but I haven’t worked out how to alter the config; the documentation says it can be overridden by a `libreboot_grub.cfg` file, but that doesn’t work (maybe my install is too old?). I’m too chicken to reflash it myself, in case I brick the machine :)

It’s annoying to manually select stuff in GRUB on every boot (it tries to boot Trisquel by default, which I swapped out for NixOS years ago, so I have to scan for and chainload another GRUB from /dev/sda1 instead), but at least I can access GRUB’s CLI when I want to. I had my first experience with EFI recently, when trying to boot Linux from a USB drive; after about 20 attempts to navigate the boot menu I gave up. I have no idea how any tech-savvy person can put up with it.
Very sad to see OpenFirmware die off :(
“Very sad to see OpenFirmware die off :(”
Sadder that I have to keep telling people it existed at all in all the big discussions on MEs and alternatives. The RISC workstations died hard, apparently. You can still get it with PPC Macs on eBay, alongside no backdoors in the hardware. They’re pretty usable if you’re doing native apps on Linux instead of the Web.
Ah, I forgot that Apple used OFW in the PPC era. I’ve never owned a Mac, although I briefly had an x86 Macbook as a work machine (I used it to run Linux in a VM; someone promptly asked if I’d swap with their Thinkpad ;) ).
Before getting my current x60s I used an OLPC XO-1, which ran OpenFirmware. I still use it semi-regularly due to the decent battery life and sunlight-readable screen :)
Yeah, it’s just another PPC box, so it used Open Firmware like a lot of them. The SPARCs were the other ones doing it, IIRC. The laptop was pretty decent for $80. How was using the XO-1? I thought the OLPCs would’ve been a crippling user experience. Never even tried them.
The GTK UI is a bit clunky, and I instinctively want to reach for the underlying filesystem rather than the “journal” abstraction, but those aren’t a problem when I spend most of my time in the terminal :)
Some of the bundled applications are a bit gimmicky, the “view source” concept was never fully realised, and the bundled Mozilla browser is far too heavyweight to run comfortably with 256MB RAM. Dillo and NetSurf run perfectly well though. One thing that’s very nice is the bundled Squeak/EToys system, although some of my projects were ambitious enough to hit a memory limit, such that trying to save to disk seems to hang the machine :(
A USB drive or SD card is almost required, for extra storage and swap (although the latter will hasten the death of any flash memory). The OS still gets occasional updates, which are eating away more and more of the 1GB onboard storage. As I understand it, most hardware-specific tweaks have been pushed upstream, although booting into a separate OS like Debian does have a noticeable impact on things like battery life.
The sunlight-readable screen makes it really great for reading from, especially when twisted into “tablet mode” (extra cursor and “game” keys are provided, which allow scrolling and navigation).
Thanks for the detailed write-up. Interesting. I also keep looking at Squeak recently since (a) it comes from the LISP/Smalltalk machine tradition of doing everything in one consistent language that’s productive and memory-safe, and (b) it’s an easy-to-use language with quite a bit of tooling and apps for something we don’t see posted often. Given I’m semi-disabled in remembering new things, I did consider taking a break from my focus on static typing or DbC to see what the Smalltalk experience is like. Might bite me, having dodged learning OOP all this time, though. I’m not sure if I’d get the full value out of it if I emulated structured, functional-ish programming in it using objects. What do you think?
Note: the other route I’m considering is How to Design Programs (which uses Racket). Probably one I’ll do. This is more interim, while I decide whether I should relearn Python for quick prototyping or try Squeak/Smalltalk.
Despite reading a lot about it (e.g. from Viewpoints Research Institute, among others), I’ve actually written very little “real” Smalltalk.
The EToys system (which I used on my XO) is a drag&drop programming system built in Squeak, much like the early versions of Scratch (I believe Scratch now uses JS), although it more closely follows OOP and IMHO it’s a ‘richer’ approach than Scratch’s.
If you’re going down the OOP rabbit hole for the first time, I think the most important thing to keep in mind is the difference between Smalltalk-style systems, which we might characterise as:
- dynamic behaviour is the default (e.g. unknown messages trigger `messageNotUnderstood`)
- `if/then/else`, `for`, etc. are just method calls (e.g. on boolean or array objects); users can make their own control flow in the same way

And Java-style systems:

- exceptions and cleanup use dedicated keywords like `try`/`catch`/`finally`; dynamic behaviour like `messageNotUnderstood` is discouraged and, rather than being the default, requires heavy wizardry to pull off
- `if/then/else`, `for`, etc. are special syntactic keywords which compile down to certain instructions (e.g. branches); adding new ones, or shadowing/overloading existing ones, requires hacking the compiler

The merits of each can be argued at length, but the most important aspect is how this affects the style of code written in each language. For example, there might be “OO practices” which build up elaborate design patterns, towers of reflection, inversions of control, etc. which are actually just workarounds for one style of language, where a different language might just e.g. pass in a continuation.
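The “control flow as method calls” point can be mimicked in any language with first-class functions. Here is an illustrative Python sketch (not real Smalltalk; the class and method names are made up) of booleans as objects whose if/else is an ordinary message send:

```python
# Smalltalk-style booleans: if/else is a method call, not syntax.
# Names here are illustrative, loosely echoing ifTrue:ifFalse:.
class STrue:
    def if_true_if_false(self, then_block, else_block):
        return then_block()   # the True object runs the "then" block

class SFalse:
    def if_true_if_false(self, then_block, else_block):
        return else_block()   # the False object runs the "else" block

def st_less_than(a, b):
    """Comparison returns a boolean *object* that understands messages."""
    return STrue() if a < b else SFalse()

result = st_less_than(1, 2).if_true_if_false(lambda: 'smaller',
                                             lambda: 'not smaller')
```

Because the branching lives in ordinary methods, user code can define new “control structures” the same way, which is exactly the property the Java-style list above lacks without compiler changes.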
Other nuances include e.g. classes (Smalltalk, Newspeak) versus prototypes (Self); reflection vs introspection; etc.
I’ve actually been playing quite a bit with Racket recently; it’s a nice system, if a little slow. It’s a shame that the PLaneT packaging system has recently been replaced by raco; it forces code to depend on ambient, OS-controlled, shell-scripted environments :(
Whilst I’ve written and used plenty of macros, I’ve not yet delved into call/cc or defining my own languages :)
Great writeup. Thanks! As far as Racket goes, you have to get into creating DSLs to appreciate the real power of the language, per what its users tell me. Maybe do some HTML-like web programming, state-machine stuff like the recent Haskell article, or a low-level one that extracts to C or C++, similar to the Ivory language.
I’m on site with a client this week, but more in a project management/leadership role. Delivery worker bee is more my comfort zone, and I still think it’s a bit funny they trust me to run engagements.
Interesting work though! Penetration test of a (running) metro train system, including train/signal/power control and supervision. It is the culmination of over a year of off and on engagement with this client.
I’m working on a little Cocoa/Swift app in my spare time, coming from mostly web and server dev. It’s a simple speedrunning timer app, where a run can be split up into named ‘splits’, and some history is kept.
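As a rough picture of the domain (an illustrative Python sketch, not the actual Swift code; the names are made up): a run holds a list of named splits, and hitting “split” stamps the next unfilled one with the cumulative elapsed time.

```python
import time
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Split:
    name: str
    elapsed: Optional[float] = None   # cumulative seconds at this split

@dataclass
class Run:
    splits: List[Split]
    started_at: Optional[float] = None

    def start(self):
        self.started_at = time.monotonic()

    def split(self) -> Optional[Split]:
        """Stamp the next unfilled split with the cumulative time."""
        now = time.monotonic() - self.started_at
        for s in self.splits:
            if s.elapsed is None:
                s.elapsed = now
                return s
        return None   # run already finished
```

Storing cumulative rather than per-segment times makes it easy to compare against a previous run's history at each split.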
It feels a lot like unlearning a decade of techniques learned as a web dev: declarative ui, state management, etc. My first attempts were to try and fit that in Cocoa, and looking around for tools that may help. But macOS is a barren wasteland, with everyone focusing on iOS apparently.
So I’m trying to learn it more or less properly and the hard way. I’m not using Interface Builder, because I find it helps to learn how things actually work. (And xibs seem more a convenience any way.)
I’m still figuring out structure, splitting up classes that were implementing too many protocols, etc. I mostly have a document-based app up with working models and views, but need to start hooking up behaviour.
Some of that declarative reactiveness is alive within the Swift community, in the ReactiveCocoa and RxSwift communities. Each time I’ve tried to get into ReactiveCocoa (I’ve tried for each major version number), the lack of beginner documentation does me in. React has nailed this with a quick example app that introduces all the major concepts; I’m not sure why this doesn’t exist for ReactiveCocoa.
You can get pretty far with code driven UIs, but there’s definitely a large segment of developers that swear by Interface Builder and Storyboards. I’ve never been able to get into them myself.
ReactiveSwift seems equal parts awesome and daunting. I think it’d be very interesting to take a deep dive, but not sure if I’ll ever take the time. :)
Last week
I ended up working on my hardware projects. I put together the Monarch, and started working on a Game Boy art project (shameless plug).
I’ve got a couple of tasks this week:
Probably won’t happen this week, but I’d like to get Joyent’s Triton running in KVM so I can see if I can’t shim the pieces needed to run Kubernetes natively, since I think the environment has much of what’s needed (with Crossbow for networking and Manta for storage). The official guides for Kubernetes on Triton are just running it in KVM.
Reading “Writing an Interpreter in Go” https://interpreterbook.com/
Well to follow on from last week:
The shell script linker/compiler took a little detour on a “hey, this crazy idea might work better” experiment. That was wound back, and it’s pretty close to done now. I’m trialling it on a handful of internal tools to make sure it covers the functionality of the two tools it replaces and doesn’t introduce any regressions.
CD for the client is coming along nicely, will be utilising the newfound deployment freedom to get a bunch of features & fixes out shortly.
And on the car front, it turns out we need a new engine. I just heard today that the mechanics are looking at a shipment of refurbished ones that came in from Japan, so hopefully just a few more days and it’ll be back.
What does the shell script linker/compiler do exactly? Is it an app bundle like say py2exe or zipapp?
https://docs.python.org/3/library/zipapp.html
What language is it written in?
I have thought about this problem for the Oil shell. One problem is that you can’t statically determine which tools and scripts another script uses (at least not 100% correctly). There are probably some practical ways around that though.
No, sorry not that kind of compiler.
It’s pretty simple in the grand scheme of things - it will replace relative source (dot) statements with either an absolute reference according to the rules passed (e.g. `--link ./=/usr/share/foo/`) or with the contents of the target file (with recursive processing).

As a bonus, it will do m4-style word replacements (`--define LIB_DIR=/usr/share/bar`).
It’s written in shell (aiming for POSIX-compatible, but with allowed known-safe exceptions for in-use /bin/sh implementations, like local variables) and is essentially a bunch of argument processing that generates a series of `sed` rules, plus a bit more shell to post-process the resulting stream.

You’re right that it’s not foolproof - my earlier solution relied on `sh -v` (and relied on all includes being outside of any functions).

This is a static approach (it’s just recursive pattern matching) and, for my purposes, it works because it removes the need for the fancy tricks to make relative includes work - you have a real relative include in the source script and then build one with either the inclusion inline or as an absolute reference - so it doesn’t need to understand `"${0%/*}"` or `"$(cd $(dirname $0) && pwd -P)"`-type constructs.
Happy to discuss it more if you wish.
OK, I think it’s like `zipapp` then - it collects a bunch of shell source files into a single file?

I forget how py2exe works, but it might only do that. It doesn’t actually compile Python to native code AFAIK - it just puts it in a single file along with the Python interpreter.
Since your program is written in shell, I’m curious if OSH will run it? I just released a new version:
http://www.oilshell.org/blog/2017/11/10.html
One “carrot” for trying it is that you will get better syntax checking with `osh -n foo.sh`. It would be interesting to combine the functionality - `osh -n --recursive foo.sh` could check the syntax of `foo.sh` and also every module sourced by `foo.sh`. That would involve basically the same module-walking algorithm that you implemented, as far as I understand.

Does your program use sed or something else? I’d be curious to see it. Technically I think you could do a better job with the OSH parser as a library, but I haven’t exposed it as a library yet.
Maybe this is best taken off-thread - I’ll send you a message and we can go from there.
@work: getting our Kafka ducks in a row before the real traffic surges start later this week; getting back into Scala (ugh, but … ugh); taking over a bunch of team lead/management responsibilities before my official end-of-probation date, which strikes me as a positive.
@home: chasing that SST/Warner Bob Mould guitar tone; replacing the dead motherboard in my adjectival big computer; getting ready for a big True Thanksgiving dinner for 50 of our friends.
Got all the Kubernetes and Docker stuff set up for helmspoint - a tool to deploy already-trained machine learning models to the web.
This week, I’ll be
Hobby project that I hack on a little every day when everyone has gone to sleep: a simple space flight simulator. When I have version 0.0 out I hope to post it here. It doesn’t have a fancy tech stack (C++ and OpenGL), but in my head it’s cool for what I want to do (explore questions related to space flight within our solar system).