Nice way to fold Windows into my existing build systems, finally, without requiring a Windows machine sitting in the corner, being special ..
Someone had an evening's worth of time on their hands.
Good seed for some hot GCAN fiction, tho.
Now wait for some mad lad who will create an Apple ARM homelab using old iPhones with netboot. Actually, this sounds like a fun project. And with stuff like that, it could even be powered via PoE. Looking at the performance of Apple chips in phones and tablets, it could be really interesting.
Not gonna lie, I’ve got my Beowulf gloves on ..
performance of Apple chips
If only this was something way newer than an iPhone 7…
Every time there’s an SGI thread, I consider what life would be like if they’d made the titanium-clad Unix workstation/laptop, for $1000, instead of .. Apple.
That was a pivotal moment, and SGI failed to exploit its potential strength at that time.
As a hard-core SGI user in those days, it was astonishing to me that Apple, of all people, were making the cutest Unix laptop.
I’d love for the r/cyberdeck people to be getting all up in an SGI alternate-timeline re-imaginin’ ..
One difference between modular synths and FOSS is that they’re considerably more expensive than their closed/monolithic equivalents! In hardware, that flexibility comes at a cost. I’ve got a small rack with six or seven modules, but it probably cost around $1500 to put together. It makes some cool sounds but I find it less musically useful than the Modal Electronics Skulpt synth I got for $300.
People interested in trying out modular synthesis without spending a bundle on hardware should check out VCV Rack, a free-as-in-beer GUI program for Mac/Win/Linux that emulates a modular synth. You can download a ton of free modules, including ones built from the same free firmware that’s in popular modules, and there’s also a store selling more modules. It’s a lot of fun to mess around with, and it can do things the hardware can’t, like polyphony.
It’s also free-as-in-speech (licensed under GPL3 with an exception for commercial plugins).
Oops — I remembered it had changed licenses, but thought it went the other direction.
VCVRack runs very, very well on my Linux DAW, and is a regular delight in the studio.
Some simply amazing modules available, because of the ethos of its F/OSS community. Eurorack hardware developers even use it as a platform to test their ideas ..
Me and my (awesome) Open Pandora can’t wait to add a Pyra to our collection .. such a cool community, some amazing software for the platform, and most of all - no big corporate players involved.
2020: DaringFireball blogsplains \r \n ..
Those are the carriage return and newline characters, and the blog post is about enter vs. return key codes. Related, but they don’t actually line up.
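For concreteness, a quick Python illustration of the two characters (the key-code side of the story lives in the keyboard/input layer, not in the string itself):

```python
# \r (carriage return) and \n (line feed) are characters in the data stream.
# The Return and Enter *keys*, by contrast, are input events with their own
# key codes -- related to these characters, but they don't line up one-to-one.

CR = "\r"  # carriage return, ASCII 13
LF = "\n"  # line feed, ASCII 10

print(ord(CR), ord(LF))  # 13 10

# Windows text files end lines with the two-character CR+LF sequence;
# Unix uses a bare LF:
windows_line = "hello\r\n"
unix_line = windows_line.replace("\r\n", "\n")
print(repr(unix_line))  # 'hello\n'
```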
Indeed, Turbo Pascal was taught in my high school, and I vividly remember struggling with a thought about procedures: “who on earth would use these when you can write the code without them?” 😂
I generally feel that it was a very good introductory language for that time; maybe it still is? At the time, other schools in neighboring countries were doing high-school introductions to programming in C, which to this day makes me shake my head in horror lol
Still quite a nice and friendly way of doing a cross-platform application that is easy to install and use ..
My reaction exactly. When I started university, we were taught Pascal with Delphi 5 (not using any of the GUI features, just as an IDE for writing command-line apps in Object Pascal). Delphi 3 was a free download, but it had a bunch of serious bugs (I spent a whole day tracking down something where one of the slightly obscure string-manipulation functions was documented to take a pascal string, actually took a C string, and for some reason the type checker let you pass either). We had the Free Pascal Compiler installed in the computer society’s machines, so I started writing the code in their lab, or in vim over ssh when I wanted to keep working on it from my room. Free Pascal Compiler made that course a lot easier. I never used it for anything after that course. It’s probably hard to avoid hating the first programming language that you were forced to learn, after the half dozen or so that you learned for fun.
Well, actually, machine. At the time, the computer society had silver.sucs.org, which was a 200MHz (I think) Pentium (clone?) that was Internet-facing and did mail, talker, and web hosting for the society, and platinum.sucs.org, which was a 133MHz Pentium (I think with 32MB of RAM, maybe 64MB?) that ran remote X sessions for half a dozen SparcStation 2s, which we were using as dumb X servers. I’m still amazed that a 133MHz machine could run rich GUI sessions for so many users back then, when today a 1GHz single-user mobile phone feels painfully slow.
 That was probably when my vim addiction started. Any decade now I’ll manage to kick it!
I recently did the “Glass Jar of Lake Water” experiment, which is this but in actual reality .. just take a clean glass jar, go to the nearest body of fresh, still water, gloop up a blob of algae and plants and water, seal the jar - then watch it grow for a few weeks.
It has been a wonderful daily treat to observe life adjusting to the new environment - I have watched fleets of Hydra form on the glass surface, hundreds of daphnia and other water-fleas, and larvae galore. The hydra have stabilised after a few weeks and grown quite big - and the rowboat beetle has been making its home of oxygen bubbles in a garden of algae that it tends to, daily.
I don’t give it too much direct sunlight - this raises the temperature of the water and can kill everything - but rather keep it in indirect light, giving it an hour or so of real sunlight if things get too murky. Now, after 5 weeks, some sort of balance has been attained - every few days there will be more water fleas/daphnia, then the hydra will grow, and a day or so later the water beetle will swim around knocking everything about.
It’s a real treat to just look at a bit of pond water and see what is growing within. I encourage anyone with an interest in complex systems to try this experiment and see for yourself how nature will find its own equilibrium if you give it the right inputs …
Sounds like a nice experiment to do with my kids. Unfortunately, I don’t know anything about daphnia and stuff. Any suggestions on how to learn?
There are a few good Instagram accounts and YouTube channels that educate in a fun way. One I love is “Life in Jars”.
As maxbittker mentioned, the “Life In Jars” channel on YouTube is pretty neat - I learned my version of this experiment before the advent of the Internet, though ..
Basically you take a clean jar, with no residues - nice and shiny and clean - and you scoop up a small amount of water and algae. A rough guide: around 1/5th of the volume of the jar should have a bit of algae in it - don’t worry if you don’t get this exactly right, just don’t fill the entire jar, for example.
Then, seal the jar and put it in an environment where the temperature will be stable, but not dark - and also not in direct sunlight. If you put it in direct sunlight, you will literally cook the contents - the only time you expose it to sunlight is if the algae doesn’t look like it’s getting enough sun, in which case you give it 30-minute bursts in the sunshine, then back to shade/cover - in order to put more energy into the jar.
As far as identifying daphnia and other critters - check Youtube as mentioned, and also this is pretty handy:
EDIT: Posted elsewhere, but including here for the fun:
Early days of the experiment:
6 weeks later:
After looking around a little, I got these links for further investigation. Now I have a few keywords like “ecosphere” to search for.
This sounds like great fun! Can you show us a picture?
Here you go, early days of the experiment:
Definitely a fun thing to check on every day .. Let me know if you try the same experiment, it’s always fascinating to see what develops.
Thanks for uploading those. Really interesting.
That hydra is an odd looking thing!
Hydra are amazing creatures .. they can reassemble themselves if you blend them. ;) Their mouths seal shut after every meal and then tear open when food is caught. They poop from their mouths! Some have symbiotic algae living in their cells, which give them sugar from photosynthesis when there isn’t much else to eat. They can reproduce by budding new versions of themselves anywhere on their own bodies. Even a small clump of hydra cells can reorganise into a complete new hydra, and they never grow old - pretty resilient, near-invincible little life forms. They are very much one of my favourite creatures, alongside octopuses and tardigrades ..
Having read the Wikipedia page for Hydra, can confirm amazingly odd creatures!
Truly inspiring! I love the fact that they can re-assemble if you scramble their cells ..
Along similar lines, discover.dev, and joy of computing:
The latter is published by the same folks!
Question to comment readers: why would I want to do this?
Isn’t CLI programs’ power in the fact that they can be chained together by a shell?
Considering wasm is a target architecture everyone has their cannons pointed at, why is this special at all? Why are we now glorifying a new VM? Is it because it’s the “VM of the future” or a VM with less legal restrictions (and thus the VM of the future)?
I think everyone needs to really start thinking critically about why wasm will change anything. With LLVM IR, we can target anything. So why are we targeting a VM? Because it’s more open (which isn’t true, because the underlying hardware will not be wasm (oh, but what about wasm FPGAs or wasm-based CPUs?))?
I see we are in the times of “programming for the web operating system”, but I wonder what the consequences will be. I know a lot of good that can happen, but why is no one focusing on the bad?
Someone made a PR to run uni in some online REPL service; I didn’t really like the UI of that and it was slow, but figured I’d try to see if you can run this in the browser with wasm: turns out you can.
It’s just intended for playing around/previewing without downloading, nothing more.
This is really awesome .. Is there some way to make a websocket available?
Computer programming 40 years ago (when I started) was always dominated by two groups: the highly prosperous academic, or the dirt poor hacker kid.
The academic had experience on all the old metal. The hacker kid, only the new stuff. Or, depending on the hackers’ interests, often a bit of old stuff too. Both camps were often as competent as each other, but in different ways. The old guys could design software, then implement it and then write tests; the hackers mostly coded software, then fixed what was broken - or didn’t. Back then the more you hacked things, the more academic you actually became. Often, hackers wrote papers that academics would absorb with resistance.
Everyone had their favourite flavour of machine, language, and coding style. Your true value, then, depended on how well you could shove that flavour down everyone else’s throat. Managers didn’t care how you did it - just if you did it, and where it was running, and when the users could be given access to it, and so on.
The tooling was bonkers - there was always some new way to write code. A lot of time, you’d spend just writing tools, then hack up the project by gluing all the tools together.
Things haven’t changed much. In fact, they’ve stayed exactly the same. The only real ‘difference’ is that the cyclomatic complexity has gone up a few factors, and there’s way more RAM than there needs to be for most tasks. This is because the academics mostly win, and we get “good enough” bloated mediocrity in our operating environments, as always happens with academia, while industry keeps all the hackers fighting with each other for the latest/greatest shiny new toy that does - basically the same things that last year’s toys did, only ‘better’.
We had that problem back in the 70’s too. The computer industry is very, very decadent in this regard.
PRs are only a thing because devs can’t manage branches and personal workflows, and would rather lean on the tool to solve their social problems.
Look, if you work in a team that is well organized and communicating well within itself, you don’t have to fret about PRs. You can just let the devs have their own branches and, when necessary, have a merge party to produce a build for testing.
Alas, not all projects can attain this level of competence. PRs are there to allow community, even if folks can’t organise well enough to know each other and contribute to each other’s work in a positive flow.
For many projects, a rejected PR is just as valuable as an auto-merge. It gets people communicating.
One potential problem with giving each dev their own branch and merging all the branches at once to make a build is that the build contains a lot of changes, and when something inevitably goes wrong it isn’t always clear what exactly caused the break. Good communication and code quality can mitigate integration issues, but I don’t think they completely eliminate them. If the alternative is releasing a build with multiple PRs anyway, then this might not be a problem; but if your alternative is releasing a build with every single change, then it’s a distinct disadvantage.
Why would you want to have a merge party where you merge in a whole bunch of stuff at once rather than reviewing and merging smaller changes one at a time?
Maybe because your team is productive.
Because you’re pushing features forward, trust your fellow devs, and everyone is working well enough that it doesn’t matter - and it means that features can be tested in isolation. Plus, it’s very rewarding to get a branch merge done; you suddenly get a much bigger and better app for the next round of work ..
Not in the long run, and not if your code is in production. One of the goals of the review process is to get the others on the team acquainted with the changes, so they can support them in the future, when the author goes on vacation or leaves the company. Merge parties cram in way too much information for purposeful comprehension. Remember, coding is mainly a joint activity of solving business problems, and its product will require support and maintenance.
Our Merge parties include team review, so .. not really encountering the issues you mention.
How big are your change requests, and how long do these reviews last?
Weekly, takes a day for the team, and then we start the 3-day tests ..
But look, whatever - not everyone’s use case is the same. The point is, scale according to your needs, but don’t ignore the beauty of PRs as a mechanism when you need them. If you don’t need them, do something else that works.
You can just let the devs have their own branches, and when necessary, have a merge party to produce a build for testing.
Is that really a thing? I’ve never been on a team that did that, and it sounds like a mess (to put it lightly). I can’t imagine that it would scale well.
For sure. I do it with the 3 other devs in my office space. We just tell each other ‘hey, I’m working on branch’, then when the time is right, we all sit together and merge.
I mean, it’s probably the easiest flow ever.
If you don’t do this, I wonder why? I guess it’s communication.
I do it with the 3 other devs in my office space.
That makes sense: the “3” and “in my office space” are a particular set of constraints. If you’ve got a system worked on by more people, and distributed across locations, that doesn’t scale in quite the same way, I think.
Get back to me when you’ve tried it with more than 10 people on the team or a project with more than 500k lines of code…
Yeah, works just as fine at that scale too. Key thing is: devs communicating properly.
If I wrote an article called “Protobuffers Are Wrong”, its content would be mostly mutually exclusive with this one. That’s because the #1 problem with Protobuf is performance: the need to do a ton of dynamic allocations is baked into its API. That’s why I designed FlatBuffers, to fix that. Most of the rest of Protobuf is actually rather nice, so I retained most of it, though I made some improvements along the way, like better unions (which the article actually mentions).
Your comment, and also some remarks in the article, suggest to me that Protobuf was designed for Java and never lost that bias.
At the time Protobuf was designed, Google was mostly C++. It is not that unnatural to arrive at a design like Protobuf: 1) Start with the assumption that reading serialized data must involve an unpacking step into a secondary representation. 2) Make your serialized data tree-shaped in the general case, 3) allow arbitrary mutation in any order of the representation. From these 3 it follows that you get, even in C++: 4) the in-memory representation must be a dynamically allocated tree of objects. FlatBuffers questions 1) and 3) :)
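A toy sketch of the difference between the two models, using plain Python `struct` (this is not either library's actual wire format; the record layout here is invented for illustration):

```python
import struct

# Hypothetical 12-byte record: (id: uint32, x: float, y: float), little-endian.
buf = struct.pack("<Iff", 42, 1.5, -2.5)

# Protobuf-style: an explicit unpacking step builds a secondary, dynamically
# allocated representation before any field can be read.
class Point:
    def __init__(self, data):
        self.id, self.x, self.y = struct.unpack("<Iff", data)

p = Point(buf)      # allocation happens here, up front
print(p.id, p.x)    # 42 1.5

# FlatBuffers-style: keep the bytes as-is and decode each field in place,
# only when accessed -- no unpacking step, no object tree.
class PointView:
    __slots__ = ("data",)
    def __init__(self, data):
        self.data = data
    @property
    def id(self):
        return struct.unpack_from("<I", self.data, 0)[0]
    @property
    def x(self):
        return struct.unpack_from("<f", self.data, 4)[0]

v = PointView(buf)  # just wraps the existing buffer
print(v.id, v.x)    # 42 1.5
```

The view gives up nothing here because the layout is fixed-size; the real trick in FlatBuffers is keeping this in-place access while still supporting optional fields and evolution via vtables.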
To me it sounds like your issue is with whichever protobuf implementation you were playing with when you checked it out.
There are protobuf libs that will do the job fine without all the allocations. Are you aware there are other implementations, and that protobuf is by now a bit of a protocol in and of itself… ?
Link to these magical allocation-less Protobuf implementations?
At least internally to Google, Protobuf allocs are a huge cost, which they’ve so far been unable to eliminate. The best they can do is arenas. If it was easy to fix, they would have done it by now.
I can imagine ways in which you could read a Protobuf without allocations, but it would be a) completely incompatible with the current API, b) without O(1) or random access to data (unlike FlatBuffers), and c) unable to allow mutation. That would thus be entirely useless to most users of Protobuf.
I’m aware of nanopb.. using that without malloc is only possible in very limited situations, where you might as well have used an even simpler serialization method. It has some serious limitations and is.. slow. Compare that with FlatBuffers, which can be used with even less memory, is very fast, and can also be used with more complex datasets.
I use nanopb quite effectively, so none of your issues bother me in the slightest. Nevertheless, it demonstrates that it’s quite possible to use Protobufs without any of the original issues you claim make them unsuitable.
Protobufs are an attempt at a solution for a problem that must be solved at a much lower level.
The problem that Protocol Buffers attempt to solve is, in essence, serialization for remote procedure calls. We have been exceedingly awful at actually solving this problem as a group, and we’ve almost every time solved it at the wrong layer; the few times we haven’t solved it at the wrong layer, we’ve done so in a manner that is not easily interoperable. The problem isn’t (only) serialization; the problem is the concept not being pervasive enough.
The absolute golden goal is having function calls that feel native. It should not matter where the function is actually implemented. And that’s a concept we need to fundamentally rethink all of our tooling for, because it is useful in every context. You can have RPC in the form of IPC: Why bother serializing data manually if you can have a native-looking function call take care of all of it for you? That requires a reliable, sequential, datagram OS-level IPC primitive. But from there, you could technically scale this all the way up: Your OS already understands sockets and the network—there is no fundamental reason for it to be unable to understand function calls. Maybe you don’t want your kernel to serialize data, but then you could’ve had usermode libraries help along with that.
This allows you to take a piece of code, isolate it in its own module as-is and call into it from a foreign process (possibly over the network) without any changes on the calling sites other than RPC initialization for the new service. As far as I know, this has rarely been done right, though Erlang/OTP comes to mind as a very positive example. That’s the right model, building everything around the notion of RPC as native function calls, but we failed to do so in UNIX back in the day, so there is no longer an opportunity to get it into almost every OS easily by virtue of being the first one in an influential line of operating systems. Once you solve this, the wire format is just an implementation detail: Whether you serialize as XML (SOAP, yaaay…), CBOR, JSON, protobufs, flatbufs, msgpack, some format wrapping ASN.1, whatever it is that D-Bus does, or some abomination involving punch cards should be largely irrelevant and transparent to you in the first place. And we’ve largely figured out the primitives we need for that: Lists, text strings, byte strings, integers, floats.
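As a sketch of what “RPC that feels like a native call” means in practice: a proxy object turns attribute access into a message send, so the call site doesn’t care where the function lives. All names here (`LocalTransport`, `RpcProxy`) are invented for illustration; a real system would put a socket or an OS IPC primitive behind the same `send` interface:

```python
import json

class LocalTransport:
    """Stand-in for a socket/pipe: dispatches to a table of handlers."""
    def __init__(self, handlers):
        self.handlers = handlers

    def send(self, payload: bytes) -> bytes:
        # The wire format (JSON here) is an implementation detail,
        # invisible to the caller -- exactly the point being made above.
        msg = json.loads(payload)
        result = self.handlers[msg["method"]](*msg["args"])
        return json.dumps({"result": result}).encode()

class RpcProxy:
    """Turns attribute access into a remote call: remote.add(2, 3)."""
    def __init__(self, transport):
        self._transport = transport

    def __getattr__(self, method):
        def call(*args):
            request = json.dumps({"method": method, "args": list(args)}).encode()
            return json.loads(self._transport.send(request))["result"]
        return call

remote = RpcProxy(LocalTransport({"add": lambda a, b: a + b}))
print(remote.add(2, 3))  # looks like a native call; prints 5
```

What this sketch deliberately glosses over is the failure-mode problem: a real remote `add` can time out or fail partway, which is why systems like Erlang/OTP build the error handling into the model rather than hiding it.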
Trying to tack this kind of thing on after the fact will always be language-specific. We’ve missed our window of opportunity; I don’t think we’ll ever solve this problem in a satisfactory manner without a massive platform shift that occurs at the same time. Thanks for coming to my TED talk.
You might want to look into QNX, an operating system written in the 80s.
It should not matter where the function is actually implemented.
AHEM OSI MODEL ahem
I’ve been thinking along the same lines. I’m not really familiar with Erlang/OTP but I’ve taken inspiration from Smalltalk which supposedly influenced Erlang. As you say it must be an aspect of the operating system and it will necessitate a paradigm shift in human-computer interaction. I’m looking forward to it.
I’ve been finding myself thinking this way a lot recently, but I’ve also been considering a counterpoint: all software is fundamentally just moving data around and performing actions on it. Trying to abstract moving data and generalizing performing actions always just gets me back to “oops you’re designing a programming language again.”
Instead, I’ve started to try and view each piece of software that I use as a DSL for a specific kind of data movement and a specific kind of data manipulation. In some cases, this is really easy. For example, the jack audio framework is a message bus+library for realtime audio on linux, dbus does the message bus stuff for linux desktopy stuff, and my shell pipelines are a super crude data mover with fancy manipulation tools.
Rampant speculation: the lack of uniformity in IPC/RPC mechanisms boils down to engineering tradeoffs and failure modes. Jack can’t use the same mechanism that my shell does because jack is realtime. dbus shouldn’t use full-blown HTTP with SSL to send a 64 bit int to some other process. Failure modes are even more important, a local function call fails very differently from an RPC over a TCP socket fails very differently than an RPC over a UDP socket fails very differently than a multicast broadcast.
I feel like the abstractions and programming models we have/use leak those engineering tradeoffs into everything and everybody ends up rolling their own data movers and data manipulator DSLs. From my limited exposure, it seems like orgs that are used to solving certain kinds of problems end up building DSLs that meet their needs with the primitives that they want. You say those primitives are “lists, text strings, byte strings, integers, floats”, but I’d just call all of those (except maybe floats) “memory” which needs some interpretation layer/schema to make any sense of. Now we’re back into “oops I’m designing an object system” or “oops I’m coming up with rust traits again” because I’m trying to find a way to wrangle memory into some nice abstraction that is easily manipulable.
In conclusion, I keep finding myself saying things very similar to what you’ve written here, but when I’ve explored the idea I’ve always ended up reinventing all the tools we’ve already invented to solve the data movement and data manipulation problems that programs are meant to solve.
well, the fundamental problem imho is pretending that remote and local invocations are identical. when things work you might get away with it, but mostly they don’t. what quickly disabuses you of that notion is that some remote function calls have orders of magnitude higher turnaround time than local ones.
what does work is asynchronous message passing with state-machines, where failure modes need to be carefully reasoned about. moreover it is possible to build a synchronous system on top of async building blocks, but not so the other way around…
cap’n proto offers serialisation and RPC in a way that looks fairly good to me. Even does capability-based security. What do you think is missing? https://capnproto.org/rpc.html
Cap’n proto suffers from the same problem as Protobuffers in that it is not pervasive. As xorhash says, this mechanism must pervade the operating system and userspace such that there is no friction in utilizing it. I see it as similar to the way recent languages make it frictionless to utilize third-party libraries.
I use protobufs extensively in various projects, and for me they are just fine. I have none of the issues of the author of the article - I can put the libs anywhere/everywhere I want, and they solve lots of problems relating to transport of data between independent nodes.
Also, since they have enough metadata on board, they’re a pretty interesting way to derive a quick UI for an editor. So I use them not only as a transport layer, but as a description of what the user should see, in some cases.
Perhaps my view is too broad in scope beyond the horizon, but even though I can accomplish all of the above with something like JSON or XML, I still prefer the performance and ease of use of pbufs where I’m using them.
So, I think the argument is lost on me. Although, there are other ways to accomplish all of the above too, which I might learn about for my next project …
As a Developer, I consider the necessity of having a $PROJECT/platform tree one of the spices of life.
I mean, this whole argument is moot if you treat all the end-target user platforms as a responsibility, not a liability.
It is not unreasonable to be able to get most codebases, well-written and prepared for platform independence, ported to any of the things. The elephant in the room is that this is, of course, what F/OSS is all about: endless new platforms for the software to live on.
But the premise that devs shouldn’t ‘distribute’ their own software is so mind-bogglingly dumb that I can only assume it’s a troll.
It’s the platform, duh!
Putting Working Source on all the platforms is an ultimate developer goal .. it perpetuates the value of the software (it’s the only way software can be valuable, i.e. it’s running on something in front of a user).
Therefore, preparing for insertion into &lt;build/packaging-system-du-jour&gt; should be a no-brainer.
And is, btw, why IDE-dependence is so sad to see. Choose the languages, not the syntax highlighting, kiddos!
“Expert C Programming: Deep C Secrets”, Peter van der Linden
I have multiple copies because every time I loan it out, it’s gone for good.
The thing I like about Iosevka is that you can build it with different sets of ligatures, and I’ve configured my Emacs to use the version of Iosevka specific to the language in the buffer. That’s sort of cool. Of course, I’ve now officially spent more time setting Emacs up to write Haskell and OCaml in than I have actually writing Haskell or OCaml, but, you know, you gotta start somewhere.
I think you’re doing fine. I’ve spent more time reading about people setting up their Emacs than I ever used Emacs.
Hah! Nice. I can certainly empathise with spending more time configuring Emacs than actually using it to write things. Do you have your configuration published anywhere? I’d be interested in seeing the font set-up :)
In the interest of sharing, here’s mine: https://github.com/cmacrae/.emacs.d
Here’s some horrible elisp, assuming “Iosevka Haskell P” and “Iosevka ML P” are proportionally spaced versions of Iosevka with the appropriate ligatures defined:
(defun jfb/frame-is-displaying-?-buffer (frame mode)
  ;; this assumes that the first buffer in a frame's list of buffers is the visible one. Sigh.
  (eq mode (buffer-local-value 'major-mode (car (buffer-list frame)))))

(defun jfb/frames-that-? (predicate)
  (seq-filter predicate (visible-frame-list)))

(defun jfb/assign-font-to-frame (predicate-to-pick face-to-assign)
  (let ((frames (jfb/frames-that-? predicate-to-pick)))
    ;; nil = don't keep frame size; apply the font only to the matching frames
    (if frames (set-frame-font face-to-assign nil frames))))

(defun jfb/define-haskell-font ()
  (jfb/assign-font-to-frame (lambda (f) (jfb/frame-is-displaying-?-buffer f 'haskell-mode))
                            "Iosevka Haskell P:weight=ultra-light:slant=normal:width=normal:spacing=100:scalable=true"))

(defun jfb/define-ocaml-font ()
  (jfb/assign-font-to-frame (lambda (f) (jfb/frame-is-displaying-?-buffer f 'tuareg-mode))
                            "Iosevka ML P:weight=ultra-light:slant=normal:width=normal:spacing=100:scalable=true"))

(defun jfb/fixup-fonts ()
  (interactive)
  (jfb/define-haskell-font)
  (jfb/define-ocaml-font))

(global-set-key [f9] 'jfb/fixup-fonts)
I always try to teach Haskell using nano otherwise everyone is like “ooo what are you using?” and “can I get something like that but for jEdit/Sublime/Atom/VS?”
How are you handling ligatures? AFAIU emacs doesn’t render opentype ligatures?
On MacOS it does!
It doesn’t out of the box, no - though the new shaping engine might help on that front - but you can tell it to use different glyphs for arbitrary character combinations. There are apparently at least three different ways to tackle this: prettify-symbols-mode, a composition table, or font-lock. All of them, though, are specific to a single font, but there should be instructions for most of them nowadays.
Ligatures are some of the finest things known to man. Are you familiar with Chartwell? Are you doing that sort of thing in emacs?