Threads for nc

  1. 4

    This is a large reason I’ve been drawn more to Zig and Python than Rust in my personal graphics/audio/gamedev related projects. I like to prototype numerical algorithms quickly, and numpy is killer for that. When I want control over memory (which absolutely matters in certain prototypes), I’ll drop to Zig. It provides better guardrails than C or C++, gives me access to all my favorite C libs through @cImport, but is still much faster to prototype with than Rust. [1]

    Like the article mentions, if you’re writing an OS kernel (or filesystem, or database, or a cryptography library, or a mission-critical scheduler, or any number of other multithreaded + safety + performance sensitive code [2]), Rust is a great choice. In established organizations, Rust also has an edge since larger companies are more focused on risk-mitigation, have larger teams, and benefit from stronger SDLC constraints to keep code consistent when dozens of programmers get their grubby paws on everything. But I can definitely see how a small startup that needs a bunch of short-term wins to survive would struggle with Rust. I’m sure there are startups building products where safety and performance are main features, and I bet Rust would provide an edge in some of those cases as well. But for a SaaS CRUD app with a handful of developers, it probably wouldn’t rank high on my list of languages to use.

    [1]: I’d pick Rust even for my prototypes if I were working on multithreaded problems, but the safety guarantees of Zig are strong enough for the small, single-threaded programs I usually write. I’ve really enjoyed contributing a few small Rust patches to open source projects though, so I might also pick it if I wanted to make collaboration easy.

    [2]: Although it might even be tricky with some of these examples, since they could require a decent chunk of unsafe code, which is another sticking point for Rust.

    1. 1

      I know this may be a simple thing for many, but I wanted to share it just in case someone else struggles with a similar problem, and I believe they’ll have an easier time finding it from here than stumbling upon my blog :)

      1. 2

        FYI, here’s a video I found helpful for getting an intuition for quaternions: https://www.youtube.com/watch?v=FRD0PgsY3pU

      1. 1

        I’m going to be rating games for 32bitjam since I submitted a PS1-inspired action game for it built with Raylib/Python earlier this week. I’ll probably do a small writeup on my blog with lessons learned. Maybe I’ll clean up the music I wrote for the game and publish it on soundcloud too (even though it was rushed, I am starting to like aspects of it :P). I’m also planning on playing around with Zig 0.10 to see if I can set up web builds with Raylib for the next game jam I do.

        1. 1

          For people interested in building games using Zig/Raylib with web output, I put together a template: https://github.com/charles-l/zig-raylib-template

        1. 8

          As a satisfied git user, this seems fair. The tldr is: fossil uses substantially the same data structure as git but makes completely opposite choices because it’s optimized for smaller development teams.

          I’d love to hear from someone who has used both extensively and prefers fossil. What’s it like? What’s lost that git users rely on? What huge wins does fossil open up?

          1. 8

            I actually go so far as to just use Fossil for most of my websites. The wiki in markdown mode works great for my sites.

            What’s lost that git users rely on?

            The big thing is that git allows mutability; in fact, all of git is basically a big mutable ball of mud. Fossil is much more immutable. If you think your VCS should allow mutable history, Git is for you. If you think your commits are immutable history, you should take a serious look at Fossil. I’m not sure there is a “right” answer here; it just depends on your perspective. Personally I’m an immutable person, which does mean you get the occasional commit message that looks like: ‘oops, fix typo’. But you can be fairly confident Fossil’s history is what actually happened and not a cleaned-up version of reality.

            1. 11

              For what it’s worth, the commit graph in git is immutable. The only mutable things are the references. What’s different is that in git you can move the references however you please, with no restriction that they follow the edges of the commit graph.

              In fossil, you can only converge the timelines. In git, you can jump between them.
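              This is easy to see directly. Below is a minimal sketch (it assumes git is installed and uses a throwaway repo): the branch ref is yanked straight back to an earlier commit, yet the abandoned commit object remains, unchanged, in the object store.

```shell
set -e
# Throwaway repo so nothing real is touched.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name You

echo one > file
git add file
git commit -qm "first"
first=$(git rev-parse HEAD)

echo two > file
git commit -qam "second"
second=$(git rev-parse HEAD)

# Move the branch pointer straight back to the first commit,
# ignoring the edges of the commit graph entirely...
branch=$(git symbolic-ref --short HEAD)
git update-ref "refs/heads/$branch" "$first"

# ...while the "second" commit object is untouched in the object store.
git cat-file -t "$second"
```

              The final cat-file still reports a valid commit object, even though no ref points at it anymore; only the pointer moved.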

              1. 7

                A big realization for me was that the git graph itself is versioned in the reflog. Using git reflog to restore old references is hugely valuable and I’m way more confident using git because of it. But to be fair, those commits are only stored on my local machine, will eventually be gc-ed and won’t be pushed to a remote. Fossil would track them everywhere.
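                As a concrete sketch (assuming git and a throwaway repo), here is the reflog recovering a commit that a hard reset appeared to throw away:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name You

echo a > f; git add f; git commit -qm "keep me"
echo b > f; git commit -qam "experiment"
lost=$(git rev-parse HEAD)

# The branch no longer points at "experiment"...
git reset -q --hard HEAD~1

# ...but the reflog still records the previous position of HEAD,
# so the commit can be restored until it is garbage-collected.
git reset -q --hard "HEAD@{1}"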

              2. 6

                Could you elaborate a bit on the value of immutability to you? I fall into the mutability camp because I find the additional context is rarely worth it when it costs so much to untangle. I’d rather use the mutability to create untangled context and have that be the only artifact that remains. I’m not disagreeing that the immutability has value to you, I’m just seeking understanding since I experience things differently.

                1. 5

                  I generally have two diametrically opposed views on immutability:

                  • In other people’s repos, I want immutability with no exceptions. If I clone your repo, nothing you do should be able to modify the history that I think I have, so I can always find a common ancestor between your version and my fork and merge back or pull in new changes.
                  • In my own repo, I want the flexibility to change whatever I want, commit WIP things, move fixes back to the commit that introduced the bug, and so on.

                  I think GitHub’s branch protection rules are a good way of reconciling this. The main branch enforces immutable history, as do release branches. Ideally tags would also be immutable. Anything else is the wild west: if you work from those branches then don’t expect them to still exist upstream next time you pull and you may need to resolve conflicts later. I’d like a UI that made this distinction a lot more visible.

                  1. 2

                    This is definitely a reasonable perspective.

                    Question, what is the point of committing WIP stuff?

                    1. 3

                      Locally, so that I have an undo button that works across files, so once something is working and I want to clean it up, I can always see what I’ve changed and broken during cleanup.

                      Publicly, so that I can get feedback (from humans or CI), incorporate it, and then clean up the result so that, when it’s merged, everything in the history is expected to be working. This means that other people can bisect to find things that broke and not have to work out if a particular version is expectedly or unexpectedly broken. It also means that people merging don’t see conflicts in places where I made a change in a file, discovered I didn’t need it, reverted it, and they did a real change in that location.
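                      The local flow above can be sketched in a few commands (a minimal sketch, assuming git and a throwaway repo): checkpoint freely with WIP commits, then squash them into one presentable commit before anyone else has to read them.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name You

echo base > f; git add f; git commit -qm "base"

# Checkpoint freely while experimenting:
echo step1 > f; git commit -qam "WIP: rough attempt"
echo step2 > f; git commit -qam "WIP: fix tests"

# Collapse both WIP commits into a single clean commit on top of "base":
git reset -q --soft HEAD~2
git commit -qm "Implement the feature, tests passing"
```

                      The final history contains only "base" plus one clean commit, while the working tree keeps the latest state of the file.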

                      1. 1

                        Thanks!

                        For me and us @ $WORK, the undo button is either ZFS snapshots (exposed as ~/.snapshots) or $EDITOR’s undo functionality.

                        For human feedback, we have a shared user machine we work from or we use other more general purpose tools, like desktop screen sharing (typically tmux, or guacamole).

                        For CI feedback, since our CI/CD jobs are just nomad batch jobs, it’s just a nomad run project-dev.nomad command away.

                        I.e. we prefer general tools we have to use anyway to solve these problems, instead of specific tools.

                        1. 3

                          For me and us @ $WORK, the undo button is either ZFS snapshots (exposed as ~/.snapshots) or $EDITOR’s undo functionality.

                          That requires me to either:

                          • Have a per-source-tree ZFS dataset (possible with delegated administration, but a change from just having one for my home directory and one for my builds which doesn’t get backed up and has sync turned off)
                          • Track metadata externally about which of my snapshots corresponds to a checkpoint of which repo.

                          In contrast, git already does this via the same mechanism I use for other things. Vim has persistent undo, which is great for individual files, but when a change spans a dozen files, trying to use vim’s undo to go back to (and compare against) the state from the middle of yesterday afternoon that worked is hard.

                          For human feedback, we have a shared user machine we work from or we use other more general purpose tools, like desktop screen sharing (typically tmux, or guacamole).

                          That requires everyone you collaborate with to be in the same company as you (or it introduces some exciting security problems for your admin team to have to care about), and for your code review to be synchronous. The first is not true for me; the second would be problematic given that I work with people distributed across many time zones. Again, GitHub’s code review doesn’t have those problems.

                          For CI feedback, since our CI/CD jobs are just nomad batch jobs, it’s just a nomad run project-dev.nomad command away.

                          That’s fine if everyone running your CI has deploy permissions on all of the infrastructure where you do testing.

                          I.e. we prefer general tools we have to use anyway to solve these problems, instead of specific tools.

                          The tools that you use have a lot of constraints that would prevent me from using them in most of the places where I use git.

                          1. 1

                            Around CI/CD: for local testing with nomad, it can be as simple as downloading the nomad binary, running nomad agent -dev, then nomad run <blah.nomad>, and you can be off to the races, running CI locally.

                            We don’t do that because @ $WORK our developers are all in-house and sharing resources is a non-issue.

                            Just to be clear, I’m not trying to convert you, just for those following along at home.

                            Also, within Fossil’s commits, you can totally hide stuff from the timeline, similar to git rebase, using amend.

                            1. 1

                              Thanks for the exchange! It’s interesting seeing the different trade-offs on workflow.

                              For those following along, another way, totally within fossil, to do undo across large amounts of code change is to generate a sqlite patch file instead of a commit. It’s easy enough: fossil patch create <blah.patch>, and to undo: fossil patch apply <blah.patch>. The patch file will by default include all uncommitted changes in the repo.

                          2. 1

                            an undo button that works across files, so once something is working and I want to clean it up, I can always see what I’ve changed and broken during cleanup.

                            The staging area is underappreciated for this problem. Often when I hit a minor milestone (the tests pass!) I’ll toss everything into staged and then try to make it pretty in unstaged. With a good git UI it’s easy to look at the unstaged hunks in isolation and blow them away if I mess up. Good code gets promoted to the staging area and eventually I get a clean commit.

                            …and then I end up with lots of messy commits anyway to accommodate the “humans or CI” cases. :)
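                            The checkpoint-in-the-index flow looks something like this (a minimal sketch, assuming git and a throwaway repo):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name You

echo good > f; git add f; git commit -qm "base"

# Milestone reached (the tests pass!): checkpoint it in the index.
echo "good, tests pass" > f
git add f

# Keep cleaning up on top of the checkpoint...
echo "risky cleanup attempt" > f

# ...and if the cleanup goes wrong, discard only the unstaged mess,
# falling back to the staged "tests pass" state.
git checkout -- f
```

                            After the checkout, the file is back to the staged checkpoint, not all the way back to the last commit.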

                          3. 1

                            In short, for backing it up or for pushing out multiple commits to create a design sketch.

                            1. 1

                              The Fossil “Rebase Considered Harmful” document provides a lot of reasons for committing WIP: bisect works better, cherry-picks work better, backouts work better. Read the doc for more: https://www2.fossil-scm.org/home/doc/trunk/www/rebaseharm.md

                              Rebase is a hack to work around a weakness in git that doesn’t exist in fossil.

                              The Git documentation acknowledges this fact (in so many words) and justifies it by saying “rebasing makes for a cleaner history.” I read that sentence as a tacit admission that the Git history display capabilities are weak and need active assistance from the user to keep things manageable. Surely a better approach is to record the complete ancestry of every check-in but then fix the tool to show a “clean” history in those instances where a simplified display is desirable and edifying, but retain the option to show the real, complete, messy history for cases where detail and accuracy are more important.

                              So, another way of thinking about rebase is that it is a kind of merge that intentionally forgets some details in order to not overwhelm the weak history display mechanisms available in Git. Wouldn’t it be better, less error-prone, and easier on users to enhance the history display mechanisms in Git so that rebasing for a clean, linear history became unnecessary?
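                              Git does already have a limited version of this “report a clean view” idea, for merge-based workflows at least. A minimal sketch (assuming git and a throwaway repo): merge with --no-ff, then --first-parent hides the messy branch-internal commits while the full log keeps everything.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name You

echo base > f; git add f; git commit -qm "base"

# Messy work happens on a side branch:
git checkout -q -b feature
echo wip1 > f; git commit -qam "wip: flailing"
echo wip2 > f; git commit -qam "wip: flailing more"

# Merge back with an explicit merge commit:
git checkout -q -
git merge -q --no-ff -m "Merge feature X" feature

git log --format=%s --first-parent   # curated view: merge commit + base
git log --format=%s                  # complete view: includes both wip commits
```

                              Nothing was rewritten: the wip commits are still reachable, they are just hidden from the curated first-parent view.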

                          4. 5

                            Sometimes it’s policy/law. When software starts mucking about with physical things that can kill/maim/demolish lives, stuff has to be kept track of. Think airplane fly-by-wire systems, etc. Fossil is good for these sorts of things; git could be too, with lots of policy around using it. Some industries would never allow a git rebase, for any reason.

                            • The perspective of Fossil is: Think before you commit. It’s called a commit for a reason.
                            • The perspective of Git is, commit every nanosecond and maybe fix the history later.

                            Of course history being immutable can be annoying sometimes, but history is messy in every version of history you look at, except perhaps kindergarten level history books :P I’m not a kindergartener anymore, I can handle the real history.

                            For me at $WORK, it’s policy.

                            1. 6

                              Ah, a policy reason certainly makes sense. I work in FinTech, where policy and/or law has not quite caught up to software to burden us with a more deeply ingrained moral responsibility to assure accountability of the software we write.

                              • The perspective of Fossil is: Think before you commit. It’s called a commit for a reason.
                              • The perspective of Git is, commit every nanosecond and maybe fix the history later.

                              Of course history being immutable can be annoying sometimes, but history is messy in every version of history you look at, except perhaps kindergarten level history books :P I’m not a kindergartener anymore, I can handle the real history.

                              This is condescending. Human beings make mistakes and storing those mistakes in all cases isn’t always a valuable artifact. Imagine a text editor that disallows text deletions. “You should have thought harder before typing it.” Come on, dude.

                              1. 7

                                Imagine a text editor that disallows text deletions.

                                We call that Blockchain Notepad.

                                1. 4

                                  Thanks, I hate it!

                                2. 3

                                  I had a :P in there :)

                                  I apologize, but come on, it’s not like your comparison is remotely fair either :)

                                  Human beings make mistakes and storing those mistakes in all cases isn’t always a valuable artifact.

                                  I agree. Fossil(and git) both have ways to fix mistakes that are worth cleaning up. Git just has more of them. See “2.7 What you should have done vs. What you actually did” in the OP’s link.

                                  1. 1

                                    One thing I’m seeing about fossil that illuminates things a bit is that it looks like it might be possible to limit which sets of changes you see by querying the change sets. This seems useful, not to reproduce something git-like, but to limit the view to only the changes that are considered more valuable and final than anything more transient. If that’s the case, I can see the “mess” of fossil being less of a problem, though with the added cost of now needing to be comfortable with querying the changes.

                                3. 5

                                  I’d much rather have cleaned up history than try to bisect around half-finished commits.

                                  1. 3

                                    From a Fossil perspective, half-finished commits belong locally or in .patch files to move around (which in Fossil land are sqlite3 files, not diffs). They don’t belong in commits.

                                    To be clear, I agree with you: bisecting half-finished commits is terrible. Fossil just has a different perspective and workflow than Git when it comes to this stuff.

                                    1. 2

                                      I imagine that the way this would get handled in fossil land is people making local half-commits, then redrafting the changes cleanly on another branch and using that as the official commits to release.

                                    2. 3

                                      There’s a series of steps that any change takes as it passes through decreasingly mutable tiers of storage, so to speak:

                                      • typing moves things from the programmer’s brain to an editor buffer
                                      • saving moves things from an editor buffer to a file
                                      • committing moves things from a file to a (local) repo’s history
                                      • pushing moves things from a local repo to a (possibly) public one

                                      The question is at what level a given change becomes “permanent”. With git it’s really only when you’ve published your history, whereas it sounds like fossil’s approach doesn’t really allow the distinction between the last two and hence that happens on every (local) commit.

                                      You could move the point-of-no-undo even earlier and install editor hooks to auto-commit on every save, or save and commit on every keystroke, but I think most people would agree that that would produce an unintelligible useless mess of history, even if it is “a more accurate record of reality” – so even in the fossil approach you’re still looking at a somewhat massaged, curated view of the development history. I think git’s model just makes that curation easier, by allowing you to create “draft” commits and modify them later.

                                      1. 2

                                        Fossil’s perspective would be, once it’s committed it is immutable, but you can do reporting on it and make it spit out whatever you want. i.e. Fossil really is just a fancy UI and some tooling around a SQLite database. There is basically no end to what one can do when your entire code tree is living in a SQL database.

                                        i.e. You don’t change the history, you change the report of history to show the version of reality that is interesting to you today.

                                        Fossil even includes an API for it: https://www2.fossil-scm.org/home/doc/trunk/www/json-api/api-query.md. Not to mention the built-in querying available, for instance, in the timeline view.

                                      2. 3

                                        While I disagree with conclusion, I appreciate you taking the time to explain this way of looking at it. The legality angle seems reasonable, (and, ofc, if you have no choice, you have no choice) but digging further I have some questions for you….

                                        1. By this line of reasoning, why is the fossil commit the unit of “real history”? Why not every keystroke? I am not just being facetious. Indeed, why not screen record every editing session?
                                        2. Given that the fossil commit has been deemed the unit of history, doesn’t this just encourage everyone to big-batch their commits? Indeed, perhaps even use some other mechanism to save ephemeral work while I spend hours, or even days, waiting for my “official work” to be done so that I can create clean history?

                                        I’m not a kindergartener anymore, I can handle the real history.

                                        This strikes me as an almost Orwellian reversal, since I would say: “You (coworker) are not a kindergartner anymore. Don’t make me read your 50 garbage commits like ‘checkin’, ‘try it out’, ‘go back’, etc, when the amount of changes you have merits 3 clean commits. Have the basic professionalism to spend 5-10 minutes to organize and communicate clearly the work you have actually done to your current and future coworkers.” I am no more interested in this “true history” than I am interested in the 5 intermediate drafts of the email memo you just sent out.

                                        1. 2

                                          Don’t make me read your 50 garbage commits …

                                          It sounds like we are no longer discussing Fossil, but a way of using Git where you do not use rebase.

                                          Here’s what the Fossil document says:

                                          Git puts a lot of emphasis on maintaining a “clean” check-in history. Extraneous and experimental branches by individual developers often never make it into the main repository. Branches may be rebased before being pushed to make it appear as if development had been linear, or “squashed” to make it appear that multiple commits were made as a single commit. There are other history rewriting mechanisms in Git as well. Git strives to record what the development of a project should have looked like had there been no mistakes.

                                          Fossil, in contrast, puts more emphasis on recording exactly what happened, including all of the messy errors, dead-ends, experimental branches, and so forth. One might argue that this makes the history of a Fossil project “messy,” but another point of view is that this makes the history “accurate.” In actual practice, the superior reporting tools available in Fossil mean that this incidental mess is not a factor.

                                          Like Git, Fossil has an amend command for modifying prior commits, but unlike in Git, this works not by replacing data in the repository, but by adding a correction record to the repository that affects how later Fossil operations present the corrected data. The old information is still there in the repository, it is just overridden from the amendment point forward.

                                          My reading is that Fossil permits you to view a “clean” history of changes due to its “superior reporting tools” and the “correction records” added by the amend command. But unlike Git, the original commit history is still recorded if you need to see it.

                                          1. 1

                                            My reading is that Fossil permits you to view a “clean” history of changes due to its “superior reporting tools” and the “correction records” added by the amend command. But unlike Git, the original commit history is still recorded if you need to see it.

                                            Ok that is interesting… I had been assuming that they were dismissing the value of clean history, but it seems they are not, but instead solving the same problem but at a different level in the stack.

                                          2. 1

                                            By this line of reasoning, why is the fossil commit the unit of “real history”? Why not every keystroke? I am not just being facetious. Indeed, why not screen record every editing session?

                                            That’s what Teleport is for. Other tools obviously also do this.

                                            More generally, stuff in the commit tree will eventually make it to production and run against real data and possibly hurt people. The stuff that can hurt people needs to be tracked. The ephemeral stuff in between doesn’t much matter. If I was purposefully negligent in my code, no amount of ephemeral work would prove it, there would be some other mechanism in place to prove that (my emails to a co-conspirator maybe, recording me with that evil spy, etc).

                                            Given that the fossil commit has been deemed the unit of history, doesn’t this just encourage everyone to big-batch their commits? Indeed, perhaps even use some other mechanism to save ephemeral work while I spend hours, or even days, waiting for my “official work” to be done so that I can create clean history?

                                            Why do you need to commit ephemeral work? What is the point?

                                            Have the basic professionalism to spend 5-10 minutes to organize and communicate clearly the work you have actually done to your current and future coworkers.”

                                            LOL fair point :) But that goes back to the previous comments, what is the purpose of committing ephemeral work? From my perspective there are 2 general reasons:

                                            • Show some pointy haired boss you did something today
                                            • Share some code in progress with another person to help solve a problem, code review, etc.

                                            The 1st: no amount of code commits will solve this; it’s either trust me, or come sit next to me and watch me do stuff (or screen record, video record my office, etc.). If my boss doesn’t trust me to be useful to the organization, I’m at the wrong organization.

                                            The 2nd is easily solved in a myriad of ways, from desktop/screen sharing, to code collab tools, to sharing Fossil patch files around.

                                            I truly don’t understand the point of committing half-done work like Git proponents seem to think is an amazing idea. A commit needs to be USEFUL to be committed. Perhaps it’s part of a larger body of work; it’s very common to do that, but then it’s not ephemeral: you are doing a cleanup so you can then implement $FEATURE, and that cleanup can happily be its own commit.

                                            But committing every nanosecond or on every save is just idiotic from my point of view. If you want that sort of thing, just run against a versioned file system. You can do this with ZFS snapshots if you don’t want to run a versioned file system. Git is not a good backup tool.

                                            1. 4

                                              I think this is fundamentally a workflow difference.

                                              Proponents of git, myself included, use committing for many purposes, including these prominent ones:

                                              1. A way to save partially complete work so you don’t lose it, or can go back to that point of time in the midst of experimentation.
                                              2. A way to share something with a co-worker that will not be part of permanent history or ever merged.

                                              The 2nd, is easily solved in a myriad of ways, from desktop/screen sharing, to code collab tools to sharing Fossil patch files around.

                                              Yes, I suppose there are other ways to solve the sharing problem. But since we do everything in git and will have a PR in GitHub anyway, it is very convenient to just commit to share, rather than introduce a new mechanism for sharing.

                                              I truly don’t understand the point of committing half-done work like Git proponents seem to think is an amazing idea. A commit needs to be USEFUL to be committed.

                                              Sharing code to discuss and backing up milestones of incomplete, experimental work are both very useful to me.

                                              1. 1

                                                I think the disconnect is probably in what we consider “ephemeral.” You seem to think that we’re “idiotically [. . .] committing every nanosecond” (which, seriously, stop phrasing it like this because you’re being an asshole), but in most of my own use cases it’s simply a matter of wanting to preserve the current state of my work until I’ve reached a point where I’m ready to construct what I view as a salient description of the changes. In many cases this means making commits that roughly match the structure I’m after - a sort of design sketch - and maybe these don’t include all of the test changes yet, and I haven’t fully written out a commit message because I haven’t uncovered all the wrinkles that need ironing as I continue the refactor, and I find something later that makes more sense as a commit in the middle because it relates directly to that other block of work, and and and…

                                                An example: when I reach the end of the day, I may want to stash what I’m working on or - depending on the amount of work I’ve put in - I may want to push up a WIP commit so that if something happens on my workstation that I don’t lose that work (this is always a real concern for reasons I won’t go into). Maybe that WIP commit doesn’t have tests passing in it, and I and my team try to follow the practice of ensuring that every commit makes a green build, so I don’t want that to be the final version of the commit that eventually makes it into the main branch. The next day I come in, reset my WIP commit and add the test changes I was missing and now make the actual commit I want to eventually see pushed up to the main branch.

                                                I don’t know of anybody who thinks saving things in WIPs for untangling later is - as you say - “an amazing idea,” but it’s a natural extension of our existing workflow.

                                      3. 6

                                        I use both Fossil and Git at work, although we are almost done with moving all the Fossil repos to Git.

                                        Fossil is fine, but the immutability is kind of annoying in the long term. The lack of a rebase for local work is a drag.

                                        Its biggest problem is tooling. Nothing works with it. It doesn’t integrate with the CI system without a lot of effort, there’s no Bitbucket/Github-like system to use for PRs or code reviews, and it doesn’t integrate with the ticket system. Sure, it contains all those things, but they don’t meet the needs we (and most others, it seems) require.

                                        On a personal note, I dislike the clone/open workflow as I’d much rather have the database in the project directory similar to the .git directory. There are other little things I don’t like, but they are mostly because I’m used to Git, despite all its flaws.

                                        1. 3

                                          I would argue it’s because your perspective on Fossil is wrong when it comes to immutability. Fossil’s perspective is that when you commit, it’s literally a commitment - it’s done. Be careful and think about your commits. Practically the only thing we have noticed is occasionally we get ‘oops, fixed typo’ type commits.

                                          I agree about the clone/open workflow, but it’s that way for a reason. The perspective is: you clone locally once, and you open per branch/feature you want to mess with, so a cd is all you need to switch between branches. Once I figured that out, I didn’t mind the workflow that much. I just have a ~/fossil dir that keeps all my local open projects, and otherwise I mostly ignore that directory.

                                          I agree with the tooling problem, though I don’t think it’s quite as bad as you think. There is fossil patch for PR/code review workflows. The only special tooling fossil gives you here is the ability to copy and apply the patch to a remote SSH host. Perhaps that could be changed, but it allows you to develop your own workflow if you care about those sorts of things.

                                          I have a totally different perspective than the entire industry around CI/CD. CI/CD is just a specialization of running batch jobs. Since we have to run batch jobs anyway, we just use our existing batch job tooling for CI/CD. For us, that means our CI/CD integration is as simple as a commit hook that runs: nomad run <reponame.nomad>. After that, our normal Nomad tooling handles all of our CI/CD needs and allows anyone to start a CI/CD run. Since it’s all in the repo, there is no magic or special tooling for people to learn. If you have to learn how production works anyway for batch jobs, there is no sense in learning a different system too.
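
A minimal sketch of what such a commit hook could look like (the repo name and job file convention here are assumptions, and the actual nomad run call is commented out, since it needs a real Nomad cluster):

```shell
# Hypothetical post-commit hook: CI is just resubmitting the repo's own
# Nomad batch job, using the same tooling production batch jobs use.
REPO_NAME="myrepo"              # assumed; derive from the repo in practice
JOB_FILE="${REPO_NAME}.nomad"   # job spec kept in the repo itself

if [ -f "$JOB_FILE" ]; then
    echo "submitting CI batch job: $JOB_FILE"
    # nomad run "$JOB_FILE"     # the real submission
else
    echo "no $JOB_FILE found; skipping CI" >&2
fi
```

Because the job spec lives in the repo, anyone who can commit can also trigger or adjust CI without learning a separate system.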

                                          1. 2

                                            It’s not just about perspective. I’m firmly in the mutable history camp, because a lot of my work - the vast majority of it, really - is experimenting and research. It’s all about coming up with ideas, sharing them, seeking feedback, and iterating. Most of those ideas will get thrown out the window after a quick proof of concept. I see no point in having those proofs of concept be part of the history. Nor will I spend considerable time and effort documenting, writing tests and whatnot for what is a proof of concept that will get thrown out and rewritten either way, just to make the history usable. I’d rather just rearrange it once the particular branch is being finalised.

                                            Immutable history is great when you can afford it, but a huge hindrance when you can’t.

                                            With git, both can be accomplished with a little care: no force pushes to any branch. Done.

                                            What one does locally is irrelevant. Even with Fossil, you will likely have had local variations before you ended up committing it. The difference with git is that you can make local snapshots and rearrange them later, using the same tool. With Fossil, you would have to find some other way to store draft work which is not ready to become part of the immutable history.

                                            I mean, there’ve been many cases over my career where I was working on a thing that became a single commit in the end, for days, sometimes even weeks. I had cases where that single commit was a single line changed, not a huge amalgamation or anything like that. But it took a lot of iteration to get there. With git, I could commit my drafts, share them with others, and then rewrite them before submitting upstream. I made use of that history a lot. I rolled back, reverted, partially reverted, looked at things I tried before, and so on. With Fossil, I would have had to find a way to do all that without committing the drafts to the immutable history. It would have made no sense to commit them - they weren’t ready. Yet I still wanted to iterate, and I still wanted to easily share with colleagues.

                                            1. 3

                                              Clearly you and I are going to disagree, but I would argue that Fossil can handle your use-case just fine, albeit very differently than Git would handle it. You have clearly gotten very used to the Git workflow model, and that’s fine. That doesn’t mean the Fossil workflow model is wrong or bad or evil or anything; it’s just different from Git’s, because (I’d argue) it’s coming from a different perspective.

                                              Fossil does have ways to store draft work and ship it around, I mentioned two ways in the comment you are replying to, but you either didn’t see them or just chose to ignore them. fossil patch is actually pretty cool, as the patch file is just a sqlite3 file. Easy to ship/move/play with.

                                              1. 2

                                                I wasn’t saying the Fossil model is bad - it isn’t. It’s just not suitable for every scenario, and I have yet to see what benefit it would have over the mutable model for me. Just because it can handle the scenarios I want doesn’t mean it’s easy, convenient or straightforward to do it. Git can do immutable workflows too, and mutable ones as well - it just makes the latter a lot easier, while keeping the former possible if you put in the work.

                                                I did not see your comments about fossil patch before - I skipped over that part of your comment, sorry. However, that’s not suitable for my workflow: I don’t need a single patch, and I could ferry one around easily; that wouldn’t be a problem. I work with branches, and their history is important, because I often go back and revert (fully or partially), and I often look back at things I tried before. That is important history during drafting, but completely irrelevant otherwise. Git lets me do dirty things temporarily, and share the refined result. Fossil lets me ferry uncommitted changes around, but that’s very far from having a branch history. I could build something on it, sure. But git already ships with that feature out of the box, so why would I?

                                                I could, of course, fork the repo, and do my draft commits in the fork, and once it reaches a stage where it’s ready to be upstreamed, I can rebuild it on top of the main repo - manually? Or does Fossil have something to help me with that?

                                                I’m sure it works in a lot of scenarios, where the desire to commit often & refine is less common than to think hard & write only when it’s already refined. It sounds terrible for quick prototyping or proofs of concept (which are a huge part of my work) within an existing project.

                                                1. 2

                                                  Fossil really is just a fancy UI and some tooling around a SQLite database. There is basically no end to what one can do when your entire code tree is living in a SQL database. You don’t need 100k confusing git commands when you can literally type sqlite3 <blah.fossil> and do anything you want. Whether fossil will understand it for you afterwards is of course an exercise left to the SQL writer. :)
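
To illustrate the kind of querying this enables, here is a toy example. It builds a throwaway table shaped like Fossil’s documented event table (the real schema has more columns than this) and queries it the same way you could query an actual repo.fossil file:

```shell
# Toy stand-in for a Fossil repository db; the column names mirror
# Fossil's 'event' table, but the data here is invented.
db=demo.fossil
rm -f "$db"
sqlite3 "$db" "CREATE TABLE event(type TEXT, mtime REAL, user TEXT, comment TEXT);"
sqlite3 "$db" "INSERT INTO event VALUES('ci', julianday('2022-01-02 10:00'), 'alice', 'fix typo');"
sqlite3 "$db" "INSERT INTO event VALUES('ci', julianday('2022-01-03 09:30'), 'bob', 'add feature');"

# The same shape of query works against a real repo.fossil:
# 'ci' rows are check-ins, and mtime is stored as a Julian day.
sqlite3 "$db" "SELECT datetime(mtime) || ' ' || user || ': ' || comment FROM event WHERE type = 'ci' ORDER BY mtime DESC;"
```

Any reporting you can express in SQL - recent check-ins per user, commits touching a branch, and so on - is one query away, no plumbing commands needed.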

                                                  That is important history during drafting, but completely irrelevant otherwise.

                                                  Fossil has a different perspective here. All history is important.

                                                  I think the big difference here is, Fossil’s UI and query tools are vastly more functional than Git’s. Literally an entire SQL implementation. Meaning you can hide/ignore loads of stuff from the history, so that in practice most of this ‘irrelevant history’ can be hidden from view the vast majority of the time.

                                                  I could, of course, fork the repo, and do my draft commits in the fork, and once it reaches a stage where it’s ready to be upstreamed, I can rebuild it on top of the main repo - manually? Or does Fossil have something to help me with that?

                                                  Yes, see: https://www2.fossil-scm.org/home/doc/trunk/www/branching.wiki

                                                  No need to fork, a branch should work just fine.

                                                  1. 2

                                                    Fossil really is just a fancy UI and some tooling around a SQLite database.

                                                    Similarly, git is just a UI over an object store. You can go and muck with the files themselves; there are libraries that help you do that. Whether git will understand it for you afterwards is an exercise left for whoever mucks about in the internals. ;)

                                                    Fossil has a different perspective here. All history is important.

                                                    And that is my major gripe. I do not believe that all history is important.

                                                    Meaning you can hide/ignore loads of stuff from the history, so that in practice most of this ‘irrelevant history’ can be hidden from view the vast majority of the time.

                                                    It still takes up space, and it still takes effort to even figure out what to ignore. With git, it’s easy: it simply isn’t there.

                                                    No need to fork, a branch should work just fine.

                                                    According to the document, a branch is just a named, intentional fork. From what I can tell, the history of the branch is still immutable, and Fossil maintains a single DAG for the entire repo (so the linked doc says), so if I wanted to clean things up before submitting upstream, I would need to rebuild the branch by hand. With git, I can rebase and rewrite history to clean it up.

                                                    1. 2

                                                      I do not believe that all history is important.

                                                      Seconded.

                                                      We do not have to remember everything we do.

                                                2. 1

                                                  Can you explain a little bit more how things like code reviews work? I’m not skeptical that they can be done in fossil; it’s just that the workflow is so different from what I’m used to.

                                                  1. 2

                                                    I am by no means a Fossil expert, but I’ll give you my perspective. Fossil handles moving the code back and forth, the rest is up to you.

                                                    I work on a new feature and am ready to commit, but I want Tootie to look it over (code review). If we have a shared machine somewhere with SSH and fossil on it, I can use fossil patch push server:/path/to/checkout and push my patch to some copy for her to look at. If not, I can fossil patch create <feature.patch> and then send her the .patch file (which is just a sqlite3 DB file) via any method.

                                                    She does her review and we talk about it, either in Fossil Forums or Chat, or email, irc, xmpp, whatever. Or she can hack on the code directly and send a new .patch file back to me to play with.

                                                    Whenever we agree it’s good to go, either one of us can commit it (assuming we both have commit rights). See the fossil patch docs.
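
To make the round trip concrete, here is roughly what that exchange looks like on the command line. The hosts and file names are invented, and the commands are echoed rather than executed so the sketch stands on its own without fossil installed or a checkout open:

```shell
# Sketch of the patch-based review exchange described above.
run() { echo "+ $*"; }   # swap 'echo' for real execution

run fossil patch create feature.patch           # author: bundle uncommitted work
run scp feature.patch tootie@host:review/       # ship it (any transport works)
run fossil patch apply feature.patch            # reviewer: apply in a clean checkout
run fossil patch push host:/path/to/checkout    # or push straight over SSH
```

The discussion itself happens wherever you like (forum, chat, email); fossil patch only handles moving the code back and forth.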

                                          2. 1

                                            What do you mean by “the same data structure as git”? You know Fossil is using SQLite, right? I don’t know what the current Git data structure is, but from my experience with it, it is much more complicated to work with compared to a SQL database.

                                            1. 2

                                              From the link under heading 2.3:

                                              The baseline data structures for Fossil and Git are the same, modulo formatting details. Both systems manage a directed acyclic graph (DAG) of Merkle tree structured check-in objects. Check-ins are identified by a cryptographic hash of the check-in contents, and each check-in refers to its parent via the parent’s hash.

                                              1. 1

                                                A merkle tree of commits containing the file contents.

                                              2. 1

                                                I’ve been using Fossil for my personal projects since summer 2020.

                                                The main advantages I see compared to git are that it is fairly easy to back up or move to a new server (the repository is just a single SQLite database, not a folder in a custom key/value format), as well as to give other contributors commit access (no shell access required).

                                                Besides this, I’m also more inclined toward their philosophy of being immutable, as I would otherwise spend way too much time making my commits look nice.

                                              1. 1

                                                Not sure how far along the UI side of things is, but if the project manages to build a UI framework that’s properly integrated with the rest of the OS and marginally nicer than Qt/GTK, I’d be very interested in seeing where it goes. I’m not a big fan of Obj-C or Cocoa, but at this point I just want some viable alternatives to Qt and GTK :\

                                                My benchmark for whether an OS is usable is whether it can run a modern, hardware-accelerated browser. If the screenshots are up to date and they really have Firefox running on the new framework, that’s already a huge win. I would definitely be happy to pay for an open source nix-like OS that has a better lightweight desktop/laptop experience than what Linux currently offers. XFCE is fine, but certainly has some rough edges.

                                                1. 3

                                                  Hrm… I’m not seeing a significant difference in the demos (to be fair, I run a fairly non-standard Firefox + Linux setup which may not have been tested), and I soon realized the main hero “example” is actually a pre-made video 🤨

                                                  1. 2

                                                    I had one demo go faster with normal SVG, one made no difference, third was faster with SSVG. Firefox on Linux.

                                                  1. 3

                                                    There were some really nice examples in here. I sometimes have a hard time applying some of Tufte’s visualization principles, and this article gave me some new ideas. I especially liked the concept of “background data” that gives the viewer a basis of comparison when analyzing a subset.

                                                    1. 8

                                                      The only problem with lots of custom aliases (or custom keybindings in other programs like editors) is that the muscle memory burns you every time you have to work on a remote machine. I used to go custom-to-the-max with my config, but I’ve gradually shifted back to fewer and fewer aliases, except for the most prevalent build/version control commands I run dozens of times each day.

                                                      1. 9

                                                        When I need to remote into machines where I can’t set up my shell profile for whatever reason, I just config ssh to run my preferred shell setup commands (aliases, etc) as I’m connecting.

                                                        My tools work for me, I don’t work for my tools.

                                                        1. 5

                                                          You mean, for a single session only? Care to share that lifehack? I’m assuming something in ssh_config?

                                                          1. 2

                                                            Yeah, single session only. There are a bunch of different ways to skin this cat - LocalCommand and RemoteCommand along with RequestTTY in ssh_config can help.

                                                            Conceptually you want to do something like (syntax probably wrong, I’m on my phone)

                                                            scp .mypreferedremoterc me@remote:.tmprc; ssh -t me@remote "bash --rcfile ~/.tmprc -l; rm .tmprc"

                                                            which you could parameterize with a shell function or set up via LocalCommand and RemoteCommand above, or skip the temp file entirely with clever use of an env variable to slurp the rc file in and feed it into the remote bash (with a heredoc or SendEnv/SetEnv)
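
For what it’s worth, here’s one way the phone-typed one-liner above could be wrapped up as a shell function (the rc file name and hosts are placeholders, same caveats as above):

```shell
# Carry a local rc file along for a single ssh session: copy it up under
# a throwaway name, start a login bash that sources it, delete it on exit.
sshrc() {
    host="$1"
    rcfile="${2:-$HOME/.mypreferedremoterc}"   # placeholder rc file name
    scp "$rcfile" "$host:.tmprc" &&
    ssh -t "$host" 'bash --rcfile ~/.tmprc -l; rm -f ~/.tmprc'
}

# Usage: sshrc me@remote [path/to/rcfile]
```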

                                                        2. 2

                                                          Every time I have to work on a remote machine, I do the commands through ssh or write a script to do it for me.

                                                          1. 2

                                                            Naming a meta-archive-extractor “atool” doesn’t help either. OP used unzip for this, but that name is overloaded; uncompress is also taken.

                                                            What word would you guys use for aliasing it?

                                                            1. 3

                                                              I use extract as a function that just calls the right whatever based on the filename.
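
A typical version of that function looks something like this (the extension-to-tool mapping is a common dotfile pattern, not the parent commenter’s exact version; adjust to whatever unpackers you have installed):

```shell
# Dispatch to the right unpacker based on the file extension.
extract() {
    case "$1" in
        *.tar.gz|*.tgz)   tar xzf "$1" ;;
        *.tar.bz2|*.tbz2) tar xjf "$1" ;;
        *.tar.xz)         tar xJf "$1" ;;
        *.tar)            tar xf "$1" ;;
        *.zip)            unzip "$1" ;;
        *.gz)             gunzip "$1" ;;
        *.bz2)            bunzip2 "$1" ;;
        *.7z)             7z x "$1" ;;
        *)                echo "extract: don't know how to unpack '$1'" >&2
                          return 1 ;;
    esac
}
```

Note the compound extensions (*.tar.gz) must come before the bare ones (*.gz), since case takes the first matching pattern.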

                                                              1. 2

                                                                I think prezto comes with an x alias, and I like it a lot. It burns easily into the muscle memory.

                                                              2. 2

                                                                To defeat muscle memory when changing tools, I make sure the muscle memory command fails:

                                                                alias unzip='echo "use atool"'

                                                                It doesn’t take many times to break the muscle memory. Then I remove the alias.

                                                                1. 1

                                                                  Is atool there by default on Linux boxes?

                                                                  1. 1

                                                                    Nope. At least I’m not aware of any Linux distro installing it by default.

                                                                    But being installed by default is IMHO totally overrated. The main point is that it is available in many Linux distributions’ repos without having to add 3rd party repos - at least in Debian and all derivatives like Devuan, Kali or Ubuntu.

                                                                    1. 2

                                                                      I understand, but it’s not the same. If I don’t have a shell regularly there, and not my own dotfiles, I likely want to avoid installing and removing system packages on other people’s systems. When stuff breaks, I want the minimum amount of blame :)

                                                                      Not that this is not a useful tool.

                                                                      1. 1

                                                                        Ok, granted. Working as a system administrator it’s usually me who has to fix things anyway. And it happens only very, very seldom that something breaks just because you install a commandline tool. (Saying this with about 25 years of Linux system administration experience.)

                                                                        Only zutils can theoretically have an impact, as it renames command-line system tools and replaces them with wrappers. But so far in the past decade, I’ve never seen any system break due to zutils. (I only saw things not working properly because it was not installed. But that was mostly because I’m so used to it that I take it as given that zutils is installed. :-)

                                                                        1. 2

                                                                          Yep, different role. I did some freelance work long ago, and learned from (fortunately) my predecessor’s mistake: they hired me because someone before me updated some stuff, and that broke… probably the PHP version? Anyway, their shop didn’t work any more and they were bleeding money till I fixed it. It was one of my early freelance jobs, so it confirmed the age-old BOFH mantra of “if it ain’t broke, don’t fix it.” So given time, I would always explicitly ask permission to do this or that, or install the other, if needed.

                                                                          But I went a different route anyway, so even though I am still better than average, I think, I’m neither good nor professional. But I think old habits die hard, so that’s why I’m saying “if this stuff isn’t there by default, you’ll just have to learn your tar switches” :)

                                                                2. 2

                                                                  muscle memory burns you every time you have to work on a remote machine

                                                                  Note that this doesn’t apply for eshell as the OP is using: If you cd to a remote machine in eshell, your aliases are still available.

                                                                  1. 1

                                                                    Command history and completion suggestions have really helped me avoid new aliases.

                                                                  1. 2

                                                                    A customer had a program that opened a very large spreadsheet in Excel. Very large, like over 300,000 rows. They then selected all of the rows in the very large spreadsheet, copied those rows to the clipboard, and then ran a program that tried to extract the data. The program used the Get­Clipboard­Data function to retrieve the data in Rich Text Format.

                                                                    The customer is always right, but the customer is also often dumb.

                                                                    1. 18

                                                                      Copy/paste is the only form of IPC for GUIs in many cases, though. It could be stupidity, but it might be necessity :|

                                                                      1. 2

                                                                        Yeah I do a lot of cross-GUI automation with AutoHotKey and the best way to move data around is to store it on the clipboard.

                                                                      2. 5

                                                                        Next time, we’ll see what we can do to extend this timeout.

                                                                        :sigh:

                                                                        1. -1

                                                                          The blog post could have been a tweet, and the “next time” could have been a second tweet.

                                                                      1. 16

                                                                        In some ways, high-level languages with package systems are to blame for this. I normally code in C++ but recently needed to port some code to JS, so I used Node for development. It was breathtaking how quickly my little project piled up hundreds of dependent packages, just because I needed to do something simple like compute SHA digests or generate UUIDs. Then Node started warning me about security problems in some of those libraries. I ended up taking some time finding alternative packages with fewer dependencies.
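
For those two particular cases, Node’s standard library already covers it with zero npm packages; a quick sanity check (assuming Node 16+ is installed, for randomUUID):

```shell
# SHA-256 digest and a v4 UUID using only Node's built-in crypto module.
node -e '
const crypto = require("crypto");

// hex-encoded SHA-256 of a string
console.log(crypto.createHash("sha256").update("hello").digest("hex"));

// RFC 4122 version 4 UUID (Node 16+)
console.log(crypto.randomUUID());
'
```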

                                                                        On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t. It’s cool to look at how tiny and efficient code can be — a Scheme interpreter in 4KB! The original Mac OS was 64KB! — but yowza, is it ever difficult to code that way.

                                                                        There was an early Mac word processor — can’t remember the name — that got a lot of love because it was super fast. That’s because they wrote it in 68000 assembly. It was successful for some years, but failed by the early 90s because it couldn’t keep up with the feature set of Word or WordPerfect. (I know Word has long been a symbol of bloat, but trust me, Word 4 and 5 on Mac were awesome.) Adding features like style sheets or wrapping text around images took too long to implement in assembly compared to C.

                                                                        The speed and efficiency of how we’re creating stuff now is crazy. People are creating fancy OSs with GUIs in their bedrooms with a couple of collaborators, presumably in their spare time. If you’re up to speed with current Web tech you can bring up a pretty complex web app in a matter of days.

                                                                        1. 24

                                                                          I don’t know, I think there’s more to it than just “these darn new languages with their package managers made dependencies too easy, in my day we had to manually download Boost uphill both ways” or whatever. The dependencies in the occasional Swift or Rust app aren’t even a tenth of the bloat on my disk.

                                                                          It’s the whole engineering culture of “why learn a new language or new API when you can just jam an entire web browser the size of an operating system into the application, and then implement your glorified scp GUI application inside that, so that you never have to learn anything other than the one and only tool you know”. Everything’s turned into 500megs worth of nail because we’ve got an entire generation of Hammer Engineers who won’t even consider that it might be more efficient to pick up a screwdriver sometimes.

                                                                          We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t

                                                                          That’s the argument, but it’s not clear to me that we haven’t severely over-corrected at this point. I’ve watched teams spend weeks poking at the mile-high tower of leaky abstractions any react-native mobile app teeters atop, just to try to get the UI to do what they could have done in ten minutes if they’d bothered to learn the underlying platform API. At some point “make all the world a browser tab” became the goal in-and-of-itself, whether or not that was inefficient in every possible dimension (memory, CPU, power consumption, or developer time). It’s heretical to even question whether or not this is truly more developer-time-efficient anymore, in the majority of cases – the goal isn’t so much to be efficient with our time as it is to just avoid having to learn any new skills.

                                                                          The industry didn’t feel this sclerotic and incurious twenty years ago.

                                                                          1. 7

                                                                            It’s heretical to even question whether or not this is truly more developer-time-efficient anymore

                                                                            And even if we set that question aside and assume that it is, it’s still just shoving the costs onto others. Automakers could probably crank out new cars faster by giving up on fuel-efficiency and emissions optimizations, but should they? (Okay, left to their own devices they probably would, but thankfully we have regulations they have to meet.)

                                                                            1. 1

                                                                              left to their own devices they probably would, but thankfully we have regulations they have to meet.

                                                                              Regulations. This is it.

                                                                              I’ve long believed that this is very important in our industry. As earlier comments say, you can make a complex web app after work in a weekend. But then there are people, in the above-mentioned auto industry, who take three sprints to set up a single screen with a table, a popup, and two forms. That’s after they’ve pulled in the entire internet’s worth of dependencies.

                                                                              On the one hand, we don’t want to be gatekeeping. We want everyone to contribute. When dhh said we should stop celebrating incompetence, the majority of people around him called it gatekeeping. Yet when we see or say something like this - don’t build bloat, or something along those lines - everyone agrees.

                                                                              I think the middle line should be in between. Let individuals do whatever the hell they want, but regulate “selling” stuff for money or advertisement eyeballs or anything similar. If an app is more than x MB (some reasonable target), it has to get certified before you can publish it. Or maybe only if it’s a popular app. Or, if a library is included in more than X apps, then that lib either gets “certified”, or further apps using it are banned.

                                                                              I am sure that is huge, immensely big, can of worms. There will be many problems there. But if we don’t start cleaning up shit, it’s going to pile up.

                                                                              A simple example - if controversial - is Google. When they start punishing a webapp for not rendering within 1 second, everybody on the internet (that wants to be on top of Google) starts optimizing for performance. So, it can be done. We just have to set up - and maintain - a system that deals with the problem… well, systematically.

                                                                            2. 1

                                                                              why learn a new language or new API when you can just jam an entire web browser the size of an operating system into the application

                                                                              Yeah. One of the things that confuses me is why apps bundle a browser when platforms already come with browsers that can easily be embedded in apps. You can use Apple’s WKWebView class to embed a Safari-equivalent browser in an app that weighs in at under a megabyte. I know Windows has similar APIs, and I imagine Linux does too (modulo the combinatorial expansion of number-of-browsers times number-of-GUI-frameworks.)

                                                                              I can only imagine that whoever built Electron felt that devs didn’t want to deal with having to make their code compatible with more than one browser engine, and that it was worth it to shove an entire copy of Chromium into the app to provide that convenience.

                                                                              1. 1

                                                                                Here’s an explanation from the Slack developer who moved Slack for Mac from WebKit to Electron. And on Windows, the only OS-provided browser engine until quite recently was either the IE engine or the abandoned EdgeHTML.

                                                                            3. 10

                                                                              On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t.

                                                                              The problem is that your dependencies can behave strangely, and you need to debug them.

                                                                              Code bloat makes programs hard to debug. It costs programmer time.

                                                                              1. 3

                                                                                The problem is that your dependencies can behave strangely, and you need to debug them.

                                                                                To make matters worse, developers don’t think carefully about which dependencies they’re including. For instance, if image loading is needed, many applications could get by with read support for a single format (e.g. with libpng). Too often I’ll see an application depend on something like ImageMagick, which is complete overkill for that situation: it includes a ton of additional complex functionality that bloats the binary, introduces subtle bugs, and wasn’t even needed to begin with.

                                                                              2. 10

                                                                                On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t.

                                                                                The problem is that computational resources vs. programmer time is just one axis along which this tradeoff is made; some others include security vs. programmer time, correctness vs. programmer time, and others I’m sure I’m just not thinking of right now. It sounds like a really pragmatic argument when you’re considering your own costs, because we have been so thoroughly conditioned into ignoring our externalities. I don’t believe the state of contemporary software would look like it does if the industry were really in the habit of pricing in the costs incurred by others in addition to its own, although of course it would take a radically different incentive landscape to make that happen. It wouldn’t look like a code golfer’s paradise, either, because optimizing for code size and efficiency at all costs is also not a holistic accounting! It would just look like a place with somewhat fewer data breaches, somewhat fewer corrupted saves, somewhat fewer watt-hours turned into waste heat, and, yes, somewhat fewer features in the cases where their value didn’t exceed their cost.

                                                                                1. 7

                                                                                  We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t

                                                                                  But we aren’t, because modern resource-wasteful software isn’t really released any quicker. Quite the contrary: there is so much development overhead that we don’t see those exciting big releases anymore, the ones with a dozen features everyone loves at first sight. New features arrive in microscopic increments, so slowly that hardly any project survives 3-5 years without becoming obsolete or falling out of fashion.

                                                                                  What we are trading is quality for quantity. We lower the skill and knowledge barrier so far, to accommodate millions of developers who “learned how to program in one week”, and the results are predictably what this post talks about.

                                                                                  1. 6

                                                                                    I’m as much against bloat as everyone else (except those who make bloated software, of course—those clearly aren’t against it). However, it’s easy to forget that small software from past eras often couldn’t do much. The original Mac OS could be 64KB, but no one would want to use such a limited OS today!

                                                                                    1. 5

                                                                                      The original Mac OS could be 64KB, but no one would want to use such a limited OS today!

                                                                                      Seems some people (@neauoire) do want exactly that: https://merveilles.town/@neauoire/108419973390059006

                                                                                      1. 6

                                                                                        I have yet to see modern software that is saving the programmer’s time.

                                                                                        I’m here for it, I’ll be cheering when it happens.

                                                                                        This whole thread reminds me of a little .txt file that came packaged into DawnOS.

                                                                                        It read:

                                                                                        Imagine that software development becomes so complex and expensive that no software is being written anymore, only apps designed in devtools. Imagine a computer, which requires 1 billion transistors to flicker the cursor on the screen. Imagine a world, where computers are driven by software written from 400 million lines of source code. Imagine a world, where the biggest 20 technology corporation totaling 2 million employees and 100 billion USD revenue groups up to introduce a new standard. And they are unable to write even a compiler within 15 years.

                                                                                        “This is our current world.”

                                                                                        1. 11

                                                                                          I have yet to see modern software that is saving the programmer’s time.

                                                                                          People love to hate Docker, but having had the “pleasure” of doing everything from full-blown install-the-whole-world-on-your-laptop dev environments to various VM applications that were supposed to “just work”… holy crap does Docker save time not only for me but for people I’m going to collaborate with.

                                                                                          Meanwhile, programmers of 20+ years prior to your time are equally as horrified by how wasteful and disgusting all your favorite things are. This is a never-ending cycle where a lot of programmers conclude that the way things were around the time they first started (either programming, or tinkering with computers in general) was a golden age of wise programmers who respected the resources of their computers and used them efficiently, while the kids these days have no respect and will do things like use languages with garbage collectors (!) because they can’t be bothered to learn proper memory-management discipline like their elders.

                                                                                          1. 4

                                                                                            I’m of the generation that started programming at the tail end of Ruby and Objective-C, and I would definitely not call that a golden age. If anything, looking back at that period now, it looks like a mid-slump.

                                                                                          2. 4

                                                                                            I have yet to see modern software that is saving the programmer’s time.

                                                                                            What’s “modern”? Because I would pick a different profession if I had to write code the way people did prior to maybe the late 90s (at minimum).

                                                                                            Edit: You can pry my modern IDEs and toolchains from my cold, dead hands :-)

                                                                                      2. 6

                                                                                        Node is an especially good villain here because JavaScript has long specifically encouraged lots of small dependencies and has little to no stdlib, so you need a package for nearly everything.

                                                                                        1. 5

                                                                                          It’s kind of a turf war as well. A handful of early adopters created tiny libraries that should be single functions or part of a standard library. Since their notoriety depends on these libraries, they fight to keep them around. Some are even on the boards of the downstream projects and fight to keep their own library in the list of dependencies.

                                                                                        2. 6

                                                                                          We’re trading CPU time and memory, which are ridiculously abundant

                                                                                          CPU time is essentially equivalent to energy, which I’d argue is not abundant, whether at the large scale of the global problem of sustainable energy production, or at the small scale of mobile device battery life.

                                                                                          for programmer time, which isn’t.

                                                                                          In terms of programmer-hours available per year (which of course unit-reduces to active programmers), I’m pretty sure that resource is more abundant than it has ever been at any point in history, and it’s only getting more so.

                                                                                          1. 2

                                                                                            CPU time is essentially equivalent to energy

                                                                                            When you divide it by the CPU’s efficiency, yes. But CPU efficiency has gone through the roof over time. You can get embedded devices with the performance of some fire-breathing tower PC of the 90s, that now run on watch batteries. And the focus of Apple’s whole line of CPUs over the past decade has been power efficiency.

                                                                                            There are a lot of programmers, yes, but most of them aren’t the very high-skilled ones required for building highly optimal code. The skills for doing web dev are not the same as for C++ or Rust, especially if you also constrain yourself to not reaching for big pre-existing libraries like Boost, or whatever towering pile of crates a Rust dev might use.

                                                                                            (I’m an architect for a mobile database engine, and my team has always found it very difficult to find good developers to hire. It’s nothing like web dev, and even mobile app developers are mostly skilled more at putting together GUIs and calling REST APIs than they are at building lower-level model-layer abstractions.)

                                                                                          2. 2

                                                                                            Hey, I don’t mean to be a smart ass here, but I find it ironic that you start your comment blaming “high-level languages with package systems” and immediately admit that you blindly picked a library for the job, and that you could solve the problem just by “taking some time finding alternative packages with fewer dependencies”. That honestly doesn’t sound like a problem with either the language or the package manager.

                                                                                            What would you expect the package manager to do here?

                                                                                            1. 8

                                                                                              I think the problem in this case actually lies with the language. JavaScript has such a piss-poor standard library and such dangerous semantics (which the standard library doesn’t try to remedy, either) that sooner rather than later you will have a transitive dependency on isOdd, isEven, and isNull, because even those simple operations aren’t exactly simple in JS.

                                                                                              Despite being made to live in a web browser, the JS standard library has very few affordances for working with things like URLs, and despite being targeted at user interfaces, it has very few affordances for working with dates, numbers, lists, or localisation. This makes dependency graphs both deep and full of duplicated effort, since two dependencies in your program may depend on different third-party implementations of what should already be in the standard library, themselves duplicating what you already have in your operating system.

                                                                                              1. 2

                                                                                                It’s really difficult for me to counter an argument that is basically “I don’t like JS”. The question was never about that language; it was about “high-level languages with package systems”, but your answer hyper-focuses on JS and doesn’t address languages like Python, for example, which is also a “high-level language with a package system” and which also has an “is-odd” package (which, honestly, I don’t see what that has to do with anything).

                                                                                                1. 1

                                                                                                  The response you were replying to was very much about JS:

                                                                                                  In some ways, high-level languages with package systems are to blame for this. I normally code in C++ but recently needed to port some code to JS, so I used Node for development. It was breathtaking how quickly my little project piled up hundreds of dependent packages, just because I needed to do something simple like compute SHA digests or generate UUIDs.

                                                                                                  For what it’s worth, while Python may have an isOdd package, how often do you end up inadvertently importing it in Python, as opposed to “batteries-definitely-not-included” JavaScript? Fewer batteries included means more imports by default, which themselves depend on other imports, and a few steps down you will find leftPad.

                                                                                                  As for isOdd, npmjs.com lists 25 variants of it, and probably as many of isEven.
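To make the “batteries included” point concrete, the two tasks from the quoted comment - SHA digests and UUIDs - each take one standard-library import in Python, with no third-party packages at all:

```python
import hashlib
import uuid

# Both tasks from the quoted comment, zero third-party dependencies:
digest = hashlib.sha256(b"hello world").hexdigest()
print(digest)        # hex-encoded SHA-256 digest
print(uuid.uuid4())  # a random (version 4) UUID
```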

                                                                                                  1. 1

                                                                                                    and a few steps down, you will find leftPad

                                                                                                    What? What kind of data do you have to back up a statement like this?

                                                                                                    You don’t like JS, I get it; I don’t like it either. But the unfair criticism is what really rubs me the wrong way. We are technical people; we are supposed to make decisions based on data. Comments like this just generate division without the slightest resemblance of a solid argument, and they do no good for a healthy discussion.

                                                                                                    Again, none of these arguments are true of JS exclusively. Python is batteries-included, sure, but it’s one of the few languages that are. And you conveniently leave out of your quote the part where the OP admits that with a little effort the “problem” became a non-issue. That little effort is what we get paid for; that’s our job.

                                                                                              2. 3

                                                                                                I’m not blaming package managers. Code reuse is a good idea, and it’s nice to have such a wealth of libraries available.

                                                                                                But it’s a double-edged sword, especially when you use a highly dynamic language like JS that doesn’t support dead-code stripping or build-time inlining, so you end up having to ship an entire library instead of just the bits you’re using.

                                                                                              3. 1

                                                                                                On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t.

                                                                                                We’re trading CPU and memory for the time of some programmers, but we’re also adding the time of other programmers onto the other side of the balance.

                                                                                                1. 1

                                                                                                  I definitely agree with your bolded point - I think that’s the main driver for this kind of thing.

                                                                                                  Things change if there’s a reason for them to be changed. The incentives don’t really line up currently to the point where it’s worth it for programmers/companies to devote the time to optimize things that far.

                                                                                                  That is changing a bit already, though. For example, performance and bundle size are getting seriously considered for web dev these days. Part of the reason for that is that Google penalizes slow sites in their rankings - a very direct incentive to make things faster and more optimized!

                                                                                                1. 9

                                                                                                  I loved my community college. Started going when I was in high school (homeschooled) and graduated early because of it. Got a master’s in literature from a local state university, couldn’t find a job and didn’t want to teach, so I went back to that same community college and took some Cisco cert classes. The CCNA got me an interview at a tech company as a support person. Taught myself programming while I worked there and have moved up to lead/manager of the backend team at that same company.

                                                                                                  tldr; community colleges are great! And if I can do it, so can you!

                                                                                                  1. 3

                                                                                                    I loved my community college. Started going when I was in high school (homeschooled) and graduated early because of it.

                                                                                                    Lol, I did the same thing, and that saved me two years at a state school getting my BS since I could transfer my credits from my associates. Same degree as everyone else, just half the time and money ;)

                                                                                                    In my experience, many of my community college profs were as good as or better than the ones at the state/private universities I attended (I had some great gen eds and math classes). I didn’t see a clear correlation between cost and quality of teaching (in fact, the master’s classes I took at one of the top engineering colleges in the country had the worst prof I ever came across, so it might actually be a negative correlation for me, though my experience there may have been an outlier).

                                                                                                    1. 9

                                                                                                      As an academic drop out, my experience is that where graduates end up teaching is more or less random. It helps to be a better teacher or researcher, but a lot of it comes down to factors like: Did you apply to a grad program that is hot seven years later when you finish? Are you willing to move anywhere? Anywhere, anywhere? Do you have loved ones, lol? How long can you go without taking a salary? How about a little longer? Are you able to write a strong recommendation letter for yourself for your professor to sign? Can you impress people who know nothing about your subfield in an interview? Etc. etc. So all things equal, it’s better to be a good teacher, but it’s lost in the noise of other factors. Community colleges are more likely to get people who enjoy teaching as profession but need to be able to live in a specific geographic location, so there’s a slightly positive bias there, although it’s not super strong because you aren’t ever really evaluated.

                                                                                                      1. 4

                                                                                                        At a university the main job of a professor is to obtain grants and do research; teaching is a secondary concern. I’m sure that many professors would love to skip out on teaching (and probably do, by passing it off to their ~~slaves~~ grad students).

                                                                                                        1. 3

                                                                                                          At one point, my former department looked into just cancelling classes. They budgeted out that, if everyone took a 10% pay cut, we could just stop teaching entirely. It was ultimately decided that this would be bad for recruiting, so they didn’t go through with it, but the university lawyers had agreed that they couldn’t stop us.

                                                                                                    1. 6

                                                                                                      See also https://github.com/Nozbe/WatermelonDB

                                                                                                      I’ve also thought this is a good direction for UI development, since I’ve always found it weird that parts of the state are duplicated in both the GUI toolkit and the underlying data store (and they inevitably end up out of sync at some point, which is a fun source of bugs). I hope we see more tools developed this way, to see what the pros/cons are.

                                                                                                      I experimented a bit with this idea with Python + SQLite + Skia, and was surprised that I could implement drag and drop by updating a position record in the database and refetching it every frame to render the UI. Suffice it to say, SQLite is definitely not a bottleneck for tracking interactive state changes in a GUI.

                                                                                                      While SQL is a bit clunky, the relational model is a much better match for the problem than OOP. In fact, a lot of the techniques applied in data-oriented design (which has been popular in gamedev circles recently), essentially structure entities as relational database tables (albeit using a very barebones, in-memory representation). There are plenty of examples referencing relational database in the Data Oriented Design book.
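The drag-and-drop loop described above can be sketched in a few lines with just the standard library (table and function names here are made up for illustration; the actual prototype used Skia for rendering):

```python
import sqlite3

# All UI state lives in SQLite; handlers write, the renderer reads.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE widget (id INTEGER PRIMARY KEY, x REAL, y REAL)")
db.execute("INSERT INTO widget VALUES (1, 0.0, 0.0)")

def on_drag(widget_id, dx, dy):
    # The event handler only updates the database record...
    db.execute("UPDATE widget SET x = x + ?, y = y + ? WHERE id = ?",
               (dx, dy, widget_id))

def render_frame(widget_id):
    # ...and the renderer re-fetches it each frame, so there is a
    # single source of truth that can never drift out of sync.
    return db.execute("SELECT x, y FROM widget WHERE id = ?",
                      (widget_id,)).fetchone()

on_drag(1, 5.0, 3.0)
print(render_frame(1))  # (5.0, 3.0)
```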

                                                                                                      1. 3

                                                                                                        Oof, I never thought about it before, but it makes sense that Python can have data races. I’m assuming the GIL locks atomically on each bytecode instruction, so if load, modify, and store instructions from different threads interleave, it’s possible to end up with races. I don’t generally write multithreaded code that way in Python, but I wonder how many bugs I’ve created since I didn’t realize this :(

                                                                                                        1. 4

                                                                                                          I’m assuming the GIL locks atomically on each instruction

                                                                                                          I just realized an exception to this: a lot of instructions that look primitive can actually call back into Python code (for example, BINARY_ADD can call an __add__ method), so those must be able to yield the GIL the same way a CALL_FUNCTION instruction would. So whether an instruction is atomic can depend on what data is passed in.

                                                                                                          1. 3

                                                                                                            I’m assuming the GIL locks atomically on each instruction

                                                                                                            That appears to be the case. The official Python FAQ has a question, What kinds of global value mutation are thread-safe?, with the answer:

                                                                                                            In general, Python offers to switch among threads only between bytecode instructions; how frequently it switches can be set via sys.setswitchinterval(). Each bytecode instruction and therefore all the C implementation code reached from each instruction is therefore atomic from the point of view of a Python program.

                                                                                                            For those of us who don’t have the Python bytecode model memorized, the standard dis module can provide some insight into what is actually atomic, although I don’t believe there are any guarantees that the details of bytecode compilation won’t change between versions.
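For example, the seemingly atomic statement below disassembles into several instructions, any boundary of which is a potential thread-switch point under the GIL (the exact opcodes vary by CPython version):

```python
import dis

def bump(d, key):
    d[key] += 1  # looks like one operation...

# ...but dis shows it is several bytecode instructions:
# loads of d and key, a subscript read, an add, and a subscript store.
dis.dis(bump)
```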

                                                                                                          1. 2

                                                                                                            Very nice article! I’ve always been curious about the process of writing kernel drivers. Figuring out how the hardware works has always seemed like magic to me, so knowing that these i2c tools exist is useful (even if they are dangerous to use blindly).

                                                                                                            1. 4

                                                                                                              It appears the author has discovered namespaces and monkeypatching (or annotations, depending on how dynamic you want to be with it), which have been renamed to domains and filters.

                                                                                                              Going to the main website:

                                                                                                              The Web’s trust system is broken. Blockchains can fix it.

                                                                                                              Er… someone drank a little too much kool-aid.

                                                                                                              1. 10

                                                                                                                I feel the same about “the Dragon Book”. It’s a terrible book; everyone who hasn’t read it recommends it. It’s really unhelpful.

                                                                                                                1. 13

                                                                                                                  I found it very helpful when I read it.

                                                                                                                  I think the issue with this thought process (and the parent post) is that it assumes that everyone will have the same experience with a book. I’m not sure where that idea would come from though - would you expect to be able to understand a math textbook without effort?

                                                                                                                  Basically this boils down to “I had a bad experience with this book, so no one should recommend it.”

                                                                                                                  Edit: another way to think about this. Personally, I get nothing out of “instructional” videos. I can’t focus, the tempo is all wrong, and watched content just has no retention for me. Does that mean that no one should suggest videos as a learning resource? Of course not! Many people love learning from those videos and get a lot out of that style of content. The efficacy it has for me doesn’t have any bearing on how effective it will be for someone else.

                                                                                                                  1. 4

                                                                                                                    …would you expect to be able to understand a math textbook without effort?

                                                                                                                    then

                                                                                                                    Personally, I get nothing out of “instructional” videos.

                                                                                                                    Maybe you need to try harder, did you really expect to get anything out of the videos without effort?

                                                                                                                    Seriously, though, why is failure to get much out of Famous CS Book chalked up to a lack of effort (it’s not just you, other threads in this discussion contain the same accusation / assumption), but failure to learn from a YouTube video is just a difference in how you learn versus other people?

                                                                                                                    1. 3

                                                                                                                      I think you misread or misunderstood what I wrote - it’s not different, and that was my point. I don’t tell other people they shouldn’t try to learn from videos, because I know that they are valuable resources for the people who learn well from them. I recognize that’s something valuable to others even though it isn’t valuable to me.

                                                                                                                      Now flip it around. Just because someone doesn’t enjoy/can’t understand/can’t internalize information from a certain type of book doesn’t mean that the well-regarded book shouldn’t be recommended. It means that individual should probably not consider books like that in the future.

                                                                                                                      1. 2

                                                                                                                        Then why the thing about effort? Effort has nothing to do with it if we’re talking about different learning styles. And yet, I repeatedly see people accuse others of low effort because they didn’t get anything from a famous CS or math text.

                                                                                                                        I think the issue with this thought process (and the parent post) is that it assumes that everyone will have the same experience with a book. I’m not sure where that idea would come from though - would you expect to be able to understand a math textbook without effort?

                                                                                                                        I agree with the first sentence, but the second half of the second sentence (after the hyphen) just seems totally unrelated. You’re accusing the person of not putting in enough effort, yet you claim that it’s about learning styles.

                                                                                                                        Which is it? Learning styles? Or lack of effort?

                                                                                                                        1. 8

                                                                                                                          Just going to pop in and point out that there is no evidence for learning styles being a thing. Yes, engaging multiple sensory modalities leads to better retention. No, different individuals don’t do significantly differently with particular modalities in the absence of major issues (blind people obviously can’t use the visual modality, etc.).

                                                                                                                          1. 3

                                                                                                                            I think you’re picking on words and ascribing your own slant/opinions to them. This will probably be the last response that I’m going to write in this chain.

                                                                                                                            I’m not ascribing any virtue to either learning style - that’s something you appear to be doing. Who’s to say that “effort” isn’t an important part of a different learning style? Maybe, for some people, the effort of reading the book is what allows the information to sink in, while for others it’s an impediment to their learning?

                                                                                                                            For the record, I can’t get through most math papers due to the effort I’d need to put in to understand their notation, and I have a strong foundation in math. I usually read others’ summaries and then augment my understanding with the paper as a reference. I don’t think this makes me “worse” - but I do know that I could do it myself if I had the patience for it (because I’ve done it before).

                                                                                                                            The question was “Would you expect to be able to understand a math textbook without effort?”. I know that I cannot honestly answer “yes” to that question.

                                                                                                                    2. 4

                                                                                                                      The best compiler book I ever read was Per Brinch Hansen’s Brinch Hansen on Pascal Compilers; Wirth of course also writes broadly and well on compilers.

                                                                                                                      Something that the Dragon Book covers that a lot of others don’t though is a broad view of parsing theory. Many books tend to discuss recursive descent parsing and call it a day. If they cover other parsing mechanisms, it’s often only a brief mention.

                                                                                                                      1. 10

                                                                                                                        Indeed, my complaint with the dragon book is that it’s mostly a book about parsers, not compilers, which is not what I’d signed up for when I picked it up. It does touch on other topics, but very superficially, and I felt kinda cheated.

                                                                                                                        Also, even seen as a good book on parsing, it needs an update – the 2nd edition mentions neither parser combinators nor PEGs/packrat parsing, which are important, well-known techniques these days, but were still very new when the 2nd edition was published (2006).
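                                                                                                                                        For readers who haven’t met the techniques named above: a parser combinator builds parsers out of small functions composed with sequencing and ordered choice (the same ordered choice PEGs are built on). A minimal sketch in Go, with illustrative names that don’t come from any particular library:

```go
package main

import "fmt"

// Parser consumes a prefix of the input and returns the rest, or fails.
type Parser func(input string) (rest string, ok bool)

// lit matches a literal string prefix.
func lit(s string) Parser {
	return func(in string) (string, bool) {
		if len(in) >= len(s) && in[:len(s)] == s {
			return in[len(s):], true
		}
		return in, false
	}
}

// seq succeeds only if every parser succeeds, in order.
func seq(ps ...Parser) Parser {
	return func(in string) (string, bool) {
		for _, p := range ps {
			var ok bool
			if in, ok = p(in); !ok {
				return in, false
			}
		}
		return in, true
	}
}

// alt tries parsers in order and commits to the first that succeeds --
// the "ordered choice" that characterises PEGs.
func alt(ps ...Parser) Parser {
	return func(in string) (string, bool) {
		for _, p := range ps {
			if rest, ok := p(in); ok {
				return rest, true
			}
		}
		return in, false
	}
}

func main() {
	greeting := seq(lit("hello"), lit(" "), alt(lit("world"), lit("zig")))
	_, ok := greeting("hello zig")
	fmt.Println(ok) // true
}
```

                                                                                                                                        Packrat parsing then adds memoization on top of this style so that the backtracking from ordered choice stays linear-time.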

                                                                                                                      2. 2

                                                                                                                        I feel the same about ‘the dragon book’. It’s a terrible book.

                                                                                                                        The dragon book was the text for my undergraduate compilers course. It was heavy on theory and light on practice. I think it makes a poor undergrad textbook, but that doesn’t make it a poor book in general. It has its place; I’m not going to slam it. That place may not be in an undergrad course though.

                                                                                                                        As for SICP, I’ve read about 2/3 of it. This was extracurricular reading for me when I was an undergrad. I should really pick it up again and finish it. I worked my way through it, trying things in my Scheme interpreter as I went, and had quite a lot of “aha” moments. I’m not great at math and have no physics background whatsoever. In fact I started out majoring in philosophy.

                                                                                                                        1. 2

                                                                                                                          It’s a reference book for compiler algorithms, it’s not really an intro guide. I’ve very rarely cracked my copy open, but I usually know what I’m looking for before I’ve opened it.

                                                                                                                            Generally, I find reference books less useful since Google (or DDG :)) usually gets me the answer I need faster.

                                                                                                                          1. 2

                                                                                                                            What were the issues with the dragon book? I found it explained a lot of things students were having difficulty with back when I was a TA.

                                                                                                                            1. 2

                                                                                                                              My problem with the dragon book was that I tried to read it using a horrible translation into Brazilian Portuguese. Whoever translated it had no idea about computing jargon and used the wrong Portuguese terms throughout. Things like string became the Portuguese word for yarn, and so on; it was horrible. So it’s not that the dragon book has problems, but my experience with it sucked and it kinda spoiled me against it. That’s when I went looking for other books and found SICP and others.

                                                                                                                          1. 22

                                                                                                                            This post strikes me as sour grapes.

                                                                                                                            I quite enjoyed reading through SICP a few years ago and don’t have a very strong grasp of math or physics (though I am a big fan of Lisp). While it’s certainly not entry-level, it’s got some great code examples that aren’t the boring and contrived crap examples that fill most textbooks. I agree that it’s probably a bad idea to hand it to someone who’s never programmed before, but usually it’s recommended to relatively experienced programmers with some awareness of Lisp, who are interested in books that change the way you approach programming (it certainly changed mine). It’s a bit of a self-selecting audience, and I’ve never seen it mentioned outside of HN or lobste.rs-like communities.

                                                                                                                            It’s not the type of book that really helps you land a job, but if you want to get a deeper understanding how Lisp programmers think about problems, it’s one of the best (along with The Little Schemer). It’s not considered a classic just because of the smug lisp weenies ;)

                                                                                                                            If you can’t continue or finish a book, you get nothing from it.

                                                                                                                            I wholeheartedly disagree. Technical books especially are rarely written to be read from cover to cover. Find the useful parts of the book that interest you and skip the rest.

                                                                                                                            1. 6

                                                                                                                              This post strikes me as sour grapes.

                                                                                                                              That’s too bad, it wasn’t my intention to be negative. Though I can imagine my guesses could be seen as cynical.

                                                                                                                              I wholeheartedly disagree. Technical books especially are rarely written to be read from cover to cover. Find the useful parts of the book that interest you and skip the rest.

                                                                                                                              “continue” in the quote you mentioned is my weasel word.

                                                                                                                              It is my goal to read through SICP eventually. I keep trying every few years.

                                                                                                                              1. 4

                                                                                                                                Technical books especially are rarely written to be read from cover to cover.

                                                                                                                                Surely SICP is a counterexample, given it was written as the core textbook for MIT’s introductory computer science courses?

                                                                                                                                1. 1

                                                                                                                                  SICP hasn’t been used for intro CS at MIT since 2008.

                                                                                                                                  The course(s) now use Python, apparently.

                                                                                                                                  https://en.wikipedia.org/wiki/Structure_and_Interpretation_of_Computer_Programs#Coursework

                                                                                                                                  1. 2

                                                                                                                                    That doesn’t change the purpose the book was written for, which was my point - a companion textbook for a course is something meant to be worked through, typically.

                                                                                                                                2. 2

                                                                                                                                  Might it not depend on what you’re looking to get out of the experience?

                                                                                                                                  SICP offers a sincere challenge for many people, and there’s certainly a GOOD argument to be made that for those who choose to rise to that challenge the payback will be substantial.

                                                                                                                                  However I often see SICP recommended to people who just want to learn how to program, or gain a grasp of basic/high level CS concepts. It strikes me that SICP may not be the best way to guide people into the pit of success :)

                                                                                                                                1. 2

                                                                                                                                  Looks interesting! In case anyone else wants to try it out, pip install HTSQL didn’t work for me (the latest release on PyPI is from 2013 and still uses Python 2.7 syntax, which Python 3 doesn’t accept). However, the GitHub repo was updated more recently, so I was able to clone it and pip install . successfully.

                                                                                                                                  1. 11

                                                                                                                                    This is entirely subjective but I have looked at zig a few times and it does feel enormously complicated to the point of being unapproachable. This is coming from someone with a lot of experience using systems programming languages. Other people seem to really enjoy using it, though… to each their own.

                                                                                                                                    1. 8

                                                                                                                                      Huh, that’s interesting to hear. Out of curiosity, what were the features you found the most complicated?

                                                                                                                                      I’ve had the exact opposite experience actually. I’m comparing against Rust, since it’s the last systems language I tried learning. Someone described Rust as having “fractaling complexity” in its design, which is true in my experience. I’ve had a hard time learning enough of the language to get a whole lot done (even though I actually think Rust does the correct thing in every case I’ve seen to support its huge feature set).

                                                                                                                                      Zig, on the other hand, took me an afternoon to pick up; once I figured out option types and error sets I was able to start building stuff. (@cImport is the killer feature for me. I hate writing FFI bindings.) It’s a much smaller core language than Rust, and much safer than C, so I’ve quite enjoyed it. Although the docs/stdlib are still a bit rough, so I regularly read the source code to figure out how to use certain language features…

                                                                                                                                      1. 17
                                                                                                                                        • An absurd preponderance of keywords and syntaxes. threadlocal? orelse? The try operator, a ++ b, error set merging?
                                                                                                                                        • An overabundance of nonstandard operators that overload common symbols. When I see | and % I don’t think “saturating and wrapping”, even though it makes sense if you think about it a lot. As mentioned earlier, error set merging uses || which really throws me off.
                                                                                                                                        • Sentinel-terminated arrays using D, Python, and Go’s slicing syntax is just cruel.
                                                                                                                                        • Why are there so many built-in functions? Why can’t they just be normal function calls?
                                                                                                                                        • There seem to be a lot of features that are useful for exactly one oddly shaped use case. I’m thinking of anyopaque, all the alignment stuff… do “non-exhaustive enums” really need to exist over integer constants? Something about Zig just suggests it was not designed as a set of orthogonal features that can be composed in predictable ways. That isn’t true for a lot of the language, but there are enough weird edges that put me off entirely.
                                                                                                                                        • Read this section from the documentation on the switch keyword and pretend you have only ever used algol family languages before:
                                                                                                                                        Item.c => |*item| blk: {
                                                                                                                                            item.*.x += 1;
                                                                                                                                            break :blk 6;
                                                                                                                                        }
                                                                                                                                        

                                                                                                                                        It’s sigil soup. You cannot leverage old knowledge at all to read this. It is fundamentally newcomer-hostile.

                                                                                                                                        • What does errdefer add to the language, and why does e.g. golang not need it?
                                                                                                                                        • The async facility is actually quite unique to Zig and just adds to the list of things you have to learn.

                                                                                                                                        Any of these things in isolation are quite simple to pick up and learn, but altogether it’s unnecessarily complex.

                                                                                                                                        Someone said something about common lisp that I think is true about Rust as well: the language manages to be big but the complexity is opt-in. You can write programs perfectly well with a minimal set of concepts. The rest of the features can be discovered at your own pace. That points to good language design.

                                                                                                                                        1. 13

                                                                                                                                          I’m thinking of anyopaque, all the alignment stuff…

                                                                                                                                          Maybe your systems programming doesn’t need that stuff, but an awful lot of mine does.

                                                                                                                                          what does errdefer add to the language and why does e.g. golang not need it?

                                                                                                                                          A lot! Deferring only on error return lets you keep your deallocation calls together with allocation calls, while still transferring ownership to the caller on success. Go being garbage-collected kinda removes half of the need, and the other half is just handled awkwardly.

                                                                                                                                          I didn’t quite see the point for a while when I was starting out with Zig, but I pretty firmly feel errorsets and errdefer are (alongside comptime) some of Zig’s biggest wins for making a C competitor not suck in the same fundamental ways that C does. Happy to elaborate.
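                                                                                                                                          To make the contrast concrete, here’s a sketch of the Go-side pattern being described, with made-up resource names. In Zig, the manual a.release() in the error path would instead be a single errdefer a.release() line sitting right next to the first acquire:

```go
package main

import (
	"errors"
	"fmt"
)

type resource struct{ name string }

func acquire(name string, fail bool) (*resource, error) {
	if fail {
		return nil, errors.New(name + ": acquire failed")
	}
	return &resource{name: name}, nil
}

func (r *resource) release() { fmt.Println("released", r.name) }

// openBoth acquires two resources and hands ownership to the caller on
// success. Without errdefer, cleanup of the first resource has to be
// repeated by hand on every later error path; it grows with each extra
// resource and each extra early return.
func openBoth(failSecond bool) (a, b *resource, err error) {
	a, err = acquire("a", false)
	if err != nil {
		return nil, nil, err
	}
	b, err = acquire("b", failSecond)
	if err != nil {
		a.release() // manual cleanup-on-error; errdefer makes this automatic
		return nil, nil, err
	}
	return a, b, nil
}

func main() {
	if _, _, err := openBoth(true); err != nil {
		fmt.Println("got expected error:", err)
	}
}
```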

                                                                                                                                          Someone said something about common lisp that I think is true about Rust as well: the language manages to be big but the complexity is opt-in. You can write programs perfectly well with a minimal set of concepts.

                                                                                                                                          Maybe, if you Box absolutely everything, but I feel that stretches “perfectly well”. I don’t think this is generally true of Rust.

                                                                                                                                          I think Zig’s a pretty small language once you actually familiarise yourself with the concepts; maybe the assortment of new ones on what looks at first blush to be familiar is a bit arresting. orelse is super good (and it’s not like it comes from nowhere; Erlang had it first). threadlocal isn’t different to C++’s thread_local keyword.

                                                                                                                                          I get that it might seem more unapproachable than some, but complexity really isn’t what’s here; maybe just a fair bit of unfamiliarity and rough edges still in pre-1.0. It’s enormously simplified my systems programming experience, and continues to develop into what I once hoped Rust was going to be.

                                                                                                                                          1. 8

                                                                                                                                            Being unfamiliar and nonstandard, and having features for what you consider ‘oddly shaped’ use cases, may be exactly what makes it a worthwhile attempt at something ‘different’ that may actually solve some problems of other languages, instead of being just another slight variant that doesn’t address the core problems.

                                                                                                                                            I personally think it’s unlikely another derivative of existing languages is likely to improve matters much. Something different is exactly what is needed.

                                                                                                                                            1. 2

                                                                                                                                              The question is how much difference we need. Must everything be different, or just some aspects?

                                                                                                                                            2. 3

                                                                                                                                              what does errdefer add to the language and why does e.g. golang not need it?

                                                                                                                                              This is a joke, right? Please tell me this is a joke.

                                                                                                                                              Go does not need errdefer because it has (or used to have?) if err != nil and all of the problems that come with that.

                                                                                                                                        1. 6

                                                                                                                                          I need to learn OpenGL. Hard to say what else I’ll need, there’s little sense in making plans.

                                                                                                                                          1. 1

                                                                                                                                            OpenGL ES or something like OpenGL 4?

                                                                                                                                            1. 1

                                                                                                                                              The oldest reasonable thing that will work on Linux desktops and macOS. I need it for rather simple texture transformations, because those are incredibly slow on a CPU, but I like to have deep understanding.

                                                                                                                                              1. 2

                                                                                                                                                Potentially useful resource on writing shaders? https://thebookofshaders.com/

                                                                                                                                                As far as my experience has gone, most of the buffer management stuff for OpenGL is pretty boilerplate (it’s just slinging buffers to the GPU and sometimes fetching results back), so most of the interesting work happens in the shaders.

                                                                                                                                                1. 1

                                                                                                                                                  Also, if you don’t need realtime rendering (it sounds like you might be doing batch processing), it could make sense to just use GPGPU features like OpenGL compute shaders or even OpenCL.

                                                                                                                                                2. 2

                                                                                                                                                  People say that The Book of Shaders is good. I’ve found it more interesting to go to Shadertoy, tweak some simple shaders heavily, and google the built-in GL math functions to understand the ways I can modify the colors and point positions.