Threads for znpy

    1. 1

      LinuxCNC controls CNC machines. It can drive milling machines, lathes, 3D printers, laser cutters, plasma cutters, robot arms, hexapods, and more.

      Runs under Linux (optionally with realtime extensions).

      I see the dependency on Linux (even the name!) as unfortunate.

      1. 4

        What are your main concerns? What would you do differently?

        1. 1

          I’d care about supporting non-Linux OSs, particularly the OSS ones, and avoid including Linux in the project name.

          1. 5

            Patch or gtfo then :)

          2. 2

            Cross-OS for its own sake is fair enough I suppose, though I’d still be interested to know what about Linux you think is unfortunate as the OS of choice for this project, or why you think another would be better.

            My take on this is: it’s a piece of software designed for industrial control with hard real-time requirements. As one of my mentors liked to say, a mill or a lathe isn’t the kind of equipment that just hurts you, it rips your arm off and beats you to death with it. I’m glad that they’re limiting their scope to a single kernel. Last I read an article about achieving hard real-time on Linux, it wasn’t exactly lacking nuance or pitfalls. Add more supported kernels and you multiply the chances that you introduce a bug, miss timing, and destroy the work/machine/operator.

            I’d also like to point out that they don’t owe anyone cross-OS support. Including Linux in the name actually emphasizes their right to build what they want. The creators set out to create a CNC machine controller for Linux. If you want support for another OS, the source is GPLv2 :)

            1. 1

              what about Linux you think is unfortunate as the OS of choice for this project,

              I’ll say as a start I don’t think there’s anything terribly wrong with doing CNC with Linux.

              Yet my suspicion is that, despite the name, there’s technically not much tying this project to Linux specifically.

              Which makes the name truly unfortunate.

              1. 6

                A brief look at the project reveals that they ship Linux kernel modules for controlling hardware via supported interfaces, and for any use with real hardware a Linux kernel with the -rt patchset is needed on the controlling computer. This surely makes moving to another kernel quite a large effort, as the realtime drivers would need to be ported. And the recommended installation method is a Debian-derivative GNU/Linux distribution.

                So I would expect getting a good-enough port to be a large undertaking, with the benefits of using another OS inside that black box even harder to realise, because the number of users with hardware access matters for testing.

                1. 1

                  So it is necessarily an ugly design: it has to resort to kernel modules because Linux itself is ill-suited to abstracting the hardware and enabling the heavy lifting in user space. Noted.

                  Maybe the name is not wrong, in hindsight.

                  1. 5

                    Sure, they do have a userspace implementation, and apparently recommend the standard Preempt-RT patchset with a user-space control implementation for some cases. It’s just that for many cases an RTAI kernel and a kernel-space driver yield lower latency. Indeed, Linux is not a microkernel, so context switches have some cost.

                    Sure, making something realtime on top of a general-purpose non-realtime OS has its drawbacks, and the best latency will need some amount of special-case work. But it improves the chances of reusing some computer that is already around.

                    User-space real-time API-wise they claim to support Preempt-RT, RTAI and Xenomai kernels (all Linux-based).

                  2. 3

                    Such machines rely on hard realtime controls, which are very often done in microcontrollers. Latency is a huge constraint of the project. The project itself is very old, dating back to the mid 90s: there wasn’t a lot of CPU power back then (especially on the ARM targets), and there was less knowledge and tooling for achieving hard real-time.

                    1. 1

                      Are you aware of an open effort using microcontrollers?

                      1. 2

                        GRBL, I guess? Somewhat limited and recently inactive, but apparently usable for many use cases.

                        1. 3

                          recently inactive

                          There seems to be a maintained fork. There’s not much in terms of changes, but I suspect that’s because it reached a “just works” point.

                          No strange latency surprises to be had with these microcontrollers.

                          1. 2

                            Yes, I meant this repository as the main development line, and it doesn’t seem to have firmly ruled out ever getting 5-axis support, so there is a clearly stated missing feature it could be actively working toward but isn’t. That doesn’t matter for the use cases where it already fits, of course.

                      2. 1

                        The project wiki (section 4) describes why they chose not to use an external microcontroller as a motion controller.

                        It also says there was a hard fork which adds support for a particular external microcontroller.

                        1. 1

                          I see… ultimately they do like their scope and meters, on their computer.

              2. 1

                Yet my suspicion is that, despite the name, there’s technically not much tying this project to Linux specifically.

                I’m unfamiliar with the real-time / CNC space. What other non-GPL/OSS kernel systems have support for the kind of real-time performance required by these machines?

                1. 1

                  What other non-GPL/OSS kernel systems have support for the kind of real-time performance required by these machines?

                  As a reminder, Linux isn’t exactly awesome at real time, and is particularly awful without the -rt patchset (and the project seems to work without it), so the requirements can’t be that strict, and there should be plenty of systems that meet them.

                  1. 3

                    In the system requirements documentation:

                    It can, however run on a standard kernel in simulation mode for purposes such as checking G-code, testing config files and learning the system.

                    So, non-RT Linux allows doing a subset of the work, the part that doesn’t involve driving hardware.

      2. 3

        Are you coming at it from a principled perspective or a practical one?

        From a practical point of view, several of the newer commercial control systems, like Heidenhain and Siemens and probably more I can’t remember, have a Linux base for the user interface.

        Both work fine in daily use at the day job.

        And is Windows really a better option? I know Datron’s Next control is lovely and easy to use on the commercial side. Others include Centroid, UCCNC, Kflop, Planet CNC and Mach3 & 4.

        All require specific electronics anyway, which usually do the heavy lifting in serious (non-hobby) use.

        From a principled point of view, I’d like to hear more.

    2. 19

      Mastodon used about 2.5 GB out of the 4 I have on my Pi. With Pleroma, the total used RAM is only about 700 MB. That’s crazy!

      I agree it’s crazy. Crazy less bloated, and crazy still bloated.

      700MB. Christ.

      1. 27

        To be clear, the 700 MB is the total RAM usage, i.e. by all programs and not Pleroma alone.

      2. 21

        That 700MB includes a Postgres database and a webserver.

        1. 9

          I wonder if we can still run Pleroma on a 256 MB RAM system. Most of the RAM is used by Postgres, and that can be configured to use a lot less.

          1. 11

            I bet you can, but it is also very tricky to cap PostgreSQL’s RAM usage. First off, the defaults are very conservative; in most cases you would be cranking all the values up, not down. But you already know that: if I recall correctly, I saw some great articles on PostgreSQL’s inner workings among your blog posts on Pleroma development.

            That said, there are several configs with direct and indirect influence on how much memory PostgreSQL will use. shared_buffers is the actual working set of the data the DB hacks on, and will be the largest immediate RAM allocation. Then we have the tricky parts, like work_mem, which is a per-connection allocation but not a per-connection limit: if your work_mem is 8 MB and you execute a query whose resulting plan has 4 nodes, that one connection can allocate up to 4×8 MB. If you add parallel query execution, multiply that by the number of concurrently running workers. I assume Pleroma uses a connection pool, so that alone can bump RAM usage a lot. Add things like maintenance_work_mem for tasks like vacuums and index rebuilds, and you can quickly see how the actual memory usage can fluctuate on a whim.

            To the point.

            I agree it’s crazy. Crazy less bloated, and crazy still bloated.

            700MB. Christ.

            I simply think @ethoh is wrong. 700 MB usage is crazy low for an RDBMS, and we are talking about an RDBMS + a whole app using it. Databases are designed to utilize memory and avoid hitting the disk when not necessary. Unused memory is wasted memory.

            1. 3

              700 MB usage is crazy low for a RDBMS

              I don’t really get how you can make this claim with no reference at all to the data storage needs of the application. A fair metric would be the overhead of the DB relative to the application data. In this case we’d need to know some things about how Mastodon and Pleroma work, and how OP managed his instances of them.

              1. 4

                I don’t really get how you can make this claim with no reference at all to the data storage needs of the application.

                In similar fashion, the OP claimed that 700 MB is crazy bloated; I was making a reference to that. However, to back up my claims with some quick napkin calculations:

                Default shared_buffers for PostgreSQL 12 is 128 MB. Per the PostgreSQL documentation, the recommended starting point is roughly 25% of available system RAM, then measure.

                If you have a dedicated database server with 1GB or more of RAM, a reasonable starting value for shared_buffers is 25% of the memory in your system.


                The system in question has 4 GB of RAM, so by that logic 1 GB for shared_buffers alone would be a reasonable setting - hence 700 MB total could be considered crazy low.

                Default work_mem is 4 MB, max_worker_processes is set to 8, and max_connections by default is 100. This means that query execution alone can eat up to 4 MB × 8 × 100 = 3.2 GB by default in the absolutely unlikely worst-case scenario.

                maintenance_work_mem is by default an additional 64 MB.

                So we are looking at PostgreSQL itself using anywhere between 128 MB and 3 GB of RAM with its default settings, which are ultra conservative and usually the first thing everyone increases. This is before considering the actual data and application workload.

                By this logic, personally for me 700 MB for PostgreSQL on a running Pleroma instance, including the memory used by Pleroma itself, is crazy low.
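
                To make the napkin math concrete, here is the worst case as a quick sketch (the numbers are just the PostgreSQL 12 defaults quoted above; this is illustrative, not a capacity-planning formula):

```python
# Rough worst-case PostgreSQL memory estimate from the default
# settings discussed above; all values are in MB.
shared_buffers = 128        # default shared_buffers in PostgreSQL 12
work_mem = 4                # per plan-node allocation, not a hard cap
max_connections = 100       # default
max_worker_processes = 8    # default
maintenance_work_mem = 64   # vacuums, index rebuilds

# Absolutely unlikely worst case: every connection runs a parallel
# query that burns a full work_mem in every worker.
query_worst_case = work_mem * max_connections * max_worker_processes
total_worst_case = shared_buffers + query_worst_case + maintenance_work_mem

print(query_worst_case)  # 3200, i.e. the 3.2 GB figure above
print(total_worst_case)  # 3392
```

                Real usage lands far below that, of course, but it shows why a hard RAM cap is tricky: most of these settings multiply rather than add.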

                1. 5

                  But this little Pi is not a dedicated database server; it at least hosts the app too? And defaults are just defaults. Maybe indicative of PG usage in general, across every application that uses it, but that’s a really broad brush to be painting such a tiny picture with! I still think there are a few different species of fruit being compared here. But I do appreciate your explanation, and I think I understand your reasoning now.

              2. 1

                Fwiw, my Pleroma database is approaching 60GB in size.

                1. 1

                  Due to shitposting or bots? You can clean it up a little bit by expiring remote messages older than 3 months

                  1. 2

                    I have a dedicated 500GB NVMe for the database. Storage isn’t a problem and it’s nice for search purposes.

          2. 2

            I’m still not convinced that PostgreSQL is the best storage for ActivityPub objects. I remember seeing in Pleroma that most of the data is stored in a jsonb field, and that makes me think that maybe a key-value store based on object IDs would be simpler and maybe(???) faster.

            I’m currently implementing a storage “engine” based on this idea, and I’m saving the plain JSON as plain files in a directory structure. It is, of course, missing ACID[1] and other niceties, but I feel like its simplicity is worth it for an application that just wants to serve content for a small ActivityPub service without any overhead.

            [1] IMHO ACID is not a mandatory requirement for storing ActivityPub objects, as the large part of them (activities) are immutable by design.
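
            For what it’s worth, the core of such a store fits in a few lines. A minimal sketch in Python (the class name and the two-level directory fan-out are my own invention, not the actual engine):

```python
import hashlib
import json
from pathlib import Path

class FileObjectStore:
    """Toy store: one JSON file per (immutable) ActivityPub object."""

    def __init__(self, root):
        self.root = Path(root)

    def _path(self, object_id: str) -> Path:
        # Hash the object ID so arbitrary IRIs become safe file names,
        # with a two-level fan-out to keep directories small.
        digest = hashlib.sha256(object_id.encode()).hexdigest()
        return self.root / digest[:2] / digest[2:4] / (digest + ".json")

    def put(self, object_id: str, obj: dict) -> None:
        path = self._path(object_id)
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(json.dumps(obj))

    def get(self, object_id: str):
        path = self._path(object_id)
        return json.loads(path.read_text()) if path.exists() else None
```

            Since activities are immutable, put never has to deal with concurrent rewrites of the same object, which is exactly what makes skipping ACID tolerable here.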

            1. 5

              Misskey used to use a NoSQL / document store. They switched to PostgreSQL because of performance issues. I’m sure you could build an AP server with a simpler store, but we do make heavy use of relational features as well, so the relatively ‘heavy’ database part is worth it for us.

              1. 2

                Yes. One problem with an off-the-shelf key-value store in this setup is that scanning over the whole keyspace to filter objects is way less efficient than a well-indexed db. Even though I’m not there yet, I’m thinking of adding some rudimentary indexes based on bloom filters over properties that might require filtering.
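
                A rudimentary bloom-filter index along those lines could look like this sketch (the bitmap size, hash count and single-property scope are arbitrary choices of mine):

```python
import hashlib

class BloomIndex:
    """Tiny Bloom filter over one property value per object.

    Answers can have false positives but never false negatives,
    so a negative answer means the object can safely be skipped.
    """

    def __init__(self, bits=1024, hashes=4):
        self.bits = bits
        self.hashes = hashes
        self.bitmap = 0

    def _positions(self, value):
        # Derive several bit positions from salted SHA-256 digests.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{value}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.bits

    def add(self, value):
        for pos in self._positions(value):
            self.bitmap |= 1 << pos

    def might_contain(self, value):
        return all(self.bitmap & (1 << pos) for pos in self._positions(value))
```

                A “yes” still has to be confirmed against the stored JSON, but every “no” saves reading and parsing a file, which is where the scan cost goes.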

                1. 4

                  PostgreSQL provides indexing for JSON objects, so it makes a lot of sense to use it even for this kind of use case. Even SQLite has some JSON support these days.
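
                  As an illustration of the SQLite side, here is an expression index over a JSON field via Python’s bundled sqlite3 module (this assumes an SQLite build with the JSON1 functions, which modern builds ship; the table and field names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE objects (data TEXT)")  # raw JSON documents

# Expression index on a field inside the JSON, so filtering on it
# can use the index instead of re-parsing every row.
conn.execute(
    "CREATE INDEX objects_type_idx "
    "ON objects (json_extract(data, '$.type'))"
)

conn.executemany(
    "INSERT INTO objects VALUES (?)",
    [('{"type": "Note", "content": "hi"}',), ('{"type": "Like"}',)],
)

rows = conn.execute(
    "SELECT json_extract(data, '$.content') FROM objects "
    "WHERE json_extract(data, '$.type') = 'Note'"
).fetchall()
print(rows)  # [('hi',)]
```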

            2. 2

              I am not convinced by storing tons of small files individually; they are usually less than 1 KB. Putting a sub-1 KB file in a 4K block wastes over 75% of it, and you will also run out of inodes pretty quickly if your fs is not tuned for tons of small files.

              1. 3

                inodes are a legacy filesystem problem. Use ZFS :)

              2. 1

                The idea behind federation would be that most instances would have a small number of users with small local storage needs.

          3. 1

            Not really for recent releases; you need at least 512 MB for a stable instance. Pleroma itself uses <200 MB RAM, and PostgreSQL can use another 200 MB, depending on your configuration.

      3. 10

        Total RSS for my Pleroma instance on Arch x86_64 (which is extremely lightly used) is ~115MB. There’s a bunch of other RSS being used by the Postgres connections but that’ll depend on your precise configuration.

        For comparison, my honk instance on Arch armv7l is using 17MB (but it is admittedly bare-bones compared to Pleroma).

        1. 2

          How is honk working for you? Do you want to share a link to your instance? I’ve been considering installing it myself. It seems cool, but the only honk deployment I’ve seen in the wild is the developer’s. If we’re talking about saving resources, honk seems to be better for that than Pleroma :)

          1. 3

            I run it for my single user instance. Haven’t tried upgrading since I installed it.

            It generally works as expected, if a little rough - I edited a bunch of the default templates and found the terminology a little obtuse, and threads where some replies are private don’t show any indication, which can be a bit confusing.

            I may set up Pleroma at some point, as I would like the extra features, but I might never get around to it because honk is so trouble-free and works alright.

          2. 2

            Pretty well - I just run the binary in a screen session on one of my servers for now, mainly using it as a publish-on-your-own-space feeder for tweets via IFTTT.

            1. 3

              Have you looked into crossposting using one of the open source crossposters?

              I’m assuming that they won’t work because honk has fewer features than Mastodon, but I don’t actually know.

              1. 2

                I did try moa for a while but the link [from moa to twitter] kept disappearing for some reason - I did intend to self-host it but never got around to it. IFTTT is easier for now and if I want to shift off IFTTT, I’ve already got “RSS to Twitter” code for other things I can easily repurpose.

                [edited to clarify “link”]

      4. 4

        Fwiw it’s a bit over 300 MB on my (single-user) instance.

        1. 3

          I still think that 300 MB is a lot, especially when cheaper VPSes can have only 500 MB of RAM.

          1. 3

            In fairness, 512 MB is a ridiculously low amount of memory.

            Nowadays it’s possible to configure a physical system with 128c/256t and literally terabytes of RAM, and we’re still paying 2013 prices for 1c/512 MB VPS instances.

            Think about this.

            1. 1

              I’ve been mostly using (and recommending) the smallest hetzner vps instances, which have 2gb of ram and cost just under 3 euro per month. although looking at lowendbox, i see that you can get a 1gb vps for $1 per month.

    3. 45

      Here’s my take on it:

      test "example" {
          foo();
      }

      fn foo() i32 {
          return 1234;
      }

      ./test.zig:2:8: error: expression value is ignored
      1. 5

        Is this what she meant? I assumed they weren’t validating the output of the function before doing something with it. Not that they weren’t using it.

        Specifically, loading this new config put a NULL/nullptr/nil/None type thing into the value, and then something else tried to use it. When that happened, the program said “whoops, can’t do that”, and died.

          They obviously are using a dynamically typed language or Java (as much as Rachel belabors the point that she’s not singling out the language for blame) that allowed the return value to have a null field when it really shouldn’t have.

        1. 3

          They obviously are using a dynamically typed language or Java

          Foo * f = g();

          happens in plenty of statically typed languages that aren’t Java and could be considered “not validating the return value” in this sense.

          Even in say Haskell you can do

            f (fromJust (g h))

          if you choose.

          1. 1

            You are, of course, technically correct. But my statement was context-driven by the assumption that this is backend web code. That’s most likely not C, C++, or Haskell. It’s most likely PHP, JavaScript, Java, Python, or Go.

            I guess this could happen in Go, though. So I’ll change my statement to “They’re almost definitely using a dynamically typed language or Java or Go.” ;)

            And I struggle to believe that this would happen with a language like Haskell anyway. I know it can, but I’d put money down that they weren’t using Haskell or Rust or Ocaml or Kotlin or Swift.

            1. 7

              You’re missing the whole point. Language is not the problem.

              The problem is the culture.

              Their culture was “somebody else gave me incorrect input, so it’s their fault if the business went down”.

              Which is complete nonsense. And you’re bikeshedding on the language when it was explicitly said to ignore the language.

              The bigger discussion, possibly even a philosophical one, is what we do when we get incorrect input. Do we refuse to even start? Can we reject such input? Shall we ignore it? And if we ignore it, will the software behave correctly anyway for the rest of the correct input (configuration or whatever)?

              1. 1

                Yeah, my comment about language choice was somewhat tangential. Originally, I wasn’t even sure if we were all reading the same thing from the article. The user I replied to made a point about not using a return result. My reading of the article led me to believe that they just chose not to check the returned value. Based on her description of the issue being around null, I then made an off-the-cuff remark that I believe the only way that’s likely is because they are using a language that makes that kind of null-check “noisy”.

                Not at all disagreeing with the bigger picture. Yes, it’s a culture issue. Yes, we need to discuss validating input and whether input validation is sufficient.

                Language is a very small point, but I’m not sure it’s totally irrelevant. It’s a small problem that such checks are noisy enough to make developers not want to write them, no? I wish that she went into a little more detail about exactly what the devs aren’t checking.

        2. 3

          Don’t validate, parse. Validation errors are parsing errors and can be encoded as an Err result (in languages like Rust).
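
          To illustrate the distinction in Python (all names invented): instead of passing a raw dict around and null-checking it at every use site, parse it once at the boundary into a type that cannot hold the bad state:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    """Once constructed, endpoint is guaranteed to be a non-empty string."""
    endpoint: str
    retries: int

def parse_config(raw: dict) -> Config:
    # Parse, don't validate: every "could this be null?" question is
    # answered exactly once, here, and failure is loud and early
    # (at load time, not at serving time).
    endpoint = raw.get("endpoint")
    if not isinstance(endpoint, str) or not endpoint:
        raise ValueError("config: 'endpoint' must be a non-empty string")
    retries = raw.get("retries", 3)
    if not isinstance(retries, int) or retries < 0:
        raise ValueError("config: 'retries' must be a non-negative integer")
    return Config(endpoint=endpoint, retries=retries)
```

          Everything downstream takes a Config, so a null from a bad config load simply has no way to reach serving time.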

    4. 4

      Zoom is big enough that it’s wrong to say it “doesn’t understand” the GDPR.

      It’s purposely reading cookie files and planting a cookie, breaking the GDPR.

    5. 21

      Kudos to the original author, not only for the high-quality original content but for the creative use of phpBB as their blogging platform. Not only does this give simple content markup, inline attachments and search for free, it also provides an all-in-one user commenting and subscription mailing capability by changing some basic forum settings. Great idea!

      Another pleasant side effect is that it didn’t bring my current gen CPU to its knees rendering drop shadows, rounded corners and scrolling effects! :D

      1. 2

        Agner’s been using forum software for his blog for a long time, although oddly enough the move to phpBB is recent (2019). From 2009–2019 he used a more minimalist forum that I don’t recognize. Possibly his own software?

        1. 2

          Interesting: the generator suggests AForum 1.4.2, but I didn’t find anything further.

          I first hosted and managed phpBB back in 2001, which is I guess why I find its use here so refreshing between the posts about frameworks and ways to bring back ‘the old web’.

          With all the content management systems and hosted solutions out there, phpBB is a great example of an open-source community and product that has provided a reliable platform and consistent migration path for user-owned, self-hosted content for close to 20 years.

          Sure it’s using PHP, which appears increasingly out of vogue with the modern wave of web development. But the stack is so ubiquitous it can be hosted reliably for pocket change.

      2. 1

        this is such a cool idea, i wonder why i haven’t seen it before. gotta investigate if it will work for me.

        also it didn’t have a full screen banner image to scroll past, just a little BBS menu.

        1. 5

          i wonder why i haven’t seen it before.

            I take it you didn’t surf the web much 10-15 years ago. Web forum software was king, and it was used in all sorts of creative ways. Forum software authors themselves encouraged such creative use, always pointing out that you could use it as a blog, a news site, an issue tracker, an archive front end, etc.

            Back in the day, I had this private phpBB forum in which I would archive thousands of posts per day, scraped from a curated list of blogs via RSS. phpBB shipped with its own custom-made full-text index. Searching a large corpus was a breeze.

          Good memories.

          1. 2

            The PHP forum wars of the 00s are an era that shouldn’t be forgotten! And phpBB survived through it all.

            Beyond the inbuilt full-text searching, the built-in template and caching engine is also worth a mention: a great piece of work that dramatically dropped resource usage in tightly packed shared hosting environments.

      3. 1

        Not a fan of PHP, but no complaints about the simplicity of the page phpBB generates.

      4. 1

        It made so much sense to me that I didn’t realize at all I was looking at an instance of phpBB.

        Awesome idea.

    6. 6

      If anything, the Linux kernel should consider sourcehut. It’s probably the only platform that works well with open standards and decentralised systems.

      1. 11

        I think Sourcehut is an incredible product and speaks very well of Drew DeVault’s technical and business acumen, but I’m not convinced that where Linux hosts its source code is its biggest problem to worry about.

        1. 13

          I don’t think it’s the most pressing matter right now, but it is important to keep in mind that it once was, and it was solved. Due to the whole BitKeeper mess, Torvalds started working on git as a means to perform distributed version control, and mail was the cornerstone. Now, GitHub is lobbying to drift away from email and use its non-standard and closed issue management system. It does look familiar.

      2. 17

        Alpine adopted sourcehut and it didn’t end well. Sourcehut’s extreme opinions on mail left package maintainers unable to send mail, because DeVault didn’t like settings that you sometimes couldn’t even change. After some developers left the project, they switched to GitLab instead.

        1. 2

          Alpine seems to be a bit all over the place at the moment… Their repos look to be self-hosted cgit, their mailing lists are Sourcehut, their bug tracker is GitLab and their wiki is self-hosted Mediawiki.

          Edit: ah, their cgit-hosted repos are a mirror of what’s in GitLab, so I guess that’s an interim measure.

        2. 3

          This is an extremely uninformed and shitty take on what happened with Alpine Linux which doesn’t map at all onto reality. Keep your slander to yourself.

          1. 8

            As someone with no stake in this, I went and had a look at the available evidence, and… maybe sourcehut needs to correct the record somewhere?

            After looking over the alpine mailing lists, I got the distinct impression that some widely-used mail clients can’t send mail to sourcehut and some alpine contributors have refused to use it on that basis.

            1. 20

              It never caused any developers to “leave the project”. The arguments were not being made in good faith, either. There was a subversive element with a personal vendetta against me who was pulling strings under the table to crank up the drama to 11. As a matter of fact, it was settled after an IRC discussion and the Alpine mailing lists (still running sourcehut) work fine with those mail clients now. All I pushed for was due consideration of the rationale behind the design decision, and during the discussion, a compromise was reached and implemented.

              And they didn’t “switch” to GitLab from sourcehut, either. They were never using more than the mailing list features of sourcehut, and there was never a plan to start using them. The sourcehut mailing lists are still in use today, and the GitLab migration was planned in advance of that and completed for unrelated reasons.

              There are persistent, awful comments from a handful of people who spread lies and rumors about what happened as if I attempted a hostile takeover of Alpine Linux. It’s complete horseshit, and incredibly offensive and hurtful, especially in light of how much time and effort I’ve devoted to improving Alpine Linux. I am a maintainer of many packages myself, and a contributor to many of their repositories!

              1. 8

                Thanks, I appreciate the reply - I can only imagine this has all been quite draining, especially while trying to get a new product off the ground.

    7. 2

      Smartphones are replacing the PC, and if Android doesn’t become self-hosting we may be stuck with locked down iPhone derivatives in the next generation.

      That’s… quite a statement. The main difference between mobile and PC/servers is that mobile is power-constrained.

      I don’t necessarily buy the argument, but I love that someone finally did this.

      At one point in my life, I set up a mixed PoC environment where I could code on my phone, but the code was hosted on a build server I was remoted into, and I could fetch and install the APK after every recompile. Imo, this is still the best of both worlds, since I don’t think even high-end mobile can outperform machines with dedicated power supplies in terms of build time.

      This is still a really neat project tho

      1. 1

        I think it’s hyperbole.

        Assume a phone/phablet that can be used in landscape mode while plugged into a power brick. Add a Bluetooth keyboard and mouse. Have network access and work “in the cloud”. People have actually been doing this. Yes, it’s rare, and people who want 8 fast cores to compile locally aren’t the target audience, but it’s possible already.

        1. 2

          Yeah, people do a lot of stuff, but that doesn’t mean it’s an optimal way to do it.

          Add a Bluetooth keyboard and mouse, boom now you have three things to charge.

          Yeah, you can work in the cloud, but have fun doing actual work on a 5-to-6-inch phone. Unless your job is making TikToks or stuff like that.

          If you want to carry an external display… you might as well carry a laptop.

          1. 1

            True, I’m not advocating this - but I do think we’re going in a direction where devices get smaller and people are able to do more. So “in the majority sense” this transition works.

            I just hate how people declare “the PC is dead” just because its use has plummeted to a lower market share, and then people rightfully disagree. That doesn’t change the fact that most people use smartphones and laptops, and PCs are the minority now. And soon maybe laptops will join that group.

    8. -3

      Tiling window managers are not for the faint of heart.

      Oh shut up… I stopped here. I don’t feel anything good will follow.

    9. 2

      It’s nice to see Jupyter catching up with Mathematica.

      Granted, Mathematica by Wolfram is light years ahead in pretty much everything, but it’s also very, very, very expensive.

    10. 11

      Web browsers that will render the modern web cost millions of dollars to produce.

      Who else has the incentive to do that?

      Is he suggesting that someone (not him, presumably) fork Chrome and remove the extensions and features he doesn’t like, and backport security fixes weekly?

      Google’s incentives are clear, and no one is forced to run browsers with these featuresets he complains about.

      What, exactly, is he proposing, and to whom?

      1. 20

        What, exactly, is he proposing, and to whom?

        “I call for an immediate and indefinite suspension of the addition of new developer-facing APIs to web browsers.”

        The article is very short and doesn’t need a lot of interpretation. He simply wants the companies that create browsers to stop adding these new features and in some cases start removing them. This may happen with Firefox: having cut 25% of their workforce, it might take them a little bit longer to add new features.

        1. 9

          Firefox wants feature parity with Chrome.

          Google wants a large technical moat around browser competitors, as well as full, native-app-like functionality on ChromeOS devices (hence WebMIDI, WebUSB, etc.).

          Why would they stop? Because Drew said so?

          More importantly, why should they?

          1. 6

            Google wants a large technical moat around browser competitors

            More importantly, why should they?

            Should they be allowed to rig the market such that it’s impossible to compete in? It sounds like you agree they’re doing that, and I don’t see how that’s a good thing by anyone’s standards.

            1. 12

              It seems to me that Google is playing the embrace-extend-extinguish game, but in a different way: they’re extending the web’s scope so broadly, and with features so hard to implement, that even companies comparable in size to Google can’t compete (think of Microsoft dropping Trident and forking Chromium, and of Opera basically becoming a Chromium skin).

            2. 1

              Nobody’s rigging anything by releasing free software (Chromium).

              1. 10

                I’m not sure that’s true. Google has arguably “won” the browser wars by open-sourcing Chromium. Almost everyone chose to contribute to Google’s dominance rather than compete with it. You can’t realistically fork Chromium anyway, with the limited resources you’ve left yourself, so all you can do is contribute back to Google while sheepishly adopting everything they force upon you.

          2. 2

            They shouldn’t stop because Drew said so. It looks like they’ll stop whenever this becomes a financial burden.

            1. 2

              We’ll end up in a situation like before with IE 6: all competitors well and truly crushed by a more featureful, “better” browser, whose maker then throws in the towel on maintenance. Yay…

      2. 12

        Web browsers that will render the modern web cost millions of dollars to produce.

        Yes, and the proposal is to stop adding features to keep the cost from rising further.

        You know, there might be viable new competition if writing a browser didn’t also involve writing a half-assed operating system, a USB stack, an OpenGL driver wrapper, …

        1. 2

          I’m not sure that there is a legal or moral argument that they shouldn’t be permitted to. There certainly isn’t a logical one that, from their perspective, they shouldn’t.

          1. 5

            How is it moral that a private American corporation has de facto unlimited power over a technology developed by thousands of people around the world over the years, and is free to steer the future of such a critical technology? If Google behaves like Google, this will lead to the exploitation of billions of users around the world. It’s the equivalent of buying out all the sources of water and then charging whatever price you decide. Web technologies are now necessary to create and operate the tools we use to reproduce socially, to work, to study, and to keep families and communities together: letting a bunch of privileged techbros in California freely exploit these needs is in no way moral.

            1. 3

              It’s nothing like buying out the water supply. In this case there are still alternate browsers with differing feature sets.

              1. 5

                Not if Google keeps breaking the standards and no dev supports the alternative browsers. Yes, you can have nice browsers that work for niche sites, but that might become a separate web entirely that will be simply ignored by the vast majority of the users.

            2. 1

              Steering is an abstraction: at any point you can keep using that day’s version of Chromium for all time, if you so wish.

              Google gets free expression just like anyone else does, and can add any features they like to their own free software project.

              1. 1

                A browser is only usable if websites are compatible with it. Chromium may be a viable, secure option now, but that might change in the future, rendering it non-viable. Why do you think Google will keep supporting Chromium after winning this war? It might, but it might not.

      3. 1

        Yeah, I don’t really understand.

        His proposal seems to be “give the last vestiges of control over the web to Google”? It might make more sense if the title were “Google needs to stop”.

        1. 2

          At the moment, Google is deciding where web browsers go, approximately unilaterally. The two aren’t precisely equivalent, but they’re far too close for comfort.

    11. 4

      On the other hand, a lot of projects do not offer a HACKING file containing a high-level overview of the codebase and a brief tutorial on how to set up a development environment and start a development version (this is particularly true if a stable version is already installed on your machine).

      For an example of this done correctly, look at Ansible: they have a fairly good guide on how to start hacking once you have cloned their source code repository.
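      As a sketch, the kind of HACKING file I mean might look something like this (the layout and steps are invented for illustration, not taken from any particular project):

```
HACKING
=======
Code layout:
  src/      core library
  cli/      command-line entry points
  tests/    unit and integration tests

Development setup:
  1. Clone the repository and install the build dependencies.
  2. Build and install into a local prefix, so the development
     version does not clash with a stable version already
     installed on the machine.
  3. Run the test suite, then start the development build from
     the local prefix.
```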

    12. 16

      Backward compatibility: when code already written is more important than code yet-to-be written; thereby picking ongoing pain for new developers over a fixed amount of pain for existing ones.

      1. 22

        Backwards compatibility: when developers can trust that code they write tomorrow won’t be broken by a language spec change two years from now.

        1. 18

          …instead it’ll be broken by a subtle compiler change that was within the spec, but the old compiler didn’t warn about this possibility.

          1. 4

            We had exactly this problem moving a half-million-LOC codebase at work from GCC 4.8 to 7. Upgrading a compiler for that project is literally nightmare fuel.

          2. 1

            That’s what testing is for.

            1. 1

              Testing isn’t all-encompassing and never can be.

          3. 1

            That’s what tools like Frama-C are for, as well as things like the Clang static analyzer or ubsan.

            1. 10

              I’d rather have a language where I don’t need such tools.

              1. 2

                Sure – for most programs a high level garbage collected language with a good type system is the right choice.

                However, the tooling for C tends to allow for more safety at compile time than most languages, if you decide to use it to its full extent. Most people don’t, and just wing it – but if you want to put in the effort and do the annotation work, you can statically guarantee, for example, that you’ll never run out of memory at runtime, overflow the stack, or have accidentally non-terminating loops.

                SPARK, as far as I’m aware, is the major competition if you want high assurance code. It probably does the job better, though I haven’t really had a chance to experiment with it.

                1. 1

                  I was thinking about Rust.

        2. 11

          Backwards compatibility: forcing your kids and grandchildren to make the same mistakes you did.

    13. 5

      Despite having around three decades of computer usage under my belt, spreadsheets are one of those things I never quite got to grips with. I’ve been meaning to take a closer look at LibreOffice for some financial stuff I’m dabbling in, and was pleasantly surprised to see they have “books” for each LO product, and these seem extremely clear and well-written. I’m going to dig into the one for Calc for sure.

      1. 3

        I think for a lot of programmers spreadsheets are kinda superfluous. Certainly in my case, I typically just write a small program where another person would use a spreadsheet.

      2. 3

        I know it’s counterintuitive, but I would advise just learning Excel from Microsoft. Long story short, there’s much more documentation and there are many more tutorials and courses. Just invest 20-40 euros in a good Udemy course and you should be good to go. Once you’re familiar and confident, it’s easy to switch back to Calc; most things map one to one.

        I still can’t understand why free software people haven’t monetized training. Red Hat makes good money off training (and certification).

        Good examples would be not only LibreOffice but also stuff like Kdenlive. Make some courses, sell them on Udemy (or similar), and keep them updated.

        1. 3

          I still can’t understand free software people haven’t monetized training.

          Um, they did.

          1. 1

            This mostly seems like corporate training to me; I think the previous poster was talking more about simpler training for interested hobbyists and “power users”, like a 4-hour course you pay $20 for or something.

            1. 1

              Any time you have to reconcile figures from two systems, Excel comes in handy. Plenty of tools within easy reach to isolate differences, etc.

    14. 3

      My roommate is away on vacation and I have the house to myself. I’m relaxing and cleaning a bit, because my roommate is not that clean a person and I would like to enjoy a clean house once in a while (lockdown didn’t help, of course, and general work-from-home isn’t helping either). If everything goes okay I will be rid of this problem in a few months.

      On the tech side, I want to finish reading the Google SRE book. It’s half enlightening and half worthless, in the sense that it shows some best practices, but a considerable number of them are impracticable because we don’t have Google’s infrastructure or staff numbers. Still an interesting read; hopefully I can improve something at work.

    15. 2

      Yes, we do. We have a Galera MySQL cluster that spans multiple nodes, and of course it makes sense to use such a cluster to host multiple databases. We then grant individual users access to a specific database, as long as they log in from a certain subnet.

      We believe in a you-build-it-you-run-it philosophy so we give developers a certain degree of access to production databases (although not full access of course).

      1. 1

        MySQL gained the ability to do row-level security? That’s news! How painful is it compared to PG’s or Oracle’s, do you know?

        1. 1

          I don’t know, but I wasn’t talking about row-level security. We usually grant permission for certain DML commands on whole databases to certain members of a development team (usually the team leader and the vice team leader).

    16. 2

      I’ve used “built-in” security like you describe a couple of times, for web apps that used an LDAP DIT as the primary “database”. While it can be a steep learning curve for devs not used to working with LDAP in general (particularly if the ACLs get complex), I really like this setup.

      I’ve found that for all the hoopla of the last few years about “NoSQL” databases, OpenLDAP has most if not all of those bases covered, especially for the non-extreme use cases most people have, but with solid, well-understood built-in support for schemas, indexes, replication, auditing, etc.
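      To give a flavor of those built-in ACLs: a slapd rule like the following (the DIT names here are invented for illustration, not from any real deployment) lets each user manage their own entry while other authenticated users get read-only access, with the directory itself enforcing the policy rather than application code:

```
# Hypothetical slapd.conf fragment: per-entry access rules
# evaluated by OpenLDAP for every operation.
access to dn.subtree="ou=people,dc=example,dc=com"
    by self write
    by users read
    by * none
```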

      1. 1

        Upvoted for LDAP. It’s quite a beast to tame, but it does the job immensely well.

    17. 4

      Definitely recommend SBCL unless you can afford a LispWorks license. CLISP is super slow. LispWorks is great but the free version is limited. SBCL is very fast and full-featured.

      Also good work on this guide, looks really good!

      1. 4

        I use SBCL and would recommend using it once you have set up Emacs + Sly (or SLIME). But when trying out Lisp, a person will likely start by running things in a bare REPL from the terminal. In that scenario CLISP is a much better choice: it has the best out-of-the-box REPL by far, with history, auto-completion, colors, etc.

        CLISP is super slow

        We should learn from the Python and Ruby communities how to sell it. It ain’t slow, it’s “fast enough” 😉.

        1. 1

          That may be true, good point.

      2. 3

        For the record, this guide is not authored by me. Kudos to Prof. Sean Luke!

      3. 1

        Interesting. Could you tell us what the advantages of LispWorks over SBCL are?

        1. 1

          LispWorks comes with a great development environment, libraries, and tools, and from what I hear the performance is excellent.

    18. 1

      When reading about these topics, think of Elasticsearch: it is facing direct competition from Amazon’s Elasticsearch-as-a-service offering. And since their software is not AGPL-licensed, they cannot benefit from the improvements Amazon is making in its internal version of Elasticsearch (and neither can the rest of us).

    19. 2

      I benchmark myself against two main things: salary and how unfit I feel when I compare myself to a reasonable number of job postings.

      For example, two years ago I had no idea what LDAP, iSCSI, OpenID, Kerberos, and a lot of other things were. Now I know enough to speak about them in an interview. I’ve made progress since my last job hop.

      One additional thing, maybe: autonomy, i.e. how autonomous I am in performing my current job. That only makes sense within a certain company, though, and if the company develops custom proprietary software, the progress might only be progress in the micro-universe of that software, so take it with a grain of salt.

    20. 2

      What does it mean for things like Python asyncio / coroutines or even nodejs?

      1. 4

        Not much, because while this IS better, it is Linux-only, and Python and Node.js need to support Windows, macOS, etc. It is also a pretty different programming model, so it is hard to abstract over with a portability layer.

        1. 2

          I’m also not sure if it’s that relevant for node. From what I’m reading, it looks perfect for the likes of Go, where you want to have threads of execution which can block but where those threads of execution aren’t represented by OS threads. From what I can see, node’s programming model fits very comfortably on top of the existing epoll.
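          For a feel of that readiness-based model, here is a minimal sketch (my own toy code, not node’s internals) of a loop built on Python’s selectors module, which uses epoll on Linux: a callback is registered for a socket and only dispatched once the kernel reports the fd readable.

```python
# Sketch of a readiness-based event loop: register a callback for a
# socket, wait for the kernel to say it is readable, then dispatch.
import selectors
import socket

sel = selectors.DefaultSelector()   # epoll-backed on Linux
a, b = socket.socketpair()
a.setblocking(False)                # only the registered end must not block

def echo_upper(sock):
    # Runs only when the loop says the socket is readable.
    data = sock.recv(1024)
    if data:
        sock.sendall(data.upper())

sel.register(a, selectors.EVENT_READ, echo_upper)

b.sendall(b"hello")
for key, _ in sel.select(timeout=1):
    key.data(key.fileobj)           # dispatch the registered callback

print(b.recv(1024))                 # b'HELLO'
```

The point is that node’s single-threaded callback model already maps cleanly onto this readiness loop, so it has less to gain from kernel-assisted user-space scheduling than a threads-that-block model like Go’s.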

          1. 6

            FWIW I’m pretty sure the history is (roughly) that C++ programmers at Google wanted to use Go-like concurrency, hence support for user level threading. There are a bunch of CppCon videos about it that may give more color on that.

      2. 3

        My thought is that for languages that already made the big investment in userspace threading, it’s unlikely they will rip the scheduler out.

        I think this is most interesting for mostly automatically making “regular” threaded languages “just work”. Think: a more powerful, language-agnostic gevent.monkey.patch_all().

      3. 3

        If my understanding of the GIL problem is correct, this might mean very little to nothing. As far as I understand, the problem with the main Python interpreter is that a significant number of its internal data structures and functions are not thread-safe.

        If such problems were solved, we could just as well run multithreaded Python on regular pthreads.
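        To make the consequence concrete, here is a small sketch (function names and numbers are mine): threads share one interpreter and its lock, so CPU-bound work is typically farmed out to processes instead, even though both approaches compute the same result.

```python
# Sketch: the same CPU-bound job run on threads (serialized by the GIL)
# and on processes (which sidestep it). The results are identical; only
# the threaded version fails to use multiple cores.
import multiprocessing
import threading

def count_primes(n):
    # Deliberately CPU-bound: naive primality test for every i < n.
    return sum(
        i > 1 and all(i % d for d in range(2, int(i ** 0.5) + 1))
        for i in range(n)
    )

def run_threads(chunks):
    results = [0] * len(chunks)
    def work(idx, n):
        results[idx] = count_primes(n)
    threads = [threading.Thread(target=work, args=(i, n))
               for i, n in enumerate(chunks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)

def run_processes(chunks):
    with multiprocessing.Pool(len(chunks)) as pool:
        return sum(pool.map(count_primes, chunks))

if __name__ == "__main__":
    chunks = [2000] * 4
    assert run_threads(chunks) == run_processes(chunks)
```

Until those internal structures are made thread-safe, the threaded version executes its bytecode one thread at a time, GIL in hand.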