Threads for sknebel

  1.  

It would have been helpful if the article included stats for how many active users there were at the various scaling points. In particular, I’m curious if the influx caused a lot more load even for instances where signups were closed just because there’s a lot more overall activity, or whether the increased load just happened because they didn’t disable signups.

    1.  

      I’m curious if the influx caused a lot more load even for instances where signups were closed just because there’s a lot more overall activity,

From other posts about the same instance: yes, both from the overall activity and from local users becoming more active (and local users with a dormant account now starting to use it). (chaos.social first limited signups per day and then, a few days ago, stopped them entirely.)

    1. 3

Maybe the solution when you start an AGPL project is to always sell GPL licences to businesses?

      1. 2

I don’t see what purpose that would serve here. Few are going to be concerned about mold being AGPL, and those that are would probably be just as satisfied with a non-open dual-license option.

        And in general, putting a GPL version out there means any buyer can just redistribute it to the rest of the world, effectively making it GPL for everyone, so why bother with AGPL in the first place?

        1. 3

Because the problem the author of mold has is that companies know how to buy things, not how to donate to open source projects. Selling a different licence is something that fits into that paradigm.

      1. 3

I’m sorry, lossless JPEG recompression is not a selling point: it doesn’t impact new images, and people aren’t going to go out and recompress their image library. I really don’t understand why people think this is such an important/useful feature.

        1. 15

          Realistically I think the JPEG recompression is not something you sell to end users, it’s something that transparently benefits them under the covers.

          The best comparison is probably Brotli, the modernized DEFLATE alternative most browsers support. Chrome has a (not yet closed) bug to support the JXL recompression as a Content-Encoding analogous to Brotli, where right-clicking and saving would still get you a .jpg.

Most users didn’t replace gzip with Brotli locally; it’s not worth it for many even though it’s theoretically a drop-in improvement. The same is true of JPEG recompression. But large sites use Brotli to serve up your HTML/JS/CSS and CDNs handle it: Cloudflare does it, Fastly’s experimenting, and if you check the Content-Encoding of the JS bundle on various big websites, it’s Brotli. The same could be true of JPEG recompression.
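If you want to see that for yourself, here’s a minimal sketch using the third-party requests package (the URL is just a placeholder; swap in any JS bundle served through a big CDN):

```python
import requests  # third-party: pip install requests

# Advertise Brotli support and see which encoding the server/CDN actually picks.
resp = requests.get(
    "https://example.com/static/app.js",     # placeholder for a real JS bundle URL
    headers={"Accept-Encoding": "br, gzip"},
)
print(resp.headers.get("Content-Encoding"))  # frequently "br" behind big CDNs
```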

          I don’t think you stop thinking about handling existing JPEGs better because you have a new compressor; existing content doesn’t go away, and production doesn’t instantly switch over to a new standard. I think that’s how you get to having this in the JXL standard alongside the new compression.

          Separately, if JXL as a whole isn’t adopted by Chrome and AVIF is the next open format, there’s a specific way JPEG recompression could help: AVIF encoding takes way more CPU effort than JPEG (seconds to minutes, depending on effort). GPU assists are coming, e.g. the new gen of discrete GPUs has AV1 video hardware. But there’s a gap where you can’t or don’t want to deal with that. JPEG+recompression would be a more-efficient way to fill that gap.

          1. 2

            AVIF encoding takes way more CPU effort than JPEG (seconds to minutes, depending on effort).

            Happily most modern cameras (i.e. smartphones) have dedicated hardware encoders built in.

            1. 4

              None I know of has a hardware AV1 encoder, though. Some SoCs have AV1 decoders. Some have encoders that can do AVIF’s older cousin HEIF but that’s not the Web’s future format because of all the patents.

              I’d quite like good AV1 encoders to get widespread, and things should improve with future HW gens, but today’s situation, and everywhere you’d like to make image files without a GPU, is what I’m thinking of.

Companies already do stuff like this internally, and there’s a path here that doesn’t require end users to know the other wire format even exists. It seems like a good thing when we’re never really getting rid of .jpg!

              1. 1

Ah, sorry, my bad, I was thinking of the HEVC encoders, durrrrrr

          2. 9

            It’s just good engineering. We’ve all seen automated compression systems completely destroy reuploaded images over the years. It’s not something that users should care about.

            1. 1

Indeed, but my original (now unfixable) comment was meant for end users. The original context of the current JPEG XL stuff is the removal of JPEG XL support from Chrome, which meant I approached this article from the context of end users rather than giant server farms.

              1. 3

                I don’t quite understand how you get here. Browsers are used for getting images from servers to regular users’ faces. If they support JXL, servers with lots of JPEG images for showing to regular users can get them to the regular users’ faces faster, via the browser, by taking advantage of the re-encoding. Isn’t that an advantage for regular users?

            2. 6

              The conversion can be automatically applied by a service like Cloudinary. Such services currently offer automatic conversion of JPEG to WebP, but that always loses quality.

              1. 6

                people aren’t going to go out and recompress their image library

I’m not sure why you’d assume that. For many services that store lots of images it is an attractive option, especially given that it isn’t just a visually identical image, but can enable recreating the original file.

                E.g. Dropbox has in the past come up with their own JPEG recompression algorithm, even though that always required recreating the source file for display.
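For what it’s worth, the libjxl reference tools expose exactly that round trip. A rough sketch, assuming cjxl/djxl are installed and behave as documented (cjxl repacks JPEG input losslessly by default):

```python
import filecmp
import subprocess

# Repack an existing JPEG into JPEG XL without re-encoding the image data,
# then reconstruct the original .jpg from the .jxl container.
subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)
subprocess.run(["djxl", "photo.jxl", "restored.jpg"], check=True)

# If the transcode really is reversible, the restored file is byte-identical.
print(filecmp.cmp("photo.jpg", "restored.jpg", shallow=False))
```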

                1. 2

                  You’re right - I didn’t make this clear.

Regular users are not the ones who care, but they’re the people for whom it would need to be useful in order to justify re-encoding as a feature worth the attack surface of adding JPEG XL to the web. The fact that it kept being pulled up to the top of these lists just doesn’t make sense in the context of browser support.

A much more compelling case can be made for progressive display - not super relevant to many home connections now, but if you’re on a low-performance mobile network in remote and inaccessible locations or similar (say AT&T or T-Mobile in most US cities :D ) that can still matter.

That said, I think Google is right to remove it from Chrome if they weren’t themselves going to be supporting it, and it doesn’t seem that many/any phones are encoding to it (presumably because it’s new, but also modern phones have h/w encoders that might help with HEIF, etc.?)

                  1. 2

Most regular users don’t host their own photos on the web; they use some service that they upload images to. If that service can recompress the images and save 20% of both their storage and bandwidth costs, that’s a massive cost and energy saving. Given that the transform appears to be reversible, I wouldn’t be surprised if they’re already doing the transcoding on the server side, keeping small caches of JPEG images for frequently downloaded ones and transcoding everything else on the fly. If browsers support JPEG XL, then their CPU and bandwidth costs go down.
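As a rough sketch of what that on-the-fly path could look like (purely illustrative; it assumes the stored master is a .jxl and that djxl is available to rebuild the original JPEG for older clients):

```python
import pathlib
import subprocess
import tempfile

def image_response(accept_header: str, jxl_path: str) -> tuple[bytes, str]:
    """Serve the recompressed .jxl to clients that advertise support;
    otherwise reconstruct the original JPEG on the fly (a real service
    would cache popular reconstructions rather than redo them)."""
    if "image/jxl" in accept_header:
        return pathlib.Path(jxl_path).read_bytes(), "image/jxl"
    with tempfile.TemporaryDirectory() as tmp:
        jpg = pathlib.Path(tmp, "out.jpg")
        subprocess.run(["djxl", jxl_path, str(jpg)], check=True)
        return jpg.read_bytes(), "image/jpeg"
```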

                    The surprising thing here to me is that the Google Photos team isn’t screaming at the Chrome team.

                    1. 2

The jpeg-xl<->jpeg transcoding is simply an improvement of the entropy coder, but more importantly, as long as there is no change in the image data, the “cloud storage” provider is more than welcome to transcode however they like - I would not be surprised if the big storage services are already doing something as good as or better than jpeg-xl.

The reason it can do lossless transcoding is that it essentially stores a jpeg, using a different extension to indicate a different entropy coder. There was nothing at all stopping cloud storage providers from doing this long before jpeg-xl existed or was a standard, and they don’t have the compatibility requirements a standards body is worried about, so I would not be surprised if providers were already transcoding, nor would I be surprised if they were transcoding using their own system and not telling anyone for “competitive advantage”.

                      The surprising thing here to me is that the Google Photos team isn’t screaming at the Chrome team.

Why? They’re already free to transcode on the server end, which I assume they do anyway, and I would assume they do a better job of it than jpeg-xl. For actual users, Chrome already supports a variety of other formats superior to jpeg (not xl), and seemingly on par with jpeg-xl (+/- tradeoffs). In my experience online views (vs. the “download image…” button) use resized images that are smaller than the corresponding JS most such sites use (and shrinking the hypothetical full-resolution image isn’t relevant, because they essentially treat online viewing as “preview by default” - and anyway, why provide a 3x4k version of a file to be displayed in a non-full-screen window on a display with a smaller resolution than the image?).

                      The downside for Chrome of having jpeg-xl is that it’s yet another image format, a field renowned for its secure and robust parsers. I recall Safari having multiple vulnerabilities over the years due to exposing parsers for all the image formats imaginable, so this isn’t an imaginary worry.

Obviously in a year or so, if phones have started using jpeg-xl, the calculus changes. It also gives someone time to either implement their own decoder in a secure language, or gives the Chrome security folk time to spend a lot of effort breaking the existing decoder library and getting it fixed.

But for now jpeg-xl support in Chrome (or any browser) is a pile of new file-parsing code, in a field with a suboptimal track record, for a format that doesn’t have any producers.

To me, the most repeated feature used to justify jpeg-xl is the lossless transcoding, but there’s nothing stopping the cloud providers transcoding anyway, and moreover those providers aren’t constrained by the requirements specified by a standard.

                2. 4

I would actually go and re-encode my share of images, which for some reason* exist as JPEG, if I knew that this wouldn’t give me even more quality loss.

* Archival systems are a typical reason to have large amounts of JPEG stuff lying around.

                  1. 4

                    This repacking trick has been around for a while, e.g. there’s Dropbox Lepton: https://github.com/dropbox/lepton

                    1. 1

                      But the vast majority of users aren’t going to be doing that.

                    2. 2

You realized your first-order mistake. The second-order mistake is what really should be corrected.

1. Tone: “I’m sorry” is passive-aggressive and not productive.

2. Not realizing why this would be an advantage for a new format. This is your Chesterton’s Fence moment.

                    1. 3

I feel like this comes too late. Windows on ARM devices have been out and have been widely panned as pretty terrible, with basically no native software and a slow x86 compat layer. And married to Qualcomm chipsets. I don’t see them putting ARM front-and-center enough to overcome this – devs don’t trust it’s worth the effort, users don’t want it because the devs don’t support it.

                      Reasons to think otherwise?

                      Is this hardware interesting as an ARM PC in general?

                      1. 4

                        I’m still waiting for benchmarks, but the Gen 3 SoC looks like it should be pretty fast. The x86 compat layer is a lot faster than it was at launch. We (Microsoft) worked with Arm on some extensions to make x86 emulation more efficient and I believe the cores in this SoC include at least some of them.

Personally, I think that x86’s days are numbered. Intel’s roadmaps slip every time I read about them. AMD is an Arm partner and is well placed to switch to shipping Arm devices if there’s demand. The Arm MacBooks are selling (and performing) extremely well. Graviton instances in AWS are (from their last financials) selling far better than Amazon expected and are very popular with customers. At this point, Windows is the only OS that I’d rather run on x86 than Arm hardware, and that’s not a sustainable position.

                        1. 3

                          We (Microsoft) worked with Arm on some extensions to make x86 emulation more efficient and I believe the cores in this SoC include at least some of them.

Can you elaborate on the extensions? I’m having trouble finding search results that aren’t about the M1.

                          1. 1

                            AMD is an Arm partner

                            On the other hand, IIUC, AMD is the only company other than Intel that can legally offer native x64 processors. Wouldn’t it be better for them to take advantage of that unique position? Is it somehow not possible to produce an x64-based SoC that’s as power-efficient as what’s possible with ARM64?

                            1. 1

It’ll be interesting to see how the gaming PC market shakes out. That market doesn’t seem all that interested in power efficiency, and it also remains a bastion of profitability.

                              1. 2

                                I think that will depend a lot on how well cloud gaming works. I’ve basically stopped playing games locally on my Xbox One S because they are faster and prettier with cloud gaming (and ‘install’ instantly). If the network is fast enough, I’d rather play games in the cloud than locally.

                                In terms of emulation, the Xbox One shipped with a PowerPC emulator that emulated Xbox 360 games and the modern Arm emulator for Windows is based on the same underlying technology. If you’re spending a lot of time in system libraries (e.g. DirectX) or in the GPU then emulator performance isn’t nearly as critical, but for CPU-bound games there may be some overhead.

                                1. 1

I don’t want to say game streaming is dead in the water, and Stadia isn’t the best example (lol, Google and long-living products and all), but I’ve no high hopes. I already forgot the name of Amazon’s offering (and afaik it was US only), and so on. The MS one also hasn’t garnered much publicity in my circles (which lean towards Germans and MMO players, I have to admit).

                                  I mean, as someone with a dedicated gaming PC I’m also not the target audience but it’s a cool thing if it exists, just not sure it’s a mass market thing, looking at internet speeds and all.

                                  1. 1

Xbox Cloud Gaming was in the news yesterday with over 20M active subscribers, and the financials released the same day suggest that it’s quite profitable. It integrates with the Xbox Game Pass, which has a large library of games that you have access to for a monthly subscription on either Xbox or Windows machines, and with the cloud gaming bit you can play them on pretty much any device that has an Xbox controller.

                                    A single AAA title can cost as much as a six-month subscription to Game Pass, which lets me play almost 500 games (a few more on console than PC). With cloud gaming, I can try most of them almost instantly: it takes a few seconds to launch and then I’m in the game, whereas with a local install it can take hours to install a 20 GiB game and then 10 minutes to learn that I don’t like it. I’ve played a bunch of games on it that I would never have tried if buying had been the only option, just as I’ve watched a load of things on Netflix where I’d never have bought the DVD. Some of them lasted 10 minutes before I decided I didn’t enjoy them, others I’ve played for tens of hours.

                                    Cloud Gaming works really well with a 20 Mb/s connection and moderately well with a 10 Mb/s connection. That’s a struggle in some rural areas but that’s a lower requirement than the cheapest broadband package that you can buy in most cities in the UK, at least. If you have a 20 Mb/s connection, it takes over two hours to install a 20 GiB game (and that’s pretty small by modern game standards), with cloud gaming you can play it instantly, so the experience is a lot better for anyone with lower speeds that are above this threshold. The GPU in the cloud servers is (I believe) the same as in the Xbox Series X, which is a lot more powerful than anything that you could put in a laptop power budget.

                                    If you’ve got a fast connection and a dedicated gaming PC, you’ll probably see better performance locally, but you need to keep upgrading the machine and that’s a treadmill that I was happy to step off.

                                    1. 1

                                      It integrates with the Xbox Game Pass

This is it. Until I see at least some realistic projection of “people who signed up because of streaming” versus “people who signed up for the game pass and then clicked a button to say ‘yes, cloud gaming is also fine’”, that 20M number doesn’t tell me much.

Look, I get that /some/ people find it awesome; I also know a few. I’m not deep enough in the industry to be able to make a lot of sense of a number like 20M. World of Warcraft at some point had 12M subscribers, so is 20M good? Steam says there are currently nearly 1M people playing just the 2 top games. Currently, not monthly.

                                      You don’t have to try to sell it to me - and maybe “mass market” was really badly phrased. But I’m not convinced it will be “winning” or dominate the market in any form.

                            2. 2

                              I have an ARM windows hybrid laptop/tablet (that I got for cheap fortunately), and originally MS said that x64 emulation was coming, so that you could run Windows programs on your Windows computer.

                              Now it’s here, on Windows 11. This computer is not “eligible” for Windows 11. So I’m stuck with a Windows computer that can’t run many Windows programs.

So now I’m just hoping that Linux will work on it some day. It’s not a powerhouse, but it’s more than capable of basic computing tasks - it’s just dragged down by MS.

                              1. 1

                                I have an ARM windows hybrid laptop/tablet (that I got for cheap fortunately)…This computer is not “eligible” for Windows 11.

                                Is the device an 835?

                                Although a lot has been written about Windows 11 requirements on x86, the issues with ARM are different. As far as I know the 835 was the “first” ARM64 CPU that Windows could use, but extra instructions were added later (ARM 8.0 vs 8.1/8.2) which are being depended upon now - see https://en.wikichip.org/wiki/arm/armv8.1. The second revision (850) should run Windows 11.

                                1. 2

                                  Is the device an 835?

                                  Yep. :)

                                  The second revision (850) should run Windows 11

                                  Then I guess Linux is my only hope, since buying another one is not a solution for me.

Also, building programs for it is painful. GitHub doesn’t even offer windows-aarch64 CI. I tried to build an aarch64 version of Joplin for my partner, who’s using that computer, and had to give up.

                                  1. 1

                                    Also building programs for it is painful.

                                    I bought a Galaxy Book Go (7c Gen 2, low end CPU but newer generation), and my experience was basically bimodal.

                                    On the one hand, source compatibility of Amd64 code is almost perfect. Seeing how the x64 emulation works (munge the Win32 ABI layer with only one usermode set of DLLs) explains this a bit, because it essentially dictates that the API needs to be exceptionally compatible.

                                    On the other hand, build system compatibility is horrifying. Each project ends up specifying its own toolchain, but unlike x86, here it’s important to have a toolchain that actually targets the system. Things like msbuild are particularly painful, because they explicitly list every architecture and release combination, so every single project file needs to be updated to indicate that ARM64 is a target, and that it needs to use the “right” SDK and compiler, which may not be the one the project explicitly targets elsewhere.

                                    This meant that porting my own code was a non-event, but I only ported a handful of other projects that I desperately needed.

                                    That said though, the x64 compatibility is nice, but it was still usable on 10. There’s an x86-to-arm64 cross compiler in Visual Studio which is what I was using until the native arm64 compiler came along (which is still in preview.) According to the system requirements the native arm64 compiler isn’t supported on 10, but I wouldn’t be at all surprised if it worked. That compiler cut my build times almost in half, so it’s a very welcome development.

                                    1. 1

                                      Trying to cross-compile from Linux also doesn’t make it easier.

                                      But I didn’t want to even try to build an Electron app on the device itself, which has 4 GB RAM, with emulated tools that might or might not work at all.

                                      GH closed the issue but said they would add better Windows support on CI at some point.

                            1. 24
                              • You can safely skip to 5:42; before that is just a preamble about math and music notation as a metaphor for backward compatibility.
                              • So, Nim 2 doesn’t have any big breaking changes, and there are some workarounds/polyfills to be able to use new features in a way that can still compile in 1.X.
                              • You can specify default values for object (struct) fields, yay
                              • An optional “strict defs” compiler mode emits warnings if a variable is possibly used before being given a value, if you don’t want to rely on default values.
                              • “out” annotation on a function parameter tells the compiler the variable passed to that arg will be initialized when the fn returns.
• Major improvements to the effects system that make it a lot more useful. I’m excited about this! A function that takes callbacks can specify it has all the effects of its callbacks. And a “forbids” annotation prevents a function from having a certain effect, either through regular calls or callbacks. This will be great for limiting the propagation of things like unsafe behavior or I/O through a codebase.
                              • Unicode characters can be used as custom infix operators — apparently math people strongly requested this, which makes sense, but I’m wondering now what characters can be used? Can I use emoji as operators now? And how long until we have an APL syntax implemented as Nim macros?
                              • Some minor enum quality-of-life improvements
                              • “Tasks”, a new infrastructure library that just wraps an expression for lazy evaluation, for use with threads. This just sounds like a lambda with no args or return value to me; not sure what’s new about it, but I haven’t read any real docs yet.
• And last but not least, the newish memory manager ORC (“Optimized Ref Counting”) introduced in 1.4 is finally baked enough to become the default. It’s gotten many more optimizations in 2.0. This is the kind of state-of-the-art GC that is showing up in newer languages like Swift, where memory management is automatic and heap objects are ref-counted, but most (“80%”) retain/release calls are optimized out at compile time. ORC also has a fast cycle collector so you don’t have to worry about leaks as in Swift.
                              • ORC also allows a single heap to be shared between threads, no need to copy objects passed across thread boundaries as in e.g. Erlang.
                              • To be released by the end of this year “or we’ll die trying” :)

                              I’m excited, though I haven’t actually used Nim in more than a year. It’s still a language-crush. Yeah, I’m the guy in the meme who’s holding hands with C++ but looking over his shoulder at Nim.

                              Actually I’m holding hands with Go and TypeScript too (hey, it’s complicated!) Go is more of a work obligation: its error handling is as wretched as ever (half the LOC I produce are “if err != nil {return nil, err}”) and the hyped new generics feature is less useful than I’d hoped. TypeScript is definitely a crush: very fun to use, very productive, but I don’t think I can settle down with it long term because a lot of what I write is low level and needs to run fast and compile native.

                              By this analogy I guess Rust is the one I had a crush on at first, until the night it blew up and went psycho on me, screaming about all the stuff I’d borrowed without proving I’d give it back, and I had to call the cops. 😬

                              To me, Nim feels like a nice compromise between Rust and TypeScript. Less strict/rigid than the former, more efficient than the latter.

                              1. 5

                                thanks for writing it up!

                                1. 4

                                  Unicode characters can be used as custom infix operators — apparently math people strongly requested this, which makes sense, but I’m wondering now what characters can be used? Can I use emoji as operators now?

                                  There is a fixed list of operators.

                                  “Tasks”, a new infrastructure library that just wraps an expression for lazy evaluation, for use with threads. This just sounds like a lambda with no args or return value to me; not sure what’s new about it

                                  I asked Araq during the talk and he said there are some subtle differences with threading, but maybe it could be made a special variant of a proc().

                                  1. 3

                                    To me, Nim feels like a nice compromise between Rust and TypeScript. Less strict/rigid than the former, more efficient than the latter.

                                    I feel the same, and it’s what’s led to me using Nim pretty extensively for the past 12-ish months, and I love it. I’ve got some problems with it that I run into occasionally, but I feel that way about every language :)

                                    Looking forward to 2.0!

                                    1. 2

Is Nim still treating JavaScript as a first-class citizen for its output (this is very nice in my view)? Are there significant changes/enhancements in this area? (I am not looking for anything specific, but you had an informative post above - thank you for that - and I just wanted to pick your brain a bit more.)

                                      1. 3
                                        1. AFAIK, yes.
                                        2. I don’t know of any, but I’ve been out of touch with Nim for a while.
                                        1. 3

                                          Yes. Unfortunately, the JavaScript it produces is quite bloated (several kilobytes for a “hello world” script) and Araq doesn’t want to do anything about it.

                                          1. 1

Thank you - do you know the reasoning behind it? E.g., should the JS minification/optimization pipeline take care of the bloat you mention, or are there more complex issues there?

                                            1. 2

                                              His reasoning is that computers with a slow/limited internet connection don’t exist or don’t matter, pretty much.

                                      1. 7

I think this article is flawed by only considering the billing and not the people / on-site work you suddenly need when you’re not “cloud” anymore, and how much harder that labor is to find because we need less of it.

                                        1. 29

                                          There’s a lot of options between “the cloud” and “we literally own the land, the DC, the genset and the racks for our servers” - those options have been available for at least the last two decades, and have only gotten better in that time.

                                          For example, plenty of places will happily colo your owned or long-term leased hardware, providing power, connectivity and remote hands when needed; your existing team of ops who were fighting with the Amazon Rube Goldberg machine can now be used to manage your machines using whatever orchestration and resource management approach works for your needs.

                                          1. 3

                                            We fit somewhere on that spectrum more towards the “we literally own everything” side, but not quite.

                                            We do have our own location, generator, PDUs, CRAC units, etc. but you can pay vendors to do a lot of the work. Fan goes out on a PDU? Email the vendor and hold the DC door open for them.

I don’t know the exact cost of all this, but a lot of it will last you a long time.

                                          2. 12

                                            I don’t think it fails to consider the people/on-site work you need at all.

                                            They say:

                                            Now the argument always goes: Sure, but you have to manage these machines! The cloud is so much simpler! The savings will all be there in labor costs! Except no. Anyone who thinks running a major service like HEY or Basecamp in the cloud is “simple” has clearly never tried. Some things are simpler, others more complex, but on the whole, I’ve yet to hear of organizations at our scale being able to materially shrink their operations team, just because they moved to the cloud.

                                            It sounds to me like they thought about it and concluded that they’re spending just as much on labor to manage AWS as they would to manage servers in a colo.

                                            1. 10

                                              I’ve yet to hear of organizations at our scale being able to materially shrink their operations team, just because they moved to the cloud.

                                              I think this quote is intended to address it. Especially since “own hardware” presumably doesn’t mean “a stack of machines in our office”, nor “we do everything ourselves”. (Now does that math indeed work out like that for them? shrug)

                                            1. 18

                                              [Edit] I realize I read what I wanted from the article rather than what was actually written — I’m leaving this here but recognize that it’s not quite what the article is about.

I really like this, and I think this advice extends beyond just Linux troubleshooting. It’s really advice on how to teach people and how people learn. Answers are 20% of the learning process; the other 80% is understanding how to get to the answer, and that’s critical for developing skills. I could rant about the US education system teaching to the test, which focuses on that 20%, and how terrible that is.

                                              One of my roles at my current job is helping people learn Rust, and when someone comes to me with a confusing type error I always make an effort to explain how to read the error message, why the error message occurs, and the various ways to fix it. If I instead just provided the fix, there would be no learning, no growth and no development of self-sufficiency. It takes longer, and sometimes people just want an answer, but I stay firm on explaining what is going on (partially because I don’t want to be fixing everyone’s basic type errors). I wonder if part of the issue with Linux troubleshooting advice is that it doesn’t have that same feedback mechanism — someone not learning doesn’t affect the author of the advice in any way so there is no real push for building self-sufficiency.

                                              Anyway, I think this post was really short and to the point, and I completely agree with the message, but I also think it’s interesting just how much it extends beyond Linux diagnostics and into learning everywhere.

                                              1. 3

I agree, it does work as “How to write good troubleshooting advice” in general (which IMHO would be a better title anyways)

                                                1. 1

                                                  Dave Jones (EEVBlog) does an excellent job of this in the world of electronics, a playlist of his troubleshooting videos and his approach: https://www.youtube.com/playlist?list=PLvOlSehNtuHsc8y1buFPJZaD1kKzIxpWL

                                                1. 6

                                                  Frameworks have different goals than you or your team.

                                                  That whole section applies exactly the same to regular libraries.

                                                  Frameworks make trade-offs that harm maintainability of the projects built in them.

                                                  The main claim in this section is that once you have a class that inherits methods from a framework, you are now responsible for maintaining these methods. I honestly find this claim ridiculous. You are responsible for maintaining whatever is using those methods, just like with libraries.

                                                  The section then goes into a rant about performance, which is frankly unrelated to frameworks.

                                                  Frameworks are designed to take your project hostage.

Frameworks are supposed to be fairly foundational pieces of code upon which your code rests. In that respect, they are similar to languages. They abstract operational details, just like languages abstract your code running on a CPU. Of course that means that you will write the code for that one abstraction, and that it won’t be easy to transfer it to some other one.

                                                  The example given for the mixing up of domain code across layers is honestly unconvincing. All of the details that are said to be concerns of other layers are important for business logic.

Frameworks offer some of their benefits, and don’t harm maintainability, when used in a decoupled fashion.

Some of the frameworks’ most important benefits are lost when they are used in a decoupled fashion. Good frameworks will take care of security for you if you utilize them fully, but if you don’t, then you will need to care a lot more about it.

                                                  Decoupling won’t even improve maintainability when compared to the tight coupling. All of the methods will still be there, just maybe through your own layer. You now have an extra layer to maintain, and large changes in the framework are still likely to require restructuring of that layer.

                                                  The only real thing you buy with decoupling is the ability to change frameworks at will. But projects rarely change frameworks anyways.

                                                  But why are there no frameworks that offer this?

                                                  I don’t understand the author here. Hasn’t he seen libraries? Hasn’t he seen lightweight frameworks? There’s plenty of them. Flask and SQLAlchemy is one combo of two frameworks in the style he’s talking about that I like.

                                                  I think the author has been burned by big, overarching frameworks. I also think the author does not understand the main users of such frameworks: consulting companies creating websites that have very similar requirements for their clients. Frameworks make perfect sense in such projects, as the core requirements are unlikely to change significantly, and the solution that satisfies them is already done.

                                                  1. 7

As a former advocate of “use libraries, not frameworks”, and after having to write per-project framework-like code with Flask + whatever again and again… he has no idea how unmaintainable his “solution” quickly becomes.

                                                    1. 2

I’d like to think that I do have an idea of how (un)maintainable my “solutions” are. It’s my job as an architect :)

But I wrote the article with a few examples that worked really well in mind. Some projects where the domain was prime and central and where delivery mechanisms or storage mechanisms weren’t the heart of the software, but a detail. Not built on top of some ORM or HTTP lib, but where that HTTP lib was used merely to call the domain. And where the ORM was called by the domain through abstractions (ReportsRepository.save_pdf(report), for example) rather than being the base classes of anything related to the domain.

                                                      1. 2

                                                        Some projects where domain was prime and central and where delivery mechanisms or storage mechanisms weren’t the heart of software, but a detail.

You might want to take a peek at Odoo. Not because it’s well designed, but because it manages to provide a framework strong enough that you only need to write the domain code, and, if needed, the delivery or storage mechanisms are fairly inconsequential to the domain code. It is genuinely one of the weirdest frameworks I have ever worked with, but it is extremely well specialized for the kinds of requirements its clients need, and the ways development for those clients is done.

                                                        And where the ORM was called by the domain through abstractions (ReportsRepository.save_pdf(report), for example) rather than the base classes of anything related to the domain.

This feels fairly useless to me. Of course, not all code needs to be done in ORM classes. But if you’re just calling UserRepository.update_username(user, new_username), then what’s the point? Most logic is defined in relation to actors/objects, and it should live together with their storage. The actions are related to the objects, so make them live with the objects. If you apply proper object-oriented design / database normalization, the places for the actions to live become pretty natural. I get it if you don’t want to tightly couple with the transport layer, but for most applications, the storage layer is the core of their offering.

                                                    2. 6

                                                      Frameworks are supposed to be fairly foundational pieces of code upon your code rests. In that respect, they are similar to languages.

                                                      Perhaps the author has been burned, as I have, with frameworks like Laravel, CakePHP and Rails that (often gratuitously) change everything and the kitchen sink in major releases, which makes upgrading at best a chore, at worst nigh-impossible. There aren’t that many languages that change everything around willy-nilly every major release.

                                                      1. 2

                                                        There aren’t that many languages that change everything around willy-nilly every major release.

                                                        Depends on the language. Python 2 to 3 migration still echoes in the minds of many. Newer languages have fairly frequent breaking changes as well.

                                                        1. 1

                                                          There was a period of several years where installing something written in rust meant figuring out exactly what rust version it and its libraries would build under and using rustup to switch to that version temporarily. I think it’s better now, but I don’t use rust that much.

                                                          1. 1

                                                            Rust is now nearly entirely backwards compatible, with each crate able to select a language version that crate will be compiled with. It’s a compiler bug if a crate that compiled in an older Rust version doesn’t compile in the latest nightly.

                                                        2. 1

That too, but that wasn’t the reason or background for writing this post. It was the reverse: I worked in some codebases where the domain was central and depended on nothing (Hexagonal Architecture and Clean Architecture). Where the ORM was a detail, tucked away somewhere. Where HTTP, CLI and message buses were details, tucked away.

                                                          The domain was all about the business. Not about HTTP, databases, relational-tables, commandline arguments or message-bus rewinding.

Yet nearly all frameworks that I worked on and with put all these details first and foremost. Business logic in controllers. The ORM filled with validation logic. Everything built on top of subclasses that offer giant interfaces, with it being arbitrary which pieces were used here and which weren’t.

So no, I haven’t been burned by Rails or Laravel (or React or Angular); I’ve seen how prolific, simple and maintainable software can be without those frameworks woven through the entire domain logic.

                                                          1. 3

I’d love to see real-world example code like that where there aren’t layers upon layers of abstraction to “hide” the actual ORM code. I’m working on a project without any framework right now, but we still have some of the business logic in the same file as the code to retrieve the object from the database.

                                                            As a simple example, we’d have a user “model” file which contains code to check a password hash against a record in the database and checking whether that user is active. That involves retrieval of the relevant fields and checking the hash (which is of course done by a library).

                                                            1. 1

                                                              This sounds like there isn’t any architecture at all. But maybe I understand you wrong.

A framework will impose an architecture. Which, as I tried to clarify in my article, is often the wrong, or not the best, architecture. But in all cases, having no architecture is arguably worse than having a poorly fitting one.

I’d suggest you read up on Hexagonal Architecture. @hgraca has some excellent posts on this, for example [1]. And while his posts are very complete, that also makes them dense and makes it look like “layers upon layers”, while in reality it really is very simple.

In your example, without even knowing all the details (which are crucial, so take this with a bucket of salt!), I’d suggest splitting out the responsibilities:

1. Check the password hash - CheckPasswordCommand.new.call(), PasswordService.validate(), $passwords_repo->get_by_hash($hash), anything, really.
2. Check whether the user is active - again, a command, service, repo, etc. As long as the code that “fetches the activity data” and the code that “checks if this is still active” live in distinct places. One is plumbing and utterly uninteresting: reading stuff from a database (or cookie, or session storage, or whatever, boring); the other is domain-specific and contains business logic and concepts such as “what even is active?”

So, in your example - again, without knowing details - I already see at least four different responsibilities. Yet it sounds like all of these are crammed into one algorithm, into one class/function/unit. I’d start with splitting this out.
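A very rough Python sketch of that split (all the names here are placeholders; the storage and hashing details are injected so they stay out of the domain code):

```python
from dataclasses import dataclass

@dataclass
class UserRecord:
    username: str
    password_hash: str
    deactivated: bool

# Plumbing: only knows how to fetch rows; nothing about "active" or passwords.
class UsersRepository:
    def __init__(self, db):
        self.db = db  # any DB connection/ORM session; fetch_user is a stand-in

    def get_by_username(self, username: str) -> UserRecord | None:
        row = self.db.fetch_user(username)
        return UserRecord(**row) if row else None

# Domain: business concepts like "what even is active?" live here.
class AuthenticationService:
    def __init__(self, users: UsersRepository, hasher):
        self.users = users
        self.hasher = hasher  # e.g. a bcrypt/passlib wrapper with a verify()

    def authenticate(self, username: str, password: str) -> bool:
        user = self.users.get_by_username(username)
        if user is None or user.deactivated:
            return False
        return self.hasher.verify(password, user.password_hash)
```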

Now, this pattern is often seen with frameworks - even though you mention there’s no framework here. Where typically some User class accumulates all such business logic, takes care of storing the user data, and handles stuff like avatars, registration, validation, confirmation, deletion, emails, SMS, 2FA, password hashing, sessions, etc. A whole subsystem (or entire SaaS product) accumulated in a single class: it can hardly get worse, spaghetti-wise. And it’s caused by frameworks that not just allow this, but encourage it - often by making “the proper way” much harder than “quickly cramming it into that already messy class”.

                                                              [1] https://herbertograca.com/2017/11/16/explicit-architecture-01-ddd-hexagonal-onion-clean-cqrs-how-i-put-it-all-together/

                                                              1. 2

                                                                This sounds like there isn’t any architecture at all.

                                                                There’s some. But mostly I’m just looking for a good way to organize things.

                                                                AFAIK Laravel tries to implement the hexagonal architecture - it has commands, models, repositories, services and transformers. I’ve worked with it and found it often was needlessly indirect and its use of dependency injection made everything very “mushy”.

The thing with this architecture and its layers is that very often, when you make a change, you have to touch many different files, just because the change needs to percolate through all the layers. And when everything is so “disconnected”, you often end up implementing various different methods for finding objects instead of exposing the ORM’s query filtering directly, because that would be a breach of abstraction: e.g. FindUsersByRole, FindUsersByPartialName (for autocomplete), FindUsersBySlug, etc. And when these need more complex filtering, or paging, or sorting, you end up reimplementing the same basic functionality in each of them. Or building a meta-ORM to describe the filtering in an ORM-agnostic way. But that’s even higher up in the clouds and extra busywork for no good reason.

                                                                Like the post you quoted says things like

                                                                The persistence interface is an abstraction layer over the ORM so we can swap the ORM being used with no changes to the Application Core.

                                                                and

                                                                The repository interface is an abstraction on the persistence engine itself. Let’s say we want to switch from MySQL to MongoDB.

                                                                I mean, who does that? Sure, I’ve been in one or two projects where we did end up deciding to change the storage layer. And even then it was from one SQL db to another SQL db, with very minor changes regardless of the architecture. But in the vast majority of projects I’ve worked on, my experience has been that you really don’t need to do that ever over the lifetime of a project. Most HTTP XML services aren’t suddenly going to be accessed over SSH with JSON output, instead.

                                                                The changes you do tend to make are more customer-driven and unpredictable and tend to be more cross-cutting. Oftentimes, a 3rd party service gets taken out and replaced by another. That happens all the time but usually you’ll have everything related to that service isolated in its own place already anyway.

                                                                I think my point is that making the core stuff like db abstraction and networking layer so extremely pluggable has a cost, too. The cost being that you have to work hard to trace the control flow through various layers and with dependency injection you’ll have to work extra hard to find out what concrete class is being used where (which you have to know when debugging, which is where you spend 80% of the time you spend on code). You’ll also have many more lines of code, and places for bugs to hide.

                                                                Perhaps the point is that you don’t know what will need to be replugged, so you just wrap everything in layers to make sure anything can be replugged?

                                                                Like I said, all I wanted was a good way to organize things so people can find where to make their changes, and make it easy to make those changes, into the future for a maintainable application.

                                                        3. 4

They also make perfect sense if you have a (small) amount of business logic and don’t really care for the interface part, e.g. exposing something via HTTP, but don’t want to reinvent the wheel - you just write 5 lines of glue code in your controller. Chances are good that every update to the framework will just work[tm] for the next few years, and you did not spend weeks writing your own HTTP layer (esp. with authentication).

                                                          1. 2

                                                            Indeed. I think there are many use-cases where frameworks make a lot of sense. But I also think those are far fewer than most teams or devs use them for.

There always are tradeoffs; I tried to lay them out in my article, but realize that maybe I didn’t touch on the positive parts of frameworks enough (the post is long enough already as it is). Knowing those trade-offs is important, I think. Too many people will just blindly grab a framework rather than first think through their domain and what architecture fits it, and then decide what framework, if any, fits their use case best.

                                                          2. 2

                                                            I don’t understand the author here. Hasn’t he seen libraries? Hasn’t he seen lightweight frameworks? There’s plenty of them. Flask and SQLAlchemy is one combo of two frameworks in the style he’s talking about that I like.

Author here. Flask and SQLAlchemy, by the definition clearly laid out, aren’t frameworks, but indeed “libraries”.

                                                            I’m not arguing against code-reuse. I’m putting forward that frameworks aren’t a very good manner of code-reuse, but that libraries, in a fitting, use-case specific architectural pattern are.

You need HTTP? Place an HTTP interface like Flask in front of your domain. Have it call that domain. Have it call Commands, Services, Procs. Have it send Messages. Have it deliver Events on a bus. Whatever you need. But please, keep the HTTP out of the domain. It’s a delivery detail, not a framework in which to place business logic.
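A minimal sketch of what that looks like with Flask (the invoice domain here is invented purely for illustration):

```python
from dataclasses import dataclass, field
from uuid import uuid4

from flask import Flask, jsonify, request  # third-party: pip install flask

# Domain: knows nothing about HTTP.
@dataclass
class InvoiceService:
    _store: dict = field(default_factory=dict)  # stand-in for a real repository

    def create(self, customer_id: str, lines: list) -> str:
        invoice_id = str(uuid4())
        self._store[invoice_id] = {"customer": customer_id, "lines": lines}
        return invoice_id

# Delivery: Flask only parses the request, calls the domain, shapes the response.
app = Flask(__name__)
invoices = InvoiceService()

@app.post("/invoices")
def create_invoice():
    payload = request.get_json()
    invoice_id = invoices.create(payload["customer_id"], payload["lines"])
    return jsonify(id=invoice_id), 201
```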

                                                            1. 4

                                                              Flask has a smaller surface, but it still seems fairly clearly to be a framework under that definition. It controls when and how your code gets called, it has default behavior that you can add code to override, … (Which tbh is a fairly natural thing to do for an HTTP implementation). And frameworks like Django have very similar interfaces - Django just brings more options for deeper integration to the table if you want them, whereas in Flask such things live in community extensions.

                                                          1. 2

                                                            This is neat, but why is there no certbot for Postgres? I just want it to verify my server’s DNS and give me a signed cert based on that which my client can verify with the OS’s CA.

                                                            1. 8

                                                              acmetool, to name one example, maintains an up-to-date TLS cert in /var/lib/acme/ and will (temporarily) start up a minimal HTTP server to answer the ACME challenge protocol if it needs to. You can then configure any other services you want to use that TLS cert, like IMAP or SMTP or Postgres even if the server isn’t a web server.

                                                              1. 5

                                                                That would be nice, but note that it doesn’t fix the core problem that the bit.io folks explained: almost all postgres clients default to unsafe TLS settings. So even if you present a completely valid cert, those clients don’t care, and will proceed on the basis that any TLS handshake took place, regardless of what the cert said. IOW, I can still trivially MitM the connection and serve literally any TLS cert, and the client will be none the wiser.

                                                                Of course you can reconfigure the client for full validation, but that leaves the brittleness problem: you have to never screw up, with any of your clients, because if you forget or regress a client, it’ll silently fail open. The only fix for that is in pushing the postgres clients to change their default, though making it trivial for the servers to serve valid zero-config TLS is indeed a likely prerequisite.
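Concretely, with a libpq-based Python client (psycopg2 here) the strict settings look roughly like this - and the point is that none of it is the default:

```python
import psycopg2  # third-party: pip install psycopg2-binary

# Without sslmode=verify-full, libpq's default ("prefer") accepts any cert,
# which is exactly the silent fail-open behaviour described above.
conn = psycopg2.connect(
    host="db.example.com",    # placeholder hostname
    dbname="app",
    user="app",
    password="secret",
    sslmode="verify-full",    # verify the cert chain AND that it matches the host
    sslrootcert="/etc/ssl/certs/ca-certificates.crt",  # OS CA bundle (Debian-style path)
)
```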

                                                                1. 2

                                                                  Right, the clients need to use the OS’s CA store to verify that the certificate is for the domain you think it’s for. It seems like a solvable problem, but there’s a chicken-and-egg component: clients don’t ship with the ability to do normal domain validation, and certbot doesn’t support Postgres out of the box.

                                                                  1. 2

                                                                    Yup, in the longer term, making it trivial to use TLS with postgres, and changing all the clients to strict validation, would be the way to go.

                                                                2. 1

                                                                  How about having the server prove knowledge of a secret that the client knows on connection? The secret could be in the PG URL (could even use the user + password because these are large random strings in most cloud deployments).

                                                                  If no new info is added to the URL then no action is required beyond adding this functionality to the code of clients and servers and then eventually enabling it by default.

                                                                  1. 1

                                                                    The usual way to do this is to have a private Certificate Authority, and in “verify-full” mode the client expects the server to present a certificate signed by that authority. That works, but it’s an unnecessary hassle. Just have the server prove to Let’s Encrypt that it controls the domain and LE can sign a cert, like they do for millions of websites.

                                                                  2. 1

                                                                    I don’t know of any specialized tool supporting Postgres, but configuring TLS using the certbot-created certificates and having certbot reconfigure Postgres shouldn’t be that much work, even if you have to do it by hand?

                                                                    1. 1

                                                                      Sure. The problem is that the clients don’t verify that properly configured certificate appropriately before sending credentials. So an attacker who gets in the middle of a client/server connection can just present any old invalid certificate they care to, get the client’s credentials, and relay them to the server.

                                                                      Getting postgres clients to appropriately validate the TLS handshake “shouldn’t be that much work” either, I suppose. But it’s harder than it should be, unfortunately.

                                                                      1. 1

                                                                        Yeah, it’s not so hard, you just copy the certs in a post-renewal hook, but it should work out of the box like it does for Apache and Nginx.
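
                                                                        A rough sketch of such a hook in Python (paths and domain are made up, and the exact locations vary per distro):

                                                                            #!/usr/bin/env python3
                                                                            # Sketch of a certbot deploy hook (certbot ... --deploy-hook <this script>) that
                                                                            # copies the renewed certificate into the Postgres data dir and reloads it.
                                                                            import os
                                                                            import shutil
                                                                            import subprocess

                                                                            LIVE = "/etc/letsencrypt/live/db.example.com"    # hypothetical domain
                                                                            PGDATA = "/var/lib/postgresql/data"              # varies per distro/packaging

                                                                            for src, dst in [("fullchain.pem", "server.crt"), ("privkey.pem", "server.key")]:
                                                                                target = os.path.join(PGDATA, dst)
                                                                                shutil.copy(os.path.join(LIVE, src), target)
                                                                                shutil.chown(target, user="postgres", group="postgres")
                                                                                os.chmod(target, 0o600)  # the key must only be readable by postgres

                                                                            # Recent Postgres versions re-read the certificate files on reload.
                                                                            subprocess.run(["pg_ctl", "reload", "-D", PGDATA], check=True)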

                                                                    1. 2

                                                                      Someone had fun with the puns :)

                                                                      1. 2

                                                                        Interesting. My own feed reader only consumes podcast feeds and I’ve not run into any of these problems, but that is just to say that it’s a toy project that reads the ~10 podcast feeds I occasionally listen to.

                                                                        1. 1

                                                                          Interesting. My own feed reader only consumes podcast feeds and I’ve not run into any of these problems, but that is just to say that it’s a toy project that reads the ~10 podcast feeds I occasionally listen to.

                                                                          Lucky!

                                                                          I think a while back I found that podcast feeds could differ slightly in how they end up storing podcast metadata / artwork. Particularly if it’s iTunes/Google Music. I’m not brave enough to revisit.

                                                                          1. 2

                                                                            I’d expect podcast feeds overall to be more standardized than random other feeds, because they care much more about being readable by a few standard aggregators. Probably not quite as strict nowadays as back in the day when iTunes was the podcast thing everyone cared about, but the standards it requires are still more likely to be paid attention to.

                                                                            1. 2

                                                                              Yeah, as I tried to hint at, I think I am only reading the canonical URL, title, and media URL, and maybe the length of the media if given. Nothing with previews, images, etc. - hence a toy project that exactly fills my own needs (and one of my very rare non-open-source projects).

                                                                              1. 1

                                                                                For podcasts pretty much everyone uses the iTunes extensions.
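
                                                                                If it helps, the iTunes bits live in their own XML namespace inside the regular RSS; a minimal Python sketch of reading them (feed.xml being a saved copy of any podcast feed):

                                                                                    import xml.etree.ElementTree as ET

                                                                                    ITUNES = "{http://www.itunes.com/dtds/podcast-1.0.dtd}"

                                                                                    channel = ET.parse("feed.xml").getroot().find("channel")

                                                                                    # Channel-level artwork comes from the itunes:image extension.
                                                                                    image = channel.find(ITUNES + "image")
                                                                                    print(image.get("href") if image is not None else "no itunes:image")

                                                                                    for item in channel.findall("item"):
                                                                                        enclosure = item.find("enclosure")  # the actual media file
                                                                                        print(
                                                                                            item.findtext("title"),
                                                                                            item.findtext(ITUNES + "duration"),  # e.g. "42:17"
                                                                                            enclosure.get("url") if enclosure is not None else None,
                                                                                        )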

                                                                            1. 15

                                                                              I’m uneasy about this, given the well-known downsides to forking. SQLite is already pretty extensible through UDFs, virtual tables, virtual filesystems. It’s also not the most hacking-friendly codebase, being written in dense, somewhat archaic C.

                                                                              1. 12

                                                                                Forking is basically the core point of open source, people don’t do it enough.

                                                                                1. 4

                                                                                  No, it just creates more confusion and headaches. Imagine if we had 15 different versions of nginx or mysql or whatever else runs this world of ours. Every time you fork the complexity to understand it all multiplies.

                                                                                  1. 5

                                                                                    Why would you ever release software with a free license that allows people to make their own version if you don’t want them to make their own version? Just because there aren’t 15 websites for nginx or mysql or whatever doesn’t mean there aren’t tens of thousands of functional forks out there in the wild in private use. Making a public fork is a practical act, the same way that making a private one is. If you really don’t want people to do that, don’t do open source.

                                                                                    1. 5

                                                                                      We do have those exactly now. They’re just rebranded as ‘AWS Aurora’, ‘CloudSQL’, ‘TimescaleDB’, and so on.

                                                                                      1. 2

                                                                                        Only because version control software is inadequate. Competitors should be able to agree on things to ease the complexity burden without needing to break things out into separate libraries.

                                                                                        1. 7

                                                                                          I don’t understand. A fork is typically done over disagreement on the direction of the project. That is a social issue and no amount of tooling can fix that.

                                                                                          See mysql vs. mariadb or OpenOffice.org vs LibreOffice or ownCloud vs. NextCloud etc. etc.

                                                                                          1. 8

                                                                                            A fork is typically done over disagreement on the direction of the project.

                                                                                            This is a problem. People have got a bad taste for “forking” because most famous forks were like this. Yet it doesn’t have to be this way. Ubuntu, Mint, and all the other Debian forks are generally not hostile to Debian. Conversations has a large cloud of friendly forks that add their own spin or functionality. It doesn’t have to be about a fight.

                                                                                            1. 4

                                                                                              A disagreement doesn’t have to be complete. There will still be some things they agree on, otherwise it wouldn’t be a fork, it would be a new project from scratch.

                                                                                              1. 1

                                                                                                E.g. some projects mostly keep patchsets on top of upstream projects around. Is that the kind of thing you think better VCS could help with?

                                                                                                1. 1

                                                                                                  I have really a lot to say on the topic, the entry point is datalisp.is.

                                                                                                  But the short answer to your question is “yes” - it could also help discover and stabilize common core functionality (like those “$language; only the good parts” books or linters do to shrink the API of programming languages with footguns). Really it is about making a better web.

                                                                                        2. 1

                                                                                          I politely disagree. The core points of open source are freedom to redistribute and clear attribution/derivation.

                                                                                          1. 3

                                                                                            derivation

                                                                                            Also known as “forking”…

                                                                                            1. 3

                                                                                              | Also known as “forking”…

                                                                                              Or linking, or any number of things, one of which is forking. So yeah I don’t agree that forking is “the core point” of open source.

                                                                                        3. 11

                                                                                          What are the downsides? Splitting the contributor pool?

                                                                                          If the main project doesn’t take contributions, then that’s not an issue … the contributor pool is already split

                                                                                          Either way, I think some amount of forking is healthy, even if the original project is awesome. You can call it “parallel research”

                                                                                          I don’t know about this case, but I think, in most cases, it’s not wasting anyone’s time

                                                                                          Though one that confused me was the ffmpeg fork, I remember I had to choose between 2 things to install once, and I didn’t understand the difference

                                                                                          1. 3

                                                                                            No, the biggest downside is if this gets popular and splits the user base and then we have different feature sets available depending on which “SQLite” you are targeting and nothing is portable any more.

                                                                                            1. 10

                                                                                              That’s also true if they write their own thing that is not a fork. A fork of sqlite is not sqlite and should not be expected to be. It is its own thing

                                                                                              1. 5

                                                                                                Eh the whole point of forking is to add features! They list 3 at the end.

                                                                                            2. 4

                                                                                              I agree. I will probably never use this, however if they pilot replication hooks and drh picks up some version of the idea, I’ll be happy. Even though SQLite is written in an archaic, opinionated dialect of C, drh is not at all opposed to picking up modern ideas and paradigms.

                                                                                              Native WASM UDFs are stupid though. Every SQLite library that I know of allows defining UDFs in the native language, since all you have to do is hand a C function pointer and user data pointer to the SQLite library. And anything that wraps SQLite obviously can already interface with C.

                                                                                              In other words, with the current UDF interface, you can already trivially extend SQLite to run WASM UDFs with any runtime of your choice.
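
                                                                                              For example, through Python’s built-in sqlite3 binding a UDF is just a callable handed to the connection (the reverse function is only an illustration):

                                                                                                  import sqlite3

                                                                                                  def reverse_text(value):
                                                                                                      return value[::-1] if value is not None else None

                                                                                                  conn = sqlite3.connect(":memory:")
                                                                                                  # Register the Python callable as a SQL function: name, arg count, callable.
                                                                                                  conn.create_function("reverse", 1, reverse_text)
                                                                                                  print(conn.execute("SELECT reverse('sqlite')").fetchone())  # ('etilqs',)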

                                                                                            1. 5

                                                                                              Seems a bit premature… anyone can copy the source tree into an empty repository; at least give us some reason why we should take your fork seriously other than the fact that you don’t like how upstream does things.

                                                                                              1. 20

                                                                                                I absolutely just want a tiny, silent, cool ePaper laptop capable of running Linux/BSD in a purely text mode.

                                                                                                1. 9

                                                                                                  I would also love a Linux compatible e-ink laptop. I look for one from time to time, but there has never been one that has been worth the price for me. There are some things on the market that come close, but they normally have a few things that I don’t like and come with a price tag too high for me to want to compromise.

                                                                                                  1. 7

                                                                                                    Exactly what I’m dreaming of. I even asked MNT founder about it: https://mamot.fr/web/@ploum/109082688438688769

                                                                                                    I’ve written about my quest here : https://ploum.net/the-computer-built-to-last-50-years/

                                                                                                    I thought that Astrohaus was nailing it.

                                                                                                    Unfortunately, I’m really angry at Astrohaus over the Freewrite. Their software is a shame: it forces you to use a proprietary cloud and is full of bugs. My Freewrite, despite its weight, has no more battery life than my laptop. The Traveler has a very, very bad keyboard, to the point of making it unusable for me (I had to send it back because some keys were always quadrupled; now, the space bar only works if I press it really violently). See gemini://rawtext.club/~ploum/2021-10-07.gmi

                                                                                                    I’m placing all my hope on the MNT Pocket, even if I would need to adapt my layout to the keyboard. Hoping to see an e-ink version soon, to use with only a terminal. Neovim, Neomutt and Offpunk are all I need 95% of the time ;-)

                                                                                                      1. 4

                                                                                                        I’ve written about my quest here : https://ploum.net/the-computer-built-to-last-50-years/

                                                                                                        This is very interesting, thank you for sharing. One point I’m unsure about is storage… I’m not aware of any existing storage technologies that would last more than a dozen years. Mechanical drives fail because they’re just fragile, especially in a computer that can be easily moved around. SSDs/flash are less fragile, but blocks still “go bad”, though wear leveling helps a little I guess. Maybe some purpose-built SSD with a huge number of spare blocks would last 50 years?

                                                                                                        1. 4

                                                                                                          SSDs also require power, at least sporadically, for them to retain data. I’ve seen a recommendation to power up and read all the data on a SSD once yearly to make sure there’s no data loss.

                                                                                                        2. 2

                                                                                                          addendum: I think your blogpost would be worth its own submission

                                                                                                          1. 1

                                                                                                            Thanks. It has already been submitted : https://lobste.rs/s/1b1rxk/computer_built_last_50_years

                                                                                                            1. 1

                                                                                                              my bad, missed that when looking for it.

                                                                                                          2. 2

                                                                                                            It has come up in discussions around the reform in the past (I think on the reform forums), and @mntmn said that it’d be an interesting option, but at least at the time there wasn’t really anything good enough available that could be easily used.

                                                                                                            Would not surprise me to see someone do it as a modding project though, if they can find a usable panel in a close-enough size.

                                                                                                          3. 3

                                                                                                            I haven’t tried it, but the Remarkable2 is apparently running linux; https://www.mashupsthatmatter.com/blog/USB-keyboard-reMarkable2 walks through adapting it to take a USB keyboard, but looks like it’s still a bit of work.

                                                                                                            1. 1

                                                                                                              I’ve seen that it was possible to install Parabola Linux on the rM1. I haven’t tried it yet but it’s indeed a very interesting possibility.

                                                                                                              1. 1

                                                                                                                I have an rM2, and it has some Linux distro installed by default. I haven’t messed around with installing a totally separate OS, but I have used https://toltec-dev.org to install some homebrew apps as well as general Linux utilities

                                                                                                          4. 9

                                                                                                            I read this comment from my Kobo Clara HD e-reader which is running full NixOS (with Rakuten’s vendor kernel) - it’s not a laptop but it is kinda a tablet and does support OTG.

                                                                                                            I’m hoping the Kobo Clara HD 2e is similarly hackable because it has Bluetooth. I’d love to be able to use a wireless keyboard and have audio, in the future.

                                                                                                            1. 1

                                                                                                              Since writing this, I had a quick look at the Kobo Clara 2e and it looks close enough that I’m going to gamble that my existing NixOS installation might boot. Purchased for $209AUD. Let’s see.

                                                                                                              1. 1

                                                                                                                Huh, nice. I have a Kobo Clara HD, but the only hacking I’ve done to it is to install KOReader. It would be pretty nice to be able to write with it, and to have a Gemini client on it.

                                                                                                                1. 3

                                                                                                                  I tried Gemini yesterday! nix-shell -p castor:

                                                                                                                  https://i.imgur.com/TXGCfmq.jpeg

                                                                                                                  1. 1

                                                                                                                    Looks good!

                                                                                                              1. 3

                                                                                                                CNX post that regurgitates a Hackster.io post that repeats everything the original post by the author says, just less concisely. URL should probably be that source post instead? https://hackaday.io/project/187468-ello-lc1

                                                                                                                1. 4

                                                                                                                  Since people have been arguing about security around workers a lot (since it trusts V8 primitives for isolation), I’m curious if it being open is going to lead to people finding new issues there. They do state that production has additional security (see quote below), but I’d guess they can’t protect against the main criticism of mixing customers in the same process?

                                                                                                                  The Cloudflare Workers service uses the same code found in workerd, but adds many additional layers of security on top to harden against such bugs. I described some of these in a past blog post. However, these measures are closely tied to our particular environment. For example, we rely on build automation to push V8 patches to production immediately upon becoming available; we separate customers according to risk profile; we rely on non-portable kernel features and assumptions about the host system to enforce security and resource limits. All of this is very specific to our environment, and cannot be packaged up in a reusable way.

                                                                                                                  1. 1

                                                                                                                    See also the companion book: The Book of CP-System

                                                                                                                    If you want to learn about the hardware powering titles such as Street Fighter II, Ghouls’n Ghosts, or Final Fight, then “The Book of CP-System” is for you. Inside you will find the “Capcom System” (a.k.a CPS-1) explained in excruciating details, along more than one hundred explanatory drawings. The software is also covered with the description of the historical way of doing things and as well as a modern toolchain (CCPS).

                                                                                                                    1. 13

                                                                                                                      The issues with library (“crate”) organization are already apparent, and unless something is done about it relatively soon I think we’ll see a fracturing of the Rust ecosystem within 5 years. IMO the fundamental problem is that crates.io is a flat namespace (similar to Hackage or PyPI).

                                                                                                                      For example, the other day I needed to create+manipulate CPIO files from within a Rust tool. The library at https://crates.io/crates/cpio has no documentation and limited support for various CPIO flavors, but it still gets the top spot on crates.io just due to the name. There’s also https://crates.io/crates/cpio-archive, which is slightly better (some docs, supports the “portable ASCII” flavor) but it’s more difficult to find and the longer name makes it seem less “official”.

                                                                                                                      If I wanted to write my own CPIO library for Rust, it wouldn’t be possible to publish it on crates.io as cpio. I would face the difficult choice between (1) giving it an obscure and opaque codename (like hound the WAV codec, or bunt the ANSI color-capable text formatter) or (2) publishing it the C/C++ way, as a downloadable tarball[0] on my website or GitHub or whatever.

                                                                                                                      Go has a much better story here, because libraries are identified by URL(-ish). I couldn’t publish the library cpio, it would be john-millikin.com/go/cpio or github.com/jmillikin/go-cpio/cpio or something like that. The tooling allows dependencies to be identified in a namespace controlled by the publisher. Maven has something similar (with Java-style package names anchored by a DNS domain). Even NPM provides limited namespacing via the @org/ syntax.

                                                                                                                      [0] By the way, from what I can tell Cargo doesn’t support downloading tarballs from specified URLs at all. It allows dependencies to come from a crates.io-style package registry, or from Git, but you can’t say “fetch https://my-lib.dev/archive/my-lib-1.0.0.tar.gz”. So using this option limits the userbase to less common build tools such as Bazel.

                                                                                                                      1. 3

                                                                                                                        If I wanted to write my own CPIO library for Rust, it wouldn’t be possible to publish it on crates.io as cpio

                                                                                                                        library name doesn’t have to match package name. You can publish jmillikin_cpio, which people could still import as use cpio.

                                                                                                                        1. 8

                                                                                                                          Yes, but I think the point was that if I, as someone who doesn’t know anything about the Rust ecosystem, was looking for a cpio package, I probably would not go beyond the “official” cpio.

                                                                                                                          1. 7

                                                                                                                            If you had to choose between jcreekmore/cpio and indygreg/cpio, you still wouldn’t know which is the better one.

                                                                                                                            1. 12

                                                                                                                              That’s the point though I think: it makes it more obvious that you actually need to answer that question, because both look equally (not) “official”/“authoritative”.

                                                                                                                              1. 4

                                                                                                                                I think you’re really stretching this example, because short of at_and_t_bell_labs/cpio there can’t be any official/authoritative cpio package. There may be a popular one, or ideally you should get a quality one. So to me this boils down to just search and ranking. crates.io has merely text-based search without any quality metrics, so it brings up keyword spam instead of good packages.

                                                                                                                                The voting/vouching for packages that @mtset suggests would be better implemented without publishing forks under other namespaces as votes. It could be an upvote/star button.

                                                                                                                                1. 3

                                                                                                                                  If crates.io had organization namespaces, then an “official” CPIO library might have the package name @rust/cpio.

                                                                                                                                  This would indicate a CPIO package published by the Rust developers, which would be as close to “official” as putting it into the standard library.

                                                                                                                                  1. 3

                                                                                                                                    That would be good for officialness, but I think it’s neither realistic nor useful.

                                                                                                                                    We are approaching 100K crates. Rust-lang org already has more work than it can handle, and can’t be expected to maintain more than a drop in a bucket of the ecosystem. See what’s available on GitHub under rust-lang-nursery and rust-lang-deprecated.

                                                                                                                                    And official doesn’t mean it’s a good choice. You’d have rust-lang/rustc-serialize inferior to dtolnay/serde. And rust-lang/mpsc that is slower and less flexible than taiki-e/crossbeam-channel. And rust-lang/tempdir instead of stebalien/tempfile, and rust-lang/lazy_static instead of matklad/once_cell.

                                                                                                                                    1. 1

                                                                                                                                      We are approaching 100K crates. Rust-lang org already has more work than it can handle, and can’t be expected to maintain more than a drop in a bucket of the ecosystem.

                                                                                                                                      Yep! That’s true! In a healthy ecosystem, the number of official packages is extremely small as a percentage. Look at C++, C#, Java, Go – there might be a few dozen (at most) packages maintained by the developers of the language, compared to hundreds of thousands of third-party packages.

                                                                                                                                      And official doesn’t mean it’s a good choice. You’d have rust-lang/rustc-serialize inferior to dtolnay/serde. And rust-lang/mpsc that is slower and less flexible than taiki-e/crossbeam-channel. And rust-lang/tempdir instead of stebalien/tempfile, and rust-lang/lazy_static instead of matklad/once_cell.

                                                                                                                                      Also yep! And also (IMO) totally normal and healthy. The definition of “good choice” will vary between users. Just because a package is maintained by the language team doesn’t mean it will be appropriate for all use cases. That’s why Go’s flags package can co-exist with third-party libraries like github.com/jessevdk/go-flags or github.com/spf13/pflag.

                                                                                                                                    2. 1

                                                                                                                                      I wish you had put that at the top of your original post. :)

                                                                                                                                      1. 2

                                                                                                                                        I think it’s a minor point, to be honest. Even if crates.io never gets organizational namespaces, just being able to upload to a per-user namespace would be a sea-change improvement over current state.

                                                                                                                                2. 1

                                                                                                                                  Personally, I think this is a great use case for a social-web system; we’ve already seen this with metalibraries like stdx and stdcli, though none have stood the test of time. I think a namespacing system with organizational reexports could really shine; I’d publish cpio (sticking with the same example) as mtset/cpio, and then it could be included in collections as stdy/cpio or embedded/cpio or whatever. Reviews and graph data would help in decisionmaking, too.

                                                                                                                              2. 6

                                                                                                                                There’s some issues with that approach.

                                                                                                                                First, I do want the package name to match the library name, or at least be ${namespace}${library_name} where ${namespace} is something clearly namespace-ish. If I did not have this requirement then I would name crates.io packages with a UUID. And to be honest, I don’t think anyone would do that remapping – people would type use jmillikin_cpio::whatever and grumble about the arrogance of someone who uses their own name in library identifiers.

                                                                                                                                Second, a namespace provides access control. I’m the only person who can create Go libraries under the namespaces john-millikin.com/ or github.com/jmillikin/, but anyone in the world can create crates.io packages starting with jmillikin_. It’s just a prefix; it has no semantics other than implying something (ownership) to human viewers that may or may not be true.

                                                                                                                                1. 4

                                                                                                                                  And to be honest, I don’t think anyone would do that remapping – people would type use jmillikin_cpio::whatever and grumble about the arrogance of someone who uses their own name in library identifiers.

                                                                                                                                  To clarify, it’s the author of the library who sets its default name. You can have the following in Cargo.toml of your cpio library:

                                                                                                                                  [package]
                                                                                                                                  name = "jmillikin_cpio"

                                                                                                                                  [lib]
                                                                                                                                  name = "cpio"


                                                                                                                                  Users would then use jmillikin_cpio in their Cargo.tomls, but in the code the name would be just cpio.

                                                                                                                                  This doesn’t solve the problem of access control, but it does solve the problem of names running out.

                                                                                                                                  1. 3

                                                                                                                                    Yes, per my post I’m aware that’s possible, I just think it would be bad. What you propose would be semantically equivalent to using a UUID, since the package name and library name would no longer have any meaningful relationship.

                                                                                                                                    In other words, I think your code example is semantically the same as this:

                                                                                                                                    [package]
                                                                                                                                    name = "c3f0eea3-72ab-4e79-a487-8b162153cfd1"
                                                                                                                                      
                                                                                                                                    [lib]
                                                                                                                                    name = "cpio"
                                                                                                                                    

                                                                                                                                    Which I dislike, because I think that it should be possible to compute the library name from a package name mechanically (as is the idiom in Go).

                                                                                                                                2. 4

                                                                                                                                  Using a prefix doesn’t provide access control which is an important feature of namespacing. If there’s no access control, you don’t really have a namespace.

                                                                                                                                  For example, I might publish all my packages as leigh_<package> to avoid collisions with other people, but there’s nothing stopping someone else from publishing a package with a leigh_ prefix.

                                                                                                                                  This is a real problem, especially with the prevalent squatting going on on crates.io.

                                                                                                                                  For example, recently I was using a prefix for a series of crates at work, and a squatter published one of the crates just before I did. So now I have a series of crates with a prefix that attempts to act as a namespace, and yet one of the crates is spam.

                                                                                                                                  Most other ecosystems have proven that namespacing is an effective tool.