Threads for aae

    1. 3

      The only alias I use with git is a shell alias for gpo which is git push origin HEAD

      type everything else out, even status

      1. 2

        Similar. I use fish, which makes it so easy to repeat a command from history that shortening it is pointless: I type “push↑” (that’s an up arrow), and fish finds it from history in its glorious canonical form, which is how I use it:

        git push --force-with-lease origin HEAD~N:refs/heads/(whatever)
        

        Here, I do use the shorthand whatever for git rev-parse --abbrev-ref HEAD, or whatever branch I’m on. The point is not so much to shorten, but to normalize history, so I can repeat it.
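
        For anyone who wants both shorthands, here’s a minimal sketch in POSIX shell (`gpo` is the alias from the comment above; `current_branch` is a hypothetical helper name, not a standard git command):

        ```shell
        # gpo: push the current branch to origin, as described above.
        alias gpo='git push origin HEAD'

        # current_branch: hypothetical helper wrapping the rev-parse
        # invocation mentioned above; prints the checked-out branch name.
        current_branch() {
          git rev-parse --abbrev-ref HEAD
        }
        ```

        With that helper, the force-push above could be spelled `git push --force-with-lease origin "HEAD~N:refs/heads/$(current_branch)"` in a POSIX shell (fish uses `(current_branch)` directly).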

      1. 1

        I like the idea but the haikus aren’t very good… Something useless like this seems like something an LLM would be good at generating…

        EDIT: yea, it works astoundingly well…

        prompt: write me 10 haikus to serve as http error pages for code 403

        1. 1

          Here is the result of me prompting GPT-3 for it.

          1. Forbidden access, An error locks you out now, Seek permission first.

          2. 403: Stop! You lack the key to proceed, Turn back, do not pass.

          3. A forbidden gate, Your entry met with a wall, Access denied, friend.

          4. Forbidden terrain, You wander where you shouldn’t, Turn away and learn.

          5. 403: Halt! Unwelcome, you can’t proceed, Forbidden, step back.

          6. The door is shut tight, Your entry met with a block, Forbidden, go back.

          7. Permission denied, 403 stands in your way, Seek the right pathway.

          8. Forbidden fruit, beware, Temptation leads to this page, Access has been barred.

          9. Uncharted waters, Forbidden seas lie ahead, You may not sail through.

          10. Forbidden entry, A secret realm you can’t touch, Turn away, move on.

          1. 1

            Here are some GPT-4 results.

            user: Write 3 Haikus to serve as HTTP error pages for code 403. Do not use the actual HTTP code or technical terms in the poem.


            assistant: Forbidden garden,
            Silent whispers in the breeze,
            Access denied, friend.

            A secret pathway,
            Hidden from your eager eyes,
            No entry for you.

            Behind the locked door,
            Mysteries remain unseen,
            Turn back, traveler.

          1. 4

            How do I ensure that only authorized publishers are able to submit notifications to ntfy.sh/mytopic?

            1. 6

              keep mytopic a secret and only give it to authorized publishers

              1. 3

                Even if you do that, can’t any middle-box or intermediating agent see mytopic as part of a request URL?

                Ultimately I think this is fine! It just means that ntfy.sh is basically a demo server, that topic names underneath that domain provide no form of access control or delivery guarantees, and that any actual user will need to self-host with appropriate security measures. Which is, I guess, important to make clear in the docs. Specifically, there is no way for users to “choose a unique topic name” in a way that “keeps messages private”.

                1. 4

                  an intermediating agent shouldn’t be able to see anything w/ https

                  you could think of that URL as being analogous to an API key

                  1. 1

                    It would be incorrect to think of any URL as equivalent (in the security sense) to a secret like an API key.

                    1. 4

                      Discord webhooks work the same way ¯\_(ツ)_/¯

                      1. 3

                        I think any webhook works the same way 😅, as do many cloud file providers that have things like “Anyone with this link” on things like Google Docs or Dropbox… invitations to chats on systems like WhatsApp for anyone with the link (or QR code)…

                        it really all depends on what you do with the URL, and the administrative practices of the people running the site that utilizes this method of security

                        as long as you don’t misuse it, and it’s using https, and the people running the site do it knowing this is how the security works, it is absolutely secure… and, as long as everyone is aware, as secure as using an unencrypted API key…

                        1. 0

                          The consequence of an unauthorized POST to a Discord webhook is an unauthorized message in, e.g., a Discord channel. No downstream consumer would assume that an arbitrary message, whether submitted via webhook or otherwise, is actionable without additional authn/authz. So I don’t think this, or any other kind of webhook, is directly comparable. I could be wrong! If ntfy.sh topic notifications are understood by consumers to be untrusted, then no problem, and mea culpa! But that’s not what I took away from the docs.

                          1. 10

                            You seem to be dead set on finding a fatal flaw in ntfy, with quite the dedication. :-) I disagree with your assessment that the security of an API key and a secret URL are fundamentally different. And with that fundamental disagreement, our argument comes to an end.

                            On the wire, an HTTP request with an API key looks like this:

                            POST /api HTTP/1.1
                            Authorization: Bearer this-is-a-secret
                            
                            some message
                            

                            A request against the ntfy API looks like this (excluding the JSON endpoint, which is more like the above):

                            POST /this-is-a-secret HTTP/1.1
                            
                            some message
                            

                            The only difference is that the secret is in a different spot in the HTTP request.

                            • You made an argument that you cannot rely on TLS: That is completely flawed, because if you cannot trust TLS, then your header-based auth also falls apart.

                            • You also made an argument saying that you cannot rely on people making HTTPS requests. That also applies to the traditional Bearer/Basic/whatever auth.

                            • IMHO, the only valid argument to be made is the one that the HTTP path is cached and prominently displayed by browsers. That’s correct. That makes it less secure.

                            ntfy is a tool usually used for casual notifications such as “backup done” or “user xyz logged in”. It is geared towards simplicity: simple, simple, simple. It doesn’t do end-to-end encryption, and the examples (partially at least) suggest the use of HTTP over HTTPS (for curl). So yes, it’s not a Fort Knox-type tool. It’s a handy tool that makes notifying super simple and, if used right, is just as secure as you’d like. But yes, it can also be used in a way that is less secure, and that’s okay (for me, and for many users).

                            I really didn’t want to get into such a (what it feels like) heated discussion. I just wanted to show off a cool thing I did …

                            1. 5

                              Technically, I agree with you that secret links and API keys are the same. I also agree that secret links are a simple, adequate solution for a simple service like ntfy.

                              When reasoning about the security of secret links, I’d encourage you to also think about the practicalities of how people tend to use links: it’s extremely easy to share them, and people treat them more as public information. This can be seen in the behavior of some tools that automatically upload and store them elsewhere without encryption, e.g. browser history sync. IIRC this also led to leaked password reset links when Outlook automatically scanned users’ emails for links and added them to the Bing index.

                              1. 4

                                I just wanted to show off a cool thing I did …

                                Sorry! My intent is definitely not to find some fatal flaw. I’m providing feedback as requested:

                                I’d love feedback to the tool

                                1. 3

                                  Haha. I suppose I got what I asked for :-)

                              2. 1

                                The consequence of an unauthorized POST to a Discord webhook is an unauthorized message in, e.g., a Discord channel.

                                Which can be catastrophic. I’ve heard many stories of crypto scams that were fueled by a hacked “official” project Discord account sending out a scam phishing link or promoting a pump-and-dump scheme.

                                1. 1

                                  You can also DELETE a discord webhook

                              3. 3

                                why?

                                1. 1

                                  In what way could/would a URL (containing a long random string) not be a secret in that sense?

                              4. 2

                                Or a user is sending notifications like “The dishwasher is done” or “The Mets lost” and there’s no need for security.

                                1. 3

                                  Sending notifications to what? If you can

                                  curl -XPOST -d 'The Mets lost' ntfy.sh/fly
                                  

                                  then I can

                                  curl -XPOST -d 'Spam' ntfy.sh/fly
                                  

                                  right?

                                  1. 3

                                    Sure. But who cares? Then I abandon my channel and switch to a new one. They can’t find it, because I’m using https and they can’t MITM anything useful.

                                    1. 6

                                      If your use case allows you to abandon one topic and switch to a new topic on an ad-hoc basis, that’s great, but it’s not something that most applications are able to do, or really even reliably detect. This is all fine! It just means that any domain that provides unauthenticated write access to topics is necessarily offering a relatively weak form of access control, and can’t be assumed to be trusted by consumer applications. No problem, as long as it’s documented.

                                      1. 5

                                        I should really write this up as a FAQ, because it comes up so much. :-) First off, thanks for the vivid discussion on ntfy. I love reading feedback on it. Much appreciated.

                                        The original premise of ntfy is that topics are your secret, so if you pick a dumb secret, you cannot expect it to be or remain private. So ntfy.sh/mytopic is obviously just a demo. Most people in real life pick a unique-ish ID, something like a UUID or another password-like string. Assuming your transport is encrypted, this is no less or more secure than using an Authorization header with a bearer token (other than the notable difference that it’s in the server logs and such).
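
                                        As a sketch of that premise (assuming `openssl` for randomness; the message and generated topic are made up), picking a password-like topic looks like:

                                        ```shell
                                        # Generate a 32-hex-char topic name; guessing it is as hard
                                        # as guessing a 128-bit API key.
                                        topic="$(openssl rand -hex 16)"
                                        echo "$topic"

                                        # Publishing would then be (commented out: hits the live server):
                                        #   curl -d "backup done" "https://ntfy.sh/$topic"
                                        ```

                                        The security then rests entirely on the topic staying out of logs, histories, and shared links, exactly as with a bearer token.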

                                        If you want your topics to be password-protected in the traditional sense (username/password or a token), you can do so by using the fine grained access control features (assuming a selfhosted instance), or by reserving a topic (screenshots). This can also be on the official instance, assuming you pay for a plan.

                                        1. 3

                                          Most people in real life pick a unique-ish ID, something like a UUID or another password-like string. Assuming your transport is encrypted, this is no less or more secure than using an Authorization header with a bearer token (other than the notable difference that it’s in the server logs and such).

                                          It simply is not true that a URL containing a “unique-ish ID” like a UUID is “no more or less secure” than using an authorization header, or any other form of client auth. URLs are not secrets! Even if you ensure they’re only requested over HTTPS – which you can’t actually do, as you can’t prevent clients from making plain HTTP requests – it’s practically impossible to ensure that HTTPS termination occurs within your domain of control – see e.g. Cloudflare – and in any case absolutely impossible to ensure that middleboxes won’t transform those requests – see e.g. Iran. There are use cases that leverage unique URLs, sure, like login resets or whatever, but they’re always time-bounded.

                                          If you want your topics to be password-protected in the traditional sense (username/password or a token), you can do so by using the fine grained access control features (assuming a selfhosted instance), or by reserving a topic (screenshots). This can also be on the official instance, assuming you pay for a plan.

                                          If you pay to “reserve” a topic foo, does that mean that clients can only send notifications to ntfy.sh/foo with specific auth credentials? If so, all good! 👍

                                          1. 6

                                            , as you can’t prevent clients from making plain HTTP requests

                                            Well, that’s the client’s fault? The client leaking their secrets is just as possible with an authorization header.

                                            it’s practically impossible to ensure that HTTPS termination occurs within your domain of control

                                            It’s trivial to do this. I don’t understand and I don’t see how an authorization header is different.

                                            but they’re always time-bounded.

                                            No they aren’t. Unique URLs are used all the time. Like every time you click “Share” on a document in Paper/Drive and it gives you some really long URL.

                                            1. 3

                                              We’re discussing “Capability URLs” as defined by this W3C doc which says that

                                              The use of capability URLs should not be the default choice in the design of a web application because they are only secure in tightly controlled circumstances. However, in section 3. Reasons to Use Capability URLs we outlined three situations in which capability URLs are useful:

                                              1. To avoid the need for users to log in to perform an action.
                                              2. To make it easy for those with whom you share URLs to share them with others.
                                              3. To avoid authentication overheads in APIs.

                                              and further dictates (among other constraints) that

                                              Capability URLs should expire

                                              1. 2

                                                I don’t really care about that doc tbh

                                                edit: To elaborate slightly, I’m extremely familiar with capabilities

                                            2. 1

                                              Sure, it’s not the most massively secure thing in the world, but anyone using this service can be confident their client isn’t making plain HTTP requests, or else they’d pick something normal. I don’t know why my HTTPS termination would be at Cloudflare unless I’d set it up (or ntfy started using it), and even if it were, of all people, I trust Cloudflare the most not to spam me. It’s not that big a deal.

                                              1. 1

                                                anyone using this service can be confident their client isn’t making plain HTTP requests

                                                $ curl -vv -XPOST -d 'foo' ntfy.sh/bar
                                                *   Trying 159.203.148.75:80...
                                                * Connected to ntfy.sh (159.203.148.75) port 80 (#0)
                                                > POST /bar HTTP/1.1
                                                > Host: ntfy.sh
                                                ...
                                                < HTTP/1.1 200 OK
                                                < Server: nginx
                                                ...
                                                {"id":"P5DjRcNMBt3l","time":1682933054,"expires":1682889854,"event":"message","topic":"bar","message":"foo"}
                                                

                                                Looks like plain HTTP to me.

                                                1. 2

                                                  To clarify, an application developer using this service, being the type of developer to use a service like this, would be able to feel confident an application request to this web service is via HTTPS.

                                              2. 1

                                                you can’t prevent clients from making plain HTTP requests

                                                You can either refuse to listen on port 80, or you can detect they’ve transmitted it in the clear and roll that key.
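
                                                A sketch of the “roll that key” half of that (assuming `openssl`; `rotate_topic` is a hypothetical helper, not part of ntfy):

                                                ```shell
                                                # rotate_topic: hypothetical helper that abandons a possibly-leaked
                                                # topic name and generates a fresh random replacement.
                                                rotate_topic() {
                                                  openssl rand -hex 16
                                                }

                                                old_topic="possibly-leaked-topic"
                                                new_topic="$(rotate_topic)"
                                                # Publishers and subscribers would switch to $new_topic from here on.
                                                echo "rotated: $old_topic -> $new_topic"
                                                ```

                                                Refusing to listen on port 80 would cover the first option; cleartext detection plus a rotation like this covers the second.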

                              1. 4

                                God I miss the days of native applications. Why does every app now have to be web based?

                                1. 12

                                  Because it’s better economics for the app makers.

                                  1. 2

                                    I like to believe it’s more because writing UI code is much easier than alternatives.

                                    1. 1

                                      Maybe, but writing UI code in stuff like JavaScript seems harder and more error-prone than the classic alternative GUI builders that were made for native platforms. The point is, of course, that what you write for a browser will run on any platform the browser runs on.

                                      1. 2

                                        Classic UI builders let you choose your screen size, and it was a PITA to target multiple resolutions.

                                        A pity, because they were so much better than the mess we have now.

                                        1. 4

                                          Classic UI builders let you choose your screen size, and it was a PITA to target multiple resolutions.

                                          That depended a lot on the toolkit. NeXT and Qt, to name just two examples, were actually pretty good at “fluid” layouts – they targeted platforms with high-res capabilities from the start, so they had to support applications whose windows could be resized anywhere between 640x480-ish and 2048x1536-ish. It’s Visual Basic, VCL to some degree (and, if anyone remembers Glade, GTK, early on) that really sucked at this, and gave people GUI builder PTSD.

                                          Web-based technologies have brought us a lot of useful things, like React’s component-driven system. But in my subjective experience, putting together a GUI for a single device class with the best web-based tools is still an inferior experience compared to the best native tools from twenty years ago.

                                          Crossing device classes is a whole other story. I don’t know of any toolkit that does it well, but that’s also, in part, because early on, device vendors realised it’s probably a bad idea to even try it in the first place. Application vendors obviously disagreed, but 15 years after the iPhone, it’s still pretty easy to tell an application that uses web-based components from one that doesn’t – the slow one with bad scrolling is usually the web one.

                                  2. 7

                                    Because it is instantly deployed and reachable by everyone with an internet connection. This includes people in poorer countries with $70 phones.

                                    Those questioning which one is better were probably not everyday users of computers before the mid-2000s. Desktop apps were a breeze to develop, and their UX was superior in all aspects except perhaps how trendy their visuals were. Keyboard navigation almost always covered the whole functionality; latency, screen real estate usage, intuitiveness, etc.: pretty much everything was superior. Even networked applications, ironic as that might sound.

                                    Having to provide a binary, convince the user to install it, or even buy it… It was just a value offer that didn’t stand a chance against signing up for an online service.

                                    Heck, I am typing this on my phone right now on lobste.rs, using a web app no less. On my phone.

                                    A little fun fact from an acquaintance of mine: a medium-small accounting company almost closed down a couple of years ago, because they replaced a rock-solid desktop app they had used since 1993 with a modern web-based alternative. Accountant productivity took a nosedive, many customers left, and costs were dangerously approaching revenue.

                                    I think serious productivity is still a non goal for 99% of all web apps.

                                    1. 1

                                      It should be a goal for more than 1% for sure… Especially in the web apps we live in day by day…

                                    2. 3

                                      Because native GUI dev is horribly painful in comparison?

                                      1. 4

                                        And web development is any better?

                                        But yes, native GUI development has gotten worse over the years. Visual Basic allowed many non-programmers or barely-programmers the ability to write software, and a friend of mine still laments the development system on the NeXT, which made building applications nearly trivial.

                                        1. 3

                                          I’d love to know your friend’s perspective on how well or poorly Cocoa and Xcode’s version of Interface Builder preserved NEXTSTEP’s development ease in macOS and iOS.

                                          1. 1

                                            He used NeXTSTEP back in the early 90s, and then went on to work almost exclusively on Linux. These days he does use macOS, but not for native application development (he mostly does command line stuff).

                                        2. 3

                                          It might look this way because GUI web devs skip 50% of the things that are normally required when using a native GUI framework:

                                          • error handling (“something went wrong, please click f5” is not error handling),
                                          • UI patterns (every website uses different patterns),
                                          • standard controls (it’s not clear what is clickable and what is a simple text),
                                          • keyboard handling (websites that can be conveniently operated by keyboard alone are very rare).

                                          Add in the missing pieces and I think it will be clear that it’s actually the other way around.

                                          1. 2

                                            What kind of pain do you mean? Forgive my ignorance; my experience is kind of specialized.

                                            I find native GUI dev mostly delightful these days because declarative UI tooling has arrived. SwiftUI, for instance, learned lessons from React and Flutter. Prototyping and iteration are both rapid. Jetpack Compose seems about the same. I have to imagine other systems would be somewhere on the same track?

                                            1. 4

                                              With native GUI dev there’s no REPL or inspector (afaik), you need (in practice) to use an IDE, and there’s a lot of up-front complexity (whereas with the web you can add the more complicated parts in stages as you need them).

                                              Like when I’m making a client side web app my usual workflow is

                                              • come up with idea for app (usually a little thing I think will be useful for myself or my friends)
                                              • copy index.html, script.js, style.css from my template loading dependencies from esm.sh so I don’t need to bundle yet
                                              • start writing a basic UI with either vanilla JS or preact
                                              • prototype the functions that actually do things using the js console
                                              • hook them up to the ui
                                              • add css using the browser devtools
                                              • if the project is getting more serious i’ll
                                                • set up bundling
                                                • make an icon and set up opengraph tags
                                                • potentially rewrite in typescript

                                              I don’t know of a way to replicate anything remotely like my usual workflow with native GUI dev. It’s very important that I’m able to get a barely-working thing started super fast so I don’t lose interest and give up, and the web just works really well for that, I guess.

                                              1. 4

                                                I think a lot of this boils down to what we’re used to. My workflow for GUI tools (with the caveat that I don’t need to do those too often) is basically the same as yours, except it uses Qt and C++ or Python, or Tk and Python for really “quick & dirty” hacks. I don’t really need to use an IDE for either. Nowadays I use QtCreator because it’s there but years ago I knew the Qt API well enough that I just used Emacs.

                                                A good debugger isn’t always a good substitute for the inspector, but it’s also been my experience that, unless I engage in all sorts of weird QtQuick skullfuckery which I still think shouldn’t have been a thing in the first place BUT ANYWAY, I don’t really need one. Even crowded UIs (think dozens of widgets, some of them dynamically-built) tend to be easy to navigate without an inspector if the widgets don’t sit behind six stacks of nested divs and CSS alignment hacks (which are straightforward if you’ve been doing CSS alignment hacks for your whole computer life but very much not straightforward otherwise).

                                                I don’t mean to disparage web-based UIs by this, I’m sure there’s someone out there thinking sure, CSS alignment hacks are kinda nasty but they’re a breeze compared to hoodoo shit like Qt’s event loops. I just want to point out that ease of development is very much a subjective experience. For me, even with things like React, which are a huge improvement over back when I had to do that shit by hand with jQuery (cue “that belongs in a museum!” Indiana Jones reference), writing web-based UIs is hell, and I secretly hope someone will just port OpenStep to WASM and release me from this neverending “how do I just fucking place these things next to each other in CSS” punishment cycle.

                                                1. 2

                                                  I’m not super experienced in native GUI development, and I’m just starting to dip my toes into it with the Common Lisp bindings for GTK4. As far as the initial complaints, GTK has a pretty good inspector, and with the Common Lisp bindings, there is a REPL, and the usual Lisp image-based redefinition of functions and so on; you can live-code the GUI by re-evaluating the macro that defines the application. I suspect, but don’t know for sure, that you can do the same in Python or JavaScript, but not C or Vala. I’m using Emacs.

                                                  My main gripe so far is that it seems like a hard either-or between using a builder to lay out your widgets and so on vs. building the UI up programmatically, but that may just be a matter of my inexperience.

                                                  1. 2

                                                    My main gripe so far is that it seems like a hard either-or between using a builder to lay out your widgets and so on vs. building the UI up programmatically, but that may just be a matter of my inexperience.

                                                    I think that’s kind of specific to GTK, which has a complicated history with GUI builders and has pretty much shunned them lately from what I’ve heard. Back when I was doing a lot more GUI development (2012-ish) it was pretty easy to mix the two in Qt – I wrote a lot of code that, tl;dr, programmatically built some widget hierarchies inside builder-created interfaces.

                                                    1. 1

                                                      Oh, that sounds nice, I should try that!

                                                    2. 1

                                                      Speaking for iOS/macOS, it’s true that you’ll need the main IDE to work on GUIs, and its download/install size is obscene. It’s like the Call of Duty of IDEs; I don’t get it. But the fast project setup is there, the rapid iteration is there, as are the inspector, REPL, and other stuff you’re talking about.

                                                      Just an overview for those who haven’t tried native dev lately:

                                                      Basic coding methods available include the Swift REPL and SPM, a Jupyter-like thing called Playgrounds, an official VS Code+LSP extension, and Xcode’s project templates that scaffold various kinds of hello world in one step.

                                                      The UI workflow is based on SwiftUI Previews, with which you edit view code next to a preview canvas showing either a WYSIWYG layout editor or a live instance with hot reloading (half a second or so). You don’t have to choose between a layout tool and pure code views; instead you write a mostly declarative DSL kind of like a C-family JSX, and the layout editor can alter that code, or you just type it and see it take effect. This works one view or screen at a time; you can set up a view’s previews with sample data or real backing services. The closest comparison I could make is to React+Storybook. I like to build apps iteratively as working wireframes, then style them minimally later.

                                                      For inspecting at runtime, there’s lldb, the view debugger (think web inspector for native—the 3D layer breakout is really useful), memory graph debugger, GPU shader debugger, and a huge variety of profiling tools in Instruments.

                                                      Now I really wish we had some kind of live REPL attached to the editor like SLIME. I think Swift is just too statically compiled by nature to get there… hot reloading surprised me though, and macro support is coming, so you never know.

                                                      I just have to think MS is also trying to provide this kind of cohesive, complete, productive toolchain. I can’t speak for Linux and friends; maybe it’s just too big an effort to organize all this except where a browser has done that work? Or maybe it’s happening and I just don’t know more of the landscape.

                                                2. 2

                                                  What do you think the benefits of native applications are?

                                                  1. 7

                                                    The big one is user experience consistency across the whole system. It’s easier to achieve and more likely to happen when apps are built with native tooling and product teams are mindful of native behavior.

                                                    For instance, on macOS, every app has File, Edit, other stuff, Window, Help. You’ve never used this app before, but you damn well know how to print, because the first app you ever used already taught you it’s in the File menu toward the bottom. Every user action goes in the menu first, so the menu is a complete index of what the app can do. That makes the app’s capabilities searchable, too, from the Help menu. The menus and items teach you the app’s objects and capabilities and their keyboard shortcuts, while you use the app, at your pace. But they also teach you about the computer in general because other apps match.

                                                    Another convention is that the main object types in the app get their own window type, and same-type windows can be docked together as tabs or split apart again. In Mail you can see a mailbox plus an optional focused message, but you can also open 15 messages in their own windows or tabs.

                                                    Now open Slack, an Electron app. See how its menu is sleeping on the job and doesn’t include that much of the app’s functionality. Try to open three conversations in three tabs or windows. Tabs would really help triage a lot of unreads. Windows would let you organize conversations among other apps’ related windows according to user tasks—like windows are for in a windowing operating system—but you get one window with two panes, tops. It’s a disaster as soon as you have several competing priorities to handle. In this regard Mail is way better, even though Slack’s entire job is to be better than email. But this single-window Electron app is also compulsory at work for lots of us.

                                                    It’s not like an Electron app couldn’t get the details right for each operating system it runs on, but if you were using the native tooling, the defaults would have helped in each platform’s case. It also has to be harder to keep a native behavior mindset when you’re living in a multi-platform abstraction layer, even if you mean to.

                                                    1. 6

                                                      Faster response for one (c is not just a good idea, it’s the law). Two, it keeps working even offline. Three, the interface is consistent (or should be) with the rest of the operating system. Four, it doesn’t gratuitously change the UI or work flow on me (yes, apps can be updated, but at least I get a warning, and can get an indication from others if it’s a “breaking change” or not). Five, I can keep using the app as long as I want (that is, if it doesn’t force some licensing check on me—seriously, I miss the days of owning software and not just renting it).

                                                      1. 2

                                                        I miss the days of owning software and not just renting it

                                                        This is interesting to think about… It could be a reason why FOSS software continues to be strong and even a bit more well known as time goes on. The current market of “X as a service” really helps the FOSS ideology.

                                                  1. 3

                                                    Culture Amp’s engineering leadership maintains an internal “Tech Radar” that lists technologies in four categories: “adopt”, “experiment”, “contain”, and “hold.”

                                                    Reminds me that making a tech radar is on our list of things to do…. There are a lot of things I would like to put in that “contain” category

                                                    1. 6

                                                      I’d suggest that auto-update is asking too much–have a functionality to check if updates are available, but leave the sysadmin work to the users and the ecosystem managers. If you do that, then you can just throw up a pile of binaries under Github releases or whatever.

                                                      1. 2

                                                        So is it ok for the binary to phone home regularly to check for update? I thought most people would be uncomfortable with that.

                                                        1. 4

                                                          Oh, no no no, not unless enabled explicitly. I mean, have a --check-updates argument or whatever so I as a user can script it or check myself.

                                                          1. 2

                                                            Fortunately, most distros will remove or disable your phone-home code.

                                                            1. 4

                                                              I’m not planning to phone home. I want to give users an option to easily upgrade e.g. brew upgrade or pip upgrade.

                                                            2. 1

                                                              i do this for some internal work tooling… but it doesn’t “phone home” it just hits an API for where my binaries are uploaded…

                                                              if you have releases in github, take the example of casey’s excellent just:

                                                              ❯ curl -L -s https://api.github.com/repos/casey/just/releases\?page\=1\&per_page\=1 | jq '.[0].name' -r
                                                              1.13.0
                                                              

                                                              in my tool i take this output and compare it to the tool’s current version and output a message if it’s older (“hey, a new update is available”)

                                                              of course i fail gracefully and quickly if connectivity isn’t there (short timeout)
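                                                              a rough sketch of that whole flow in shell (the baked-in version string here is just a placeholder):

                                                              ```shell
                                                              # compare the tool's current version against the newest GitHub release name;
                                                              # print a notice only when they differ, and stay quiet if anything failed
                                                              notify_if_newer() {
                                                                current="$1"; latest="$2"
                                                                if [ -n "$latest" ] && [ "$latest" != "$current" ]; then
                                                                  echo "hey, a new update is available: $latest (you have $current)"
                                                                fi
                                                              }

                                                              # short timeout so we fail gracefully and quickly if connectivity isn't there
                                                              latest=$(curl -L -s --max-time 2 \
                                                                "https://api.github.com/repos/casey/just/releases?page=1&per_page=1" \
                                                                | jq -r '.[0].name' 2>/dev/null)
                                                              notify_if_newer "1.12.0" "$latest"
                                                              ```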

                                                              i wouldn’t call that “phoning home”

                                                              1. 2

                                                                i wouldn’t call that “phoning home”

                                                                Users would - it’s still giving you an idea of how many people are using it and from where (source IP address), and you could start shipping extra information in the URL if and when you please. But if it’s just for work, who cares.

                                                                1. 1

                                                                  you certainly CAN, but this is going to github’s API, so the only one collecting data is possibly github, and there is a distinction there for a user, I think…. this is very different from running actual telemetry… which is useful in its own right

                                                          1. 2

                                                            Write a program that compiles and runs in two different programming languages.

                                                            i wrote fizzbuzz in ruby and crystal!

                                                            1. 14

                                                              is the problem really Alpine, or musl? i mean yea, Alpine uses musl, but it’s even mentioned in the article that DNS over TCP isn’t enabled by design, why not explore that a little more in the article?

                                                              1. 12

                                                                It’s a flaw in musl, but using musl outside Alpine is … extremely rare, as far as I can tell.

                                                                The real question in my mind is why people continue to use musl and Alpine when it has such a serious flaw in it. Are they unaware of the problem, or do they just not care?

                                                                1. 14

                                                                  I don’t know that I’d call it a “flaw” rather than a “design choice”.

                                                                  The DNS APIs in libc (getaddrinfo and gethostbyname) are poorly designed for the task of resolving DNS names (they are blocking and implementation-defined). musl implements these in a simple manner for simple use cases, but for anything more involved the recommendation of the musl maintainers is to use a dedicated DNS resolver library.

                                                                  This article goes into a bit more depth, but at the end of the day I think it’s a reflection of the different philosophy behind musl more generally (which is why I call it a “design choice” instead of a “flaw”).

                                                                  1. 4

                                                                    “Better is different” means people will get mad at you for trying to make things better. :-)

                                                                    1. 6

                                                                      Better is different doesn’t imply that different is better. The getaddrinfo function is the only moderately good way of mapping names to hosts without embedding knowledge of the lookup mechanism in the application. Perhaps a modern Linux system could have a DBUS service to do this, but that would add a lot more to containers (if containers had a sane security model, this is how it would work, code outside the container would do the lookup, and the container would not be able to create sockets except by asking this service, but I digress).

                                                                      The suggestion to use a DNS library misses the point: DNS should be an implementation detail. The application should not know whether the name is resolved via a hosts file, DNS, WINS, or something custom for microservice deployments. The decision on Alpine means that you need to encode that as custom logic in every program.
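                                                                      You can see that abstraction from the shell, too; getent is a thin wrapper over the same lookup machinery:

                                                                      ```shell
                                                                      # getent resolves names through the same NSS pipeline (/etc/nsswitch.conf)
                                                                      # that getaddrinfo uses, so the answer may come from /etc/hosts, DNS, mDNS,
                                                                      # or any other plugin; the caller neither knows nor cares which.
                                                                      getent hosts localhost
                                                                      ```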

                                                                      1. 2

                                                                        The decision on Alpine means that you need to encode that as custom logic in every program.

                                                                        I think that’s a bit dramatic. Most applications won’t do a query that returns a DNS response bigger than 512 bytes, because setting up TCP takes at least three times as long as the UDP exchange, and that pisses off most users, so most sites try to make sure it isn’t necessary to show a website to sell people things; very, very few people outside of the containerverse will ever see it happen.

                                                                        Most applications just do a gethostbyname and connect to whatever the first thing is. There’s no reason for that to take more than 512 bytes, and so it’s hard to lament: Yes yes, if you want 200 IP addresses for your service, you’ll need more than 512 byte packets, but 100 IP addresses will fit, and I absolutely wonder about the design of a system that wants to use gethostbyname to get more than 100 IP addresses.

                                                                        The reason why is that gethostbyname isn’t parallel, so an application that wants to use it in a parallel service will need to use threads. Many NSS providers behave badly when threaded, so desktop applications that want to connect to multiple addresses in parallel (e.g. the happy eyeballs protocol used by chrome, firefox, curl, etc) avoid the NSS api completely and either implement DNS directly or use a non-blocking DNS client library.

                                                                        1. 2

                                                                          Most applications won’t do a query that returns a DNS response bigger than 512 bytes

                                                                          Most software that I’ve written that does any kind of name lookup takes address inputs that are not hard coded into the binary. As a library or application developer, I don’t know the maximum size of a host or domain name that users of my code are going to use. I don’t know if they’re going to use DNS at all, or whether they’re going to use host files, WINS via Samba, or something else. And the entire point of NSS is that I don’t have to know or care. If they want to use some exciting Web3 Blockchain nonsense that was invented after I wrote my code for looking up hosts, they can as long as they provide an NSS plugin. If I have to care about how host names provided by the user are mapped to network addresses as a result of using your libc, your libc has a bug.

                                                                          Most applications just do a gethostbyname and connect to whatever the first thing is.

                                                                          Hopefully not, anything written in the last 20 years should be using getaddrinfo and then it doesn’t have to care what network protocol it’s using for the connection. It may be IPv6, it may be something legacy like IPX (in which case the lookup definitely won’t be DNS!), it may be something that hasn’t been invented yet.

                                                                          The reason why, is because gethostbyname isn’t parallel, so an application that wants to use it in parallel service will need to use threads.

                                                                          That is a legitimate concern, and I’d love to see an asynchronous version of getaddrinfo.

                                                                          1. 1

                                                                            As a library or application developer, I don’t know the maximum size of a host or domain name that users of my code are going to use.

                                                                            Yes you do, because we’re talking about Alpine and Alpine use-cases, and in those use-cases DNS is tunnelled into the NSS API. RFC 1035 is clear on this: names are capped at 255 octets.

                                                                            There’s absolutely nothing you or any of your users who are using Alpine can do on a LAN serving a single A or AAAA record to get over 512 bytes.

                                                                            It may be IPv6, it may be something legacy like IPX (in which case the lookup definitely won’t be DNS!),

                                                                            No it won’t be IPX because we’re talking about Alpine and Alpine use-cases. Alpine users don’t use IPX.

                                                                            it may be something that hasn’t been invented yet.

                                                                            No it won’t. That’s not how anything works. First you write the code, then you can use it.

                                                                            1. 1

                                                                              Yes you do, because we’re talking about Alpine and Alpine use-cases

                                                                              I don’t write my code for Alpine, I write it to work on a variety of operating systems and in a variety of use cases. Alpine breaks it. I would hazard a guess that the amount of code written specifically targeting Alpine, rather than targeting Linux/POSIX and being run on Alpine, is a rounding error above zero.

                                                                              I do not write my code assuming that the network is IPv4 or IPv6. I do not write my code assuming that the name lookup is a hosts file, that it’s DNS, WINS, or any other specific mechanism. I write my code over portable abstractions that let the user select the name resolution mechanism and let the name resolution mechanism select to transport protocol.

                                                                              No it won’t. That’s not how anything works. First you write the code, then you can use it.

                                                                              That is literally how the entire Berkeley socket API was designed: to allow code to be written without any knowledge of the network protocol and to move between them as required. This is how you wrote code 20-30 years ago that worked over DECNET, IPX, AppleTalk, or IP. The getaddrinfo function was only added about 20 years ago, so is relatively young, but added host resolution to this. Any code that was written using it and the rest of the sockets APIs was able to move to IPv6 with no modification (or recompile), to support mDNS when that was introduced, and so on.

                                                                              These APIs were specifically designed to be future proof, so that when a new name resolution mechanism came along (e.g. DNS over TCP), or a new transport protocol, it could be transparently supported. If a new name lookup mechanism using a distributed consensus algorithm instead of hierarchical authority comes along, code using these APIs will work on any platform that decides that name resolution mechanism is sensible. If IPv7 comes along, as long as it offers stream and datagram connections, any code written using these APIs will be able to adopt it as soon as the kernel does, without a recompile.

                                                                              1. -1

                                                                                Alpine breaks it

                                                                                Can you show a single example of a real-world environment that is broken by what Alpine is doing, and that isn’t some idiot trying to put more than one or two addresses in a response?

                                                                                I don’t know if I agree or disagree with anything else you’re trying to say. I certainly would never say Alpine is “broken” because its telnet can’t reach IPX hosts on my lan, but you can’t be complaining about that because that’d be moronic. Some of the futuristic protocols you mention sound nice, but they can tunnel their responses in DNS too and will work on Alpine just fine. If you don’t want to use Alpine, don’t use Alpine, but switching to it saved me around 60gb of ram, so I was willing to make some changes to support Alpine. This is not one of the changes I had to make.

                                                                  2. 3

                                                                    You have no good options for DNS on Linux. You can’t static link the glibc resolver, so you can either have your binaries break every time anything changes, or use musl and have very slightly broken DNS.

                                                                    There are some standalone DNS libraries but they’re enormous and have nasty APIs and don’t seem worth using.

                                                                    1. 8

                                                                      There are a great many things I dislike about glibc, but binary compatibility is one thing that they do exceptionally well. I think glibc was the first library to properly adopt ELF symbol versioning and their docs are what everyone else refers to. If they need to make an ABI-breaking change, they add new versioned symbols and keep the old ones. You can easily run a binary that was created with a 10-year-old glibc with the latest glibc shared object. As I recall, the last time glibc broke backwards binary compat was when they introduced symbol versioning.

                                                                      1. 5

                                                                        You can’t static link the glibc resolver

                                                                        The glibc resolver is site-specific, which means it’s designed to be customised by the system administrator, and static linking would prevent its primary use-case. It also has nothing to do with DNS, except that it ships with a popular “fallback” configuration that tries looking up hosts on the Internet if they aren’t managed by the local administrator.

                                                                        you can either have your binaries break every time anything changes

                                                                        Nonsense: I upgrade glibc and my existing binaries still work. You’re doing something else wrong.

                                                                        1. 1

                                                                          glibc has a quite stable ABI - it’s a major event when it breaks any sort of backwards compatibility. Sure, it’s not as stable as the Linux userspace ABI, but it’s still extremely rare to encounter breakage.

                                                                        2. 2

                                                                          The real question in my mind is why people continue to use musl and Alpine when it has such a serious flaw in it. Are they unaware of the problem, or do they just not care?

                                                                          I suppose I don’t care. I might even think of it as an anti-feature: I don’t want my kubernetes containers asking the Internet where my nodes are. It’s slow, it’s risky, and there’s no point; Kubernetes already has perfect knowledge anyway.

                                                                          If you bind-mount an /etc/hosts file (or hosts.dbm or hosts.sqlite or whatever), updates are visible instantly to every client. A trivial controller that anyone can put in their cluster solves this “problem” (if you think it’s a problem) and more.

                                                                          DNS introduces extra failure-modes I don’t need and don’t want, and having /etc/resolv.conf point to trash allows me to effectively prevent containers from doing DNS. DNS can be used to smuggle information in-and-out of the container, so having a policy of no-DNS-but-Internet-DNS makes audit much easier.

                                                                          1. 2

                                                                            I’ve seen people suggest that installing bind-tools with apk will magically solve the problem, but this doesn’t make sense to me… unless there’s some fallback to using host or dig for DNS lookups… ?

                                                                            BUT, it’s really odd to me that seemingly so many people use Alpine for smaller containers, but no one has bothered to fix the issue. Have people “moved on”? Is there another workaround people use?

                                                                            1. 2

                                                                              Is there another workaround people use?

                                                                              I have a shell script that whips up a hosts file and bind-mounts it into the container. This prevents all DNS lookups (and failure cases), is faster, and allows me to disable DNS on any container that doesn’t need access to the Internet (a cheap way to slow down attackers). It uses kubectl get -w to wait for updates so it isn’t spinning all the time.
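                                                                              a stripped-down sketch of the generation step (the surrounding kubectl get -w loop and the bind-mount plumbing are omitted; the names here are illustrative):

                                                                              ```shell
                                                                              # turn `kubectl get services -A -o json` output into hosts(5) lines; in the
                                                                              # real script this runs inside a `kubectl get -w` loop so the bind-mounted
                                                                              # file is regenerated whenever a service changes
                                                                              services_to_hosts() {
                                                                                jq -r '.items[] | "\(.spec.clusterIP)\t\(.metadata.name).\(.metadata.namespace)"'
                                                                              }
                                                                              # usage: kubectl get services -A -o json | services_to_hosts > cluster-hosts
                                                                              ```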

                                                                              I can’t think of a single advantage to Kubernetes abusing DNS for service discovery (maybe there is one with Windows containers or something else I don’t use), but there are substantial performance and security disadvantages, so I don’t even bother with it.

                                                                        1. 12

                                                                          One of the key benefits of zVisor is its use of the Zig programming language.

                                                                          I am at this point completely fed up with the implementation language of a program being touted as a feature or “key benefit” of that program. Microsoft GitHub encourages this behavior because if you want your repo to attract contributors or certain flavors of users it probably helps your SEO to put the language in your description. But the choice of language is a developer benefit, not a user benefit. Crowing that you’re wise enough to use Haskell/Zig/Go/Rust/C/TypeScript or whatever else before you explain what your program does makes me feel like your core audience is other people who think your language is the good one, not people who want to use your software to get something done.

                                                                          1. 5

                                                                            As someone who writes quite a lot of Zig, I 100% agree and am fed up as well.

                                                                            1. 5

                                                                              Interestingly, my first thought seeing such things is indeed “Why?”, because there are already tons of implementations and for the hypervisor part they are basically mostly wrapping the KVM interface that is readily available in Linux and by the means of an ABI available to any programming language.

                                                                              Otoh, I totally appreciate people exploring reinventing the wheel to learn more about wheels. And then you get way too excited about how round your wheel is in comparison to others. Human behavior.

                                                                              1. 4

                                                                                I’m also skeptical about the claim that Zig is “memory-safe.”

                                                                                1. 2

                                                                                  That’s exactly the problem with this kind of thing, it entirely lacks the nuance of the real world and instead talks in absolutes.

                                                                                  Does Zig have better spatial memory safety than C? Yes it does, slices and pervasive bounds checking make a big difference here.

                                                                                  Is Zig as memory safe as something with a garbage collector? No, at least not in its present state.

                                                                                  1. 1

                                                                                    Yup, fully agree.

                                                                                  2. 2

                                                                                    I am skeptical of any language that makes the following two claims:

                                                                                    • Can be used to implement a hypervisor
                                                                                    • Is memory safe

                                                                                    A hypervisor manipulates address translation. It deals with things below the level of an abstract machine that provides an object model (memory-safe or not). You cannot have both properties in the same language. Pick one.

                                                                                  3. 3

                                                                                    I hear ya, but to a user, when I see something is written in rust, go, or zig, I know I’m going to be able to run it far more easily than something written in ruby, python, or node

                                                                                    1. 1

                                                                                      That’s true enough, though sufficiently diligent OS packaging can mitigate the usual issues with the latter.

                                                                                    2. 3

                                                                                      But the choice of language is a developer benefit, not a user benefit.

                                                                                      Are the users of a hypervisor so lacking in technical skills?

                                                                                      I’m far more interested in tools I can modify and extend (without significant effort) than in tools I can’t. Being written in any of the ten or so languages I can already use is a big plus.

                                                                                      1. 3

                                                                                        Are the users of a hypervisor so lacking in technical skills?

                                                                                        In the general case, yes.

                                                                                        How many users of QEMU/KVM do you think wish it was easier to delve into its source code compared to how many just want their VMs to run properly without difficulty?

                                                                                    1. 2

                                                                                      in terms of time + brainpower spent to produce a result, shell can do in 10 seconds what would take much longer in any other language

                                                                                      uhhh, lemme put my ruby hat on and say, naw

                                                                                      1. 3

                                                                                        for many common tasks, shell is absolutely faster. i wrote some short examples.

                                                                                        download a file:

                                                                                        # ruby
                                                                                        require 'net/http'
                                                                                        
                                                                                        # the URL is https, so the connection needs TLS enabled
                                                                                        Net::HTTP.start("example.com", use_ssl: true) do |http|
                                                                                            resp = http.get("/whatever.txt")
                                                                                            File.open("whatever.txt", "wb") do |file|
                                                                                                file.write(resp.body)
                                                                                            end
                                                                                        end
                                                                                        
                                                                                        # shell
                                                                                        curl -O https://example.com/whatever.txt
                                                                                        

                                                                                        replace the string “cats” with “dogs” in all files ending with .txt:

                                                                                        # ruby
                                                                                        file_names = Dir.glob("*.txt")
                                                                                        
                                                                                        file_names.each do |file_name|
                                                                                          text = File.read(file_name)
                                                                                          new_contents = text.gsub(/cats/, "dogs")
                                                                                          File.open(file_name, "w") {|file| file.puts new_contents }
                                                                                        end
                                                                                        
                                                                                        # shell
                                                                                        for i in *.txt; do
                                                                                            sed 's/cats/dogs/g' "$i" > "$i.tmp"
                                                                                            mv "$i.tmp" "$i"
                                                                                        done
                                                                                        
                                                                                        1. 2

                                                                                          Well, I’m not sure if using curl counts as POSIX shell; if so, there’s always

                                                                                          `curl -O https://example.com/whatever.txt`
                                                                                          

                                                                                          In ruby.

                                                                                          Although I’m a bit partial to using open3 instead of backticks. While I wouldn’t extensively code golf your second example, I gotta say, if I was doing what you’d asked in a quick-but-perhaps-not-as-readable-version, it might look more like:

                                                                                          Dir.glob("*.txt").each do |file|
                                                                                            File.write(file, File.read(file).gsub(/cats/, "dogs"))
                                                                                          end
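                                                                                          Since Open3 came up: a minimal sketch of what shelling out with it looks like (my own illustration, not from the comment above). Unlike backticks, it hands you stderr and the exit status too, not just stdout:

```ruby
require 'open3'

# capture3 runs the command and returns stdout, stderr, and a
# Process::Status, so failures don't silently disappear.
stdout, stderr, status = Open3.capture3("echo", "hello")

puts stdout.strip      # "hello"
puts status.success?   # true
```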
                                                                                          
                                                                                          1. 1

                                                                                            curl doesn’t count as posix shell, i was just using it as an example of the conciseness of shell in general. i don’t think that shelling out from your programming language counts ;) otherwise any language would be an adequate shell alternative!

                                                                                            half the power is being able to just whip shell up in your terminal on-the-fly. shell will also be available in many places ruby is not!

                                                                                            i do like the conciseness of ruby. ever since i read why’s ruby guide, the language has held a special place in my heart. have you seen crystal?

                                                                                            1. 1

                                                                                              Oh absolutely, and I reach for shell often, just disagreed with the original thought about shell vs. any other language

                                                                                              I’ve been watching crystal and tried it for a few things, but it isn’t worth paying the innovation token at $job when we have Go (a poor replacement for shell, imo). But I think for a shell replacement to be feasible, it for sure needs to be interpreted

                                                                                          2. 1

                                                                                            shell can be even shorter with sed -i 's/cats/dogs/g' "$i" to avoid the tmp file. Or is there an advantage to the tmp file method?

                                                                                            1. 2

                                                                                              sed -i isn’t posix ;) so i’ve gotten used to using the tmpfile method. it’s what -i does under the hood anyways.

                                                                                              1. 1

                                                                                                fair point!

                                                                                              2. 2

                                                                                                It’s so much worse than @j3s implies. Not only is -i not POSIX, -i on some platforms requires the suffix to use for the backup file, on others it’s optional. The fact that it’s optional makes it spectacularly dangerous because it’s easy to accidentally have it consume an argument. If you must use -i, at least use it with a suffix.
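                                                                                                A concrete sketch of the portable form (my own example, not from the thread): attaching the suffix directly, as in -i.bak, is accepted by both GNU and BSD sed, and leaves a backup file behind:

```shell
printf 'cats are great\n' > pets.txt

# '-i.bak' with the suffix attached (no space) works on both GNU and
# BSD sed; a bare '-i' or a detached suffix does not port cleanly.
sed -i.bak 's/cats/dogs/g' pets.txt

cat pets.txt      # dogs are great
cat pets.txt.bak  # cats are great
```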

                                                                                          1. 1

                                                                                            Somebody needs to make one of these for Ruby

                                                                                            1. 0

                                                                                              doesn’t everyone else in this space just use containers?

                                                                                              1. 1

                                                                                                So our build runner is buildkitd, and it runs containerized but needs to run privileged. But I think the answer is no, everyone else doesn’t just let everyone run arbitrary containers on shared infra. AWS uses firecracker for lambda isolation for instance.

                                                                                                1. 1

                                                                                                  I could see privileged containers being the line drawn in the sand, but the reasoning regarding container breakouts was interesting

                                                                                                  I guess I feel like container breakouts for unprivileged containers isn’t something that people typically worry about… perhaps as much as they should?

                                                                                                  I guess I need to try out the service but a CI service that doesn’t support customer containers seems constrained to me, maybe I just need to give that some more thought

                                                                                                  1. 1

                                                                                                    Oh, so this is probably bad communication on my part, but we do allow customers to run their own containers. We just don’t run all the containers together on a shared instance, like a shared Kubernetes cluster or something. Instead each customer is on their own EC2 instance. Containers are fine for packaging, we just have another layer there.

                                                                                                    1. 2

                                                                                                      each customer is on their own EC2 instance

                                                                                                      ah interesting

                                                                                                      yea, the more i read about buildkit and earthfiles the more i was seeing the whole thing together, i think the article might assume a lot of knowledge about what you all already have in place, which might be fine depending on your intended audience

                                                                                                      to me your company is just known as “that company trying to figure out how to get ci pipelines to run the same locally as remotely” which is a very enticing prospect when you’re an engineer working on devops tooling and you’re trying to figure out how to get things to work the same w/ local dev/build as it does in a gitlabci runner, which we run for ourselves w/ k8s clusters. but i’m not exactly sure how gitlab.com provides theirs, which this article would be more analogous to… that’s why i started the thread

                                                                                                1. 7

                                                                                                  One difference is that devbox wants to almost completely hide nix from you. For example, it automatically installs it for you. The abstraction is thicker, for sure.

                                                                                                  My very naive guess at which you should play with, based on how I decided: if you want an abstraction over nix that you can adopt without ever hearing the word nix, give devbox a try. If you want something to ease you into using nix before you dive in head first, give devenv a try. I have very low confidence that’s a good or accurate heuristic.

                                                                                                  1. 6

                                                                                                    Hi all, I’m the founder of jetpack.io (developers of Devbox), and I agree with your take:

                                                                                                    • Devbox is trying to give you the power of nix, but with a simplified interface that matches the simplicity of a package manager like yarn. If you want reproducible environments by simply specifying the list of packages you depend on, and not having to learn a new language, then devbox is a great match.

                                                                                                    • On the other hand, if you’re trying to do something that is involved and requires the full nix language, then devenv can be a great match.

                                                                                                    1. 3

                                                                                                      and it uses json as a surface language, with all the limitations this implies compared to nix-lang.

                                                                                                      1. 3

                                                                                                        yea, that’s what i surmised just from a cursory look… the devbox abstraction might be quite limiting in some ways, whereas devenv seems thinner and probably therefore less leaky (because it’s designed to leak?)

                                                                                                    2. 5

                                                                                                      Devbox is a commercial product in a pre-commercial “get free contributions” phase.

                                                                                                      1. 10

                                                                                                        I feel like this is misleading (I’m the founder of jetpack.io, the company that makes Devbox)

                                                                                                        Yes, jetpack.io is a for-profit business: but we’ve committed to making the Devbox OSS tool free forever. We plan to monetize by offering managed services (Devbox Cloud), but those are an independent codebase, and you only need to pay for them if you want the managed services. If you don’t, then Devbox OSS is, and will forever be, free and completely open source.

                                                                                                        This is similar to how Cachix is a for-profit business. It offers a Cachix managed service that it monetizes. It is committed to making devenv a free open-source project. You only need to pay for cachix if you want to use the managed service.

                                                                                                        In that sense, both companies/projects monetize in a similar way.

                                                                                                    1. 6

                                                                                                      I’m curious about what storage stack they are using to replace S3. Highly-available setups like Ceph and Gluster are picky in their care and feeding, which I guess could be fast-tracked with a few consulting sessions.

                                                                                                      1. 6

                                                                                                        Ceph and Gluster are picky…

                                                                                                        +1. Of all the services to eject, S3/GCS are the hardest because storage is tricky to get right. Ceph and Gluster work fine, until they don’t, and you usually need some in-house expertise.

                                                                                                        Apart from those, I’ve used Minio in my homelab, and while it’s been pretty nice I’ve also had some real rough upgrades. It’s a homelab, so I don’t feel too stressed about it, but if I were running my business off it I’d be a little more worried.

                                                                                                        None of this is impossible, but it’s easy to forget that the cloud basically solved storage (at a premium).

                                                                                                        1. 5

                                                                                                          Paid onprem storage solutions can be pretty solid. Our last NetApp bill was about $400 per terabyte. “S3 Standard” is $276 per terabyte per year, so we’re saving money if the disks last just 2 years.
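                                                                                                          The break-even in those figures checks out (a back-of-envelope sketch using only the numbers quoted above):

```python
netapp_per_tb = 400        # one-time hardware cost, per the comment
s3_per_tb_year = 276       # "S3 Standard" recurring cost per TB per year

# Years of S3 spend it takes to exceed the one-time NetApp cost
breakeven_years = netapp_per_tb / s3_per_tb_year
print(round(breakeven_years, 2))  # 1.45
```

so disks that last two years comfortably clear the break-even mark.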

                                                                                                          1. 3

                                                                                                            minio is magical

                                                                                                            1. 2

                                                                                                              what storage stack they are using to replace S3

                                                                                                              They aren’t replacing S3 just yet

                                                                                                            1. 7

                                                                                                              I get that they want to promote this as a thing they’re doing that’s novel, but I don’t think people should take that bait.

                                                                                                              Dedicated hardware and non-cloud providers trade flexibility for costs. It’s as simple as that. It makes sense in some situations to do this! A lot of companies don’t have that consistent capacity planning, and definitely don’t have that consistency on a ~5+ year time horizon.

                                                                                                              For them cloud still makes sense, even if on the margin they’re spending more for that flexibility.

                                                                                                              1. 9

                                                                                                                I think the major point is that the flexibility angle could be overplayed.

                                                                                                                If you have a SaaS with relatively static sizing, you can afford to triple your capacity and sit at mostly idle all the time and still save massive amounts of money.

                                                                                                                That strikes me as wrong, the cloud is likely mis-priced.

                                                                                                If people really have lost hardware management skills in their entirety (as in, not even being able to connect to an IPMI on fully managed hardware) - then we truly are serfs to our landlords now and they can charge whatever they want in perpetuity, it’s the ultimate drug dealer’s deal (something free to get you hooked).

                                                                                                Whether you genuinely need more staff to run bare metal remains to be seen, but I am interested in the result. Running on a cloud with a wide scope also has a headcount price associated (you almost certainly have staff whose job it will be to write terraform, audit billing, maybe even spec your quotas and so on).

                                                                                                                1. 2

                                                                                                                  That strikes me as wrong, the cloud is likely mis-priced.

                                                                                                                  absolutely this… long term things will even out, i think at this point AWS is just so far above everyone else that the profit margins are just way higher, making things like the original post feasible for more workloads than are even intuitively cloud-native

                                                                                                              1. 32

                                                                                                                Classic Ironies of Automation: You need a human operator to check that the machine is operating correctly and solve the complex cases, but humans get bored and distracted when they are checking stuff that is 99% okay, and their ability to check complex cases atrophies from lack of use.

                                                                                                                1. 6

                                                                                                                  Like falling asleep at the wheel of a self driving car

                                                                                                                1. 6

                                                                                                                  🥺

                                                                                                                  1. 10

                                                                                                                    🥺 make me a sandwich

                                                                                                                    1. 12

                                                                                                                      🥺🥪

                                                                                                                  1. 3

                                                                                                                    Anyone actually running this? Curious to hear experiences and get details of the setup at scale

                                                                                                                    1. 3

                                                                                                                      Yeah it’s a real pleasure! I’d suggest Fleet (FleetDM).

                                                                                                                      I’m using it via elastic-agent today but running into issues with the agent itself, not osquery.

                                                                                                                      One important thing to note is that you cannot use one config for all OSes. There are a few subtle differences, for instance in FIM between Windows and Linux.
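                                                                                                                      To make the FIM point concrete, here is a rough sketch of a Linux-side config (my own illustration, not from the comment; the paths and the category name “etc” are arbitrary). On Linux the matched paths feed the file_events table, while on Windows file monitoring surfaces through a different table (ntfs_journal_events) — exactly the kind of subtle divergence that keeps one config from covering every OS:

```json
{
  "file_paths": {
    "etc": ["/etc/%%"]
  },
  "schedule": {
    "fim": {
      "query": "SELECT * FROM file_events;",
      "interval": 300
    }
  }
}
```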

                                                                                                                      Also check out File Carving for something almost completely undocumented and extremely interesting.

                                                                                                                      1. 1

                                                                                                                        Read about this on the orange site from one of yogthos’s comments. They also made a blog post and submitted it here. They may have some insights.

                                                                                                                      1. 3

                                                                                                                        Couldn’t you just use freeze on a Struct? Or is this just a more ergonomic way of doing it?

                                                                                                                        1. 5

                                                                                                                          I think part of this is that it has a default #deconstruct, and as such can get used with pattern matching, which I don’t think can be added to Struct in a backwards-compatible way (I think?).

                                                                                                                          So, this can work (from docs here: https://ruby-doc.org/3.2.0.preview3/Data.html#method-i-deconstruct)

                                                                                                                          Measure = Data.define(:amount, :unit)
                                                                                                                          
                                                                                                                          distance = Measure[10, 'km']
                                                                                                                          distance.deconstruct #=> [10, "km"]
                                                                                                                          
                                                                                                                          # later
                                                                                                                          case distance
                                                                                                                          in Measure(n, 'km')
                                                                                                                            puts "It is #{n} kilometers away"
                                                                                                                          # ...
                                                                                                                          end
                                                                                                                          
                                                                                                                          1. 4

                                                                                                                            Yeah ergonomics. You’d have to remember to freeze every new struct, or else make a wrapper that would essentially be this.
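                                                                                                                            A quick sketch of that ergonomic difference (illustrative names, Ruby 3.2+): a Struct stays mutable unless you remember to freeze each instance, while every Data instance is frozen from the start and defines no setters at all:

```ruby
Point = Struct.new(:x, :y)
pt = Point.new(1, 2)
pt.x = 5                 # fine: mutable unless you freeze it yourself
Point.new(1, 2).freeze   # must be repeated for every instance

Coord = Data.define(:x, :y)
c = Coord.new(x: 1, y: 2)
puts c.frozen?           # true, with no x= / y= writers defined
```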