1. 10

    Why use PowerShell on Linux? Don’t get me wrong, I like a lot of the ideas behind PS, and on Windows systems I imagine it is invaluable, as it exposes all of the Windows APIs that were previously behind GUIs. But on Linux we have bash, and if you need something more complicated, you can use Python, with its excellent subprocess module, or Ruby, or another language that you’re familiar with. Then you don’t have to deal with PS’s insanely wordy syntax.

    1. 6

      I think the article adequately explains why to use pwsh over bash, especially if you’ve written much bash. On the other hand, I think the article also seems less compelling at first glance if you have written a lot of bash, as I have, and as I’m sure you have. Experienced bash users are necessarily experienced in dealing with piping text between programs in lieu of a richer data model.

      I personally find shell scripts invaluable, and prefer them for system-level tasks over Python or Ruby. And I disagree about Python’s subprocess module—I think it’s clunky and verbose. Sure, pwsh has long command names, but for shell-like tasks it’s still more concise and clear than equivalent Python. And for interactive use, pwsh has many built-in aliases similar to common UNIX commands.

      That all said, I’m hardly an experienced pwsh user compared to my experience with bash, Python, or Ruby. But every experience I’ve had with pwsh has been pleasant and resulted in readable code. There’s only one reason I haven’t switched: pure momentum. I’m so used to bashisms that I have little reason to invest time in anything else. But if I could go back and choose where to invest my time—and if pwsh had been available on Linux much earlier—I would choose pwsh.

      1.  

        pwsh has long command names

        I think long command names are better than shorter command names. There is absolutely no reason that in $current_year we should use, document in, and most importantly teach (whether schools or documentation or books or blogs) what look like incantations to summon the prince of darkness.

        There was a time when cd was better than Set-Location. That has not been the case for decades. I’d argue that if we are writing something that will be run more than once, we MUST write it in as verbose a language as possible.

        1.  

          There was a time when cd was better than Set-Location. That has not been the case for decades.

          There is an important case for shorter names: interactive use. sls is a lot easier to type than Select-String!

          1.  

            I hate the fact that PowerShell is verb-noun. With tab completion, typing a short name that you’re very familiar with is a bit faster than typing a long well-namespaced name (e.g. String-Select), but it’s a lot faster to type a long well-namespaced name that you aren’t familiar with than it is to type the short one.

            For example, there’s a standard UNIX tool with a two- or three-letter name for printing a specific column of a stream and I never remember what it’s called, so I either spend a few minutes in man pages or just use awk and type more. Typing something like Column-Select would be 4-5 characters with tab completion and would save me a lot of time.

            I mostly use PowerShell for Azure admin things and I do that sufficiently rarely that the commands are never in muscle memory for me. Tab-completion works pretty well for helping me find things (though with noun-verb it would be much better).

            1.  

              For example, there’s a standard UNIX tool with a two- or three-letter name for printing a specific column of a stream and I never remember what it’s called

              cut? I suppose I understand what you mean, but it does have a mnemonic name: it allows you to cut out fields and columns.

            2.  

              When writing things that I’ll need to write again, like API endpoints or shell scripts, I favor shorter names. When writing functions I’ll be reading more times than I’ll be writing, I favor longer names. Coupling this guidance with a soft goal of keeping lines under 80 characters gives me what seems like a nice result.

            3.  

              Powershell also has a bevy of shorter name aliases, designed for interactive use.

            4.  

              Agree about shell scripts, but

              And I disagree about Python’s subprocess module—I think it’s clunky and verbose.

              Have you tried subprocess.check_call and subprocess.check_output? I ask because I used Popen for a long time and only last year thought to check for simple synchronous versions, which are what I need a good 90% of the time.

              I do agree that Popen can’t really compete with the ease of bash piping, though; for large volumes of data, you need to use a subprocess.PIPE to process it (which you get for free with bash), and I’m not familiar with any idioms that make that painless.
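              A minimal sketch of both patterns, using trivial stand-in commands: check_output for the simple synchronous case, and the usual Popen idiom for streaming one command into another through an OS pipe:

```python
import subprocess

# Simple synchronous call: returns stdout, raises CalledProcessError
# on a non-zero exit status.
out = subprocess.check_output(["echo", "hello"], text=True)

# The usual Popen idiom for `cmd1 | cmd2`: the data streams between
# the two processes through an OS pipe, never touching Python.
p1 = subprocess.Popen(["printf", "b\na\n"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["sort"], stdin=p1.stdout,
                      stdout=subprocess.PIPE, text=True)
p1.stdout.close()  # so p1 sees SIGPIPE if sort exits early
result, _ = p2.communicate()
```

              (On newer Pythons, subprocess.run covers both check_call and check_output, but chaining pipes still takes the Popen dance above.)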

              1.  

                I kind of agree with the first response. subprocess is clunky, and frankly weird at times, but it’s really flexible, and for me, when I’m trying to structure data into arrays or dicts, I find bash even clunkier. Structured data is usually where I go from using bash to an actual programming language like Python or Ruby.

                1.  

                  For me it’s just a matter of readability; something like grep foo file.txt | cut -d : -f 2 | tr -s ' ' is a lot more work in subprocess (or native Python).
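                  For comparison, a native-Python version of that pipeline might look like this (the sample lines are a hypothetical stand-in for file.txt):

```python
import re

# Hypothetical stand-in for the contents of file.txt
lines = [
    "foo:alpha   beta:x",
    "bar:nope:y",
    "foo:gamma  delta:z",
]

# Equivalent of: grep foo file.txt | cut -d : -f 2 | tr -s ' '
result = [
    re.sub(" +", " ", line.split(":")[1])  # cut -f 2, then squeeze spaces
    for line in lines
    if "foo" in line                       # grep foo
]
```

                  Not terrible, but noticeably more ceremony than the one-liner.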

                  This is like the old McIlroy vs. Knuth story, where Knuth had written a long Pascal program and McIlroy did the same with a fairly simple shell pipeline.

                  I think there was some library that tries to abstract a lot of the syntax around this, which might be a nice middle ground, and of course using Python brings many other advantages too.

                  1.  

                    Right, and IMO it’s not entirely a subprocess issue; Python generally solves different problems, and Popen doesn’t get in my way when I need something more complex than a short bash script. But it’s pretty hard to beat |, 2>&1, >>foo, &, etc. for concision, and sometimes I just wish it were as mindlessly easy to do pipes in Python as it is to strictly evaluate a generator ([*foo]) or unpack a nested iterable (x, (y, *z) = bar). I’d probably set the threshold lower for when to use Python vs. bash if that were the case.

                    arp242 mentions below that libraries exist for this sort of thing, and I’ve used plumbum for this in one project, but then you have to worry about portability, version management, bloat, etc, which is again a hindrance.

                2.  

                  … And I disagree about Python’s subprocess module—I think it’s clunky and verbose.

                  I find it to be clunky as well, fortunately Python has Xonsh shell, which is pretty amazing ;-)

              1. 1

                Interesting. Would be nice to see how this interacts with PgBouncer and the various types of pools (session, transaction, …).

                1. 6

                  If you’re using vim, tpope has some excellent plug-ins, especially:

                  https://github.com/tpope/gem-ctags

                  With a couple of other gems, it allows seamless “go to definition” for methods that originate from gems. Very valuable when trying to reason about code and dependencies.

                  https://chodounsky.com/2016/12/09/using-tags-to-browse-ruby-and-gem-source-with-vim/

                  1. 2

                    This is helpful, thank you for the links!

                  1. 2

                    @gempain this looks great. Frankly I’ve been wanting to write something like this for my own personal use for years.

                    It’s a bit of a bummer to see this under the BSL. However, it looks like the change date to Apache 2.0 is extremely close to release date. Can you talk a bit about what you’re trying to accomplish/what the thinking is there?

                    1. 3

                      Thanks for the kind words! This is a mistake on our part; the change date should be in 2023, thanks for pointing it out. The reason behind choosing the BSL is that we’d like to finance the project by running a cloud version of Meli :) Basically you can do all you want with the tool, but we find this license fair enough that a team like ours can fully dedicate its time to making this project awesome while allowing others to use it for free.

                      1. 1

                        Hey, really cool project! Honestly, I was looking for something exactly like this, and you’ve got it packaged up in Docker, which saves me from having to do that!

                        Looking forward to using it.

                        Also, I personally think that the license choice is appropriate, but considering this is a new type of license, do you think after some time your team could share some thoughts on how successful it was?

                        I don’t want to detract too far, but I think finding a sweet spot between user freedom (open source) and sustainability is very important. I’d rather have a BSL project that is updated and improved over the years than an open source prototype that can’t reliably be used because the developers had to move on to another project.

                        1. 3

                          Thank you for this really nice comment! I think exactly the same way as you! I am aware that the debate over the BSL is hot at the moment, and not everyone will agree. We want to focus solely on developing this tool, and in this context, the BSL makes sense as it gives the team a chance to monetize the project fairly. Everyone can use the tool for free, with no limitations except the one mentioned in the license. It’s a fair model, which has been supported by the OS foundation even though they have not recognized it officially yet. We’re part of the people that believe in it and would love to see the community supporting this choice - it’s a good way to ensure healthy evolution of a platform like this. I think the BSL makes sense for platforms, but for libraries we always use MIT or GPL, as I think that’s more suited. We’ll definitely blog about how it goes with this license; it’s a topic I hold close to my heart.

                        2. 1

                          Basically you can do all you want with the tool

                          From the license:

                          (…) make non-production use of the Licensed Work

                          Am I missing something, or can one do everything except use it?

                          1. 2

                            BSL: You can self host or pay for hosting but not sell hosting to others.

                            1. 1

                              It’s not that simple from the BSL alone, but I missed the concrete parameters to this use of the license:

                              Additional Use Grant: You may make use of the Licensed Work, provided that you may not use the Licensed Work for a Static Site Hosting Service.

                              A “Static Site Hosting Service” is a commercial offering that allows third parties (other than your employees and contractors) to access the functionality of the Licensed Work by creating organizations, teams or sites controlled by such third parties.

                              For an unmodified BSL, any “production use” is outside the grant, but Meli grants “use” outside of running a service as specified (it appears they allow a non-commercial service though).

                              1. 1

                                It may be a good idea to design a “composable” set of shared source licenses like Creative Commons did with their licenses for creative works. E.g. SourceAvailable-ProductionUse-NoSublicensing-NoCloud.

                          2. 1

                            Thanks for clarifying. This does make sense!

                        1. 2

                          I feel like we just need something that’s like Caddy v1 [1] but for VPNs, and that just works: it should have very little setup overhead and just do everything for you (e.g. generate public/private keys, certs, etc.) but still be flexible enough for larger configurations.

                          This isn’t the first environment-assuming auto-install script I’ve seen for [insert generic complicated VPN software here], and I don’t want more of those; I know I can’t just ask for free software and have it be made [2], but I don’t know much crypto, and rolling your own is dangerous.

                          [1] Caddy v2 is bloated and doesn’t really respect v1’s simplicity IMO.

                          [2] There’s dsvpn but it seems the author has stopped maintaining it and it was quite unreliable when I tried it.

                          Edit: Another concern is cross-platform: only the big and bulky VPNs have mobile clients right now.

                          1. 2

                            Check out dsnet, which was posted here a few weeks ago: https://github.com/naggie/dsnet - it is basically a simpler UI for WireGuard, which I like so far.

                            1. 2

                              There’s dsvpn but it seems the author has stopped maintaining it […]

                              The GitHub repo currently has 0 open issues, so I’d rather call it mature instead of unmaintained.

                              […] and it was quite unreliable when I tried it.

                              Maybe give it another chance now? It works perfectly for me.

                              1. 2

                                seems like streisand fills the gap of easy-but-still-configurable setup. not entirely one-click but aimed toward a less technical crowd and holds the user’s hand decently well.

                                1. 2

                                  there’s dsvpn

                                  Runs on TCP (first bullet point under features)

                                  Eh, no thanks. At that point I’d much rather just use openssh as a socks proxy.

                                  TCP over TCP is unpleasant, and UDP and similar protocols over TCP is even worse.

                                  It seems likely the future of vpn will be built on wireguard. But it needs something like zerotier.com for some “virtual secure lan” use cases.

                                  Tailscale.com does a bit of the ZeroTier stuff for WireGuard - but ZeroTier has (AFAIK) smarter routing: local LAN traffic stays local and encrypted. (If you have two laptops at home and a VPS in the cloud, all on the same ZeroTier VPN, all traffic is encrypted, but traffic between the two laptops is routed locally. And things like Bonjour/mDNS work across all three machines.)

                                  1. 4

                                    FWIW, Tailscale also routes traffic intelligently, so LAN traffic will remain local (assuming the devices are able to talk to each other, of course). Tailscale does have public relay nodes as a last resort fallback, but on well-behaved networks, all traffic is p2p on the most direct path possible.

                                  2. 1

                                    This looks fantastic, thanks for putting this together. I’m particularly interested in the prospect of WireGuard support; is that waiting until it’s merged into OpenBSD proper? (If I can avoid needing any Go on my machines, I’m happy.)

                                  1. 4

                                    Author here. Shalabh recommended the post may be interesting for this community and invited me here. Thanks Shalabh! :)

                                    The goal of Boomla is to create a radically simpler & more powerful application platform. Web development is the “killer use case” to build a useful product and then grow from there. This post explains one of the unusual solutions of the platform. I’d be really curious to hear what you think.

                                    1. 1

                                      Hi there!

                                      What are your thoughts on databases which offer large object support, or filesystems with support for transactions (like mentioned in a sibling comment here)?

                                      1. 1

                                        Hi! Note that the context of the article is web development. The problem with storing large objects in a DB is in the performance department. To serve a static image stored in the DB, one has to load the image into memory, serialize it, send it over a TCP connection, deserialize it, and keep it in memory while serving it to the visitor, even if the client is on an extremely slow connection. So the issue is not really with databases themselves but with the way the application server would connect to the DB and serve the image.

                                        As for filesystems with transactions, that’s not enough in itself; one would also need the ability to store structured data. Do you know of any?

                                        1. 1

                                          Directories and files? /database/tables/users/1/{name,phonenumber}?

                                          Edit: I guess it depends a lot on what is meant by structured. Some files can be merged/joined, and results and views can be made with symlinks - but I don’t think just the filesystem works well as a relational store.

                                          1. 1

                                            Directories and files? /database/tables/users/1/{name,phonenumber}?

                                            Can’t follow you. Can you elaborate?

                                            By storing structured data, I mean data that can be directly accessed without deserializing it first. So, a JSON string would not count, as it is not stored in a structured way. On the other hand, DB rows typically store structured data. Each column of each row contains one value.

                                            but I don’t think just the filesystem works well as a relational store.

                                            The crucial point here is that there is no “the filesystem”. Each filesystem has its own API and capabilities. Forget for a moment that it’s called a “file system” and think about an object system. Obviously, saying that an object system could not work well for storing relational data would be weird.

                                            The hardest part of “getting Boomla” is always that people think they know what a file is and how it behaves. One has to unlearn that first to get it. At best, a file is just an API. And you can design a completely new crazy API if that brings huge benefits. For example, Boomla files can store other files, as in image.jpg/other-image.jpg. Boomla has a few of these weirdo things. Every file is also like a DB row at the same time; the fields are just called file attributes. And because it is “also a DB”, it sure can do the same things a DB can do.

                                            1. 2

                                              For that definition of structured data, the filesystem works sort of OK: in my example above, you have atoms in files:

                                              Insert a new user:

                                              mkdir users/1
                                              echo "John Connor" > users/1/name
                                              echo "555-CHEESE" > users/1/phone_number
                                              

                                              Now, if you know you want to lookup user 1’s name, you can read it directly. You could store binary data in the files too.
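                                              That point lookup could be sketched in Python (the paths are illustrative only):

```python
import pathlib
import tempfile

# One "row" laid out as a directory of field files, mirroring the
# shell example above (illustrative paths only).
root = pathlib.Path(tempfile.mkdtemp())
user = root / "users" / "1"
user.mkdir(parents=True)
(user / "name").write_text("John Connor")
(user / "phone_number").write_text("555-CHEESE")

# Point lookup: read one field of one row without parsing anything else.
name = (user / "name").read_text()
```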

                                              This is somewhat similar to how maildirs work (but they are more of a “document store”, storing whole emails).

                                              I’m not sure I understand your point re: blobs in a DB - if it’s an image, you can just stream it from the DB to the client, like you stream it from the filesystem to the client?

                                              1. 1

                                                Relationships:

                                                mkdir users/2
                                                echo "Sarah Connor" > users/2/name
                                                ln -s users/2 users/1/mother
                                                
                                                1. 1

                                                  Yeah this sort of works but I’d say there are a couple problems with this.

                                                  • The performance may be heavily impacted by the increased number of files. I’d say this results in an average of 10x more of them.
                                                  • You can store strings without serialization but not other data types. You still end up having to serialize/deserialize numbers, for example.
                                                  • If each file has lots of system metadata (creation time, modification time, created-by, etc.) you will need way more storage that way. It’s like adding, say, 50-100 bytes of data to every field in every row of a DB table. That will easily 10x your storage requirements for the kind of data that normally goes in a DB.

                                                  You could stream the image from the DB, but it would still need to go through your application logic if that’s where access control is implemented. That would mean way more DB requests are required. That said, yes, this may be an improvement over loading the image in one go and keeping it in memory. Can’t tell; it would require a benchmark.

                                                  Even if that worked out well enough, I had other requirements that would have still forced me to go the filesystem route; I just didn’t want to make the blog post even longer. As this was originally a CMS that turned into an OS, the original use case was building websites in a sane way. Users constantly mess things up and needed undo/redo to save the day. That means a copy-on-write filesystem was required, with the ability to quickly restore old snapshots. In fact, the first versions of the platform were prototyped on top of MySQL. It worked well enough for a while, but undo/redo was completely impossible.

                                                  Another problem was speed. If the DB is a separate software, communicating with it has higher latency than an in-process filesystem. Boomla makes heavy use of file access and this proved to be a problem as the system was growing.

                                                  That said, your streaming solution would be an interesting approach to moving all data to the DB in a classic setup.

                                                  1. 1

                                                    Note, I’m not really commenting in the context of the article, as much as what is generally possible.

                                                    You can store strings without serialization but not other data types.

                                                    Not true. You would need to store your schema somewhere/somehow - but nothing stops you from storing binary data in a file; if anything, you’d need less serialization than with a typical database driver.

                                                    As for rollback/undo, I did consider a cms at one point with data in xml files on a nilfs2 filesystem.

                                                    Obviously such a “document database” via the file system would need to do quite a lot of serialization - but if the data fit the application, you might get away with a small number of files per request.

                                                    1. 1

                                                      nothing stops you from storing binary data in a file - if anything you’d need less serialization than with a typical database driver

                                                      Emh, yeah, you could say that, and it would be “legally correct”. Yes, that’s less serialization. Yet from a programming perspective, in the end, that’s still a serialization layer, and the rest is just implementation detail. You could also say that you do a memory dump for any value / object and that way you are not serializing. In a way, at least legally, that may be right too, yet from a different point of view, you are using the built-in serialization of the language, which may not be cross-platform, may not be documented, and may even change over time. When using a different language, you would need to reimplement the same de/serialization, so even that would count as a serialization layer in my world.

                                                      I’m not really commenting in the context of the article, as much as what is generally possible.

                                                      Got it. I agree that would work for certain use cases. Existing filesystems would probably make it slow, but if you give yourself a blank slate, I’m sure the underlying idea could be made to work reasonably well, though that would need a purpose-built FS. But then I’m also sure you would end up optimizing parts of it as you learn more about the performance characteristics of what you have built and how it is used in the real world.

                                                      1. 1

                                                        you are using the built-in serialization of the language that may not be cross platform, may not be documented and may even change over time.

                                                        Maybe. The point is that the database driver certainly does serialization, and might even change byte order to match network standard- you can potentially do less serialization with files.

                                                        Or you could use a low-overhead format like Cap’n Proto to read/write.

                                                        1. 1

                                                          true

                                      1. 16

                                        Wrote this a few months back after launching my hip-hop-focused sister site, Hipppo, and thought maybe some would find it interesting.

                                        Right now the site is largely inactive, but it was a fantastic experience doing the whole project without any substantial knowledge of what I was getting myself into. It pushed me to switch majors back to my original discipline of CS, but obviously I’ve still got an incredible amount to learn ahead of me.

                                        If this doesn’t belong please let me know.

                                        1. 3

                                          Is the site still up? Looks like you didn’t link to it, and a search for hipppo didn’t turn up anything?

                                          1. 3
                                            1. 1

                                              Thank you for linking it! Probably should’ve done this in my original comment

                                        1. 3

                                          What is the point of making a fancy website and a readme if I still have to check the source to see what it does?

                                          This is an honest question. It is effectively worse than just linking to a source code browser.

                                          The description is vague and ambiguous.

                                          1. 1

                                            Thanks for the feedback. How can I improve “Use CSS selectors to find and replace elements on pages with content from other sources.”? Or the rest of the readme… suggestions for how to make it clearer welcome!

                                            1. 3

                                              By describing what this software does and how it does it, rather than what a user does with it. The difference might be subtle but I still don’t know what this actually does.

                                              Does it generate JavaScript that fetches remote content? How does it circumvent the same-origin policy? Does it replace the contents of HTML elements at build time, prior to upload? Does it include a server component to update the contents through, say, a script, image, or iframe?

                                              A quick sample usage on the front page, with just the relevant piece of code and without the boilerplate, would help.

                                              I opened the example and I have no clue about what it does. I could spend a larger amount of time reading through the source of the whole project, but I am not a Go programmer, nor do I think that is a reasonable requirement for using a program.

                                              1. 5

                                                I believe this would be a good description:

                                                Stitcherd can automatically modify the element tree of a page before serving it to users. You can tell it where to insert new content using CSS selectors. For example, if you want to be evil, inject a huge animated banner just before the <main> element of every page you serve.

                                                1. 1

                                                  Thank you, That sounds awesome!

                                                  1. 1

                                                    And done (with credit).

                                                  2. 1

                                                    Okay, Thanks.

                                                    I had hoped the name (stitch == sew… d == daemon) would be almost enough of a hint, with a bit of supporting copy to show HOW it works; alas, clearly it’s not.

                                                    Let me see if I can explain it… but it’s wordy. Hopefully this is a bit clearer.

                                                    It’s a server that reads source content (typically a local static file, but it could come from a remote source), fetches one or more pieces of remote “dynamic” content, and injects them into the source document, using a CSS selector to find the place where each piece should be inserted. These can be nested. The remote dynamic content can be raw remote HTML or the output of a Go template, which can itself process remote HTML and/or remote JSON data.

                                                    The resulting content is then served to the client. You can have multiple routes with different sets of content to be replaced (and indeed source document).

                                                    In other words, it’s a server that does server-side includes, but with CSS/DOM manipulation, and so it doesn’t require special directives in the source documents.
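                                                    As a rough illustration of that inject-then-serve idea (not stitcherd’s actual code - a real implementation would use a CSS selector engine, while this sketch just matches a bare tag name):

```python
def stitch(source_html: str, tag: str, fragment: str) -> str:
    """Insert fragment just inside the first <tag> element.

    A stand-in for selector-based injection: match on a bare tag
    name rather than a full CSS selector.
    """
    marker = "<%s>" % tag
    pos = source_html.find(marker)
    if pos == -1:
        return source_html  # no match: serve the page unchanged
    insert_at = pos + len(marker)
    return source_html[:insert_at] + fragment + source_html[insert_at:]

page = "<html><body><main><p>static</p></main></body></html>"
stitched = stitch(page, "main", "<div>dynamic content</div>")
```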

                                                    A couple of Use cases:

                                                    • Fast e-commerce site with static product pages, but dynamic pricing/availability/promotions. Plus server side carts
                                                    • Commenting system for static blogs. JS Free
                                                    • Micro services/frontends
                                                    • $DAYJOB. We have a fairly heavy and slow Rails app that would benefit from re-architecting with stitcherd as the core front end server.

                                                    Yes, it’s a server that has to be hosted somewhere, and you need to decide whether that’s okay for your use case or not, but then so are most of the alternatives, except JS of course, and same-origin et al. make JS harder (imo) to use than this.

                                                    1. 2

                                                      It could probably be used for styling, too. Diazo - a “new” style/theme solution for zope/plone (one of the very first object db/app servers) uses a similar technique:

                                                      http://docs.diazo.org/en/latest/

                                                      Note that Diazo “compiles to” XSLT and can be deployed directly in Varnish, nginx, or other edge-side caches/proxies.

                                                      1. 1

                                                        I remember Zope/Plone :) Looks interesting, I’ll give this a deeper look in the morning.

                                              1. 21

                                                A long time ago I worked on Transactional NTFS, which was an attempt to allow user controlled transactions in the filesystem. This was a huge undertaking - it took around 8 years and shipped in Vista. The ultimate goal was to unify transactions between the file system and SQL server, so you could have a single transaction that spans structured and unstructured data. You can see the vestiges of that effort on docs.microsoft.com, although if you click that link, you’ll be greeted with a big warning suggesting that you shouldn’t use the feature.

                                                One of the use cases being mentioned early in development was atomic updates to websites. In hindsight, I’m embarrassed to not reflexively call “BS” then and there. Even if we could have had perfectly transactional updates to a web server, there’s no atomicity with web clients who still have pages in their browser with links that are expected to work, or are even actively downloading HTML which will tell them to access a resource in future. If the client’s link still works, it implies a different type of thinking, where resources are available long after there are no server side links to them, which is why clouds provide content addressable blob storage which is used as the underpinnings for web resources. Stale resources are effectively garbage collected in a very non-transactional way. Once you have GC deleting stale objects, you also don’t need atomic commit of new objects either.

                                                The majority of uses we hoped to achieve didn’t really pan out. There’s one main usage that’s still there, which is updates: transactions allow the system to stage a new version of all of your system binaries while the system is running from the old binaries. All of the new changes are hidden from applications. Then, with a bit of pixie dust and a reboot, your system is running the new binaries and the old ones are gone. There’s no chance for files being in use because nothing can discover the new files being laid down until commit. I really thought I was the last person alive still trying to make this work when writing filter drivers in 2015 that understand and re-implement the transactional state machine so the filter can operate on system binaries and the system can still update itself.

                                                Somebody - much older and more experienced in file systems - remarked when we were finishing TxF that file system and database hybrids emerge every few years because there’s a clear superficial appeal to them, but they don’t last long. At least in our case, he was right, and I got to delete lots of code when putting together the ReFS front end.

                                                1. 2

                                                  This was a super interesting read, thanks for sharing it!

                                                  Even if we could have had perfectly transactional updates to a web server, there’s no atomicity with web clients

                                                  This seems to become more of an issue when clients run code. When there is no client side code it seems to be a non-issue to me. (Say, all assets can be pushed via HTTP/2 to make sure the version is right.)

                                                  If there is client-side code, one could force a re-load when the server-side codebase has changed.

                                                  That aside, I’m not talking about transactions for application change, I’m talking about transactions for user data changes. That is currently unsolved, unless one stores all user uploaded images in the DB.

                                                  remarked when we were finishing TxF that file system and database hybrids emerge every few years because there’s a clear superficial appeal to them, but they don’t last long

                                                  Haha, interesting! I guess only time can tell. :)

                                                  1. 1

                                                    This seems to become more of an issue when clients run code.

                                                    That’s half of Fielding’s thesis on REST right there ;)

                                                    It’s a bit unfortunate that the need (and I agree it is a need) for encryption/confidentiality/privacy led to the current state of HTTP/2 + TLS, where a lot of the caching disappeared, leaving only client caches and server/provider caches (no more man-in-the-middle LAN caches), which makes REST less interesting, even for applications/websites where the architecture is a great fit.

                                                    Recommended reading (still) for those that have not read it (just remember modern web apps/SPAs are not REST; they’re more like applets or Word files with macros):

                                                    https://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm

                                                  2. 1

                                                    What do you see as the problem when implementing transactions on a file system? It seems doable (as ZFS’s snapshots have partially shown), but why is it not more prominent? Are there trade-offs that are that bitter?

                                                    1. 4

                                                      I don’t think the problem was implementation. It was more a case of a solution looking for a problem.

                                                      In hindsight, among other things, a file system is an API that has a large body of existing software written to it. Being able to group a whole pile of changes together and commit them atomically is cute, but it doesn’t help if the program reading the changes is reading changes from before and after the commit (torn state.) Although Transactional NTFS could support transactional readers (at the file level), in hindsight it probably needed volume wide transactional read isolation, but even if it did, that implies that both the writer and reader are modified to use a new API and semantics. But if you can modify both the producer and consumer code, there’s not much point trying to impersonate a file system API - there are a lot more options.

                                                      1. 1

                                                        I’d say the big issue is that classic OSs are not transactional, so adding a transactional FS to it just doesn’t make sense. What if a process starts a transaction, keeps it open for months, then crashes?

                                                        To make a transactional FS you also need to build a transactional OS around it.

                                                    1. 3

                                                      This site doesn’t look great on mobile, which is sadly ironic.

                                                      1. 1

                                                        Looks fine in Firefox on Android?

                                                        1. 1

                                                          Can you tell me what issues you’re having or write an issue on github or sourcehut?

                                                        1. 1

                                                          Maybe it’s time to google “zfs” for once in my life. The encryption sounds really cool. (Are encrypted drives on a remote server like this safe? Can you trust that the drives are encrypted securely, or can the server host decrypt them in some evil way?)

                                                          1. 5

                                                            ZFS is really cool. There have been a few filesystem + volume manager “in one” systems, and there are some arguments to be made about cross-cutting concerns, like handling SSD discard with an encrypted file system.

                                                            As for software encryption: a remote server isn’t safe if you don’t trust the provider. They even let you know they’ll run your OS that mounts the disks in a VM, so they could just dump the RAM and read the encryption keys.

                                                            If your threat model is more “I’d rather no one get the data from the disks after they’re spun down,” it’s OK. (Remote trusted computing is a really high bar to clear, with hardware enclaves that you can communicate with over the network, encrypted RAM, worrying about encrypted CPU cache…)

                                                            1. 6

                                                              You don’t have to send the encryption key to the receiving server with ZFS.
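Concretely, with native ZFS encryption a raw send ships the still-encrypted blocks, so the receiving host never holds the key. Roughly (pool, dataset, and host names here are made up):

```shell
# Snapshot, then replicate the encrypted blocks as-is with --raw.
# The backup host stores them without ever seeing the key.
zfs snapshot tank/data@nightly
zfs send --raw tank/data@nightly | ssh backup-host zfs receive backup/data
```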

                                                              1. 2

                                                                Ah, I was mostly responding to the question about mounting encrypted data on a remote server.

                                                                For just backup, no “remote” reading, it is indeed possible to send snapshots - and no need to decrypt/mount (but you can, if you want to).

                                                          1. 4

                                                            So what I didn’t get: What is FaaS ? What exactly is the use case for this service ? Defining some function which can then be called from anywhere (probably “sold”) ?

                                                            1. 9

                                                              It’s like CGI, billed per request, on a managed server.

                                                              1. 7

                                                                NearlyFreeSpeech.net does something sort of like that. Not per request, but based on RAM/CPU minutes. I find it pretty convenient.

                                                                1. 5

                                                                  Yeah, I was hoping to bill by resource use (RAM, CPU, and data transfer through syscalls) in a way that would give you a more precise view into how long your programs were taking to run. This would also give people an incentive to write faster code that uses less RAM, which I would really love to see happen.

                                                              2. 3

                                                                I think this whole FaaS is a very interesting movement. Combined with more edge pods we deployed through Fastly / Cloudflare, we are onto something quite different than the big cloud services we saw with Facebook / Twitter / Gmail today.

                                                                Reimagine how you would do email today (or really, any personalized messaging system like WhatsApp). These edge pods, with deployment bundles like wasm, enable us to implement an always-online inbox, potentially with high availability right at the edge. So your device syncing will be extremely fast. At the same time, it is hosted online, so everyone can reach you with minimal central coordination.

                                                                It is unlikely that these things on their own will be successful in the beginning. Decentralization is not a big consideration at this time. But it could deliver benefits today if implemented correctly, even for existing big players like Facebook. You could have a wasm edge node close to the device materialize the feed GraphQL fragments in anticipation of a new user request. And since the edge node knows the last synced feed fragment, it can also do so incrementally.

                                                                I am optimistic about this. Would love to see wasm based deployment taking off, especially for edge nodes.

                                                                1. 1

                                                                  This is an approach and idea that DFINITY (https://dfinity.org/) is pursuing, to provide a fully decentralized computing platform. The system is running wasm as the basic unit of execution, and charges for the cycles, memory, and bandwidth used. Currently, it is in beta, but should become available next year.

                                                                  Disclaimer: I work for DFINITY.

                                                                  1. 1

                                                                    Thanks! Yep, I looked at DFINITY before. One thing would be compelling to me is the closeness to the customers. With our cloud computing moved to the low-latency territory (most significantly, the cloud gaming), closeness of the edge nodes is a necessity. This is often overlooked by many decentralized movements from cryptocurrency space (probably because these Dapps have different focuses).

                                                                2. 2

                                                                  Functions as a Service. Basically the usecase is for people that want to run code that doesn’t run often enough to justify having a dedicated box for it, and just often enough that you don’t want to set up anything for it beforehand. In this case, I plan to start using it for webhook handlers for things like GitHub and Gitea.

                                                                  1. 2

                                                                    So then you plan to be administering/running Wasmcloud? The idea is that people can just upload code to you? What hosting service are you using?

                                                                    This reminds me that I need to write about shared hosting and FastCGI. And open source the .wwz script that a few people are interested in here:

                                                                    https://lobste.rs/s/xl63ah/fastcgi_forgotten_treasure

                                                                    Basically I think shared hosting provides all of that flexibility (and more, because the wasm sandbox is limited). I do want to stand my scripts up on NearlyFreeSpeech’s FastCGI support to test this theory though…

                                                                    I think the main problem with shared hosting is versioning and dependencies – i.e. basically what containers solve. And portability between different OS versions.

                                                                    I think you can actually “resell” shared hosting with a wasmcloud interface… that would be pretty interesting. It would relieve you of having to manage the boxes at least.

                                                                    1. 4

                                                                      So then you plan to be administering/running Wasmcloud?

                                                                      I have had many back and forth thoughts about this, all of the options seem horrible. I may do something else in the future, but it’s been fun to prototype a heroku like experience. As for actually running it, IDK if it would be worth the abuse risk doing it on my own.

                                                                      The idea is that people can just upload code to you?

                                                                      If you are either on a paid tier, uploading the example code or talked with me to get “free tier” access yes. This does really turn into a logistical nightmare in practice though.

                                                                      What hosting service are you using?

                                                                      Still figuring that part out to be honest.

                                                                      I think the main problem with shared hosting is versioning and dependencies – i.e. basically what containers solve.

                                                                      The main thing I want to play with using this experiment is something like “what if remote resources were as easy to access as local ones?” Sort of the Plan 9 “everything is a file” model taken to a logical extreme just to see what it’s like if you do that. Static linking against the platform API should make versioning and dependencies easy to track down (at the cost of actually needing to engineer a stable API).

                                                                      I think you can actually “resell” shared hosting with a wasmcloud interface… that would be pretty interesting. It would relieve you of having to manage the boxes at least.

                                                                      I may end up doing that, it’s a good idea.

                                                                      1. 1

                                                                        (late reply)

                                                                        FWIW I have some experience going down this rabbithole, going back 10 years. Basically trying to make my own hosting service :) In my case part of the inspiration was looking for answers to the “polyglot problem” that App Engine had back in 2007. Heroku definitely did interesting things around the same time period.

                                                                        Making your own hosting service definitely teaches you a lot, and it goes quite deep. I have a new appreciation for all the stuff we build on top of. (And that is largely the motivation for Oil, i.e. because shell is kind of the “first thing” that glues together the big mess we call user space.)


                                                                        To be a bit more concrete, I went down more of that rabbithole recently. I signed up for NearlyFreeSpeech because they support FastCGI. I found out that it’s FreeBSD! I was hoping for a “portable cloud” experience with Dreamhost and NearlyFreeSpeech. But BSD vs. Linux probably breaks that.

                                                                        It appears there are lots of “free shell” providers that support CGI, but not FastCGI. There are several other monthly providers of FastCGI like a2hosting, but not sure I want to have another account yet, since the only purpose is to test out my “portable cloud”.

                                                                        Anyway, this is a long subject, but I think FastCGI could be a decent basis for “functions as a service”. And I noticed there is some Rust support for FastCGI:

                                                                        https://dafyddcrosby.com/rust-dreamhost-fastcgi/

                                                                        ( I’m using it from Python; I don’t use Rust)
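For anyone curious what a FastCGI-hosted “function” looks like in Python, it is essentially just a long-lived WSGI app; a bridge like the third-party flup package (my assumption here, it’s what’s commonly used) serves it over FastCGI to the front-end httpd:

```python
# A single "function" exposed as a plain WSGI app. Under FastCGI,
# a bridge keeps this process alive between requests, unlike
# classic CGI which forks a fresh process per hit.
def app(environ, start_response):
    name = environ.get("QUERY_STRING") or "world"
    body = f"hello, {name}\n".encode()
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# On the hosting side, a FastCGI bridge such as flup would run it:
#
#     from flup.server.fcgi import WSGIServer
#     WSGIServer(app).run()
```

Because the process persists, state like connection pools can survive between requests, which is a big part of why FastCGI is a plausible FaaS substrate for short-lived handlers.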

                                                                        It depends on how long the user functions will last. If you want very long background functions, then FastCGI doesn’t really work there, and shared hosting doesn’t work either. But then you have to do A LOT more work to spin up your own cloud.

                                                                        It’s sort of the “inner platform problem” … To create a platform, you have to solve all those same problems AGAIN at the level below. I very much got that sense with my prior hosting project. This applies to packaging, scheduling / resource management, and especially user authentication and security. Security goes infinitely deep… wasm may help with some aspects, but it’s not a complete solution.

                                                                        And even Google has that problem – running entire cluster managers, just to run another cluster manager on top! (long story, but it is interesting)

                                                                        Anyway I will probably keep digging into FastCGI and shared hosting… It’s sort of “alternative” now, but I think there is still value and simplicity there, just like there is value to shell, etc.

                                                                  2. 1

                                                                    So what I didn’t get: What is FaaS ?

                                                                    FaaS is a reaction to the fact that the cloud has horrendous usability. If I own a server and want to run a program, I can just run it. If I want to deploy it in the cloud, I need to manage VMs, probably containers on top of VMs (that seems to be what the cool kids are doing), and some orchestration framework for both. I need to make sure I get security updates for everything in my base OS image and everything that’s run in my container. What I actually want is to write a program that sits on top of a mainframe OS and runs in someone’s mainframe^Wdatacenter, with someone else being responsible for managing all of the infrastructure: if I have to maintain most of the software infrastructure, I am missing a big part of the possible benefit of outsourcing maintenance of the hardware infrastructure.

                                                                    Increased efficiency from dense hosting was one of the main selling points for the cloud. If I occasionally need a big beefy computer but only for a couple of hours a month, and need a tiny trickle of work done that wouldn’t even stress a first-generation RPi the rest of the time, I can reduce my costs by sharing hosting with a load of other people and having someone else manage load balancing across a huge fleet of machines. If, however, I have to bring along a VM, container runtime, and so on, then I’m bringing a fixed overhead that, in the mostly idle phases, is huge in comparison to my actual workload.

                                                                    FaaS aims to provide a lightweight runtime environment that runs your program and nothing else and can be scaled up and down based on load and billed by RAM MB-second, CPU-second and network traffic (often with some rounding). It aims to be a generic and scalable version of the kind of old-school shared hosting, where a load of people would use the same Apache instance with CGI: the cost of administering of the base environment that executes the scripts is shared across all users and the cloud provider can run the scripts on whatever node(s) in the datacenter make sense right now. The older systems typically used the filesystem for read-only data and a database for persistent data. With FaaS, you typically don’t have a local filesystem but can use cloud file / object stores and databases as you need them. Again, someone else is responsible for providing a storage layer that can scale up and down on demand and you pay for the amount of data that’s stored there and how often you access it but you don’t need to overprovision (as you do for cloud VM disks, where you’re paying for the maximum amount of space you might need for any given VM).

                                                                    TL;DR: FaaS is an attempt to expose the cloud as a useful computer instead of as a platform on which you can simulate a bunch of computers.

                                                                  1. 7

                                                                    I prefer to let a reverse proxy like Nginx handle compression, which simplifies things quite a bit and makes one less responsibility for the app.
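For reference, the whole job in nginx is a handful of directives (values here are illustrative defaults, not a recommendation for any particular site):

```nginx
# Compress responses at the proxy so the app never has to care.
gzip on;
gzip_types text/plain text/css application/json application/javascript;
gzip_min_length 1024;   # skip tiny responses; not worth the CPU
gzip_comp_level 5;      # CPU vs. compression-ratio trade-off
```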

                                                                    1. 3

                                                                      I don’t understand why the author repeatedly mentions compression as though it were an important feature. In front of any production Python web service, I’m already going to be running an httpd just to terminate TLS. Since it’s already there it is a reasonable place to do compression too.

                                                                      1. 2

                                                                        Abstraction boundaries. Engineers can find themselves splitting functionality between their fronting proxy and their web framework, which makes changes to the setup more error prone. Wait until one day you have ancient header stripping logic in your httpd while you try to force your Python app to emit headers, but you can’t get the headers to come out. Having two sources of truth for web service logic can often be quite confusing.

                                                                        1. 1

                                                                          Been there done that and it wasn’t that hard to debug.

                                                                          Obvious in retrospect solution is to keep httpd config file in the same repo so you deploy them together. We do this with IIS configuration files on Azure at the moment at work and it works pretty smoothly.

                                                                        2. 1

                                                                          In general I agree - but might make difference on embedded devices. Although you would probably want encryption - that might be left to a vpn.

                                                                      1. 3

                                                                        I worked on professional projects with Flask (v1+) and here are two things that deserve to be mentioned :

                                                                        • Using blueprints to structure the app is pretty effective to avoid the mess. It’s somewhat like Django’s apps.
                                                                        • Very good scability with uWSGI+gevent support. It doesn’t feel too hacky because uWSGI has a special option for that. (Not more hacky than using gevent per se)

                                                                        Flask dependencies don’t collaborate together. This will hit you at least once a year when you try to upgrade and things break. Too bad request.headers from flask is a werkzeug.datastructures object and the object has changed!

                                                                        It seems very strange to me. I never had any issue upgrading Flask and its core dependencies: Jinja2 and Werkzeug. Those are written by the same authors. Maybe it’s wiser to upgrade only Flask and let pip decide which dependencies need an upgrade too.

                                                                        However, it’s true, you have to be careful with global variables.

                                                                        1. 2

                                                                          I’ve used Flask only once in a small project, but a definite +1 on using blueprints to contain the mess. They provide a very nice way to encapsulate different parts of a web application.
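For readers who haven’t used them: a blueprint is just a self-contained bundle of routes (plus templates/static files) that the app registers at startup. A toy sketch (names made up):

```python
from flask import Blueprint, Flask

# A blueprint groups related routes, much like a Django app.
users = Blueprint("users", __name__, url_prefix="/users")


@users.route("/")
def list_users():
    return "all users"


# The application object wires the pieces together at startup.
app = Flask(__name__)
app.register_blueprint(users)
```

Splitting each area of the site into its own blueprint module keeps route handlers, and the imports they drag in, from all piling up in one file.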

                                                                          1. 1

                                                                            I guess the question is: is flask+blueprints preferable to tornado? (or maybe fastapi)?

                                                                            I get that with bottle, at least you stay lightweight and self-contained - for better or worse.

                                                                            1. 2

                                                                              Blueprints are a Flask feature, not an add-on or something. So using Flask+blueprints is just using Flask.

                                                                              I’d say Flask is suitable if you do traditional SSR-apps or hybrid SSR/SPA. If you only need a backend API or a little self-contained dashboard, then yes maybe other frameworks like Bottle or FastAPI may serve you better.

                                                                        1. 3

                                                          Oh, this is very nice. Missing the newer micro-frameworks built around the new/standardized async, like FastAPI or Sanic.

                                                                          I feel my frustration around a flask project at work is somewhat vindicated :/

                                                                          1. 3

                                                                            +1. I did enjoy proper async in Sanic instead of the mess of handler methods called in who-knows-what order in Tornado. The author’s interpretation of “some boilerplate” in Tornado was rather charitable.

                                                                          1. 2

                                                                            Nice to have a faster alternative. A little sad that the filesystem is still slow enough that it’s needed.

                                                                            1. 1

                                                              These large single-file databases are typically built by overnight cron jobs. Filesystems instead keep things up to date to the microsecond and typically colocate metadata near the file data on storage. So the problems are just different. A large, out-of-date single file will always be much faster (though you may or may not have large file sets to search through, or care about “absolute” freshness).

                                                                            1. 3

                                                                              moreutils - specifically ifne shows up in my personal scripts quite a bit.

                                                                              ncat - a more fully featured nc

                                                                              pv - (not sure if it has a homepage) pipe viewer, monitor the amount of data going through pipes

                                                                              dc - (also unsure of homepage) RPN equivalent of bc, the terminal calculator

                                                                              1. 1

                                                                                pv - (not sure if it has a homepage) pipe viewer

                                                                                I believe that’d be (no ssl support):

                                                                                http://ivarch.com/programs/pv.shtml

                                                                                1. 1

                                                                                  RPN?

                                                                                  1. 1

                                                                                      Reverse Polish Notation, the style that older HP calculators use. Rather than having operator precedence it’s a stack language, so instead of 1 * (2+3) you’d say 1 2 3 + *. I like it because I had an HP calculator in high school, and I have a bit of a thing for concatenative languages. A proper modern language that works on this principle can be had at factorcode.org, but it’s not terminal based.
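                                                                                      Conceptually, an RPN evaluator is just a loop over a stack. A minimal Python sketch of the idea (not how dc is actually implemented, just an illustration):

```python
# Minimal RPN evaluator sketch: numbers push onto a stack,
# operators pop two values and push the result.
def rpn(expr: str) -> float:
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    for token in expr.split():
        if token in ops:
            b = stack.pop()  # second operand was pushed last
            a = stack.pop()
            stack.append(ops[token](a, b))
        else:
            stack.append(float(token))
    return stack.pop()

print(rpn("1 2 3 + *"))  # 1 * (2 + 3) = 5.0
```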

                                                                                    1. 1

                                                                                      Aah, gotcha. It’s been a good while since I’ve heard of RPN mentioned anywhere.

                                                                                      I’m familiar with the style, and have dabbled with some stack-based languages before.

                                                                                1. 5

                                                                                  I wonder whether they got permission from all their open source contributors to re-license the code? Or maybe they use a CLA like Shopify and co. do, where you waive all your rights to the code you own once it’s merged to the main tree?

                                                                                  1. 12

                                                                                    It sounds like it was previously MIT, and if I understand the law correctly you can make modifications to MIT software and release the modified version under the GPL without issue (so long as you preserve the original MIT license text).

                                                                                    1. 3

                                                                                      Hmm… but relicensing code requires the permission of the code’s author, no? For the company’s own code that’s probably fine, but what about any outside contributors that might not agree with the license change? They might have the right to rescind their code.

                                                                                      1. 22

                                                                                        They gave that permission by releasing under the MIT License. It is when you go in the ‘other’ direction that you need to ask for everyone’s consent/permission. E.g. Racket had a huge multiyear thread asking everyone if they were OK with changing from LGPL to MIT.

                                                                                        Btw I remember in the 00’s some BSD complained that Linux developers would take their driver code, use it and license it under the GPL, making it impossible to merge any improvements upstream.

                                                                                        https://opensource.stackexchange.com/a/5833

                                                                                        1. 8

                                                                                          Btw I remember in the 00’s some BSD complained that Linux developers would take their driver code, use it and license it under the GPL, making it impossible to merge any improvements upstream.

                                                                                          I mean, isn’t that exactly the purpose of MIT? “Here’s some code, do whatever you want with it, you don’t have to contribute improvements back”.

                                                                                        2. 12

                                                                                          Technically the old code would still be MIT and the new code would be AGPL. However, since AGPL has more strict requirements the whole project is effectively AGPL. They’d still need to preserve the original MIT license text though.

                                                                                          1. 7

                                                                                            The code’s authors licensed their code under the MIT license, which allows that code to be relicensed by anyone else under new terms (such as the AGPL).

                                                                                            1. 1

                                                                                              No, re-licensing is not permitted. If I write file A of project X under MIT, and someone else writes file B under AGPL, then another user who gets A and B would get both under AGPL; however, they could still (in general) use A according to MIT.

                                                                                              Whether this makes a difference will depend a lot on the project as a whole, and on the content of A.

                                                                                              A could be a self-contained C allocator, or a clever implementation of a useful ADT. Or it could be a small part of what B provides, like an implementation of a print macro/trait for Canadian post codes.

                                                                                              1. 2

                                                                                                Sure. But say you write some file and license it publicly under the MIT license. I can then take that same file and, in accordance with the terms of the former license, license it to someone else under the terms of the AGPL license. They will then not be able to use it under the terms of the MIT license.

                                                                                                In practice, this is not such a big deal, since the original version is likely still available and indistinguishable from the version I provide. However if I change something small—like, say, the wording—then my changed version is distinct from your original, and if I license it as AGPL it won’t be possible to use it under the terms of the MIT license.

                                                                                                1. 2

                                                                                                  No, as far as I understand this is not correct - a BSD or MIT license is connected to copyright, and you need to make substantial changes in order to claim copyright. Without copyright you cannot re-license.

                                                                                                  Remember - in most jurisdictions, the default is copyright. If I write a poem here, you could quote me, but not publish my poem - you have no license to redistribute it. If I explicitly give you a license, you cannot change that license.

                                                                                                  This does get a bit muddy with the various viral licenses you point out - but as far as I understand, mixing file A under MIT with file B under GPL (or AGPL) does not really allow you (the distributor of A and B) or the recipient of A to re-license A.

                                                                                                  Your downstream users would/should still get A with its MIT copyright notice, and will be free to distribute/use A (and only A) under MIT.

                                                                                                  Doing so would not make the GPL license for A and B invalid.

                                                                                                  I.e.: you include an MIT malloc in your “ls” utility. A user that gets the source from you could go in and see that, OK, this malloc bit (assume it’s not modified) - I can use that as MIT.

                                                                                                  This is because you, as the distributor, do not have copyright to the upstream MIT bit.

                                                                                                  People will claim differently, and I don’t think it’s been tested in court - but AFAIK this is how the legal bits land.

                                                                                                  1. 8

                                                                                                    You don’t need to claim copyright over something to relicense it. You can grant a license to a copyrighted work if your own license to that work permits it, which MIT explicitly does.

                                                                                                    including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software

                                                                                                    1. 3

                                                                                                      Ah, thank you. I wasn’t aware of this difference between MIT and bsd. I suppose I’ll have to check apache too.

                                                                                                      Some more on mit vs bsd: https://opensource.stackexchange.com/questions/217/what-are-the-essential-differences-between-the-bsd-and-mit-licences

                                                                                                  2. 1

                                                                                                    Note that this is different from explicit grants of re-licensing, like the provision GPL v2 (I think) has: “or any later version”.

                                                                                                    So if I get a GPLv2 file I can choose to distribute it as GPLv3.

                                                                                        1. 3

                                                                                          Note that AGPL can still be exploited by cloud providers, and unfortunately it doesn’t always play well with other open-source projects that have more permissive licenses.

                                                                                          1. 9

                                                                                            what do you mean that it “can still be exploited by cloud providers”?

                                                                                            1. 2

                                                                                              If you’re really curious, read on why mongodb, confluent and redis (among others) changed their license to ones that aren’t authorized by the OSI.

                                                                                              1. 3

                                                                                                so by “exploit” you mean they can benefit from it for free, while sharing any modifications

                                                                                                1. 3

                                                                                                  No, I mean they can drive the authoring entity out of business, without ever having to make any modification in the first place.

                                                                                                  1. 1

                                                                                                    Like re-implementing the project and offering as a service with compatible interface?

                                                                                                    1. 3

                                                                                                      Sounds like Google vs Oracle =)

                                                                                              2. 1

                                                                                                I think they are talking about one of the scenarios the article explicitly wants to avoid.

                                                                                                We want to prevent corporations from offering Plausible as a service without contributing to the open source project

                                                                                                With the AGPL, they are only forced to share the code; they don’t have to actually improve it. After all, Plausible is selling a support service, not software. A bigger company can offer the same service.

                                                                                                1. 4

                                                                                                  plausible doesn’t seem to be worried about that, since they are moving to the AGPL. i would think the developer of a product would have an edge in the support market over other companies offering support for a product they don’t develop.

                                                                                                  but i see how any use of a product could be considered “exploitation.” it’s just the nature of free software that anyone can use it and modify it as they wish.

                                                                                                  1. 4

                                                                                                    There doesn’t seem to be a license that’s accepted in the open source world and that prevents the cloud companies from offering the product as a service. What MongoDB and others did doesn’t seem to have been well received, even though I do understand their concerns and think that there’s a need for a license like that.

                                                                                                    AGPL at least makes the playing field a bit more even and fair, as a large corporation cannot just take from us but has to be clear about the relationship, give us credit, and open any of their modifications. Then it’s up to us to make sure we communicate well so people are aware of what’s happening and can take that into consideration when they’re making a choice of who to use.

                                                                                                    1. 1

                                                                                                      Nothing stops someone from hosting a managed service and also releasing all of their changes. If the win is that actual hosting, then that is the actual value. The only thing actually stopping them is not wanting to release the code, which is kinda ironic.

                                                                                                      1. 1

                                                                                                        With AGPL they have to allow any users to get a copy of the source. This isn’t quite the same as “contributing” (upstreaming changes), but since any of those users could send the changes upstream, many consider it “good enough for rock ’n roll” to say it requires contributing.

                                                                                                    2. 4

                                                                                                      Do you have an example of other permissive licenses that conflict?

                                                                                                      1. 3

                                                                                                        To the best of my understanding, a project that’s MIT or Apache 2.0 cannot use a GPL or AGPL project, because xGPL licenses are copyleft and effectively turn any project that uses them into xGPL as well.

                                                                                                        If the goal is mainly to prevent exploitation by the big players, then it’s a bit like burning down your home to get rid of the ants. There have been attempts to produce licenses that are better suited for this purpose; however, most of them end up doing it by “discriminating between fields of endeavor” (e.g. cloud hosting), and so the OSI deems them not “open-source” but rather “source available”.

                                                                                                        1. 4

                                                                                                          An MIT-licensed project may have an AGPL dependency, but the distributed combination (or binary when linking; exact artifacts depend on the stack) will be effectively AGPL. Some projects even have optional dependencies based on the license you want for your artifacts.

                                                                                                          Having an artifact be AGPL is only an issue if you plan to distribute it as “closed source”.

                                                                                                          1. 1

                                                                                                            Yes, it means every project that depends on you must be open-source as well, including small start-ups that try to remain competitive using their unique technology. Perhaps that’s what you want, but it’s not necessarily the best scenario for the world of open source, or the world in general.

                                                                                                            1. 5

                                                                                                              Really not sure how preventing a startup from taking our freely given work and using it to produce something that is not open source is bad for anyone? That seems like the goal. They can release their code, or spend the money to write their own and not steal from the public commons.

                                                                                                              1. 4

                                                                                                                i think the mindset is that anything that could prevent an entrepreneur from bringing a product to market could be bad because the product might end up helping people. some people have that mindset.

                                                                                                                1. 1

                                                                                                                  I could try to argue the point, but instead let me ask you: Why do the MIT and Apache licenses exist in the first place, and why are they so popular? And why have they been gaining popularity every year in the last decade? (see: https://resources.whitesourcesoftware.com/blog-whitesource/open-source-licenses-trends-and-predictions)

                                                                                                                  According to your logic, most open-source code should choose to be GPL, no?

                                                                                                                  1. 1

                                                                                                                    because more and more open source projects are funded by tech companies that would like to use them in their proprietary projects

                                                                                                                    1. 1

                                                                                                                      So 70% of opensource is funded by commercial tech companies?

                                                                                                                      1. 1

                                                                                                                        i would think less

                                                                                                            2. 2

                                                                                                              Ah, yes, that sounds right. I was worried that there was maybe something I didn’t know about in case the licenses are combined the other way around. I.e. an (A)GPL project using an MIT/Apache 2.0 library should be fine, I think?

                                                                                                              I understand the concern about using AGPL for libraries, frameworks, etc, but it doesn’t look like a bad pick for application-type stuff, like OP’s product. The only type of derivative would be a fork/branch.

                                                                                                        1. 5

                                                                                                          I would target WASM, and only WASM, for this language.

                                                                                                          I was onboard until I got here. I didn’t want a Web Browser to be an application requirement when Electron did it, and I don’t want it now.

                                                                                                          1. 9

                                                                                                            You don’t need a web browser to use WebAssembly as a virtual machine though

                                                                                                            1. 3

                                                                                                              Are you talking about feeding a WASM blob into Node or something? Does a “JRE” exist for WASM? If so I would be interested in that. I just assumed a browser was required.

                                                                                                              1. 6

                                                                                                                There’s a few implementations out there. Here are two that I know of:

                                                                                                                https://wasmtime.dev/

                                                                                                                https://wasmer.io/

                                                                                                                Here’s a fun thought: if you have a program that wants to allow scripting, what if it embedded a wasm runtime, and hooked up the WASI bindings for the script integration points? Then instead of dictating a single language, you can allow scripting from any language that can compile to WASM.

                                                                                                                  1. 2

                                                                                                                    Looks like golang support for wasm is only for tinygo - with the runtime (the “libc”) only available as a JavaScript implementation?

                                                                                                                    At any rate, yet another option would be: https://github.com/bytecodealliance/wasm-micro-runtime

                                                                                                                    But I expect it would fail in the same way (for golang).

                                                                                                                    Ed: i guess the canonical “not web browser” wasm runtime/vm for go is node:

                                                                                                                    https://github.com/golang/go/wiki/WebAssembly#executing-webassembly-with-node-js

                                                                                                                    Ed2: but should also work with deno, which provides some sandboxing: https://dev.to/taterbase/running-a-go-program-in-deno-via-wasm-2l08

                                                                                                                    1. 2

                                                                                                                      Looks like golang upstream is considering wasi (ed: https://wasi.dev/) support:

                                                                                                                      https://github.com/golang/go/issues/31105

                                                                                                                      There’s also this: https://github.com/go-wasm-adapter/go-wasm/issues/5

                                                                                                                      Found via: https://github.com/wasmerio/wasmer-go/pull/95

                                                                                                                      And: https://github.com/wasmerio/wasmer-go/issues/18

                                                                                                                      (also see my sibling comment)

                                                                                                                      1. 2

                                                                                                                        WASM is sandboxed to a maximally hermetic, paranoid level. It doesn’t support any communication with the outside world, except callbacks exposed by the interpreter. You can’t write WASM for your operating system. There is an emerging abstraction layer called WASI, which is like a very basic tiny abstract operating system.

                                                                                                                        In practice that means you can’t use your language’s standard library, not even printf, unless it’s been rewritten for the WASI “operating system”.

                                                                                                                        1. 1

                                                                                                                          Thanks. That was exactly my reservation about using WASM, and you confirmed it. That’s the same reason I don’t use JavaScript: my understanding is that the language specification itself has those same limitations. It is only the runtimes (Node, Deno) that have developed workarounds for file system access, but I think those are still not defined in the spec.

                                                                                                                          So for Cadey or whoever to say that a browser is not required for WASM - yeah, that’s technically true, but it still doesn’t solve the problem of WASM programs being crippled in what they can do versus a language like Go or Rust.

                                                                                                                          1. 4

                                                                                                                            WASM shouldn’t be any different than Lua in this regard. Theoretically nothing stops you from adding an escape hatch to your WASM interpreter, like the ability to make an arbitrary syscall or FFI call.

                                                                                                                            If it hasn’t been done already, it’s probably only because WASM created a niche for itself among people who want the sandboxing (e.g. running untrusted code on a CDN edge, or portable plugins for programs).

                                                                                                                          2. 1

                                                                                                                            You can’t write WASM for your operating system.

                                                                                                                            https://github.com/wasmerio/kernel-wasm

                                                                                                                            I realize this is somewhat beside what you probably meant - but one possible benefit of a sandboxed spec is that things can run sort of safely in ring 0.

                                                                                                                        2. 2

                                                                                                                          Other good ones: life and lucet

                                                                                                                        3. 1

                                                                                                                          I’ve actually had multiple projects to do that and even did a talk on it.