Threads for Exagone313

  1.  

    Too bad it isn’t free software though.

    1. 2

      Which I think is a much more useful case than the original one he blogged about.

      1. 5

        Except you don’t need virtual columns to do it; they’re just syntactic sugar. I’ve been doing it for years by putting the json_extract call in the CREATE INDEX statement.
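
        For reference, a minimal sketch of that approach (shown with Node.js and the better-sqlite3 module purely for illustration; the table and column names are made up): an expression index on json_extract, no virtual column involved.

            const Database = require('better-sqlite3'); // assumed dependency, illustration only
            const db = new Database('example.db');

            // Hypothetical table with a JSON "data" column, plus an index on one JSON property.
            db.exec(`
              CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, data TEXT);
              CREATE INDEX IF NOT EXISTS users_email_idx
                ON users (json_extract(data, '$.email'));
            `);

            // Queries that repeat the exact same expression can use the index.
            const row = db.prepare(
              "SELECT id FROM users WHERE json_extract(data, '$.email') = ?"
            ).get('someone@example.com');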

        1. 4

          I was going to ask this; I didn’t understand the point of creating virtual columns.

          I think being explicit is better in general. Except maybe if you use virtual columns for compatibility reasons.

          1. 2

            Yes, but virtual columns seem like a “syntactic sugar” feature altogether. It just wraps some complexity that you can, of course, implement somewhere else.

            1. 6

              Agreed. Just thought I’d comment since the article could be misread as saying that indexing JSON properties requires virtual columns.

              1. 2

                I definitely misunderstood this. Thank you for clarifying!

        1. 4

          I wonder if you could support mutability somehow?

          I’m partly imagining torrent websites hosted on bittorrent (because it’s kinda meta) but could be generally useful/interesting perhaps.

          1. 4

            There’s a bittorrent protocol extension where you can distribute a public key that points to a mutable torrent, but I don’t know if it has been ported to webtorrent.

            1. 2

              The reference implementation for BEP 46 is done with webtorrent; I don’t know if/how it works in the browser, though.

              1. 2

                As far as I understand, you can’t use DHT in a web browser, as nodes do not support WebRTC. The Webtorrent project includes a DHT library that works with Node.js (which is used by the desktop application).

          1. 4

            Core libraries like glibc (since 2.1) and libstdc++ (since GCC 5) are intending to remain backwards compatible indefinitely.

            If you need to distribute a binary built against glibc, you have to build it on a very old distribution so that it can run on whatever distribution your users have (which means shipping less secure binaries, e.g. because of old compiler bugs, new libraries that no longer compile there, or missing hardening such as stack protection). That is because function symbols carry a version number (for example memcpy@GLIBC_2.14), and the version your binary requires may not exist in the older glibc some users run. That is not what you would call backward compatible.

            And if you think about musl, then it’s a whole separate world: mixing libraries built for glibc with libraries built for musl will break.

            GUI apps built for Windows 95 still work out of the box on Windows 10.

            I think the author confuses backward compatibility with forward compatibility. Backward compatibility would mean that apps built for Windows 10 would still work on Windows 95.

            1. 5

              Forward compatibility is a design characteristic that allows a system to accept input intended for a later version of itself.

              Your use is also at odds with how “backward compatibility” is used with, e.g., game consoles.

              1. 4

                I got this wrong.

                A binary compiled against an earlier version of glibc is forward compatible with more recent versions of glibc. A binary compiled against a recent version of glibc is not backward compatible with earlier versions (but still forward compatible with newer versions).

                But glibc itself, by continuing to support the symbols of the past, is backward compatible. glibc is also partially forward compatible, in that the symbols that exist today will keep existing, which is what lets newer versions remain backward compatible. The same goes for operating systems that can run old binaries.

            1. 2

              Too bad it is implemented in C++ rather than C, which makes it harder to use in C projects that already support Lua.

              1. 1

                I wonder where the boundary of the GPL’s definition of “covered work” lies here.

                1. 2

                  Has anyone considered trying an NTFS root filesystem yet? It might be an… interesting alternative to partitioning for dual boots.

                  1. 3

                    I’m fairly certain it’s not possible due to different features between the filesystems - in particular no suid means sudo won’t work. I’m also not sure mapping to different users on Linux works properly, though I haven’t checked in a while.

                    1. 1

                      That can probably be worked around with creative use of extended attributes, if someone really wants to do it.

                      1. 1

                        I’m pretty sure NTFS has something for setuid, since Interix supported it.

                        1. 9

                          NTFS is a lot like BeFS: the folks talking to the filesystem team didn’t provide a good set of requirements early on and so they ended up with something incredibly general. NTFS, like BeFS, is basically a key-value store, with two ways of storing values. Large values can (as with BeFS) be stored in disk blocks, small values are stored in a reserved region that looks a little bit like a FAT filesystem (BeFS stores them in the inode structure for the file).

                          Everything is layered on top of this. Compression, encryption, and even simple things like directories, are built on top of the same low-level abstraction. This means that you can take a filesystem with encryption enabled and mount it with an old version of NT and it just won’t be able to read some things.

                          This is also the big problem for anything claiming to ‘support NTFS’. It’s fairly easy to support reading and writing key-value pairs from an NTFS filesystem but full support means understanding what all of the keys mean and what needs updating for each operation. It’s fairly easy to define a key-value pair that means setuid, but if you’re dual booting and Windows is also using the filesystem then you may need to be careful to not accidentally lose that metadata.

                          I also don’t know how the NTFS driver handles file ownership and permissions. In a typical *NIX filesystem, you have a small integer UID combined with a two-byte bitmap of permissions. You may also have ACLs, but they’re optional. In contrast, NTFS exposes owners as UUIDs (much larger than a uid that any *NIX program understands) and has only ACLs (which are not expressed with the same verbs as NFSv4 or POSIX ACLs), so you need some translation layer and need to be careful that this doesn’t introduce incompatibilities with the Windows system.

                          You’re probably better off creating a loopback-mounted ext4 filesystem as a file in NTFS and just mounting the Windows home directory, if you want to dual boot and avoid repartitioning.

                          Note that WSL1 uses NTFS and provides Linux-compatible semantics via a filter driver. If someone wants to reverse engineer how those are stored (wslpath gives the place they live in the UNC filesystem hierarchy) then you could probably have a Linux root FS that uses the same representation as WSL and also uses the same place in the UNC namespace so that Windows tools know that they’re special.

                          1. 1

                            What is used by WSL 2?

                            1. 5

                              WSL2 is almost totally unrelated to WSL, it’s a Linux VM running on Hyper-V (I really wish they’d given WSL2 a different name). Its root FS is an ext4 block device (which is backed by a file on the NTFS file system). Shared folders are exported as 9p-over-VMBus from the host.

                              This is why the performance characteristics of WSL and WSL2 are almost exactly inverted. WSL1 has slow root FS access because it’s an NTFS filesystem with an extra filter driver adding POSIX semantics but the perf accessing the Windows FS is the same because it’s just another place in the NTFS filesystem namespace. WSL2 has fast access to the root FS because it’s a native Linux FS and the Linux VM layer is caching locally, but has much slower access to the host FS because it gets all of the overhead of NTFS, plus the overhead of serialising to an in-memory 9p transport, plus all of the overhead of the Linux VFS layer on top.

                              Hopefully at some point WSL will move to doing VirtIO over VMBus instead of 9p. The 9p filesystem semantics are not quite POSIX and the NTFS semantics are not 9p or POSIX, so you have two layers of impedance mismatch. With VirtIO over VMBus, the host could use the WSL interface to the NTFS filesystem and directly forward operations over a protocol that uses POSIX semantics.

                              There are some fun corner cases in the WSL filesystem view. For example, if you enable developer mode then ln -s in WSL will create an NTFS symbolic link. If you disable developer mode then unprivileged users aren’t allowed to create symbolic links (I have no idea why) and so WSL creates an NTFS junction. Nothing on the system other than WSL knows what to do with a junction that refers to a file (the rest of Windows will only ever create junctions that refer to directories) and so will report the symlink as a corrupted junction. This is actually a pretty good example of the split between being able to store key-value pairs and knowing what they mean in NTFS: both WSL and other Windows tools use the same key to identify a junction but WSL puts a value in that nothing else understands.

                              1. 1

                                Actual Linux filesystems. Because it’s just a Linux kernel, in Hyper-V, with dipping mustards.

                        2. 2

                          Why not go the other way around and boot Windows off of btrfs? :D

                          1. 1

                            This is only a proof of concept at this stage - don’t use this for anything serious.

                            But really, why not, you have backups… right? :P

                        1. 1

                        Since the projects I use at work rely on docker-compose, I can’t make the switch. Unfortunately, podman-compose doesn’t support the full syntax and feature set. It is possible to rewrite a compose file as a shell script, but that is not handy to maintain. I did use that alternative for deploying and updating a container on a server, though.

                        A big advantage of podman on Linux servers is that it doesn’t bypass netfilter, unlike Docker, which is a pain to get right.

                          1. 1

                          In the case of ZorinOS (from the Reddit thread), does the GPL cover the whole ISO distribution, or only the individual pieces of software that are part of the larger operating system distribution? I’d say the branding of ZorinOS is not GPL, but I’m not sure (IANAL). Microsoft distributes software coming from various operating system distributions with WSL, but that doesn’t make their whole operating system GPL-ed.

                            1. 3

                            An IP-based rate limit might not be perfect if you only do a check per IP and not per block, especially with IPv6, where everyone usually gets a /48 or a /64.

                              A /48 means you have 2^(128-48) possible IP addresses to use, or 1208925819614629174706176.

                              And Lobsters is accessible in IPv6.
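
                            For illustration, a minimal sketch (plain Node.js, not what Lobsters actually does) of keying a rate limiter on the prefix rather than on the full address; it assumes an already-normalized IPv6 address and a prefix length that is a multiple of 16:

                                // Derive a rate-limit key from the /48 or /64 prefix instead of the full address.
                                function prefixKey(addr, prefixLen = 64) {
                                  // Expand the "::" shorthand into the full eight 16-bit groups.
                                  const [head, tail = ''] = addr.split('::');
                                  const headGroups = head ? head.split(':') : [];
                                  const tailGroups = tail ? tail.split(':') : [];
                                  const zeros = Array(8 - headGroups.length - tailGroups.length).fill('0');
                                  const groups = [...headGroups, ...zeros, ...tailGroups];
                                  // Keep only the groups covered by the prefix (works for /16, /32, /48, /64, ...).
                                  return groups.slice(0, prefixLen / 16).join(':') + `::/${prefixLen}`;
                                }

                                prefixKey('2001:db8:1:2:aaaa:bbbb:cccc:dddd');     // '2001:db8:1:2::/64'
                                prefixKey('2001:db8:1:2:aaaa:bbbb:cccc:dddd', 48); // '2001:db8:1::/48'

                            Real code would also want to normalize case and leading zeros and handle IPv4-mapped addresses, but the idea is just to count whole /48 or /64 blocks rather than individual addresses.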

                              1. 94

                                For a huge number of cases (dense, two dimensional, tabular data) CSV is just fine, thank you. Metadata comes in a side car file if needed. This file can be read by your granddad, and will be readable by your grandkid.

                                Lots of programs can process gzipped CSV files directly, taking care of the OMG this dataset is 10 GB problem. You can open up a CSV in an editor and edit it. You can load it into any spreadsheet program and graph it. You can load it into any programming REPL and process it.

                                CSV is problematic for unstructured data, often known in the vernacular as raw, uncleaned data. Usually this data comes in messy, often proprietary, often binary formats.

                                I was also disappointed to see that the article had no actual point. No alternative is proposed and no insight is given.

                                1. 13

                                  They do suggest alternatives: Avro, Parquet, Arrow, and similar formats. Yes, that throws away editing with a simple text editor. But I believe the author was more concerned with people who import and export with Excel. Those people will always load the CSV into Excel to edit anyway, so plain text isn’t a key feature.

                                  You can load it into any spreadsheet program and graph it.

                                  If the author’s dream comes true and spreadsheets support a richer binary format, that won’t change.

                                  You can load it into any programming REPL and process it.

                                  Yes. And the first step: import csv or the equivalent in your language of choice. How is that any different from import other_format?

                                  CSV is just fine, thank you. Metadata comes in a side car file if needed.

                                  I feel this argument is equivalent to “programmers are smart and therefore always do the right thing.” But that’s simply untrue. I’ve gotten malformed CSV-format database dumps from smart people with computer science degrees and years of industry experience. Programmers make mistakes all the time, by accident or from plain ignorance. We have type checkers and linters and test suites to deal with human fallibility, why not use data formats that do the same?

                                  CSV is like C. Yes, it has gotten us far, but that doesn’t mean there’s nothing better to aspire to. Are Rust, Go, and Zig pointless endeavors because C is universally supported everywhere already? Of course not.

                                  1. 12

                                    Yes, that throws away editing with a simple text editor.

                                    It also throws away using command-line tools for munging, client-side JS validation via regex or chunking, or all kinds of other things.

                                    Author needs to sell a replacement for CSV for business reasons, but that doesn’t make CSV bad.

                                    1. 2

                                      throws away using command-line tools for munging

                                      FWIW traditional unix command line tools like awk, cut and sed are terrible at all of the above with CSV because they do not understand the quoting mechanism.

                                      I would vastly prefer to be using jq or (if I have to cope with xml) xmlstarlet most of the time.

                                      client-side JS validation via regex or chunking

                                      We’ve got ArrayBuffer and DataView now, we can write JS parsers for complicated file formats. ;)
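
                                    To make the quoting point concrete, a throwaway sketch (plain Node.js, made-up data) of why a naive comma split falls over and roughly what a quote-aware splitter has to do; real code should of course just use a CSV library:

                                        const line = '1,"Doe, Jane","He said ""hi"""';

                                        // Naive splitting breaks on the quoted comma and leaves quotes in place:
                                        line.split(',');  // [ '1', '"Doe', ' Jane"', '"He said ""hi"""' ]

                                        // Minimal RFC 4180-ish field splitter (no handling of newlines inside fields):
                                        function splitCsvLine(line) {
                                          const fields = [];
                                          let field = '', inQuotes = false;
                                          for (let i = 0; i < line.length; i++) {
                                            const c = line[i];
                                            if (inQuotes && c === '"' && line[i + 1] === '"') { field += '"'; i++; }
                                            else if (c === '"') { inQuotes = !inQuotes; }
                                            else if (c === ',' && !inQuotes) { fields.push(field); field = ''; }
                                            else { field += c; }
                                          }
                                          fields.push(field);
                                          return fields;
                                        }

                                        splitCsvLine(line);  // [ '1', 'Doe, Jane', 'He said "hi"' ]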

                                      1. 4

                                        FWIW traditional unix command line tools like awk, cut and sed are terrible at all of the above with CSV because they do not understand the quoting mechanism.

                                        It’s worse than that. They’re all now locale-aware. Set your locale to French (for example) and now your decimal separator is a comma. Anything using printf / scanf for floats will treat commas as decimal separators and so will combine pairs of adjacent numeric fields into a single value or emit field separators in the middle of numbers.

                                        For personal stuff where I want a tabular format that I can edit in a text editor, I always use TSV instead of CSV. Tabs are not a decimal or thousands separator in any language and they don’t generally show up in the middle of text labels either (they’re also almost impossible to type in things like Excel, that just move to the next cell if you press the tab key, so they don’t show up in the data by accident). All of the standard UNIX tools work really well on them and so do some less-standard ones like ministat.

                                        1. 2

                                          Tangentially, this reminds me of how incredibly much I hate Microsoft Excel’s CSV parser.

                                        2. 3

                                          Those tools are terrible, but are totally sufficient for a large chunk of CSV use cases. Like, yes, you’ll get unclean data sometimes, but in a lot of cases it’s no big deal.

                                          re: JS parsing…I’ve done all of those things. I still appreciate the simplicity of CSV where even an intern can bodge something reasonable together for most use cases.

                                          Like, this is all super far into the territory of Worse is Better.

                                          1. 2

                                            I’d much rather be code reviewing the intern’s janky jq filter, or janky for loop in JavaScript, than their janky awk script. :)

                                            Like, this is all super far into the territory of Worse is Better.

                                            Haha, for sure.

                                      2. 3

                                      This is the reason why an industry app I work on needs to support XLSX: Office usage. We have CSV, XLSX and Parquet format support in different parts of the app, depending on how data is uploaded.

                                      3. 7

                                        I was also disappointed to see that the article had no actual point. No alternative is proposed and no insight is given.

                                    My understanding is that this article is a marketing blog post for the services they provide. Though I mostly agree with them that CSV should be replaced with better tools (and many people are already working on it).

                                        1. 21

                                      Better at what, though? CSV is not a “tool”; it’s a universally understood format.

                                          You had better have a fantastically good reason for breaking a universally understood format.

                                          That it is “old” or that it doesn’t work in a very small number of use-cases is not sufficient reason to fragment the effort.

                                          1. 1

                                        The problem to me is not that it is old. The problem is the exchange and interoperability of large datasets. In particular, how do you stream updates to a CSV from a diff/delta?

                                            1. 3

                                              If you’re slinging data back and forth within your own organisation (where you control both endpoints), CSV is indeed sub-optimal.

                                              If you’re exchanging data between heterogeneous systems, CSV is the local minimum. You can wish for a better format all you want, but if just one of the systems is gonna emit/accept CSV only, that’s what you’re gonna have to use.

                                        2. 6

                                          Totally agree. I’ve contributed to a project that works with 100 million line CSV files (OpenAddresses). It works absolutely great for that. Honestly my only complaints with standard CSV are the dialect problems and the row oriented organization.

                                      The dialect problems are real but in practice not a big deal. Common tools like Python’s CSV module or Pandas’ CSV importer can autodetect and generally consume every possible quoting convention. In an ideal world more people would use tab as the separator, or even better 0x1E, the ASCII record separator (which no one uses). But we make commas work.

                                          The row orientation is a nerdy thing to quibble about, but CSV doesn’t compress as efficiently as it could. Rezipping the data so it’s stored in column order often results in much better compression. But that’s usually an unnatural way to produce or consume the data so fine, rows it is.
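
                                      As a rough illustration of that compression point, a small synthetic experiment (plain Node.js, made-up data; exact byte counts will vary): the same table serialized row-wise versus column-wise, then gzipped.

                                          const zlib = require('zlib');

                                          // Synthetic table: an id, a repeating label column, and a cycling measurement column.
                                          const rows = Array.from({ length: 10000 }, (_, i) =>
                                            [i, 'sensor-' + (i % 7), (20 + (i % 50) / 10).toFixed(1)]);

                                          const rowWise = rows.map(r => r.join(',')).join('\n');
                                          const colWise = [0, 1, 2]
                                            .map(c => rows.map(r => r[c]).join(','))
                                            .join('\n');

                                          console.log('row-wise gzip   :', zlib.gzipSync(rowWise).length, 'bytes');
                                          console.log('column-wise gzip:', zlib.gzipSync(colWise).length, 'bytes');

                                      On self-similar columns like these the column-wise layout tends to gzip noticeably smaller; whether that holds for a given real dataset is worth measuring rather than assuming.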

                                          1. 4

                                            I was also disappointed to see that the article had no actual point. No alternative is proposed and no insight is given.

                                            It seems like the author is angry that he had to go through other people’s data dumps and clean them up, even though that’s basically what he is trying to sell. Bad arguments like “this data format cannot survive being opened in an editor and manually edited incorrectly”.

                                            tbh this is just a “hey, we need to drum up more sales, can you write an article about something?” type of low-content marketing.

                                          1. 1

                                          Wanted to try it but it fails to build, too bad. At least I got links to similar projects in other comments (thanks for spot-client, a minimal client that works nicely).

                                            1. 9

                                              What’s even dumber is when you see all these websites that check that the browser is Chrome/Chromium to enable a feature, whether or not the browser can actually support it. How did this happen? There is everything needed in CSS and JS to check if a feature is supported nowadays.
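
                                            For example (feature names picked arbitrarily, and the two handler functions are hypothetical), checking the capability instead of the browser name is only a couple of lines:

                                                // Check the actual capabilities instead of the browser's name.
                                                const hasGrid = typeof CSS !== 'undefined' && CSS.supports('display', 'grid');
                                                const hasServiceWorker = 'serviceWorker' in navigator;

                                                if (hasGrid && hasServiceWorker) {
                                                  enableFancyLayout(); // hypothetical
                                                } else {
                                                  enableBasicLayout(); // hypothetical
                                                }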

                                              1. 4

                                              There are ways to check whether or not JS supports a feature, and the same is true for CSS (see @supports), but the problem is that some browsers lie, with (Mobile) Safari being a prominent example, unfortunately.

                                              1. 4

                                                Rather than publishing ports on any address using Docker (or, the same with podman), I publish to localhost or to a private subnet address (e.g. a virtual network for a virtual machine on a dedicated server, or usually a WireGuard tunnel), then I use a reverse proxy (e.g. nginx or HAproxy) on the front server (if different). This way I’m certain of what to whitelist and rate limit on my stateful firewall setup.

                                                In the case of a database, it doesn’t need to be accessible to the Internet, so I can just bind into a WireGuard tunnel (though I have not yet looked into WireGuard failover which is important for replicated databases).

                                                1. 4

                                                  I like how systemd brings all these features, but I don’t like that this makes them non-portable to other operating systems, as systemd only supports Linux. I know that not all operating systems support all the underlying features needed by systemd, but I believe it is a shame to be Linux-centric.

                                                  I am not a user of non-Linux-based operating systems myself, but I prefer having common standards.

                                                  1. 22

                                                    Personally, I’m completely fine that Systemd-the-init-system is Linux-only. It’s essentially built around cgroups, and I can imagine reimplementing everything cgroups-like on top of whatever FreeBSD offers would be extremely challenging if at all possible. FreeBSD can build its own init system.

                                                    …However, I would prefer if systemd didn’t work to get other software to depend on systemd. It definitely sucks that systemd has moved most desktop environments from being truly cross platform to being Linux-only with a hack to make them run on the BSDs. That’s not an issue with the init system being Linux-only though, it’s an issue with the scope and political power of the systemd project.

                                                    1. 11

                                                      The issue is that it’s expensive to maintain things like login managers and device notification subsystems, so if the systemds of the world are doing it for free, that’s a huge argument to take advantage of it. No political power involved.

                                                      1. 6

                                                        With political power I just meant that RedHat and Poettering have a lot of leverage. If I, for example, made a login manager that’s just as high quality as logind, I can’t imagine GNOME would switch to supporting my login manager, especially as the only login manager option. (I suppose we’ll get to test that hypothesis though by seeing whether GNOME will ever adopt seatd/libseat as an option.)

                                                        It’s great that systemd is providing a good login manager for free, but I can’t shake the feeling that, maybe, it would be possible to provide an equally high quality login daemon without a dependency on a particular Linux-only init system.

                                                        I don’t think the “political power” (call it leverage if you disagree with that term) of the systemd project is inherently an issue, but it becomes an issue when projects add a hard dependency on systemd tools which depend on the systemd init system where OS-agnostic alternatives exist and are possible.

                                                        1. 5

                                                          Everybody loves code that hasn’t been written yet. I think we need to learn to look realistically at what we have now (for free, btw) instead of insisting on the perfect, platform-agnostic software. https://lobste.rs/s/xxyjxl/avoiding_complexity_with_systemd#c_xviza7

                                                    2. 18

                                                      Systemd is built on Linux’s capabilities, so this is really a question of–should people not try to take advantage of platform-specific capabilities? Should they always stay stuck on the lowest-common denominator? This attitude reminds me of people who insist on treating powerful relational databases like dumb key-value stores in the name of portability.

                                                      1. 5

                                                        I believe the BSDs can do many of the things listed in the article, but also in their very own ways. A cross-platform system manager would be some sort of a miracle, I believe.

                                                        1. 9

                                                          The big difference is that systemd (as well as runit, s6, etc.) stay attached to the process, whereas the BSD systems (and OpenRC, traditional Linux init scripts) expect the program to “daemonize”.

                                                          Aside from whatever problems systemd may or may not have, I feel this model is vastly superior in pretty much every single way. It simplifies almost everything, especially for application authors, but also for the init implementation and system as a whole.

                                                          A cross-platform system manager would be some sort of a miracle, I believe.

                                                          daemontools already ran on many different platforms around 2001. I believe many of its spiritual successors do too.

                                                          It’s not that hard; like many programs it’s essentially a glorified for loop:

                                                          for service in get_services()
                                                              start_process(service)
                                                          

                                                          Of course, it’s much more involved with restarts, logging, etc. etc. but you can write a very simple cross-platform proof-of-concept service manager in a day.
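
                                                          A slightly less pseudo version of that loop, just to illustrate the point (Node.js only because it runs everywhere; the service list and restart policy are made up):

                                                              const { spawn } = require('child_process');

                                                              // Hypothetical service list; a real tool would read this from a config directory.
                                                              const services = [
                                                                { name: 'web', cmd: 'node', args: ['server.js'] },
                                                                { name: 'worker', cmd: 'node', args: ['worker.js'] },
                                                              ];

                                                              function supervise(svc) {
                                                                const child = spawn(svc.cmd, svc.args, { stdio: 'inherit' });
                                                                child.on('exit', (code) => {
                                                                  console.log(`${svc.name} exited with ${code}, restarting in 1s`);
                                                                  setTimeout(() => supervise(svc), 1000); // naive restart policy
                                                                });
                                                              }

                                                              services.forEach(supervise);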

                                                          1. 4

                                                            Yes and no. Socket activation can be done with inetd(8), and on OpenBSD you can at least limit what filesystem paths are available with unveil(2), although that requires system-specific changes to your code. As far as dynamic users, I don’t think there’s a solution for that.

                                                            Edit: Also, there’s no real substitute for LoadCredential=, other than privilege dropping and unveil(2). I guess you could use relayd(8) to do TLS termination and hand-off to inetd(8). If you’re doing strictly http, you could probably use a combo of httpd(8) and slowcgi(8) to accomplish something similar.

                                                            1. 3

                                                              Then I’m imagining a modular system with different features that can be plugged together, with specifications and different implementations depending on the OS. Somehow a way to go back to having a single piece of software for each feature, but at another level. The issue is how you write these specifications while keeping things implementable on any operating system where it makes sense.

                                                              1. 2

                                                                Hell, a Docker API implementation for BSD would be a miracle. The last FreeBSD Docker attempt was ancient and has fallen way out of date. Having a daemon that could take OCI containers and run them with ZFS layers in a BSD jail with BSD virtual networks would be a huge advantage for BSD in production environments.

                                                                1. 3

                                                                  There is an exciting project for an OCI-compatible runtime for FreeBSD: https://github.com/samuelkarp/runj. containerd has burgeoning FreeBSD support as well.

                                                              2. 2

                                                                But, are FreeBSD rc.d scripts usable verbatim on, say, OpenBSD or SMF?

                                                                1. 8

                                                                  SMF is a lot more like systemd than the others.

                                                                  In fact aside from the XML I’d say SMF is the kind of footprint I’d prefer systemd to have, it points to (and reads from) log files instead of subsuming that functionality, handles socket activation, supervises processes/services and drops privileges. (It can even run zones/jails/containers).

                                                                  But to answer the question: yes any of the scripts can be used essentially* verbatim on any other platform.

                                                                  (There might be differences in pathing, FreeBSD installs everything to /usr/local by default)

                                                                  1. 2

                                                                    I wish SMF was more portable. I actually like it a lot.

                                                                  2. 6

                                                                    Absolutely not. Even though they’re just shell scripts, there are a ton of different concerns that make them non-portable.

                                                                    I’m gonna ignore the typical non-portable problems with shell scripts (depending on system utils that function differently on different systems (yes, even within the BSDs), different shells) and just focus on the biggest problem: both are written depending on their own shell libraries.

                                                                    If we look at a typical OpenBSD rc.d script, you’ll notice that all the heavy-lifting is done by /etc/rc.d/rc.subr. FreeBSD has an /etc/rc.subr that fulfills the same purpose.

                                                                    These have incredibly different interfaces for configuration, you can just take a look at the manpages: OpenBSD rc.subr(8), FreeBSD rc.subr(8). I don’t have personal experience here, but NetBSD appears to have a differing rc.subr(8) as well.

                                                                    It’s also important to note that trying to wholesale port rc.subr(8) into your init script to make it compatible across platforms will be quite the task, since they’re written for different shells (OpenBSD ksh vs whatever /bin/sh is on FreeBSD). Moreover, the rc.subr(8) use OS-specific features, so porting them wholesale will definitely not work (just eyeballing the OpenBSD /etc/rc.d/rc.subr, I see getcap(1) and some invocations of route(8) that only work on OpenBSD. FreeBSD’s /etc/rc.subr uses some FreeBSD-specific sysctl(8) MIBs.)

                                                                    If you’re writing an rc script for a BSD, it’s best to just write them from scratch for each OS, since the respective rc.subr(8) framework gives you a lot of tools to make this easy.

                                                                    This is notably way better than how I remember the situation on sysvinit Linux, since iirc there weren’t such complete helper libraries, and writing such a script could take a lot of time and be v error-prone.

                                                                    1. 5

                                                                    Yeah, exactly. The rc scripts aren’t actually portable, so why do people (even in this very thread) expect systemd unit files (which FWIW are easier to parse programmatically; see the halting problem) to be?

                                                                      Also, thank you for the detailed reply.

                                                                      1. 3

                                                                        I’m completely in agreement with you. I want rc scripts/unit files/SMF manifests to take advantage of system-specific features. It’s nice that an rc script in OpenBSD can allow me to take advantage of having different rtables or that it’s jail-aware in FreeBSD.

                                                                      I think there are unfortunate parts of this, since I think it’d be non-trivial to adapt the program provided in this example to socket activation with inetd(8) (tbh, maybe I should try when I get a chance). What would be nice is if there was a consistent set of expectations for daemons about socket-activation behavior/features, so it’d be easier to write portable programs, and then ship system-specific configs for the various management tools (systemd, SMF, rc/inetd). Wouldn’t be surprised if that ship has sailed though.

                                                                    2. 2

                                                                      I don’t see why not? They’re just POSIX sh scripts.

                                                                  1. 5

                                                                      One could submit GPLv3’d code to the App Store; this is how Nextcloud does it.

                                                                    1. 2

                                                                      When I wrote to the FSF about this issue, they said:

                                                                      The issue with the Apple App store is that it imposes additional nonfree restrictions on software served through it. So adding an ‘exception’ would really be permitting these additional restrictions, which has the same issue as any other relicense in that you could not use third-party code.

                                                                      So you can definitely add a license exception (if all of your contributors agree, which might require a CLA), but it would no longer be possible to mix other people’s GPLv3 code in with yours if it does not contain an identical exception… at least that’s my understanding.

                                                                      1. 6

                                                                        Yes, but the CLA isn’t any different in that regard since it doesn’t cover dependencies.

                                                                        1. 2

                                                                            This is a good point, as it defeats their argument for using a CLA.

                                                                    1. 13

                                                                      Yes, all the time! #!/usr/bin/env node fits nicely at the top of a js file, and process args or stdin/stdout streams work great.

                                                                      Lots of the time I use it for data wrangling and json ETL jobs that I’m playing around with. Sometimes they graduate into being actual apps instead of throw away scripts.

                                                                      My latest was I wrote a page listener to book myself a vaccine appointment:

                                                                      #!/usr/bin/env node
                                                                      var page = "https://www.monroecounty.gov/health-covid19-vaccine";
                                                                      var selector = "#block-system-main";
                                                                      var seconds = 15;
                                                                      var warning = "It's a ready"
                                                                      
                                                                      var child_process = require("child_process");
                                                                      var cheerio = require("cheerio");
                                                                      var fetchUrl = require("fetch").fetchUrl;
                                                                      
                                                                      var last = null;
                                                                      
                                                                      var check = function() {
                                                                          fetchUrl(page, function(error, meta, body){
                                                                              var $ = cheerio.load(body.toString());
                                                                              var curr = $(selector).html();
                                                                              if (last!=curr) {
                                                                                  //Page has changed!  Do something!
                                                                                  var command = `say "${warning}"`;
                                                                                  console.log("Changed!",new Date());
                                                                                  child_process.exec(command);
                                                                      
                                                                              } else {
                                                                                  console.log("No change",new Date());
                                                                              }
                                                                              last=curr;
                                                                              setTimeout(check,seconds*1000);
                                                                          });
                                                                      }
                                                                      
                                                                      check();
                                                                      
                                                                      1. 5

                                                                                At some point in time I was favoring Python over bash for scripts. But the gain in expressivity was counter-balanced by the strength of bash for piping stuff together, the shortness/simplicity of doing basic file stuff, and the whole vocabulary of shell executables. In the end, I gained a lot by embracing bash.

                                                                        And when things go more complex I slowly change language: Bash -> Perl -> Python -> Compiled Language.

                                                                                How would you sell Node.js for Linux scripting to others like me? What are the major pros? Is your main point (which would be totally understandable and acceptable) that you are fluent in JS and want to use it over harder-to-read bash?

                                                                                If you’re convincing, I’d gladly give Node.js Linux scripting a try. Maybe there are fields where it shines, like retrieving JSON over the internet and parsing it far more easily than in other languages, for example.

                                                                        1. 7

                                                                          Well it’s mostly preference and I’m a HUGE advocate of “use what you like and ignore the zealots”. It’s so hard to espouse pros and cons in a general sense for everyone, because everyone is doing different stuff in different styles.
                                                                          That being said, here’s why I personally like to use it as a shell scripting language:

                                                                                  • I write python too, and use bash, and can sling perl if I have to, but js feels like the best balance between being concise and powerful (for my needs)
                                                                          • node is everywhere now (or really easy to get and I know how to get it in seconds on any OS).
                                                                          • Say what you will about node_modules, but node coupled with npm or yarn IMO set the standard in ease of using and writing packages. I also write python and writing a package for pip install is way way way more annoying compared to npm.
                                                                          • package.json is a single file that can do everything, and I loathe config split across files in other langs for tiny tiny scripts that do one thing only.
                                                                          • JSON is literally javascript, so messing with JSON in javascript is natural. Here’s how to pretty print a json file: process.stdout.write(JSON.stringify(JSON.parse(require('fs').readFileSync('myfile.json')),null,2))
                                                                          • Everyone complains about node_modules size but this ain’t webpack - it’s just a module or two. I’m a big fan of cheerio and lines. node-fetch is also quite small and very powerful

                                                                          Probably more reasons but that’s good enough :)

                                                                        2. 2

                                                                          Hello const and modules.

                                                                          1. 2

                                                                            nah ES5 for life!

                                                                          2. 1

                                                                            #!/usr/bin/env node fits nicely at the top of a js file

                                                                            At which point it’s no longer a valid Javascript file? But I suppose nodejs allows it?

                                                                            1. 4

                                                                              https://github.com/tc39/proposal-hashbang

                                                                              Stage 3 proposal. every engine I know of supports it.

                                                                              EDIT: hell I just checked, even GNOME’s gjs supports it.

                                                                              1. 2

                                                                                    Yes, node allows it, and I like this because it makes it clear that it’s meant to be a command line script and not a module to include, and allows for the familiar

                                                                                $>./script.js

                                                                                versus needing to invoke node explicitly:

                                                                                $>node script.js

                                                                            1. 7

                                                                              Yes. But never nodejs. Deno is perfect for this.

                                                                              For instance, in a new machine you just need to install a single binary and execute the script straight from your repo (e.g. https://raw.githubusercontent.com/ruivieira/scripts/foo.ts). It will even fetch dependencies automatically.

                                                                              To execute processes I use https://deno.land/manual/examples/subprocess

                                                                              1. 1

                                                                                Your first link probably requires authentication.

                                                                              1. 2

                                                                                  I try to write POSIX-compliant shell syntax (without relying on GNU coreutils or other 3rd-party programs if possible), and if I can’t, I switch to Python. Many things can be done in a shell. You will probably find a shell (/bin/sh) on any operating system you use (I hope for you), and Python isn’t unusual on recent operating systems either (/usr/bin/env python3). Much unlike Node.js (or even PHP, which I used to use in a similar mindset).

                                                                                1. 5

                                                                                  I see a lot of posts on Firefox vs Chrome (or in this case Chromium) and it always seems to be people lobbying for others to use Firefox for any number of moral or security reasons. The problem that I see with a lot of this is that Firefox just isn’t as good of a user experience as Chromium-based browsers. Maybe Mozilla has the best intentions as a company, but if their product is subjectively worse, there’s nothing you can really do.

                                                                                  I’ve personally tried going back to Firefox multiple times and it doesn’t fulfill what I need, so I inevitably switch back to Vivaldi.

                                                                                  1. 10

                                                                                      This is really subjective. I tried using ungoogled-chromium but switched back to Firefox. I used Vivaldi for a while but switched to Firefox as well. Before that I was using the Firefox fork Pale Moon, but I got concerned with the lack of updates (due to how small the team is).

                                                                                    1. 2

                                                                                      Sure it is, but almost 80% of the world is using a chromium browser right now and Firefox is stagnant at best, slowly losing ground. Firefox even benefits from being around longer, having a ton of good will, and some name recognition and it still can’t gain market.

                                                                                      1. 8

                                                                                          It also doesn’t get advertised every time you visit Google from another browser. It also isn’t installed by default on every Android phone.

                                                                                        1. 8

                                                                                          Firefox also isn’t installed by default by a bunch of PC vendors.

                                                                                          1. 1

                                                                                            Firefox already had its brand established for years before that happened. It’s also worth noting that Microsoft ships with its browser (which is now a Chromium variant, but wasn’t until recently) and doesn’t even use Google as the search engine, so the vast majority of new users don’t start with a browser that’s going directly to google to even see that message.

                                                                                            1. 2

                                                                                                And yet they start with a browser, and why replace something if what you have already works, discounting those pesky moral reasons as if they are not worth anything?

                                                                                          2. 4

                                                                                            Among technical users who understand browsers, sure, you might choose a browser on subjective grounds like the UX you prefer. (Disclaimer: I prefer the UX of Firefox, and happily use it just fine.)

                                                                                            Most people do not know what a browser even is. They search for things on Google and install the “website opener” from Google (Chrome) because that’s what Google tells you to do at every opportunity if you are using any other browser.

                                                                                            When some players have a soap box to scream about their option every minute and others do not, it will never matter how good the UX of Firefox is. There’s no way to compete with endless free marketing to people who largely don’t know the difference.

                                                                                            1. 1

                                                                                              If that were the case, people would switch back to Edge and Safari because both Windows and MacOS ask you to switch back, try it out again, etc every so often.

                                                                                              The UX of firefox is ok (they keep ripping off the UI of Opera/Vivaldi though fwiw and have been doing so forever), but it functionally does not work in many cases where it should. Or it behaves oddly. Also, from a pure developer perspective, their dev tools are inferior to what has come out of the chromium project. They used to have the lead in that with Firebug, too, but they get outpaced.

                                                                                        2. 2

                                                                                          Yeah, I switched to Firefox recently and my computer has been idling high ever since. Any remotely complicated site being left as the foreground tab seems to be the culprit.