1. 3

    Arduino is for hobbyists, not for production. Actually programming MCUs without abstractions is a PITA. You don’t need to know all that stuff if you wanna blink an LED and read a sensor for your personal project.

    1. 1

      Nice review, I’ve also bought a Moonlander, but when I suspend my PC the lights won’t turn off. Have you had any issues with this?

      1. 1

        I still don’t understand why they thought the Ergodox needed fewer keys.

        1. 1

          Because you can have like >30 layers.

        1. 3

          I still can’t be surprised enough by how many people fell for the meme of a bunch of web pages in a borderless web browser pretending to be a “text editor”.

          This just doesn’t fscking add up.

          1. 20

            It works well for people? What’s there not to understand? I bet most people don’t even know it’s built on Electron (I didn’t), as most don’t care – they just care that it works well for them – which strikes me as entirely reasonable.

            1. 1

              It “works well”, but has an amazing amount of side effects. I wouldn’t want all of my apps to become Electron apps, because it’s a huge waste of resources to run a separate Chromium instance for every app. Also, it’s a security hazard. I think there are many more things to consider before we say something “works well”.

              1. 4

                How is running a Chromium instance different from running a Python instance? Or Java? Or using Qt, which contains many features too? It seems to work well enough even on my cheap Celeron laptop anyway (at least, in the quick spin I gave it).

                What security issues are there?

                1. 2

                  VSCode is heavily optimized by MS so it runs pretty okay, but not all developers have the expertise and knowledge to optimize it like that. If everyone adopts Electron to create their desktop apps, everything will slow down. Chromium is much more expensive to run than Python, Java, or Qt by a huge margin.

                  Regarding security, you’re enlarging the desktop attack surface by coupling your web APIs to desktop APIs. So that means that web vulnerabilities can now be used more easily on someone’s desktop instead of in someone’s browser. Combine this with the fact that the Electron update model is pretty much non-existent and that a bug in Electron will not be fixed in a lot of projects due to this.

                  Electron is a huge tradeoff. Easier cross platform, but high performance impact and high security impact. Of course big corps will use it because they only care about profits, not the state of the world. What’s good for MS/Google/FB isn’t necessarily good for the world. Don’t get baited by them.

                  1. 3

                    Chromium is much more expensive to run than Python, Java, or Qt by a huge margin.

                    I don’t know; have you ever tried running Eclipse or one of the IntelliJ IDEs? VSCode seems a lot faster to me, in my “let’s check this out” test runs anyway (I don’t really use it). Also, I use the Spotify desktop client, which is built on Electron too I believe, and that works alright for me.

                    As for security, I’m not so sure it’s that much larger than, say, a PyQt app with a bunch of dependencies, or a Java app with a bunch of dependencies. I know people love to complain about it, but Web APIs aren’t actually that large compared to any complete desktop/GUI system (actually, I think it may be smaller). In the context of an IDE, I don’t see how it’s a security problem at all since you’re running a local desktop app which already implies full trust. Any scriptable environment + extensions is just as secure or insecure as VSCode; it’s not like Vim or Emacs sandboxes its extensions or anything.

            2. 14

              The only “UI framework” that has been continuously improving over the last decade is sadly the “web stack”. It does offer the most flexibility out of all of the common options. If one framework has such a dominating lead over the others, it’s no surprise that everyone will start using it for everything.

              1. 6

                Um, iOS and Android have unarguably improved. (MacOS you could quibble about whether the changes are improvements, as it was already pretty mature … but then, SwiftUI runs on MacOS and is a big leap forward in ease of development.)

                Maybe you meant “cross-platform UI framework”, in which case I’d have to agree, as in my experience both Qt and Java produce ugly, awkward apps.

                1. 1

                  it’s gonna stay that way for probably another ten years because Microsoft would rather you buy Azure servers than make desktop apps.

                2. 4

                  It goes both ways, you know. Vim and its brethren were adapted from line editors like Ed, and it shows. What people expect from UI is always changing, for better and for worse; Vim and Emacs have failed to keep up with even decade-old advancements in UI, including approachability, usability, and — most importantly — discoverability.

                  1. 3

                    A coworker (not a “techie” but very willing to learn) needed to do some quite complex changes to a remote script at work. We downloaded VS code and loaded the existing script and the fixed one from source control, and it was super easy to use the built-in comparison feature to ID what parts needed to be changed.

                    VS Code is a free, open source, easy to install, cross-platform editor with a ton of affordances. I’ll take those advantages over small implementation detail niggles any day.

                    1. 3

                      They sure quack like text editors. What’s your point?

                      1. 2

                        i used it for a bit and loved it, but when i discovered coc.nvim i just went back

                      1. 3

                        But by that measure, a programmer can also be like a pro or an amateur tennis player. A winner will score more features than the opponent, be faster with bug fixes and all. A loser will be making mistakes, refactoring, picking ineffective tech or architecture, etc. Why is programming only a loser’s game?

                        1. 3

                          It entirely depends on who’s playing. Tennis becomes a winner’s game only when the players stop making mistakes. Looking at the current state of programming, it’s closer to a loser’s game.

                          1. 2

                            But it’s the same with tennis, it’s a loser’s game because most people lose, right? There’s only one winner at the end? Like with programming or with anything else.

                            I mean, what does it even mean to win or lose? If it’s about selling your product - that’s not very programming-related. If it’s about, I don’t know, keeping the servers’ uptime at five nines, or having fewer than X bugs or Y% coverage, then you might have something to measure up against, to see if you “won”. But even there, it’s not clear what you’ve won, since you were the one setting the rules in the first place (or well, the business did, and you said it can be done). My point is that I somewhat can understand this perspective, but it doesn’t really apply and the analogy doesn’t work for me. I can’t have a winner or loser tag if I don’t have anybody to compete with.

                            1. 3

                              I mean, what does it even mean to win or lose? If it’s about selling your product

                              The author describes it as “producing high quality code”, so let’s hold on to that. So something becomes a winning game when you can stop making mistakes. What does this imply? Imagine you have to write a huge multithreaded application in C++, it’s going to be used by millions of end users, and you have to get it out yesterday. That seems like a pretty good example of a loser’s game.

                              So how can we transition to a winner’s game? Improve the factors which influence code quality, it seems. Maybe use better technology, maybe get more time to develop, etc.

                        1. 8

                          Fundamentally I think C trusts developers while C++ trusts compilers. This is a massive difference that sharing the same native types or syntax for while loops cannot hide.

                          I think this is a nice lens for viewing programming language decisions.

                          1. 3

                            As compilers are overcomplex pieces of software which can’t be trusted, this lens might favor C way too much.

                            1. 2

                              Just like brains are overly complex pieces of hardware / software which can’t be trusted.

                          1. 11

                            I used to use a lot of util alternatives, but now I just stick to the standard ones: find, make, man, vim, grep, xargs, watch, etc.

                            You still have to learn them anyway because they’re installed everywhere by default. You can’t always install random tools on a system.

                            1. 14

                              Not sure I buy this argument.

                              Becoming facile with the standard available everywhere tools is a good investment, to be sure, but some of the tools cited offer a huge value add over the standard tools and can do things they can’t.

                              1. 2

                                In my own usage, I haven’t really noticed too much of a difference. The things I have noticed are that the non-standard tools are sometimes faster and they tend to use more common --flags. entr is a bit unique, but I’ve also been able to replace entr with watch to achieve basically the same result.

                                What are some of the huge value adds that I’m missing out on?

                                1. 5

                                  Utilities like ripgrep and fd natively respecting your .gitignore is a huge feature.

                                  1. 5

                                    Oh, man. That’s actually the feature that made me leave ripgrep. I personally prefer to not hide a bunch of information by default. Let me filter it. I didn’t even realize it did that at first. I’ve spent hours debugging something only to realize that ripgrep was lying to me. Definitely user error on my part and I eventually found you can disable that behavior. But that made me realize grep suited my needs just fine.

                                    1. 3

                                      Yep, same here. I think it’s a horrible default and easily the biggest problem I’ve ever encountered with ripgrep.

                                      1. 5

                                        I find that to be an OK default, and for the odd time you don’t want that behaviour there’s --no-ignore-vcs, which you can also set in your ripgrep config file.

                                        1. 1

                                          FWIW, the automatic filtering by default is one of the most highly praised features of ripgrep that I’ve heard about. (And that in turn is consistent with similar praise for tools like ack and ag.)

                                          If you don’t want any filtering at all, ripgrep will behave like grep with rg -uuu <pattern>.
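
                                          Roughly, the different filtering levels look like this (TODO is just a placeholder pattern):

                                            # default: respects .gitignore, skips hidden files and binary files
                                            rg TODO
                                            # turn off the ignore-file rules but keep the hidden/binary filters
                                            rg --no-ignore TODO
                                            # no filtering at all, roughly equivalent to a recursive grep
                                            rg -uuu TODO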

                                          1. 1

                                            I absolutely believe that, and I didn’t like it with ack and ag (yeah, I also progressed through all of them).

                                            I’m not saying you didn’t do the right thing if most people like it, but it’s not for me. Thanks for ripgrep and please don’t take the above comment as any sort of attack. (and yes, I’m using the form you mentioned).

                                            1. 1

                                              I didn’t, no problem. Just wanted to defend my choice. :)

                                        2. 1

                                          Hi, ripgrep author here. Out of curiosity, what made you try ripgrep in the first place? From what I hear, folks usually try it for one or both of performance and its automatic filtering capabilities. It seems like you didn’t like its automatic filtering (and I’ve tried to put that info within the first couple sentences of all doc materials), so that makes me curious what led you to try it in the first place. If you were only searching small corpora, perhaps the performance difference was not noticeable?

                                          1. 1

                                            In my case it was speed, in particular, replacing vimgrep

                                      2. 3

                                        watch(1) as far as I know doesn’t do anything with inotify(7), so it’s limited to a simple interval instead of being event based. As others have pointed out, you could use inotifywait(1) and some relatively simple scripts to obtain similar results.

                                        That being said, I still use find, grep and others regularly, these are just a little easier to work with on the daily when they’re available. fd in particular has nicer query syntax for the common cases I have.

                                        1. 3

                                          Yeah, watch doesn’t watch (ha) for file changes, but my goal is to avoid having to rerun commands manually while I work on a file. watch(1) can achieve this goal. Yes, it’s inefficient to rerun the build every -n .1 seconds instead of waiting for file changes, but I end up reaching my goal either way, even if I have to wait 1 second for the build to start.

                                          Although, I can definitely see how this could be painful if the command is extremely slow.
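
                                          For example, something like this is roughly my setup (make test is just a stand-in for whatever command I’m rerunning):

                                            # poll: rerun the build every second, whether or not anything changed
                                            watch -n 1 make test
                                            # event-driven: entr reruns only when one of the listed files changes
                                            find src -name '*.c' | entr -c make test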

                                          1. 5

                                            I work with Python… so it goes without saying that my tests are already slow, but also many of them are computationally expensive (numpy), so they can take several minutes to complete. Obviously that’s not the same situation for everyone, use what works for you! The whole point of my post is to share tools that others may not be aware of but may find helpful.

                                        2. 1

                                          What are some of the huge value adds that I’m missing out on?

                                          Thinking about this, I may have over-promised, but one thing that occurs to me is vastly reduced cognitive load as well as a lot less straight-up typing.

                                          For example, I personally find being able to simply type ag “foo” much nicer, as opposed to find . -name “*” -exec grep “foo” {} \; -print

                                      3. 5

                                        I’m a developer, so it’s super worth it for me to optimize my main workflow. I know the standard tools well enough that I can pretty easily revert back to them when there’s an equivalent, but some of these don’t have an alternative in the standard *nix toolset. entr(1) in particular has nothing equivalent as far as I know.

                                        1. 3

                                          How about inotifywait?

                                          1. 2

                                            entr is built on the same OS API, inotify(7). You could probably get inotifywait to act in a similar manner with some scripting, but it does not do the same thing.
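
                                            A rough, untested sketch of what that scripting could look like (inotifywait comes from inotify-tools; the path and command are placeholders):

                                              #!/bin/sh
                                              # rerun a command whenever something under src/ is modified
                                              while inotifywait -q -e modify -r src/; do
                                                  make test
                                              done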

                                            1. 2

                                              That’s all I’ve ever used inotifywait for: watch some files or directories, and run scripts when change events occur. But I didn’t even know about entr. It does look relatively streamlined.

                                              1. 1

                                                Your script(s) have to implement the behavior that entr(1) implements though, so while it’s technically possible it’s far from a ‘single command’ experience, which I find very nice.

                                        2. 1

                                          If you have to work on systems with standard tooling a lot and no option to install stuff, fair. But for me, and I think a lot of other devs, it’s pretty feasible to install non-standard tools. I almost never work on another machine than my own. Most people customize their personal machine anyway.

                                          One rule I have is that scripts are always written in plain POSIX compliant shell for portability with the standard tools.
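
                                          For instance, nothing fancier than this kind of thing (a trivial example):

                                            #!/bin/sh
                                            # plain POSIX sh: [ ] and $(...) instead of [[ ]], arrays, and other bashisms
                                            for f in *.txt; do
                                                [ -f "$f" ] || continue
                                                printf '%s\n' "$(basename "$f")"
                                            done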

                                          1. 1

                                            I almost never work on another machine than my own.

                                            I wonder if this is a side effect of the trends towards infrastructure as code, “cattle not pets”, containerization, etc etc etc.

                                            In general I’ve found the same: the most work I do on a remote box is connecting to a running container and doing some smoke testing.

                                            1. 3

                                              I feel like in the era where shared UNIX hosts were more common, I would just cart binaries around in my home directory – which was often an NFS directory shared amongst many hosts as well.

                                        1. 1

                                          An intuitive terminal editor like Micro, combined with tmux, could bring you far.

                                          1. 1

                                            For the last couple of years I have been using Alacritty as a daily driver. It’s truly bliss to use. However, the Vim mode and scrollback seem redundant; tmux/Screen already implement scrollback. Why not add built-in tabs then?

                                            1. 3

                                              I was wondering if you could go a bit more in depth about your network storage.

                                              1. 5

                                                Sure, which parts are you interested in?

                                                I think https://michael.stapelberg.ch/posts/2016-11-21-gigabit-nas-coreos/ should give a good introduction, if you haven’t read that yet

                                              1. 7

                                                PostgreSQL is a great database, but I think people could use SQLite a lot more often.

                                                1. 5

                                                  I bought a Planck EZ Glow and I love it! My typing speed increased by like 20 wpm over the shitty Macbook Pro keyboard, although I realize that’s a low bar.

                                                  Now I want an ergonomic keyboard, but for portability + fitting onto small desks, the Planck EZ is hard to beat. I’ll probably end up buying either Keyboard.io’s Model 01 refresh (the apparently upcoming Model 100) if it’s good, or an Ergodox EZ refresh. I want USB-C.

                                                  I also am backing the Keyboardio Atreus Kickstarter for funsies, even though it kind of fills the same role as the Planck EZ… I’ll see which one I like better, and give away or sell the other most likely.

                                                  1. 2

                                                    Second this, I’m also using a Planck-EZ as my daily driver. I customized it, put in some lubed Holy Pandas and a nicer keycap set. Take a look at my layout!

                                                    1. 3

                                                      Nice, I like the spacebar as a layer toggle when held! Very clever, I’m going to have to steal that.

                                                      If we’re sharing layouts, here’s mine: https://configure.ergodox-ez.com/planck-ez/layouts/9wqxW/latest/0

                                                    2. 1

                                                      I was just looking at a Planck EZ the other day. A few questions:

                                                      • Does the case build quality feel good? It looks like the ones straight from OLKB are aluminum and the EZs are plastic.
                                                      • Do you like the MIT layout (2U space bar)?
                                                      • Did you get an older one that didn’t have USB-C? It looks like all the ones today have it.

                                                      As a no-longer-insecure-about-it Vim user (I tried Emacs three times and it’s not for me), I’m beginning to think that a 40% might be just what I’m looking for (coming from a mechanical 100%).

                                                      1. 3
                                                        • It feels pretty solid to me. I wouldn’t use it as a battering ram, but I don’t feel insecure about throwing it in my backpack and carrying it around. When I wrote to the company asking about carrying cases, they said that the main concern was keeping debris out of the keyboard, not really protecting it from bumping against things, it’s pretty sturdy. (I was annoyed they didn’t have any official carrying cases, but I bought a Nintendo Switch case and that fit the keyboard very well once I cut out the irrelevant stand for propping up the Switch inside the case.)
                                                        • The space bar works pretty well for me. I actually held off on buying a Keyboardio Model 01 because of the strange “space button” configuration where there’s only a single little Space key on the right hand, I was like “I want to be able to hit the spacebar with either hand!” The Planck EZ’s 2U spacebar definitely works in that regard. But I found that when I’m touch-typing at speed, I never hit the spacebar with my left hand, so I probably would be fine with a 1U space button only for my right hand.
                                                        • No, the Planck EZ I have has USB-C, but that was a reason I bought it over the current Ergodox EZ or the Keyboardio Model 01, neither of which have USB-C.
                                                        1. 2

                                                          … or the Keyboardio Model 01, neither of which have USB-C

                                                          Just fyi, the Model 01 does actually have a USB C port

                                                          1. 2

                                                            Whoops, you’re totally right! Maybe I should just buy one after all.

                                                          2. 1

                                                            Awesome, thanks! I think I might buy one.

                                                      1. 1

                                                        Interesting take! I think I understand what the author means. As it currently stands, AI can’t match human performance because it can’t deal with too much randomness in its input. That’s a fair point and I’d have to agree with that, but that doesn’t mean that advances in AI won’t enable a car to be fully autonomous.

                                                        We will probably invent something which can mimic the human way of learning better, probably even supercharged. Look at GPT-2.

                                                        1. 3

                                                          The last sentence of item 7 is dubious. The right language for the right job can make a lot of difference. It shapes the way you think by making you apply certain concepts, which can be great or not-so-great for certain tasks. I think there’s a pretty thick line between tribalism/fanboyism/evangelism and knowing which language is better for which job; these are not mutually exclusive.

                                                          1. 1

                                                            But “the right language for the right job” is a long way from what I suspect he experienced as tribalism, which was likely only ever using one language, no matter the situation. I think this is less prevalent than it used to be, but it’s definitely a thing.

                                                            1. 1

                                                              Definitely still is a thing, but it’s a far cry from “The language doesn’t matter”. It’s not as if there’s nothing in between tribalism and acknowledging that the language matters to a certain degree. Only a Sith deals in absolutes.

                                                            2. 1

                                                              For every 10 times I hear that the specific language makes a difference, probably 8 or 9 of them are tribalism. I’ve also been scarred by lots of enterprise mandates trying to limit and stagnate languages.

                                                              Whenever I hear this, I try to check the commit history of the person. If they have a mix of languages used, then I worry less. If they have a sustained history of only using a single language or approach, not even toy projects or minor fixes, then I strongly suspect tribalism.

                                                            1. 24

                                                              In some cases, I have a great deal of sympathy for the author’s point.

                                                              In the specific case of the software that triggered this post? Not so much. The author IS TALKING ABOUT A SENDMAIL MILTER when they say that

                                                              Python 2 is only legacy through fiat

                                                              No. Not in this case. An unmaintained language/runtime/standard library is an absolute environmental hazard in the case of a sendmail milter that runs on the internet. This is practically the exact use case that it should absolutely be deprecated for, unless you’re prepared to expend the effort to maintain the language, runtime and libraries you use.

                                                              This isn’t some little tool reading sensor data for an experiment in a closed environment. It’s processing arbitrary binary data from untrusted people on the internet. Sticking with this would be dangerous for the ecosystem and I’m glad both python and linux distro maintainers are making it painful for someone who wants to.

                                                              1. 2

                                                                A milter client doesn’t actually process arbitrary binary data from the Internet in a sensible deployment; it encapsulates somewhat arbitrary binary data (email messages and associated SMTP protocol information that have already passed some inspection from your MTA), passes it to a milter server, and then possibly receives more encapsulated binary data and passes it to the MTA again. The complex binary milter protocol is spoken only between your milter client and your milter server, in a friendly environment. To break security in this usage in any language with safe buffer handling for arbitrary data, there would have to be a deep bug that breaks that fundamental buffer safety (possibly directly, possibly by corrupting buffer contents so that things are then mis-parsed at the protocol level and expose dangerous operations). Such a deep break is very unlikely in practice because safe buffer handling is at the core of all modern languages (not just Python but also eg normal Rust) and it’s very thoroughly tested.

                                                                (I’m the author of the linked-to blog entry.)

                                                                1. 2

                                                                  I guess I haven’t thought about one where it would be safe… the last one I worked on was absolutely processing arbitrary binary data from the internet, by necessity. It was used for encrypting/decrypting messages, and on the inbound side, it was getting encrypted message streams forwarded through from arbitrary remote endpoints. The server could do some inspection, but that was very limited. Pinning it to some arbitrary library version for processing the message structures would’ve been a disaster.

                                                                  That’s my default frame of reference when I think of a milter… it processes information either on the way in or way out that sendmail doesn’t know how to and therefore can’t really sanitize.

                                                                  1. 1

                                                                    For us, our (Python) milter client sits between the MTA and a commercial anti-spam system that talks the milter protocol, so it gets a message blob and some metadata from the MTA, passes it off to the milter server, then passes whatever the milter server says about the email’s virus-ness and spam-ness back to the MTA. This is probably a bit unusual; most Sendmail milter clients are embedded directly into an MTA.

                                                                    If our milter client had to parse information out of the message headers and used the Python standard library for it, we would be exposed to any bugs in the email header parsing code there. If we were making security related decisions based on header contents (even things like ‘who gets how much spam and virus checking’), we could have a security issue, not just a correctness or DoS/crash one (and crashes can lead to security issues too).

                                                                    (We may be using ‘milter client’ and ‘milter server’ backward from each other, too. In my usage I think of the milter server as the thing that accepts connections, takes in email, and provides decisions through the protocol; the milter clients are MTAs or whatever that call up that milter server to consult it (and thus may be eg email servers themselves). What I’m calling a milter server has a complicated job involving message parsing and so on, but a standalone client doesn’t necessarily.)

                                                                    1. 2

                                                                      Mine was definitely in-process to the MTA. (I read “milter” and drew no client/server distinction, FWIW. I had to go read up just now to see what that distinction might even be.) Such a distinction definitely wasn’t a thing I had to touch in the late 2000s when I wrote the milter I was thinking about as I responded.

                                                                      The more restricted role makes me think about it a little differently, but it’d still take some more thinking to be comfortable sitting on a parsing stack that was no longer maintained, regardless of whether my distro chose to continue shipping the interpreter and runtime.

                                                                      Good luck to you. I don’t envy your maintenance task here. Doubly so considering that’s most certainly not your “main” job.

                                                                2. 1

                                                                  Yeah, it’s a good thing they do; it’s not the distro maintainers’ fault that Python 2 became deprecated.

                                                                1. 15

                                                                  Isn’t this a complaint about the lack of free support from distros? Am I misunderstanding?

                                                                  Perhaps a group of people would like to start a paid support service for Python 2?

                                                                  1. 9

                                                                    RHEL will be supporting a Python 2 interpreter until at least June of 2024. Potentially longer if they think there’s enough money in offering another “extended lifecycle” (which got RHEL 6 up to a whopping 14 years of total support from its initial release date).

                                                                    1. 2

                                                                      Alternatively, something can be “done” and never need to be touched again. Expiring toolchains break a lot of “done” code.

                                                                      1. 20

                                                                        In the current security landscape? Are you serious? No code is perfect. New flaws in old code are being found and exploited all the time.

                                                                        1. 1

                                                                          Obviously python is large enough to be a security problem, but take e.g. boltdb in the golang world. It doesn’t need more commits, unless golang shifts under it. I believe it’s possible to have code that’s useful and not a possible security problem.

                                                                          1. 15

                                                                            I don’t understand where you’re coming from here. I’m not a Golang fan, but looking at the boltdb repo on github I see that it’s explicitly unmaintained.

                                                                            You’re saying that you don’t think boltdb will ever have any serious security flaws that need addressing?

                                                                            I don’t mean to be combative here, but I have a hard time swallowing this notion. Complex software requires maintenance in a world where the ingenuity of threat actors is ever on the increase.

                                                                            1. 3

                                                                              Maybe it wouldn’t need any more commits in terms of features, which is most likely true in the case of Bolt, as the README states that it focuses on simplicity and doing one thing. But there’s no way to prove that something is secure; you can’t know if there’s a certain edge case which will result in a security vulnerability. And in that sense, it does require maintenance. We can’t prove that something is secure, we can only prove that something is insecure.

                                                                              1. 2

                                                                                In fact, the most recent Go 1.14 release adds a -d=checkptr compiler flag to look for invalid uses of unsafe that is enabled for -race and -msan builds by default, and because boltdb does invalid unsafe pointer casts all over the place, it causes fatal errors if you like to run with -race in CI, for example.

                                                                                So yeah, Go indeed did shift from under it very recently.
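
                                                                                If I remember the flag plumbing right, it’s roughly:

                                                                                  # race builds enable the new checkptr instrumentation by default
                                                                                  go test -race ./...
                                                                                  # or turn the pointer checks on explicitly for all packages
                                                                                  go test -gcflags=all=-d=checkptr ./...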

                                                                            2. 6

                                                                              Some things might be able to. I do not personally believe a sendmail milter is one of those things that can be “done” and never need to be touched again. Unless email itself becomes “done”, I suppose.

                                                                          1. 3

                                                                            Honest question: why would you use nvim in a GUI which functions like the nvim TUI?

                                                                            1. 12

                                                                              Reasons I do:

                                                                              • Ligature support
                                                                              • Animated cursor
                                                                              • Faster performance
                                                                              • Possibility of future graphical features such as blurred floating windows, frameless window, etc

                                                                              Reasons I’ve heard from other users:

                                                                              • Identical cross platform experience

                                                                              Terminal is great for some things, but these days I use neovim as my terminal emulator, so I guess I’d ask it the other way around. Why use TUI when you can have a terminal inside of neovim?

                                                                              1. 0

                                                                                Reasons I do:

                                                                                • Ligature support

                                                                                Notice that many terminals support fonts with ligatures. You can see a handy table of terminals in the FiraCode documentation: https://github.com/tonsky/FiraCode In particular, konsole, qterminal, Windows Terminal, kitty, and iTerm have full ligature support.

                                                                                • Animated cursor

                                                                                What do you mean exactly by that? The blinking of the cursor? I prefer non-blinking cursors, but surely all terminals support a blinking cursor.

                                                                                • Faster performance

                                                                                I seriously doubt that the terminal inside the editor will be faster than a native terminal. Do you have benchmarks for that? Anyhow, terminal performance is very rarely an issue nowadays.

                                                                                • Possibility of future graphical features such as blurred floating windows, frameless window, etc

                                                                                What are “blurred floating windows” and why would you want such a horrific thing?

                                                                                1. 7
                                                                                  • Ligature Support

                                                                                  On Windows, the only terminal that supports ligatures with any semblance of performance is the Windows Terminal, which is a new app that has issues of its own and doesn’t have mouse pass-through at all. Maybe I’ve missed others?

                                                                                  • Animated Cursor

                                                                                  The gui supports a smear effect on the cursor which helps the user track where the cursor jumps. I find in my usage that I lose track of the cursor in some cases. The readme has a good example of it. This helps with that.

                                                                                  • Faster Performance

                                                                                  The combination of ligature support and good performance is very difficult to get right. In my experience on Windows, terminal emulation is very slow. This isn’t an issue with the GUI.

                                                                                  • Blurred Floating Windows

                                                                                  Floating windows are a feature inside of Neovim which lets windows appear on top of other windows. Neovim also supports some amount of fake transparency of the background so that characters behind show dimly in the front. This effect is fun and interesting, but a gui should be able to blur the background of these floating windows so that the text is less distracting but the effect is still visible.

                                                                                  As mentioned in the other comment, you are being unnecessarily mean.

                                                                                  1. 6

                                                                                    Your tone is a bit dismissive. You could express your points more kindly.

                                                                                    As for performance, terminal latency is still kinda bad for many. See the benchmarks from danluu and others. I don’t think there’s any particular reason to believe that a neovim GUI should have worse performance than a terminal. They do similar jobs.

                                                                                    If you look at the readme, you will see what they meant by animated cursor.

                                                                                    1. 1

                                                                                      Having “full ligature support” on a feature list and actually doing it are very different things. Last I checked (June 2019-ish), Windows Terminal, no Linux terminal I can find, and iTerm don’t do ZWJ emoji sequences according to font rules (hacker cat comes to mind, but there’s more) or various non-emoji double-width characters right. The only one I know that does (at least as far as my IRC usage and dadaist fish_prompt is concerned) is mintty/putty, and it’s unfortunately slow.

                                                                                      1. 1

                                                                                        I cannot speak about Windows, but I’m a happy user of FiraCode on Linux terminals (qterminal and kitty), and ligatures have been working for a few years now (since I first heard about them).

                                                                                    2. 1

                                                                                      Ligature support and animated cursors are good reasons, but is the GUI faster than e.g. Alacritty, which is GPU accelerated? Also, many terminals can have blurry backgrounds, frameless windows, etc. Identical cross platform experience is also a good reason if it works consistently cross platform.

                                                                                      Terminal is great for some things, but these days I use neovim as my terminal emulator, so I guess I’d ask it the other way around. Why use TUI when you can have a terminal inside of neovim

                                                                                      I also run terminals inside of Neovim, but it’s far from a tmux replacement. That’s why I use the TUI, so I can use it with tmux.

                                                                                      1. 9

                                                                                        I don’t know exactly how nvim’s embedding api works, but in principle it should be easier to achieve high performance with a purpose-built editor frontend than a terminal. Reason being that with vt100 all you have is a character buffer, so redrawing only dirty sections requires extra work and increases coupling. But in principle there’s no reason why one would have to perform better than the other, and alacritty has had a lot more work done on it.

                                                                                        1. 5

                                                                                          Terminals cannot do blurry backgrounds for the floating windows inside of neovim. I think in the end, it comes down to preference. tmux isn’t an option today on Windows, so for me the neovim terminal emulation is miles ahead of anything else I easily have available.

                                                                                          Perf-wise, it’s not nearly as good as Alacritty yet, but we are working on it, and as mentioned above, Alacritty doesn’t support ligatures, which is where a lot of the perf cost exists today.

                                                                                          1. 3

                                                                                            How is tmux not an option on windows? I use tmux in mintty a ton

                                                                                            1. 1

                                                                                              Gotta use cygwin for that. I’m not a fan, but if you are into that, tmux works great :)

                                                                                              1. 1

                                                                                                There’s a WSL port (full disclosure, I haven’t tried it) https://github.com/mintty/wsltty

                                                                                            2. 1

                                                                                              Fair point, I switched to Linux completely, so I kinda forgot about Windows. But yeah, especially on Windows it’d be nice to have this Neovim GUI.

                                                                                        2. 6

                                                                                          The terminal literally emulates a decades old hardware design. Using that as a platform is a silly default.

                                                                                          1. 1

                                                                                            Your cells emulate a billion-year-old hardware design. Please, update to a non-silly platform.

                                                                                            1. 12

                                                                                              Oh, man. Just tell me how.

                                                                                            2. 1

                                                                                              The terminal is just too convenient to not use as a platform.

                                                                                              1. 6

                                                                                                The concept isn’t the implementation. One can imagine a text oriented user interface without multiple decades of legacy requirements.

                                                                                                1. 1

                                                                                                  Hmm, could you expand on how you’d envision that in a bit more detail? Because I’m unsure I completely understand.

                                                                                                  1. 3

                                                                                                    One could reconsider/omit:

                                                                                                    • The entire termcap/terminfo infrastructure.

                                                                                                    • The limitation of the vt100 and friends as a medium (strict cell boundaries, no graphics, everything like DBCS/UTF-8 being a graft)

                                                                                                    • The raw bytestream nature, using a more structured protocol, which could enable everything from low bandwidth form entry (the 5250/3270 reality) or rich objects in your CLI (think anything from Mathematica to PowerShell)

                                                                                                    1. 2

                                                                                                      The entire termcap/terminfo infrastructure.

                                                                                                      I’m unfamiliar with this infrastructure, what does it do and why is it bad?

                                                                                                      The limitation of the vt100 and friends as a medium (strict cell boundaries, no graphics, everything like DBCS/UTF-8 being a graft)

                                                                                                      Fair point, having the option to have these things would be a very welcome addition.

                                                                                                      The raw bytestream nature, using a more structured protocol, which could enable everything from low bandwidth form entry (the 5250/3270 reality) or rich objects in your CLI (think anything from Mathematica to PowerShell)

                                                                                                      Well, if everything would work using that protocol / interface it’d be nice, but raw bytestreams seem to be the ultimate backwards compatible “protocol”, if you could call it such. Having these battle-tested tools which are still usable and easily extensible is quite a boon. The cost of turning over to another system with a more strict protocol doesn’t seem worth the benefits to me personally.

                                                                                                      Maybe if there’d be awk-like tools which could parse raw bytestreams to these objects? If this was simple enough, it could provide the same kind of backwards compatibility and extensibility.

                                                                                                      1. 5

                                                                                                        Well, if everything would work using that protocol / interface it’d be nice, but raw bytestreams seem to be the ultimate backwards compatible “protocol”, if you could call it such.

                                                                                                        This is the part of the discussion that really annoys me, because it’s so misleading.

                                                                                                        You can just dump arbitrary bytes on a terminal, if you don’t mind it switching into an unpredictable mode, or even crashing. But in reality, there is a protocol made up of the ANSI escape sequences and your codepage (hopefully UTF-8).

                                                                                                        Well-designed CLI apps, like vim, will escape these streams just like web apps escape tag characters, for readability purposes, but more importantly to prevent clickjacking.

                                                                                                        There are also potential vulnerabilities on the input side, too. For example, what happens if your clipboard contains an ESC and you paste it into vim?
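
                                                                                                      A trivial illustration of how those bytes drive the terminal instead of just being displayed:

                                                                                                        # the ESC [ 2 J sequence clears the screen rather than printing anything
                                                                                                        printf 'before\033[2Jafter\n'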

                                                                                                        1. 1

                                                                                                        I get your point, but that doesn’t take away that its backwards compatibility is unmatched. But rebuilding everything from scratch every X years takes a lot of effort which most of us are not willing to put in, I reckon.

                                                                                            1. 10

                                                                                              Relevant discussion on the Apple developers forum, where the developers of Little Snitch, TripMode and Radio Silence, among others, express their concerns:

                                                                                              https://forums.developer.apple.com/thread/79590

                                                                                                Apple’s official position is for them to file an “enhancement request”. Good luck with that…

                                                                                              1. 2

                                                                                                And all of that was in 2017. Really unlikely that Apple is going to do anything given it’s been almost 3 years.

                                                                                                1. 6

                                                                                                    Right. I was never a fan of the theory that Apple was iPad’ifying macOS. But it looks like we are heading in that direction, even if accidentally. I can understand Apple’s motivations for the individual changes. In principle SIP is great; it protects against many malware attacks. In principle user-space drivers are also great; a vendor’s crap drivers should not run in our ring-0 [1]. Signed applications were great, but the mechanism was somewhat sensitive to stolen developer keys. Now we have notarization, which makes Apple the gatekeeper, even outside the App Store.

                                                                                                    With many of these steps, there are accommodations for more advanced users, but they are all half-baked. They do user-space drivers, but never complete the APIs necessary for developers to actually restore the old functionality in user space. They make the system volume read-only, but come up with a half-baked mechanism for users who actually need a top-level directory. E.g. installing Nix in Catalina requires creating a new volume, creating an entry in synthetic.conf, and creating an entry in fstab. And then it doesn’t really work well if you encrypt the volume, because encrypted volumes are only mounted upon login, which means that applications that rely on the store could be started before the Nix store is mounted. How about just providing a menu item in Disk Utility that says “Create a top-level mounted volume”?
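
                                                                                                    From memory, the dance looks roughly like this (disk1 is whatever your APFS container happens to be):

                                                                                                      # ask macOS to synthesize an empty /nix mount point at boot
                                                                                                      echo 'nix' | sudo tee -a /etc/synthetic.conf
                                                                                                      # create the volume and mount it there
                                                                                                      sudo diskutil apfs addVolume disk1 APFS 'Nix Store' -mountpoint /nix
                                                                                                      # plus an fstab entry (edited via vifs) so it mounts automatically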

                                                                                                  The thing is that advanced users were just a gateway in the early 2000s for Apple to gain a foothold in the market and bootstrap a developer ecosystem. Now that the vast majority of Mac users are not advanced users, it’s just not their focus anymore. Their focus is providing a system that is as easy and secure as possible for the large majority of users and avoiding diverging from the iOS ecosystem to avoid maintenance costs. That’s a perfectly fine direction to take, but we as developers/advanced users should not expect much more than the occasional nice ‘back door’ that Apple developers manage to smuggle in, such as synthetic.conf.

                                                                                                  [1] The situation is really different compared to Linux, because in Linux virtually all drivers are open source and upstreamed, so one can verify that they don’t do stupid stuff.

                                                                                                  1. 0

                                                                                                      Lately I’ve been a bit dissatisfied with MacOS. It used to be great IMO. I really hope they won’t completely dumb down MacOS.