1. 4

    The tl;dr is: use Wayland, because X just wasn’t designed for this.

    1. 3

      If all monitors have the same scale, X can look fine. But it absolutely cannot handle both low-DPI and high-DPI monitors at the same time.

      1. 1

        Although this does work fine with Qt5.

        1. 1

          It can, using a specialized xrandr configuration, and it looks great. The only problem I had (which stopped me from using it) is that there’s a bug in the Intel driver that makes the cursor on the laptop monitor flicker, which is more annoying than you’d think.

      1. 2

        I’m using entr for this.

        1. 4

          The most concerning part is that DigitalOcean shuts off your server first and contacts you second. There should be at least a day to respond before your service gets shut off.

          1. 3

            On the other hand, if your server actually got hacked due to some security vulnerability, you might be happy they blocked it immediately. Blocking it would also potentially stop the compromise from spreading.

            1. 2

              They probably do that for bigger customers.

              1. 2

                I suspect that anti-phishing policies are derived from safe harbor policies at most web hosting companies (since they both involve shutting down or making inaccessible other people’s files or services). DMCA takedowns are performed immediately & then investigated (or never investigated, more likely) in order to avoid liability under safe harbor provisions, since there’s a time limit of either 24 or 48 hours (I’ve forgotten which).

                If the author actually was a professional spammer/phisher, and was dumb enough to be paying for his own hosting from a place with anti-phishing policies, then he could use the 24 hours warning to register new accounts & move his operation, so even if you want to take the position that DigitalOcean’s primary goal is to be proactive against phishers / protect users rather than to cover their ass, there’s a case to be made.

                Certainly, under these circumstances the behavior is user- (and customer-) hostile, but that’s business: as long as you’re sure they’ll pay you, hostility doesn’t affect the bottom line, and you can feel free to alienate customers in direct proportion to other kinds of risk. Running a public IPFS gateway on a rented machine is an unusual behavior (of dubious utility, since running a private gateway is so easy & pure-JS gateways that run in the browser exist for those who can’t), & it’s unlikely that DigitalOcean is going to adapt their policies to support any kind of open gateway or proxy.

              1. 3

                It’s disappointing that there are still no extensions for mobile Chrome.

                1. 5

                  Then uBlock Origin would be available there, which I would guess is undesirable.

                  1. 3

                    That’s why using Firefox on mobile is a no-brainer.

                  1. 3

                    I would love to have a feedback post, three years later. I don’t really know the status of Neovim right now.

                    1. 12

                      All of the points made in the post mentioned are still true.

                      Neovim is still developed actively and the community is stronger than ever. You can see the latest releases with notes here: https://github.com/neovim/neovim/releases

                      Vim’s BDFL ultimately caved and released his own async feature that is incompatible with Neovim’s design that has been in use by various cross-compatible plugins for years (no actual reason was provided for choosing incompatibility despite much pleading from community members). Some terminal support has also been added to recent Vim. IMO both implementations are inferior to Neovim’s, but that doesn’t matter much for end-users.

                      There are still many additional features in Neovim that haven’t been begrudgingly ported to Vim.

                      At this point, I choose to use Neovim not because of the better codebase and modern features and saner defaults, but because of the difference in how the projects are maintained and directed.

                      1. 20

                        Vim’s BDFL ultimately caved and released his own async feature

                        No, he didn’t. He didn’t cave. He was working on async, for a long time, with the goal of producing an async feature that actually fit in with the rest of Vim’s API and the rest of VimL, which he did. Did he probably work on it more and more quickly due to NeoVim? Sure. Did he only work on it because of pressure as you imply? No.

                        that is incompatible with Neovim’s design that has been in use by various cross-compatible plugins for years (no actual reason was provided for choosing incompatibility despite much pleading from community members).

                        NeoVim is incompatible with vim, not the other way around.

                        Some terminal support has also been added to recent Vim. IMO both implementations are inferior to Neovim’s, but that doesn’t matter much for end-users.

                        Async in vim fits in with the rest of vim much better than NeoVim’s async API would have fit in with vim.

                        There are still many additional features in Neovim that haven’t been begrudgingly ported to Vim.

                        The whole point of NeoVim is to remove features that they don’t personally use because they don’t think they’re important. There are a lot of Vim features not in NeoVim.

                        At this point, I choose to use Neovim not because of the better codebase and modern features and saner defaults, but because of the difference in how the projects are maintained and directed.

                        Vim is stable, reliable and backwards-compatible. I don’t fear that in the next release, a niche feature I use will be removed because ‘who uses that feature lolz?’, like I would with neovim.

                        1. 10

                          No, he didn’t. He didn’t cave. He was working on async, for a long time, with the goal of producing an async feature that actually fit in with the rest of Vim’s API and the rest of VimL, which he did.

                          Where did you get this narrative from? The original post provides links to the discussions of Thiago’s and Geoff’s respective attempts at this. I don’t see what you described at all.

                          Can you link to any discussion about Bram working on async for a long time before?

                          NeoVim is incompatible with vim, not the other way around.

                          Huh? Vim didn’t have this feature at all, a bunch of plugins adopted Neovim’s design, and Vim broke compatibility with those plugins by releasing an incompatible implementation of the same thing, forcing plugin maintainers to build separate compatibility pipelines for Vim. Some examples of this are fatih’s vim-go (some related tweets: https://twitter.com/fatih/status/793414447113048064) and Shougo’s plugins.

                          I get the whole “Vim was here first!” thing, but this is about the plugin ecosystem.

                          Async in vim fits in with the rest of vim much better than NeoVim’s async API would have fit in with vim.

                          How’s that?

                          Here is the discussion of the patch to add vim async from Bram, where he is rudely dismissive of Thiago’s plea for a compatible design (no technical reasons given): https://groups.google.com/forum/#!topic/vim_dev/_SbMTGshzVc/discussion

                          The whole point of NeoVim is to remove features that they don’t personally use because they don’t think they’re important. There are a lot of Vim features not in NeoVim.

                          What are some examples of important features or features you care about that have been removed from Neovim?

                          The whole point of Neovim (according to the landing page itself: https://neovim.io/) is to migrate to modern tooling and features. The goal is to remain backwards-compatible with original vim.

                          Vim is stable, reliable and backwards-compatible. I don’t fear that in the next release, a niche feature I use will be removed because ‘who uses that feature lolz?’, like I would with neovim.

                          Do you actually believe this or are you being sarcastic to make a point? I honestly can’t relate to this.

                          1. 3

                            The vim vs. neovim debate is often framed a bit in the style of Bram vs. Thiago, and the accusation against Thiago is typically that he was too impatient or should not have forked vim in the first place when Bram did not merge Thiago’s patches. I have the feeling that your argumentation falls along similar lines, and I don’t like to view this exclusively as Bram vs. Thiago, because I value both Bram’s and Thiago’s contributions to the open source domain, and I think so far vim has ultimately profited from the forking.

                            I think there are two essential freedoms in open source,

                            • the freedom of an open source maintainer not to accept or merge contributions,
                            • the freedom, at the very essence of open source, of users to fork when they feel that the maintainers are not accepting their contributions (preferably after trying to contribute to the source project first).

                            Both of these happened when neovim was forked. There is no “offender” in any way. Thus, all questions on API compatibility following the split cannot be led from the perspective of a renegade fork (nvim) and an authoritative true editor (vim).

                            1. 8

                              It was absolutely 100% justified of Thiago to fork vim when Bram wouldn’t merge his patches. What’s the point of open source software if you can’t do this?

                              1. 3

                                And as a follow up my more subjective view:

                                I personally use neovim on my development machines, and vim on most of the servers I ssh into. The discrepancy for the casual usage is minimal, on my development machines I feel that neovim is a mature and very usable product that I can trust. For some reason, vim’s time-tested code-base with pre-ANSI style C headers and no unit tests is one I don’t put as much faith in, when it comes to introducing changes.

                            2. 4

                              @shazow’s reasoning and this post are what I link people to in https://jacky.wtf/weblog/moving-to-neovim/. Like for a solid release pipeline and actual digestible explanations as to what’s happening with the project, NeoVim trumps Vim every time.

                          1. 2

                            When accessing more bytes per cache line, the code is running more instructions. Each of these instructions depends on the previous one, as they’re all writing to the same counter. So I assume it’s actually just slower because it’s more sequential code, not because L1 cache reads are expensive.

                            A better benchmark would compare the sizes you can access with a single instruction, or would have multiple instructions that can be run in parallel.
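                            To illustrate (a sketch with made-up names, not the benchmark’s actual code): a single shared counter serializes the additions, while several independent accumulators let them overlap, isolating the cost of the memory access itself.

                            ```c
                            #include <assert.h>
                            #include <stddef.h>
                            #include <stdint.h>

                            /* Every addition depends on the previous one: a serial dependency chain. */
                            uint64_t sum_serial(const uint8_t *buf, size_t n) {
                                uint64_t total = 0;
                                for (size_t i = 0; i < n; i++)
                                    total += buf[i];
                                return total;
                            }

                            /* Four independent accumulators: the additions can execute in parallel,
                             * so any remaining slowdown is more plausibly the memory access itself. */
                            uint64_t sum_parallel(const uint8_t *buf, size_t n) {
                                uint64_t a = 0, b = 0, c = 0, d = 0;
                                size_t i = 0;
                                for (; i + 4 <= n; i += 4) {
                                    a += buf[i];
                                    b += buf[i + 1];
                                    c += buf[i + 2];
                                    d += buf[i + 3];
                                }
                                for (; i < n; i++)   /* leftover tail */
                                    a += buf[i];
                                return a + b + c + d;
                            }
                            ```

                            Both compute the same sum; timing them against each other separates “more instructions in a chain” from “cache reads are expensive”.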

                            1. 2

                              Deployed via PDF font installer, that is. It seems the biggest issue here is that the installer never checks the signatures of the font installers it downloads, but the article never suggests that.

                              1. 4

                                Any example of a situation in which creating commits like this helps?

                                1. 3

                                  Let’s say you have some complicated history pattern with merges and so on. Lots of different developers doing lots of different things. After a bunch of merges and merge conflict resolutions you have a history of sorts, but you want to clean it up. This allows you to make a single commit where the end result is the tree matches the complex history you’d like to throw away.

                                  Very useful for keeping dev history clean.

                                  1. 1

                                    Ah, that makes sense. So like a squashed merge, but without the merge. I guess it’s what git merge --squash --strategy=theirs would do if that merge strategy existed.

                                    1. 1

                                      Well, now that I think about it, wouldn’t saying --strategy=theirs be specifying just how conflicts are handled? My tool says: forget about conflicts, merging, everything; take the entire tree from the other commit wholesale. Don’t even try to merge things together.

                                      1. 1

                                        No, that’s what --strategy=recursive -X theirs does. The existing “ours” strategy just throws away the other commit and takes the tree from the current one. A fictional “theirs” strategy would do the same with the other tree.

                                        Merge strategies and their options are pretty confusing.

                                    2. 1

                                      Wait… you use it to delete history? But… having that history around is the reason I use git?

                                      1. 1

                                        The last thing I want to do when fighting a production fire at 3am is be sorting through 12 merges of commits that look like:

                                        • add feature
                                        • whoops
                                        • small fix
                                        • review comments
                                        • doh maybe this time.

                                        Squash that crap together! Figuring out which commit broke the build is infinitely harder when the problem is in some chain of merges titled “whoops”.

                                        I decidedly prefer having my git history serve as a neatly curated form of documentation about the evolution of the codebase, not a chaos of immutable trial and error.

                                        1. 2

                                          I constantly bring this up in pull requests when I see shitty commit histories like that. Squash your damn commits! If you’ve already pushed a branch, create a new one with a new name, pick your commits on top of it, rebase -i and squash them into succinct relevant feature sets (or try to get as close as you can).

                                          I realize in this case it’s already gone and it’s too late (history with a ton of “squishme: interim commit” bullshit in there), and that’s the purpose of tools like yours, but teaching people good code hygiene is pretty important too. :-P
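                                          For reference, a non-interactive sketch of the same squash (branch names, tags, and messages are made up; run it in a scratch repo). rebase -i is the interactive way; git reset --soft to the branch point does the equivalent collapse in a script:

                                          ```shell
                                          set -e
                                          cd "$(mktemp -d)"
                                          git init -q
                                          git config user.email dev@example.com
                                          git config user.name dev
                                          git commit -q --allow-empty -m "base"
                                          git tag base
                                          # a messy feature branch: three noisy commits
                                          git checkout -qb feature
                                          for msg in "add feature" "whoops" "small fix"; do
                                              echo "$msg" >> work.txt
                                              git add work.txt
                                              git commit -qm "$msg"
                                          done
                                          # collapse everything since the branch point into one tidy commit
                                          git reset -q --soft "$(git merge-base base HEAD)"
                                          git commit -qm "feature: one succinct commit"
                                          # history is now: base + one feature commit
                                          ```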

                                          1. 1

                                            So I agree with you on this approach, but I think I’m still not grasping what your tool accomplishes. Couldn’t the situation you’re outlining here be accomplished by squashing?

                                            1. 1

                                              Yeah that last comment was really more of a discussion about why you might want to clean up git history. That’s a poor example for this tool.

                                              This tool is useful when there’s multiple merges along two divergent branches of history and you want to make a commit that essentially contains the entire diff from your commit down to the merge-base of another commit combined with the diff from the merge-base back up to that other commit.
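                                              A sketch of that with git plumbing (illustrative names, scratch repo; this uses git commit-tree directly, not necessarily what the tool does internally): adopt another commit’s entire tree as a single new commit, with no merging involved.

                                              ```shell
                                              set -e
                                              cd "$(mktemp -d)"
                                              git init -q
                                              git config user.email dev@example.com
                                              git config user.name dev
                                              git commit -q --allow-empty -m "base"
                                              git tag base
                                              # messy history we'd like to throw away
                                              echo result > f.txt
                                              git add f.txt
                                              git commit -qm "messy merges and fixups live here"
                                              messy=$(git rev-parse HEAD)
                                              # back on a clean branch, take the messy branch's tree wholesale
                                              git checkout -qb clean base
                                              tree=$(git rev-parse "$messy^{tree}")
                                              new=$(git commit-tree "$tree" -p HEAD -m "feature: final state, history discarded")
                                              git reset -q --hard "$new"
                                              git diff --quiet "$messy" HEAD   # identical trees, different history
                                              ```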

                                              1. 1

                                                Hmm, I guess I just can’t picture in what kind of situation that would happen. Could you explain the example chronologically?

                                                1. 1

                                                  I think @jtolds is on significantly more complicated code bases than I’ve worked on. There was an earlier post about Octopus commits:

                                                  https://www.destroyallsoftware.com/blog/2017/the-biggest-and-weirdest-commits-in-linux-kernel-git-history

                                                  and here is a visual for what that would look like:

                                                  https://imgur.com/gallery/oiWeZmm

                                    1. 8

                                      A couple notes on the article (specifically, the one it links to at the beginning, The Logical Disaster of Null).

                                      Null is a crutch. It’s a placeholder for I don’t know and didn’t want to think about it further

                                      I disagree. In C, at least, NULL is a preprocessor macro, not a special object, “which expands to an implementation-defined null pointer constant”. In most cases, it’s either 0, or ((void *)0). It has a very specific definition and that definition is used in many places with specific meaning (e.g., malloc returns NULL on an allocation failure). The phrase, “It’s a placeholder for I don’t know and didn’t want to think about it further”, seems to imply that it’s used by programmers who don’t understand their own code, which is a different problem altogether.

                                      People make up what they think Null means, which is the problem.

                                      I agree. However, again in C, this problem doesn’t really exist, since there are no objects, only primitive types. structs, for example, are just logical groupings of zero or more primitive types. I can imagine that, in object-oriented languages, the desire to create some sort of NULL object can result in an object that acts differently than non-NULL objects in exceptional cases, which would lead to inconsistency in the language.

                                      Another article linked to in Logical Disaster of Null talks about how using NULL-terminated character arrays to represent strings was a mistake.

                                      Should the C language represent strings as an address + length tuple or just as the address with a magic character (NUL) marking the end?

                                      I would certainly choose the NULL-terminated character array representation. Why? Because I can easily just make a struct that has a non-NULL-terminated character array, and a value representing length. This way, I can choose my own way to represent strings. In other words, the NULL-terminated representation just provides flexibility.
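                                      A sketch of that idea (the type and function names here are made up): a length-carrying string built from C primitives, which can even hold embedded NUL bytes.

                                      ```c
                                      #include <assert.h>
                                      #include <stdlib.h>
                                      #include <string.h>

                                      /* A "Pascal-style" string: explicit length, no terminator required. */
                                      struct lstring {
                                          size_t len;
                                          char  *data;   /* not NUL-terminated; may contain embedded '\0' */
                                      };

                                      struct lstring lstring_from(const char *bytes, size_t len) {
                                          struct lstring s;
                                          s.len = len;
                                          s.data = malloc(len ? len : 1);   /* avoid malloc(0) edge case */
                                          memcpy(s.data, bytes, len);
                                          return s;
                                      }
                                      ```

                                      Nothing in the language prevents building this on top of NUL-terminated strings; the reverse direction is equally possible, which is the flexibility argument above.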

                                      1. 4

                                        “On Multics C on the Honeywell DPS-8/M and 6180, the pointer value NULL is not 0, but -1|1.”

                                        1. 3

                                          The C Standard allows that. It basically states that, in the source code, a value of 0 in a pointer context is a null pointer and shall be converted to whatever value that represents in the local architecture. So that means on a Honeywell DPS-8/M, the code:

                                          char *p = 0;
                                          

                                          is valid, and will set the value of p to be -1. This is done by the compiler. The name NULL is defined so that it stands out in source code. C++ has rejected NULL and you are expected to use the value 0 (I do not agree with this, but I don’t do C++ coding).

                                          1. 2

                                            I believe C++11 introduced the nullptr keyword which can mostly be used like NULL in C.

                                            1. 1

                                              Correct. Just for reference, from the 1989 standard:

                                              “An integral constant expression with the value 0, or such an expression cast to type void * , is called a null pointer constant.”

                                          2. 3

                                            I would certainly choose the NULL-terminated character array representation. Why? Because I can easily just make a struct that has a non-NULL-terminated character array, and a value representing length. This way, I can choose my own way to represent strings. In other words, the NULL-terminated representation just provides flexibility.

                                            That’s not a very convincing argument IMO since you can implement either of the options yourself no matter which one is supported by the stdlib, the choice of one doesn’t in any way impact the potential flexibility. On the other hand NULL-terminated strings are much more likely to cause major problems due to how extremely easy it is to accidentally clobber the NULL byte, which happens all the time in real-world code.

                                            And the language not supporting Pascal-style strings means that people would need to reach for one of a multitude of different and incompatible third-party libraries and then convince other people on the project that the extra dependency is worth it, and even then you need to be very careful when passing the functions to any other third-party functions that need the string.

                                            1. 1

                                              You make a good point. Both options for strings can be implemented. As for Pascal strings, it is nice that a string can contain a NULL character somewhere in the middle. I guess back in the day when C was being developed, Ritchie chose NULL-terminated strings due to length being capped at 255 characters (the traditional Pascal string used the first byte to contain length). Nowadays, since computers have more memory, you could just use the first 4 bytes (for example) to represent string length, in which case, in C it could just be written as struct string { int length; char *letters; }; or something like that.

                                              From Ritchie: “C treats strings as arrays of characters conventionally terminated by a marker. Aside from one special rule about initialization by string literals, the semantics of strings are fully subsumed by more general rules governing all arrays, and as a result the language is simpler to describe and to translate than one incorporating the string as a unique data type.”

                                          1. 2

                                            I think your code is missing some calls to make_pipe? The child_* arrays are never initialized.

                                            1. 1

                                              You’re absolutely right. In the cleanup for my post I accidentally removed those. I will fix that.

                                            1. 4

                                              No nice IPv6 addresses though :(

                                              2606:4700:4700::1111
                                              2606:4700:4700::1001
                                              

                                              I wonder why they didn’t go for something shorter.

                                              1. 6

                                                I used to think the best way was to check for anything, @, anything, ., anything and then I was informed it is possible to have an email address with no dot if the domain side is an ipv6 address…
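                                                  As a sketch (a deliberately naive checker, not real validation code), that rule looks like this in C, and it rejects exactly such addresses, since an IPv6 domain literal need not contain a dot:

                                                  ```c
                                                  #include <assert.h>
                                                  #include <stdbool.h>
                                                  #include <string.h>

                                                  /* The naive rule: non-empty local part, '@', then a dot with
                                                   * something on both sides in the domain part. */
                                                  bool naive_email_ok(const char *s) {
                                                      const char *at = strchr(s, '@');
                                                      if (at == NULL || at == s)   /* no '@', or empty local part */
                                                          return false;
                                                      const char *dot = strrchr(at + 1, '.');
                                                      return dot != NULL && dot > at + 1 && dot[1] != '\0';
                                                  }
                                                  ```

                                                  naive_email_ok("user@example.com") passes, but a legal address like user@[IPv6:2001:db8::1] fails the dot check.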

                                                1. 2

                                                    That sounds like it wouldn’t work with SPF or DKIM anyway?

                                                  1. 3

                                                    Do SPF or DKIM matter in this context? The email receiver validates these things for the sender. You don’t need it if you only plan to receive emails, do you?

                                                1. 6

                                                  I think the faulty assumption is that the happiness of users and developers is more important to the corporate bottom line than full control over the ecosystem.

                                                  Linux distributions have shown for a decade that providing a system for reliable software distribution while retaining full user control works very well.

                                                      Both Microsoft and Apple kept the first part, but dropped the second part. Allowing users to install software not sanctioned by them is a legacy feature that is being removed – slowly, so as not to cause too much uproar from users.

                                                  Compare it to the time when Windows started “phoning home” with XP … today it’s completely accepted that it happens. The same thing will happen with software distributed outside of Microsoft’s/Apple’s sanctioned channels. (It indeed has already happened on their mobile OSes.)

                                                  1. 8

                                                        As a long-time Linux user and believer in the four freedoms, I find it hard to accept that Linux distributions demonstrate “providing a system for reliable software distribution while retaining full user control works very well”. Linux distros seem to work well for enthusiasts and places with dedicated support staff, but we are still at least a century away from the year of Linux on the desktop. Even many developers (who probably have some overlap with the enthusiast community) have chosen Macs with unreliable software distribution like Homebrew and incomplete user control.

                                                    1. 2

                                                      I agree with you that Linux is still far away from the year of Linux on the desktop, but I think it is not related to the way Linux deals with software distribution.

                                                      There are other, bigger issues with Linux that need to be addressed.

                                                      In the end, the biggest impact on adoption would be some game studios releasing their AAA title as a Linux-exclusive. That’s highly unlikely, but I think it illustrates well that many of the factors of Linux’ success on the desktop hinge on external factors which are outside of the control of users and contributors.

                                                      1. 2

                                                            All the devs I know that use Macs use Linux in some virtualisation option instead of homebrew for work. Obviously that’s not a scientific study by any means.

                                                        1. 8

                                                          I’ll be your counter example. Homebrew is a great system, it’s not unreliable at all. I run everything on my Mac when I can, which is pretty much everything except commercial Linux-only vendor software. It all works just as well, and sometimes better, so why bother with the overhead and inconvenience of a VM? Seriously, why would you do that? It’s nonsense.

                                                          1. 4

                                                            Maybe a VM makes sense if you have very specific wishes. But really, macOS is an excellent UNIX and for most development you won’t notice much difference. Think Go, Java, Python, Ruby work. Millions of developers probably write on macOS and deploy on Linux. I’ve been doing this for a long time and ‘oh this needs a Linux specific exception’ is a rarity.

                                                            1. 4

                                                              you won’t notice much difference.

                                                              Some time ago I was very surprised that hfs is not case sensitive (by default). Due to a bad letter-case in an import my script would fail on linux (production), but worked on mac. Took me about 30 minutes to figure this out :)

                                                              1. 3

                                                                You can make a case sensitive code partition. And now with APFS, partitions are continuously variable size so you won’t have to deal with choosing how much goes to code vs system.

                                                                1. 1

                                                                  A case sensitive HFS+ slice on a disk image file is a good solution too.

                                                                2. 2

                                                                  Have fun checking out a git repo that has Foo and foo in it :)

                                                                  1. 2

                                                                    It was bad when microsoft did it in VB, and it’s bad when apple does it in their filesystem lol.

                                                                3. 2

                                                                  Yeah definitely. And I’ve found that accommodating two platforms where necessary makes my projects more robust and forces me to hard code less stuff. E.g. using pkg-config instead of yolocoding path literals into the build. When we switched Linux distros at work, all the packages that worked on MacOS and Linux worked great, and the Linux only ones all had to be fixed for the new distro. 🙄

                                                                4. 2

                                                                      I did it for a while because I dislike the Mac UI a lot but needed to run it for some work things. Running in a full screen VM wasn’t that bad. Running native is better, but virtualization is pretty first class at this point. It was actually convenient in a few ways too. I had to give my mac in for repair at one point, so I just copied the VM to a new machine and I was ready to run in minutes.

                                                                  1. 3

                                                                    I use an Apple computer as my home machine, and the native Mac app I use is Terminal. That’s it. All other apps are non-Apple and cross-platform.

                                                                        That said, MacOS does a lot of nice things. For example, if you try to unmount a drive, it will tell you which application is still using it, so you can close that application and unmount. Windows (10) still can’t do that; you have to look in the Event Viewer(!) to find the error message.

                                                                    1. 3

                                                                      In case it’s unclear, non-Native means webapps, not software that doesn’t come preinstalled on your Mac.

                                                                      1. 3

                                                                        It is actually pretty unclear what non-Native here really means. The original HN post is about sandboxed apps (distributed through the App Store) vs non-sandboxed apps distributed via a developer’s own website.

                                                                        Even Gruber doesn’t mention actual non-Native apps until the very last sentence. He just talks/quotes about sandboxing.

                                                                        1. 3

                                                                          The second sentence of the quoted paragraph says:

                                                                          Cocoa-based Mac apps are rapidly being eaten by web apps and Electron pseudo-desktop apps.

                                                                    2. 1

                                                                      full-screen VM high-five

                                                                    3. 1

                                                                      To have an environment closer to production, I guess (or maybe ease of installation; dunno, never used Homebrew). I don’t have to use a Mac anymore, so I run a pure distro, but everyone else I know uses virtualisation or containers on their Macs.

                                                                      1. 3

                                                                        Homebrew is really really really easy. I actually like it over a lot of Linux package managers because it has first-class support for building the software with different flags. And it has binaries for the default flag set for fast installs. Installing a package on Linux with alternate build flags sucks hard in anything except portage (Gentoo), and portage is way less usable than brew. It also supports having multiple versions of packages installed, kind of halfway to what nix does. And unlike Debian/CentOS it doesn’t have opinions about what should be “in the distro”; it just has up-to-date packages for everything and lets you pick your own philosophy.

                                                                        The only thing that sucks is OpenSSL, ever since Apple removed it from MacOS. Brew packages handle it just fine, but the python package system is blatantly garbage and doesn’t handle it well at all. You sometimes have to pip install with CFLAGS set, or with a package-specific env var, because python is trash and doesn’t standardize any of this.

                                                                        But even on Linux using python sucks ass, so it’s not a huge disadvantage.

                                                                        1. 1

                                                                          Installing a package on Linux with alternate build flags sucks hard in anything except portage

                                                                          You mention nix in the following sentence, but installing packages with different flags is also something nix does well!

                                                                          1. 1

                                                                            Yes true, but I don’t want to use NixOS even a little bit. I’m thinking more vs mainstream distro package managers.

                                                                          2. 1

                                                                            For all its ease, homebrew only works properly if used by a single user who is also an administrator who only ever installs software through homebrew. And then “works properly” means “install software in a global location as the current user”.

                                                                            1. 1

                                                                              by a single user who is also an administrator

                                                                              So like a laptop owner?

                                                                              1. 1

                                                                                A laptop owner who hasn’t heard that it’s good practice to not have admin privileges on their regular account, maybe.

                                                                            2. 1

                                                                              But even on Linux using python sucks ass, so it’s not a huge disadvantage.

                                                                              Can you elaborate more on this? You create a virtualenv and go from there, everything works.

                                                                              1. 2

                                                                                It used to be worse, when mainstream distros would have either 2.4 or 2.6/2.7 and there wasn’t a lot you could do about it. Now if you’re on python 2, pretty much everyone is 2.6/2.7. Because python 2 isn’t being updated. Joy. Ruby has rvm and other tools to install different ruby versions. Java has a tarball distribution that’s easy to run in place. But with python you’re stuck with whatever your distro has pretty much.

                                                                                And virtualenvs suck ass. Bundler, maven/gradle, etc. all install packages globally and let you exec against arbitrary environments directly (bundle exec, mvn exec, gradle run), without messing with activating and deactivating virtualenvs. Node installs all its modules locally to a directory by default, but at least it automatically picks those up. I know there are janky shell hacks to make virtualenvs automatically activate and deactivate with your current working directory, but come on. Janky shell hacks.

                                                                                That and pip just sucks. Whenever I have python dependency issues, I just blow away my venv and rebuild it from scratch. The virtualenv melting pot of files that pip dumps into one directory just blatantly breaks a lot of the time. They’re basically write-once. Meanwhile every gem version has its own directory, so you can cleanly add, update, and remove gems.

                                                                                Basically ruby, java, node, etc. all have tooling actually designed to author and deploy real applications. Python never got there for some reason, and still has a ton of second-rate trash. The scientific community doesn’t even bother; they use distributions like Anaconda. And Linux distros that depend on python packages handle the dependencies independently in their native package formats. Ruby gets that too, but the native packages are just… gems. And again, since gems are version-binned, you can still install different versions of a gem for your own use without breaking anything. With Python there is no way to avoid fucking up the system packages without using virtualenvs exclusively.

                                                                                1. 1

                                                                                  But with python you’re stuck with whatever your distro has pretty much.

                                                                                  I’m afraid you are mistaken: not only do distros ship with 2.7 and 3.5 at the same time (and have for years), it is usually trivial to install a newer version.

                                                                                  let you exec against arbitrary environments directly (bundle exec, mvn exec, gradle run), without messing with activating and deactivating virtualenvs

                                                                                  You can also execute from virtualenvs directly.

                                                                                  Whenever I have python dependency issues, I just blow away my venv and rebuild it from scratch.

                                                                                  I’m not sure how to comment on that :-)

                                                                                  1. 1

                                                                                    it is usually trivial to install newer version

                                                                                    Not my experience? How?

                                                                                    1. 1

                                                                                      Usually you have packages for all python versions available in some repository.

                                                                      2. 2

                                                                        Have they chosen Macs or have they been issued Macs? If I were setting up my development environment today I’d love to go back to Linux, but my employers keep giving me Macs.

                                                                        1. 3

                                                                          Ask for a Linux laptop. We provide both.

                                                                          I personally keep going Mac because I want things like wifi, decent power management, and not having to carefully construct a house-of-cards special snowflake desktop environment to get a usable workspace.

                                                                          If I used a desktop computer with statically affixed monitors and an Ethernet connection, I’d consider Linux. But Macs are still the premier Linux laptop.

                                                                          1. 1

                                                                            At my workplace every employee is given a Linux desktop, and they have to file a special request to get a Mac or Windows laptop (which would be in addition to their Linux desktop).

                                                                        2. 3

                                                                          Let’s be clear though, what this author is advocating is much much worse from an individual liberty perspective than what Microsoft does today.

                                                                          1. 4

                                                                            Do you remember when we all thought Microsoft were evil for bundling their browser and media player? Those were good times.

                                                                        1. 4

                                                                          At my undergrad CS program (NYU, 2002-2006) they taught Java for intro programming courses, but then expected you to know C for the next level CS courses (especially computer architecture and operating systems). Originally, they taught C in the intro courses, but found that too many beginning programmers dropped out – and, to be honest, I don’t blame them. C isn’t the gentlest introduction to programming. But this created a terrible situation where professors just expected you to know C at the next level, while they were teaching other concepts from computing.

                                                                          But, as others have stated, knowing C is an invaluable (and durable) skill – especially for understanding low-level code like operating systems, compilers, and so on. I do think a good programming education involves “peeling back the layers of the onion”, from highest level to lowest level. So, start programming with something like Python or JavaScript. Then, learn how e.g. the Python interpreter is implemented in C. And then learn how C relates to operating systems and hardware and assembler. And, finally, understand computer architecture. As Norvig says, it takes 10 years :-)

                                                                          The way I learned C:

                                                                          • K&R;
                                                                          • followed by some self-instruction on GTK+ and GObject to edit/recompile open source programs I used on the Linux desktop;
                                                                          • read the source code of the Python interpreter;
                                                                          • finally, I ended up writing C code for an advanced operating systems course (still archived/accessible here), which solidified it all for me.

                                                                          Then I didn’t really write C programs for a decade (writing Python, mostly, instead) until I had to crack C back open to write a production nginx module just last year, which was really fun. I still remembered how to do it!

                                                                          1. 3

                                                                            One of the things I loved about my WSU CS undergrad program 20 years ago is that in addition to teaching C for the intro class, it was run out of the EE department so basic electronics courses were also required. Digital logic and simple circuit simulations went a long way towards understanding things like “this is how RAM works, this is why CPUs have so much gate count, this is why you can’t simply make up pointer addresses”

                                                                            1. 2

                                                                              they taught Java for intro programming courses, but then expected you to know C for the next level CS courses (especially computer architecture and operating systems).

                                                                              It’s exactly like this at my university today. I don’t think there’s any good replacement for C for this purpose. You can’t teach Unix system calls with Java where everything is abstracted into classes. Although most “C replacement” languages allow easier OS interfacing, they similarly abstract away the system calls for standard tasks. I also don’t think it’s unreasonable to expect students to learn about C as course preparation in their spare time. It’s a pretty simple language with few new concepts to learn about if you already know Java. Writing good C in a complex project obviously requires a lot more learning, but that’s not required for the programming exercises you usually see in OS and computer architecture courses.

                                                                              1. 1

                                                                                I think starting from the bottom and going up the layers is better. Rather than being frustrated as things get harder, you will be grateful for and know the limitations of the abstractions as they are added.

                                                                              1. 4

                                                                                Why do people make the byte order mistake so often? I think it’s because they’ve seen a lot of bad code that has convinced them byte order matters.

                                                                                I think it’s also because it’s often just convenient to write byte-order dependent code. You need to serialize something and only develop for x86 anyways, so just write out a packed struct!

                                                                                At some point, you add support for a big endian architecture. You’re busy adding #ifdefs for that target anyways, so it appears easier to keep the original code as-is and byte-swap everything.

                                                                                1. 5

                                                                                  No example pictures? :(

                                                                                  1. 5

                                                                    Hey, sorry I didn’t put any in, since I did this a while ago. I don’t have too many on hand. Here’s a couple I found on my computer.

                                                                                    https://imgur.com/a/Ewwe7
                                                                                    https://imgur.com/a/fokYq

                                                                                  1. 1

                                                                                    VTune is pretty cool, but unfortunately needs a kernel module on Linux. When I used it last year, you either had to compile some old kernel or fix the module’s code for the current kernel. The officially supported distributions were all out of date at that point, I think. The Linux code evolves too fast for external kernel modules.

                                                                                    1. 6

                                                                                      Patreon also adds VAT to your pledge, so if you want to pledge $10, you’ll actually pay ~$12. Apparently they only do that in the EU, but they still add the VAT “US-style” as an additional fee instead of having it included in the price.

                                                                                      1. 5

                                                                                        Why are you installing to /usr/local? Packages are supposed to go to /usr directly.

                                                                                        1. 1

                                                                                          It’s the filesystem location specified in the GNU Coding Standards:

                                                                                          Executable programs are installed in one of the following directories.

                                                                                          bindir: The directory for installing executable programs that users can run. This should normally be /usr/local/bin, but write it as $(exec_prefix)/bin.

                                                                                          1. 5

                                                                                            Packages should never be installed to /usr/local

                                                                                            https://wiki.archlinux.org/index.php/arch_packaging_standards

                                                                                            Arch users expect packages to install in /usr, so it makes more sense to follow the Arch packaging standards here.

                                                                                            1. 2

                                                                                              Fair enough, I can make that adjustment. Thanks for sharing that link.

                                                                                            2. 3

                                                                                              GNU expects downstream packagers (“installers”) to change the install location, which is why the prefix variable exists. /usr/local/ is an appropriate default for “from-source” installs, to avoid conflicts with packages.

                                                                                          1. 6

                                                                                            Happy to see people writing screensavers. In many ways, they’ve outlived their namesake purpose, but there is still something so charming about them! I also recently wrote a macOS screen saver (for my first time) and unfortunately found that the fragment shader I wrote really heats up my machine.

                                                                                            1. 3

                                                                                              They’re possibly still useful on modern display types that can burn in, like OLED. And for people still on plasma or, god forbid, CRTs, they still have a use.

                                                                                              1. 6

                                                                                                Is a screen saver better than just turning the monitor off (i.e. turning monitor output off, which makes the screen go into standby mode)? Are/were people using screen savers just to avoid the few seconds the monitor needs to turn back on, or is there another reason?

                                                                                                1. 2

                                                                                                  Certain screensavers can help with burn in on OLED displays, turning the display off does not help. I don’t know the actual science behind it, I just know it worked on an OLED display I had that had burn in. ;)

                                                                                                  1. 2

                                                                                                    Note that a screensaver is unlikely to have that property unless it was designed to. Those screen-healing screensavers usually use colored geometric patterns.

                                                                                                    I remember one of the patterns in such a screensaver was a series of black and white vertical stripes that slowly scrolled sideways. I once had the idea of making a free clone of that screensaver, so I replicated that pattern in Quartz Composer, Apple’s visual programming tool for generating graphics. I never remade any of the other patterns though.