Threads for marius

  1. 16

    Stop using laptops. For the same money you can get a kickassssss workstation.

    1. 27

      But then for the time you want to work away from the desk, you need an extra laptop. Not everyone needs that of course, but if you want to work remotely away from home or if you do on-call, then a laptop’s a requirement.

      1. 5

        Laptops also have a built-in UPS! My iMac runs a few servers on the LAN and they all go down when there’s a blackout.

        1. 2

          Curious: in which country do you live that this is a significant enough problem to design for?

          1. 5

            Can’t speak about the other poster, but I think power distribution in the US would qualify as risky. And not only in rural areas. Consider that even the Chicago burbs don’t have buried power lines. And every summer there’s the blackout due to AC surges. I’d naively expect at least 4 or 5 (brief) blackouts per year.

      2. 8

        i get that, but it’s also not a very productive framework for discussion. i like my laptop because i work remotely – 16GB is personally enough for me to do anything i want from my living room, local coffee shop, on the road, etc. i do junior full-stack work, so that’s likely why i can get away with it. obviously, DS types and other power hungry development environments are better off with a workhorse workstation. it’s my goal to settle down somewhere and build one eventually, but it’s just not on the cards right now; i’m moving around quite a bit!

        my solution? my work laptop is a work laptop – that’s it. my personal laptop is my personal laptop – that’s it. my raspberry pi is for one-off experiments and self-hosted stuff – that’s it. in the past, i’ve used a single laptop for everything, and frequently found it working way too hard. i even tried out mighty for a while to see if that helped (hint: only a little). separation of concerns fixed it for me! obviously, this only works if your company supplies a laptop, but i would go as far as to say that even if they don’t it’s a good alternative solution, and might end up cheaper.

        my personal laptop is a thinkpad i found whilst trash-hopping in the bins of the mathematics building at my uni. my raspberry pi was a christmas gift, and my work laptop was supplied to me. i spend most of my money on software, not really on the hardware.

        edit: it’s also hard, since i have to keep things synced up. tmux and chezmoi are the only reasonable way i’ve been able to manage!
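
        for the curious, the chezmoi flow i mean looks roughly like this (the repo URL is a placeholder, not my real one):

```shell
# first machine: put dotfiles under chezmoi's control and push them
chezmoi init
chezmoi add ~/.tmux.conf ~/.zshrc   # copies files into ~/.local/share/chezmoi
chezmoi cd                          # drops into the chezmoi git repo
git remote add origin git@github.com:example/dotfiles.git
git push -u origin main

# every other machine: clone the same repo and write the files into $HOME
chezmoi init git@github.com:example/dotfiles.git
chezmoi apply
```

        after that, keeping machines in sync is just a git pull plus `chezmoi apply` on each one.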

        1. 6

          Agree. The ergonomics of laptops are seriously terrible.

          1. 7

            Unfortunately I don’t think this is well known to most programmers. Recently a fairly visible blogger posted his workstation setup and the screen was positioned such that he would have to look downward just like with a laptop. It baffled many that someone who is clearly a skilled programmer could be so uninformed on proper working ergonomics and the disastrous effects it can have on one’s posture and long-term health.

            Anyone who regularly sits at a desk for an extended period of time should be using an eye-level monitor. The logical consequence of that is that laptop screens should only be used sparingly or in exceptional circumstances. In that case, it’s not really necessary to have a laptop as your daily driver.

            1. 6

              After many years of using computers, I don’t see much harm in using a slightly tilted display. If anything, regular breaks and stretches/exercises make a lot more difference, especially in the long term.

              If you check out jcs’ setup more carefully you’ll see that the top line is not that much lower than the “default” eye-line, so the ergonomics there work just fine.

              1. 1

                We discuss how to improve laptop ergonomics and more at https://reddit.com/r/ergomobilecomputers .

                (I switched to a tablet PC; the screen is also tilted a bit but raised closer to eye level. Perhaps the ‘fairly visible blogger’s’ setup was staged for the photo and the screen is normally raised higher.)

            2. 2

              That assumes you’re using the laptop’s built-in keyboard and screen all day long. I have my laptop hooked up to a big external monitor and an ergonomic keyboard. The laptop screen acts as a second monitor and I do all my work on the big monitor which is at a comfortable eye level.

              On most days it has the exact same ergonomics as a desktop machine. But then when I occasionally want to carry my work environment somewhere else, I just unplug the laptop and I’m good to go. That ability, plus the fact that the laptop is completely silent unless I’m doing something highly CPU-intensive, is well worth the loss of raw horsepower to me.

            3. 1

              A kickass workstation which can’t be taken into the hammock, yes.

              1. 1

                I bought a ThinkStation P330 2.5y ago and it is still my best computing purchase. Once my X220 dies, if ever, then I will go for a second ThinkStation.

                1. 3

                  A few years ago I bought a used ThinkCentre M92, ultra small form factor. I replaced the hard drive with a cheap SSD and threw in extra RAM and a 4k screen. Great setup. I could work very comfortably and do anything I want to do on a desktop, including development or watching 4k videos. I used that setup for five years and have recently changed to a 2-year-old iMac with an Intel processor so I can smoothly run Linux on it.

                  There is no way I am suffering through laptop usage. I see laptops as something suited for sales people, car repair, construction workers and that sort of thing. For a person sitting a whole day in front of the screen… No way.

                  I don’t get the need for people to be able to use their computers in a zillion places. Why? What’s so critical about it? How many people actually carried their own portable office, versus just doing their work at their desks, before the advent of the personal computer? We already carry a small computer in our pocket at all times that covers a lot of personal work needs such as email, chat, checking webpages, conference calls, etc. Is it really that critical to have a laptop?

                  1. 4

                    I don’t get the need for people to be able to use their computers in a zillion places. Why? What’s so critical about it?

                    I work at/in:

                    1. The office
                    2. Home office
                    3. Living room

                    The first two are absolutely essential, the third is because if I want to do some hobbyist computing, it’s not nice if I disappear in the home office. Plus my wife and I sometimes both work at home.

                    Having three different workstations would be annoying. Not everything is on Dropbox, so I’d have to pass files between machines. I like fast machines, so I’d be upgrading three workstations frequently.

                    Instead, I just use a single MacBook with an M1 Pro. Performance-wise it’s somewhere between a Ryzen 5900X and 5950X. For some things I care about for work (matrix multiplication), it’s even much faster. We have a Thunderbolt Dock, 4k screen, keyboard and trackpad at each of these desks, so I plug in a single Thunderbolt cable and have my full working environment there. When I need to do heavy GPU training, I SSH into a work machine, but at least I don’t have a terribly noisy NVIDIA card next to me on or under the desk.

                    1. 3

                      The first two are absolutely essential, the third is because if I want to do some hobbyist computing, it’s not nice if I disappear in the home office.

                      I believe this is the crux of it. It boils down to personal preference. There is no way I am suffering through the horrible experience of using a laptop because it is not nice to disappear into the office. If anything, it raises the barrier to being in front of a screen.

                    2. 2

                      Your last paragraph is exactly my thoughts. Having a workstation is a great way to reduce lazy habits, IMNSHO. The mobility that comes with a laptop is ultimately a recipe for neck pain, strain in the arms and hands, and poor posture and habits.

                      1. 6

                        I have 3 places in which I use my computer (a laptop). In two of them, I connect it to an external monitor, mouse and keyboard, and I do my best to optimize ergonomics.

                        But the fact that I can take my computer with me and use it almost anywhere, is a huge bonus.

                1. 2

                  A ninja TLD? Do pirate or cowboy TLDs exist too?

                  1. 3

                    No, but if you have money you can create them.

                  1. 1

                    Not to disparage the effort, but I’m curious why the author has chosen to implement an X11 compatibility layer and not a Wayland one, since X is rapidly approaching obsolescence in the Linux space.

                    1. 23

                      since X is rapidly approaching obsolescence in the Linux space

                      This is said a lot, but it isn’t really true.

                      1. 3

                        This is said a lot, but it isn’t really true.

                        But it should be. X is… old. It should be resting.

                        1. 17

                          Linux is only about seven years younger than X. I guess it must be approaching obsolescence too.

                          For that matter, I was released in 1986… oh dear, I don’t think my retirement account is ready for that :(

                          1. 13

                            Personally I can’t believe lobste.rs is served primarily via TCP and UDP. They are 42 years old and should be put to rest.

                            </s>

                            1. 1

                              You’re right. We should be using SCTP instead.

                        2. 11

                          The author explained this here.

                          1. 9

                            Even if Wayland does finally replace Xorg for Linux users, it doesn’t necessarily mean people will stop wanting to run X11 applications.

                            1. 7

                              X was obsolete, full stop, a decade or two ago. Whether or not a thing is obsolete has little to do with how ubiquitous or useful it is.

                            1. 2

                              For “regular” applications, one solution for this is to put all external dependencies in the repo. Otherwise, if your build depends on external package manager X having version Y of package Z… eventually this will not be true anymore.

                              For Flash, or any other proprietary product, if it stops making money, there is no guarantee it will be maintained/available for a long period of time. For Flash specifically… maybe that’s for the best :)

                              1. 2

                                How does the Windows package manager compare to something like apt or pacman on Linux these days? Last I used it, it used a git repository for packages and worked quite well, but I’m unsure how much the ecosystem has grown since then.

                                1. 4

                                  I’d say it’s good enough; it gets the job done.

                                  1. 4

                                    I tried it at my previous job, and it didn’t seem comparable. For starters, it does not care about dependencies at all, so you can install something that doesn’t work, or break something by uninstalling another thing, and you won’t get so much as a warning.

                                    It seemed more like a fancy curl that downloads .exes by name or ID.
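
                                    For reference, the basic flow looks like this these days (the package ID is just an example from the community repo):

```shell
# search the community repository
winget search "Mozilla Firefox"

# install by exact ID to avoid ambiguous name matches
winget install --id Mozilla.Firefox --exact

# list installed packages and upgrade everything at once
winget list
winget upgrade --all

# uninstalling does no dependency resolution at all, so nothing warns you
# if another installed program relied on what you just removed
winget uninstall --id Mozilla.Firefox
```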

                                    1. 1

                                      Yeah that was my experience before as well. I’m hoping it becomes more like package managers on Linux or like homebrew on Mac. But unfortunately it seems currently it’s as you said, just a fancy curl.

                                  1. 16

                                    I wonder who at System76 was responsible for evaluating all possible directions they could invest in, and decided the desktop environment is the biggest deficiency of System76.

                                    1. 11

                                      It’s also great marketing. I’ve heard “System76” way more since they released Pop_OS. So while people may not be buying machines for the OS, it seems that as a pretty popular distro it keeps the name in their heads, and they may be more likely to buy a system on the next upgrade.

                                      1. 1

                                        Well, I’d buy a machine, but they’re not selling anything with EU layouts or power cords.

                                      2. 5

                                        I know a few people who run Pop_OS, and none of them run it on a System76 machine, but they all chose Pop over Ubuntu for its Gnome hacks.

                                        Gnome itself isn’t particularly friendly to hacks — the extension system is really half-baked (though it’s perhaps one of the only uses of the SpiderMonkey JS engine outside Firefox, which is pretty cool!). KDE Plasma has quite a lot of features, but it doesn’t really focus on usability the way it could.

                                        There’s a lot of room for disruption in the DE segment of the desktop Linux market. This is a small segment of an even smaller market, but it exists, and most people buying System76 machines are part of it.

                                        Honestly, I think that if something more friendly than Gnome and KDE came along and was well-supported, it could really be a big deal. “Year of the Linux desktop” is a meme, but it’s something we’ve been flirting with for decades now and the main holdups are compatibility and usability. Compatibility isn’t a big deal if most of what we do on computers is web-based. If we can tame usability, there’s surely a fighting chance. It just needs the financial support of a company like System76 to be able to keep going.

                                        1. 7

                                          There’s a lot of room for disruption in the DE segment of the desktop Linux market. This is a small segment of an even smaller market, but it exists, and most people buying System76 machines are part of it.

                                          It’s very difficult to do anything meaningful here. Consistency is one of the biggest features of a good DE. This was something that Apple was very good at before they went a bit crazy around 10.7 and they’re still better than most. To give a couple of trivial examples, every application on my Mac has the buttons the same way around in dialog boxes and uses verbs as labels. Every app that has a preferences panel can open it with command-, and has it in the same place in the menus. Neither of these is the case on Windows or any *NIX DE that I’ve used. Whether the Mac way is better or worse than any other system doesn’t really matter, the important thing is that when I’ve learned how to perform an operation on the Mac I can do the same thing on every Mac app.

                                          In contrast, *NIX applications mostly use one of two widget sets (though there is a long tail of other ones) each of which has subtly different behaviour for things like text navigation shortcut keys. Ones designed for a particular DE use the HIGs from that DE (or, at least, try to) and the KDE and GNOME ones say different things. Even something simple like having a consistent ‘open file’ dialog is very hard in this environment.

                                          Any new DE has a choice of either following the KDE or GNOME HIGs and not being significantly different, or having no major applications that follow the rules of the DE. You can tweak things like the window manager or application launcher but anything core to the behaviour of the environment is incredibly hard to do.

                                          1. 4

                                            There’s a lot of room for disruption in the DE segment of the desktop Linux market.

                                            Ok, so now we have:

                                            • kitchen sink / do everything: KDE

                                            • MacOS-like: Gnome

                                            • MacOS lookalike: Elementary

                                            • Old Windows: Gnome 2 forks (e.g. MATE)

                                            • lightweight environments: XFCE / LXDE

                                            • tiling: i3, sway, etc. (super niche)

                                            • something new from scratch but not entirely different: Enlightenment

                                            So what exactly can be disrupted here when there are so many options? What is the disruptive angle?

                                            1. 15

                                              I think you’re replying to @br, not to me, but your post makes me quite sad. All of the DEs that you list are basically variations on the 1984 Macintosh UI model. You have siloed applications, each of which owns one or more windows. Each window is owned by precisely one application and provides a sharp boundary between different UIs.

                                              The space of UI models beyond these constraints is huge.

                                              1. 5

                                                I think any divergence would be interesting, but it’s also punished by users - every time Gnome tries to diverge from Windows 98 (Gnome 3 is obvious, but this has happened long before - see spatial Nautilus), everyone screams at them.

                                              2. 3

                                                I would hesitate to call elementary or Gnome Mac-like. They take some elements more than others, sure, but a lot of critical UI elements from Mac OS are missing, and they admit they’re doing their own thing — a casual poke would reveal that.

                                                I’d also argue KDE is more the Windows lookalike, considering how historically they slavishly copied whatever trends MS was doing at the time. (I’d say Gnome 2 draws more from both.)

                                                1. 3

                                                  I’d also argue KDE is more the Windows lookalike, considering how historically they slavishly copied whatever trends MS was doing at the time

                                                  I would have argued that at one point. I’d have argued it loudly around 2001, which is the last time I really lived with it for longer than six months.

                                                  Having just spent a few days giving KDE an honest try for the first time in a while, though, I no longer think so.

                                                  I’d characterize KDE as an attempt to copy all the trends for all time in Windows + Mac + UNIX, add a few innovations and an all-encompassing settings manager, and let each user choose their own specific mix of those.

                                                  My current KDE setup after playing with it for a few days is like an unholy mix of Mac OS X Snow Leopard and i3, with a weird earthy color scheme that might remind you of Windows XP’s olive scheme if it were a little more brown and less green.

                                                  But all the options are here, from slavish mac adherence to slavish win3.1 adherence to slavish CDE adherence to pure Windows Vista. They’ve really left nothing out.

                                                  1. 1

                                                    But all the options are here, from slavish mac adherence to slavish win3.1 adherence to slavish CDE adherence to pure Windows Vista. They’ve really left nothing out.

                                                    I stopped using KDE when 4.x came out (because it was basically tech preview and not usable), but before that I was a big fan of the 3.x series. They always had settings for everything. Good to hear they kept that around.

                                                2. 2

                                                  GNOME really isn’t macOS like, either by accident or design.

                                                3. 3

                                                  I am no longer buying this consistency thing and how the Mac is superior. So many things we do all day are web apps which all look and function completely different. I use Gmail, Slack, GitHub Enterprise, office, what-have-you daily at work and they are all just browser tabs. None looks like the other and it is totally fine. The only real local apps I use are my IDE, which is written in Java and also looks nothing like the Mac, a terminal, and a browser.

                                                  1. 7

                                                    Just because it’s what we’re forced to accept today doesn’t mean the current state we’re in is desirable. If you know what we’ve lost, you’d miss it too.

                                                    1. 2

                                                      I am saying that the time of native apps is over and it is not coming back. Webapps and webapps disguised as desktop applications a la Electron are going to dominate the future. Even traditionally desktop heavy things like IDEs are moving into the cloud and the browser. It may be unfortunate, but it is a reality. So even if the Mac was superior in its design the importance of that is fading quickly.

                                                      1. 2

                                                        “The time of native apps is over .. webapps … the future”

                                                        Non-rhetorical question: Why is that, though?

                                                        1. 4

                                                          Write once, deploy everywhere.

                                                          Google has done the hard work of implementing a JS platform for almost every computing platform in existence. By targeting that platform, you reach more users for less developer-hours.

                                                          1. 3

                                                            The web is the easiest and best understood application deployment platform there is. Want to upgrade all users? F5 and you are done. Best of all: it is cross-platform.

                                                          2. 1

                                                            I mean, if you really care about such things, the Mac has plenty of native applications and the users there still fight for such things. But you’re right that most don’t on most platforms, even the Mac.

                                                        2. 2

                                                          And that’s why the Linux desktop I use most (outside of work) is… ChromeOS.

                                                          Now, I primarily use it for entertainment like video streaming. But with just a SSH client, I can access my “for fun” development machine too.

                                                        3. 3

                                                          Any new DE has a choice of either following the KDE or GNOME HIGs and not being significantly different, or having no major applications that follow the rules of the DE. You can tweak things like the window manager or application launcher but anything core to the behaviour of the environment is incredibly hard to do.

                                                          Honestly, I’d say Windows is more easily extensible. I could write a shell extension and immediately reap its benefit in all applications - I couldn’t say the same for other DEs without probably having to patch the source, and that’ll be a pain.

                                                          1. 1

                                                            GNOME HIG also keeps changing, which creates more fragmentation.

                                                            20 years ago, they did express a desire of unification: https://lwn.net/Articles/8210/

                                                        4. 1

                                                          It certainly is a differentiator.

                                                        1. 57

                                                          The developer of these libraries intentionally introduced an infinite loop that bricked thousands of projects that depend on ’colors and ‘faker’.

                                                          I wonder if the person who wrote this actually knows what “bricked” means.

                                                          But beyond the problem of not understanding the difference between “bricked” and “broke”, this action did not break any builds that were set up responsibly; only builds which tell the system “just give me whatever version you feel like regardless of whether it works” which like … yeah, of course things are going to break if you do that! No one should be surprised.

                                                          Edit: for those who are not native English speakers, “bricked” refers to a change (usually in firmware on an embedded device) which not only causes the device to be non-functional, but also breaks whatever update mechanisms you would use to get it back into a good state. It means the device is completely destroyed and must be replaced since it cannot be used as anything but a brick.

                                                          GitHub has reportedly suspended the developer’s account

                                                          Hopefully this serves as a wakeup call for people about what a tremendously bad idea it is to have all your code hosted by a single company. Better late than never.

                                                          1. 25

                                                            There have been plenty of wakeup calls for people using Github, and I doubt one additional one will change the minds of very many people (which doesn’t make it any less of a good idea for people to make their code hosting infrastructure independent from Github). The developer was absolutely trolling (in the best sense of the word), and a lot of people have made it clear that they’re very eager for Github to deplatform trolls.

                                                            I don’t blame him certainly; he’s entitled to do whatever he wants with the free software he releases, including trolling by releasing deliberately broken commits in order to express his displeasure at companies using his software without compensating him in the way he would like.

                                                            The right solution here is for any users of these packages to do exactly what the developer suggested and fork them without the broken commits. If npm (or cargo, or any other programming language ecosystem package manager) makes it difficult for downstream clients to perform that fork, this is an argument for changing npm in order to make that easier. Build additional functionality into npm to make it easier to switch away from broken or otherwise-unwanted specific versions of a package anywhere in your project’s dependency tree, without having to coordinate this with other package maintainers.
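
                                                            npm does now have a partial mechanism for this: from npm 8.3 on, an `overrides` field in the top-level package.json can force a specific version anywhere in the dependency tree, with no coordination with intermediate package maintainers. A sketch (the pinned versions are, to the best of my knowledge, the last releases before the broken ones):

```json
{
  "overrides": {
    "colors": "1.4.0",
    "faker": "5.5.3"
  }
}
```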

                                                            1. 31

                                                              The developer was absolutely trolling (in the best sense of the word)

                                                              To the extent there is any good trolling, it consists of saying tongue-in-cheek things to trigger people with overly rigid ideas. Breaking stuff belonging to people who trusted you is not good in any way.

                                                              I don’t blame him certainly; he’s entitled to do whatever he wants with the free software he releases, including trolling by releasing deliberately broken commits in order

                                                              And GitHub was free to dump his account for his egregious bad citizenship. I’m glad they did, because this kind of behavior undermines the kind of collaborative trust that makes open source work.

                                                              to express his displeasure at companies using his software without compensating him in the way he would like.

                                                              Take it from me: the way to get companies to compensate you “in six figures” for your code is to release your code commercially, not open source. Or to be employed by said companies. Working on free software and then whining that companies use it for free is dumbshittery of an advanced level.

                                                              1. 33

                                                                No I think the greater fool is the one who can’t tolerate changes like this in free software.

                                                                1. 1

                                                                  It’s not foolish to trust, initially. What’s foolish is to keep trusting after you’ve been screwed. (That’s the lesson of the Prisoner’s Dilemma.)

                                                                  A likely lesson companies will draw from this is that free software is a risk, and that if you do use it, stick to big-name reputable projects that aren’t built on a house of cards of tiny libraries by unknown people. That’s rather bad news for ecosystems like node or RubyGems or whatever.

                                                                2. 12

                                                                  Working on free software and then whining that companies use it for free is dumbshittery of an advanced level.

                                                                  Thankyou. This is the point everybody seems to be missing.

                                                                  1. 49

                                                                    The author of these libraries stopped whining and took action.

                                                                    1. 3

                                                                      Worked out a treat, too.

                                                                      1. 5

                                                                        I mean, it did. Hopefully companies will start moving to software stacks where people are paid for their effort and time.

                                                                        1. 6

                                                                          He also set fire to the building making bombs at home, maybe he’s not a great model.

                                                                          1. 3

                                                                            Not if you’re being responsible and pinning your deps though?

                                                                            Even if that weren’t true though, the maintainer doesn’t have any obligation to companies using their software. If the company used the software without acquiring a support contract, then that’s just a risk of doing business that the company should have understood. If they didn’t, that’s their fault, not the maintainer’s - companies successfully do this kind of risk/reward calculus all the time in other areas.

                                                                            1. 1

                                                                              I know there are news reports of a person with the same name being taken into custody in 2020 where components that could be used for making bombs were found, but as far as I know, no property damage occurred then. Have there been later reports?

                                                                            2. 3

                                                                              Yeah, like proprietary or in-house software. Great result for open source.

                                                                              Really, if I were a suit at a company and learned that my product was DoS’d by source code we got from some random QAnon nutjob – that this rando had the ability to push malware into his Git repo and we’d automatically download and run it – I’d be asking hard questions about why my company uses free code it just picked up off the sidewalk, instead of paying a summer intern a few hundred bucks to write an equivalent library to printf ANSI escape sequences or whatever.

                                                                              That’s inflammatory language, not exactly my viewpoint but I’m channeling the kind of thing I’d expect a high-up suit to say.

                                                                    2. 4

                                                                      There have been plenty of wake-up calls for people using GitHub, and I doubt one additional one will change the minds of very many people.

                                                                      Each new incident is another straw. For some, it’s the one that breaks the camel’s back.

                                                                      1. 4

                                                                        in order to express his displeasure at companies using his software without compensating him in the way he would like.

                                                                        This sense of entitlement is amusing. These people totally miss the point of free software. They make something that many people find useful and use (very much thanks to being released under a free license, mind you), and then they feel entitled to some sort of material/monetary compensation.

                                                                        This is not a Miss Universe contest. It’s not too hard to understand that had this project been non-free, it would probably not have gotten anywhere. This is the negative side of GitHub. GitHub has been an enormously valuable resource for free software. Unfortunately, when it grows so big, it inevitably also attracts the kind of people who only like the free aspect of free software when it benefits them directly.

                                                                        1. 28

                                                                          These people totally miss the point of free software.

                                                                          An uncanny number of companies (and people employed by said companies) also totally miss the point of free software. They show up in bug trackers all entitled like the license they praise in all their “empowering the community” slides doesn’t say THE SOFTWARE IS PROVIDED “AS IS” in all fscking caps. If you made a list of all the companies to whom the description “companies that only like the free aspect of free software when it benefits them directly” doesn’t apply, you could apply a moderately efficient compression algorithm and it would fit in a boot sector.

                                                                          I don’t want to defend what the author did – as someone else put it here, it’s dumbshittery of an advanced level. But if entitlement were to earn you an iron “I’m an asshole” pin, we’d have to mine so much iron ore on account of the software industry that we’d trigger a second Iron Age.

                                                                          This isn’t only on the author, it’s what happens when corporate entitlement meets open source entitlement. All the entitled parties in this drama got exactly what they deserved IMHO.

                                                                          Now, one might argue that what this person did affected not just all those entitled product managers who had some tough explaining to do to their suit-wearing bros, but also a bunch of good FOSS “citizens”, too. That’s absolutely right, but while this may have been unprofessional, the burden of embarrassment should be equally shared by the people who took a bunch of code developed by an independent, unpaid developer, in their spare time – in other words, a hobby project – without any warranty, and then baked it into their super professional codebases without any contingency plan for “what if all that stuff written in all caps happens?”. This happened to be intentional, but a re-enactment of this drama is just one half-drunk evening hacking session away.

                                                                          It’s not like they haven’t been warned – when a new dependency is proposed, the no-warranty clause is literally the first part that’s read, and it’s reviewed by a legal team whose payment figures are eye-watering. You can’t build a product based only on the good parts of FOSS. Exploiting FOSS software only when it benefits yourself may also be assholery of an advanced level, but hoping that playing your part shields you from all the bad parts of FOSS is naivety of an advanced level, and commercial software development tends to punish that.

                                                                          1. 4

                                                                            They show up in bug trackers all entitled like the license they praise in all their “empowering the community” slides doesn’t say THE SOFTWARE IS PROVIDED “AS IS” in all fscking caps

                                                                            Slides about F/OSS don’t say that because expensive proprietary software has exactly the same disclaimer. You may have an SLA that requires bugs to be fixed within a certain timeframe, but outside of very specialised markets you’ll be very hard pressed to find any software that comes with any kind of liability for damage caused by bugs.

                                                                            1. 1

                                                                              Well… I meant the license, not the slides :-P. Indeed, commercial licenses say pretty much the same thing. However, at least in my experience, the presence of that disclaimer is not quite as obvious with commercial software – barring, erm, certain niches.

                                                                              Your average commercial license doesn’t require proprietary vendors to issue refunds, provide urgent bugfixes or stick by their announced deadlines for fixes and features. But the practical constraints of staying in business are pretty good at compelling them to do some of these things.

                                                                              I’ve worked both with and without SLAs so I don’t want to sing praises to commercial vendors – some of them fail miserably, and I’ve seen countless open source projects that fix security issues in less time than it takes even competent large vendors to call a meeting to decide a release schedule for the fix. But expecting the same kind of commitment and approachability from Random J. Hacker is just not a very good idea. Discounting pathological arseholes and know-it-alls, there are perfectly human and understandable reasons why the baseline of what you get is just not the same when you’re getting it from a development team with a day job, a bus factor of 1, and who may have had a bad day and has no job description that says “be nice to customers even if you had a bad day or else”.

                                                                              The universe npm has spawned is particularly susceptible to this. It’s a universe where adding a PNG to JPG conversion function pulls forty dependencies, two of which are different and slightly incompatible libraries which handle emojis just in case someone decided to be cute with file names, and they’re going to get pulled even if the first thing your application does is throw non-alphanumeric characters out of any string, because they’re nth-order dependencies with no config overrides. There’s a good chance that no matter what your app does, 10% of your dependencies are one-person resume-padding efforts that turned out to be unexpectedly useful and are now being half-heartedly maintained largely because you never know when you’ll have to show someone you’re a JavaScript ninja guru in this economy. These packages may well have the same “no warranty” sticker that large commercial vendors put on theirs, but the practical consequences of having that sticker on the box often differ a lot.

                                                                              Edit: to be clear, I’m not trying to say “proprietary – good and reliable, F/OSS – slow and clunky”, we all know a lot of exceptions to both. What I meant to point out is that the typical norms of business-to-business relations just don’t uniformly apply to independent F/OSS devs, which makes the “no warranty” part of the license feel more… intense, I guess.

                                                                          2. 12

                                                                            The entitlement sentiment goes both ways: companies expect free code and get upset if the maintainer breaks backward compatibility. Since when does the maintainer have an obligation to behave responsibly?

                                                                            When open source started, there wasn’t that much money involved and things were very much in the academic spirit of sharing knowledge. That created a trove of wealth that companies are just happy to plunder now.

                                                                          3. 1

                                                                            releasing deliberately broken commits in order to express his displeasure at companies using his software without compensating him in the way he would like.

                                                                            Was that honestly the intent? Because in that case: what hubris! These libraries were existing libraries translated to JS. He didn’t do any of the hard work.

                                                                          4. 8

                                                                            There is a further variation on the “bricked” term, at least in the Android hackers’ community. You might hear things like “soft bricked”, which refers to a device whose normal installation/update method no longer works, but which can be recovered through additional tools, or perhaps by using JTAG to reprogram the bootloader.

                                                                            There is also “hard bricked”, which indicates something completely irreversible, such as changing the fuse programming so that the device won’t boot from eMMC anymore, or deleting necessary keys from secure storage.

                                                                            1. 3

                                                                              this action did not break any builds that were set up responsibly; only builds which tell the system “just give me whatever version you feel like regardless of whether it works” which like … yeah, of course things are going to break if you do that! No one should be surprised.

                                                                              OK, so, what’s a build set up responsibly?

                                                                              I’m not sure what the expectations are for packages on NPM, but the changes in that colors library were published with an increment only to the patch version. When trusting the developers (and if you don’t, why would you use their library?), not pinning the exact patch version in your dependencies doesn’t seem like a bad idea.

                                                                              1. 26

                                                                                When trusting the developers (and if you don’t, why would you use their library?), not pinning the exact patch version in your dependencies doesn’t seem like a bad idea.

                                                                                No, it is a bad idea. Even if the developer isn’t actively malicious, they might’ve broken something in a minor update. You shouldn’t ever blindly update a dependency without testing afterwards.
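
The distinction between pinned and floating specifiers is mechanical enough to check in CI. A minimal sketch (the helper name and the exact-pin regex are my own, not part of npm) that lists dependencies whose specifier could silently pull a new release:

```javascript
// Hypothetical CI helper: list dependencies whose version specifier is a
// floating range rather than an exact pin.
function floatingDeps(deps) {
  // Exact pins look like "1.4.0" or "1.4.0-rc.1"; anything else
  // (^, ~, *, >=, "1.x", bare ranges) can silently pull a new release.
  const exactPin = /^\d+\.\d+\.\d+(-[\w.]+)?$/;
  return Object.keys(deps).filter((name) => !exactPin.test(deps[name]));
}

// "colors" floats across 1.4.x patch releases; "left-pad" is pinned.
console.log(floatingDeps({ colors: "^1.4.0", "left-pad": "1.3.0" })); // → [ 'colors' ]
```

A check like this (or simply committing a lockfile and installing with `npm ci`) turns “just give me whatever version you feel like” into an explicit, reviewable change.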

                                                                                1. 26

                                                                                  Commit package-lock.json like all of the documentation tells you to, and don’t auto-update dependencies without running CI.

                                                                                  1. 3

                                                                                    And use npm shrinkwrap if you’re distributing apps and not libraries, so the lockfile makes it into the registry package.

                                                                                  2. 18

                                                                                    Do you really think that a random developer, however well intentioned, is really capable of evaluating whether or not any given change they make will have any behavior-observable impact on downstream projects they’re not even aware of, let alone have seen the source for and have any idea how it consumes their project?

                                                                                    I catch observable breakage coming from “patch” revisions easily a half dozen times a year or more. All of it accidental “oh we didn’t think about that use-case, we don’t consume it like that” type stuff. It’s truly impossible to avoid for anything but the absolute tiniest of API surface areas.

                                                                                    The only sane thing to do is to use whatever your tooling’s equivalent of a lock file is to strictly maintain the precise versions used for production deploys, and only commit changes to that lock file after a full re-run of the test suite against the new library version, patch or not (and running your eyeballs over a diff against the previous version of its code would be wise, as well).

                                                                                    It’s wild to me that anyone would just let their CI slip version updates into a deploy willy-nilly.

                                                                                    1. 11

                                                                                      This neatly shows why Semver is a broken religion: you can’t just rely on a version number to consider changes to be non-broken. A new version is a new version and must be tested without any assumptions.

                                                                                      To clarify, I’m not against specifying dependencies to automatically update to new versions per se, as long as there’s a CI step to build and test the whole thing before it goes into production, to give you a chance to pin the broken dependency to a last-known-good version.

                                                                                      1. 7

                                                                                        Semver doesn’t guarantee anything though, and doesn’t promise anything. It’s more of an indicator of what to expect. Sure, you should test new versions without any assumptions, but that doesn’t say anything about semver. What that versioning scheme allows you to do, though, is put minor/patch updates straight into CI and an automatic PR, while blocking major ones until manual action.
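
The gating rule described above can be sketched in a few lines (the function name and lane labels are made up for illustration): an update within the same major version goes into the automated CI/PR lane, a new major waits for a human.

```javascript
// Sketch: route a version bump to an update lane based on its semver major.
// Note: strict semver treats 0.x minors as potentially breaking too; this
// simplified version ignores that caveat.
function updateLane(from, to) {
  const major = (v) => Number(v.split(".")[0]);
  // Same major: build/test in CI and open an automatic PR.
  // New major: semver explicitly reserves this for breaking changes.
  return major(to) === major(from) ? "auto-pr" : "manual-review";
}

console.log(updateLane("1.4.0", "1.4.44")); // → auto-pr (still run the tests!)
console.log(updateLane("1.4.44", "2.0.0")); // → manual-review
```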

                                                                                      2. 6

                                                                                        The general form of the solution is this:

                                                                                        1. Download whatever source code you are using into a secure versioned repository that you control.

                                                                                        2. Test every version that you consider using for function before you commit to it in production/deployment/distribution.

                                                                                        3. Build your system from specific versions, not from ‘last update’.

                                                                                        4. Keep up to date on change logs, security lists, bug trackers, and whatever else is relevant.

                                                                                        5. Know what your back-out procedure is.

                                                                                        These steps apply to all upstream sources: language modules, libraries, OS packages… dependency management is crucial.

                                                                                        1. 3

                                                                                          Amazon does this. Almost no one else does, but that’s a choice with benefits (mostly saving the set-up effort) and consequences (all of this here).

                                                                                        2. 6

                                                                                          When trusting the developers (and if you don’t, why would you use their library?)

                                                                                          If you trust the developers, why not give them root on your laptop? After all, you’re using their library so you must trust them, right?

                                                                                          1. 7

                                                                                            There’s levels to trust.

                                                                                            I can believe you’re a good person by reading your public posts online, but I’m not letting you babysit my kids.

                                                                                        3. 2

                                                                                          Why wouldn’t this behavior be banned by any company?

                                                                                          1. 2

                                                                                            How would they ban him? They’re not paying him. Or do you mean the people who did not pin their dependencies?

                                                                                            1. 4

                                                                                              I think it is bannable on any platform, because it is malicious behavior - that means he intentionally caused harm to people. It’s not about an exchange of money, it’s about intentional malice.

                                                                                            2. 1

                                                                                              Because it’s his code, and even the license says “no guarantees”?

                                                                                              1. 2

                                                                                                The behavior was intentionally malicious. It’s not about violating a contract or guarantee. For example, if he just decided that he was being taken advantage of and removed the code, I don’t think that would require a ban. But he didn’t do that - he added an infinite loop to purposefully waste people’s time. That is intentional harm, that’s not just providing a library of poor quality with no guarantee.

                                                                                                Beyond that, if that loop went unnoticed on a build server and cost the company money, I think he should be legally responsible for those damages.

                                                                                          1. 4

                                                                                            Systray (minus right-click menu positioning)

                                                                                            Is there any proposal for the positioning protocol yet? Or is the official line still that apps shouldn’t do it?

                                                                                            1. 8

                                                                                              Or is the official line still that apps shouldn’t do it?

                                                                                              The cycle of new things: stage one is saying the old thing is bloated and useless, so it needs to be thrown out. You make a “beautiful” replacement that works for you. Stage two is people asking for features, but you defensively say nobody should ever do that anyway, and refuse. Stage three is relenting and badly reinventing the same “bloated, useless” thing as some extensions.

                                                                                              Then stage four is acknowledging maybe the designers of the past weren’t all incompetent fools and actually maybe learning from them. But by then, the new kids on the block are reaching stage one with regard to you.

                                                                                              1. 4

                                                                                                Or, as it actually happens with Wayland: someone writes a protocol proposal, it gets implemented, and everyone walks away happy, with no contempt for users or developers.

                                                                                                1. 3

                                                                                                  The circle of….bloat. I mean features.

                                                                                                  1. 1

                                                                                                    This… is a complete misunderstanding of everything to do with X11. Nobody claims that the problem with X11 is that it lets applications provide too rich of a user experience; discrepancies between what an X11 application can do and what a Wayland application can do are usually regarded as bugs (within reason, of course) and eventually fixed. The issues people are referring to regarding X11 are actual problems with the user experience provided by X11-based systems: the fact that X11 makes sandboxing impossible, the fact that X11 makes screen tearing unavoidable, the fact that the protocol is absolutely chock-full of race conditions which cause a janky UX, the fact that there’s no concept of DPI scaling, etc etc. The bloat people refer to is the fact that X11 contains a complete GUI toolkit which literally nobody uses anymore because it’s stuck in the 80s, looks and feels like garbage, and can’t be changed because of backwards compatibility.

                                                                                                    You should really learn something about the topic before confidently spouting bullshit. You’re free to dislike and disagree with the Wayland transition, but there are actual, good reasons for why everyone involved in actually building the graphical/GUI stack on Linux wants to get away from X11.

                                                                                                    1. 1

                                                                                                      This… is a complete misunderstanding of everything to do with X11

                                                                                                      the fact that X11 makes sandboxing impossible

                                                                                                      False.

                                                                                                      the fact that X11 makes screen tearing unavoidable

                                                                                                      False.

                                                                                                      the fact that the protocol is absolutely chock-full of race conditions which causes a janky UX

                                                                                                      Partially true, but there’s facilities to manage it.

                                                                                                      there’s no concept of DPI scaling

                                                                                                      False.

                                                                                                      The bloat people refer to is the fact that X11 contains a complete GUI toolkit which literally nobody uses anymore because it’s stuck in the 80s, looks and feels like garbage, and can’t be changed because of backwards compatibility.

                                                                                                      False.

                                                                                                      You should really learn something about the topic before confidently spouting bullshit. You’re free to like and even agree with the Wayland transition, but you don’t have to lie about X11 and the people involved in the process.

                                                                                                      1. 3

                                                                                                        Notice how I elaborated on my point and made actual arguments. I can’t argue against “false”. If you want to be taken seriously, you should try to provide arguments. I’m up for having a nuanced conversation, but you don’t seem too interested in that.

                                                                                                        Being snarky isn’t an argument. Merry Christmas.

                                                                                                        1. 3

                                                                                                          Notice how I elaborated on my point and made actual arguments.

                                                                                                          You know the post is right there for anyone to read and see that you did no such thing.

                                                                                                          But OK, let’s talk about it. WHAT about X11 makes screen tearing unavoidable? (Answer: nothing, because it is a solved problem. DRI applications work almost identically to Wayland (in fact, Wayland uses many of the same facilities); you can also use traditional techniques like double buffering and the opt-in sync extension.) WHAT about it makes sandboxing impossible? (Answer: nothing, because it is a partially solved problem. The hooks are all in the server code and the extension even works in some cases, though a lot of the containerization things run a nested server instead for various reasons… but nested servers also work perfectly fine.) How do you explain DPI scaling not being a concept when there are tons of applications that successfully implement it?

                                                                                                          And none of this is material to my original point either, which is more about the linked blog post. Wayland started off by throwing out over twenty years of work and breaking everything. This is an act of youthful arrogance, with no relation to technical merit. As they’ve matured, some of them have realized they were wrong and thus we see the proliferation of Wayland extension protocols. Others, though, just keep lying. One of the notable points they refused to implement was any kind of global positioning, saying that’s the compositor’s job, and the applications shouldn’t know anything about the world outside their own surfaces. But the problem is that it is actually pretty useful, and as much as you want to break away from the world, there are still applications people actually use that need this (e.g. Wine).

                                                                                                          See, I know this, because I’ve actually been there. I’ve written an X toolkit. This gives me experience both in the second-system effect - I started off refusing to implement various functions (including positioning!) saying they weren’t actually useful and relented on many over the years as user-demanded use-cases forced some extensions - and in the details of working with X. Many of the things Wayland propagandists say are impossible are things I implemented in a few dozen lines of code…. maybe GTK makes it hard, maybe your buggy compositor doesn’t do it right, but X doesn’t actually block you.

                                                                                                          (And this is why nobody really cares about Xaw. It costs nothing to leave there and it doesn’t prevent people from writing gtk or qt or my minigui or anything else.)

                                                                                                          Similarly, claims about how “everyone involved in actually building” is just an obvious falsehood if you yourself are involved in actually building the stack. One of the authors of a lot of X extensions, a name you get to know (and kinda hate, because so much of his documentation is kinda obtuse, but he’s done a very significant amount of work on Linux graphics and X itself) is Keith Packard. He’s still working on X in between his day job and has several concrete proposals to shore up some deficiencies. You can read about it on his website: https://keithp.com looks like last update almost a year ago, but December 2020 isn’t that long ago.

                                                                                                          But anyway, back to the bigger point, this is another example of hypocrisy: Wayland propagandists point to X’s 27 extensions and the freedesktop.org interop standards as proof that it is trash, and balk at the idea of adding a 28th extension like Packard sometimes talks about, or another inter-client specification to add whatever they want to add. Yet Wayland is now up to 58 extension protocols listed on the wayland.app website, with more being talked about, including the positioning one right here! And several of those extensions (including more not specified there) are specific to particular compositors. Where in history have we seen this desire for some kind of, oh how should I say it, inter-compositor communication conventions manual before?

                                                                                                          Speaking of freedesktop.org, one of the things the blog in the OP talks about is drag and drop. It talks about how it seems impossible to do full interop there. Wonderful, more breakage. But the Xdnd spec is a microcosm of this same thing too. Just look at the version history: https://freedesktop.org/wiki/Specifications/XDND/#changesfrompreviousversions

                                                                                                          And it helps if you’re familiar with how it works on Windows. Or HTML5, which is just a tweak of the Windows way, so you can see the compare and contrast in that spec. You can imagine xdnd wanted to keep things simple up front and cut a lot of the features Windows 95 had. But what happened in versions 2 through 5? They started adding those things back. And one of the listed reasons is compatibility with Java. MAYBE, maybe, if you were alone in the world, you could pull off some of those changes. But in the real world, in the late ’90s, people used Java applications, so they HAD to make them work somehow. And today, people use Wine, so you HAVE to make it work somehow. Your walled garden rarely survives contact with a broad userbase.

                                                                                                  2. 2

                                                                                                    I would assume that the official stance is to let the compositor figure it out.

                                                                                                  1. 3

                                                                                                    What makes a mouse a “UNIX” mouse?! A good mouse is good no matter the OS you use it with.

                                                                                                    1. 1

                                                                                                      That depends.

                                                                                                      If you can set some of the mouse’s features only with a Windows-only application, then it definitely won’t be a good UNIX mouse, will it? :)

                                                                                                    1. 8

                                                                                                      I’m not exactly sure what the special sauce is here but vanilla .NET supports creating single static binaries just like Go does.

                                                                                                      Here is an example in a project of mine though of the fsproj settings and the actual command invoked to get a static binary:

                                                                                                      https://github.com/eatonphil/dbcore/blob/master/dbcore.fsproj

                                                                                                      https://github.com/eatonphil/dbcore/blob/master/Makefile#L10

                                                                                                      I assume it’s a static binary because I could copy the binary alone to a VM without .NET on it and still run the binary.

                                                                                                      1. 4

                                                                                                        Even without the native compilation, I’m quite enjoying targeting the .net frameworks included in windows by default. If your users are on win10 and update at least once a year, you can publish your winforms app for .net framework 4.6 and it will “just run”. ILMerge can get rid of extra DLLs too. Simple, single-file GUI apps start at <100KB.

                                                                                                        1. 2

                                                                                                          The problem is the frameworks, while conceptually simple deployment-wise, are effectively EoL due to .NET Core/Standard. (The terminology was never clear; the incoherent story is the biggest reason I lost interest in .NET. I’d likely stick with Framework, because it’s basically not going to go full CADT like the rest of the ecosystem has.)

                                                                                                          1. 6

                                                                                                            The problem is the frameworks, while conceptually simple deployment-wise, are effectively EoL due to .NET Core/Standard. (The terminology was never clear; the incoherent story is the biggest reason I lost interest in .NET

                                                                                                            The literal names are absolutely abysmal, granted, but their meanings have always been consistent:

                                                                                                            • .NET Standard is a standard set of APIs. It’s not an implementation. Mono, .NET Core, .NET Framework, UWP, and (I think, for the last couple of years or so) Unity are all implementations of .NET Standard.
                                                                                                            • .NET Framework is the version of .NET that ships as part of Windows. It is basically running in the old COM/ActiveX style, where all the .NET objects and runtime ship as part of the OS proper.
                                                                                                            • .NET Core is a new implementation of .NET, separate from .NET Framework, that focuses on shipping static binaries and running on multiple platforms. It also aims to be a viable alternative to .NET Framework even when running on Windows, though the support cycle of .NET Framework means it’s not going anywhere soon.
                                                                                                            1. 2

                                                                                                              CADT?

                                                                                                              1. 5

                                                                                                                Had to look it up, but it’s Cascade of Attention-Deficit Teenagers.

                                                                                                                1. 3

                                                                                                                  are effectively EoL due to .NET Core/Standard.

                                                                                                                  That’s not true. .NET Framework is a Windows component and thus is supported as long as the OS is.

                                                                                                                  https://dotnet.microsoft.com/platform/support/policy/dotnet-framework

                                                                                                                2. 1

                                                                                                                  Unless I’m misunderstanding something here, this says you mightn’t be right about this :)

                                                                                                            1. 2

                                                                                                              Not a big fan of the installer for this. Why the heck do i need to download binaries for 2 (!) versions of ARM (when i’m using x64) and tons of localization files that i will never use ?

                                                                                                              For now i’m sticking with chocolatey :)

                                                                                                              1. 30

                                                                                                                The example with Postgres is pretty interesting because it made me realize that there’s an entire generation of programmers who got exposed to async I/O before the threaded/synchronous style, via JavaScript!

                                                                                                                It makes sense but I never thought about that. It’s funny because I would indeed think that threads were a convenient revelation if my initial thinking revolved around async :) Although there are plenty of downsides to threads in most environments; it does seem like you need a purpose-built runtime like Erlang / Go to make sure everything gets done right (timeouts, cancellation, easy-to-use queues / semaphores, etc.)

                                                                                                                It’s similar to the recent post about Rust being someone’s first experience with systems programming. That will severely affect your outlook, for better and worse. There are also a lot of people who learned C++ before C, and I can only imagine how bewildering an experience that is.

                                                                                                                1. 4

                                                                                                                  Yeah, threads really are a “convenient revelation”! Aren’t OS-level threads implemented on top of CPU-level callbacks? https://wiki.osdev.org/Interrupt_Descriptor_Table

                                                                                                                  1. 10

                                                                                                                    I wouldn’t call CPU-level interrupt handlers “callbacks”. They’re too low-level of a concept for that. It’d be like calling an assembly-language JMP instruction a CPU-level case statement, just because case statements are ultimately implemented in terms of JMP or a similar CPU instruction.

                                                                                                                    1. 4

                                                                                                                      I was turned on to code when threading was current but recent. This reminds me of the day I finally understood that multiprocessing was previously done by, you guessed it, multiple processes.

                                                                                                                      1. 2

                                                                                                                        I should have said “synchronous” or “straight line code” and not “threads”. There is a lot of overloading of terms in the world of concurrency which makes conversations confusing. See my other reply:

                                                                                                                        https://lobste.rs/s/eaaxsb/i_finally_escaped_node_you_can_too#c_fg9k7y

                                                                                                                        I agree with the other reply that the term “callbacks” is confusing here. Callbacks in C vs. JavaScript are very different things because of closures (and GC).

                                                                                                                        I’d say if you want to understand how OS level threads are implemented, look up how context switches are implemented (which is very CPU specific). But I’m not a kernel programmer and someone else may give you a better pointer.

                                                                                                                      2. 3

                                                                                                                        Although there are plenty of downsides of threads in most environments

                                                                                                                        How come? After all, threads are the basic multithreading building block exposed directly by the OS.

                                                                                                                        1. 10

                                                                                                                          I should have said synchronous / “straight line” code – saying “threads” sort of confuses the issue. You can have straight line code with process-level concurrency (but no shared state, which is limiting for certain apps, but maybe not as much as you think)

                                                                                                                          It’s very easy to make an argument that threads exposed by the OS (as opposed to goroutines or Erlang processes) are a big trash fire of design. Historically that’s true; it’s more a product of evolution than design.

                                                                                                                          One reason is that global variables are idiomatic in C, and idiomatic in the C standard library (e.g. errno, which is now a thread local). Localization also uses global variables, which is another big trash fire I have been deep in: https://twitter.com/oilshellblog/status/1374525405848240130

                                                                                                                          Another big reason is that when threads were added to Unix, syscalls and signals had to grow semantics with respect to threads. For example select() and epoll(). In some cases there is no way to reconcile it, e.g. fork() is incompatible with threading in fundamental ways.

                                                                                                                          The other reason I already mentioned is that once you add threads, timeouts and cancellation should be handled with every syscall in order for you to write robust apps. (I think Go and node.js do a good job here. In C and C++ you really need layers on top; I think ZeroMQ gives you some of this.)

                                                                                                                          So basically when you add threads, EVERYTHING about the language has to change: data structures and I/O. And of course C didn’t have threads originally. Neither did C++ for a long time; I think they have a portable threading API now, but few people use that.


                                                                                                                          The original concurrency primitive exposed by Unix is processes, not threads. You can say that a thread is a process that allows you to write race conditions :)

                                                                                                                          From the kernel point of view they’re both basically context switches, except that in the threading case you don’t change the address space. Thus you can race on the entire address space of the process, which is bad. It’s a mechanism that’s convenient for kernel implementers, but impoverished for apps.

                                                                                                                          OS threads are pretty far from what you need for application programming. You need data structures and I/O too and that’s what Go, Erlang, Clojure, etc. provide. On the other hand, if your app can fit within the limitations of processes, then you can write correct / fast / low level code with just the OS. I hope to make some of that easier with Oil; I think process-level concurrency is under-rated and hard to use, and hence underused. Naive threading results in poor utilization on modern machines, etc.

                                                                                                                          tl;dr Straight-line code is good; we should be programming concurrent applications with high level languages and (mostly) straight line code. OS threads in C or C++ are useful for systems programming but not most apps

                                                                                                                          1. 1

                                                                                                                            Race conditions, data races, and deadlocks are a bit of an overstated problem. In 99% of cases, people are just waiting for IO; protecting shared data structures with locks is trivial, and it often takes 3+ orders of magnitude less time than the IO. It is a non-issue, honestly.

                                                                                                                            Personally, I find the original P() and V() semantics introduced by Dijkstra to be the easiest concurrency idiom to reason about. All these newer/alternative semantics, be they promises, futures, deferreds, callbacks, run-to-completion, async keywords and what have you, feel like a hack compared to that. If you can spawn a new execution flow (for lack of a better name) without blocking your current one, and query it for completion, then you can do it with almost whatever construct you have. Including threads.

                                                                                                                            The case for threads is that you can share data, thus saving large amounts of memory.

                                                                                                                            In all seriousness, which percentage of people uses concurrency for other purposes than circumventing the need to wait for IO?

                                                                                                                          2. 2

                                                                                                                            There are a lot of ways to shoot yourself in the foot with something like pthreads. The most common is probably trying to share memory between threads: something as simple as appending a value to the end of a dynamic array fails spectacularly if two threads try to do it at the same time and there’s no synchronization mechanism. The same applies to most of your go-to data structures.

                                                                                                                            1. 4

                                                                                                                              Shared memory has more to do with the language and its memory-management model than with sync/async, though. You can have an async runtime scheduled N:M where it’s up to you to manage resource sharing.

                                                                                                                              That’s the case if you use (for example) libuv in C with a threadpool for scheduling. On the other hand, Erlang, where pretty much all communication works asynchronously, would not have the same issue.

                                                                                                                              1. 1

                                                                                                                                What’s the problem with adding a semaphore right before adding the value? Is it not how everyone does it? (honest question)

                                                                                                                            2. 2

                                                                                                                              The example with Postgres is pretty interesting because it made me realize that there’s an entire generation of programmers who got exposed to async I/O before the threaded/synchronous style, via JavaScript!

                                                                                                                              Your comment made me realize that! Crazy, but very interesting…

                                                                                                                              I wonder if that has any impact on how good these new developers are / will be at parallel or synchronous programming.

                                                                                                                              The problem is that JavaScript is such a loosey-goosey language that I’m fairly convinced people are writing incorrect async code in it, and it works just well enough that they never notice, which might leave them even worse off. Maybe I’m just being elitist, but I’ve reviewed some of my own Node code recently and caught several mistakes in how I modeled a “Notifier” object that had to manage its own state asynchronously. It never caused an issue “in the field”, so I only noticed because I was refactoring to remove a deprecated dependency.

                                                                                                                              EDIT: Also, I’m one of those who learned C++ before C (and I don’t claim that I “know” C by any stretch: I understand the differences between the languages in a technical sense, but I can’t write or read idiomatic C in real code bases). But I learned C++ before C++11, so I think that might not be what you are talking about. Learning C++98 probably wasn’t that bewildering compared to today, because we didn’t have smart pointers, or ranges, or variants, etc. The weirdest thing at the time was probably the STL iterators and algorithms stuff. But all of that just felt like an “obvious” abstraction over pointers and for loops.

                                                                                                                              1. 2

                                                                                                                                Yeah, JS (and Node backend code) has really interesting asynchronous behaviour; when folks start using other languages with better computational concurrency/parallelism, a lot of things that they relied on will no longer be true. The easiest example is the fact that there’s only ever one JS “thread” executing at any given time, so function bodies that would otherwise have race conditions don’t (because a function body is guaranteed to run to completion before a different one starts).

                                                                                                                            1. 1

                                                                                                                              My single row vertical taskbar in Windows. Works great, i’ve been using it like that for several years already.
                                                                                                                              https://i.imgur.com/wPKMcoX.jpg

                                                                                                                              1. 1

                                                                                                                                Your taskbar is so skinny, I didn’t even notice it was there the first couple of times I looked at the image! :-D

                                                                                                                                OK, that’s pretty minimal, but hey, whatever works for you!

                                                                                                                              1. 14

                                                                                                                                This stack will also make supporting the following features difficult:

                                                                                                                                • Paging
                                                                                                                                • Site Navigation
                                                                                                                                  • By category
                                                                                                                                  • By tag
                                                                                                                                • Common Elements
                                                                                                                                1. 12

                                                                                                                                  It’s so annoying that (from our current perspective) this could have been done so easily in the <head> tag, by adding a few <meta …> pointers to the next/previous page, or some <neighbours> element, but instead the only people who use anything like this are search engines.

                                                                                                                                  1. 4

                                                                                                                                    Why do you need paging? You’d need thousands of pages before your site exceeds the size of, e.g., the NYTimes, which amounts to tens of megabytes.

                                                                                                                                    1. 4

                                                                                                                                      Maybe you don’t. But you don’t need to scale to thousands of pages before paging is useful.

                                                                                                                                      • 5 most recent stories? Needs paging.
                                                                                                                                      • List of articles about topic X? Needs paging.
                                                                                                                                      • Blog posts in the month of June? Needs paging.

                                                                                                                                      Not everyone needs those but for the ones that do? Plain HTML/CSS is going to suck and be really error prone.

                                                                                                                                      1. 1

                                                                                                                                        wait what even is paging in this context

                                                                                                                                        1. 2

                                                                                                                                          I suppose a more fitting word for that concept is ‘pagination’.

                                                                                                                                          1. 1

                                                                                                                                            well the things in /u/zaphar’s comment don’t seem to be about that… why do you need pagination to list the 5 most recent stories?

                                                                                                                                            1. 2

                                                                                                                                              Because that list would otherwise need to be updated manually.

                                                                                                                                              1. 1

                                                                                                                                                how would pagination help with that? a list of 5 stories probably fits on a single page anyway.

                                                                                                                                                1. 1

                                                                                                                                                  It’s the same process. The top-five-most-recent stories is just a page with 5 stories on it.

                                                                                                                                                  1. 1

                                                                                                                                                    pagination is the process of dividing a document into pages.

                                                                                                                                                    maybe /u/zaphar was using “paging” to refer to any dynamic content; I am not familiar with that usage.

                                                                                                                                                    1. 1

                                                                                                                                                      The “document” is the entire list of stories. The page is the first five.

                                                                                                                                                      1. 1

                                                                                                                                                        this is extremely dumb

                                                                                                                                    2. 1

                                                                                                                                      Yes, but he also doesn’t need to maintain things that he doesn’t support, and that works in his favor.

                                                                                                                                    1. 2

                                                                                                                                      Keypress to event on usb latency measurements would be nice to have.

                                                                                                                                      1. 1

                                                                                                                                        If you worry about latency over USB, consider going back to PS/2.

                                                                                                                                        1. 2

                                                                                                                                          Going back to old hardware isn’t the solution. Making current/future hardware good is.

                                                                                                                                          Objective metrics are a necessity if we are to make any progress.

                                                                                                                                          1. 3

                                                                                                                                            USB vs. PS/2 is polling vs. interrupts. Different designs for different things, but USB will always have some measure of latency compared with PS/2.

                                                                                                                                            1. 3

                                                                                                                                              USB 3 switched from polling to full duplex. Unicomp keyboards aren’t USB 3, though.

                                                                                                                                              1. 1

                                                                                                                                                Yes, I’m aware of how even input devices are binned into time slots.

                                                                                                                                                This doesn’t mean one keyboard or mouse can’t be faster than another. Even with polling at 1000 Hz, it usually takes several slots from keypress to sending the event on USB. Because everything is shit nowadays.

                                                                                                                                              2. 2

                                                                                                                                                I honestly don’t think anybody will ever care to redesign a generic interface like USB to account for latency like this.

                                                                                                                                                And as it turns out, the Unicomp Model M seems to be pretty decent in at least one latency measurement:

                                                                                                                                                https://danluu.com/keyboard-latency/

                                                                                                                                                …however, that article is filled with (quoting):

                                                                                                                                                …then throws in a bunch more scientific mumbo jumbo to say that no one could really notice latencies below 100ms. This is a little unusual in that the commenter claims some kind of special authority…

                                                                                                                                                …so I’m not so sure about the validity in that measurement anymore. :)

                                                                                                                                                Oh this was interesting as well, but not concerning keyboards but latency measurements: https://thume.ca/2020/05/20/making-a-latency-tester/

                                                                                                                                                1. 2

                                                                                                                                                  …however, that article is filled with (quoting)

                                                                                                                                                  He’s got a point though. Human reaction time has nothing to do with human ability to notice latencies under 100ms. According to the literature, musicians definitely can, and drummers will identify jitter even under 3ms. He’s absolutely right in pointing that out.

                                                                                                                                                  …so I’m not so sure about the validity in that measurement anymore. :)

                                                                                                                                                  Measurements are fortunately objective, regardless of the guy’s opinions on human sensitivity to latency.

                                                                                                                                                  The worst part is that keyboard latency is just a small part of the whole latency pipeline, from keypress to reaction. Every step adds latency, and it’s important that no step neglects to minimize it.

                                                                                                                                                  If the input device neglects it at the very start of the pipeline, the whole is already hopeless.

                                                                                                                                                  1. 2

                                                                                                                                                    Measurements are fortunately objective, regardless of the guy’s opinions on human sensitivity to latency.

                                                                                                                                                    Yes! My comment was about his use of language, especially “scientific mumbo jumbo”. It implies ignorance and/or hubris regarding his own methods & choices. Science is not perfect by any means, but dismissing an article/someone else for using terminology within their field feels weird. Dismissing others’ findings feels alarming and makes me suspicious.

                                                                                                                                                    The worst part is that keyboard latency is just a small part of the whole latency pipeline, from keypress to reaction. Every step adds latency, and it’s important that no step neglects to minimize it.

                                                                                                                                                    Agreed!

                                                                                                                                                    I remember reading Kirsh, “The Intelligent Use of Space” at uni, and one part of it applies here (I think). He studied Tetris players (very good Tetris players) and how they made use of the ‘space’ in the game to figure out which pieces they were playing before they could even see them. Given that competitive Tetris players still use old television sets to practice (due to old TVs’ very low inherent latency?), I guess they care a lot about this, and try to minimize the entire latency by any means necessary.

                                                                                                                                              3. 1

                                                                                                                                                I made sure my most recent desktop includes PS/2 ports because I was hoping to plug in my vintage Model M, then noticed that I’ve come to depend on modifier keys it’s lacking… Maybe it’s time to see if I could hack those in.

                                                                                                                                            1. 1
                                                                                                                                              • the built-in “viewer” for text files is cool.
                                                                                                                                              • on Windows at least, if i type “blah.pdf” i would have expected it to open the file with the default associated app. Instead, i just get “command not found”….
                                                                                                                                              1. 1

                                                                                                                                                on Windows at least, if i type “blah.pdf” i would have expected it to open the file with the default associated app. Instead, i just get “command not found”….

                                                                                                                                                On Windows, start file.pdf. (On macOS, open; in the X11 world, xdg-open.)
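If you want the same behaviour from a script rather than the shell, a small cross-platform dispatcher can wrap those three commands. This is just a sketch; `opener_command` and `open_with_default_app` are made-up helper names, not part of any library:

```python
import subprocess
import sys

def opener_command():
    """Return the platform's command for opening a file with its default app."""
    if sys.platform == "win32":
        return ["cmd", "/c", "start", ""]    # 'start' is a cmd.exe builtin; "" is the window title
    if sys.platform == "darwin":
        return ["open"]
    return ["xdg-open"]                      # most Linux/BSD desktops

def open_with_default_app(path):
    subprocess.run(opener_command() + [path], check=True)
```

On Windows, `os.startfile(path)` from the standard library is the more direct route; the `cmd /c start` form above just mirrors the shell command from the comment.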

                                                                                                                                              1. 26

                                                                                                                                                inheritance is something that makes your code hard to understand. Unlike a function, which you can read just line by line, code with inheritance can play “go see another file” golf with you for a long time.

                                                                                                                                                This isn’t an argument against inheritance, it’s an argument against modularity: Any time you move code out of inline you have the exact same “problem” (to the extent it is a problem) and you can only solve it the same way, with improved tooling of one form or another. ctags, for example, or etags in Emacs.

                                                                                                                                                1. 31

                                                                                                                                                  Inheritance has this problem to a much larger degree because of class hierarchies. Tracing a method call on a class at the bottom of the tree requires checking every parent class to see if the method is overridden anywhere. Plain function calls don’t have that problem: there’s only a single definition.
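A tiny Python sketch of that lookup problem (Base/Pooled/Retrying are invented names): to know what the call at the bottom of the tree does, you have to walk the whole chain, because any class in between may have silently overridden the method.

```python
class Base:
    def connect(self):
        return "base connection"

class Pooled(Base):
    def connect(self):          # silently overrides Base.connect
        return "pooled connection"

class Retrying(Pooled):
    pass                        # inherits connect -- but from where?

# You walk the chain by hand; Python at least exposes the lookup order:
assert Retrying().connect() == "pooled connection"
assert [c.__name__ for c in Retrying.__mro__] == ["Retrying", "Pooled", "Base", "object"]
```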

                                                                                                                                                  1. 7

                                                                                                                                                    Plain function calls don’t have that problem. There’s only a single definition.

                                                                                                                                                    Unless we start using higher-order functions, where the function is passed around as a value. Such abstraction creates the exact same problem, only now it’s called “where does this value originate from”.
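A minimal Python illustration of that point (retry/fetch/load are made-up names): once the function becomes a value, “go to definition” stops at the parameter, and the actual definition could come from anywhere.

```python
def retry(operation):
    # By this point 'operation' is just a value; its definition could
    # live anywhere in the codebase, so "go to definition" stops here.
    return operation()

def fetch():
    return "data"

def load(op=fetch):
    # The indirection: callers may pass any callable at all.
    return retry(op)

assert load() == "data"
assert load(lambda: "other") == "other"
```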

                                                                                                                                                    1. 5

                                                                                                                                                      Yes, which is why higher-order functions are another tool best used sparingly. The best code is the most boring code. The most debuggable code is the code with the fewest extension points to track down.

                                                                                                                                                      This is, of course, something to balance against debugging complicated algorithms once and reusing them, but it feels like the pendulum has swung too far in the direction of unwise extensibility.

                                                                                                                                                      1. 4

                                                                                                                                                        For extra fun, use higher-order functions with class hierarchies!

                                                                                                                                                      2. 4

                                                                                                                                                        The best is python code where the parent class can refer to attributes only created in child classes. There are equivalents, but less confusing, in languages like Java.
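A small Python sketch of the pattern being criticized (Serializer/UserSerializer are invented for illustration): the parent reads self.fields, which nothing in the parent ever defines.

```python
class Serializer:
    # Nothing here defines self.fields -- the parent simply assumes
    # every child will provide it. Reading this class alone, you
    # cannot tell where the attribute comes from.
    def serialize(self):
        return {name: getattr(self, name) for name in self.fields}

class UserSerializer(Serializer):
    fields = ["name", "email"]   # the attribute the parent relied on

    def __init__(self, name, email):
        self.name = name
        self.email = email

assert UserSerializer("ada", "ada@example.com").serialize() == {
    "name": "ada",
    "email": "ada@example.com",
}
```

Instantiating `Serializer` directly and calling `serialize()` would raise `AttributeError`, which is exactly the confusion the comment describes.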

                                                                                                                                                        1. 1

                                                                                                                                                          Isn’t the example in the linked article doing exactly that?

                                                                                                                                                          1. 2

                                                                                                                                                            Okaaaay… what’s self.connect doing? Ah, it raises NotImplementedError. Makes sense, back to SAEngine:

                                                                                                                                                            Not exactly. :)

                                                                                                                                                            1. 1

                                                                                                                                                              Check out Lib/_pyio.py (the Python io implementation) in CPython for lots of this.

                                                                                                                                                          2. 1

                                                                                                                                                            Overriding is mostly for modularity and reducing code duplication. Without classes, you might either end up with functions with tons of duplicated code, or tons of functions having a call path to simulate the “class hierarchies”. And yes, it’s going to make the code harder to read in some cases, but it also makes the code much shorter to read.

                                                                                                                                                            1. 6

                                                                                                                                                              Without classes, you might either end up with functions with tons of duplicated code

                                                                                                                                                              Why? There is literally no difference in code reuse between loading code through inheritance vs. function calls, apart from possibly needing to pass state to a function that could otherwise be held in class instances (aka objects). That is certainly less than the class-definition boilerplate.

                                                                                                                                                              or tons of functions having a call path to simulate the “class hierarchies”

                                                                                                                                                              The call chain is there in both cases. It’s just that in the class-based approach it is hidden and quickly becomes a nightmare to follow. Each time you call a method or access a class attribute, you are pointing at a spot in your code that could be hooked to different points in the hierarchy. This is a problem. People don’t think it is a big deal when they write a simple class and know it well, because the complexity is sneaky. Add another class and all of a sudden you have brought a whole other hierarchy into the picture. Each time you read “this” or “instance.something”, you’re up for a hunt. And each additional hierarchy you bring in increases complexity geometrically. Before you know it, the project is unmanageable, and the ones who wrote it have moved on to some greenfield project, making a similar mess for some poor soul to struggle with after them.

                                                                                                                                                              And yes, it’s going to make the code harder to read in some cases, but it also makes the code much shorter to read

                                                                                                                                                              It doesn’t really. People fall for this because you can instantiate a class and get a bunch of hidden references that are all available at will, without needing to explicitly pass them to each method. But you only get this by defining a class, which is way more verbose than just passing the references you need.

                                                                                                                                                              All that said, what classes do offer in most languages is a scope that allows fine-grained control of data lifecycle. If we remove inheritance, then class members are akin to global variables in non-OOP languages, except that you can create as many scopes as you want. I wish languages like Python would do this, as, for the same reason as OP, I suffer from working with OOP codebases.

                                                                                                                                                              1. 4

                                                                                                                                                                You make it sound like inheritance is the only way to reduce code duplication. In my experience that is simply not true, you can always use composition instead. E.g. Haskell doesn’t support inheritance or subtyping and you still get very compact programs without code duplication.

                                                                                                                                                                1. 5

                                                                                                                                                                  Without classes, you might either end up with functions with tons of duplicated code, or tons of functions having a call path to simulate the “class hierarchies”

                                                                                                                                                                  This is only true in my experience if you’re trying a functional approach with an OO mindset. There are other ways to solve problems, and many of them are far more elegant in languages designed with functional programming as the primary goal.

                                                                                                                                                              2. 5

                                                                                                                                                                When you move a bit of code out of your file, it’s not going to call back into functions from the first file. You’re even going to make sure this is the case, that there is no circular dependency, because (in languages that allow you to make one) it makes the code harder to read. With inheritance, those games of calling everything all over the place are just the normal state of things.

                                                                                                                                                                Of course, the example in the article is small and limited, because pulling a monster in from somewhere is not going to make it more approachable, but surely you’ve seen this stuff in the wild.

                                                                                                                                                                1. 4

                                                                                                                                                                  You might do that, in the same way that you might carefully document your invariants in a class that allows inheritance, mark methods private/final as needed, etc. But you also might not do that. It sounds a bit as if you’re comparing well-written code without inheritance to poorly written code with it.

                                                                                                                                                                  Not that there isn’t lots of terrible inheritance based code. And I’d even say inheritance, on balance, makes code harder to reason about. However, I think that the overwhelming issue is your ability to find good abstractions or ways of dividing up functionality–the choice of inheritance vs. composition is secondary.

                                                                                                                                                                  1. 2

                                                                                                                                                                    It’s just that without inheritance it’s easier to make good abstractions. Inheritance lets you do the wrong thing easily, without any friction; I just read a good article about that a few weeks ago.

                                                                                                                                                                2. 4

                                                                                                                                                                  Interesting article from Carmack about inlining everything:

                                                                                                                                                                  http://number-none.com/blow/blog/programming/2014/09/26/carmack-on-inlined-code.html

                                                                                                                                                                  1. 4

                                                                                                                                                                    This isn’t an argument against inheritance, it’s an argument against modularity: Any time you move code out of inline you have the exact same “problem” (to the extent it is a problem) and you can only solve it the same way, with improved tooling of one form or another. ctags, for example, or etags in Emacs.

                                                                                                                                                                    Not really: including code via a class or object member forces you to manually go figure out which implementation is used, or where the implementation lives in a web of nested namespaces. In the case of functions, each symbol is unambiguous. This is a big deal. If you have types A and B, with A having an attribute of type B, and each of these types sitting in a 3-level hierarchy, then when you call A.b.some_b_method() it could be defined in 9 different places, and you need to figure out which one the symbol resolves to. This is a real problem.
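A cut-down Python version of the A/B example (with the 3-level hierarchy on the B side only; all class names are invented): the call site looks identical in every case, and only the runtime type decides which definition actually runs.

```python
class B0:
    def some_b_method(self):
        return "B0"

class B1(B0):
    pass                         # no override at this level

class B2(B1):
    def some_b_method(self):     # overrides two levels up
        return "B2"

class A:
    def __init__(self, b):
        self.b = b               # statically, just "some B"

# a.b.some_b_method() reads the same in both cases below;
# the definition that runs depends on the object passed in.
assert A(B2()).b.some_b_method() == "B2"
assert A(B1()).b.some_b_method() == "B0"   # B1 inherits it from B0
```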

                                                                                                                                                                    1. 2

                                                                                                                                                                      This isn’t an argument against inheritance, it’s an argument against modularity:

                                                                                                                                                                      Yeah, all code should be in a single file anyway. No more chasing method definitions across multiple files. You just open the file and it’s all there…

                                                                                                                                                                      1. 2

                                                                                                                                                                        Any form of modularity should be there to raise the level of abstraction, i.e. become a building block that is solid (pun intended), firm, and utterly reliable, which you can use to understand the higher layers.

                                                                                                                                                                        You can peer inside the building block if you need to, but all you need to understand about it, to understand the next level up, is what it does, not how it does it.

                                                                                                                                                                        Inheritance is there to let you know that “all these things IS A that”, i.e. I can think of them and treat all of them exactly as I would treat the parent class (i.e. the L in SOLID).

                                                                                                                                                                        I can utterly rely on the fact that the class invariant of the superclass holds for all subclasses; the subclasses may guarantee other things, but among the things they guarantee is that the superclass’s class invariant holds.

                                                                                                                                                                        I usually write a class invariant check for every class I write.

                                                                                                                                                                        I then invoke it at the end of the constructor, and the beginning of the destructor, and at the start and end of every public method.

                                                                                                                                                                        As I become more convinced of the correctness of what I’m doing, I may remove some for efficiency reasons. As I become more paranoid, I will add some.

                                                                                                                                                                        In subclasses, the class invariant check always invokes the parent class’s invariant check!
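A minimal Python sketch of that discipline (Account/SavingsAccount are invented examples, not from the thread): the subclass’s invariant check re-asserts the parent’s before adding its own, and public methods check on entry and exit.

```python
class Account:
    def __init__(self, balance):
        self.balance = balance
        self._check_invariant()              # end of constructor

    def _check_invariant(self):
        assert self.balance >= 0, "balance must be non-negative"

    def withdraw(self, amount):
        self._check_invariant()              # start of public method
        self.balance -= amount
        self._check_invariant()              # end of public method

class SavingsAccount(Account):
    MIN_BALANCE = 100

    def _check_invariant(self):
        super()._check_invariant()           # parent's invariant must still hold
        assert self.balance >= self.MIN_BALANCE, "savings below minimum"

acct = SavingsAccount(200)
acct.withdraw(50)                            # fine: 150 >= 100
```

Note the subclass only strengthens the invariant; weakening it (e.g. allowing negative balances) would break the “treat it as the parent class” guarantee the comment describes.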

                                                                                                                                                                      1. 1

                                                                                                                                                                        I have a late 2013 MBP that I use for Lisp, Python and C++ development when I’m not at home. It’s primarily for photo editing, though, and that’s why it still runs OSX ;-)

                                                                                                                                                                        The hardware quality on this model is really good, and it’s more solidly built than the newer ThinkPad I have for work. I upgraded the RAM to 16 GB and had to disassemble and blow out the fan vents once, but otherwise I haven’t had to do any maintenance on the hardware. I think I can upgrade the hard drive in this model, but I haven’t tried yet.

                                                                                                                                                                        Emacs and SBCL run great, but I haven’t spent much time with XCode. In general, though, I feel the quality of Apple’s software has really gone down over the last few years. Lots of changes for change’s sake, settings that get reset after each upgrade, and a bunch of little bugs slipping through. It feels like they’re making OSX more like iOS, and I don’t like it.

                                                                                                                                                                        My “main” dev machine is a frankenstein monster of old hardware:

                                                                                                                                                                        • 3.2 GHz AMD Phenom II, circa 2009
                                                                                                                                                                        • 16 GB RAM (upgraded from 8 GB a few months ago)
                                                                                                                                                                        • 2 × 1 TB SSDs, one with FreeBSD 12.1, the other with Debian Testing
                                                                                                                                                                        • 1 × GeForce GTX 650 Ti
                                                                                                                                                                        • 1 × GeForce GT 640
                                                                                                                                                                        • Roland UA-25EX USB sound card
                                                                                                                                                                        • Ancient SBLive sound card
                                                                                                                                                                        • 24” Dell monitor

                                                                                                                                                                        The graphics cards and monitor were salvaged from the e-recycling bin at my last job, and the Roland is from Goodwill.

                                                                                                                                                                        1. 1

                                                                                                                                                                          What are you doing with 2 GPUs ?

                                                                                                                                                                          1. 1

                                                                                                                                                                            Unfortunately not much right now.

                                                                                                                                                                            One has 2 GB of RAM and 384 CUDA cores, while the other has 1 GB of RAM and 768 CUDA cores, and I’ve used them to compare GLSL performance a few times, but mostly the second one just wastes power.

                                                                                                                                                                        1. 1

                                                                                                                                                                          I guess this finally proves TCP is too bloated (or, to put it differently, the price we have to pay for correctness and reliable delivery at the protocol level is too high) and UDP-like protocols are best suited for communicating over unreliable networks.

                                                                                                                                                                          1. 14

                                                                                                                                                                            Not really, more that TCP enforces a level of correctness that many applications don’t need. If you’re using telnet or SSH, you probably want strict in-order delivery of everything, and that’s what TCP gets you. With HTTP though, you generally say “I want to get these 10 things from point A to point B, but as long as they all get there correctly in the end I don’t really care what order they’re in”, which gives you much more wiggle room for reordering and resending lost pieces. QUIC is able to take advantage of that wiggle room.
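A toy sketch of that wiggle room (this is an illustration, not real TCP or QUIC; the sequence numbers and timeline are made up). Packet 2 is lost and retransmitted last. With one TCP-style ordered stream, everything behind the gap sits in the receive buffer; with QUIC-style independent streams, only the stream that lost data waits:

```python
arrivals = [1, 3, 4, 5, 6, 7, 8, 9, 10, 2]  # packet 2 arrives last

def tcp_delivery_times(arrivals):
    """Strict in-order delivery: record the tick at which each packet
    becomes available to the application."""
    times, buffered, next_seq = {}, set(), 1
    for t, seq in enumerate(arrivals):
        buffered.add(seq)
        while next_seq in buffered:  # drain everything behind the gap
            times[next_seq] = t
            next_seq += 1
    return times

tcp = tcp_delivery_times(arrivals)
# Packet 3 arrived at tick 1 but is stuck until tick 9, when 2 lands:
assert tcp[3] == 9

# QUIC-style: each resource rides its own stream, so delivery time is
# just arrival time -- the lost packet only delays itself.
quic = {seq: t for t, seq in enumerate(arrivals)}
assert quic[3] == 1
```

That buffering delay is the head-of-line blocking QUIC avoids.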

                                                                                                                                                                            1. 4

                                                                                                                                                                              If you’re using telnet or SSH, you probably want strict in-order delivery of everything

                                                                                                                                                                              Tell the mosh people about that.

                                                                                                                                                                              1. 2

                                                                                                                                                                                Hence the “probably”. ;-) Thanks for the Cool New Thing To Investigate!

                                                                                                                                                                            2. 8

                                                                                                                                                                              QUIC is a TCP-like protocol that uses UDP instead of raw IP, because routers don’t understand anything else. For QUIC, UDP is simply overhead for legacy interoperability.

                                                                                                                                                                              1. 3

                                                                                                                                                                                Both TCP and UDP have design mistakes (treating IP addresses as client identity, TCP’s handshake latency, and no encryption). We can’t fix those anytime soon because some networking hardware is incompatible.

                                                                                                                                                                                New protocols (mosh, wireguard, QUIC) use UDP datagrams mostly just as proxies for IP frames.

                                                                                                                                                                                1. 2

                                                                                                                                                                                  That’s not a mistake in UDP. Those solutions don’t belong at that layer, otherwise you would have to replace your networking hardware every few years.

                                                                                                                                                                                  1. 1

                                                                                                                                                                                    Which of the two claims do you think are not mistakes in UDP? I can see an argument for encryption, but I’m fairly sure that using a connection id would have been a good idea.

                                                                                                                                                                                    1. 1

                                                                                                                                                                                      Both.

                                                                                                                                                                                      Connection IDs: A connection ID requires you to pre-establish routing (meaning an extra RTT before the first UDP packet arrives, and now there are two code paths instead of one), and requires all intermediate boxes to remember all routes for all active connections (drastically increasing RAM costs; run out of RAM? need to re-establish routing).

                                                                                                                                                                                      Encryption doesn’t belong in UDP either, in particular because encryption schemes need to be upgraded on a different schedule to switches (I have an 11 year old gigabit switch under my desk).

                                                                                                                                                                                      1. 1

                                                                                                                                                                                        I don’t agree with either of those premises.

                                                                                                                                                                                        The switches don’t need to understand the crypto (tho could maybe understand the MAC to drop bad traffic early) and for a connection id, you don’t need to do a round trip or have intermediate boxes remember, you just send one as part of the protocol.

                                                                                                                                                                                        If the server receives an authenticated packet from a different IP with the same connection id, then it just sends to that address in the future instead.
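A minimal sketch of that idea over plain UDP (a hypothetical protocol, not QUIC; the packet layout, key, and `handle_packet` helper are all made up for illustration). The server keys state on the connection ID carried in each packet rather than on the sender’s address, and moves the return address whenever an authenticated packet shows up from somewhere new:

```python
import hmac, hashlib

SECRET = b"per-connection key agreed at handshake"  # placeholder

def mac(conn_id: bytes, payload: bytes) -> bytes:
    """8-byte authenticator over connection ID + payload."""
    return hmac.new(SECRET, conn_id + payload, hashlib.sha256).digest()[:8]

sessions = {}  # conn_id -> last known (ip, port) to send replies to

def handle_packet(packet: bytes, addr):
    conn_id, tag, payload = packet[:8], packet[8:16], packet[16:]
    if not hmac.compare_digest(tag, mac(conn_id, payload)):
        return None  # drop unauthenticated traffic
    if sessions.get(conn_id) != addr:
        sessions[conn_id] = addr  # client roamed: reply here from now on
    return payload

# A client moves from Wi-Fi to LTE mid-session:
cid = b"\x01" * 8
handle_packet(cid + mac(cid, b"hello") + b"hello", ("198.51.100.7", 40000))
handle_packet(cid + mac(cid, b"again") + b"again", ("203.0.113.9", 51000))
assert sessions[cid] == ("203.0.113.9", 51000)  # new IP, same connection
```

No extra round trip, no middlebox state: only the two endpoints care about the ID.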

                                                                                                                                                                                        1. 1

                                                                                                                                                                                          If the server receives an authenticated packet from a different IP with the same connection id, then it just sends to that address in the future instead.

                                                                                                                                                                                          UDP doesn’t have connections, so I’m unclear on how this is better, and it adds an extra header to every packet.

                                                                                                                                                                                          I could understand the claim “There should be a standard layer between UDP and TCP adding support for crypto”, and/or “TCP should support session continuation across client IP changes”.

                                                                                                                                                                                          Are those close to what you’re arguing for? Otherwise I don’t think I have understood you very well, sorry.

                                                                                                                                                                              1. 0

                                                                                                                                                                                I come from the Windows world (where IIS is the only web server that matters), so the idea of having to use a (reverse) proxy to (easily) support HTTPS is ludicrous to me.

                                                                                                                                                                                1. 4

                                                                                                                                                                                  IIS fills the same place as nginx in this design.

                                                                                                                                                                                  1. 2

                                                                                                                                                                                    You don’t have to, but it is convenient.

                                                                                                                                                                                    1. 2

                                                                                                                                                                                      It’s really easy. Here’s a trivial nginx conf to enable that:

                                                                                                                                                                                      server {
                                                                                                                                                                                              server_name api.myhost.com;
                                                                                                                                                                                              listen 80;
                                                                                                                                                                                      
                                                                                                                                                                                              location / {
                                                                                                                                                                                                      include proxy_params;
                                                                                                                                                                                                      proxy_pass http://localhost:5000/;
                                                                                                                                                                                              }
                                                                                                                                                                                      
                                                                                                                                                                                             # Certbot will put in SSL for you
                                                                                                                                                                                      }
                                                                                                                                                                                      

                                                                                                                                                                                      And then you can easily get SNI-based multiple hosts on the same ‘node’ if you’d like. This lets you easily handle SSL stuff and still have whatever web-server you want bound to the local host:port to actually do the action. You can also do all the fun URL rewrite stuff up there if you’d like.