1.  

    This is a godsend. I’ve been staring at my Indigo2 for months wondering how I’m going to take the next steps in getting it back online. Bam, here we go! Now I just need to get a working SCSI drive and burn these CDs to reset the root password. Thanks!

    1.  

      Use DINA instead?

      1.  

        How will DINA help if @jamestomasino doesn’t have IRIX install media?

        1.  

          It doesn’t, but it lets you avoid the clumsy install process from CDs, which involves swapping discs in and out multiple times.

      2.  

        I had a full Indigo2 with graphics upgrade that got abandoned in a move. :(

        At one time we had a Challenge, a Fuel, an Octane (which I still have, I think, or maybe it was an O2?), and that Indigo2.

      1. 6

        What a disheartening read.

        At least with a paperclip-maker mindlessly optimizing you get paperclips.

        1.  

          I’m disappointed that the PLT folks seem to keep stuffing more and more into JS while the standard library remains so spartan.

          1. 5

            Mostly I just jot down ideas in my current notebook (I have scores of notebooks full of things) and that allows me to stop thinking about that particular thing because I’ll get around to organizing it into my todo list sometime very soon. Then, months later but also seemingly in the blink of an eye, I’ll remember that I wanted to do it and feel an oppressive guilt wash over me for never even starting it. The feelings of shame and regret swirling around all the tasks become denser and more opaque until they dwarf me and I live in their shadow every waking minute. There is no light here, only tasks. Melville knew my plight: “they heap me; I see them in outrageous strength, with an inscrutable malice sinewing them.”

            1.  

              I follow this exact workflow pretty much, but I skimp on the notebooks as an unneeded I/O step.

              The savings in wasted paper I pass on to my therapist.

            1. 15

              world-open QA-less package ecosystems (NPM, go get)

              This is one I’m increasingly grumpy about. I wish more ecosystems would establish a gold set of packages that have complete test coverage, complete API documentation, and proper semantic versioning.

              1.  

                world-open QA-less package ecosystems (NPM, go get)

                i’d argue that go get is no package ecosystem. it’s just a (historic) convenience tool, which was good enough for the initial use (inside an organization). furthermore, i like the approach better than the centralized language package systems. nobody checks all the packages in pypi or rubygems. using a known good git repo isn’t worse; maybe it’s even better, as there is not another link in the chain which could break, since the original repository is used instead of a somehow-packaged copy.

                I wish more ecosystems would establish a gold set of packages that have complete test coverage, complete API documentation, and proper semantic versioning.

                python has had the batteries included for ages, and go’s standard library isn’t bad either. both are well-tested and have good documentation. in my opinion the problem is that often another 3rd party dependency gets quickly pulled in, instead of giving a second thought to whether it is really required or could be done oneself, which may spare one trouble in the future (e.g. left-pad).

                in some cases there is even a bit of quality control for non standard packages: some database drivers for go are externally tested: https://github.com/golang/go/wiki/SQLDrivers

                1.  

                  Then you get the curation (and censorship) of Google Play or Apple’s Store.

                  Maybe you want more of the Linux package repo model where you have the official repo (Debian, RedHat, Gentoo Portage), some optional non-oss or slightly less official repos (Fedora EPEL) and then always having the option to add 3rd party vendor repos with their own signing keys (PPA, opensuse build service, Gentoo Portage overlays).

                  I really wish Google Play had the option of adding other package trees. I feel like Apple and Google took a great concept and totally fucked it up. Ubuntu Snap is going in the same (wrong) direction.
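On Debian-family systems this model is already a drop-in file: a third-party vendor repo ships its own signing key alongside the official repos. The repo URL and key path below are made up for illustration:

```
# /etc/apt/sources.list.d/vendor.list (hypothetical third-party repo)
# The vendor's own key, not the distro's, verifies these packages:
deb [signed-by=/usr/share/keyrings/vendor-archive-keyring.gpg] https://apt.example-vendor.com/debian stable main
```

The point is that the user, not a single central gatekeeper, decides which additional trees and keys to trust.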

                  1.  

                    On Android it’s certainly possible to install F-Droid, and get access to an alternate package management ecosystem. I think I had to sideload the F-Droid APK to get it to work though, which not every user would know how to do easily (I just checked, it doesn’t seem to be available in the play store).

                1. 5

                  My revulsion is only surpassed by my awe.

                  1. 3

                    I bet this would look so much nicer in that programming language I keep talking about.

                  1. 12

                    So, this might be a good time to float an idea:

                    None of this would be an issue if users brought their own data with them.

                    Imagine if users showed up at a site and said “Hey, here is a revokable token for storing/amending information in my KV store”. The site itself never needs to store anything about the user, but instead makes queries with that auth token to modify their slice of the user’s store.

                    This entire problem with privacy and security would go away, because the onus would be on the user to keep their data secure, modulo laws saying that companies shouldn’t (and, as a matter of engineering and cost-effectiveness, wouldn’t) store their own copies of customer data.

                    Why didn’t we do this?
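The revokable-token idea can be sketched in a few lines. This is a toy, hypothetical sketch (the class and method names are invented, not any real project’s API): a user-owned KV store hands each site a token scoped to that site’s slice, and the site can only read and write through it until the user revokes it.

```python
import secrets

class UserStore:
    """Hypothetical user-owned KV store issuing revocable, site-scoped tokens."""

    def __init__(self):
        self._data = {}
        self._tokens = {}  # token -> site whose slice the holder may touch

    def grant(self, site):
        # Issue a fresh token scoped to one site's slice of the store.
        token = secrets.token_hex(16)
        self._tokens[token] = site
        return token

    def revoke(self, token):
        # The user can cut off a site at any time.
        self._tokens.pop(token, None)

    def put(self, token, key, value):
        site = self._tokens.get(token)
        if site is None:
            raise PermissionError("token revoked or unknown")
        self._data[f"{site}/{key}"] = value

    def get(self, token, key):
        site = self._tokens.get(token)
        if site is None:
            raise PermissionError("token revoked or unknown")
        return self._data[f"{site}/{key}"]
```

Usage: the site calls `put`/`get` with its token; after `revoke`, the same calls raise `PermissionError`, so the site never holds the data, only a lease on it.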

                    1. 16

                      http://remotestorage.io/ did this. I’ve worked with it and it’s nowhere near usable. There are so many technical challenges (esp. with performance) you face along the way that result in you basically having to process all user data clientside while storing the majority of data serverside. It gets more annoying when you attempt to introduce any way of interaction between two users.

                      We did try this, saw that it’s too hard (and for some services an unsolved problem), and did something else. There’s no evil corporatism in that, nor is it a matter of making profit, even if a lot of people, especially here, want to read that into everything privacy-related. It’s human nature.

                      1. 2

                        basically having to process all user data clientside

                        If I go to a site and grant that site a token, couldn’t that server do processing server-side?

                        It gets more annoying when you attempt to introduce any way of interaction between two users.

                        Looking at remotestorage, it appears there’s no support for pub/sub, which seems like a critical failing to me. To bikeshed an example, this is how I think something like lobste.rs ought to be implemented:

                        • User data is stored in servers (like remotestorage) called pods, which contain data for users. A person can sign up at an existing pod or run their own, fediverse-style.

                        • These pods support pub/sub over websocket.

                        • A particular application sits on an app server. That app server subscribes to a list of pods for pub/sub updates, for whatever users that have given that application permission. On top of these streams the app server runs reduce operations and keeps the result in cache or db. A reduce operation might calculate something like, give me the top 1000 items sorted by hotness (a function of time and votes), given streams of user data.

                        • A user visits the site. The server serves the result instantly from its cache.

                        • Additionally the pub/sub protocol would have to support something like resuming broken connections, like replay messages starting from point T in time.

                        Anyway, given this kind of architecture, I’m not sure why something like lobste.rs, for example, couldn’t be created without the performance issues you ran into.
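The reduce-operation step above can be made concrete with a toy sketch. Everything here is invented for illustration (the hotness formula, the event shape, the function names; no real pod protocol is assumed): the app folds vote events published by many pods into a cached top-N ranking, and a slow pod simply contributes its events later rather than blocking anything.

```python
import heapq
import math

def hotness(votes, posted_at, now):
    # Toy ranking in the spirit of "a function of time and votes":
    # votes decayed by the item's age in hours.
    age_hours = (now - posted_at) / 3600
    return votes / math.pow(age_hours + 2, 1.5)

def reduce_streams(events, top_n=3, now=0):
    """Fold pub/sub events from many pods into a top-N ranking.

    `events` is any iterable of (item_id, votes, posted_at) updates;
    later events for the same item simply overwrite earlier state.
    """
    latest = {}
    for item_id, votes, posted_at in events:
        latest[item_id] = (votes, posted_at)
    scored = ((hotness(v, t, now), i) for i, (v, t) in latest.items())
    return [item_id for _, item_id in heapq.nlargest(top_n, scored)]
```

A site visit then just reads the precomputed result out of this cache, which is why no request ever waits on the slowest pod.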

                        1. 2

                          If I go to a site and grant that site a token, couldn’t that server do processing server-side?

                          If your data passes through third-party servers, what’s the point of all of this?

                          The rest of your post is to me, with all due respect, blatant armchair-engineering.

                          • The pub/sub stuff completely misses the point of what I am trying to say. I’m not talking about remotestorage.io in particular.

                          • Lobste.rs is a trivial use case, and not even an urgent one in the sense that our centralized versions violate our privacy, because how much privacy do you have on a public forum anyway? Let’s try something like Facebook. When I post any content at all, that content will have to be copied to all different pods, making me subject to the lowest common denominator of both their privacy policies and security practices. This puts my privacy at risk. Diaspora did this. It’s terrible.

                          • Let’s assume you come up with the very original idea of having access tokens instead, where the pods would re-fetch the content from my pod all the time instead of storing a copy. This would somewhat reduce the risk to my privacy (though I’ve not seen a project that does this), but:

                            • Now the slowest pod is a bottleneck for the entire network. Especially stuff like searching through public postings. How do you implement Twitter moments, global or even just local (on a geographical level, not on network topology level) trends?
                            • Fetching the data from my pod puts the reader’s privacy at risk. I can host a pod that tracks read requests and, if the system is decentralized enough, map requests from pods back to users (if the request itself doesn’t already contain user-identifying info).

                          See also this Tweet, from an ex-Diaspora dev

                          1. 1

                            If your data passes through third-party servers, what’s the point of all of this?

                            It decouples data and app logic, which makes it harder for an application to leverage its position as middleman to the data you’re interested in: doing stuff like selling your data or presenting you with ads, which you put up with because you are still interested in the people there. If data runs over a common protocol, you’re free to replace the application side of things without being locked in. For example, I bet there’s some good content on Facebook, but I never go there because I don’t trust that company with my data. I wish there were some open-source, privacy-friendly front end to the Facebook network that would let me interact with people there without sitting on Facebook’s servers. Besides that, if an application changes its terms of use, maybe you signed up trusting the application, but now you’re faced with a dilemma: reject the ToS and lose what you still like about the application, or accept new crappy terms.

                            The rest of your post is to me, with all due respect, blatant armchair-engineering.

                            Ha! Approaching a design question by first providing an implementation without discussion seems pretty backwards to me. Anyway, as far as I’m concerned I’m just talking design. Specifically, I’m criticizing what I perceive as a deficiency in remotestorage’s capabilities, and arguing that a decentralized architecture doesn’t have to be slow: it is at least as good as a centralized architecture, and better in many regards for end users.

                            Let’s try something like Facebook. When I post any content at all, that content will have to be copied to all different pods,

                            No, I was saying that this would be published to subscribing applications. There could be a Facebook application. And someone else could set up a Facebook-alternative application, with the same data, but a different implementation. Hey, you could even run your own instance of Facebook-X application.

                            making me subject to the lowest common denominator of both their privacy policies and security practices.

                            If you grant an application access to your data, you grant it access to your data. I don’t see a way around that puzzle in either a centralized or decentralized architecture. If anything, in a decentralized architecture you have more choices. Which means you don’t have to resign yourself to Facebook’s security and privacy policies if you want to interact with the “Facebook” network. You could move to Facebook-X.

                            Now the slowest pod is a bottleneck for the entire network. Especially stuff like searching through public postings. How do you implement Twitter moments, global or even just local (on a geographical level, not on network topology level) trends?

                            What I was describing was an architecture where pods just store data. Apps consume and present it. If I have an app, and I subscribe to X pods, there’s no reason I have to wait for the slowest pod’s response in order to construct a state that I can present users of my app.

                            So for something like search, or Twitter moments, you would have an application that subscribes to whatever pods it knows about. Those pods publish notifications to the app over web socket, for example whenever a user tweets. Your state is a reduction over these streams of data. Let’s say I store this in an indexed lookup like ElasticSearch. So every time a user posts a tweet, I receive a notification and add it to my instance of ElasticSearch. Now someone opens my app, maybe by going to my website. They search for X. The app queries the ElasticSearch instance. It returns the matching results. I present those results to the user’s browser.

                            Fetching the data from my pod puts the reader’s privacy at risk.

                            Hmm, I’m not sure if we’re on the same page. In the design I laid out, the app requests this data, not the pod.

                            1. 2

                              With respect, “social media” and aggregator sites are red herrings here. They can’t be made to protect privacy by their very nature.

                              I’m thinking more about, say, ecommerce, or sites that aren’t about explicitly sharing your data with others.

                              1. 1

                                “With respect, “social media” and aggregator sites are red herrings here. They can’t be made to protect privacy by their very nature.”

                                Sure they can. Starting with Facebook, they can give privacy settings per post, defaulting to things like Friends Only. They could even give different feeds for stuff like Public, Friends Only, or Friends of Friends. They can use crypto with transparent key management to protect as much of the less-public plaintext as possible. They can support E2E messaging. They can limit discovery options for some people, so that they have to give you a URL or something to see their profile. Quite a few opportunities for boosting privacy in the existing models.

                                Far as link aggregators, we have a messaging feature that could be private if it isn’t already. Emails and IP’s if not in public profile. The filters can be seen as a privacy mechanism. More to that point, though, might be things like subreddits that were only visible to specific, invited members. Like with search, even what people are looking at might be something they want to keep private. A combo of separation of user activities in runtime, HTTPS and little to no log retention would address that. Finally, for a hypothetical, a link aggregator might also be modified to easily support document drops over an anonymity and filesharing service.

                      2. 9

                        Because the most formidably growing businesses of late are built on the ability to access massive amounts of user data at random. Companies simply don’t know how to make huge money on the Internet without it.

                        1. 2

                          This still doesn’t solve problems with tracking, because companies have already started requiring GDPR opt-in to use their products at all (even when using the product doesn’t necessarily require data tracking), or to use them without a degraded user experience.

                          See cloudflare, recaptcha, facebook, etc.

                          “You can’t use this site without Google Analytics having a K/V-auth-token”, “We will put up endless ‘find-the-road-sign’ captchas if we can’t track you”, etc.

                          1. 6

                            It’s a mistake to think you can “GDPR opt-in”. You can’t.

                            You have to prove that the data subject wants this processing. One way to do this is to ask for their consent and make them as informed as possible about what you’re doing. But they can decide not to, and they can even decide to revoke their consent at any time until you’ve actually finished the processing and erased their data.

                            These cookie/consent banners are worse than worthless: a queer kind of game that people like Google are playing to try to waste the regulators’ time.

                            We will put up endless ‘find-the-road-sign’ captchas if we can’t track you

                            I’ve switched to another search engine for the time being. It’s faster, the results are pretty good, and I don’t have to keep fiddling with blocking that roadblock on Google’s properties.

                          2. 2

                            We did. They’re called browser cookies.

                            The real problems are around an uneducated, consumption-driven populace: who can resist finding out “which Spice Girl are you most like?” But would we be so willing to find out if it meant we get a president we wouldn’t like?

                            It is very hard for people to realise how unethical it is to hold someone responsible for being stupid, but we crave violence: we feel no thrill that can compare to serving food, working in an office, or driving a taxi. Television and media give us this violence, an us-versus-them: Hillary versus Urine Hilarity, The Corrupt Incumbent versus a Chance to Make America Great Again, or even Kanye versus anybody and everybody.

                            How can we make a decision to share our data? We can never be informed of how it will be used against us.

                            The GDPR does something very interesting: it says you’re not allowed to use someone’s data in a way they wouldn’t want you to.

                            I wish it simply said that, but it’s made somewhat complicated by a weird concept of “data”. It’s clear that things like IP addresses aren’t [by themselves] your data, and even a name like John Smith isn’t data. Software understands data, but not the kind of “data” the GDPR is talking about. Pointing to “you” and “data” takes a fair thick bit of regulation if you don’t want to draw a box around things and prevent sensible people from interpreting the forms of “data” nobody has yet thought of.

                            But keep it simple: Would that person want you doing this? Can you demonstrate why you think that is and convince reasonable people?

                            I’m doing a fair bit of GDPR consulting at the moment, and whilst there’s a big task in understanding their business, there’s also a big task getting them to approach their compliance from that line of questioning: How does this make things better for that person? Why do they want us to do this?

                            We’re not curing cancer here, fine, but certainly there are degrees.

                            1. 2

                              Browser cookies are something that crossed my mind after I suggested this, but my experience as a web dev makes me immediately suspicious of them as durable stores. :)

                              I agree with your points though.

                          1. 5

                            The best uses of pie menus I’ve seen are invariably in video and computer games, usually because games actually care about UX and efficiency. Some really brilliant examples I’ve seen (one of which is mentioned in the article):

                            • The Sims action menus
                            • CS:GO/ CS:CZ
                            • Natural Selection mod for Half-Life (interesting radial/tree compromise)
                            • Mass Effect
                            • Fallout 4 (vastly worse than the old New Vegas style, but easy to use)
                            1. 1

                              I like how Mass Effect also puts the keypress into the menu. So you can use it immediately in an intuitive way and quickly learn the shortcut buttons. An extra boost comes from the buttons being different colors.

                            1. 11

                              The handling of the Damore memo and the related science should tell us everything we need to know about the degree to which we can trust both data and the people who criticize it.

                              The problem with claiming “mathwashing” is that it’s dangerously close to creating a culture that ignores studies if they don’t feel right. This is not scientific governance.

                              1. 6

                                You mean the method of citing a number of irrelevant and/or dubious scientific studies in an ideological rant based on logical fallacies, and then claiming these cites bolster the credibility of the rant and indicate that anyone who objects is anti-science? Yep!

                                1. 0

                                  The handling of Galileo’s studies should tell us everything we need to know about how science always triumphs over obtuseness.

                                  Now, do you know what’s funny?

                                  We call incompetent people “scientific researchers” when they argue that neural networks’ models are too deep for humans to understand.
                                  I mean, these people not only rationalize their failures, they sell them as features!

                                  This is not scientific governance.

                                  1. 7

                                    Galileo’s heliocentric theories had reasonable scientific counterobjections based on the observational evidence available at the time, and other contemporary figures (such as Copernicus and Kepler) with heliocentric models of the universe had no particular trouble with the authorities. Galileo’s persecution by the Church was mostly about political and personal conflict between him and the pope, which has been ahistorically re-contextualized as a story about the Catholic church (or Religion in general) persecuting inconvenient scientific truths, by certain modern scientists generally studying different things than Galileo did and offending different authorities than the Catholic church.

                                    1. 1

                                      I don’t understand what you’re saying here, could you please rephrase it?

                                      1. 4

                                        Let’s try (but I’m not sure what is not clear… my English simply sucks, sorry…)

                                        I understand the concerns of @friendlysock, but the fact that we now teach a heliocentric model in elementary schools shows that good science always wins against censorship.
                                        We won’t ignore disturbing studies that “don’t feel right”.
                                        On the contrary, we will verify them carefully, as we should do with anything that is qualified as “Science”. (And we should not qualify as “Science” any unverified claim: it’s just a hypothesis until several independent experiments confirm it!)

                                        However, today in IT there is another issue that is much more dangerous.
                                        Several powerful companies are lobbying to spread the myth of machine intelligence, not just to collect money or data, but to delegate to machines the responsibility for their errors.

                                        Now, if you tell me that software you wrote cannot be debugged, I think you are not competent to develop any software at all. But if you boldly state that your software is not broken, just too smart for me (and even you) to understand its internal workings, I would remove you from any responsibility role in IT.

                                        For some strange reason, this is not what happens in AI.

                                        Developers happily admit that they cannot explain their own neural network’s computation.
                                        But they rationalize such failure as if it were not their fault: it’s the neural network that is “too smart” (they usually mumble that it takes into account too many variables, finds unintuitive correlations, and so on).
                                        So they are not just incompetent developers: they are rationalizing their failures.

                                        And they sell such opacity as if it were not just an inherent aspect of neural networks, but an advantage!

                                        They do not say “this software is a shitty mess”; they say “this software is too smart for humans!”.

                                        Is this a scientific approach?

                                  1. 12

                                    I can’t really get behind just ignoring headers because some engineer feels like they aren’t useful anymore.

                                    1. 8

                                      He doesn’t just “feel like it”: he has a justified technical position, and I don’t see any counterarguments to any of his points.

                                      1. 5
                                        • Via is actually useful, if properly used, and can detect request loops outside your network
                                        • Expires is actually useful if you need to expire a response at a specific date, which Cache-Control doesn’t do; its only use isn’t “expire my content and don’t cache”
                                        • X-Frame-Options is needed to support older browsers; IE has only supported a minimal version of CSP since 10, so if you support older clients, XFO is a good security addition where CSP may not be available
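As a concrete illustration of the last two points, a response can send the legacy and modern mechanisms side by side (the date and directive values here are invented examples, not recommendations):

```
# Absolute expiry date, which Cache-Control's relative max-age can't express:
Expires: Fri, 01 Jan 2027 00:00:00 GMT
Cache-Control: public

# CSP for modern browsers, with X-Frame-Options as the fallback for older IE:
Content-Security-Policy: frame-ancestors 'self'
X-Frame-Options: SAMEORIGIN
```

Browsers that understand the newer header use it; older clients still get the legacy protection, which is exactly why stripping these headers wholesale is risky.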
                                        1. 5

                                          The repeated use of “deprecation” without obvious links to the RFCs superseding those deprecations doesn’t help. Further, the entire point of the article is pretty clearly to help advertise Fastly (which presumably wants to go after some of Cloudflare’s market).

                                          Like, it’s an interesting read, but I’m a bit concerned about people putting their services behind providers that sanctimoniously decide to break with RFCs because it might get them more business.

                                        2. 3

                                          From the bit at the end it doesn’t sound like they’re doing anything to the headers by default? These are headers they recommend stripping out, and there’s an example at the end of how to strip out individual headers if you want to, but a site owner would have to actually do that to have any effect.

                                          1. 1

                                            Yeah, I don’t really see the problem here.

                                            Nobody’s forced to look at headers they’re not interested in, and the extras don’t hurt anything, except for using a bit of bandwidth.

                                          1. 21

                                            I hope to see more of this — if workers with as much leverage as we have don’t speak up against technology we create being used for evil, we can’t call ourselves engineers.

                                            1. 13

                                              Relying on morality when incentives go the other way does not scale.

                                              1. 6

                                                Exactly. It has to be a large number of people that damage their mission directly or indirectly with media pressure. Otherwise, it’s something with no impact. At least people are following their principles, though.

                                                1. 6

                                                  It has to be a large number of people that damage their mission directly or indirectly with media pressure.

                                                  Can you trust an engineering company that ignores the opinions of its engineers?

                                                  We are talking about one of the most celebrated companies in the Western economy, often cited as an example of excellence.

                                                  Leaving Google over ethical concerns places a serious burden on the future employment of these engineers, who will probably be marked as dangerous employees for some time.

                                                  We can assume this is something they knew, as Google doesn’t hire dumb guys.

                                                  So why did they quit?

                                                  My bet is that the military use of Google’s artificial intelligence technology is so dangerous that these engineers felt obliged to leave the organization beyond any doubt.

                                                  Otherwise, it’s something with no impact.

                                                  Well, it’s a first step.

                                                  And a courageous one.

                                                  Its impact goes beyond the worldwide image of Google, beyond the direct issues in their production line.

                                                  It is an example.

                                                  1. 4

                                                    Can you trust an engineering company that ignores the opinions of its engineers?

                                                    It doesn’t matter. What matters here is (a) the company’s goals/incentives, (b) how successful they are at achieving them, and (c) if a tiny number of engineers quitting changes that. Note that (b) includes implicit support by the many people who use their products and services, voting with their wallets. The stuff in (a) means they’re anywhere from apathetic to damaging on a lot of ethical issues around privacy and making money. Due to (b), actions to damage them have to put a huge dent in that or make them think it will. (c) doesn’t do that. So, (c) is probably irrelevant to Google. The article itself says as much:

                                                    “However, the mounting pressure from employees seems to have done little to sway Google’s decision—the company has defended its work on Maven and is thought to be one of the lead contenders for another major Pentagon cloud computing contract, the Joint Enterprise Defense Infrastructure, better known as JEDI, that is currently up for bids.”

                                                    I gave them credit in my other comment for standing up on their principles. That’s respectable. It’s just that a “dozen” or so people quitting a company with over 70,000 employees with people waiting to fill their positions doesn’t usually change anything. They’d instead have to campaign in media or government aimed at stopping those contracts or drone operations. At least half the voting public and current President support military action overseas. The other half didn’t convince their prior President to stop drone use or strikes. There are also not large swaths of Google customers threatening to stop using Google Search, Gmail, etc if Google doesn’t turn down government contracts.

                                                    So, quitting over this is pointless if the goal is to achieve something. At best, it’s a personal decision by those individuals to not be involved in something they disagree with that’s going to happen anyway. That’s fine but practically a separate thing from ending these contracts. If anything, we’ll just get a shift in Google employees from those who might leave over the contracts to people who range from favoring them or just griping about them continuing to work there. I think most will be in latter category.

                                                    1. 2

                                                      It’s just that a “dozen” or so people quitting a company with over 70,000 employees with people waiting to fill their positions doesn’t usually change anything.

                                                      The fact is that fewer talented people will want to fill their positions.
                                                      This is a pretty serious issue if engineers are the core resource of your company.

                                                      Now, I’d guess most Google engineers don’t feel as important to the company as they feel the company is important to them. This happens in many companies, and I would guess Google has turned this kind of internal narrative into an art.

                                                      The fact is that, instead, Google literally would not exist without those engineers.

                                                      These few have shown exactly that: that working at Google is not that important.
                                                      It’s a matter of time, but if Google does not take serious action to head off this general wake-up, other engineers will follow. The same might happen at Facebook, at Apple, and at many other smaller IT companies.

                                                      On the other hand, in Europe and everywhere else, people will start to ask why engineers at a company that operates in their territory are so afraid of what the company is doing that they quit, to avoid the risk of being associated with the company’s future and of sharing its responsibility.
                                                      Politicians will be less friendly to a company that might be doing something really evil for a foreign state.

                                                      I agree that more engineers should follow their example, but I know that life is not that easy.
                                                      However, people who continue to work there might organize to keep the company “on track”, and this might lead to the creation of a labor union.

                                                      1. 4

                                                        The fact is that fewer talented people will want to fill their position.

                                                        You have to prove that assumption. Google changed their Don’t Be Evil motto, doing sneakier and sneakier stuff over time. They’re a surveillance company that hires brilliant people to do interesting work for high pay and good perks. They’ve had no trouble that I’ve seen keeping new people coming in. The status quo has the evidence going against your claim: it’s a shady, rich company with in-demand jobs whose shady activities haven’t changed that for years. There are also nearly 70,000 workers mostly in favor of it, with more trying to get in.

                                                        “However people continuing to work there might organize to keep the company “on track”, and this might lead to the creation of a labor union.”

                                                        That’s a different issue entirely. Given that I am in a union, I think it would be cool to see it happen. Unlike the OP topic, that could happen with higher probability. Silicon Valley will do everything they can to stamp it out in the meantime, though. Still a long shot.

                                                        1. 0

                                                          The fact is that fewer talented people will want to fill their position.

                                                          You have to prove that assumption.

                                                          Not an assumption, but a deduction: people avoid cognitive dissonance if possible.

                                                          A dozen people leaving a company because of ethics means that the company pushed their cognitive dissonance too high, and this will make Google relatively less attractive compared to the alternatives: a talented engineer wants to fix problems, not fool herself to avoid the pain of contradictions.

                                                          Our brains consume around 20% of our energy, after all.

                                                          This is the same reason that makes me guess others will quit Google in the future.
                                                          Because now they have a new thinkable precedent.
                                                          A new, effective way to reduce their cognitive dissonance.

                                                2. 2

                                                  I agree. But we also can’t rely on companies that we don’t own to incentivize us to act in a moral fashion – engineers need a governing body for that.

                                                  1. 1

                                                    What about entering both US political parties and changing the policy? If you believe that killing people is wrong, maybe make it a law?

                                                    Sometimes the only way to advance your field is to step out of it and fix the external systems. And war zones are definitely not a good environment in which to build a global information network that advances everyone’s wellbeing…

                                                  2. 1

                                                    I think it’s definitely a factor. Many prominent business people would not like to be associated with payday loan companies, for example.

                                                    I think this is less about being the silver bullet for problems, and more about being one of the 20 or 30 things we need to be doing to make the world A Better Place(TM)

                                                  3. 13

                                                    We can’t even speak up for honest pay for an honest day’s work–and that’s a lot less subjective than some arbitrary definition of “evil”.

                                                    1. 4

                                                      Why not both?

                                                      1. 4

                                                        At least the “evil” one is super cloudy.

                                                        Say you are an engineer working at a company that builds control software for missiles. You are a pacifist, and so you decide to introduce a minor bug (or fail to patch a discovered bug) that causes the missile to not detonate when it lands.

                                                        • Are you good for not facilitating the loss of life?
                                                        • Are you evil for misleading your employer about the labor of yours that they’ve purchased?
                                                        • If the missile lands on a poor grunt and severs their legs causing them to bleed out over minutes instead of detonating properly and just kinda instantly killing them, are you evil for prolonging suffering?

                                                        That’s just scratching the surface of morality in engineering.

                                                        1. 6

                                                          That’s fair – and I should’ve been explicit earlier. I believe that there are (at least) two moral guidelines that should be taken into account.

                                                          The first is a professional code of ethics, similar to what ACM has here. Of course even this is cloudy – for example, in my opinion 1.2 “Avoid harm to others” would necessarily preclude working for a missile manufacturer in the first place. At the very least, if one views missiles and missile software as being a necessary “evil”, safeguards should be put in to protect human life at all cost, etc. etc. The minutiae of the professional code of ethics can and should be rigorously debated, because it provides a minimally viable base for how we should conduct ourselves. So for example, the question of whether or not working in the weapons manufacturing industry truly violates rule 1.2 should be an explicit discussion that is had in a professional organization (not a workplace per se).

                                                          The second guideline is in line with your own personal moral code. This is important because it provides for people who are religious (or not) or any other number of cultural influences that have caused a person to believe what they believe today. This, of course, has to be superseded by the professional code – for example, if I personally believe that discrimination based on what TV shows you enjoy is okay, that doesn’t mean that my personal morality should define what happens in a professional setting. But in the hypothetical case you provided, even if I don’t feel that writing that software goes against a professional code of ethics, if I am a pacifist, it goes against my personal code. I know from the professional code that purposefully writing bad or buggy software is wrong, and so my only option is to find another job in which both my personal and professional codes of ethics can be upheld.

                                                          1. 9

                                                            Why discuss an unlikely hypothetical rather than the issue at hand? Why the need to logically define evil beyond any confusion? This is not even possible in the general case for anything. Can you logically define ‘fun’ such that everyone agrees? At the end of day, evil means what people talking about it think it means, and it’s better to work off of that than to halt all discussion until we achieve the impossible task of absolutely grounding natural language in logic.

                                                            1. 6

                                                              It’s precisely because evil is so ill-defined that talking about it is difficult. As @mordae points out, it’s more effective to talk about other incentives.

                                                              And again, I’m not saying “halt all discussion”–quite the opposite! I’m saying that the issue is more nuanced than “don’t be evil”.

                                                              1. 1

                                                                I certainly agree with that. I still think it’s worth going into, because at a certain point you’re likely to end up doing it anyway. For instance, if we start talking about incentives, we might end up talking about how to incentivize people towards good, or at least, some concept of “not evil”. I’m not saying it trumps incentives or that this is a more effective approach, I’m just saying we should still have the discussion.

                                                                I think a trap we as engineers often fall into is to attempt to build everything up from laws and axioms. That doesn’t quite work for morality, and the nebulous nature of it means it rarely gets discussed. The software industry in particular is very focused on “solving problems” and never asks questions like “should we solve this problem?”

                                                                I guess another scary thing about it is that we can’t really empirically verify what the right answer is, and depending on the issue we might even have multiple valid answers. But sometimes just asking the question is worthwhile, even if we don’t have an answer.

                                                                Perhaps tech companies should start hiring philosophers.

                                                                1. 2

                                                                  Perhaps tech companies should start hiring philosophers.

                                                                  I’d argue that a good programmer is a philosopher almost by definition.

                                                                  We talk as if our field were an engineering field, but most of the time we don’t build things constrained by the physical world (yeah, I know what latency is… I said most of the time :-D).

                                                                  Or we talk as if our field were just applied math, pure and intangible, but then we talk about usability, or we kill someone with a self-driving car.

                                                                  But ultimately we work with ideas.

                                                                  The choice to ignore the ethics of our work is up to us.

                                                                  But we have many more instruments for thinking about our role in the world than any “professional philosopher” hired to think for us (in the interest of the company).

                                                                  1. 1

                                                                    if we start talking about incentives, we might end up talking about how to incentivize people towards good, or at least, some concept of “not evil”.

                                                                    That’s how you do it. In Google’s case, a publicly-traded company, that means you have to hit them in the wallet in a way that knocks out the contract. Alternatively, convince their management to change their charter or use other legal means to block whole classes of action in the present and future that they agreed were evil. I’m not sure if that would even work in Google’s case but one can start businesses like that in nonprofit or public benefit form.

                                                                2. 1

                                                                  I think friendlysock was trying to illustrate the point with some examples. The comment succeeded given the other person understood the points. There’s nothing wrong with that. You said to instead work off claims about evil in this situation based on what people are saying. In this case, what does evil mean exactly to both those employees and various stakeholders in the United States? Based on the political debates, I know there’s quite a few different views on whether these programs are evil or not. Even within the main, political parties, in Silicon Valley, and in Google itself.

                                                                  The only sure thing is that about 4,000 of Google’s 70,000 people, plus some other folks writing a letter, don’t like what Google is doing. Of the 4,000, only a dozen or so showed it was worth not working for Google. So, that’s under 1% of Google’s workforce. The others continue to support Google’s success, including that program indirectly, while protesting it. They may or may not leave, but I think most will stay: workers gripe more than they take action in the general case, especially if the employer’s actions are morally a mix to them or six digits are involved. If they leave, there are plenty of people willing to take their place, with no long-term effect on Google. The remainder, and some new hires, are collectively apathetic to this or believe it’s morally acceptable.

                                                                  Many of the people staying would probably tell you they’re decent people, with Google doing a lot of good for the world (arguably true) despite this evil. We saw this in the NeverAgain pledge. Others would tell you this kind of thing is inevitable enough that Google not doing it would make no difference. Some of them would even say it’s better if they do it so they can do it right, minimizing harm. Yet another group will claim these programs prevent a larger number of deaths than they cause, or prevent real damage vs hypothetical risks detractors talk about. People ranging from those developing the software to those conducting drone strikes might believe they’re saving lives in their work, while the dozen that quit will be doing less valuable work in tech for their own benefit.

                                                                  I don’t think there’s a clear answer of evil if I’m looking at the stakeholders in this discussion. They’re all over the place with it. The acting public is in a few camps: those doing a mix of opposing and tolerating drone operations who lost the election; those mostly supporting them whose party is in control; billions of dollars worth of users and businesses who don’t care enough to switch providers; tiny, tiny, tiny slice of revenue from those that will. Put in that light, nothing they’re doing will matter past their own conscience. Hell, those thinking the tech is evil might have been better off staying in there half-assing the programming on purpose to make it look like such tech just isn’t ready to replace people yet. There’s precedents for that with many of them in defense industry except for profit rather than moral reasons.

                                                        1. 3

                                                          There isn’t even a product here. This is just somebody talking about one day maybe building a thing. It is bad advertising for a space already full of hype and bunk.

                                                          1. 1

                                                            HTC is so desperate that they offered this clown some BIGNUM dollarbux to spin bafflegab. Not really news.

                                                          1. 7

                                                            This is a nice summary, thanks for sharing it. Combined with this tweet: https://twitter.com/kellabyte/status/996429414970703872

                                                            …I’m inclined to wonder how much time/bandwidth would be saved at larger sites if people cleaned these up, although I suspect that “size of HTTP headers” is not the worst bottleneck for most people.

                                                            1. 7

                                                              For most sites the comparison goes something like javascript > unoptimized images > cookie size > other http headers for bytes/load time wasted.

                                                              1. 6

                                                                I suspect the impact is minimal. It’s a few hundred bytes at worst, and the site is probably more affected by 3rd party adtech or unoptimized pictures.

                                                                1. 7

                                                                  Somewhat related, but even small changes to the request/response can have large impact on the bandwidth consumed.

                                                                  From Stathat “This change should remove about 17 terabytes of useless data from the internet pipes each month” https://blog.stathat.com/2017/05/05/bandwidth.html

                                                                  1. 5

                                                                    Optimized images alone would most likely save a lot more, since there is a lot more to save. A recent Google blog loaded an 8 MB GIF to show a few-second-long animation in a 250x250 thumbnail. Two minutes in ffmpeg reduced that to about 800 KB.

                                                                    Imagine if people did this on sites with more traffic than some random google product announcement blog…
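A command along these lines is roughly what that “2 minutes in ffmpeg” amounts to. The filename and flags here are my guesses, not the commenter’s actual invocation, and the script is guarded so it only prints a note when the input (or ffmpeg itself) isn’t around:

```shell
#!/bin/sh
# Sketch: re-encode a bulky GIF as an H.264 MP4 (the 8 MB -> ~800 KB case).
# "animation.gif" and the exact flags are illustrative, not authoritative.
set -eu
in=animation.gif
out=animation.mp4
if command -v ffmpeg >/dev/null 2>&1 && [ -f "$in" ]; then
  # yuv420p keeps broad browser support; the scale filter forces the even
  # dimensions H.264 requires; faststart lets playback begin before the
  # whole file has downloaded.
  ffmpeg -i "$in" -movflags +faststart -pix_fmt yuv420p \
    -vf 'scale=trunc(iw/2)*2:trunc(ih/2)*2' "$out"
  result=encoded
else
  echo "sketch only: ffmpeg and/or $in not present"
  result=skipped
fi
```

Browsers will happily play a `<video autoplay loop muted>` in place of the GIF, which is where most of the savings come from.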

                                                              1. 8

                                                                For people who aren’t quite as ambitious about heat pipes, there are several nice little fanless kits from companies like Zotac that will give you a machine that’s passively cooled and only needs some RAM and an SSD.

                                                                They’re basically the 80% lowers of fanless computing.

                                                                1. 2

                                                                  Hi, a Zotac user here. CPU is a bit slower than I expected, but overall I’m very happy with my setup. Zotac CI527 is cheap, well built, and silent!

                                                                  1. 2

                                                                    It seems like the best Zotac fanless PC is the ZBOX-CI549NANO-P which uses an i5-7300u. The author of the post installed an AMD Ryzen 5 1600.

                                                                    2 cores/4 threads vs 6 cores/12 threads.

                                                                    1. 1

                                                                      Thanks for the link! A few thoughts:

                                                                      Wow, Zotac is really bad at selling silent computers. They have a ton of models, and I don’t see a way to filter for only the passively cooled ones.

                                                                      The silent PC crowd are all about x86 at the moment. I wonder how ARM fares here. Are all end user ARM machines like Raspberry Pi? (Its CPU is too slow and it has bad IO connectivity.)

                                                                      1. 2
                                                                        1. 2

                                                                          Ha! The displays are a tad small for desktop computing though. 🙂

                                                                    1. 13

                                                                      I have other things going on in the pixel mines, but a couple parts of this I don’t think illustrate the points the author wants to make.

                                                                      But this criticism largely misses the point. It might be nice to have very small and simple utilities, but once you’ve released them to the public, they’ve become system boundaries and now you can’t change them in backwards-incompatible ways.

                                                                      This is not an argument for making larger tools–is it better to have large and weird complicated system boundaries you can’t change, or small ones you can’t change?

                                                                      While Plan 9 can claim some kind of ideological purity because it used a /net file system to expose the network to applications, we’re perfectly capable of accomplishing some of the same things with netcat on any POSIX system today. It’s not as critical to making the shell useful.

                                                                      This is a gross oversimplification and glossing over of what Plan 9 enabled. It wasn’t mere “ideological purity”, but a comprehensive philosophy that enabled an environment with neat tricks.

                                                                      The author might as well have said something similar about the “ideological purity of using virtual memory”, since some of the same things can be accomplished with cooperative multitasking!

                                                                      1. 4

                                                                        This is a gross oversimplification and glossing over of what Plan 9 enabled. It wasn’t mere “ideological purity”, but a comprehensive philosophy that enabled an environment with neat tricks.

                                                                        Not only tricks, but a whole concept of how resources can be used: use the file storage of one system, the input/output (screen, mouse, etc.) of another, and run the programs somewhere with a strong CPU, all by composing filesystems. Meanwhile, in 2018, we are stuck with ugly hacks and different protocols for everything, trying to fix problems by adding another layer on top of things (e.g. PulseAudio on top of ALSA).

                                                                        And, from the article:

                                                                        And as a filesystem, you start having issues if you need to make atomic, transactional changes to multiple files at once. Good luck.

                                                                        That’s an issue with the design of the concrete filesystem, not with the filesystem abstraction. You could write settings to a bunch of files that live together in a directory and commit them with a write to a single control file.
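A toy model of that commit-in-one-operation idea, in plain POSIX shell rather than a real settings filesystem (all names here are hypothetical): stage the individual files, then make the whole set visible at once with a single atomic rename.

```shell
#!/bin/sh
# Toy model of transactional multi-file settings: write everything into a
# staging directory, then "commit" with one atomic rename(2). Readers looking
# at $root/current never see a half-written set. A real settings filesystem
# would do the equivalent server-side on a write to its control file.
set -eu
root=$(mktemp -d)
mkdir "$root/staging"
echo "theme=dark" > "$root/staging/ui"
echo "port=8080"  > "$root/staging/net"
# The single commit: rename is atomic on the same filesystem.
mv "$root/staging" "$root/current"
cat "$root/current/ui" "$root/current/net"
```

The same pattern (stage, then one atomic rename or symlink flip) is how many Unix tools fake multi-file transactions today.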

                                                                        Going beyond text streams

                                                                        PowerShell is a good solution, but the problem we have with pipelines on current unix-style systems isn’t that the data is text, but that the text is ill-formatted. Many tools return some cute markup, which makes the output more difficult to parse than necessary.

                                                                        1. 3

                                                                          Actually, Unix proposed the file as a universal interface before Plan 9 was a dream.
                                                                          The issue was that temporary convenience and the hope that “worse is better” put Unix in a local minimum where that interface was not universal at all (sockets, ioctl, fcntl, signals…).
                                                                          Pike tried to escape that minimum with Plan 9, where almost every kernel and user service is provided as a filesystem, and you can stack filesystems like you compose pipes in Unix.

                                                                          1. 10

                                                                            Put a quarter in the Plan9 file-vs-filesystem “well actually” jar ;)

                                                                        1. 21

                                                                          Gosh, I couldn’t make it very far into this article without skimming. It goes on and on asking the same ‘why’ but mentally answering it in the opposite direction of the quoted comments.

                                                                          Docker is easy, standard isolation. If it falls, something will replace it. We’re not going in the opposite direction.

                                                                          The article doesn’t explain to me what other ways I have of running 9 instances of an app without making a big mess of listening ports and configuration.
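For concreteness, here is a dry-run sketch of what that looks like with Docker. The image name and port range are invented, and the commands are only echoed so they can be inspected rather than run:

```shell
#!/bin/sh
# Dry-run sketch of "9 instances without a port mess": one image, nine
# containers, each with its own name and host port. "example/app" is a
# hypothetical image; echo the commands instead of executing them.
set -eu
out=$(mktemp)
for i in 1 2 3 4 5 6 7 8 9; do
  port=$((8000 + i))
  echo "docker run -d --name app$i -p $port:80 example/app:latest"
done | tee "$out"
```

Each container gets its own isolated filesystem and config, so the only per-instance state you manage is the name and the host port mapping.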

                                                                          Or running many different PHP apps without creating a big mess of PHP installs and PHP-FPM configs. (We still deal with hosting setups that share the same install for all apps, then want to upgrade PHP.)

                                                                          Or how to make your production setup easy to replicate (roughly) for developers who actually work on the codebase. (Perhaps on macOS or Windows, while you deploy on Linux.)

                                                                          We’re not even doing the orchestration dance yet, these are individual servers that run Docker with a bunch of shell scripts to provision the machine and manage containers.

                                                                          But even if we only use 1% of the functionality in Docker, I don’t know how to do that stuff without it. Nevermind that I’d probably have to create a Vagrantbox or something to get anyone to use it in dev. (I’ve come to dislike Vagrant, sorry to say.)

                                                                          Besides work, I privately manage a little cloud server and my own Raspberry Pi, and sure they don’t run Docker, but they don’t have these requirements. It’s fine to not use Docker in some instances. And even then, Docker can be useful as a build environment, to document / eliminate any awkward dependencies on the environment. Makes your project that much easier to pick up when you return to it months later.

                                                                          Finally, I’m sorry to say that my experiences with Ansible, Chef and Puppet have only ever been bad. It seems to me like the most fragile aspect of these tools is all the checks of what’s what in the current environment, then act on it. I’m super interested in trying NixOS sometime, because from what I gather, the model is somewhat similar to what Docker does: simply layering stuff like we’ve always done on software.

                                                                          1. 1

                                                                            For the PHP part it’s not that complex. Install the required versions (Debian and Ubuntu both have the 5.6 through 7.2 “major” releases available side by side, thanks to Ondrej Sury’s repo). Then just set up a pool per app (which you should do anyway) and point the vhost’s proxy_fcgi config line at the app’s specific Unix domain socket for PHP-FPM.

                                                                            I’ve used this same setup to bring an app from php5.4 (using mod_php) up through the versions as it was tested/fixed too.

                                                                            Is there some config/system setup required? You betcha. Ops/sysadmins is part of running a site that requires more than shared hosting.
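A pool-per-app layout like the one described might look roughly like this. The app name, PHP version, and paths are examples, and the script only generates the two config fragments for inspection rather than installing anything:

```shell
#!/bin/sh
# Sketch of the pool-per-app PHP-FPM layout described above. App name, PHP
# version, and socket paths are hypothetical; adjust for your distro's layout.
set -eu
app=myshop
phpver=7.2
dir=$(mktemp -d)

# One FPM pool per app, listening on its own Unix domain socket.
cat > "$dir/$app.conf" <<EOF
[$app]
user = www-data
group = www-data
listen = /run/php/php$phpver-$app.sock
listen.owner = www-data
listen.group = www-data
pm = dynamic
pm.max_children = 10
EOF

# Matching Apache vhost fragment pointing proxy_fcgi at that socket.
cat > "$dir/$app-vhost.conf" <<EOF
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php/php$phpver-$app.sock|fcgi://localhost"
</FilesMatch>
EOF

cat "$dir/$app.conf" "$dir/$app-vhost.conf"
```

Because each app has its own pool and socket, moving one app to a newer PHP only means changing its pool file and the socket path in its vhost, leaving the others untouched.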

                                                                            What are you gonna do with docker, have each developer just randomly writing whatever the fuck seems like a good idea and pushing their monolithic images to prod with no ops supporting it?

                                                                            1. 12

                                                                              What are you gonna do with docker, have each developer just randomly writing whatever the fuck seems like a good idea and pushing their monolithic images to prod with no ops supporting it?

                                                                              Yes. The whole point of “DevOps”/Docker is to deploy software certified by the “Works on My Machine” certification program. This eliminates coordination time with a separate Ops team.

                                                                              1. 2

                                                                                Is this sarcasm, or are you actually in favour of the definition “DevOps = Developers [trying to] do Ops” ?

                                                                                1. 7

                                                                                  Descriptively, that’s what DevOps is. I am prescriptively against such DevOps, but describing what’s currently happening with docker is unrelated to whether I am in favor of it.

                                                                                  1. 3

                                                                                    I don’t disagree that it’s a definition used by a lot of places (whether they call it devops or not). But I believe a lot of people who wax poetic about “DevOps” don’t share this same view - they view it as Operations using ‘development’ practices: i.e. writing scripts/declarative state files/etc to have reproducible infrastructure, rather than a “bible” of manual steps to go through to setup an environment.

                                                                                    I’m in favour of the approach those people like, but I’m against the term simply because it’s misleading - like “the cloud” or “serverless”.

                                                                              2. 2

                                                                                I don’t understand your last point, that’s exactly what developers do all day.

                                                                                In Docker, the PHP version the app depends on is set in code. It doesn’t even take any configuration changes when the app switches to a new PHP version.

                                                                                But if there’s one gripe I have with the Docker way of things, baking everything into an image, it’s security. There are no shared libraries in any way; upgrading even a minor version of a dependency requires baking a new image.

                                                                                I kinda wish we had a middle road, somewhere between Debian packages and Docker images.

                                                                                1. 3

                                                                                  the PHP version the app depends on is set in code

                                                                                  And of course we all know Docker is the only way to define dependencies for software packages.

                                                                                  1. 4

                                                                                    Did anyone say it was? Docker is just one of the easiest ways to define the state of the whole running environment and have it defined in a text file which you can easily review to see what has been done.

                                                                                  2. 1

                                                                                    You can share libraries with Docker by making services share the same Docker image. You can actually replicate Debian’s level of sharing by having a single Docker image.

                                                                                    1. 2

                                                                                      Well, I guess this is just sharing in terms of memory usage? But what I meant with security is that I’d like if it were possible to have, for example, a single layer in the image with just OpenSSL, that you can then swap out with a newer version (with, say, a security fix.)

                                                                                      Right now, an OpenSSL upgrade means rebuilding the app. The current advantage of managing your app ‘traditionally’ without Docker is that a sysadmin can do this upgrade for you. (Same with PHP patch versions, in the earlier example.)
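
                                                                                      That said, a rebuild can at least be kept cheap if system libraries live in a shared base image that ops controls; a patched OpenSSL then means re-running the app build rather than touching app code. It’s not the layer swap described above, but it’s a common mitigation. A rough sketch, where the registry and image names are made up:

                                                                                      ```dockerfile
                                                                                      # Hypothetical ops-owned base image holds OpenSSL and other system libs.
                                                                                      # After ops pushes a patched base, `docker build --pull` rebuilds the app
                                                                                      # against it with no application changes.
                                                                                      FROM registry.example.com/base:stretch
                                                                                      RUN apt-get update && apt-get install -y --no-install-recommends php7.2-fpm \
                                                                                          && rm -rf /var/lib/apt/lists/*
                                                                                      COPY src/ /var/www/app/
                                                                                      CMD ["php-fpm7.2", "--nodaemonize"]
                                                                                      ```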

                                                                                      1. 4

                                                                                        And this is exactly why I don’t buy into the whole “single-use” container shit show.

                                                                                        Want to use LXC/LXD for lightweight “VMs”? Sure, I’m all for it. So long as ops can manage the infra, it’s all good.

                                                                                        Want to have developers having the last say on every detail of how an app actually runs in production? Not so much.

                                                                                        What you want is a simpler way to deploy your php app to a server and define that it needs a given version of PHP, an Apache/Nginx config, etc.

                                                                                        You could literally do all of that by just having your app packaged as a .deb, have it define dependencies on php-{fpm,moduleX,moduleY,moduleZ} and include a vhost.conf and pool.conf file. A minimal (i.e. non-debian repo quality but works for private installs) package means you’ll need maybe half a dozen files extra.
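
                                                                                        For illustration, the `debian/control` for such a minimal package could be as small as this (all names and dependencies here are hypothetical):

                                                                                        ```
                                                                                        Source: acme-foo-app
                                                                                        Section: web
                                                                                        Priority: optional
                                                                                        Maintainer: Ops Team <ops@example.com>
                                                                                        Standards-Version: 4.1.4

                                                                                        Package: acme-foo-app
                                                                                        Architecture: all
                                                                                        Depends: php7.2-fpm, php7.2-mysql, apache2
                                                                                        Description: Acme foo web application
                                                                                         Ships the app plus its vhost.conf and php-fpm pool.conf, so the
                                                                                         package manager pulls in the right runtime and ops keeps upgrades.
                                                                                        ```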

                                                                                        And then your ops/sysadmin team can upgrade openssl, or php, or apache, or redis or whatever other thing you use.

                                                                                        1. 2

                                                                                          I actually do think this is a really good idea. But what’s currently there requires a lot more polish for it to be accessible to devs and small teams.

                                                                                          Debian packaging is quite a pain (though you could probably skip a lot of standards). RPM is somewhat easier. But in both cases, the packages typically bundle default app configuration and systemd unit files, a model that sort of assumes there’s only one instance of each thing.

                                                                                          You could then go the LXC route, and have an admin manage each instance in a Debian container. That’s great, but we don’t have the resources to set up and manage all of this, and I expect that is the case for quite a lot of small teams out there.

                                                                                          Maybe it’s less complicated than I think it is? If so, Docker marketing got something very right, and it’d help if there was a start-to-finish guide that explains things the other way.

                                                                                          Also remember that Docker for Mac/Windows makes stuff really accessible for devs that are not on Linux natively. Not having to actually manage your VM is a blessing, because that’s exactly my gripe with Vagrant. At some point things inside the VM get hairy, because of organic growth.

                                                                                          1. 3

                                                                                            But in both cases, the packages typically bundle default app configuration and systemd unit files, which is a model that sort of assumes things only have 1 instance.

                                                                                            In the case of the context - it is one instance. Either you build your packages with different names for different stages (e.g. acme-corp-foo-app-test, acme-corp-foo-app-staging, acme-corp-foo-app-prod) or use separate environments for test/stage/prod - either via VMs, LXC/LXD, whatever.

                                                                                            Nothing is a silver bullet, Docker included. It’s just that Docker has a marketing team with a vested interest in glossing over its deficiencies.

                                                                                            If you want to talk about how to use the above concept for an actual project, I’m happy to talk outside the thread.

                                                                                            1. 2

                                                                                              Also remember that Docker for Mac/Windows makes stuff really accessible for devs that are not on Linux natively. Not having to actually manage your VM is a blessing, because that’s exactly my gripe with Vagrant. At some point things inside the VM get hairy, because of organic growth.

                                                                                              This is exactly why at work we started to use Docker (and got rid of Vagrant).

                                                                                              1. 1

                                                                                                At some point things inside the VM get hairy, because of organic growth.

                                                                                                Can you define “hairy”?

                                                                                                1. 2

                                                                                                  The VM becomes a second workstation, because you often SSH in to run some commands (test migrations and the like). So people install things in the VM, and change system configuration in the VM. And then people revive months old VMs, because it’s easier than vagrant up, which can take a good 20 minutes. There’s no reasoning about the state of Vagrant VMs in practice.

                                                                                                  1. 3

                                                                                                    So people install things in the VM, and change system configuration in the VM

                                                                                                    So your problem isn’t vagrant then, but people. Either the same people are doing the same thing with Docker, or not all things are equal?

                                                                                                    because it’s easier than vagrant up, which can take a good 20 minutes

                                                                                                    What. 20 MINUTES? What on earth are you doing that causes it to take 20 minutes to bring up a VM and provision it?

                                                                                                    There’s no reasoning about the state of Vagrant VMs in practice.

                                                                                                    You know the version of the box that it’s based on, what provisioning steps are configured to run, and whether they’ve run or not.

                                                                                                    Based on everything you’ve said, this sounds like blaming the guy who built a concrete wall, when your hammer and nails won’t go into it.

                                                                                                    1. 1

                                                                                                      I suppose the main difference is that we don’t build images for Vagrant, but instead provision the machine from a stock Ubuntu image using Ansible. It takes a good 3 minutes just to get the VirtualBox VM up, more if you have to download the Ubuntu image. From there, it’s mostly adding repos, installing deps, and creating configuration. Ansible itself is rather sluggish too.
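
                                                                                                      For context, the setup is roughly this (a minimal Vagrantfile sketch; the box name and playbook path are assumptions):

                                                                                                      ```ruby
                                                                                                      # Stock Ubuntu box, provisioned on first `vagrant up` by an Ansible playbook.
                                                                                                      Vagrant.configure("2") do |config|
                                                                                                        config.vm.box = "ubuntu/xenial64"
                                                                                                        config.vm.provider "virtualbox" do |vb|
                                                                                                          vb.memory = 2048
                                                                                                        end
                                                                                                        config.vm.provision "ansible" do |ansible|
                                                                                                          ansible.playbook = "provisioning/site.yml"
                                                                                                        end
                                                                                                      end
                                                                                                      ```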

                                                                                                      Compare that to a 15 second run to get a dev environment up in Docker, provided you have the base images available.

                                                                                                      A people problem is a real problem. It doesn’t sound like you’ve used Docker for Mac/Windows, but the tool doesn’t give you a shell in the VM. And you don’t normally shell into containers.

                                                                                                      1. 1

                                                                                                        That’s interesting, that it takes you 20 minutes to get to something usable. I never had that experience back when I used VMware and VirtualBox. I can’t remember having it anyway. I decided to see what getting Ubuntu up on my box takes with the new version, for comparison to your experience. I did this experiment on my backup laptop: a 1.5GHz Celeron with plenty of RAM and an older HD. It’s garbage as far as performance goes. Running Ubuntu 16-17 (one of them…), VirtualBox, and Ubuntu 18.04 as guest in a 1GB VM. That is, the LiveCD of Ubuntu 18.04 that it’s booting from.

                                                                                                        1. From power on to first Ubuntu screen: 5.7 seconds.

                                                                                                        2. To get to the Try or Install screen: 1 min 47 seconds.

                                                                                                        3. Usable desktop: 4 min 26 seconds.

                                                                                                        So, it’s up in under 5 minutes on the slowest-loading method (LiveCD) on some of the slowest hardware (Celeron) you can get. That tells me you could probably get even better startup time than me if you install and provision your stuff into a VirtualBox VM that becomes a base image. You use it as read-only, snapshot it, whatever the feature was. I rarely use VirtualBox these days so can’t remember. I know fully-loaded Ubuntu boots up in about a minute on this same box with the VirtualBox adding 5.7s to get to that bootloader. Your setup should just take 1-2 minutes to boot if doing it right.

                                                                                                        1. 0

                                                                                                          It takes a good 3 minutes just to get the VirtualBox VM up

                                                                                                          What? Seriously? Are your physical machines running on spinning rust or with only 1 or 2 GB of RAM or something? That is an inordinate amount of time to boot a VM, even in the POS that is Virtualbox.

                                                                                                          but the tool doesn’t give you a shell in the VM.

                                                                                                          What, so docker attach or docker exec -it <container> /bin/bash are just figments of my imagination?

                                                                                                          you don’t normally shell into containers

                                                                                                          You don’t normally just change system settings willy nilly in a pre-configured environment if you don’t know what you’re doing, but apparently you work with some people who don’t do what’s “normal”.

                                                                                                          1. 2

                                                                                                            Physical machines are whatever workstation the developer uses. Typically a Macbook Pro in our case. Up until Vagrant has SSH access to the machine, I’m not holding my breath.

                                                                                                            You’re confusing shell access to the VM with shell access to containers. The Docker commands you reference are for container access.

                                                                                                            People do regularly make changes to vhost configuration, or installed packages in VMs when testing new features, instead of changing the provisioning configuration. Again, because it takes way longer to iterate on these things with VMs. And because people do these things from a shell inside the VM, spending time there, they start customizing as well.

                                                                                                            And people do these things in Docker too, and that’s fine. But we’re way more comfortable throwing away containers than VMs, because of the difference in time. In turn, it’s become much easier to iterate on provisioning config changes.

                                                                                                            1. 2

                                                                                                              If time was a problem, it sounds like the Docker developers should’ve just made VMs faster in existing stacks. The L4Linux VMs in Dresden’s demo loaded up at about one per second on old hardware. Recently, LightVM got it down to 2.3 milliseconds on a Xen variant. Doing stuff like that also gives the fault-isolation and security assurances that only come with simple implementations, which Docker-based platforms probably won’t have.

                                                                                                              Docker seems like it went backwards on those properties vs just improving speed or usability of virtualization platforms.

                                                                                                              1. 1

                                                                                                                You’re confusing shell access to the VM with shell access to containers. The Docker commands you reference are for container access.

                                                                                                                No. Your complaint is that people change configuration inside the provisioned environment. The provisioned environment with Docker isn’t a VM - that’s only there because it requires a Linux kernel to work. The provisioned environment is the container, which you’ve just said people are still fucking around with.

                                                                                                                So your complaint still boils down to “virtualbox is slow”, and I still cannot imagine what you are doing to take twenty fucking minutes to provision a machine.

                                                                                                                That’s closer to the time to build a base box from nothing than the time to bring up an instance and provision it.

                                                                                                                1. 2

                                                                                                                  Look, this is getting silly. You can keep belittling every experience I’ve had, as if we’ve made these choices based on a couple of tiny bad aspects in the entire system, but that’s just not the case, and that’s not a productive discussion.

                                                                                                                  I did acknowledge that in practice Docker bakes a lot more things into images, which accounts for a lot of the slowness of provisioning in the Vagrant case for us. There’s just a lot more that provisioning has to do compared to Docker.

                                                                                                                  And while we could’ve gone another route, I doubt we would’ve been as happy, considering where we all are now as an industry. Docker gets a lot of support, and has a healthy ecosystem.

                                                                                                                  I see plenty of issues with Docker, and I can grumble about it all day. The IPv6 support is terrible, the process management is limited, the Docker for Mac/Windows filesystem integrations leave a lot to be desired, the security issue I mentioned in this very thread. But it still has given us a lot more positives than negatives, in terms of developer productiveness and managing our servers.

                                                                                                                  1. 1

                                                                                                                    You can keep belittling every experience I’ve had

                                                                                                                    Every ‘issue’ you raised boils down to ‘vagrant+virtualbox took too long to bring up/reprovision’. At 20 minutes, that’s not normal operation, it’s a sign of a problem. Instead of fixing that, you just threw the whole lot out.

                                                                                                                    This is like saying “I can’t work out why apache keeps crashing under load on Debian. Fuck it, I’m moving everything to Windows Server”.

                                                                                                                    But it still has given us a lot more positives than negatives

                                                                                                                    The linked article seems to debunk this myth.

                                                                                                                  2. 2

                                                                                                                    I have the same experience as @stephank with VirtualBox. Every time I want to restart with a clean environment, I restart with a standard Debian base box and I run my Ansible playbooks on it. This is slow because my playbooks have to reinstall everything (I try to keep a cache of the downloaded packages in a volume on the host, shared with the guest). Docker makes this a lot easier and quicker thanks to the layer mechanism. What do you suggest to keep using Vagrant and avoid the slow installation (building a custom image I guess)?

                                                                                                                    1. 2

                                                                                                                      Please tell me “the same experience” isn’t 20 minutes for a machine to come up from nothing?

                                                                                                                      I’d first be looking to see how old the base box you’re using is. I’m guessing part of the process is an apt-get update && apt-get upgrade - some base boxes are woefully out of date, and are often hard-coded to use e.g. a US based mirror, which will hurt your update times if you’re elsewhere in the world.

                                                                                                                      If you have a lot of stuff to install, then yes I’d recommend making your own base-box.
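
                                                                                                                      The usual flow for that, roughly (the box name here is illustrative), is to provision once and freeze the result:

                                                                                                                      ```
                                                                                                                      # Provision once from the stock box, then package the VM as a base box
                                                                                                                      vagrant up
                                                                                                                      vagrant package --output acme-base.box
                                                                                                                      vagrant box add acme-base acme-base.box
                                                                                                                      # Point the project's Vagrantfile at "acme-base"; later `vagrant up`
                                                                                                                      # runs only the cheap incremental provisioning on top.
                                                                                                                      ```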

                                                                                                                      What base-box are you using, out of interest? Can you share your playbooks?

                                                                                                                      1. 2

                                                                                                                        Creating a new VM with Vagrant just takes a few seconds, provided that the base box image is already available locally.

                                                                                                                        Provisioning (using Ansible in my case) is what takes time (installing all the services and dependencies required by my app). To be clear, in my case, it’s just a few minutes instead of 20 minutes, but it’s slow enough to be inconvenient.

                                                                                                                        I refresh the base box regularly, I use mirrors close to me, and I’ve already checked that apt-get update/upgrade terminates quickly.

                                                                                                                        My base box is debian/jessie64.

                                                                                                                        I install the usual stuff (nginx, Python, Go, Node, MySQL, Redis, certbot, some utils, etc.).

                                                                                                                        1. 2

                                                                                                                          Reading all your comments, you seem deeply interested in convincing people that VMs solve all the problems people think Docker is solving. Instead of debating endlessly in comments here, I’d be (truly) interested to read about your workflow as an ops person and as a dev. I finished my studies using Docker and never had to use VMs that much on my machines, so I’m not an expert and would be really interested in a good article/post/… that I could learn from on how VMs would be better than Docker.

                                                                                                2. 1

                                                                                                  I think the point is to use something like Ansible: you put your Ansible config in a git repo, then pull the repo, build the Docker image, install the apps, apply the config and run it, all via Ansible.

                                                                                                3. 2

                                                                                                  How do you easily manage 3 different versions of PHP with 3 different versions of MariaDB? I mean, this is something that Docker solves VERY easily.
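
                                                                                                  For instance, a docker-compose sketch can run several MariaDB versions side by side (service names and version tags here are purely illustrative):

                                                                                                  ```yaml
                                                                                                  version: "3"
                                                                                                  services:
                                                                                                    db-10.0:
                                                                                                      image: mariadb:10.0
                                                                                                      environment: {MYSQL_ROOT_PASSWORD: dev}
                                                                                                    db-10.1:
                                                                                                      image: mariadb:10.1
                                                                                                      environment: {MYSQL_ROOT_PASSWORD: dev}
                                                                                                    db-10.3:
                                                                                                      image: mariadb:10.3
                                                                                                      environment: {MYSQL_ROOT_PASSWORD: dev}
                                                                                                  ```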

                                                                                                  1. 4

                                                                                                    Maybe if your team requires 3 versions of a database and language runtime they’ve goofed…

                                                                                                    1. 8

                                                                                                      It’s always amusing to see answers pointing at legacy and saying “it shouldn’t exist”. I mean, yes, it’s weird and annoying, but it exists now and will exist later.

                                                                                                      1. 6

                                                                                                        it exists now and will exist later.

                                                                                                        It doesn’t have to exist at all–like, literally, the cycles spent wrapping the mudballs in containers could be spent just…you know…cleaning up the mudballs.

                                                                                                        There are cases (usually involving icky third-party integrations) where maintaining multiple versions of runtimes is necessary, but outside of those it’s just plain sloppy engineering not to try to clean up and standardize things.

                                                                                                        (And no, having the same container interface for a dozen different snowflakes is not standardization.)

                                                                                                        1. 2

                                                                                                          I see it more like: the application runs fine, the team that was working on it doesn’t exist anymore, and instead of spending time upgrading it (because I’m no Java 6 developer), while still wanting to benefit from bin packing, re-scheduling, … (and not only for this app, but for ALL the apps in the enterprise), I just spend time putting it in a container, and voila. I can still deploy it to several different clouds and orchestrators without asking a team to spend time on a project that already does the job correctly.

                                                                                                          To be honest, I understand that containers are not the solution to everything, but I keep wondering why people don’t accept that it has some utility.

                                                                                                        2. 2

                                                                                                          I think the point is that there is often little cost/benefit analysis done. Is moving one’s entire infrastructure to Docker/Kubernetes less work than getting all one’s code to run against the same version of a database? I’m sure sometimes it is, but my experience is that these questions are rarely asked. There is a status-quo bias toward solutions that allow existing complexity to be maintained, even when the solutions cost more than reducing that complexity.

                                                                                                          1. 4

                                                                                                            Totally agreed, but I’m also skeptical of the reflex of always blaming containers for adding complexity. From my point of view, many things that I do with containers are way easier than if I had to do them another way (I also agree that some things would be easier without them too).

                                                                                                      2. 2

                                                                                                        Debian solves three different versions of php with Ondrej’s packages (or ppa on Ubuntu).

                                                                                                        In anything but dev or the tiniest of sites you’ll have your database server on a separate machine anyway - what possible reason is there to have three different versions of a database server on the same host in a production environment?

                                                                                                        If you need it for testing, use lx{c,d} or vms.

                                                                                                        1. 3

                                                                                                          MySQL especially has broken apps in the past, going from 5.5 -> 5.6, or 5.6 -> 5.7. Having a single database server means having to upgrade all the apps that run on top of it in sync. So in practice, we’ve been running a separate database server per version.

                                                                                                          Can’t speak for other systems, though.

                                                                                                          1. 1

                                                                                                            As you said, testing is a good example of such a use case. But then why use VMs when I can bin-pack containers onto one (or many) machines, using fewer resources?

                                                                                                            1. 1

                                                                                                              That still isn’t a reason to use it in prod, and it isn’t that different from using LXC/LXD style containers.

                                                                                                              1. 1

                                                                                                                Do you have rational arguments against Docker, which uses LXC? So far I don’t see any good reason not to. It’s like saying that you don’t want to use a solution because you can use the technologies it uses underneath.

                                                                                                                1. 6

                                                                                                                  It’s like saying that you don’t want to use a solution because you can use the technologies it uses underneath.

                                                                                                                  That’s a reasonable position though. There are people who have good reasons to prefer git CLI to Github Desktop, MySql console to PHPMyAdmin, and so forth. Abstractions aren’t free.

                                                                                                                  1. 1

                                                                                                                    Exactly! But I don’t see such hatred for people using Github Desktop or PHPmyadmin. Just because you don’t want to use something doesn’t mean it doesn’t fit someone’s use case.

                                                                                                                    1. 1

                                                                                                                      As someone who usually ends up having to ‘clean up’ or ‘fix’ things after someone has used something like a GUI git client or PHPMyAdmin, I wouldn’t use the word hatred, but I’m not particularly happy if someone I work with is using them.

                                                                                                                      1. 1

                                                                                                                        I can do interactive staging on the CLI, but I really prefer a GUI (and if I find a good one, would probably also use a GUI for rebasing before sending a pull request).

                                                                                                                  2. 2

                                                                                                                    If I want a lightweight machine, LXC provides that. Docker is inherently designed to run literally a single process. How many people use it that way? No, they install supervisord or whatever - at which point, what’s the fucking point?

                                                                                                                    You’re creating your own ‘mini distribution’ of bullshit so you can call yourself devops. Sorry, I don’t drink the koolaid.

                                                                                                                    1. 1

                                                                                                                      Your argument is flawed. You justify Docker’s supposed uselessness by generalizing what a (narrow) subset of users is doing. Like I said, I’m ready to hear rational arguments.

                                                                                                                      1. 2

                                                                                                                        generalizing what a (narrow) subset of users is doing

                                                                                                                        I found you 34K examples in about 30 seconds: https://github.com/search?l=&q=supervisord+language%3ADockerfile&type=Code

                                                                                                                        1. 1

                                                                                                                          Hummm okay you got me on this one! Still, I really think there is some real utility for such a solution, even if yes it can be done in many other ways.

                                                                                                      1. 4

                                                                                                        For those curious, the failure mode of the windlass is likely due to the phenomenon of stress concentration. In short, any load-bearing mechanical design made of a single piece (say, a plate of metal) with a sharp interior angle is likely to be weakest starting at that corner. In some ways, you can consider the corner to be the beginnings of a crack.

                                                                                                        That’s why on many designs for parts, windows, and openings in solid pieces there is filleting or other design changes to help spread the stress. Failing to do this can lead to catastrophic failure (notably, the de Havilland Comet).
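
                                                                                                        As a rough textbook sketch (Inglis’s classical infinite-plate result for an elliptical hole under remote tension σ, with hole half-length a and minor semi-axis b):

```latex
% Peak stress at the tip of an elliptical hole (Inglis, 1913):
% rho = b^2/a is the radius of curvature at the hole tip.
\[
  \sigma_{\max} \;=\; \sigma\left(1 + \frac{2a}{b}\right)
                \;=\; \sigma\left(1 + 2\sqrt{\frac{a}{\rho}}\right),
  \qquad \rho = \frac{b^{2}}{a}.
\]
% As rho -> 0 (a perfectly sharp corner), sigma_max grows without bound,
% which is why adding a fillet (increasing rho) lowers the peak stress.
```

                                                                                                        A real part isn’t an infinite plate, so the numbers differ, but the trend (sharper corner, higher peak stress) carries over.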

                                                                                                        1. 27

                                                                                                          What are the advantages to making it federated over the current setup?

                                                                                                          1. 7

                                                                                                            In terms of content and moderation, each instance would be kind of like a “view” over the aggregate data. If you want stricter moderation you could sign up for one instance over another. Each instance could also cater to a different crowd with different focuses, e.g. Linux vs. BSD vs. business-friendly non-technical vs. memes vs. …. Stories not fitting an instance could be blocked by the instance owner. Of course you could also get the catch-all instance where you see every type of story; it might feel like HN.

                                                                                                            The current Lobsters has a very specific focus and culture, and is also locked into a specific moderation style. Federating it would allow a system closer to Reddit and its subreddit system, where each instance has more autonomy, yet the content from the federated instances would all be aggregated.

                                                                                                            So of course such a system wouldn’t be a one-to-one replacement for Lobsters but a superset. Ideally an individual instance could be managed and moderated such that it would feel like the Lobsters of today.

                                                                                                            1. 18

                                                                                                              The current Lobsters has a very specific focus and culture, and is also locked into a specific moderation style. Federating it would allow a system closer to Reddit and its subreddit system, where each instance has more autonomy, yet the content from the federated instances would all be aggregated.

                                                                                                              If federation results in a reddit-like site, I’d much rather lobste.rs didn’t federate. It’s a tech-news aggregator with comments; there’s no real benefit in splitting it up, especially at its current scale.

                                                                                                              1. 6

                                                                                                                I get what you’re saying. I think OP framed the idea wrong. People come to Lobsters because they like Lobsters. The question is whom would the federated Lobsters benefit – it would mostly benefit people who aren’t already Lobsters users.

                                                                                                                It’s just that the Lobsters code base is open source and actively developed, and much simpler than Reddit’s old open source code. So it’s not unreasonable to want to build a federated version on top of Lobsters’ code rather than start somewhere else.

                                                                                                                1. 3

                                                                                                                  it would mostly benefit people who aren’t already Lobsters users.

                                                                                                                  Well, that was my point. Any spammer or shill can create and recreate reddit and hacker-news accounts, decreasing the quality and standards of the platform and making moderation more difficult. This is exactly what the invite-tree concept prevents, which is quite the opposite of (free) federation.

                                                                                                                  1. 8

                                                                                                                    We do have one persistent fellow who created ~20 accounts for himself to submit and upvote his SEO spam. He’s still nosing around trying to re-establish himself on Lobsters. I’m very glad not to be in an arms race with him, trying to prevent him from abusing open signups.

                                                                                                                    1. 1
                                                                                                              2. 2

                                                                                                                Based on my experience in community management, including here on Lobsters, I do not believe it’s possible for an individual instance in a system like you describe to have a coherent culture which is different from the top-level culture in substantial ways, unless you’re okay with participants feeling constantly under siege. The top-level culture always propagates downward, and overriding it takes an enormous amount of resources and constant effort.

                                                                                                                1. 1

                                                                                                                  Have you used Mastodon at all? If that’s used as a model, it seems each instance can have a distinct personality, as Mastodon instances do today. Contrast with traditional forums, and Reddit to some extent, which more or less have a tree structure and where your concern definitely applies. With federation there doesn’t necessarily need to be a top-down structure, even if that might be the easiest to architect (though I don’t know whether it is).

                                                                                                                  1. 1

                                                                                                                    I have used Mastodon, but not enough to have a strong opinion on it. It’s been a challenge for me to pay enough attention to it to keep up with what’s happening; it’s kind of an all-or-nothing thing, and right now Twitter is still taking the attention that I would have to give to Mastodon.

                                                                                                              3. 7

                                                                                                                Biggest argument in favor is probably for people that want to leech off of the quality submissions/culture here but who don’t want to actively participate in the community or follow its norms. That and the general meme today of “federated and decentralized is obviously better than the alternative”.

                                                                                                                Everybody wants the fruit of tilled gardens, but most people don’t want to put in the effort to actually do the work required to keep them running.

                                                                                                                The funny thing is that we’d probably just end up with a handful (N < 4) of lobster peers (after the novelty wears off), probably split along roughly ideological lines:

                                                                                                                • Lobsters for people that want a more “open” community (signups, etc.) and with heavier bias towards news and nerdbait
                                                                                                                • Lobsters for social-justice and progressive people
                                                                                                                • Lobsters for edgelords and people who complain about “social injustice”
                                                                                                                • Lobsters Classic, this site

                                                                                                                And sure, that’d scratch some itches, but it’d probably just result in fracturing the community unnecessarily and creating the requirement for careful monitoring of what gets shared between sites. As a staunch supporter of Lobsters Classic, though, I’m of course biased.

                                                                                                                1. 3

                                                                                                                  So “federation” is what the cool kids are calling “forking” nowadays? Good to know ;)

                                                                                                                2. 2

                                                                                                                  I’d be quite interested to see lobsters publish as ActivityPub/OStatus (so I could, for instance, use a mastodon account to follow users / tags / all stories). I don’t see any reason to import off-site activity; one of the key advantages of lobsters is that growth is managed carefully.

                                                                                                                  1. 1

                                                                                                                    Lobsters actually already does this with Twitter, so that seems both entirely straightforward to add and in line with existing functionality.

                                                                                                                    (Note that I don’t use Twitter, so I can’t speak to how well that feed actually works.)

                                                                                                                    1. 1

                                                                                                                      The feeds already exist, just have to WebSub enable them…

                                                                                                                    2. 1

                                                                                                                      It won’t go away entirely if the one special person who happens to own this system decides to make it go away for whatever reason of their own. It won’t die off if this specific instance gets sold or given to someone who can’t handle it and who runs it into the ground.

                                                                                                                    1. 2

                                                                                                                      When reading Javascript/PHP rants, I’m always reminded of this:

                                                                                                                      There are only two kinds of programming languages: those people always bitch about and those nobody uses.

                                                                                                                      1. 16

                                                                                                                        That quote always comes up in these discussions, and it’s a stupid quote. Like most things that try to draw dichotomies, it’s not really true. Not every language people use is equally bitched about. Some people even praise languages more than complain about them. The quote is effectively used to argue for any status quo: let’s not discuss how we could do things better because people will just bitch about it anyway, or all languages equally suck so let’s use whatever we have here. It really adds nothing to the conversation.

                                                                                                                        1. 8

                                                                                                                          I heard this phrase a lot when I was a PHP developer. In my experience it was used as a conversation stopper, which got very frustrating.

                                                                                                                          For example, after fixing some bug I might recommend that we stick to using === instead of == because the latter behaves in complicated and potentially confusing ways; get the reply “There are two kinds of programming languages…”. OK, that’s nice, but are we going to use === instead of == or not?

                                                                                                                          Or we might be discussing how to design a certain system or feature. Point out that X tends to be a bad idea due to Y, get the response “There are two kinds of programming languages…” “So…?” “We’re going with X”.

                                                                                                                          Thankfully I’m no longer a PHP developer.

                                                                                                                          1. 3

                                                                                                                            It doesn’t mean that you shouldn’t complain, or even that you’re a conservative person who likes to use Latin phrases here and there. :) It rather means that, generally, people complain a lot about any successful language.

                                                                                                                            1. 1

                                                                                                                              Some people even praise languages more than complain about them.

                                                                                                                              Sure, but I’d assert that those tend overwhelmingly to be niche languages.

                                                                                                                              A lot of people love languages like APL, Haskell, ML, Idris, Elm, and so forth–fact is, those languages just aren’t very relevant to mainstream software engineering.

                                                                                                                              The best thing that can be said is that exposure to them helps people using bad languages reconsider how to approach things in their vulgar daily driver. The worst thing to be said is that zealots of those languages try to infect otherwise bearable languages with features from their pet tongue and in so doing make things more complicated for everybody else (FP folks did this to C++, Java folks did this to JS, etc.)

                                                                                                                          1. 26

                                                                                                                            My observation about Best Practices is that the ones worth listening to are often:

                                                                                                                            • the result of colossal fuckups or near misses
                                                                                                                            • quietly known to conservative engineers who encountered them on a team, documented them, and don’t make a fuss
                                                                                                                            • very different from whatever is being proclaimed in the mailing-lists and blogs
                                                                                                                            • arose during fighting common problems on real products

                                                                                                                            All too frequently (looking at you Dave Thomas, Uncle Bob, many security folks and academics, and others) there seems to be a tendency to have practices that:

                                                                                                                            • are meant to address theoretical concerns that don’t occur during normal development or operations
                                                                                                                            • are given “Best Practices” status by handwaving about “the community has decided” (Thomas’s first edition of Programming Elixir was rotten with this) when the community is both very young and new
                                                                                                                            • arose fighting uncommon problems on real products (lots of FB/GOOG/AMZN Best Practices make sense only in their operational regime…blindly following them is not good engineering)
                                                                                                                            • rely really heavily on a commercial/philosophical context that you probably don’t share
                                                                                                                            1. 2

                                                                                                                              I agree, and would add to the “all too frequently” pile:

                                                                                                                              • best practices are those things practised by mediocre teams, who would like to be told what to do rather than bring their experience or contextual knowledge to bear.
                                                                                                                              1. 2

                                                                                                                                Best Practices are worthwhile only when understood: why is this a best practice? If you cannot understand it, then don’t just follow it; learn why you should (or shouldn’t) follow it.

                                                                                                                                Blindly putting them aside is also a mistake since you refuse to learn why some people wrote about them.

                                                                                                                                1. 2

                                                                                                                                  arose fighting uncommon problems on real products (lots of FB/GOOG/AMZN Best Practices make sense only in their operational regime…blindly following them is not good engineering)

                                                                                                                                  This is probably the worst mind virus, at least the one I encounter most often. It leads to stuff like basic websites using Redis, Elasticsearch, Cloudflare, an RDBMS, Docker, Kubernetes, etc…