1. 4

    I’m using three as a Kubernetes cluster so I can learn how to set it up on cheap hardware. Learned a lot along the way, and I’d highly encourage anyone else interested in learning Kubernetes to do the same!

    1. 1

      Do you have a writeup on this?

      1. 2
        1. 1

          thanks!

    1. 5

      Battlestation

      • iMac
      • iPad Pro
      • MacBook Air

      Not shown:

      • iPhone Xs
      • Blue jeans
      • Black turtleneck

      Yup.

      1. 1

        Blue jeans

        Black turtleneck

        Are you doing this without coercion? Blink twice for no ;)

        1. 2

          I blinked. Can’t say how many times, though.

      1. 6

        Bonus points if you get an alert on a comment on this story on lobste.rs.

        1. 3

          alert

          Do I win?

        1. 3

          Can you keep a secret? … So can I ;-)

          1. 1

            Zerodium would’ve paid a million for you to not debug that. I admire your willingness to turn down money. ;)

            Note to security folks: iOS payout dropped down to $1 mil with Android going up to $2.5 mil. Wonder what that means.

          1. 10

            I struggle with the same predicament: I run my own email server, and over the years mail from it has fluctuated between the Inbox and the Spam folder on my recipients’ side, irrespective of what rules I follow:

            • SPF, DMARC, and every other email-related acronym of greater-than-zero length
            • asking people to mark my emails as “Not Spam”
            • registering my IP and domain with Google Webmasters
            • registering my IP and domain with DNSWL.org
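
            For context, the DNS records involved look roughly like this (a hedged sketch; example.com, the address, and the policy values are placeholders, not my actual setup):

            ```
            example.com.                    IN TXT "v=spf1 ip4:203.0.113.5 -all"
            _dmarc.example.com.             IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
            default._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<public key>"
            ```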

            I have gone to great lengths trying to solve this problem but gave up a few years back. But let’s take this issue and internalize it within our community:

            What would you do if you were tasked with handling the spam problem at GMail?

            1. 3

              What would you do if you were tasked with handling the spam problem at GMail?

              I’ve been following this debate for a long time, and have suffered from the problem myself (though in my case it’s usually outlook.com that blocks me, not Gmail, and outlook.com has considerable market share in Germany). Personally, to circumvent it, I pay for an SMTP relay. This means that my mail server directs all outgoing mail to that SMTP relay server, which then does the real delivery. Since many people (mostly businesses) use that SMTP relay server, it has a high volume and mail usually goes directly to INBOX.

              I acknowledge this is a workaround. The other workaround is to pay Google and have Google’s staff fix the problem. But I think there’s a possibility for a real solution (apart from the best solution, which is to fix the SMTP protocol itself). What I see is that it’s almost always low-volume senders that are self-hosting who have the delivery problems. Based on that, I suggest the following:

              People who self-host mail should form groups and create an SMTP relay server with a clean IP reputation. Every member of that group then has his/her mail server direct all outgoing mail to the group’s SMTP relay, which does the real delivery. As a consequence, the group SMTP relay has the cumulative volume of all group members and much better standing when it comes to delivery. The bigger the group, the better.

              It’s required that someone administer that mail relay, most notably to prevent any member from sending spam (e.g. via a compromised account). Maybe the administrative duty could rotate annually or so. Finally, from a German perspective, an association (Verein) would be a natural candidate for the legal framework of such a collective SMTP relay. It’d be the legal entity to address in case of problems. It’d also allow a kind of charity “business” model: association members may use the SMTP relay for free, but non-members have to pay. In any case, only individuals are accepted as members. German Ehrenamt at work.

              The individual’s fight against GMail and similar companies is not winnable. My suggestion can give the self-hosters better standing while still allowing real self-hosting: it’s your SMTP server that accepts all the incoming mail, does the spam filtering, the sieve scripts, and whatever else you wish. It’s also your server that sends out the outgoing mail, with just one little twist: all outgoing mail is submitted to the group’s SMTP relay server, thereby joining the group SMTP server’s volume and reputation.
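
              For what it’s worth, pointing a self-hosted server at such a relay is only a few lines of configuration. A sketch for Postfix (relay.example.org and the credentials file are placeholders for whatever the group would actually run):

              ```
              # /etc/postfix/main.cf
              relayhost = [relay.example.org]:587
              smtp_sasl_auth_enable = yes
              smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
              smtp_tls_security_level = encrypt
              ```

              Incoming mail, spam filtering, and sieve scripts would remain entirely on your own server; only outbound delivery goes through the relay.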

              1. 1

                People who self-host mail should form groups and create an SMTP relay server with a clean IP reputation. Every member of that group then has his/her mail server direct all outgoing mail to the group’s SMTP relay, which does the real delivery. As a consequence, the group SMTP relay has the cumulative volume of all group members and much better standing when it comes to delivery. The bigger the group, the better.

                Sorry, but this is just really silly. I don’t want someone else to be running my mail server. If you do, you can already use one of the vapourware products for sending bulk email or whatnot (be careful, though, because a lot of it doesn’t actually implement SMTP correctly, and cannot do greylisting, for example, so, you’d probably only want to use it to send mail to Gmail and Outlook, still using your own IP address to send mail to anyone else).

                OTOH, I certainly wouldn’t mind contributing to a legal fund to sue Google and Google Mail, given their monopoly position on the corporate email front and their refusal to accept my mail under the false pretence of low domain reputation. That reputation was acquired merely because I’m a Gmail user myself and have been sending mail to my own account, mail which they misclassified as Spam, a misclassification they then extended to the rest of their userbase, too.

              2. 1

                What would you do if you were tasked with handling the spam problem at GMail?

                Maybe this, right?

                I mean, the current situation is kind of the worst of both worlds. Not only do we have an ossified protocol nobody can change, but it isn’t even giving us interoperability!

                However, I can only assume there’s been a slowly escalating cat-and-mouse game, and as more and more people use email powered by one of the big providers, it’s at least plausible to me that more spam than legitimate mail now comes from self-hosted servers. Without working there, I’m not sure how I could estimate what kind of spam they’re getting, and without knowing that, I don’t know that they’re actually doing the wrong thing. Maybe the proper application of Bayes’ rule really should lead you to conclude that self-hosted mail is probably spam.

                I would hate it if that were true, but I don’t know how I would be able to tell if it weren’t.

                1. 1

                  The biggest problem is that the whole Google Mail thing is a black box, that there’s no way to know why exactly your mail is being rejected (e.g., it tells me that my domain has a low reputation, but doesn’t tell me why), and that there’s no recourse on fixing the situation (e.g., there’s no way for appealing the misclassification of those cron emails that I sent from my own domain to my own Gmail account that Google, apparently, deems to contribute to the low reputation).

                  Google Search already has a solution for picking up new web properties very fast, whereas the rest of the players (e.g., Yandex) take ages to move a new site from Spam to non-Spam status and include it in their main index; there’s little reason this couldn’t be done for email. The fact that it’s the very small players that are most affected leads me to believe that it’s merely a conflict of interest at play: they don’t want you or me to run our own mail servers; they want all of us to simply use Gmail and G Suite instead.

              1. 1
                $ ls -F
                Archive.zip  Maildir/     bin@         repos/       sandbox/
                
                • Archive.zip: created every few days by zipping up Maildir
                • Maildir/: mail contents
                • bin@: symlink to repos/bin_files/ -> https://github.com/zg/bin_files
                • repos/: repositories I’ve cloned (including https://github.com/zg/dot_files, whose contents I’ve symlinked up to ~, e.g. ~/.zshrc points at ~/repos/dot_files/.zshrc)
                • sandbox/: experimenting, so it’s a bag of dirt
                1. 3

                  GitHub is changing. It is important to understand that GitHub is moving away from being a Git Repository hosting company and towards a company that provides an entire ecosystem for software development.

                  Hasn’t this always been the case?

                  1. 5

                    No, if you look back to when Git was first becoming popular (around 2008), the alternatives to hosting your own Git repository were very cumbersome. Using pull requests instead of sending patches was part of the draw, but the main thing was being able to use Git without putting up with (for instance) the terrible user interface of RubyForge.

                    1. 6

                      But a pull request isn’t part of Git, so I think my postulate still holds true.

                      1. 6

                        What I said was that pull requests were a small part of the draw, and the main thing was being able to host a git repository without dealing with apache or sourceforge.

                        1. 1

                          Is it possible to get actual numbers from some kind of VCS server log from 11 years ago?

                          Did you know Fossil is 12 years old? http://fossil-scm.org/home/timeline?c=a28c83647dfa805f I just found out.

                          1. 1

                            I’m having a hard time seeing any connection between what you said and what I said. Wrong thread, maybe?

                            1. 1

                              i get your lack of sight on account of my lack of clarity so here’s some of that:

                              when Git was first becoming popular (around 2008)

                              … as a rhetorical point, boils down to a dispute between you and zg regarding these like long-range ecosystemic benefits (and the pull request thing is kind of an aside - you are in agreement more than you’re in disagreement, imho, and causality is not inferable about why git and github pulled ahead, is it? it’s pretty contingent)

                              is it possible to get actual numbers

                              this refers to numbers about popularity

                              otherwise talking about some farfagnugen ecosystem-level obscurities is kind of pointless

                              i mean zg kind of just said rhetorically that he doesn’t agree with the fossil guy and you kind of just said that one time in history one thing happened once, and so i figured that having maybe some actual rigorous data would allow us to come to some kind of conclusion, but I know that it’s not very important or interesting, but i was just curious, actually, and i feel like there’s a slim chance that some literal data on vcs usage might exist and that that would solve a lot of these “does the ecosystem come before the theoretical innovation in VCS design or the chicken before the egg or what?” types of questions. since they take place in an ahistorical vacuum otherwise. doy.

                    2. 1

                      Ya pretty much. I think the author missed the point of Github. It was never really about Git more than to the extent that Git appeared to be in the lead at the time and perhaps some preference by the founders.

                      The value proposition is everything around supporting the Git workflow.

                      1. 2

                        A few things:

                        1. Fossil exists in contradiction to the value proposition of “everything supporting git,” to solve problems that aren’t yet solved…

                        2. Fossil is NOT git, in the same sense that, once upon a time, GNU was supposed to be NOT Unix…

                        3. Literally, GitHub invited the guy AS the Chief Point-Misser in a special critical capacity.

                        4. Everything around supporting the git workflow is a value proposition – only to the business supporting “everything around the git workflow!” …

                        4.1 (continuing) … – but that’s a proposition about the ecosystem NOT to the conceptual framework of what it means to “do VCS stuff.”

                        I explained this to epilys, but also wanted to point out to you, that the author probably “missed the point” of GitHub intentionally, whereas you missed precisely that point…

                        1. 1

                          If you say so.

                          1. 0

                            aren’t you saying that GitHub matters but VCS does not?

                            aren’t you saying that Git is irrelevant and the whole thing should just be called “Hub?”

                            when you say “it was never really about Git… etc., etc.,” what do you mean by “it,” if not GitHub?

                            aren’t you just saying “value proposition” in the hopes that everybody forgets that GitHub is a “value-proposing business”… running on a vcs called GIT?

                        2. 1

                          I disagree. I’ve been a Github user since 2009ish and I used it 90% for “hosting a git repository” - that was in addition to hosting my own git repos via ssh/gitolite, so also some mirroring.

                          When my company paid for github, it was to have a git repo. And pull requests, but nothing else. And I simply don’t believe I’m the outlier here. Sure, webhooks were nice but that’s the extent of any added benefit there.

                        3. 1

                          #include <jolly-sarcasm-compiler.h>

                          I don’t know! But let me look in a BOOK or a TECHNICAL GUIDE at the very least - oh here’s one…

                          https://lobste.rs/s/v4jcnr/technical_guide_version_control_system#c_bolhkj

                          When you say “always” do you mean “since we moved away from mainframes?”

                        1. 2
                          1. 1

                            If a small business needs a block of IP addresses, couldn’t they do so through Amazon Web Services or Google Cloud?

                            1. 4

                              You are absolutely able to continue renting IPv4 subnets from various providers. And the big landlords will probably end up owning even more IPv4 space, consolidating their grip as people cash out.

                              This, however, marks the beginning of the end of small businesses owning their IP space outright.

                              1. 1

                                Most ISPs will be able to get you a /29, /28, or /27 for a nominal monthly fee… Brokers will also still sell you large blocks for large one-off prices (12–20 USD per IP).

                              1. 2

                                What is PF?

                                1. 3

                                  Packet Filter, OpenBSD’s firewall.

                                  https://www.openbsd.org/faq/pf/

                                  1. 1

                                    Thank you!

                                1. 4

                                  I’m in a final two-week push to deliver Magento 2 to the company so I’ll be a big ball of stress trying to hammer out some features.

                                  1. 4

                                    Good luck, you got this!

                                    1. 1

                                      Thank you!

                                  1. 1

                                    I’ve shifted my focus to reading about design this year, and it may take me through the year and into 2020 before I actually complete all of the material I’m planning to read.

                                    I think it’s important to understand the designer’s perspective on problems they face. How they think, work, and produce output is important because they’re the ones who are leading the change in this world. Right now, I think technology is an implementation detail of the designer’s vision. By the end of the year, I’ll know if that theory still stands.

                                    There are three broad categories for the books I’m planning to read: thinking, process, and design skills. They’re all found, along with a description for each, on this page.

                                    1. 5

                                      San Francisco Museum of Modern Art visit, while the René Magritte exhibit is still going on. It may sound unusual for an engineer like me to visit an art museum, but I go because my hope is that I will learn at least one new thing from all the exposure to things I’m completely unfamiliar with.

                                      1. 4

                                        Interesting takeaways from the APNIC Labs blog post:

                                        In setting up this joint research program, APNIC is acutely aware of the sensitivity of DNS query data. We are committed to treat all data with due care and attention to personal privacy and wish to minimise the potential problems of data leaks. We will be destroying all “raw” DNS data as soon as we have performed statistical analysis on the data flow. We will not be compiling any form of profiles of activity that could be used to identify individuals, and we will ensure that any retained processed data is sufficiently generic that it will not be susceptible to efforts to reconstruct individual profiles. Furthermore, the access to the primary data feed will be strictly limited to the researchers in APNIC Labs, and we will naturally abide by APNIC’s non-disclosure policies.

                                        This joint project has an initial period of five years and may be renewed. Upon the expiration of the initial period, or at any time thereafter, APNIC shall consider a request by Cloudflare for a permanent allocation of these IPv4 addresses to Cloudflare. APNIC undertakes to refer any such request to the regional Address Policy Special Interest Group as a matter of a change to the current research use designation of these IPv4 addresses, and APNIC shall be bound to the outcomes of this policy group.

                                        https://labs.apnic.net/?p=1127

                                        1. 1

                                          Off-topic: I signed up for the newsletter on queue.acm.org a few weeks back and haven’t seen an email from them. Do they do monthly digests? Do you find yourself checking the website frequently for new articles?

                                          1. 2

                                            It should be every week; have you checked your spam folder?

                                            1. 1

                                              Unfortunately, I can’t answer those questions. I don’t regularly use ACM. I got that article off Hacker News.

                                            1. 1

                                              I’m not fully understanding what issue is being described here. Is it that the archive URLs are unreliable, i.e. the “Source code (zip / tar.gz)” URL?

                                              1. 2

                                              The hash of the auto-generated tar files is not stable. I assume the compression level, or the tar implementation used to create them, changes over time.

                                                1. 1

                                                  And what about the zip files?

                                                  1. 3

                                                    Same problem with zip files.

                                                    The OpenBSD ports tree stores checksums of release artifacts to ensure authenticity of code that is being compiled into packages.

                                                    GitHub’s source code links create a new artifact on demand (using git-archive, I believe). When they upgrade the tooling which creates these artifacts, the output for existing artifacts can change, e.g. because the order of paths inside the tarball or zip changes, or compression level settings have changed, etc.

                                                    Which means that trying to verify the authenticity of a github source link download against a known hash is no better than downloading a tarball and comparing its hash against the hash of another distinct tarball created from the same set of input files. Hashes of two distinct tarballs or zip files are not guaranteed to match even if the set of input files used to create them is the same.
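
                                                    The effect is easy to reproduce locally. In this sketch, two tarballs built from byte-identical file contents hash differently because tar records metadata such as modification times (this illustrates the general problem, not GitHub’s actual pipeline):

                                                    ```shell
                                                    # Two tarballs of byte-identical contents, differing only in an mtime:
                                                    tmp=$(mktemp -d) && cd "$tmp"
                                                    mkdir demo && echo hello > demo/file.txt
                                                    tar czf a.tar.gz demo
                                                    touch -t 202101010101 demo/file.txt   # change only the timestamp
                                                    tar czf b.tar.gz demo
                                                    sha256sum a.tar.gz b.tar.gz           # the two hashes differ
                                                    ```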

                                                    1. 1

                                                      Thank you for the detailed response! I understand the issue now.

                                                      There are likely tradeoffs from GitHub’s perspective on this issue, which is why they create a new artifact on demand. They maintain a massive number of repositories on their website, so they probably can’t just store all those artifacts for long periods of time as one repository could potentially be gigantic. There are a number of other reasons I can think of off the top of my head.

                                                      Why not have the checksum run against the file contents rather than the tarball or zip?
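
                                                      A content-based check is possible in principle. As a rough sketch (not what the ports tree actually does), you could hash every file’s bytes and then hash the sorted list of per-file hashes, so archive metadata such as timestamps, member ordering, and compression settings drop out entirely:

                                                      ```shell
                                                      # Sketch: a checksum over file contents only, so archive metadata
                                                      # (mtimes, path order, compression level) cannot affect it.
                                                      # Assumes the downloaded archive was unpacked into ./project first.
                                                      cd project
                                                      find . -type f -print0 | sort -z | xargs -0 sha256sum | sha256sum
                                                      ```

                                                      Two trees with identical file contents always produce the same digest this way, regardless of how they were archived; the downside is that an untrusted archive must be unpacked before it can be verified.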

                                                      1. 3

                                                        Why not have the checksum run against the file contents rather than the tarball or zip?

                                                        One reason is that this approach couldn’t scale. It would be insane to store and check potentially thousands of checksums for large projects.

                                                        It is also harder to keep secure because an untrusted archive would need to be unpacked before verification, see https://lobste.rs/s/jdm7vy/github_auto_generated_tarballs_vs#c_4px8id

                                                        I’d rather turn your argument around and ask why software projects hosted on GitHub have stopped doing releases properly. The answer seems to be that GitHub features a button on the web site and these projects have misunderstood the purpose of this button. Meanwhile, some projects which do understand the issue actively try to steer people away from the generated links by creating marker files in large friendly letters: https://github.com/irssi/irssi/releases

                                                        I’d rather blame the problem on a UI design flaw on GitHub’s part than blame the best practices that software integrators in the Linux and BSD ecosystems have followed for ages.

                                                        1. 2

                                                          Some more specifics on non-reproducible archives: https://reproducible-builds.org/docs/archives/.

                                                          Why not have the checksum run against the file contents rather than the tarball or zip?

                                                          Guix can do something like that. While it’s preferred to make packages out of whatever is considered a “release” by upstream, it is also possible to make a package based directly on source by checking it out of git. Here’s what that looks like.

                                                1. 11

                                                  They are just following the trend. Support XMPP to get people using it and then drop it when they have enough power.

                                                  1. 5

                                                    Wasn’t this the reason Google Talk shut down and got replaced with Hangouts?

                                                  1. 13

                                                    The trick here is that the bar will disappear when you fill it up. My bar disappeared when I donated the remaining balance. If you’re still seeing your bar, it means you need to donate the remaining balance.

                                                    1. 2

                                                      Huh? I donated before the bar was put there, I still see it. What do you mean by “remaining balance”?

                                                      1. 10

                                                        I think zg was going for humor, that if someone wants to donate all of the remaining amount to reach the goal, the fundraiser will end and the bar will be removed.

                                                        1. 3

                                                          what’s “humor”?

                                                    1. 3

                                                      I’m so glad I moved to my own domain, own email, own calendar, own contacts, own backup, own you-name-it server. I replaced every conceivable cloud provider that I was consuming and to this day I am very glad that I took the time to do it because it’s shit like this that I get to chuckle at.

                                                      I highly encourage anyone who depends on any cloud provider to ask yourself this: do I like the service I’m being provided? Are there alternatives I could run myself? It’s questions like these that led me to obtain the experience I needed to land a job.

                                                      1. 1

                                                        Could you expand?

                                                        Where do you host your services? How much time did it take for you to set it up? How much maintenance does it need? Also did you have any problems with mobile?

                                                          Lately, outside of mail, I’m thinking about photo hosting. I would like to tag photos and have camera icons dedicated to certain tags.

                                                        1. 2

                                                            Sorry, let me clarify one point: I do rely on one cloud provider: DigitalOcean. I run my email, contacts, and calendar services within a droplet on DigitalOcean. I routinely have backups scp’d from the VPS to my local machine, which has an 8 TB RAID setup and is where I back up other things as well. I also run my own WebDAV service, which allows me to sync documents between my laptop and iPhone.

                                                          I could technically avoid the reliance on DigitalOcean if I purchased my own hardware and placed it into a colocated datacenter, but that would be costly, and it kind of goes a bit beyond the idea of no cloud provider dependency. I’m fine with relying on DigitalOcean because I know that I have backups in case I need to switch to a different provider.

                                                          I also purchase CDs and import them to iTunes then sync them onto my iPhone. The frustrations of dealing with bad LTE coverage led me to make this choice a long time ago, and I’ve been happy ever since.

                                                          1. 1

                                                            I am on fastmail and it works just fine. Comes with mail, contacts, calendar and cloud storage.

                                                            1. 1

                                                              But that’s not what OP meant, is it? Replacing one provider with another is not “my own domain, own email, own calendar, own contacts, own backup, own you-name-it server”. Also he said: “I replaced every conceivable cloud provider”.

                                                              1. 1

                                                                I think there is a big difference between fastmail and google, namely that fastmail is not an ad company and you pay for your mail/calendar etc.

                                                        1. 1

                                                            There clearly is a spectrum of possible ways to think about programming solutions to the problems we’re trying to solve. As we have seen over the course of the last 100 years or so, some paradigms have stood out more than others, while others have had their good parts taken from them.

                                                            I would argue that the very early paradigms stood out because they were easy to understand and iterate on. The last 20 years or so have shown that paradigms had to shift to accommodate scale: when the Internet started to take off, developers had to go from handling a dozen users to handling upwards of a few billion. I think the “scale paradigm shift” is coming to an end, since we now have many services on the Internet that can accommodate massive scale.

                                                          1. 3

                                                            Be aware of an efficient markets fallacy or purposeful evolution fallacy here. Our current paradigm makes it possible to build services at internet scale, and there are a small handful of successful examples. This is not the same as having converged on a paradigm for building at scale, nor is it the same as having found the best or even a good paradigm for building at scale.