1. 2

    Side-note: Ken got himself a pretty good email address (ken at google.com).

    1. 3

      It’s not tremendously difficult to acquire a vanity email alias at a big tech company, so long as nobody else is already using the one you want and the mail system supports customizing it.

    1. 2

      I originally read Clean Code as part of a reading group which had been organised at work. We read about a chapter a week for thirteen weeks.

      I am so eager to know (a) who organised this reading group, and (b) what the goal was.

      1. 3

        How small is the irreparable damage? Is there a picture of it?

        1. 22

          I’m fairly sure the fuses are a part of the CPU die, so they’re only several microns in size.

          1. 9

            @dstaley is right: it’s just a small extra metal trace somewhere inside the die. Like any other fuse, you put a high enough voltage across it and it pops. The CPU can then check continuity with a lower voltage to see whether it has been blown.

            This has some die photos of one example: https://archive.eetasia.com/www.eetasia.com/ART_8800717286_499485_TA_9b84ce1d_2.HTM

            1. 7

              As others have said, these fuses are on the CPU die itself. Fuses like this are actually quite common on microcontrollers for changing various settings, or for locking the controller so it can’t be reprogrammed after it has received its final production programming.

              1. 6

                The Xbox 360 also did something similar with its own “e-fuses.” I assume it’s standard practice now.

                1. 4

                  Yup, it’s entirely standard for any hardware root of trust. There are a couple of things that they’re commonly used for:

                  First, per-device secrets or unique IDs. Anything supporting remote attestation needs some unique per-device identifier. This can be fed (usually combined with some other things) into a key-derivation function to generate a public-private key pair, giving a remote party a way of establishing an end-to-end secure path with the trusted environment. This is a massive oversimplification of how you can spin up a cloud VM with SGX support and communicate with the SGX enclave without the cloud provider being able to see your data (the most recent vulnerability allowed the key that is used to sign the public key along with the attestation to be compromised). There are basically two ways of implementing this kind of secret (a rough sketch of the key-derivation step follows the list below):

                  1. PUFs. Physically Unclonable Functions are designs that take some input (can be a single bit) and produce an output that is stable but depends on details beyond the manufacturing tolerances of a particular process. The idea is that two copies of exactly the same mask will produce versions that generate different outputs. PUFs are a really cool idea and an active research area, but they’re currently quite big (expensive) and not very reliable (so you need a larger area and some error correction to get a stable result out of them).
                  2. Fuses. Any hardware root of trust will have a cryptographic entropy source. On first boot, you read from this, filter it through something like Fortuna (possibly implemented in hardware) to give some strong random numbers, and then burn something like a 128- or 256-bit ID into fuses. Typically with some error correction.
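
                  To make the key-derivation step concrete, here is a minimal sketch in Python using the cryptography package. Everything in it is illustrative: the fused value is a stand-in, the info label is made up, and in real hardware this happens inside the trusted element, not in application code.

                    from cryptography.hazmat.primitives import hashes
                    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
                    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

                    # Stand-in for the 256-bit per-device secret burned into fuses.
                    fused_secret = bytes(32)

                    # "Usually combined with some other things": here just a fixed label.
                    seed = HKDF(
                        algorithm=hashes.SHA256(),
                        length=32,
                        salt=None,
                        info=b"attestation-key-v1",
                    ).derive(fused_secret)

                    # Deterministic per-device key pair; the public half goes to the remote party.
                    private_key = Ed25519PrivateKey.from_private_bytes(seed)
                    public_key = private_key.public_key()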

                  The MAC address (as @Thra11 pointed out) is a simple case of needing a unique identifier.

                  The second use is monotonic counters for roll-back protection. A secure boot chain works (again, huge oversimplifications follow) by having something tiny that’s trusted, which checks the signature of the second-stage boot loader and then loads it. The second stage checks the signature of the third stage, and so on. Each stage appends the values it produces to a hash accumulator, so (with another massive oversimplification) you may end up with hash(hash(first stage) + hash(second stage) + hash(third stage) …), where hash(first stage) is computed in hardware and everything else is in software (and where each hash function may be different).

                  You can read the partial value (or, sometimes, use a key derived from it but not actually read the value) at any point, so at the end of second-stage boot you can read hash(hash(first stage) + hash(second stage)) and then use that in any other crypto function, for example by prefixing the third-stage image with the decryption key or signature for the boot image, encrypted with a key derived from the hashes of all of the allowed first- and second-stage boot chains. You can also use it in remote attestation, to prove that you are running a particular version of the software.
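
                  As a toy illustration of that accumulator formula (a sketch only: it assumes SHA-256 everywhere, whereas in reality the first hash is computed by hardware and each stage may use a different function):

                    import hashlib

                    def stage_hash(image: bytes) -> bytes:
                        return hashlib.sha256(image).digest()

                    def accumulate(images) -> str:
                        """hash(hash(first stage) + hash(second stage) + ...) as described above."""
                        acc = hashlib.sha256()
                        for image in images:
                            acc.update(stage_hash(image))
                        return acc.hexdigest()

                    stages = [b"first-stage image", b"second-stage image", b"third-stage image"]
                    print(accumulate(stages[:2]))  # partial value readable at end of second stage
                    print(accumulate(stages))      # value over the whole chain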

                  All of this relies on inductive security proofs. The second stage is trusted because the first stage is attested (you know exactly what it was) and you trust the attested version. If someone finds a vulnerability in version N, you want to ensure that someone who has updated to version N+1 can never be tricked into installing version N.

                  Typically, the first stage is a tiny hardware state machine that checks the signature and version of a small second-stage that is software. The second-stage software can have access to a little bit of flash (or other EEPROM) to store the minimum trusted version of the third-stage thing, so if you find a vulnerability in the third-stage thing but someone has already updated with an image that bumped the minimum-trusted-third-stage-thing-version then the second-stage loader will refuse to load an earlier version. But what happens if there’s a vulnerability in the second-stage loader? This is typically very small and carefully audited, so it shouldn’t be invalidated very often (you don’t need to prevent people rolling back to less feature-full versions, only insecure ones, so you typically have a security version number that is distinct from the real version number and invalidate it infrequently). Typically, the first-stage (hardware) loader keeps a unary counter in fuses so that it can’t possibly be rolled back.
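
                  A sketch of the unary-counter idea, with fuse state modelled as a list of booleans; the names are made up, and a real implementation lives in the hardware loader rather than in Python:

                    def minimum_security_version(fuses) -> int:
                        """The minimum trusted security version is the number of blown fuses."""
                        return sum(fuses)

                    def may_load(image_security_version: int, fuses) -> bool:
                        # Blowing one more fuse raises the floor, and a fuse can never be
                        # un-blown, so a vulnerable older image can never be loaded again.
                        return image_security_version >= minimum_security_version(fuses)

                    fuses = [True, True, False, False]  # two fuses blown: floor is version 2
                    print(may_load(1, fuses))           # False: rolled-back image is refused
                    print(may_load(2, fuses))           # True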

                  1. 1

                    (You likely know this, but just in case:)

                    What you describe above is a strong PUF; weak PUFs (that do not take inputs) also exist, and - in particular - SRAM PUFs (which you can get from e.g. IntrinsicID) are pretty reliable.

                    (But indeed, lots of PUFs are research vehicles only.)

                2. 4

                  Examples of fuses I’ve seen used in i.MX6 SoCs include setting the boot device (which, assuming it’s fused to boot from SPI or onboard MMC, effectively locks out anyone trying to boot from USB or SD card) and setting the MAC address.

              1. 3

                I can’t help but think that at the scale where Kubernetes works best, you end up with the lesser-of-two-[tools] principle. From a non-technical angle, if you have a complex system spanning multiple machines, supported by lots of code, and containing plenty of architectural complexity, then at least having an open core that a new hire can understand lowers the barrier to entry and lets them hit the ground running with contributions.

                1. 1

                  I once read a great article in defense of Kubernetes that made a similar claim and I wish I could find it.

                  Essentially, their point was that doing infrastructure at the scale where you need this tooling means you already have a complex environment with complex interactions. If you think you can just say “oh, we’ll use some homemade shell scripts to wrangle it” you are ignoring the fact that you needed to evolve those scripts over time and that your understanding of them is deeply linked with that evolution.

                  I didn’t know the first thing about Kubernetes, and still barely do - but when I started my current job I was able to get an understanding of how their software is deployed, what pieces fit together, etc. When I got confused I could find solid documentation and guides… I didn’t have to work through a thousand-line Perl monstrosity or try to buttonhole someone in ops to get my questions answered.

                1. 7

                  It contains a full copy of Mark Twain’s novel “The Adventures of Tom Sawyer”. It gets really tiring to blacklist this file in my desktop search engine, as it otherwise constantly comes up in unrelated searches for words that happen to appear in the novel.

                  Why.

                  1. 9

                    Test case for a compression library, IIRC.

                    1. 1

                      I tried a few different approaches to productivity, including notebook and a pen, iPad, etc.

                      I settled on simply having a set of somewhat structured documents tracking the work of entire projects. If a new project comes up, I create a new document.

                      Each document has:

                      • Latest Update (the first, and maybe second, bullet on the Timeline),
                      • Blockers (a bulleted list containing “Blocked -> Blocked By” mappings),
                      • Tracking (links to tickets where other teams/individuals post updates), and
                      • Timeline (bulleted items with a date and time)

                      I go through the documents and simply append to the Timeline when something new comes up.

                      In theory I could coalesce all these documents into a single .txt file, which would achieve something similar to what you’ve done.
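
                      For what it’s worth, a tiny sketch of the append step, assuming one plain-text document per project with the Timeline section kept at the end of the file (the names and layout here are illustrative, not your exact format):

                        from datetime import datetime
                        from pathlib import Path

                        def append_to_timeline(project: str, note: str, root: Path = Path("projects")) -> None:
                            """Add a dated bullet to the end of the project's document (its Timeline)."""
                            root.mkdir(parents=True, exist_ok=True)
                            doc = root / f"{project}.txt"
                            stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
                            with doc.open("a", encoding="utf-8") as f:
                                f.write(f"- {stamp} {note}\n")

                        # append_to_timeline("some-project", "Blocked -> waiting on review from another team")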

                      1. 2

                        Off-topic: I used to own this domain.

                        I should have never gotten rid of it. Oh well.

                        1. 2

                          Who knew .io would take off like it did… Don’t kick yourself. It’s at least a cool, alternative universe where they might have written you a check for it. :)

                          1. 3

                            It’s at least a cool, alternative universe where they might have written you a check for it.

                            I have the unusual honor of being able to say that, as a 16-year-old, I sold a domain to a company for a four-figure amount. The alternative universe is a reality for me, albeit in a different context.

                            P.S. I’m holding out on w-1.net and w-3.net. ;-)

                            1. 1

                              That’s cool!

                        1. 4

                          I’m using three as a Kubernetes cluster so I can learn how to set it up on cheap hardware. Learned a lot along the way, and I’d highly encourage anyone else interested in learning Kubernetes to do the same!

                          1. 1

                            Do you have a writeup on this?

                            1. 2
                              1. 1

                                thanks!

                          1. 5

                            Battlestation

                            • iMac
                            • iPad Pro
                            • MacBook Air

                            Not shown:

                            • iPhone Xs
                            • Blue jeans
                            • Black turtleneck

                            Yup.

                            1. 3

                              Blue jeans

                              Black turtleneck

                              Are you doing this without coercion? Blink twice for no ;)

                              1. 2

                                I blinked. Can’t say how many times, though.

                            1. 6

                              Bonus points if you get an alert on a comment on this story on lobste.rs.

                              1. 3

                                alert

                                Do I win?

                              1. 3

                                Can you keep a secret? … So can I ;-)

                                1. 1

                                  Zerodium would’ve paid a million for you to not debug that. I admire your willingness to turn down money. ;)

                                  Note to security folks: iOS payout dropped down to $1 mil with Android going up to $2.5 mil. Wonder what that means.

                                1. 10

                                  I struggle with the same predicament: I run my own email server, and over the years I’ve watched mail sent from it fluctuate between my recipients’ Inbox and Spam folders, irrespective of what rules I follow:

                                  • SPF, DMARC, and every other email-related acronym of nonzero length (example DNS records after this list)
                                  • asking people to mark my emails as “Not Spam”
                                  • registering my IP and domain with Google Webmasters
                                  • registering my IP and domain with DNSWL.org
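
                                  For anyone following along, the first bullet boils down to publishing DNS TXT records roughly like the following (example.com and the report mailbox are placeholders; real policies vary):

                                    example.com.        IN TXT "v=spf1 mx -all"
                                    _dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"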

                                  I have gone to great lengths trying to solve this problem but gave up a few years back. But let’s take this issue and internalize it within our community:

                                  What would you do if you were tasked with handling the spam problem at GMail?

                                  1. 3

                                    What would you do if you were tasked with handling the spam problem at GMail?

                                    I’ve been following this debate for a long time, and have suffered from the problem myself (albeit it’s usually outlook.com that blocks me, not Gmail, and outlook.com has quite a bit of market share in Germany). Personally, to circumvent it, I pay for an SMTP relay. This means that my mail server directs all outgoing mail to that SMTP relay server, which then does the real delivery. Since many people (mostly businesses) use that SMTP relay server, it has a high volume and mail usually goes straight to the Inbox.
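
                                    In case it helps anyone, the forwarding itself is only a few lines of configuration. Here is roughly what it looks like in Postfix’s main.cf; the relay hostname and the credential file path are placeholders for whichever provider you pay:

                                      relayhost = [smtp.relay-provider.example]:587
                                      smtp_sasl_auth_enable = yes
                                      smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
                                      smtp_sasl_security_options = noanonymous
                                      smtp_tls_security_level = encrypt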

                                    I acknowledge this is a workaround. The other workaround is to pay Google and have Google’s staff fix the problem. But I think there’s a possibility for a real solution (apart from the best solution, which is to fix the SMTP protocol itself). What I see is that it’s almost always low-volume senders that are self-hosting who have the delivery problems. Based on that, I suggest the following:

                                    People who self-host mail should form groups and create an SMTP relay server with a clean IP reputation. Every member of that group then has his/her mail server direct all outgoing mail to the group’s SMTP relay, which does the real delivery. As a consequence, the group SMTP relay has the cumulative volume of all group members and a much better standing in delivery. The bigger the group, the better.

                                    It’s required that someone administers that mail relay, most notably to prevent any member from sending spam (e.g. from a compromised account). Maybe rotate the administrative duty annually or so. Finally, from a German perspective, an association (Verein) would be a good candidate for the legal framework of such a collective SMTP relay. It’d be the legal entity to address in case of problems. It’d also allow some kind of charity “business” model: association members may use the SMTP relay for free, but non-members have to pay. In any case, only individuals are accepted as members. German Ehrenamt at work.

                                    The individual’s fight against GMail and similar companies is not winnable. My suggestion can give self-hosters a better standing while still allowing real self-hosting: it’s your SMTP server that accepts all the incoming mail, does the spam filtering, the sieve scripts, and whatever else you wish. It’s also your server that sends out the outgoing mail, with just one little twist: all outgoing mail is submitted to the group’s SMTP relay server, thereby joining the group SMTP server’s volume and reputation.

                                    1. 1

                                      People who self-host mail should form groups and create an SMTP relay server with a clean IP reputation. Every member of that group then has his/her mail server direct all outgoing mail to the group’s SMTP relay, which does the real delivery. As a consequence, the group SMTP relay has the cumulative volume of all group members and a much better standing in delivery. The bigger the group, the better.

                                      Sorry, but this is just really silly. I don’t want someone else to be running my mail server. If you do, you can already use one of the vapourware products for sending bulk email or whatnot (be careful, though, because a lot of it doesn’t actually implement SMTP correctly, and cannot do greylisting, for example, so, you’d probably only want to use it to send mail to Gmail and Outlook, still using your own IP address to send mail to anyone else).

                                      OTOH, I certainly wouldn’t mind contributing to a legal fund to sue Google and Google Mail, given their monopoly position on the corporate email front and their refusal to accept my mail under the false pretence of low domain reputation. That reputation was acquired merely because I’m a Gmail user myself and have been sending mail to my own account; they misclassified it as Spam and extended that misclassification to the rest of their userbase, too.

                                    2. 1

                                      What would you do if you were tasked with handling the spam problem at GMail?

                                      Maybe this, right?

                                      I mean, the current situation is kind of the worst of both worlds. Not only do we have an ossified protocol nobody can change, but it isn’t even giving us interoperability!

                                      However, I can only assume there’s been a slowly escalating cat-and-mouse game, and as more and more people use email powered by one of the big providers, it’s at least plausible to me that more spam than legitimate mail would be coming from self-hosted mail. Without working there I’m not sure how I could estimate what kind of spam they’re getting, and without knowing what kind of spam they’re getting I don’t know that they’re actually doing the wrong thing. Maybe the proper application of Bayes’ rule really should lead you to conclude that self-hosted mail is probably spam.

                                      I would hate it if that were true, but I don’t know how I would be able to tell if it wasn’t true.

                                      1. 1

                                        The biggest problem is that the whole Google Mail thing is a black box: there’s no way to know exactly why your mail is being rejected (e.g., it tells me that my domain has a low reputation, but doesn’t tell me why), and there’s no recourse for fixing the situation (e.g., there’s no way to appeal the misclassification of those cron emails that I sent from my own domain to my own Gmail account, which Google apparently deems to contribute to the low reputation).

                                        Google Search already has a solution for picking up new web properties very fast, whereas the rest of the players (e.g., Yandex) take ages to move a new site from Spam to non-Spam status and include it in their main index; there’s little reason this couldn’t be done for email. The fact that it’s the very small players that are most affected leads me to believe it’s merely a conflict of interest at play: they don’t want you or me to run our own mail servers; they want all of us to simply use Gmail and G Suite instead.

                                    1. 1
                                      $ ls -F
                                      Archive.zip  Maildir/     bin@         repos/       sandbox/
                                      
                                      • Archive.zip: created every few days by zipping up Maildir/
                                      • Maildir/: mail contents
                                      • bin@: symlink to repos/bin_files/ -> https://github.com/zg/bin_files
                                      • repos/: repositories I’ve cloned (including https://github.com/zg/dot_files, whose contents I’ve symlinked up to ~, e.g. ~/.zshrc points at ~/repos/dot_files/.zshrc; see the sketch below)
                                      • sandbox/: experimenting, so it’s a bag of dirt
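
                                      A rough sketch of the symlinking mentioned under repos/ above; the paths are the ones from the listing, and the loop is illustrative (it would also pick up entries like .git, so treat it as a sketch, not a dotfile manager):

                                        from pathlib import Path

                                        home = Path.home()
                                        dotfiles = home / "repos" / "dot_files"

                                        # Link each dotfile up to ~, e.g. ~/.zshrc -> ~/repos/dot_files/.zshrc
                                        for target in dotfiles.glob(".*"):
                                            link = home / target.name
                                            if not link.exists() and not link.is_symlink():
                                                link.symlink_to(target)
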
                                      1. 3

                                        GitHub is changing. It is important to understand that GitHub is moving away from being a Git Repository hosting company and towards a company that provides an entire ecosystem for software development.

                                        Hasn’t this always been the case?

                                        1. 5

                                          No. If you look back to when Git was first becoming popular (around 2008), the alternatives to GitHub for hosting a Git repository were very cumbersome. Using pull requests instead of sending patches was part of the draw, but the main thing was being able to use Git without putting up with (for instance) the terrible user interface of Rubyforge.

                                          1. 6

                                            But a pull request isn’t part of Git, so I think my postulate still holds true.

                                            1. 6

                                              What I said was that pull requests were a small part of the draw, and the main thing was being able to host a git repository without dealing with apache or sourceforge.

                                              1. 1

                                                Is it possible to get actual numbers from some kind of VCS server log from 11 years ago?

                                                Did you know Fossil is 12 years old? http://fossil-scm.org/home/timeline?c=a28c83647dfa805f I just found out.

                                                1. 1

                                                  I’m having a hard time seeing any connection between what you said and what I said. Wrong thread, maybe?

                                                  1. 1

                                                    i get your lack of sight on account of my lack of clarity so here’s some of that:

                                                    when Git was first becoming popular (around 2008)

                                                    … as a rhetorical point, boils down to a dispute between you and zg regarding these like long-range ecosystemic benefits (and the pull request thing is kind of an aside - you are in agreement more than you’re in disagreement, imho, and causality is not inferrable about why git and github pulled ahead, is it? it’s pretty contingent)

                                                    is it possible to get actual numbers

                                                    this refers to numbers about popularity

                                                    otherwise talking about some farfagnugen ecosystem-level obscurities is kind of pointless

                                                    i mean zg kind of just said rhetorically that he doesn’t agree with the fossil guy, and you kind of just said that one time in history one thing happened once. so i figured that having some actual rigorous data would let us come to some kind of conclusion. i know it’s not very important or interesting, but i was just curious, and i feel like there’s a slim chance that some literal data on vcs usage might exist, and that it would settle a lot of these “does the ecosystem come before the theoretical innovation in VCS design, or the chicken before the egg, or what?” types of questions. since they take place in an ahistorical vacuum otherwise. doy.

                                          2. 1

                                              Ya, pretty much. I think the author missed the point of GitHub. It was never really about Git, beyond the fact that Git appeared to be in the lead at the time, plus perhaps some preference on the founders’ part.

                                            The value proposition is everything around supporting the Git workflow.

                                            1. 2

                                              A few things:

                                              1. Fossil exists in contradiction to the value proposition of “everything supporting git,” to solve problems that aren’t yet solved…

                                              2. Fossil is NOT git, in the same sense that, once upon a time, GNU was supposed to be NOT Unix…

                                              3. Literally, GitHub invited the guy AS the Chief Point-Misser in a special critical capacity.

                                                4. Everything around supporting the git workflow is a value proposition – but only to the business supporting “everything around the git workflow”! …

                                                4.1 (continuing) … – but that’s a proposition about the ecosystem, NOT about the conceptual framework of what it means to “do VCS stuff.”

                                                I explained this to epilys, but also wanted to point out to you that the author probably “missed the point” of GitHub intentionally, whereas you missed precisely that point…

                                              1. 1

                                                If you say so.

                                                1. 0

                                                  aren’t you saying that GitHub matters but VCS does not?

                                                  aren’t you saying that Git is irrelevant and the whole thing should just be called “Hub?”

                                                  when you say “it was never really about Git… etc., etc.,” what do you mean by “it,” if not GitHub?

                                                  aren’t you just saying “value proposition” in the hopes that everybody forgets that GitHub is a “value-proposing business”… running on a vcs called GIT?

                                              2. 1

                                                I disagree. I’ve been a Github user since 2009ish and I used it 90% for “hosting a git repository” - that was in addition to hosting my own git repos via ssh/gitolite, so also some mirroring.

                                                When my company paid for github, it was to have a git repo. And pull requests, but nothing else. And I simply don’t believe I’m the outlier here. Sure, webhooks were nice but that’s the extent of any added benefit there.

                                              3. 1

                                                #include <jolly-sarcasm-compiler.h>

                                                I don’t know! But let me look in a BOOK or a TECHNICAL GUIDE at the very least - oh here’s one…

                                                https://lobste.rs/s/v4jcnr/technical_guide_version_control_system#c_bolhkj

                                                When you say “always” do you mean “since we moved away from mainframes?”

                                              1. 2
                                                1. 1

                                                  If a small business needs a block of IP addresses, couldn’t they get one through Amazon Web Services or Google Cloud?

                                                  1. 4

                                                    You are absolutely able to continue renting IPv4 subnets from various providers. And the big landlords will probably end up owning more IPv4 space as they consolidate their grip and people cash out.

                                                    This, however, marks the beginning of the end of small businesses owning their IP space outright.

                                                    1. 1

                                                      Most ISPs will be able to get you a /29, /28, or /27 or something like that for a nominal monthly fee… Brokers will also still sell you large blocks for large one-off prices (12-20 USD per IP).

                                                    1. 2

                                                      What is PF?

                                                      1. 3

                                                          Packet Filter (OpenBSD’s firewall)

                                                        https://www.openbsd.org/faq/pf/

                                                        1. 1

                                                          Thank you!

                                                      1. 4

                                                        I’m in a final two-week push to deliver Magento 2 to the company so I’ll be a big ball of stress trying to hammer out some features.

                                                        1. 4

                                                          Good luck, you got this!

                                                          1. 1

                                                            Thank you!

                                                        1. 1

                                                          I’ve shifted my focus to reading about design this year, and it may take me through the rest of the year and into 2020 before I actually complete a lot of the material I’m planning to read.

                                                          I think it’s important to understand the designer’s perspective on problems they face. How they think, work, and produce output is important because they’re the ones who are leading the change in this world. Right now, I think technology is an implementation detail of the designer’s vision. By the end of the year, I’ll know if that theory still stands.

                                                          There are three broad categories for the books I’m planning to read: thinking, process, and design skills. They’re all found, along with a description for each, on this page.

                                                          1. 5

                                                            San Francisco Museum of Modern Art visit, while the René Magritte exhibit is still going on. It may sound unusual for an engineer like me to visit an art museum, but I go in the hope of learning at least one new thing from all that exposure to things I’m completely unfamiliar with.