1. 3

    Starting the build of my new 3D printer, a Voron 2.2 350mm.

    1. 8

      I find it odd that CEO Super-Secure didn’t change their password in Slack after the widely publicized 2015 breach, even if they didn’t get a notice from Slack that they were included.

      1. 3

        My thoughts exactly, especially since he totally threw out $5K of computer equipment…

      1. 4

        I’m not actually a fan of this change. I suspect that a thousand thousand repositories with useful bits of code, ready to be read and re-used, will go dark.

        That’s a shame.

        1. 8

          I stopped using GitHub years ago because I couldn’t have private repos for free. I’m sure most people who want to have private repos already do somewhere else.

          1. 7

            A lot of those repositories didn’t have licenses, so using that code would be dubious from a legal standpoint. If a repository did have a license, it’s likely the author wouldn’t have made it private.

            1. 1

              but you can still read unlicensed code and heavily lean on it while “rewriting” it.

              1. 1

                Still seems legally dubious. According to Harvard Law School’s Copyright Basics, that is copyright infringement. Specifically:

                1. create a new work derived from the original work (for example, by translating the work into a new language, by copying and distorting the image, or by transferring the work into a new medium of expression)
                1. 3

                  legally dubious but impossible to actually get sued for

                  1. 1

                    Legally dubious, morally wrong.

                    1. 1

                      there is no moral basis for copyright

                      1. 1

                        Does a person not have a right to the product of their work?

                        1. 1

                          yes, nobody should take their code away from them

                          1. 1

                            With a right to the product of your labor, you have the right to keep control of the direct product of your labor.

                            1. 1

                              true, nobody can force you to put your code on github

                              1. 1

                                So you think once you share something in any way, you lose all moral rights to that work?

                                1. 1

                                  having exclusive dominion over an idea has no more legitimacy than having exclusive dominion over a plot of land. we might decide that certain rules are for the good of society, but if those rules are idiotic you have no moral obligation to follow them.

                                  1. 1

                                    So you can have exclusive dominion over a chair you make, but not a website you build? What about a song you perform?

                                    1. 1

                                      having a degree of personal property is a sensible rule, so it would be wrong to steal someone’s chair in most cases. preventing people from making copies of something at no cost to you crosses the line into unjust power. copyright laws were never justified on the basis of morality: they were always justified on the basis that they would incentivize the creation of new works. maybe a 10-year copyright on books makes sense as a way to incentivize publishers to produce hard copies of a book, but that’s not a question of morality.

                                      this is a good lecture: https://archive.org/details/Dr.RichardStallmanCopyrightVs.Community

                                      1. 1

                                        So you think that once you record a song with the purpose of selling it, there are no moral problems with someone else coming along and sharing it for free?

                                        1. 1

                                          of course not, why would there be

                                          1. 1

                                            Because the creator expects as a term of his creation that he will derive benefit in the form of money from his effort. Therefore by copying without permission, you are stealing what was no less a product than a chair.

                                            1. 1

                                              so the injury is done when someone forms an unreasonable expectation. maybe if someone reads this thread they will be saved from that :)

                                              1. 1

                                                There is a natural right to property, so it’s not unreasonable.

                                                1. 1

                                                  no there isn’t and yes it is

                                                  1. 2

                                                    I’m sorry, I simply believe in the right to property, including intellectual, and the right to things you produce, even if the cost of copying is closer to zero than ever before.

                                                    1. 0

                                                      do i have a right to prevent you from wearing your hair like mine?

                                                      1. 1

                                                        I don’t know anything about hair. I just wash it. There is some hair soap thing involved. That’s it. So I can’t answer your question.

                                                        1. 1

                                                          oh okay

                                                    2. 1

                                                      Partly I do believe that you have a right to your labor, not just the product of your labor. Thus you have a right to your music video, even if the copying of that music video is free. You have a right to your code, even if it’s on github. You are morally in the wrong if you steal, even when that stealing doesn’t detract from the original work at all.

                                                      The natural right of property by the way emerged in the medieval period, and is the basis of all modern civilizations. It is the reason we have the capabilities we have today, and without it, the world would be in a worse place.

                                                      1. -1

                                                        wanna know why else we have the capabilities we have today? slavery.

            1. 2

              This post has a bizarre mismatch of crypto primitives, and I can honestly say I’ve never seen a system that uses both DES and SHA-512 at the same time. I’d stay very far away from this. Maybe check out Tink from Google.

              1. 1

                There is no way in heck that Linus will merge some DIY home-rolled crypto code into the kernel.

                1. 11

                  It seems like you may not recognize the author. I would typically agree with you at first glance, but given who it is and what it is, I wouldn’t be surprised if it got merged.

                  1. 8

                    That’s a good point, but it’s missing a key detail. I’ll add that the author did WireGuard, which has had good results in both formal verification and code review.

                  2. 7

                    Where else is kernel crypto code rolled?

                      1. 2

                        High praise from Linus!

                      2. 2

                        Why not? How would Linus even know if some crypto code was DIY nonsense?

                        (The subtext of these commits from Jason is that the existing kernel crypto APIs are not particularly good, IMO.)

                      1. 5

                        Is this legal in Europe? In Australia, if not being tracked were legally considered a “common law right,” it wouldn’t be possible to opt out of it.

                        1. 7

                          I think we need to wait and see, as the GDPR goes into effect on May 25 and a number of practices like this one will probably be challenged legally. I personally feel this give-your-consent-or-so-long approach is not in the spirit of the law.

                          1. 2

                            If it’s not legal, they’ll make it legal and sugar-coat it with GDPR in a way that’s impractical or infeasible for users.

                            I hope Facebook users can combat this with addons, but as most users are mobile users, they surely lack the addons or the technical know-how to set them up.

                            Just opt out of Facebook already.

                            1. 10

                              I hope Facebook users can combat this with addons

                              At some point, the person being abused has to acknowledge that they are being abused, and choose to walk away.

                              1. 3

                                Yeah, just opt out. But sadly there are people who, say, expatriated and have no better way to stay in touch with old friends.

                                Until a viable replacement comes along, which may never happen, I think it’s a nice hope that they can find a way to concentrate on their use case without all the extra baggage.

                                1. 14

                                  I am an expat.

                                  I manage to keep in contact with the friends that matter, the same as I did when I didn’t use Facebook in a different state in my home country.

                                  If they’re actually friends, you find a way, without having some privacy raping mega-corp using every conversation against you.

                                  1. 3

                                    Agreed, I don’t buy the argument that Facebook is the only way to keep in touch from afar.

                                    I’m an expat, and I have regular healthy contact with my friends and loved ones from another continent, sharing photos and videos and prose. I have no Facebook account.

                              2. 2

                                I hope Facebook users can combat this with addons

                                Then this will happen: https://penguindreams.org/blog/discoverying-friend-list-changes-on-facebook-with-python/

                                Unfriend Finder was sent a cease and desist order and chose not to fight it. I made my own Python script that did the same thing, and ironically, Facebook’s changes that fixed the Cambridge Analytica issue broke my plugin. It stopped 3rd parties, yes, but it also kept developers from having real API access to our own data.

                                I also wrote another post about what I really think is going on with the current Facebook media attention:

                                https://fightthefuture.org/article/facebook-politics-and-orwells-24-7-hate/

                              3. 1

                                You’re not forced to use Facebook. It looks like they’re following GDPR and capturing consent. It seems the biggest issue is the bundling of multiple things into one consent and not letting folks opt in or out individually.

                              1. 23

                                GitHub URLs are pretty badly designed.

                                For example, /contact is their contact page, and /contactt is a user profile.

                                Apparently, there’s a hardcoded list of ”reserved words” in the code, and when someone adds a new feature, they add the word/path segment there and check that it’s not taken by a user.

                                So it could perhaps be the case that they’re adding some feature related to malware?
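
                                If it does work that way, the check itself is probably as simple as something like this (a hypothetical sketch, not GitHub’s actual list or code):

                                    # Hypothetical reserved-word check: every top-level feature path
                                    # has to be kept out of the username pool.
                                    RESERVED_SEGMENTS = {"contact", "about", "settings", "explore", "malware"}

                                    def username_is_available(name, taken_usernames):
                                        name = name.lower()
                                        return name not in RESERVED_SEGMENTS and name not in taken_usernames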

                                1. 13

                                  That could very well be the case – and I’d be totally fine with that. I understand being coded into a corner, and wanting to fix things for the greater good at the expense of a few users.

                                  I just can’t figure out why, for the sake of “privacy and security”, they don’t want to tell me.

                                  1. 16

                                    I think this is absurd behavior on GitHub’s part, and you’re right to be upset by it.

                                    Since you do seem curious, I have a guess why they’re being so evasive, and it’s pretty simple: They’re a large organization. The person you’re talking to would probably need to get approval from both legal and PR teams to tell you about their product plan before it’s launched. I have no information on how busy GitHub’s lawyers and PR people are, but I would expect an approval like that to take a few weeks. Based on what they told you about the timeframe, it sounds like they want to launch their feature sooner than that.

                                    What I’d really like to know is whether this is a one-off, or whether they’ve done it to other people before. It seems like their URL scheme will require it pretty frequently…

                                    1. 7

                                      The person you’re talking to would probably need to get approval from both legal and PR teams to tell you about their product plan before it’s launched.

                                      Which is why I didn’t single out the support representative that contacted me; they clearly were not in the decision process for any of this, and I don’t want to cause them any undue grief/trouble past my first email reply asking for clarification.

                                      To be clear: I don’t really care about the malware username, other than it’s a pretty cool name. I’m more interested in the reason behind the forced rename.

                                      Lots of people (read: salty News of Hacker commenters) say it’s obvious (wanting to reserve the /malware top level URL) and call me dumb for even asking, but no one has given me any evidence other than theories and suppositions. Which is great! I love thinking and hypothesizing.

                                      1. 5

                                        I don’t have any documented evidence other than anecdotal, but when I worked at a similar company with an almost identical URL structure this was one of the hardest parts of launching a new top-level feature. It turns out recognizable words make for good usernames… so it’s almost impossible to find one that’s still available when working on a new feature. The choice ends up being between picking a horrible URL and displacing one user to make it easier to find.

                                        It’s also worth noting that GitHub has a habit of being very secretive about what they’re working on - it’s almost impossible to get information about known bugs which have been reported before, let alone information about a potential new feature.

                                        I would be willing to bet that this is being done for something we’ll hear about in the next year or two.

                                  2. 11

                                    We made a team that was just the Unicode pi symbol and GitHub assigned us the URL /team/team.

                                    1. 4

                                      That’s a great unicode hack.

                                    2. 11

                                      The curse of mounting user paths directly to /. When in doubt, always put a namespace route on it.
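
                                      Something like this, if they had started with a namespaced scheme (an illustrative Flask sketch, not GitHub’s actual stack):

                                          from flask import Flask, Blueprint

                                          app = Flask(__name__)

                                          # User pages live under /u/..., so they can never collide with a
                                          # feature page mounted at the root.
                                          users = Blueprint("users", __name__, url_prefix="/u")

                                          @users.route("/<username>")
                                          def profile(username):
                                              return f"profile for {username}"

                                          app.register_blueprint(users)

                                          @app.route("/contact")  # feature pages keep the root namespace to themselves
                                          def contact():
                                              return "contact us"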

                                      1. 6

                                        That was my thought as well. I would imagine they want it as a landing page for some new feature or product.

                                      1. 11

                                        Some people want easy access to the benefits of containerization, such as resource limits, network isolation, privsep, capabilities, etc. Docker is one system that makes all of that relatively easy to configure and utilize.
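
                                        For example, with the Docker SDK for Python (a sketch assuming the docker-py API and a local daemon, just to show the knobs):

                                            import docker  # Docker SDK for Python (pip install docker)

                                            client = docker.from_env()

                                            # One-off container with the knobs mentioned above: a memory cap,
                                            # no network, all capabilities dropped, and a non-root user.
                                            output = client.containers.run(
                                                "alpine",
                                                ["sh", "-c", "echo hello from a constrained container"],
                                                mem_limit="256m",        # resource limit
                                                network_disabled=True,   # network isolation
                                                cap_drop=["ALL"],        # capabilities
                                                user="1000:1000",        # privsep-ish: don't run as root
                                                remove=True,
                                            )
                                            print(output.decode())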

                                        1. 4

                                          Docker is one system that makes me wish Solaris Zones had taken off; Zones had all of that, but without the VM.

                                          1. 15

                                            What VM? Docker only requires a VM if you run it “non-natively”, on OS X or Windows.

                                            1. 1

                                              Docker isn’t running in a VM on Linux machines. It uses LXC.

                                              1. 10

                                                Docker hasn’t used LXC on Linux in a while. It uses its own libcontainer which sets up the Linux namespaces and cgroups.
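
                                                Namespaces are just a syscall away; here’s a minimal illustration of that half of the idea, calling unshare(2) via ctypes (needs root; cgroups are a separate mechanism configured through /sys/fs/cgroup):

                                                    import ctypes, socket

                                                    CLONE_NEWUTS = 0x04000000  # flag for a new UTS (hostname) namespace

                                                    libc = ctypes.CDLL("libc.so.6", use_errno=True)
                                                    if libc.unshare(CLONE_NEWUTS) != 0:
                                                        raise OSError(ctypes.get_errno(), "unshare failed (needs CAP_SYS_ADMIN)")

                                                    socket.sethostname("sandbox")   # only visible inside the new namespace
                                                    print(socket.gethostname())     # -> sandbox; the host keeps its own name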

                                            2. 1

                                              This is the correct answer. It’s a silly question. Docker has nothing to do with fat binaries. It’s all about creating containers for security purposes. That’s it. It’s about security. You can’t have security with a bunch of fat binaries unless you use a custom jail, and jails are complicated to configure. You have to do it manually for each one. Containers just work.

                                              1. 9

                                                security

                                                That is definitely not why I use it. I use it for managing many projects (go, python, php, rails, emberjs, etc) with many different dependencies. Docker makes managing all this in development very easy and organized.

                                                I don’t use it thinking I’m getting any added security.

                                                1. 3

                                                  I don’t use it thinking I’m getting any added security.

                                                  The question was “Why would anyone choose Docker over fat binaries?”

                                                  You could use fat binaries of the AppImage variety to get the same, and probably better organization.

                                                  Maybe if AppImages could be automatically restricted with firejail-type stuff they would be equivalent. I just haven’t seen many developers making their apps that way. Containers let you deal with apps that don’t create AppImages.

                                                  1. 1

                                                    Interesting. So in effect you wish to “scope” portions for “protected” or “limited” use in a “fat binary”. As opposed to the wide-open scope implicit in static linking?

                                                    So we have symbol resolution by simply satisfying an external, resolution by explicit dynamic binding (dynload call), or chains of these connected together? These are all the cases, right?

                                                    We’d get the static cases handled via the linker, and the dynamic cases through either the dynamic loading functions or possibly wrapping the mmap calls they use.
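
                                                    For the explicit dynamic-binding case, the shape of it is easy to picture (an illustrative ctypes sketch, nothing more):

                                                        import ctypes

                                                        # Explicit dynamic binding: dlopen() the library at run time and
                                                        # resolve the symbol yourself, instead of letting the static linker
                                                        # satisfy the external reference at build time.
                                                        libm = ctypes.CDLL("libm.so.6")
                                                        libm.cos.restype = ctypes.c_double
                                                        libm.cos.argtypes = [ctypes.c_double]
                                                        print(libm.cos(0.0))  # -> 1.0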

                                                  2. 1

                                                    That sounds genuine.

                                                    So I get that it’s one place, already working, to put all the parts. I buy that.

                                                    So in this case, it’s not so much Docker for Docker’s sake as it is a means to an end. This answers my question well, thank you. Any arguments to the contrary with this? Please?

                                                    1. 5

                                                      This answers my question well, thank you. Any arguments to the contrary with this? Please?

                                                      While I think @adamrt is genuine, I’m interested in seeing how it pans out over the long run. My (limited) experience with Docker has been:

                                                      • It’s always changing and hard to keep up with, and sometimes it changes in backwards-incompatible ways.
                                                      • Most container builds I come across are not reproducible, depending on the HEAD of a bunch of deps, which makes a lot of things more challenging.
                                                      • Nobody really knows what’s in these containers or how they were built, so they are big black boxes. One can open one up and poke around, but it’s really hard to tell what was put in it and why.

                                                      I suspect the last point is going to lead to many “we have this thing that runs but don’t know how to make it again so just don’t touch it and let’s invest in not touching” situations. People that are thoughtful and make conscious decisions will love containers. People inheriting someone’s lack of thoughtfulness are going to be miserable. But time will tell.

                                                      1. 1

                                                        Well, these aren’t arguments to the contrary, but they are accurate issues with Docker that I can confirm as well. Thank you for detailing them.

                                                  3. 5

                                                    I think there’s something more to it than that. On Solaris and SmartOS, you can have security/isolation with either approach. Individual binaries have privileges, or you can use Zones (a container technology). Isolating a fat binary using ppriv is, if anything, less complicated to configure than Zones. Yet people still use Zones…

                                                    1. 4

                                                      I thought it was about better managing infrastructure. Docker itself runs on binary blobs of privileged or kernel code IIRC (I don’t use it). When I pointed out its TCB, most people talking about it on HN told me they really used it for management and deployment benefits. There was also a slideshow a year or two ago showing security issues in lots of deployments.

                                                      What’s the current state of security versus VMs on something like Xen or a separation kernel like LynxSecure or INTEGRITY-178B?

                                                      1. 5

                                                        Correct. It is unclear what the compartmentalization aspect of containers specifically contributes to security.

                                                        I’ve implemented TCSEC Orange Book Class B2/B3 systems with labelling, and worked with Class A hardware systems that had provable security at the memory cycle level. Even these had intrusion evaluations that didn’t close, but at least the models showed the bright line of where the actual value of security was delivered, as opposed to the loose, vague concept of security being offered as a defense here.

                                                        FWIW, the actual objective of the framers of that security model was a program-verifiable, object-oriented programming model to limit information leakage in programming environments that let programs “leak” trusted information to trusted channels.

                                                        You can embed crypto objects inside an executable container, and that would deliver a better security model without additional containers, because then you deal with key distribution without the additional leakage from the intervening intra-container references that would otherwise be necessary.

                                                        So again, I’m looking for where the beef is, instead of the existing marketing buzz that makes people feel good/secure because they use the stuff that’s cool at the moment. I’m all ears for a good argument for all these things, I really am, … but I’m not hearing it yet.

                                                        1. 1

                                                          Thanks to Lobsters, I’ve already met people who worked at capability companies such as those behind KeyKOS and E. Then I heard from one from SecureWare who had eye-opening information. Now, someone who worked on the MLS systems I’ve been studying for a long time. I wonder if it was SCOMP/STOP, GEMSOS, or LOCK, since your memory-cycle statement is ambiguous. I’m thinking STOP, at least, since you said B3. Do send me an email to the address in my profile, as I rarely meet folks knowledgeable about high-assurance security, period, much less people who worked on systems I’ve studied for a long time at a distance. I stay overloaded, but I’ll try to squeeze some time into my schedule for those discussions, especially on old versus current.

                                                        2. 2

                                                          thought it was about better managing infrastructure.

                                                          I mean, yes, it does that as well, and you’re right, a lot of people use it just for that purpose.

                                                          However, you can also manage infrastructure quite well without containers by using something like Ansible to manage and deploy your services without overhead.

                                                          So what’s the benefit of Docker over that approach? Well… I think it’s security through isolation, and not much else.

                                                          Docker itself runs on binary blobs of priveleged or kernel code IIRC (dont use it).

                                                          Yes, but that’s where capabilities kick in. In Docker you can run a process as root and still restrict its abilities.

                                                          Edit: if you’re referring to the dockerd daemon which runs as root, well, yes, that is a concern, and some people, like Jessie Frazelle, hack together stuff to get “rootless container” setups.

                                                          When I pointed out its TCB, most people talking about it on HN told me they really used it for management and deployment benefits. There was also a slideshow a year or two ago showing security issues in lots of deployments.

                                                          Like any security tool, there’s ways of misusing it / doing it wrong, I’m sure.

                                                        3. 4

                                                          According to Jessie Frazelle, Linux containers are not designed to be secure: https://blog.jessfraz.com/post/containers-zones-jails-vms/

                                                          Secure container solutions, such as Solaris Zones and FreeBSD Jails, existed long before Linux containers, yet there wasn’t a container revolution.

                                                          If you believe @bcantrill, he claims that the container revolution is driven by developers being faster, not necessarily more secure.

                                                          1. 2

                                                            According to Jessie Frazelle, Linux containers are not designed to be secure:

                                                            Out of context it sounds to me like you’re saying “containers are not secure”, which is not what Jessie was saying.

                                                            In context, to someone who read the entire post, it was more like, “Linux containers are not all-in-one solutions like FreeBSD jails, and because they consist of components that must be properly put together, it is possible that they can be put together incorrectly in an insecure manner.”

                                                            Oh sure, I agree with that.

                                                            Secure container solutions existed long before Linux containers, such as Solaris Zones and FreeBSD Jails yet there wasn’t a container revolution.

                                                            That has exactly nothing (?) to do with the conversation? Ask FreeBSD why people aren’t using it as much as Linux, but leave that convo for a different thread.

                                                            1. 1

                                                              That has exactly nothing (?) to do with the conversation?

                                                              I’m not sure how the secure part has nothing to do with the conversation, since the comment this is responding to is you saying that security is the reason people use containers/Docker on Linux. I understood that as you implying that security was the game changer. My experience is that it has nothing to do with security; it’s about developer experience. I pointed to FreeBSD and Solaris as examples of technologies that had secure containers long ago, but they did not have a great developer story. So I think your belief that security is the driver for adoption is incorrect.

                                                              1. -1

                                                                Yes. Agreed not to discuss this more on this thread, … but … jails are both too powerful and not powerful enough at the same time.

                                                              2. 2

                                                                Generally when you add complexity to any system, you decrease its scope of security, because you’ve increased the footprint that can be attacked.

                                                          1. 12

                                                            I disagree with this post. I’m also a professional and my time is valuable too. However, two of the three suggestions they made would take significant time away from me and my team to evaluate a candidate who potentially can’t even write code to solve a simple task. Part of an interview process is to filter people out before it gets to that point, so we’re not wasting employees’ time.

                                                            1. 10

                                                              I think coding challenges are optimised for candidates who are looking for a job. I’ve been in that boat once, and when you’re actually looking for a job your “valuable time” is of course best spent trying to get said job (by doing a coding challenge or whatever else).

                                                              Most of the time, though, I’m being recruited. I’m not going to do a coding challenge for a recruiter.

                                                              1. 1

                                                                Taking an entire day to work with them (unpaid) still strikes me as really weird.

                                                                1. 1

                                                                  Think of it as a great way to find out if these are people you would want to work with every day before you actually have to do that.

                                                              2. 8

                                                                I disagree with this post. I’m also a professional and my time is valuable too.

                                                                I have the same problem with it as a hiring manager (how do I screen out socially capable but inept developers?), and I share the author’s opinion when I’m the candidate: this tells me nothing about why I want to work for you. Each side wants the other to make an unequal commitment, so it amounts to a single-round game with that candidate. As a candidate with a choice of games, I don’t want to play this one, and it signals disregard for/trivialization of the candidate’s investment and work history. For the hiring side, this plays out multiple times and there is investment in posting, screening, reviewing, etc., so regardless of this round my total investment is higher but not visible.

                                                                So what have I personally done? When I’m the candidate, I refuse to do the coding challenge and say, like the author, check my repos and talk to my references (unless the problem catches my interest, then I might). I have that luxury. When I’m the employer? Up front I explain how it works and what timeline they can expect as part of a 15-minute phone screen for basic info with someone technical. Then I arrange a round of 45-60 minute calls: a technical call with someone on the team and a social/technical call where I try to get them to talk with me about their work in detail and many of the non-code aspects of their job, habits, tools, designs, etc. They’ll probably have a call with my manager or another peer. Then, if I’m satisfied but not sure, I bring them in or have a video chat and they do a quick coding test. This wastes less of their time, makes my commitment visible, and seems to work but it is not a scalable process.

                                                                1. 7

                                                                  I have a portfolio and some github projects. This is where most of my hiring emails come from. So when a company doesn’t spend the time to check that out, and they want me to implement some trivial thing that doesn’t generate value for them, I don’t have time for them either.

                                                                  I’ve had companies pay me to be a consultant for a week before giving me an offer, which was a nice way to learn about who they are. On the other hand, sometimes companies give me job offers now before I know anything about them, and I have to pump the brakes and talk to more of them before I feel comfortable going into something long-term.

                                                                  1. 1

                                                                    …evaluating a candidate that potentially can’t even write code to solve a simple task.

                                                                    In the post, they talk about how they have a blog, numerous GitHub repositories, etc. At that point it should be obvious they can code. The interview then should be more about “fit” or whatever, IMHO.

                                                                    1. 5

                                                                      They aren’t the only candidate we would interview and in my opinion, it is better to have a consistent process. If every candidate had a similar level of public presence to evaluate then maybe that would be different.

                                                                      1. 7

                                                                        So, again IMHO, at that point you’re basically throwing out someone with passion and talent due to bureaucracy. If I come to you with decades of experience/conference talks/published papers/lots of open source software to review/whatever…and you ask me to spend 30 minutes doing trivial work, you’re basically implying that I’m lying to you and/or that your company cares more about process than people.

                                                                        Again, this is IMHO.

                                                                        1. 7

                                                                          I’m saying that you’re not the only person applying for the job and I need to treat everyone the same, so we’re not giving preferential treatment.

                                                                          1. 3

                                                                            I know, but…maybe you should give preferential treatment to people who are obviously better candidates. :)

                                                                            1. 9

                                                                              Some of the best engineers I know have zero public presence. Some of them are extremely humble and don’t like public flashiness. Some of them have families and maintain a strong work-life balance with non-tech stuff. Never assume those with a strong public presence are giving the world the whole picture. You still want to drill into the parts of their personality that they don’t highlight.

                                                                              1. 4

                                                                                Why does having a public portfolio make someone an obviously better candidate? What makes a candidate obviously better? Arbitrary social metrics? Ability to speak quickly about technical topics? Ability to talk bullshit without it sounding like bullshit?

                                                                                How do you know a candidate is obviously better without having them go through the same process and pipeline?

                                                                                1. 3

                                                                                  How do you know a candidate is obviously better without having them go through the same process and pipeline?

                                                                                  If the code in their GitHub account is as good or better than what would be tested by my coding test, why subject them to that? Ask harder questions, ask questions about the things that the coding test wouldn’t cover (including “soft” things that would judge a good fit), etc.

                                                                                  Why does having a public portfolio make someone an obviously better candidate?

                                                                                  Which surgeon would you rather have? The one nobody’s ever heard of, or the one who has published articles on the relevant surgical procedures, who goes to conferences to learn more about surgery, who obviously is passionate enough about medicine that they would study it even if they weren’t getting paid?

                                                                            2. 8

                                                                              There are, unfortunately, a lot of liars out there. I won’t say that industry hiring practices are anywhere near ideal, but as an interviewer it was astonishing how many people with great optics were incapable of coding. Someone would speak with great passion about all their projects and yada yada, and I’d be fairly convinced they could do the job, then I’d ask the most basic question imaginable. Splat.

                                                                              I guess it helps if you choose to believe the test isn’t meant for you, but for all the other pretenders.

                                                                              1. 7

                                                                                Even more surprising to me is that people who can’t actually code are somehow able to sustain careers as developers. It started making a lot of sense to me why good developers are in such high demand after I had the opportunity to do some interviewing and found that a frustratingly large number of applicants can’t actually code, even if they look great on paper and are currently employed as developers.

                                                                                I think it’s incredibly risky to hire a developer without seeing code that they have written, be it from open source contributions or a coding test if necessary.

                                                                                1. 3

                                                                                  Onsite nerves can kick in. They sure as hell did for me. I hate whiteboarding and I lock up. Total brain freeze. That said, if it’s a basic one like writing a loop…well, they somehow lied their way into the onsite. Thing is, a take-home coding challenge can weed out those people pretty fast. If they do that and come in and fall flat on their face at the whiteboard, I don’t totally discount them. Anyway, there’s no perfect solution. There is always the potential to hire someone who is great at coding interviews and then sucks at real-world problems.

                                                                                  1. 2

                                                                                    This is exactly my company’s experience. Half of the candidates would just bomb out on the C++ test even though they have studied it at high school/college/university and worked with it at their job for 5-10 years. How?!? Because they were either doing Java, not C++, or they were actually managing a team that did C++ and never had to touch it themselves (Well since leaving school at least).

                                                                                    1. 1

                                                                                      What I don’t understand is why this is so hyper-specific to developers. You never hear UI designers talking about processes like this.

                                                                                      1. 6

                                                                                        Really? I’ve heard UI designers talk about it a lot.

                                                                            1. 15

                                                                              I’m sure everyone knows, but just in case we have new people around here who have never heard of it: http://www.linuxfromscratch.org/

                                                                              1. 4

                                                                                Linux From Scratch was one of the most educational things I’ve ever done. It gives a great understanding of how a Linux system is put together.

                                                                                1. 4

                                                                                  See also http://landley.net/aboriginal. Though it’s in some flux at the moment. The challenge with such projects is always keeping them up to date.

                                                                                  1. 2

                                                                                    Development of Aboriginal Linux has ended, replaced by mkroot.

                                                                                    That sounds quite final and not “in some flux”.

                                                                                    1. 1

                                                                                      For a learning project it’s not a big deal to run one script instead of another.

                                                                                  2. 2

                                                                                    Never knew there was a resource this great. Thanks. I am planning to learn Debian. This will help me a lot.

                                                                                  1. 13

                                                                                    Storing my key on the phone… Isn’t that exactly what we don’t want?

                                                                                    1. 6

                                                                                      I wouldn’t trust my phone to store secrets. LG patch it only about twice a year. Even Google only patch their phones once a month. And we have not even started talking about intentional backdoors… So for the same reason I don’t (any more) store GPG keys on my phone, I will not store SSH keys on my phone.

                                                                                      I was wondering, how does the SSH client talk to the phone?

                                                                                      1. 5

                                                                                        Your phone (at least for iOS) actually has pretty good secret storage. There was a great talk at BlackHat a few years ago about what Apple does: https://www.youtube.com/watch?v=BLGFriOKz6U

                                                                                        1. 7

                                                                                          Yes, also some Android devices have it too (TEE/SE). The thing is that, if the device has none of these, any app with enough privileges could read your keys… just like on your computer.

                                                                                          I wouldn’t claim my computer to be safer (okay, okay, I actually would), but this “second factor, put all the trust into a corporate-controlled, highly connected, often-stolen mobile device” approach doesn’t make anything better.

                                                                                          Long story short: use a smartcard!

                                                                                          1. 1

                                                                                            Smartcard++. It offers the convenience of having the keys on the device, while not having the keys on the device (at least on Android; I haven’t found a way to do smartcards on iOS)!

                                                                                            1. 2

                                                                                              Could you provide a link?

                                                                                              1. 1

                                                                                                OpenKeychain is what I am using, along with K-9 Mail and Password Store; the auth API is still a WIP. I should have specified that I am not doing SSH stuff yet, sorry if I got your hopes up! :D

                                                                                                1. 2

                                                                                                  Ah, this is the same setup as mine, but I’m using a YubiKey NEO (with NFC) to read the keys.

                                                                                      1. 14

                                                                                        Quite frankly, this is just a usability bug in GitHub. Clicking a line number should implicitly do the ‘y’ thing before appending #L to the URI.
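
                                                                                        For anyone unfamiliar: the ‘y’ shortcut just swaps the branch name in the URL for the commit SHA it currently points at, which you can approximate yourself from a local clone (hypothetical helper, assumes a GitHub-style blob URL):

                                                                                            import re
                                                                                            import subprocess

                                                                                            def permalink(url, repo_path="."):
                                                                                                # Pin a GitHub blob URL to the commit its ref currently points at,
                                                                                                # using a local clone. Naive: assumes the ref contains no slashes.
                                                                                                m = re.match(r"(https://github\.com/[^/]+/[^/]+/blob/)([^/]+)(/.+)", url)
                                                                                                if not m:
                                                                                                    raise ValueError("not a GitHub blob URL")
                                                                                                prefix, ref, rest = m.groups()
                                                                                                sha = subprocess.run(
                                                                                                    ["git", "rev-parse", ref],
                                                                                                    cwd=repo_path, capture_output=True, text=True, check=True,
                                                                                                ).stdout.strip()
                                                                                                return prefix + sha + rest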

                                                                                        1. 5

                                                                                          I agree that it’s a usability bug but I don’t think linking to a specific commit (the ‘y’ behavior) is desirable. It should link to the line you originally linked to at the current version (it may have moved, but we have that info in the git history), or if the line was deleted, should display “that line no longer exists, it last appeared in commit ab352cf”

                                                                                          1. 9

                                                                                            But the linker is not necessarily trying to link to a single line. Often they are trying to link to a larger context which may have disappeared by the current version even if the line they clicked on still exists. Or it could be the opposite. The line they linked may have disappeared but the overall aim of the link remains the same.

                                                                                            1. 6

                                                                                              Pretty sure you can’t identify a line that accurately through git history in any automated fashion. Git’s basic unit is the whole file object, the concept of lines being moved around or compared is all provided by higher-level tooling. That tooling still has problems with complicated scenarios.

                                                                                              1. 3

                                                                                                I think what cmm is trying to say is that the link should refer to the current latest commit. That way future viewers will see the content of the file in the context it was originally linked.

                                                                                                It might also be cool to think about a semantic linking scheme, following a piece of code through history. I have no idea how hard it would be to implement something like that.

                                                                                                1. 1

                                                                                                  Here’s my idea:

                                                                                                  Super hard. :-)

                                                                                                  1. 5

                                                                                                    Apologies for carrying this argument into multiple threads, but I think you are overstating this problem’s difficulty. Nobody seems to believe me that this is doable, so I quickly hacked up a solution.

                                                                                                    Clone https://github.com/Droogans/.emacs.d/ (the repo from the linked example) somewhere. I snooped the history and the original commit they linked to was 6ecd48 from 2014. Run fwd_lines.py 6ecd48 init.el 135 | less and scroll through. It will display both files with the corresponding lines marked with >>> <<<. You will see that we have correctly identified line 177 as the corresponding line in HEAD, 3 years later.

                                                                                                    In general, any time GitHub’s diff functionality would identify lines as the same, so will fwd_lines.py. Like I argued in my original thread, if this kind of identification wasn’t accurate enough for general usage, code reviews would be a lot more difficult than they currently are. And this is just using an off-the-shelf diff algorithm and only using the linked commit and HEAD. You can increase the precision by taking into account the entire history between those two commits, and you can add some error-tolerance by allowing lines to match w/ some noise. Finally, if there’s the occasional error, I would argue that’s better than the alternatives: linking arbitrary lines (currently) or linking stale results (‘y’).

                                                                                                    I haven’t exhaustively tested this so if anyone finds an interesting case where this fails, let me know.
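
                                                                                                    If anyone wants the gist without cloning anything, the core mapping step is basically just walking a diff’s equal blocks (a rough sketch of the idea, not the actual fwd_lines.py):

                                                                                                        import difflib

                                                                                                        def forward_line(old_text, new_text, old_lineno):
                                                                                                            # Map a 1-based line number in old_text to its counterpart in new_text
                                                                                                            # by walking the diff's "equal" blocks; returns None if the line is gone.
                                                                                                            old_lines = old_text.splitlines()
                                                                                                            new_lines = new_text.splitlines()
                                                                                                            matcher = difflib.SequenceMatcher(a=old_lines, b=new_lines, autojunk=False)
                                                                                                            target = old_lineno - 1
                                                                                                            for tag, i1, i2, j1, j2 in matcher.get_opcodes():
                                                                                                                if tag == "equal" and i1 <= target < i2:
                                                                                                                    return j1 + (target - i1) + 1
                                                                                                            return None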

                                                                                                    1. 1

                                                                                                      Nice demo! It was cool to see it run. I still want to make a case though for preferring “linking stale results” as you put it to either the current or your proposed option. I’ll provide two justifications for my position:

                                                                                                      a) You’re right that you can easily do as well as GitHub’s existing diff results. But there’s no reason to think that’s good enough. If I move a function twice in its history, it’s not really acceptable for early links to inside it to forever error out with a message that it was deleted. Showing a function to have been deleted in one set of lines and added back in another is fine for a diff presentation, but not a good fit for taking lines forward.

                                                                                                      b) Even in situations where you can take lines forward correctly, as @tome pointed out there are perfectly valid use cases where you want to show a link as it was at the time you wrote it. In fact, in my experience I rarely encounter situations where I’m referring to a set of lines of code without wanting to talk about what’s there. In that situation explanations get very confusing if the code changes out from under you. Even if you don’t agree with me that that’s the primary use case, I hope it’s reasonable that it is a use case. So perhaps we need two different sets of links, one for the specific version and one with “smart take-forward”. But then that gets confusing. If I had to choose I’d rather create just permalinks that don’t try to be smart, just link to the code I want to link to. Going from a specific version of a file to HEAD is pretty easy, so checking what the code looks like right now should, I argue, remain a manual process. That would be robust to the errors I described above, and it also gives me control to pick the precise tag or branch that I want to compare with (trunk vs release 1 vs release 2, etc.)

                                                                                                      1. 1

                                                                                                        If I move a function twice in its history it’s not really acceptable for early links to inside it to forever error out with a message that it was deleted.

                                                                                                        Solvable by taking into account the entire history rather than just the linked commit and HEAD, and in my first message I acknowledged that we can just display the commit linked to if the line was deleted. You lose nothing over ‘y’

                                                                                                        Even in situations where you can take lines forward correctly, as @tome pointed out there’s perfectly valid use cases where you want to show a link as it was at the time you wrote it.

                                                                                                        I don’t remember advocating for GitHub to remove the ‘y’ functionality, I just don’t think it’s the best default. It’s certainly not what I want most of the time when I link people to a particular line. Outside of a code review the historical perspective doesn’t matter that much to me

                                                                                                        So perhaps we need two different sets of links, one for the specific version and one with “smart take-forward”.

                                                                                                        This is already the case, except replace “smart take-forward” with “arbitrary nonsense”. All of GitHub’s links are relative to HEAD unless you explicitly ask

                                                                                                        1. 2

                                                                                                          We can agree that the current default is “arbitrary nonsense”. Beyond that we’ll have to agree to disagree about what the best default is.

                                                                                                          Falling back to ‘y’ on deletes is reasonable, yes. But here’s another example to show that this is a hard problem, with plenty of corner cases that a normal codebase is going to run into at some point (breaking all older links each time). Say we have an intervening commit that replaces:

                                                                                                          f(a, b, c);
                                                                                                          

                                                                                                          with

                                                                                                          f(a, b, c, false);
                                                                                                          f(a, b, c, true);
                                                                                                          

                                                                                                          What would you want your tool to do? I would just like to see the former. I would very much want my tools to not mislead me about what somebody meant when they sent me a link to the former.

                                                                                                          Hmm, perhaps the best of both worlds would be to show the lines highlighted in the specific commit with a call-out link up top to perform the smart carry forward. I don’t object to smarts, but unless they’re utterly reliable I don’t want to be forcibly opted in to them.

                                                                                                          1. 1

                                                                                                            What would you want your tool to do? I would just like to see the former.

                                                                                                            This indeed gets fuzzier if you allow lines to match with noise, but without that you would get what you want, because that line no longer exists in the source, so we fall back to the commit you linked at.

                                                                                                            Hmm, perhaps the best of both worlds would be to show the lines highlighted in the specific commit with a call-out link up top to perform the smart carry forward. I don’t object to smarts, but unless they’re utterly reliable I don’t want to be forcibly opted in to them.

                                                                                                            Probably a good compromise; I didn’t expect so much push-back on this idea. I don’t understand what you mean by “forcibly opted in”, though. Do you think you are currently being “forcibly opted in” by the current relative-to-HEAD linking scheme? When you link to sites other than GitHub, are you miffed that your browser doesn’t automatically replace your link with archive.org? I think relative-to-HEAD is a useful convenience, and my suggestion is in the vein of making it better, not ruining all of your specific-commit links.

                                                                                                            1. 1

                                                                                                              My mental model is that when I make simple actions like taking a step or clicking on a link, I expect the response to them to be simple. If I see a link to a file on github I expect it to go to HEAD. If I try to go to a link and it doesn’t exist I want to know that so that I can decide whether I want to go to archive.org. I would be miffed if I was automatically taken to archive.org. I wouldn’t be miffed if that happened because I chose to install an extension or turn on some setting to automatically be taken to archive.org.

                                                                                                              As a second example, the autocomplete on my phone has lately gotten smart enough to enter some uncanny valley where it’s often doing bullshit like replacing ‘have’ with ‘gave’. It has also started replacing words two back, so even if I check each word after I type it I still end up sending out crap I didn’t type. This kind of shit gives me hives. I’d rather go back to making typos with an utterly dumb keyboard than put up with crappy ‘smarts’ like this.

                                                                                                              You’re right that the current default on github is “arbitrary nonsense”, but I’d actually prefer it to your smart carry-forward – unless I could try it at a time of my own choosing.

                                                                                                              1. 1

                                                                                                                If I see a link to a file on github I expect it to go to HEAD.

                                                                                                                Do you then also disagree with auto-‘y’ (auto archive.org), the perspective of the person who started this thread?

                                                                                                                You’re right that the current default on github is “arbitrary nonsense”, but I’d actually prefer it to your smart carry-forward – unless I could choose when I want to try it at a time of my own choosing.

                                                                                                                But arbitrary nonsense doesn’t work, just like your crappy smartphone autocomplete doesn’t work. That’s the whole point of the linked post, and the perspective of virtually everyone I’ve talked to about this issue. You are using simplicity and predictability as measuring sticks here, but the behavior of a GitHub relative-to-HEAD link a year from now is completely unpredictable. Isn’t it worth investigating alternatives that might work, even if they end up sucking like phone autocomplete? I do acknowledge that you’re saying “I could choose when I want to try it”, but I can’t help feeling that this is a way of avoiding engaging with the idea.

                                                                                                                1. 1

                                                                                                                  Yes, I definitely don’t mean to rain on your parade. My initial comment was just to inject a note of caution and a competing viewpoint. Absolutely, this is all fascinating and worth trying. I think I’m just frustrated with my phone :)

                                                                                                                  I didn’t find OP’s auto-y unpredictable because as the reader I can see that a shared URL refers to a particular tree state. Whereas I interpreted your proposal to be taking a URL to a specific tree state but then showing me something outside of that tree. Is that accurate?

                                                                                                                  Coda: after I first submitted this comment I noticed that my phone had autocompleted ‘auto-y’ to ‘auto-insertion’. Sigh..

                                                                                                                  1. 2

                                                                                                                    Whereas I interpreted your proposal to be taking a URL to a specific tree state but then showing me something outside of that tree. Is that accurate?

                                                                                                                    Technically accurate, but maybe missing intent. The intent of such a URL is to point to HEAD. To make this completely concrete, instead of links like

                                                                                                                    https://github.com/Droogans/.emacs.d/blob/mac/init.el#L135

                                                                                                                    we would have links like

                                                                                                                    https://github.com/Droogans/.emacs.d/blob/mac/init.el#6ecd48/L135 (if GitHub supported this type of URL, clicking this would take you here)

                                                                                                                    We use 6ecd48 here much like redirects are used on the web, to ensure that URLs always point to the latest content, or, in the case where the line was deleted, to an error 404 plus a “maybe check here?” pointer.
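
                                                                                                                    Concretely, the fragment could be parsed along these lines (just a sketch; parsePinnedFragment() is a made-up name, and GitHub has no such URL type today):

                                                                                                                    // Proposed fragment format: "#<abbreviated-commit>/L<line>", e.g. "#6ecd48/L135".
                                                                                                                    function parsePinnedFragment(string $fragment): ?array
                                                                                                                    {
                                                                                                                        if (preg_match('/^#?([0-9a-f]{6,40})\/L(\d+)$/i', $fragment, $m)) {
                                                                                                                            return ['commit' => $m[1], 'line' => (int) $m[2]];
                                                                                                                        }
                                                                                                                        return null; // a plain "#L135" keeps today's relative-to-HEAD meaning
                                                                                                                    }

                                                                                                                    The site would then either rewrite that to a plain #L135-style anchor at HEAD (the redirect case) or, if the line is gone, show the pinned commit with the “maybe check here?” pointer.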

                                                                                                                    I didn’t find OP’s auto-y unpredictable because as the reader I can see that a shared URL refers to a particular tree state.

                                                                                                                    I think I would find this behavior more predictable if navigating to https://github.com/Droogans/.emacs.d/ started a new “session” by redirecting you to https://github.com/Droogans/.emacs.d/tree/aecc15138a407c69b8736325c0d12d1062f1f21b (the current commit HEAD points to). Otherwise, there is an inconsistency between your current browsing state (your URL is always pointing to HEAD) and the links you are sending to others.

                                                                                                                    Coda: after I first submitted this comment I noticed that my phone had autocompleted ‘auto-y’ to ‘auto-insertion’. Sigh..

                                                                                                                    Maybe if I owned a smartphone I would better understand everyone’s objections :)

                                                                                                                    1. 2

                                                                                                                      Indeed, this comment helps clarify our relative positions. I see now that you’re right – what we find ‘predictable’ is subjective. I find OP’s auto-y more predictable partly because it fits my needs more often than a link to HEAD does. Your hash/line proposal seems pretty reasonable!

                                                                                                2. 1

                                                                                                  Git’s storage model and the functions provided in its CLI toolkit/libgit2/whatever GitHub uses are irrelevant. With the history and a modified diff algorithm you can identify lines across commits. Make an index of that data. GitHub already maintains a separate index for search, so this is nothing new.
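
                                                                                                  To make that concrete, here is a rough sketch of the single-diff step such an index could be built from (carryLineForward() is a made-up name, not an existing git or GitHub feature): map a pinned line number across unified-diff hunk headers, and give up when the linked line itself was touched.

                                                                                                  // Map line $line of $file at $commit forward to HEAD, or return null if the
                                                                                                  // line was changed or deleted (so the UI can fall back to the pinned commit).
                                                                                                  function carryLineForward(string $repo, string $commit, string $file, int $line): ?int
                                                                                                  {
                                                                                                      $cmd = sprintf(
                                                                                                          'git -C %s diff --unified=0 %s HEAD -- %s',
                                                                                                          escapeshellarg($repo),
                                                                                                          escapeshellarg($commit),
                                                                                                          escapeshellarg($file)
                                                                                                      );
                                                                                                      $diff = shell_exec($cmd);
                                                                                                      $offset = 0;
                                                                                                  
                                                                                                      foreach (explode("\n", is_string($diff) ? $diff : '') as $row) {
                                                                                                          // Hunk headers look like: @@ -oldStart,oldCount +newStart,newCount @@
                                                                                                          if (!preg_match('/^@@ -(\d+)(?:,(\d+))? \+\d+(?:,(\d+))? @@/', $row, $m)) {
                                                                                                              continue;
                                                                                                          }
                                                                                                          $oldStart = (int) $m[1];
                                                                                                          $oldCount = ($m[2] ?? '') === '' ? 1 : (int) $m[2];
                                                                                                          $newCount = ($m[3] ?? '') === '' ? 1 : (int) $m[3];
                                                                                                  
                                                                                                          if ($oldCount === 0) {
                                                                                                              // Pure insertion after old line $oldStart.
                                                                                                              if ($line <= $oldStart) {
                                                                                                                  break;
                                                                                                              }
                                                                                                              $offset += $newCount;
                                                                                                              continue;
                                                                                                          }
                                                                                                          if ($line < $oldStart) {
                                                                                                              break; // hunks are ordered, so later hunks cannot affect this line
                                                                                                          }
                                                                                                          if ($line < $oldStart + $oldCount) {
                                                                                                              return null; // the linked line itself was changed or deleted
                                                                                                          }
                                                                                                          $offset += $newCount - $oldCount;
                                                                                                      }
                                                                                                  
                                                                                                      return $line + $offset;
                                                                                                  }

                                                                                                  A real index would apply this commit by commit over the whole history and handle renames, but the fallback, returning null so the UI can show the pinned commit instead, is the part that matters here.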

                                                                                                  1. 3

                                                                                                    With the history and a modified diff algorithm you can identify lines across commits.

                                                                                                    What @jtdowney is saying is: no, you can’t. You can almost do it. You can often do it. But the only reliable way to know what code that link was talking about is to show the original file where the line belongs.

                                                                                                    1. 1

                                                                                                      All of what you’re saying applies to diff algorithms in general. It’s necessarily a fuzzy, ill-defined task, and yet we have effective algorithms for solving it. Our whole software development industry is powered by diff; if it’s good enough for pull requests, it’s good enough for carrying links forward to a current commit. We’re not talking about mission-critical software here.

                                                                                                      edit: also, that is not what the parent comment was saying. Their criticism was rooted in Git’s storage model and tooling, as if I were implying GitHub should shell out to a non-existent “git find-corresponding-line-in-commit” command. If someone wants to argue that what I am suggesting is not feasible, the argument should not be grounded in Git implementation details.

                                                                                                      1. 1

                                                                                                        Yes.

                                                                                                        Good points. I agree that a good-enough algorithm could show us the line in an updated context or detect that it can’t and then fall back to showing us the place where that line number was originally anchored.

                                                                                                        And yes, the low-level tooling is irrelevant. No matter how you store the data, unless you have a human confirm that yes, this line has the same identity as that line, it’s still just a succession of changesets (as svn stores it), which is just the dual of a succession of trees (as git stores it).

                                                                                                        Maybe the way bzr and svn handle file copies can help things along, but it’s not an essential piece of information (as git shows us), and there’s no tool out there that explicitly tracks line identities. Bzr does it in the low-level layer, but it’s still algorithmic, not human-verified, so that metadata has no more value than what git can infer after the fact. Basically it’s done to amortize some of the work of the diff/merge and for space efficiency.

                                                                                                        Thank you. Worse is better. Don’t let the perfect get in the way of the good.

                                                                                            1. 7

                                                                                              Web is easy - a static IP may be the cheapest option, but it depends on your ISP. It might be better to colo a box.

                                                                                              Email isn’t much harder; you’ll need your ISP to do rDNS and open up :25. You’ll likely need business internet here, because getting that done probably means talking to support, and the home internet support won’t know wtf to do with you when you ask them that.

                                                                                              Payment means shopping for and opening up a merchant account with a payment processor. That’s the only way to accept cards.

                                                                                              The biggest drawback is with payment - email and web have the standard worries about DDoS, blacklisting, etc. - but if you’re a merchant you are in scope for PCI. Depending on how many transactions you process annually, which card companies you want to accept, and how you handle card data, you could end up anywhere on the scale from “self-certify and pay for quarterly scans from your processor” up to “external audits”, so many people use Stripe to stay out of PCI scope entirely.

                                                                                              In the end it’s all a matter of time/money - these solutions let you stay in control, but is it worth spending time doing email administration rather than making/selling your widgets? Up to you. Good luck :)

                                                                                              1. 3

                                                                                                You still must comply with PCI if you use Stripe: https://stripe.com/docs/security#pci-dss-guidelines.

                                                                                                1. 4

                                                                                                  This is true, but with an important distinction: if you choose to integrate with providers like Stripe via iframe/links, you’re under SAQ A or at worst A-EP; if you get a merchant account and roll everything yourself, you’re firmly in SAQ D territory.

                                                                                                  If you go with a Stripe-like provider, they largely fill the SAQ out for you and retain it as the provider.

                                                                                                  If you go your own way, have fun! Be sure to get it to your processor on time and schedule your quarterly scans religiously, and avoid screwing up so you don’t get a letter saying that your breach of PCI regulations means you’ve been bumped to a level one merchant now.

                                                                                                  1. 3

                                                                                                    I wouldn’t describe filling out the SAQ A or A-EP as “out of PCI scope entirely.”

                                                                                                    1. 2

                                                                                                      <insert Futurama ‘technically correct’ GIF here> ;-)

                                                                                                      But yeah, definitely correct, and I wouldn’t presume to argue the nuances of PCI compliance with you!

                                                                                                2. 1

                                                                                                  if you’re a merchant you are in scope for PCI

                                                                                                  Not just that, but KYC and AML laws as well…

                                                                                                1. 1

                                                                                                  Isn’t the Linux kernel hosted on GitHub? Wouldn’t it technically run afoul of these terms?

                                                                                                  1. 5

                                                                                                    It isn’t officially hosted on GitHub, it is hosted on https://www.kernel.org. The repository on GitHub is a mirror.

                                                                                                  1. 5

                                                                                                    This bit of sample code jumps out at me (as a total crypto noob, pretty nervous about writing any crypto-related code):

                                                                                                    $nonce = random_bytes(24);
                                                                                                    $ciphertext = sodium_crypto_box($plaintext, $nonce, $message_keypair);
                                                                                                    

                                                                                                    Would it make sense for a safe-defaults-oriented crypto api to generate $nonce by itself, and return it along with the ciphertext? That way I can’t screw it up by re-using it or using something predictable.
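
                                                                                                    Something along these lines is what I have in mind (purely a sketch with made-up function names, where the nonce rides along at the front of the ciphertext):

                                                                                                    // Sketch only: the nonce is generated internally and prepended to the
                                                                                                    // ciphertext, so the caller can't reuse it or pick something predictable.
                                                                                                    // encrypt_to()/decrypt_from() are made-up names, not part of ext/sodium.
                                                                                                    function encrypt_to(string $plaintext, string $encrypt_keypair): string
                                                                                                    {
                                                                                                        $nonce = random_bytes(SODIUM_CRYPTO_BOX_NONCEBYTES);
                                                                                                        return $nonce . sodium_crypto_box($plaintext, $nonce, $encrypt_keypair);
                                                                                                    }
                                                                                                    
                                                                                                    function decrypt_from(string $message, string $decrypt_keypair): string
                                                                                                    {
                                                                                                        // $decrypt_keypair is the recipient's secret key + sender's public key,
                                                                                                        // combined with sodium_crypto_box_keypair_from_secretkey_and_publickey().
                                                                                                        $nonce = substr($message, 0, SODIUM_CRYPTO_BOX_NONCEBYTES);
                                                                                                        $ciphertext = substr($message, SODIUM_CRYPTO_BOX_NONCEBYTES);
                                                                                                        $plaintext = sodium_crypto_box_open($ciphertext, $nonce, $decrypt_keypair);
                                                                                                        if ($plaintext === false) {
                                                                                                            throw new RuntimeException('Message forged or corrupted');
                                                                                                        }
                                                                                                        return $plaintext;
                                                                                                    }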


                                                                                                    Meta: I can’t stand “first” claims. Title suggestion: PHP 7.2 Adds Modern Cryptography to its Standard Library

                                                                                                    1. 3

                                                                                                      The nonce does not necessarily have to be randomly generated, and in fact it can have some semantic meaning if you want it to. For example, in “Practical Cryptography,” Schneier and Ferguson suggest using the message number plus some additional information as a nonce, assuming the messaging system has the concept of a message number. The most important aspect of a nonce is that it is used only once (hence, “Number used once”), and does not necessarily have to be high-entropy. If you want it to have some kind of meaning, then having the crypto library choose it for you is not always what you want.

                                                                                                      1. 12

                                                                                                        Just to be pedantic and because I’m taking a break at work:

                                                                                                        I’m like 90% sure that the etymology isn’t “Number used once”, but rather from the Middle English “nonce”, meaning “current occasion”, via the linguistics term “nonce word”, meaning a word that is expected to only occur once.

                                                                                                        Why yes, I am fun at parties.

                                                                                                        1. 4

                                                                                                          I won’t argue with you at all, as I had absolutely no idea what the etymology of the word is :). Assuming you are right, I still find my mental link of “nonce” == “number used once” useful because it helps me to remember the actual purpose behind the use of the value.

                                                                                                          1. 6

                                                                                                            No, no, your version is immediately more useful. I just like linguistics. :)

                                                                                                        2. 2

                                                                                                          Oh, interesting! That makes me feel better, so hopefully the docs will include similar guidance :)

                                                                                                          I wonder, then, whether sodium_crypto_box is stateful and will refuse to work if $nonce is reused? I didn’t find relevant libsodium docs with a quick search, but I think I just don’t know the right keywords.

                                                                                                          1. 3

                                                                                                            Secretbox is an abstraction of the concept: “given a key x nonce pair, push data into a thing and get the encrypted data on the other side”. An application could potentially have thousands of secretboxes at once and they really shouldn’t know anything about each other.

                                                                                                            Think of a secretbox being stateful the way an iterator is: if you give it all the same parameters, it’s going to replay exactly the same behavior.

                                                                                                            Finally, the nonce space can be very large (depending on the cryptographic algorithm being used and what it specifies). In the case of Salsa20, it is 64 bits. It isn’t realistic to track a space that size. (When using libsodium you should really stick to the symbolic constants, like crypto_secretbox_NONCEBYTES, and not, say, 24.)

                                                                                                            As long as you never use the same key x nonce pair, you’re fine. As @untothebreach said, using an application invariant that guarantees the nonce will be unique is often sufficient. For many algorithms, the nonce space is large enough to randomly generate the nonce each time and not worry about it (if that sounds fishy to you, recall that the nature of cryptography is to make bad things really unlikely and then accept that it’s fine).

                                                                                                          2. 1

                                                                                                            Does “once” in this case mean once ever, or once in this application of the algorithm? Is some randomness required?

                                                                                                            But to go back to @phil’s point, it sounds like a safe default would be to generate the nonce in the library. But you can always provide your own.

                                                                                                            1. 1

                                                                                                              Once with the same key

                                                                                                        1. 2

                                                                                                          I’ve been using https://github.com/deanoemcke/thegreatsuspender for a while to automatically suspend tabs I haven’t used recently. It made a noticeable impact on Chrome’s battery usage.

                                                                                                          1. 2

                                                                                                            I’ve had tabs open for several months thanks to The Great Suspender. My open tabs are my reading list now, no more hidden bookmarks or TODO list. If Chrome crashes and I lose it, NBD - it’s something I’m OK declaring bankruptcy on every now and then.

                                                                                                            1. 1

                                                                                                              I turn on the “Open my previous tabs” setting and also use a tab-saving extension. I still lost my tabs at one point; I think Chrome crashed while it was exiting? Anyway, now I also manually save my tabs via the extension about once a week, and I’m happy with that.

                                                                                                          1. 7

                                                                                                            The resulting machine code is seriously impressive! One thing that I’d be interested to see is the LLVM IR that the Rust compiler itself produces; it’s unclear to me how much of the handiwork is Rust’s, and how much is LLVM’s.

                                                                                                            1. 9

                                                                                              In general, for now the answer is “mostly LLVM’s”, but that is slowly changing, with more happening inside rustc. MIR in particular will allow us to start doing some of our own optimization passes.

                                                                                                              1. 6

                                                                                                                I plugged their example into the Rust Playground and it looks like the iterators get unrolled before it is turned into LLVM IR.

                                                                                                              1. 9

                                                                                                                One caveat here that I haven’t seen much attention brought to is this piece of their FAQ:

                                                                                                                How do Lightsail instances perform?

                                                                                                                Lightsail instances are specifically engineered by AWS for web servers, developer environments, and small database use cases. Such workloads don’t use the full CPU often or consistently, but occasionally need a performance burst. Lightsail uses burstable performance instances that provide a baseline level of CPU performance with the additional ability to burst above the baseline. This design enables you to get the performance you need, when you need it, while protecting you from the variable performance or other common side effects that you might typically experience from over-subscription in other environments.

                                                                                                                If you need highly configurable environments and instances with consistently high CPU performance for applications such as video encoding or HPC applications, we recommend you use Amazon EC2.

                                                                                                                This implies that these instances perform similarly to EC2’s T2 instances, and so CPU will be throttled for anything more demanding than a web application.

                                                                                                                Their documentation doesn’t mention anything about CPU credits, however, so some testing needs to be done to see whether performance is burstable or fixed. I’ll probably write a blog post investigating this further.

                                                                                                                1. 7

                                                                                                                  They are t2 instances, from my testing. If you launch one and query the EC2 metadata service, it returns the type of instance it is.

                                                                                                                  1. 1

                                                                                                                    Interesting. I’m assuming you queried it through the instance itself and not an API?

                                                                                                                    1. 2

                                                                                                                      I SSH’d to the box and curl’d the metadata URL. For my test $5 Ubuntu box it returned t2.nano.
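
                                                                                                                      For anyone who wants to check their own instance, it’s just the standard instance metadata endpoint, something like:

                                                                                                                      curl http://169.254.169.254/latest/meta-data/instance-type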

                                                                                                                      1. 1

                                                                                                                        I don’t know if they have, but I’ve seen some people query it on the instance and see a T2 instance as a result

                                                                                                                  1. 6

                                                                                                                    I used to go all-in with webfonts for all of my projects, but for my current web design project I’m planning to abandon them entirely. Performance matters more than me showing off.

                                                                                                                    Edit: The post title is a reference to a line by Governor Ritchie on The West Wing, right? Or am I crazy?

                                                                                                                    1. 5

                                                                                                                      Ritchie was my first thought, but it’s conceivably just idiomatic speech in the sort of place from which Ritchie was supposed to hail?

                                                                                                                      1. 3

                                                                                                                        Yeah, I suppose that’s my question. Doing some basic looking around, I don’t see any evidence for it being a known phrase that Ritchie would be repeating, so I’m going to assume that is legitimately a West Wing reference. I, of course, wholeheartedly approve of such a thing.

                                                                                                                      2. 4

                                                                                                                        The post title is a reference to a line by Governor Ritchie on The West Wing, right? Or am I crazy?

                                                                                                                        Oh good, I’m not the only one

                                                                                                                        1. 2

                                                                                                                          That is where I went immediately too.

                                                                                                                      1. 21

                                                                                                                        Normally I would be extremely skeptical of a new TLS/SSL/crypto library, but this appears to be from Thomas Pornin, who certainly has the right crypto chops. It seems brand new, but it’s definitely something to keep an eye on.