1. 1

    I just got an ergodox ez, and… man, I do not like the column layout.

    I’m undecided on whether I should attempt to power through what feels like typing through sludge, or if I should just eat the cost of shipping it back :/

    1. 3

      Give it at least two weeks. It took me about that long to unlearn my bad habits from decades of typewriter-style staggering, and once you do it’s so much more comfortable.

      1. 1

        I might buy that, but even then, I have to/will use other, normal keyboards in the interim.

        1. 5

          Yeah, you should give it more time. When I first used the Atreus I was afraid my fingers would break, and I was really slow. But that turned around once I got more comfortable with it. Typing now is faster, more accurate, and a lot smoother than with the classic typewriter stagger.

          And surprisingly, typing on a normal keyboard doesn’t get much worse once you’re used to ortholinear. Although, for some reason, I now mix up ‘c’ and ‘v’ on regular keyboards much more often.

        2. 1

          This is the thing I keep trying to do with my ErgoDox EZ: I use it in bursts but then switch back to my staggered board so I feel productive. I just need to be super strict for longer and completely embrace it.

        3. 3

          It took me about six weeks to get used to the ortholinear layout on my DIY board. Now that I am used to it I much prefer it to a normal row-staggered keyboard. But it was very much a slog for the first several weeks to a month.

          1. 3

            do some typing tutorials / games, deliberate practice

            it was very uncomfortable but i got used to it very quickly with a bit of practice

          1. 15

            Q: is the HTTP protocol really the problem that needs fixing?

            I’m under the belief that if the HTTP overhead is causing you issues then there are many alternative ways to fix this that don’t require more complexity. A site doesn’t load slowly because of HTTP, it loads slowly because it’s poorly designed in other ways.

            I’m also suspicious of Google’s involvement. HTTP/1.1 over TCP is very simple to debug and do by hand. Google seems to like closing or controlling open things (Google Chat dropping XMPP support, Google AMP, etc). Extra complexity is something that should be avoided, especially for the open web.
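
            The “debug by hand” point is concrete: an HTTP/1.1 exchange is just CRLF-delimited text, so a few lines of code (or a telnet session) can build and read one. A minimal sketch, no network involved; `example.org` is a placeholder host:

```python
# HTTP/1.1 is plain text: a request is a few CRLF-terminated lines,
# and the response status is the second token of the first line.

def build_request(host, path="/"):
    return (
        "GET {} HTTP/1.1\r\n"
        "Host: {}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).format(path, host).encode()

def parse_status(raw_response):
    # Status line looks like: b"HTTP/1.1 200 OK"
    status_line = raw_response.split(b"\r\n", 1)[0]
    return int(status_line.split()[1])

print(build_request("example.org").decode().splitlines()[0])  # → GET / HTTP/1.1
print(parse_status(b"HTTP/1.1 404 Not Found\r\n\r\n"))        # → 404
```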

            1. 10

              They have to do the fix on HTTP because massive ecosystems already depend on HTTP and on browsers that have no intent to switch. There are billions of dollars riding on staying on that gravy train, too. It’s also worth noting that lots of firewalls in big companies let HTTP traffic through but not better-designed protocols. The low-friction improvements get more uptake by IT departments.

              1. 7

                WAFs and the like barely support HTTP/2 tho; a friend gave a whole talk on bypasses and scanning for it, for example

                1. 6

                  Thanks for the feedback. I’m skimming the talk’s slides right now. So far, it looks like HTTP/2 got big adoption but WAFs lagged behind. Probably just riding their cash cows, minimizing further investment. I’m also sensing a business opportunity if anyone wants to build an HTTP/2 and /3 WAF that works, with independent testing showing it does what the others don’t. Might help bootstrap the company.

                  1. 3

                    ja, that’s exactly correct: lots of the big-name WAFs/NGFWs/&c. are missing support for HTTP/2 but many of the mainline servers support it, so we’ve definitely seen HTTP/2 as a technique to bypass things like SQLi detection, since they don’t bother parsing the protocol.

                    I’ve also definitely considered doing something like CoreRuleSet atop HTTP/2; could be really interesting to release…
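
                    The bypass mechanics are easy to see in a sketch. HPACK (RFC 7541) sends common headers as one-byte static-table indexes rather than ASCII, so any filter that pattern-matches raw bytes, the assumption here, finds nothing to match:

```python
# 0x82, 0x84, 0x87 are HPACK indexed header fields for ":method: GET",
# ":path: /", ":scheme: https" (static table entries 2, 4, 7; RFC 7541).
# Together they form a complete, valid HTTP/2 header block.
h2_header_block = bytes([0x82, 0x84, 0x87])
h1_request = b"GET / HTTP/1.1\r\nHost: example.org\r\n\r\n"

naive_rule = b"GET"                   # a textual WAF signature
print(naive_rule in h1_request)       # → True: HTTP/1.1 is plain text
print(naive_rule in h2_header_block)  # → False: nothing textual to match
```

A WAF that wants to inspect HTTP/2 has to actually decode HPACK and reassemble streams; byte-level signatures simply don’t apply.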

                    1. 4

                      “so we’ve definitely seen HTTP/2 as a technique to bypass things like SQLi detection, since they don’t bother parsing the protocol.”

                      Unbelievable… That shit is why I’m not in the security industry. People mostly building and buying bullshit. There are exceptions, but they’re usually set up to sell out later. Products based on dual-licensed code are about the only thing immune to vendor risk. Seemingly. Still exploring hybrid models to root out this kind of BS or force it to change faster.

                      “I’ve also definitely considered doing something like CoreRuleSet atop HTTP/2; could be really interesting to release…”

                      Experiment however you like. I can’t imagine what you release being less effective than web firewalls that can’t even parse the web protocols. Haha.

                      1. 5

                        “Products based on dual-licensed code”

                        We do this where I work, and it’s pretty nice, tho of course we have certain things that are completely closed source. We have a few competitors that use our products, so it’s been an interesting ecosystem to dive into for me…

                        “Experiment however you like. I can’t imagine what you release being less effective than web firewalls that can’t even parse the web protocols. Haha.”

                        pfff… there’s a “NGFW” vendor I know that…

                        • when it sees a connection it doesn’t know, it analyzes the first 5k bytes
                        • the connection is allowed to continue until the 5k+1th byte arrives
                        • so if your exfiltration process transfers data in packages of <= 5 kB, you’re OK!

                        we found this during an adversary simulation assessment (“red team”), and I think it’s one of the most asinine things I’ve seen in a while. The vendor closed it as “works as expected”.

                        edit: fixed the work link, as that’s a known issue.
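
                        A toy model of the flaw described above: the first 5k bytes of an unknown connection pass while the box is still “analyzing”, and a block only applies from byte 5k+1 onward. Keep each connection under the window and the data is gone before any verdict lands. (The window size is the anecdote’s; everything else is illustrative.)

```python
INSPECT_WINDOW = 5 * 1024  # the 5 kB analysis window from the anecdote

def forwarded_before_block(connection_bytes):
    # Everything inside the window has already crossed the wire by the
    # time the device decides to block the connection.
    return connection_bytes[:INSPECT_WINDOW]

def exfiltrate(secret, chunk_size=INSPECT_WINDOW):
    # One short-lived connection per chunk, each fully inside the window.
    leaked = b""
    for i in range(0, len(secret), chunk_size):
        leaked += forwarded_before_block(secret[i:i + chunk_size])
    return leaked
```

A single long connection gets truncated at the window; chunked connections leak everything, which is exactly the bypass the red team found.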

                        1. 3

                          BTW, Firefox complains when I go to https://trailofbits.com/ that the cert isn’t configured properly…

                          1. 2

                            hahaha Nick and I were just talking about that; it’s been reported before, I’ll kick it up the chain again. Thanks for that! I probably should edit my post for that…

                            1. 2

                              Adding another data point: latest iOS also complains about the cert

                2. 3

                  “They have to do the fix on HTTP”

                  What ‘fix’? Will this benefit anyone other than Google?

                  I’m concerned that if this standard is not actually a worthwhile improvement for everyone else, then it won’t be adopted and the IETF will lose respect. My guess is that it’s going to see even less adoption than HTTP/2.

                3. 13

                  I understand and sympathize with your criticism of Google, but it seems misplaced here. This isn’t happening behind closed doors. The IETF is an open forum.

                  1. 6

                    just because they do some subset of the decision making in the open shouldn’t exempt them from blame

                    1. 3

                      Feels like Google’s turned a lot of public standards bodies into rubber stamps for pointless-at-best, dangerous-at-worst standards like WebUSB.

                      1. 5

                        Any browser vendor can ship what they want if they think that makes them more attractive to users or whatnot. Doesn’t mean it’s a standard. WebUSB shipped in Chrome (and only in Chrome) more than a year ago. The WebUSB spec is still an Editor’s Draft and it seems unlikely to advance significantly along the standards track.

                        The problem is not with the standards bodies, but with user choice, market incentive, blah blah.

                        1. 3

                          “Feels like Google’s turned a lot of public standards bodies into rubber stamps for pointless-at-best, dangerous-at-worst standards like WebUSB.”

                          “WebUSB”? It’s like kuru crossed with ebola. Where do I get off this train.

                        2. 2

                          Google is incapable of doing bad things in an open forum? Open forums cannot be influenced in bad ways?

                          This does not displace my concerns :/ What do you mean exactly?

                          1. 4

                            If the majority of the IETF HTTP WG agrees, I find it rather unlikely that this is going according to a great plan towards “closed things”.

                            Your “things becoming closed-access” argument doesn’t hold, imho: while I have done lots of plain-text debugging for HTTP, SMTP, POP and IRC, I can’t agree it’s a strong argument: whenever debugging gets serious, I go back to writing a script anyway. Also, I really want the web to become encrypted by default (HTTPS). We need “plain text for easy debugging” to go away. The web needs to be great (secure, private, etc.) for users first, engineers second.

                            1. 2

                              That “users first, engineers second” mantra leads to things like Apple and Microsoft clamping down on the “general purpose computer”. Think of the children, er, the users! They can’t protect themselves. We’re facing this at work (“the network and computers need to be secure, private, etc.”) and it’s expected we won’t be able to do any development because, of course, upper management doesn’t trust us mere engineers with “general purpose computers”. Why can’t it be for “everybody”? Engineers included?

                              1. 1

                                No, no, you misunderstand.

                                The users first / engineers second is not about the engineers as end users like in your desktop computer example.

                                What I mean derives from the W3C design principles. That is to say, we shouldn’t avoid significant positive change (e.g., HTTPS over HTTP) just because it’s a bit harder on the engineering end.

                                1. 6

                                  Define “positive change.” Google shoved HTTP/2 down our throats because it serves their interests not ours. Google is shoving QUIC down our throats because again, it serves their interests not ours. That it coincides with your biases is good for you; others might feel differently. What “positive change” does running TCP over TCP give us (HTTP/2)? What “positive change” does a reimplementation of SCTP give us (QUIC)? I mean, other than NIH syndrome?

                                  1. 3

                                    Are you asking how QUIC and H2 work, or are you saying performance isn’t worth improving? If it’s the latter, I think we’ve figured out why we disagree here. If it’s the former, I kindly ask you to find out yourself before you enter this dispute.

                                    1. 3

                                      I know how they work. I’m asking why they’re reimplementing already-implemented concepts. I’m sorry, but TCP over TCP (aka HTTP/2) is plain stupid: one lost packet and every stream on that connection hits a brick wall.
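
                                      The head-of-line-blocking complaint can be sketched with a toy model: TCP delivers bytes in order, so once one segment is lost, every multiplexed stream queued behind it waits for the retransmit. (Times and the recovery delay are illustrative, not measurements.)

```python
RETRANSMIT_DELAY_MS = 200  # assumed recovery time for one lost segment

def delivery_times(segments, lost_index):
    """segments: list of (stream_id, send_ms) in wire order on ONE TCP
    connection. Returns stream_id -> ms at which that stream's segment
    becomes deliverable to the application."""
    times = {}
    for i, (stream_id, send_ms) in enumerate(segments):
        # In-order delivery: everything at or after the hole stalls
        # until the lost segment is retransmitted.
        stall = RETRANSMIT_DELAY_MS if i >= lost_index else 0
        times[stream_id] = send_ms + stall
    return times

# Three streams multiplexed on one connection; the first segment is lost:
shared = delivery_times([("a", 0), ("b", 10), ("c", 20)], lost_index=0)
print(shared)  # → {'a': 200, 'b': 210, 'c': 220}: every stream pays
```

With separate connections per stream (the HTTP/1.1 situation), only the stream on the lossy connection would pay the penalty, which is exactly the trade-off QUIC tries to recover by moving streams out of a single TCP bytestream.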

                                      1. 1

                                        SPDY and its descendants are designed to allow web pages with lots of resources (namely, images, stylesheets, and scripts) to load quickly. A sizable number of people think that web pages should just not have lots of resources.

                        1. 4

                          There’s also feed.json that serves the same purpose but using JSON instead of XML

                          https://jsonfeed.org
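
                          A minimal valid JSON Feed (version 1) is tiny; per the jsonfeed.org spec, `version`, `title`, and `items` are the required top-level fields, and each item needs an `id`. The URLs below are placeholders:

```python
import json

feed = {
    "version": "https://jsonfeed.org/version/1",
    "title": "Example Blog",
    "home_page_url": "https://example.org/",
    "feed_url": "https://example.org/feed.json",
    "items": [
        {
            "id": "https://example.org/posts/1",
            "url": "https://example.org/posts/1",
            "content_text": "Hello, world.",
            "date_published": "2018-08-01T00:00:00Z",
        }
    ],
}
print(json.dumps(feed, indent=2))
```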

                          1. 20

                            In my opinion, jsonfeed is doing active harm. We need standardization, not fragmentation.

                            1. 2

                              Well as long as people are just adding an additional feed, xml/rss + json. You can have two links in your headers. Over the course of time, all readers will probably add support and then it shouldn’t matter which format your RSS feed is in. That’s kinda how we got to where we are today.
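
                              The “two links in your headers” bit is just two alternate-link tags in the page’s `<head>`; feed readers autodiscover whichever they support. A sketch with placeholder paths:

```
<link rel="alternate" type="application/rss+xml" title="RSS" href="/feed.xml">
<link rel="alternate" type="application/json" title="JSON Feed" href="/feed.json">
```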

                            2. 10

                              How widespread is support for this in feed readers? RSS and Atom have very broad support among feed readers, so unless there’s a compelling reason, a working and widely supported standard shouldn’t be replaced just as a matter of taste.

                            1. 5

                              I’ve been using a WASD CODE Tenkeyless with Cherry MX Browns for the past 4-5 years. About a year ago, I got a Pok3r RGB with Browns as well. More recently, I bought an ErgoDox EZ (Shine) with Cherry MX Reds that I’m trying to train myself on (I bounce back and forth between the CODE and the Ergo when I need to be a more efficient typist, since I’m not effective on the Ergo yet). The goal is to use the Ergo as my daily driver in the next month or so.

                              1. 2

                                My issue with most implementations of 2FA is that they rely on phones and MMS/SMS, which is beyond terrible and is often less secure than no 2FA at all - as well as placing you at the mercy of a third-party provider of which you are a mere customer. Don’t pay your bill because of hard times or, worse yet, have an adversary inside the provider or a government with influence over the provider, and all bets are off - your password is going to get reset or your account ‘recovered’ and there isn’t much you can do.

                                For these reasons, the best 2FA, IMO, is a combination of “something you have” - a crypto key - and “something you know” - the password to that key. Then you can backup your own encrypted key, without being at the mercy of third parties.

                                Of course, if you lose the key or forget the password then all bets are off - but that’s much more acceptable to me than the alternative.

                                (FYI - I don’t use Github and I’m not familiar with their 2FA scheme, but commenting generally that most 2FA is done poorly and sometimes it’s better not to use it at all, depending on how it’s implemented.)

                                1. 4

                                  “(FYI - I don’t use Github and I’m not familiar with their 2FA scheme, but commenting generally that most 2FA is done poorly and sometimes it’s better not to use it at all, depending on how it’s implemented.)”

                                  GitHub has a very extensive 2FA implementation and prefers Google Authenticator or similar apps as a second factor.

                                  https://help.github.com/articles/securing-your-account-with-two-factor-authentication-2fa/

                                  1. 2

                                    I don’t use Google’s search engine or any of their products nor do I have a Google account, and I don’t use social media - I have no Facebook or Twitter or MySpace or similar (that includes GitHub because I consider it social networking). Lobste.rs is about as far into ‘social networking’ as I go. Sadly, it appears that the GitHub 2FA requires using Google or a Google product - quite unfortunate.

                                    1. 9

                                      You can use any app implementing the appropriate TOTP mechanisms. Authenticator is just an example.

                                      https://help.github.com/articles/configuring-two-factor-authentication-via-a-totp-mobile-app/
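
                                      What a TOTP app implements is small enough to sketch: RFC 6238 on top of RFC 4226 HOTP, i.e. HMAC the current 30-second time step with a shared secret and truncate to 6-8 digits. No Google account is involved anywhere. A minimal stdlib-only version, verified against the RFC 6238 test vector:

```python
import base64
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    # RFC 4226: HMAC-SHA1 over the big-endian counter, dynamic truncation
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, RFC 4226 section 5.3
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_base32, at_time=None, step=30, digits=6):
    # RFC 6238: HOTP with a time-based counter (30-second steps)
    key = base64.b32decode(secret_base32.upper())
    t = int(time.time()) if at_time is None else at_time
    return hotp(key, t // step, digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890",
# evaluated at T = 59 seconds.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at_time=59, digits=8))  # → 94287082
```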

                                      1. 5

                                        Google Authenticator does not require a Google account, nor does it connect with one in any way so far as I am aware.

                                        Github also offers U2F (Security Key) support, which provides the highest level of protection, including against phishing.

                                        1. 3

                                          This is very good to know - thank you for educating me. I only wish every service gave these sort of options.

                                        2. 1

                                          You can also use a U2F/FIDO dongle as a second factor (with Chrome or Firefox, or the Safari extension if you use macOS). YubiKey is an example, but GitHub has also released and open-sourced a software U2F app.

                                      2. 0

                                        “My issue with most implementations of 2FA is that they rely on phones and MMS/SMS which is beyond terrible and is often less secure than no-2FA at all”

                                        A second factor is never less secure than one factor. Please stop spreading lies and FUD. The insecurity of MMS/SMS is only a concern if you are being targeted by someone with the resources required to physically locate you and bring equipment to spy on you and intercept your messages or socially engineer your cellular provider to transfer your service to their phone/SIM card.

                                        2FA with SMS is plenty secure to stop script kiddies or anyone with compromised passwords from accessing your account.

                                        1. 1

                                          I happen to disagree completely. This is not lies nor FUD. This is simple reality.

                                          When the second factor is something that is easily recreated by a third party, it does not enhance security. Since many common “two-factor” methods allow resetting a password with SMS/MMS alone, the issue should be quite apparent.

                                          If you either do not believe or simply choose to ignore this risk, you do so at your own peril - but to accuse me of lying or spreading FUD only shows your shortsightedness here, especially with all of the recent exploits which have occurred in the wild.

                                          1. 1

                                            Give me an example of such a vulnerable service with SMS 2FA. I will create an account and enable 2FA, give you my username and password, and give you one year to compromise my account. If you succeed I will pay you $100 USD.

                                            1. 1

                                              We both know $100 doesn’t even come close to covering the necessary expenses or risks of such an attack - $10,000 or $100,000 is a much different story - and it’s happened over and over and over.

                                              For example, see:

                                              Just because I’m not immediately able to exploit your account does not mean that it’s wise to throw best-practices to the wind.

                                              This is like deprecating MD5 or moving away from 512-bit keys - while you might not be able to immediately crack such a key or find a collision, there were warnings in place for years which were ignored - until the attacks become trivial, and then it’s a scramble to replace vulnerable practices and replace exploitable systems.

                                              I’m not sure what there is to gain in trying to downplay the risk and advising against best practices. Be part of the solution, not the problem.

                                              Edit: Your challenge is similar to: “I use remote access to my home computer extensively - I’ll switch to using Telnet for a month and pay you $100 when you’ve compromised my account.”

                                              Even if you can’t, that doesn’t justify promoting insecure authentication and communication methods. Instead of arguing about the adequacy of SMS 2FA long after it’s been exposed as weak, we should instead be pushing for secure solutions (as GitHub already has, as mentioned in the threads above).

                                              I also wanted to apologize for the condescending attitude in my previous response to you.

                                              1. 1

                                                So you’re admitting that SMS 2FA is perfectly fine for the average person unless they’ve been specifically targeted by someone who has a lot of money and resources.

                                                Got it.

                                                1. 1

                                                  DES, MD5, and unencrypted Telnet connections are perfectly fine for the average person too - until they are targeted by someone with modest resources or motivation.

                                                  So, yes, I admit that. It still is no excuse to refuse best practices and use insecure tech because it’s “usually fine”.

                                                  1. 1

                                                    Please study up on Threat Models. Grandma has a different Threat Model than Edward Snowden. Sure, Grandma should be using a very secure password with a hardware token for 2FA, but that is not a user friendly or accessible technology for Grandma. Her bank account is significantly more secure with SMS 2FA than nothing.

                                                    1. 1

                                                      That actually depends on how much money is in Grandma’s bank account. And if SMS can be used for a password reset, I’d highly recommend grandma avoid it - it simply is not safer than using a strong unique password. With the prevalence of password managers, this is now trivial.

                                                      While I don’t have any grandmas left, I still have a mother in her 80’s, and, bless her heart, she uses 2FA with her bank - it’s integrated into the banking application itself that runs on the tablet I bought her, and it does not rely on SMS. At the onset of her forgetful old age she started using the open-source “pwsafe” program to generate and manage her passwords. She also understands phishing and similar risks better than most of the kids these days, simply because she’s been using technology for many years. She grew up with it and knows more of the basics, because schools seem to no longer teach the basics outside of a computer science curriculum.

                                                      These days, being born in the 1930s or 1940s means that you would have entered college right at the first big tech boom and the introduction of wide-scale computing - I find that many “grandma/grandpa” types actually have a better understanding of technology and its risks than millennials.

                                                      I do understand Threat Models, but this argument falls apart when it’s actually easier to use strong unique passwords than weak ones - and the archetype of the technology-oblivious senior, clinging to their fountain pens and their wall-mounted rotary phones, is, as of about ten years ago, a thing of the past.

                                                      1. 1

                                                        More posts on SMS 2FA:

                                                        https://pages.nist.gov/800-63-3/sp800-63b.html#pstnOOB

                                                        https://www.schneier.com/blog/archives/2016/08/nist_is_no_long.html

                                                        NIST is no longer recommending two-factor authentication systems that use SMS, because of their many insecurities. In the latest draft of its Digital Authentication Guideline, there’s the line: [Out of band verification] using SMS is deprecated, and will no longer be allowed in future releases of this guidance.

                                                        Since NIST came out strongly against SMS 2FA years ago, it should be fairly straightforward to cease any recommendations for its use at this point.

                                      1. 2

                                        I use Algo by Trail of Bits. It’s super easy to set up (it has a command-line wizard to walk you through setting up your server on a variety of providers), generates mobile profiles for iOS to connect on demand on unknown networks, and has super secure defaults.

                                        1. 1

                                          Algo supports WireGuard now too, which is nice.

                                        1. 1

                                          Needs more docs which describe how to adapt it to other source code layouts. The handlebars example is a good start. Exciting claims, though! I want to hear more details about the algorithm before I try it.

                                          1. 3

                                            There’s a bit more info in the blog post on LinkedIn. I probably should have linked to that instead; going to update the entry with this link as well.

                                          1. 10

                                            “It’s hard to reproduce failures”

                                            If this is the case, you’re doing it wrong. When using faked data and randomised test order, RSpec seeds the RNG with a value that is printed at the end of the run, so you can rerun with exactly the same order and faked values. No need for trial and error.
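
                                            The mechanism being described is just seeded randomness; a Python stand-in for the RSpec/Faker behaviour (names and the tiny name pool are made up for the sketch):

```python
import random

def fake_user(rng):
    # Stand-in for a Faker-style generator driven by an explicit RNG.
    first = rng.choice(["alice", "bob", "carol", "dave"])
    number = rng.randint(1000, 9999)
    return "{}{}".format(first, number)

def run_suite(seed):
    # RSpec prints the seed at the end of the run; here it's up front.
    print("Randomized with seed {}".format(seed))
    rng = random.Random(seed)
    return [fake_user(rng) for _ in range(3)]

# Re-running with the reported seed reproduces the exact same "random"
# data, which is what makes the failure reproducible:
assert run_suite(1234) == run_suite(1234)
```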

                                            1. -1

                                              That sounds like a useful feature that Faker does not have or appreciate the need for.

                                              1. 8
                                                1. 4

                                                  This type of comment makes it sound like the article is meant as a hit job, when the Faker README talks about setting a seed (as skade points out).

                                                  1. 1

                                                    I’ve now worked at three different companies that have used it, and no one has raised or even mentioned this feature before; apologies for missing it. I think my point still stands.