NIST no longer recommends password rotation; it’s not really a best practice.

    Think about it: an attacker isn’t likely to log into your account by guessing your password; brute-forcing it would take forever with the usual 5-attempt lockout, and these hacks usually grab the whole database. So if a company has a policy of “every X days you must change your password, and it must not match your past Y passwords,” the typical response is to increment some part of it (e.g. “Aligator1” becomes “Aligator2”). This means the company must be storing your past Y passwords somewhere (hopefully properly hashed), and with so many similar passwords protected by the same scheme, the policy has just created a larger vector for attacking the scheme itself.

    So, what’s the reasoning for rotating passwords? So that if your account is part of a data breach, the attackers won’t be able to access your other accounts? Hopefully you’ve chosen a strong password, perhaps one you don’t even know, accessed through a password manager. Hopefully these important sites/accounts require 2-factor authentication. Hopefully you’re using a different password on each of these sites.

    Assuming the DB would’ve been breached eventually anyway, by rotating passwords you’ve ultimately done nothing except help researchers crack various crypto algorithms.


      These times are the best of three runs each.


        Ah, understood. Very cool!


          I wouldn’t call the LLVM one experimental; it is in excellent condition and has been for a long time now. But yeah, the LLVM one is surely what would do the prod builds for iOS… I guess it just needs to be built against the Xcode version, and then hopefully it will work.


            Patents cause code bloat… Who knew?


              D has three backends:

              GCC, its own, and LLVM (experimental) [1]

              I actually think D fits well into this problem domain of ‘sharing business logic (non-UI)’ code across multiple languages and toolchains of the mobile dev world.

              That’s because D’s team invested significant effort into a multi-backend architecture, and into C++ ABI compatibility across the non-standard ABIs of the various C++ compilers.

              [1] https://dlang.org/download.html


                I work in Chicago so I immediately thought of the much-delayed Jane Byrne Interchange construction.


                In January 2015 — just over a year into construction — university workers noticed the building had been sinking and shifting, leaving cracks in the foundation and making it impossible to shut some doors and windows, according to court records.

                Over the next 1½ years, IDOT blamed engineering firms it had hired for missing the poor soil conditions that contributed to the problem. That led to a redesign of a key retaining wall that boosted costs by $12.5 million and dragged out that part of the project at least 18 more months.


                  this wasn’t a rant against other disciplines; it was a rant against the lack of cooperation between the sciences:

                  The problem isn’t “python is easy”! You need to know some programming to get research done. Universities expect grad students to learn it on their own time.

                  the right thing to do would be to write down the algorithm and have someone else implement it. as you have written:

                  Languages are designed for programmers who know how to program,

                  There are many different systemic problems that make this happen, and you can’t just blame it on “python” or “cargo-cults” and call it a day.

                  imho it is a cargo cult to make people believe they can do their own (professional) programming without sufficient training. python is just the language used most for this, because it is perceived as easy to write programs in. contrary to that perception, it has many unintuitive behaviors, like the mutability of lists and dicts (just as an example).
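                  a tiny sketch of the mutability gotcha I mean (plain Python; the function name is just for the example):

```python
# aliasing: two names, one list object
a = [1, 2, 3]
b = a            # no copy is made here
b.append(4)
print(a)         # [1, 2, 3, 4] -- a changed too

# the classic mutable-default-argument trap
def append_item(item, bucket=[]):   # the default list is created only once
    bucket.append(item)
    return bucket

print(append_item(1))   # [1]
print(append_item(2))   # [1, 2] -- state leaked from the previous call
```

                  neither behavior is obvious to someone who learned “python is easy” and never got formal training.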


                    The Intel vulnerabilities are actually a negative example of the innovation gap. Covert channels were discovered in the 1970s-80s, methods for finding timing channels were published in 1992, and the high-assurance community moved fast, producing a 1995/96 paper showing Intel had a ton of them and calling for leak-proof CPUs. It was ignored from then on. No amount of talking about covert channels got any adoption, even by security-focused developers. Only a super-tiny niche audience cared.

                    Eventually Percival or someone demonstrated cache-based channels around 2005. That attack type became popular, but the general problem stayed mostly ignored outside CompSci. Spectre/Meltdown triggered a much quicker wave of responses. Now the popularity is pushing CompSci folks to build on the parallel work that was going on in information-flow security: they’re building both leak-resistant hardware and analyses to find leaks.

                    So: from the ’80s to 1995 for the Intel-specific risks to be identified, and until 2019 or so for people to act on them. Following the trend, the current attacks are being mitigated, but covert-channel analyses still aren’t the norm. Even projects like OpenBSD don’t do them. Outside SPARK Ada, new languages focused on security ignore the problem too, despite it being addressed in ancient tools such as the Gypsy Verification Environment and its programming language. The gap is still wide on the mainstream side.


                      Yeah, the monoid abstraction is one of the most powerful (and simple!) abstractions, and it’s helped me even in languages outside of Haskell. You don’t need a fancy type system to make use of it, either: if you can prove something satisfies the laws, then you as the developer can use that knowledge to make engineering decisions.
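                      As a sketch of what I mean in a language without a fancy type system (the names are just for the example): merging word-count dicts is associative and has an identity, so once you’ve convinced yourself the monoid laws hold, you can fold a list of them in any grouping and trust the result:

```python
from functools import reduce

def merge_counts(a, b):
    """Associative combine of two word-count dicts; {} is the identity."""
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
    return out

chunks = [{"foo": 1}, {"bar": 2}, {"foo": 3}]

# fold with the identity element; any grouping gives the same answer,
# which is what lets you split the work across chunks or threads
total = reduce(merge_counts, chunks, {})
print(total)  # {'foo': 4, 'bar': 2}

# spot-check the laws instead of proving them in a type system
x, y, z = chunks
assert merge_counts(merge_counts(x, y), z) == merge_counts(x, merge_counts(y, z))
assert merge_counts(x, {}) == x and merge_counts({}, x) == x
```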


                        The author writes:

                        Estimates matter because most people and businesses are date-driven

                        And here lies the problem exactly: Whenever I give you an estimate, you hear a deadline. I never gave you a deadline.

                        Or, as a client once told me, “There is effort and there is duration.”

                        I am in favor of estimation, as long as you understand what the answer you received actually is. Otherwise you are setting me up, and that is something I do not like.


                          Lynx will actually handle a lot of things that, e.g., IE6 will not, because it is actively maintained. One of the hardest stumbling blocks between a retro browser and a modern bare-bones site is TLS versions (and ciphers). I run a self-consciously retro front-end for a modern web service, but I keep the TLS up to date because it handles logins. Lynx works fine on it (because it’s compiled against current OpenSSL), while anything older than early Firefox releases usually won’t (because servers have dropped support for SSL and TLS 1.0).


                            I would love to use a GC’ed language for these tasks but what I need is control.

                            As you wrote later, there are API calls to control when to collect. And there’s always the option to not allocate on the GC heap at all and to use custom allocators.
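                            For what it’s worth, the same knobs exist in other GC’ed languages too; here’s a rough Python sketch of the pattern (disable collection in a hot section, then collect at a point you choose):

```python
import gc

gc.disable()            # no automatic cycle collection from here on
try:
    # latency-sensitive section: allocate freely without collector pauses
    data = [[i] for i in range(10_000)]
finally:
    gc.enable()

freed = gc.collect()    # run a full collection at a time we chose
```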

                            I dunno, changing people’s minds is hard.

                            Yep. “GC is slow” is so ingrained I don’t know what to do about it. It’s similar to how a lot of people believe that code written in C is magically faster, despite that belief defying logic, history, and benchmarks.


                              Did you rerun wc to make sure it isn’t the kernel’s fs cache that makes it fast?


                                They will actually compile the code and ship it to your clients. https://www.infoq.com/articles/ios-9-bitcode/

                                So, when I say “exactly,” I mean it must be legal bitcode for the compiler toolchain you are using. This is a major nuisance, as the LLVM Apple ships with Xcode is some internal branch. So basically the only option is building a custom compiler against Xcode.

                                For an effort in Rust to do this, check here. https://github.com/getditto/rust-bitcode

                                I’m not well-versed in D; I assume the compiler is not based on LLVM?


                                  I think the mir libraries cover that to a certain degree.


                                    The lightning pace of C++ evolution doesn’t help this at all either.

                                    Personally, I use D because C++’s lightning keeps giving me unpleasant electric shocks. Every new feature comes with new warnings about how it’s going to hurt you, and it’s not like new features make everyone stop using the old ones. The language has gotten too complicated for any one person to understand it.


                                      Because our patent examiners don’t take the time to realize they’re patenting math.


                                        Got a cite for that assertion? Are you confusing “being part of the internet” with “having interactive login access” ?


                                          I should probably have said it was some old articles I read that in. Things might have improved a lot since then. Allocation was one of the main things they mentioned. Appreciate the modern take on the situation.