Hey @feoh! I’ve tried several times to apply for an SRE role where I live but never got any answer back. My profile is probably still a bit too young (4 years’ experience), but I’m looking for a great environment and teams to learn from. Would you have any idea about the profile that matches this kind of job @ Amazon?


      I like periodically checking on people who did clever projects in the past to see what they or their students are up to. @GeoffWozniak’s submission was by a person from the Cyclone team. Looking them up, I found that one is now doing F* work for Everest, and the author of the submission was involved in that work.


          The model of creditor and debtor fits. There is a reason wages are called liabilities on the books.

            • Most languages aren’t AOT-compiled; there’s usually a JIT in place (if even that; Ruby and Python are run-time languages through and through). These languages did not exist 20 years ago, though their ancestors did (and died, and had some of the good bits resurrected; I use Clojure regularly, which is both modern and a throwback).

            • Automated testing is very much the norm today; 10 years ago it was a fringe idea, something you were only crazy enough to do if you were building rockets or missiles or something.

            • Packages and entire machines are regularly downloaded from the internet and executed in production. I had someone tell me that a docker image was the best way to distribute and run a desktop Linux application.

            • Smartphones, and the old-as-new challenges of working around vendors locking them down.

            • The year of the Linux desktop surely came sometime in the last or next 20 years.

            • Near dominance of Linux in the cloud.

            • Cloud computing and the tooling around it.

            • The browser wars ended, though they started to heat up before the 20 year cutoff.

            • The last days of Moore’s law and the 10 years it took most of the industry to realize the party was over.

            • CUDA, related, the almost unbelievable advances in computer graphics. (Which we aren’t seeing in web/UI design, again, probably not for lack of trying, but maybe the right design hasn’t been struck)

            • Success with Neural Networks on some problem sets and their fledgling integration into other parts of the stack. Wondering when or if I’ll see a NN based linter I can drop into Emacs.

            I could go on too. QWERTY keyboards have been around 150 years because they’re good enough and the alternatives aren’t better than having one standard. I don’t think that the fact that my computer has a QWERTY keyboard on it is an aberration or failure, and not for lack of experimentation on my own part and on the parts of others. Now if only we could do something about that caps lock key… Oh wait, I remapped it.

            It’s easy to pick out the greatest hits in computer science from 20, 30, and 40 years ago. There’s a ton of survivorship bias when you don’t point to all of those COBOL-alikes and stack-based languages which have all but vanished from the industry. If it seems like there’s no progress today, it’s only because it’s more difficult to pick the winners without the benefit of hindsight. There might be some innovation still buried that makes two-way linking better than one-way linking, but I don’t know what it is and my opinion is that it doesn’t exist.


              The root problems were discovered around 1992. The security community just ignored it all, like everything the high-assurance security community did. I had a rant on that here whose main article is a comment with links to that work. We knew about cache- and microarchitecture-based leaks in 1992. I’ve been recommending mitigations for a long time. Well, mitigation attempts, haha. Mainstream security often ignores work done outside its own circles or standards. Politics. There’s plenty of work out there waiting to be used or improved on, though. I post a lot of it here since there are smart programmers here with an unusual quality & security focus.


                I don’t know that the model fits, though. We’ve traditionally thought of these things as agreements, then built contract law to formalize them. I think that fits better. So, you agreed to do specific things for specific benefits for a specific amount of time. That’s on top of workers having no rights (at-will employment) in many states. The models could possibly be combined.


                  That’s awesome. Maybe you can change my mind!

                  Directed graphs are more general than undirected graphs (you can implement two-way undirected graphs out of one-way arrows; you can’t go the other way around). Almost every level of the stack, from the tippy top of the application layer to the deepest depths of CPU caching and branch prediction, is implemented in terms of one-way arrows and abstractions. I find it difficult to believe that this is a mistake.

                  EDIT: I realized that ‘general’ in this case has a different meaning for a software developer than it does in mathematics, and here I was using the software developer’s perspective of “can be readily implemented using”. Mathematically, something is more general when it can be described with fewer terms or axioms. Undirected graphs are more general in the mathematical sense because you have to add arrowheads to an undirected graph to make a directed graph, but for the software developer it feels more obvious that you could get a “bidirected” graph by adding a backwards arrow to each forwards arrow. Implementing a directed graph from an undirected one is difficult for a software developer because you have to figure out which way each arrow is supposed to go.
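
To make the “can be readily implemented using” direction concrete, here is a minimal sketch (plain Python; the class and method names are mine, not from the thread) of getting an undirected graph out of one-way arrows by storing every edge as a pair of opposing arrows:

```python
from collections import defaultdict

class DirectedGraph:
    """Adjacency sets: each stored edge is a single one-way arrow."""
    def __init__(self):
        self.out = defaultdict(set)

    def add_arrow(self, a, b):
        self.out[a].add(b)

    def neighbors(self, node):
        return self.out[node]

class UndirectedGraph(DirectedGraph):
    """An undirected edge is just a pair of opposing one-way arrows."""
    def add_edge(self, a, b):
        self.add_arrow(a, b)
        self.add_arrow(b, a)

g = UndirectedGraph()
g.add_edge("x", "y")
assert "x" in g.neighbors("y") and "y" in g.neighbors("x")
```

Going the other way means deciding a direction for each edge, information the undirected graph simply doesn’t carry, which is the asymmetry the comment is pointing at.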


                    If you are not willing to do that kind of analysis, asking your boss which other task should be dropped from your plate is fairly effective.


                      I found it a great read as well and his blog has more. Thank you for submitting this.


                        It could be seen as a complete listing if the “collection of usernames” is interpreted not as the collection of all usernames the server has, but rather as all the usernames the attacker cares about.


                          Thank you. I wasn’t aware that this had taken place some time ago.


                            Sometimes I like to think that I know how computers work, and then I read something written by someone who actually does and I’m humbled most completely.


                              Given enough time (possibly heat death of the universe scales) this method could create a full enumeration.


                                To enumerate can also mean “to build a list” which is closer to this usage, but I’d agree it was used imprecisely.

                                I’d prefer calling this a username oracle attack!
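
A sketch of why “given enough time” holds: generate every candidate string in shortest-first order and filter it through a yes/no oracle. The `oracle` here is a hypothetical stand-in for the vulnerable endpoint, and all the names are mine (Python):

```python
import itertools
import string

def candidate_usernames():
    """Lazily yield every lowercase string, shortest first."""
    for length in itertools.count(1):
        for chars in itertools.product(string.ascii_lowercase, repeat=length):
            yield "".join(chars)

def enumerate_valid(oracle, limit):
    """Filter candidates through a yes/no oracle (the leaky endpoint)."""
    found = []
    for name in itertools.islice(candidate_usernames(), limit):
        if oracle(name):
            found.append(name)
    return found

# Hypothetical oracle standing in for the server's response difference:
known = {"ab", "zz"}
assert enumerate_valid(known.__contains__, 1000) == ["ab", "zz"]
```

Shortest-first ordering guarantees every finite username is eventually reached, so with unbounded time (and patience from the server) this converges on the complete listing; in practice it only ever produces the partial one.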


                                  I played a decent amount of this one night this week (staying away from it the other nights in an attempt to keep from losing too much sleep).

                                  I really enjoy the effort that went into the setting, and have been finding all the “File Processing” puzzles quite interesting, as that sort of data processing is something I’ve done a fair amount of, just not one word at a time.


                                    Quote from Wikipedia:

                                    An enumeration is a complete, ordered listing of all the items in a collection.

                                    Could someone enlighten me on this? What the article describes doesn’t seem like a “complete listing”.


                                      This guy is absolutely terrible at communicating his ideas.


                                      The concept [of hypertext] is just so trivial and so self-evident

                                      This isn’t really accurate. Most people today don’t understand what hypertext is. (Just look at the comments on this thread!)

                                      The idea of navigable connections between ideas through some mechanism is trivial (assuming you’re familiar with the western cyclopedic tradition), & many people independently invented similar systems, but hypertext has a very specific set of rules that interact in a fairly nuanced way. (The web implements approximately one-half of one of these rules, which is the source of a lot of confusion.)

                                      He has no idea about the complexity of implementing what he is talking about

                                      He has a pretty clear idea of the complexity of implementing what he’s talking about, because he’s been in close communication with different teams of serious professional developers actually implementing versions of it for many years.

                                      It’s easier to implement a proper hypertext system than a modern web browser – but, where browsers have hundreds of developers, all of the implementations of Xanadu ideas since the mid-80s have (as far as I am aware) had teams of at most three people.

                                      His ideas are infeasible IMO.

                                      They’ve been implemented. Implementations are being used internally.

                                      The core ideas are pretty straightforward to implement. (I’ve written open source implementations of them in my free time, after leaving the project.)

                                      The primary difficulty in implementing these things is poor public-facing documentation (because Ted wrote all the public-facing documentation, and he doesn’t separate technical ideas from rants & marketing material). This is why I wrote my own documentation.

                                      Once the concepts are understood, most of them can be implemented in an hour or two. (I know, because I did exactly that many times.)

                                      what won was simplicity, the least (overall) effort

                                      Take a look at any W3C standard and tell me, with a straight face, that simplicity won.

                                      What won was organic growth. In other words: instead of thinking carefully and seriously about how things should be designed, they went with their gut and used the design that came to mind most quickly. This gives them an edge in terms of communication: a stupid idea is much easier to communicate than a simple idea, because it will be as obvious to the person who hears it as it is to the person who says it. However, it’s a nightmare when it comes to maintainability, because poorly-thought-out designs are inflexible.

                                      In terms of the actual number of elements necessary & the actual amount of text required to explain it, hypertext is simpler than webtech. The effort in a hypertext or translit system is the fifteen minutes you spend thinking hard about how all the pieces fit together, while the effort in webtech is trying to figure out how to make a pile of mismatched pieces do something that shouldn’t be done in the first place a decade after you learned to use all of them.


                                        I would also be interested in your thoughts on Lisp, where the code is already structured data. This is an interesting property of Lisp, but it does not seem to make it clearly easier to use.


                                          Compilers and interpreters use structured representations because those representations are more practical for the purposes of compiling and interpreting. It’s not a given that structured data is the most practical form for authoring. It might be. But what the compiler/interpreter does is not evidence of that.
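
As a toy illustration of the property in question (sketched in Python, with nested tuples standing in for s-expressions; everything here is my own construction, not from the thread): when the program is an ordinary data structure, transforming code is just transforming data.

```python
# A tiny s-expression evaluator: programs are plain nested tuples,
# so the same structure can be walked, rewritten, or evaluated.
def evaluate(expr):
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    vals = [evaluate(a) for a in args]
    if op == "+":
        return sum(vals)
    if op == "*":
        result = 1
        for v in vals:
            result *= v
        return result
    raise ValueError(f"unknown operator: {op}")

program = ("+", 1, ("*", 2, 3))   # (+ 1 (* 2 3))
assert evaluate(program) == 7

# Because the program is data, a "macro" is just a function on tuples:
def double_literals(expr):
    if isinstance(expr, int):
        return expr * 2
    return tuple(expr[:1]) + tuple(double_literals(a) for a in expr[1:])

assert evaluate(double_literals(program)) == 26   # (+ 2 (* 4 6))
```

Whether authoring in that structured form is actually easier for humans is exactly the open question above; the sketch only shows what the representation buys the tooling.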