1. 2
    1. 3

      Cool! Forgot that was a thing.

    1. 2

      Turbolinks seems like an interesting hack, but is it really a noticeable improvement in modern browsers? It seems like it’s an optimization that could be kind of done behind the scenes by the browser anyway, no?

      1. 6

        It seems like it’s an optimization that could be kind of done behind the scenes by the browser anyway, no?

        Browsers could never do this because it would break the web. Pretty much all websites assume that the JS VM is thrown away upon navigation. This means that they’re writing JS with non-idempotent transformations, which would break completely if you applied Turbolinks-style navigation to them.

        1. 2

          Feels like it on every site I’ve used it on. The initial request is the same, yeah, but you avoid all other requests for js, css, fonts (even if cached), re-parsing it all, and running most of the JS meant to run on load. Maybe if you don’t have very much js and css and you have caching set up perfectly you wouldn’t notice much.

          1. 1

            In short, adopting Turbolinks is the client-side mirror of switching from CGI to a persistent process on the server.

          2. 1

            It prevents the user from looking at a white page while their content loads, but otherwise all of the overhead is still there since they’re using HTTP and are sending whole HTML pages around.

            1. 6

              Not all the overhead - it doesn’t have to run scripts through the interpreter again, for example. In my experience content pages do seem snappier with turbolinks.

            2. [Comment removed by author]

              1. 2

                How else would you implement it without browser support? FWIW, Turbolinks degrades pretty gracefully. The player they have won’t work, but if you’re just making pages snappier it’ll just revert to normal page loads.

            1. 1
              1. 3

                The author’s argument is that because OpenBSD has packages of recent versions of Ruby, RVM/rbenv is unnecessary.

                Gem authors, like myself, often use several Ruby versions on their own machines to test and debug issues related to certain versions of Ruby and the libraries that we maintain. What if you work on one Ruby project for work that uses an older version of Ruby and you want your personal projects to use the latest version? 90% of professional Ruby developers, regardless of OS, still need a Ruby version switcher.

                1. 1

                  ~95% of the pulled NPM modules have already had their names claimed by a single, anonymous actor.

                  Maybe this is a good time to remind people of rimrafall?

                  This person has bumped the major version of every package to 2.0.0. As far as I can tell, most of the original packages were on versions <= 1.x.x.

                  1. 3

                    If you want to backdoor node apps, there are now plenty of available, established package names you could recreate that people already depend on. Enjoy!

                    1. 2

                      This was not a possibility, actually. Furthermore, all of the packages in question have new maintainers that are being monitored very, very closely.

                      1. 1

                        Do you need explicit permission to reclaim the previously used name? I didn’t gather that from the post or tweet which made it seem like you could adopt the name without intervention.

                        But I probably should have checked that rather than my “ready, fire, aim” approach above.

                        1. 1

                          I don’t believe that you did, no. But you did need npm’s help in order to publish old versions of the package, so the new maintainers could have only made new versions.

                          1. 2

                            Well, that’s still a problem then, since most people are using the caret syntax in their package.json, right?

                            Most of the packages seem to have been claimed by this unknown actor, who appears to have bumped the version number on every single package.
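
                            To make that concrete, here’s a rough Ruby sketch of what the caret permits. (caret_match? is a hypothetical helper for illustration; npm’s real semver resolver handles many more cases, including the 0.x rules.)

                            ```ruby
                            # Sketch of npm's caret behavior for versions >= 1.0.0:
                            # "^1.2.3" means ">= 1.2.3 and < 2.0.0". A hijacked 2.0.0
                            # release would NOT be pulled in automatically, but a
                            # hijacked 1.9.9 would be.
                            def caret_match?(range, version)
                              base = range.delete_prefix("^").split(".").map(&:to_i)
                              v    = version.split(".").map(&:to_i)
                              v[0] == base[0] && (v <=> base) >= 0
                            end

                            caret_match?("^1.2.3", "1.9.9")  # => true  (picked up on next install)
                            caret_match?("^1.2.3", "2.0.0")  # => false (major bump excluded)
                            ```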

                            1. 4

                              I think there’s some subtleties around the caret and 0.0.z versions in this specific case, but generally, you’re right, I think.

                              who appears to have bumped the version number on every single package.

                              Yeah, that’s the only way to get it to show up that way, you have to publish a new version so the new metadata is applied.

                              1. 1

                                It seems the lack of a (default) lockfile makes this worse.

                      1. 20

                        Is there any doubt that when the FBI brings up a law from the 1700’s to justify breaking digital encryption in 2016 that they are completely making it up as they go along?

                        1. 4

                          They certainly aren’t making it up as they go along. They successfully used this law to force another company to unlock a locked smartphone. The big difference is that company already had the technique ready to use.

                          The FBI was trying to quietly create a precedent that would allow them to force companies to create techniques even when a law applies (think forcing you to pick a lock vs forcing you to hand over a key) which is a massive expansion.

                          Don’t get me wrong, the All Writs Act has a necessary place in law, filling in gaps of execution that haven’t been legislated, and nothing else can really replace it. But this application is so far out of precedent that it’s very worrying.

                          1. 3

                            More so than when the EFF quotes the first amendment to justify blogging?

                            1. 6

                              The First Amendment to the Constitution is hardly the legal equivalent of a 200-year-old obscure section of the US Code whose sole notable application is against smartphone manufacturers.

                              1. 2

                                Since when do notable applications determine the validity of a law?

                                Every time Congress passes a bill that says “on the internet”, people scream “we don’t need this law. the laws we have are just fine.”

                                Every time the police arrest somebody per a law that doesn’t say “on the internet”, people scream “these outdated laws don’t count.”

                                1. 4

                                  Have you seriously read the bills that people get up in arms about? It’s not just because a bill ‘says “on the internet,”’ it’s normally because it would require breaking basic internet security. This outcry is perfectly consistent with that.

                                  The gov’t is using this law, which is designed to allow a judge to, say, compel a landlord to hand over a key so that a search warrant can be executed, to try to force Apple to build a key. That is not what the precedent sets, and in fact Apple argues it is against the law.

                                  The argument is that this case satisfies neither the requirement that no other law applies more specifically nor the requirement that the writ be “agreeable to the usages and principles of law.” [1] This is because of the Communications Assistance for Law Enforcement Act of 1994, which lays out the interaction and responsibilities between government and companies when it comes to digital messages.

                                  If the FBI is successful, then they can use the All Writs Act to do whatever they want when it comes to electronics, as long as the law didn’t explicitly cover that type of electronics. If that happens then it would be preferable to pass more explicit laws to restrict their usage of this act.

                                  [1] http://www.nyulawreview.org/sites/default/files/pdf/NYULawReview-83-1-Portnoi.pdf If you can handle it, I suggest reading the above article, even if just for context on how this law is being used in general.

                                  1. 1

                                    The generalization about how the FBI is supposedly misusing US laws doesn’t sound very solid, but then again, talking about how people are screaming against laws or their enforcement is hardly better. Sorry, I’m probably biased here, as I value civil-rights associations higher than US appendices, especially those meddling with our rights to privacy. :p

                                    1. 2

                                      Pretty much the only way the FBI wins is through overturning precedent that would then say that forcing someone to hand over a key and forcing someone to pick a lock are legally equivalent. That would be a horrifying precedent to set because of the implications on the current state of 5th amendment precedents which say you can force someone to hand over knowledge that exists outside of themselves but not something that only exists within themselves (theoretical OS that has these features disabled).

                                2. 1

                                  In that case they are relying on an interpretation of the first amendment that wasn’t law until the 60s. Before then the U.S. didn’t really have free speech.

                              1. 5

                                Was anyone else hoping this was a Quenya/Sindarin generator?

                                1. 13

                                  Question to lobste.rs here: Is it necessary for a Ruby developer to know how to implement a linked list at all?

                                  To me that sounds like a weird thing to test for in an application developer since practically all application languages have their own list object.
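
                                  For reference, the exercise itself is small. A minimal singly linked list in Ruby (one of many possible shapes — this one is just a sketch) is roughly:

                                  ```ruby
                                  # A minimal singly linked list -- the kind of thing such an
                                  # interview question is probing for.
                                  class LinkedList
                                    Node = Struct.new(:value, :next_node)

                                    include Enumerable   # iteration, map, to_a, etc. for free

                                    def initialize
                                      @head = nil
                                    end

                                    # O(n) append; keeping a tail pointer would make it O(1).
                                    def push(value)
                                      node = Node.new(value, nil)
                                      if @head.nil?
                                        @head = node
                                      else
                                        last = @head
                                        last = last.next_node while last.next_node
                                        last.next_node = node
                                      end
                                      self
                                    end

                                    def each
                                      node = @head
                                      while node
                                        yield node.value
                                        node = node.next_node
                                      end
                                    end
                                  end

                                  LinkedList.new.push(1).push(2).to_a  # => [1, 2]
                                  ```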

                                  1. 11

                                    Actually, I have an interesting data point there. I just had a friend join me at Google, after they’d spent more than a decade at a large defense contractor well-known for both software and hardware. Google’s interviews are famous for involving algorithm questions - slightly more complicated than linked-list implementation, but it would be hard to pass them without knowing it as background knowledge. The other company’s are not.

                                    According to my friend, many of the highly productive programmers they know from the other company will cheerfully talk about how glad they are to leave their undergrad algorithms courses in the past and forget everything from them. Googlers… have a contrasting attitude, which was a large factor in coming here for this friend, and for myself.

                                    My conclusion is that whether data structures and algorithms knowledge is important to programmers depends on the nature of the work, and is also a culture question. It would be a surprise here to meet a coworker who wasn’t at least interested in discussing algorithms topics, even though they are only occasionally of direct importance.

                                    1. 7

                                      Depends on the work you’re doing. Plenty of developers can do their jobs perfectly well without understanding fundamental data structures.

                                      It would seem impossible to do effective performance analysis & many other tasks without understanding how basic data structures & algorithms work, though.

                                      1. 9

                                        That is to say: is it necessary in order to develop? No.

                                        Is it necessary in order to be successful long term? Almost certainly.

                                        1. 5

                                          Depends wildly on what you mean by successful, and what you’re working on in the day to day. Does seem like a waste of money to take a bunch of algorithms/programming classes and still be unable to implement a linked list.

                                          1. 4

                                            There’s an implicit assumption here that the primary goal of a college education is to be marketable. For many I would imagine that becoming marketable is not in fact their primary goal, and is superseded by things ranging from fulfilling the desires of some real or imagined societal or familial pressure, to broadening their cultural and intellectual horizons through interaction with people of varying backgrounds and fields. That’s not to say that being marketable isn’t important (I think it is quite important for long term happiness, as much as we’d like to imagine we can all be happy making low wages working with a non-marketable degree), but that we in professional STEM fields may overestimate the degree to which others value the marketability of their degree.

                                      2. 7

                                        It’s important to know how it’s implemented, so that in the event you find yourself using one (even if it’s fundamental as in erlang or in a stdlib somewhere), you understand its characteristics. Actually performing the implementation is, however, as you say, utterly pointless now, just like implementing any sort or any tree.

                                        1. 7

                                          I think it’s also important to know from a communication perspective. When I’m working with other developers I expect some basic fluency with the fundamental data structures and algorithms. I wouldn’t expect someone to implement a linked list but I would expect them to understand when the business problem can be well modeled like a linked list or tree and be able to communicate the idea using those terms.

                                          Writing a simple version of either is an easy way to demonstrate that understanding.

                                          1. 3

                                            I don’t know about ‘utterly pointless’ – understanding how different trees or sorts are implemented is valuable in recognizing other data structures you have to build that are near-copies. It’s perhaps not super relevant to CRUD-building, but it matters if you’re doing anything nontrivial behind that CRUD (for instance, many of the applications I’ve worked on have had very nontrivial business layers, involving stuff like decision support trees). Understanding how to structure those trees effectively relied heavily on my understanding of abstract data structures.

                                            I’m certainly not saying it’s the most important thing, but ‘utterly pointless’ is maybe a bit overzealous. This goes for things like knowing how to implement depth-first vs. breadth-first search, too – or understanding the complexity of a custom merge-and-balance operation vs. implementing a self-balancing tree (the aforementioned decision support tree program involved a fair amount of theory in how to effectively implement, whether via a M&B approach, or an online/self-balanced approach).

                                            1. 5

                                              I agree you need to know how to use trees, and communicate about their use; and that some stdlibs don’t have exactly the right tree types for every possible use case. But the context here is what questions you’d ask in a developer interview. Asking for a de novo implementation of an online red-black tree implementation merely tests whether the interviewee has recently completed an algorithms course in college.

                                              1. 5

                                                Ah – that I totally agree with. I’m not sure I could give you a de novo implementation of a red-black tree without the aid of a few pots of coffee and a couple of algorithms books. Much less on the fly in an interview.

                                                1. 6

                                                  As you say, the setting and the time constraint make even simple things much harder. “Implement this data structure” is NOT a good interview challenge, because it takes a few hours to do properly, even with reference materials at hand.

                                          2. 4

                                            As just one data point, I am a Ruby programmer of 4 years now and I do not know how to implement a linked list.

                                            1. 2

                                              That’s awesome. I bet there are a very large number of Python and PHP programmers who have the same experience, and I bet most of you folks can go to your graves with fulfilling careers and lives without that ever being an issue. All of those languages have mutable collections, and their stdlibs strongly prefer arrays and hash tables/dictionaries over linked lists anyway.

                                            2. 0

                                              If the answer is “no”, should they know how to implement anything? On the other hand, it seems really strange they asked for it to be implemented in Java.

                                              I wouldn’t expect them to know all the details off the top of their head, but it’s not a very difficult problem at all. Even if they’ve never heard of linked lists before, they should be able to code it up once they know it’s a series of linked nodes. It’s not like they’re asking for some exotic balanced tree with tons of pointer juggling.

                                              That said, 30 minutes to implement the whole List API for a junior developer is a little tight. Hearing it was 25 public methods sounded unreasonable, but looking at the docs, most of them are just wrappers around some variation of a while(…) loop, so it ends up not being too bad. If I had to use it as an interview question I’d probably bump it up to 60 or 90 minutes, though.

                                              I’d be interested in seeing the code the author and his co-workers came up with. 6 hours seems like a really long time to not get the whole thing working.

                                            1. 8

                                              Perhaps I’m just out of touch, but I was surprised that scaling to 1000 r/m (about 17 r/s) required any thought at all. I would have figured the default settings on any web framework would handle that with ease, assuming the database can.

                                              Is this Ruby being slow? Does an out-of-the-box Go app doing the same thing need a scaling document? Or Java?

                                              1. 4

                                                I think it’s mainly a ruby/rails thing still. Go (Gin), Java (Dropwizard), Elixir (Phoenix), Clojure (Ring), and Haskell (Yesod) should all get that many (and more) out of the box. In fact there is a benchmark for a few: https://github.com/mroth/phoenix-showdown and they are all around a few kr/s (which isn’t heroku).

                                                I put a Phoenix hello app on a free Heroku dyno and ran benchmarks against it using ab. A similar test with Clojure that @peter ran (packaged as a fat jar and deployed to a free dyno) got about 750 r/s. The point is that it’s certainly easy to achieve.

                                                ab -c 250 -n 4000 http://quiet-falls-9626.herokuapp.com/
                                                
                                                Server Software:        Cowboy
                                                Server Hostname:        quiet-falls-9626.herokuapp.com
                                                Server Port:            80
                                                
                                                Document Path:          /
                                                Document Length:        2140 bytes
                                                
                                                Concurrency Level:      250
                                                Time taken for tests:   3.141 seconds
                                                Complete requests:      4000
                                                Failed requests:        0
                                                Total transferred:      9652000 bytes
                                                HTML transferred:       8560000 bytes
                                                Requests per second:    1273.50 [#/sec] (mean)
                                                Time per request:       196.309 [ms] (mean)
                                                Time per request:       0.785 [ms] (mean, across all concurrent requests)
                                                Transfer rate:          3000.94 [Kbytes/sec] received
                                                

                                                This endpoint renders the template with passed variable

                                                ab -c 250 -n 4000 http://quiet-falls-9626.herokuapp.com/hello/tesla
                                                
                                                Document Path:          /hello/tesla
                                                Document Length:        1046 bytes
                                                
                                                Concurrency Level:      250
                                                Time taken for tests:   4.576 seconds
                                                Complete requests:      4000
                                                Failed requests:        0
                                                Total transferred:      5276000 bytes
                                                HTML transferred:       4184000 bytes
                                                Requests per second:    874.14 [#/sec] (mean)
                                                Time per request:       285.996 [ms] (mean)
                                                Time per request:       1.144 [ms] (mean, across all concurrent requests)
                                                Transfer rate:          1125.96 [Kbytes/sec] received
                                                

                                                And a benchmark against a Phoenix app serving data (keep in mind SSL shtuffs):

                                                Server Software: nginx/1.6.3
                                                Server Port: 443
                                                SSL/TLS Protocol: TLSv1,DHE-RSA-AES256-SHA,4096,256
                                                
                                                Document Path: /api/v0/messages
                                                Document Length: 1886 bytes
                                                
                                                Concurrency Level: 200
                                                Time taken for tests: 11.581 seconds
                                                Complete requests: 10000
                                                Failed requests: 0
                                                Keep-Alive requests: 10000
                                                Total transferred: 21880000 bytes
                                                HTML transferred: 18860000 bytes
                                                Requests per second: 863.47 [#/sec] (mean)
                                                Time per request: 231.624 [ms] (mean)
                                                Time per request: 1.158 [ms] (mean, across all concurrent requests)
                                                Transfer rate: 1844.99 [Kbytes/sec] received
                                                
                                                1. 2

                                                  You can just “crank dynos”, but I wanted to write an article about how to get to 1000 r/m efficiently. Any Rails application can get to 1000 r/m by scaling horizontally, yes, but doing it efficiently with the fewest servers possible is another matter entirely.

                                                  I’ve just seen so many overscaled Ruby apps that I knew this article had to be written.

                                                  1. 4

                                                    I understand, I’m not talking about cranking dynos either. I’m asking if/why Ruby/Rails is just this slow out of the box. I would expect 1000 r/m to be possible with a default configuration on any modern webstack.

                                                    In other words, it seems like not using Ruby is the most efficient way to scale to 1000 r/m.

                                                    1. 1

                                                      Oh, it’s certainly capable. Consider that Basecamp claims a ~25ms median response time, Shopify has variously claimed ~45-100ms. And both of those companies are at massive scale.

                                                      1. 2

                                                        I don’t understand your response. I am talking about requests per minute, not response time.

                                                1. 2

                                                  The description of Unicorn is wrong.

                                                  while downloading the request off the socket, Unicorn blocks all of your other workers from accepting any new connections, and your host becomes unavailable.

                                                  Sockets don’t work like this, at all. Using curl it’s easy to show this doesn’t happen:

                                                  [terminal 1] $ curl --limit-rate 1 localhost:8080

                                                  [terminal 2] $ curl localhost:8080

                                                  The second will return immediately, even while the first connection is still slowly trickling the request.

                                                  @nateberkopec conflates a listening socket with an accepted socket. An arbitrary number of connections can be accepted from a listener, and this has no effect on the listener, or any of the other accepted connections.
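
                                                  A minimal Ruby sketch of the same point (port and payload are arbitrary here): a client that connects but never sends its request doesn’t stop the listener from accepting, and answering, a second client.

                                                  ```ruby
                                                  require "socket"

                                                  server = TCPServer.new("127.0.0.1", 0)   # port 0 = pick any free port
                                                  port   = server.addr[1]

                                                  acceptor = Thread.new do
                                                    2.times do
                                                      conn = server.accept               # each accept returns a NEW socket
                                                      Thread.new(conn) { |c| c.write("hello\n"); c.close }
                                                    end
                                                  end

                                                  slow = TCPSocket.new("127.0.0.1", port)  # trickling client: sends nothing
                                                  fast = TCPSocket.new("127.0.0.1", port)

                                                  greeting = fast.gets                     # answered despite the slow client
                                                  acceptor.join
                                                  slow.close
                                                  fast.close
                                                  ```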

                                                  Otherwise, good intro on network concerns in web apps. :)

                                                  1. 1

                                                    Hm, this may be related to Ruby’s implementation of sockets. It was my understanding that this behavior (socket blocking) was the reason Unicorn cannot serve slow clients effectively. I’ll have to do more research. Thanks!

                                                    1. 2

                                                      You’ll be able to serve up to worker_processes slow connections (one per worker), but not any after that. Typically Ruby and Python app servers are run with low numbers of workers, so you can still only handle a limited number of long-lasting requests. Something that buffers the requests thus still provides a huge advantage.

                                                      1. 2

                                                        That makes so much sense. I’m not sure why I thought it was more complicated than that.

                                                  1. 10

                                                    As thorough as this article is, it seems to gloss right over what I’d consider a five alarm fire.

                                                    its average server response time is 300ms

                                                    Maybe spend some time looking at that number? The same argument that you don’t need to go Twitter scale would seemingly apply to picking the right heroku stack. If you can’t get at least 10 req/s out of webrick, don’t waste time picking a new server layer. No?

                                                    1. 3

                                                      I see what you’re saying, but if you check out the linked “Scaling Twitter” presentation, Twitter was averaging 250-300ms server response times in 2007. And serving 600 requests per second. Stupid? Maybe. But sometimes you’re stuck with that and you need to start pulling other levers.

                                                      It’s also worth noting that an IO-heavy application can increase throughput for free by switching to a multithreaded app server like Puma. Yes, your average response times won’t improve, but the number of requests you can serve per second will increase as Ruby drops threads waiting on IO to go do other work.

                                                      Average response times are just one part of the scaling equation - there are other levers to pull. This post is about those other levers. The rest of my blog has plenty of resources on how to decrease response times.
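
                                                      A toy illustration of that effect, with sleep standing in for IO (MRI releases the GVL while a thread sleeps or waits on IO, so the four simulated requests overlap):

                                                      ```ruby
                                                      require "benchmark"

                                                      # sleep stands in for a slow database or network call
                                                      def simulated_io_request
                                                        sleep 0.2
                                                      end

                                                      serial = Benchmark.realtime do
                                                        4.times { simulated_io_request }   # ~0.8s: one after another
                                                      end

                                                      threaded = Benchmark.realtime do
                                                        # ~0.2s: all four overlap, the way a threaded server like Puma
                                                        # overlaps IO-bound requests
                                                        4.times.map { Thread.new { simulated_io_request } }.each(&:join)
                                                      end
                                                      ```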

                                                      1. 3

                                                        I liked Secrets to Speedy Ruby Apps On Heroku (though “On Heroku” is probably overly specific; scares away the rest of us).

                                                        I think the current post would be improved by more references to posts on caching, etc. And some guidance on when to do which. Like instead of saying this isn’t about memcache, suggest memcache first and then come back here?

                                                        1. 1

                                                          Thanks for the suggestion! All of these topics are so inter-related, I’m struggling with splitting them up into posts.

                                                      2. 1

                                                        Yep. If this were a Django app I’d be suggesting that you install Django Debug Toolbar and inspect the SQL queries required to construct one of these pages. Can the queries be reduced, or cached, or indexed better, or run concurrently?

                                                        There’s presumably a Rails equivalent, or you can check out the query log or something. 300ms is a hell of a long time.

                                                      1. 3

                                                        The heroku router is not nginx.

                                                        1. 1

                                                          I swear it used to be. I’ll ask @schneems.

                                                          EDIT: You were right. It’s a custom erlang app: https://twitter.com/schneems/status/626500188970946560

                                                          1. 6

                                                            Long long ago it was. I work at Heroku on the routers :)

                                                            1. 2

                                                              Is the guy who wrote Learn You Some Erlang still on the team (think Fred Herb)? That was amazing. There’s also a book by the routing team (or ebook) though I can’t remember the name. Both are awesome reads.

                                                              Found it: Erlang in Anger.

                                                              1. 1

                                                                Yup, Fred Hebert is still here.

                                                        1. 8

                                                          we had 200 API servers running on m1.xlarge instance types with 24 unicorn workers per instance. This was to serve 3000 requests per second

                                                          This is what blew my mind.

                                                          1. 9

                                                            Well, yeah.

                                                            Given Little’s Law (L = λW), we can model this.

                                                            λ is the arrival rate (3000 req/sec), W is the average duration (which we can estimate at 1 second), and L is the average number of requests in-flight at any point in time, which is 3000. 200 instances running 24 workers each is 4800 workers total, so they’re running at ~63% capacity, which seems reasonable.
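
                                                            Spelling out that arithmetic (the 1-second average duration is an estimate, not a measured number):

                                                            ```ruby
                                                            # Little's Law: L = lambda * W
                                                            arrival_rate = 3000.0   # requests per second (lambda)
                                                            avg_duration = 1.0      # seconds per request (W, estimated)
                                                            in_flight    = arrival_rate * avg_duration   # L = 3000 concurrent requests

                                                            workers     = 200 * 24             # instances x unicorn workers = 4800
                                                            utilization = in_flight / workers  # => 0.625, i.e. ~63% of capacity
                                                            ```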

                                                            This is what happens when your scalability model is per-process but your processes are two or three orders of magnitude larger than a POSIX thread.

                                                            1. 2

                                                              An average response time of 1 second seems way too long, but I don’t know anything about Parse. Most Rails apps should be south of 300ms at least. GitHub and Shopify hover between 50 and 100ms.

                                                              1. 2

                                                                Seems, yes. Should, yes.

                                                                Is? Probably not.

                                                            2. 2

                                                              Extrapolating from one of my higher-traffic apps that gets 60 requests per second, 200 servers sounds about right. FWIW, we’re using 4 c3.large instances with 4 workers each, so I’d guess their traffic patterns are quite different - it’s definitely a different problem domain.

                                                              I absolutely agree that a process per request model breaks down, and it does sound to me like moving beyond ruby made sense for their case. That’s the sort of problem that is great to have, honestly. This was a neat case study to read.