1. 3

    Just coming off holiday vacation, which was great. Really needed the break tbh. Unfortunately, also just discovered that our landlord is selling the house we’re renting, so we have to move by the end of the month. Welcome to 2017! :(

    I did some hacking over the holidays on the RTL8710 to get Rust up and running. Was a fun little project, hoping to dedicate some more time to it in the future. Particularly since I have no idea what I’m doing and it was a great learning experience :)

    Spent a bunch of time tuning up my astrophoto mount and doing some shakedown runs. Need to pack it all up for the move, but managed to acquire my first narrowband image which was exciting.

    And finally, getting back to working on my podcast (it’s a show which does quick overviews of CS algorithm/data structure papers). Trying to build up a backlog of completed shows before I go live, so that I can have a buffer in case I miss some weeks.

    1. 24

      This makes me very very thankful that the Rust team places such an emphasis on good documentation. It is so easy to let documentation fall by the wayside, and once you have commercial training providers pop up, it’s a lot harder to get documentation efforts going (because now you have businesses with a vested interest in the documentation remaining bad). Good documentation doesn’t just happen. It takes serious work and real buy-in from stakeholders, where they actually believe it is important to invest the time, money, and energy to do it right.

      1. 22

        It has certainly been a pleasure to have the support of the rest of the organization (both Mozilla and non-Mozilla) here. They’ve always said “we need good docs, and that means paying someone for them,” and it certainly would go much, much slower if I had some other job.

        1. 4

          Thanks for all the work! Your dedication really shows.

          Nothing advances language growth like good documentation (perhaps a fabulous, welcoming community, but Rust also has that).

        2. 3

          This has been my observation as well. As soon as a for-profit company that makes its money by selling “professional services” runs a project, the documentation always falls by the wayside. Typesafe (or Lightbend, as they prefer to be called now) is no exception here - and why should they be? Their business model depends on pumping out ShinyNewThings as fast as possible and then selling consulting services. It really shows in the Scala ecosystem: so many Lightbend projects with flashy webpages touting their Reactive Big Data Synergy, and then the UX for them is terrible.

          Meanwhile, the not for profit communities behind Rust, Clojure, Python, Elixir, etc put much more emphasis on delivering a smaller set of composable building blocks with thorough documentation.

          I know the author of the post in the Google Groups thread says he/she doesn’t believe this is the case, but I’ve yet to see an exception here; it is absolutely not specific to Scala.

          1. 1

            I think that’s the best explanation - not so much having a vested interest in bad documentation so they can sell training, but that features will always have a higher return on investment than documentation will, so that’s where time and energy get focused.

          2. 5

            It also helps that Rust’s design is a lot cleaner than Scala’s. While both Rust and Scala are larger languages than most, in Rust, every language feature has a clear unique purpose, and it would be very hard to achieve all of Rust’s design goals with a smaller language. On the other hand, Scala is full of features that were thrown in just because they initially seemed like a good idea.

            1. 2

              Scala is full of features that were thrown in just because they initially seemed like a good idea

              Could you mention a few?

              1. 4

                Subclasses, traits and implicits: They all serve overlapping purposes (variations on ad-hoc polymorphism), which suggests they should be merged into a single feature. (Please don’t bring Java compatibility as an excuse.)
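
                To make the overlap concrete, here’s a rough JVM sketch (Java rather than Scala, and every name in it is invented for illustration): the same “show” capability can come either from subtype polymorphism or from a separate type-class-style witness object, which is roughly the pattern Scala implicits automate.

                ```java
                // Two routes to the same "show as string" capability on the JVM.
                // All names (Showable, Celsius, Show, IntShow) are invented.

                // 1) Subtype polymorphism: the type itself carries the capability.
                interface Showable {
                    String show();
                }

                class Celsius implements Showable {
                    final double degrees;
                    Celsius(double degrees) { this.degrees = degrees; }
                    public String show() { return degrees + "C"; }
                }

                // 2) Type-class style: the capability lives in a separate witness
                // object, so it can be supplied for types you don't control.
                // Scala implicits automate looking up and passing this witness.
                interface Show<T> {
                    String show(T value);
                }

                class IntShow implements Show<Integer> {
                    public String show(Integer value) { return "Int(" + value + ")"; }
                }

                class Overlap {
                    static <T> String render(T value, Show<T> witness) {
                        return witness.show(value); // passed by hand; implicits pass it for you
                    }

                    public static void main(String[] args) {
                        System.out.println(new Celsius(21.5).show());  // subtype route: 21.5C
                        System.out.println(render(42, new IntShow())); // type-class route: Int(42)
                    }
                }
                ```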

                Case classes make it easier to manipulate value objects by value, but their physical object identities are still there, waiting to be accidentally used. Instead, Scala could and should have provided actual value types. (Again, please don’t use Java compatibility as an excuse.)
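
                Java records make the same point (a hypothetical Java sketch, not Scala itself): structural equality is generated for you, yet each instance still keeps an observable object identity.

                ```java
                // Records, like Scala case classes, generate structural
                // equals/hashCode, but every instance still carries an
                // observable physical identity.
                record Point(int x, int y) {}

                class IdentityLeak {
                    public static void main(String[] args) {
                        Point a = new Point(1, 2);
                        Point b = new Point(1, 2);
                        System.out.println(a.equals(b)); // true: compared by value
                        System.out.println(a == b);      // false: identity leaks through
                    }
                }
                ```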

                Extractors are inelegant: They hard-code support for a very specific use case into the core language, and they make pattern matching exhaustiveness checking unnecessarily difficult. If you actually want to enhance the expressive power of pattern matching, Haskell’s pattern guards are a superior solution.

                1. 7

                  Though I think we could link some of these features together, Java compatibility is a major design goal of Scala. Ignoring the reasons these features exist makes it easy to call them bad.

                  I also think that Scala follows a very C++-style design philosophy. Throwing in a huge amount of features gives people flexibility so long as they know what they’re doing.

                  As to whether this is a good idea… depends on who you ask ;)

                  1. 3

                    Subclasses, traits and implicits: They all serve overlapping purposes (variations on ad-hoc polymorphism), which suggests they should be merged into a single feature.

                    Classes and traits offer a clean distinction between classes that have initialization and classes that don’t. Having used a language without it, I can say this distinction is essential to practical multiple inheritance; I wish other languages would adopt it.

                    Implicits alone couldn’t offer the same functionality as inheritance. I hope there’s a better design “out there” - something that offers the functionality of both - but I’ve never seen it.

                    Case classes make it easier to manipulate value objects by value, but their physical object identities are still there, waiting to be accidentally used.

                    Where? What’s the difference? I mean sure you could call System.identityHashCode on a case class and get unpleasant behaviour, but you wouldn’t do that by accident.

                    1. 2

                      Classes and traits offer a clean distinction between classes that have initialization and classes that don’t.

                      Why do you need this distinction in the first place? In OCaml, heck, in C++, a class without initialization is just… a class without initialization. Going even further, in Eiffel, all effective classes have creation procedures, it’s just that some classes have empty ones.

                      Having used a language without it, this distinction is essential to having practical multiple inheritance

                      You’re conflating issues here. The linked article describes the unfortunate consequences of Python’s superclass linearization strategy for modularity: embedding one class hierarchy into another breaks the chain of superclass methods reached by repeatedly calling super. But this isn’t specifically related to initialization: it causes problems for normal (non-constructor) method calls as well.

                      Implicits alone couldn’t offer the same functionality as inheritance. I hope there’s a better design “out there” - something that offers the functionality of both - but I’ve never seen it.

                      A good starting point would be dissecting inheritance into multiple features, each of which does one thing and does it well.

                      Where? What’s the difference?

                      You can call eq on Options and Lists. How does this make sense?

                      1. 2

                        The linked article describes the unfortunate consequences of Python’s superclass linearization strategy for modularity: embedding one class hierarchy into another breaks the chain of superclass methods reached by repeatedly calling super. But this isn’t specifically related to initialization: it causes problems for normal (non-constructor) method calls as well.

                        In theory yes. In practice __init__ is where the problem happens, 99.9% of the time. Many languages feel these problems are severe enough to ban multiple inheritance outright; I find the Scala approach strikes the best balance (a class may inherit from multiple classes, but from at most one class that requires initialization), and the class/trait distinction is the simple way to implement that.
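
                        For comparison, Java 8+ enforces a similar balance with interfaces: any number of initialization-free interfaces (via default methods), but at most one superclass that carries a constructor. A made-up sketch:

                        ```java
                        // Interfaces with default methods supply behavior but no
                        // initialization, much like Scala traits. All names invented.
                        interface Logging {
                            default String tag() { return "[log]"; }
                        }

                        interface Auditing extends Logging {
                            default String audit() { return tag() + " audited"; }
                        }

                        // A class with a constructor: the single initialized parent.
                        class Service {
                            final String name;
                            Service(String name) { this.name = name; }
                        }

                        // At most one initialized superclass, any number of
                        // initialization-free interfaces mixed in.
                        class AuditedService extends Service implements Auditing {
                            AuditedService(String name) { super(name); }
                        }

                        class MixIn {
                            public static void main(String[] args) {
                                AuditedService s = new AuditedService("billing");
                                System.out.println(s.name + ": " + s.audit()); // billing: [log] audited
                            }
                        }
                        ```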

                        A good starting point would be dissecting inheritance into multiple features, each of which does one thing and does it well.

                        A starting point isn’t enough - Scala is a production language, not a research language. Choosing a mature approach over a supposedly better but unproven one is not bad design.

                        You can call eq on Options and Lists. How does this make sense?

                        It makes exactly as much sense as calling eq ever does.

                        1. 1

                          It isn’t often that I say C++ makes sense, but, in this particular regard, it does: when an object of a base class is being constructed, no object of the derived class exists yet, so virtual member function calls inside a base class constructor are resolved to the implementation provided by the base class: http://ideone.com/Ytr6xm . Even if you use virtual inheritance: http://ideone.com/zvUbI5 .

                          On the other hand, Java and Scala take the position that, even inside base class constructors, method calls must resolve to the implementation provided by the derived class: http://ideone.com/zv7iOq , http://ideone.com/uw1F43 . This is awkward precisely because it creates the initialization issues you mention - you could be calling a method of a class whose initialization logic hasn’t yet run.

                          To summarize: In C++, the constructor is what creates an object in the first place. In Java and Scala, the constructor is what runs immediately after the object has been created. The latter is an inferior design, because there exists a point in time, between object creation and initialization, in which the object is in a bogus state.
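
                          The Java behavior is easy to reproduce (a minimal sketch; class names are invented): the override runs from the base constructor before the subclass’s field initializers have executed, observing exactly that bogus state.

                          ```java
                          // A base-class constructor calls an overridable method; Java
                          // resolves the call to the subclass override even though the
                          // subclass hasn't initialized yet.
                          class Base {
                              final String seenDuringConstruction;

                              Base() {
                                  // Resolves to Derived.describe(), whose fields aren't set yet.
                                  seenDuringConstruction = describe();
                              }

                              String describe() { return "base"; }
                          }

                          class Derived extends Base {
                              String name = "derived"; // runs only after Base() has finished

                              @Override
                              String describe() { return "name=" + name; }
                          }

                          class ConstructorDispatch {
                              public static void main(String[] args) {
                                  Derived d = new Derived();
                                  System.out.println(d.seenDuringConstruction); // name=null: the bogus state
                                  System.out.println(d.describe());             // name=derived: after init
                              }
                          }
                          ```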

                          1. 1

                            when an object of a base class is being constructed, no object of the derived class exists yet, so virtual member function calls inside a base class constructor are resolved to the implementation provided by the base class: http://ideone.com/Ytr6xm . Even if you use virtual inheritance: http://ideone.com/zvUbI5 .

                            This is very confusing behaviour too. There’s no perfect answer here (except perhaps the checker framework with @Raw); I don’t think I’d call one approach inferior to the other.

                            1. 1

                              An object is a collection of methods that operate on a hidden data structure. There are two important things about a data structure: its invariants and the asymptotic complexity of its operations. Leaving the latter aside, the role of an object constructor is to establish the object’s internal invariants, which all other methods must preserve. Viewed under this light, the behavior of virtual member function calls inside constructors in C++ is the Right Thing ™.

                    2. 1

                      Subclasses, traits and implicits

                      I think lmm gave a good answer already. On top of that, I think that making up the requirement of merging typeclasses with dynamic dispatch is kind of a tall order given that Haskell itself can’t even manage to get type classes working in isolation.

                      Scala could and should have provided actual value types

                      Scala does provide value types. They are completely orthogonal to case classes.

                      If you actually want to enhance the expressive power of pattern matching, Haskell’s pattern guards are a superior solution.

                      This looks like the for-comprehensions Scala had since day one.

                      1. 2

                        merging typeclasses with dynamic dispatch is kind of a tall order given that Haskell itself can’t even manage to get type classes working in isolation.

                        I have no idea what you mean by “get type classes working in isolation”, but I’m pretty sure that, if you create an existential package where the existentially quantified type variable has a type class constraint, the methods of said type class are dynamically dispatched.

                        Scala does provide value types. They are completely orthogonal to case classes.

                        Okay, then the question is - why aren’t Option, List, etc. value types, when they clearly only make sense when used as value types?

                        This looks like the for-comprehensions Scala had since day one.

                        Pattern guards have nothing to do with monads. All a pattern guard does is produce values that can be used in the right-hand side of a pattern matching arm:

                        insert x (t:u:ts) | Just v <- node x t u = v : ts
                        insert x xs                              = leaf x : xs
                        

                        If node x t u evaluates to Nothing, then insert x (t:u:ts) evaluates to leaf x : t : u : ts.

                2. 1

                  (because now you have businesses with a vested interest in the documentation remaining bad)

                  I’ve seen this sentiment a bunch, is it really that common of a thing, or are people just getting angry at documentation and rationalizing it as EvilCompany trying to sell services?

                  E.g. I work at Elastic, and from time to time people complain about our documentation. And sometimes they claim it’s bad on purpose, because we want folks to buy services (these allegations almost always correlate with rageful tweets from people who ignore active attempts at help, fwiw).

                  I can 100% say that’s not the case for us… it’s just a part of the documentation that’s bad, or old and poorly worded. Our docs are in our github repo, we wrote a book and OSS’d it, and we have several full time technical writers on staff (which are distinct from our education/consulting teams). We recently added checks that run code snippets in the docs, and fail the build if you break them. Etc etc.

                  So I wonder if this sentiment is really justified, or if perhaps software just often has crappy documentation in places, entirely unrelated to offering services? Writing good documentation is hard, and good presentation of those docs often spans multiple departments (engineering for technical accuracy, marketing/web for proper integration into the site, infra if it requires special features like online REPL, etc).

                  I dunno, having been on the sharp end of the documentation stick, I can appreciate it isn’t as simple as “your docs suck because you make money on services”. I think people underestimate the work that goes into good documentation. And how quickly good docs turn bad due to bitrot.

                  Note: I know nothing about Scala, so it may really be the case :)

                  1. 2

                    So I wonder if this sentiment is really justified, or if perhaps software just often has crappy documentation in places, entirely unrelated to offering services?

                    I think it’s entirely likely that there’s no actual human from the business who looks at the docs situation and says “well, better not improve those; that would go against the best interests for the company”. That doesn’t mean there aren’t emergent factors at play which subtly incentivize other things over documentation which wouldn’t be there if the revenue was structured a different way. You don’t need ill intentions in order for this sentiment to be justified.

                1. 4

                  Against my better judgment, I decided to start a podcast. The premise is a bi-weekly show that talks about a CS paper, datastructure or algorithm. I’ve recorded a few shows and am now editing them. Spending a lot of time trying to figure out the right format, tweaking audio, setting up a better “studio” in my house, etc.

                  The biggest time-suck so far has been editing down the content. I originally wanted to do deep, technical reviews/crits of papers, just like a real journal club. But after cleaning up the narration and listening to the final audio… deep, hour-long reviews are just too dense to listen to. Without visuals it’s just too hard to follow imo.

                  So I’m going back and editing the episodes down to 30min, with a focus on showcasing the main algo/datastructure and the intuition behind them, without getting bogged down in the math or critiques.

                  Anyhow, it’s been fun. I’ve learned that my inner geek likes audio geekery too :)

                  1. 1

                    It’s a fairly well written article, but it doesn’t go far enough. Real programmers, of course, write in assembly.

                    1. 5

                      I think you mean:

                      Real programmers, of course, write raw binary.

                      1. 4

                        Real programmers write in Verilog! All this silly “software” is just abstraction on top of the real platform: physics!

                      2. 3

                        Ok, dudes, I fess up: Of course, the real programmer is Mel.

                      1. 6

                          I’ve recently reached the same conclusion: most websites need no JavaScript, or very little. Newspapers, forums, video sharing, internal CRUDs. Cutting out the whole API + SPA stuff is often a lot more productive.

                          SPAs are nice for fancy features (live preview, realtime, …) but often not worth the complexity they add compared to the difference they make in the end-product.

                        Now most of my apps are js-free by default and then I pepper it with some light javascript when needed.

                        1. 8

                          This point of view never seems to take geography into account. It might work when you have 30ms latency. But, for example, I’m hosting my application for New Zealand users on Heroku, which offers either US or EU locations for deployment. So there’s a latency of 120-250 ms on every request. A roundtrip to the server for every little thing makes for a bad user experience in this situation.

                          1. 5

                            On the other hand, I’ve encountered plenty of SPAs that do terribly on bad networks. They tend to block/stall until a critical mass of resources has been downloaded, so you get to look at a blank page and spinner for several seconds instead of a progressively loading page. They also tend to download a silly amount of JavaScript (hundreds of KB, sometimes MB). They also get wonky if there are network hiccups and their state falls into some edge case the programmer didn’t envision (because some APIs loaded but not others).

                            Source: I live in rural, upstate NY and have slow DSL. Half the internet is unbearable to use.

                            1. 1

                              Good point. There is no denying that the web is a horrible kludge of a platform, and your examples illustrate that. It’s very hard to get an application to work well. SPAs are, in a sense, a necessary evil. The “SPA platform” (for lack of a better term) wasn’t designed but evolved piece by ill-fitting piece.

                              Regarding edge cases though: I think that is more of a consequence of additional complexity present in SPAs rather than a drawback specific to them. If you move interaction with external APIs to the backend, it’s just as likely that the backend doesn’t handle all the combinations of responses properly.

                            2. 1

                              Here I would just host my app next to the users (bonus point: initial load is faster too). I agree that if you have to take into account a slow network and you can’t use a CDN, then, it’s gonna be custom solutions (local caching via js mostly).

                              1. 7

                                If we constrain the requirements enough (hosted nearby, CRUD only, no real time updates) then we can of course get a class of applications which don’t benefit from an SPA implementation. I’m not at all sure that this class contains “most” applications however. Nothing I’ve worked on in the last 5 years was in this class, for example.

                                I guess I just don’t like generalisations.

                                1. 1

                                  You can always host a backend-only app in multiple regions and you don’t have to restrict yourself to CRUD too.

                                  And if you have to, you can always have some pages use javascript for realtime.

                            3. 4

                              I find SPAs interesting, but mostly only if they go to the other extreme: only JS, no backend, to the extent that you can save the webpage offline and run it, because you have the entire app’s source code and required resources. If a backend is going to be obligatorily involved anyway, though…

                            1. 4

                              If you’re into caching algorithms, the (patented) ARC cache and (non-patented) CAR cache algos are good reads.

                              The PostgreSQL-ARC saga is an interesting read too.

                              1. 2

                                I’m Zachary Tong, and go by ‘polyfractal’ pretty much everywhere. I’ve been working at Elastic for the last 3+ years. I originally joined to write the Elasticsearch-PHP client (and still maintain it, alas), but have since migrated more to core Java development. I’m particularly interested in time-series, aggregations and related scalability problems. I also co-authored Elasticsearch: The Definitive Guide.

                                I work remotely, like most of the devs at Elastic. Since joining the company, I’ve lived in: Boston, Charleston, John’s Island, Plattsburgh (current location).

                                I suspect I’m like most folks here: I like to tinker with a lot of hobbies and side projects.

                                Non-tech hobbies:

                                Tech:

                                1. 1

                                  The sewing, wood working, leather working, and casting stuff is all really impressive and alien to me. How did you get started?

                                  1. 3

                                    A little bit of childhood experience and an obsessive desire to DIY things (known character flaw :) )

                                     My dad taught me basic woodworking as a kid, so I knew enough to get started and picked up more from Youtube/blogs/reddit. Ditto for sewing: my mom taught me how to sew on a machine as a kid. So after a hike with a really heavy backpack, I found an online community that DIYs ultra-light equipment, ordered some cloth and just stumbled my way through the project.

                                    Casting was a plain silly idea, really should have just bought a ring like a normal person. But something about DIY'ing my band sounded fun/romantic/good story, and there were plenty of guides online. The internet is the great enabler, alas :)

                                1. 14

                                  My company (Elastic) is almost entirely remote on the engineering side of the house, so this may not necessarily apply.

                                  We’ve worked hard to have a strong culture of async communication to deal with timezones. Primary communication should go through hipchat/Slack, email, github tickets. Inside Slack, we encourage public channels over private communication, so more people can see the discussion and learn from it. If you need to talk to someone face-to-face, we hop on a zoom conference call. And if it is important or pertains to a lot of people, we always record these meetings and try to write up an email with notes afterwards.

                                  Once you’ve spread over all the timezones, someone is missing something at all times… so it’s important to try and record/take notes as much as possible.

                                   We have weekly engineering meetings, and smaller team meetings, over Zoom. These help keep everyone feeling connected to real humans. If you happen to work in one of our offices (we have a few, mainly for sales/marketing but some devs like to work from an office), we try to have everyone join the calls, rather than sitting in a conference room sharing one screen. Conference rooms inevitably lead to side discussions that are impossible for remote workers to hear, let alone join. So even if there are 10 people in one office, they all join from their personal laptops, sitting at their desks.

                                  We have an always-on video call that you can join to just hang out. Some people use it for impromptu discussions, others just like the background noise. Many don’t use it at all.

                                  It takes a lot of work to keep a remote company running smoothly. You have to be conscious about communication and recording/sharing for timezones that aren’t around.

                                  With all that said, we’re lucky that the main body of employees are remote, and only a few work from offices. I don’t think the other way around would work: you really need the majority of people remote so that the communication lanes stay open. If the remote workers are in the minority, you’ll have an uphill battle trying to stay included. It’s just human nature to drift towards in-person communication.

                                  1. 1

                                     Thanks for these! I had actually just set up a Tiny Tiny RSS instance for the few engineering/academic blogs that I follow; this list will help fill it out. Skimmed some of the various blogs and they all look fascinating.

                                    1. 9

                                      Yeah this seems great and all, but I’m on a team that is all across the globe, and dedicated “office hours” would never work in all the timezones that we have people in. We actually have people with 0 overlap in their 8 hour workdays.

                                      I use email / internal forums / slack / IM / phone but I really like email, and it’s almost for the same reason as this article. I let email come in all day, but I dedicate an hour or so a day where I am dedicated to answering that email. Seems to work for me the same way that this article is suggesting. I can always set “Do Not Disturb” on my IM / Slack and force people to have to email or call me if it’s really that urgent.

                                      1. 14

                                        It’s weird how people think if something has been around for 10 years, it suddenly sucks and needs to be reinvented. I think email is great, and anything else is just glorified email. I can see why people would want to replace IRC though. I also think people forget there is a difference between Instant Messaging and Offline Messaging.

                                        1. 8

                                          anything else is just glorified email

                                          glorified centralized email, which makes it terrible right off the bat with no additional discussion required.

                                          1. [Comment removed by author]

                                            1. 4

                                              I don’t need to use anything Google has written to send and receive email to and from gmail accounts (and indeed, I don’t). Similarly, I can use any git toolset (not that I’m aware of one other than the usual one) to talk to github; from the source control perspective, it’s just another remote repository.

                                              1. 2

                                                 I wish that were the case. I’ve stayed away for a lot of reasons, but every day my inbox gets another helping of Google Calendar, Google Drive, Google Hangouts, and Google Groups - especially since Google Chat blocked chat with non-Google accounts. If I refuse all of it, I am the one last annoying person who objects to the standard and wants special treatment.

                                                Google owns email.

                                              2. 0

                                                Yes, those further illustrate the problem. Your point?

                                              3. 3

                                                People can run their own email servers, most don’t for 2 reasons: technical knowledge, and reliability.

                                                Also, why are you not complaining about centralized IRC, or centralized slack and friends?

                                                1. 2

                                                  Also, why are you not complaining about centralized IRC, or centralized slack and friends?

                                                  Em, I think whybboyd is? The response is to “anything else is just glorified email”, aka tools like IRC and slack. So whybboyd is saying “anything else is just glorified centralized email”.

                                                  So I think you’re in agreement :)

                                              4. 5

                                                It’s weird how people think if something has been around for 10 years, it suddenly sucks and needs to be reinvented.

                                                You can’t just let people use established standard protocols. This way lie interoperability and a functioning Internet.

                                            1. 3

                                               That’s great! I myself bought two of these machines off eBay for work a couple weeks ago. I don’t even have a use for the second one, but they were so cheap I couldn’t help it.

                                              1. 1

                                                Awesome, hope your (eventual) build is as fun as mine was! :)

                                                 The units were very pleasant to work with, and performance has been great so far. The only major downsides that I can see are the non-standard rack sizes and the lack of much documentation / BIOS updates.

                                              1. 7

                                                Eh, I’m not sure if this is nitpicking or not, but the example isn’t targeting the OS. It’s targeting tools that are common on the OS: ssh, xargs, sort, uniq etc. In that light, those tools aren’t really any different from any other tool, including custom code and whole platforms like hadoop.

                                                That’s not to say there isn’t validity in using simple tools when they work. You don’t always need a hammer. But it’s not really “targeting the OS”, it’s targeting “simple tools commonly found on the OS”. But maybe I’m just splitting hairs :)

                                                1. 6

                                                  And the debate “what is the operating system” continues. For some, it’s just the kernel. For others, it’s the kernel + userland. If it’s the kernel + userland, then POSIX is a valid “OS”, as it contains most of the utilities you question (ssh being the difference).

                                                1. 15

                                                  For work/fun: I built a 4-node Open Compute cluster to use as a desktop and home lab. I had X budget to spend on a new laptop/desktop…most people choose a Macbook Pro or a standard desktop. I decided to build a 4-node cluster using ebay'ed parts. It turned out pretty well! Pictures: https://imgur.com/a/c2SD4

                                                  For work, this week I’m mostly updating the Definitive Guide. Authoring a book is, as I’ve discovered, a never-ending anchor. We need to stop changing the software so I can take a break from updating the book! :angryfist:

                                                  Otherwise, the rest of my free time is going to wedding planning/logistics, so not much in the way of fun projects for a while. I’ve put my various hobbies on hold, otherwise I wouldn’t be helpful at all and my SO would be quite upset :)

                                                  1. 1

                                                    The cluster looks amazing, what’re you planning on using it for?

                                                    1. 3

                                                      Thanks! One of the nodes will be my day-to-day desktop. I’ve been using a Macbook Air for the last few years. It was fine for a while, but as I’ve moved into more intensive projects, it just doesn’t cut it. Our integration tests take like 40 minutes :)

                                                      The full cluster will be powered up when I need to run large Elasticsearch benchmarks/tests. I packed them full of memory so I could either run 4 nodes with a ton of resources, or spin up 12 nodes w/ 32gb each. I usually use a beefy Hetzner server for this sort of thing, but can decommission that now.

                                                    2. 1

                                                      That’s awesome! For a second, I thought the transformer was a car battery charger. :)

                                                      It never ceases to amaze me that people would blow their hardware budget on Macbooks when you can get so much more interesting/powerful stuff at that price.

                                                      My last company managed to double acquisition costs because they wouldn’t just let the devs build our own damn machines. :(

                                                      1. 1

                                                        What notebook do you recommend as a replacement for the Macbook Pro? I am checking Dell XPS 13 and Thinkpad X1 Carbon.

                                                        1. 3

                                                          Amusingly, for the price of a Macbook Pro I was able to build the 4-node OCP cluster and get a refurbished i5 XPS 13 touch model. :)

                                                          Admittedly, everything was used/refurbished, but still.

                                                          That said, the XPS 13 has been…problematic so far. The touchpad is really jumpy (need to tweak some Chrome and touchpad settings), mine refuses to sleep when the lid is closed so now it’s set to hibernate, it makes a high-pitched coil whine at times, etc. I’ve heard it doesn’t run Linux well either, so I’m attempting to see how much Win10 bothers me.

                                                          The hardware is really slick, it’s just the software/firmware that’s been a bit touch-and-go so far.

                                                          1. 3

                                                            I use the Thinkpad X1 Carbon with the highest tier i7 they offered. It’s 2015’s model aka “3rd Gen”, so Broadwell-based, rather than Skylake-based, which might have issues [0].

                                                            I run Linux Mint 17.3, and it works great. Battery life is better than any Linux laptop I’ve had, with the exception of an old Netbook that ran Gentoo. Anywhere from 3ish hours under heavy load to 7 hours on light browsing and text editing. Build quality of the X1C hardware has impressed me so far as well. Only downside I can pick at is the screen at full brightness isn’t quite as bright as I sometimes hope.

                                                            I’ve gotten to play with the newest XPS 13 (i.e. used two different ones for a day each), but not in Linux. For what it’s worth I didn’t have any of the issues that polyfractal describes. In my experience, Thinkpads end up getting their Linux hardware issues hammered out sooner-or-later. I have heard similar about the Dell lines that sell with Ubuntu pre-loaded, but I have no direct experience in the matter.

                                                            If you’re considering either of them as a replacement for a MacBook Pro, I assume you must mean the MBP 13-inch, and I could at least recommend my 3rd Gen X1C. Neither of them would stand up to a 15-inch MBP, and I haven’t bothered to investigate which laptops might.

                                                            [0] http://mjg59.dreamwidth.org/41713.html

                                                            1. 1

                                                              So, my laptop is actually a kinda-chunky Lenovo Ideapad Y510p. It’s big, has a decent graphics card, full keyboard, and most importantly a matte screen. I’m pretty blind, so high-resolution displays don’t really do much for me sadly. I swapped it into using a Samsung SSD recently, and that’s made it an even happier camper. Battery life sans cable is about an hour and a half or two hours, less if I’m running it at full tilt boogie.

                                                              I tend to spend a lot of time around desktops and workstations, and the laptop is for the occasional blog writing at coffee shops or vacation gaming. I don’t really understand using littler keyboards, using touchpads, or wanting something svelte. I believe that if you’re on a machine, you use the best and most machine you can for your task, and ignore aesthetic considerations within reason.

                                                              That said, I make no claims that my approach is the only one–it’s just what’s worked for me. :)

                                                            2. 1

                                                              Thanks! I agree, when I started looking around at specs of the MBP vs what I could get in other notebooks… and then full desktops… and then used server equipment… it just seemed ridiculous to get an MBP.

                                                              But I’m pretty new to the whole OSX ecosystem (~2 years), so perhaps I’m just not quite as embedded as other folks :)

                                                            3. 1

                                                              Did you think about adding GPUs or did you run out of budget for that?

                                                              1. 1

                                                                I added an old Radeon HD6350 to the “desktop” node, which is powering a lower-res monitor. GPUs are a bit problematic in these units for a few reasons. First is compatibility: other people have reported issues with various newer GPUs, presumably because the BIOS is about 6 years old. Second is physical limitations: the plastic baffle that directs airflow limits the length of cards that can be placed in the riser. If you pull the baffle you could probably add a longer card, but then you’d need to make sure you have a unit stacked on top (and keep an eye on temps). And lastly, weight: the riser card is pretty wobbly, and I imagine a heavier card would start to bend the riser downwards precariously.

                                                                I’m planning on making a “tower box rack” for these soonish, and am going to see if I can get a PCI cable riser, then bolt the GPU to the case. Might make it easier to play around with better cards.

                                                            1. -8

                                                              Seriously? “A Tale of Two —-” right after I publish my post “A Tale of Two Programmers”?

                                                              1. 9

                                                                A Tale of Two Cities came out 157 years ago, so yeah.

                                                                1. 10

                                                                  May I suggest a few other Dickensian names instead to alleviate this embarrassing collision:

                                                                  • DevOps Copperfield
                                                                  • Great Expectations (of 100% uptime)
                                                                  • Bleak House Deployment
                                                                  • The Old Curiosity DevshOps
                                                                  1. 8

                                                                    Two Tales Considered Harmful.

                                                                    1. 2

                                                                      A Tale of Two Hard Problems Considered Harmful

                                                                2. 5

                                                                  Do we have a limit like that? Sorry, I didn’t know.

                                                                  1. 2

                                                                    This seems to have been sorted out, but for the record, no, there’s no such rule. The thing about cliches is that they get used a lot; things like this are going to happen.

                                                                    1. 1

                                                                      Thank you. I will keep that in mind.

                                                                      1. -2

                                                                        I wouldn’t say this has been sorted out. It’s disturbing when an entire community casually disregards flagrant disrespect for another person’s hard work. Let alone the troubling lack of creativity that compels a person to copy the title of another post and then downvote the original.

                                                                    2. 2

                                                                      Hello. I wrote the article, and @szalansky posted it on my behalf. I actually chose the title and wrote the article long before I discovered lobste.rs, so it was just a coincidence.

                                                                      1. -8

                                                                        It seems highly suspicious.

                                                                        1. 5

                                                                          FYI you are wrong, and on the internet of all places!

                                                                          1. -8

                                                                            The odds of you publishing a post with the same title scheme as mine, on the same day, a few hours after I did, are HIGHLY unlikely. Thanks for copying my name, you shill.

                                                                            1. 4

                                                                              If you have 23 people in the same room, there’s a 50% probability that at least two of them share a birthday. Sufficient numbers and basic statistics cause all kinds of “suspicious” behavior.

                                                                              The internet is a big place. Collisions occur. Also, no one cares that your articles are titled similarly, and perhaps more importantly, the title “scheme” is neither new, original, nor overly creative.
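For the record, the birthday figure is easy to check. A quick sketch (Rust, purely illustrative):

```rust
// Probability that at least two of `n` people share a birthday,
// assuming 365 equally likely birthdays (the classic approximation).
fn birthday_collision_prob(n: u32) -> f64 {
    let mut p_all_distinct = 1.0_f64;
    for i in 0..n {
        // Each new person must miss all previously "taken" birthdays.
        p_all_distinct *= (365.0 - i as f64) / 365.0;
    }
    1.0 - p_all_distinct
}

fn main() {
    let p = birthday_collision_prob(23);
    // With 23 people, the collision probability just crosses 50%.
    assert!(p > 0.5 && p < 0.51);
    println!("P(shared birthday, n=23) = {:.4}", p);
}
```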

                                                                              1. -2

                                                                                Uh.. your analogy doesn’t quite hold up. If 23 people are in a room and asked in sequence to say the first word on their minds, there is a good chance they will be influenced by the person who speaks before them. I posted my article a few hours (if that) before this one. The article (the only one, might I add) had a title extremely similar to mine. That, if nothing else, is highly suspicious.

                                                                                1. 2

                                                                                  My point was that, given enough people/items/occurrences/events/whatever, you’re bound to have “suspicious” behavior which is attributable to random chance. It’s just statistics.

                                                                                  Besides, if you want to get defensive about naming, “A Tale of Two Programmers”, by Jacques Mattheij predates your article by a good five years. I think you should apologize to Jacques for using his title.

                                                                                  Edit: Or any of these “A Tale of Two Programmers” for that matter:

                                                                                  (Which is obviously silly. Because no one cares about titles. Just like no one cares about your title, or the OP’s title. Why am I still responding? I don’t know.)

                                                                          2. 4

                                                                            It is, however I genuinely enjoyed this article and thought others might enjoy it too.

                                                                      1. 2

                                                                        We use asciidoc at Elastic and have been (mostly) happy with it. We use a single book-per-project, but the entire documentation is built as a single book to allow inter-project linking. What’s nice is that the docs will fail to build if inter-project links break (e.g. another team re-arranges their documentation or accidentally changes an anchor). Basically eliminates link-rot.

                                                                        Asciidoc itself is fairly powerful, and you can express most practical layouts that you may want. Tables can be fairly janky, so I personally try to stay away from that. Honestly, my major complaint is that the build errors are often very cryptic. For example, if you use the wrong heading “size”, the build may not break until later in the book when it encounters the next header and barfs. Which can be tricky to diagnose back to the root problem.

                                                                        All in all, git + asciidoc has been a good experience for us in terms of maintainability, versioning and flexibility.

                                                                        1. 11

                                                                          So what I took away from this article was: people underestimate the tricks employed by the systems they use. Or perhaps the point is that the “modern” developer just doesn’t need to care anymore?

                                                                          For example, the raw JSON for each host might be 400 bytes, but the data that is actually “indexed” may well be close to the “old timer” solution.

                                                                          Under the covers Elasticsearch and Lucene use some of the exact tricks that are mentioned in the article. Terms are tracked in inverted indices and referenced by ordinal, so low-cardinality fields (["up", "down"]) are highly compressed. Postings are sorted and compressed using frame-of-reference encoding. Search is done heuristically by leap-frogging the sparsest iterator. Doc values use offset/delta/table encoding. Filters are encoded via Roaring Bitmaps and evaluated with standard bitwise logic. Etc etc.

                                                                          I’m sure this applies to all “modern” systems including relational DBs, other NoSQL, etc. These systems use “old timer” methods so that you don’t have to.

                                                                          And if/when your data ever grows past a single host’s memory, I imagine the “old timer” methods start to look suspiciously like reinventing the flavor-of-the-month distributed system :)
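To make one of those tricks concrete, here’s a toy sketch of delta (frame-of-reference-style) encoding of a sorted postings list. This is a simplification for illustration, not Lucene’s actual implementation:

```rust
// Encode a sorted postings list as gaps between consecutive doc IDs.
// Sorted postings produce small gaps, which bit-pack very compactly.
fn delta_encode(postings: &[u32]) -> Vec<u32> {
    let mut out = Vec::with_capacity(postings.len());
    let mut prev = 0u32;
    for &doc in postings {
        out.push(doc - prev); // store the gap, not the absolute doc ID
        prev = doc;
    }
    out
}

// Decode by re-accumulating the gaps back into absolute doc IDs.
fn delta_decode(deltas: &[u32]) -> Vec<u32> {
    let mut out = Vec::with_capacity(deltas.len());
    let mut acc = 0u32;
    for &d in deltas {
        acc += d;
        out.push(acc);
    }
    out
}

fn main() {
    let postings = vec![100, 105, 107, 250, 255];
    let deltas = delta_encode(&postings);
    println!("gaps: {:?}", deltas); // small values instead of large doc IDs
    assert_eq!(delta_decode(&deltas), postings);
}
```

Because the postings are sorted, the gaps are small integers, which is what makes the subsequent bit-packing so effective.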

                                                                          1. 5

                                                                            Heh… a side point that the article didn’t touch on, but I once heard an argument that there’s no need to care about floating-point imprecisions, because everyone has already done a lot of work to make sure it works out. Specifically, this was an attempt to justify storing currencies as floating-point.

                                                                            I mean, I suppose it wouldn’t have gone that badly. I eventually determined that the programmer in question believed the IEEE formats were decimal.

                                                                            (Never have an argument like this with a coworker unless you really want their ire. Live and learn.)
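For anyone curious why that belief is dangerous: IEEE 754 floats are binary, not decimal, so common decimal amounts have no exact representation. A minimal sketch (Rust here, purely for illustration):

```rust
// Binary floating point cannot represent 0.1 or 0.01 exactly.
fn sum_cents_as_f64(n: u32) -> f64 {
    // Add $0.01 n times as an f64; the representation error accumulates.
    (0..n).fold(0.0, |acc, _| acc + 0.01)
}

fn main() {
    let sum = 0.1_f64 + 0.2_f64;
    assert!(sum != 0.3); // the binary sum is not exactly 0.3
    println!("0.1 + 0.2 = {:.17}", sum);

    // 100 cents summed as floats is not exactly one dollar...
    assert!(sum_cents_as_f64(100) != 1.0);

    // ...but integer cents are exact, which is why currency wants
    // integers (or a decimal type), not binary floats.
    let cents: i64 = (0..100).map(|_| 1_i64).sum();
    assert_eq!(cents, 100);
}
```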

                                                                            1. 7

                                                                              Oh dear, that could have ended very poorly indeed. :) You can escape from knowing how data structures work, but you can never escape from floating points and numerical instability!

                                                                          1. 3

                                                                            Semi-related, is there a framework somewhere for building/testing your own Go AI against? I’ve briefly looked before, but all I can find are Go servers that look like they were last updated in 1998. I imagine I’m just looking in the wrong place, since it seems like Go is a hot topic to train AI against. Does everyone just roll their own game engine/server to test against?

                                                                            1. 5

                                                                              You most likely want to integrate with KGS, which is a server where mostly humans play each other.

                                                                              1. 1

                                                                                Oooh, I feel silly now. I skimmed KGS, but it seemed to be for humans only. I dug deeper after your recommendation and it seems there is a computer-go room with more details. Thanks!

                                                                                1. 1

                                                                                  No problem! Getting oriented in a community is nontrivial. :) Good luck with your project, if you decide to spend time on it!

                                                                              2. 5

                                                                                Besides playing on KGS, which is great for getting games against humans, here are a few other resources.

                                                                                CGOS has traditionally been the place to get a lot of test games against other computer opponents, sadly since the original developer (Don Dailey) passed away it has been less stable and hence less used. Recently Hiroshi Yamashita set it up on another server and it seems to be getting some traffic at (http://www.yss-aya.com/cgos/).

                                                                                Nick Wedd also holds a monthly computer tournament on KGS.

                                                                                Finally the computer go community generally hangs out on the computer go mailing list (it can also be accessed through gmane).

                                                                                1. 1

                                                                                  Awesome, thanks for this!

                                                                                2. 2

                                                                                  There are some open-source bots; Pachi is one of the strongest. GoGui offers TwoGTP for automating test games.

                                                                                1. 2

                                                                                  The hindquarter system at the end is simultaneously very cool and the stuff of nightmares!

                                                                                  Update: also the stuff of nightmares: having to clean the darn thing.

                                                                                  1. 6

                                                                                    Nah, cleaning is probably a thirty minute job with a pressure washer. :)

                                                                                    No reason to expect a baaaah-d time. It’d be the goat-to solution. Ewe would hardly have to work at all, basically mutton to worry about.

                                                                                    1. 2

                                                                                      There’s so much to go over though. I’d constantly worry I’d missed something, and the failure mode is awful. Rather than people not being able to comment on a blog, or make an online purchase, you could give lots of people food poisoning.

                                                                                      Of course, the cleaning may be (at least partly) automated too… that could be another cool video!

                                                                                      1. 1

                                                                                        I thought it lacked a cleaning step; however, the x-ray process has an added benefit: x-rays do kill some bacteria.

                                                                                        I would also assume that the room is cold; at 5 degrees C, bacterial growth is fairly limited.

                                                                                    2. 3

                                                                                      Glad I’m not the only one that found the last bit both super cool and more than a little disconcerting. Something about the speed at which it moves, the precision and the nearly-human-like motions planted it firmly in the uncanny valley. Plus the jigsaw probably didn’t help :P

                                                                                      1. 2

                                                                                        I may (probably) be wrong, but I believe 83(b) can only be used if your company allows you to exercise early before your options have vested. To quote from that link (emphasis from the article):

                                                                                        […] Section 83(b) election generally cannot be made with respect to the receipt of a private company stock option. You must exercise the option first and acquire the stock before you can make a Section 83(b) election, and you would only make a Section 83(b) election in that instance if you exercised the option and acquired unvested stock (if the stock acquired on exercise of the stock option was vested, there would be no reason to make a Section 83(b) election).

                                                                                        Not all companies allow you to exercise early, so if you have to wait out your vesting cycle the 83(b) isn’t helpful and you’re back to square one with the AMT (or equivalent in your country).

                                                                                      1. 6

                                                                                        Remember too that learning to manage memory safely in C/C++ is much harder than learning Rust.

                                                                                         I completely disagree with this statement. It’s too subjective.

                                                                                         Both learning to manage memory safely in C and grasping an ownership abstraction in Rust are hard. Very hard. And it depends on your background. The fact is, we don’t have that much time to invest in learning a completely different abstraction.

                                                                                        there is no compiler checking up on you in C/C++ to make sure your memory management is correct.

                                                                                         That is what tools like valgrind are for!

                                                                                        1. 11

                                                                                          As someone who only vaguely knows C++, I like Rust because it simply won’t let me compile something horribly broken. The difficulty / learning curve may ultimately be the same, but the timeline of feedback is very different.

                                                                                           Rust forces me to confront my lack of knowledge immediately, or it simply won’t compile. Yes, this is hard. And yes, this can be very frustrating. But at least I won’t churn out some piece of code that superficially looks ok but is a ticking time bomb.

                                                                                          In contrast, C/C++ will generally let me compile something that is horribly broken as long as it satisfies the language semantics. It’s only later that I’ll discover my dumb mistake, usually after I’ve moved on to different sections of code.

                                                                                          As a newbie to manual memory management, that’s huge for me. I want to know right now that I messed something up, not hours/days later when things mysteriously start crashing or misbehaving. I don’t want to blissfully continue coding, thinking I’m doing things right, when in reality it’s all a house of cards waiting to tumble down.

                                                                                           That is what tools like valgrind are for!

                                                                                           Eh, that’s like saying it’s ok that a knife cuts you because there are bandaids you can apply to your skin. It’d be better if the knife simply couldn’t cut skin (or your skin was impervious to knife cuts). Don’t get me wrong, tools like valgrind are great! But I’d prefer if the language was a bit more proactive in protecting my (dumb) self instead of relying on secondary tools.
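To illustrate the kind of feedback I mean, here’s a small sketch; the commented-out lines are exactly the ones rustc refuses to compile:

```rust
// Ownership moves are checked at compile time: once a value is moved,
// the old binding can no longer be used.
fn consume(v: Vec<i32>) -> usize {
    v.len() // `consume` takes ownership of `v`
}

fn main() {
    let data = vec![1, 2, 3];
    let n = consume(data); // ownership of `data` moves into `consume`
    // println!("{:?}", data); // <- would not compile: use after move
    assert_eq!(n, 3);

    // Borrow rules are checked too: no mutation while a shared borrow is live.
    let mut v = vec![1, 2, 3];
    let first = &v[0];
    // v.push(4); // <- would not compile while `first` is still borrowed
    assert_eq!(*first, 1);
    v.push(4); // fine once the borrow has ended
    assert_eq!(v.len(), 4);
}
```

A C++ equivalent of either commented-out line would compile fine and fail (or silently misbehave) only at runtime.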

                                                                                          1. 2

                                                                                             Eh, that’s like saying it’s ok that a knife cuts you because there are bandaids you can apply to your skin. It’d be better if the knife simply couldn’t cut skin (or your skin was impervious to knife cuts).

                                                                                             I love your metaphor. But what if we don’t play with the knife in the first place? For me, it’s like using valgrind as part of a continuous integration pipeline. No time bomb running in production. (Again, maybe it’s not as simple as running rustc: if the build is OK, then you believe there’s no time bomb running in production.)

                                                                                             I don’t mean to be contra on Rust. I’m learning Rust too, though. And I agree on the timeline of feedback in Rust. It’s like iterative programming on steroids.

                                                                                            1. 2

                                                                                               Eh, that’s like saying it’s ok that a knife cuts you because there are bandaids you can apply to your skin. It’d be better if the knife simply couldn’t cut skin (or your skin was impervious to knife cuts). Don’t get me wrong, tools like valgrind are great! But I’d prefer if the language was a bit more proactive in protecting my (dumb) self instead of relying on secondary tools.

                                                                                              This is why I trust only Luke Cage to code in C safely.

                                                                                              1. 2

                                                                                                Here is something that might be of interest then:

                                                                                                http://www.tedunangst.com/flak/post/heartbleed-in-rust

                                                                                                1. 3

                                                                                                  Yep, I’ve read that…and I agree with it. I don’t think anyone is claiming that Rust (or other languages that strive to be safer) will protect you from everything. You can still live-lock yourself, reuse buffers, not validate input, call FFI with bad parameters, botch your unsafety, etc etc. It’s not a panacea.

                                                                                                  But Rust does protect you from certain classes of bugs that C/C++ does not. I’ll take some over nothing any day :)