1. 10

    I think it’s just the bee’s knees. At this point I’m irritated using anything besides MVS. In practice, it does exactly what I want: dependencies get updated on a decent schedule (and I can always force them to update myself), and everything has a layer of predictability that is lacking in other systems. It’s too bad that it took so much drama to get to this new optimum, but I’m glad Go pushed through with it, and I hope more systems adopt MVS soon.

    Honestly, it makes me wonder what other local optima need to be re-evaluated with a fresh perspective.
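    Part of why MVS is so predictable is that the algorithm itself is tiny. Here’s a toy sketch of it (the module names, versions, and requirement graph are all made up for illustration; real MVS reads go.mod files and compares semantic versions):

    ```go
    package main

    import (
    	"fmt"
    	"sort"
    	"strings"
    )

    // Hypothetical requirement graph: each "module@version" lists the
    // module versions it requires. Real MVS builds this from go.mod files.
    var requires = map[string][]string{
    	"main":  {"A@1.1", "B@1.2"},
    	"A@1.1": {"B@1.1"},
    	"B@1.1": nil,
    	"B@1.2": {"C@1.3"},
    	"C@1.3": nil,
    }

    // buildList sketches Minimal Version Selection: walk the requirement
    // graph from the root and keep, per module, the highest version that
    // anything requires - never anything newer. Versions here compare
    // lexically, which suffices for this toy data; real MVS uses semver
    // ordering.
    func buildList(root string) map[string]string {
    	selected := map[string]string{}
    	var visit func(node string)
    	visit = func(node string) {
    		for _, dep := range requires[node] {
    			parts := strings.SplitN(dep, "@", 2)
    			mod, ver := parts[0], parts[1]
    			if ver > selected[mod] {
    				selected[mod] = ver
    				visit(dep)
    			}
    		}
    	}
    	visit(root)
    	return selected
    }

    func main() {
    	list := buildList("main")
    	mods := make([]string, 0, len(list))
    	for m := range list {
    		mods = append(mods, m)
    	}
    	sort.Strings(mods)
    	for _, m := range mods {
    		fmt.Printf("%s %s\n", m, list[m])
    	}
    }
    ```

    Note that B resolves to 1.2, the minimum that satisfies every requirement, even if some newer B existed in a registry. That refusal to chase “latest” is where the predictability comes from.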

    1. 2

      This is big because it looked like GopherJS was abandoned and dead for a while.

      1. 12

        A while back I bought two of these USB Thinkpad keyboards, using the old (good) keyboard layout: https://www.newegg.com/lenovo-thinkpad-usb-wired/p/N82E16823218006

        I have used the crap out of them. They are the absolute best.

        Internally it’s just a USB controller attached to the same keyboard that shipped in older Thinkpads, so I’ve already fixed up at least one keyboard with parts from eBay.

        Despite things like Vimium or i3 or other ways to reduce mouse usage, most folks still need a mouse from time to time. Reducing the travel time from your keyboard to your mouse seems really high value to me, and I’m at a loss as to why most of these custom or fancy keyboard people don’t focus on having a nearby mouse of some kind. I’m not the OP of this thread, but I highly empathize: https://www.reddit.com/r/MechanicalKeyboards/comments/626sga/how_about_trackpoints/

        These Thinkpad trackpoint keyboards are perfect. The mouse is right there.

        1. 10

          I love my shinobi tex, a mechanical homage to the thinkpad design: https://tex.com.tw/products/shinobi

          1. 4

            Just got mine yesterday. Such a pleasure to have some key travel again, and to feel my fingers match the keys. It’s really nice to alternate with the laptop keyboard (X1E Gen1), and it’s an incentive to work more at the desk with a big screen. For me the trackpoint on the Shinobi works much more precisely and easily. I was expecting a little more pressure resistance from the keys, but in the end I think it’s quite comfortable. It’s really nice too that the keycaps have a deeper mold. It was expensive, but I’m definitely happy with this purchase.

            1. 4

              oh my gosh i’ve never seen this before, this is amazing!

              1. 4

                Woah! This is the first keyboard I’ve seen in years that tempts me…

                1. 3

                  How are the printed key legends holding up? I got mine a week ago and I’m already noticing L-Ctrl, Esc, and frequent letters fading. It’s not a big deal since I don’t really look, but I’m surprised.

                  1. 3

                    I’ve been using mine daily for ~9 months, and while it’s true that some letters started fading very quickly, they seem to have reached a “plateau”. The discolouring has definitely slowed its pace, or the keycaps would be blank by now.

                    1. 2

                      Same here. Fading on frequently used keys. I’ve been using it since last November.

                  2. 5

                    Thank you for your comment. I feel the same way about trackpoints, and your comment made me order a ThinkPad USB keyboard :)

                    I really like the newer chiclet design, so I’ve picked a more recent model. Luckily they seem to be designed with a similar concept: reuse of the existing laptop keyboard design (see https://dontai.com/wp/2018/09/06/thinkpad-wired-usb-keyboard-with-trackpoint-0b47190-disassembly-and-cleaning/ for disassembly). The number of key rows doesn’t really bother me, and despite all I’ve tried, I don’t feel comfortable on keyboards with mechanical switches. Too many hours on a ThinkPad, I think.

                    1. 4

                      i am very happy lenovo is still making these keyboards, even if it’s the new layout

                    2. 4

                      I have one of these and I love it! I’m a sucker for the trackpoint and I love the pre-chiclet key design. It’s super portable too - I can easily throw it in my backpack with my laptop if I’m going to be out of the (home) office all day.

                      It’s a little sad that this version seems to be so unavailable these days :(

                      1. 4

                        I’d recommend ThinkPad TrackPoint Keyboard II because it is wireless - via Bluetooth or Wireless Nano USB Dongle.

                        1. 5

                          I own the wired version of the first generation, and the micro-USB socket is absolute garbage. Two out of three keyboards lose the USB connection when the cable is moved slightly. But this problem can be fixed pretty easily by disassembling the keyboard, bending the socket back to its normal shape, and then adding a large solder blob to the socket case so that it can’t bend as easily anymore. I fixed both keyboards reliably with this procedure.

                      1. 2

                        This seems to have quite a different purpose from other high level audio/music programming languages, like TidalCycles and SuperCollider. Those are used in the “LiveCoding” scene — coding the music during a live performance — and are fundamentally concurrent, letting you create multiple sequences and loops that run on their own until stopped or modified. Handel, on the other hand, is linear. I guess it would be more useful for, as the title says, composing. It still seems a bit limited since it only supports one sequence of notes/chords, akin to using only one hand on a piano.

                        1. 2

                          it appears to support multiple concurrent sequences. did you see the examples? https://github.com/ddj231/Handel/tree/master/Examples

                        1. 5

                          But, why? There’s already a kqueue/epoll-based async networking package for Go. It’s called “net” and it’s in the standard library. Even better, it uses Go to make it appear as if everything is synchronous.

                          A bit tongue-in-cheek, but is the runtime so bad at scheduling that this is necessary?
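                          To make the “appears synchronous” point concrete, here’s a minimal sketch in that standard-library style (the echo protocol and helper name are mine, purely for illustration): every call below looks blocking, and the runtime parks goroutines on epoll/kqueue while they wait.

                          ```go
                          package main

                          import (
                          	"bufio"
                          	"fmt"
                          	"net"
                          )

                          // echoRoundTrip starts a tiny echo server and sends it one line.
                          // Nothing here mentions the poller: the net package hides it.
                          func echoRoundTrip(msg string) (string, error) {
                          	ln, err := net.Listen("tcp", "127.0.0.1:0")
                          	if err != nil {
                          		return "", err
                          	}
                          	defer ln.Close()

                          	go func() {
                          		conn, err := ln.Accept()
                          		if err != nil {
                          			return
                          		}
                          		defer conn.Close()
                          		// One goroutine per connection, written in plain
                          		// blocking style.
                          		line, _ := bufio.NewReader(conn).ReadString('\n')
                          		fmt.Fprintf(conn, "echo: %s", line)
                          	}()

                          	conn, err := net.Dial("tcp", ln.Addr().String())
                          	if err != nil {
                          		return "", err
                          	}
                          	defer conn.Close()
                          	fmt.Fprintf(conn, "%s\n", msg)
                          	return bufio.NewReader(conn).ReadString('\n')
                          }

                          func main() {
                          	reply, err := echoRoundTrip("hello")
                          	if err != nil {
                          		panic(err)
                          	}
                          	fmt.Print(reply) // echo: hello
                          }
                          ```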

                          1. 2

                            The Go runtime performance will be fine for most applications.

                            The README says a bit more about who the target audience is.

                            The goal of this project is to create a server framework for Go that performs on par with Redis and Haproxy for packet handling. It was built to be the foundation for Tile38 and a future L7 proxy for Go.

                            1. 1

                              Yes, but that leaves the interesting question unanswered: Where is it doing better? Is it about reducing the goroutine stack usage?

                              1. 1

                                At the end of the readme, there’s a benchmark.

                                1. 1

                                  Hm. So, from the benchmarks, it looks like they’re comparing to a goroutine spawn per IO operation. I should play around a bit to see how it compares if goroutines do more IO before exiting, or get reused.

                          1. 2

                            I can’t believe I didn’t think to post this earlier, but holy crap, get a bidet. :) A bidet attachment for your toilet is one of the greatest home-improvement uses of $30 and 5 minutes (easy installation!) I can think of.

                            1. 8

                              I bought a rowing machine. It turns out Reddit is absolutely uniform in its advice that

                              1. rowing is one of the better aerobic exercises you can do in that it’s low impact and works a surprisingly large degree of your muscles, and
                              2. more surprisingly, the only answer for what rowing machine to get is a “Concept 2” brand. i’ve never seen such a uniform product opinion from everyone about anything. try searching reddit for “which rowing machine should i get?”. Amazon (which is sold out) has >5k reviews @ 5 stars. want a cheaper rowing machine? “buy a used concept 2, or just save up”, they say.

                              so I got a Concept 2 rowing machine. Unlike many other types of exercise, I don’t hate it, which means I’ve been much more likely to actually do it.

                              1. 4

                                so I got a Concept 2 rowing machine. Unlike many other types of exercise, I don’t hate it, which means I’ve been much more likely to actually do it.

                                I was going to comment about exercise filling me with energy, and giving me a cool hobby that keeps me away from the keyboard and gets me stronger and healthier in the process.

                                For me it was calisthenics. I stumbled upon the YouTube and IG communities with all the kids doing cool stuff like muscle-ups and front/back levers. Working towards an exciting skill turned workouts from being a chore to being a reward and one of the best parts of the day. I train every day now - why wouldn’t I?

                                Thanks for the Concept 2 rowing machine recommendation! I’m looking into rowing as a running alternative this winter.

                                1. 3

                                  my wife is a runner and former cross country coach, but has kind of worn out her knees. she says the rowing machine is better than any elliptical she’s used and at this point she uses it more than me. it is definitely a good indoor choice

                                  1. 3

                                    I think elliptical and rowing machines are both excellent, probably the best two out of all the machines, as they both exercise much wider sets of muscle groups than most of the others. I can understand the elliptical not being great for people with bad knees, although it’s a lot better than running in that regard as there’s no impact. One thing to watch out for with rowing machines is the lower and mid back - it’s really important to pay attention to posture and not curve your back too much while rowing, as it can put a lot of strain on your spine.

                                2. 3

                                  I’d say the same! I row in front of my TV year-round. Perhaps the only cardio I’ve really enjoyed. It helps me sleep longer and eat more intentionally. I’m not suddenly an Olympic god pulling sub-7s, but I feel better. I’m getting better!

                                  I’ve logged almost 3 Mm (3,000 km) on my Model D, and I’m on the waiting list for the Dynamic rower. There are other rowers, like the RowPerfect3 and the Oartec DX, but I too lean towards C2. I love how they maintain the workout servers, charts, metrics, & CSV downloads.

                                  1. 2

                                    Concept 2 are very good for the money. I decided against them in favour of a WaterRower because it uses water resistance instead of gears, which means there’s no messing about with levels or anything, you just pull harder if you want more resistance, which seems much more natural to me. And it never ever clicks or clunks and you don’t have the weird fan noise effect. I also think it looks a lot nicer in wood (totally subjective of course), and you can store it upright against a wall so it takes up a lot less space. They are more expensive, admittedly, but lots of the parts have a lifetime guarantee and they’re really good about replacing them. Had mine for over 5 years with barely any problems - just one of the seat rollers broke after maybe 3 years, so I got in touch and told them, and they sent me a pack of 4 new ones, no questions asked. Good company (which I’m not affiliated with!) and a really really nice piece of kit.

                                    1. 1

                                      I used to row in high school and have nearly completed a basement refurb and reorganization project in part to make space for an erg (aka rower). It’s amazing that such an uncomplicated machine can afford such an excellent, all-body, joint-friendly aerobic workout. The Concept II has been around forever and is excellent.

                                    1. 9

                                      Interesting tangential thought:

                                      I saw this article and the URL and was like, “oh, it’s the guy who hates Go again!” But when I looked at the actual website, this author writes a significant amount about Go, the vast majority of it not negative. It seems that the users of Lobsters are only interested in posting and upvoting anti-Go articles, and it is we who are biased.

                                      1. 4

                                        I’m the author of the linked-to article, and as you sort of noticed I’m pretty fond of Go (and use it fairly frequently). If I wasn’t, I wouldn’t write about it much or at all. I suspect that part of the way my writing about Go has come out the way it has is that I think it’s a lot easier to write about problems (including mistakes that you can make) than about things that are good and just work.

                                        (We have one core program in our fileserver environment that’s written in Go, for example, but it’s pretty boring; it sits there and just works, and Go made writing it straightforward, with simple patterns.)

                                        1. 3

                                          I’m quite fond of Go myself, and I enjoy reading your articles even when I thought you only talked about the problems! 😂

                                          I think negative language news (especially regarding “newer” languages) has more success on this site, so there’s a secondary filtering happening as well.

                                        2. 0

                                          who’s the guy who hates Go?

                                          1. 6

                                            This retrospective is amazing. So much information and feedback about the experience around xi. Thanks for sharing the link!

                                          1. 12

                                            i’ve been really digging lwn lately

                                            1. 6

                                              Same. Just subscribed!

                                              1. 6

                                                oh you know I just realized all the posts I’ve loved recently have been @benhoyt. Good job, Ben!

                                                1. 14

                                                  You’re welcome. Do subscribe – that’s how the (very small) team makes their living and how the site keeps going.

                                            1. 3

                                              This is great! We wrote https://godoc.org/github.com/spacemonkeygo/tlshowdy (which takes a slightly different approach) to make it take even less code than 105 lines, if that helps anyone. See the Peek method (which will return the ClientHello and a new Conn with the handshake bytes restarted)
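                                              The underlying trick in both approaches, as far as I can tell, is the same: read the ClientHello bytes off the connection, then hand the TLS stack a Conn that replays them before the rest of the stream. A sketch of that technique (prefixConn and replay are my names for illustration, not tlshowdy’s API):

                                              ```go
                                              package main

                                              import (
                                              	"bytes"
                                              	"fmt"
                                              	"io"
                                              	"net"
                                              )

                                              // prefixConn is a net.Conn whose reads are served from a prefix
                                              // (the peeked bytes, e.g. a TLS ClientHello) before the
                                              // underlying stream.
                                              type prefixConn struct {
                                              	net.Conn
                                              	r io.Reader
                                              }

                                              func (c *prefixConn) Read(p []byte) (int, error) { return c.r.Read(p) }

                                              // replay wraps conn so the already-consumed peeked bytes are
                                              // read again first, then the rest of the stream, once each.
                                              func replay(conn net.Conn, peeked []byte) net.Conn {
                                              	return &prefixConn{Conn: conn, r: io.MultiReader(bytes.NewReader(peeked), conn)}
                                              }

                                              func main() {
                                              	server, client := net.Pipe()
                                              	go func() {
                                              		client.Write([]byte("hello"))
                                              		client.Close()
                                              	}()

                                              	peek := make([]byte, 2)
                                              	io.ReadFull(server, peek) // "peek" at the first two bytes

                                              	// Later readers see the full stream, peeked bytes included.
                                              	rest, _ := io.ReadAll(replay(server, peek))
                                              	fmt.Printf("%s\n", rest) // hello
                                              }
                                              ```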

                                              1. 12

                                                This blog post exactly describes the workflow of Gerrit, except that Gerrit handles most of this workflow for you, especially in the case where PART1 or something needs updates in response to code review.

                                                GitHub adopting the Gerrit model would be amazing. In the meantime, boy oh boy this is such an advertisement for Gerrit.

                                                1. 3

                                                  So, what does this mean for Signal?

                                                  1. 1

                                                    Lots of discussion on this at the moment. It means that Secure Value Recovery could be used by a malicious Signal server to exfiltrate user data while attesting to some benign version of the code. That user data could be used to “recover” someone else’s Signal account.

                                                  1. 12

                                                    That’s interesting, however OpenRA is already great if you want to play Red Alert :)

                                                    1. 11

                                                      OpenRA is indeed super, amazingly great. One comment from EA said they are releasing this code under the GPL so it will be compatible with OpenRA, so I assume the expectation is that OpenRA can become even better with this.

                                                      1. 4

                                                        I’m pretty sure the original code will help with some edge cases, but still… it could have been remarkable if they had released that code when OpenRA needed it for real. Releasing it when OpenRA is already a better engine overall sounds more like “since assets are all we can sell now…”. Even then, id Software used to open source its engines before they became retrogaming engines.

                                                        1. 3

                                                          so I assume the expectation is OpenRA can become even better with this.

                                                          Unless there were secret ancient programming techniques locked away, I doubt this would be the case.

                                                          1. 11

                                                            Perhaps “better” means “a more exact rendition of the original”, and yeah, I do think the original code could help there.

                                                            1. 3

                                                              It can help understand some game behaviours.

                                                            2. 1

                                                              Will the assets be there too?

                                                          1. 9

                                                            I was really interested in IPFS a few years ago, but was ultimately disappointed that there seemed to be no passive way to host content. I’d like to have had the option to say something along the lines of “I’m going to donate 5 GB for hosting IPFS data, and the software will take care of the rest”.

                                                            My understanding was that one has to explicitly mark a file as something you’d like to serve too, and only then will it really be persistent. Unless this gets integrated into a browser-like bookmark system, I have the feeling that most content will be lost. Can anyone who has been following their development tell me if they have improved on this situation?

                                                            1. 3

                                                              I thought they were planning to use a cryptocurrency (“Filecoin”) to incentivize hosting. I’m not really sure how that works though. I guess you “mine” Filecoins by hosting other people’s files, and then spend Filecoins to get other people to host your files.

                                                              1. 2

                                                                This is a hard problem to solve, because you want to prevent people from flooding all hosters; so there has to be either some kind of PoW or money involved. And with money involved, there’s now an incentive for hosters to misbehave, so you have to deal with them, and this is hard; there are some failed projects that tried to address it.

                                                                IPFS’ authors’ solution to this is Filecoin which, afaik, they had in mind since the beginning of IPFS, but it’s not complete yet.

                                                                1. 2

                                                                  My understanding was that, one has to explicitly mark some file as something you’d like to serve too,

                                                                  Sort of… my recollection is that when you run an IPFS node (which is just another peer on the network), you can host content on IPFS via your node, or you can pull content from the network through your node. If you publish content to your node, the content will always be available as long as your node is online. If another node on the network fetches your content, it will only be cached on the other node for some arbitrary length of time. So the only way to host something permanently on IPFS is to either run a node yourself or arrange for someone else’s node to keep your content in their cache (probably by paying them). It’s a novel protocol with interesting technology but from a practical standpoint, doesn’t seem to have much benefit over the traditional Internet in terms of content publishing and distribution, except for the fact that everything can be massively (and securely) cached.

                                                                  There are networks where you hand over a certain amount of disk space to the network and are then supposedly able to store your content (distributed, replicated) on other nodes around the Internet. But IPFS isn’t one of those.

                                                                  1. 1

                                                                    There are networks where you hand over a certain amount of disk space to the network and are then supposedly able to store your content (distributed, replicated) on other nodes around the Internet.

                                                                    What are some of them? Is Storj one of those?

                                                                    1. 3

                                                                      Freenet is one. You set aside an amount of disk space and encrypted chunks of files will be stored on your node. Another difference from IPFS is that when you add content to Freenet it pushes it out to other nodes immediately, so you can turn your node off and the content remains in the network through the other nodes.

                                                                      1. 2

                                                                        VP Eng of Storj here! Yes, Storj is (kinda) one of them, with money as an intermediary. Without getting into details, if you give data to Storj, as long as you have enough STORJ token escrowed (or a credit card on file), you and your computers could walk away and the network will keep your data alive. You can earn STORJ tokens by sharing your hard drive space.

                                                                        The user experience actually mimics AWS much more than you’d guess for a decentralized cryptocurrency storage product. Feel free to email me (jt@storj.io) if some lobste.rs community members want some free storage to try it out: https://tardigrade.io/satellites/

                                                                        1. 1

                                                                          Friend, I’ve been following your work for ages and have had no real incentive to try it. As a distributed systems nerd, I love what you’ve come up with. The thing which worries me is this bit:

                                                                          decentralized cryptocurrency storage product.

                                                                          I’m actually really worried about the cryptocurrency part of this, since it imbues an otherwise-interesting product with a high degree of sketchiness. Considering that cryptocurrency puts you in the same boat as Bitcoin (and the now-defunct art project Ponzicoin), why should I rethink things? Eager to learn more facts in this case. Thanks for taking the time to comment in the first place!

                                                                          1. 4

                                                                            Hi!

                                                                            I guess there’s a couple of things you might be saying here, and I’m not sure which, so I’ll respond to all of them!

                                                                            On the technical side:

                                                                            One thing that separates Storj (v3) from Sia, Maidsafe, Filecoin, etc, is that there really is no blockchain element whatsoever in the actual storage platform itself. The whitepaper I linked above is much more akin to a straight distributed systems pedigree sans blockchain than you’d imagine. Cryptocurrency is not used in the object storage hotpath at all (which I continue to maintain would be latency madness) - it’s only used for the economic system of background settlement. The architecture of the storage platform itself would continue to work fine (albeit less conveniently) if we swapped cryptocurrency for live goats.

                                                                             That said, it’s hard to subdivide goats in a way that retains many of the valuable properties of live goats. I think live goats make for a good example of why we went with cryptocurrency for the economic side of storage node operation - it’s really much more convenient to automate.

                                                                            As a user, though, our primary “Satellite” nodes will absolutely just take credit cards. If you look up “Tardigrade Cloud Storage”, you will be able to sign up and use the platform without learning one thing about cryptocurrency. In fact, that’s the very reason for the dual brands (tardigrade.io vs storj.io)

                                                                            On the adoption side:

                                                                            At a past cloud storage company I worked at before AWS existed, we spent a long time trying to convince companies it was okay to back up their most sensitive data offsite. It was a challenge! Now everyone takes it for granted. I think we are in a similar position at Storj, except now the challenge is decentralization and cryptocurrency.

                                                                            On the legal/compliance side:

                                                                             Yeah, cryptocurrency definitely has the feeling of a wild west saloon in both some good ways and bad. To that end, Storj has made a significant investment in corporate governance. There’s definitely a lot of bad or shady actors in the ecosystem, and it’s painfully obvious that by choosing cryptocurrency we exist within that ecosystem and are often judged by the actions of neighbors. We’re not only doing everything we can to follow existing regulations with cryptocurrency tokens, we’re doing our best to follow the laws we think the puck could move towards, and follow those non-existent laws as well. Not that it makes a difference to you if you’re averse to the ecosystem in general, but Storj has been cited as an example of how to deal with cryptocurrency compliance the right way. There’s definitely a lot of uncertainty in the ecosystem, but our legal and compliance team are some of the best in the business, and we’re making sure to not only walk on the right side of the line, but stay far away from lines entirely.

                                                                            Without going into details I admit that’s a bit vague.

                                                                            Anyway, given the length of my response you can tell your point is something I think a lot about too. I think the cryptocurrency ecosystem desperately needs a complete shaking out of unscrupulous folks, and it seems like that’s about as unlikely to happen as a complete shaking out of unscrupulous folks from tons of other money-adjacent industries, but perhaps the bar doesn’t have to be raised very far to make things better.

                                                                            1. 2

                                                                              The lack of a blockchain is a selling point. Thanks for taking the time to respond. I’ll check out the whitepaper ASAP!

                                                                              1. 1

                                                                                if we swapped cryptocurrency for live goats.

                                                                                … I kinda want to live in this world

                                                                      2. 1

                                                                        You might want to check out Arweave.org.

                                                                        1. 1

                                                                          I have the feeling that most content will be lost

                                                                          Only if the person hosting it turns off their server? IPFS isn’t a storage system like Freenet, but a protocol that allows you to fetch data from anywhere it is stored on the network (for CDN-like distribution, bandwidth sharing, and resistance to blocking). The person making the content available is still expected to store/serve it somewhere themselves, just like with the normal web.

                                                                          1. 1

                                                                            If you want to donate some disk space you can start following some of the clusters here: https://collab.ipfscluster.io .

                                                                          1. 1

                                                                            ¯_(ツ)_/¯ https://www.jtolio.com/ (using some old hugo release with my own theme)

                                                                            1. 2

                                                                              You might enjoy https://shru.gg/r for shrug copypasta (you dropped an arm)

                                                                              1. 2

                                                                                lol! i love that it escapes it for you

                                                                                1. 2

                                                                                  That’s what I made it for! Could never remember the sequence

                                                                              2. 1

                                                                              I really like your theme! That said, in my opinion the hyperlink underlines are a bit jarring - I’d get rid of them. Same goes for the 120+ characters per line, which are a bit hard to chew through.

                                                                                1. 1

                                                                                  yeah i’ve been thinking about narrowing it again. screens are so big though! maybe i can do something where when i float images out they’re allowed to go outside of the reading width to make it look less empty

                                                                              1. 4

                                                                                (I posted this to the HN thread but was late and missed the window of getting insight, so sorry for the x-post from HN)

                                                                                I’m very puzzled by the consensus group load balancing section. The article emphasizes that correctness of the Raft algorithm was super important (to the point that they skipped clear optimizations!!11), but then immediately follows up with (as far as I can tell) a load-balancer wrapper approach for rebalancing and scaling. My “this feels like consensus bug city” detectors immediately went off. Consensus algorithms (including Raft and Paxos) are notoriously picky and hard to get right around cluster membership changes. If you try to end-run around this by sharding to different clusters, with a simple traffic director choosing which cluster, how does the traffic director reach consensus with the clusters about which cluster the traffic should go to? You haven’t solved any consensus problem, you’ve just moved it to your load balancers.

                                                                                A solution for this problem (to agree on which cluster the data is owned by) is 2-phase commit on top of the consensus clusters. It didn’t appear from the diagrams that that’s what they did here, so either I missed something, or this wouldn’t pass a Jepsen test.

                                                                                Did I miss something?

                                                                                (If you did build 2PC on top of these consensus clusters, you’d have built a significant portion of Spanner’s architecture inside of a secure enclave. That’s hilarious.)

                                                                                1. 3

                                                                                  I once bought a Sharp Zaurus SL-C1000 to polish source code en route. The screen was good enough, but the keyboard wasn’t.

                                                                                  1. 1

                                                                                    I miss my Zaurus a lot. What a great little device.

                                                                                  1. 4

                                                                                    I am still undecided if async is something nice, or some sort of infectious disease that fragments code bases.

                                                                                    (Though leaning towards nice)

                                                                                    1. 11

                                                                                      I’m firmly in the infection camp. http://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/ remains my go-to explanation for why.
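
                                                                                      To make the “coloring” concrete, here’s a minimal sketch using only Python’s stdlib asyncio (the function names are just illustrative): an async function can’t simply be called from sync code; the caller has to either become async itself or spin up an event loop.

```python
import asyncio

# A "red" (async) function: its body only runs when awaited
# inside async code or driven by an event loop.
async def fetch_value():
    await asyncio.sleep(0)  # stand-in for real async I/O
    return 42

# A "blue" (sync) function can't just call fetch_value() and get 42;
# calling it only produces a coroutine object. To extract the result,
# the sync caller must start an event loop -- the infection point.
def sync_caller():
    return asyncio.run(fetch_value())

print(sync_caller())  # → 42
```

                                                                                      That `asyncio.run` boundary is exactly the split the linked post describes: every sync function between your entry point and the async call either has to pay that cost or turn async itself.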

                                                                                      1. 3

                                                                                        Python is a language that had green threads via gevent and monkey patching, a surprisingly nice solution that lets you have it both ways… Though they still added an async keyword haha.

                                                                                        1. 3

                                                                                          IMO the async keyword feels really hacky, but I get why: they have to differentiate to maintain compatibility.

                                                                                          The idea of gevent/monkey patching seems like a better approach. Ideally, the language runtime exposes an interface that low-level scheduling/IO libraries hook into, much like the Rust approach.

                                                                                          1. 2

                                                                                            gevent doesn’t really work with C modules (of which there are a lot), which means you still have a split codebase where you have to worry about what is blocking and what isn’t.

                                                                                            Contrast that with Go, as described in the link above, which just assumes all C calls may block and transparently moves them onto a thread pool for you, so you don’t have to worry about it.
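
                                                                                            For contrast, here’s a small stdlib-asyncio sketch of the manual version of what that comment says Go does automatically: the blocking function here is a hypothetical stand-in for a C extension, and the programmer has to remember to route it to a thread pool explicitly or it stalls the whole event loop.

```python
import asyncio
import time

def blocking_work():
    # stand-in for a C extension or any call that gevent/asyncio
    # can't patch; calling it directly would block the event loop
    time.sleep(0.05)
    return "done"

async def main():
    # asyncio makes you shunt blocking calls to a thread pool by hand;
    # Go's runtime does the equivalent transparently for cgo calls
    return await asyncio.to_thread(blocking_work)

print(asyncio.run(main()))  # prints "done"
```

                                                                                            Forgetting the `asyncio.to_thread` wrapper is exactly the “which calls block?” bookkeeping that the split codebase forces on you.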

                                                                                          2. 2

                                                                                            Same, but Rust seems like it has to go this route due to its unique design philosophy.

                                                                                            If this were any other language I’d argue that the runtime should provide it transparently and let users get at the details if they wish.

                                                                                          3. 4

                                                                                            In Rust specifically it feels non-native; something people are importing from more “managed” languages.

                                                                                            I find one std::thread per resource (and mpsc channels) rock solid, and at the right abstraction level for the Rust programs I write, so personally, I won’t be partaking in async.

                                                                                            1. 4

                                                                                              Threading is the 95% solution; it will almost always be just fine. Async is for the 5% of the time when a system really needs minimum overhead for I/O.

                                                                                              Really, I find that Rust is by default so fast that I seldom have to worry about performance. Even doing things The Dumb And Simple Way is often more than fast enough.