Threads for alicebob

  1. 1

    If I remember correctly, Go’s original design goal was to not have any generics. Maybe it was something to do with the way C++ did templates leaving a bad taste in everyone’s mouth? I don’t remember, but I’ve used generics and have never been in a situation where doing without them wasn’t preferable, or at the very least no worse. I seem to only hear about generics being misused, or about cases that absolutely need them.

    Maybe we should have generics in case they’re absolutely needed, knowing that in 99% of use cases you don’t need them and you’re just being clever? Or am I being daft?

    1. 1

      and you’re just being clever

      Yes, but I find not trying to be clever one of the hardest things to do in programming. It’s so very tempting.

      1. 1

        How would you, say, make a decent multi-thread-safe hash map without generics? Go’s sync.Map has serious performance problems because the only data structures which can be generic are the built-in implementations of maps, channels and arrays. Everything else has to use runtime polymorphism with indirection through interface{}.
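
        For illustration, here’s a minimal sketch of the kind of typed, mutex-guarded map that generics make possible (Go 1.18+ syntax; the safemap/Map/Store/Load names are hypothetical, and this is not how sync.Map is implemented) - keys and values keep their concrete types, so there is no indirection through interface{}:

        // Hypothetical sketch: a concurrency-safe map with typed keys and values,
        // avoiding the interface{} boxing that sync.Map forces on callers.
        package safemap

        import "sync"

        type Map[K comparable, V any] struct {
            mu sync.RWMutex
            m  map[K]V
        }

        // New returns an empty map ready for use.
        func New[K comparable, V any]() *Map[K, V] {
            return &Map[K, V]{m: make(map[K]V)}
        }

        // Store inserts or replaces the value for key k.
        func (s *Map[K, V]) Store(k K, v V) {
            s.mu.Lock()
            defer s.mu.Unlock()
            s.m[k] = v
        }

        // Load returns the value for key k and whether it was present.
        func (s *Map[K, V]) Load(k K) (V, bool) {
            s.mu.RLock()
            defer s.mu.RUnlock()
            v, ok := s.m[k]
            return v, ok
        }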

      1. 2

        well done on the domainname, got a chuckle out of that. Not currently looking for nix CI, but I’m happy people are working on that.

        1. 1

          Thanks!

        1. 24

          Compared to the mess I’ve seen with reflect, these are really crimes on the level of jaywalking.

          1. 1

            I’ve got one of the Tuxedos with a usable resolution (3200x1800). Battery life is… fine (>4h), speed is fine. It’s ugly, but it’s also easy to open up and replace things (I replaced the battery once already).

            1. 14

              Overall, this is a well researched and detailed article, but the tone comes across as “this doesn’t monomorphize 100% and therefore Go generics are bad and slow” - which, as a prevailing sentiment, is simply an incomplete analysis. The Go team was obviously aware of the tradeoffs, so it seems unfair in many ways.

              One key thing not discussed in this article is generic containers where the element type was previously interface{} and no methods are ever called on that interface. In this extremely common use case, Go’s generics are likely to be as fast as manually monomorphized code.
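
              As a sketch of what that looks like (a hypothetical Stack type, not code from the article), the container only stores and returns its elements, so nothing is ever dispatched through the element type:

              // Hypothetical sketch: before generics this would hold interface{} and box
              // every element; with a type parameter the elements stay unboxed.
              package container

              type Stack[T any] struct{ items []T }

              // Push appends v to the top of the stack.
              func (s *Stack[T]) Push(v T) { s.items = append(s.items, v) }

              // Pop removes and returns the top element, reporting whether one existed.
              func (s *Stack[T]) Pop() (T, bool) {
                  var zero T
                  if len(s.items) == 0 {
                      return zero, false
                  }
                  v := s.items[len(s.items)-1]
                  s.items = s.items[:len(s.items)-1]
                  return v, true
              }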

              Another key use case is for general code that need not be performance-critical, where reflection may have been used previously. In these cases, generics are likely to be strictly faster than reflection as well (potentially modulo some icache issues for megamorphic call sites).

              Finally, this design allows for future compiler enhancements - including additional inlining of indirect/interface calls!

              As an aside, if you were doing semi-automated monomorphization before with text templating, you now have a much richer and more robust foundation for such a toolchain. That is, you can use Go’s syntax/parser and type-checker, then provide a code generator that spits out manually monomorphized Go code. If nobody has done this yet, I’m sure it will happen soon, as it’s quite straightforward.
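
              A rough sketch of a starting point for that, using only the standard library’s go/parser and go/ast (the input file name is a placeholder; a real tool would also run the go/types checker and then print specialized copies of each generic declaration):

              // Sketch: list the functions in one file that declare type parameters.
              package main

              import (
                  "fmt"
                  "go/ast"
                  "go/parser"
                  "go/token"
                  "log"
              )

              func main() {
                  fset := token.NewFileSet()
                  // "input.go" is a placeholder path to the generic code being processed.
                  f, err := parser.ParseFile(fset, "input.go", nil, parser.SkipObjectResolution)
                  if err != nil {
                      log.Fatal(err)
                  }
                  for _, decl := range f.Decls {
                      fn, ok := decl.(*ast.FuncDecl)
                      if !ok || fn.Type.TypeParams == nil {
                          continue
                      }
                      // A monomorphizer would substitute concrete types here and print a
                      // specialized copy of fn for each requested instantiation.
                      fmt.Printf("generic func %s (%d type parameter groups)\n",
                          fn.Name.Name, len(fn.Type.TypeParams.List))
                  }
              }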

              1. 9

                I didn’t get that tone, especially with the conclusion of the article. The author encourages folks to use generics in certain cases, shows cases where they do get optimized well, and is hopeful for a future where they get either full monomorphization and/or for the optimization heuristics to get better.

                To me this seemed like a very fair article, even if they did miss the case that you mentioned.

                1. 3

                  One key thing not discussed in this article is generic containers where the element type was previously interface{} and no methods are ever called on that interface. In this extremely common use case, Go’s generics are likely to be as fast as manually monomorphized code.

                  The article mentions that byteseq is fast. This is just a special case of that: the vtable indirection can’t slow you down if you never dispatch a method. :-)
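
                  Roughly what that byteseq pattern looks like (a sketch, not the article’s exact code) - the constraint is a pure type set with no methods, so there is nothing to dispatch through an itab:

                  // Sketch: a constraint over string-like types; no methods, only a type set.
                  package byteseqdemo

                  type byteseq interface{ ~string | ~[]byte }

                  // countByte works on string, []byte, and named types based on them,
                  // without boxing and without any method calls on the elements.
                  func countByte[T byteseq](s T, c byte) int {
                      n := 0
                      for i := 0; i < len(s); i++ {
                          if s[i] == c {
                              n++
                          }
                      }
                      return n
                  }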

                  That is, you can use Go’s syntax/parser and type-checker, then provide a code generator that spits out manually monomorphized Go code. If nobody has done this yet, I’m sure it will happen soon, as it’s quite straightforward.

                  I was looking into this last night. I think you can still use the “go2go” tool from the design prototyping of generics, but it’s no longer being maintained and will probably become subtly incompatible soon if it isn’t already.

                  1. 0

                    It’s hard to take it seriously as anything other than the Go team continuing to hate generics and trying to do everything they can do to discourage people from using them.

                    The fact that there are people here talking about (afaict) continuing to use old Go code generators to support generic code without an absurd memory hit demonstrates that Go’s generics have not achieved the most basic of performance goals.

                    1. 21

                      It’s hard to take it seriously as anything other than the Go team continuing to hate generics and trying to do everything they can do to discourage people from using them.

                      sigh It’s hard to take you seriously with this comment. You might have different opinions/preferences than the Go team, but to assume that they are trying to sabotage themselves is ridiculous.

                      Go’s generics have not achieved the most basic of performance goals

                      I’ve written and deployed three major Go systems – one of which processes tens of petabytes of video per week – and I can count the number of times monomorphisation was necessary to achieve our performance goals on one hand. Generally, I copy/paste/tweak the < 100 lines of relevant code and move on with my work. Performance is not the only motivation for generics.

                      I’ve also written a fair bit of C++ in my life & have also had the experience where I had to disable monomorphization to avoid blowing the instruction cache. To say nothing of compile times.

                      You don’t like Go. That’s fine, but maybe don’t shit on the people who are working hard to create something useful for the people who do like it.

                      1. 8

                        Generally, I [do something simple] and move on with my work.

                        That summarizes my Go experience in the last decade. I miss this in basically every other language now.

                        Also the generics turned out very nice imho, I’m impressed with the balance they managed to strike in the design.

                        1. 6

                          Also, this is… clearly a compiler heuristic that can be tightened or loosened in future releases. They just chose “all pointers are the same” in order to ship quickly.

                          1. 2

                            OTOH, no one can say they shipped generics “quickly”. Even Java did it quicker, though not better.

                            1. 1

                              It only took 11 years after this was posted: https://research.swtch.com/generic :-)

                          2. 0

                            The Go team has stated that they do not like generics. They only added them now because everyone working with Go was continuously frustrated by, and complaining about, the lack of generics.

                            Given that background, I believe it is reasonable to assume that the Go team did not consider competitive generics to be a significant goal.

                            Worrying about compile time is something compiler developers should do, but given the option between faster build time for a few developers, vs faster run time for huge numbers of end users, the latter is the correct call.

                            I obviously can’t speak to what your projects are, but I would consider repeatedly copy/pasting essentially the same code to be a problem.

                            I don’t like Go, but I also didn’t complain about the language. I complained about an implementation of generics with questionable performance tradeoffs, from a group that has historically vigorously argued against generics.

                            1. 6

                              The Go team has stated that they do not like generics.

                              Do you have a source for that claim? All I remember is a very early statement, that also has been on the website.

                              Looked it up. This has been in there since at least 2010[1].

                              Why does Go not have generic types?

                              Generics may well be added at some point. We don’t feel an urgency for them, although we understand some programmers do.

                              [1] https://web.archive.org/web/20101123010932/http://golang.org/doc/go_faq.html

                              1. 2

                                You’re right in that I can’t point to a single quote.

                                I can however point at the last decade of Go devs talking about supporting generics, which has pretty consistently taken the path of “generics make the language harder” (despite them being present in a bunch of builtin types anyway?), “generics make the code bigger”, “generics make compilation slower”. Your above quote even says that they recognize that it’s developers outside the core Go team that see the benefit of generics.

                                This is not me beating up on Go, nor is it me questioning the competence of the core team. This is me saying that in light of their historical reticence to support generics, this comes across as being an intentionally minimal viable implementation created primarily to appease the horde of devs that want the feature, rather than to provide a good level of performance.

                                1. 7

                                  in light of their historical reticence to support generics

                                  Ian Lance Taylor, one of the largest and earliest Go contributors, was advocating or at least suggesting generics for quite some time. He had been exploring design ideas since at least 2010, with a more serious design in 2013. I think this contradicts the “giving in and appeasing the masses” sentiment you’re projecting onto the Go team.

                                  Come to think of it, he also wrote a very long rebuttal to a commenter on the golang-nuts mailing list who was essentially saying what you’re saying. I’ll see if I can find it. Edit: Here it is.

                                  1. 3

                                    this comes across as being an intentionally minimal viable implementation created primarily to appease the horde of devs that want the feature, rather than to provide a good level of performance.

                                    That sounds like a very unconvincing argument to me. In my opinion they did a very good job with the accepted generics proposal, because it keeps the language simple while also helping to avoid a lot of boilerplate, especially in libraries. Also, the Go team has pointed out several points of the generics implementation they want to improve on in upcoming releases. Why should they do that if they had just implemented generics only to please “the horde of devs that want the feature”?

                                    1. 2

                                      I will say that compared to early generics proposals, the final design is quite a bit more Go-like. It’s unfortunate that the type constraints between the [] can get quite long, but if you ignore the type parameters, the functions look just like normal Go functions.

                                2. 4

                                  The Go team has stated that they do not like generics.

                                  I don’t think that is true at all. They stated (1) that they did not like either end of the tradeoffs with erasure (characterized by Java) causing slow programs and full specialization (characterized by C++ templates) causing slow compile times. And (2) that some designs are needlessly complex.

                                  They spent years refining the design - even collaborating with Academia - to minimize added type system complexity and choose a balanced performance/compile-time implementation tradeoff that met their objectives.

                                  given the option between faster build time for a few developers, vs faster run time for huge numbers of end users, the latter is the correct call.

                                  I couldn’t possibly disagree more.

                                  I would consider repeatedly copy/pasting essentially the same code to be a problem.

                                  I can’t think of any lower-severity problem affecting any of my projects.

                                  Anyway, I won’t reply to any further messages in this thread.

                                  1. 1

                                    They only added them now because everyone working with Go was continuously frustrated by, and complaining about, the lack of generics.

                                    I would phrase it that everyone not working with Go was complaining about the lack of generics, and the Google marketing team assigned to Go (Steve Francia and Carmen Ando being the most prominent) are working hard to sell Go to the enterprise, so it was a priority to clear that bullet point up.

                                    People working with Go generally just bite the bullet and use code generation if necessary, but mostly just ignored the lack of generics.

                                3. 4

                                  Self-sabotage just for the sake of generics doesn’t make sense because this release slowed down build times for everyone not just people using generics: https://github.com/golang/go/issues/49569.

                              1. 3

                                  You’ll have to remember to bypass the test cache if you modify data in an external test file: go test -count=1

                                1. 5

                                  Go’s test cache takes external files opened by your tests into account, via https://pkg.go.dev/internal/testlog

                                  1. 2

                                    I did not know that. That’s an impressive detail. Thanks!

                                1. 7

                                  I sometimes think about this in terms of “full stack” vs frontend / backend web dev. Obviously, truly “full stack” development is impossible unless you build your own hardware, OS, network stack, programming language, etc. etc. So “full stack” always at its best means “fuller stack” as compared to some alternative. I think the alternative it should be compared to is the alternative of saying “that’s not my problem.” “Oh, the button is too small to tap on mobile? That’s not my problem, I’m backend.” “Oh, the page has too slow of a time to first byte? That’s not my problem, I’m front end.” Etc. Full stack is just an attitude that if you care about the product, everything is your problem. :-)

                                  1. 13

                                    “Oh, the button is too small to tap on mobile? That’s not my problem, I’m backend.”

                                      “I would love to fix the button, but I fixed similar things in the past, and it didn’t look right, and I know I don’t have an eye for design, so I’d rather not. Also there are considerations about the screen size/software versions/things I don’t even know I should care about, which I don’t know enough about. So I’ll leave it to the people who actually know about this stuff, and they can do a quick solid fix. But while looking at the button I noticed the page loaded a bit slower than I would like, and I saw some things I can improve there on the backend”

                                    1. 5

                                      I see this attitude sometimes called ownership. If you own it, everything is your problem. This attitude is absolutely valuable.

                                      However, I think there’s a question of means even within the shared goal of ownership or full-stack. If you see a problem, you could:

                                        • File a bug report, and navigate the process to get it prioritized, fixed, and deployed to everyone, then pick up the change and deploy it to your context.
                                      • Change the code and deploy it to your context.

                                      Both are valid paths for an owner to follow. I think the second is often ignored or forgotten.

                                      1. 3

                                        Obviously, truly “full stack” development is impossible unless you build your own hardware, OS, network stack, programming language,

                                          Eh, choosing to take something off-the-shelf is a perfectly acceptable decision to make. For things like hardware and OS, that’s most likely the best decision. But the higher up the stack you get, the more likely it is that you need to go custom. But even then, you might, for example, #include <stdlib.h> for small parts of your backend.

                                        I was hired for a job once quite some time ago, when I figured the best solution was to make a very basic website that embeds some licensed video player that ran off a licensed media server that streamed stuff from an off-the-shelf camera that I screwed to the wall with off-the-shelf hardware. The job was “stream video from this location to the website”, so that’s full-stack - they didn’t hire me to just make a website, but rather to figure out the whole problem - but it actually made most sense for me to just buy 99% of the requirements. I perhaps could have done more custom and kept more of the money (the license fees came out of the same budget as my pay), but this way the project was done in two days instead of….. well i don’t even know. I probably still would have bought an off-the-shelf camera at least!

                                          Same thing with the website - there’s a good chance the full stack developer at least has some influence down to the hardware, but you’ll almost certainly just buy something, even if you did have the budget and know-how to design it yourself, so you save the time.

                                        1. 3

                                          Web developers calling themselves ‘full stack’ is one of my pet peeves. My team works on microarchitecture, architecture, language design and implementation, and OS and distributed system design and implementation. We’re aware that there are layers below us in the stack (we don’t do any gate-level design or anything that involves caring about the physics of IC fabrication processes) and layers above us (we don’t do graphics things or any user-facing HCI things [though that was how I started down this rabbit hole]). Full-stack web developers sit at the top one or two layers in two different stacks and deny the existence of the 20 layers below them in each stack.

                                          1. 4

                                            Your “ire” is misplaced, because it’s an industry term, sad as it is. Even most web developers I know who actually have some experience outside web dev hate the term, because you can be a web developer and still do SRE work, work on languages, or on the kernel. Sadly it’s too late, the ship has sailed. This is the term for web developers now. Maybe we should coin a new one for people actually going down from JS to a backend, to the OS and hardware :)

                                            1. 2

                                              The term changed many years ago though. I remember job postings for full stack engineers which meant taking care of the infrastructure as well as the web service code. I was very annoyed when it changed, but you can’t argue with descriptivists…

                                              I think the term you want exists already: generalist engineer. Although everyone will have a slightly different view on that one too.

                                        1. 5

                                          Apparently you can buy IPv6 addresses, use them for the servers on your home network, and then if you change your ISP, continue to use the same IP addresses?

                                            You need to be an RIR (RIPE/ARIN/LACNIC/APNIC/AfriNIC) member for that. The membership fee alone runs into the thousands per year. Then you need to arrange routing with the hosting providers, and those that are ready to do that will also charge at least hundreds per month. No public cloud I’m aware of supports that at all, so you also need your own hardware in a datacenter where your transit provider is present.

                                          In other words, owning your IPv6 network is completely out of reach for individuals and small projects. I believe it shouldn’t be that way and that RIRs are basically rent-seeking organizations now that resources they still can distribute (32-bit ASNs and IPv6 addresses) are anything but scarce, but I can’t see why it may change any soon.

                                          1. 10

                                            Vultr will let you do BGP with them for (as far as I know) no additional cost above the price of your VPS: https://www.vultr.com/docs/configuring-bgp-on-vultr/

                                            In the RIPE area at least, you can obtain a provider-independent IPv6 assignment via an LIR - you don’t have to go directly to RIPE. A cheap option is Snapserv, who offer an IPv6 PI assignment for 99 EUR/year and an ASN for a one-off fee of 99 EUR. These can both be transferred to another LIR if, for example, Snapserv went out of business, or you wanted to switch LIR for some other reason. They also offer IPv6 PA assignments for less money, but the trade-off is that a PA assignment is tied to the LIR.

                                            You do need to be multi-homed to justify the PI/ASN assignments, so you’d need to find another upstream provider in addition to Vultr. Someone I know uses Vultr and a HE tunnel to justify it.

                                            1. 1

                                              Interesting, that’s sure an improvement. My company is a RIPE member so I haven’t been watching the PI situation closely, I’m glad to see it improve.

                                            2. 9

                                              In other words, owning your IPv6 network is completely out of reach for individuals and small projects. I believe it shouldn’t be that way and that RIRs are basically rent-seeking organizations now that resources they still can distribute (32-bit ASNs and IPv6 addresses) are anything but scarce, but I can’t see why it may change any soon.

                                              I suspect the problem is routing tables. It would be trivial to assign every person a /64 without making a dent in the address space but then you’d end up with every router on any Internet backbone needing a few billion entries in its routing table. That would completely kill current (and near-future) hardware. Not to mention the fact that if everyone moving between ISPs required a BGP update, the total amount of BGP traffic would overwhelm networks’ abilities to handle the update rate.

                                              You need some mechanism to ration the number of routable networks and money tends to be how we ration things.

                                              1. 2

                                                I doubt this will ever be a problem in practice. Even among those who host their own servers, the number of people who want to own their address space is always going to be small.

                                                 I’m also not advocating for making addresses free of charge, only for making them available for less than the current exorbitant prices that RIRs charge for membership.

                                              2. 2

                                                TIL, that’s really interesting. I just remember many, many years ago that people were entertaining this, but also with Sixxs and HE tunnels that kinda worked for a while.

                                                1. 2

                                                  Oh, but with tunnelbroker.net and similar, the provider owns the network, you just get a temporary permission to use it and can’t take it with you.

                                                  1. 1

                                                    Yes of course, but at least the way it works you could in theory use it longer despite switching ISPs. And I think my Sixxs account was nearly a decade old at the end. Some people might have moved cities three times in that time.

                                                2. 1

                                                  I always wish that addresses were more equitably distributed. With IPv6 there’s no reason not to. And yet ☹

                                                  1. 1

                                                     welp, for some reason my ISP provides every customer a /64, I don’t know what the reason for that is. There is no single person on the internet that needs a /64 and I’m certain no German household needs one. But yeah, waste tons of network space for no reason. IPv8 we’re coming..

                                                    1. 5

                                                       It’s the minimum routing size, and if you stray from it a lot of the protocol breaks; making it smaller would be insane. And it’s not wasteful: there are 2^64 /64s in the IPv6 address space, so you could hand billions of them to every person on the planet - the address space is REALLY BIG. IPv4 this is not. For it to show up in BGP it needs to be a /48; /32 is the minimum allocation, and there are as many of those as there are IPv4 addresses. It should actually be a /48 you’re given, not a /64 (or a /60 or /56 in Comcast home/business cases).

                                                      Why do you believe ipv8 is needed because of /64 allocations? Can you back that up with some numbers?

                                                      I think we’re good to be honest: https://www.samsclass.info/ipv6/exhaustion.htm

                                                      1. 1

                                                         I haven’t done the math but I’ll let the last APNIC report speak for itself in that regard (you’ll have to search, it’s long and there’s no way to link to a specific chapter).

                                                        However, before we go too far down this path it is also useful to bear in mind that the 128 bits of address space in IPv6 has become largely a myth. We sliced off 64 bits in the address span for no particularly good reason, as it turns out. We then sliced off a further 48 bits for, again, no particularly good reason. So, the vastness of the address space represented by 128 bits in IPv6 is in fact, not so vast.

                                                        And

                                                        Today’s IPv6 environment has some providers using a /60 end site allocation unit, many using a /56, and many others using a /48

                                                        So It’s not really a standard that breaks things, because then things would already break.

                                                        I just don’t see a reason why we’re throwing away massive address ranges, even my private server gets a /64, and that’s one server, not a household or such thing.

                                                        1. 2

                                                          The main reason your LAN must be a /64 is that the second half of each address can contain a MAC address (SLAAC) or a big random number (privacy extension).
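
                                                           As an illustration of why the low 64 bits are reserved for the interface identifier, here is a sketch of the classic modified EUI-64 derivation (example prefix and MAC; many stacks now default to random identifiers instead):

                                                           // Sketch: build a SLAAC-style address from a /64 prefix and a MAC address:
                                                           // split the 48-bit MAC, insert 0xff 0xfe in the middle, flip the
                                                           // universal/local bit, and use the result as the low 64 bits.
                                                           package main

                                                           import (
                                                               "fmt"
                                                               "net"
                                                           )

                                                           func main() {
                                                               prefix := net.ParseIP("2001:db8:1:2::") // example /64 prefix
                                                               mac := net.HardwareAddr{0x00, 0x11, 0x22, 0x33, 0x44, 0x55}

                                                               addr := make(net.IP, net.IPv6len)
                                                               copy(addr, prefix.To16())
                                                               copy(addr[8:], []byte{mac[0] ^ 0x02, mac[1], mac[2], 0xff, 0xfe, mac[3], mac[4], mac[5]})

                                                               fmt.Println(addr) // 2001:db8:1:2:211:22ff:fe33:4455
                                                           }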

                                                          1. 1

                                                            So It’s not really a standard that breaks things, because then things would already break.

                                                             For routing, not in general, but going below /64 does break things like SLAAC. The original guidance was a /48; it’s been relaxed somewhat since the original RFC but can go down to a /64. I’m at work or I’d pull up the original RFC. Going below /64 does break things, but not at the level being referenced.

                                                            I just don’t see a reason why we’re throwing away massive address ranges, even my private server gets a /64, and that’s one server, not a household or such thing.

                                                             Have to get out of the IPv4 conservation mindset. A /64 is massive, yes, but 64 bits of address space is the entire IPv4 space squared (2^64 = (2^32)^2), that is… a large amount of space. It also enables things like having ephemeral IP addresses that change every 8 hours. It’s better to think of a /64 as the minimum addressable/routable subnet, not a single /32 like you would have in IPv4. And there are A LOT of them; we aren’t at risk of running out even if we get allocation crazy. And that’s not hyperbole: we could give every single device, human, animal, place, and thing a /64 and still not approach running out.

                                                            1. 1

                                                              Today’s IPv6 environment has some providers using a /60 end site allocation unit, many using a /56, and many others using a /48

                                                               Also, just realized that you might be reading /60 or /56 as being smaller than a /64; it’s unintuitive, but this is the prefix length of the subnets, not the size. So smaller than a /64 would be a CIDR prefix above 64, not below - e.g. a /96 would break in my example. It’s also why assigning “just” a /64 is a bit evil on the part of ISPs, and the allocation should be larger.

                                                          2. 1

                                                            IPv8 we’re coming

                                                            fun fact: it’s called 6 because it’s the 6th version (kinda). Not because of the size of the address (which is 16 bytes anyway). You’re rooting for IPv7!

                                                      1. 2

                                                        dead-tree print editions and crude typesetting

                                                        The age of wasting lifeless corpses of trees for poor typesetting is, sadly, now. Old books have that problem occasionally, but in the last decade it became pervasive.

                                                        But Ada hasn’t quite taken off in the mainstream. Social factors - perhaps association with restrictive licenses and a proprietary compiler - have constrained adoption.

                                                        I wonder what proprietary compiler they are talking about. There are proprietary Ada compilers, but GNAT is free software.

                                                         Also, that book has the signature of Jean Ichbiah, the principal Ada language designer!

                                                        1. 1

                                                          Old books have that problem occasionally, but in the last decade it became pervasive.

                                                          [anecdotal, no way I can find where I read this years ago] There was a period, around 2000?, where books had to be > 1000 pages or they wouldn’t sell. You ended up with books with whole std libraries included, just to pad the page count.

                                                        1. 8

                                                           Really a shame they chose to support a new diagram language like Mermaid rather than one that has been around for 30 years, is widely used, and is extremely flexible. Especially since there is already a massive set of Graphviz files on Github. A simple search finds approximately 2,441,463 Graphviz files on Github (Source: https://github.com/search?q=extension%3Agv&type=Code&ref=advsearch&l=&l= and https://github.com/search?q=extension%3Adot&type=Code&ref=advsearch&l=&l=). A Github search for Mermaid turns up only 7,846 files (https://github.com/search?q=extension%3Ammd&type=Code&ref=advsearch&l=&l=), making it extremely unpopular by comparison. Why would Github ignore Graphviz and choose to support Mermaid instead?

                                                           Perhaps they looked at the number of Graphviz files in existence and realized that due to its popularity, they’d have to have a lot more infrastructure in place and the cost would be much greater? For example, if you figured rendering a diagram cost 1 cent (just as an example, I have no idea how much it costs), rendering their library of Mermaid files would only cost them about $78, whereas Graphviz would end up costing them $24,414.63. Perhaps there are other technical constraints here I am not aware of? Maybe they’ve already got good infrastructure in place for JS libraries, and therefore Mermaid is an easier implementation.

                                                          1. 9

                                                            To be fair, mermaid seems to be more of a format to be used inline in a markdown file, so searching for dedicated .mmd files won’t be a good indicator of its popularity.

                                                            (I don’t know anything much about either mermaid or viz, and don’t care whichever github supports)

                                                            1. 4

                                                              Graphviz is more akin to SVG than Mermaid. It’s an image format, not a text description language.

                                                              1. 4

                                                                 I assume by graphviz they mean the dot format, which is meant for hand authoring. Though it seems like a different use case than mermaid.

                                                              2. 2

                                                                I mean, no reason we can’t have both in the future?

                                                                I say this as someone who only uses Graphviz, but I don’t think “older language/more files of it exist” is a fair comparison of which is more popular today. And I certainly wouldn’t praise Graphviz for being extremely flexible – AFAIK you resolve positioning errors (which are common) by cleverly adding invisible edges, or partitioning cleverly into subgraphs, or even by specifying coordinates for every single node, which is hell.

                                                              1. 5

                                                                (meta) Thanks for adding the short description of the project in the title, makes the list of articles here a bit easier to scroll through.

                                                                1. 2

                                                                  If it’s stupid, but it works … it’s not stupid.

                                                                  Well, maybe sometimes it’s a little bit dumb. I’ll take dumb but solves a problem over not solving the problem though.

                                                                  1. 1

                                                                    It does expose your TOTP code to the network.

                                                                    1. 1

                                                                      It is a fun hack, nothing anyone should use.

                                                                      1. 2

                                                                        i feel like maybe we should discuss that…

                                                                         is it exposing your TOTP code to the network? isn’t the whole point of TOTPs that anyone knowing a TOTP would not learn the underlying secret?

                                                                        is it even possible to guess a TOTP given knowledge of n previous TOTPs? i do know it’s fairly easy to brute force a TOTP when there is no rate limiting in place, and i think this would definitely be one of those cases
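
                                                                         for reference, here’s roughly what a standard TOTP computation looks like (a minimal RFC 6238-style sketch with the common defaults of HMAC-SHA1, a 30-second step and 6 digits - not this project’s code). the code is derived from HMAC(secret, unix_time/30) and then truncated, so seeing previous codes doesn’t reveal the secret, but within the same 30-second step the same code is trivially replayable:

                                                                         // Sketch of a standard TOTP (RFC 6238) with the usual defaults.
                                                                         package main

                                                                         import (
                                                                             "crypto/hmac"
                                                                             "crypto/sha1"
                                                                             "encoding/binary"
                                                                             "fmt"
                                                                             "time"
                                                                         )

                                                                         // totp returns a 6-digit code for the given shared secret and time,
                                                                         // using HMAC-SHA1 and a 30-second time step.
                                                                         func totp(secret []byte, t time.Time) string {
                                                                             var msg [8]byte
                                                                             binary.BigEndian.PutUint64(msg[:], uint64(t.Unix()/30))

                                                                             mac := hmac.New(sha1.New, secret)
                                                                             mac.Write(msg[:])
                                                                             sum := mac.Sum(nil)

                                                                             // Dynamic truncation (RFC 4226): take 4 bytes at an offset given by
                                                                             // the last nibble, mask the sign bit, keep 6 decimal digits.
                                                                             off := sum[len(sum)-1] & 0x0f
                                                                             code := binary.BigEndian.Uint32(sum[off:off+4]) & 0x7fffffff
                                                                             return fmt.Sprintf("%06d", code%1000000)
                                                                         }

                                                                         func main() {
                                                                             // hypothetical shared secret; real setups decode a base32 string
                                                                             fmt.Println(totp([]byte("12345678901234567890"), time.Now()))
                                                                         }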

                                                                        1. 2

                                                                          Since it’s time-based, and nothing that I see (from my quick skim) is keeping track of which codes have been used, a network observer who sees what IP addresses you’re talking to should be able to bypass your TOTP protection as long as they connect to the same IP address within that 30 second window or whatever.

                                                                          1. 2

                                                                            I checked a few TOTP implementations out there and not all of them invalidate codes after use. Github for example happily accepts the same code multiple times within the same time period.

                                                                            I agree that blacklisting codes after use is good practice, but it’s just one more safety measure. Only checking the TOTP without blacklisting is not the same as not checking a TOTP

                                                                            1. 2

                                                                              Github for example happily accepts the same code multiple times within the same time period.

                                                                              That’s against the specs and a pretty serious bug. It’s called “one time” for a reason.

                                                                            2. 1

                                                                              If they can guess the IP then they have already broken your TOTP anyway…

                                                                              1. 4

                                                                                Somebody who can watch your IP traffic (watch, not decrypt!) does not need to guess the IP.

                                                                                1. 3

                                                                                  sure, but they still would need the SSH key to access the machine.

                                                                                  1. 1

                                                                                    TOTP is supposed to be the second factor that protects you when someone has stolen your first factor. If your security is only as good as the first factor, then you don’t have 2FA.

                                                                                  2. 2

                                                                                    Oh, sure, so they have a handful of seconds to try cracking your password before it rotates.

                                                                                    1. 1

                                                                                      Absolutely; that’s why a solution like fail2ban is probably the better idea and more comfortable to use.

                                                                                      1. 1

                                                                                        Yes, so at least it would provide that much protection – reducing the window of exposure.

                                                                            3. 1

                                                                              How? All the ip addresses exist, it just changes the firewall rules. You would have to bruteforce the code in the time to find it, no?

                                                                              1. 2

                                                                                no TLS for the TOTP “code”, it’s plain in the connection IP

                                                                          1. 3

                                                                            Currently, this method is not scalable as it requires over 1MB of CSS downloads and hundreds of requests per user. However, with the next upcoming draft of the CSS specification, CSS Values 4, it may dramatically shrink the number of requests per user by allowing the use of custom variables in URLs.

                                                                            sigh

                                                                            1. 1

                                                                              Thanks, fun background. I have to deal with crapp^Wmisconfigured telcos way too often still, to have them fix their DTMF settings.

                                                                              1. 3

                                                                                Impressive. And it links to sourceforge.net to finish it all off.

                                                                                1. 1

                                                                                  webb!

                                                                                  1. 4

                                                                                    I’ve found that having offline docs really helps to mitigate the sort of helplessness from frantically googling things, because the feedback loop can be sooo much quicker.

                                                                                    Highly recommend people install Dash/Zeal (and of course sources for libs you use when possible). It’s much easier to figure stuff out when you don’t have an HTTP request per page turn

                                                                                    1. 1

                                                                                      Is your internet particularly slow? I’ve never found the HTTP request of online docs to be particularly problematic, it takes far less time than reading the docs does, or typing the type name that I’m searching for, or whatever.

                                                                                      What I do find really useful is

                                                                                      • Duckduckgo’s !rust to search the rust standard library documentation from my browsers search bar (online, via http requests)
                                                                                       • Cargo’s cargo doc --open to build (and open in a browser) local documentation for my project and all its dependencies (except the standard library)

                                                                                      I really miss these when working with other languages, even for languages/dependencies with good documentation, I never know how to find it or navigate inside it as quickly. But that’s not because of latency from http requests, it’s because of familiarity and quality of search functions.

                                                                                      1. 2

                                                                                        I think a part of it is the interface of Zeal or Dash, I’d recommend you try it out! It’s the difference between a BMW and some Formula 1 car: both are pretty fast but one just outperforms consistently.

                                                                                        I used to read comics online back in school. Each page turn is a click. The next page would load in under a second, so real fast! But once I downloaded something offline and read from there, I was reading through stuff 3 or 4 times faster.

                                                                                        The data just being present is a huge advantage for these tools’ UIs, and you’re really able to move at the speed of thought

                                                                                        1. 1

                                                                                          was reading through stuff 3 or 4 times faster.

                                                                                          Ok, faster. But faster is not always better.

                                                                                      2. 1

                                                                                        +1 on offline docs

                                                                                        For me the low latency loop of local docs just helps me keep the flow of coding going. I love quickly searching docs and finding what I need, or using an IDE to open up the docs for me. I don’t feel helpless necessarily, just have the potential to lose my focus. And if my life is stressful at a given moment I tend to lose focus easily.

                                                                                      1. 6

                                                                                        I hope that is not your password on that sticky note

                                                                                        1. 41

                                                                                          I doubt it is. It’s hunter2 base64 encoded.

                                                                                          ➜ echo "aHVudGVyMgo=" | base64 --decode
                                                                                          hunter2
                                                                                          
                                                                                          
                                                                                          1. 28

                                                                                            I only see *******, does lobste.rs hide passwords if you put them in posts? That’s a neat feature. Here’s mine to test: *******.

                                                                                            1. 8

                                                                                              <DavidDiamond> Here’s mine to test: *******.
                                                                                              thats what I see

                                                                                              1. 3

                                                                                                hunter3

                                                                                                1. 3

                                                                                                  OH NO!

                                                                                        1. 27

                                                                                          I suggested a rant tag since this feels like a super vague long form subtweet that likely has a specific story/example behind it. I don’t understand what dhh actually complains about there and whether it’s genuine without knowing that context.

                                                                                          1. 11

                                                                                            Pretty sure he’s railing against the /r/programmerhumor style “software development is just copy-and-pasting from stack overflow right guiz!” meme. I’m sympathetic to his frustration because this joke (which was never that funny in the first place) has leaked into non-technical circles. I’ve had non techies say to me semi-seriously “programming, that’s just copying code from the internet, right?” and it galls a bit. Obviously we all copied code when we were starting out but it’s not something that proficient developers do often and to assert otherwise is a little demeaning.

                                                                                            1. 9

                                                                                              Obviously we all copied code when we were starting out

                                                                                              Well no, I copied examples from a book. Manually, line by line.

                                                                                              1. 6

                                                                                                I have 20 years experience and I regularly copy paste code rather than memorize apis or painstakingly figure out the api from its docs. I do the latter too, but if I can copy paste some code as a start all the better.

                                                                                                1. 4

                                                                                                  The meme starts being a bit more condescending now though. I frequently come across tweets saying things like « lol no one of us has any idea what we are doing we just copy paste stuff ». The copy pasting stuff is kinda true in a way (although a bit more complicated, even as a senior dev I copy paste snippets but know how to adapt them to my use case and test them), but the incompetence part is not. But it sadly starts to feel like there are tons of incompetent OR self deprecating people in the field. That’s pretty bad.

                                                                                                  This blog post resonates with me, it really pinpoints something.

                                                                                                  1. 3

                                                                                                    It’s cool if that’s what he wanted to say, but the inclusion of impostor syndrome and gatekeeping made me think otherwise.

                                                                                                    1. 3

                                                                                                      That was probably just him hedging against expected criticism

                                                                                                    2. 2

                                                                                                      Why am I paying this exorbitant salary, to attract people like you with a fancy degree and years of experience when all you ever do is a four-second copy-and-paste job?

                                                                                                       You pay it because I spent a long time achieving my degree and accumulating years of experience to be able to judge which code to copy and paste where, and why - and in only four seconds at that.

                                                                                                      No matter the context these reductions are always boiled down to the easy to perform operation, never the understanding behind the operation.

                                                                                                    3. 5

                                                                                                      It absolutely feels like a subtweet, but I have no idea what the context was. Did someone at Basecamp just post the no idea dog one time too often?