Threads for alicebob

  1. 2

    bit weird they only accept paypal and bitcoin :(

    1. 1

      What would you like to use instead?

      1. 1

        good question. paypal always gives trouble with logging in for me.

    1. 6

      Okay, but this is because testify has a bad API. My testing library, be, doesn’t need a duplicate assert package because I just take the *testing.T and wrap it in be.Relaxed(t) when I want the test to keep chugging along.
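
      For the curious, a rough sketch of what an assert helper like that could look like (the Equal/Relaxed names come from this comment; the real be package may differ in the details):

      ```go
      package be

      import "testing"

      // Equal fails the test immediately when got != want.
      func Equal[T comparable](tb testing.TB, want, got T) {
          tb.Helper()
          if got != want {
              tb.Fatalf("want %v, got %v", want, got)
          }
      }

      // relaxed turns fatal failures into non-fatal ones so the test keeps
      // chugging along after a failed assertion.
      type relaxed struct{ testing.TB }

      func (r relaxed) Fatal(args ...any)                 { r.Error(args...) }
      func (r relaxed) Fatalf(format string, args ...any) { r.Errorf(format, args...) }

      // Relaxed wraps a *testing.T (or any testing.TB) in that adapter.
      func Relaxed(tb testing.TB) testing.TB { return relaxed{tb} }
      ```

      A test would then call be.Equal(t, want, got), or be.Equal(be.Relaxed(t), want, got) when it should keep going after a failure.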

      1. 2

        That looks simple, doesn’t drag in any dependencies, and uses generics for the Equal(). Nice!

        1. 1

          TY

      1. 10

        Github wiki/readmes are ok. IMO it’s harder to build and establish the discipline to document stuff than to actually write it down.

        1. 2

          same here; github wikis are good enough for my use cases. And it’s a normal git repo, so easy to edit/search locally.

        1. 1

          I can’t access this page, I get a “server not found” error.

          1. 1

            It was working for me not long ago; it seems available on web archive in the meantime: http://web.archive.org/web/20220914231535/https://poniesandlight.co.uk/reflect/island_rendergraph_1/

            1. 1

              ouch, sorry about that - it should work now. it might have been a case of too many requests at once, as the site is self-hosted.

              1. 2

                Still down here

                1. 1

                  😭 it looks like it’s working for me — but this seems like an intermittent issue with my hosting provider … i’d be grateful for recommendations for where to reliably host a static self-hosted website in 2022… github?!

                  1. 1

                    It looks like a DNS issue to me; somehow dig poniesandlight.co.uk @8.8.8.8 works, but not with @1.1.1.1 (Cloudflare)
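
                    If anyone wants to reproduce the check without dig, here is a quick Go sketch that resolves the name against both public resolvers (the resolver addresses are the ones from this comment, the rest is my own naming):

                    ```go
                    package main

                    import (
                        "context"
                        "fmt"
                        "net"
                        "time"
                    )

                    // lookupVia resolves host using one specific DNS server instead of the
                    // system default, roughly what `dig host @server` does.
                    func lookupVia(server, host string) ([]string, error) {
                        r := &net.Resolver{
                            PreferGo: true,
                            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                                d := net.Dialer{Timeout: 5 * time.Second}
                                return d.DialContext(ctx, network, server)
                            },
                        }
                        return r.LookupHost(context.Background(), host)
                    }

                    func main() {
                        for _, server := range []string{"8.8.8.8:53", "1.1.1.1:53"} {
                            addrs, err := lookupVia(server, "poniesandlight.co.uk")
                            fmt.Println(server, addrs, err)
                        }
                    }
                    ```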

                    1. 1

                      wow interesting … thank you for checking this out. the internet keeps surprising me.

                      1. 5

                        the internet keeps surprising me

                        The more I know about it the more I’m surprised it works at all.

                        1. 1

                          “the internet is made mostly of duct tape and hope” - anonymous

            1. 43

              Betteridge’s law of headlines strikes again.

              1. 6

                Not really, Betteridge’s Law is better applied to headlines like Will there ever be a better VCS than Git?

                By assuming the answer to the headline in question is the default “No”, you’re basically assuming Git will never be surpassed.

                1. 3

                  That makes me sad. :-(

                  1. 18

                    Honestly I’m of the opinion that git’s underlying data model is actually pretty solid; it’s just the user interface that’s dogshit. Luckily that’s the easiest part to replace, and it doesn’t have any of the unfortunate network effect problems of changing systems altogether.

                    I’ve been using magit for a decade and a half; if magit (or any other alternate git frontends) had never existed, I would have dumped git ages ago, but … you don’t have to use the awful parts?

                    1. 16

                      Honestly I’m of the opinion that git’s underlying data model is actually pretty solid; it’s just the user interface that’s dogshit.

                      For what it’s worth, I do disagree, but not in a way relevant to this article. If we’re going to discuss Git’s data model, I’d love to discuss its inability to meaningfully track rebased/edited commits, the fact that heads are not version tracked in any meaningful capacity (yeah, you’ve got the reflog locally, but that’s it), that the data formats were standardized at once too early and too late (meaning that Git’s still struggling to improve its performance on the one hand, and that tools that work with Git have to constantly handle “invalid” repositories on the other), etc. But I absolutely, unquestionably agree that Git’s UI is the first 90% of the problem with Git—and I even agree that magit fixes a lot of those issues.

                      1. 4

                        The lack of ability to explicitly store file moves is also frustrating to me.

                        1. 3

                          Don’t forget that fixing capitalization errors with file names is a huge PITA on Mac.

                        2. 4

                          I’ve come to the conclusion that there’s something wrong with the data model in the sense that any practical use of Git with a team requires linearization of commit history to keep what’s changing when straight. I think a better data model would be able to keep track of the history branches and rebases. A squash or rebase should include some metadata that lets you get back the state before the rebase. In theory, you could just do a merge, but no one does that at scale because they make it too messy to keep track of what changed when.

                          1. 2

                            I don’t think that’s a data model problem. It’s a human problem. Git can store a branching history just fine. It’s just much easier for people to read a linearized list of changes and operate on diffs on a single axis.

                            1. 4

                              Kind of semantic debate whether the problem is the data model per se or not, but the thing I want Git to do—show me a linear rebased history by default but have the ability to also show me the pre-flattened history and the branch names(!) involved—can’t be done by using Git as it is. In theory you could build what I want using Git as the engine and a new UI layer on top, but it wouldn’t be interoperable with other people’s use of Git.

                              1. 3

                                It already has a distinction between git log, git log --graph and git log --decorate (if you don’t delete branches that you care about seeing). And yeah, you can add other UIs on top.

                                BTW: I never ever want my branch names immortalized in the history. I saw Mercurial do this, and that was the last time I’ve ever used it. IMHO people confuse having record of changes and ability to roll them back precisely with indiscriminately recording how the sausage has been made. These are close, but not the same.

                                1. 3

                                  git merge --no-ff (imo the only correct merge for more than a single commit) does use the branch name, but the message is editable if your branch had a useless name

                                  1. 2

                                    None of those show squashes/rebases.

                                    1. 3

                                      They’re not supposed to! Squashing and amending are important tools for cleaning up unwanted history. This is a very important ability, because it allows committing often, even before each change is final, and then fixing it up into readable changes rather than “wip”, “wip”, “oops, typo”, “final”, “final 2”.

                                      1. 4

                                        What I’m saying is, I want Git for Git. I want the ability to get back history that Git gives me for files, for Git itself. Git instead lets you either have one messy history (with a bunch of octopus merges) or one clean history (with rebase/linearization). But I want a clean history that I can see the history of and find out about octopuses (octopi?) behind it.

                          2. 4

                            No. The user interface is one of the best parts of Git, in that it reflects the internals quite transparently. The fundamental storage doesn’t model how people work: Git reasons entirely in terms of commits/snapshots, yet any view of these is 100% of the time presented as diffs.

                            Git will never allow you to cherry-pick meaningfully, and you’ll always need dirty hacks like rerere to re-solve already-solved conflicts. Not because of porcelain (that would have been solved ten years ago), but because snapshots aren’t the right model for that particular problem.

                            1. 2

                              How many people do all their filesystem work with CLI tools these days? Why should we do it for a content-addressable filesystem with a builtin VCS?

                              Never heard anyone complain that file managers abstract mv as “rename” either, why can’t git GUIs do the same in peace?

                              1. 8

                                How many people do all their filesystem work with CLI tools these days?

                                At least one. But I also prefer cables on my headphones.

                                1. 5

                                    Oh thank goodness, there’s two of us. I’m not alone!

                        1. 14

                          It doesn’t explain the fun stuff (error correction).

                          1. 5

                            It’s something I’d like to write about; Reed-Solomon codes are incredible. Most technical presentations on them pursue rigorous definitions of bounds and theory, and it isn’t very accessible.

                          1. 5

                            At my employer we don’t interview to solve problems. We interview to have a conversation about a problem. We expect some code to be written but don’t like the idea of “you have 60-90 mins to solve this tricky puzzle”. It isn’t a real world situation. So instead our process is not about solving anything but vetting if the person can walk and talk. Solving the problem is just a way to show-off.

                            Everyone will crap their pants on a timed puzzle and then you only end up hiring people who show off. I’d rather hire someone willing to learn hands-on and demonstrate said learning and understanding.

                            1. 4

                              We interview to have a conversation about a problem.

                              Similar to what I prefer. Ideally we can have a civilized chat about various programming languages, and talk a bit about what makes them special and their pros and cons. If I hire for, say, a role which happens to be mostly Python, then it’s a very bad signal if you’ve only ever seen Python, and if you can’t say anything bad about it. I guess I’m mostly trying to figure out how much context you have, and how much interest you have in programming in general.

                              1. 3

                                Absolutely. I can tell how involved you are and your experience level from your discussion of the language. Half the “experienced” devs I interview can’t tell me what an Array.prototype.reduce does or write a Promise wrapper. Yet they love Javascript and have been working in it for X years?

                            1. 1

                              If I remember correctly, Go’s original design goal was to not have any generics. Maybe it had something to do with the way C++ did templates leaving a bad taste in everyone’s mouth? I don’t remember, but I’ve used generics and have never been in a situation where doing without them wasn’t preferable, or at the very least no worse. I seem to only hear about generics being misused, or about cases that absolutely need them.

                              Maybe we should have generics for the cases where they’re absolutely needed, knowing that in 99% of use cases you don’t need them and you’re just being clever? Or am I being daft?

                              1. 1

                                and you’re just being clever

                                Yes, but I find not trying to be clever one of the hardest things to do in programming. It’s so very tempting.

                                1. 1

                                  How would you, say, make a decent multi-thread-safe hash map without generics? Go’s sync.Map has serious performance problems because the only data structures which can be generic are the built-in implementations of maps, channels and arrays. Everything else has to use runtime polymorphism with indirection through interface{}.
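
                                  For illustration, a minimal sketch of the kind of type-safe concurrent map generics make possible (mutex-based, the names are mine, and it is not a drop-in sync.Map replacement):

                                  ```go
                                  package syncmap

                                  import "sync"

                                  // Map is a mutex-guarded generic map: keys and values keep their concrete
                                  // types, so nothing is boxed through interface{}.
                                  type Map[K comparable, V any] struct {
                                      mu sync.RWMutex
                                      m  map[K]V
                                  }

                                  func New[K comparable, V any]() *Map[K, V] {
                                      return &Map[K, V]{m: make(map[K]V)}
                                  }

                                  func (s *Map[K, V]) Load(k K) (V, bool) {
                                      s.mu.RLock()
                                      defer s.mu.RUnlock()
                                      v, ok := s.m[k]
                                      return v, ok
                                  }

                                  func (s *Map[K, V]) Store(k K, v V) {
                                      s.mu.Lock()
                                      defer s.mu.Unlock()
                                      s.m[k] = v
                                  }
                                  ```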

                                1. 2

                                  well done on the domain name, got a chuckle out of that. Not currently looking for Nix CI, but I’m happy people are working on that.

                                  1. 1

                                    Thanks!

                                  1. 24

                                    Compared to the mess I’ve seen with reflect, these are really crimes on the level of jaywalking.

                                    1. 1

                                      I’ve got one of the Tuxedos with a usable resolution (3200x1800). Battery life is… fine (>4h), speed is fine. It’s ugly, but it’s also easy to open up and replace things (I replaced the battery once already).

                                      1. 14

                                        Overall, this is a well researched and detailed article, but the tone comes across as “this doesn’t monomorphize 100% and therefore Go generics are bad and slow” - which, as a prevailing sentiment, is simply an incomplete analysis. The Go team was obviously aware of the tradeoffs, and so it seems unfair in many ways.

                                        One key thing not discussed in this article is generic containers where the element type was previously interface{} and no methods are ever called on that interface. In this extremely common use case, Go’s generics are likely to be as fast as manually monomorphized code.
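
                                        A sketch of that case, with hypothetical types just to make the before/after concrete:

                                        ```go
                                        package container

                                        // Before Go 1.18: elements are boxed in interface{} and need a type
                                        // assertion on the way out.
                                        type AnyStack struct{ items []interface{} }

                                        func (s *AnyStack) Push(v interface{}) { s.items = append(s.items, v) }

                                        // With type parameters: same shape, elements stored as T, and no method is
                                        // ever called on T, so there is nothing left to dispatch at runtime.
                                        type Stack[T any] struct{ items []T }

                                        func (s *Stack[T]) Push(v T) { s.items = append(s.items, v) }

                                        func (s *Stack[T]) Pop() (v T, ok bool) {
                                            if len(s.items) == 0 {
                                                return v, false
                                            }
                                            v, s.items = s.items[len(s.items)-1], s.items[:len(s.items)-1]
                                            return v, true
                                        }
                                        ```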

                                        Another key use case is for general code that need not be performance-critical, where reflection may have been used previously. In these cases, generics are likely to be strictly faster than reflection as well (potentially modulo some icache issues for megamorphic call sites).

                                        Finally, this design allows for future compiler enhancements - including additional inlining of indirect/interface calls!

                                        As an aside, if you were doing semi-automated monomorphization before with text templating, you now have a much richer and more robust foundation for such a toolchain. That is, you can use Go’s syntax/parser and type-checker, then provide a code generator that spits out manually monomorphized Go code. If nobody has done this yet, I’m sure it will happen soon, as it’s quite straightforward.

                                        1. 9

                                          I didn’t get that tone, especially with the conclusion of the article. The author encourages folks to use generics in certain cases, shows cases where they do get optimized well, and is hopeful for a future where they get either full monomorphization and/or for the optimization heuristics to get better.

                                          To me this seemed like a very fair article, even if they did miss the case that you mentioned.

                                          1. 3

                                            One key thing not discussed in this article are generic containers where the element type was previously interface{} and no methods are ever called on that interface. In this extremely common use case, Go’s generics are likely to be as fast as manually monomorphized code.

                                            The article mentions that byteseq is fast. This is just a special case of that: the vtable indirection can’t slow you down if you never dispatch a method. :-)
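
                                            Roughly the shape of that byteseq trick, for anyone who hasn’t seen it (my own toy example, not the article’s code):

                                            ```go
                                            package seqs

                                            // byteseq covers both string and []byte; indexing yields a byte for every
                                            // type in the set, and no methods are ever dispatched.
                                            type byteseq interface{ ~string | ~[]byte }

                                            func HasPrefix[T byteseq](s, prefix T) bool {
                                                if len(s) < len(prefix) {
                                                    return false
                                                }
                                                for i := 0; i < len(prefix); i++ {
                                                    if s[i] != prefix[i] {
                                                        return false
                                                    }
                                                }
                                                return true
                                            }
                                            ```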

                                            That is, you can use Go’s syntax/parser and type-checker, then provide a code generator that spits out manually monomorphized Go code. If nobody has done this yet, I’m sure it will happen soon, as it’s quite straightforward.

                                            I was looking into this last night. I think you can still use the “go2go” tool from the design prototyping of generics, but it’s no longer being maintained and will probably become subtly incompatible soon if it isn’t already.

                                            1. 0

                                              It’s hard to take it seriously as anything other than the Go team continuing to hate generics and trying to do everything they can do to discourage people from using them.

                                              The fact that there are people here talking about (afaict) continuing to use old Go code generators to support generic code without an absurd memory hit demonstrates that Go’s generics have not achieved the most basic of performance goals.

                                              1. 21

                                                It’s hard to take it seriously as anything other than the Go team continuing to hate generics and trying to do everything they can do to discourage people from using them.

                                                sigh It’s hard to take you seriously with this comment. You might have different opinions/preferences than the Go team, but to assume that they are trying to sabotage themselves is ridiculous.

                                                Go’s generics have not achieved the most basic of performance goals

                                                I’ve written and deployed three major Go systems – one of which processes tens of petabytes of video per week – and I can count on one hand the number of times monomorphisation was necessary to achieve our performance goals. Generally, I copy/paste/tweak the < 100 lines of relevant code and move on with my work. Performance is not the only motivation for generics.

                                                I’ve also written a fair bit of C++ in my life & have also had the experience where I had to disable monomorphization to avoid blowing the instruction cache. To say nothing of compile times.

                                                You don’t like Go. That’s fine, but maybe don’t shit on the people who are working hard to create something useful for the people who do like it.

                                                1. 8

                                                  Generally, I [do something simple] and move on with my work.

                                                  That summarizes my Go experience in the last decade. I miss this in basically every other language now.

                                                  Also the generics turned out very nice imho, I’m impressed with the balance they managed to strike in the design.

                                                  1. 6

                                                    Also, this is…. clearly a compiler heuristic that can be tightened or loosened in future releases. They just chose “all pointers are the same” in order to ship quickly.

                                                    1. 2

                                                      OTOH, no one can say they shipped generics “quickly“. Even Java did it quicker, though not better.

                                                      1. 1

                                                        It only took 11 years after this was posted: https://research.swtch.com/generic :-)

                                                    2. 0

                                                      The Go team has stated that they do not like generics. They only added them now because everyone working with Go was continuously frustrated by, and complaining about, the lack of generics.

                                                      Given that background, I believe it is reasonable to assume that the Go team did not consider competitive generics to be a significant goal.

                                                      Worrying about compile time is something compiler developers should do, but given the option between faster build time for a few developers, vs faster run time for huge numbers of end users, the latter is the correct call.

                                                      I obviously can’t speak to what your projects are, but I would consider repeatedly copy/pasting essentially the same code to be a problem.

                                                      I don’t like Go, but I also didn’t complain about the language. I complained about an implementation of generics with questionable performance tradeoffs, from a group that has historically vigorously argued against generics.

                                                      1. 6

                                                        The Go team has stated that they do not like generics.

                                                        Do you have a source for that claim? All I remember is a very early statement that has also been on the website.

                                                        Looked it up. This has been in there since at least 2010 [1].

                                                        Why does Go not have generic types?

                                                        Generics may well be added at some point. We don’t feel an urgency for them, although we understand some programmers do.

                                                        [1] https://web.archive.org/web/20101123010932/http://golang.org/doc/go_faq.html

                                                        1. 2

                                                          You’re right in that I can’t point to a single quote.

                                                          I can however point at the last decade of Go devs talking about supporting generics, which has pretty consistently taken the path of “generics make the language harder” (despite them being present in a bunch of builtin types anyway?), “generics make the code bigger”, “generics make compilation slower”. Your above quote even says that they recognize that it’s developers outside the core Go team that see the benefit of generics.

                                                          This is not me beating up on Go, nor is it me questioning the competence of the core team. This is me saying that in light of their historical reticence to support generics, this comes across as being an intentionally minimal viable implementation created primarily to appease the horde of devs that want the feature, rather than to provide a good level of performance.

                                                          1. 7

                                                            in light of their historical reticence to support generics

                                                            Ian Lance Taylor, one of the largest and earliest Go contributors, was advocating or at least suggesting generics for quite some time. He was exploring design ideas since at least 2010, with a more serious design in 2013. I think this contradicts the “giving in and appeasing the masses” sentiment you’re projecting on the Go team.

                                                            Come to think of it, he also wrote a very long rebuttal to a commenter on the golang-nuts mailing list who was essentially saying what you’re saying. I’ll see if I can find it. Edit: Here it is.

                                                            1. 3

                                                              this comes across as being an intentionally minimal viable implementation created primarily to appease the hoard of devs that want the feature, rather than to provide a good level of performance.

                                                            That sounds like a very unconvincing argument to me. In my opinion they did a very good job with the accepted generics proposal, because it keeps the language simple while also helping to avoid a lot of boilerplate, especially in libraries. Also, the Go team has pointed out several parts of the generics implementation they want to improve on in upcoming releases. Why would they do that if they had implemented generics only to please “the horde of devs that want the feature”?

                                                              1. 2

                                                                I will say that compared to early generics proposals, the final design is quite a bit more Go-like. It’s unfortunate that the type constraints between the [] can get quite long, but if you ignore the type parameters, the functions look just like normal Go functions.

                                                          2. 4

                                                            The Go team has stated that they do not like generics.

                                                            I don’t think that is true at all. They stated (1) that they did not like either end of the tradeoffs with erasure (characterized by Java) causing slow programs and full specialization (characterized by C++ templates) causing slow compile times. And (2) that some designs are needlessly complex.

                                                            They spent years refining the design - even collaborating with Academia - to minimize added type system complexity and choose a balanced performance/compile-time implementation tradeoff that met their objectives.

                                                            given the option between faster build time for a few developers, vs faster run time for huge numbers of end users, the latter is the correct call.

                                                            I couldn’t possibly disagree more.

                                                            I would consider repeatedly copy/pasting essentially the same code to be a problem.

                                                          I can’t think of any lower severity problem affecting any of my projects.

                                                            Anyway, I won’t reply to any further messages in this thread.

                                                            1. 1

                                                              They only added them now because everyone working with Go was continuously frustrated by, and complaining about, the lack of generics.

                                                              I would phrase it that everyone not working with Go was complaining about the lack of generics, and the Google marketing team assigned to Go (Steve Francia and Carmen Ando being the most prominent) are working hard to sell Go to the enterprise, so it was priority to clear that bullet point up.

                                                              People working with Go generally just bite the bullet and use code generation if necessary, but mostly just ignored the lack of generics.

                                                          3. 4

                                                            Self-sabotage just for the sake of generics doesn’t make sense because this release slowed down build times for everyone not just people using generics: https://github.com/golang/go/issues/49569.

                                                        1. 3

                                                          You’ll have to remember to bypass the test cache if you modify data in an external test file: go test -count=1

                                                          1. 5

                                                            Go’s test cache takes external files opened by your tests into account, via https://pkg.go.dev/internal/testlog
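
                                                            Quick illustration (hypothetical file name): a test like this gets re-run when testdata/golden.txt changes, because the open goes through that logging hook and becomes part of the cached result’s inputs:

                                                            ```go
                                                            package example_test

                                                            import (
                                                                "os"
                                                                "testing"
                                                            )

                                                            func TestGolden(t *testing.T) {
                                                                // os.ReadFile opens the file through the os package, which is what
                                                                // the test cache records; editing the file invalidates the cached result.
                                                                got, err := os.ReadFile("testdata/golden.txt")
                                                                if err != nil {
                                                                    t.Fatal(err)
                                                                }
                                                                if string(got) != "hello\n" {
                                                                    t.Fatalf("unexpected contents: %q", got)
                                                                }
                                                            }
                                                            ```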

                                                            1. 2

                                                              I did not know that. That’s an impressive detail. Thanks!

                                                          1. 7

                                                            I sometimes think about this in terms of “full stack” vs frontend / backend web dev. Obviously, truly “full stack” development is impossible unless you build your own hardware, OS, network stack, programming language, etc. etc. So “full stack” always at its best means “fuller stack” as compared to some alternative. I think the alternative it should be compared to is the alternative of saying “that’s not my problem.” “Oh, the button is too small to tap on mobile? That’s not my problem, I’m backend.” “Oh, the page has too slow of a time to first byte? That’s not my problem, I’m front end.” Etc. Full stack is just an attitude that if you care about the product, everything is your problem. :-)

                                                            1. 13

                                                              “Oh, the button is too small to tap on mobile? That’s not my problem, I’m backend.”

                                                                “I would love to fix the button, but I fixed similar things in the past, and it didn’t look right, and I know I don’t have an eye for design, so I’d rather not. Also there are considerations about the screen size/software versions/things I don’t even know I should care about, which I don’t know enough about. So I’ll leave it to the people who actually know about this stuff, and they can do a quick solid fix. But while looking at the button I noticed the page loaded a bit slower than I would like, and I saw some things I can improve there on the backend”

                                                              1. 5

                                                                I see this attitude sometimes called ownership. If you own it, everything is your problem. This attitude is absolutely valuable.

                                                                However, I think there’s a question of means even within the shared goal of ownership or full-stack. If you see a problem, you could:

                                                                  • File a bug report, and navigate the process to get it prioritized, fixed, deployed to everyone, pick up the change, deploy it to your context.
                                                                • Change the code and deploy it to your context.

                                                                Both are valid paths for an owner to follow. I think the second is often ignored or forgotten.

                                                                1. 3

                                                                  Obviously, truly “full stack” development is impossible unless you build your own hardware, OS, network stack, programming language,

                                                                  Eh, choosing to take something off-the-shelf is a perfectly acceptable decision to make. For things like hardware and OS, that’s most likely the best decision. But the higher you get to the top, the more likely that you need to go custom. But even then, you might, for example #include<stdlib.h> for small parts of your backend.

                                                                  I was hired for a job once quite some time ago, when I figured the best solution was to make a very basic website that embeds some licensed video player that ran off a licensed media server that streamed stuff from an off-the-shelf camera that I screwed to the wall with off-the-shelf hardware. The job was “stream video from this location to the website”, so that’s full-stack - they didn’t hire me to just make a website, but rather to figure out the whole problem - but it actually made most sense for me to just buy 99% of the requirements. I perhaps could have done more custom and kept more of the money (the license fees came out of the same budget as my pay), but this way the project was done in two days instead of….. well i don’t even know. I probably still would have bought an off-the-shelf camera at least!

                                                                  Same thing with the website - good chance the full stack developer at least has some influence down to the hardware, but you’ll almost certainly just buy something even if you did have enough budget and know-how to design something so you save the time.

                                                                  1. 3

                                                                    Web developers calling themselves ‘full stack’ is one of my pet peeves. My team works on microarchitecture, architecture, language design and implementation, and OS and distributed system design and implementation. We’re aware that there are layers below us in the stack (we don’t do any gate-level design or anything that involves caring about the physics of IC fabrication processes) and layers above us (we don’t do graphics things or any user-facing HCI things [though that was how I started down this rabbit hole]). Full-stack web developers sit at the top one or two layers in two different stacks and deny the existence of the 20 layers below them in each stack.

                                                                    1. 4

                                                                      Your “ire” is misplaced, because it’s an industry term, sad as it is. Even most web developers I know who actually have some experience outside web dev hate the term, because you can be a web developer and still do SRE work, work on languages, or on the kernel. Sadly it’s too late, the ship has sailed. This is the term for web developers now. Maybe we should coin a new one for people actually going down from JS to a backend, to the OS and hardware :)

                                                                      1. 2

                                                                        The term changed many years ago though. I remember job postings for full stack engineers which meant taking care of the infrastructure as will as the web service code. I was very annoyed when it changed, but you can’t argue with descriptivists…

                                                                        I think the term you want exists already: generalist engineer. Although everyone will have a slightly different view on that one too.

                                                                  1. 5

                                                                    Apparently you can buy IPv6 addresses, use them for the servers on your home network, and then if you change your ISP, continue to use the same IP addresses?

                                                                            You need to be an RIR (RIPE/ARIN/LACNIC/APNIC/AfriNIC) member for that. The membership fee alone runs into the thousands per year. Then you need to arrange routing with the hosting providers, and those that are ready to do that will also charge at least hundreds per month. No public cloud I’m aware of supports that at all, so you also need your own hardware in a datacenter where your transit provider is present.

                                                                            In other words, owning your IPv6 network is completely out of reach for individuals and small projects. I believe it shouldn’t be that way and that RIRs are basically rent-seeking organizations now that the resources they can still distribute (32-bit ASNs and IPv6 addresses) are anything but scarce, but I can’t see it changing anytime soon.

                                                                    1. 10

                                                                      Vultr will let you do BGP with them for (as far as I know) no additional cost above the price of your VPS: https://www.vultr.com/docs/configuring-bgp-on-vultr/

                                                                      In the RIPE area at least, you can obtain a provider-independent IPv6 assignment via an LIR - you don’t have to go directly to RIPE. A cheap option is Snapserv, who offer an IPv6 PI assignment for 99 EUR/year and an ASN for a one-off fee of 99 EUR. These can both be transferred to another LIR if, for example, Snapserv went out of business, or you wanted to switch LIR for some other reason. They also offer IPv6 PA assignments for less money, but the trade-off is that a PA assignment is tied to the LIR.

                                                                      You do need to be multi-homed to justify the PI/ASN assignments, so you’d need to find another upstream provider in addition to Vultr. Someone I know uses Vultr and a HE tunnel to justify it.

                                                                      1. 1

                                                                        Interesting, that’s sure an improvement. My company is a RIPE member so I haven’t been watching the PI situation closely, I’m glad to see it improve.

                                                                      2. 9

                                                                              In other words, owning your IPv6 network is completely out of reach for individuals and small projects. I believe it shouldn’t be that way and that RIRs are basically rent-seeking organizations now that the resources they can still distribute (32-bit ASNs and IPv6 addresses) are anything but scarce, but I can’t see it changing anytime soon.

                                                                        I suspect the problem is routing tables. It would be trivial to assign every person a /64 without making a dent in the address space but then you’d end up with every router on any Internet backbone needing a few billion entries in its routing table. That would completely kill current (and near-future) hardware. Not to mention the fact that if everyone moving between ISPs required a BGP update, the total amount of BGP traffic would overwhelm networks’ abilities to handle the update rate.

                                                                        You need some mechanism to ration the number of routable networks and money tends to be how we ration things.

                                                                        1. 2

                                                                          I doubt this will ever be a problem in practice. Even among those who host their own servers, the number of people who want to own their address space is always going to be small.

                                                                                I’m also not advocating for making addresses free of charge, only for making them available for less than the current exorbitant prices that RIRs charge for membership.

                                                                        2. 2

                                                                          TIL, that’s really interesting. I just remember many, many years ago that people were entertaining this, but also with Sixxs and HE tunnels that kinda worked for a while.

                                                                          1. 2

                                                                            Oh, but with tunnelbroker.net and similar, the provider owns the network, you just get a temporary permission to use it and can’t take it with you.

                                                                            1. 1

                                                                              Yes of course, but at least the way it works you could in theory use it longer despite switching ISPs. And I think my Sixxs account was nearly a decade old at the end. Some people might have moved cities three times in that time.

                                                                          2. 1

                                                                            I always wish that addresses were more equitably distributed. With IPv6 there’s no reason not to. And yet ☹

                                                                            1. 1

                                                                                    welp, for some reason my ISP provides every customer a /64, I don’t know what the reason for that is. There is no single person on the internet that needs a /64, and I’m certain no German household needs one. But yeah, waste tons of network space for no reason. IPv8 we’re coming..

                                                                              1. 5

                                                                                      It’s the minimum routing size, and if you stray from it a lot of the protocol breaks; making it smaller would be insane. And it’s not wasteful: you could give every atom on the planet a /64, the address space is REALLY BIG. IPv4 this is not. For it to show up in BGP it needs to be a /48; /32 is the minimum allocation, and there are as many of those as there are IPv4 addresses. It should be a /48 you’re given actually, not a /64 (or /60 or /56 in Comcast home/business cases)

                                                                                      Why do you believe IPv8 is needed because of /64 allocations? Can you back that up with some numbers?

                                                                                I think we’re good to be honest: https://www.samsclass.info/ipv6/exhaustion.htm

                                                                                1. 1

                                                                                        I haven’t done the math, but I’ll let the last APNIC report speak for itself in that regard (you’ll have to search, it’s long and there’s no way to mark a specific chapter).

                                                                                  However, before we go too far down this path it is also useful to bear in mind that the 128 bits of address space in IPv6 has become largely a myth. We sliced off 64 bits in the address span for no particularly good reason, as it turns out. We then sliced off a further 48 bits for, again, no particularly good reason. So, the vastness of the address space represented by 128 bits in IPv6 is in fact, not so vast.

                                                                                  And

                                                                                  Today’s IPv6 environment has some providers using a /60 end site allocation unit, many using a /56, and many others using a /48

                                                                                        So it’s not really a standard that breaks things, because then things would already break.

                                                                                  I just don’t see a reason why we’re throwing away massive address ranges, even my private server gets a /64, and that’s one server, not a household or such thing.

                                                                                  1. 2

                                                                                    The main reason your LAN must be a /64 is that the second half of each address can contain a MAC address (SLAAC) or a big random number (privacy extension).

                                                                                    1. 1

                                                                                          So it’s not really a standard that breaks things, because then things would already break.

                                                                                          For routing, not in general, but going below /64 does break things like SLAAC. The original guidance was a /48; it’s been relaxed somewhat since the original RFC, but can go down to a /64. I’m doing work or I’d pull up the original RFC. Going below /64 does break things, but not at the level being referenced.

                                                                                          I just don’t see a reason why we’re throwing away massive address ranges, even my private server gets a /64, and that’s one server, not a household or such thing.

                                                                                          You have to get out of the IPv4 conservation mindset. A /64 is massive, yes, but 64 bits of address space is the IPv4 address space squared (2^64, roughly 1.8 × 10^19 addresses), that is… a large amount of space. It also enables things like having ephemeral IP addresses that change every 8 hours. It’s better to think of a /64 as the minimum addressable/routable subnet, not as a single /32 like you would have in IPv4. And there are A LOT of them; we aren’t at risk of running out even if we get allocation crazy. And that’s not hyperbole: we could give every single device, human, animal, place, and thing a /64 and still not approach running out.

                                                                                      1. 1

                                                                                        Today’s IPv6 environment has some providers using a /60 end site allocation unit, many using a /56, and many others using a /48

                                                                                            Also, just realized that you might be confusing /60 or /56 as being smaller than a /64; it’s unintuitive, but this is the mask of the subnet, not the size. So smaller than a /64 would be a CIDR above 64, not below, aka a /96 would break in my example. It’s also why assigning “just” a /64 is a bit evil on the part of ISPs, and the allocation should be larger.

                                                                                    2. 1

                                                                                      IPv8 we’re coming

                                                                                      fun fact: it’s called 6 because it’s the 6th version (kinda). Not because of the number of bytes (which is 8 anyway). You’re rooting for IPv7!

                                                                                1. 2

                                                                                  dead-tree print editions and crude typesetting

                                                                                  The age of wasting lifeless corpses of trees for poor typesetting is, sadly, now. Old books have that problem occasionally, but in the last decade it became pervasive.

                                                                                  But Ada hasn’t quite taken off in the mainstream. Social factors - perhaps association with restrictive licenses and a proprietary compiler - have constrained adoption.

                                                                                  I wonder what proprietary compiler they are talking about. There are proprietary Ada compilers, but GNAT is free software.

                                                                                      Also, that book has the signature of Jean Ichbiah, the principal Ada language designer!

                                                                                  1. 1

                                                                                    Old books have that problem occasionally, but in the last decade it became pervasive.

                                                                                    [anecdotal, no way I can find where I read this years ago] There was a period, around 2000?, where books had to be > 1000 pages or they wouldn’t sell. You ended up with books with whole std libraries included, just to pad the page count.

                                                                                  1. 8

                                                                                          Really a shame they chose to support a new diagram language like Mermaid rather than one that has been around for 30 years, is widely used, and is extremely flexible. Especially since there is already a massive set of Graphviz files on Github. A simple search finds approximately 2,441,463 Graphviz files on Github (Source: https://github.com/search?q=extension%3Agv&type=Code&ref=advsearch&l=&l= and https://github.com/search?q=extension%3Adot&type=Code&ref=advsearch&l=&l=). A Github search for Mermaid turns up only 7,846 files (https://github.com/search?q=extension%3Ammd&type=Code&ref=advsearch&l=&l=), making it extremely unpopular by comparison. Why would Github ignore Graphviz and choose to support Mermaid instead?

                                                                                          Perhaps they looked at the number of Graphviz files in existence and realized that due to its popularity, they’d have to have a lot more infrastructure in place and the cost would be much greater? For example, if you figured rendering a diagram cost 1 cent (just as an example, I have no idea how much it costs), rendering their library of Mermaid files would only cost them about $78, whereas Graphviz would end up costing them about $24,414.63. Perhaps there are other technical constraints here I am not aware of? Maybe they’ve already got good infrastructure in place for JS libraries, and therefore Mermaid is an easier implementation.

                                                                                    1. 9

                                                                                      To be fair, mermaid seems to be more of a format to be used inline in a markdown file, so searching for dedicated .mmd files won’t be a good indicator of its popularity.

                                                                                      (I don’t know anything much about either mermaid or viz, and don’t care whichever github supports)

                                                                                      1. 4

                                                                                        Graphviz is more akin to SVG than Mermaid. It’s an image format, not a text description language.

                                                                                        1. 4

                                                                                          I assume by graphciz they mean the dot format, which is meant for hand authoring. Though it seems like a different use case than mermaid.

                                                                                        2. 2

                                                                                          I mean, no reason we can’t have both in the future?

                                                                                          I say this as someone who only uses Graphviz, but I don’t think “older language/more files of it exist” is a fair comparison of which is more popular today. And I certainly wouldn’t praise Graphviz for being extremely flexible – AFAIK you resolve positioning errors (which are common) by cleverly adding invisible edges, or partitioning cleverly into subgraphs, or even by specifying coordinates for every single node, which is hell.

                                                                                        1. 5

                                                                                          (meta) Thanks for adding the short description of the project in the title, makes the list of articles here a bit easier to scroll through.

                                                                                          1. 2

                                                                                            If it’s stupid, but it works … it’s not stupid.

                                                                                            Well, maybe sometimes it’s a little bit dumb. I’ll take dumb but solves a problem over not solving the problem though.

                                                                                            1. 1

                                                                                              It does expose your TOTP code to the network.

                                                                                              1. 1

                                                                                                It is a fun hack, nothing anyone should use.

                                                                                                1. 2

                                                                                                  i feel like maybe we should discuss that…

                                                                                                        is it exposing your TOTP code to the network? isn’t the whole point of TOTPs that anyone knowing the TOTP would not expose the underlying secret?

                                                                                                  is it even possible to guess a TOTP given knowledge of n previous TOTPs? i do know it’s fairly easy to brute force a TOTP when there is no rate limiting in place, and i think this would definitely be one of those cases

                                                                                                  1. 2

                                                                                                    Since it’s time-based, and nothing that I see (from my quick skim) is keeping track of which codes have been used, a network observer who sees what IP addresses you’re talking to should be able to bypass your TOTP protection as long as they connect to the same IP address within that 30 second window or whatever.

                                                                                                    1. 2

                                                                                                      I checked a few TOTP implementations out there and not all of them invalidate codes after use. Github for example happily accepts the same code multiple times within the same time period.

                                                                                                      I agree that blacklisting codes after use is good practice, but it’s just one more safety measure. Only checking the TOTP without blacklisting is not the same as not checking a TOTP
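
                                                                                                            For concreteness, a toy RFC 6238-style verifier that blacklists the current time step after a successful check (my own sketch, not this project’s code):

                                                                                                            ```go
                                                                                                            package main

                                                                                                            import (
                                                                                                                "crypto/hmac"
                                                                                                                "crypto/sha1"
                                                                                                                "encoding/binary"
                                                                                                                "fmt"
                                                                                                                "sync"
                                                                                                                "time"
                                                                                                            )

                                                                                                            // totp computes a 6-digit RFC 6238 code for a shared secret and time step.
                                                                                                            func totp(secret []byte, step uint64) uint32 {
                                                                                                                var msg [8]byte
                                                                                                                binary.BigEndian.PutUint64(msg[:], step)
                                                                                                                mac := hmac.New(sha1.New, secret)
                                                                                                                mac.Write(msg[:])
                                                                                                                sum := mac.Sum(nil)
                                                                                                                off := sum[len(sum)-1] & 0x0f
                                                                                                                code := binary.BigEndian.Uint32(sum[off:off+4]) & 0x7fffffff
                                                                                                                return code % 1000000
                                                                                                            }

                                                                                                            var (
                                                                                                                mu   sync.Mutex
                                                                                                                used = map[uint64]bool{} // time steps already consumed, to reject replays
                                                                                                            )

                                                                                                            // verify accepts a code at most once per 30-second step.
                                                                                                            func verify(secret []byte, code uint32) bool {
                                                                                                                step := uint64(time.Now().Unix()) / 30
                                                                                                                mu.Lock()
                                                                                                                defer mu.Unlock()
                                                                                                                if used[step] || totp(secret, step) != code {
                                                                                                                    return false
                                                                                                                }
                                                                                                                used[step] = true
                                                                                                                return true
                                                                                                            }

                                                                                                            func main() {
                                                                                                                secret := []byte("12345678901234567890") // RFC test vector secret
                                                                                                                code := totp(secret, uint64(time.Now().Unix())/30)
                                                                                                                fmt.Println(verify(secret, code)) // true
                                                                                                                fmt.Println(verify(secret, code)) // false: same step already used
                                                                                                            }
                                                                                                            ```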

                                                                                                      1. 2

                                                                                                        Github for example happily accepts the same code multiple times within the same time period.

                                                                                                        That’s against the specs and a pretty serious bug. It’s called “one time” for a reason.

                                                                                                      2. 1

                                                                                                        If they can guess the IP then they have already broken your TOTP anyway…

                                                                                                        1. 4

                                                                                                          Somebody who can watch your IP traffic (watch, not decrypt!) does not need to guess the IP.

                                                                                                          1. 3

                                                                                                            sure, but they still would need the SSH key to access the machine.

                                                                                                            1. 1

                                                                                                              TOTP is supposed to be the second factor that protects you when someone has stolen your first factor. If your security is only as good as the first factor, then you don’t have 2FA.

                                                                                                            2. 2

                                                                                                              Oh, sure, so they have a handful of seconds to try cracking your password before it rotates.

                                                                                                              1. 1

                                                                                                                Absolutely; that’s why a solution like fail2ban is probably the better idea and more comfortable to use.

                                                                                                                1. 1

                                                                                                                  Yes, so at least it would provide that much protection – reducing the window of exposure.

                                                                                                      3. 1

                                                                                                        How? All the ip addresses exist, it just changes the firewall rules. You would have to bruteforce the code in the time to find it, no?

                                                                                                        1. 2

                                                                                                              there’s no TLS involved for the TOTP “code”; it’s visible in the clear as the IP address you connect to