1. 24

    Horrible, horrible article.

    However, here’s something to think about: while privacy preserving tech is commendable, does it have to come at the cost of user freedoms? Hint: it doesn’t, and it shouldn’t.

    What user freedoms are being trampled? Author does not seem to specify any.

    I don’t mean to sound conspiratorial, but what’s to say that the server in production hasn’t been backdoored? In fact, the Signal server code hasn’t even been updated since April 2020. You’re telling me it’s undergone no changes?

    Serious accusation. Completely unfounded one. Two points are made. First, that backdooring the server would achieve something. Hint: it would not. E2E encryption exists exactly for that reason. Contact list crosschecking is done inside an SGX enclave, and clients validate that the enclave is running a particular version of the code. What would backdooring the server achieve? Author is clueless. Second accusation: “You are telling me it’s undergone no changes?” For half a year? On a platform where almost everything happens client-side? The server just shuffles ciphertext around. Nothing to see here.

    1. 11

      What user freedoms are being trampled? Author does not seem to specify any.

      Two come to mind: the freedom to distribute the software, e.g. in the F-Droid store, even if this means that not everyone has the newest version; and the freedom to use my own server, instead of trusting someone else’s, at the conscious expense of my security.

      1. 5

        You can distribute the software in the F-Droid store. You just can’t use their trademark (the name Signal) or their servers while doing it.

        You can also run your own server with your own build of the app in the F-Droid store.

        Presumably what you want is to use the network they’ve built with your own client. I agree that would be nice-to-have, but AFAIK not even Stallman wants OSS licensing to require it.

        1. 5

          I can distribute, but I can’t actually cause people to use it. Like spam filters: I can send my email all right; it’s getting it received that’s more problematic. I can run my own server, but it won’t talk to the official one. It has to be a separate network that, understandably, nobody will use.

          So yes, using their network with different clients would be very nice.

      2. 14

        Ad hominem much? Seriously, it hurts any argument you’re trying to make.

        The problem they allude to is that we have to trust that moxie is running the server code that he claims to run. It does seem suspicious that the server code has seen 0 changes in almost 1 year.

        People like to point out that signal has e2ee, and that the server doesn’t have to be trusted, but they (conveniently?) forget that signal collects a fair amount of information from users (phone numbers, contacts, other meta data), and has the potential to collect a lot more on the server side.

        1. 6

          Contact list crosschecking is done inside an SGX enclave, and clients validate that the enclave is running a particular version of the code.

          Could you expand more on that? If I’m sending my contact list to a Signal server for crosschecking, how can I trust that server to keep the list private?

          1. 5

            Signal’s own description of the problem and what they are doing with it: https://signal.org/blog/private-contact-discovery/

            SGX page: https://en.wikipedia.org/wiki/Software_Guard_Extensions

            Long story short: it’s guaranteed by Intel. It’s a piece of the processor that the user can load with code, then lock by burning the key. Metaphorically, since there never was a key. Next, an external application can talk to an HTTPS server running inside the enclave and validate the enclave’s claims about the code it runs, with help from Intel’s attestation service.

            This tech has its limitations. It’s still buggy, with exploits published every year, but it will mature some day. It also has limitations in its threat model: it does not cover de-capping or RAM page replay attacks.

            1. 15

              Signal’s own description of the problem and what they are doing with it:

              The problem still exists: you have to trust that they are doing what they say they do, and since it’s 100% centralized you have no way of knowing for certain that the server code they are running is what they say they run. And you can’t run it yourself, since moxie is 110% hostile towards any sort of decentralization of his baby.

              1. 6

                you have no way of knowing for certain that the server code they are running is what they say they run.

                The server code is able to send a verification code derived from Intel’s private key, the current time, and the hash of the built server code.

                In order to do that, they’ve either A) somehow gotten hold of Intel’s private SGX key, B) successfully used an SGX bypass, or C) run code with a hash matching the one they’ve published, which comes from a reproducible build.

                I think that list is roughly in order of least to most likely.
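                To make the trust chain concrete, here is a toy sketch. This is not Signal’s actual protocol, and nothing like the real SGX quote format: an HMAC with a made-up key stands in for Intel’s attestation signature, and the “measurement” is just a SHA-256 of the code.

```python
import hashlib
import hmac

# Hypothetical stand-ins: a shared secret plays the role of Intel's
# signing key, and the server code is a byte string we can hash.
INTEL_KEY = b"stand-in-for-intel-signing-key"
published_build = b"server code built reproducibly from the public repo"

def measurement(code: bytes) -> bytes:
    """The enclave 'measurement' is a hash of the loaded code."""
    return hashlib.sha256(code).digest()

def sign_quote(measure: bytes) -> bytes:
    """What the attestation service would do: sign the measurement."""
    return hmac.new(INTEL_KEY, measure, hashlib.sha256).digest()

def client_verifies(measure: bytes, signature: bytes) -> bool:
    """Client side: check the signature, then compare the measurement
    against the client's own hash of the published, reproducible build."""
    sig_ok = hmac.compare_digest(sign_quote(measure), signature)
    build_ok = hmac.compare_digest(measure, measurement(published_build))
    return sig_ok and build_ok

# An honest server attesting the published code is accepted.
m = measurement(published_build)
assert client_verifies(m, sign_quote(m))

# A backdoored server produces a different measurement and is rejected,
# even if the attestation signature over that measurement is valid.
bad = measurement(b"server code with a backdoor")
assert not client_verifies(bad, sign_quote(bad))
```

                The point of option C being the most likely: under a reproducible build, anyone can recompute `measurement(published_build)` themselves, so the operator gains nothing by lying.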

                1. 5

                  I’d say an SGX bypass is more likely than either of the others. Intel’s opsec regarding their keys has been flawless so far, and hash collisions are hard (I think SGX uses SHA-256, which is still unbroken in the general case?), but SGX and every other bolt-on “security” technology that Intel has implemented since protected mode has been an utter disaster.

                2. 5

                  you have to trust that they are doing what they say they do

                  You’re trusting Intel OR Signal. That’s the whole point of SGX. A successful attack means they have to conspire together.

          1. 6

            Why is E2E encrypted XMPP not mentioned?

            1. 4

              Because the author wanted to moan about something that is actually currently pretty good, not to provide a conclusive and valid assessment. Just yet another opinion piece, that’s all.

              1. 2

                is Conversations E2E encrypted?

                1. 5

                  Conversations supports OMEMO (as do Chat Secure, Monal, Gajim, Converse.js, and many others), so when set up, yes.

                  1.  

                    tangential but do you know if Conversations is interoperable with AstraChat? Or any other iOS client?

                    1.  

                      Yes. On iOS I would recommend Siskin. When used in conjunction with a Siskin server it reportedly supports the latest iOS push requirements.

                      1.  

                        I don’t know AstraChat. I recommend Monal (although I don’t use iOS myself, but there are Monal-using people in my peer group). They are interoperable incl. OMEMO.

                1. 2

                  Yes, that graph is showing in gigabytes. We’re so lucky that bandwidth is free on Hetzner.

                  But it says 300 Mil on the left. And “bytes” on top. So I guess Mil stands for million, and 300 million bytes is 300 decimal megabytes, not gigabytes, unless my math is all wrong. Is my math all wrong?

                  1. 1

                    You’re correct that 300 million bytes is 300 MB (or around 286 MiB).
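                    The arithmetic, for anyone following along:

```python
raw = 300_000_000     # 300 million bytes, as read off the graph
mb = raw / 10**6      # decimal megabytes (MB)
mib = raw / 2**20     # binary mebibytes (MiB)
print(mb)             # 300.0
print(round(mib, 1))  # 286.1
```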

                    1. 1

                      My bad. I was reading the cloudflare graph when I wrote that. I think I uploaded the wrong image to Twitter. Oops. I’ll fix it.

                      1. 1

                        I think, nevertheless, your scale of “this could get expensive” would only be right if you were on a very expensive provider like Google Cloud. Or maybe if this were 15 years ago. Hetzner considers 20TB/mo completely normal, and you are nowhere near that! A gigabyte in 2021 is a small thing.

                        Of course, it’s fine to plan far ahead and optimize things, but maybe this will give people the wrong idea that it’s absolutely necessary to put Cloudflare or a caching CDN in front of their website, or to cut down RSS feeds, when even at your level of popularity it isn’t really needed.

                  1. 4

                    There was at least one effort to get this fixed, which lives in the gtk fork here: https://github.com/Dudemanguy/gtk

                    But apparently the code is not good enough to be merged (because it’s using potentially slow APIs?). There are also a few more links to patches in the GitLab thread: https://gitlab.gnome.org/GNOME/gtk/-/issues/233

                    This comment is also interesting: https://gitlab.gnome.org/GNOME/gtk/-/issues/233#note_106555

                    GtkFileChooserNative is also mentioned, which should get you a native file open dialog on macOS and Windows (not sure about a Qt one on KDE; I think there’s an env var that can be set).

                    There was also a branch with work on this on the GNOME gitlab but it seems stalled: https://gitlab.gnome.org/federico/gtk/-/tree/filechooser-icon-view-3-22

                    1. 18

                      Our sysadmin @alynpost is resigning as moderator and sysadmin to focus on other projects. Prgmr will no longer be donating hosting. For security’s sake, I’ve reset all tokens and you’ll have to log in again - sorry for the hassle.

                      Is there any risk that Lobste.rs could go offline in the future due to running costs?

                      1. 38

                        No. The new hosting bill is $75/month, which I don’t mind at all.

                        1. 14

                          Isn’t that very overpriced? 40€/month at Hetzner gets you a dedicated machine with a Ryzen 5 3600, 64GB of RAM, and 512GB of SSD in RAID1 (no affiliation or anything, it’s just the provider I know).

                          1. 8

                            Hetzner also uses only electricity from sustainable sources, while with DigitalOcean it depends on the location.

                            1. 3

                              Hetzner is the goat! I use them for my VPS and it’s the best deal I’ve seen yet for cloud services. The fact that they’re environmentally friendly as well makes it that much better!

                            2. 5

                              Does Hetzner have managed MySQL? Seems like it’s a big hassle removed there.

                              1. 6

                                You can rent a managed server with Hetzner and they have a panel to install and manage MySQL on it, but I don’t think it’s comparable to DigitalOcean’s managed offerings.

                                1. 1

                                  Would be really interesting to hear what they mean by “managed”. Because based on the prices, I’d say prgmr.com is also not cheap compared to the hardware you get.

                            3. 5

                              Would you consider accepting donations for hosting?

                              1. 35

                                I appreciate the offers but prefer not to, no. Still looking for someone to print-on-demand stickers, though.

                                1. 12

                                  I’ll buy $75 worth of stickers every month to show my appreciation.

                                  1. 5

                                    Minor dissenting opinion:

                                    I support a lot of people on Patreon and expect nothing in return. Chipping in $5/month to Lobste.rs because I like the community and the stuff that gets shared here isn’t a tall order, and won’t come with any entitlement. (A lot of the people I support are artists and content creators that are usually in high demand from the rest of the community.)

                                    I can’t speak for the rest of the community, but I don’t think I’m particularly saintly in this regard. :P

                                    If the expenses grow, please don’t rule this option out entirely.

                                    1. 3

                                      It seems to me that the expectation comes from the design of sites which ask for monthly donations. Thinking out loud here, but a donations system that really was just a donations system (something more similar to Ko-fi, without names attached) might help highlight that by donating one is helping out, rather than buying a new account tier.

                                      I personally also donate on Patreon and expect nothing.

                                    2. 4

                                      Thank you! That is a great attitude.

                                      I have one concern, though. What happens when lobste.rs keeps growing and the bill increases? What is the maximum you would spend on the site? Wouldn’t it be better to think about that sooner rather than later?

                                      1. 22

                                        By design, Lobsters grows pretty slowly. I’m thinking of design decisions like invites vs open signups, and a narrow focus rather than a subreddit for everything. Growth is not a goal like it would be in a startup, and I’d pause invites if we saw some kind of huge spike.

                                        Right off we should have plenty of spare capacity. I aimed to overprovision this new server and we’ll see if I eyeballed that correctly as we reach peak traffic during the US work week. If the hosting bill goes to about 10x current I’ll start reconsidering donations. But that may never happen! Hosting costs slowly decline as power gets cheaper, data centers get built, and fiber gets laid. Lobsters is cheap to run because it’s a CRUD SQL app pushing around text a few kilobytes at a time and our size increases slowly. I hope not to jinx it, but it seems likely that our hosting bill is flat or declines over the next decade.

                                      2. 2

                                        Not print-on-demand afaik, but Sticker Mule has been great to work with in the past for me.

                                        1. 1

                                          Redbubble do print on demand for stickers, iirc.

                                          1. 1

                                            I’m definitely in the market for some stickers if you find a service or have any left over from the first batch!

                                        2. 5

                                          Does hosting Lobsters require lots of CPU or RAM?

                                          1. 5

                                            It’s Rails. So both :)

                                            1. -1

                                              #rust

                                      1. 1

                                        Once you have a reasonable volume of pull requests, the branches of those pull requests get outdated quickly, meaning they have to be rebased before merging, or every change will have an extra merge commit in the history.

                                        And why are merge commits bad again?

                                        1. 9

                                          For feature branches a merge commit that updates against the default branch is just noise in the commit history. This is especially bad if the branch needed to be updated multiple times. In my opinion it is always better to rebase against the default branch to keep the commit history clean. Rebasing before a merge is often good practice anyways, e.g. to squash commits or rewrite commit messages.

                                          1. 2

                                            Indeed, I consider a feature branch being merged into a main branch without a merge commit to be an antipattern that makes the history less useful.

                                            1. 5

                                              This is not about the merge commit of the feature into main, it’s about merge commits in the feature branch when updating against main.

                                              1. 1

                                                Oh, I see. Yeah, I usually treat feature branches as roughly a patch series so keep merges out of that particular kind of flow, personally.

                                              2. 3

                                                The history should capture as much of the human process as possible without also encoding the details of how git creates that history.

                                                Thus, rebases and not merge commits.

                                                1. 1

                                                  If you really want to keep track of merges, you can use git rebase and then git merge --no-ff.

                                                  If a single feature may be developed and integrated progressively, having merge commits will add a lot of useless commits to the history. It’s an aesthetic choice, that’s all.
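                                                  The rebase-then-`git merge --no-ff` flow can be tried end to end in a throwaway repo (all file and branch names below are made up for the demo):

```shell
# Demo: rebase the feature branch so its history stays linear,
# then record the integration with a single deliberate merge commit.
set -e
cd "$(mktemp -d)"
git init -q
git checkout -qb main
git config user.email demo@example.com
git config user.name demo
echo base > base.txt; git add base.txt; git commit -qm "main: base"
git checkout -qb feature
echo work > work.txt; git add work.txt; git commit -qm "feature: work"
git checkout -q main
echo more > more.txt; git add more.txt; git commit -qm "main: moved on"
git checkout -q feature
git rebase -q main                # no "update against main" merge commits
git checkout -q main
git merge -q --no-ff feature -m "merge feature"
git log --merges --oneline        # exactly one merge: the integration itself
```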

                                              1. 2

                                                 Just a strange feeling: we can’t obtain even the simplest things, like the time or the current weather (in degrees), from our advanced technologies. So what good do they serve?

                                                1. 9

                                                   It’s not a lie though; you are obtaining the time. It’s just rounded to the nearest second instead of rounded down, which is pretty intuitive.

                                                  1. 3

                                                    I hadn’t noticed until it was pointed out and it’s great. It always feels ‘wrong’ when you start a timer at (to use the example) 5s and the first thing you see is 4.something. I can imagine there were arguments about implementing this though.

                                                    1. 5

                                                      There could be an argument in favour of rounding up too. Starts with a full second 5, then the very moment you see 0, it’s over. Very intuitive.

                                                      1. 9

                                                        Yeah, I’m pretty sure this is how most people speak a countdown out loud. “Five…four…three…two…one…” and then “zero” or “time” or “go” means it’s over. You wouldn’t say “zero” if there was still any time left.

                                                        1. 1

                                                           This makes the most sense to me. If they aren’t showing milliseconds, it “ending” on zero seems far more reasonable, e.g. https://i.imgur.com/Y1AlKks.gif

                                                        2. 2

                                                          I’ve always used this rounding up approach. The article touches on it but dismisses it as not useful when using it for hours and minutes. Of course, in a rounding up approach, you only want to ever round up the smallest unit you are displaying and continue to round down the others.

                                                          There is some philosophical argument about showing the rounded up time, however. If the timer shows 1s you might be inclined to believe there is at least a whole second left. With the rounding down approach, this holds true. For the rounding to nearest and rounding up approaches, however, the timer shows something different. Showing a value of 3s in those cases only indicates that you have no more than 3s left before the countdown expires.

                                                           My intuitive understanding of what a timer means is more in line with the presentation given by rounding down, but it is definitely strange to think that hitting 0 is not the end. I suppose that’s why I prefer the rounding up approach in the end, even if I find it mildly misleading.
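                                                           The three strategies in this subthread, sketched (function names are mine; floor(x + 0.5) is used for nearest so that .5 always rounds up, since Python’s round() rounds half to even):

```python
import math

def display_round_down(remaining: float) -> int:
    # 4.7s left -> shows 4; can show 0 while time actually remains
    return math.floor(remaining)

def display_round_nearest(remaining: float) -> int:
    # 4.7s left -> shows 5; a freshly started 5s timer begins at "5"
    return math.floor(remaining + 0.5)

def display_round_up(remaining: float) -> int:
    # 0.3s left -> shows 1; the display hits 0 exactly when time is up
    return math.ceil(remaining)

for remaining in (5.0, 4.7, 0.3):
    print(remaining,
          display_round_down(remaining),
          display_round_nearest(remaining),
          display_round_up(remaining))
```

                                                           Only the smallest displayed unit is rounded up in the last approach, matching the comment above about hours and minutes.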

                                                    2. 4

                                                      I can get the current time and weather from my technology fine. What are you talking about?

                                                    1. 21

                                                      The job of the OS is to schedule which software gets what resources at any given time. Kubernetes does the same thing. You have resources on each one of your nodes, and Kubernetes assigns those resources to the software running on your cluster.

                                                      ehh, who’s the “you” here? This is starting from the assumption that I have a lot of nodes, which is only true in the context of me running infrastructure for a corporation; the “you” here is a corporation.

                                                      The first difference is the way you define what software to run. In a traditional OS, you have an init system (like systemd) that starts your software.

                                                      again: define traditional. Whose tradition? In what context? In a traditional OS, software starts when you start using it, and then it stops when you stop using it. The idea that everything should be an always-running, fully-managed service is something that’s only traditional in the context of SaaS.

                                                      The thing that makes me feel cold about all this stuff is that we’re getting further and further away from building software that is designed for normal people to run on their own machines. So many people that run Kubernetes argue that it doesn’t even make sense unless you have people whose job it is to run Kubernetes itself. So it’s taken for granted that people are writing software that they can’t even run themselves. I dunno. All this stuff doesn’t make me excited, it makes me feel like a pawn.

                                                      1. 12

                                                        You’re right, you probably wouldn’t use Kubernetes as an individual.

                                                        I’ll take the bait a little bit though and point out that groups of people are not always corporations. For example, we run Kubernetes at the Open Computing Facility at our university. Humans need each other, and depending on other people doesn’t make you a pawn.

                                                        1. 8

                                                           Given the millions and millions spent on marketing, growth hacking, and advertising for the k8s ecosystem, I can say with some certainty that we are all pawn-shaped.

                                                          1. 5

                                                            totally fair criticism. I think “corporation” in my comment could readily be substituted with “enterprise”, “institution”, “organization” or “collective”. “organization” is probably the most neutral term.

                                                            Humans need each other, and depending on other people doesn’t make you a pawn.

                                                            so I think this is where my interpretation is less charitable, and we could even look at my original comment as being vague and not explicitly acknowledging its own specific frame of reference:

                                                            In a traditional OS, software starts when you start using it, and then it stops when you stop using it.

                                                             again, whose tradition, and in what context? Here I’m speaking of my tradition as a personal computer user, and the context is at home, for personal use. When thinking about Kubernetes (or container orchestration generally) there’s another context of historical importance: time-sharing. Now, I don’t have qualms with time-sharing, because time-sharing was a necessity at the time. The time-sharing computing environments of the sixties and seventies existed because the ownership of a home computer was unreasonably expensive: time-sharing existed to grant wider access to computing. Neat!

                                                             Circling back to your comment about dependency not inherently making someone a pawn, I ask: who is dependent on whom, for what, and why? We might say of time-sharing at a university: a student is dependent on the university for access to computing because computers are too big and expensive for the student to own. Makes sense! The dependent relationship is, in a sense, axiomatic of the technology, and may even describe your usage of Kubernetes. If anything, the university wishes the student weren’t dependent on them for this, because it’s a burden to run.

                                                            But generally, Kubernetes is a different beast, and the reason there’s so much discussion of Kubernetes here and elsewhere in the tech industry is that Kubernetes is lucrative. Sure, it’s neat and interesting technology, but so is graphics or embedded programming or cybernetics, etc, etc, etc. There are lots of neat and interesting topics in programming that are very rarely discussed here and elsewhere in programming communities.

                                                             Although computers are getting faster, cheaper, and smaller, the computers owned by the majority of people are performing less and less local computation. Although advances in hardware should be making individuals more independent, the SaaS landscape that begat Kubernetes has only made people less independent. Instead of running computation locally, corporations want to run the computation for you and charge you some form of rent. This landscape of rentier computation that is dominating our industry has created dependent relationships that are not inherently necessary, but are instead instruments of profit-seeking and control. This industry-wide turn towards rentier computation is the root of my angst, and I would say is actually the point of Kubernetes.

                                                          2. 10

                                                            we’re getting further and further away from building software that is designed for normal people to run on their own machines

                                                            This resonates with me a lot. At work, we have some projects that are very easy to run locally and we have some that are much harder. Nearly all the projects that can be run locally get their features implemented more quickly and more reliably. Being able to run locally cuts way down on the feedback loop.

                                                            1. 2

                                                              I’m really looking forward to the built-in embed stuff in Go 1.16 for this reason. Yeah, there’s third-party tools that do it, but having it standardized will be great. I write Go servers and one thing I’ve done is implement every storage layer twice: once with a database, and once in process memory. The utility of this has been incredible, because I can compile a server into a single .exe file that I can literally PM to a colleague on Slack that they can just run and they have a working dev server with no setup at all. You can also do this with sqlite or other embedded databases if you need local persistence; I’ve done that in the past but I don’t do it in my current gig.
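                                                              The commenter’s setup is in Go, but the same pattern can be sketched in Python: one storage interface, where the test/debug implementation is little more than a dict behind a lock (Book and the method names are invented for the sketch):

```python
import threading

# One interface, two implementations: a real database-backed store in
# production, and this dict-plus-lock store for tests and debug builds.
class InMemoryBookStore:
    """Test/debug implementation: a map per model, a mutex around it all."""

    def __init__(self):
        self._lock = threading.Lock()
        self._books = {}  # book_id -> book dict

    def put(self, book_id, book):
        with self._lock:
            # store a copy so callers can't mutate our state later
            self._books[book_id] = dict(book)

    def get(self, book_id):
        with self._lock:
            book = self._books.get(book_id)
            return dict(book) if book else None

store = InMemoryBookStore()
store.put(1, {"title": "Example"})
assert store.get(1) == {"title": "Example"}
assert store.get(2) is None
```

                                                              A production implementation would expose the same `put`/`get` surface backed by SQL, so the server binary can be compiled against either one.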

                                                              1. 2

                                                                I write Go servers and one thing I’ve done is implement every storage layer twice: once with a database, and once in process memory.

                                                                In my experience the overhead of implementing the logic twice does not pay off, since it is very easy to spin up a MySQL or Postgres database, e.g. using Docker. Of course this comes with the disadvantage of having to provide another dependency, but at least the service then runs in an environment similar to production. Usually spinning up a test database is already documented/automated for testing.

                                                                1. 1

                                                                  That was my first thought, but upon reflection - the test implementation is really just an array of structs, and adds very little overhead at all.

                                                                  1. 1

                                                                    yeah, very often the implementation is just a mess of map[string]*Book, where there’s one Book for every model type and one map for every index, and then you slap a mutex around the whole thing and call it a day. It falls apart when the data is highly relational. I use the in-mem implementation for unit tests and for making debug binaries. I send debug binaries to non-developer staff. Asking them to install Docker alone would be a non-starter.

                                                            2. 4

                                                              Suppose that you, an individual, own two machines. You would like a process to execute on either of those machines, but you don’t want to have to manage the detail of which machine is actually performing the computation. At this point, you will need to build something very roughly Kubernetes-shaped.

                                                              The difficulty isn’t in having a “lot” of nodes, or in running things “on their own machines”; the difficulty is purely in having more than one machine.

                                                              1. 16

                                                                you don’t want to have to manage the detail of which machine is actually performing the computation

                                                                …is not a problem which people with two machines have. They pick one, the other, or both, and skip on all the complexity a system that chooses for you would entail.

                                                                1. 3

                                                                   I struggle with this. My reflex is to want that dynamic host management, but the fact of the matter is my home boxes have had fewer pages than my ISP in the past five years. Plain old sysadmin work is more than enough in all of my use cases. Docker is still kinda useful to not have to deal with environment setup and versions, but a lot of difficulty is sidestepped by just not buying into the complexity.

                                                                  I wonder if this also holds for “smaller” professional projects.

                                                                  1. 1

                                                                    Unfortunately, I think that your approach is reductive. I personally have had situations where I don’t particularly care which of two identical machines performs a workload; one example is when using CD/DVD burners to produce several discs. A common consumer-focused example is having a dual-GPU machine where the two GPUs are configured as one single logical unit; the consumer doesn’t care which GPU handles which frame. Our operating systems must perform similar logic to load-balance processes in SMP configurations.

                                                                    I think that you might want to consider the difficulty of being Buridan’s ass; this paradox constantly complicates my daily life.

                                                                    1. 3

                                                                      When I am faced with a situation in which I don’t particularly care which of two identical machines performs a workload, such as your CD burner example, I pick whichever, or both. Flip a coin, and you get out of the Buridan’s ass paradox, if you will. Surely the computer can’t do better than that, if it’s truly the Buridan’s ass paradox and both choices are equally good. Dual-GPU systems and multicore CPUs are nice in that they don’t really require changing anything from the user’s perspective. Moving from the good old sysadmin way to kubernetes is very much not like that.

                                                                      I’m sure there are very valid use-cases for kubernetes, but not having to flip a coin to decide which of my two identical and equally in-reach computers will burn 20 CDs tonight is surely not worth the tradeoff.

                                                                      1. 3

                                                                        To wring one last insight from this line of thought, it’s interesting to note that in the dual-GPU case, a CPU-bound driver chooses which GPU gets which drawing command, based on which GPU is closer to memory which is also driver-managed; while in the typical SMP CPU configuration, one of the CPUs is the zeroth CPU and has the responsibility of booting its siblings. Either way, there’s a delegation of the responsibility of the coin flip. It’s interesting that, despite being set up to manage the consequences of the coin flip, the actual random decision of how to break symmetry and choose a first worker is not part of the system.

                                                                        And yet, at the same time, both GPU drivers and SMP kernels are relatively large. Even when they do not contain inner compilers and runtimes, they are inherently translating work requests from arbitrary and often untrusted processes into managed low-level actions, and in that translation, they often end up implementing the same approach that Kubernetes takes (and which I said upthread): Kubernetes manages objects which represent running processes. An SMP kernel manages UNIX-style process objects, but in order to support transparent migration between cores, it also has objects for physical memory banks, virtual memory pages, and IPC handles. A GPU driver manages renderbuffers, texturebuffers, and vertexbuffers; but in order to support transparent migration between CPU and GPU memory, it also has objects for GPU programs (shaders), for invoking GPU programs, for fencing GPU memory, and that’s not even getting into hotplugging!

                                                                        My takeaway here is that there is a minimum level of complexity involved in writing a scheduler which can transparently migrate some of its actions, and that that complexity may well require millions of lines of code in today’s languages.

                                                                  2. 5

                                                                    I mean, that’s not really an abstract thought-experiment, I do have two machines: my computer and my phone. I’d wager that nearly everyone here could say the same. In reality I have more like seven machines: a desktop, a laptop, a phone, two Raspberry Pi’s, a Switch, and a PS4. Each one of these is a computer far more powerful than the one that took the Apollo astronauts to the moon. The situation you’re talking about has quite literally never been a thing I’ve worried about. The only coordination problem I actually have between these machines is how I manage my ever-growing collection of pictures of my dog.

                                                                  3. 5

                                                                    My feelings exactly. Kubernetes is for huge groups. Really huge. If you only have one hundred or so staff, I am not convinced you get much benefit.

                                                                    If you’re happy in a large company, go wild. Enjoy the Kubernetes. It isn’t for me - I’m unsure whether I will ever join a group of more than ten or so again, but it won’t be soon.

                                                                  1. 1

                                                                    I’m still rocking a 2014 Toshiba Tecra Z40-A-11L, with 8GB of RAM and an SSD. No plans to upgrade. I’ve had to replace the keyboard once, and I almost had to do it a second time after a key popped off (but I managed to bend the metal piece back into place). It’s also missing a few screws from the bottom plate - they tend to fall off. So the build quality is not amazing… but it has survived terrifying falls when I tripped over the charger cable - more than I deserve. I’m still in awe when I remember picking it up from the floor and seeing nothing had broken.

                                                                    And in terms of performance, it does everything I need. Even if I lock the CPU frequency to 700MHz to save battery, it is fast enough for the great majority of tasks. The 1600x900 screen is the main downside.

                                                                    1. 28

                                                                      Love the tone of the post. Thought it was really funny.

                                                                      As for the actual service itself, the clear disdain for Silicon Valley startup bs makes me want to use it for a small website that I will almost certainly never finish.

                                                                      Then again, I’m 51% sure I could build a very similar service for cheaper. Then again, $5/month is pretty cheap. Then again, this is a brand new SaaS, and may not be around in a year. Then again, I have no users and no website and no one relying on me.

                                                                      Hmmmm.

                                                                      1. 57

                                                                        Oh don’t worry, it will definitely be around in a year. We are committed to giving our customers the best experience possible. I guarantee that IMGZ will be up and running for a long, long time, or until someone gives me two bucks and I “amazing journey” the entire customer base.

                                                                        1. 8

                                                                          I “amazing journey” the entire customer base

                                                                          I lol’ed ^_^

                                                                        2. 5

                                                                          The CHEAPASS tier is actually $5 per year, or ~42 cents per month.

                                                                          1. 5

                                                                            Still overpriced, if you ask me.

                                                                            1. 1

                                                                              Oops, completely misread that

                                                                              1. 3

                                                                                I mean you called it “pretty cheap” so there’s no going back now, expect a 12x price hike

                                                                                1. 3

                                                                                  For a 12x price hike, I would hope to see some Serious Business features. Maybe you could provide a Satisfaction Guarantee. Or maybe Enterprise Support.

                                                                                  1. 4

                                                                                    I will add that to the backlog, which is a euphemism for “no”.

                                                                                2. 2

                                                                                  5€/mo is what the whole server costs stavros to run.

                                                                                  1. 2

                                                                                    So he’d need at least 12 users to recoup his costs.

                                                                                    I think that may be treading too far into the dreaded “at-scale” territory for stavros.

                                                                                    1. 2

                                                                                      Oh God and they’ll think they have the right to email me for stuff, what a hassle

                                                                            1. 7

                                                                              I love your pricing page - those ‘our choice’ tags are such bullshit.

                                                                              1. 4

                                                                                What, like somehow $999,999.98 isn’t the best value for you?

                                                                                1. 2

                                                                                  What happens if someone seriously wants the $999k plan?

                                                                                  1. 16

                                                                                    I write an “amazing journey” post detailing how the service is seriously definitely never getting shut down for at least three days and retire on a beach while you deal with the rotting service.

                                                                                    1. 1

                                                                                      What startup is the phrase “amazing journey” in reference to?

                                                                                      1. 14

                                                                                        Many.

                                                                                        1. 3

                                                                                          Many! I think I’ve seen “incredible journey” used with that one email program you had to wait in line to get, one of the post-Flickr photo sites, Vine maybe?

                                                                                      2. 1

                                                                                        He throws a party… I mean… hires a lawyer and makes a Series-A announcement?

                                                                                  1. 7

                                                                                    For fairness, we should find some way to include Dream’s perspective.

                                                                                    My perspective on his perspective is that he goes through a lot of handwaving and psychological arguments to explain his situation. The speedrun team’s paper has a basic statistical argument which convinces me that something is unexplained, but I don’t feel like Dream has an explanation. But without a clear mechanism for how cheating was accomplished, it’s premature to conclude anything.
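                                                                    A sketch of the kind of binomial tail bound such a statistical argument rests on. The counts and the stock drop chance below are illustrative assumptions, not the paper’s exact figures:

```python
from math import comb

# Hypothetical stand-in numbers: n barter attempts, k ender-pearl
# successes, p the stock drop chance (assumed here, not the paper's value).
n, k, p = 262, 42, 20 / 423

# Probability of seeing at least k successes in n fair trials.
tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
print(f"P(at least {k} successes in {n} trials) = {tail:.3e}")
```

                                                                    Even before any corrections for stopping rules or cherry-picking, a tail this small is what makes “something is unexplained” hard to argue with.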

                                                                                    In a relative rarity for commonly-run games, the Minecraft speedrunning community allows many modifications to clients. It complicates affairs that Dream and many other runners routinely use these community-approved modifications.

                                                                                    1. 5

                                                                                      But without a clear mechanism for how cheating was accomplished, it’s premature to conclude anything.

                                                                      This is the argument that always confuses me. At the end of the day, Minecraft is just some code running on someone else’s computer. The recorded behavior of this code is extremely different from what it should be. There are about a billion ways he could have modified the RNG, even live on stream with logfiles to show for it.

                                                                                      1. 1

                                                                                        I like to take a scientific stance when these sorts of controversies arise. When we don’t know how somebody cheated, but strongly suspect that their runs are not legitimate, then we should not immediately pass judgement, but work to find a deeper understanding of both the runner and the game. In the two most infamous cheating controversies in the wider speedrunning community, part of the resolution involved gaining deeper knowledge about how the games in question operated.

                                                                                      2. 3

                                                                                        But without a clear mechanism for how cheating was accomplished

                                                                                        Are you asking for a proof of concept of how to patch a minecraft executable or mod to get lucky like Dream was?

                                                                                        1. 3

                                                                                          Here’s one:

                                                                                          • open the minecraft 1.16.4.jar in your choice of archive program
                                                                                          • go to /data/minecraft/loot_tables/gameplay/piglin_bartering.json
                                                                                          • increase the weight of the ender pearl trade
                                                                          • delete META-INF like in the good old days (it contains a checksum)
                                                                                          • save the archive

                                                                          Anyone as familiar with Minecraft as Dream would know how to do this.
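                                                                          The same steps can be scripted. The function below is a sketch of that recipe; the demo jar at the bottom is a toy with the same layout, not the real 1.16.4 jar, and the entry names and weights are stand-ins:

```python
import io
import json
import zipfile

# Path of the loot table inside the jar, as given in the steps above.
TABLE = "data/minecraft/loot_tables/gameplay/piglin_bartering.json"

def bump_pearl_weight(src_bytes: bytes, new_weight: int) -> bytes:
    """Rewrite a jar: drop META-INF (the checksum) and raise the pearl weight."""
    out = io.BytesIO()
    with zipfile.ZipFile(io.BytesIO(src_bytes)) as src, \
         zipfile.ZipFile(out, "w") as dst:
        for item in src.infolist():
            if item.filename.startswith("META-INF/"):
                continue  # "delete META-INF like in the good old days"
            data = src.read(item.filename)
            if item.filename == TABLE:
                table = json.loads(data)
                for entry in table["pools"][0]["entries"]:
                    if entry.get("name") == "minecraft:ender_pearl":
                        entry["weight"] = new_weight
                data = json.dumps(table).encode()
            dst.writestr(item.filename, data)
    return out.getvalue()

# Build a toy jar with the same layout (values are illustrative).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("META-INF/MANIFEST.MF", "checksums\n")
    z.writestr(TABLE, json.dumps({"pools": [{"entries": [
        {"name": "minecraft:ender_pearl", "weight": 10},
        {"name": "minecraft:gravel", "weight": 40}]}]}))

patched = bump_pearl_weight(buf.getvalue(), 100)
with zipfile.ZipFile(io.BytesIO(patched)) as z:
    names = z.namelist()
    table = json.loads(z.read(TABLE))
```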

                                                                                        2. 2

                                                                                          But without a clear mechanism for how cheating was accomplished, it’s premature to conclude anything.

                                                                          We have a clear mechanism: he modded his game. And when he was asked for game logs, he deleted them. Just from the odds alone, he is 100.00000000% guilty.

                                                                                          1. 3

                                                                            As the original paper and video explain, Minecraft’s speedrunning community does not consider modified game clients to be automatically cheating. Rather, the nature of the precise modifications used is what determines whether a run counts as cheating.

                                                                            While Dream did admit to destroying logs, he did also submit supporting files for his run. Examining community verification standards for Minecraft speedruns, it does not seem like he failed to follow community expectations. It is common for speedrunning communities to know about possible high-reliability verification techniques, like input captures, but to also not require them. Verification is just as much about social expectations as about technical choices.

                                                                            From the odds alone, Dream’s runs are probably illegitimate, sure, but we must refuse to be 100% certain, due to Cromwell’s Rule; if we are completely certain, then there’s no point in investigating or learning more. From the paper, the correct probability to take away is 13 nines, which is a relatively high amount of certainty. And crucially, this is the probability that our understanding of the situation is incomplete, not the probability that he cheated.

                                                                                            1. 4

                                                                              But you said there’s no clear mechanism for how cheating was accomplished. Changing the probability tables through mods is a fairly clear and simple mechanism, isn’t it?

                                                                                        1. 10

                                                                                          I’m afraid that, while this might be a reasonable definition of “feature parity”, and I won’t argue over whether it is, there’s some stuff that is missing for a practical migration from Xorg to Arcan for current users.

                                                                                          For comparison, consider the Wayland migration plan. Via XWayland, it is possible for X11 clients expecting to talk to Xorg to instead talk to XWayland and not know the difference. The X11 network protocol is still present and clients can be remote. I don’t know if I’ve experienced it, but I have heard various folks report that they did not know that they were running on top of Wayland, because so much of their existing X11 tooling continued to work.

                                                                                          If I am interpreting your drawing API diagram correctly, then today it is possible to host an entire Arcan server as an Xorg client. This gives a possible incremental migration where users can move each application from Xorg into Arcan, and when everything is migrated, then the Xorg server can be dismantled. This also suggests a useful way that I could contribute: I could package Arcan for my distribution of choice, as a plain X11 client with no distinguishing features. Is this a reasonable view of the current state of the project?

                                                                                          To quote from your previous article:

                                                                                          It is worthwhile to stress that this project in no way attempts to ‘replace’ Xorg in the sense that you can expect to transfer your individual workflow and mental model of how system graphics works without any kind of friction or effort.

                                                                                          Wayland does not attempt to do this either. In order to make every frame a painting, the compositor gives slightly different results on-screen from what the X11 client might believe is in the pixel buffer. This disconnection is ultimately empowering to the user, to the point of Compiz, so it’s hard to argue against it, but it’s not seamless. Nonetheless, Wayland has a practical path to hosting X11 clients.

                                                                                          In pragmatic terms, I used to give a demonstration at the local university where I perform many X11 party tricks; the finale was to remotely invoke a Firefox instance to run on a local Xorg server. For feature parity, I would expect to be able to give the same talk with Arcan, including the same sorts of practical demonstrations. I’ve already started imagining how the talk would change if Wayland were to become the paradigm which is taught first, and it’s different, but still doable.

                                                                                          1. 7

                                                                                            I’m afraid that, while this might be a reasonable definition of “feature parity”, and I won’t argue over whether it is, there’s some stuff that is missing for a practical migration from Xorg to Arcan for current users.

                                                                                            Hence why I begin by saying that the compatibility vector is a different subject matter altogether, and if you approach it prematurely, the entirety of the project will suffer - which is very much the case with Wayland.

                                                                            There is support for Wayland clients. There is support for X-over-Wayland clients. There is a separate Xorg fork (Xarcan) that works similarly to Xephyr - refer to the article on Dating my X (2016), or this clip for that matter: https://www.youtube.com/watch?v=CIWZdEkgPfM

                                                                            All those bases are covered, but also wholly uninteresting from my point of view. It is, at most, “due diligence”. If the set of current and drafted Wayland objects were the fences around my FOSS desktop experience, I’d break my 25+ years of Linux/BSD/Solaris/… and go all in on Windows 10 in a heartbeat.

                                                                            If that compatibility vector turned out to be relevant in the grand scheme of things - I doubt it - I would rather burn a few more weeks on adding rootless mapping to Xarcan, along with a scene-graph synch tactic so X window managers could manage ‘faked’ Arcan windows - but again, I have not much interest in that kind of work. Had I known then what I know now about Wayland internals, I would have pursued that path instead. They chose the wrong primitives from the start, and even the simple stuff is endlessly tedious and frustrating as a consequence.

                                                                                            Wayland does not attempt to do this either. In order to make every frame a painting, the compositor gives slightly different results on-screen from what the X11 client might believe is in the pixel buffer. This disconnection is ultimately empowering to the user, to the point of Compiz, so it’s hard to argue against it, but it’s not seamless. Nonetheless, Wayland has a practical path to hosting X11 clients.

                                                                            One of the >many< problems with the rootless X transition model is that it works superficially for some 90% of cases, and then comes an endless stream of unpredictable breakage and more subtle quality degradation.

                                                                                            I could package Arcan for my distribution of choice, as a plain X11 client with no distinguishing features. Is this a reasonable view of the current state of the project?

                                                                            Which distribution is that? Chances are that there are people at work on it already. As for distinguishing features - even the included console WM works fine as a terminal emulator with benefits.

                                                                                            In pragmatic terms, I used to give a demonstration at the local university where I perform many X11 party tricks; the finale was to remotely invoke a Firefox instance to run on a local Xorg server.

                                                                            The ones I tend to do are the runtime network redirection as linked before, as well as sshing in and running killall -9 arcan, then running it again from the tty and watching clients come back to life where they were. Others are: using two laptops, closing the lid on one and seeing the windows appear on the other - or running gnomeland on one of them, arcan/durden on the other, and subjecting both to a while true; weston-terminal &; done kind of loop. Earlier I used the drag+compose+share thing (2013), seek to about 2:30 in.

                                                                                            There is something much more interesting around the corner though.

                                                                                            1. 2

                                                                                              Which distribution is that? chances are that there are people at work on it already. As for distinguishing features - even the included console WM works fine as a terminal-emulator with benefits.

                                                                                              Do you have a list of distributions with arcan packages? I could only find that it’s supported in Void. Arch seems to have AUR packages, but no wiki page.

                                                                                              1. 5

                                                                                                You can use Repology to search a bunch of package repositories at once!: https://repology.org/project/arcan/versions

                                                                                              2. 1

                                                                                                Hence why I begin by saying that the compatibility vector is a different subject matter altogether, and if you approach it prematurely, the entirety of the project will suffer - which is very much the case with Wayland.

                                                                                                Then why title your post “Arcan vs. Xorg: Feature parity”? Why should Xorg users care about the features of Arcan and whether those features can be put into correspondence with Xorg features, except for the case where Xorg users might become Arcan users?

                                                                                                For what it’s worth, there is a nixpkgs issue for packaging Arcan.

                                                                                                1. 6

                                                                                  Because explaining something mildly complicated that doesn’t fit established, strong categories - without grounding it in some frame of reference the target audience can relate to - is really hard, and impossible within the span of seconds/minutes of attention you might get as a mostly unknown and obscure figure.

                                                                                                  A mapping from “X did it in this way, I understand that, what is Y in this space?”

                                                                                  Xorg is a useful technical link as its inner properties are not that hard to unpack or find coherent descriptions of, and it is the more robust such reference - OSX and Windows are too opaque, and Android is too alien from a wider system view. Wayland is a never-ending story of “oh, but it is just a protocol” (the main, mostly defunct and incomplete document referred to itself as a display server, as did the initial todos and readmes), “no, that is up to the compositor”, “no, not the compositor object in the protocol, the compositor!”.

                                                                                            1. 5

                                                                              One point: ARM instructions tend to be fixed-width (like UTF-32), whereas x86 instructions vary in size (like UTF-8). I always loved that.
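                                                                              The analogy is easy to see in bytes. A quick illustrative sketch: UTF-32 spends four bytes on every code point, like a fixed-width ISA, while UTF-8 spends one to four, like a variable-length one:

```python
# Each character costs a constant 4 bytes in UTF-32 (big-endian, no BOM)
# but a variable 1-4 bytes in UTF-8, depending on the code point.
for ch in "aé€𝄞":
    print(ch, len(ch.encode("utf-32-be")), len(ch.encode("utf-8")))
# a costs 1 byte in UTF-8, é 2, € 3, and 𝄞 4 - all cost 4 in UTF-32.
```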

                                                                              I’m intrigued by the Apple Silicon chip, but I can’t give you any one reason it should perform as well as it does, except maybe smaller process size / higher transistor count. I am also curious how well Rosetta 2 can JIT x86 to native instructions.

                                                                                              1. 10

                                                                                                “Thumb-2 is a variable-length ISA. x86 is a bonkers-length ISA.” :)

                                                                                                1. 1

                                                                                                  The x86 is relatively mild compared to the VAX architecture. The x86 is capped at 15 bytes per instruction, while the VAX has several instructions that exceed that (and there’s one that, in theory, could use all of memory).

                                                                                                  1. 2

                                                                                    If you really want to split your brain, look up the EPIC architecture on the 64-bit Itaniums. These were an implementation of VLIW (Very Long Instruction Word). In VLIW, you can just pass a huge instruction that tells each individual functional unit what to do (essentially moving scheduling to the compiler). I think EPIC batched these in groups of three… been a while since I read up on it.

                                                                                                    1. 6

                                                                                      Interestingly, by one definition of RISC, this kind of thing makes Itanium a RISC machine: the compiler is expected to work out dependencies, which functional units to use, etc., which was one of the foundational concepts of RISC in the beginning. At some point RISC came to mean just “fewer instructions”, “fixed-length instructions”, and “no operations directly with memory”.

                                                                                      Honestly, at this point I believe it is the last of those that most universally distinguishes CISC from RISC.

                                                                                                      1. 3

                                                                                                        Raymond Chen also wrote a series about the Itanium.

                                                                                                        https://devblogs.microsoft.com/oldnewthing/20150727-00/?p=90821

                                                                                                        It explains a bit of the architecture behind it.

                                                                                                      2. 1

                                                                                                        My (limited) understanding is that it’s not the instruction size as much as the fact that x86(-64) has piles of prefixes, weird special cases and outright ambiguous encodings. A more hardwarily inclined friend of mine once described the instruction decoding process to me as “you can never tell where an instruction boundary actually is, so just read a byte, try to figure out if you have a valid instruction, and if you don’t then read another byte and repeat”. Dunno if VAX is that pathological or not, but I’d expect most things that are actually designed rather than accreted to be better.
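The prefix pile is a good illustration of that "read a byte and see" loop: a decoder can't even find the opcode without consuming bytes one at a time. A toy sketch of just the first step (real decoders also handle REX/VEX/EVEX prefixes, multi-byte opcodes, ModRM/SIB and displacements, which is where the real ambiguity lives):

```python
# x86 legacy prefix bytes that may be stacked in front of an instruction.
LEGACY_PREFIXES = {
    0xF0, 0xF2, 0xF3,                    # LOCK, REPNE, REP
    0x2E, 0x36, 0x3E, 0x26, 0x64, 0x65,  # segment overrides
    0x66, 0x67,                          # operand-/address-size overrides
}

def skip_prefixes(code: bytes) -> int:
    """Return the offset of the first non-prefix (opcode) byte."""
    i = 0
    while i < len(code) and code[i] in LEGACY_PREFIXES:
        i += 1
    return i

# 66 F3 0F B8 ...  ->  two prefix bytes before the opcode bytes begin
print(skip_prefixes(bytes([0x66, 0xF3, 0x0F, 0xB8])))  # 2
```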

                                                                                                        1. 1

                                                                                                          The VAX is “read byte, decode, read more if you have to”, but then, most architectures which don’t have fixed sized instructions are like that. The VAX is actually quite nice—each opcode is 1 byte, each operand is 1 to 6 bytes in size, up to 6 operands (most instructions take two operands). Every instruction supports all addressing modes (with the exception of destinations not accepting immediate mode for obvious reasons). The one instruction that can potentially take “all of memory” is the CASE instruction, which, yes, implements a jump table.
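A sketch of what decoding one of those operand specifiers might look like (a simplification: indexed mode, the deferred modes and displacement sizes are glossed over):

```python
def decode_specifier(byte: int):
    """Decode the first byte of a VAX operand specifier.

    High nibble = addressing mode, low nibble = register; modes 0-3
    instead encode a 6-bit short literal. Register 15 is the PC, which
    is how immediate mode falls out of the general autoincrement scheme.
    """
    mode = byte >> 4
    reg = byte & 0x0F
    if mode < 4:                        # modes 0-3: 6-bit short literal
        return ("literal", byte & 0x3F)
    if mode == 5:                       # register mode, e.g. R1
        return ("register", reg)
    if mode == 8 and reg == 15:         # (PC)+ autoincrement == immediate
        return ("immediate", None)
    return ("mode %d" % mode, reg)      # remaining modes, simplified

print(decode_specifier(0x51))   # register mode, R1
print(decode_specifier(0x05))   # short literal 5
```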

                                                                                                    2. 6

                                                                                                      fixed-width instructions (like UTF-32)

                                                                                                      Off-topic tangent from a former i18n engineer, which in no way disagrees with your comment: UTF-32 is indeed a fixed-width encoding of Unicode code points but sadly, that leads some people to believe that it is a fixed-width encoding of characters which it isn’t: a single character can be represented by a variable-length sequence of code points.
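A quick demonstration of the difference, using Python strings (which index by code point):

```python
import unicodedata

# "é" written as base letter plus combining accent: one user-perceived
# character, but two code points and two fixed-width UTF-32 units.
s = "e\u0301"                            # 'e' + COMBINING ACUTE ACCENT
print(len(s))                            # 2 code points
print(len(s.encode("utf-32-le")) // 4)   # still 2 UTF-32 code units

# NFC normalization folds it into one code point where one exists...
print(len(unicodedata.normalize("NFC", s)))   # 1
# ...but many clusters (flag emoji, skin-tone modifiers, ZWJ sequences)
# have no single-code-point form at all.
```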

                                                                                                      1. 10

                                                                                                        V̸̝̕ȅ̵̮r̷̨͆y̴̕ t̸̑ru̶̗͑ẹ̵̊.

                                                                                                      2. 6

                                                                                                        I can’t give you any one reason it should perform as well as it does, except maybe smaller process size / higher transistor count.

                                                                                                        One big thing: Apple packs an incredible amount of L1D/L1I and L2 cache into their ARM CPUs. Modern x86 CPUs also have beefy caches, but Apple takes it to the next level. For comparison: the current Ryzen family has 32KB L1I and L1D caches for each core; Apple’s M1 has 192KB of L1I and 128KB of L1D. Each Ryzen core also gets 512KB of L2; Apple’s M1 has 12MB of L2 shared across the 4 “performance” cores and another 4MB shared across the 4 “efficiency” cores.

                                                                                                        1. 7

                                                                                                          How can Apple afford these massive caches while other vendors can’t?

                                                                                                          1. 3

                                                                                                            I’m not an expert but here are some thoughts on what might be going on. In short, the 4 KB minimum page size on x86 puts an upper limit on the number of cache rows you can have.

                                                                                                            The calculation at the end is not right and I’d like to know exactly why. I’m pretty sure the A12 chip has 4-way associativity. Maybe the cache lookups are always aligned to 32 bits, which is something I didn’t take into account.
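For what it's worth, the usual form of this argument is the VIPT constraint: if the L1 is virtually indexed and physically tagged, the index bits must fit inside the page offset, so the cache can't exceed page_size × associativity. The numbers line up suspiciously well if you assume 8-way L1 caches on both (the associativity figures here are my assumption, not a vendor spec):

```python
def max_vipt_l1(page_size: int, ways: int) -> int:
    """Upper bound on a virtually indexed, physically tagged L1 cache:
    the set-index bits must lie within the page offset, so
    size <= page_size * associativity."""
    return page_size * ways

print(max_vipt_l1(4096, 8))    # x86 4 KiB pages, 8-way  -> 32 KiB (Ryzen L1)
print(max_vipt_l1(16384, 8))   # Apple 16 KiB pages, 8-way -> 128 KiB (M1 L1D)
```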

                                                                                                          2. 3

                                                                                                            For comparison: the current Ryzen family has 32KB L1I and L1D caches for each core; Apple’s M1 has 192KB of L1I and 128KB of L1D. Each Ryzen core also gets 512KB of L2; Apple’s M1 has 12MB of L2 shared across the 4 “performance” cores

                                                                                                            This is somewhat incomplete. The 512KiB L2 on Ryzen is per core. Ryzen CPUs also have L3 cache that is shared by cores. E.g. the Ryzen 3700X has 16MiB L3 cache per core complex (32 MiB in total) and the 3900X has 64MiB in total (also 16MiB per core complex).

                                                                                                            1. 2

                                                                                                              How does the speed of L1 on the M1 compare to the speed of L1 on the Ryzen? Are they on par?

                                                                                                          1. 2

                                                                                                            I am using the norman layout. I have a few remaps:

                                                                                                            noremap n h
                                                                                                            noremap i j
                                                                                                            noremap o k
                                                                                                            noremap h l
                                                                                                            noremap l o
                                                                                                            noremap ; i
                                                                                                            noremap k n
                                                                                                            noremap K N
                                                                                                            
                                                                                                            noremap <C-W>n <C-W>h
                                                                                                            noremap <C-W>i <C-W>j
                                                                                                            noremap <C-W>o <C-W>k
                                                                                                            noremap <C-W>h <C-W>l
                                                                                                            

                                                                                                            This actually leaves hjkl shifted one key to the right, which makes sense, since proper home-row placement leaves your index finger on the (QWERTY) j, not h.

                                                                                                            1. 4

                                                                                                              There’s a thing you can do to evaluate this. Take a source listing of the candidate replacement. Count up the files and the lines in them. Then look up its dependencies.

                                                                                                              If there are dependencies, count their functionality into the system and add it to the line count. If the whole compiler + runtime is larger than 50k lines, then it will never replace C.

                                                                                                              (Zig didn’t pass this test).

                                                                                                              1. 12

                                                                                                                Why? GCC has millions of lines of code.

                                                                                                                1. 2

                                                                                                                  C spread across platforms by being simple to port, and it was simple to port because there wasn’t much of it. We’re talking about pre-GCC C here. By now GCC is so massive that vendors accommodate their platforms to fit it.

                                                                                                                  A language striving to replace C would need that pioneer’s smallness as well. Otherwise it’s unable to supplant the principal compiler on a platform and really replace it.

                                                                                                                  1. 7

                                                                                                                    The possibility of a small implementation of the language and the size of the main implementation aren’t necessarily related, though. Additionally, if you want to target a new platform nowadays, you’re better off adding an appropriate backend to GCC or LLVM instead of trying to implement C (or any other language) from scratch.

                                                                                                                    1. 2

                                                                                                                      Of course, it’s better not to attempt to replace C. It works fairly well for what it was made for.

                                                                                                                    2. 2

                                                                                                                      A lot of languages use LLVM as a back-end. This contradicts your thesis by adding a huge number of dependent LOC, but it makes porting really easy (i.e. if there’s already an LLVM code generator for your platform, you’re mostly done).

                                                                                                                      And these days, hopefully any compiler segregates the code generation logic enough that it can be ported without worrying about how large the front-end side of it is.

                                                                                                                      1. 1

                                                                                                                        LLVM itself is written in C++, and that is an extension of C. Does that really contradict the point that none of this has replaced C yet?

                                                                                                                        Honestly, though, I don’t believe it to be that important. I just don’t think people will move off C before the equivalent language can stand without C. There’s also a question of why to replace C? For example why would anybody want to write coreutils in a different language?

                                                                                                                        1. 3

                                                                                                                          For example why would anybody want to write coreutils in a different language?

                                                                                                                          https://github.com/uutils/coreutils#why

                                                                                                                  2. 1

                                                                                                                    Note that there is ongoing work on a self-hosted compiler instead of leveraging LLVM.

                                                                                                                  1. 1

                                                                                                                    I like the increase in personality and being a bit more open about yourself. Truly reminds me of the 90s web.

                                                                                                                    300kb is a lot in 90s terms though ;) I challenge you to dramatically reduce your website, and will accept a challenge from you in return (mine is already quite slim; less than 5kb iirc).

                                                                                                                    1. 2

                                                                                                                      Haha you’re absolutely right, but it’s not an authentic 90s site (did emojis even exist then?) it’s just a nod to the 90s. Considering we’re in the days of the multi megabyte webpages, I think 300kb is pretty good. 😊

                                                                                                                      Although, I do like a challenge…

                                                                                                                      1. 1

                                                                                                                        Sure. For webapps, 300kb is alright, but for a blog? ;)

                                                                                                                        1. 1

                                                                                                                          I think for any webpage these days, 300KB is ok. It’s way lower than what it was with the old theme.

                                                                                                                          1. 1

                                                                                                                            Ah, come on. Accept my challenge and return the favor! I’d be curious what I ought to improve :)

                                                                                                                        2. 1

                                                                                                                          Emoji were standardized in Unicode in 2010. Windows 7 added support for them in 2012. So no, they didn’t exist in the 90s, and not even in the 00s (as we know them today).

                                                                                                                        3. 2
                                                                                                                          1. 1

                                                                                                                            Nice job! Looking forward to a write-up of how you did it, and waiting for a counter-challenge ;-)

                                                                                                                        1. 6

                                                                                                                          this one is weird:

                                                                                                                          Calibre adds an <ol><li></ol> to every heading and subheading. Every ePub reader seems to handle this fine — except FBReader, my favoured ebook reader on Android, which displays a “1.” before each header.

                                                                                                                          Solution: after you’ve unzipped the files, go through and remove every <ol></ol>, convert the <li></li> to <p></p> and remove the value= attribute from the <p> or else epubcheck complains.

                                                                                                                          i considered poking at the fbreader source and seeing if i could fix it, but thinking more about it, it seems like fbreader is technically doing the right thing? i can’t figure out what the tags are for in the first place.

                                                                                                                          1. 2

                                                                                                                            Oh, arguably! But it’s also pretty much never what I actually want, and other ebook readers don’t do that?

                                                                                                                            I didn’t even mention other stuff, like how I’m using zip -f because that way the “mimetype” file stays both uncompressed and first in the zip file - if you just get your files and zip them up, epubcheck isn’t happy with that either. ePub is weird and annoying.

                                                                                                                            (edit: just adding that as a note!)
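For anyone reproducing this without `zip -f`: the spec wants the “mimetype” entry first in the archive and stored uncompressed, which you can also get by simply writing the entries in that order yourself (file names and contents below are dummies):

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
    # Must be the FIRST entry and STORED (uncompressed), per the spec.
    z.writestr("mimetype", "application/epub+zip",
               compress_type=zipfile.ZIP_STORED)
    z.writestr("META-INF/container.xml", "<container/>")
    z.writestr("OEBPS/chapter1.xhtml", "<html/>")

first = zipfile.ZipFile(buf).infolist()[0]
print(first.filename, first.compress_type)   # mimetype 0 (stored)
```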

                                                                                                                            1. 3

                                                                                                                              yeah, i admit i don’t know much about epub generation; that line item just caught my interest because fbreader is my preferred epub reader too.

                                                                                                                              edit: ugh, just saw it was no longer open source. so much for that, then!

                                                                                                                                1. 2

                                                                                                                                  will check :-) Annoyingly, FBReader is still popular enough I should probably allow for it.

                                                                                                                                  1. 2

                                                                                                                                    thanks, i’ll check it out!

                                                                                                                            1. 25

                                                                                                                              It looks great, but I don’t see what it has to do with the web of the 90s. It’s just plain modern, with emoji everywhere.

                                                                                                                              1. 7

                                                                                                                                It looks great, but I don’t see what it has to do with the web of the 90s

                                                                                                                                From the 90s nodding back, it looks like this. The images don’t work because of some weird SVG magic I assume is to do with lazy loading, and of course it uses CSS.

                                                                                                                                Now to be fair this is a Commodore Amiga running OS 3.9, which technically was released in 2000 with a TCP/IP stack and last updated earlier this year. But the hardware at least dates back to 1992.

                                                                                                                                I think it looks pretty nice though, at least on a modern machine.

                                                                                                                                1. 1

                                                                                                                                  What browser are you using, and how does it manage to do modern TLS?

                                                                                                                                  1. 9

                                                                                                                                    I’m using IBrowse which is linked against current AmiSSL, so the TLS negotiation happens on the Amiga. It’s fairly quick on a 68060 at 50MHz.

                                                                                                                                    1. 3

                                                                                                                                      Now that’s more like a website out of the 90s, the IBrowse one. It’s amazing that commercial software for the Amiga is still updated - how many users could that thing have?

                                                                                                                                      1. 1

                                                                                                                                        On the website’s header:

                                                                                                                                        Did you know… you can drag a browser tab out of the window to open a new window?

                                                                                                                                        Chuckled at this.

                                                                                                                                        1. 2

                                                                                                                                          Laugh if you will, but IBrowse 2 had this in 1997-98, before almost any other browser.

                                                                                                                                          1. 1

                                                                                                                                            This one’s even better:

                                                                                                                                            Did you know… HTML frames can be resized by dragging the borders (even if they’re invisible)?

                                                                                                                                      2. 1

                                                                                                                                        Thanks to a proxy doing the TLS for me, I managed to load the page on IE8 (released 2009). I had to turn off javascript, or it would give a blank page. The result is not too pretty: https://www.cosarara.me/up/10264.png

                                                                                                                                        1. 1

                                                                                                                                          What do you use for an HTTPS to a HTTP proxy?

                                                                                                                                          1. 1

                                                                                                                                            And IE4 on windows 95: https://www.cosarara.me/up/4da38.png quite similar. Instead of directly showing a blank page when JS crashes, it asks if you want to stop the script, which is nicer. Too many BSODs though.

                                                                                                                                        2. 1

                                                                                                                                          could you post a direct link to the image? my browser is unable to render the imgur web page.

                                                                                                                                        3. 3

                                                                                                                                          I can’t tell the difference from the previous theme, aside from the overuse of emoji.

                                                                                                                                          edit: Ah oops. I was on mobile. Looks very funky on the desktop. Nice!

                                                                                                                                        1. 4

                                                                                                                                          One point I have to disagree with is the complaint about NAT. I know very little about networks, but I do know NAT is the bane of end-to-end connectivity, and is naturally hostile to peer-to-peer communication. There’s still hole punching, but that requires a server somewhere. Without that, people must configure their router themselves, with no help from the software vendor (the router is independent of the computer or phone it routes for).

                                                                                                                                          • Want to hide the layout of your network? Just randomise your local addresses.
                                                                                                                                          • Want to hide the existence of part of your network? Use a firewall.
                                                                                                                                          • Want to drop incoming traffic by default? Use a firewall, dammit.

                                                                                                                                          We could even have a “NAT” that doesn’t translate anything, but open ports like an IPv4 NAT would. Same “security” as before, and roughly the same disadvantages (can’t easily get through without the user explicitly configuring allowances).

                                                                                                                                          That said, don’t we have protocols that allow a computer on the home network to talk to the router and ask it to open up some port on a permanent basis? I just want to start my game, advertise my IP on the match server, and wait for incoming players. Or set up a chat room without using any server, because I too care about my privacy.
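To answer my own question: yes, NAT-PMP (RFC 6886), its successor PCP, and UPnP IGD all exist for exactly this. A sketch of building a NAT-PMP mapping request (the port numbers are made up, and actually sending it via UDP to the gateway on port 5351 is left out so this stays self-contained):

```python
import struct

def natpmp_map_request(internal_port: int, external_port: int,
                       lifetime: int = 3600, tcp: bool = True) -> bytes:
    """Build the 12-byte NAT-PMP mapping request (RFC 6886):
    version, opcode, reserved, internal port, suggested external
    port, requested lifetime in seconds."""
    opcode = 2 if tcp else 1            # 1 = map UDP, 2 = map TCP
    return struct.pack("!BBHHHI",
                       0,               # version 0
                       opcode,
                       0,               # reserved
                       internal_port,
                       external_port,
                       lifetime)

pkt = natpmp_map_request(25565, 25565)
print(len(pkt))   # 12 bytes, to be sent via UDP to the gateway, port 5351
```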

                                                                                                                                          1. 9

                                                                                                                                            The NAT section absolutely reads like this person doesn’t understand firewalls.

                                                                                                                                            Every computer just by default accessible anywhere in the world unless you specifically firewall things?

                                                                                                                                            Where in the world is there NAT without a stateful firewall denying all incoming connections by default? Remove the NAT (because IPv6) and you’d be left with a firewall. Where’s the issue? Maybe they think you need to configure the firewall on each computer specifically? That’s what this seems to imply:

                                                                                                                                            the kind of devices that do actively use IPv6 (mobile devices, mainly), are able to just zeroconf themselves perfectly, which is nice from a “just works” perspective

                                                                                                                                            1. 2

                                                                                                                                              NATs do provide some obfuscation of addresses and can make it more difficult for an attacker to reach a device directly (ignoring, of course, intentionally forwarded ports..)

                                                                                                                                              1. 5

                                                                                                                                                I’m not convinced that this is true. Most IPv6 stacks (including the Windows one) support generating a new IPv6 address periodically. The stack holds on to the old address until all connections from it are closed. IPv6 lets you take this to extremes and generate a new IPv6 address for every outbound connection. It’s fairly easy to have a stable IPv6 address that you completely firewall off from the outside and use for connections from the local network, and pick a new address for every new outbound connection. If someone wants to map your internal network, they will never see the address that you listen on, and they can’t easily tell whether two connections are from the same machine or different ones.

                                                                                                                                                In contrast, IPv4 NAT relies on heuristics and is actively attacked by benign protocols such as STUN, so implementations often have vulnerabilities (there was one highlighted on this site this week).
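The “new address periodically” part is what RFC 4941 privacy extensions do: keep the routed /64 prefix, randomize the 64-bit interface identifier. A sketch of the idea (the prefix is a documentation example, and real stacks derive the identifier more carefully than plain randomness):

```python
import ipaddress
import secrets

def temporary_address(prefix: str) -> ipaddress.IPv6Address:
    """Compose a privacy-style address: routed /64 prefix plus a
    random 64-bit interface identifier."""
    net = ipaddress.IPv6Network(prefix)
    iid = secrets.randbits(64)          # random interface identifier
    return net[iid]                     # index into the /64

addr = temporary_address("2001:db8:abcd:12::/64")
print(addr)   # e.g. 2001:db8:abcd:12:xxxx:xxxx:xxxx:xxxx
```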

                                                                                                                                                1. 3

                                                                                                                                                  No: I receive SSH brute-force attempts on my LAN from private IPv4 packets coming from outside, joyfully going through my ISP router from WAN to LAN. Firewalling, in IPv4 and IPv6 alike, prevents that; NAT alone does not.

                                                                                                                                                  1. 2

                                                                                                                                                    Also Yes: a misconfigured firewall with NAT might have upstream routers not route devices individual addresses to it.

                                                                                                                                                    Also, TLS does have session resumption tickets, and maybe rotating the IPv6 address at every request is possible? Not practical, though…

                                                                                                                                                    Good point: How would we implement privacy-focused VPNs with IPv6? The Big Bad NAT?

                                                                                                                                                    1. 1

                                                                                                                                                      Also, TLS does have session resumption tickets, and maybe rotating the IPv6 address at every request is possible? Not practical, though…

                                                                                                                                                      From what I’ve researched, it’s possible, assuming the server supports them, which a number do not (a lot of mechanisms like that trade away some security in the interest of accessibility or speed).

                                                                                                                                                2. 1

                                                                                                                                                  Where in the world is there NAT without a stateful firewall denying all incoming connections by default?

                                                                                                                                                  Ideally, nowhere. In the world I live in? I’ve seen a good handful of people order their firewall rules incorrectly and end up placing an ALLOW ALL rule first in the sequence, meaning they effectively have no firewall.

                                                                                                                                                  With IPv4, accidentally leaving your firewall wide open like that, assuming you run a network fully behind NAT, would cause no real issue, since any port without a NAT rule has no actual destination to pass traffic to.
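                                                                                                                                                  The failure mode is easy to model: packet filters evaluate rules first-match-wins, so an ALLOW ALL at the top shadows everything below it. A toy evaluator, purely illustrative (the rule names are made up), assuming iptables-style top-to-bottom matching:

```python
# Toy first-match packet filter: rules are (predicate, verdict) pairs,
# checked top to bottom; the first match wins, like iptables -A chains.
def evaluate(rules, packet):
    for predicate, verdict in rules:
        if predicate(packet):
            return verdict
    return "DROP"  # default policy

allow_all = (lambda p: True, "ALLOW")            # the misplaced rule
block_ssh = (lambda p: p["dport"] == 22, "DROP")

ssh_probe = {"dport": 22}

# Misordered: ALLOW ALL matches first, so the SSH block never runs.
print(evaluate([allow_all, block_ssh], ssh_probe))  # ALLOW
# Correctly ordered: the specific rule fires before the catch-all.
print(evaluate([block_ssh, allow_all], ssh_probe))  # DROP
```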

                                                                                                                                                  For the record: I am in no way saying NAT alone should be your security policy. But looking at the design documents, their conclusion seems to be that if your firewall dies for some reason, NAT at least still plays traffic cop (well, maybe more like traffic controller).

                                                                                                                                                3. 1

                                                                                                                                                  NAT is the bane of P2P, and is something that, yes, I do indeed understand why it’s not considered part of IPv6.

                                                                                                                                                  However, the ability to have, at current count, 6 different hosts all accessible under the same IP with just a port number to deal with is nice: I don’t need to remember multiple addresses, certainly not stupidly long ones. I just know that anywhere in the world I can type in 96.94.238.189, and that’s the only number I need to memorize. And as far as my research has led me, that’s just not possible in IPv6.

                                                                                                                                                  1. 2

                                                                                                                                                    haproxy?

                                                                                                                                                    1. 1

                                                                                                                                                      Through which 90% of my network traffic is already routed. However, HAProxy is not the end-all be-all here; some things it just can’t do:

                                                                                                                                                      • SSH: OpenSSH, at least, supports neither the PROXY protocol nor host-based routing. I currently need 3 separate ports NATted through to deal with the latter; we’ll get to the former in just a moment.
                                                                                                                                                      • Mail, be it SMTP, POP3, or IMAP, but especially SMTP, where the connecting IP address is massively important (see also: SPF). Again, Postfix, which is what I’m using as an MTA, does not support the PROXY protocol either, to the best of my knowledge.
                                                                                                                                                      • Very long-running connections, like IRC. Yes, HAProxy can handle these (and no, I don’t mean logging; option logasap is a thing), but it really doesn’t seem like HAProxy was meant to keep track of connections like that, especially when, once again, my ircd, UnrealIRCd, pays attention to the connecting IP. (There is discussion about allowing PROXY support; I believe it’s experimental with WebIRC blocks, but for the entire server it’s not supported.)
                                                                                                                                                      • Anything with enough security reasoning to solidly slap fail2ban on, such as… SMTP and SSH. Fail2ban doesn’t understand PROXY, though admittedly you only need the application service to understand and log it. Either way, if I’m routing connections through another machine, fail2ban is useless without a fair bit of configuration to make it work across machines, something I wrote an entire service for just to allow my Apache instances to correctly ban IPs at the HAProxy level.

                                                                                                                                                      As much as HAProxy is an amazing piece of kit that is very functional and flexible, not everything expects, or exactly allows, arbitrary reverse proxies without a lot of fiddling.
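                                                                                                                                                      For what it’s worth, the PROXY protocol those backends would need to understand is tiny: version 1 is a single text line prepended to the stream, carrying the original client address. A sketch of building and parsing it (format per the HAProxy proxy-protocol spec; the function names and addresses here are made up):

```python
def proxy_v1_header(src_ip, dst_ip, src_port, dst_port, family="TCP4"):
    """Build the one-line PROXY v1 preamble a proxy sends before the payload."""
    return f"PROXY {family} {src_ip} {dst_ip} {src_port} {dst_port}\r\n"

def parse_proxy_v1(line):
    """What a PROXY-aware backend does to recover the real client address."""
    parts = line.strip().split(" ")
    if parts[0] != "PROXY":
        raise ValueError("stream did not start with a PROXY preamble")
    _, family, src_ip, dst_ip, src_port, dst_port = parts
    return src_ip, int(src_port)

# A backend that doesn't expect this line (OpenSSH, stock Postfix) just
# sees it as protocol garbage, which is why PROXY support must be mutual.
hdr = proxy_v1_header("203.0.113.7", "192.168.5.158", 50123, 25)
print(parse_proxy_v1(hdr))  # ('203.0.113.7', 50123)
```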

                                                                                                                                                    2. 1

                                                                                                                                                      Can’t you just use the same prefix for all IPV6 addresses? I imagine something like:

                                                                                                                                                        aaaa:bbbb:cccc:dddd::1
                                                                                                                                                        aaaa:bbbb:cccc:dddd::2
                                                                                                                                                        aaaa:bbbb:cccc:dddd::3
                                                                                                                                                        aaaa:bbbb:cccc:dddd::4
                                                                                                                                                      

                                                                                                                                                      The first part may still be a pain if you have to remember it and type it by hand, but the rest doesn’t sound that bad…
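                                                                                                                                                      That composition is exactly how the `ipaddress` module treats a /64: network prefix plus a small host index. A quick check (the 2001:db8 prefix below is the reserved documentation range, standing in for whatever your ISP actually delegates):

```python
import ipaddress

# One shared /64 prefix for the whole LAN (documentation range here)
net = ipaddress.IPv6Network("2001:db8:aa:bb::/64")

# Hosts ::1 through ::4, as in the comment above
hosts = [net[i] for i in range(1, 5)]
for h in hosts:
    print(h.compressed)  # e.g. 2001:db8:aa:bb::1
```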

                                                                                                                                                      1. 1

                                                                                                                                                        In theory, yes. Also in theory, you should never have to type an IPv6 address at all, because of DNS (let’s just forget that sometimes, when I’m configuring a fresh host on the network, it has no DNS yet).

                                                                                                                                                        And short of setting up network prefix translation for your chosen prefix inside the private space, you’ll be dealing with something out of your control. For example, I just disconnected my phone from wifi. It now has a public IPv6 address of 2600:387:c:5113::129. And it’s a lot easier to memorize 24 bits of decimal than 64 bits of hex. Heck, if you run a /24 inside the standard 192.168 prefix range for IPv4, you only really have to remember two numbers: your chosen prefix (in my case, 5) and the IP of the host you want to reach (say, 158). So I can remember that the pair (5, 158) is, say, the new container I just brought up, and I can probably hammer out 192.168.5.158 into my browser’s address bar before I’ve even fully recalled that.

                                                                                                                                                        IPv6, however, would likely force me to memorize an entire address, or keep going back to my trusty ip a command to copy it. And something like 2600:387:c:5113 as a prefix isn’t something I can really compact the way I can compact 192.168.5 to 5. Being much longer, it will take many more repetitions before recall becomes automatic, instead of my just keeping the host portion in memory.

                                                                                                                                                        God forbid any part of that address changes on you, though. Hopefully dynamic IP assignments (“from the ISP” dynamic, not “from DHCP” dynamic) won’t be a thing in IPv6 the way they are in IPv4.