1. 2

    Yes, stack traces are lazy. But github.com/pkg/errors.WithStack() is very handy.
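
    A minimal sketch of how that reads in practice (the file path and helper name are made up; %+v prints the recorded stack, plain %v just the message):

      package main

      import (
          "fmt"
          "os"

          "github.com/pkg/errors"
      )

      // openConfig is a hypothetical helper; WithStack records the call
      // stack at the moment the error is wrapped, not when it is printed.
      func openConfig(path string) error {
          if _, err := os.Open(path); err != nil {
              return errors.WithStack(err)
          }
          return nil
      }

      func main() {
          if err := openConfig("/nonexistent/config.toml"); err != nil {
              fmt.Printf("%+v\n", err) // message plus stack trace
          }
      }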

    1. 1

      I’m running sentry open source, but an older version, as newer versions require a very complex setup. I’d love something that was simple to deploy (Go/Rust service) and used the sentry client libs.

      1. 1

        More discord than matrix currently. But matrix has a discordification meta-issue: https://github.com/vector-im/element-web/issues/7487

        1. 3

          Re-write HDFS in something that doesn’t run on the JVM. Same for Hive.

          1. 1

            What on earth needs HDFS but can’t take 30ms of JVM startup time?

            1. 2

              It’s not 30ms, it’s more like 6 seconds when you run something like hdfs dfs -ls /

              To be clear, the above is the client warming up before it begins to contact the server. The server will respond to anything in an instant as there is no warm-up to respond to requests.

            2. 1

              Go? Rust? Python?

              1. 4

                I’m not sure if I’d rebuild in Rust or GoLang but most definitely not Python. There are a lot of CPU-intensive operations hiding in Hadoop.

                There’s actually an HDFS client written in Go. It’s awesome as there is no JVM warmup time when you execute it, unlike the HDFS client that ships with Hadoop. Waiting 6 seconds every time you want to list the contents of another folder is frustrating. Someone now just needs to port everything else.
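
                (I believe the client in question is github.com/colinmarc/hdfs; a rough sketch of listing /, with the namenode address made up:)

                  package main

                  import (
                      "fmt"
                      "log"

                      "github.com/colinmarc/hdfs/v2"
                  )

                  func main() {
                      // Talk to the namenode directly; no JVM, so startup is near-instant.
                      client, err := hdfs.New("namenode:8020")
                      if err != nil {
                          log.Fatal(err)
                      }

                      entries, err := client.ReadDir("/")
                      if err != nil {
                          log.Fatal(err)
                      }
                      for _, e := range entries {
                          fmt.Println(e.Name())
                      }
                  }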

                1. 1

                  Sorry, I was trolling a bit with Python. That said, doesn’t recent JDK have native compilation support?

            1. 3

              Unicode. Seriously, I’d rewrite Unicode specifications from scratch.

              1. 2

                What would you change?

                1. 7

                  I would go back much further and redesign the alphabet and English spelling rules.

                  1. 4

                    I for one would not admit emojis into Unicode. Maybe let whichever vendors want them standardize something in the private use areas. But reading about new versions of Unicode and the number of emojis added has me wondering about the state of progress in this world.

                    1. 5

                      Customers demand emojis. Software vendors have to implement Unicode support to accommodate that. As a result, Unicode support is more widespread.

                      I take that as a win.

                      Besides, sponsoring emoji funds Unicode development to some extent.

                      1. 3

                        MSN Messenger had emoji-like things 20+ years ago, but they were encoded as [[:picture name:]]. This works, because they are pictures, not characters. Making them characters causes all sorts of problems (what is the collation order of lower-case lambda, American flag and poop in your locale? In any sane system, the correct answer is ‘domain error’).

                        Computers had been able to display small images for at least a decade before Unicode even existed; trying to pretend that they’re characters is a horrible hack. It also reinvents the problems that Chinese and other ideographic languages have. A newspaper in a phonographic language can introduce a neologism by rearranging existing letters; one in an ideographic language has to either make a multi-glyph word or wait for their printing press to be updated with the new symbols. If I want a new pictogram in a system that communicates images, I can send you a picture. If I want to encode it as Unicode then I need to wait for a new Unicode standard to add it, then I need to wait for your and my software to support it.

                      2. 1

                        On the contrary, shipping new emoji is a great way to trick people into upgrading something when they might not otherwise be motivated. If you have some vulnerability fixes that you need to roll out quickly, bundle them alongside some new emoji and suddenly the update will become much more attractive to your users. Works every time. All hail the all-powerful emoji.

                        1. 1

                          Sure, let software vendors push security updates with emojis. Unicode the standard doesn’t need to do that.

                  1. 23
                    • the virtual console / terminal emulator / shell thing. On the systems level, it’d be great if we had a simple API to build text-based applications (batch or interactive) which isn’t based on emulating escape codes for ancient hardware. On the user’s side, it’d be cool to have a shell with non-blocking commands, command palette, modern keyboard shortcuts, etc. Bonus points if you can make this a new stable interface to write apps against, the way win32 or, ahem, escape codes are stable.

                    • relational database without SQL and nulls, and with a wasm-style minimal relational language with a text/binary format which is a target for ORMs/high-level interactive query languages. It’s insane that SQL injection is a thing.

                    1. 5

                      Oil has the start of this, called “headless mode”!

                      http://www.oilshell.org/blog/2021/06/hotos-shell-panel.html#oils-headless-mode-should-be-useful-for-ui-research

                      It’s a shell divorced from the terminal. A GUI / TUI can communicate with the shell over a Unix domain socket. There is a basic demo that works!

                      One slogan is that a shell UI should have a terminal (for external commands); it shouldn’t be a terminal.

                      As mentioned recently I need people to bang on the other side of this to make it happen, since I am more focused on the Oil language, etc.

                      1. 1

                        I’m curious how this is different from just running the shell and talking to its stdout/stdin?

                        1. 3

                          A few different issues, not exhaustive:

                          • If the shell’s stdout is a pipe, then invoking, say, ls --color will inherit the pipe as stdout. This means isatty(stdout) will return false, which means you won’t get color.
                            • with the headless shell, the GUI can create a TTY and send the FD over the Unix domain socket, and ls will have the TTY as its stdout! This works!
                          • You don’t know when the output of ls ends and the output of the next command begins
                            • with the headless shell you can pass a different TTY every single time. Or you can pass a pipe
                          • You don’t know where the prompt begins and ends, and where the output of ls begins, etc.
                            • with the headless shell, you can send commands that render the prompt, return it, and display it yourself in a different area of the GUI

                          Also, with the headless shell, you can make a GUI completion and history interface. In other words, do what GNU readline does, but do it in a GUI, etc. This makes a lot of sense since say Jupyter notebook and the web browser already have GUIs for history and completion.

                          (Note there is a bash Jupyter kernel, but it’s limited and doesn’t appear to do any of these things. It appears to scrape stdin/stdout. If anyone has experience I’d be interested in feedback)

                          1. 1

                            Terminals offer “capabilities”, stuff like querying the width and height, or writing those weird escapes that change the color. I would guess there would either be no capabilities available to a headless shell, or maybe their own limited set of capabilities emulated or ignored in the UI. I haven’t looked at the source so this is merely speculation.

                            1. 1

                              Well a typical usage would be to have a GUI process and a shell process, with the division of labor like this:

                              1. GUI process starts the “headless shell” process (osh --headless), which involves setting up a Unix domain socket to communicate over.
                              2. GUI process allows the user to enter a shell command. This is just a text box or whatever.
                              3. GUI process creates a TTY for the output of this command.
                              4. GUI process sends the command, along with the file descriptor for the TTY over the Unix Domain socket to the headless shell, which sets the file descriptor state, parses, and executes the command
                              5. GUI process reads from the other end of the TTY and renders terminal escape codes

                              So the point here is that the shell knows nothing about terminals or escape codes. This is all handled in the GUI process.

                              You could have a shell multiplexer without a terminal multiplexer, etc.

                              If none of the commands needed a terminal, then the GUI doesn’t even need a terminal. It could just do everything over pipes.

                              So there is a lot of flexibility in the kinds of UIs you can make – it’s not hard-coded into the shell. The headless shell doesn’t print the prompt, and it doesn’t handle completion or history, etc. Those are all UI issues.
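
                              If it helps to see the mechanism, here is a rough Go sketch of step 4, just the descriptor passing (this is not Oil’s actual wire protocol; the socket path, the framing, and the creack/pty dependency are assumptions for illustration):

                                package main

                                import (
                                    "log"
                                    "net"
                                    "syscall"

                                    "github.com/creack/pty" // assumed dependency for allocating a pty pair
                                )

                                func main() {
                                    // ptmx: the GUI reads escape-code output here; tty: becomes the command's stdout.
                                    ptmx, tty, err := pty.Open()
                                    if err != nil {
                                        log.Fatal(err)
                                    }
                                    defer ptmx.Close()

                                    addr := &net.UnixAddr{Name: "/tmp/headless.sock", Net: "unix"}
                                    conn, err := net.DialUnix("unix", nil, addr)
                                    if err != nil {
                                        log.Fatal(err)
                                    }
                                    defer conn.Close()

                                    // The command travels as ordinary bytes; the descriptor rides along
                                    // as SCM_RIGHTS ancillary data and shows up as a new FD in the shell.
                                    oob := syscall.UnixRights(int(tty.Fd()))
                                    if _, _, err := conn.WriteMsgUnix([]byte("ls --color\n"), oob, nil); err != nil {
                                        log.Fatal(err)
                                    }

                                    // ...the GUI would now read from ptmx and render the output itself.
                                }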

                        2. 1

                          To be fair, SQL injection is only a thing if you use your DB driver insanely wrong.
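
                          For illustration, the sane and the insane way side by side, assuming Go’s database/sql with a Postgres driver (table name and DSN are made up):

                            package main

                            import (
                                "database/sql"
                                "log"

                                _ "github.com/lib/pq" // any database/sql driver works the same way
                            )

                            func findUser(db *sql.DB, name string) error {
                                // Insanely wrong: concatenation lets `name` rewrite the query.
                                //   db.Query("SELECT id FROM users WHERE name = '" + name + "'")

                                // Sane: the value is sent separately from the query text,
                                // so it can never be parsed as SQL.
                                rows, err := db.Query("SELECT id FROM users WHERE name = $1", name)
                                if err != nil {
                                    return err
                                }
                                defer rows.Close()
                                return rows.Err()
                            }

                            func main() {
                                db, err := sql.Open("postgres", "postgres://localhost/example?sslmode=disable")
                                if err != nil {
                                    log.Fatal(err)
                                }
                                defer db.Close()

                                if err := findUser(db, "robert'); DROP TABLE users;--"); err != nil {
                                    log.Fatal(err)
                                }
                            }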

                          1. 1

                            I agree on the need for a relational query language that’s better than SQL

                          1. 18

                            The whole damn thing.

                            Instead of having this Frankenstein’s monster of different OSs and different programming languages and browsers that are OSs and OSs that are browsers, just have one thing.

                            There is one language. There is one modular OS written in this language. You can hot-fix the code. Bits and pieces are stripped out for lower powered machines. Someone who knows security has designed this thing to be secure.

                            The same code can run on your local machine, or on someone else’s machine. A website is just a document on someone else’s machine. It can run scripts on their machine or yours. Except on your machine they can’t run unless you let them and they can’t do I/O unless you let them.

                            There is one email protocol. Email addresses can’t be spoofed. If someone doesn’t like getting an email from you, they can charge you a dollar for it.

                            There is one IM protocol. It’s used by computers including cellphones.

                            There is one teleconferencing protocol.

                            There is one document format. Plain text with simple markup for formatting, alignment, links and images. It looks a lot like Markdown, probably.

                            Every GUI program is a CLI program underneath and can be scripted.

                            (Some of this was inspired by legends of what LISP can do.)

                            1. 24

                              Goodness, no - are you INSANE? Technological monocultures are one of the greatest non-ecological threats to the human race!

                              1. 1

                                I need some elaboration here. Why would it be a threat to have everyone use the same OS and the same programming language and the same communications protocols?

                                1. 6

                                  One vulnerability to rule them all.

                                  1. 2

                                    Pithy as that sounds, it is not convincing to me.

                                    Having many different systems and languages, each with its own set of vulnerabilities, in the name of security by obscurity does not sound like a good idea.

                                    I would hope a proper inclusion of security principles while designing an OS/language would be a better way to go.

                                    1. 4

                                      It is not security through obscurity, it is security through diversity, which is a very different thing. Security through obscurity says that you may have vulnerabilities but you’ve tried to hide them so an attacker can’t exploit them because they don’t know about them. This works as well as your secrecy mechanism. It is generally considered bad because information disclosure vulnerabilities are the hardest to fix and they are the root of your security in a system that depends on obscurity.

                                      Security through diversity, in contrast, says that you may have vulnerabilities but they won’t affect your entire fleet. You can build reliable systems on top of this. For example, the Verisign-run DNS roots use a mixture of FreeBSD and Linux and a mixture of bind, unbound, and their own in-house DNS server. If you find a Linux vulnerability, you can take out half of the machines, but the other half will still work (just slower). Similarly, a FreeBSD vulnerability can take out half of them. A bind or unbound vulnerability will take out a third of them. A bind vulnerability that depends on something OS-specific will take out about a sixth.

                                      This is really important when it comes to self-propagating malware. Back in the XP days, there were several worms that would compromise every Windows machine on the local network. I recall doing a fresh install of Windows XP and connecting it to the university network to install Windows update: it was compromised before it was able to download the fix for the vulnerability that the worm was exploiting. If we’d only had XP machines on the network, getting out of that would have been very difficult. Because we had a load of Linux machines and Macs, we were able to download the latest roll-up fix for Windows, burn it to a CD, redo the install, and then do an offline update.

                                      Looking at the growing Linux / Docker monoculture today, I wonder how much damage a motivated individual with a Linux remote arbitrary-code execution vulnerability could do.

                                      1. 1

                                        Sure, but is this an intentional strategy? Did we set out to have Windows and Mac and Linux in order that we could prevent viruses from spreading? It’s an accidental observation and not a really compelling one.

                                        I’ve pointed out my thinking in this part of the thread https://lobste.rs/s/sdum3p/if_you_could_rewrite_anything_from#c_ennbfs

                                        In short, there must be more principled ways of securing our computers than hoping multiple green field implementations of the same application have different sets of bugs.

                                      2. 3

                                        A few examples come to mind though—Heartbleed (which affected anyone using OpenSSL) and Spectre (anyone using the x86 platform). Also, Microsoft Windows for years had plenty of critical exploits because it had well over 90% of the desktop market.

                                        You might also want to look up the impending doom of bananas, because over 90% of bananas sold today are genetic clones (it’s basically one plant) and there’s a fungus threatening to kill the banana market. A monoculture is a bad idea.

                                        1. 1

                                          Yes, for humans (and other living things) the idea of immunity through obscurity (to coin a phrase) is evolutionarily advantageous. Our varied responses to COVID are one such immediate example. It does have the drawback that it makes it harder to develop therapies since we see population specificity in responses.

                                          I don’t buy that we need to employ the same idea in an engineered system. It’s a convenient, back-ported bullet-list advantage of having a chaotic mess of OSes and programming languages, but it certainly wasn’t intentional.

                                          I’d rather have an engineered, intentional robustness to the systems we build.

                                          1. 4

                                            To go in a slightly different direction—building codes. The farther north you go, the steeper roofs tend to get. In Sweden, one needs a steep roof to shed snow buildup, but where I live (South Florida, just north of Cuba) building such a roof would be a waste of resources because we don’t have snow—we just need a shallow angle to shed rain water. Conversely, we don’t need codes to deal with earthquakes, nor does California need to deal with hurricanes. Yet it would be so much simpler to have a single building code in the US. I’m sure there are plenty of people who would love to force such a thing everywhere if only to make their lives easier (or for rent-seeking purposes).

                                            1. 2

                                              We have different houses for different environments, and we have different programs for different use cases. This does not mean we need different programming languages.

                                        2. 2

                                          I would hope a proper inclusion of security principles while designing an OS/language would be a better way to go.

                                          In principle, yeah. But even the best security engineers are human and prone to fail.

                                          If every deployment was the same version of the same software, then attackers could find an exploitable bug and exploit it across every single system.

                                          Would you like to drive in a car where every single engine blows up, killing all inside the car? If all cars are the same, they’ll all explode. We’d eventually move back to horse and buggy. ;-) Having a variety of cars helps mitigate issues other cars have–while still having problems of its own.

                                          1. 1

                                            In this heterogeneous system we have more bugs (assuming the same rate of bugs everywhere) and fewer reports (since there are fewer users per system) and a more drawn out deployment of fixes. I don’t think this is better.

                                            1. 1

                                              Sure, you’d have more bugs. But the bugs would (hopefully) be in different, distinct places. One car might blow up, another might just blow a tire.

                                              From an attacker’s perspective, if everyone drives the same car and the attacker knows that the flaws from one car are reproducible with a 100% success rate, then the attacker doesn’t need to spend time/resources on other cars. The attacker can just reuse the same exploit and continue to rinse, reuse, recycle. All are vulnerable to the same bug. All can be exploited in the same manner, reliably, time after time.

                                              1. 3

                                                To go by the car analogy, the bugs that would be uncovered by drivers rather than during the testing process would be rare ones, like, if I hit the gas pedal and brake at the same time it exposes a bug in the ECU that leads to total loss of power at any speed.

                                                I’d rather drive a car a million other drivers have been driving than drive a car that’s driven by 100 people. Because over a million drivers it’s much more likely someone hits the gas and brake at the same time and uncovers the bug which can then be fixed in one go.

                                  2. 3
                                    1. 1

                                      Yes, that’s probably the LISP thing I was thinking of, thanks!

                                    2. 2

                                      I agree completely!

                                      We would need to put some safety measures in place, and there would have to be processes defined for how you go about suggesting/approving/adding/changing designs (that anyone can be a part of), but otherwise, it would be a boon for the human race. In two generations, we would all be experts in our computers and systems would interoperate with everything!

                                      There would be no need to learn new tools every X months. The UI would be familiar to everyone, and any improvements would be forced to go through human testing/trials before being accepted, since it would be used by everyone! There would be continual advancements in every area of life. Time would be spent on improving the existing experience/tool, instead of recreating or fixing things.

                                      1. 2

                                        I would also like to rewrite most stuff from the ground up. But monocultures aren’t good. Orthogonality in basic building blocks is very important. And picking the right abstractions to avoid footguns. Some ideas, not necessarily the best ones:

                                        • proven correct microkernel written in rust (or similar borrow-checked language), something like L4
                                        • capability based OS
                                        • no TCP/HTTP monoculture in networks (SCTP? pubsub networks?)
                                        • are our current processor architectures anywhere near sane? could safe concurrency be encouraged at a hardware level?
                                        • fewer walled gardens and less centralisation
                                        1. 2

                                          proven correct microkernel written in rust (or similar borrow-checked language), something like L4

                                          A solved problem. seL4, including support for capabilities.

                                          1. 5

                                            seL4 is proven correct by treating a lot of things as axioms and by presenting a programmer model that punts all of the bits that are difficult to get correct to application developers, making it almost impossible to write correct code on top of. It’s a fantastic demonstration of the state of modern proof tools; it’s a terrible example of a microkernel.

                                            1. 2

                                              FUD unless proven otherwise.

                                              Counter-examples exist; seL4 can definitely be used, as demonstrated by many successful uses.

                                              The seL4 foundation is getting a lot of high profile members.

                                              Furthermore, Genode, which is relatively easy to use, supports seL4 as a kernel.

                                        2. 2

                                          Someone wrote a detailed vision of rebuilding everything from scratch, if you’re interested. 1

                                            1. 11

                                              I never understood this thing.

                                              1. 7

                                                I think that is deliberate.

                                            2. 1

                                              And one leader to rule them all. No, thanks.

                                              1. 4

                                                Well, I was thinking of something even worse - design by committee, like for electrical stuff, but your idea sounds better.

                                              2. 1

                                                We already have this, dozens of them. All you need to do is point guns at everybody and make them use your favourite. What a terrible idea.

                                              1. 16

                                                Rewrite the whole Unix tool space to emit and accept ND-JSON instead of idiosyncratic formats.
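
                                                As a rough sketch of the idea, here is a toy ls that emits one JSON object per line (the field names are invented, not any proposed standard):

                                                  package main

                                                  import (
                                                      "encoding/json"
                                                      "log"
                                                      "os"
                                                  )

                                                  type entry struct {
                                                      Name  string `json:"name"`
                                                      Size  int64  `json:"size"`
                                                      IsDir bool   `json:"is_dir"`
                                                      Mode  string `json:"mode"`
                                                  }

                                                  func main() {
                                                      dir := "."
                                                      if len(os.Args) > 1 {
                                                          dir = os.Args[1]
                                                      }
                                                      entries, err := os.ReadDir(dir)
                                                      if err != nil {
                                                          log.Fatal(err)
                                                      }
                                                      enc := json.NewEncoder(os.Stdout) // Encode writes one value per line
                                                      for _, de := range entries {
                                                          info, err := de.Info()
                                                          if err != nil {
                                                              continue
                                                          }
                                                          _ = enc.Encode(entry{
                                                              Name:  de.Name(),
                                                              Size:  info.Size(),
                                                              IsDir: de.IsDir(),
                                                              Mode:  info.Mode().String(),
                                                          })
                                                      }
                                                  }

                                                Anything downstream could then filter records with a JSON-aware tool instead of guessing at column widths.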

                                                1. 5

                                                  I’m a HUGE fan of libucl. It supports several constructs, including JSON.

                                                  1. 3

                                                    I was thinking YAML in that it’s still human readable at the console but also obviously processable, and JSON is a subset.

                                                    1. 7

                                                      YAML is a terrible format that should literally never be used. :-) There’s always a better choice than YAML.

                                                      1. 3

                                                        Strict yaml, without the stupid gotchas

                                                        1. 2

                                                          I don’t think yaml is the best choice for this. Most of the time users will want to see tables rather than a nested format like yaml. I guess it is a bit nicer to debug than JSON but ideally the user would never see it. If it was going to hit your terminal it would be rendered for human viewing.

                                                          Yaml is also super complex and has a lot of extensions that are sparsely supported. JSON is a much better format for interop.

                                                          On the other hand the possibility of passing graphs between programs is both intriguing and terrifying.

                                                          1. 3

                                                            I was recently trying to figure out how, from inside a CLI tool I’m building, to determine whether the program was outputting to a screen, for a user to view, or to a pipe, for another program to consume… Turns out it’s not as straightforward as I thought. I do believe bat, the modern Rust version of cat, can do this. Because my thought is… why not both?
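
                                                            For what it’s worth, the check itself is small; a sketch in Go, assuming the golang.org/x/term package:

                                                              package main

                                                              import (
                                                                  "fmt"
                                                                  "os"

                                                                  "golang.org/x/term"
                                                              )

                                                              func main() {
                                                                  // True when stdout is a terminal, false when it's a pipe or a file.
                                                                  if term.IsTerminal(int(os.Stdout.Fd())) {
                                                                      fmt.Println("render pretty output for a human")
                                                                  } else {
                                                                      fmt.Println("emit machine-readable output")
                                                                  }
                                                              }

                                                            Running it directly vs. piping it through cat exercises the two branches.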

                                                        2. 1

                                                          this is a good idea and wouldn’t even be that much work

                                                        1. 12

                                                            A store-and-forward messaging system. I often find myself in areas of dodgy connectivity and would love to have a way to “catch up” when I’m connected, head back out, and then queue up responses for when I have some connectivity back. I’d also like some QoS for messages that are important to me (like from my partner).

                                                          1. 5

                                                            This might not be quite what you’re looking for, but Scuttlebutt was built to do exactly that.

                                                            From memory, the creator did quite a lot of sailing, and wanted a way to receive information via mesh, potentially from other boats, who could propagate data back to shore.

                                                            1. 1

                                                              I am also very interested in this.

                                                              1. 1

                                                                  Matrix, p2p Matrix in particular, should handle that. It does allow for a modern NNTP replacement.

                                                                  1. 3

                                                                    Also https://matomo.org/ and https://usefathom.com/ are popular alternatives

                                                                  1. 8

                                                                    I once worked for a company where they managed to create about half a million subversion commits in just 2 or 3 years, with about 3 developers working on it. I’ll leave it as an exercise to guess how they managed to do that :-)

                                                                    1. 4

                                                                      By my calculation (500,000 / 3 years / 3 devs / 225 days worked per year) that’s about 250 commits per day, per developer. That’s one every 2 minutes from each developer. Am I doing the math right? That’s gotta be auto-commit-on-save or auto-commit-every-2-mins or similar? Do tell…

                                                                      1. 9

                                                                        Those numbers sound about right; and no, it wasn’t automatic, editor saves did create commits as @ptman mentioned, the reason being some rather unconventional use of SVN: commits were how you got your changes to your development environment. Any change you wanted to show up you had to commit.

                                                                        This comment turned in to an entire story, which I posted on my website; the most relevant bits:

                                                                        They had a server in the office which ran the SVN server and OpenVZ to give every developer their own container running Apache and PHP, and that’s what you would use for development. How do you get your code to that container? NFS? SMB? FTP? Nah, that’s so boring! SVN is a much better tool for this!

                                                                          The way this worked is that on every push the SVN server would run this PHP script to copy the changes to the right container based on the committer, the idea being that everyone only got their own changes and not other people’s. You didn’t work off a branch – branches are for losers – you would always commit to the main trunk branch, which was the only branch people used. The script would look at the committer and copy all the files that commit touched to that person’s container. Every once in a while you manually updated your directory to get other people’s changes. Two people working on the same file at the same time was … unwise.

                                                                          That PHP script was indecipherable, with umpteen levels of nesting. No one dared to touch it because it “mostly worked”, some of the time anyway. If it was the third Tuesday of the month.

                                                                        Every little change you wanted to see you had to commit. Add a debug print? Commit. Improve that print? Commit. Found the bug and fixed it? Commit. Remove that print again? Commit. Fix up the comment? Commit. People had their editors set up to commit and push to SVN on save. You could easily rack up hundreds of commits on a single day.

                                                                        I don’t recall exactly how many people worked on this and for how long, I think it was about 3-4 developers over a timespan of 2-3 years before I joined, maybe even shorter. It was a pretty small company. I do distinctly remember reaching the half a million mark.

                                                                        It really was a subversion of subversion.

                                                                          I worked like this for a few days before I told them to give me access to the server so I could set up SMB, because this was just unworkable for me. Aside from mucking up your SVN log, you need to run a command every time, which is just annoying (I would stick that in a Vim autocmd now, but I didn’t know about those back then, and since the machine they gave me was Windows I didn’t really know how to do file watching either). The reason it took me that long was because this was my first real programming job, and I was a bit too insecure to ask sooner 😅 It also made me doubt myself: “Am I not understanding SVN correctly? Is this normal? Do all companies work like this?” Turns out I did understand it correctly, that it’s not, and that no one does.

                                                                        I migrated the entire shebang to Vagrant and mercurial about a year later. I didn’t bother retaining the subversion history. rm -rf .svn; hg init; hg ci -m 'import svn code'; hg push and I called it a day.

                                                                        1. 2

                                                                          Subversion supported webdav. Maybe they were using autosaving editors and saves created commits. So were there any commit messages at all?

                                                                        2. 1

                                                                            Never cleaning their history and auto-committing changes every 10m?

                                                                        1. 4

                                                                          Product release fluff page. Neat, but basically advertising.

                                                                          1. 3

                                                                              And it’s not even a standalone product, just a feature of Tailscale, so I can’t even get it without getting some other thing!

                                                                            1. 4

                                                                              I think it has some nice points about how changing the network architecture makes applications easier. I wonder how easy it would be for https://github.com/tonarino/innernet or https://github.com/juanfont/headscale to add taildrop or something similar.

                                                                          1. 14

                                                                            Whenever someone wants to send me a file […] they could just send me a magnet link

                                                                              Ha. Well. Other than everything in the BitTorrent world being designed for mass sharing and feeling like overkill for one-to-one “beaming”, there are two elephants in the room, neither of them having anything to do with the “piracy perception”. First, the possibility of both sides being behind awful cgNAT/NAT444, making p2p connections impossible. Second, the privacy thing: do you even want your recipients to know your home IP address? Probably not always.

                                                                            1. 3

                                                                              As a particular example of the Inverse DRM Problem, I don’t think that it’s possible to receive a file over an addressed switching network without telling somebody some portion of your address. (Recall that the DRM Problem is that you cannot create a secure computing enclave within somebody else’s machine and keep the inputs and outputs private from them. The Inverse DRM Problem is that you cannot exist as a single small node in a homogenous network without projecting most of your information across your neighbors.) For example, I usually recommend Magic Wormhole for transferring files, but it exposes addressing information to a trusted third-party intermediate server.

                                                                              1. 4

                                                                                  Yeah, I’m not talking about intermediaries though, only specifically the recipients. If you want “Snowden level” security, use Tor, but for casually sending something to someone I only know online, all I need is some intermediary to just “abstract” the content away one step from me.

                                                                              2. 2

                                                                                Does UDP hole punching not work behind cgNAT?

                                                                                edit: I know port forwarding is impossible but I can’t find anything on hole punching not working.

                                                                                1. 3

                                                                                    I recall a networking company building an overlay network so people could communicate easily (sending files and the like to each other), no matter their machine. Apparently, a good percentage of their users have to pass through their centralised servers (used as a fallback), because even hole punching didn’t work.

                                                                                    Besides, hole punching requires a handshake server to begin with. What we would really like is a direct peer-to-peer connection, and that’s just flat-out impossible if both sides are behind NAT.
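
                                                                                    To make the mechanism concrete, a rough Go sketch of the classic UDP hole punch, with the handshake server replaced by a command-line argument (normally you’d learn the peer’s public ip:port from a rendezvous service); it only works when both NATs reuse the outbound mapping, which symmetric NAT and CGNAT often don’t:

                                                                                      package main

                                                                                      import (
                                                                                          "fmt"
                                                                                          "log"
                                                                                          "net"
                                                                                          "os"
                                                                                          "time"
                                                                                      )

                                                                                      func main() {
                                                                                          if len(os.Args) != 2 {
                                                                                              log.Fatalf("usage: %s <peer-public-ip:port>", os.Args[0])
                                                                                          }
                                                                                          peer, err := net.ResolveUDPAddr("udp", os.Args[1])
                                                                                          if err != nil {
                                                                                              log.Fatal(err)
                                                                                          }

                                                                                          // Bind one local port and reuse it, so the NAT creates a single
                                                                                          // mapping that the peer's packets can come back through.
                                                                                          conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 0})
                                                                                          if err != nil {
                                                                                              log.Fatal(err)
                                                                                          }
                                                                                          defer conn.Close()

                                                                                          // Keep sending: the first packets punch the hole, later ones keep
                                                                                          // the mapping alive so it doesn't expire.
                                                                                          go func() {
                                                                                              for {
                                                                                                  conn.WriteToUDP([]byte("ping"), peer)
                                                                                                  time.Sleep(2 * time.Second)
                                                                                              }
                                                                                          }()

                                                                                          buf := make([]byte, 1500)
                                                                                          for {
                                                                                              n, from, err := conn.ReadFromUDP(buf)
                                                                                              if err != nil {
                                                                                                  log.Fatal(err)
                                                                                              }
                                                                                              fmt.Printf("got %q from %s\n", buf[:n], from)
                                                                                          }
                                                                                      }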

                                                                                  1. 5
                                                                                    1. 3

                                                                                      Yes, that’s the one:

                                                                                      This is a good time to have the awkward part of our chat: what happens when we empty our entire bag of tricks, and we still can’t get through? A lot of NAT traversal code out there gives up and declares connectivity impossible. That’s obviously not acceptable for us; Tailscale is nothing without the connectivity.

                                                                                      We could use a relay that both sides can talk to unimpeded, and have it shuffle packets back and forth. But wait, isn’t that terrible?

                                                                                      Sort of. It’s certainly not as good as a direct connection, but if the relay is “near enough” to the network path your direct connection would have taken, and has enough bandwidth, the impact on your connection quality isn’t huge. There will be a bit more latency, maybe less bandwidth. That’s still much better than no connection at all, which is where we were heading.

                                                                              1. 9

                                                                                Asking before a website could set a cookie is actually how browsers from the 90s worked. Lynx still works like that by default.

                                                                                The problem with asking the browser is that … every website will just ask this. Even for something as pointless and intrusive as notifications every damn fucking website will ask you to send those horrible things. I have the notification permissions set to just “always deny” in Firefox.

                                                                                And if every website (including Lobsters, for example) asked for cookie permissions, people would just click “yes”. I would just click “yes”; life is short, I have better things to do than review 200 cookies every day. Besides, there are many more tracking techniques than just “cookies”, and the focus on just that is rather outdated.

                                                                                I’ve been trying to come up with a better alternative ever since the EPrivacy directive was introduced, and thus far I haven’t really managed to think of something better. I think the GDPR is a step in the right direction as it focuses less on “information stored in the browser” and more on “identifiable information”.

                                                                                Enforcement is an issue, but this is a fixable issue.

                                                                                1. 9

                                                                                  Asking before a website could set a cookie is actually how browsers from the 90s worked.

                                                                                  But that’s not what the law demands. Lobsters has no cookie popup. Neither does GitHub. Even though both sites use cookies.

                                                                                  And it’s not because either of them are flouting the law, but because they’re not using the cookies for tracking. The browser can’t possibly know if a cookie is used for tracking, or for authentication, or even potentially for both. That’s one thing that makes legal solutions different from technical ones; the police have permission to check what the server side is doing, while your browser does not.
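
                                                                                  To make that concrete, a tiny sketch (names and values made up): the two Set-Cookie headers below are structurally identical as far as the browser is concerned; nothing in the protocol marks one as authentication and the other as tracking:

                                                                                    package main

                                                                                    import (
                                                                                        "fmt"
                                                                                        "net/http"
                                                                                    )

                                                                                    func handler(w http.ResponseWriter, r *http.Request) {
                                                                                        // Keeps you logged in.
                                                                                        http.SetCookie(w, &http.Cookie{Name: "session_id", Value: "abc123", HttpOnly: true})
                                                                                        // Follows you around. Same mechanism, same syntax.
                                                                                        http.SetCookie(w, &http.Cookie{Name: "visitor_id", Value: "xyz789", MaxAge: 86400 * 365})
                                                                                        fmt.Fprintln(w, "hello")
                                                                                    }

                                                                                    func main() {
                                                                                        http.HandleFunc("/", handler)
                                                                                        http.ListenAndServe(":8080", nil)
                                                                                    }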

                                                                                  1. 7

                                                                                    I get your point here, but can we please not further spread the myth that “the police” go about enforcing laws like this? A better phrase may be “the courts” or more simply “the state”

                                                                                    1. 1

                                                                                      Well, sure; but the article was talking about asking for permission to set any cookie, as I understood it anyway. I’m not sure it’s realistic to show prompts only for “bad” cookies; that will only work if it’s enforced, and if the (current) law is enforced by the regulatory bodies then this entire proposal is a bit of a moot point, as regular “cookie popups” will work pretty much identically.

                                                                                    2. 2

                                                                                      https://www.goatcounter.com/ is certainly a step in the right direction!

                                                                                      1. 1

                                                                                        A saner default would just be to limit cookies to session duration and auto-delete them when all tabs from that origin are closed. I have the Firefox extension “Cookie AutoDelete” set to do this. If you visit a website for 30 seconds, you get cookies for 30 seconds.

                                                                                      The EU cookie law was insane from the beginning because browsers give people the power to control this in the first place. It would have made sense for something like, for example, facial recognition in a shopping mall, because that’s not something you have the power to prevent. It treats “setting cookies” as though it’s something that bypasses browser controls, when literally no cookie can be set without the browser agreeing to it. The article above even suggests something resembling a browser permission request, but this misses the point that this should always have been (and always has been) the role of the browser, and not some website-implemented website-specific UI.

                                                                                        1. 5

                                                                                          Most users don’t want their login and settings cookies to be deleted when they close a window; they just never want to have Google Analytics enabled, regardless of whether they keep their session open or not.

                                                                                          1. 3

                                                                                            I use Cookie AutoDelete as well, but I don’t think it’s really an option “for the masses”, at least not with the current implementation/UI. An improved version with a friendlier non-technical UI could perhaps be an option though.

                                                                                            But this still won’t prevent other types of fingerprinting/tracking, so it’s a very limited solution anyway. The more prevalent cookie blocking becomes, the more incentive there is to circumvent it and use other methods. This is why I don’t think these kind of technical means are really the road forward, unless all fingerprinting/tracking becomes impossible/hard, and that’s a lot easier said than done because a lot of these things rely on pretty essential features.

                                                                                        1. 8

                                                                                          Why not jump from old and quirky IRC protocol to e.g. Matrix? Also, matrix is an open federation, so this kind of grab shouldn’t be possible.

                                                                                          1. 15

                                                                                            We are ourselves old and quirky.

                                                                                            Freenode had a ~25 year run, which is significantly better than the median free tier on an online service.

                                                                                            1. 5

                                                                                              It is indeed quite the accomplishment. But IRC is clearly on the decline.

                                                                                              1. 1

                                                                                            IRC works well with very slow connections like dialup and archaic machines, but unfortunately not with unstable connections. My main complaint is the lack of at least a small chat log without using third-party services/software: any small disruption will make me lose messages on rural internet.

                                                                                            2. 7

                                                                                              My main issue with Matrix is the lack of a client that I can run easily on extremely low-powered hardware. Just about all the major, well-supported clients are built on Electron. Compare that to IRC: you can have useful IRC client in just about ~5k lines of C (yes, I’ve written my own IRC client).

                                                                                              1. 4

                                                                                                I have to wonder if this is really the limiting factor for IRC - if we’re measuring protocols based on what you can write on a coke can, IRC might win, but is that what people actually want?

                                                                                                1. 2

                                                                                                  i do. the irc client i use is very fast and configurable. i don’t want to run a full web browser and 100000 tons of javascript just to exchange text with people. i currently use the weechat-matrix plugin for weechat to access matrix but it is unmaintained and missing many features i guess.

                                                                                                  1. 4

                                                                                                    weechat-matrix isn’t unmaintained - it’s stable. the author is prioritising matrix-rust-sdk, but weechat should work great.

                                                                                                    1. 2

                                                                                                      Yeah ok, ‘unmaintained’ is a little strong. My point is, nothing new is being added to improve support for matrix (multiline messages, etc) and there are lots and lots of quirks, having used it daily for many months now. And the author has made it clear they have no interest in improving the existing plugin while they go off and RWIR..

                                                                                                      It’s “good enough” for me, but a far cry from supporting everything matrix has to offer. That’s the case for almost all matrix clients though, as I’m sure you are aware.

                                                                                                    2. 3

                                                                                                      Fwiw I heard yesterday about https://github.com/poljar/weechat-matrix-rs . When it’s cooked it might be a good way for me to try Matrix seriously.

                                                                                                      1. 1

                                                                                                        Yeah, that has been around for a bit, and seems to be progressing along slowly. I don’t think it’s anywhere close to replacing the old python version of the plugin in its current state, and seems to be a long ways off from being there.

                                                                                                        1. 1

                                                                                                          Ah, good to know, thank you. Will keep an eye on it :)

                                                                                                    1. 1

                                                                                                      https://github.com/tulir/gomuks is roughly 15k lines of Go (+ non-trivial LOC from dependencies of course).

                                                                                                    2. 4

                                                                                                      try hosting matrix

                                                                                                      1. 3

                                                                                                        What makes you think I haven’t?

                                                                                                        1. 5

                                                                                                          Everyone I’ve talked to personally who’s tried this has nothing but horror stories when it comes to running their own homeserver. The consensus I’ve heard is that it’s only practical if you have staff to look after it or if you prevent your channels from federating.

                                                                                                          I admire the vision but they have a long way to go before actually realizing the benefits of a decentralized system.

                                                                                                          1. 4

                                                                                                            I’ve had very few issues running it myself. I have Synapse, Postgres, and Nginx running along with IRC, Discord, and Slack app services on a 2 GB VPS. Other than the occasional upgrade, I’ve had minimal issues. I manage everything through my service manager, so usually it’s as simple as running an upgrade task and then restarting the service. That said, I have a lot of experience running web services, so that might contribute.

                                                                                                            1. 2

                                                                                                              I’ve configured synapse by hand and using https://github.com/spantaleev/matrix-docker-ansible-deploy/ . Both work well provided you read the documentation.

                                                                                                            2. 0

                                                                                                              you are using matrix.org

                                                                                                              1. 3

                                                                                                                I have an account on matrix.org, true. That doesn’t prevent me from having accounts elsewhere. A matrix.org account is sometimes useful.

                                                                                                                1. 3

                                                                                                                  I run my own too (@sumit:battlepenguin.im). It works pretty well, and I even have bridges working. Overall I think it’s way easier to stand up than XMPP (everything is over HTTP; there is that weird federated port but you can now use a normal LetsEncrypt cert and stick it behind a Traefik or HAProxy frontend).

                                                                                                    I will say, scaling it would be difficult. I’ve heard other people complain about larger Matrix servers with a lot of users, and matrix.org has had issues with theirs even after multiple huge refactors that dropped CPU usage. I think Matrix would be way better if there were multiple server implementations, like ActivityPub has (Mastodon, Pleroma, PeerTube, etc.), but it looks like development on the Go implementation is still slow going.

                                                                                                          2. 1

                                                                                                            Yes, go to Matrix, let the eternal september end here.

                                                                                                          1. 4

                                                                                                            Some interesting comments from the project lead over at HN.

                                                                                                            And here’s the user-facing post. I created a space for Haskell here (it includes the IRC bridge to #haskell).

                                                                                                            1. 1

                                                                                                              I created a space for Haskell here (it includes the IRC bridge to #haskell).

                                                                                                              URL changed to: https://matrix.to/#/#haskell-space:matrix.org

                                                                                                              1. 1

                                                                                                                How did you set the URL of the space? Been wanting to do this for Pikelet, which is currently a random hash…

                                                                                                                1. 1

                                                                                                                  You add an alias for the room that is the space. The UI is probably missing currently, but you can use the API. https://matrix.org/docs/spec/client_server/r0.6.1#put-matrix-client-r0-directory-room-roomalias
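
                                                                                            A rough sketch of that call in Go, going by the spec link above (homeserver, alias, room ID, and access token are placeholders; double-check the request body against the spec before relying on it):

                                                                                              package main

                                                                                              import (
                                                                                                  "bytes"
                                                                                                  "fmt"
                                                                                                  "log"
                                                                                                  "net/http"
                                                                                                  "net/url"
                                                                                              )

                                                                                              func main() {
                                                                                                  homeserver := "https://matrix.example.org"
                                                                                                  alias := url.PathEscape("#my-space:example.org") // escapes the leading '#'
                                                                                                  roomID := "!abcdefg:example.org"                 // the space's internal room ID
                                                                                                  token := "YOUR_ACCESS_TOKEN"

                                                                                                  body := bytes.NewBufferString(fmt.Sprintf(`{"room_id": %q}`, roomID))
                                                                                                  req, err := http.NewRequest(http.MethodPut,
                                                                                                      homeserver+"/_matrix/client/r0/directory/room/"+alias, body)
                                                                                                  if err != nil {
                                                                                                      log.Fatal(err)
                                                                                                  }
                                                                                                  req.Header.Set("Authorization", "Bearer "+token)
                                                                                                  req.Header.Set("Content-Type", "application/json")

                                                                                                  resp, err := http.DefaultClient.Do(req)
                                                                                                  if err != nil {
                                                                                                      log.Fatal(err)
                                                                                                  }
                                                                                                  defer resp.Body.Close()
                                                                                                  fmt.Println(resp.Status)
                                                                                              }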

                                                                                                                  1. 1

                                                                                                                    Is there a way to send this request from Element in the browser, or do I need to do some more involved shenanigans for this?

                                                                                                                    1. 5

                                                                                                                      More involved shenanigans, sadly. Adding aliases to Spaces is top of the list for the next wave of work in the beta though.

                                                                                                                      1. 1

                                                                                                                        No worries, looking forward to it! Thanks for your efforts!

                                                                                                            1. 1

                                                                                                              I have a hard time figuring out how to get Matrix set up and working, like what the backend and frontend are and how they work. Am I not understanding what it is?

                                                                                                              1. 10

                                                                                                                TL;DR: If you want to try it out, download the Element client and let it walk you through making an account.

                                                                                                                You’ll have to choose a Matrix homeserver (like an email provider). If you won’t use it that frequently, the free matrix.org homeserver is good but slow. For more serious use, consider a subscription to Element Matrix Services, where they host a homeserver for you. Or you can try to self-host Synapse. I wouldn’t.

                                                                                                                Other homeservers are being developed right now (Conduit is pretty cool). But none are ready for production just yet. And unfortunately the choice of homeserver is still important because your account will be tied to it. In the future, the “multi-homed accounts” feature will make this initial choice less important (hopefully).


                                                                                                                There are two basic components to understand if you’re just getting into Matrix, and they’re best understood by analogy to email, which is really the only popular federated protocol today.

                                                                                                                There’s the Matrix homeserver, which is like your email provider. It’s where the messages and account information are stored. It’s what actually takes care of sending and receiving your messages. It’s where you sign up. Multiple people can have an account on the same homeserver. Synapse is the most popular homeserver right now; it’s developed by the team that founded Matrix, is considered slow (it’s written in Python), and is slated to be replaced.

                                                                                                                Then there are Matrix clients. Just like email, Matrix is standardized: you can use any email client to get/send your Gmail, and you can use any Matrix client to get/send Matrix messages. Element is the most popular Matrix client (again made by the team that created Matrix). It’s the most feature-complete by far. It’s written in Electron, so it’s bloated, but it works fairly well.
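
                                                                                                                As a small illustration of that separation (my own sketch, not something from the Matrix docs): anything that speaks the standard client-server API can act as a client against any homeserver. For example, with the third-party matrix-nio Python library, and placeholder homeserver/user/room values:

                                                                                                                    import asyncio

                                                                                                                    from nio import AsyncClient  # pip install matrix-nio


                                                                                                                    async def main():
                                                                                                                        # The homeserver plays the "email provider" role; this script is just another client.
                                                                                                                        client = AsyncClient("https://matrix.org", "@alice:matrix.org")  # placeholders
                                                                                                                        await client.login("correct horse battery staple")               # placeholder password

                                                                                                                        # Assumes the account has already joined this (unencrypted) room.
                                                                                                                        await client.room_send(
                                                                                                                            room_id="!someroom:matrix.org",  # placeholder room ID
                                                                                                                            message_type="m.room.message",
                                                                                                                            content={"msgtype": "m.text", "body": "Hello from a non-Element client"},
                                                                                                                        )
                                                                                                                        await client.close()


                                                                                                                    asyncio.run(main())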

                                                                                                                1. 3

                                                                                                                  I wouldn’t.

                                                                                                                  Can you elaborate? We use a Synapse server at work and it works.

                                                                                                                  1. 1

                                                                                                                    I should have been clearer. I meant that I don’t advise trying to self-host the homeserver at all. Self-hosting anything is a ton of work if done properly. Timely security updates, migrations, frequent (and tested) backups, and reliability all come to mind as things I personally don’t want to have to worry about. Element Matrix Services seems like a good deal for just a few people.

                                                                                                                    1. 3

                                                                                                                      But these challenges aren’t at all Synapse-specific, are they? Updates, migrations and proper backups are something you have to do with any server that you self-host. And after running a homeserver for a few years, the only migration I ever had to do was from an older to a newer PostgreSQL version, by simply dumping the whole database and reading it back in. All schema migrations are done automatically by Synapse and I never had any problems with that. Hosting a Matrix server with Synapse is so easy if you compare it e.g. to hosting your own email server. And Synapse really is battle-tested because it’s dogfooded at the huge matrix.org homeserver instance.

                                                                                                                      1. 1

                                                                                                                        No they’re definitely not specific to Synapse. That was pretty much my point.

                                                                                                                        And I know Synapse has put a ton of work into being easy to deploy. But I still won’t ever recommend managing infrastructure to anyone. It’s awesome that Synapse makes it as easy as possible for people like us to self-host it, but $5/month is probably well worth the lack of headache for most people.

                                                                                                                  2. 2

                                                                                                                    As far as I can tell, none of the homeserver implementations are ready for self-hosting – unless you disable federation, and then what’s the point?

                                                                                                                    1. 3

                                                                                                                      I’m not sure where you’re getting that impression. I’m hosting two different Synapse instances myself. I just update them when new releases come out; it’s been relatively painless.

                                                                                                                      1. 1

                                                                                                                        Can you please give a reason why you don’t think Synapse is ready for self-hosting? I’ve been doing it for years with federation enabled and I’ve never had any serious problems.

                                                                                                                        1. 1

                                                                                                                          Sure. I’ve heard again and again that if you enable federation on Synapse and someone joins a large room, the server bogs down and starts chewing through resources. Has that changed?

                                                                                                                          Also note that I’d be running it on an old laptop or a raspberry pi, just like I would run any chat server – IRC, Jabber, etc.

                                                                                                                    2. 1

                                                                                                                      .. I mean, probably? What exactly are you struggling with?

                                                                                                                      1. 2

                                                                                                                        Uh oh, now I feel even dumber. The main website has information about something called Synapse, and there is “Element”, which I believe is a frontend, but how do you install a Matrix server and start using it?

                                                                                                                        1. 13

                                                                                                                          My attempt at clarification:

                                                                                                                          • Matrix is a protocol for a federated database, currently primarily used for chat
                                                                                                                          • Synapse is the reference home server (Dendrite, Conduit, Construct, etc. are other alternatives)
                                                                                                                          • Element is the reference client (there are versions of Element for the web (Electron), Android and iOS)
                                                                                                                          • A user account is (currently) local to a home server
                                                                                                                          • A chat room is federated and not located on a specific home server. The state is shared across the home servers of all users that have joined the room (see the sketch after this list).
                                                                                                                          • There are P2P tests where the client and home server are bundled together on e.g. a mobile phone
                                                                                                                          • Spaces are a way to organize rooms. Spaces are just a special case of a room and can include other rooms and spaces.
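
                                                                                                                          To make the “a room isn’t located on one server” point concrete, here’s a small sketch of my own (not from the docs): resolving a public room alias through the client-server API returns an opaque room ID plus a list of servers that participate in the room. The alias and homeserver below are just examples.

                                                                                                                              import requests
                                                                                                                              from urllib.parse import quote

                                                                                                                              homeserver = "https://matrix.org"   # any reachable homeserver can answer this (example)
                                                                                                                              alias = "#matrix:matrix.org"        # example public room alias

                                                                                                                              # GET /_matrix/client/r0/directory/room/{roomAlias} needs no authentication
                                                                                                                              resp = requests.get(f"{homeserver}/_matrix/client/r0/directory/room/{quote(alias)}")
                                                                                                                              resp.raise_for_status()
                                                                                                                              data = resp.json()

                                                                                                                              print("room id:", data["room_id"])                    # server-agnostic room ID
                                                                                                                              print("participating servers:", data["servers"][:5])  # the room lives on all of these
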
                                                                                                                          1. 4

                                                                                                                            Thank you! That clarifies a lot. I was stuck thinking Matrix was the server. So Matrix is a protocol for a federated database; that’s very interesting and cool.

                                                                                                                            1. 1

                                                                                                                              Is it legitimate for me, as a user rather than someone who’s interested in the infrastructure, to just think of Matrix as a finer-grained version of IRC, where instead of (mainly) a few big networks there are more, smaller networks, and instead of joining e.g. #linux on freenode, I’d join e.g. #linux:somewhere …

                                                                                                                              Would I now discover ‘rooms’ by starting from a project’s website, for example, rather than just joining some set of federated servers and looking for rooms with appropriate names?

                                                                                                                              I just searched for ‘linux room matrix’ and the top hit was an Arch Linux room #archlinux:archlinux.org

                                                                                                                              (I don’t really want to join a general Linux room - just using it as an example)

                                                                                                                              1. 3

                                                                                                                                Well, generally NO. Almost all Matrix home servers are joined together via the federated protocol. So if you join #archlinux:archlinux.org on homeserver A, and your BFF uses homeserver B, you will still see each other and communicate with each other in that room as if you were both on homeserver A.

                                                                                                                                One COULD create a non-federated home server, but that’s not the typical use case, and the reasons to do so would be odd. If you are building, for example, a server for internal chat @ $WORK, using Matrix is probably a terrible idea. Zulip, Mattermost, etc. are all better solutions for that use case.

                                                                                                                                1. 2

                                                                                                                                  Discovering rooms is currently a bit problematic, as room directories are per server. But a client can query room directories from any server (that allows public queries). Spaces will probably help a lot with room discovery, as they can form deep hierarchies.
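
                                                                                                                                  For example, here’s a sketch of the public rooms API (the homeserver and token below are placeholders; some servers require authentication for this endpoint). A client on one homeserver can page through another server’s public room directory like this:

                                                                                                                                      import requests

                                                                                                                                      homeserver = "https://matrix.example.org"   # placeholder: the server your account lives on
                                                                                                                                      access_token = "YOUR_ACCESS_TOKEN"          # placeholder

                                                                                                                                      # GET /_matrix/client/r0/publicRooms with ?server= asks another server's directory
                                                                                                                                      resp = requests.get(
                                                                                                                                          f"{homeserver}/_matrix/client/r0/publicRooms",
                                                                                                                                          params={"server": "archlinux.org", "limit": 20},
                                                                                                                                          headers={"Authorization": f"Bearer {access_token}"},
                                                                                                                                      )
                                                                                                                                      resp.raise_for_status()

                                                                                                                                      for room in resp.json().get("chunk", []):
                                                                                                                                          print(room.get("canonical_alias") or room["room_id"], "-", room.get("name"))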

                                                                                                                              2. 8

                                                                                                                                I did a video to try to explain it last year (which I should update, but it’s still usable, even if it calls Element by its old name of Riot): https://www.youtube.com/watch?v=dDddKmdLEdg

                                                                                                                                1. 3

                                                                                                                                  I recommend starting off by just creating an account at app.element.io and using the default homeserver, so you don’t have to host anything yourself.

                                                                                                                                  1. 2

                                                                                                                                    Synapse is a server implementation and Element is one of the clients.

                                                                                                                                    Installing Synapse: https://matrix.org/docs/guides/installing-synapse

                                                                                                                                    1. 1

                                                                                                                                      Uh oh, now I feel even dumber.

                                                                                                                                      Don’t. The Matrix project is pretty terrible at both naming and the new user experience.

                                                                                                                                      1. 2

                                                                                                                                        Not trying to hate on them or anything. @ptman’s comment above really helped.

                                                                                                                                        1. 1

                                                                                                                                          Yeah, I wish them every success - but what I guess I’ll call the introductory surface of the system is fairly painful.

                                                                                                                                1. 6

                                                                                                                                  Redit is a text editor for the Amiga. It’s fairly new – the first version came out in 2014 – but one of its most interesting features is that it works on versions of AmigaOS all the way back to at least 1.2 from 1986!

                                                                                                                                  The source is available, and it’s a really well-done piece of software.

                                                                                                                                  1. 0

                                                                                                                                    The source is available

                                                                                                                                    For some definition of available. It’s neither Open Source nor Free Software, as it doesn’t have the four freedoms.

                                                                                                                                        Unmodifiziert darf dieses Archiv beliebig kopiert und verbreitet werden.
                                                                                                                                        Es dürfen jedoch keine Binaries verbreitet werden, welche auf Grundlage
                                                                                                                                        dieses Quellcodes entstanden sind.
                                                                                                                                        Bitte fragen Sie mich, wenn Sie den Quellcode darüber hinaus nutzen wollen.

                                                                                                                                    (Translation: “Unmodified, this archive may be copied and distributed freely. However, no binaries built from this source code may be distributed. Please ask me if you want to use the source code beyond that.”)
                                                                                                                                    
                                                                                                                                    1. 5

                                                                                                                                      So, source available, not open source or free software, exactly as stated.

                                                                                                                                  1. 1

                                                                                                                                    “Make it as simple as possible, but not any simpler.” SNMP: “I’ll make it simpler.”

                                                                                                                                    The protocol is so “simple” (for some value of simple) that it’s actually not simple to use.

                                                                                                                                    1. 2