1. -2

    It’s Dendrite-based, so I have doubts about its sustainability, given New Vector’s constant changes to Synapse that break compat with everyone else; another sad manifestation of the perverse incentives preventing Matrix from becoming a stable and performant alternative.

    1. 22

      Wow, this is some quality FUD. We haven’t broken backwards compatibility ever on the Client Server API - you can spin up a Matrix client written pre-launch in 2014 and it should work fine with a 2020 Matrix server. On the Federation (Server-Server) API, we upgrade the room synchronisation protocol on a semi-regular basis using room versions (https://matrix.org/docs/spec/#id5)… and we’ve still kept backwards compatibility with the earlier room versions.

      For wider context: New Vector (NV) is the company set up by the original Matrix team which sponsors much of the development of the core Matrix project. The accusation here is that New Vector is somehow incentivised to sabotage development of Synapse (the reference Matrix server) to prioritise its own commercial interests at the expense of the wider Matrix network. This is categorically untrue. All work that NV people do on Synapse is donated unilaterally to the Matrix.org Foundation, which is governed incredibly strictly (including by the UK non-profit regulator) to ensure the protocol and reference implementations advance for the benefit of everyone - without ever prioritising any commercial interests, particularly NV (now or in the future). https://matrix.org/foundation spells this out.

      So: a) Synapse doesn’t break compat; b) NV doesn’t prioritise itself when doing Matrix.org development, and if it did, the Foundation would course-correct; c) Dendrite work is mostly funded by NV anyway :/

      1. 4

        This is the impression I got from observing the Matrix project, Dendrite’s progress specifically over Synapse, and third-party implementations. The biggest pain point is how checking off features seems to be prioritized over actual polish and optimizations to the servers themselves.

        1. 6

          I think you are extrapolating incorrectly. Almost all work in Synapse over the last year or so has been around polish & optimisations - and Dendrite dev was on hold in order to focus on Synapse stability & perf. In the last few months we’ve been able to afford to spin up Dendrite dev again, and meanwhile Synapse is (for once) stable. We’re almost at the point of going back to feature dev - but other than E2EE-by-default, there’s been very little feature dev since 2018; it’s all been about stabilising for Matrix 1.0 (in June 2019) and then subsequent polish.

          I think you might be basing your conclusions on where things were at in 2018 or so (when we had indeed been rushing features out the door in order to try to secure long term funding for the project, and to try to keep up with the centralised competition).

        2. 2

          What % of Matrix development is funded/driven by New Vector? It’s cool that they are pushing for this kind of software.

          1. 3

            NV is the startup which the original Matrix team formed in 2017 in order to fund themselves to continue working on Matrix full-time. Probably ~90% of the core Matrix.org codebase is written by NV employees, all of which is donated to the Matrix.org Foundation, which was set up as an independent custodian of the protocol itself (protecting it from NV and any other commercial players). The wider ecosystem building on Matrix spans hundreds (thousands?) of projects and contributors.

            So it’s no coincidence that NV drives a lot of Matrix development, given that all but one of the team who created Matrix work there :)

        3. 6

          I can’t follow the reason for your criticism without further explanation. What breaking changes did Synapse introduce in the past? Who is “everyone else”? What kind of “perverse incentives” do you mean? Why should this be a problem for Dendrite, which is also developed by New Vector? Any sources?

          1. 0

            New Vector is basically paid to develop features rather than stabilize and polish the low-level details. This has made federation somewhat of a moving target, and it means things like performance fall by the wayside.

          2. 5

            It constantly amazes me that the only thing a lot of people like to complain about more than proprietary software is free and open source software that is written by people other than themselves.

          1. 8

            Currently working on a foss replacement for the “social proofs” section of keybase that provided the most value with the least intrusion into users’ lives.

            Currently have most of the user-facing infra up and running, I now just need to set up the workers that cycle through verifications and continually re-test them.

            I also submitted a PR for freenode’s project management system about 2 weeks ago, I just need to push that over the line now to make finding some broken registrations a little easier.

            1. 4

              Firstly, a parser for IRC in Erlang, then an application that lets you spin up bots. The hope is that it will grow into something that allows a reasonable amount of synthetic traffic on an IRC network for testing.

              1. 3

                “Are links to tweets valid submissions”

                yes, but the nature of the format means it’s probably harder for them to be well suited to the site, I suppose.

                There are foone threads that get posted every so often, but those are often in-depth threads covering surprising topics that are appreciated by the community.

                1. 5

                  Saturday: FOSDEM! I’ve been hoping to visit for some time now, but never quite managed it. This year I’m lucky enough that work has taken me to within an hour’s drive, so I really have no excuse. I’m surprised there’s no thread to discuss it.

                  Sunday: Sadly, same work also has me leaving the country Sunday, where I’ll be off to plan some disaster recovery with another office :(

                  1. 3

                    Regardless of where you stand on the issue, lobsters has so far been pretty good at staying away from partisan projects with little technical innovation.

                    There’s not much to this, it’s just a collection of configs that exists based on strongly held political opinions. I’m not sure if we want to consider it a good posting here.

                    1. 3

                      I wonder if there’s value in leaving breadcrumbs for those who follow to solve problems like this.

                      I know of a lot of places in code I’ve written, or read, where it’s clear that there was low hanging fruit waiting to be picked if the time was ever right, but it was rarely marked. How much time and money do we lose in trying to uncover what was known and lost?

                      I might start leaving // OPTI: comments in code if I think something is worth investigating later, should the need arise.

                      1. 1

                        I try to do this where I can with ‘TODO’ comments explaining what I/others would want to do at some point in the future wrt optimizations or other similar changes.

                        1. 1

                          and do you ever get to fix them?

                          1. 3

                            I’m working on a game in my spare time. There’s a couple of places I know I can probably get a significant speed-up, but doing that work will take time which is better spent developing mechanics.

                            As a result of leaving those TODO comments, I know exactly where to look if performance ever becomes an issue. And if the performance of a particular piece of code never becomes an issue, I won’t have wasted my time optimizing it; having those comments noting where the code might scale worse than it could is still valuable.

                            I do like the idea of using // OPTI: instead of // TODO: for these cases though, exactly in order to differentiate between stuff which should be done in the future (“TODO”) and intentionally missed optimization opportunities (“OPTI”).

                            1. 2

                              Yeah, my text editor of choice (vim) will highlight the TODO lines so they are obvious, which helps. I’ve addressed some since adding them, sometimes grepping for TODO and picking one to do is a nice break from the norm.
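
                              For anyone wanting a break-from-the-norm session like that, a minimal sketch of the grep approach (the file contents and marker names here are just illustrative):

                              ```shell
                              # Create a sample source file carrying both marker styles...
                              mkdir -p /tmp/opti-demo
                              printf '%s\n' \
                                'int add(int a, int b) {' \
                                '    // OPTI: could be vectorized if this shows up in a profile' \
                                '    return a + b;' \
                                '}' \
                                '// TODO: handle overflow' > /tmp/opti-demo/main.c

                              # ...then list every TODO/OPTI marker with file and line number,
                              # ready to triage and pick one off.
                              grep -rn -E '(TODO|OPTI):' /tmp/opti-demo
                              ```

                              `-r` recurses, `-n` prints line numbers, and `-E` lets one pattern match both marker styles at once.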

                              1. 3

                                It is nice to do TODOs, indeed. But most TODOs I see in production code are from programmers long gone, and they will never get done.

                              2. 1

                                I’m not craftyguy, but yes, a few months ago I had the opportunity to profile code I wrote to improve its speed. For background, it’s written in Lua [1] and I intended it as a “proof-of-concept”, but of course it was put into production without my knowledge [2]. We were able to go several years before I profiled the code. One hot spot was expected, another not so much. I got about a 25% boost in speed; not bad for the small amount of code that actually changed.

                                [1] It parses SIP messages, and I use LPEG to do the parsing.

                                [2] That happened about four years ago.

                          1. 1

                            The readme talks about being closely linked to Git, does this mean it’s not possible to use with Mercurial?

                            1. 1

                              Yes, that is correct.

                            1. 3

                              Thanks for flying freenode!

                              I hang out in the channel and I’m pleased to say that it’s one of the better communities we host. It covers a lot of discussion and people generally get on.

                              Please be sure to let us know if there’s anything we can do to make your time there any better :)

                              1. 3

                                  I am returning from Belgium, where I am currently working, to Shropshire for two weeks, where I will be starting my air traffic control course.

                                  It’s been good to work out in this part of the company. I’ve come from my software background onto the coal face in a very admin-heavy function. We have loads of spreadsheets that I’ll be thinking about how to formalise into useful software tools on the train across. I hope I can get a few in place when I get back in a fortnight, before I leave for good in January!

                                1. 1

                                  waves from North Shropshire

                                1. 10

                                  Alternatively, you can look into using https://sourcehut.org/ - which can be self hosted, or use a pre-hosted instance.

                                  Docs on it are available at https://man.sr.ht/dispatch.sr.ht/ and https://man.sr.ht/builds.sr.ht/

                                  I don’t use sourcehut myself - I’ve mostly left tech, but I do still care a lot about it, and I think increasingly that github/lab/etc are not healthy for it.

                                  1. 4

                                    Can you elaborate on why self-hosting Sourcehut is better for the ecosystem than self-hosting GitLab (which is free and libre)?

                                    1. 2

                                      GitLab is open core, relies heavily on complex web technologies and JavaScript, and is primarily built around its GitLab-specific tooling flow.

                                      Sourcehut is fully open, extremely light on browser features (meaning you can better control your exposed surface to potentially risky tech), and relies on less specific tooling (for example, email and lists rather than internal comment tracking, PRs, etc).

                                      Additionally and most importantly, it’s just different. Variation in ideology, implementation, and direction are important and give users choice. I present it primarily as an alternative.

                                      1. 1

                                        Gitlab is an order of magnitude more complex than anything else I have seen. It has become the industry standard, and access to it sometimes seems to be claimed as a fundamental developer right, a way to overcome the difficulty of grasping Git.

                                        Using gitlab means you do not need everyone to be a git veteran, but that you need at least one git guru and one gitlab guru (that can update it in case of CVE).

                                        Depends on your team I guess.

                                        By using a simpler git porcelain, you need just as many git skills (more for handling git itself, fewer for adapting to Gitlab’s sometimes surprising decisions, such as unidirectional merges instead of merging branches against each other to flatten the tree), but fewer Gitlab skills.

                                        1. 1

                                          free and libre

                                          Sourcehut is also free and libre (AGPL). Only the hosted instance is paid.

                                          1. 3

                                            Yes, I suppose I should have said “…which is also free and libre.”

                                      1. 6

                                        The reactions on reddit were similarly negative. I’m strongly in favor of creative people being able to support themselves from their work. Trying to monetize your work should not be controversial.

                                        Want to contribute to Onivim? Don’t. They make a profit out of your contributions.

                                        “Want to contribute to the Linux kernel? Don’t. Red Hat makes a profit out of your contributions.” Making money is not incompatible with open source software.

                                        In a somewhat related way, the indie game Ooblets just announced that they’re taking funding from Epic Games to support their development in exchange for an exclusive launch on the Epic Games Store. There were many extremely negative takes that I saw on Twitter, Reddit, etc. and it’s disheartening to me that people are more interested in maintaining their abstract moral principles over allowing creators to support themselves in a stable way.

                                        1. 4

                                          “Want to contribute to the Linux kernel? Don’t. Red Hat makes a profit out of your contributions.”

                                          The difference here is that the Linux kernel is licensed in a way that allows redistribution, whereas onivim does not allow you to distribute it even if you have made contributions to it.

                                          Making money is not incompatible with open source software.

                                          Well that’s just blatantly wrong.

                                          1. 4

                                            Why do you think that making money and open source are incompatible?

                                            Drew Devault is writing a well developed GitHub/GitLab/etc competitor, https://sourcehut.org/, that is making money despite being pretty radically open source and user respecting.

                                            1. 2

                                              Dang, I misread ‘incompatible’ as ‘compatible’. I agree with you, and now I can no longer edit my comment above to correct it.

                                        1. 15

                                          A related and upstream point on this can be found here: https://cmpwn.com/@kline/102333166678467931

                                          While we don’t want to change IRC radically, there is absolutely the issue that more and more projects and people see IRC as being full of sharp edges, or lacking what they need. We’re really interested in what we can do that enhances, rather than changes, the protocol. A hard line for us is not to change how older clients can use our network, as those clients and users are very important to us - but we also want to smooth the way for new and migrating projects.

                                          It’s a fact that IRC is shrinking, and in the face of things like the moznet closure, we should be looking to keep IRC healthy. This doesn’t mean “growth” as our primary target, but we do need to understand what people want to keep the protocol competitive and true to itself. We don’t want to be a matrix catch-up, we want to be able to compete with it as the different protocol and ecosystem it always has been.

                                          1. 2

                                            FWIW here are my comments on why I barely use IRC from a year ago:

                                            https://news.ycombinator.com/item?id=16495984

                                            The tl;dr is that I like the shell (hence spending a long time writing one), and I used BBSes back in the mid-90’s, and I used Gopher before Netscape existed, but I’ve never gotten into IRC.

                                            Since then I started using Zulip for https://oilshell.zulipchat.com, and it works quite well (aside from most people not knowing how to use it, which is a surmountable obstacle). It’s better and faster than Slack IMO.

                                            1. 3

                                              I just don’t want to spend my mental energy on my chat client [for IRC] (https://news.ycombinator.com/item?id=16495984)

                                              Likewise I don’t want to spend mental energy signing up for and trying to figure out some new IRC replacement, especially since it’ll presumably sit in the browser, where I try to minimise how much text I have to type. Whereas for IRC, I can use any number of frontends, including one right in my editor.

                                              Just like for using the shell, TeX, etc.; IRC has some upfront time/mental energy cost, but then after that it’s really easy, comfortable, and powerful. I have no interest in setting up/signing up for Slack, Zulip, [insert name here], ….

                                              1. 4

                                                it’ll presumably sit in the browser

                                                See https://github.com/zulip/zulip-terminal

                                                Also, for Mattermost (an open-source, self-hostable Slack alternative) there’s Matterhorn. I prefer the web-based clients myself, but there are some options; even slack-term is a thing.

                                                1. 3

                                                  When there’s a https://github.com/zulip/zulip.el maybe I’ll take a look.

                                                  But as far as I can tell, everything that’s worth talking about is on Freenode.

                                                2. 1

                                                  Yeah, that’s totally fair. I don’t want to convince anyone who likes IRC not to use it. I’m just explaining why most people don’t prefer it.

                                                  Although I don’t necessarily agree with the equivalence. I would say the signup cost of Slack/Zulip is less than the setup cost of IRC, depending on your definition of usability. If you already set IRC up 10 years ago, then obviously the equation changes.

                                                  1. 2

                                                    The setup/learning cost to something like Slack/Zulip is less than the setup/learning cost of IRC in the same way that the setup/learning cost of Word is less than the setup/learning cost of (La)TeX, but you pay a high hidden cost in that now you have to use Slack/Word rather than IRC/LaTeX.

                                            1. 8

                                              the infinite backlog encourages a culture of catching up rather than setting the expectation that conversations are ephemeral

                                              This was a very interesting point I hadn’t even considered before!

                                              1. 7

                                                An alternative, however, is the option to display to newly joined users the last, say, 25 lines of history in the channel. Many users may think a channel is dead, or struggle to understand the context of the current conversation on join, and this is something that could help alleviate it.

                                                It’s something we’re considering, and would likely be opt-in by channel operators via a channel mode - with the added bonus of this meaning that users who are sensitive to such backlog can be warned by their client when it receives the channel modes.

                                                1. 2

                                                  An alternative, however, is the option to display to newly joined users the last, say, 25 lines of history in the channel. Many users may think a channel is dead, or struggle to understand the context of the current conversation on join, and this is something that could help alleviate it.

                                                  This is how XMPP has worked for years, and is the main reason I chose it over IRC when I was first getting into these things back in the day.

                                                2. 4

                                                  This was a very interesting point I hadn’t even considered before!

                                                  One interesting thing: XMPP supports “infinite backlog”, but some rooms (such as Prosody’s) explicitly do not enable it for the same reason: to keep the discussion ephemeral.

                                                  1. 3

                                                    Only with some recent draft extensions does XMPP support infinite backlog. By default it uses a finite backlog to get you some context when you join – which is of course an amazing killer feature by itself, and the main reason I originally switched to XMPP.

                                                    1. 4

                                                      Only with some recent draft extensions does XMPP support infinite backlog.

                                                      I know that on the XMPP timeline it’s “recent”, but I think it’s good to put it into perspective: MAM was initially released in 2012. The Prosody module implementing it is 7 years old.

                                                      1. 1

                                                        Just as context for people reading the thread and wondering, Prosody’s mod_mam (backlog module) expires messages after one week by default. You can configure it for months, years, or forever, but honestly, one week is a pretty good default.
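
                                                        For the curious, a sketch of what that looks like in a Prosody config excerpt (option name per mod_mam’s documentation; exact defaults may vary between versions):

                                                        ```lua
                                                        -- prosody.cfg.lua (excerpt)
                                                        modules_enabled = {
                                                            "mam"; -- message archiving (backlog)
                                                        }
                                                        archive_expires_after = "1w" -- keep one week of history; "never" keeps it forever
                                                        ```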

                                                    2. 1

                                                      I don’t think ephemerality is as common as he makes it out to be, but putting the onus of logging on individual users is itself a big deal & desirable. Social norms on irc are that everybody keeps full logs & nobody ever logs off (and bigger channels tend to have logger bots that don’t have anybody on ignore, often keeping public logs), so arbitrary scrollback is generally available if you ask for it.

                                                      The difference is that, because everybody keeps their own logs, authority with regard to recording history is also distributed: Slack, Inc can rewrite the history of any conversation on Slack any way they like, but it’s much harder for somebody to hack into the machines of everybody on an IRC channel and falsify their logs.

                                                      The logs we keep on IRC are records of what we see (or what we would see, if we weren’t AFK), reflecting our ignorelist and such, & the absence of a single canonical record is philosophically important.

                                                      1. 1

                                                        This is the reason I never really got into web BBSes as general-purpose chatting platforms. (Unlike domain-specific news aggregators like Lobsters, where we’re only commenting on articles.) I love that I can just drop in and drop out any time I like with IRC.

                                                      1. 3

                                                      I have no affiliation with the project, but I posted this because it seems like a great solution to the ongoing problems with the SKS network, particularly the privacy issues and the abuse of key metadata to post illegal content.

                                                      The new keyserver finally allows the deletion of keys—this is not possible with SKS—and identity verification by email is supported at last. They seem to have a clean separation between identity and non-identity information in keys, and all in all it looks like a great evolution from SKS.

                                                        1. 3

                                                        Where do we learn more about the concerns around the SKS network? Sounds interesting, and it would help back up the point you present.

                                                            1. 4

                                                              The article has some interesting links, which I’ll post for convenience:

                                                              The SKS Devel mailing list has actually had quite a few discussions about this too lately—a very small sample:

                                                                1. 2

                                                                  The maintainer’s attitude in that first linked ticket is alarming. “The user isn’t supposed to trust us, so there’s no reason not to display bogus data.” Are you kidding me?!

                                                                  1. 1

                                                                    Yes, but the bigger problem is that even if they wanted to change it, SKS is without actual developers. There are people who maintain it by fixing small bugs here and there, but the software is completely and utterly bug-ridden (I had the unfortunate “opportunity” to test it).

                                                                    https://keys.openpgp.org is not mind-blowing¹ but it’s basically a sane keyserver. That we only got something like this in 2019 shows what a dire situation PGP is in now.

                                                                    ¹ actually I think it’s lacking a little bit compared to “modern” solutions such as Keybase

                                                                    1. 2

                                                                      Even the people that work developing GPG would agree that the situation is sort of bad. Real-world adoption of GPG is almost nil. Support of GPG, say by major email clients, is almost nil. The architecture with the trust model is ‘perfect’ but it’s not user-friendly. GPG-encrypted email traffic is almost not measurable. The code base is apparently a bit of a mess. It needs maybe a bit of funding and probably some less perfect, but more pragmatic and usable strategies of improving security.

                                                                      1. 2

                                                                        Agreed with what you said. I spent some time thinking about this and concluded that in the end the problem is mostly in tooling and UX, not inherent to GPG.

                                                                        As an example: XMPP was described by Google as being “non-mobile friendly” and it took just one person to create a really good mobile XMPP client that can be used by regular people. (I’m using it with my family and it’s better than Hangouts!).

                                                                        GPG too can be brought back from the dead, but the effort to do that is enormous because there are multiple parties participating. But there are some good things happening: Web Key Directory, easy-to-use web clients, keys.openpgp.org.

                                                                        Why is it important to work on GPG instead of dumping it for Signal et al.? Because GPG is based on a standard, this is not a “product” that can be sunsetted when investors run away or a manager decides that something else is shiny now.

                                                                        1. 2

                                                                          Look at what keybase is doing. That’s what GPG should have been. Some keyserver that actually verifies things, so that when you get a key with an email address, you know that that email belongs to the person who uploaded the key, unlike the current model, where anyone can upload any key with any data.

                                                                          The whole web-of-trust thing doesn’t help me when I want to get an email from some person overseas I have never met.

                                                                          1. 2

                                                                            That’s what GPG should have been. Some keyserver that actually verifies things, so that when you get a key with an email address, you know that that email belongs to the person who uploaded the key, unlike the current model, where anyone can upload any key with any data.

                                                                            If I understood the idea correctly, the submission is already what you propose (maybe you’re aware of that? Hard to tell through text alone…)

                                                            1. 8

                                                              I’ve not (yet) been able to watch the video - no transcript is available for me and I’m not in a situation where I can listen (a11y people take note).

                                                              If “fragmentation” and “commercial power plays” aren’t strong contenders I’ll be very disappointed, and Canonical are one of the major offenders here. There was no need to push Upstart when the rest of the world was leaning into systemd (be that right or wrong), likewise Mir vs. wayland, bzr vs. git, etc.

                                                              Canonical has a remarkable desire, it seems to me, to be like Red Hat: build a big userbase, control the software they use and peel it away from the traditional foss consensus, and then dominate the ability to provide services for that software. I’m not sure it’s healthy for users, nor for the concept of “linux on the desktop”.

                                                              1. 14

                                                                There was no need to push Upstart when the rest of the world was leaning into systemd (be that right or wrong), likewise Mir vs. wayland, bzr vs. git, etc.

                                                                Upstart came along quite a bit before systemd - about four years earlier, in fact. As I recall, the early systemd blog posts even referenced Upstart regarding lessons learned (good and bad).

                                                                As for Canonical pushing Upstart - for a while it also wasn’t assured that systemd uptake would be as quick or as pervasive as it ended up being. Red Hat pushed it pretty hard, with Fedora being the first major distro to adopt it.

                                                                That said, Canonical certainly /did/ drag their feet converting, but then again… look how long it took Debian to change! It was something like a year after Ubuntu did? (EDIT: I read the wrong date here. See here for update.)

                                                                Not that I defend upstart /at all/. I found it to be pretty darn buggy in fact.

                                                                1. 9

                                                                  Upstart came along quite a bit before systemd, 4 years I think it was in fact. As I recall, the early systemd blog posts even referenced upstart regarding lessons learned (good and bad).

                                                                  To add to this, Lennart states: “Before we began working on systemd we were pushing for Canonical’s Upstart to be widely adopted (and Fedora/RHEL used it too for a while). However, we eventually came to the conclusion that its design was inherently flawed at its core…”

                                                                  That said, Canonical certainly /did/ drag their feet converting, but then again… look how long it took Debian to change! It was something like a year after Ubuntu did?

                                                                  Not sure where you’re getting that from, but Mark Shuttleworth announced that Ubuntu would adopt systemd the same week as the Debian decision.

                                                                  1. 4

                                                                    Not sure where you’re getting that from, but Mark Shuttleworth announced that Ubuntu would adopt systemd the same week as the Debian decision.

                                                                    Ah. I couldn’t remember the timeline, and looked at the wikipedia ubuntu version history. It looks like I accidentally, and certainly erroneously, used the announcement date instead of the actual release date.

                                                                    Thanks for noticing and correcting!

                                                                2. 6

                                                                  I doubt I’ll ever watch this video, since I don’t really care for videos. But I’d happily read a transcript.

                                                                  Mostly I’m curious about what Mark’s definition of “success” would be. I don’t have stats handy, but in my limited experience Linux desktop usage seems pretty strong in certain technical and professional settings. Meanwhile desktop OS usage of any variety has declined in relative terms, thanks to the rise of the mobile platforms. If he means success as a consumer OS, it’s not clear to me that any players besides Canonical were ever tilting at that particular windmill.

                                                                  1. 8

                                                                    So much this. People keep making the “Year of the Linux Desktop” joke mostly for historical reasons, I think. So far as I can tell, GNU/Linux-based desktop and laptop systems have been very good for quite some time - very usable (and used) by non-enthusiasts - and the desktop as a target is in strong decline anyway.

                                                                    1. 5

                                                                      Aside from hardware issues, the one thing that’s bad about them is that they keep making changes that break fundamental parts of the platform users rely on - or simply not QA-testing those parts enough. This is easy to avoid, as the proprietary platforms show with their stronger assurances of backward compatibility. Sure, Microsoft tried something similar with Windows 8, but look how that went. The Linux desktops should make sure basic functionality always works and stays consistent over time.

                                                                      A recent example that just happened: I can’t open PDFs with Firefox on Ubuntu. The JS reader always clobbers the abstract texts I copy and paste in ways the native apps don’t. So, if I want to use the text, I re-open the PDF in a native reader, either from within the JS reader with the open/save button or via Firefox’s ask feature. Suddenly, I can’t do that. It’s also suggesting opening it with a shell script, “env,” or finding the specific executable in the Linux filesystem (what Windows/Mac user would…?). I’ll debug this new problem later. Meanwhile, yet another critical part of my workflow has broken for no justifiable reason - if any QA is getting done at all.

                                                                      I can’t remember that kind of stuff happening on Windows (NT onward) until Vista’s issues with hardware. Aside from bloat, it worked fine, with some apps needing WinXP compatibility mode. Simple fix. I’m likewise not knocking Linux over hardware issues: just the ones developers can easily avoid. Wireless suddenly stops working, PDFs won’t open, there are weird interactions between the three ways of managing packages I need, and so on. The best proof is probably that several small distros did fix some of these problems despite not having millions of dollars.

                                                                    2. 2

                                                                      If he means success as a consumer OS, it’s not clear to me that any players besides Canonical were ever tilting at that particular windmill.

                                                                      Strictly speaking, Chrome OS put a Linux kernel (albeit a somewhat non-standard one) on all sorts of consumer machines.

                                                                      1. 9

                                                                        While loads of people talk about ChromeOS and Android in these discussions, I think “Linux on the desktop” is more about “free software on the desktop”, and they don’t really hit the mark, though Android has absolutely been a pretty important step.

                                                                    3. 1

                                                                      A proper transcript would be much nicer, but YouTube does a fairly decent job of automatically captioning the video, so you can turn that on and watch it silently if you really want…

                                                                    1. 3

                                                                      I have one of the last major assessments in the initial training for my new job [1]. I’ll be away from Saturday morning until Friday doing some practical leadership stuff - the weather is set to be a cosy 25C, a massive improvement over the 3” of snow we had a couple of weeks back on the practice!

                                                                      https://lobste.rs/s/fkehwl/what_are_you_doing_this_weekend#c_oz5mcz

                                                                      1. 3

                                                                        This is a pretty nice thing, but I can’t help but feel a little frustrated. All my monitors have orientation detection built in, and it works on Windows, but I can’t find any way for them to report it on Linux, meaning I have to set the orientation by hand.

                                                                        Similarly, I can’t seem to get laptop-driven brightness setting on my monitors. Local screen works fine, but DP attached monitors don’t seem to do the right thing, and I can’t figure out why.

                                                                        1. 1

                                                                          Hmm, that is really frustrating. My monitor is from the stone age, so it definitely doesn’t have a feature like that. Are the monitors connected to your machine only by HDMI? I wonder if it’s partly the kernel’s drivers, partly your windowing system.

                                                                          1. 4

                                                                            No, they’re connected by DisplayPort. It’s definitely a software issue - moving from Debian Jessie to Stretch was a pretty big step down. MST used to work and now it doesn’t, and DPMS has regressed as well.

                                                                            Remote monitor brightness control and the like has worked over I2C lines since before VGA connections went out of fashion, but I’m not sure how well it’s been replaced.

                                                                            Brightness: never seen it work on DisplayPort on Linux.
                                                                            Orientation: never seen it work on DisplayPort on Linux.
                                                                            MST: Stretch broke it here - I can no longer address chained monitors.
                                                                            DPMS: Stretch broke it here - if a screen sleeps, it can’t be woken up without undocking my laptop, turning the monitor off and on, and redocking.

                                                                            I use my dock a lot less and for less important things these days so it’s not a big deal, it just would be cool if some developer time was poured into these little quality of life areas, is all.

                                                                            1. 4

                                                                              Brightness can be controlled over DDC/CI these days, which is still I2C…

                                                                              And for some really weird unknown reason, even Windows doesn’t do DDC/CI brightness control out of the box. I had to download ClickMonitorDDC to do it.

                                                                              Here’s something for Linux that should do it.
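
                                                                              For anyone wanting to script this, here’s a minimal sketch (my own illustration, not from any particular tool) that wraps the ddcutil CLI from Python. VCP feature code 0x10 is brightness in the VESA MCCS standard; the function names and display number are just for illustration. It assumes ddcutil is installed, the i2c-dev module is loaded, and your user can access the /dev/i2c-* devices.

```python
import subprocess

# VCP feature 0x10 is "Luminance" (brightness) in the VESA MCCS standard.
BRIGHTNESS_VCP = "10"

def brightness_command(display, percent):
    """Build the ddcutil invocation that sets brightness (0-100) on a display."""
    if not 0 <= percent <= 100:
        raise ValueError("brightness must be between 0 and 100")
    return ["ddcutil", "--display", str(display),
            "setvcp", BRIGHTNESS_VCP, str(percent)]

def set_brightness(display, percent):
    # May need root or membership in the i2c group, depending on distro.
    subprocess.run(brightness_command(display, percent), check=True)
```

                                                                              e.g. set_brightness(1, 70) should set the first detected monitor to 70%.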

                                                                        1. 25

                                                                          Preparing to finalise my transition out of tech. On Wednesday I travel south into England for some tests for my new job, and as long as I pass them - they’re mostly formalities - I’ll be starting in my new industry in January, leaving me free to pursue foss work on my own terms rather than spending my brain’s development budget on what my employer wants.

                                                                          1. 12

                                                                            May I ask what you’re moving into? If it’s intentional that you didn’t mention it, that’s fine, but I’m curious.

                                                                            1. 9

                                                                              I’ll hopefully be starting as a trainee air traffic controller. It’s not that I left it out intentionally - I just felt it was unimportant. What matters is that when I do work on software in the future, I can focus on doing software stuff I enjoy rather than what I’m told to do.

                                                                            2. 3

                                                                              congrats on your new found freedom!

                                                                              1. 1

                                                                                Is it the work itself you don’t enjoy doing? Or all the externalities and silly walks introduced by the business aspects of our industry?

                                                                                Said another way, will you still hack on things for fun after you transition?

                                                                                I love my job, but even so look forward to retiring someday, so I can spend large blocks of time playing with whatever I want whenever I want, and also taking care of myself even better than I do today.

                                                                                1. 1

                                                                                  I found that working in tech wasn’t as much fun as I’d hoped. I think a fair chunk of it is that nothing is ever as fun when you’re doing it for someone else as when you’re doing it for yourself. I’ll still continue to hack on stuff on my own terms, for sure.

                                                                                  The flip side is that my new role is completely orthogonal to software, so no employer or task in this industry is really comparable - the benefits and tradeoffs are all different.

                                                                                  1. 3

                                                                                    Well, I’m sorry to hear it didn’t work out but I hope your new career is everything you think it will be and more! I sometimes wonder if having come into the industry SO early as I did gives me a different perspective. There’s still a big part of me that secretly thinks “Seriously? You’re willing to PAY me for this? HONEST?” :)

                                                                              1. 1

                                                                                If I remember right, the most common way to do this is to build a spectrogram of your sampled audio (which is basically an FFT over time) and check which reference audio’s spectrogram it appears to be a subset of. There’s no reason why you couldn’t adapt another implementation to report not just a match, but also where the match was found. You might find it needs tuning because there’s more information carried in music than in speech, or that the overall approach doesn’t work too well, but it’s what I know for now.
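
                                                                                To make the “report where the match was found” idea concrete, here’s a minimal sketch (my own illustration, not any particular fingerprinting library’s algorithm): build magnitude spectrograms from non-overlapping FFT frames, then slide the short clip’s spectrogram along the long one and report the frame offset with the smallest difference. The frame size and the squared-difference score are arbitrary choices.

```python
import numpy as np

def spectrogram(signal, frame=256):
    """Magnitude spectrogram: FFT of consecutive non-overlapping frames."""
    n = len(signal) // frame
    frames = np.asarray(signal)[: n * frame].reshape(n, frame)
    return np.abs(np.fft.rfft(frames, axis=1))

def find_offset(needle_spec, hay_spec):
    """Frame offset in hay_spec where needle_spec matches best
    (smallest summed squared difference)."""
    n = len(needle_spec)
    scores = [np.sum((hay_spec[i : i + n] - needle_spec) ** 2)
              for i in range(len(hay_spec) - n + 1)]
    return int(np.argmin(scores))
```

                                                                                An exact copy scores zero at the right offset; real recordings would need a more forgiving similarity measure (e.g. correlating log-magnitudes), which is where the tuning comes in.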

                                                                                As an aside, what does “silence on the waveform” actually mean? A zero-crossing point? A run of samples all at 0? This might be a worthwhile step forward, but it’s trivially defeated by overlaying small amounts of noise, by carefully splicing the two subsections back together after removing a word, etc.

                                                                                1. 1

                                                                                  what does “silence on the waveform” actually mean

                                                                                  Forgive me, I’m still building my vocabulary in this context! I think I mean 0 for an extended time: Audacity shows the waveform flat at 0 when zoomed really far in. In the candidate haystacks, there’s virtually none of that.
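
                                                                                  For what it’s worth, “flat at 0 for an extended time” is straightforward to detect. A rough sketch, assuming silence means samples at or below some small threshold for a minimum run length - both numbers here are made up, so tune them to your sample rate:

```python
def silent_runs(samples, threshold=1e-4, min_len=1000):
    """Return (start, end) sample-index pairs for stretches where the signal
    stays at or below `threshold` for at least `min_len` samples
    (roughly 20 ms at 48 kHz)."""
    runs, start = [], None
    for i, s in enumerate(samples):
        if abs(s) <= threshold:
            if start is None:
                start = i  # a quiet stretch begins here
        else:
            if start is not None and i - start >= min_len:
                runs.append((start, i))
            start = None
    # A quiet stretch that runs all the way to the end of the clip.
    if start is not None and len(samples) - start >= min_len:
        runs.append((start, len(samples)))
    return runs
```

                                                                                  Here samples is any list or array of float samples in [-1, 1]; an empty result on your candidate haystacks would match what you’re seeing in Audacity.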

                                                                                  trivially defeated

                                                                                  Yeah, it would be. Detection beyond a sloppy “copy and paste” job is out of scope right now.

                                                                                  1. 1

                                                                                    Cool. I think it’s a worthwhile and probably interesting project anyway. As mentioned, one approach would be to create a spectrogram, and then identify features in the time-frequency-intensity space, and look for those same features in other places.

                                                                                    There’s probably useful research on this in computer vision, where they instead view it as X-Y-intensity for b/w images.