1. 7

    with this extra $50 million they’ll surely have the resources to support federating their servers. let’s see it moxie.

    1.  

      I hope so too, but Moxie Marlinspike voiced quite principled concerns against federation before: https://signal.org/blog/the-ecosystem-is-moving/

      1. 5

        That was the day I realised I had to boycott Signal too

        1.  

          Generally I don’t like “me too” posts, but in this case, me too! This is unacceptable. Using phone numbers as sole user IDs is also unacceptable in my book.

          1.  

            What is wrong with using phone numbers as ids? Was it wrong 50 years ago?

            1.  

              First, I don’t want to give my phone number to strangers. I am okay with giving my e-mail address (or some other kind of token) to strangers.

              Second, at least for myself, e-mail addresses are eternal, while phone numbers are very ephemeral. Especially if you travel or move a lot.

              Third, Signal doesn’t just depend on your phone number, it somehow depends on your SIM card (not sure of tech details). You can’t change your SIM card and continue to use Signal smoothly. For me this is a blocker. It means I can’t use Signal even for testing purposes, as I switch SIM cards often.

              Apple iMessage gets this right. You can have any number of ids, including phone numbers or e-mails. I am identified by either one of those. I can be contacted by people who have either in their address book. And I can switch my SIM card any time I want. Of course, iMessage is not equivalent to Signal, nor is iMessage a good example to follow apart from the UX.

              Also I must add a fourth point about Signal. Until relatively recently there was no way to use it on a real computer. Now there’s an Electron application, which to me still means there is no way to use it on a real computer. I do not know if 3rd parties can implement real native desktop applications or not, but there are no such applications today.

              1.  

                Third, Signal doesn’t just depend on your phone number, it somehow depends on your SIM card (not sure of tech details). You can’t change your SIM card and continue to use Signal smoothly. For me this is a blocker. It means I can’t use Signal even for testing purposes, as I switch SIM cards often.

                I have a burner phone that was initially set up with a throw-away prepaid SIM. After doing the initial setup (including with Signal), I threw away the SIM and put the phone in airplane mode. The phone now sits behind a fully Tor-ified wireless network. Signal’s still working fine.

                Maybe if I were to put in a new SIM card, Signal might go crazy.

                And since this is a burner phone that sits behind Tor with a number that’s meant to be public, here it is: +1 443-546-8752. :)

                1.  

                  I have a burner phone

                  You can’t legally acquire a pre-paid SIM in the European Union without registering it against your ID. They did it to ‘thwart terrorism’.

                  1.  

                    Interesting. They give out pre-paid SIMs as promotions on the street here in Sweden, or at least they used to. Maybe the ID check comes at the first top-up.

                    1.  

                      In the past you were able to obtain them anonymously.

                      They still give them away like candy but it won’t operate unless you register it by providing your ID at the operator. Though I’m speaking based on Poland - don’t know how other countries regulated this.

                      1.  

                        I see. I don’t know if it’s a specific EU-related law / regulation or whether each country has their own rules.

                        1.  

                          Some EU countries have regulations limiting the possibility to purchase prepaid cards to the stationary shops of telecommunications operators. Such solutions have been adopted i.a. in Germany, United Kingdom, Spain, Bulgaria and Hungary. Obligation to collect data concerning subscribers who use telecommunications services can be found i.a. in the German law.

                          source: http://krakowexpats.pl/utilities/mandatory-registration-of-prepaid-sim-cards/

                          Funny, I thought it was an EU-wide law. Regardless, it’s still very annoying that Signal has no other means of creating an ID. I don’t really want to give my mobile number to everyone, and there is no way to use Signal anonymously in countries that do regulate SIM registration.

                    2.  

                      What that would do is create a black market for pre-paid SIMs, where you have a single entity registering tons of SIMs and reselling them pre-activated.

                      1.  

                        That is what is happening on the street: criminals approach drunkards etc. to register a SIM in their name, then resell or use it themselves.

                        Point is, for a regular person there is no legal way to obtain an anonymous SIM. Creating a legal entity that registers SIMs is also not possible. This means that Signal can’t be used anonymously if you want to stay on the legal side.

                        1.  

                          Completely agreed. It’s unfortunate to see such silly laws that are so easy to skirt around. All they do is make people who would otherwise be honest and trustworthy break the law.

                    3.  

                      At least in my country, phone numbers can, and eventually will, be reallocated when not in use for several years. So aren’t you running a small risk that someone else might register ‘your’ number with Signal in a few years?

                    4.  

                      A me-too-style reply for what @lattera said.

                      A friend lives abroad and got a local SIM on a visit once. When he went back, he discarded his SIM, unknown to me.

                      When I heard he might be visiting again, I sent a Signal message asking if this still works. To our surprise, it did.

                      So this myth needs to be busted.

                      It may have a bug, though, as I sent a Signal message to another friend and got a reply from a foreign phone number. He told me it’s the number of a SIM he used on a business trip.

                      That’s a different issue someone else can hunt down, but Signal is more anonymous than e.g. Bitcoin as it stands today.

              2.  

                this is so dumb

                • it’s a messaging app… how much can people’s expectations evolve? how have they evolved since signal’s inception?
                • the cost of switching between services is low only for services that already have mass adoption. if moxie started fucking around with the protocol and people weren’t having it, network effects mean there would be no alternative (whatsapp and facebook messenger are not alternatives)
            1. 1

              Windows 10 in particular is such a miserable experience, it’s just abhorrent. I’m at a big dumb corp now that’s still standardized on Win 7. Does anyone know of any large organizations using Win 10? I saw 8.1 a couple of years ago at a Fortune 500.

              In other words, how can MS not see 10 as a giant mis-step?

              1. 1

                …because most of those corporations stayed on Windows XP as long as possible, and in some cases even longer. It seems logical to me that they would stay on Windows 7 as long as possible because the incentives for upgrading just aren’t there in such environments. If they used free software, they would probably be on CentOS 5 or something like that at this point.

              1. 4

                Saw this on hacker news and thought, other people here might be interested in Unix shells in general and reformist approaches to their improvement specifically.

                The not-so-small collection of in-house shell scripts I have at work only needs to work with zsh nowadays, because that’s what most of us are using. There’s one fish user; they just run our shared scripts with zsh. I can imagine that modernish might be really useful for people who need to ship shell scripts to environments which they do not control.

                1. 11

                  Maybe the title could mention that it’s about xmake? That would save non-xmake-users like me a click :)

                  1. 2

                    Electron bashing reminds me of Haskell (or similar) programmers bashing PHP.

                    Just like PHP, Electron has problems, but it has many practical benefits and that’s why it’s popular despite having those problems.

                    Their popularity (despite their problems) shows just how weak and problematic some of the alternatives are.

                    A minority of purists will moan while people continue to produce value using these tools.

                    1. 1

                      Yes, but there’s another thing: PHP became extremely popular for historic reasons (shared web hosts where you could not execute custom binaries but had mod_php or so; much simpler to get started programming for than CGI binaries) and then just stayed there, at least to some extent, also for economic reasons: there are vast numbers of people who feel comfortable programming PHP and find jobs doing it, so they often don’t really see a reason to look into anything else - because a) it’s popular, b) it ‘works’ for them, and c) they’ve heard that the other stuff is more complicated.

                      With Electron, getting started is easy. By the time you hit performance problems, you have often already invested far too many resources in the platform to switch. With Qt, for example, getting started is perceived as much more difficult.

                    1. 10

                      Some of us miss native desktop applications that worked well. It’s tragic that desktop platforms are utterly non-interoperable and require near-complete duplication of every app. But at the same time not everyone is satisfied with the solution of “build them all as electron apps starting with a cross-platform browser base plus web technology for the UI”. I can sympathize with app developers who in no way want to sign up to build for 2 or 3 platforms, but I feel like berating dissatisfied users is unjust here. Try comparing a high quality native macOS app like Fantastical with literally any other approach to calendar software: electron, web, java, whatever. Native works great, everything else is unbearable.

                      1. 8

                        I think people are just tired of seeing posts like Electron is cancer every other day. Electron is here, people use it, and it solves a real problem. It would be much more productive to talk about how it can be improved in terms of performance and resource usage at this point.

                        1. 3

                          One wonders if it really can be improved all that much. It seems like the basic model has a lot of overhead that’s pretty much baked in.

                          1. 2

                            There’s a huge opening in the space for something Electron-like, which doesn’t have the “actual browser” overhead. I’m certain this is a research / marketing / exposure problem more than a technical one (in that there has to be something that would work better we just don’t know about because it’s sitting unloved in a repo with 3 watchers somewhere.)

                            Cheers!

                            1. 2

                              There’s a huge opening in the space for something Electron-like, which doesn’t have the “actual browser” overhead.

                              Is there? Electron’s popularity seems like it’s heavily dependent on the proposition “re-use your HTML/CSS and JS from your web app’s front-end” rather than on “here’s a cross-platform app runtime”. We’ve had the latter forever, and they’ve never been that popular.

                              I don’t know if there’s any space for anything to deliver the former while claiming it doesn’t have “actual browser” overhead.

                              1. 1

                                “re-use your HTML/CSS and JS from your web app’s front-end”

                                But that’s not what’s happening here at all - we’re talking about an application that’s written from the ground up for this platform, and will never ever be used in a web-app front end. So, toss out the “web-app” part, and you’re left with HTML/DOM as a tree-based metaphor for UI layout, and a javascript runtime that can push that tree around.

                                I don’t know if there’s any space for anything to deliver the former while claiming it doesn’t have “actual browser” overhead.

                                There’s a lot more to “actual browser” than a JS runtime, DOM and canvas: does an application platform need to support all the media codecs and image formats, including all the DRM stuff? Does it need always-on, compiled-in OpenGL contexts, networking, and legacy CSS support, etc.?

                                I’d argue that “re-use your HTML/CSS/JS skills and understanding” is the thing that makes Electron popular, more so than “re-use your existing front end code”, and we might get a lot further pushing on that while jettisoning webkit than arguing that everything needs to be siloed to the App Store (or Windows Marketplace, or whatever).

                                1. 2

                                  But that’s not what’s happening here at all - we’re talking about an application that’s written from the ground up for this platform, and will never ever be used in a web-app front end. So, toss out the “web-app” part, and you’re left with HTML/DOM as a tree-based metaphor for UI layout, and a javascript runtime that can push that tree around.

                                  Huh? We’re talking about people complaining that Electron apps are slow, clunky, non-native feeling piles of crap.

                                  Sure, there are a couple of outliers like Atom and VSCode that went that way for from-scratch development, but most of the worst offenders that people complain about are apps like Slack, Todoist, Twitch – massive power, CPU, and RAM sucks for tiny amounts of functionality that are barely more than app-ized versions of a browser tab.

                                  “Electron is fine if you ignore all of the bad apps using it” is a terribly uncompelling argument.

                                  1. 1

                                    Huh? We’re talking about people complaining that Electron apps are slow, clunky, non-native feeling piles of crap.

                                    Sure, there are a couple of outliers like Atom and VSCode that went that way for from-scratch development, but most of the worst offenders that people complain about are apps like Slack, Todoist, Twitch – massive power, CPU, and RAM sucks for tiny amounts of functionality that are barely more than app-ized versions of a browser tab.

                                    “Electron is fine if you ignore all of the bad apps using it” is a terribly uncompelling argument.

                                    A couple things:

                                    1. Literally no one in this thread up til now has mentioned any of Slack/Twitch/Todoist.
                                    2. “Electron is bad because some teams don’t expend the effort to make good apps” is not my favorite argument.

                                    I think it’s disingenuous to say “there can be no value to this platform because people write bad apps with it.”

                                    There are plenty of pretty good or better apps, as you say: Discord, VSCode, Atom with caveats.

                                    And there are plenty of bad apps that are native: I mean, how many shitty apps are in the Windows Marketplace? Those are all written “native”. How full is the App Store of desktop apps that are poorly designed and implemented, despite being written in Swift?

                                    Is the web bad because lots of people write web apps that don’t work very well?

                                    I’m trying to make the case that there’s value to Electron, despite (or possibly due to!) its “not-nativeness”, not defending applications which, I agree, don’t really justify their own existence.

                                    Tools don’t kill people.

                                  2. 1

                                    we’re talking about an application that’s written from the ground up for this platform, and will never ever be used in a web-app front end.

                                    I’m really not an expert in the matter, just genuinely curious from my ignorance: why not? If it is HTML/CSS/JS code and it’s already working, why not just upload it as a webapp as well? I’ve always wondered why there is no such thing as an Atom webapp. Is it because it would take too long to load? The logic and frontend are already there.

                                    1. 2

                                      I’m referring to Atom, Hyper, Visual Studio Code, etc. here specifically.

                                      I don’t think there’s any problem with bringing your front end to desktop via something like Electron. I do it at work with CEFSharp in Windows to support a USB peripheral in our frontend.

                                      If it is HTML/CSS/JS code and it’s already working, why not just upload it as a webapp as well?

                                      I think the goal with the web platform is that you could - see APIs for device access, workers, etc. At the moment, platforms like Electron exist to allow native access to things you couldn’t have otherwise; that feels like an implementation detail to me, and may not be the case forever.

                                      no such thing as an Atom webapp

                                      https://aws.amazon.com/cloud9/

                                      These things exist; the browser is just not a great place for them currently, because of the restrictions we have to put on things for security, performance, etc. But getting to that point is one view of forward progress, and one that I subscribe to.

                              2. 1

                                I can think of a number of things that could be done off the top of my head. For example, the runtime could be modularized. This would allow loading only the parts that are relevant to a specific application. Another thing that can be done is to share the runtime between applications. I’m sure there are plenty of other things that can be done. At the same time, a lot can be done in applications themselves. The recent post on the Atom development blog documents a slew of optimizations and improvements.

                            2. 4

                              It’s tragic that desktop platforms are utterly non-interoperable and require near-complete duplication of every app.

                              It’s a necessary sacrifice if you want apps that are and feel truly native, that belong on the platform; a cross-platform Qt or (worse) Swing app is better than Electron, but still inferior to an app with a UI designed specifically for the platform and its ideals, HIG, etc.

                              1. 1

                                If we were talking about, say, a watch vs a VR system, then I understand “the necessary sacrifice” - the two platforms hardly have anything in common in terms of user interface. But desktops? Most people probably can’t even tell the difference between them! The desktop platforms are extremely close to each other in terms of UI, so I agree that it’s tragic to keep writing the same thing over and over.

                                I think it’s an example of insane inefficiency inherent in a system based on competition (in this case, between OS vendors), but that’s a whole different rabbit hole.

                                1. 2

                                  I am not a UX person and spend most of my time in a terminal, Emacs, and Firefox, but I don’t think modern GUIs on Linux (Gnome), OS X, and Windows have that much in common. All of them have windows and a bunch of similar widgets, but the conventions about what goes where can be quite different. That most people can’t tell doesn’t mean much, because most people can’t tell the difference between a native app and an Electron one either. They just feel the difference if you put them on another platform. Just look at how disoriented many pro users are if you give them a machine running one of the other major systems.

                                  1. 1

                                    I run Window Maker. I love focus-follows-mouse, where a window can be focused without being on top, which is anathema to MacOS (or macOS or whatever the not-iOS is called this week) and not possible in Windows, either. My point is, there are enough little things (except focus-follows-mouse is hardly little if that’s what you’re used to) which you can’t paper over and say “good enough” if you want it to be good enough.

                                2. 2

                                  It’s tragic that desktop platforms are utterly non-interoperable and require near-complete duplication of every app.

                                  There is a huge middle ground between shipping a web browser and duplicating code. Unfortunately that requires people to acknowledge something they’ve spent a lot of time working to ignore.

                                  Basically, C is very cross-platform. This is heresy but true. I’m actually curious: can anyone name a platform where Python or JavaScript runs but C doesn’t?

                                  UI libraries don’t need to be 100% of your app. If you hire a couple of software engineers they can show you how to create business logic interfaces that are separate from the core services provided by the app. Most of your app does not have to be UI-toolkit-specific logic for displaying buttons and windows.

                                  Source: was on a team that shipped a cross-platform caching/network filesystem. It was a few years back, but the portion of our code that had to vary between Linux/OS X/Windows was not that big. Also, writing in C opened the door for shared business logic (API client code) on OS X/Linux/Windows/iOS/Android.

                                  Electron works because the web technologies have a low bar to entry. That’s not always a bad thing. I’m not trying to be a troll and say web developers aren’t real developers, but in my experience, as someone who started out as a web developer, there are a lot of really bad ones, because you start your path with a bit of HTML and some copy-pasted JavaScript from the web.

                                  1. 1

                                    There’s nothing heretical about saying C is cross-platform. It’s also too much work for too little gain when it comes to GUI applications most of the time. C is a systems programming language, for software which must run at machine speed and/or interface with low-level machine components. Writing the UI in C is a bad move unless it’s absolutely forced on you by speed constraints.

                                  2. 1

                                    It’s tragic that desktop platforms are utterly non-interoperable and require near-complete duplication of every app.

                                    ++ Yes!

                                    Try comparing a high quality native macOS app like Fantastical with literally any other approach to calendar software: electron, web, java, whatever. Native works great, everything else is unbearable.

                                    Wait, what? I think there are two different things here. Is Fantastical a great app because it’s written in native Cocoa and ObjC (or Swift), or is it great because it’s been well designed, well implemented, meets your specific user needs, etc? Are those things orthogonal?

                                    I think it’s easy to shit on poorly made Electron apps, but I think the promise of cross-platform UI - especially for tools like Atom or Hyper, where “native-feeling” UI is less of a goal - is much too great to allow us to be thrown back to “only Windows users get this”, even if it is “only OS X users get this” now.

                                    It’s a tricky balancing act, but as a desktop Linux user with no plans to go back, I hope that we don’t give up on it just because it takes more work.

                                    Cheers!


                                    PS: Thanks for the invite, cross posted my email response if that’s ok :)

                                    1. 2

                                      Wait, what? I think there are two different things here. Is Fantastical a great app because it’s written in native Cocoa and ObjC (or Swift), or is it great because it’s been well designed, well implemented, meets your specific user needs, etc? Are those things orthogonal?

                                      My personal view is that nothing is truly well designed if it doesn’t play well and fit in with other applications on the system. Fantastical is very well designed, and an integral part of that great design is that it effortlessly fits in with everything else on the platform.

                                      “Great design” and “native” aren’t orthogonal; the latter is a necessary-but-not-sufficient part of the former.

                                      1. 1

                                        “Great design” and “native” aren’t orthogonal; the latter is a necessary-but-not-sufficient part of the former.

                                        Have to agree to disagree here, I guess. I can definitely believe that there can be well-designed, non-native application experiences, but I think that depends on the success and ‘well-designed-ness’ of the platform you’re talking about.

                                        As part of necessary background context, I run Linux on my laptop, with a WM (i3) rather than a full desktop manager, because I really didn’t like the design and cohesiveness of Gnome and KDE the last time I tried a full suite. Many, many apps that could have been well designed if they weren’t pushed into a framework that didn’t fit them.

                                        I look at Tomboy vs. Evernote as a good example. Tomboy is certainly well integrated, and feels very native in a Gnome desktop, and yet if put next to each other, Evernote is going to get the “well-designed” cred, despite not feeling native on really any platform it’s on.

                                        Sublime Text isn’t “native” to any of the platforms it runs on either.

                                        Anyway, I feel like I’m losing the thread of discussion, and I don’t want to turn this into “App A is better than App B”, so I’ll say that I think I understand a lot of the concerns people have with Electron-like platforms better than I did before, and thank you for the conversation.

                                        Cheers!

                                  1. 2

                                    neat idea.

                                    1. 3

                                      I agree, it looks really quite nice. But I can’t help but wonder how up to date those Wikipedia articles are, or even can be, if they are not directly updated by the author(s) after a new release. Does Wikipedia source them automatically? I’d guess “no”, but I do not know. Perhaps git tags and/or regexes on release pages would be good additional sources for some of that software.

                                      1. 5

                                        There’s a Github-wiki-bot that “automatically extracts stable releases and the release dates of Software from GitHub” as of August 2017 (discussion of the Wikidata property “software version”).

                                        1. 2

                                          fantastic, thanks for the links!

                                      1. 8

                                         The ultra-fine granularity and relatively low quality of NPM packages create ideal circumstances for abuse, as the article eloquently shows. It’s a good case for “batteries included” environments, which would be less prone to that type of attack. At the end of the day, the root cause is more social and organizational than technical – and that’s a fascinating aspect of computer security.

                                        1. 1

                                           Counter-point: the fine granularity allows for not pulling in much dead code. With all the jokes about left-pad, it is easier to audit than activesupport.

                                           I don’t think a good case for any of these environments can be made. I’d say the question of how to properly audit artifacts that literally include code from all over the world is unsolved.

                                          1. 1

                                             I don’t have to audit activesupport, I have to trust the people with commit access. That’s a much smaller group than the set of package authors in my npm dependencies.
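
                                             For a rough sense of the size of that trust surface, something like this (a sketch; on newer npm versions --all is needed to print the full tree):

                                             npm ls --all --parseable | sort -u | wc -l   # every transitive package you implicitly trust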

                                        2. 4

                                           This is not an npm problem. It’s more of a “third-party code can be malicious” problem.

                                          1. 6

                                             Yes, but ecosystems where third-party code can effectively be published by anyone, without oversight by any more trusted parties, and where it’s accepted to ship minified code and/or compiler output in packages, are especially vulnerable to the scenario described in the post. And then there’s the cultural norm of putting every few lines in its own package, so that you end up with implicit trust in literally hundreds of people in many projects.

                                             I mean, Debian for example has its own fair share of criticism and lacks people to audit changes, but: you need to earn community trust before you can push anything into the main repositories, and minified code is not accepted.

                                            1. 2

                                              Also, package signing is mandatory in Debian and unsupported in npm; it’s ordinary for npm users to import code from hundreds or thousands of authors, any of whom can be hacked.

                                        1. 1

                                          This article furthers my belief that hardly anyone reads the manual anymore.

                                          :h

                                          What version of Bash supports this?

                                          1. globbing vs regexps

                                          shopt -s extglob adds some nice expansions.
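
                                           For anyone who hasn’t played with these, a quick sketch of both (paths are just examples):

                                           ls /etc/ssh/sshd_config
                                           cd !$:h            # !$ = last argument of the previous command, :h strips the last path component -> cd /etc/ssh
                                           shopt -s extglob
                                           ls !(*.bak)        # extglob: everything except names matching *.bak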

                                          1. 2

                                            This article furthers my belief that hardly anyone reads the manual anymore.

                                             Could be; at least not the manuals of their shells. I work with Linux systems for a living, but I admit that I have never read the whole bash or zsh manual from top to bottom. Most of my knowledge has been acquired from books, coworkers, and searches. I do often search the bash manpage before searching online, but I never stumbled upon the section about :h there until I had a look yesterday after this post.

                                            (And found quite a few other interesting ones in ‘history expansion / modifiers’)

                                            What version of Bash supports this?

                                            Not sure what you mean here. bash 4.4-5 on my debian stretch machine does support it, has it been in the first release already?

                                            1. 1

                                              bash 4.4-5 on my debian stretch machine does support it, has it been in the first release already?

                                              I see the problem. The author’s original post contained an error:

                                              grep isthere /long/path/to/some/file/or/other.txt
                                              ls /long/path/to/some/file/or/other.txt:h
                                              

                                              This has been removed.

                                          1. 2

                                             Oh, I am not sure if I’ve ever stumbled upon :h before, but it looks really useful! And I must admit that I had just forgotten about <(). I had seen it before, but never really used it, forgot about it, and used temp files just two weeks ago; going to replace them with <() now.

                                            1. 1

                                              I’m using SyncThing at home. Just mirror and sync a folder across multiple machines.

                                              One downside I see is the lack of storage somewhere else while all laptops are at home. Geographic risk.

                                              It also requires all machines to store the full state. ~100GB in my case.

                                              1. 4

                                                 One popular differentiation between file synchronization and backups is that you can travel back in time with your backups. What happens if you - or more realistically: software you use - deletes or corrupts a file in your SyncThing repository? It would still be gone/corrupted, and the problem would automatically be synced to all your machines, right?

                                                 Personally I use borgbackup, a fork of attic, with a RAID 1 in my local NAS and an online repository. Honestly, I don’t sync to the online one too often, because even deltas take ages with the very low bandwidth I have at home, so I did the initial upload by taking disks/machines to work …and I hope that the online copies are recent ‘enough’. I also can’t shake the thought that in scenarios where both the disks in my NAS and the original machines are gone/broken (fire at home, burglaries, etc.), I would probably lose access to my online storage too. I should test my backups more often!
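
                                                 For the curious, the setup boils down to roughly this (a sketch with example paths, not my exact invocations):

                                                 borg init --encryption=repokey /mnt/nas/backup.borg
                                                 borg create --stats /mnt/nas/backup.borg::'{hostname}-{now}' ~/Documents   # deduplicated archive per run
                                                 borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/nas/backup.borg
                                                 borg check /mnt/nas/backup.borg   # the "test your backups" part I keep postponing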

                                                1. 1

                                                  I use Borg too! At home and at work. I also highly recommend rsync.net, who are not the cheapest, but have an excellent system based on firing commands over ssh. They also have a special discount for borg and attic users http://www.rsync.net/products/attic.html

                                                  1. 1

                                                    Hmm - that’s really not the cheapest!

                                                     3c/GB (on the attic discount) is 30% dearer than S3 (which replicates your data to multiple DCs, vs rsync.net which only has RAID).

                                                    1. 1

                                                       True, though S3 has a relatively high outgoing bandwidth fee of 9c/GB (vs. free for rsync.net), so you lose about a year of the accumulated 0.7c/GB/mo savings if you ever do a restore. Possibly also some before then, depending on what kind of incremental backup setup you have (is it doing two-way traffic to the remote storage to compute the diffs?).
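
                                                       Back-of-the-envelope with those figures (treating 3.0 vs. ~2.3 c/GB/month storage and 9 c/GB egress as assumptions, not current prices):

                                                       echo 'scale=1; 9 / 0.7' | bc   # 12.8 - months of storage savings consumed by one full restore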

                                                      1. 2

                                                        Ahh, I hadn’t accounted for the outgoing bandwidth.

                                                        That said, if I ever need to do a full restore, it means both my local drives have failed at once (or, more likely, my house has burned down / flooded); in any case, an expensive proposition.

                                                         AFAIK Glacier (at 13% of the price of rsync.net) is the real cheap option (assuming you’re OK with recovery being slow or expensive).

                                                        RE traffic for diffs: I’m using perkeep (nee camlistore) which is content-addressable, so it can just compare the list of filenames to figure out what to sync.

                                                      2. 1

                                                        Eh - I don’t mind paying for a service with an actual UNIX filesystem, and borg installed. Plus they don’t charge for usage so it’s not that far off. Not to shit on S3, it’s a great service, I was just posting an alternative.

                                                        1. 1

                                                          Yeah that’s fair, being able to use familiar tools is easily worth the difference (assuming a reasonable dataset size).

                                                  2. 1

                                                     SyncThing is awesome for slow backup stuff. But I wish I could configure it so that it checks for file changes more often. Currently it takes something like 5 minutes before a change is detected, which results in me using Dropbox for working-directory use cases.

                                                    1. 5

                                                       You can configure the scan interval for Syncthing; you can also run the syncthing-inotify helper to get real-time updates.
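
                                                       For example (path assumed for a default Linux setup; the same knob is under the folder’s advanced settings in the web GUI):

                                                       grep -o 'rescanIntervalS="[0-9]*"' ~/.config/syncthing/config.xml
                                                       # rescanIntervalS="60"   <- per-folder scan interval in seconds; lower it for faster pickup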

                                                    2. 1

                                                      That’s one huge advantage of Resilio Sync. You don’t have to store the full state in every linked node. But until RS works on OpenBSD, it’s a no-go for me.

                                                    1. 11

                                                       I started using Debian stable on all my desktop machines when Jessie became stable and never looked back. I used the Firefox ESR in stretch until recently, when I upgraded to FF 57 manually. Besides that, Docker, Signal, Riot and Keybase are the things on my machines which are not in the Debian repositories, but that would not really improve with testing or unstable. If I need more up-to-date stuff for development it’s mostly in containers or Python virtualenvs these days anyway, and I love that my desktops aren’t moving targets anymore.

                                                      1. 3

                                                        I used to pull in nix in order to reliably install any software on my system where the version in Debian Stable was too old. But nowadays the only software I don’t get from Debian is the software that I need to build from source because I’m actively contributing to it.

                                                        The only exceptions are when I needed Docker for work and when I manually upgraded Firefox to version 57 out of curiosity about how it would break my extensions.

                                                        1. 3

                                                          I use Ubuntu LTS for the same reasons. I do get the latest Firefox automatically.

                                                          Do you see any advantage in Debian stable over Ubuntu LTS?

                                                          1. 8

                                                             For me the advantage is of a political/social nature. Ubuntu is in the end a product and at the mercy of Canonical, just like Fedora is as a product of Red Hat. If you trust them, they have the advantage of being able to pay many more people than those who are being paid (by 3rd parties) to work on Debian. But I don’t, at least not in the long run: I don’t know for how long my interests as a user will align with their business interests, and stuff like the ‘suggestions’ for Amazon products in Unity or advertisements for proprietary software in the MOTD on Ubuntu servers makes me sceptical. I still trust Debian’s governance much more, even after the sub-optimal handling of the systemd debates. Ah, and I am already using Debian on almost all servers, privately and at work ;)

                                                        1. 2

                                                          If you’re using backports on top of stable then you’re effectively using a less-popular, less-well-tested variant of testing.

                                                          In theory regular stable releases make sense for a distribution that extensively patches and integrates the software it distributes. But given that Debian’s policy and practices predictably lead to major security vulnerabilities like their SSH key vulnerability, I figure such patching and integrating is worse than useless, and prefer distributions that ship “vanilla” upstream software as far as possible. Such distributions have much less need for a slow stable release cadence like Debian’s, because there’s far less modification and integration to be doing.

                                                          1. 6

                                                            a less-popular, less-well-tested variant of testing.

                                                            Not at all. Going to testing means moving everything to testing. Moving Linux, moving gcc, moving libc. Stable + backports means almost everything is on stable except the things you explicitly move to backports. My current package distribution is:

                                                            stretch: 5323
                                                            stretch-backports: 7
                                                            

                                                            The 7 packages I have from backports are: ldc, liboctave-dev, liboctave4, libphobos2-ldc-dev, libphobos2-ldc72, octave, and octave-common. Just Octave and the LDC compiler for D. Hardly could call them important system packages.
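
                                                             The workflow is roughly (assuming the stretch-backports line is already in /etc/apt/sources.list):

                                                             sudo apt-get update
                                                             sudo apt-get -t stretch-backports install octave ldc   # named packages (plus needed deps) come from backports, everything else stays on stable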

                                                            1. 1

                                                              It’s worth remembering that the purpose of the computer is to run user programs, not to run the OS. I’d suggest that the programs a user enables backports for are likely to be those programs the user cares most about - precisely the most important packages.

                                                              1. 5

                                                                I am running stable because I don’t want to have distracting glitches on the side of the things I actually care about. I have the energy to chase after D or Octave bugs (after all, it’s kind of what I do), so I do want newer things of those. I don’t want to be chasing after Gnome or graphics driver bugs. Those system things get frozen so I can focus on the things I have the energy for.

                                                                1. 1

                                                                  As a maintainer you’re in a rather unusual position; you’re, in a sense, running Octave for the sake of running Octave. Whereas most people with Octave installed are probably using Octave to do something, in which case Octave bugs would be a serious issue for them, probably more so than bugs in Gnome or graphics drivers.

                                                            2. 4

                                                              But given that Debian’s policy and practices predictably lead to major security vulnerabilities like their SSH key vulnerability

                                                              Could you elaborate on that? How do those policies and practices do so predictably? And what would preferable alternatives look like in your opinion?

                                                              1. 4

                                                                 Making changes to security-critical code without having them audited specifically from a security perspective will predictably result in security vulnerabilities. Preferred alternatives would be either to have a dedicated, qualified security team review all Debian changes to security-critical code, or to exempt security-critical code from Debian’s policy of aggressively patching upstream code to comply with Debian policy. Tools like OpenSSH do by and large receive adequate security review, but those researchers and security professionals work with the “vanilla” source from upstream; no one qualified is reviewing the Debian-patched version of OpenSSH, and that’s still true even after one of the biggest security vulnerabilities in software history.

                                                            1. 9

                                                               Even after wasm becomes first-class in browsers, on the same level as JavaScript, why write UI code in a systems programming language without GC? Considering that the actual UI is DOM elements, only the “glue code” is written in Rust.

                                                               Anyway, exploring this possibility is cool.

                                                               BTW, Rust might be useful in actual desktop GUIs because of fast startup (unlike the JVM and languages that compile sources on program start), better interop with C, and controllable memory consumption. A React-like library might be cool on desktop too.

                                                              1. 6

                                                                i’ve written some elm-inspired desktop gui code in ocaml+gtk; it’s a very pleasant paradigm to program in. you don’t really need a framework, the gui library provides an event loop for you.

                                                                1. 1

                                                                  The ocaml+gtk code sounds interesting, do you have that online by chance?

                                                                  1. 5

                                                                    model and controller

                                                                    gtkgui view

                                                                    web view

                                                                    it worked out pretty nicely, writing a web view as well helped a lot in factoring the gui code properly, and now i get to use the gtk frontend to prototype features for the web frontend (i expect the web frontend to be the one i actually “ship”, but writing a desktop app is a lot easier)

                                                                    1. 1

                                                                      thank you, reads nicely!

                                                              1. 2

                                                                Nice presentation. Could someone with a deeper understanding than me comment on the role which coreboot has in this context? It works fine with me_cleaner and seems to circumvent UEFI completely if used with the right payload, but what about SMM?

                                                                1. 14

                                                                  As old as my SSH keys are and as big a fan as I am of taking security advice from blog posts, could anyone working in security/crypto weigh in on the author’s choice of key generation settings?

                                                                  1. 28

                                                                    Ed25519 is probably your safest bet, compatibility permitting.

                                                                    The -a 100 rounds is probably overkill. That’s just a tradeoff with how annoyed you want to be, and how fast your computer is, so maybe you can live with it, but I think the default is fine. The attack scenario is somebody steals your ssh keys. The password only needs to hold until you have a chance to rotate keys, not until the end of time. So you can do some modelling about how long that is based on who you think is out to get your keys and how many computers they have.
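
                                                                     For reference, the command in question looks something like this (key file name and comment are just examples):

                                                                     ssh-keygen -t ed25519 -a 100 -f ~/.ssh/id_ed25519 -C "user@laptop"
                                                                     # -a sets the KDF rounds protecting the encrypted private key; the OpenSSH default is 16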

                                                                    1. 8

                                                                      It does add a lag, I agree, but in practical terms on my 2013 laptop this is ~1 second, and with ssh-agent only happens when you login and run the agent, which for a lot of people (I assume) would be once daily.

                                                                      I’d say it’s worthwhile.

                                                                      1. 35

                                                                        Whatever floats your boat. :) You’re not wrong, but I’ll just throw in a note that going from 16 to 100 rounds increases security by a factor of 6. Adding one random letter to your password increases security by a factor of 26 and it takes me much less time to type that letter than the extra 84 rounds of hashing. Of course, if you’re already at peak password and unable to memorize one more letter, that’s not an option.

                                                                        1. 4

                                                                          Thank you for educating me. :-)

                                                                      2. 6

                                                                        Gnome users should be aware that the keyring can’t handle ed25519 keys at the moment.

                                                                        1. 9

                                                                          Pretty disappointing. ECDSA keys have been around for more than six years now.

                                                                          1. 2

                                                                             …and it still does not handle GPG smartcards correctly, among other problems. I really like much of the Gnome desktop, but it’s quite sobering that I still need to search for the new way to get rid of the keyring, or at least its ssh-agent and gpg-agent functionality, with every other release of the Gnome desktop.

                                                                      1. 2

                                                                        Direct link to about page: http://www.aoeui.xyz/about which has the proposed explanation for non-lobsters. :)

                                                                        1. 1

                                                                          /about works fine, but the start page links to /about.html which does not exist.

                                                                          1. 1

                                                                            It seems like the about link at the bottom of the page works, but the one at the top doesn’t (“Software reviews. Read the about page.” links to /about.html instead of /about)