1. 1

    This project looks very similar in spirit to https://wiki.archlinux.org/index.php/Netctl, I’d be curious to play with this and see if it solves any issues netctl (which I’ve always used on my Arch systems) cannot (or as easily).

    1. 1

      On my Arch servers, I stick with systemd-networkd, how has netctl been for you?

      1. 1

        I’ve been using netctl since netcfg was deprecated a very, very long time ago - basically zero issues on my desktop or laptops (other than a really strange glance at a coffee shop once for using wifi-menu to connect…)

        netctl-auto is fantastic for laptop purposes too, generally speaking.

        I haven’t used systemd-networkd to give you any sense of comparison between the two.

    1. 1

      What are the pros and cons of either method when using this vs. Ansible with networking modules?

      1. 3

        Well, Ansible’s networking modules aren’t actually about managing a general-purpose server’s network configuration at all. They are about logging into a managed switch or router (e.g. Cisco or Juniper hardware) and speaking its configuration shell, such as Cisco IOS.

        But I’m guessing that’s not what you mean, and that you’re actually referring to using Ansible to deploy network configuration details to a bunch of servers. There is one fundamental difference in approach which should inform your choice: Netplan is local, Ansible is remote.

        Ansible requires a server to already have an IP address so that it can be reached from a remote source (your laptop). Netplan is meant to run as part of the boot-up procedure, assigning the server its first IP address. Both could manage NetworkManager or systemd-networkd configuration, so beyond that it’s a more personal choice: between managing the configuration directly via Ansible and templated files, or indirectly via a YAML configuration file for Netplan, which is more appropriate for your network?
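        For concreteness, a minimal sketch of what a Netplan YAML file could look like (the interface name and addresses here are made-up placeholders, not anything from a real deployment):

```yaml
# /etc/netplan/01-static.yaml -- hypothetical example
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      addresses: [192.0.2.10/24]
      gateway4: 192.0.2.1
      nameservers:
        addresses: [192.0.2.53]
```

        Netplan renders a file like this into backend configuration (systemd-networkd in this case) at boot or on netplan apply.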

        Personally, the way I’d see it, Netplan is much more useful in highly dynamic environments, like cloud providers. While Ansible fits a bit better in slightly more stable environments, such as on-premise deployments of small/medium companies.

      1. 2

        This mailing list thread is from August 2014, almost four and a half years ago. Is this still relevant?

        1. 1

          They are still publishing new Unicode standards, so at least as far as strcoll is concerned, that behavior can change at any time.

        1. 12

          Depends on how far you are willing to go, I guess. At one extreme you have airgapped devices, always-on VPNs, and a phone that either never leaves the house or is always on airplane mode. In the middle you have some sane practices like using Signal for most communication, basic use of VPNs, avoiding exposing information on social media platforms, and generally trying not to be a “data product”. At the other extreme you have “data cows” in the “data farm” who enjoy the cushy life that comes from always-on surveillance.

          Your list should probably be tiered based on who you are addressing and where their starting point is.

          1. 8

            Agreed. I can come up with a thousand ideas, but which ones are useful or not will depend on your “threat model” – who exactly are you trying to protect yourself from.

            • If you’re trying to protect yourself from opportunistic hackers who don’t have a personal/vested interest in you, there’s some simple steps.
            • If you’re trying to protect yourself from someone you know and interact with on a semi-frequent basis, there’s some more steps to build on top of that.
            • If you’re trying to protect yourself from very close people, like a spouse, here’s some advanced stuff with the caveat that many people in that position take a negative view of you doing that.
            • If you’re trying to protect yourself from governments, here’s a much harder list which will dramatically impact how you use the internet and basically every computing device out there.

            And that’s just one facet of your model. There are more details: who do you want to have access? How much do you value absolute privacy versus connecting with less paranoid people? How flexible are you in any of your positions?

            So the unfortunate thing is there isn’t any universal list of steps, since there isn’t an average person – everyone will answer these and other questions differently. Which steps each person should take depends on those answers.

            1. 2

              If you’re trying to protect yourself from governments, here’s a much harder list which will dramatically impact how you use the internet and basically every computing device out there.

              That’s the old model from hacker culture. Thing is, it ignored economics: the black market turned into lots of specialists cranking out more stuff at lower prices. New and old vulnerabilities are plentiful. The services people use are often insecure by default. Whether or not there’s a personal interest in you, it doesn’t take a government’s resources to attack what lots of people use. At one point, I saw kits online for the price of a gaming rig. Expect it to get worse as people put more smart devices in their homes.

              1. 4

                Indeed, I simplified a bit too much. As you said, what was once thought possible only with the resources of a nation-state is now more readily available. For example, all of the NSA’s “Shadow Broker” stuff is now in the hands of many more people in the black/grey markets.

                When I originally wrote the comment, I was thinking more about BGP hijacking, Sybil attacks, and the like - but a threat model is not a static document. It changes, both because of personal changes and because market forces change. What is possible changes every day.

                1. 2

                  Shadow Broker stuff is a great example. Forgot to mention the leaks. :)

          1. 2

            Here’s a quick fork of it adding colors and a loop: https://gist.github.com/evaryont/95e530853829495fc86870dff26d17b7

            1. 2

              That sounds nice! Though with 21.5% of the certificates surveyed in the wild being invalid, I wonder if it would be more interesting to propose a grammar description that is more lenient than the standard. I’ll read up on the finer details to see if they were able to classify the violations. (But at 32 pages, I’ll likely only finish reading it when this thread has long since left the front page :))

              Also, it’s a bit unfortunate that they sourced certificates by talking to the public IPv4 space. I suppose they’ll miss quite a few certificates due to CDNs and virtual hosting. It seems likely that you’d get a more realistic set by looking at Certificate Transparency logs. The upside would be that you’d skew towards actual in-use production certs and avoid all sorts of oddities (e.g., self-signed certs). Building on top of that would also be more useful for informing suggestions of better practices and standards improvements.

              1. 2

                Sounds like some great ideas! :)

                1. 1

                  I don’t know if you have seen this yet, but did they also mention which ports they hit in the public space to get the certificates? I wonder how many open ports >1024 are out there with valid, publicly-signed certs. (I know of a few myself for sure, running ports like 8080 or 4567.)

                1. 3

                  I wish there was a comparison between this and zsh-syntax-highlighting and why one might want to change.

                  1. 1

                    Well, the comparison would be a superset: fizsh includes zsh-syntax-highlighting and zsh-history-substring-search, and all 3 projects are part of the zsh-users group.

                  1. 4

                    For Ruby fans, I think Middleman has a very similar approach and would fit this use case quite well.

                    1. 1

                      I’m having trouble confirming one way or another, so hopefully someone’s Google-fu is stronger than mine:

                      Is the exFAT patent that Microsoft has successfully sued companies into paying for (including Samsung as mentioned in this article) included or not in this collection? Some people are saying it’s not included.

                      1. 6

                        The headline is click-baity (but one I still agree with), since Jeff’s message that clarifies his stance is later on in the post:

                        Computers, courtesy of smartphones, are now such a pervasive part of average life for average people that there is no longer any such thing as “computer security”. There is only security. In other words, these are normal security practices everyone should be familiar with. Not just computer geeks. Not just political activists and politicians. Not just journalists and nonprofits.

                        1. 6

                          I thought it was echoing something a lot of us are saying where the devices are so ridiculously insecure with so much attack surface that computer security doesn’t exist. Then, one must avoid them for secrets or high-integrity activities wherever possible while operating in a mindset that what’s left is already compromised on some level. Then I read the article, it was about something else entirely, thought the title sucked, and moved on.

                          1. 2

                            It would be a far more interesting article if it were about designing to avoid footguns. Unfortunately, that wasn’t the case :-(

                        1. 7

                          “I love my domain registrar.” Has anyone ever said this?

                          Yes, because I use gandi.net

                          1. 3

                            I have been using Gandi for more than 10 years and they are stellar. They contribute financially to free software projects, they are technically competent, and the two times I had to contact support they were extremely helpful. So, yes, I love my registrar.

                            I have also used their VPS hosting for nearly a decade without any issues.

                            1. 2

                              Same! Gandi is amazing, no reservations at all. I very much love them and they have been absolutely a pleasure to do business with.

                              1. 1

                                Same but with Hover.

                              1. 3

                                I mostly use zsh and its suite of completions. I would probably use bash’s more often if the various servers I administer installed the completion package (which is usually broken out into a separate package), but they don’t and I don’t care to push for that. (Again, I’m happy with zsh’s built-in library.)

                                Fish does a neat trick to supplement its completions – it parses the output of man! Pretty good for most commands, but some weird manpages will throw it for a loop. Those are few and far between IIRC, but I haven’t touched fish in a long while.

                                1. 4

                                  I use zsh, which has the most comprehensive coverage of completions, especially when you consider things like support for system commands on non-Linux systems. Note that zsh itself includes most of them; the separate zsh-completions project is only a small collection, and of lower quality.

                                  Zsh’s is much the superior system, but you’d have to emulate a whole lot more to support it. Completion matches can have descriptions, which makes it vastly more useful. The process of matching what is on the command line against the candidates is much more flexible and is not limited to dividing the command line up by shell arguments - any arbitrary point can be the start and finish point for each completion candidate. And as that implies, what is to the right of the cursor can also be significant.

                                  My advice would be to take the zsh compadd builtin approach, which is more flexible and extensible than compgen/complete, do your own implementation of _arguments (which covers 90% of most completions) and similarly your own _files etc. It’d then be straightforward for people to write completions targeting both oilshell and zsh.

                                  1. 2

                                    Hm interesting, yeah I am looking around the Completion/ dir in the zsh source now and it looks pretty rich and comprehensive.

                                    I also just tried out zsh and I didn’t realize it had all the descriptions, which is useful too. Don’t they get out of date though? I guess most commands don’t change that much?

                                    I recall skimming through parts of the zsh manual about a year ago, and from what I remember there are 2 different completion systems, and it seemed like there was a “froth” of bugs, or at least special cases.

                                    I will take another look, maybe that impression is wrong.

                                    I think the better strategy might be to get decent bash-like completion for OSH, and then convince someone to contribute ZSH emulation :)

                                    I guess I am mainly interested in the shell that has the best existing corpus of completion scripts, because I don’t want to boil the ocean and duplicate that logic in yet another system. zsh does seem like a good candidate for that, but I don’t understand yet how it works. Any pointers are appreciated.

                                    I’ll look into _arguments… it might cover 90% of cases, but it’s not clear what it would take to run 90% of completion scripts unmodified.

                                    1. 3

                                      The zsh descriptions do get out of date. The strings are copied by the completion script author, so if the --help text changes, the script will need to be updated too.

                                      Zsh’s completion system is vast and old - the best combination. That’s why the 2 engines still exist today: a number of completion scripts are still written in the old style. I believe that most of those underneath Completion/ are using the newer system.

                                      1. 2

                                        Of the 2 systems, the old compctl system was deprecated 20 years ago. Everything under Completion/ uses the new system. I wouldn’t say there’s a “froth” of bugs - it is just that there is a lot to it.

                                        It isn’t the descriptions so much as the options themselves that can get out of date. The task of keeping them up-to-date is semi-automated based on sources such as --help output and they are mostly well maintained.

                                    1. 8

                                      After feeling some pain about how complicated all software, IT and Enterprise architecture toolsets are, I got this idea of a dead simple architecture sketching tool, that’s based on just nodes and named lines. The input for this tool is a text file that contains triples in the form of subject-verb-object, like so:

                                      # comments via shebang
                                      # declare nouns (for better typo recovery)
                                      internet, web front, app server, DB, Redis
                                      # declare verbs (perhaps not necessary and may be dropped)
                                      flows, proxies, reads/writes
                                      # subject-verb-object are separated by more than 1 whitespace (" " or "\t") 
                                      # somewhat like Robot Framework does it
                                      # prepositions (to, from, at, etc.) seemed redundant, so not using them
                                      internet     flows          web front
                                      web front    proxies        app server
                                      app server   reads/writes   DB
                                      app server   reads/writes   Redis

                                      I’m not married to the syntax yet, but it seems fine after a few iterations of experimentation. A tool could then read this and produce a nice graph via Graphviz. And you don’t really need a tool - you could do these sketches on a piece of paper in no time. But when you want to store and share the design, a text file + tool is a good combo.

                                      I tried to write a parser for this language in Rust’s pest, but that was a bit of a pain – pest’s error modes are quite odd. It seems that whenever my parser hits a problem, pest blows up at position 0. Perhaps I’ll just write a dumb line-by-line parser by hand instead. And I’d like to implement it in Crystal rather than Rust. I also thought about doing it in Swift, but that doesn’t seem to work so well outside of macOS/Linux yet (I’m thinking of the BSDs here), so nope.
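                                      A dumb line-by-line parser really is about all it takes. As a rough illustration (in Python rather than Crystal, and with names I made up), splitting statements on runs of two or more spaces/tabs and emitting Graphviz DOT could look like:

```python
import re

def parse_triples(text):
    """Collect subject-verb-object triples from the sketch language."""
    triples = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and shebang-style comments
        parts = re.split(r"[ \t]{2,}", line)
        if len(parts) == 3:  # noun/verb declaration lines won't split into 3
            triples.append(tuple(parts))
    return triples

def to_dot(triples):
    """Render the triples as a Graphviz digraph, verbs as edge labels."""
    edges = "\n".join(f'  "{s}" -> "{o}" [label="{v}"];' for s, v, o in triples)
    return "digraph sketch {\n" + edges + "\n}"

print(to_dot(parse_triples("internet     flows          web front")))
```

                                      Piping that output into dot -Tpng would give the picture; the declaration lines would need their own handling for typo recovery, which this sketch skips.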

                                      Best part? The tool’s name is Architect Sketch.

                                      1. 5

                                        Reminds me of mermaid. It has shortcomings but has worked for me most of the times I’ve had to diagram something over the past few years.

                                        1. 4

                                          I’ve been working on something similar myself, with the goal of actually deploying infrastructure on it. I think this is a good avenue to explore, and pairs really well with rule engines and datalog systems.

                                          1. 2

                                            This is something I’ve been thinking about a lot as well.

                                            For my use case, I just want to dump, document, or understand how my multitude of web apps, daemons, and IoT devices all interact with one another.

                                            I built a rough prototype with Ruby and shell scripts using similar syntax, but I could never get Graphviz to generate a “pretty” layout for the whole thing.

                                          1. 6

                                            For $WORK, we have just deployed a new version of our entire technology platform! Our clients are doing their first major migration starting on the 2nd (today, for most folks reading this), adding 1,000 users per week throughout the entire summer. This is migrating from a physical colo’d C# monolith stack to a hybrid Rails/C# microservice architecture spread across AWS and Azure (with some of it calling back to the older stack). It’s the culmination of many months of efforts by every team in the company.

                                            I’m nervous, but I’ve been dedicated to load testing it using Selenium, Firefox, and AWS instances – 1,300 of them to be precise. It’s handled that simultaneous load well enough. I hope to finish collecting all of the details and publish a blog post on how I set up this pseudo-bot-net of mine.

                                            My personal infrastructure at home and other personal projects have been put on hold, since I’ve lived and breathed this preparation work for the past couple of months. This week I hope that, between refreshing every monitoring page in the network, I can spend some time kicking development back into gear.

                                            1. 4

                                              For people not familiar with Eclipse release convention: Eclipse releases every year in June. Previous releases were Neon in 2016 and Oxygen in 2017.

                                              1. 3

                                                Ah, the website does a poor job mentioning that Photon is a release of the Eclipse IDE, and not some sub-project under the foundation’s umbrella. Thanks!

                                              1. 7

                                                Winds looks awesome, but the dependency on a bunch of cloud hosted, closed-source PaaS doesn’t seem so great. There doesn’t seem to be any way for someone to completely self-host Winds.

                                                1. 20

                                                  Bitwarden is my tool of choice for this. I haven’t been a fan of other more CLI-centric password managers as they usually don’t have browser integration. The usability of using an in-browser UI to generate a random password and the prompts to save it when I submit forms are very important IMO. Nothing has come close to that while also being open source.

                                                  1. 3

                                                    One thing that irks me about Bitwarden is having to provide an email address and getting an installation id & key if I’d like to self host it for myself. Please correct me if I’m wrong but from what I understand, even for using it without the “premium” features one still needs to perform this step.

                                                    If so, I think I’ll stick with my pass + rofi-pass + Password Store for Android combo for now.

                                                    1. 5

                                                      This is true, though there are ways around it with a little work, since it is OSS. There are also a few 3rd-party tools, 2 of which are server implementations: bitwarden-go (https://github.com/VictorNine/bitwarden-go) and bitwarden-ruby (https://github.com/jcs/bitwarden-ruby).

                                                      There is also a CLI tool (https://fossil.birl.ca/bitwarden-cli/doc/trunk/docs/build/html/index.html)

                                                    2. 2

                                                      Are you self-hosting it or using the hosted version? I’m somehow always sceptical of having hosted password storage, even if it’s encrypted and everything.

                                                      1. 1

                                                        If it’s not encrypted, they see your secrets. If it is encrypted, they’re in control of your secrets. In a self-hosted setup, you are in control of your secrets. If encrypted, you might lose them. If sync’d to a third party (preferably multiple), you still might lose the key. If on scattered paper copies, each in a safe place, you probably won’t. For some failures, write-once (i.e. CD-R) or append-only storage can help, where a clean copy can be reproduced from the pieces.

                                                        That’s pretty much my style of doing this. It’s not as easy as 1Password or something, though. There’s the real tradeoff.

                                                        1. 2

                                                          It is encrypted, here is a link on how the crypto works in english: https://fossil.birl.ca/bitwarden-cli/doc/trunk/docs/build/html/crypto.html
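                                                          For anyone who doesn’t want to read the whole doc: the gist is that the master key is derived client-side from the master password with PBKDF2-SHA256, salted with the account email, so the server never sees plaintext. A stdlib Python sketch of that idea (the iteration count here is a placeholder assumption, not Bitwarden’s exact figure):

```python
import hashlib

def derive_master_key(password: str, email: str, iterations: int = 100_000) -> bytes:
    """PBKDF2-SHA256 key derivation; the email acts as a per-account salt."""
    return hashlib.pbkdf2_hmac(
        "sha256",
        password.encode("utf-8"),
        email.strip().lower().encode("utf-8"),
        iterations,
    )

key = derive_master_key("correct horse battery staple", "user@example.com")
assert len(key) == 32  # SHA-256 digest size
```

                                                          The vault is then encrypted/decrypted locally with keys derived like this, which is what makes syncing the (encrypted) blob to a server tolerable at all.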

                                                          I agree Bitwarden is not quite as user-friendly (or as secure, if using local vaults) as 1Password, but for an OSS app, it’s definitely at the top of the list for user-friendliness among password managers.

                                                          I run a server locally on my LAN, and my phone/etc sync to it. I definitely don’t want my secrets out in the cloud somewhere, no matter how encrypted they might be.

                                                    1. 4

                                                      As mentioned in the comments of the post, they have also fixed enabling word wrap and showing the status bar at the same time. I was always confused as to why those two settings were intertwined with each other.

                                                      1. 2

                                                        I find it a little ironic that, while using the open-web browser, I am not able to inspect the sessionstore-backups/recovery.jsonlz4 file after a crash to recover some text-field data, because Mozilla Firefox uses a non-standard compression format which cannot be examined with lzcat, nor even with lz4cat from ports.

                                                        The bug report about this lack of open formats was filed 3 years ago, and suggests lz4 was actually standardised long ago, yet this is still unfixed in Mozilla.

                                                        Sad state of affairs, TBH. The whole choice of a non-standard format for user’s data is troubling; the lack of progress on this bug, after several years, no less, is even more so.

                                                        1. 15

                                                          https://bugzilla.mozilla.org/show_bug.cgi?id=1209390#c10 states that when Mozilla adopted using LZ4 compression there wasn’t a standard to begin with. Yeah, no one has migrated the format to the standard variant, which sucks, but it isn’t like they went out of their way in order to hide things from the user.

                                                          It was probably unwise for Mozilla to shift to using that compression algorithm when it wasn’t fully baked, though I trust that the benefits outweighed the risks back then.

                                                          1. 14

                                                            This will sound disappointing to you, but your case is as edge-caseish as it gets.

                                                            It’s hard to prioritize those things over things that affect more users. Note that other browser makers have security teams larger than all of Mozilla’s staff. Mozilla has to make those hard decisions.

                                                          These jsonlz4 data structures are meant to be internal (but you’re still welcome to use the open-source implementation within Firefox to mess with them).

                                                            1. 2

                                                            I got downvoted twice as “incorrect”, though I tried my best to be neutral and objective. Please let me know what I should change to make these statements more correct, and why. I’m happy to have this conversation.

                                                              1. 0

                                                                Priorities can be criticized.

                                                              Mozilla obviously has more than enough money that they could pay devs to fix this — just sell Mozilla’s investment in Cliqz GmbH and there would be enough to do so.

                                                                But no, Mozilla sets its priorities as limiting what users can do, adding more analytics and tracking, and more cross promotions.

                                                              Third-party cookie isolation still isn’t fully done, while at the same time money is spent on adding more analytics to AMO, on Cliqz, on the Mr Robot addon, and even on Pocket. Which still isn’t open source.

                                                                Mozilla has betrayed every single value of its manifesto, and has set priorities opposite of what it once stood for.

                                                                That can be criticized.

                                                                1. 11

                                                                Wow, that escalated quickly :) It sounds to me like you’re already arguing in bad faith, but I think I’ll be able to respond to each of your points individually in a meaningful and polite way. Maybe we can uplift this conversation a tiny bit? However, I’ll do this with my Mozilla hat off, as this is purely based on public information and I don’t work on Cliqz or Pocket or any of those things you mention. Here we go:

                                                                  • Cliqz: Mozilla wants a web with more than just a few centralized search engines. For those silos to end, decentralization and experimentation is required. Cliqz attempts to do that
                                                                  • Telemetry respects your privacy
                                                                • You can isolate cookies easily, either based on custom labels (“Multi Account Containers”) or based on the first-party domain (i.e., the website in the URL bar). The former is in the settings, the latter is behind a pref (first party isolate). For your convenience, there’s also an add-on for first party isolation
                                                                • Cross Promotions: The web economy is based on horrible ads that annoy and track users. To show that ads can be profitable without tracking or annoying anyone, Mozilla shows sponsored content (opt-out, btw) by computing the recommendations locally on your own device
                                                                  • Some of the pocket source code is already open source. It’s not a lot, that’s true. But we consider that a bug.
                                                                  1. 2

                                                                  As someone who has also gotten into 1-3 arguments about Firefox, I guess you’ll always have to deal with criticism that is nit-picking, because you’ve written “OSS, privacy-respecting, open web” on your chest. Still, it is obvious you won’t implement an lz4 file upgrade mechanism (oh boy, is that funny when it’s only some tiny app and its sqlite tables), because there are much more important things than two users not being able to use their default tools to inspect the internals of Firefox.

                                                                    1. 2

                                                                    Sure, but it’s obvious that somehow Mozilla has enough money to buy shares in a subsidiary of one of the largest advertisement and tracking companies (Burda - the company best known for shitty ads and its tabloids - owns Cliqz and retains majority control).

                                                                      And yet, there’s not enough left to actually fix the rest.

                                                                      And no, I’m not talking about Telemetry — I’m talking about the fact that about:addons and addons.mozilla.org use proprietary analytics from Google, and send all page interactions to Google. If I wanted Google to know what I do, I’d use Chrome.

                                                                      Yet somehow Mozilla also had enough money to convert all its tracking from the old, self-hosted Piwik instance to this.

                                                                      None of your arguments fix the problem that Mozilla somehow sees it as higher priority to track its users and invest in tracking companies than to fix its bugs or promote open standards. None of your arguments even address that.

                                                                      1. 3

                                                                      about:addons’ use of Google Analytics has been fixed; it now uses the telemetry APIs, adhering to the global control toggle. Will update with the link when I’m not on a phone.

                                                                        Either way, Google Analytics uses a mozilla-customized privacy policy that prevents Google from using the data.

                                                                        If your tinfoil hat is still unimpressed, you’ll have to block those addresses via /etc/hosts (no offense.. I do too).

                                                                    2. 3

                                                                      I won’t comment on the rest of your comment, but this is really a pretty tiny issue. If you really want to read your sessionstore as a JSON file, it’s as easy as git clone https://github.com/Thrilleratplay/node-jsonlz4-decompress && cd node-jsonlz4-decompress && npm install && node index.js /path/to/your/sessionstore.jsonlz4. (that package isn’t in the NPM repos for some reason, even though the readme claims it is, but looking at the source code it seems pretty legit)

                                                                    Sure, this isn’t perfect, but dude, it’s just an internal data structure that uses a slightly non-standard format - one which still has open-source tools to easily read it. And looking at the source code, the format is only slightly different from regular lz4.
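                                                                    For the curious, the container really is simple - from what I can tell, it’s just an 8-byte “mozLz40\0” magic followed by LZ4 block data, which itself starts with a little-endian uint32 giving the decompressed size. A rough Python sketch of the header handling (actual decompression needs an LZ4 binding, e.g. the third-party lz4 package):

```python
import struct

MAGIC = b"mozLz40\x00"  # 8-byte magic at the start of .jsonlz4 files

def split_mozlz4(data: bytes):
    """Return (decompressed_size, lz4_block_payload) from a mozLz4 blob."""
    if not data.startswith(MAGIC):
        raise ValueError("not a mozLz4 file")
    (size,) = struct.unpack_from("<I", data, len(MAGIC))
    return size, data[len(MAGIC) + 4:]

# With the third-party lz4 package, the rest would be roughly:
#   import lz4.block
#   json_bytes = lz4.block.decompress(data[len(MAGIC):])
# since lz4.block expects that same uint32 size prefix by default.
```

                                                                    In other words, “slightly non-standard” here means one extra magic header in front of otherwise ordinary LZ4 block data.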