1. 8
    1. 3

      This should be updated to include the alpha channels (8 and 4 digits long). I wonder how long the list would get at that point.

    1. 2

      Here’s a link to the source code: https://github.com/nickdichev/markdown-live.

      1. 20

        My advice, which is worth every penny you pay for it:

        Don’t maintain a test environment. Rather, write, maintain, and use code to build an all-new copy of production, making specific changes. If you use Puppet, Chef, Ansible, or the like to set up production, use the same here, and if you have a database, restore the latest backup and perhaps delete records.

        The specific changes may include deleting 99% of the users or other records, or using the smallest possible VM instances if you’re on a public cloud, and should include removing the ability to send mail; but it ought to be a faithful copy by default. Including all the data has drawbacks; including only 1% has drawbacks. I’ve suffered both; pick your poison.

        Don’t let them diverge. Recreate the copy every week, or maybe even every night, automatically.
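
        To make that concrete, here’s a minimal sketch of a rebuild driver (the inventory, playbook, dump path, and table names are all placeholders for your own setup):

        #!/usr/bin/env python3
        # Rebuild the test environment as a fresh copy of production.
        import subprocess

        def run(cmd):
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True)

        # 1. Provision with the exact same code that builds production.
        run(["ansible-playbook", "-i", "inventories/test", "site.yml"])

        # 2. Restore last night's production backup into the test database.
        run(["pg_restore", "--clean", "--dbname=test_db", "/backups/prod-latest.dump"])

        # 3. Apply the deliberate differences: thin out records, disable outbound mail.
        run(["psql", "test_db", "-c", "DELETE FROM users WHERE id % 100 <> 0;"])
        run(["psql", "test_db", "-c", "UPDATE settings SET value='off' WHERE key='smtp';"])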

        1. 9

          Seconding this.

          One of the nice things about having “staging” be basically a hot standby of production is that, in a pinch, you can cut over and serve from it if you need to. Additionally, the act of getting things organized to provision that system will usually help you spot issues with your existing production deployment; and if you can’t rebuild prod from a script automatically, you have a ticking time bomb on your hands.

          As far as database stuff goes, use the database backups from prod (hopefully taken every night) and perhaps run them through an anonymizing ETL to do things like scramble sensitive customer data and names. You can’t beat the shape (and issues) of real data for testing purposes.
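
          A sketch of that anonymizing step, in case it helps; deterministic hashing keeps joins and foreign keys intact while destroying the original text (the column names here are invented):

          import hashlib

          def pseudonymize(field: str, value: str) -> str:
              # Same input always gives the same output, so relationships between
              # rows survive, but the original name/email is gone from the copy.
              digest = hashlib.sha256(f"{field}:{value}".encode()).hexdigest()[:12]
              return f"{field}_{digest}"

          row = {"id": 42, "name": "Jane Doe", "email": "jane@example.com"}
          anon = {
              "id": row["id"],  # primary keys survive untouched
              "name": pseudonymize("name", row["name"]),
              "email": pseudonymize("email", row["email"]) + "@invalid.example",
          }
          print(anon)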

          1. 2

            Pardon a soapbox digression: Friendlysock is a big improvement over your previous persona. Merci.

            1. 1

              It’s not a bad idea to make use of a secondary by making it available to tests. Though I would argue instead for multiple availability zones and auto-scaling groups if you want production to be highly available. Having staging as a secondary makes it difficult for certain databases, like Couchbase, to do automatic failover, since the data is not in sync, and in both cases you’re going to have to spin up new servers anyway.

            2. 8

              We basically do this. Our production DB (and other production datastores) are restored every hour, so when a developer/tester runs our code they can specify --db=hourly and it will talk to the hourly copy (actually we do this through ENV variables, but that can be overridden with a CLI option). We do the same for daily. We don’t have a weekly.
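
              Roughly this shape, if it helps anyone (the names are made up; the point is the ENV default with a CLI override):

              import argparse
              import os

              parser = argparse.ArgumentParser()
              parser.add_argument(
                  "--db",
                  default=os.environ.get("APP_DB", "hourly"),  # ENV variable sets the default...
                  choices=["hourly", "daily", "dev"],
                  help="which restored copy of production to talk to",
              )
              args = parser.parse_args()  # ...and --db=daily overrides it per invocation
              print("connecting to", f"db-{args.db}.internal")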

              Most of our development happens in daily. Our development work rarely needs to live past a day, as our changes tend to be pretty small these days. If we have some long-lived branch that needs its own DB to play in (like a huge, long-lasting DB change or something), we spin out a copy of daily just for that purpose; we limit it to one, and it’s called dev.

              All of our debugging and user-issue fixing happens in hourly. It’s very rare that a user bug reaches us in under an hour and can’t be reproduced easily there. When that happens, we usually just wait for the next hourly tick, to make sure it’s still not reproducible before closing.

              It makes life very nice to do this. We get to debug and troubleshoot in what is essentially a live environment, with real data, without caring if we break it badly (since it’s just an at-most-one-hour-old copy of production, and will automatically get rebuilt every hour of every day).

              Plus, this means all of our dev and test systems have the same security and access controls as production; if we are rebuilding them EVERY HOUR, they need to be identical to production.

              Also, this is all automated and restored from our near-term backup(s), so we know our backups work every single hour of every day. This does mean keeping your near-term backups very close to production, since they’re tied so tightly to our development workflow. We do, of course, also keep longer-term backups that are just bit-for-bit copies of the near-term ones frozen at a particular time (i.e., daily, weekly, monthly).

              Overall, definitely do this and make your development life lazy.

              1. 1

                I’m sorry, what is the distinction you’re making that makes this not a test environment? The syncing databases?

                1. 2

                  If I understand correctly, the point is that this entire environment, infrastructure included, is effectively ephemeral. It is not a persistent set of servers with a managed set of data; instead, it’s a standby copy of production recreated every week, or every day. Thus, it’s less of a classic environment and more like a temporary copy. (That is always available.)

                  1. 4

                    Yes, precisely.

                    OP wants the test environment to be usable for testing, etc., all of which implies that for the unknown case that comes up next week, the test and production environments should be equivalent.

                    One could say “well, we could just maintain both environments, and when we change one we’ll do the same change on the other”. I say that’s rubbish; it doesn’t happen; sooner or later the test environment has unrealistic data and significant but unknown divergences. The way to get equivalence is to force the two to be the same, so that

                    • quick hacks done during testing get wiped and replaced by a faithful copy of production every night or every Sunday
                    • mistakes don’t live forever and slowly increase divergence
                    • data is realistic by default and every difference is a conscious decision
                    • people trust that the test environment is usable for testing

                    Put differently, the distinction is not the noun (“environment”) but the verb (“maintain” vs “regenerate”).

                    1. 2

                      Ah, okay. That’s an interesting distinction you make; I take it for granted that the entire infrastructure is generated with automation and hence can be created / destroyed at will.

                      1. 2

                        LOLWTFsomething. Even clueful teams fail a little now and then.

                        Getting the big important database right seems particularly difficult. Nowhere I’ve worked and nowhere I’ve heard details about was really able to tear down and set up the database without significant downtime.

              1. 2

                I believe this should be merged with an earlier discussion - https://lobste.rs/s/bm0db2/implementing_fully_immutable_files

                1. 1

                  This project looks very similar in spirit to https://wiki.archlinux.org/index.php/Netctl. I’d be curious to play with it and see whether it solves any issues that netctl (which I’ve always used on my Arch systems) can’t, or can’t solve as easily.

                  1. 1

                    On my Arch servers I stick with systemd-networkd. How has netctl been for you?

                    1. 1

                      I’ve been using netctl since netcfg was deprecated a very, very long time ago - basically zero issues on my desktop or laptops (other than a really strange glance at a coffee shop once for using wifi-menu to connect…)

                      netctl-auto is fantastic for laptop purposes too, generally speaking.

                      I haven’t used systemd-networkd, though, so I can’t give you any sense of comparison between the two.

                  1. 1

                    What are the pros and cons of either method when using this vs. Ansible with networking modules?

                    1. 3

                      Well, Ansible’s networking modules aren’t actually about managing a general-purpose server’s network configuration at all. They are all about logging into a managed switch/router (e.g. Cisco or Juniper hardware) and speaking its shell configuration language (e.g. Cisco IOS).

                      But I’m guessing that’s not what you mean, and you’re actually referring to using Ansible to deploy network configuration details to a bunch of servers. There is one fundamental difference in approach which should inform your choice: Netplan is local, Ansible is remote.

                      Ansible requires a server to already have an IP so that it can be reached from a remote source (your laptop). Netplan is meant to run as part of the boot-up procedure, assigning the server its first IP address. Both can manage NetworkManager or systemd-networkd configuration, so beyond that it’s a more personal choice. Between managing the configuration directly via Ansible and templated files, or indirectly via a YAML configuration file for Netplan, which is more appropriate for your network?
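
                      For a sense of the Netplan side, a minimal configuration is just a few lines of YAML (the interface name and renderer here are only illustrative):

                      network:
                        version: 2
                        renderer: networkd
                        ethernets:
                          eth0:
                            dhcp4: true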

                      Personally, the way I see it, Netplan is much more useful in highly dynamic environments, like cloud providers, while Ansible fits a bit better in slightly more stable environments, such as on-premise deployments at small/medium companies.

                    1. 2

                      This mailing list thread is from August 2014, and it’s been almost four and a half years since then. Is this still relevant?

                      1. 1

                        They are still making new Unicode standards, so at least as far as strcoll is concerned, that can change at any time.
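
                        The locale dependence is easy to see from Python, which wraps the same C calls (this assumes the en_US.UTF-8 locale is installed):

                        import functools
                        import locale

                        words = ["zebra", "Apple", "apple", "Zebra"]
                        print(sorted(words))  # plain code-point order: uppercase first

                        # strcoll consults the locale's collation tables, which track Unicode/CLDR
                        # data and so can change out from under you across OS updates.
                        locale.setlocale(locale.LC_COLLATE, "en_US.UTF-8")
                        print(sorted(words, key=functools.cmp_to_key(locale.strcoll)))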

                      1. 12

                        Depends on how far you are willing to go, I guess. At one extreme you have airgapped devices, always-on VPNs, and a phone that either never leaves the house or is always in airplane mode. In the middle you have some sane practices like using Signal for most communication, basic use of VPNs, avoiding exposing information on social media platforms, and generally trying not to be a “data product”. At the other extreme you have “data cows” in the “data farm” who enjoy the cushy life that comes with always-on surveillance.

                        Your list should probably be tiered based on who you are addressing and where their starting point is.

                        1. 8

                          Agreed. I can come up with a thousand ideas, but which ones are useful will depend on your “threat model”: who, exactly, you are trying to protect yourself from.

                          • If you’re trying to protect yourself from opportunistic hackers who don’t have a personal/vested interest in you, there are some simple steps.
                          • If you’re trying to protect yourself from someone you know and interact with on a semi-frequent basis, there are some more steps to build on top of that.
                          • If you’re trying to protect yourself from very close people, like a spouse, here’s some advanced stuff with the caveat that many people in that position take a negative view of you doing that.
                          • If you’re trying to protect yourself from governments, here’s a much harder list which will dramatically impact how you use the internet and basically every computing device out there.

                          And that’s just one facet of your model. There are more questions: who do you want to have access? How much do you value absolute privacy versus connecting with less paranoid people? How flexible are you in any of your positions?

                          So the unfortunate thing is there isn’t any universal list of steps, since there isn’t an average person; everyone will answer these and other questions differently. Which steps each person should take depends on those answers.

                          1. 2

                            If you’re trying to protect yourself from governments, here’s a much harder list which will dramatically impact how you use the internet and basically every computing device out there.

                            That’s the old model from hacker culture. Thing is, it ignored economics: the black market turned into lots of specialists cranking out more stuff at lower prices. New and old vulnerabilities are plentiful. The services people use are often insecure by default. Whether there’s a vested interest or not, it doesn’t take a government’s resources to attack what lots of people use. At one point, I saw kits online for the price of a gaming rig. Expect it to get worse as people put more smart devices in their homes.

                            1. 4

                              Indeed, I simplified a bit too much. As you said, what was once thought possible only with the resources of a nation-state is now more readily available. For example, all of the NSA’s “Shadow Brokers” stuff is now in the hands of many more people in the black/grey markets.

                              When I originally wrote the comment, I was thinking more about BGP hijacking, Sybil attacks, and the like. But a threat model is not a static document: it changes, both because of personal changes and because market forces change. What is possible changes every day.

                              1. 2

                                The Shadow Brokers stuff is a great example. Forgot to mention the leaks. :)

                        1. 2

                          Here’s a quick fork of it adding colors and a loop: https://gist.github.com/evaryont/95e530853829495fc86870dff26d17b7

                          1. 2

                            That sounds nice! Though with 21.5% of the certificates surveyed in the wild being invalid, I wonder if it would be more interesting to propose a grammar description that is more lenient than the standard. I’ll read up on the finer details to see whether they were able to classify the violations (but at 32 pages, I’ll likely only finish reading when this thread has long since left the front page :)).

                            Also, it’s a bit unfortunate that they sourced certificates by talking to the public IPv4 space. I suppose they’ll miss quite a few certificates due to CDNs and virtual hosting. It seems likely that you’d get a more realistic set by looking at Certificate Transparency logs. The upside would be that you’d skew towards actual in-use production certs and avoid all sorts of oddities (e.g., self-signed certs). Building on top of this would also do more to inform suggestions for better practices and standards improvements.
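
                            Sourcing from CT logs is pretty approachable, too: RFC 6962 logs expose a plain JSON API (the log URL below is a placeholder, and parsing the leaf structure is elided):

                            import base64
                            import requests

                            LOG = "https://ct.example.com"  # placeholder; substitute a real RFC 6962 log

                            # get-entries returns consecutive Merkle tree leaves, each wrapping a cert.
                            resp = requests.get(f"{LOG}/ct/v1/get-entries", params={"start": 0, "end": 31})
                            resp.raise_for_status()
                            for entry in resp.json()["entries"]:
                                leaf = base64.b64decode(entry["leaf_input"])  # a MerkleTreeLeaf structure
                                # ...parse the TimestampedEntry inside to extract the DER certificate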

                            1. 2

                              Sounds like some great ideas! :)

                              1. 1

                                I don’t know if you have seen this yet, but did they also mention which ports they hit in the public space to get the certificates? I wonder how many open ports >1024 are out there with valid, publicly-signed certs. (I know of a few myself for sure, running ports like 8080 or 4567.)

                              1. 3

                                  I wish there were a comparison between this and zsh-syntax-highlighting, and why one might want to change.

                                1. 1

                                    Well, the comparison would be a superset: fizsh includes zsh-syntax-highlighting and zsh-history-substring-search, with all three projects being part of the zsh-users group.

                                1. 4

                                  For Ruby fans, I think Middleman has a very similar approach and would fit this use case quite well.

                                  1. 1

                                    I’m having trouble confirming one way or another, so hopefully someone’s Google-fu is stronger than mine:

                                    Is the exFAT patent that Microsoft has successfully sued companies into paying for (including Samsung as mentioned in this article) included or not in this collection? Some people are saying it’s not included.

                                    1. 6

                                        The headline is click-baity (but it’s one I still agree with), since Jeff’s message clarifying his stance comes later in the post:

                                      Computers, courtesy of smartphones, are now such a pervasive part of average life for average people that there is no longer any such thing as “computer security”. There is only security. In other words, these are normal security practices everyone should be familiar with. Not just computer geeks. Not just political activists and politicians. Not just journalists and nonprofits.

                                      1. 6

                                          I thought it was echoing something a lot of us are saying: the devices are so ridiculously insecure, with so much attack surface, that computer security doesn’t exist. One must then avoid them for secrets or high-integrity activities wherever possible, while operating in a mindset that whatever’s left is already compromised on some level. Then I read the article; it was about something else entirely. I thought the title sucked, and moved on.

                                        1. 2

                                            It would be a far more interesting article if it were about designing to avoid footguns. Unfortunately, that wasn’t the case :-(

                                      1. 7

                                        “I love my domain registrar.” Has anyone ever said this?

                                        Yes, because I use gandi.net

                                        1. 3

                                            I have been using Gandi for more than 10 years and they are stellar. They contribute financially to free software projects, they are technically competent, and the two times I had to contact support they were extremely helpful. So, yes, I love my registrar.

                                          I have also used their VPS hosting for nearly a decade without any issues.

                                          1. 2

                                            Same! Gandi is amazing, no reservations at all. I very much love them and they have been absolutely a pleasure to do business with.

                                            1. 1

                                              Same but with Hover.

                                            1. 3

                                                I mostly use zsh and its suite of completions. I would possibly use bash’s more often if the various servers I administer installed the completion package (which is usually broken out into a separate package), but they don’t and I don’t care to push for that. (Again, I’m happy with zsh’s built-in library.)

                                                Fish does a neat trick to supplement its completions: it parses the output of man! Pretty good for most commands, but some weird manpages will throw it for a loop. Those are few and far between, IIRC, but I haven’t touched fish in a long while.

                                              1. 4

                                                 I use zsh, which has the most comprehensive completion coverage, especially when you consider things like support for system commands on non-Linux systems. Note that zsh itself includes most of them; the separate zsh-completions project is only a small collection, and of lower quality.

                                                 Zsh’s is much the superior system, but you’d have to emulate a whole lot more to support its completions. Completion matches can have descriptions, which makes them vastly more useful. The process of matching what is on the command line against the candidates is much more flexible and is not limited to dividing the command line up by shell arguments: any arbitrary point can be the start and finish point for each completion candidate. And, as that implies, what is to the right of the cursor can also be significant.

                                                 My advice would be to take the zsh compadd builtin approach, which is more flexible and extensible than compgen/complete, and do your own implementation of _arguments (which covers 90% of most completions) and similarly your own _files, etc. It would then be straightforward for people to write completions targeting both oilshell and zsh.
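
                                                 For anyone unfamiliar, the _arguments style is quite compact; here’s a made-up example (mytool and its flags are invented, not from any real completion):

                                                 #compdef mytool
                                                 # Each spec is 'option[description shown beside the match]';
                                                 # a trailing ':message:action' completes the option's own argument.
                                                 _arguments \
                                                   '-v[produce verbose output]' \
                                                   '--output=[write results to a file]:output file:_files' \
                                                   '1:input file:_files'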

                                                1. 2

                                                  Hm interesting, yeah I am looking around the Completion/ dir in the zsh source now and it looks pretty rich and comprehensive.

                                                  I also just tried out zsh and I didn’t realize it had all the descriptions, which is useful too. Don’t they get out of date though? I guess most commands don’t change that much?

                                                   I recall skimming through parts of the zsh manual about a year ago, and from what I remember there are two different completion systems, and it seemed like there was a “froth” of bugs, or at least special cases.

                                                   I will take another look; maybe that impression is wrong.

                                                  I think the better strategy might be to get decent bash-like completion for OSH, and then convince someone to contribute ZSH emulation :)

                                                   I guess I am mainly interested in the shell system that has the best existing corpus of completion scripts, because I don’t want to boil the ocean and duplicate that logic in yet another system. zsh does seem like a good candidate for that. But I don’t understand yet how it works. Any pointers are appreciated.

                                                   I’ll look into _arguments… it might cover 90% of cases, but it’s not clear what it would take to run 90% of completion scripts unmodified.

                                                  1. 3

                                                    The zsh descriptions do get out of date. The strings are copied by the completion script author, so if the --help text changes, the script will need to be updated too.

                                                     Zsh’s completion system is vast and old, the best combination. That’s why the two engines still exist today, as there are a number of completion scripts that are in the old style. I believe that most of those underneath Completion/ are using the newer system.

                                                    1. 2

                                                       Of the two systems, the old compctl system was deprecated 20 years ago. Everything under Completion/ uses the new system. I wouldn’t say there’s a “froth” of bugs; it’s just that there is a lot to it.

                                                       It isn’t the descriptions so much as the options themselves that can get out of date. The task of keeping them up to date is semi-automated based on sources such as --help output, and they are mostly well maintained.

                                                  1. 8

                                                     After feeling some pain about how complicated all software, IT, and enterprise architecture toolsets are, I got this idea of a dead simple architecture sketching tool based on just nodes and named lines. The input for this tool is a text file that contains triples in the form of subject-verb-object, like so:

                                                    # comments via shebang
                                                    # declare nouns (for better typo recovery)
                                                    internet, web front, app server, DB, Redis
                                                    # declare verbs (perhaps not necessary and may be dropped)
                                                    flows, proxies, reads/writes
                                                    
                                                    # subject-verb-object are separated by more than 1 whitespace (" " or "\t") 
                                                    # somewhat like Robot Framework does it
                                                    # prepositions (to, from, at, etc.) seemed redundant, so not using them
                                                    internet     flows          web front
                                                    web front    proxies        app server
                                                    app server   reads/writes   DB
                                                    app server   reads/writes   Redis
                                                    

                                                     I’m not married to the syntax yet, but it seems fine after a few iterations of experimentation. A tool could then read this and produce a nice graph via Graphviz. And you don’t really need a tool; you could do these sketches on a piece of paper in no time. But when you want to store and share the design, a text file + tool is a good combo.

                                                     I tried to write a parser for this language with Rust’s pest, but that was a bit of a pain; pest’s error modes are quite odd. It seems like when my parser has a problem, pest blows up on position 0 every time. Perhaps I’ll just do a dumb line-by-line parser by hand instead. And I’d like to implement it in Crystal rather than Rust. I also thought about doing it in Swift, but that doesn’t seem to work so well outside of macOS/Linux (I’m thinking BSDs here) yet, so nope.
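
                                                     The dumb line-by-line parser really is only a screenful; a quick sketch of the idea (in Python for brevity, ignoring the noun/verb declarations and typo recovery):

                                                     import re
                                                     import sys

                                                     def parse(lines):
                                                         """Yield (subject, verb, object) triples; fields split on 2+ spaces or tabs."""
                                                         for line in lines:
                                                             line = line.rstrip()
                                                             if not line or line.startswith("#"):
                                                                 continue  # skip blanks and comments
                                                             fields = re.split(r"[ \t]{2,}", line)
                                                             if len(fields) == 3:
                                                                 yield fields  # declaration lines don't split into 3, so they're skipped

                                                     def to_dot(triples):
                                                         edges = "".join(f'  "{s}" -> "{o}" [label="{v}"];\n' for s, v, o in triples)
                                                         return "digraph sketch {\n" + edges + "}"

                                                     if __name__ == "__main__":
                                                         print(to_dot(parse(sys.stdin)))  # pipe through `dot -Tpng` to render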

                                                    Best part? The tool’s name is Architect Sketch.

                                                    1. 5

                                                      Reminds me of mermaid. It has shortcomings but has worked for me most of the times I’ve had to diagram something over the past few years.
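
                                                       For comparison, the example from the parent comment could be expressed in mermaid roughly like this (node IDs invented):

                                                       graph LR
                                                         internet -- flows --> webfront[web front]
                                                         webfront -- proxies --> appserver[app server]
                                                         appserver -- reads/writes --> DB
                                                         appserver -- reads/writes --> Redis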

                                                      1. 4

                                                        I’ve been working on something similar myself, with the goal of actually deploying infrastructure on it. I think this is a good avenue to explore, and pairs really well with rule engines and datalog systems.

                                                        1. 2

                                                          This is something I’ve been thinking about a lot as well.

                                                          For my use case, I just want to dump, document, or understand how my multitude of web apps, daemons, and IoT devices all interact with one another.

                                                           I had built a rough prototype with Ruby and shell scripts using similar syntax, but I couldn’t ever get Graphviz to generate a “pretty” layout for the whole thing.

                                                        1. 6

                                                          For $WORK, we have just deployed a new version of our entire technology platform! Our clients are doing their first major migration starting on the 2nd (today, for most folks reading this), adding 1,000 users per week throughout the entire summer. This is migrating from a physical colo’d C# monolith stack to a hybrid Rails/C# microservice architecture spread across AWS and Azure (with some of it calling back to the older stack). It’s the culmination of many months of efforts by every team in the company.

                                                           I’m nervous, but I’ve been dedicated to load testing it using Selenium, Firefox, and AWS instances (1,300 of them, to be precise). It’s handled that simultaneous load well enough. I hope to finish collecting all of the details and publish a blog post on how I set up this pseudo-botnet of mine.
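
                                                           For the curious, each worker node runs something not much fancier than this (the URL, element names, and credentials are placeholders):

                                                           from selenium import webdriver
                                                           from selenium.webdriver.common.by import By
                                                           from selenium.webdriver.firefox.options import Options

                                                           opts = Options()
                                                           opts.add_argument("--headless")  # the AWS workers have no display
                                                           driver = webdriver.Firefox(options=opts)
                                                           try:
                                                               driver.get("https://app.example.com/login")
                                                               driver.find_element(By.NAME, "username").send_keys("loadtest-01")
                                                               driver.find_element(By.NAME, "password").send_keys("not-a-real-password")
                                                               driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
                                                               print(driver.title)  # sanity-check that the login landed somewhere
                                                           finally:
                                                               driver.quit()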

                                                           My personal infrastructure at home and other personal projects have been put on hold, since I’ve lived and breathed this preparation work for the past couple of months. This week, between refreshing every monitoring page in the network, I hope to spend some time kicking development back into gear.