1. 6

    I have 2 set up so far, but I have 2 more I’m considering.

    For the two I have, one is a Pi 4 4GB running Kodi (by way of LibreELEC), connected to my TV. The other is a Pi 3 running Home Assistant. Both run cool and stable, and each is dedicated to a single purpose.

    I’m also considering running a music server (some combination of MPD and snapcast) on one of the remaining Pis. I purchased a 314GB HDD from WD back when they sold them stupid cheap, and it’s been sitting idle since then. That’s plenty of storage for a music collection.

    1. 2

      I really like this. I have been thinking about replacing my Apple TV for some time. How do you control your Kodi install? A keyboard? Or does it work with some kind of remote control?

      1. 3

        You can use your TV remote (if it’s connected over HDMI and supports the CEC protocol).

        1. 2

          Oh I see, I have to check that out. Thanks!

        2. 1

          You can also use a mobile app (Kore) to control Kodi on your local network.

      1. 4

        This is such a well-reasoned and smart approach to time representation for computing. I’m sad that it wasn’t adopted more widely (namely, by UNIX systems of the time), but it’s also more complex than simply counting seconds.

        1. 0

          The worse solution is clearly the better one.

        1. 2

          Clever approach. I wonder if this can be (or already has been) spread to other OSes.

          1. 8

            I fell back to an old approach:

            Step 1. What are my friends, coworkers, or target audience using?

            Step 2. Install that.

            Step 3. Send them a message.

            Step 4. Depending on audience, try to interest them in private messaging.

            1. 3

              Same. Which means I basically have half a dozen messenger apps.

              1. 2

                I like your style. A lot of people say ‘oh my friends would never switch’. Often people never even bother asking. In my experience people are more receptive to new things than they are given credit for.

                1. 2

                  And, private conversations are a good selling point. Most people want that, but don’t change the defaults. If you switch them to a private-by-default alternative, they sometimes become cheerleaders in their own circles.

              1. 2

                I don’t consider this a real risk, for two reasons:

                1. People do a decent job of keeping the private key private. So while someone might know that you could use a particular key, they don’t have access to it. Not very useful knowledge.
                2. Most security advice says not to expose SSH to the public, or to put it on a separate host/FQDN away from the obvious or very public hosts.

                This is useful as a bit of enumeration, but doesn’t seem that worrisome.
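
                For a sense of how trivial that enumeration is: GitHub, for instance, serves any user’s public keys in plain text (GitLab has an equivalent endpoint):

                  curl -s https://github.com/&lt;username&gt;.keys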

                1. 3

                  Most security advice says not to expose SSH to the public

                  What would be a more secure entry point to your system? Some VPN? I consider SSH, with only keyfile login allowed and no root login, to be a fairly good and secure entry point to my home network.
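
                  As a sketch, that setup is just a handful of standard OpenSSH sshd_config directives:

                    # /etc/ssh/sshd_config: key-only logins, no root
                    PermitRootLogin no
                    PasswordAuthentication no
                    ChallengeResponseAuthentication no
                    PubkeyAuthentication yes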

                  1. 1

                    Agreed. A VPN is an encrypted tunnel, just like SSH; it’s not inherently more secure than SSH.

                  2. 1
                    1. Likely few people are lucky enough to have a github/gitlab account name that matches their login ID, making the username effectively a salt. This could have been a problem if the comment were retained on the keys served up, though.

                    Enumeration can be worrying for some, similar to how you can use SSL certs to discover supersets of a group of machines you are trying to tie together and identify ownership of.

                  1. 8
                    1. 3

                      This should be updated to include the alpha-channel forms (8 and 4 digits long). I wonder how long the list would get at that point.

                    1. 2

                      Here’s a link to the source code: https://github.com/nickdichev/markdown-live

                      1. 20

                        My advice, which is worth every penny you pay for it:

                        Don’t maintain a test environment. Rather, write, maintain and use code to build an all-new copy of production, making specific changes. If you use puppet, chef, ansible or such to set up production, use the same, and if you have a database, restore the latest backup and perhaps delete records.

                        The specific changes may include deleting 99% of the users or other records, using the smallest possible VM instances if you’re on a public cloud, and should include removing the ability to send mail, but it ought to be a faithful copy by default. Including all the data has drawbacks; including only 1% has drawbacks. I’ve suffered both; pick your poison.

                        Don’t let them diverge. Recreate the copy every week, or maybe even every night, automatically.
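
                        As a sketch (script and helper names here are hypothetical), the recreate can be a cron job that reuses the same provisioning code as production:

                          # /etc/cron.d/rebuild-staging (hypothetical): rebuild nightly at 02:00
                          0 2 * * * deploy /usr/local/bin/rebuild-staging.sh

                          # rebuild-staging.sh: same playbooks as production, staging inventory,
                          # then restore the latest backup and neuter outbound mail
                          ansible-playbook -i inventory/staging site.yml
                          restore-latest-backup --target staging-db   # hypothetical helper
                          disable-outbound-mail staging               # hypothetical helper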

                        1. 9

                          Seconding this.

                          One of the nice things about having “staging” be basically a hot standby of production is that, in a pinch, you can cut over and serve from it if you need to. Additionally, the act of getting things organized to provision that system will usually help you spot issues with your existing production deployment – and if you can’t rebuild prod from a script automatically, you have a ticking time bomb on your hands.

                          As far as database stuff goes, use the database backups from prod (hopefully taken every night) and perhaps run them through an anonymizing ETL to do things like scramble sensitive customer data and names. You can’t beat the shape (and issues) of real data for testing purposes.
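
                          A minimal sketch of that anonymizing pass, assuming PostgreSQL and a hypothetical users table:

                            # after restoring the prod backup into the staging DB,
                            # scramble anything sensitive before opening it to devs
                            psql staging_db -c "
                              UPDATE users
                                 SET email = 'user' || id || '@example.com',
                                     name  = 'User ' || id,
                                     phone = NULL;"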

                          1. 2

                            Pardon a soapbox digression: Friendlysock is a big improvement over your previous persona. Merci.

                            1. 1

                              It’s not a bad idea to make use of a secondary by having it be available to tests, though I would argue instead for multiple availability zones and auto-scaling groups if you want production to be highly available. Having staging as a secondary makes it difficult for certain databases, like Couchbase, to do automatic failover, since the data is not in sync, and in both cases you’re going to have to spin up new servers anyway.

                            2. 8

                              We basically do this. Our production DB (and other production datastores) are restored every hour, so when a developer/tester runs our code they can specify --db=hourly and it will talk to the hourly copy (we actually do this through ENV variables, but that can be overridden with a CLI option). We do the same for daily. We don’t have a weekly.
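
                              (A sketch of that selection logic, with hypothetical names: the ENV variable sets the default, and the CLI flag overrides it.)

                                # pick which DB copy to talk to
                                DB_COPY="${DB_COPY:-daily}"
                                for arg in "$@"; do
                                  case "$arg" in
                                    --db=*) DB_COPY="${arg#--db=}" ;;
                                  esac
                                done
                                exec myapp --database "$DB_COPY"   # hypothetical binary and flag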

                              Most of our development happens in daily. Our development rarely needs to live past a day, as our changes tend to be pretty small these days. If we have some long-lived branch that needs its own DB to play in (like a huge, long-lasting DB change or something), we spin out a copy of daily just for that purpose; we limit it to one, and it’s called dev.

                              All of our debugging and user-issue fixing happens in hourly. It’s very rare that a user bug gets to us in under an hour and can’t be reproduced easily there. When that happens we usually just wait for the next hour tick, to make sure it’s still not reproducible before closing.

                              It makes life very nice to do this. We get to debug and troubleshoot in what is essentially a live environment, with real data, without caring if we break it badly (since it’s just an at most 1 hour old copy of production, and will automatically get rebuilt every hour of every day).

                              Plus, this means all of our dev and test systems have the same security and access controls as production; if we are rebuilding them EVERY HOUR, they need to be identical to production.

                              Also, this is all automated and restored from our near-term backup(s), so we know our backups work every single hour of every day. This does mean keeping your near-term backups very close to production, since they’re tied so tightly to our development workflow. We do of course also keep longer-term backups that are just bit-for-bit copies of the near-term ones frozen at a particular time (i.e. daily, weekly, monthly).

                              Overall, definitely do this and make your development life lazy.

                              1. 1

                                I’m sorry, what is the distinction you’re making that makes this not a test environment? The syncing databases?

                                1. 2

                                  If I understand correctly, the point is that this entire environment, infrastructure included, is effectively ephemeral. It is not a persistent set of servers with a managed set of data; instead, it’s a standby copy of production recreated every week, or every day. Thus, it’s less of a classic environment and more like a temporary copy (that is always available).

                                  1. 4

                                    Yes, precisely.

                                    OP wants the test environment to be usable for testing, etc., all of which implies that for the unknown case that comes up next week, the test and production environments should be equivalent.

                                    One could say “well, we could just maintain both environments, and when we change one we’ll do the same change on the other”. I say that’s rubbish; it doesn’t happen; sooner or later the test environment has unrealistic data and significant but unknown divergences. The way to get equivalence is to force the two to be the same, so that

                                    • quick hacks done during testing get wiped and replaced by a faithful copy of production every night or Sunday
                                    • mistakes don’t live forever and slowly increase divergence
                                    • data is realistic by default and every difference is a conscious decision
                                    • people trust that the test environment is usable for testing

                                    Put differently, the distinction is not the noun (“environment”) but the verb (“maintain” vs “regenerate”).

                                    1. 2

                                      Ah, okay. That’s an interesting distinction you make – I take it for granted that the entire infrastructure is generated with automation and hence can be created / destroyed at will.

                                      1. 2

                                        LOLWTFsomething. Even clueful teams fail a little now and then.

                                        Getting the big important database right seems particularly difficult. Nowhere I’ve worked and nowhere I’ve heard details about was really able to tear down and set up the database without significant downtime.

                              1. 2

                                I believe this should be merged with an earlier discussion - https://lobste.rs/s/bm0db2/implementing_fully_immutable_files

                                1. 1

                                  This project looks very similar in spirit to https://wiki.archlinux.org/index.php/Netctl. I’d be curious to play with this and see if it solves any issues that netctl (which I’ve always used on my Arch systems) cannot, or solves them more easily.

                                  1. 1

                                    On my Arch servers I stick with systemd-networkd. How has netctl been for you?

                                    1. 1

                                      I’ve been using netctl since netcfg was deprecated a very, very long time ago, with basically zero issues on my desktop or laptops (other than a really strange glance at a coffee shop once for using wifi-menu to connect…).

                                      netctl-auto is fantastic for laptop purposes too, generally speaking.

                                      I haven’t used systemd-networkd to give you any sense of comparison between the two.

                                  1. 1

                                    What are the pros and cons of either method when using this vs. Ansible with networking modules?

                                    1. 3

                                      Well, Ansible’s networking modules aren’t actually about managing a general-purpose server’s network configuration at all. They are all about logging into a managed switch/router (e.g. Cisco or Juniper hardware) and speaking its shell configuration language (e.g. Cisco IOS).

                                      But I’m guessing that’s not what you mean, and that you’re actually referring to using Ansible to deploy network configuration details to a bunch of servers. There is one fundamental difference in approach which should inform your choice: Netplan is local, Ansible is remote.

                                      Ansible requires a server to already have an IP so that it can reach it from a remote source (your laptop). Netplan is meant to run as part of the boot-up procedure, assigning the server its first IP address. Both can manage NetworkManager or systemd-networkd configuration, so beyond that it’s a more personal choice. Between managing the configuration directly via Ansible and templated files, or indirectly via a YAML configuration file for Netplan, which is more appropriate for your network?
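
                                      For reference, a minimal Netplan YAML looks like this (interface name and addresses assumed):

                                        # /etc/netplan/01-static.yaml
                                        network:
                                          version: 2
                                          renderer: networkd
                                          ethernets:
                                            eth0:
                                              addresses: [192.168.1.10/24]
                                              gateway4: 192.168.1.1
                                              nameservers:
                                                addresses: [192.168.1.1]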

                                      Personally, the way I’d see it, Netplan is much more useful in highly dynamic environments, like cloud providers. While Ansible fits a bit better in slightly more stable environments, such as on-premise deployments of small/medium companies.

                                    1. 2

                                      This mailing list thread is from August 2014, and it’s been almost four and a half years since then. Is this still relevant?

                                      1. 1

                                        They are still making new Unicode standards, so at least as far as strcoll goes, collation order can change at any time.
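
                                        You can see that locale dependence directly; sort(1) uses the same collation rules:

                                          $ printf 'a\nB\n' | LC_ALL=C sort            # bytewise: B before a
                                          B
                                          a
                                          $ printf 'a\nB\n' | LC_ALL=en_US.UTF-8 sort  # collated: a before B
                                          a
                                          B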

                                      1. 12

                                        Depends on how far you are willing to go, I guess. At one extreme you have airgapped devices, always-on VPNs, and a phone that either never leaves the house or is always on airplane mode. In the middle you have some sane practices, like using Signal for most communication, basic use of VPNs, avoiding exposing information on social media platforms, and generally trying not to be a “data product”. At the other extreme you have “data cows” in the “data farm” that enjoy the cushy life that comes with always-on surveillance.

                                        Your list should probably be tiered based on who you are addressing and where their starting point is.

                                        1. 8

                                          Agreed. I can come up with a thousand ideas, but which ones are useful or not will depend on your “threat model” – who exactly are you trying to protect yourself from.

                                          • If you’re trying to protect yourself from opportunistic hackers who don’t have a personal/vested interest in you, there’s some simple steps.
                                          • If you’re trying to protect yourself from someone you know and interact with on a semi-frequent basis, there’s some more steps to build on top of that.
                                          • If you’re trying to protect yourself from very close people, like a spouse, here’s some advanced stuff with the caveat that many people in that position take a negative view of you doing that.
                                          • If you’re trying to protect yourself from governments, here’s a much harder list which will dramatically impact how you use the internet and basically every computing device out there.

                                          And that’s just one facet of your model. There are more details: who do you want to have access? How much do you value absolute privacy versus connecting with less paranoid people? How flexible are you in any of your positions?

                                          So the unfortunate thing is that there isn’t any universal list of steps, since there isn’t an average person – everyone will answer these and other questions differently. Which steps each person should take depends on those answers.

                                          1. 2

                                            If you’re trying to protect yourself from governments, here’s a much harder list which will dramatically impact how you use the internet and basically every computing device out there.

                                            That’s the old model from hacker culture. Thing is, it ignored economics: the black market turned into lots of specialists cranking out more stuff at lower prices. New and old vulnerabilities are plentiful. The services people use are often insecure by default. Whether you’re of interest or not, it doesn’t take a government’s resources to attack what lots of people use. At one point, I saw kits online for the price of a gaming rig. Expect it to get worse as people put more smart devices in their homes.

                                            1. 4

                                              Indeed, I simplified a bit too much. As you said, what was once thought possible only with the resources of a nation-state is now more readily available. For example, all of the NSA’s “Shadow Brokers” stuff is now in the hands of many more people in the black/grey markets.

                                              When I originally wrote the comment, I was thinking more about BGP hijacking, Sybil attacks, and the like - but a threat model is not a static document. It changes, both because of personal changes and because market forces change. What is possible changes every day.

                                              1. 2

                                                Shadow Broker stuff is a great example. Forgot to mention the leaks. :)

                                        1. 2

                                          Here’s a quick fork of it adding colors and a loop: https://gist.github.com/evaryont/95e530853829495fc86870dff26d17b7

                                          1. 2

                                            That sounds nice! Though with 21.5% of the certificates surveyed in the wild being invalid, I wonder if it would be more interesting to propose a grammar description that is more lenient than the standard. I’ll read up on the finer details to see if they were able to classify the violations (but at 32 pages, I’ll likely only finish reading when this thread has long since left the front page :)).

                                            Also, it’s a bit unfortunate that they sourced certificates by talking to the public IPv4 space. I suppose they’ll miss quite a few certificates due to CDNs and virtual hosting. It seems likely that you’d get a more realistic set by looking at Certificate Transparency logs. The upside would be that you’d skew towards actual in-use production certs and avoid all sorts of oddities (e.g., self-signed certs). Building on top of this would also be more useful for informing suggestions of better practices and standard improvements.

                                            1. 2

                                              Sounds like some great ideas! :)

                                              1. 1

                                                I don’t know if you have seen this yet, but did they also mention which ports they hit in the public space to get the certificates? I wonder how many open ports >1024 are out there with valid, publicly-signed certs. (I know of a few myself for sure, running ports like 8080 or 4567.)

                                              1. 3

                                                 I wish there were a comparison between this and zsh-syntax-highlighting, and why one might want to switch.

                                                1. 1

                                                  Well, the comparison would be with a superset: fizsh includes zsh-syntax-highlighting and zsh-history-substring-search, with all three projects part of the zsh-users group.
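
                                                  A rough equivalent in plain zsh is just sourcing both plugins yourself (paths assume a clone into ~/.zsh, which is hypothetical):

                                                    # ~/.zshrc
                                                    source ~/.zsh/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh
                                                    source ~/.zsh/zsh-history-substring-search/zsh-history-substring-search.zsh
                                                    bindkey '^[[A' history-substring-search-up
                                                    bindkey '^[[B' history-substring-search-down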

                                                1. 4

                                                  For Ruby fans, I think Middleman has a very similar approach and would fit this use case quite well.

                                                  1. 1

                                                    I’m having trouble confirming one way or another, so hopefully someone’s Google-fu is stronger than mine:

                                                    Is the exFAT patent that Microsoft has successfully sued companies into paying for (including Samsung, as mentioned in this article) included in this collection or not? Some people are saying it’s not included.

                                                    1. 6

                                                      The headline is click-baity (but one I still agree with), since Jeff’s message that clarifies his stance is later on in the post:

                                                      Computers, courtesy of smartphones, are now such a pervasive part of average life for average people that there is no longer any such thing as “computer security”. There is only security. In other words, these are normal security practices everyone should be familiar with. Not just computer geeks. Not just political activists and politicians. Not just journalists and nonprofits.

                                                      1. 6

                                                        I thought it was echoing something a lot of us are saying: the devices are so ridiculously insecure, with so much attack surface, that computer security doesn’t exist. One must therefore avoid them for secrets or high-integrity activities wherever possible, while operating in the mindset that what’s left is already compromised on some level. Then I read the article; it was about something else entirely. I thought the title sucked, and moved on.

                                                        1. 2

                                                          It would be a far more interesting article if it were about designing to avoid footguns. Unfortunately, that wasn’t the case :-(