1. 9

    If you’d asked me five minutes ago what network filesystem would be the best way to share files between Windows and Linux, 9P would not have been in my top five.

    Can other 9P clients access WSL environments this way? Can Windows Explorer connect to other 9P hosts?
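    For anyone who wants to poke at it: assuming a recent Windows build with, say, an Ubuntu distro installed (the distro name and drive letter below are just examples), the 9P server shows up as a UNC path that Explorer, cmd, and net use all understand:

      C:\>dir \\wsl$\Ubuntu\home
      C:\>net use W: \\wsl$\Ubuntu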

    1. 2

      I’m curious to hear your top 5 (or whatever).

      1. 1
        1. SMB (obviously)
        2. WebDAV
        3. FTP
        4. regular HTTP
        5. SFTP

        SFTP goes last because, unlike the others, I don’t think Windows comes with a client by default… although in this crazy WSL era, who knows?

        1. 5
          C:\Users\Calvin>sftp
          usage: sftp [-46aCfpqrv] [-B buffer_size] [-b batchfile] [-c cipher]
                    [-D sftp_server_path] [-F ssh_config] [-i identity_file] [-l limit]
                    [-o ssh_option] [-P port] [-R num_requests] [-S program]
                    [-s subsystem | sftp_server] destination
          
          C:\Users\Calvin>dir C:\Windows\system32\openssh
          
           Directory of C:\Windows\system32\openssh
          
          2018-09-15  05:09 AM    <DIR>          .
          2018-09-15  05:09 AM    <DIR>          ..
          2018-09-15  05:09 AM           322,560 scp.exe
          2018-09-15  05:09 AM           390,144 sftp.exe
          2018-09-15  05:09 AM           491,520 ssh-add.exe
          2018-09-15  05:09 AM           384,512 ssh-agent.exe
          2018-09-15  05:09 AM           637,952 ssh-keygen.exe
          2018-09-15  05:09 AM           530,432 ssh-keyscan.exe
          2018-09-15  05:09 AM           882,688 ssh.exe
          
      2. 2

        I thought you were joking, until I read the link for myself and did a spit-take.

        Based on the wording in the article and the linked article about socket use, I suspect that unmodified 9p clients won’t be able to use this directly. They might have just added some 9p extensions (like Erlang on Xen) to support WSL, or they might have made more drastic (but “legal 9p”) modifications, like Styx on a Brick.

        Still though, this made my day!

      1. 17

        if only there were a browser committed to the open web

        1. 5

          I use Firefox on GNU and Android, which puts me in a tiny minority. See my comment at top level for why I brought up this post.

          1. 10

            My frustration with Firefox is that their support of DRM/EME is a serious violation of the idea of an open web

            1. 16

              I agree that EME is a horrible thing for the open web. However, I think a strong Firefox is one of the most important things for the open web, and people would’ve switched from Firefox to Chrome even faster if Chrome were the only way to play Netflix/HBO/whatever.

              At least they implemented it in the best way possible; AFAIK, Firefox ships with no closed-source EME blobs, just the infrastructure, and the blobs aren’t downloaded until the user requests them.
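              (If you want the infrastructure without ever fetching the blob, there are prefs for it. A sketch for a desktop profile’s user.js; pref names are from memory, so double-check them in about:config:)

                // keep EME off; the Widevine CDM is then never downloaded
                user_pref("media.eme.enabled", false);
                user_pref("media.gmp-widevinecdm.enabled", false);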

              1. 7

                I agree, but if this is our biggest frustration with Firefox, it’s in good shape.

                1. 2

                  there’s also the SSL pushing, and the deferring to Google to say which sites are dangerous

                2. 4

                  You have to pick your battles, and DRM wasn’t one that could be won right away. Firefox held out for a long time after all the other browsers added DRM. That didn’t stop DRM; it just caused everyone to leave Firefox to watch videos.

                  If Firefox keeps its users, it can use its power to win other battles; right now they are adding tracker blocking and stopping autoplay. If they had stuck to having no DRM, they probably wouldn’t even exist anymore.

              2. 2

                there’s not.

              1. 15

                “The reason for this high percentage is because Windows has been written mostly in C and C++, “

                That’s part of the reason. The other part is that Microsoft is intentionally not spending enough money applying the tools for finding bugs, or proving their absence, in C/C++ code. These are tools Microsoft could afford not merely to license: they could acquire the entire company behind one, apply the tool to their code base, do all the fixes once false positives are identified, and still have billions of dollars left. They might even have cornered the market on C/C++ analyzers by buying all of them at some point. They just don’t care.

                Corporate apathy in Microsoft’s product divisions is the major cause here. MS Research deserves to be highlighted for steadily inventing awesome, more-practical-than-usual tools for these things; one knocked out most of their driver issues (aka blue screens). Yet the product divisions don’t adopt more, even the automated stuff. Even Midori, with all its advances, was only fielded in one product that I know of.

                They also use C/C++. The empirical evidence tells us that teams with little focus on, or time for, QA will produce more vulnerable code in those languages than in memory-safe ones. If they are minimizing QA and not throwing all kinds of verification tooling at what their teams produce, they should probably get off C/C++ slowly over time. They need a safe-by-default language with the ability to selectively tune modules, so we at least know where the dragons will be. The rest might be fine… by default.

                That’s why dodging C/C++ is the best fit for these kinds of companies and projects. Which is most of them, as far as I can tell.

                1. 27

                  That’s part of the reason.

                  It’s actually the entire reason.

                  The other part is that Microsoft is intentionally not spending enough money applying the tools for finding bugs, or proving their absence, in C/C++ code. These are tools Microsoft could afford not merely to license: they could acquire the entire company behind one, apply the tool to their code base, do all the fixes once false positives are identified, and still have billions of dollars left. They might even have cornered the market on C/C++ analyzers by buying all of them at some point. They just don’t care.

                  This is an extremely misinformed and baseless opinion. I used to work on MS Office, and I can’t imagine a company spending more time and effort on securing C/C++ codebases: amazing static analysis tooling, insanely smart people on teams entirely dedicated to security, the SDL, etc. It’s a herculean effort.

                  1. 8

                    Microsoft spent a long time not doing security. That let them accumulate a legacy codebase full of problems, especially in its design. They also intentionally used obfuscated formats to block competition; while that achieved its goal, it made long-term maintainability and security worse. While their security features were basic, smaller companies were designing security kernels into their OS’s, building them in safer languages such as Pascal, Ada, and Gypsy. Some of this work predates the existence of Microsoft. Some of those companies tried to sell to Microsoft, which isn’t interested in those or other such products to this day. They build their own stuff with less assurance than projects like this one; see the Layered Design and Assurance sections, which deserve the phrase “herculean effort.”

                    Note: That project, using a basic analysis for timing channels, found side channels in caches using relatively low-cost labor. I hear Microsoft is offering six digits to anyone who finds those with critical impact. Maybe they should’ve just applied the public methods that worked. Data61 did, finding lots of new components to hit.

                    So, Microsoft ignored high-assurance security despite being one of the few companies that could afford to apply it, at least to new code. They got hit by every preventable defect you could think of. Eventually, they caved in to pressure to save their image by hiring Steve Lipner of VAX VMM fame. Microsoft’s priority was quickly shipping massive piles of features not designed with security in mind, connected to legacy code with the same problem. So, Lipner’s philosophy there was to dramatically water down the TCSEC into the SDL, adding some review, testing, and other steps to each phase of their lifecycle. This made huge improvements to the security of their codebase but fell totally short of what Microsoft was capable of doing with the money on hand.

                    Today, we’ve seen both small companies and FOSS projects delivering useful functionality with great security. They often had access to fewer tools than Microsoft. The trick was that they used careful design, minimalism in implementation style, and whatever tools they had available. Unlike Microsoft’s code, theirs did way better during pentesting. We also have some suppliers formally verifying their OS’s or core protocols. MS Research has world-class people in that area whom Microsoft’s managers largely don’t use. I remember Hyper-V using VCC, and the company using some of their static analysis tools. They applied Driver Verifier, which eliminated most blue screens. That’s about it. There are also tools like RV-Match, Trust-in-Soft, and Astree Analyzer that can prove the absence of the kinds of errors hitting Microsoft’s codebase. As far as I know, they’re not using them. There are also about five types of automated test generation that knock out piles of bugs. Their developers didn’t mention Microsoft using them, even though MS Research could’ve clean-slated them in better form in short order if management requested it.

                    Just piles and piles of methods out there, mostly from the 1970’s-1990’s, that do better than what Microsoft is doing now. Altran/Praxis said their Correct-by-Construction methodology cost about 50% extra. Cleanroom costs anywhere from negative (lifecycle savings) to about that. You’re telling me Microsoft is throwing herculean effort at their codebase, but the results consistently have more defects than smaller players’ builds? I doubt that. I think they’re likely throwing herculean effort at inferior design techniques, implementation methods (including C/C++), and inadequate tooling. That’s great for checklists, even producing a lot of results. For actual security, though, their ratio of money spent to vulnerabilities found is way worse than what those smaller firms achieved on projects with a handful of developers. A firm with billions a year in profit, employing people with similar skill, should do better. If they cared and listened to their best experts in security and verification. ;)

                    1. 6

                      They applied Driver Verifier, which eliminated most blue screens. That’s about it. There are also tools like RV-Match, Trust-in-Soft, and Astree Analyzer that can prove the absence of the kinds of errors hitting Microsoft’s codebase. As far as I know, they’re not using them.

                      I don’t know why you’re persisting here, but you have absolutely no information. Microsoft has an internal version of SAL/PREfast that does static analysis beyond the compiler and it’s amazing, and they’ve been using it for far longer than most of the industry has cared about static analysis. It can find things like unsafe uses of raw buffers, unsafe array accesses, return values that go unchecked, etc.
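                      For readers who haven’t seen SAL: it’s a set of source annotations that a PREfast-style checker verifies on both sides of a call. A minimal sketch using only the public SAL2 macros (the function itself is hypothetical, not a real Windows API):

                        #include <sal.h>
                        #include <stddef.h>

                        /* The checker flags callers passing a too-small buffer and
                           implementations that overrun dst or skip NUL-termination. */
                        _Success_(return == 0)
                        int copy_name(
                            _Out_writes_z_(dst_len) char *dst,
                            _In_ size_t dst_len,
                            _In_z_ const char *src);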

                      1. 1

                        Microsoft has an internal version of SAL/PREfast that does static analysis beyond the compiler and it’s amazing

                        I mentioned their use of internal tooling for static analysis in my comment, on top of a few others (SLAM and VCC). Apparently I did have information, since you just corroborated it. Since you’re continuing with unsubstantiated claims of rigour, let’s just go with what others were doing and compare it to Microsoft. Just tell me which of these methods have been in use in your groups. I’d like the names of the tools, too, so any corrections I get tell me about tools that I can pass on to other developers. I’m leaving off review, since I know Lipner had that in every part of the SDL.

                        1. Formal specifications. Z, VDM, ASMs… companies like IBM started specifying requirements and/or high-level design in a way that could capture properties about the data and interfaces. Those tools caught all kinds of errors. VDM and ASM at different points had code generators. Many companies used SPIN for analyzing concurrency in apps and protocols, with similar successes; TLA+ is taking over that niche now. Eiffel and Praxis were using Design by Contract for their stuff (see the contract sketch after this list). Were the requirements and/or design of Microsoft’s components formally specified to prevent errors due to ambiguities, incorrect bounds, and/or interface mistakes?

                        2. Static analyzers. The standard practice at places with funding is to use more than one, since each tends to catch errors the others miss. They’ll use at least one badass commercial tool they have to pay for, combined with a mix of open-source tools with proven track records. NASA teams with less funding than Microsoft were using about four. Some folks pair the commercial ones that find tons of things with false positives with one of the “no false positives,” sound tools. Which three or four other static analyzers was Microsoft using in addition to their internal tool?

                        3. Dynamic analysis. Did they use any of those tools to supplement the static analysis and test generation through program analysis? Teams loading up on tooling will typically try at least one. It’s usually Daikon.

                        4. Testing. Path-based, symbolic, combinatorial, spec/model/contract-based, concolic… lots of phrases and methods in Lobsters submissions for automatically generating tests that hit high coverage throughout the software. If I were Microsoft, I’d alternate one of each during overnight runs; the rest of the time before morning would be fuzzing (see the harness sketch after this list). Which automated test generators were they using while you were at Microsoft? Did they at least try KLEE on production code? And how much fuzzing did they do?

                        5. Concurrency. I mentioned model checking in SPIN. Eiffel also had a model called SCOOP that makes concurrent code about as easy to write as sequential code. There have been a few tools that automatically put in locks after annotations. IBM had a tool for their Java apps that would automatically find lots of concurrency errors. Was Microsoft following high-assurance practice in using something like Ada Ravenscar or SCOOP to be immune to some of these heisenbugs, followed by an analysis tool and/or model checking to catch the dead/livelocks that might slip through?

                        6. Aside from OS’s, a number of projects like cryptlib and the OP web browser (later Chrome, too) were building forms of security kernels inside the app, with isolation mechanisms, enforcement of policies, and/or dropping privileges of the whole app or specific components when unneeded. Gutmann adopted what are called “assured pipelines” inside his library: you specify exactly what order functions may be called in, and in which states of the program, and the calls go through the security kernel instead of just jumps. It can be an actual kernel underneath processes or just a module that checks calls. Did Microsoft’s products use such security mechanisms to prevent situations such as an opened Word document taking control of an entire computer?

                        7. Compilers had a bad habit of introducing bugs and/or removing security checks, and the optimizations also screwed things up a lot. A number of teams in academia started writing their compilers in strongly-typed, safe, functional languages, using a combo of intermediate languages and the type system to improve quality. A few verified compilers for simple languages appeared, with CompCert doing one for C. AdaCore modified their IDEs to let developers easily inspect the generated assembly. Did you hear about or see the compiler teams at Microsoft use 1-5 on top of the safe, functional, type-driven approaches that were working? Did they license CompCert to protect their C code like companies in safety-critical fields are doing? Did they develop or contract their own certifying compiler for C and/or C++?

                        8. Drivers and protocols. SLAM was a nice success story. It would be great if they forced all drivers to also undergo at least 2-4 above. They could also acquire AbsInt to get both CompCert and Astree Analyzer. Astree proves the absence of all the common problems in code structured the way one might write device drivers, on top of probably many other things in the Windows ecosystem. At one point, Astree was the only tool that could do this. Did you hear about Microsoft buying AbsInt, or just licensing Astree to run on as many drivers and protocols as possible, with passing being mandatory for interoperability certification and branding?

                        9. Formal proof. MS Research is awesome at all that stuff. They have tools that make it easier than usual. They even applied some of them to MS products as part of their funded research activities, and Microsoft adopted a few. They have more, though, covering everything from proving code correctness in Dafny to type safety in assembly. Did you see them using any of that, at least on the riskiest and most-used routines? Did any assembly code you saw use their type- and/or memory-safe variants? Were they using Frama-C or SPARK Ada on critical routines?
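                        To make items 1 and 4 concrete with public tooling (a sketch, not a claim about what Microsoft runs internally): a Frama-C/ACSL-style contract is the kind of machine-checked interface spec item 1 asks about, here on a hypothetical routine:

                          /*@ requires n > 0 && \valid_read(a + (0 .. n-1));
                              assigns \nothing;
                              ensures \forall integer i; 0 <= i < n ==> \result >= a[i];
                              ensures \exists integer i; 0 <= i < n && \result == a[i];
                          */
                          int max_of(const int *a, unsigned n);

                        And the push-button end of item 4 is as small as a libFuzzer harness (parse_record is an assumed function under test):

                          #include <stddef.h>
                          #include <stdint.h>

                          int parse_record(const uint8_t *data, size_t len);  /* assumed */

                          /* every overnight crash or sanitizer report becomes a regression test */
                          int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
                              parse_record(data, size);
                              return 0;
                          }
                          /* build: clang -g -fsanitize=fuzzer,address harness.c parser.c */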

                        So, there’s a few, just based on what academics, smaller companies, defense contractors, smartcard companies, and an assortment of players in other industries have been doing. If they can do it, so can Microsoft. Some of this was done on tech that predates Microsoft, so it would’ve been easier for Microsoft to do it than for the forerunners. These are also the diverse array of things that industries do when they really want to achieve the goal: they throw everything at it that they can afford. Most can’t afford to do all of this. Some do, though, with those NASA teams and Praxis having way, way less money than Microsoft’s operations side. They still put in a herculean effort, with some of the lowest-defect, most reliable software in existence coming out. If Microsoft did likewise, you’d be able to name a lot of tools and approaches in 1-9.

                        I actually hope you do, since I’d love to be wrong. The problem is that many of these are trick questions: many CVE’s Microsoft got hit by would be impossible if they applied these methods. So either the empirical data refutes your point, or it refutes my belief that using 1-9 creates low-defect software, and most evidence indicates that 1-9 bottoms out the defects. So, I defaulted to believing Microsoft wasn’t using these methods, wasn’t putting in herculean effort like other companies, some academics, and some folks FOSSing stuff were. Their employees might not even know much of this stuff exists. That would be more damning, given MS Research has bright folks who certainly pushed adoption of some of it. What of 1-9 did Microsoft do in your group and other groups doing production software?

                        Note: That excludes Midori even though it was used in a production app. It was mostly just for research. I haven’t heard about any migrations of their services to it.

                        1. 3

                          This is a really weird conversation. Do you want to know more because you’re curious or are you quizzing me? I honestly can’t tell.

                          1. 3

                            I keep track of what methods different companies use to assure software, what results they get, and (if the results are good) their availability. I do comparisons among them. I do it mostly to learn and pass things on. I also use it to objectively assess what they’re doing for software correctness, reliability, and security. I assessed Microsoft based on their simultaneous decline in 0-days and blue screens (a success of their methods) plus evidence, in what remains, that they were not using the strongest methods they certainly could afford. I claimed apathy was the reason they’d ignore those methods, especially the automated ones, in their assurance activities. They could give their teams the training and budget if they wanted.

                            You said they were doing all kinds of things, putting in massive effort, and that I had no clue what I was talking about. You kept insisting on that last part without evidence beyond an argument from authority (“I was there, I know, trust me”). I very rarely have people tell me I’m clueless about verification tech, especially after seeing my Lobsters submissions and comments. Maybe you’re new, I thought, and missing stuff like this.

                            In any case, whether I’m right or wrong, I decided to test your claim by presenting different classes of assurance techniques that companies outside Microsoft were using (especially projects/companies with fewer resources), describing some benefits/examples, and then asking what Microsoft was doing there. If I’m wrong, you’ll have examples of most of that, maybe even with descriptions of or links to the tools. If I’m right, you’ll be looking at that list saying “we didn’t do almost any of that.” You might even wonder whether it was even possible to do that stuff given the budget and time-to-market requirements forced on you. If so, that makes it even more likely I’m right about them intentionally limiting what security you could achieve to maximize their numbers for the execs.

                            So, I’m simply testing your claims in a way that gets you to give us actual data to work with, maybe teaches us about awesome QA methods, and maybe teaches you about some you weren’t aware of. I try to be helpful even in debate. There are also others reading who might benefit from the specific techniques and assessments in these comments; they’ll tune out as the thread gets deeper and more flamish. Hence me switching right to asking about specific methods already used outside Microsoft, some for decades, to see whether Microsoft was matching that effort or doing something less effective.

                            1. 1

                              I’m sorry, but without primary sources, you’re appealing to (self) authority to make your claim. You seem to be self-assured enough in your position to analyze internal Microsoft decisions.

                              1. 1

                                I’m really not. I’ve endlessly submitted examples of these activities countering problems Microsoft is experiencing. If they’re experiencing them, then it’s a valid inference that Microsoft isn’t using them.

                                The other person is making an argument from authority by saying we should trust a huge claim contradicting empirical observations just because they claim to know what Microsoft was doing. No evidence past that. It’s weird that you call out my comment rather than theirs, or both of ours, if you wanted provable, primary sources or evidence.

                                1. 1

                                  If they’re experiencing them, then it’s a valid inference that Microsoft isn’t using them.

                                  No, you’re creating a black-and-white issue where there isn’t one. Formal methods (as you undoubtedly know) have many issues when used in practice. Maybe Microsoft did try to use these tools (especially given how many of them come from MSR), but found their codebases intractable, or found the amount of instrumentation too onerous to ship without a major rewrite; or maybe they did run these tools and classes of bugs still went unfound, or there were too many false positives to justify using a tool. It feels overly simplistic, and almost disingenuous, to insinuate that just by having certain classes of bugs they didn’t run the formal checkers or implement the formal methods that counteract them.

                                  1. 1

                                    Although you bring up sensible points, let me illustrate why I’m not using them in a Microsoft assessment by applying your comment pre-SDL. That was back when Microsoft was hit by vulnerabilities left and right. We claimed they weren’t investing in security based on that field evidence. We also cited OS’s and products with fewer vulnerabilities and/or better containment as evidence of what they could’ve been doing. You could’ve come in with the same counter:

                                    “No, you’re creating a black-and-white issue where there isn’t one. Security engineering (as you undoubtedly know) has many issues when used in commercial products. Maybe Microsoft did try to use such methods (especially given how many in MSR know about them), but found their codebases intractable, the amount of instrumentation to be too onerous to ship without a major rewrite, or maybe they did do reviews and run security tools and classes of bugs were still not found, or there were too many constraints or false positives to justify attempting to secure their legacy products or use security tools. It feels overly simplistic, and almost disingenuous, to insinuate that just by having certain classes of bugs that they didn’t have a security team in place or use the techniques that counter them.”

                                    Yet we’d have been right, because it wasn’t simplistic: methods and tools that consistently improve things in specific ways would’ve likely done it for them. Then they adopted a few of them, new results came in, and those results matched our outcome-based predictions (i.e., corroboration). Piles of companies, including IBM, were using some things on my list, some on big projects, new and legacy. Hell, fuzzers are a low-effort tool that works on about everything. Like shooting fish in a barrel, I’m told.

                                    I doubt Microsoft is doing something so unique that no extra techniques could’ve helped them. If anything, you’re introducing a new claim against decades of field evidence: that Microsoft’s code is so unique that universal QA methods all fail on it except the few specific tools they’re already using. Since mine is the status quo about QA results, the burden of proof is on you to show Microsoft’s code is the exception to the rule.

                                    That brings me to another problem with your counter: you exclusively focus on formal methods, the smallest part of my comment. If that were all I said, I could about half-concede it to you, at least on the legacy C/C++ code they had. However, my assessment had a range of techniques, including push-button tools like source-based test generators, fuzzers, and 3rd-party static analyzers. We’re talking low-to-no-effort tooling that works on fairly arbitrary code. You’re telling me Microsoft is about the only company that couldn’t benefit from extra tools like that? And with what evidence?

                                    The simplest test of your counter is quickly DuckDuckGoing whether anyone has used a fuzzer on Microsoft products and gotten results with relatively little effort. Yup. I stand by my method of extrapolating likely internal investment in techniques from the results those suppliers achieve, as assessed by independent parties hacking the software. If one is low and the other is high, then they probably aren’t doing enough of the good things.

                    2. 3

                      The point is that they could do more. I believe Microsoft can invest more in security, but probably can’t do so economically right now. Which is, as you said, herculean, since I believe other companies can invest more in security even in an economically sound manner.

                      1. 2

                        The Office team is like a completely separate company compared to how other MS teams work, so maybe Office was better than the others at this?

                        1. 1

                          Could very well be!

                    1. 10

                      This is what I posted to a similar topic over on reddit r/selfhosted recently:

                      Data Center

                      Dedicated FreeBSD 11 server on a ZFS mirror, from OVH. The host is very slim and really just runs ezjail, plus unbound as a local resolver. All the action happens inside jails (a sketch of the workflow follows the list).

                      • MySQL jail - provides database in a “container” for the other jails
                      • PowerDNS jail - Authoritative DNS server, backed by MySQL
                      • LAMP stack jail - a place to host web pages with Apache and PHP, for the most part. Using PHP-FPM with per-vhost chroots and UIDs. Containers within containers (see the pool sketch after this list)! Very happy with this setup. Notably hosts:
                        • Ampache - which the whole family uses for mobile-friendly music streaming.
                        • Chevereto - image hosting
                        • NSEdit - Web app for managing our DNS server.
                        • WP Ultimate Recipe - recipe database built on wordpress
                        • Wallabag - read-it-later article saver like Pocket
                        • Lots of WordPress sites for friends and family and assorted custom scratch-built things and experiments
                      • NextCloud jail - NextCloud on nginx + php-fpm, in its own jail to keep it completely separated. The whole family uses it for files, calendars and contacts.
                      • Minecraft server jail
                      • Email jail - Custom email hosting setup built on Postfix, Courier-IMAP, Maildrop, MySQL, Apache, PHP. I’ve been hosting my own email since the 90s.
                      • Legacy jail - Really just the old server, that went P2V two or three server moves ago - so easy to do when everything lives in ZFS jails. This is deprecated and I have been moving things off it (very slowly).
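                      As referenced above, the jail workflow is small. A sketch (jail name and IP made up), plus the sort of per-vhost PHP-FPM pool that gives each site its own UID and chroot (all names hypothetical):

                        # create, start, and enter a jail for the database
                        ezjail-admin create mysql 192.168.1.10
                        ezjail-admin start mysql
                        ezjail-admin console mysql

                        ; per-vhost PHP-FPM pool: its own user and chroot per site
                        [examplesite]
                        user   = examplesite
                        group  = examplesite
                        chroot = /usr/local/www/examplesite
                        listen = /var/run/php-fpm-examplesite.sock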

                      Home Network

                      PoS old FreeBSD 11 server with a ragtag collection of hard drives on ZFS. It’s mainly our home NAS, storing our media, but also hosts:

                      • Nagios jail - Network monitoring and notification
                      • Asterisk jail - Home and office VoIP
                      • Assorted experiments and goofs

                      Raspberry Pi 3A - Kodi in the living room, playing media from the NAS

                      Raspberry Pi 2A - Custom dashboard app showing server/network status and weather and stuff.

                      Raspberry Pi 1A - Running sprinklers_pi to control our lawn sprinklers.

                      Remaining Pain Points

                      Still getting a decent KeePass workflow established

                      Need to set up a VPN at home

                      Still don’t have Ampache sharing working. It should be easy for me to tell a friend, “listen to this song” or “this album”. Need to get a good beets workflow going to clean things up.

                      Need to pick a wiki for a family knowledge base.

                      Asterisk is crusty and no one is developing personal-scale VoIP managers, because everyone just uses cell phones these days.

                      Need more hard drives.

                      1. 4

                        Would you be willing to move off KeePass to Bitwarden? I did it myself a while back using bitwarden_rs. Super easy to host, and everything Just Works™. It would also allow groups for shared passwords within the family.
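                        If it helps: assuming Docker, standing it up was about one command when I did it (image name as in the project’s README at the time; host paths and port are examples):

                          docker run -d --name bitwarden \
                            -v /bw-data/:/data/ \
                            -p 8080:80 \
                            bitwardenrs/server:latest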

                        1. 2

                          What kind of KeePass workflow are you looking for? I have personal and shared key databases in KeePassXC, and share the shared ones with SyncThing – I assume NextCloud could do that level of file sharing for you. I’m very happy with it so far, but it’s also so trivial I suspect you’re looking for something beyond that, no?

                          1. 1

                            So, I have KeeWeb on OS X, and pair it with ChromeIPass in Chrome. I save my DB inside a NextCloud folder so that it is available on my other devices/platforms. I like it generally, but it always seems to be locked when I want to add something to it, so I have to type in a high-entropy password and select the key file, and by that time ChromeIPass has forgotten that I was trying to save a password and given up. So I log out and back in, and save the password “now that I’m ready”. It’s not as integrated or smooth in Chrome as the built-in password DB, so it’s easy to forget it, and I always have a sense of “but do I really have it saved?” on new additions.

                            I don’t actually have an Android app yet. What do people use there?

                          2. 1

                            Asterisk

                            Yep, personal-scale VoIP doesn’t quite make sense when most folks have unlimited calls and internet calling. I’ve only seen personal VoIP at my friend’s family’s place. He has parents living abroad, and it’s easier to deploy VoIP than to teach them how to use apps.

                            1. 1

                              How’s your experience with Kodi? My annual Plex sub has just renewed so I’ve got plenty of time to look up a replacement, but they’re adding a bunch of crap I don’t want, and don’t seem to be fixing things that annoy me so I’d like this to be my last year at that particular teat.

                              1. 4

                                Kodi, like Plex, has a garbage UI full of extremely frustrating UX quirks. But in my house it’s still my main way of consuming my library (with an NVIDIA Shield as the client). They also serve different audiences: Kodi is hard to serve externally and is mostly just a client, while Plex is good at remote, shared access and solves the server side.

                                1. 2

                                  It works well. It’s not a great UI. The family is more comfortable with the Roku UI, even though it is terrible and they complain about it; if something is on both, they’ll play the one on the Roku first every time. Searching is meh. Playlist building is straight-up user-surly. Indexing is hit-or-miss and needs a lot of intervention. Actual video playing is great. Playing music through it is not fun.

                                  1. 2

                                    I’ve looked into Plex alternatives as well. Emby was kind of interesting, but they recently closed some of the source, and then members of the community forked it. Going to wait and see how that shakes out.

                                    Universal Media Server (UMS) paired with Infuse (Apple TV/iOS) is kind of interesting – the main drawback is how large Infuse’s local storage usage gets, and how slowly it builds the first time. If only it pulled more metadata from the server at runtime. I tried pairing Infuse with Plex (a recently supported configuration), and it had the same issue with local storage size and slow initial build time. It’s unfortunate, because otherwise I found it fairly decent (UI/UX).

                                  2. 1

                                    What’s your experience been like with Chevereto? I’m in the market for something very much like it, and I see it mentioned a fair bit, but I don’t run any other PHP/MySQL things so I’m a bit wary.

                                    1. 2

                                      Minimal, honestly. I set it up and it runs nicely, but I haven’t really used it heavily.

                                    2. 1

                                      Nice list. Have you considered running a database in each jail instead of having a dedicated MySQL jail? I have been looking for a discussion of the pros and cons of both approaches.

                                      1. 2

                                        Yes. I mean, 15 years ago I had one server with no jails/containers/etc., and everything was just stacked in one big messy pile, and we all know what happens with that over time and upgrades. I moved that whole thing into its own jail, just to draw a line around the mess, and started a new server layout with pieces in separate jails. I love having stuff compartmentalized into its own container so that upgrades are never an issue. I never have to “upgrade PHP because upgrading MySQL upgraded gettext which now conflicts with, bah! no!” If anything, I am moving (carefully) towards further containerization right now. For instance, I’d like to have PHP in its own jail, separate from the web server, so that I can run several versions and just pick which socket to connect a site to in the config (see the sketch below). But as you guessed, it is a balance. I never want to get into installing simple Docker web apps that each install a web server and a DB server and duplicate each other in stupid and wasteful ways. On the other hand, for some things it is nice to have a self-contained “package”, where if something got busy enough to need its own server, I could just move it to bigger hardware in one shot.
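                                        For the PHP-in-its-own-jail idea: with each FPM pool listening on its own socket, pointing a vhost at a given PHP version is one handler line (Apache 2.4 mod_proxy_fcgi syntax; socket path hypothetical):

                                          # route this vhost's PHP to the chosen version's FPM socket
                                          <FilesMatch "\.php$">
                                              SetHandler "proxy:unix:/var/run/php72-site.sock|fcgi://localhost"
                                          </FilesMatch>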

                                    1. 1

                                      Already self-hosted:

                                      • A ZFS storage array for network storage, PC backup, VM storage, digital packratting, etc. Hosted on an OmniOS VM, exposed as NFS and SMB shares to the rest of the network (see the sharing sketch after this list).
                                      • Plex for movies and music (Kodi when internal)
                                      • Minecraft servers for playing with friends (Docker on Ubuntu)
                                      • Some scratch Windows VMs for messing
                                      • DNS server, network analytics, and other goodies in pfSense
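                                      As mentioned in the first item, the NFS/SMB exposure is native on the OmniOS side; sharing is a dataset property rather than separate daemon config (pool/dataset names made up):

                                        # illumos/OmniOS: enable sharing per dataset
                                        zfs set sharenfs=on tank/storage
                                        zfs set sharesmb=name=storage tank/storage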

                                      My self-hosting todo list:

                                      • VPN server
                                      • Star Trek GIF repository
                                      • Apache Guacamole for an easy-to-access farm of VMs for testing and development purposes
                                      • Dockerizing my Plex server
                                      • Personal document storage
                                      • Dropbox-like storage for my friends (Seafile?)
                                      • Mercurial code repos
                                      • My static websites

                                      And of course one of the joys of self-hosting is getting to play with lots of cool hardware you otherwise wouldn’t justify: https://imgur.com/gallery/eO1XDbH

                                      1. 2

                                        Using Mercurial is such a delight after having had to work with git every day. I never think about version control and all the arcane commands, I just do my work. I hope I never have to go back.

                                        1. 2

                                          When diving into the ZFS deep end, I evaluated FreeNAS and OmniOS+napp-it and settled on the latter. Solaris derivatives natively play nicely with Windows permissions, and napp-it was ultimately easier to configure despite being much less polished.

                                          I’m surprised more people don’t run this setup.