1. 3

    Finishing my script to help me upload 70K photos into Google Photos. Such a nightmare.

    1. 5

      An interesting take. Glad to see Linux is still an option, and really surprising that perceived performance between KDE and Gnome has flipped.

      • Surprising to hear that there isn’t a Google Drive client on Linux (as I recall there used to be one). Don’t many engineers at Google use “Goobuntu”? Perhaps they just don’t open-source the client for public use.
      • I know that Steam works on both; do you find that your OS dictates what games you play most, or no?
      • OP didn’t mention screen quality or eyesight issues; curious whether there is a noticeable difference between the two, as I suspect there would be.
      1. 9

        Goobuntu (Ubuntu) was replaced by gLinux (Debian) a couple of years ago for maintainability reasons. They’re functionally the same though.

        The machines that we develop on are chosen based on what we think gets the programming job done, not as an indication of the target platform.

        My guess is that the numbers were crunched and it was found that Linux users would not have made up enough share to warrant a client. I’ve never missed it; I do all my office work directly in the browser, and we have company-wide disk snapshotting for backup purposes. On my laptop (which isn’t snapshotted) I use rsync.

        1. 1

          Ahh interesting, thanks for the update.

          The machines that we develop on are chosen based on what we think gets the programming job done, not as an indication of the target platform.

          Of course, but I’d imagine that some engineers would want to have native document sync with GDrive. I also use GDrive, but honestly found the syncing annoying when the usage flow is nearly always New tab > drive.google.com > search doc. But certainly someone on gLinux wanted to keep it? :shrug:

          What exactly are you rsync’ing against?

          1. 4

            Laptop (not snapshotted) > Desktop (snapshotted)

            But yeah, we just use the web interface for all docs writing stuff. For documentation (not documents), we have an internal Markdown renderer (think GitHub wiki with internal integrations). No one writes documents outside of a centralized system, so there’s no need to back them up with a client.

        2. 6

          (I’m not OP) I recently started playing games on Linux via Steam. For reference, I’ve never been a Windows gamer – had been a console gamer up to that point. To answer your question:

          do you find that your OS dictates what games you play most, or no

          Pretty much. I play only what will work, so that means the game must either officially be supported under “Steam OS + Linux”, or work via Proton. But this is just me. Others are free to dual boot, which, of course, vastly broadens their spectrum of available games.

          1. 5

            I used to be a dual booter, but since Proton, so many games have been working on Linux that I stopped booting to Windows. Then at some point my Windows installation broke and I never bothered to fix it.

            1. 3

              That’s cool. However, I think we’re still a ways off from being fully on par with native Windows. Several anti-cheat systems are triggered by running under Linux, and ProtonDB shows that there are still many games that don’t run.

              That said, things are improving steadily month by month, so that’s encouraging.

              1. 2

                That’s true, I didn’t mean to imply that all games I would like to play work on Proton now. But enough of them work now that instead of dealing with Windows for a game that doesn’t work on Proton, I usually just go and find something else that does.

                If you have a group of gaming buddies, that obviously won’t work, though. It won’t be long before they get hooked on a Windows-only game.

            2. 2

              Same here; I find the biggest area where I need to switch back to Windows is multiplayer. I used to LAN a lot and still have many of those contacts. I find that a lot of games with host/client multiplayer, for example RTS games, have issues on Linux even if the single-player works flawlessly. This means I have to keep dual boot available.

              Even though Linux does strongly influence which games I play, the range and variety is amazing, and it is not reducing the quality or diversity of the games I play at all. There are just a few Windows-only titles that I might play slightly more if they were available on Linux.

              While we are on the subject, what are people’s recommendations for a gaming distro? I am on Mint at the moment which is good, but I like to have options.

              1. 1

                I don’t know if I’d call it a gaming distro, but I have been using Gentoo for many years, and it seems to be doing just fine with Steam (which I just installed a couple months ago).

                1. 1

                  Frankly, I’m not sure you need a gaming distro. I’ve had few issues running Steam and Wine (using Lutris) games on Void Linux, Debian, etc. (Mind you: always using Nvidia.)

                  1. 2

                    I actually phrased that really badly, thanks for the correction. I tried out a dedicated gaming distro and it was rubbish. Mint is a variant of Ubuntu. I was looking at Debian to try next.

                    It seems like the thing to look for is just something well supported with all the common libraries, so most big distros appear to be fine for gaming. The reason I am not entirely pleased with Mint is that they seem a bit too conservative in terms of adding new stuff to the package manager when it comes out. On the one hand that makes it more stable, but on the other games use a lot of weird stuff sometimes and it makes things a bit messy if you have to install things from outside the package manager.

              2. 4

                perceived performance between KDE and Gnome has flipped

                Gnome Shell is huge and slow. A Canonical engineer (Ubuntu has switched from Unity to Gnome) has recently started to improve its performance with very good results, but this also shows how terrible the performance was before: memory leaks, huge redraws all of the time, no clipping, … Now this needs to trickle down to users, and the comments might change then.

                PS: KDE has not gained much bloat or slowness over the years, and I don’t know whether Gnome will end up faster and lighter or whether both will be similar.

                1. 2

                  The lack of a Google Drive client is shameful, but I tried Insync and it’s the best money I’ve ever spent on a Linux app. Much better than the Mac version of Google Drive, which was super buggy.

                1. 6

                  I use a self-hosted instance of https://tt-rss.org/, and have been for several years, both with the standard web UI and the Android app. It’s fine. I really enjoy having my read history synced between my various devices. It’s not the most elegant UI and it has some quirks, especially in the web UI, but it gets the job done well enough. I’ve tried a few others, but haven’t come across anything that works quite as well.

                  1. 2

                    Also good luck if you wade into the official forums for support or a bug report.

                    I’ve also been using it for years because it simply works. I wanted to change servers and use the Docker container, but I postponed that because it was absolutely not working and I am not in the mood to argue with the maintainer. Not sure what I will do, but I use it together with NewsPlus on Android and don’t really want to change that setup. (That Android app hasn’t been updated for ages, but I bought it and will use it as long as it works, because I love it.)

                    1. 1

                      linuxserver.io had tt-rss as a container they supported but had to stop over the (reasonable?) changes asked of the repo maintainer. The forums seem to be rather hostile. I’ve taken to just cloning and building the image myself (which, IIRC, the maintainer argued is what everyone wants to do), but that is categorically the opposite of what I want to do. I want a trusted repository from which to pull a minimal image that is up to date.

                      Sad links of despair:

                      1. 1

                        Yes, I also skimmed or read all of those. Some changes were integrated after weeks of discussion but for some reason or other I couldn’t get it to work, just 2-3 weeks ago (could be my setup, sure).

                        1. 1

                          Ahh, if all you want is an image, feel free to use mine!

                          https://hub.docker.com/r/dalanmiller/tt-rss

                    2. 1

                      Same here. There is an official package in Arch Linux, I use that.

                      1. 1

                        I also self-host Tiny Tiny RSS. On iOS I use Fiery Feeds which has a much better UI.

                      1. 2

                        Can I piggy-back on this question to ask a related one? How do I determine if a flaky or intermittently slow connection is primarily caused by my ISP or my router? I know I have dead zones, but this sometimes affects devices that are 5 feet from the router.

                        1. 2

                          Needs more info. Is your router also your modem/switch? Are you using ADSL? Cable?

                          1. 1

                            Cable modem provided by my ISP, with my own router plugged into it.

                          2. 2

                            I would use Unifi’s WiFiman app to determine the Wi-Fi quality, in conjunction with some iperf tests between your client and your router/AP. That way you can separate your local network quality from your ISP.
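
                            If it helps, this is roughly how I’d script the iperf part (a sketch that shells out to iperf3; the router address is a placeholder and the JSON field names assume a recent iperf3, so adjust for your setup):

                            import json
                            import subprocess

                            # Assumes `iperf3 -s` is already running on the router/AP (or a box wired to it).
                            ROUTER = "192.168.1.1"  # placeholder; use your AP's LAN address

                            result = subprocess.run(
                                ["iperf3", "-c", ROUTER, "-t", "10", "-J"],  # -J asks for JSON output
                                capture_output=True, text=True, check=True,
                            )
                            report = json.loads(result.stdout)

                            # Field names below assume recent iperf3 JSON output; double-check on your version.
                            up = report["end"]["sum_sent"]["bits_per_second"] / 1e6
                            down = report["end"]["sum_received"]["bits_per_second"] / 1e6
                            print(f"LAN throughput to {ROUTER}: {up:.0f} Mbit/s up, {down:.0f} Mbit/s down")

                            If the client-to-AP numbers look healthy but browsing is still flaky, the problem is more likely upstream with the ISP.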

                            1. 1

                              Thanks.

                          1. 8
                            1. 2

                              I love pcengines, but I wish more people would use the APU4 instead (https://www.pcengines.ch/apu4b4.htm); it’s excellent and much better for my workloads.

                              1. 4

                                There is no APU4 yet ;)

                                The apu4b4 model belongs to the APU2 series. Here is a list of all models within that series:

                                • apu2d0 (2 GB DRAM, 2 i211AT NICs)
                                • apu2e2 (2 GB DRAM, 3 i211AT NICs)
                                • apu2e4 (4 GB DRAM, 3 i210AT NICs)
                                • apu3c2 (2 GB DRAM, 3 i211AT NICs, optimized for 3G/LTE modems)
                                • apu3c4 (4 GB DRAM, 3 i211AT NICs, optimized for 3G/LTE modems)
                                • apu4d2 (2 GB DRAM, 4 i211AT NICs)
                                • apu4d4 (4 GB DRAM, 4 i211AT NICs)
                                1. 2

                                  I got an APU2 before the APU4 was out. I don’t see any major differences besides an additional Ethernet port and SIM slot. I’m curious, what makes it so much better for you, and why does it matter what other people use?

                                  1. 3

                                    Oh, you are correct; I was thinking of the original APU, so this was my mistake. The APU and ALIX didn’t have AES-NI support and were really hard to get to saturate a gigabit link, which is what I was thinking of.

                                2. 1

                                  I posted about OpenWRT on the Netgear 7800 above, but I’m honestly thinking of switching to one of these. This is pretty dope. Thanks a bunch for sharing!

                                  Seems like maybe they’re releasing an apu3 soon? This page mentions it: https://pcengines.ch/spi1a.htm

                                  1. 1

                                    But how do you use this? Do you install OpenWRT on this and use it as your router?

                                    1. 2

                                      I’m using its predecessor ALIX with OpenBSD for

                                      • routing/firewalling between 3 subnets (LAN, WLAN, Uplink)
                                      • DHCP
                                      • DNS
                                    2. 1

                                      I see the APU2 mentioned a lot recently. What’s the big selling point for it? Would I use it instead of a Ubiquiti EdgeRouter X?

                                      1. 4

                                        One of the selling points for me is it being an amd64 machine and thus (probably) having better support in most OSes. Being designed by a Swiss company is also nice.

                                        1. 2

                                          Thanks. Is the use case what I think it is? Edge router/VPN endpoint/the things Raspberry Pis are used for, but with amd64 and coreboot?

                                    1. 2

                                      Company: Stripe

                                      Company site: https://stripe.com

                                      Position(s): Integration Engineer

                                      Location: Melbourne, Singapore, Tokyo | ONSITE

                                      Description: My team is looking for strong technical generalists who are comfortable in multiple programming languages, interested in working with our users, and delving into complex integration problems spanning time, currencies, and alternative payment methods. Most people who have heard of Stripe mainly think of us as a payments company, but our ambitions are much broader. We hope to increase global commerce by building financial infrastructure and tools to meet the needs of companies of all sizes anywhere in the world.

                                      Tech stack: Ruby, Python, many others.

                                      Contact: dalan@stripe.com

                                      🇸🇬 - https://stripe.com/jobs/listing/integration-engineer-singapore/2003347

                                      🇯🇵 - https://stripe.com/jobs/listing/customer-integration-eng/1962446

                                      🇦🇺 - https://stripe.com/jobs/listing/integration-engineer-melbourne/2003343

                                        1. 2

                                          I’m always sad I can’t find a pry or ipython REPL for my node programs, but I’ve realized why these don’t exist for JavaScript:

                                          In JS land, there’s no air for a console REPL to develop because Chrome devtools are so good, and can connect to your CLI app to debug and inspect.

                                          1. 1

                                            I would’ve argued that the new async features were woefully hard to debug in console, but I believe they’ve recently added the ability to natively await to the REPL without being inside an async function.

                                        1. 1

                                           Netlify makes it just too easy these days.

                                          1. 7

                                             I find pymotw much more succinct and helpful 90% of the time, since I need an integrative example and not just the individual parts. I need help understanding how they go together.

                                            Agreed that the Python documentation could improve in a variety of ways. We need:

                                             • An API-level reference that describes the stdlib in terms of things like arity, types, and what the methods/classes do and are for.

                                             • random.choice, for example, should link to what seq is. Overall, this method pretty much fits the bill.

                                            • A set of examples of how to use the specific methods that are found in a given package.

                                             • This could show something like:

                                             import random
                                             random.choice([1, 2, 3])  # e.g. 2 (picked at random)
                                            
                                            • I think each method should have an example, no matter how simple, full stop.

                                             • A higher-level set of narrative examples at the package level that show how to use it to solve problems. Sometimes these are called examples and recipes.

                                             • This should be further expanded, and should solicit examples, similar to what this is trying to do: https://docs.python.org/3/library/random.html#examples-and-recipes

                                             • A package that I originally found incredibly unhelpful was asyncio. It is notoriously hard to grok without these kinds of examples (see the sketch just below this list). I’m not sure if things have improved since I first looked a couple of years ago.
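
                                             For what it’s worth, this is the kind of minimal, self-contained example I’d want the asyncio docs to open with (just a sketch of mine, not anything from the official docs):

                                             import asyncio

                                             async def fetch(name: str, delay: float) -> str:
                                                 # Pretend this is a network call; asyncio.sleep stands in for real I/O.
                                                 await asyncio.sleep(delay)
                                                 return f"{name} finished after {delay}s"

                                             async def main() -> None:
                                                 # Run both "requests" concurrently and collect the results in order.
                                                 results = await asyncio.gather(fetch("a", 1.0), fetch("b", 0.5))
                                                 for line in results:
                                                     print(line)

                                             asyncio.run(main())  # Python 3.7+

                                             Even a toy like that, right at the top of the page, would have saved me a lot of head-scratching.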

                                            I will no doubt piss off quite a few people with this statement, but the community around Python is one of the most hostile and unhelpful communities around any programming-related topic that I have ever seen - and with that I am not just referring to #python on Freenode, but to communities with a dense population of Python developers in general. This point actually consists of several separate attitudes and issues.

                                             I also cannot disagree with this more. The Python community has been fantastic in my experience. After nearly a decade, and having formerly been a Developer Evangelist who attended dozens and dozens of conferences, I still find PyCon my favorite conference for the things I’m able to pick up and learn, the culture, and the inclusivity.

                                            1. 2

                                               FYI your link to pymotw is “lhttps://pymotw.com/3/” (note the ‘l’ preceding ‘https’).

                                              1. 1

                                                Thank you! Fixed.

                                            1. 3

                                               This article is really interesting, but it’s disappointing that all the links are behind the CERN auth wall. I was curious to see how this project has progressed and to get future updates.

                                              1. 2

                                                Not many people are talking about migrating to free software anymore. It seems like there’s now the open source everyone takes for granted (Linux servers, libraries) and open source no one bothers with until the proverbial boiling frog is well cooked, like with CERN’s 10x price increase.

                                              1. 24

                                                Nice try bossman.

                                                1. 6

                                                  I love this website, thank you @pushcx! 🙇‍♂️

                                                  1. 1

                                                    I found this to be particularly concerning:

                                                    While I was sleeping, early morning May 4th, my phone began to ring incessantly for minutes with notifications. It was an employee’s phone sending messages via Signal, for several minutes. I called him to ask what was going on. He said he hadn’t touched his phone. I asked him to turn off his phone, but the messages kept coming. At the same time, another employee called me to tell me he was getting those messages too and wanted to know what was happening. To be clear, I don’t think in any way somebody exploited a vulnerability of Signal, but it was used. In the years I have been using Signal, this never happened to me nor do I know of similar cases, but I have talked with security specialists that pointed me to how this might have happened.

                                                     How can this be achieved with Signal? Does this imply the woman or the woman’s boyfriend has insider knowledge of Signal to achieve this?

                                                     It’d be good to get clarity on what “It was an employee’s phone sending messages via Signal” actually means. As in, was the OP receiving messages that appeared to be coming from someone who wasn’t actually sending them?

                                                    1. 1

                                                       He has already found a Signal remote-exec bug in the past. Anyway, I don’t think a Signal bug was exploited in any way.

                                                      I sent you a private message.

                                                    1. 1

                                                      Company: Stripe

                                                      Company site: https://stripe.com/ || https://stripe.com/jobs/search

                                                      Full Time Positions:

                                                      Locations:

                                                      • Customer Engineer Manager APAC: Singapore
                                                      • Integration Engineer: Singapore, Remote North America, San Francisco, New York
                                                      • Security Ecosystem Analyst: San Francisco

                                                      Descriptions:

                                                      Integration Engineer

                                                      • Confident and comfortable with customers. We’re expecting to see user facing roles in your past or present.
                                                      • A strong technical generalist. Many of us were engineers in prior jobs.
                                                      • Comfortable with code-level debugging (Stripe code and user code)
                                                      • Empathetic, collaborative, communicative, consultative
                                                      • Intellectually curious, with great problem solving skills

                                                      Security Ecosystem Analyst

                                                       • Have hands-on experience evaluating, implementing, and managing information management, asset management, data classification, and vulnerability resolution tooling
                                                      • Have experience managing and conducting audit readiness assessments within AWS (or similar) cloud security and infrastructure
                                                      • Are an expert with assessing the configuration and implementation of security tools, related to network security, endpoint security, encryption technology, vulnerability scans, access controls, etc.
                                                      • Have experience with PCI and SOC compliance programs as well as their technical and security requirements
                                                      • Have experience in security standards such as ISO 27001, 27002, 27005; NIST, COBIT, ITIL
                                                       • Are well versed in conducting technical and information security activities, i.e. security education, document and material classification and control, and records management. This includes overseeing the company’s (Stripe’s) security awareness program, including security assessment and ongoing education

                                                      contact: dalan+lobsters-q2y19@stripe.com

                                                      1. 4

                                                        Telegram and WhatsApp for people I know in person (and Telegram channels/bots/messages-to-self/etc), Matrix (self-hosted Synapse instance) as a glorified IRC bouncer. (Actually there are some Matrix-native channels too.)

                                                        Fractal as the desktop Matrix client, Riot on Android.

                                                        1. 1

                                                          What are your thoughts on using Matrix for personal/family?

                                                          1. 2

                                                             Not OP, but I tried this a year or so ago. The UX issues related to E2E made it impossible. They should make E2E non-optional; that should give enough momentum to fix it properly.

                                                             I’ve also tried and abandoned XMPP, Wire, WhatsApp, Signal, and probably some others. We used IRC for quite a long time, but the modern messengers steamrolled over it.

                                                             I compromised on Telegram. I know they don’t do E2E, but I trust them enough for now. If something happens to break that, I’ll probably try to move everyone to XMPP or Matrix (if it’s usable by then).

                                                            1. 1

                                                               Out of curiosity, did you use riot-android or riot-ios?

                                                              1. 1

                                                                Our households had both. The mobile apps were actually somewhat better than the Electron desktop application.

                                                            2. 2

                                                              I use Matrix with my partner, but the e2e user experience is really disappointing at the moment.

                                                              I found multiple device support in Signal annoying also.

                                                              1. 1

                                                                 I don’t want to put all my eggs in one basket, and I don’t want them to deal with the reliability issues of a 0.x product (and, even more, the reliability issues of me hosting my own server :D)…

                                                            1. 6

                                                               Hey @wezm, excellent write-up and superbly timed, as I’m about to go about building my own home infra on an Intel NUC after reading articles from Jessie Frazelle and Carolyn Van Slyck. I especially appreciated your thoughts on rejecting certbot, which I’ve always just blindly used, for being ‘too magical’. We should all take that approach more often, I think.

                                                              A couple things that you didn’t explain:

                                                              • Why you chose hitch as your reverse proxy over say Traefik or Caddy?
                                                              • Did you consider writing a separate docker-compose file per service? Why did you end up going with a ‘monolith’ docker-compose file?
                                                              • As well, did you consider creating a postgres instance per service?
                                                              • Why LuaDNS over say CloudFlare? While having DNS history via your git history seems like a good pro, having to do a commit and push to make changes seems a bit unwieldy, no?

                                                              Lastly, we should grab coffee sometime in MEL!

                                                              1. 5

                                                                Thanks for reading!

                                                                Why you chose hitch as your reverse proxy over say Traefik or Caddy?

                                                                 Mostly momentum and personal preference. I was already using Varnish on the old infrastructure and I like its “Swiss Army knife of HTTP” nature. Hitch is by the Varnish folks and is their recommended way to add TLS support.

                                                                Did you consider writing a separate docker-compose file per service? Why did you end up going with a ‘monolith’ docker-compose file?

                                                                 No, I never really considered a docker-compose file per service. I guess my thinking all along was that this docker-compose file would describe all the services on the server. I think that probably makes expressing the dependencies easier (e.g. Varnish depends on all the downstream services), but that’s just a guess.

                                                                As well, did you consider creating a postgres instance per service?

                                                                As you can probably tell from the article, I kind of like compactness and efficiency (although this is not a strict requirement). The idea of running three database servers to do what one can easily handle never really crossed my mind.

                                                                Why LuaDNS over say CloudFlare? While having DNS history via your git history seems like a good pro, having to do a commit and push to make changes seems a bit unwieldy, no?

                                                                I like the idea of having the config in text files that I can edit in my editor of choice instead of having to log into an admin UI and click around. Additionally LuaDNS makes it easy to share chunks of records with templates. So I only had to write the fastmail stuff once, then just include it in the other domains that need the same records.

                                                                 I was using Cloudflare before the move for linkedlist.org. They are OK, but they’re also just another tech giant, gaining more control over the internet. Basically I’m a sucker for the underdog.

                                                                Lastly, we should grab coffee sometime in MEL!

                                                                Sure!

                                                              1. 3

                                                                 I agree with this post so much. I think this will be a big difference between the millennial generation and whatever generation comes next. I predict there will be strong resistance to companies being allowed, by default, to vacuum up all user data into their analytics. In the future we will be asked why we were so complacent in allowing this.

                                                                1. 4

                                                                   I hope that’s the case. I fear the next generation will be even more reliant on, and compliant with, big corporate giants like Google, Facebook, Amazon, Microsoft, etc…

                                                                1. 4

                                                                   Gonna expand my NAS. I’ve got a fancy new rack case (the first rack case I’ve owned) and about half of the drive bays are wired up at this point. And the previous NAS needs to be emptied out.

                                                                   Otherwise I’m working on various projects, mostly a personal image board (linear booru style) and my kernel.

                                                                  I think that’s a pretty relaxed weekend plan.

                                                                  1. 2

                                                                    Niiiiice. What drives are you going to use? I’m about full on my Synology ;(

                                                                    1. 1

                                                                       It’s a mix of everything; most of them are drives from the previous NAS that I’ve recycled a few times by now. I also have two helium drives in it that provide the bulk of the capacity; they’re very nice to hold in your hands.

                                                                      The new drives are all WD Reds, the Helium ones are HGST and the oldest are Samsung HD204’s.

                                                                  1. 3

                                                                     I attempted to quickly create a fork with the proper logic to grab private photos; however, it appears that you need to go through the annoying OAuth process to grant access to your account to your own Flickr app to request private photos.

                                                                     I don’t understand this complaint. Annoying or not, that’s the way it should work. It’s really not too hard to set up, either.

                                                                    1. 3

                                                                       You’re totally right that it’s not hard, but you have to admit it is annoying to implement an OAuth flow for the Nth time. This would also be different if they had higher-quality language bindings that helped you do this, versus community-supported ones where I’m afraid my API key is going to be leaked to someone else.

                                                                       It’s not so much a complaint about setting up an OAuth process for oneself as about the fact that Flickr doesn’t provide an API key for one’s own account access. I’m not building an app; I just want programmatic access to my own data. Something that many other APIs/services provide today.
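
                                                                       For anyone curious, the ceremony looks roughly like this even when the only account involved is your own (a sketch using requests_oauthlib; the endpoint URLs and the flickr.people.getPhotos parameters are from memory, so treat them as assumptions to verify):

                                                                       from requests_oauthlib import OAuth1Session

                                                                       # App credentials from Flickr's developer console (placeholders).
                                                                       API_KEY, API_SECRET = "your-key", "your-secret"
                                                                       AUTH_BASE = "https://www.flickr.com/services/oauth"

                                                                       # 1. Get a request token, then go approve access to your own account.
                                                                       oauth = OAuth1Session(API_KEY, client_secret=API_SECRET, callback_uri="oob")
                                                                       oauth.fetch_request_token(AUTH_BASE + "/request_token")
                                                                       print("Approve your own app at:", oauth.authorization_url(AUTH_BASE + "/authorize"))

                                                                       # 2. Paste the verifier code Flickr shows you after approving.
                                                                       verifier = input("Verifier code: ").strip()
                                                                       oauth.fetch_access_token(AUTH_BASE + "/access_token", verifier=verifier)

                                                                       # 3. Only now can you ask for your own private photos.
                                                                       resp = oauth.get(
                                                                           "https://api.flickr.com/services/rest",
                                                                           params={
                                                                               "method": "flickr.people.getPhotos",
                                                                               "user_id": "me",        # the calling user
                                                                               "privacy_filter": 5,    # private-only
                                                                               "format": "json",
                                                                               "nojsoncallback": 1,
                                                                           },
                                                                       )
                                                                       print(resp.json())

                                                                       None of it is hard; it’s just a lot of hoops for “let me list my own photos”.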

                                                                      1. 2

                                                                         Many years ago I used the Perl Flickr module to write my own downloader. It was a pretty involved process, documented obscurely by Flickr, but it was possible. It allows me to download all my pics and metadata, even the private ones.

                                                                         Something that many other APIs/services provide today.

                                                                         Flickr, IIRC, was one of the first mainstream services that allowed API access. I agree they could have kept it up to date and streamlined it, but there’s been a lot of churn at Flickr with regard to ownership, and I don’t think it’s been a priority.

                                                                        Right now the biggest image sharing site (Instagram) has no user-accessible API and is actively hunting down and closing 3rd-party apps that allow stuff like automated uploading.

                                                                    1. 3

                                                                       I’m most surprised by how many people here use Signal. Early on it was just such a horrible user experience compared to the other chat apps. It seems to have been getting better recently, though.

                                                                       One thing that came to mind was that it was impossible to delete old media that had been sent to you en masse, nor was it losslessly compressed for storage. Videos ate up most of my storage on older phones.