1. 6

    It’s not DNS -> it’s never DNS -> it can’t be DNS -> it’s DNS

    1. 5

      Huh, in my experience it’s almost always DNS.

      1. 2

        what do you mean by this? I’m curious because i hear “it’s always DNS” a lot and that’s not my experience at all – I’ve found that some networking issues are caused by DNS and some aren’t.

        It does feel like weird/inexplicable issues are disproportionately likely to be caused by DNS though, because non-DNS problems are more straightforward to diagnose.

        1. 1

          It’s not DNS -> it’s never DNS -> it can’t be DNS -> it’s DNS

          This was possibly a reference to https://www.cyberciti.biz/humour/a-haiku-about-dns/

          It’s not DNS

          There’s no way it’s DNS

          It was DNS

      1. 1

        My top two were oxygen-mono and fira-mono, but both of these look pretty spaced out when I use them in the terminal. I much prefer Hack-mono over both, in both my terminal and code windows.

        This as an iTerm / IntelliJ plugin would be super neat.

        1. 3

          I run my own mail server and use IMAP, so the backup system backing up my server also backs up my email. It has worked fine for the past fifteen years.

          If you don’t trust your email provider, I don’t think backing up and then deleting is a good approach, as you have no clue what the provider actually does on deletion. And the failure might be basic stuff: maybe they keep backups, maybe those backups are faulty, maybe they get hacked, maybe some rogue employee does something not so good.

          Without wanting to get too off-topic: with a lot of “secure” email providers (and cloud infrastructure at large), people base their company’s security on marketing claims. I think it might make sense to reduce that risk.

          Now I am very aware that a lot of people don’t want to run their own email server, but then something like E2E encryption would make sense. I wonder if a good approach, short of running a full email server, could be to have some basic image that just forwards incoming emails in encrypted form, so that storage and sending could still be handled by the provider.

          1. 4

            Without wanting to get too off-topic: with a lot of “secure” email providers (and cloud infrastructure at large), people base their company’s security on marketing claims. I think it might make sense to reduce that risk.

            Note that addressing this is the goal of Confidential Cloud Computing. The hardware removes the hypervisor from the TCB for confidentiality and integrity (but not for availability). Any page assigned to the VM is encrypted and its contents cannot be accessed by the host. The initial boot memory contents are attested by the hardware (or by some attested software), so you can check that you’re booting the kernel you expect before you provide it with the decryption keys for the disk images that it has attached. If you use something like dm-verity, then you can also validate that the cloud provider is not tampering with the disk contents (unfortunately, there isn’t yet a read-write dm-verity, so you have to fall back to something like dm-integrity, which is vulnerable to replay attacks).

            An email provider building an offering on these technologies would be able to give you verifiable guarantees that they couldn’t see the contents of your email (they could probably leak some of it via side channels, but that would require a targeted attack, in contrast to the situation today where a malicious actor in the company could just copy all of everyone’s email). Endpoint security becomes very important in that case because the easiest way of stealing all of your email is to compromise one of your client devices.

            1. 3

              Endpoint security becomes very important in that case because the easiest way of stealing all of your email is to compromise one of your client devices.

              You’re exactly right, of course, but I always chuckle a little bit when I read something like this. It seems to say “Now, in order to be secure, all we need to do perfectly is the exact thing that we’re the very worst at.”

              It’s certainly a worthy improvement over the current cloud situation, to be clear. But I think there’s an argument to be made that it is easier for an organization to run their own server and have dumber clients, and I wouldn’t be terribly surprised to see experiments in this direction.

              1. 4

                You’re exactly right, of course, but I always chuckle a little bit when I read something like this. It seems to say “Now, in order to be secure, all we need to do perfectly is the exact thing that we’re the very worst at.”

                Sad, but true.

                It’s certainly a worthy improvement over the current cloud situation, to be clear. But I think there’s an argument to be made that it is easier for an organization to run their own server and have dumber clients, and I wouldn’t be terribly surprised to see experiments in this direction.

                Even with a dumber client, it’s not clear you get an advantage. If the dumb client can list emails and download them (or even view them), then a compromised client can do the same.

                This is what makes me incredibly nervous about webmail systems. When I open an HTML email in Apple’s Mail.app, it uses WebKit2’s sandboxing support to decode the HTML and generate the display in a separate, unprivileged, process. If there’s a vulnerability in WebKit then a random person who sent me an email can compromise an unprivileged process and hopefully not get any further. They need to also find a kernel vulnerability that they can exploit to be able to make a network request, for example. In contrast, if I read an email in a WebMail client in Safari on the same system, the entire web app runs in the same renderer process. A malicious email that exploits a vulnerability in the same WebKit that Mail.app uses can now do anything that I can do in that web app, including downloading all my mail to the browser to scan for sensitive messages, forward mail to other people, and so on.

                1. 1

                  Mail.app

                  I wonder if there’s an easy way to configure Mail.app to run against an offline store (mbox / maildir). Perhaps the right way is to run a local IMAP server over the offline store?

                  1. 3

                    Dovecot can work as this kind of proxy quite easily - you can configure it to use a local mail store without accepting delivery, and also to pull from a remote IMAP server if you want to be able to collect mail from there.

            2. 1

              I think this might be where I eventually land. Cloud based mail server, synced locally, encryption at rest, encrypted cloud backup. The big part of this for me is the part where I don’t want a bunch of emails sitting on a box in the cloud for a long period of time in an unencrypted form.

              Yeah, the ingress insecurity is a problem, but then don’t most email servers use TLS for security in transit?

            1. 7

              I don’t really archive anymore. It’s all just text, so it’s not that much data, even after 15 years. As I self-host, it’s just a Maildir with old and new mail. Backing that up with deduplication is painless.

              1. 4

                The worry that I have is not so much the cost of the storage; it’s the fact that a compromise of any mail client gives access to all of that mail. I’ve not seen any support in mail servers for spotting unusual traffic patterns and requiring 2FA to reactivate a per-client key, for example.

                1. 1

                  Fair point, but it’s a tradeoff I am willing to make.

                  1. 1

                    This is part of my concern too.

                  2. 2

                    I kind of blend the two: 1) I aggressively delete emails, and 2) I archive only what I feel would be an incredible loss. For example, I don’t save any emails that are just manifestations of some service’s history, such as a receipt for Apple Card payments or other statement emails. With this approach I end up with fewer than 100 emails a year, but I do have to trust that these external services will keep the data easily available.

                  1. 50

                    This is a RIIR (Rewrite It In Rust) where I don’t see why you’d do it. The author seems to be serious about this, trying to add multiple sites. In my experience I always delegate the youtube & co crawling stuff to youtube-dl, because it has the biggest momentum and thus the best & fastest support for all those individually changing websites, something you need crowdsourcing and a lot of maintainer time for.

                    1. 17

                      Yeah, the biggest value in youtube-dl is not in the git repo, it’s the community of people reverse-engineering websites and providing timely updates. Rewriting the engine in Rust is a simple matter of programming, but rebuilding that community might very well be impossible.

                      1. 1

                        can’t blame em for trying! or can you…

                      2. 11

                        Agree completely that it’s not necessarily “needed” by the community, but maybe the author simply finds the problem interesting. I’ve reimplemented stuff needlessly before. Not so seriously to open any of it up to outside contribution, but maybe the author just feels like doing that too.

                        1. 7

                          I would be tempted to use it just because it doesn’t rely on python as a dependency and ships as a single executable.

                          1. 3

                            fair enough, I also would like some changes to the “API” yt-dl has, but it’s definitely easier for everybody to hack around in when it’s written in python

                            Edit: I just realized this project specifically doesn’t want to add any format/quality functions, so it’s kinda useless for most things

                            1. 2

                              I’d guess that being built with python is one of the reasons for the popularity of ytdl

                              1. 1

                                I can see that being the case - easy to write a new handler for a domain.

                            2. 3

                              To elaborate: I’ve got a yt-dl service (yes, in rust) that has been running on my servers since 2015, on kinda the same code as when it started. The only thing done daily in the background is updating youtube-dl; everything else just stays the same. But without that it won’t work.

                              1. 3

                                One possible reason it might be good to have is if the main youtube-dl project gets purged from the internet by copyright-holding corporations. It happened once, it could happen again. This less famous alternative, or simply a different project run by different people, might be available in times and places where youtube-dl is not.

                                Obviously it would be better if youtube-dl were to be generally available, and we should do what we can to make that happen, but some redundancy may be useful and wise.

                                1. 7

                                  youtube-dl was never really purged. afaik it was always accessible in the repos of major distros, though the development infrastructure was all but gone.

                                  if you are thinking of a more comprehensive attack by a group of tech giants or the government, it seems more likely that youtube will eventually not support methods of access that allow downloading files, rendering alternatives equally useless. I could be wrong though.

                                2. 1

                                  I started yaydl when youtube-dl had the massive disadvantage of having been taken offline. I chose Rust over Python for three reasons:

                                  1. I can’t stand Python, neither its syntax nor its behavior when updating from 3.x to 3.y.
                                  2. I needed an excuse to try Rust. (I try to know as many languages as possible - you’ll never know when they could be useful.)
                                  3. I wanted to make it as easy as possible to add new sites, and Rust’s traits seem to be an awesome way to do that.

                                  I tried to read youtube-dl’s source code and I could not figure out how to add new sites there easily, so that’s that… :-)
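
                                  For anyone curious what the trait approach looks like in practice, here is a rough, hypothetical sketch of a per-site extractor trait; the names (SiteDefinition, find_video_url, ExampleTube) are illustrative and not taken from yaydl’s actual source:

                                  ```rust
                                  // Hypothetical sketch of a trait-per-site extractor design; not yaydl's real API.
                                  trait SiteDefinition {
                                      /// Does this handler recognize the given URL?
                                      fn can_handle(&self, url: &str) -> bool;
                                      /// Resolve the page URL to a direct media URL (real code would fetch and parse the page).
                                      fn find_video_url(&self, url: &str) -> Result<String, String>;
                                  }

                                  struct ExampleTube;

                                  impl SiteDefinition for ExampleTube {
                                      fn can_handle(&self, url: &str) -> bool {
                                          url.contains("exampletube.example")
                                      }
                                      fn find_video_url(&self, url: &str) -> Result<String, String> {
                                          Ok(format!("{url}/video.mp4"))
                                      }
                                  }

                                  fn main() {
                                      // Adding a new site is just one more Box<dyn SiteDefinition> in this list.
                                      let handlers: Vec<Box<dyn SiteDefinition>> = vec![Box::new(ExampleTube)];
                                      let url = "https://exampletube.example/watch/123";
                                      match handlers.iter().find(|h| h.can_handle(url)) {
                                          Some(h) => println!("direct URL: {:?}", h.find_video_url(url)),
                                          None => eprintln!("no handler for {url}"),
                                      }
                                  }
                                  ```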

                                  1. 1

                                    I needed an excuse to try Rust.

                                    To be fair, that’s how my yayd project started, but since it’s just been running since forever I don’t think it was a bad choice.

                                    If I had a wish: please add a format flag. 99% of my users want to download an mp3 of podcasts and playlists containing them (or songs). I’m currently using a hack of retrieving the filename, then the audio, then the video to get live progress from youtube-dl & ffmpeg, which also means I’m doing the conversion manually. (And ffmpeg wants its own line endings.) Especially for audio you may have to use a different format than the “best” one, as youtube has some better audio codecs in the others.

                                    Definitely good luck with your project, and if it works I’d be glad to adopt it so I can drop all these std pipes. I think nobody here wants you to stop doing your hobby projects; it’s just a warning (at least from me) that yt-dl is a lot of work, and creating a half-baked effort due to limited resources can leave a bad taste behind.

                                    tried to read youtube-dl’s source code

                                    To be fair it’s kinda bad, especially for the youtube section
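
                                    For what it’s worth, here is a rough sketch of the kind of wrapper described above, shelling out to youtube-dl and letting it drive ffmpeg for the mp3 conversion instead of piping the streams manually; the flags are standard youtube-dl options, but the wrapper itself is hypothetical, not yayd’s actual code:

                                    ```rust
                                    use std::io::{BufRead, BufReader};
                                    use std::process::{Command, Stdio};

                                    // Hypothetical wrapper: youtube-dl picks the best audio stream and calls
                                    // ffmpeg itself for the mp3 conversion; we just stream its progress lines.
                                    fn download_as_mp3(url: &str) -> std::io::Result<()> {
                                        let mut child = Command::new("youtube-dl")
                                            .args([
                                                "-f", "bestaudio",       // prefer the better audio-only formats
                                                "-x",                    // extract audio
                                                "--audio-format", "mp3", // let youtube-dl run ffmpeg
                                                "--newline",             // one progress update per line
                                                "-o", "%(title)s.%(ext)s",
                                                url,
                                            ])
                                            .stdout(Stdio::piped())
                                            .spawn()?;

                                        // Forward progress lines to the caller (or a log, a websocket, ...).
                                        if let Some(out) = child.stdout.take() {
                                            for line in BufReader::new(out).lines() {
                                                println!("{}", line?);
                                            }
                                        }
                                        child.wait()?;
                                        Ok(())
                                    }

                                    fn main() -> std::io::Result<()> {
                                        download_as_mp3("https://example.com/some-video")
                                    }
                                    ```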

                                    1. 1

                                      Hmm. I understand your wish, but I really don’t want that. There already are so many tools which do that. However, as yaydl calls FFMPEG anyway (if you tell it to), it could be easy to add that.

                                      I doubt that this will be considered before 1.0 though.

                                      Yes, my project needs manpower. I noticed that when fixing YouTube support recently. I was hoping that Rust would be popular enough for this to change. It does not, I guess.

                                      1. 1

                                        It does not, I guess.

                                        Well, there are many people working on different things; I’d say if you combine the overall number of people involved in all the different projects (linux, OS, embedded, async, crypto, gamedev, hypervisor, databases, web, cs…) there are a ton of contributors. But youtube-dl is kind of a working solution where you don’t really gain anything from adding rust (security, performance…). And it’s probably also the next addition to my list above.

                                        1. 1

                                          The one thing that Rust adds for me is that the binary will survive a Python upgrade. :-) Also, slightly better performance is a feature in my opinion, even if it only affects the general startup time.

                                          Yes, youtube-dl is still a fine piece of software, but it has a very different intention from yaydl, which aims to be as simple as possible…

                                          1. 1

                                            will survive a Python upgrade

                                            Is a python upgrade that bad? I’m just using the debian python3 and haven’t had any problems for my tiny stuff.

                                            1. 1

                                              I don’t use Linux. On most systems I own which run Python, every upgrade breaks one or more pip modules; I’d have to reinstall them, and this is something I do not want. (I know about virtualenv, but in my opinion, a language that requires a virtualenv to keep its dependencies from falling flat on their - or your - face is broken by design.)

                                              I cannot say for sure that I am not just too dumb for Python. :-) But Rust does not have these problems at all.

                                              1. 2

                                                tbf I also tried using virtualenv and gave up eventually

                                  1. 6

                                    Cool idea - but it seems like too much tech, not enough customer. Seems like there would be a huge onboarding cost in time and effort to understand it.

                                    1. 1

                                      This is a pretty cool idea for a site. I can see how it would be fairly useful as a catch-all for node/python/homebrew/go/eclipse/gcc/etc…

                                      1. 1

                                        Thanks! The blog format of most sites doesn’t suit me for writing guides or tutorials about technology. I prefer in-depth structured articles. Dev.to, Hackernoon, Hashnode have a place but I’m trying to make a better site for “evergreen” long-lasting articles. You’re right, mac.install.guide could be expanded beyond Ruby to other topics. Want to contribute? How can I contact you? I’m daniel@danielkehoe.com via email. Or PM me here.

                                      1. 2

                                        I lived in an apartment building that had fiber to the basement, and then Ethernet delivered to the apartments. From time to time, Facebook and some other sites would stop working. Sometimes it was CDNs that were blocked, sometimes it was DNS servers like 8.8.8.8. Basically intermittent random packet loss on specific IP addresses. Cutting 1.5 months of diagnosis short, I learnt to use wireshark and led the ISP tech through the issue to confirm that it was a bug in the router’s pppoe stack / firewall, which they had to rely on the manufacturer to fix. The reward was a working internet connection (useful when working from home). No idea what the exact issue was to this day.

                                        1. 6

                                          This has absolutely been my experience doing web stuff… but also doing Python. Do you want to use virtualenv, pipenv, poetry, or…?

                                          1. 8

                                            Yeah it’s like the article says, instead of trying to tackle complexities we just hide them in more and more layers of tooling every year. It feels like there’s little innovation in problem solving, it’s all problem management.

                                            1. 4

                                              instead of trying to tackle complexities we just hide them in more and more layers of tooling every year.

                                              Hah, that complaint is the source of a big controversy from back in 2008, when Jonathan Blow said what you just said, criticizing Linux for doing exactly that.

                                              Specifically, his complaint was that X handled mouse input poorly (if you moved the mouse all the way outside of the window within a single frame, the mouse delta would be capped to the window width/height instead of representing the true distance), and people responded “why are you using X anyway, just use SDL”, and his response was “because SDL has the exact same problem, as SDL is just a wrapper around the X function anyway”.

                                              1. 1

                                                Last month, I had to install a package manager to install a package manager. That’s when I closed my laptop and slowly backed away from it.

                                                Perhaps the answer is complexity layering limitations?

                                                1. 1

                                                  complexity limitations? how does that work?

                                            1. 4

                                              Edited: I judged too harshly. Retracted my previous criticism.

                                              1. 6

                                                It does look like an ad for teleport, but I would not reduce it to “just”.

                                                1. 4

                                                  how so? i found it a very good general discussion of the ux issues around exposing a terminal in a web browser.

                                                  1. 3

                                                    On the contrary, I was just about to comment that this post is a really good example of a company having the courage to talk about the internal decisions made on a product.

                                                  1. 4

                                                    Interesting that it is perceived that the alternative to Event Sourcing is “CRUD”. Is this the common viewpoint of programmers these days?

                                                    If we need to build a complex international payroll system, or a large bank lending system, or a nationwide new insurance quotation system, since when are the architectural choices for the implementation of functional requirements Event Sourcing or … CRUD?

                                                    1. 2

                                                        While not specifically CRUD, I often see actions on data as the predominant development approach for many developers. This leads to events often being treated later as a second-order effect (an audit of change) rather than the key function of the system.

                                                      1. 2

                                                        What’s the third alternative then?

                                                        1. 1

                                                          Well what are the more typical choices for representing concepts, logic and constraints in a complex problem domain? CRUD is orthogonal to all of them …

                                                          • object domain model
                                                          • transaction scripts in a service layer (managers/services)
                                                          • database entity (table) as an object
                                                          • controller-entity style (per Fowler P of EAA)
                                                          • information model (database) and state-machines transformed with dataflows
                                                          • entities as change records
                                                          1. 1

                                                            I’m not sure you’ve read the article…

                                                            The post’s point is that while it’s generally seen as much easier to use CRUD for small systems, and reserve event sourcing for bigger, more complex systems, one can actually start using parts of event sourcing much earlier in scope.

                                                            If you’re planning on creating “ a complex international payroll system, or a large bank lending system, or a nationwide new insurance quotation system”, plain CRUD is not the right choice, but the article doesn’t state that.
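
                                                              To make the contrast concrete, here is a minimal, hypothetical sketch of the event-sourced end of that spectrum: instead of updating a row in place (CRUD), you append events and fold them into the current state. It illustrates the general pattern only, not anything from the article:

                                                              ```rust
                                                              // Minimal event-sourcing sketch: state is derived by folding an append-only
                                                              // log of events, rather than mutated in place as in CRUD. Purely illustrative.
                                                              #[derive(Debug)]
                                                              enum AccountEvent {
                                                                  Opened { owner: String },
                                                                  Deposited { cents: i64 },
                                                                  Withdrew { cents: i64 },
                                                              }

                                                              #[derive(Debug, Default)]
                                                              struct Account {
                                                                  owner: String,
                                                                  balance_cents: i64,
                                                              }

                                                              fn apply(mut state: Account, event: &AccountEvent) -> Account {
                                                                  match event {
                                                                      AccountEvent::Opened { owner } => state.owner = owner.clone(),
                                                                      AccountEvent::Deposited { cents } => state.balance_cents += cents,
                                                                      AccountEvent::Withdrew { cents } => state.balance_cents -= cents,
                                                                  }
                                                                  state
                                                              }

                                                              fn main() {
                                                                  // The event log is the source of truth; the "current row" is a projection.
                                                                  let log = vec![
                                                                      AccountEvent::Opened { owner: "alice".into() },
                                                                      AccountEvent::Deposited { cents: 10_00 },
                                                                      AccountEvent::Withdrew { cents: 3_50 },
                                                                  ];
                                                                  let state = log.iter().fold(Account::default(), apply);
                                                                  println!("{state:?}"); // Account { owner: "alice", balance_cents: 650 }
                                                              }
                                                              ```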

                                                            1. 7

                                                              My point is that CRUD isn’t an architectural choice at all.

                                                              1. 2

                                                                Thanks for expanding!

                                                      1. 2

                                                        Sounds like this new MacBook would make a beast of a Gentoo machine :D

                                                        1. 5

                                                                      One area that seems missing in all this is something for user-focused network diagnostics (think ifconfig / ping / traceroute). Is there anything that can show me at a glance, across multiple OSI levels, why something is broken? Say I just know that I can’t see google.com. It would be nice to have 1 command that shows that this is because {cable_unplugged, dns_broken, no_packets_route_past_your_router, …}
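
                                                                      A very rough, hypothetical sketch of what such a layered check could look like (a real tool would also inspect link state, the routing table, DHCP, and so on):

                                                                      ```rust
                                                                      use std::net::{SocketAddr, TcpStream, ToSocketAddrs};
                                                                      use std::time::Duration;

                                                                      // Hypothetical layered "why can't I reach google.com" check, not a real tool.
                                                                      fn main() {
                                                                          let timeout = Duration::from_secs(3);

                                                                          // Layer 3/4 probe against a literal IP: no DNS involved.
                                                                          let probe: SocketAddr = "1.1.1.1:443".parse().unwrap();
                                                                          let ip_reachable = TcpStream::connect_timeout(&probe, timeout).is_ok();

                                                                          // Name resolution exercises the configured DNS resolver.
                                                                          let dns_ok = "google.com:443".to_socket_addrs().is_ok();

                                                                          match (ip_reachable, dns_ok) {
                                                                              (false, _) => println!("no route to the internet (cable / router / ISP?)"),
                                                                              (true, false) => println!("routing works but DNS is broken"),
                                                                              (true, true) => println!("DNS and routing look fine; the problem is higher up"),
                                                                          }
                                                                      }
                                                                      ```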

                                                          1. 1

                                                            Like the useless “network diagnostics wizard” in Windows? :)

                                                          1. 17

                                                            First of all, this conventional wisdom assumes that there is a single universal definition of “better”; but there isn’t one. If you work on a project as a team–which is almost all of us–it’s really hard to get everyone to agree on something; be it the definition of clean code, or even the type of whitespace to be used for indentation.

                                                            Having a really small team size (N) makes consensus easier. It amazes me how much process is compensation for having more than a couple of people (at most) work on something. The failure mode for a small team is bus factor. The failure mode for larger teams is silent accumulation of technical debt due to lack of consensus and coordination.

                                                            I don’t think there’s a neat solution to this problem, but I do notice that many problems go away as N approaches 1.

                                                            Two might be the sweet spot.

                                                            1. 3

                                                              Which is why high productivity is so incredibly important. Get the highest productivity people (see Brooks’s chief programmer teams) and do everything to help them be more productive.

                                                              1. 2

                                                                              The failure mode for a small team is bus factor. The failure mode for larger teams is silent accumulation of technical debt due to lack of consensus and coordination.

                                                                              I’m not sure that this concept has ever been stated as clearly as in this quote. While not a particularly new observation, I feel it will grow to be just as important over time as Knuth’s famous quote on optimization.

                                                              1. 3

                                                                                Meh - this should be automated. A license change is a breaking semantic change in a library. There is a finite set of licenses, and known incompatibilities like this can be modeled.

                                                                1. 4

                                                                  There are finite licenses

                                                                  Not really. Some people write their own n-clause BSD license or so. It’s often a matter of changing a few words.

                                                                  1. 11

                                                                    An automated tool should flag this for manual review and probably removal, because making your project depend on some jackass’s untested, non-peer-reviewed NIH legal code is usually a terrible idea.

                                                                    1. 3

                                                                                      I think, without intending to, you just proved my point. For a project that writes its own n-clause BSD license, the text will inherently be dissimilar enough when looked at through automated tooling to trigger a build / package failure. This is good.

                                                                      1. 2

                                                                                        I agree. I mean, I should actually do this. I don’t think I have ever seen this automated, and it is a shame. License changes in dependencies are probably rather uncommon, but the risk is high.

                                                                      2. 2

                                                                                        There are finite licenses that we should consider using… I’m not saying one shouldn’t use an n-clause BSD license, but that one should have a relatively small set of accepted licenses, with everything else rejected by default and later accepted as a case-by-case exception.
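
                                                                                        As a sketch of that “small allowlist, reject by default” policy (a hypothetical helper, not an existing tool), where anything not on the list is kicked to manual review, which also covers the hand-rolled n-clause BSD case mentioned above:

                                                                                        ```rust
                                                                                        use std::collections::HashSet;

                                                                                        // Hypothetical allowlist-by-default license gate: anything not on the list
                                                                                        // fails the build and goes to manual review (including hand-rolled variants).
                                                                                        fn check_licenses(deps: &[(&str, &str)], allowed: &HashSet<&str>) -> Result<(), Vec<String>> {
                                                                                            let rejected: Vec<String> = deps
                                                                                                .iter()
                                                                                                .filter(|(_, license)| !allowed.contains(license))
                                                                                                .map(|(name, license)| format!("{name}: '{license}' needs manual review"))
                                                                                                .collect();
                                                                                            if rejected.is_empty() { Ok(()) } else { Err(rejected) }
                                                                                        }

                                                                                        fn main() {
                                                                                            let allowed: HashSet<&str> =
                                                                                                ["MIT", "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause"].into_iter().collect();

                                                                                            // In practice these pairs would come from a lockfile or package manifest.
                                                                                            let deps = [
                                                                                                ("some-lib", "MIT"),
                                                                                                ("other-lib", "Bob's 7-Clause BSD"),
                                                                                            ];

                                                                                            if let Err(problems) = check_licenses(&deps, &allowed) {
                                                                                                for p in &problems {
                                                                                                    eprintln!("license gate: {p}");
                                                                                                }
                                                                                                std::process::exit(1);
                                                                                            }
                                                                                        }
                                                                                        ```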

                                                                        1. 1

                                                                          they can be grouped by effect

                                                                        2. 4

                                                                          As he mentions, license compatibility is part of what you look for, but definitely not all. Can’t automate that. And, as the sibling mentions, licensing terms are absolutely not finite.

                                                                          1. 2

                                                                            There are automated tools to aid license review when doing packaging for Debian but they don’t remove the need for a thorough visual inspection.

                                                                            The licenses are then summarized in a standard format in the debian/copyright file.

                                                                            Unfortunately upstream developers and other distributions often don’t care about providing such clear licensing information.

                                                                          1. 8

                                                                            The primary flaw: no tests for code that has explicit edge cases. Even the ‘prod quality version’ has no tests.

                                                                            1. 1

                                                                                                Very likely you’ll naively end up at EAV (Entity-Attribute-Value). See https://en.wikipedia.org/wiki/Entity%E2%80%93attribute%E2%80%93value_model#Alternatives for some pointers on how to avoid that (debatable) anti-pattern.

                                                                              1. 5

                                                                                                  Very thoughtful review. The book is 15 years old now and it does show its age a bit, but I am proud of it.

                                                                                                  I don’t know whether responding to the author here works, but the two big issues he mentions, (unit testing > integration testing) and database isolation, are two things I still emphasize.

                                                                                                  The thing about integration testing is that you’re lucky if you have seams that allow it. If you do, then it’s doable. The thing is, though, that writing tests at a high level to get coverage at a low level is like dropping a pebble down a hole in the earth and trying to get it to land on a particular ledge. Even with coverage tools it’s tough. Once you’ve done that, you have a test that works, covers a lot, and likely takes longer to run than a focused test. Since it takes longer to run, you bunch up your changes and don’t run it as often. When that test fails, you’re left wondering which particular change caused the failure and you have to spelunk to figure it out. That’s why I lean toward smaller, focused (unit) tests.

                                                                                                  Re databases, it’s the slowdown and the distance of used values from expected values that bother me. Rails in particular is rough this way. I once worked with a team that had tests that took three hours to run on 24 cores because of ActiveRecord. If they’d only dealt with computation, I suspect the tests would’ve run in minutes.

                                                                                1. 1

                                                                                  One thing I have encountered from the article:

                                                                                  One of the purposes of testing is to make refactoring and subsequent behavior changes easier and safer, but if every jot and tittle of the internal code structure is encoded in the test suite via all the mocks and fakes, a simple half hour of work refactoring the code as part of adding new functionality turns into hours of tedious work restructuring the tests to match. The result is to paradoxically discourage refactoring because of the painful changes then required to the tests, defeating one of the purposes of having tests.

                                                                                  Now, I know testing is something of an art and perhaps this is solved by a more careful balance between integration tests and unit tests, but have you encountered this? And how do you avoid it?

                                                                                  1. 3

                                                                                    Often I’ve found that overspecification in tests comes from using the wrong mechanism for faking out parts. http://blog.ploeh.dk/2013/10/23/mocks-for-commands-stubs-for-queries/ is pretty helpful for getting your head straightened out. Read it today. Read it again sometime in the future. Share it with your juniors. Hide it from the seniors that you wish to overtake ;)
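
                                                                                                      In the spirit of that post, a small hand-rolled sketch (hypothetical types, no mocking framework): the query side gets a stub that just returns canned data, while the command side gets a mock that records the call so the test can assert on it:

                                                                                                      ```rust
                                                                                                      use std::cell::RefCell;

                                                                                                      // Query side: the code under test only reads from this, so a stub with
                                                                                                      // canned data is enough; the test never asserts on how it was called.
                                                                                                      trait UserQueries {
                                                                                                          fn email_for(&self, user_id: u32) -> Option<String>;
                                                                                                      }

                                                                                                      // Command side: the code under test causes a side effect here, so the test
                                                                                                      // double records the call and the test asserts on it (a mock).
                                                                                                      trait Mailer {
                                                                                                          fn send(&self, to: &str, body: &str);
                                                                                                      }

                                                                                                      struct StubQueries;
                                                                                                      impl UserQueries for StubQueries {
                                                                                                          fn email_for(&self, _user_id: u32) -> Option<String> {
                                                                                                              Some("alice@example.com".to_string())
                                                                                                          }
                                                                                                      }

                                                                                                      #[derive(Default)]
                                                                                                      struct MockMailer {
                                                                                                          sent: RefCell<Vec<(String, String)>>,
                                                                                                      }
                                                                                                      impl Mailer for MockMailer {
                                                                                                          fn send(&self, to: &str, body: &str) {
                                                                                                              self.sent.borrow_mut().push((to.to_string(), body.to_string()));
                                                                                                          }
                                                                                                      }

                                                                                                      // The unit under test.
                                                                                                      fn notify(queries: &dyn UserQueries, mailer: &dyn Mailer, user_id: u32) {
                                                                                                          if let Some(addr) = queries.email_for(user_id) {
                                                                                                              mailer.send(&addr, "hello");
                                                                                                          }
                                                                                                      }

                                                                                                      #[test]
                                                                                                      fn sends_one_mail_to_the_user() {
                                                                                                          let mailer = MockMailer::default();
                                                                                                          notify(&StubQueries, &mailer, 42);
                                                                                                          // Only the command is verified; the query stub is never asserted on.
                                                                                                          assert_eq!(mailer.sent.borrow().len(), 1);
                                                                                                          assert_eq!(mailer.sent.borrow()[0].0, "alice@example.com");
                                                                                                      }
                                                                                                      ```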