Python isn’t my go-to language, but I checked it, and yep, it behaves differently. You also wouldn’t use 0 or 1 as exit-code-style truthy values. But the precedence is still different from JavaScript’s.

    // the js version
    let x, z = false;      // note: only z is initialized; x is undefined
    let y = true;
    let r = (x & y) == z;  // r is true: 0 == false
    let s = x & (y == z);  // s is 0: undefined & false → 0
    let t = x & y == z;    // t is 0: == binds tighter than &, so same as s
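    Since the parent comment says Python behaves differently, here is a minimal sketch of the same three expressions in Python, where & binds tighter than == (the reverse of JavaScript); x is set to 0 to mirror JS coercing undefined to 0 in bitwise ops:

```python
# the Python version: & binds tighter than ==, the reverse of JavaScript
x, z = 0, False   # x = 0 stands in for JS's undefined-coerced-to-0
y = True

r = (x & y) == z  # True: 0 == False
s = x & (y == z)  # 0: 0 & False
t = x & y == z    # True: parsed as (x & y) == z, unlike the JS version
```

    So in Python the unparenthesized t agrees with r, while in JS it agrees with s.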

    That’s surprising, but not in the usual JS-bashing way; I guess it’s historical context? Neat, TIL.


      Neat idea. Our prompts are always too binary, too simple. They are coupled to the computer plumbing that makes them work.

      I’ve had another idea for dialogs, prompts, popups: give them memory. Imagine a prompt that remembers how many times it has been seen. Then there could be logic like “we really don’t want to ask them to sign up for our newsletter more than 3x; they said no 3x”. It wouldn’t work for marketing, because marketing doesn’t care, but at least there’d be memory.
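      A minimal sketch of the idea; every name here is hypothetical, not from any real framework:

```python
# Hypothetical prompt-with-memory: stop asking after N declines.
class NewsletterPrompt:
    def __init__(self, max_declines=3):
        self.max_declines = max_declines
        self.declines = 0

    def should_show(self):
        # "they said no 3x" -> never ask again
        return self.declines < self.max_declines

    def record(self, accepted):
        if not accepted:
            self.declines += 1

prompt = NewsletterPrompt()
for _ in range(3):
    prompt.record(accepted=False)  # no, no, and no

print(prompt.should_show())  # False
```

      The counter would live in localStorage or a cookie in a real web dialog, but the logic is this small.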

      They said no three times.

      1. 2

        I like exa instead of ls. And few more tools I mention here.

        1. 3

          I really want to like exa, but I get tripped up every single time using exa -t when what I want is ls -t. It’s such a productivity killer.

          1. 2

            I’d make an alias. I have alias a='exa'. You could make a specific alias for the -t flag.

            1. 2

              Yeah. My ls -ltr muscle memory needs to become exa -lrsold, and I don’t have it yet. I have an alias t for a tree-like listing that is great.

              t='exa -l -T -L 2 --header --git-ignore -F -d -I node_modules'
              1. 1

                I agree: my muscle memory is a superpower and a prison.

                This is a decent solution. I could alias it to lt.

                1. 1

                  I do lt for the same. Funny how this particular muscle-memorised incantation catches so many of us.

                2. 1

                  How is this any better than the fully POSIX compliant tree?

                  tree -L 2 -C -I node_modules
                  1. 2

                    Non-measurable preference? I have tree too. But to extol this alias: it reads my .gitignore (if there is one). It has headers. Here is an output example; you can’t see the underlines of the column headings.

                     tmp/foo $ exa -l -T -L 2 --header --git-ignore -F -d -I node_modules
                    Permissions Size User    Date Modified Name
                    drwxr-xr-x     - you     22 Oct 12:23  ./
                    .rw-r--r--     0 you     22 Oct 12:23  ├── blech.txt
                    .rw-r--r--     0 you     22 Oct 12:22  ├── bleep.txt
                    .rw-r--r--     0 you     22 Oct 12:22  └── bleh.txt

                    It understands git, and has some other nice options in the manual.

            1. 5

              Percol, jq and gron: I use these dozens of times per day. Also fish shell; I find it superior to all the others and don’t quite understand why anyone would use bash in this day and age.

              I also use httpie, ag/ripgrep/ack, fd which are great, but they provide only incremental improvements on standard tools that work well. If I am writing shell scripts I always stick to curl, grep, find, etc.

              1. 4

                How about: because I’ve used bash for 25 years, and the second a key command doesn’t work in a new shell I feel I’m wasting my time. Why learn a new way to do the exact same thing? Especially, why learn it just so things are colorful?

                1. 3

                  Fair, in production at work. But learning on the side means trying a new thing out to find whether it really is the exact same thing (it’s not … bash and zsh and fish are similar, but not the same). Of course, what you are doing may be working well enough, but pay attention to pain points. Did bash do something weird with variable escaping?

                  Of course, no one can convince anyone, or prove software’s worth. So here’s my camp and creds, I guess: I’m using fish, but I think you could achieve most of it with zsh plugins, without the frustration that the world is still using bash. Zsh is bash-compatible, so the world staying where it is doesn’t matter. That’s pretty nice. It’s also a zero-cost switch, other than learning the tricks (the features that are supposed to be great) that you didn’t have before.

                  Saying it is about colors is a bit reductive. Color is multiplexing for your cerebral cortex. Colors help you scan. But it’s not just colors.

                  1. 3

                    You don’t have to learn any commands at all. The invocation syntax for commands is exactly the same as bash. I still write my shell scripts in Bourne shell like I did 15 years ago when I used bash. No change there. What fish offers is a better UI for command input: autocomplete and the history browser are superior to bash’s. You can have it without colours if you want; colours are there to provide information. For example, as you type you will get an autocomplete suggestion, shown in a different colour, and the same when picking autocomplete candidates. It’s not quite accurate to say it’s just more colourful, because bash doesn’t have these features at all.

                    1. 1

                      This has NOT been my experience.

                      We use fish at work… for reasons… I was very quickly frustrated that commands were different and found it unusable.

                      You say autocomplete and history are superior, but they are also different. Sometimes the cost of change negates any superiority the new thing would provide.

                1. 5

                  Great thread.

                  choose replaces the usual awk one-liner to get a column of text for me. https://github.com/theryangeary/choose

                  $ echo first second third | choose 1
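                  For anyone wondering what that replaces: choose indexes fields from zero by default, so choose 1 picks the second whitespace-separated field, i.e. roughly this in Python:

```python
line = "first second third"
# choose is zero-indexed, so `choose 1` is the second field
print(line.split()[1])  # second
```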
                  1. 1

                    Now this one is pure awesome, thanks; as someone who has to search for awk samples every time I use it, this will help me a lot.

                    1. 1

                      It’s not in a separate package or especially well documented, but I have an example program in cligen that does all this and more. For me, the memory-mapped IO mode is even ~2x faster than mawk on Linux for files in /dev/shm.

                  1. 2

                    Yeah. I think dark themes look cooler, and there’s a wide variety. I tried, but the web is mostly light. I tried contrast switchers in Chrome and Firefox, but they don’t always work, and having trails from some Harvey Two-Face splits wasn’t great. So I switched to light for terminal and editor.

                    Find the thing that works for you I guess. Try new stuff, sharpen your tools but don’t waste too much time like me. I check myself when I switch themes: is this just messing around or am I actually tired of the scenery?

                    1. 1

                      Good post and discussion.

                      This doesn’t handle memory outside the system. This doesn’t handle I/O, network, database, etc. You cannot avoid all runtime errors, so there are no guarantees. You cannot invent a language or a machine/CPU/FPGA that doesn’t have some kind of runtime error. All of us, with our different machines and languages, need to do error handling, testing, monitoring and much more. Types and compilers are helpful, but many people think there is some kind of magic and are then surprised when it fails. This also doesn’t mean “don’t use types and compilers”. You need a mix of tools, but you also need to understand what will never be possible and why there is no silver bullet.

                      • If I add typescript I don’t need tests. No.
                      • My compiler will catch all my mistakes. No.
                      • The new version of my language/framework will fix this. No.
                      • The next trendy language will have no errors of any kind. No.

                      What works (probably) but we (whoever that is, definitely includes me) don’t do enough of is:

                      • Feedback on pain points (money/time)
                      • Add tooling
                      • Take time to teach to open minds (not a time issue)
                      • Change culture (but how and with what authority)

                      Maybe some approaches for the original story:

                      Add a test that simulates the country that is having problems. It fails. Make the test pass. Decide if there is a refactoring opportunity. Refactor (or not). Commit the tests and the fix. The repo is stronger. There’s a monitoring angle here too, but to me monitoring is a different kind of testing: it’s the same test, just continuous.
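                      A sketch of that loop in Python, where every name is a stand-in for whatever actually broke (the real bug was only described as a problem in one country):

```python
# Hypothetical repro: format_price and the separator table stand in
# for whatever actually failed in the problem country.
SEPARATORS = {"en_US": ".", "de_DE": ","}

def format_price(cents, locale):
    # the fix: fall back to a default instead of raising KeyError
    sep = SEPARATORS.get(locale, ".")
    return f"{cents // 100}{sep}{cents % 100:02d}"

def test_price_in_the_problem_country():
    # simulate the failing country; this raised KeyError before the fix
    assert format_price(199, "fr_FR") == "1.99"

test_price_in_the_problem_country()  # passes after the fix; commit both
```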

                      If the problem is all I/O, then you can’t control the memory, so all your types and compilers have no power. You can let it crash, but it still doesn’t work. If you want to avoid breakage, or at least know about it, there is contract testing, which is pretty neat in this era of mashups and cloud. See pact.io; it’s weirder but easier than it seems. You need to trust pact a bit though; it’s slightly different from what you may have seen. It’s a different layer of tests. Ultimately, this is probably culture, but I like introducing tools as enforcement/codification of cultural values.

                      1. 10

                        It is ridiculous. I keep saying “stop helping” to prompts. Dismiss, no, stop. Stop helping. It’s bad help.

                        Humans find the shortest path, like water downhill and electricity. If the web gets dumb, something will pop up in its place with usage: Flash, applets, RealPlayer, VRML, some QuickTime file popping up, and other things. The old things aren’t destroyed but slowly get routed around. People will figure out what works; half the population is above average intelligence.

                        1. 11

                          No one does “no testing”. Everyone has tests. “No testing” would be writing hello world, shipping it, dusting off your hands, saying “another job well done!” and going home without ever observing it run. People who think they have no tests actually have manual tests: they use their fragile, tired, human eyes to parse the output and think “pass!”. There is no “no testing”; there is manual testing and automated testing.

                          Some people won’t call them manual tests, but once we identify them as manual, it is easier to see that automated tests just save you from running ./main (or whatever) for the rest of your life. You can add types, functions, tricks, anything novel in Blub 2.0 … but you could still have a business-logic bug, so you need to run it at least once. No one does “no testing”, but they think they do. And the explosion of permutation paths is too much to test manually, so you don’t. Then you regress or ship a bug. “Whoops!” And if you do find something, it’s back to manual testing. “That shouldn’t happen again!” Nothing grows.

                          Tests are critical and related to the entire lifecycle. But they especially pay off late in the project. Projects rarely shrink in complexity, they just pile on the features and then you declare bankruptcy. This can happen even with testing, I just think it’s a powerful ally/tool along the way.

                          OP’s example of race conditions might resist testing. So maybe the coverage is 90%. That’s not a reason to have 0% coverage. I want to automate the things. My fragile, tired human eyes don’t want to watch ./main run again.
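                          The smallest automated test is just a program that runs ./main and reads the output so your eyes don’t have to; a sketch, with the command and expected string as stand-ins:

```python
import subprocess

def check(cmd, expected):
    # run the program the way you would by hand, but let the
    # machine parse the output instead of tired human eyes
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 0 and expected in result.stdout

# stand-ins for ./main and its known-good output
print(check(["echo", "hello world"], "hello"))  # True
```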

                          1. 1

                            I think a lot of people confuse “testing” with complex testing frameworks and unit testing, rather than “a program that automates running ./main in different ways”.

                            I like testing but care little for the whole TDD/unit-testing approach. I think TDD purists and the like have done more to turn people off from testing than anyone else, because their “one true correct way” of doing things is often hard for people to implement, understand, and maintain.

                          1. 33

                            I’m not a huge fan of the article, but I do see the point they’re trying to make. I’d say most tech content is badly contextualized and motivated.

                            I’ll give my favourite example: Docker scale talks. The worst case (and I’ve seen a couple) was a very good talk by a Google developer on how they allocate and run seven-figure numbers of Docker containers in their ecosystem. Super interesting talk, given at a conference that wanted to be “the first conference for beginner developers”. The amount of time I spent in the breaks talking to people who absolutely wanted to try all that out, telling them that they are not Google, was terrible. Good content, though. But none of the audience would ever need to apply it, unless they are working at GCE, AWS or Azure.

                            Similarly, I’ve seen a talk by Google SRE “for all audiences” that started with setting up an SRE team of at least 3 people in all major timezones. PHEW.

                            My takeaway from my years in this industry is how bad this industry is at evaluating solutions and budgeting how many solutions you want to have in a product. There’s value in restraint.

                            Still, all the effective developers I know reach for the internet first for inspiration, and are incredibly good at establishing that context by themselves.

                            1. 13

                              My takeaway from my years in this industry is how bad this industry is at evaluating solutions and budgeting how many solutions you want to have in a product. There’s value in restraint.

                              I think this is because most of the industry never needed to manage spending in a company. In addition, most of us never got training or classes on this.

                              You mention it’s hard to know which technologies match a company’s needs, but that’s also true for finance. Sometimes spending $200k on prototyping is a drop in the ocean compared to the expected ROI, whereas sometimes $50 of EC2 per day is a lot, and it’s not always easy to grasp which is which if you never studied economics.

                              Maybe that’s something that’s missing in the space, like “finance for IT teams”, going beyond the basics of direct ROI and depreciation.

                              1. 4

                                I fully agree with you there! That’s also why I don’t want to put that on individual engineers or even teams. This behaviour is rarely asked for, so the skill stays low.

                              2. 4

                                As far as I can tell, the problem “at scale” is much worse (and considerably deeper) than the article can tell. It’s not just that industry is ridden with superstitious gossip; the “science” that it supposedly rests on is too. Logic, empiricism, skepticism, statistical literacy, and historical awareness all can help… but ultimately there’s no easy way out of this bad situation, because the objects of our study are themselves almost entirely artificial. Lies repeated often enough and loudly enough can indeed become truths. So can less deliberate falsehoods, and of course the many kinds of claims and assumptions which cannot easily be assigned a truth-value. It’s not a special problem unique to our special field, either; brave people in economics and the social sciences have had to face this lack of foundations for longer than “computer science” has even existed.

                                Here’s a practical book and a much more theoretical one. Both are well worth reading. Good luck out there!

                                1. 4

                                  The amount of time I spent in the breaks talking to people who absolutely wanted to try all that out, telling them that they are not Google, was terrible. Good content, though. But none of the audience would ever need to apply it, unless they are working at GCE, AWS or Azure.

                                  I will never ever need to excavate the side of a mountain but I would still attend a talk about how someone built this monster:


                                  I feel playing thought police at a conference fits in the “ego driven content creation” we’re discussing here. Just relax and let other folks enjoy what they like.

                                  1. 10

                                    The amount of time I spent in the breaks talking to people who absolutely wanted to try all that out, telling them that they are not Google, was terrible. Good content, though. But none of the audience would ever need to apply it, unless they are working at GCE, AWS or Azure.

                                    I feel playing thought police at a conference fits in the “ego driven content creation” we’re discussing here.

                                    You are misusing (and watering down) the term “thought police”. (You aren’t alone.)

                                    In George Orwell’s 1949 dystopian novel Nineteen Eighty-Four, the Thought Police (Thinkpol) are the secret police of the superstate Oceania, who discover and punish thoughtcrime, personal and political thoughts unapproved by the government. The Thinkpol use criminal psychology and omnipresent surveillance via informers, telescreens, cameras, and microphones, to search for and find, monitor and arrest all citizens of Oceania who would commit thoughtcrime in challenge to the status quo authority of the Party and the regime of Big Brother. - Wikipedia

                                    Engaging in open discussion – even if challenging or critical – with others at a conference is nothing close to being the “thought police”. It doesn’t match any aspect of the description above.

                                    1. 2

                                      I’d like to redub some enterprise software tutorial audio over this digging video. https://www.youtube.com/watch?v=PH-2FfFD2PU

                                  1. 3

                                    Most music on SoundCloud is crap. Most art is crap. Most movies are crap. Making good stuff is rare because it’s hard. There’s the hobby side of things (“I made Pong in Vimscript”), which behaves more like art. Then there’s the $job side, which is bound more to the business than to the art: the point is to reduce cost or produce more (which is also cost). We don’t have many fundamentals (like steel or atoms), so it’s very abstract. We have benchmarks and maybe some mathematical proofs (which are usually reductive, or pure theory). It’s very hard to tell better from worse.

                                    When the rules don’t work for you anymore, you’ve ascended. This isn’t a superior position. You’ve taken off the training wheels, and now you have to be more independent, which is more work. You don’t need the rules. The rules have holes. Great. You still need rules, and now they are going to come from you, I guess. Or you need to keep looking. You needed the rules as a beginner; otherwise you would repeat all of history as self-discovery. I threw away an ActionScript 2.0 book after hitting some really horrible code, and had a revelation similar to the article’s: “the teacher doesn’t know everything”.

                                    1. 2

                                      I had a physical reaction to the title, and then to the model number of the NIC in the post: “NE2000”. “Oh no.” This blog summoned ancient memories from synapses not fired in decades, causing a stomach-churn response. Memories of being shut in a networking closet in my early career. Memories of trying to get a Duke Nukem game to run at home. Ancient memories of worrying about future Y2K problems, and now the blog shows 1920. I guess it was bound to happen. Someone put this stuff back in the box, please. :D

                                      1. 6

                                        Exclamation Points! Get them cheap here!

                                        1. 3

                                          Sorry! We just ran out!

                                          1. 3

                                            I found some half-price, they’re upside-down but for this price that shouldn’t matter¡¡¡¡¡¡¡¡¡¡¡ƖƖ¡¡¡¡Ɩ¡¡¡¡¡

                                        1. 9

                                          though some of my friends argue that it’s a bit of a relic of past at this point

                                          LDAP is tried and true technology. It works, it’s functional and malleable and has stood the test of time.

                                          It takes some investment to learn, but that investment is well worth it.

                                          1. 3

                                            Yeah. It’s old (from the 80s?). It’s well established. It’s boring. It works. It’s misunderstood. It’s niche. It’s not exciting, people don’t seek it out … it runs into you at work.

                                            The tools are text-based and archaic. It seems like it could be disrupted with a bit of JSON, but then what’s the point? Human-readable? Web technology? OAuth and friends are kind of already in this space, but they are solving a different problem. What’s really interesting is the tree, and how other tools try to do what it does natively. Try to make an org chart with groups and delegated access areas in a flat database (not just authentication, but roles and permissions, i.e. authorization). :| Pretty difficult!
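                                            A toy illustration of why the tree matters (all names invented): with a flat user table you end up hand-rolling the recursive group walk that a directory tree gives you natively.

```python
# Toy org tree: groups contain users and sub-groups, the shape a
# directory models natively and a flat table makes you reimplement.
ORG = {
    "corp": ["eng", "sales"],
    "eng": ["alice", "platform"],
    "platform": ["bob"],
    "sales": ["carol"],
}

def members(group):
    out = []
    for entry in ORG.get(group, []):
        if entry in ORG:               # sub-group: recurse
            out.extend(members(entry))
        else:                          # leaf: an actual user
            out.append(entry)
    return out

print(members("corp"))  # ['alice', 'bob', 'carol']
```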

                                            Apache Directory Studio is a nice GUI for learning LDAP (with some reinforcing success). 389 Directory Server from Red Hat was good too. It’s been a long time since I was submerged in it. Like anything else, it’s frustrating and stupid until it isn’t.

                                            I still think it could be reinvented. But then adoption. A few projects exist (virtual directories) that maybe abstract and solve it enough to make it modern-ish. Trees are pretty cool but almost unexplainable to the masses with users in their flat user table. “Why would I need that?” Hmm, maybe you don’t. Let me explain what it can do and see what you think.

                                          1. 20

                                            So I am in kind of a similar situation, just with Python. With all the virtualenv and pip stuff, I usually give up. This is especially annoying when I want to contribute to a larger Python project that assumes this or that workflow. A site or a wiki that collects these frequently assumed answers would be really nice.

                                            1. 7

                                              +1. It seems like the preferred method for packaging software in Python changes monthly – and, of course, each open source project has its own way of doing things. Pipenv? Poetry? Good ol’ requirements.txt? Very frustrating.

                                              1. 5

                                                This, and not just in Python, is probably one of the reasons LSP is gaining popularity. Not that I am a fan of it, but I understand that one big chunk of black-box code handling whatever environment one lands in is attractive.

                                              2. 2

                                                I feel like Python has a high-DIY culture. There’s officially one way to do things, but few conventions. Pretty weird. Add in a mix of domains (now with machine learning and data science), and the background and needs of authors are changing. I’ve tried to bring in a minimum of quality-of-life tooling from my Ruby/Go/Elixir days and found only a few laterals: Poetry for dependencies, ipdb for a REPL, pytest for tests, asdf for switching language versions. These have worked for me, but they aren’t universal answers (and they needed bespoke config and hackery for me). Every project is different. This is the problem with an older language like Python that had to bolt on modern tooling after the popularity explosion.

                                                1. 1

                                                  Have you tried Python 3? It has virtual environments baked in (python3 -m venv), and many projects already have their packaging decisions made.

                                                  That said, there’s no doubt packaging is Python’s Achilles heel.

                                                  However I’ve yet to encounter any sufficiently rich ecosystem that doesn’t have packaging foibles. I seem to recall Perl having a fair number of hoops to jump through to get something into CPAN, and ditto for Ruby and gems.

                                                1. 2

                                                  This is the feeling: his journal of the learning burn. A stranger in a strange land. No one likes feeling dumb. Adults hate it so much that we aren’t sponges like kids. Adults hate it so much that we can’t learn new skills unless we get past it. And with computers already making us feel dumb on an hourly basis (yes, computer, you are correct, I see, my bad [fix]), how can you even take on more toopid points?

                                                  I think it’s at least an energy thing. You have to find some downtime between projects. Extra energy. I don’t know how that happens personally. For me, recently, it was having time between jobs. I mean, it’s rare to just “have time” and not “make time”.

                                                  Or another way to look at it. The Seven Stages of the Blub Paradox.

                                                  1. Shock - Python 2 is EOL?!
                                                  2. Denial - Surely they will do an extension for security updates.
                                                  3. Anger - Ugh. I have to change all this stuff!
                                                  4. Bargaining - Maybe I can just 2to3.
                                                  5. Depression - Maybe I’m too old to learn Python 3. I should go into management.
                                                  6. Testing - What if I just start a py3 branch …
                                                  7. Acceptance - Hmm, f strings are pretty neat.

                                                  Hmm, the metaphor is very strained, and this is not really what burning in a new ecosystem is like. But you can imagine that anything you could do in Python 3 would be possible in Python 2. Turing completeness vs. neat tricks. All languages are the same, but that’s not the point. Each language has taught me something, and then diminishing returns, like @arp242 said. End-of-history illusion. This won’t be the last time you face a new ecosystem. I try to understand the world view and the reason for existing. Then play around. Then look at that reason for existing again.

                                                  1. 12

                                                    I enjoyed reading this. NixOS is still surprising to me; I don’t know how to summarize my experience, but I think I still need to internalize NixOS a bit. I’ve chunked it as “grub labeling”, which is reductive. My distracting thought while learning about NixOS was that rolling back a database with all of its state would require downtime, because “select a grub label”. Is zero downtime with a production database possible with NixOS? How can you roll back something like that? If some external volume is exempt from Nix somehow, that still breaks the use case where a postgres major-version bump horks everything.

                                                    Anyway, my own Nix learning aside, my only distracting thought while reading this article was similar: “another example of the database killing all good ideas”. How many ideas have a footnote about the database?

                                                    ‡ Does not apply to database - abandon all hope

                                                    Random good ideas that are killed:

                                                    • Containers - volumes are too tricky for a business outage
                                                    • Backups - you didn’t do a full realistic restore / flip because your hardware is too big
                                                    • Configuration Management - it’s the same as backups, you rarely exercise it fully
                                                    • Load Balancing - the middle tier scales and deploys with grace

                                                    Of course I mean a relational database like mysql/postgres/etc. I know there are exceptions and shops that mitigate these things. I’m not trying to enumerate the things that don’t match. My point is: the database kills good ideas. I don’t like state, and this post is about not having darling state. But we can’t easily apply it to a state server, which is what a database is.

                                                    1. 11

                                                      If I understand the article correctly, you can just use the /persist directory to store your database data and it will be safe between restarts. As for updates: since you can have multiple versions of the same software running in the same OS with ease (and all managed by the package manager), I do not see much of a problem there. You spawn a new server, copy data from one to the other, and then swap the listening daemon.

                                                      1. 5

                                                        Is zero downtime with a production database possible with NixOS?

                                                        Absolutely. In fact, it’s probably easier (based on my anecdotal experience) to deploy NixOS systems that maintain zero downtime than many other distros.

                                                        If NixOS made statefulness impossible, it’d be unusable for hosting most things that required a database. Instead, NixOS removes the direct ability to mutate system configuration - as opposed to user and application data. So, on a NixOS deployment, you still have your runbooks for things like database migrations and such, but those would be just baked into your system or package config as systemd units or what have you.

                                                        A good example of this is how NixOS handles certbot/Let’s Encrypt - it deploys a systemd timer that takes care of managing private key generation and getting certificates - i.e. that service’s whole state in /var/lib - for you. If you change the configuration, then the deployed systemd timer changes too, but the certs and privkeys remain. Since the difference between config and data is enforced at a filesystem level, this lets you, as a sysadmin, make useful assumptions about what’s actually mutable data and what’s not.

                                                      1. 3

                                                        I know what OP is getting at, but I don’t like this. Config formats are usually a lib; installing Python is something else. They say “and then you’re done”, but you aren’t done. Interpreted languages make you install a dev environment every time, unless you are sharing on the web (no local files). Given that Python’s easy_install / pip / pipenv / poetry / virtualenv / asdf / .tool-versions / unresolved PEP specs … is a jungle of opinions and bit-rotting usability, how is this “done”? Let me ask you this: how do you install Python? A cartoon fight cloud appears ;)

                                                        Ruby has no better answer (though a smaller set of options), and JS has many options for installing a Node runtime of a particular version and managing libraries. It’s because of a hidden cost: interpreted languages ask you to set up a dev environment, to pretend to be a developer, every time. It’s why so many READMEs don’t even want to get into it. Don’t say Docker. :) You could say PyPy. :) But then that’s not a config file? I liked HCL the last time I used it as a library in a program.

                                                        1. 3

                                                          I have zero experience writing GUIs, but it seems to me that declarative UI like SwiftUI or the upcoming Jetpack Compose is the wave of the future and perhaps the best way to approach creating cross-platform GUI frameworks. It’s unfortunate that the two examples that I just mentioned are tied to macOS/iOS and Android, but perhaps they could be ported to other platforms like Mono did with C#?

                                                          1. 3

                                                            Jetpack looks interesting. Yeah, I hope whatever sane approach cross pollinates out to everyone. I feel like we went full web and overshot some things. If I had to make Winamp right now, I’d probably use Electron and that’s kind of crazy? It’s not really on the web but I’m using web tech.

                                                            1. 1

                                                            I have a lot of experience writing Windows GUIs, mostly in C# WinForms and WPF, and I’ve tried C++/MFC a bit. The C# ones are quite nice IMO, and MFC is kind of manageable in comparison. I don’t know how much support there is on other OSes; it does seem to be more than “none” with Mono. But that kind of just confirms the author’s point: Windows and OS X both have perfectly fine GUI app development languages and environments, and they don’t really support any other OS.

                                                            1. 3

                                                              I don’t think we’ve figured out the view layer yet. The view layer (for me) is when all my good habits go out the window. My view code is completely different than the “plain old code”. Components are a nice move but it doesn’t tidy up nicely for me. There’s code in the view and it’s weird. Maybe view code is not code at all. Maybe it’s a document, a result. It’s something we haven’t found the right paradigm for. Maybe it’s a database. Maybe it’s a blob. Maybe it’s a frizzlewuzzle.

                                                              1. 1

                                                                I quite like the approach popularised by WinForms and React, where only essential local state (like text cursor location) is held in a component, while everything else is derived from externally-set properties.