1. 4

    It’d be interesting to compare httpx here: https://www.python-httpx.org/

    1. 1

      I’ll add this tonight! Looks great, hadn’t heard of it.

    1. 2

      Great article! I might try async requests sometime.

      When using Requests, I throttled my requests to avoid overwhelming the remote server. Like in the BASIC days, I just used “sleep” to guarantee a minimum amount of time between requests. That has the advantage of being less likely to be buggy or behave unpredictably than custom throttling code.
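
      Roughly what that looks like with Requests (a minimal sketch; the URL is a placeholder):

      ```python
      import time
      import requests

      MIN_INTERVAL = 1.0  # seconds to wait between requests

      for page in range(10):
          response = requests.get(f"https://example.com/items?page={page}")
          print(response.status_code)
          time.sleep(MIN_INTERVAL)  # guarantee a minimum gap before the next request
      ```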

      1. 4

        Adding sleep is good, especially if you’re trying to remain undetected.

        However, people typically gravely underestimate what a server can handle. Even at home, a server can handle 10k connections/s if configured properly.

        There’s something to be said for being nice, but in general I’d say you can hit things as hard as you want and the server won’t stutter.

        1. 2

          There’s something to be said for being nice, but in general I’d say you can hit things as hard as you want and the server won’t stutter.

          That works fine when there is simple rate-limiting and tracking on the server end. When you are dealing with larger APIs, or services that might be sensitive to request rates (e.g. LinkedIn), then you need to be aware of how they may take action later. Your client may appear to be working the first time around and then you get blocked later. It is worth understanding more about the service you are making requests to and taking a cautious approach because the response may be more sophisticated than you are prepared to deal with.
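
          One cautious pattern is to back off whenever the server signals you’re going too fast - a sketch, not tied to any particular service (the status codes and delays are illustrative):

          ```python
          import time
          import requests

          def get_with_backoff(url, max_retries=5):
              delay = 1.0
              for attempt in range(max_retries):
                  response = requests.get(url)
                  if response.status_code not in (429, 503):  # not being throttled
                      return response
                  time.sleep(delay)  # wait before retrying
                  delay *= 2         # double the wait each time we're rate-limited
              raise RuntimeError(f"still rate-limited after {max_retries} attempts")
          ```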

          1. 1

            you get blocked later

            Yes, please always check this first; you don’t want to run into CAPTCHA requests (yt-dl..)

      1. 2

        Original author here - let me know if you have any questions.

        1. 2

          You accidentally multiplied in a factor of 24. 72 years is only ~26k days (72 × 365 ≈ 26,300); the extra ×24 would give hours, not days.

          1. 1

            Wow, that’s a huge mistake. I’ll update it. Thanks!

          1. 1

            pct is the percent of “bad” emails on which to apply the policy

            Why would you ever want to change that to below 100%?

            1. 4

              pct is the percent of “bad” emails on which to apply the policy

              This is bad wording on my part - it’s actually what percentage of failures should be reported, to prevent an inundation of email reports. I’ll update it, thanks!

              1. 1

                Oh that makes sense. Thanks for clarifying!

                1. 3

                  Actually, I’m incorrect. The RFC does specify it as the percentage of emails to which the policy is applied. Weird, right?

                  The idea is to allow a slow rollout of DMARC policies, to make sure nothing breaks. It’s not meant to be kept at that percentage forever - it’s for when you first set up DMARC: in case you’ve misconfigured something, you can start at 1% of emails and make sure they still get delivered.
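
                  For illustration, a rollout record might look like this (the domain and report address are placeholders):

                  ```
                  _dmarc.example.com.  TXT  "v=DMARC1; p=quarantine; pct=1; rua=mailto:dmarc@example.com"
                  ```

                  pct defaults to 100 when omitted, so once you’re confident legitimate mail still gets through, you can raise it and eventually drop the tag entirely.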

            1. 8

              I can only assume the figures cited are in USD, but the article is on a .ca domain (Canada), so there’s some ambiguity there.

              1. 4

                Yes, apologies if that wasn’t clear - these are California-centric offers (and can probably be extended to include Seattle and New York). They don’t hold across the US, or across all software jobs.

                1. 2

                  Startup-centric offers, too, it looks like.

                  1. 2

                    Google and Facebook aren’t startups.

                    1. 1

                      Startup- and FAANG-centric offers then? Or does nearly every IT job in the state of California, including at big non-tech firms, come with ownership in the company? Genuine question, as I assumed there were lots of boring ones out there with just salary and benefits.

                      1. 2

                        Tech industry offers.

                        1. 1

                          So, is it California-wide in the tech industry that you get equity and such, and not just Silicon Valley?

                          That’s nice, because it’s nearly non-existent down here. You just get wages and basic benefits.

                          1. 1

                            Oh! Where do you live/work?

                            1. 2

                              I’ve mainly been in the “Tri-State Area” of East Arkansas, West Tennessee, and Northern Mississippi. Generally close to Memphis, TN.

                              See, I’ve noticed a bubble effect on tech forums where people think most IT jobs are like theirs. It’s not limited to startup or FAANG folks. So, I’m pressing this out of curiosity about whether California, throughout the state, is really that different, or whether it’s localized to specific areas/companies. I doubted banks, insurance companies, warehouses, etc. were making these offers.

                              Another person said just tech companies. So, my next question would be whether that includes all or most consulting firms, SaaS suppliers without VC funding, etc. outside Silicon Valley. It would be very advantageous if they were all offering good pay, benefits, and equity in areas with lower cost of living and less churn (job security). I’m doubting it by default but would be happy to be wrong.

                              EDIT: Btw, good to see you again. I was thinking about messaging you this week to ask where you’d been.

                              1. 2

                                Consulting firms aren’t generally considered tech companies. See https://stratechery.com/2019/what-is-a-tech-company/

                                1. 2

                                  I’m talking specifically about the ones that build, deploy, and maintain software/systems for clients. That linked definition describes a high-scale, near-zero-marginal-cost tech company - a subset of them.

                                  Perhaps there’s a different definition of “tech company” in your area or even most areas. Here, and in many places I read, a tech company is a company that primarily makes money on tech as a product or service, especially IT. The massively-scaling tech companies are rare; Silicon Valley is itself rare in nature. I wouldn’t favor using them as the general definition of “tech companies.” I’m also not sure there was ever a nationwide consensus on them being the tech companies, versus a broader definition that includes IT-focused businesses.

                                  I’m thinking most business people and developers outside Silicon Valley would call a software shop a tech company even if it didn’t meet Stratechery’s definition. I’m not sure, though; it’d be worth a survey. Meanwhile, I’m guessing you’re saying the jobs with the extra benefits are just at the “tech companies” that meet that site’s definition in California. That does narrow it down from “tech in California (statewide)” to that type of tech company, which is mostly in one area.

                                  1. 1

                                    The fancy tech companies do also exist outside the Bay Area, in NYC, Seattle, Boston, etc.

                                    1. 1

                                      There are a few more places, sure. The point is that limiting the term to one type of company doesn’t make a lot of sense, given that other kinds specialize in building and delivering tech. The deliverables range from one-off custom to semi-custom to mass market. The finance models range from venture capital to businesses built up from loans or personal investments.

                                      On the label itself, Inc quotes people using my definition, including Gartner Research. Broader investigation finds the Alex Payne quote looking the most solid (and it supports my definition). If you go by that definition, virtually no tech companies offer some of the benefits in the linked article, outside specific kinds in big cities that are themselves not representative of most places. A good argument for moving to them, maybe, but not for going to a “tech company” in general for those benefits.

              1. 3

                Slightly odd corner case in Lobsters’ tagging system here: this is a written article. It is tagged video because it’s about video. Usually that tag is meant to be used to allow people to filter out posts where the content is a video rather than the written word.

                I’m not sure Lobsters’ tagging actually has a solution for this, so I’m going to go off on a tiny bikeshed aside: perhaps the ‘video’ tag could be called type:video or something like that? The same thing comes up for people who want to filter out posts that are links to PDFs and then can’t see ordinary written posts on the web about topics like PDF rendering.

                1. 2

                  Yeah, I was actually a little wary about applying the video tag, but there didn’t seem to be any other tag that fit better - definitely nothing on HLS or streaming.

                  I think a type: prefix sounds like a great solution.

                1. 3

                  Ugh… I know it’s not the point of the article, but it still hurts me to see SQL that’s vulnerable to injection.
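
                  For anyone wondering, the fix is to use parameterized queries rather than string interpolation - a sketch with Python’s sqlite3 (the table and column names are made up):

                  ```python
                  import sqlite3

                  conn = sqlite3.connect("app.db")
                  title = input("search: ")  # attacker-controlled input

                  # Vulnerable: the input is spliced into the SQL text itself.
                  # conn.execute(f"SELECT * FROM videos WHERE title = '{title}'")

                  # Safe: the driver passes the value separately from the query.
                  rows = conn.execute("SELECT * FROM videos WHERE title = ?", (title,)).fetchall()
                  ```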

                  1. 1

                    Ah you’re right. Fixed it.

                  1. 1

                    Well written and interesting article. Looking forward to Part 2

                    1. 8

                      Deno seems very interesting. Its creator is trying to rectify his past mistakes and adopt typed, async JS.

                      URL imports seem somewhat interesting, but they have one large flaw in my opinion - they drop the singular responsibility of security in exchange for a shared one.

                      NPM has a lot of issues, especially security ones, but at least it’s an actively maintained, up-to-date entity. If something goes wrong, someone will notice very quickly and attempt to rectify it just as quickly.

                      With URL imports, however, I’m worried that we’ll lose the “quickly” part of this. Imagine a scenario in which an early adopter makes a left-pad-like module - something trivial that isn’t in the standard library but everyone wants. In a few years they migrate away and drop out of the tech community (but still host their domain/JS).

                      If that domain gets compromised, every package with that import is compromised too. There won’t be a central authority that can sink that package or domain - it will be a shared responsibility of every developer, which is significantly more dangerous than the singular responsibility of npm.
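
                      To make that concrete (the module and domain here are hypothetical):

                      ```typescript
                      // Hypothetical: a trivial utility imported straight from someone's personal domain.
                      import { leftPad } from "https://janes-utils.example/left-pad.ts";

                      console.log(leftPad("5", 3, "0"));

                      // If janes-utils.example lapses or is compromised, every fresh fetch of this
                      // URL pulls whatever the new owner serves - there is no registry that can
                      // yank the module on everyone's behalf.
                      ```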

                      1. 14

                        I was amused to note that he cautions against using “cute” code and then goes on to call URL imports cute at around 21:14

                        1. 4

                          The main thing I’d counter with is that the web has run on URL imports like that, and has managed it fairly well. A spread of imports also means a spread of attack targets; NPM is a single target by comparison with the various CDNs, locally copied libraries, and so on that this scheme proposes.

                          Overall, it’s a different tradeoff, and so far, this JS runtime is a thought experiment.

                        1. 2

                          I’ve used Streisand to create a VPN endpoint on a DigitalOcean droplet. AWS, Google Cloud, and more are supported out of the box.

                          1. 2

                            Streisand looks very cool!

                            I’ve personally used Algo a lot - it spins up a quick IPsec VPN. I’ll trash it and make a new one every few weeks; it’s really well written and a good bit of software.

                          1. 14

                            Not much has changed since 2012, when this was published: Rust & Go (released in 2009/10) still don’t have any GUI libraries anywhere near as simple as REBOL’s, instead wrapping an older GUI system or using XML. In fact, most programming languages seem to expect you to make the GUI in HTML/JavaScript/CSS now, 6 years after this post. Even visual programming languages like Scratch rely on replicating textual elements with graphical puzzle pieces. Once you get into Piet or Pure Data, the language itself is visual, but I wouldn’t recommend trying to write a GUI app in an esolang like Piet; Pure Data, on the other hand, lets you create a simple or more complex GUI, but it’s nothing you would traditionally consider code.
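
                            For reference, this is roughly the level of simplicity REBOL’s VID dialect offered (from memory, untested):

                            ```rebol
                            view layout [
                                text "Hello, world!"
                                button "Quit" [quit]
                            ]
                            ```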

                            1. 3

                              Have you tried / what do you think of something like ImGui (especially when exposed to a “scripting” language)?

                              1. 1

                                Haven’t seen it before – looks good – and it’s been ported to Rust/Go & Javascript… https://github.com/ocornut/imgui

                              2. 1

                                I think we are still far away from the ideal library the article mentions.

                                Electron (for all its flaws) gets fairly close. It’s not as descriptive, but HTML paired with a good CSS framework can come pretty close.

                                There’s nothing truly native that comes even close, though. I wonder how long it’ll take. Maybe with the advent of UIKit for macOS/iOS we’ll get another paradigm shift?

                              1. 1

                                The high count of compdef calls indicates that this run has no compinit cache. The lack of calls to compdump also indicates that it isn’t creating a dump file. This cache speeds up compinit startup massively. I’m not quite sure how the blog post author has contrived to not have a dump file.

                                1. 4

                                  “contrived” is a rather strong word to use here and suggests intent (perhaps to deceive). I’m not sure if it was your intent or not.

                                  1. 3

                                    This actually is with a dump file. A .zcompdump-MACHINENAME file is created in ~ (see here).

                                    The issue is that this is recreated each time the shell starts up. There are multiple places in OMZ that call compinit. In the additional reading at the bottom there is a link that modifies zsh to only recreate it once a day, but I still feel like that’s not ideal.

                                    1. 2

                                      Can you redefine compinit to a no-op during OMZ loading, and then do it yourself at the end?

                                      IMO zshrc should explicitly call compinit so it happens exactly once in a central location.
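
                                      Something like this, maybe (an untested sketch; the path assumes a default oh-my-zsh install):

                                      ```zsh
                                      # Neuter compinit while oh-my-zsh loads, then run the real one once.
                                      compinit() { :; }      # temporary no-op so OMZ's calls do nothing
                                      source ~/.oh-my-zsh/oh-my-zsh.sh
                                      unfunction compinit    # drop the stub
                                      autoload -Uz compinit
                                      compinit -C            # -C reuses the existing ~/.zcompdump instead of rebuilding it
                                      ```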

                                    2. 2

                                      Maybe he didn’t know?