1. 2

    If I understand correctly, this is meant for use by individual devs, each signing their own commits and tags. How is this better than each dev having their own signing key living on their own machine? I see a lot more “going rogue” potential in this system.

    1. 3

      I understood it a bit differently, this approach is trying to add the same guarantees (as developers signing their commits and tags) to the outputs of the CI.

      With it, the CI system can sign a commit or, more commonly, a release tag at the same time and using the same role and key material as the other deployable artefacts in the release.

      1. 1

        Yep sorry, ended up replying on top of you. This is what was meant in the post.

      2. 3

        Not quite, this is more useful for the CI build agents that are often next in the chain. So after your devs have signed individual commits and those commits are being rolled up into deployable artefacts by the CI system.

        The CI system can authenticate to Vault as its “CI system identity” to sign those artefacts to mark them as deployable after having passed whatever rigour is appropriate, and the subsequent deployment environments verify the signature upon admission.

        This tool is one way to do that when the “deployable artefacts” in question are actually just e.g. IaC files on a branch instead of binaries or tarballs. There are other ways to achieve this chain of custody, of course. One is simply tarballing up the branch, or copying its HEAD SHA into a file, and signing that. Another I’ve read about (but can’t remember where) is propagating the dev’s user identity into the CI stage if you’re on the AWS stack, using their SSO and Code* products.

        1. 2

          Ah, so if I understand correctly this would be more likely to be used for signed tags than commits? That makes sense, thanks!

      1. 0

        So they reimplemented PagerDuty? I.e., they built something which (I assume) is not their core business, i.e. what they sell to their customers. I wonder if no one suggested buying rather than building.

        1. 4

          Buying can (and often does) have a greater cost than building just the right amount of utilities for yourself. It’s always good to make at least a back-of-a-napkin approximation to check if buying is the right thing to do.

          1. 1

            OK but they had at least three full-time employees working on it. According to https://www.pagerduty.com/pricing/ the Pro plan is $19/mo per employee. Let’s say they have a hundred employees on it, that’s $23k/yr, which is cheaper than three full-time engineers, unless Silicon Valley ain’t what it used to be. I mean even if you multiplied by ten it’s almost certainly still cheaper than even a single FTE.

            1. 1

              Unless these employees were only on it as a minor weekly commitment.

        1. 22

          The fact that we even need to worry about sandboxing for looking at glorified text documents is embarrassing.

          1. 27

            Your PDF reader also ought to be sandboxed; malicious PDF documents have been used to hack people.

            Ideally, your ODT reader also ought to be sandboxed. There have been RCE bugs in LibreOffice where malicious documents could exploit people.

            Reading untrusted user input is hard. Hell, even just font parsing is fraught with issues; Windows’s in-kernel font parser was a frequent target of bad actors, so Microsoft sandboxed it.

            Sandboxing is almost always a good idea for any software which has to parse untrusted user input. This isn’t a symptom of “the web is too complex”; it’s a symptom of “the web is so widely used that it’s getting the security features which all software ought to have”.

            The web is also too complex, but even if it was just basic HTML and CSS, we would want browsers sandboxed.

            1. 2

              Maybe the parent comment should be rewritten as, “The fact that we even need to worry about sandboxing for using untrusted input is embarrassing.” A lot of these problems would be solved if memory-safe languages were more widely used. (Probably not all of them, but a lot.)

            2. 23

              We need to worry about sandboxing for any file format that requires parsing, if it comes from an untrusted source and the parser is not written in a type-safe language. In the past, there have been web browser vulnerabilities that were inherited from libpng and libjpeg and were exploitable even on early versions of Mosaic that extended HTML 1.0 with the <img> tag. These libraries were written with performance as their overriding concern: when the user opens an image they want to see it as fast as possible and on a 386 even an optimised JPEG decoder took a user-noticeable amount of time to decompress the image. They were then fed with untrusted data, and it turned out that a lot of the performance came from assuming well-formed files and broke down on anything else.

              The reference implementation of MPEG (which everyone shipped in the early ’90s) installed a SIGSEGV handler and detected invalid data by just dereferencing things and hoping that it would get a segfault for invalid data. This worked very well for catching random corruption but it was incredibly dangerous in the presence of an attacker maliciously crafting a file.

              1. 8

                when the user opens an image they want to see it as fast as possible and on a 386 even an optimised JPEG decoder took a user-noticeable amount of time to decompress the image.

                Flashback to when I was a student with a 386 with 2MB of RAM and no math co-processor. JPGs were painful until I found an application that used lookup-tables to speed things up.

                1. 6

                  The reference implementation of MPEG (which everyone shipped in the early ’90s) installed a SIGSEGV handler and detected invalid data by just dereferencing things and hoping that it would get a segfault for invalid data. This worked very well for catching random corruption but it was incredibly dangerous in the presence of an attacker maliciously crafting a file.

                  I think this tops the time you told me a C++ compiler iteratively read linker errors.

                  1. 2

                    We need to worry about sandboxing for any file format that requires parsing, if it comes from an untrusted source and the parser is not written in a type-safe language.

                    But the vulnerabilities in the web are well beyond this pretty low-level issue.

                    1. 4

                      Bad form to reply twice, but I realise my first reply was just assertion. Google Project Zero categorised critical vulnerabilities in Chrome and found that 70% of them were memory safety bugs. This ‘pretty low-level issue’ is still the root cause of the majority of security vulnerabilities in shipping software.

                      1. 3

                        They often aren’t. Most of them are still memory safety violations.

                    2. 21

                      The web is an application platform now. It wasn’t planned that way, and it’s suboptimal, but the fact that it is has to be addressed.

                      1. 17

                        Considering web browsers to be “glorified text documents” is reductive. We have long passed the point where this is the priority of the web, or the intention of most of its users. One cannot just ignore that a system that might have been designed for one thing 30 years ago now has to consider the implications of how it has changed over time.

                        1. 5

                          The fact that we even need to worry about sandboxing for looking at glorified text documents is embarrassing.

                          Web browsers are now a more complete operating system than emacs.

                          1. 4

                            The fact that sandboxing is still a typical strategy, as opposed to basic capability discipline, is embarrassing. The fact that most programs are not capability-safe, let alone memory-safe, is embarrassing. The fact that most participants aren’t actively working to improve the state of the ecosystem, but instead produce single-purpose single-use code in exploitable environments, is embarrassing.

                            To paraphrase cultural critic hbomberguy, these embarrassments are so serious that we refuse to look at them directly, because of the painful truths. So, instead, we reserve our scorn for cases like this one with Web browsers, where the embarrassment is externalized and scapegoats are identifiable. To be sarcastic: Is the problem in our systemic refusal to consider safety and security as fundamental pillars of software design? No, the problem is Google and Mozilla and their employees and practices and particulars! This is merely another way of lying to ourselves.

                            1. 4

                              It’s text documents from unknown/untrusted sources over a public network. If you knew exactly what the text document contained a priori you wouldn’t need to blindly fetch it over the internet, no?

                              To me the issue is we’ve relaxed our default trust (behaviorally speaking) to include broader arbitrary code execution via the internet…but the need for trust was always there even if it’s “just text.”

                            1. 5

                               git rebase is a great tool for fixing up whatever mess you made while developing. These days I don’t even think about commit messages until I’ve made sure my changes work, and then I do a git rebase session to make the commit history sensible to others.
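
                               A typical session of that workflow looks something like this (a sketch; the commit messages are made up):

                               git commit -m "wip"
                               git commit -m "fix typo"
                               git commit -m "actually passes tests now"
                               # once the change works, rewrite the messy commits before pushing:
                               git rebase -i HEAD~3   # mark later commits as "squash"/"fixup", then reword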

                              1. 1

                                 I long ago thought about a GUI tool visualizing the git history as a tree, where you’d be able to drag & drop commits around, up & down a branch and between branches, and also easily split commits. (I.e. git rebase -i, but more streamlined/actually interactive.) I didn’t try working on it, unfortunately – too many other unfinished projects, and I’m in a local maximum of being fluent enough doing this in CLI git.

                                1. 2

                                  Ungit might be a tool you want. Not quite that strong, but very nice to work with history in a visual way.

                              1. 4
                                1. Device crates expose hardware peripherals as distinct types
                                2. One does not simply compute with distinct types in Rust

                                The whole second half of the post reads to me like elaborate workarounds for a lack of OOP. The way I’d solve this problem would be to create an interface describing a peripheral, create an implementation for each actual peripheral type, and then code everything to that interface. (Ideally the device crates would already be using a common predefined interface, saving me the trouble.)

                                It’s interesting that Zig has an “inline for” statement that can handle heterogeneous types, but then you’ve “solved” the problem at compile time with an arbitrarily large unrolled loop. That doesn’t seem like a good way to do this, especially on a microcontroller.

                                (I know Rust has interfaces. But Zig has no OOP at all, last I looked, which is one reason I haven’t considered using it.)

                                1. 4

                                   If you want interfaces in Zig, you can make them. In this case a good starting point would be to create a tagged union over all the distinct types.

                                  Here’s a talk on interfaces in Zig https://www.youtube.com/watch?v=AHc4x1uXBQE

                                  1. 2

                                    Well yeah, you can implement interfaces in any language. Turing-completeness, amirite?

                                     It looks like Zig’s pattern for interfaces is a struct containing function pointers that take a pointer to the struct as the first argument, i.e. handmade vtables. I remember doing this in Pascal in the mid-80s. Haven’t we moved on from that in the last 30 years? I like a language that does a bit more work for me.

                                    In a nutshell my objection to the “nothing’s hidden, every function call is explicit” school of language design is that it kind of locks you into the set of abstractions provided by the language. Any abstractions you try to build have all their rivets and wiring exposed, and can’t be used cleanly.

                                  2. 2

                                     Ideally, if you care about size, the compiler should roll up the unrolled loops where it can, keeping the type safety intact. But I’m not sure if LLVM or any other compiler has this potential optimization.

                                  1. 3

                                     A more mathematical solution to FizzBuzz without ifs (though the dictionary lookup is arguable). Previous discussion

                                    1. 2

                                       That solution just pushes the ifs into the cond form:

                                      { 1: n, 6: "Fizz", 10: "Buzz", 0: "FizzBuzz" }
                                      

                                      There’s another if in this form:

                                      for n in range(100)
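
                                       For reference, the full trick is presumably something along these lines (a sketch, assuming the dict keys come from n**4 % 15, which is 1 for n coprime to 15, 6 for multiples of 3 only, 10 for multiples of 5 only, and 0 for multiples of 15):

                                       for n in range(1, 101):
                                           print({1: n, 6: "Fizz", 10: "Buzz", 0: "FizzBuzz"}[n**4 % 15])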
                                      
                                    1. 2

                                      I guess this is something like Debian’s posh?

                                      1. 5

                                         posh still has features beyond what POSIX requires that mrsh doesn’t have, like arrays.

                                      1. 5

                                         Note: the article has seen some quite significant changes clarifying what it meant since it was posted; I would recommend skimming it once again.

                                        1. 9

                                          Interesting take. It sounds like the author of Godot is very comfortable with OOP and it’s been working so far, so why stop doing something that is working?

                                          Writing many things with the ECS pattern can definitely feel more verbose than inheritance in my experience so far, but Rust doesn’t really make inheritance possible, so an ECS approach makes a lot more sense.

                                          It was a shame that the main thing the author pointed at was the generic “Node” base class at the center of Godot and didn’t really go into detail on how you would solve problems in both Godot and in ECS.

                                          For example, if a soldier enters the enemy base for more than 10 seconds they capture it. In ECS, I would create a system for checking if a soldier is in a base “SoldierInBase(Base)”, then I would add a system for checking whether the soldier and base are on opposite sides with a start time “SoldierInEnemyBaseSince(Time)” then a final system for SoldierCapturedBase that would look for any “SoldierInEnemyBaseSince(Time)” for over 10 seconds.
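
                                           As a rough sketch of that shape (toy Python with the hypothetical component and system names from above; a real ECS library stores components far more efficiently):

                                           from dataclasses import dataclass

                                           @dataclass
                                           class SoldierInEnemyBaseSince:
                                               time: float  # when the soldier entered the enemy base

                                           class World:
                                               # toy component store: one {entity_id: component} dict per component type
                                               def __init__(self):
                                                   self.stores = {}

                                               def add(self, entity, component):
                                                   self.stores.setdefault(type(component), {})[entity] = component

                                               def remove(self, entity, ctype):
                                                   self.stores.get(ctype, {}).pop(entity, None)

                                               def query(self, ctype):
                                                   return list(self.stores.get(ctype, {}).items())

                                           def soldier_captured_base_system(world, now):
                                               # runs every tick over matching components; no inheritance involved
                                               for entity, since in world.query(SoldierInEnemyBaseSince):
                                                   if now - since.time >= 10.0:
                                                       print(f"soldier {entity} captured the base")
                                                       world.remove(entity, SoldierInEnemyBaseSince)

                                           w = World()
                                           w.add(1, SoldierInEnemyBaseSince(time=0.0))
                                           soldier_captured_base_system(w, now=12.0)  # soldier 1 captured the base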

                                          What would any of that have to do with inheriting from the Node class?

                                          1. 5

                                            Writing many things with the ECS pattern can definitely feel more verbose than inheritance in my experience so far, but Rust doesn’t really make inheritance possible, so an ECS approach makes a lot more sense.

                                             Interestingly, one of the most popular language bindings for Godot is the Rust one, which makes inheritance work with a bunch of macro magic.

                                             The example you gave shows quite well how Godot makes it easy to use both composition and inheritance. In Godot, I would make a Node Capturable that inherits from a Timer, with on_area_enter and on_area_exit methods which get connected to the corresponding signals of the Area that covers the capture zone. The methods would check if whatever entered the base is eligible to cap it (via classes or methods, doesn’t really matter) and if so would start the Timer. It would then emit a captured() signal when capture is complete. Some of the benefits this gives are easy extensibility, e.g. if I wanted the capture speed to change with the number of soldiers, or to block the capture while a friendly soldier is in the area, I could do that with a few lines of code. To use it, you would add the Capturable node to the scene tree of a base and connect the area accordingly.

                                            What would any of that have to do with inheriting from the Node class?

                                            Node class in Godot allows inclusion into the scene tree, and is the main way composition is done. Node actually inherits from Object, which is the “real base class” that everything inherits from, but it is less often used. Every Node object in the scene tree gets processing calls every render/physics frame, handles pausing, etc. which allows you to quickly add behavior to your game without worrying about interfering with pausing or destruction, etc. as that is handled by other systems.

                                            1. 5

                                               The difference comes into play when you’re inheriting, extending or overriding behavior. Your example in OOP would involve, for example, inheriting BaseCapper. So far, no big difference. But what if you want to customize that behavior? In OOP, you might for example override canCaptureBase(), which decides if a soldier is allowed to capture at a given moment. In ECS, you’d have to have a tag component CanCaptureBase that is added or removed by the system that handles this condition.

                                               It doesn’t seem too different, but in OOP it’s much easier to see what’s being overridden where: I can take a look at BaseCapper.class to see exactly what inherits it and trace back its different uses. Under ECS, there’s no easy way to see what would happen with a given entity, since components can be added or removed at runtime by arbitrary systems. In your example, there’s no easy way for me to know that an entity created in one system would eventually be the target of SoldierCapturedBase, since it might have to be processed by System A to gain CanCaptureBase, then System B to gain SoldierInBase, etc.

                                              Simpler example: It’s a lot easier to override an “onPlayerHit()” method than PlayerHitSystem.

                                              I’m generally not a fan of OOP and have spent years evangelizing against it. I’ve been doing a lot of ECS work and lately I’m coming to appreciate the points raised in the article. Namely, if you don’t need the performance benefits, OOP might be easier to work with. Despite OOP’s legendary verbosity, it can be a lot more concise than ECS (where you might have to declare several components and systems, and constantly repeat iteration boilerplate).

                                              1. 3

                                                 Your example was very interesting! Is there a place where I could read more about example ECS systems and best practices in general?

                                                1. 2

                                                  If you don’t mind a focus on Rust and have a bit of time, I really liked this talk draft, which approaches ECS from a data-oriented programming direction.

                                                  Lots of glorious nitty-gritty details in this one.

                                              1. 1

                                                 I’ve been working with AWS CDK for the last few months, and oh, god, it’s so much better than plain CloudFormation templates. I cannot even imagine going back to writing YAML, because just using code for such configuration works so much better.
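
                                                 For anyone who hasn’t seen it, a small sketch of what that looks like (v1-style Python CDK; the stack and bucket names are made up):

                                                 from aws_cdk import core
                                                 from aws_cdk import aws_s3 as s3

                                                 class StorageStack(core.Stack):
                                                     def __init__(self, scope, id, **kwargs):
                                                         super().__init__(scope, id, **kwargs)
                                                         # a plain loop instead of copy-pasted YAML resources
                                                         for name in ("logs", "uploads"):
                                                             s3.Bucket(self, f"{name}-bucket", versioned=True)

                                                 app = core.App()
                                                 StorageStack(app, "storage")
                                                 app.synth()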

                                                1. 9

                                                  I don’t really like Go, and have only done like …. 5 pages of it ever. But I feel like that goroutine example is pretty convincing that Go is easy? The amount of futzing about I have to do in Python to get a similar-working pipeline would lead to maybe 3x as much code for it to be that clean.

                                                   I dunno, Go reintroduces C’s abstraction ceiling problem, meaning it’s hard for someone to show up and offer nicer wrappers for common patterns. But if you’re operating at the “message passing” and task spinup level, you’re gonna have to do flow control, and that code looks very nice (and local, which is a bit of a godsend in this kind of stuff).

                                                  Though I feel you for the list examples. Rust (which, granted, has more obstacles to doing this) also involves a lot of messing around when trying to do pretty simple things. At least it’s not C++ iterators I guess

                                                  1. 18
                                                    import asyncio
                                                    
                                                    async def do_work(semaphore, id):
                                                        async with semaphore:
                                                            # do work
                                                            await asyncio.sleep(1)
                                                            print(id)
                                                    
                                                    async def run():
                                                        semaphore = asyncio.Semaphore(3)
                                                        jobs = []
                                                        for x in range(20):
                                                            jobs.append(do_work(semaphore, x))
                                                        await asyncio.gather(*jobs)
                                                        print("done")
                                                    
                                                    asyncio.run(run())
                                                    

                                                    About the same length in lines, but IMO quite a bit easier to write.

                                                    1. 4

                                                       That is a good example, but you can’t really compare asyncio and goroutines. The latter are more like “mini threads” and don’t need to inherit all the “ceremony” that asyncio needs to prevent locking up the IO loop.

                                                      1. 12

                                                         Goroutines are arguably worse IMO. They can run both on the same thread and on a different one, which makes you think about the implications of both. But here’s a slightly wordier solution with regular Python threads, which may be more comparable:

                                                        import threading
                                                        import time
                                                        
                                                        def do_work(semaphore, id):
                                                            with semaphore:
                                                                # do work
                                                                time.sleep(1)
                                                                print(id)
                                                        
                                                        def run():
                                                            semaphore = threading.Semaphore(3)
                                                            threads = []
                                                            for x in range(20):
                                                                thread = threading.Thread(target=do_work, args=(semaphore, x))
                                                                thread.start()
                                                                threads.append(thread)
                                                            for thread in threads:
                                                                thread.join()
                                                            print("done")
                                                        
                                                        run()
                                                        

                                                        As you can see, not much has changed in the semantics, just the wording changed and I had to manually join the threads due to a lack of a helper in the standard library. I could probably easily modify this to work with the multiprocessing library as well, but I’m not gonna bother.

                                                        Edit: I did bother. It was way too easy.

                                                        import multiprocessing as mp
                                                        import time
                                                        
                                                        def do_work(semaphore, id):
                                                            with semaphore:
                                                                # do work
                                                                time.sleep(1)
                                                                print(id)
                                                        
                                                        def run():
                                                            semaphore = mp.Semaphore(3)
                                                            processes = []
                                                            for x in range(20):
                                                                process = mp.Process(target=do_work, args=(semaphore, x))
                                                                process.start()
                                                                processes.append(process)
                                                            for process in processes:
                                                                process.join()
                                                            print("done")
                                                        
                                                        if __name__ == "__main__":  # guard needed on platforms that spawn rather than fork
                                                            run()
                                                        
                                                        1. 10

                                                           Goroutines aren’t just good for async I/O. They also work well for parallelism.

                                                           Python’s multiprocessing module only works well for parallelism in basic cases. I’ve written a lot of Python and a lot of Go. When it comes to writing parallel programs, Go and Python are in different categories.

                                                          1. 4

                                                             It’s best to decide which one you actually want. If you try to reap the benefits of both event loops and thread parallelism, you’ll have to deal with the disadvantages of both. Generally, you should be able to reason about this and separate those concerns into separate tasks. Python has decent support for that, with asyncio supporting running functions in threadpool or processpool executors.
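
                                                             For example, deferring a CPU-bound function to a process pool from inside asyncio looks roughly like this (a sketch; cpu_heavy is a stand-in):

                                                             import asyncio
                                                             from concurrent.futures import ProcessPoolExecutor

                                                             def cpu_heavy(n):
                                                                 # pure CPU work; runs in a worker process, off the event loop
                                                                 return sum(i * i for i in range(n))

                                                             async def main():
                                                                 loop = asyncio.get_running_loop()
                                                                 with ProcessPoolExecutor() as pool:
                                                                     results = await asyncio.gather(
                                                                         *(loop.run_in_executor(pool, cpu_heavy, n) for n in (10**6, 2 * 10**6))
                                                                     )
                                                                 print(results)

                                                             if __name__ == "__main__":  # guard needed because of the process pool
                                                                 asyncio.run(main())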

                                                             I do agree though that Python isn’t the best at parallelism, because it carries quite a lot of historical baggage. When its threading was being designed in 1998, computers with multiple CPUs were rare, and the first multi-core CPU was still 3 years away[1], with consumer multi-core CPUs arriving 7 years later. The intention was to allow multiple tasks to run seemingly concurrently on a single CPU and to speed up IO operations. At the time, the common opinion was that most things would continue to have only a single core, so the threading module was designed appropriately for the language, with a GIL, giving it the safety of never corrupting memory. Sadly, things didn’t turn out the way they were initially expected to, and now we have a GIL problem on our hands that is very difficult to solve. It’s not unlike errno in C, which now requires macro hacks to work correctly between threads. Just that the GIL touches things that are a bit harder to hack over.

                                                            1. 7

                                                              I’m aware of the history. My point is that the Python code you’ve presented is not a great comparison point because it’s comparing apples and oranges in a substantial way. In the Go program, “do work” might be a CPU-bound task that utilizes shared mutable memory and synchronizes with other goroutines. If you try that in Python, it’s likely you’re going to have a bad time.

                                                              1. 3

                                                                 The example with multiprocessing module works just fine for CPU tasks. asyncio works great for synchronization and sharing memory. You just mix and match depending on your problem. It is quite easy to defer CPU-heavy or blocking IO tasks to an appropriate executor with asyncio. It forces you to better separate your code. And in this way, you only need to deal with one type of concurrency at a time. Goroutines mash them together, leaving you to deal with thread problems where coroutines would have worked just fine, and coroutine problems where threads would have worked just fine. In Go you only have a flathead screwdriver for everything between nails and crosshead screws. It surely works, sometimes even well. But you have to deal with the warts of trying to do everything with one tool. On the other hand, Python tries to give you a tool for most situations.

                                                                1. 6

                                                                  The example with multiprocessing module works just fine for CPU tasks.

                                                                  But not when you want to add synchronization on shared mutable memory. That’s my only point. You keep trying to suck me into some larger abstract conversation about flat-head screwdrivers, but that’s not my point. My point is that your example comparison is a bad one.

                                                                  1. 3

                                                                     Give me an example of a task of that nature that cannot be solved using multiprocessing and asyncio and I’ll show you how to solve it. You shouldn’t try to use a single tool for everything – every job has its tools, and you might need more than one to do it well.

                                                                    1. 4

                                                                      I did. Parallelism with synchronized shared writable memory is specifically problematic for multiprocessing. If you now also need to combine it with asyncio, then the simplicity of your code goes away. But Go code remains simple.

                                                                       You shouldn’t try to use a single tool for everything

                                                                      If you think I need to hear this, then I think this conversation is probably over.

                                                                      1. 2

                                                                        Parallelism with synchronized shared writable memory

                                                                        You describe a class of problems. But I cannot solve a class of problems without knowing at least one concrete problem from the class. And I do not.

                                                                        1. 3

                                                                          Here’s an example of something I was trying to do yesterday:

                                                                          I wanted to use multiprocessing to have multiple workers pull (CPU-bound) tasks off a (shared) priority queue, process each task in a way that generates zero or more new tasks (with priorities) and put them back on the queue.

                                                                          multiprocessing.Manager has a shared Queue class, but not a shared priority queue, and I couldn’t figure out a way to make it work, and eventually I gave up. (I tried using heapq with a shared multiprocessing.list and that didn’t work.)

                                                                          If you can tell me how to solve this, I would actually be pretty grateful.

                                                                          1. 1

                                                                             I gave it a bit of time today; here’s the result. It works decently well, as long as you don’t do CPU-expensive stuff (like printing big numbers) in the main process and your jobs aren’t very short.
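
                                                                             One way that works (sketched from memory, not necessarily identical to the linked code): register queue.PriorityQueue with a custom manager, so all workers share one heap through proxies. The timeout-based drain detection below is crude:

                                                                             import multiprocessing as mp
                                                                             import queue
                                                                             from multiprocessing.managers import SyncManager

                                                                             class QueueManager(SyncManager):
                                                                                 pass

                                                                             # expose queue.PriorityQueue from a manager process
                                                                             QueueManager.register("PriorityQueue", queue.PriorityQueue)

                                                                             def worker(pq):
                                                                                 while True:
                                                                                     try:
                                                                                         priority, n = pq.get(timeout=1)  # treat a 1s drought as "done"
                                                                                     except queue.Empty:
                                                                                         return
                                                                                     # "process" the task; push a follow-up task until n reaches zero
                                                                                     if n > 0:
                                                                                         pq.put((priority + 1, n - 1))

                                                                             if __name__ == "__main__":
                                                                                 manager = QueueManager()
                                                                                 manager.start()
                                                                                 pq = manager.PriorityQueue()
                                                                                 for n in range(8):
                                                                                     pq.put((0, n))
                                                                                 workers = [mp.Process(target=worker, args=(pq,)) for _ in range(4)]
                                                                                 for w in workers:
                                                                                     w.start()
                                                                                 for w in workers:
                                                                                     w.join()
                                                                                 print("queue drained")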

                                                  1. 3

                                                     I once swapped True and False in Python. It works worse than you might think, sadly, due to the True and False instances being stored in static locations and much of CPython checking truthiness by location.

                                                    1. 2

                                                       Not to sound snarky, but if you need a high-performance text editor, your data probably shouldn’t be stored as text. Making an interface for non-textual data that performs decently can be easier and can bring UI improvements.

                                                      1. 7

                                                         To give a counterpoint, having a snappy editor is very important to me. This is why I absolutely hated Eclipse back when it was big (it seems to have been toppled by IDEA, which feels faster), and why I found Atom to be utterly unusable (likewise toppled by VS Code, which also feels faster).

                                                         I am actually thinking a lot about latency these days, and I am sad that despite having so much more powerful computers, in many ways they feel slower than 1 MHz devices did back in the day.

                                                        1. 2

                                                           I wasn’t advocating for slow here. Being responsive for most files is enough. I personally use VS Code for my work, because it is responsive while not sacrificing functionality. And I think that trying to make tools more responsive when dealing with larger files or larger structures is a bit backwards. Even with code – Smalltalk and some Lisp machines employ custom storage formats (impossible to miss closing brackets in Lisp!) that allow for more efficient code browsing and writing.

                                                          1. 4

                                                             To be honest, when speaking about performance I am talking about the 95% use case, which is reasonably sized source code files.

                                                            Some users may not notice a difference in the responsiveness between editors, but then there are plenty of us who do.

                                                            1. 1

                                                              That said, Sublime Text is also very performant while editing extremely large files, which can come in handy.

                                                        2. 1

                                                          Some people have to deal with very large codebases.

                                                          I absolutely think structural editors (for code) are the right way to go, but they don’t really exist yet. Plus, large codebases tend to be legacy that have to keep compatibility with older OSes or language runtimes.

                                                          1. 1

                                                             Oh, structural editors do exist; it’s just that popular languages aren’t designed for them. Smalltalk is based on that principle and it works great there, but sadly it doesn’t have the traction.

                                                        1. 2

                                                           Display cables are the highest-bandwidth cables in everyday use, so it’s not much of a surprise that they are being (mis)used by networking equipment for their availability and bandwidth.

                                                          1. 4

                                                             I take issue with the author’s recommendation of “Handling if-else logic”.

                                                             The synopsis: for functions with a significant amount of branching logic, the logic should be abstracted into a dict containing functions. I believe the reason the author recommends this is twofold:

                                                             1. Python lacks a proper case/switch statement, instead deferring to if...elif..., and
                                                             2. maximum branches should generally be < 12.

                                                            However, this code is not, in my opinion, Pythonic. A dict containing function pointers is more akin to Javascript than Python, wherein you’re effectively executing arbitrary code as the result of a lookup. Further, I think this approach significantly reduces the readability of the codebase. You’re now deferring logic to a multitude of other functions rather than encapsulating the unit of work inside the caller.

                                                            I agree that Python’s lack of a switch statement is less than ideal but I ardently disagree that this “trick” is something to recommend others use.
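
                                                             For concreteness, the pattern in question looks something like this (handler names invented for illustration):

                                                             def handle_create(payload):
                                                                 ...

                                                             def handle_delete(payload):
                                                                 ...

                                                             HANDLERS = {
                                                                 "create": handle_create,
                                                                 "delete": handle_delete,
                                                             }

                                                             def dispatch(action, payload):
                                                                 # the lookup replaces an if action == "create": ... elif ... chain
                                                                 try:
                                                                     handler = HANDLERS[action]
                                                                 except KeyError:
                                                                     raise ValueError(f"unknown action: {action}")
                                                                 return handler(payload)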

                                                            1. 5

                                                               The dict approach is something that should only take place if there are more than ~10 options IMO. Sometimes it truly is the option to take, as it allows easy extensibility and can separate the code into more logical parts. But reaching 10 cases is sometimes a pointer that something else is wrong in the system, since most of the time there shouldn’t be so many different cases handled by a single piece of code. But sometimes it just has to be like that (in a parser, for example) and that’s where you can use dictionaries to clean things up if you need to. The upcoming structural matching should make that choice obsolete, though.
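
                                                               For comparison, the same dispatch written with structural pattern matching (PEP 634, slated for Python 3.10), reusing the invented handlers from the comment above:

                                                               def dispatch(action, payload):
                                                                   match action:
                                                                       case "create":
                                                                           return handle_create(payload)
                                                                       case "delete":
                                                                           return handle_delete(payload)
                                                                       case _:
                                                                           raise ValueError(f"unknown action: {action}")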

                                                              1. 1

                                                                Agreed on all counts.

                                                            1. 2

                                                              You can, of course, do everything with lambdas in Python, you just need to have enough of them

                                                              1. 1

                                                                 Notably, you wouldn’t even get the performance improvement from using is that’s suggested in the last paragraph, since CPython checks whether the compared objects are the same object for most built-in objects (I only checked str and int, but I’m quite sure it’s the same for all comparable builtins, maybe except float).

                                                                1. 1

                                                                   Notably, even in security contexts plain PRNGs can be useful. For example, I’m working on a cryptography project in Python, and I’m using the default random() there for getting randomness from seeded values where it isn’t used for security purposes.

                                                                  1. 1

                                                                    Even then, you may wish to use a seeded CSPRNG (I don’t know if that’s actually the right term, but I think you know what I mean). I have found that the quality (by which I mean a lack of patterns in the output) of the insecure random can be remarkably bad.

                                                                    Several years ago a colleague was using Go’s math/rand to generate some testing data, and he noticed some really strange patterns in it. Swapped in crypto/rand and the problem went completely away.

                                                                    That was great, but of course not seedable, which can be important for repeatability or deterministic builds. But a simple high-quality seedable generator is easily built from a block cypher in counter mode, which can sometimes be exactly what is needed.
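
                                                                       A minimal sketch of that construction, substituting SHA-256 in counter mode for the block cipher just to stay inside Python’s standard library (the shape is the same):

                                                                       import hashlib

                                                                       class SeededStream:
                                                                           # deterministic byte stream: hash(seed || counter), counter incrementing
                                                                           def __init__(self, seed: bytes):
                                                                               self.seed = seed
                                                                               self.counter = 0

                                                                           def next_block(self) -> bytes:
                                                                               block = hashlib.sha256(
                                                                                   self.seed + self.counter.to_bytes(8, "big")
                                                                               ).digest()
                                                                               self.counter += 1
                                                                               return block

                                                                       stream = SeededStream(b"fixed seed for repeatable tests")
                                                                       print(stream.next_block().hex())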

                                                                    1. 1

                                                                         Of course, the size of the PRNG state and its quality are very important depending on the application. Considering that I will only use ~1000 bytes at most of data from a single seed (which comes from a secure random source) and that Python’s random implementation uses a Mersenne Twister with a period of 2**19937-1, I think my use case doesn’t have to worry about most of the pitfalls of a PRNG. On the other hand, Go’s math/rand generator seems to have a bunch of problems.

                                                                  1. 23

                                                                    I agree with Ben; I believe that massively complex HA-clustered systems can easily be less reliable and require a hell of a lot more devops work than an application running in this single node architecture.

                                                                    90% of apps out there can put up with 15 minutes of downtime if you have an outage. Build it into your terms of service: “system can go down for up to 15 minutes during $MAINTENANCE_WINDOW”. Now you can also run database migrations without needing to implement some insane “zero downtime” policy that requires 10x as much engineering effort.

                                                                    1. 5

                                                                      user expectations continue to rise though. This isn’t the 90s Internet anymore. The big services have damn near zero downtime, so that’s just expected now.

                                                                      1. 13

                                                                        Really? Anecdotal, but Nintendo’s online store went down for over 7 hours last week for scheduled maintenance. Depending on the service (e.g., SaaS vs. storefront), scheduled downtime can be completely fine.

                                                                        1. 1

                                                                           Nintendo recently revealed that it has been using two-decades-old multiplayer tech, so the 7-hour downtime is very thematic for a system that old.

                                                                          1. 2

                                                                            But people were ok with it.

                                                                        2. 7

                                                                          so that’s just expected now.

                                                                          That’s not the relevant question, though. The question is: Will it affect your business? And more to the point, how do the following two things compare:

                                                                           1. Your business with 0-downtime, and all the engineering and complexity costs that come with that, including their impact on bugs and on velocity of feature development.
                                                                          2. Your business with X-minutes downtime, but with a better product with more features, and the ability to move faster.

                                                                           For most types of businesses, I don’t think it’s even a question. Sure, you have “old-fashioned” uptime stats. But does anyone care?

                                                                          1. 4

                                                                            If I’m not mistaken my bank goes down every night. Certainly every late night attempt I’ve made it’s been down.

                                                                            1. 1

                                                                               Right, but with the throughput and resources available to large organizations, one would expect their downtime windows to be shorter than 15 minutes, because they can invest a lot of resources in making them smaller.

                                                                               On the other hand, building systems on the idea that dependencies will never go down sounds positively foolish, as does trying to hit 100% uptime with no leeway. And indeed, we now have an incredibly complex and fragile stack intended for 100% uptime – one that is much more error prone and more costly in terms of manhours spent fixing, documenting, learning, and building on it than just allowing a monthly or bi-monthly 15-minute downtime would be.

                                                                            2. 1

                                                                              Well then you can just create multiple clusters for redundancy ….

                                                                              https://news.ycombinator.com/item?id=26111314 (this quote appears to be real :-( :-( )

                                                                              1. 2

                                                                                Not sure if this is meant as irony or not. Having multiple instances of an app is a very real HA technique. And honestly, if you can isolate your services on a single box, it’s as simple as just starting multiple copies of the service.

                                                                                1. 1

                                                                                  Sure, having multiple instances of an app is conventional. Having multiple instances of a cluster manager is a different story. I’d say you need an ops team of at least 10 full-time people (and maybe 50 or 100 people) to justify that.

                                                                                     And justifying it with “well, the entire cluster sometimes goes down when we upgrade” is a little ridiculous. The apps should be able to survive when components of the cluster go down.

                                                                                  1. 1

                                                                                    You can just use a managed solution.

                                                                            1. 21

                                                                              I started coding when I was 14, around 2007 or so. [..] Fast-forward 3 years I got my first few gigs as a web developer. By then I was pretty good at HTML and CSS already, had dabbled enough with PHP to know my way around of most sticky problems I would find myself in and while I didn’t really know much of vanilla JavaScript, it was okay, because everyone used almost exclusively jQuery anyway.

                                                                               Is there any other industry where a 17-year-old who “dabbled enough” can land a job? I can’t shake the feeling that this is the real problem with a significant part of the industry: 18-year-olds who write much of the code, guided by 22-year-old “senior developers”, all led by a 24-year-old CTO.

                                                                               I don’t think there’s anything wrong with React or NPM or most other frontend things, but you’ve got to know when to apply them and when not to. Youthful enthusiasm and hubris lead to the “JS framework of the week” syndrome. Experience is not a substitute for talent, but talent is also not a substitute for experience.

                                                                              1. 7

                                                                                I think that an industry without credentialism is a great thing. There is generally too much credentialism in the world. Not having credentials does not preclude learning and mastery.

                                                                                1. 6

                                                                                   I don’t think that “let’s not rely too much on credentials” and “let’s not have teenagers and people in their early twenties run the company” are incompatible views.

                                                                                  1. 4

                                                                                    An even bigger problem is that having credentials doesn’t automatically imply mastery either!

                                                                                  2. 3

                                                                                     I don’t think age is the problem. It is more a lack of experience and a lack of education. Aren’t we always hearing about people who only attended a coding bootcamp and already found a job? How good is their code quality?

                                                                                    1. 9

                                                                                       I don’t think age is the problem. It is more a lack of experience and a lack of education.

                                                                                      But those are strongly correlated, no?

                                                                                      I do suspect that age in itself does play a part; certainly if I look at myself I am now, at the age of 35, not the same person that I was when I was 17 or 25. I am more patient, more aware of risks, more humble, and less likely to be swept along in a wave of enthusiasm. Generally I think I’m more thoughtful about things (although I’m hardly perfect, and in 20 years when I’m 55 I’ll be able to list aspects where I’ve improved over 35-year old me – at least, I hope I will).

                                                                                      I’ve worked with a few older (30s) people who did a bootcamp and they were generally okay; their code wasn’t always perfect, but that’s okay as long as they keep learning and developing.

                                                                                       I think guidance is key here; there is nothing at all wrong with a 17-year-old or a bootcamper being employed as a programmer as such, provided they are guided by more experienced programmers. It’s this part that is often missing.

                                                                                      1. 15

                                                                                         I agree, but you are conflating two things. I started programming when I was 7, so by the time I was 18 I had 11 years of experience. There are a lot of fields where that amount of experience, even as an amateur, will get you a job. 18-year-old me was painfully immature, however, and anyone who would have offered him a job really needs to re-examine their hiring process.

                                                                                        On the technical side, part of the problem with 18-year-old me was the problem with any autodidact. There were massive gaps in my knowledge where I didn’t realise the knowledge even existed. A few years at a university that focused on theory and hanging out with systems programmers on the side helped a lot there: the main value of a university education is that it gives you a guided tour of your ignorance. It doesn’t fix your ignorance but it does give you the tools to fix the bits that you discover you need to fix and it does show you what you could learn.

                                                                                         I think that’s a big part of the problem with the industry. We have a huge number of people who have absolutely no idea of the scope of their ignorance. The biggest difference between 18-year-old me and me now is that I am no longer surprised to find there’s a big bit of computer science that I don’t know anything about. Until a few years ago, I didn’t know that effects systems or flow-sensitive type systems existed. Now that I do, I’ve learned that a bunch of properties I’d been trying to enforce dynamically can be statically verified.

                                                                                        1. 5

                                                                                          …the main value of a university education is that it gives you a guided tour of your ignorance.

                                                                                          I’m going to steal the shit out of this quote. You nailed it. I learned to program when I was 10. Much later (after college, actually) when I decided to wanted to do this for a living I actually went back to school because I realized (through conversations with friends) that there were a ton of things I didn’t even know I didn’t know. Fast-forward a couple years and I was a much, much more effective software developer.

                                                                                          1. 4

                                                                                            I think the crux of the matter is that “coding” and “developing a product” are not the same things (I use “product” in the broadest sense; it can also be something like Vim or Lobsters). Coding requires pure technical chops and talent, whereas developing a product requires much more, and many of those skills can’t easily be taught in a course.

                                                                                            A teenager can find a clever exploit in some application: this requires just talent. But actually writing an application and maintaining it for 20 years takes much more than just an understanding of the technical parts.

                                                                                            Going back to this story, this seems roughly the problem in JS (or at least one of them): there are many very smart and talented people involved, many of whom are undoubtedly smarter than I am, but a sizeable number of them also don’t seem to have the “product developing skills” that are required to build an ecosystem that’s not the pain that it is today.

                                                                                      2. 2

                                                                                         I know a few designers that started very early, but it’s not too common. I have a friend who started beekeeping at the age of 16. Most jobs where you can start early are usually not very knowledge-heavy and are based more on the ability to do the work. Now the question is: is programming knowledge-heavy, and should it be?

                                                                                        1. 1

                                                                                          Programming probably not, but development yes, I think.