1. 13
    1. 3

      JavaScript is single-threaded, so the locks have to be for external resources, not just threading. At that point, is this really an issue? I will grant that the fundamental flaw with async/await is that it is cooperative multitasking, and cooperative multitasking across processes was abandoned for very good reasons that also apply within a process. But if your single process has tasks stalling and dying, you already have problems that need to be fixed.

      Like I understand wanting the language to make more guarantees for you, but solving the dining philosophers problem at the language level isn’t a reasonable ask.

      1. 4

        The core issue being addressed here is how to properly “clean up resources,” and it focuses squarely on promises and, by extension, async/await in JS.

        There’s no great way to cancel a promise, and since every async operation in JS relies on promises, we are in a pretty sticky situation. This is a major reason why relying on async/await is fraught with issues: library and end developers do not have the tools to clean up resources in a reliable manner. There’s no way to have structured concurrency using async/await because we cannot make the necessary guarantees.

        For many use cases, this is not an issue. However, when it does become an issue, the modern thinking is to abandon async/await entirely. At the end of the day, it’s all about flow control, and when using async/await you don’t have much control.

        1. 4

          Being single-threaded doesn’t preclude needing locks as soon as you have async callbacks. “Async locks” are a common need in JS: you have multiple logical threads of execution, and you may need exclusive access to a resource across async pauses.

          So while you can never have “data races” on a particular variable, you can have corruption in complex data structures if you “suspend” during a multi-operation mutation.
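
          Here is a minimal sketch of that failure mode (the names and numbers are made up for illustration; auditLog() stands in for any async call):

              // A two-step mutation with an await in the middle.
              const auditLog = () => new Promise((resolve) => setTimeout(resolve, 10));

              const accounts = { a: 100, b: 0 };

              async function transfer(amount) {
                accounts.a -= amount; // step 1 of the mutation
                await auditLog();     // suspension point: other tasks run here
                accounts.b += amount; // step 2; the invariant (a + b === 100) is broken until now
              }

              // Two concurrent transfers interleave at the await, so anything that
              // reads the structure in between sees the broken invariant.
              transfer(30);
              transfer(30);
              console.log(accounts.a + accounts.b); // 40, not 100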

          1. 1

            Sure, but isn’t that just object-oriented design? Don’t let external functions screw up your internals during an operation. The “lock” in that case is just this.isLoading = true; and if (this.isLoading) return;.
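
            Roughly like this (the class name and URL are purely illustrative):

                class Loader {
                  constructor() {
                    this.isLoading = false;
                  }

                  async load() {
                    if (this.isLoading) return; // "lock" already held: bail out
                    this.isLoading = true;      // take the "lock"
                    try {
                      await fetch('/data');     // the suspension point is now guarded
                    } finally {
                      this.isLoading = false;   // always release, even on error
                    }
                  }
                }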

            1. 3

              That is one solution to the problem. That is a perfectly valid lock in JavaScript and does have uses. You can also make locks that aren’t object-oriented if you prefer. The point is that locks do exist in JS.
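
              For instance, here is a sketch of a non-object-oriented async lock built from a promise chain (names are illustrative, not from any particular library):

                  // Each caller waits for the previous holder's promise to settle
                  // before running its own critical section.
                  function createLock() {
                    let tail = Promise.resolve();
                    return function withLock(criticalSection) {
                      const result = tail.then(() => criticalSection());
                      tail = result.catch(() => {}); // keep the chain alive on errors
                      return result;
                    };
                  }

                  // Usage: mutations are serialized even though each one awaits internally.
                  const withLock = createLock();
                  withLock(async () => { /* multi-step mutation, safe across awaits */ });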

        2. 2

          I am not familiar with locks, but isn’t all used memory returned to the OS when a program is closed?

          1. 3

            Yep — I think what the article is focusing on is the prospect of a clean exit, where the finally {} blocks get to run. If the promises leak, that doesn’t happen.
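
            For example (acquireLock() here is just a stub standing in for the article’s lock, and work() is any promise that never settles):

                const acquireLock = async () => ({ release: () => console.log('released') });
                const work = () => new Promise(() => {}); // never resolves or rejects

                async function main() {
                  const lock = await acquireLock();
                  try {
                    await work();   // suspended here forever
                  } finally {
                    lock.release(); // never runs, so the cleanup is skipped
                  }
                }

                main();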

            1. 2

              I see. It’s not that “acquireLock” is special; it’s just a stand-in for getting an external resource (a database connection, for example), and that’s the leak. Thanks!

              1. 4

                For a good example of how this affects a real application, consider a simple connection-pooling system for, say, a database. At application startup, you create a limited number of connections to your database; we’ll say 4 in this case.

                The happy path goes like this:

                • Concurrent application needs database access to service a user request.
                • Application “checks out” a dedicated connection to make a query.
                • Application uses the connection to make any queries it wants.
                • Application checks the connection back in to the pool.

                If concurrency in our application is otherwise unbounded, then there can be contention for a limited number of connections. Pooling systems can have burst capacity to deal with this, but eventually there can always be more demand than available connections. At this point, those requests just have to wait for a connection to be available.

                Like a resource protected by a mutex or semaphore, these shared connections can’t be used until they are released. Depending on your programming language, scopes (either implicit with RAII, or explicit like Python’s context managers) may be used to return connections to the pool, or you may have to release the connections explicitly. Either way, you are dependent on the control flow to release the connection back to the pool.

                The issue with async is that the potential exists for the control flow to never return to the yield point. All async systems have some sort of task (or equivalent concept) which isn’t rescheduled until it is ready, which creates the potential for the task to never be ready again. Since the runtime can’t know whether the task will ever be schedulable, it has to assume the task is always live.

                In the case of our connection pool, this means that if you hold the checked-out connection over any yield point, the potential exists for the control flow to be transferred to a task that never gets rescheduled for the remaining lifetime of the application. This gives you an issue where the pool slowly shrinks until any operations on the pool effectively deadlock, because they are waiting for connections that never become available. This is also difficult to avoid: typically the queries you make with the connection are themselves async ops which could also run forever.
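
                As a compressed sketch of that failure (the pool API and names here are invented for illustration, not a real library):

                    const pool = createPool({ size: 4 }); // hypothetical pool of 4 connections

                    async function handleRequest(userQuery) {
                      const conn = await pool.checkout();   // may wait for a free connection
                      try {
                        return await conn.query(userQuery); // yield point while holding conn
                      } finally {
                        pool.checkin(conn);                 // only runs if query() ever settles
                      }
                    }

                    // If conn.query() never settles (dead peer, buggy driver, ...), the
                    // finally never runs, the connection is never checked back in, and the
                    // pool shrinks by one. After 4 such leaks every checkout() waits forever.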

                1. 1

                  Yeah! Or maybe the lock is in some distributed system, and it takes a while to time out if not released explicitly.

            2. 1

              Well, here is a potential workaround, if work() can be made to fit in it: run work() inside a Web Worker. It has a terminate() method that is supposed to kill the worker immediately. Kind of like a SIGKILL, if you will. If you are on Node.js, run it in another process; then you can send it a literal SIGKILL if it doesn’t behave.
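
              A rough sketch of the Worker route (the worker file name and timeout are made up; error handling is omitted):

                  // Run the untrusted work in a Worker so it can be killed outright.
                  function runKillable(timeoutMs) {
                    const worker = new Worker('work-worker.js'); // hypothetical worker script
                    return new Promise((resolve, reject) => {
                      const timer = setTimeout(() => {
                        worker.terminate(); // the "SIGKILL": no cleanup runs inside the worker
                        reject(new Error('work timed out'));
                      }, timeoutMs);
                      worker.onmessage = (event) => {
                        clearTimeout(timer);
                        resolve(event.data);
                      };
                    });
                  }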

              Of course, it would be better to send a message to the worker to ask it to cancel first, which gets us back to the original problem. The issue I see is that work() does not take an AbortSignal as a parameter. If it (and any other async method it calls) took that parameter and used it to cancel whatever fetch(), setTimeout(), etc. makes up the async work, then work() could be cancelled correctly. That implies, of course, that work() handles that signal bug-free.

              I don’t think I know of any other method of cancelling code I don’t trust than “run it in another context, and drag it behind the shed and shoot it if it misbehaves”.

              1. 2

                You can cancel a paused (yielded) generator, as the post points out.

                1. 1

                  Ah, yes, true. But conceptually, isn’t this the same as an async function taking an AbortController as a parameter and cancelling on abort()? In both cases the API has to support the cancellation signal from top to bottom, and there must not be any error in its cancellation code…

                  If the point of the article was that generator code can make this automatic, much easier, and the default, then it makes sense to me, and I’m sorry the point went over my head.

                  Yet in any case, the fundamental problem with the async work() function is that it does not support cancellation. Whether with a generator or an AbortController, work() has to be rewritten, along with all the async functions it calls directly or indirectly, until you reach a cancellable operation.

                  If you can’t change the promise-based work() function because it’s third-party code, it doesn’t matter whether you put an AbortController or a generator around it: your function may not terminate, and your exit will be ungraceful.

                  Unless there is something I am missing?

                  1. 1

                    I think you’re missing the little-known ability to call the return() method on a suspended generator from the outside.

                    This causes the generator’s code to return, as if the yield statement where the generator was suspended were in fact a return statement. This is a pretty curious feature of JS, and I’m not entirely convinced it’s a good idea (I think it would be preferable to throw a “cancelled” exception, which is also possible with the throw() method). But it does allow you to cancel any generator, without any planning by the generator’s original author.
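
                    A small illustration (gen() is just a made-up example):

                        function* gen() {
                          try {
                            console.log('start');
                            yield 1;                     // suspended here
                            console.log('never reached');
                          } finally {
                            console.log('cleanup runs'); // finally still executes on return()
                          }
                        }

                        const it = gen();
                        it.next();     // logs "start", suspends at the yield
                        it.return(42); // acts as if the yield were "return 42": logs
                                       // "cleanup runs" and gives back { value: 42, done: true }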

                    1. 2

                      Well, thanks for the clarification, but even taking return() into account, I still see the same problem as in my previous comment. I’ll rephrase to make my point clearer:

                      My peeve with the article is the section “Does AbortSignal help?”, where the author dismisses AbortSignal as a non-solution because it cannot cancel the promise. My point is:

                      True, an abort signal cannot cancel the promise, but only if we assume that work() cannot be modified. Because if I can modify work(), the solution the author missed is to pass the AbortSignal to work() as a parameter, and then to its sub-functions, and so on, until I reach a fetch or a setTimeout or some other cancellable operation. Then I can register a listener on the AbortSignal to execute whatever method cancels the operation, and call abort() whenever I want the operation to cancel. This will make the promise settle, the resources will be cleaned up fine, and everything will exit gracefully, exactly as if I had used the generator method the author praised.
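
                      Something like this (the URL is a placeholder, and error handling is abbreviated):

                          // Thread the signal down to a cancellable primitive.
                          async function work(signal) {
                            const res = await fetch('/slow-endpoint', { signal }); // fetch honours the signal
                            return res.json();
                          }

                          const controller = new AbortController();
                          work(controller.signal).catch((err) => {
                            if (err.name === 'AbortError') console.log('cancelled cleanly');
                            else throw err;
                          });

                          // Later, when we want out: the fetch rejects, work()'s promise settles,
                          // finally blocks up the chain run, and the exit is graceful.
                          controller.abort();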

                      Which brings me to my second point:

                      If the async work() function, which returns a promise, cannot be modified, and I am stuck with an API with an uncancellable promise, then I am screwed either way. Even if I wrap a generator around that uncancellable promise, calling return() on it will no more cancel the promise than calling abort() on an AbortSignal does; the resource will not be cleaned up, and the exit will not be graceful.

                      In short, the author states that generators allow us to do things that AbortSignal cannot do, and he does so by first showing AbortSignals that don’t cross API boundaries, and then comparing them to generators that cross every API boundary. This is comparing apples and oranges, and it makes it seem that generators solve a problem we had no way of solving before, which is incorrect.

                      Am I incorrect in my analysis here? If generators are the next sliced bread, I want in on it, but before I rewrite the whole app, I need to know that I am getting something better than just a cuter way to cancel async operations.

                      1. 1

                        Even if I wrap a generator around that uncancellable promise, calling return() on it will no more cancel the promise than calling abort() on an AbortSignal does

                        Yes, you would still be stuck with the promise, but cancelling its wrapping generator could still have benefits (e.g. cleaning up any associated app-specific resources).