1. 1

    What makes WebAssembly more compact and faster to download than JavaScript (for, say, a typical SPA built with React.js)?

    “… This isn’t shown in the diagram, but one thing that takes up time is simply fetching the file from the server. It takes less time to download WebAssembly than it does the equivalent JavaScript, because it’s more compact. WebAssembly was designed to be compact, and it can be expressed in a binary form. Even though gzipped JavaScript is pretty small, the equivalent code in WebAssembly is still likely to be smaller. This means it takes less time to transfer it between the server and the client. This is especially true over slow networks…”

    1. 3

      It’s not really comparable, since you can’t compile an SPA to WebAssembly - the browser APIs react-dom uses aren’t directly accessible from WebAssembly.

      If you’re comparing Rust compiled to JavaScript with Rust compiled to WebAssembly, the bytecode format is going to be smaller.

    1. 2

      The article suggests, with regard to JWTs, that by default:

      Where are they stored? Using a default configuration, to summarise: localStorage / sessionStorage

      What browser/library is making that the default? I’ve never come across this.

      Perhaps the author specifically means JWT refresh tokens (tokens that are meant to automatically re-generate authorization tokens without the user typing in their password again).

      1. 2

        I am sorry, I am not picking up anything from this article… E.g.:

        this is where I’ve changed my mind and I’ve started to write tests based on trait instead of implementation

        It seems that the author is discovering black-box testing… but I might be wrong and missing some jewels…

        1. 2

          We are always looking for libraries which have heavy usage of new features. If you have some libraries which use Concepts or other C++20 features, please let us know and we are willing to add them to our daily RWC (real world code) testing. This will help us improve our compiler.

          Very nice of MS to engage with the community like this. I have seen this from the React Native team (FB) as well.

          1. 9

            Hey, mods? For the last five years or so, I’ve titled these articles “Jepsen: [Database Name] [Version]”. I’ve been informed that that’s no longer an acceptable title for Jepsen reports, because “Jepsen” is also the name of the site. Can I get a policy clarification here? Do I need to make up a lobste.rs-friendly title that doesn’t include the word “Jepsen” for each submission from now on?

            1. 7

              The story submission guidelines state, in part: “Please remove extraneous components from titles such as the name of the site, blog, section, and author.” The story was originally submitted with the story title “Jepsen: YugaByte DB 1.3.1” from the domain jepsen.io. Given that the site name appears in the story title, and consistent with the story submission guidelines, I replaced “Jepsen: “ with “Testing “ as the best operational description of the article I could find.

              It has since been brought to my attention that Jepsen is also the name of a Clojure library. As a site name, the story submission guidelines are clear. As a library or tool name, they are mute. As Jepsen is both, but given the existence of an interpretation of the article title that does not violate the site guidelines, I have restored the story title as originally submitted.

              1. 4

                It has since been brought to my attention that Jepsen is also the name of a Clojure library. As a site name, the story submission guidelines are clear. As a library or tool name, they are mute. As Jepsen is both, but given the existence of an interpretation of the article title that does not violate the site guidelines, I have restored the story title as originally submitted.

                This is such a wacky justification! Somewhere, a lawyer smiles without knowing why.

                But I like the “Jepsen: WebscaleDB 0.1.1-alpha2” titles, so I’ll take it.

                1. 3

                  Absent a Damas-Hindley-Milner typechecker for story titles, it’s just us manually splitting hairs to try to keep up their signal:noise ratio. I was also surprised by Jepsen being the name of the tool used for the analysis as well as the project and consulting company I knew about. I can only ask that @aphyr disambiguate with Hungarian Notation - perhaps he’d like to adopt Jepsen_clj, Jepsen_blog, and Jepsen_LLC so we can more efficiently mangle his titles in the future.

                  1. 5

                    I like to think the usage in the submission title here is a fourth meaning of “Jepsen”, being a proper name not of the blog itself but of the kind of article the blog contains, i.e. “a Jepsen of YugaByte DB”

                2. 3

                  Thank you. For what it’s worth, I prefer to see Jepsen in the title, because in my mind, it adds a ton of credibility for the content.

              1. 7

                Well, some of us are in this category (as the article points out):

                If you’re building API services that need to support server-to-server or client-to-server (like a mobile app or single page app (SPA)) communication, using JWTs as your API tokens is a very smart idea. In this scenario:

                • You will have an authentication API which clients authenticate against, and get back a JWT
                • Clients then use this JWT to send authenticated requests to other API services
                • These other API services use the client’s JWT to validate the client is trusted and can perform some action without needing to perform a network validation

                so JWTs are not that bad. Plus, it is refreshing to visit a website that says ‘there are no cookies here’… in their privacy policy.
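
                To make the quoted flow concrete, here is a minimal sketch under stated assumptions: it uses the express and jsonwebtoken npm packages, and the route names, secret handling, and in-memory credential store are my own illustrative choices, not something prescribed by the article.

                const express = require("express");
                const jwt = require("jsonwebtoken");

                const SECRET = process.env.JWT_SECRET || "dev-only-secret"; // shared with the other API services
                const users = { alice: "correct horse battery staple" };    // stand-in credential store

                const app = express();
                app.use(express.json());

                // 1. Authentication API: the client authenticates and gets back a JWT.
                app.post("/login", (req, res) => {
                  const { user, password } = req.body || {};
                  if (users[user] !== password) return res.sendStatus(401);
                  const token = jwt.sign({ sub: user }, SECRET, { expiresIn: "15m" });
                  res.json({ token });
                });

                // 2. Any other API service: validate the JWT locally, no extra network call.
                app.get("/orders", (req, res) => {
                  const token = (req.headers.authorization || "").replace("Bearer ", "");
                  try {
                    const claims = jwt.verify(token, SECRET); // throws if tampered with or expired
                    res.json({ user: claims.sub, orders: [] });
                  } catch (e) {
                    res.sendStatus(401);
                  }
                });

                app.listen(3000);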

                1. 17

                  Plus, it is refreshing to visit a website that says ‘there are no cookies here’… in their privacy policy.

                  The EU “Cookie Law” applies to all methods of identification — cookies, local storage, JWT, parameters in the URL, even canvas fingerprinting. So it shouldn’t have any effect on the privacy policy whatsoever.

                  1. 9

                    You can still use sessions with cookies, especially with an SPA. Unless the JWT is stateless and short-lived, you should not use it. Also, JWT isn’t the best design either, as it gives too much flexibility and too many possibilities for misuse. PASETO tries to resolve these problems by versioning the protocol and reducing the number of possible hash/encryption methods.

                    1. 1

                      Why shouldn’t you use long-lived JWTs with a single page application?

                      1. 4

                        Because you cannot invalidate that token.

                        1. 6

                          Putting my pedant hat on: technically you can, using blacklists or swapping signing keys; but that then negates the benefit of encapsulating a user “auth key” into a token, because the server will have to do a database lookup anyway, and at that point it might as well be a traditional cookie-backed session.

                          JWTs are useful when short-lived for “server-less”/lambda APIs, so they can authenticate the request and move along quickly, but for more traditional things they can present more challenges than solutions.

                          1. 7

                            Putting my pedant hat on: technically you can, using blacklists or swapping signing keys; but that then negates the benefit of encapsulating a user “auth key” into a token, because the server will have to do a database lookup anyway, and at that point it might as well be a traditional cookie-backed session.

                            Yes, that was my point. It was just a mental shortcut: if you do that, then there is no difference between “good ol’” sessions and using JWT.

                            Simple flow chart.

                            1. 1

                              Except it is not exactly the same, since losing a blacklist database is not the same as losing a token database, for instance. The former will not invalidate all sessions but will re-enable old tokens. Which may not be that bad if the tokens are sufficiently short-lived.

                              1. 1

                                Except “reissuing” old tokens has much less impact (at most your clients will be a little annoyed) than allowing leaked tokens to be valid again. If I were a client, I would much prefer the former to the latter.

                    2. 5

                      One of my major concerns with JWTs is that retraction is a problem.

                      Suppose I have the requirement that old authenticated sessions have to be remotely retractable; how on earth would I make a certain JWT invalid without having to consult the database for “expired sessions”?

                      The JWT to be invalidated could still reside on the devices of certain users after it has been invalidated remotely.

                      The only way I can think of is making them so short-lived that they expire almost instantaneously, like a few minutes at most, which means that user sessions will be terminated annoyingly fast as well.

                      If I can get nearly infinite sessions and instant retractions, I will gladly pay the price of hitting the database on each request.

                      1. 8

                        JWT retraction can be handled in the same way as for a traditional API token: you add it to a blacklist, or, in the case of a JWT, the “secret” it is signed against can be changed. However, both solutions negate the advertised benefit of JWTs, or rather they negate the benefits I have seen JWTs advertised for: namely, that they remove the need for a session lookup in the database.

                        I have used short-lived JWTs for communicating with various stateless (server-less/lambda) APIs, and for that purpose they work quite well; each endpoint has a certificate it can check the JWT’s validity with, and having the user’s profile and permissions encapsulated means not needing a database connection to know what the user is allowed to do; a 60s validity period gives the request enough time to authenticate before the token expires while removing the need for retraction.
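
                        A rough sketch of that stateless check, under stated assumptions: it uses the jsonwebtoken package with RS256, and the key path, the permissions claim, and the reports:read permission name are illustrative, not taken from the comment above.

                        const fs = require("fs");
                        const jwt = require("jsonwebtoken");

                        // Public key of the issuing auth service; the path is illustrative.
                        const PUBLIC_KEY = fs.readFileSync("auth-service-public.pem");

                        // Called at the top of each stateless endpoint handler.
                        function checkRequest(authorizationHeader) {
                          const token = (authorizationHeader || "").replace("Bearer ", "");
                          // ~60s validity: long enough to serve the request, short enough that no
                          // revocation machinery is needed. Throws if the signature is bad or expired.
                          const claims = jwt.verify(token, PUBLIC_KEY, { algorithms: ["RS256"] });
                          // Profile and permissions travel inside the token, so no DB lookup here.
                          if (!Array.isArray(claims.permissions) || !claims.permissions.includes("reports:read")) {
                            throw new Error("not allowed");
                          }
                          return claims;
                        }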

                        I think the problem with JWTs is that many people have attempted to use them as a solution for a problem already better solved by other things that have been around and battle tested for much longer.

                        1. 7

                          However, both solutions negate the advertised benefit of JWTs, or rather they negate the benefits I have seen JWTs advertised for: namely, that they remove the need for a session lookup in the database.

                          I think the problem with JWTs is that many people have attempted to use them as a solution for a problem already better solved by other things that have been around and battle tested for much longer.

                          This is exactly my main concern and also the single reason I haven’t used JWTs anywhere yet. I can imagine services where JWTs would be useful, but I have yet to see or build one where some form of retraction wasn’t a requirement.

                          My usual go-to solution is to generate some 50-100-character-long string of gibberish and store it both in a cookie on the user’s machine and in a database table consisting of <user_uuid, token_string, expiration_timestamp> triples, which is then joined with the table that contains the user data. Such queries are usually blazing fast, and retraction is then a simple DELETE query. Also: scaling usually isn’t that big of a concern, as most DBMS systems tend to have the required features built in already.

                          Usually, I also set up a scheduled event in the DBMS which deletes all expired tokens from that table periodically, typically once per day at night, or when the number of active users is low. It makes for a nice fallback just in case some programming bug inadvertently creeps in.

                          But I guess this was the original author’s point as well.
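
                          A rough sketch of that scheme, under stated assumptions: it uses Node’s crypto module and the better-sqlite3 package, the table and column names simply mirror the triple described above, and the TTL and token length are illustrative.

                          const crypto = require("crypto");
                          const Database = require("better-sqlite3");

                          const db = new Database("sessions.db");
                          db.exec(`CREATE TABLE IF NOT EXISTS sessions (
                            user_uuid TEXT NOT NULL,
                            token_string TEXT PRIMARY KEY,
                            expiration_timestamp INTEGER NOT NULL
                          )`);

                          // Issue: ~96 characters of gibberish, stored server-side and sent as a cookie value.
                          function createSession(userUuid, ttlSeconds = 30 * 24 * 3600) {
                            const token = crypto.randomBytes(48).toString("hex");
                            const expires = Math.floor(Date.now() / 1000) + ttlSeconds;
                            db.prepare("INSERT INTO sessions VALUES (?, ?, ?)").run(userUuid, token, expires);
                            return token;
                          }

                          // Check: one indexed lookup (or a join against the users table) per request.
                          function lookupSession(token) {
                            return db.prepare(
                              "SELECT user_uuid FROM sessions WHERE token_string = ? AND expiration_timestamp > ?"
                            ).get(token, Math.floor(Date.now() / 1000));
                          }

                          // Retraction is just a DELETE; a nightly job can sweep expired rows the same way.
                          function revokeSession(token) {
                            db.prepare("DELETE FROM sessions WHERE token_string = ?").run(token);
                          }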

                        2. 1

                          I’ve never done any work with JWTs so this might be a dumb question - but can’t you just put an expiration time into the JWT data itself, along with the session and/or user information? The user can’t alter the expiration time because presumably that would invalidate the signature, so as long as the timestamp is less than $(current_time) you’d be good to go? I’m sure I’m missing something obvious.

                          1. 5

                            If someone steals the JWT, they have free rein until it expires. With a session, you can remotely revoke it.

                            1. 1

                              That’s not true. You just put a black mark next to it and every request after that will be denied - and it won’t be refreshed. Then you delete it once it expires.

                              1. 7

                                That’s not true. You just put a black mark next to it and every request after that will be denied - and it won’t be refreshed. Then you delete it once it expires.

                                The problem with the black mark is that you have to hit some sort of database to check for it. By doing so, you invalidate the usefulness of JWTs. That is one of the OP’s main points.

                                1. 2

                                  Well, not necessarily. If you’re making requests often (e.g., every couple of seconds) and you can live with a short delay between logging out and the session being invalidated, you can set the timeout on the JWT to ~30 seconds or so and only check the blacklist if the JWT is expired (and, if the session isn’t blacklisted, issue a new JWT). This can save a significant number of database requests for a chatty API (like you might find in a chat protocol).
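
                                  A minimal sketch of that idea, under stated assumptions: it uses the jsonwebtoken package, an in-memory Set stands in for the blacklist, and the sid (session id) claim name is my own choice.

                                  const jwt = require("jsonwebtoken");

                                  const SECRET = "dev-only-secret";
                                  const blacklist = new Set(); // session ids revoked at logout

                                  function authenticate(token) {
                                    try {
                                      // Fast path: signature valid and not yet expired, no blacklist lookup at all.
                                      return { claims: jwt.verify(token, SECRET), token };
                                    } catch (err) {
                                      if (err.name !== "TokenExpiredError") throw err; // tampered or garbage token
                                    }
                                    // Slow path (at most every ~30s per client): the token expired, so consult
                                    // the blacklist before reissuing a fresh short-lived token.
                                    const claims = jwt.verify(token, SECRET, { ignoreExpiration: true });
                                    if (blacklist.has(claims.sid)) throw new Error("session revoked");
                                    const fresh = jwt.sign({ sub: claims.sub, sid: claims.sid }, SECRET, { expiresIn: "30s" });
                                    return { claims, token: fresh };
                                  }

                                  // Logging out = blacklisting the session id until its last token has expired.
                                  function logout(claims) {
                                    blacklist.add(claims.sid);
                                  }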

                                  1. 1

                                    Or refresh a local cache of the blacklist periodically on each server, so it’s a purely in-memory lookup.

                                    1. 4

                                      But in that case, you’d be defeating their use as session tokens, because you are limited to very short sessions. You are just one network hiccup away from failure, which also defeats their purpose (which was another point of the OP).

                                      I see how they can be useful in situations where you are making a lot of requests, but the point is that 99.9% of websites don’t do that.

                          2. 1

                            For mobile apps, which have safe storage for passwords, the retraction problem is solved by issuing refresh tokens (which live longer, like passwords in the password store of a mobile phone). The refresh tokens are then used to issue new authorization tokens periodically, transparently to the user. You can re-issue the authorization token using the refresh token every 15 minutes, for example.

                            For web browsers, using refresh tokens may or may not be a good idea. Refresh tokens are, from the security perspective, the same as ‘passwords’ (although temporary). So their storage within the web browser should follow the same policy as one would have for passwords.

                            So if using refresh tokens for your single page app is not an option, then invalidation would have to happen during access control validation, on the backend. (The backend is still responsible for access control anyway, because it cannot be done securely on web clients.)

                            It is more expensive, and requires a form of distributed cache if you have a distributed backend that allows stateless, non-IP-bound distribution of requests…
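
                            A minimal sketch of that arrangement, under stated assumptions: it uses the jsonwebtoken package, the in-memory Map stands in for whatever (possibly distributed) store the backend actually uses, and the names and lifetimes are illustrative.

                            const crypto = require("crypto");
                            const jwt = require("jsonwebtoken");

                            const SECRET = "dev-only-secret";
                            const refreshTokens = new Map(); // refresh token -> user id

                            // Issued once, after the user types their password; the client keeps it in the
                            // phone's secure password store.
                            function issueRefreshToken(userId) {
                              const token = crypto.randomBytes(32).toString("hex");
                              refreshTokens.set(token, userId);
                              return token;
                            }

                            // Called transparently by the app every ~15 minutes.
                            function issueAccessToken(refreshToken) {
                              const userId = refreshTokens.get(refreshToken);
                              if (!userId) throw new Error("refresh token revoked or unknown");
                              return jwt.sign({ sub: userId }, SECRET, { expiresIn: "15m" });
                            }

                            // Retraction: drop the refresh token; outstanding access tokens die within 15 minutes.
                            function revoke(refreshToken) {
                              refreshTokens.delete(refreshToken);
                            }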

                            1. 1

                              For mobile apps, which have safe storage for passwords, the retraction problem is solved by issuing refresh tokens (which live longer, like passwords in the password store of a mobile phone).

                              But then why use 2 tokens instead of a single one? It makes everything more complicated for the sake of the perceived simplification of not doing 1 DB request on each connection. Meh. And you can even use a cookie, as in your web UI, so in the end it will make everything simpler as you do not need 2 separate auth systems in your app.

                              1. 1

                                It makes everything more complicated for the sake of the perceived simplification of not doing 1 DB request on each connection.

                                This is not really why 2 tokens are used (an authentication token and a refresh token). 2 tokens are used to a) allow fast expiration of an authentication request, and b) prevent passing the actual user password through to the backend (it only needs to be passed when creating a refresh token).

                                This is a fairly standard practice though, not something I invented (it requires an API-accessible, secure password store on the user’s device, which is why it is prevalent in mobile apps).

                                I also cannot see how a) and b) can be achieved with a single token.

                        1. 1

                          I think we are missing, as a software dev construct, some classification for optimization techniques. There is a lot in between:

                          • ‘do not create a file on a network drive every time you receive a web request’
                          • ‘do not hand unroll your for-loops’

                          The first seems like a good thing to avoid, even without profiling. The second requires a ton of justification.

                          Until we have such a classification, I think it is best if we share our personal experiences and anecdotes as such, without assuming that we have found a rule or an axiom of some sort…

                          1. 1

                            C++ has vcpkg with over 1K packages.

                            vcpkg is not the standard package manager of the language, but still… (and it runs on Windows, macOS, Linux, and, as a 2nd-class citizen, FreeBSD…)

                            https://github.com/microsoft/vcpkg/tree/master/ports

                            1. 6

                              This is hilarious, because a much less well-known system (baserock) with a very similar goal to Nix had a tool also called morph that predates this one by several years. Baserock was kind of parallel to NixOS but not nearly as successful; in the end morph died, but out of its ashes came a tool called buildstream, which seems to be more successful.

                              edit: clarify

                              1. 2

                                Thx for pointing me to buildstream. Seems like an excellent tool to build cross-OS-compatible packages. I was looking for something like this, actually.

                              1. 1

                                Also, PipelineDB (which is a Postgres extension) has explicit aggregate functions that expose probabilistic data structures:

                                http://docs.pipelinedb.com/probabilistic.html

                                1. 3

                                  Not really sure this quite covers “static analysis” the way it’s usually meant by researchers. Yes, linters perform a type of static analysis, and yes, the field is quite broad, but the sum total of this page is pretty weak sauce as a stand-in for static analysis. For example, there is a single section for “binary static analysis”, and it has all of 4 entries. The previous section, detailing “multiple language” projects, is probably closer to the colloquial sense of the word and covers a much wider breadth.

                                  1. 1

                                    The list includes static analysis tools that leverage Abstract Interpretation, e.g. Ikos:

                                    IKOS (Inference Kernel for Open Static Analyzers) is a static analyzer for C/C++ based on the theory of Abstract Interpretation.

                                    1. 2

                                      Yeah, but how many? I don’t really want to take the time at the moment, it’s not that big a deal, but several of those language-oriented sections were like 50% linters. Again, not a big deal, linting involves static analysis(!), but if you want to talk to someone who actually uses static analysis for bug hunting (or whatever, performance, etc.) in their day-to-day, linters are not what they mean. Binary-focused static analysis is what probably 90+ percent of the current research papers published mean when they use the phrase “static analysis”. There is an interesting essay to be written about the fluidity of terms in computational science research and practice. Halvar Flake, among others, had a tweet relatively recently about “soundness” and “completeness” and the squishiness of those terms; if you search his handle with either of those terms it’ll come up, with lots of serious people in the replies. Point being, I think a similar problem befalls use of “static analysis”.

                                      1. 1

                                        Yes, the list includes a mix of syntax checkers, style checkers, narrow bounds checkers and so on. A couple of tools, e.g. Ikos, Spark, Polyspace, are based on solid theoretical methods. Ikos also does binary analysis (well, LLVM IR encoded into bitstream files); I do not know about the others.

                                        Also agreed that we would benefit from some form of taxonomy helping us navigate the theoretical constructs and their practical implementations in this space (e.g. Abstract Interpretation, symbolic execution).

                                  1. 4

                                    Nice read, and the author seems to be refreshingly humble.

                                    The translation by Google Translate also seems to be very, very good. It is amazing how far tools accessible to a regular person have come.

                                    1. 2

                                      Yeah, it’s pretty fun to read his articles (in Vietnamese) because they’re all humble and also have a lot of interesting information.

                                      1. 1

                                        Good writing with consistent grammar and sentence structure often translates well.

                                        1. 1

                                          I think he must have tidied the translation up a little. Great read, in any case.

                                        1. 2

                                          Thx for the write-up. You also got an A+ on the SSL Labs test report. :-)

                                          The JavaScript code base seems to be small; am I correct to assume that you are doing most of the UI on the server side?

                                          Also, I see that some images are hosted on Cloudflare (and there are some cookies); is that something you have chosen?

                                          Host: media.matchacha.ro
                                          User-Agent: Mozilla/5.0 (X11; OpenBSD amd64; rv:67.0) Gecko/20100101 Firefox/67.0
                                          Accept: image/webp,*/*
                                          Accept-Language: en-US,en;q=0.5
                                          Accept-Encoding: gzip, deflate, br
                                          DNT: 1
                                          Connection: keep-alive
                                          Cookie: __cfduid=xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
                                          Pragma: no-cache
                                          Cache-Control: no-cache
                                          TE: Trailers

                                          1. 6

                                            You also got an A+ on SSL labs test report. :-)

                                            Mozilla Observatory gives it a pretty good score as well! (A+, 115/100)

                                            The javascript code base seems to be small, am I correct to assume that you are doing most of the UI on server-side?

                                            JS is only sprinkled through for very minor UI enhancements and it’s all very much written the way I used to write JS code ten years ago. I don’t use webpack or babel and the code is merged together and then minified by broccoli. As for vendored JS, I use Quill in the admin for rich text editing, mapbox to display a map to the user after they complete their order and unpoly to enhance the navigation. Here’s what one of my javascript “modules” looks like:

                                            $ cat resources/js/app/discard.js                                                                                   
                                            /* global up */                                                                                                     
                                                                                                                                                                
                                            (function() {                                                                                                       
                                              up.compiler("[data-discard]", function(el) {                                                                      
                                                const timeout = el.dataset.discard * 1000;                                                                      
                                                const timeoutId = setTimeout(discard, timeout);                                                                 
                                                                                                                                                                
                                                el.addEventListener("click", discard);                                                                          
                                                el.addEventListener("animationend", onDiscardEnd);                                                              
                                                return function() {                                                                                             
                                                  el.removeEventListener("click", discard);                                                                     
                                                  el.removeEventListener("animationend", onDiscardEnd);                                                         
                                                  clearTimeout(timeoutId);                                                                                      
                                                };                                                                                                              
                                                                                                                                                                
                                                function discard() {                                                                                            
                                                  el.classList.add("discarding");                                                                               
                                                }                                                                                                               
                                                                                                                                                                
                                                function onDiscardEnd() {                                                                                       
                                                  el.classList.add("discarded");                                                                                
                                                }                                                                                                               
                                              });                                                                                                               
                                            })();
                                            

                                            Also I see that some images are hosted on cloudfare (and there are some cookies) is that something you have chosen?

                                            I use the free Cloudflare plan for that tiiiiny extra bit of security I get from hiding my origin host as well as the free CDN. You can read about what that cookie is used for here. CF does act as a CDN for my media, but the origin server hosts the actual images. I have CF pointed to an nginx instance on the origin that serves the media files and reverse proxies to the application.

                                          1. 2

                                            J has a very ‘mathy’ coding style, where by mathy I mean:

                                            • no loops
                                            • no if statement
                                            • no allocations
                                            • no ‘technical initializations’

                                            in the main body of the algorithms.

                                            By comparison, this is a Python-with-pandas implementation of k-means (albeit slightly more complex): https://github.com/jackmaney/k-means-plus-plus-pandas/blob/master/k_means_plus_plus/cluster.py

                                            My feeling (no substantive experience though) is that, with the explosion of linear algebra code due to machine learning, the APL family of languages was best suited to express what’s written in books with LaTeX.

                                            But likely a number of historical, commercial, and usability/accessibility reasons made them much less accessible/attractive.

                                            Also, the current trends toward compile-time expressions, monads hiding data-type intricacies and their interaction with external functions, and lambda expressions, combined, are giving us the ‘mathiness’ of what’s already available in J.

                                            1. 2

                                              I sometimes use APL as a modeling language for what I will ultimately implement in Python with numpy or pandas. It helps me to frame the problem in terms of data, representation, and notation rather than functions and classes.

                                              Many of the APL primitives have equivalents in numpy routines, so once I have a satisfactory APL solution, porting it to Python is pretty simple. There are a few gaps though, like the power operator used in k-means. The numpy API is so large it can be hard to even know whether there’s an equivalent to some APL primitive.

                                              This isn’t an exorbitant cost though. The missing APL function can usually be written in just a couple lines of Python code.

                                            1. 3

                                              I wonder how this can get out of its current catch 22 situation where the main thing that makes it interesting is the easy access to a wide range of packages in a central repository, but having a wide range of packages available depends on having a wide range of people interested.

                                              At the moment, their crates.io equivalent, cppget.org, has only 24 packages (mostly build2 itself and a few example packages), and it’s not clear if there’s any infrastructure for users to submit their own packages. They also face the additional issue that unlike cargo, build2 has come much later in the life of C++, so it probably needs a strategy to make existing, widely used, libraries available in the repository.

                                              Yes, it can be used with a local repository, but at least for open source dependencies, the nice thing about cargo is that you tell it you want to depend on foo >= 1.0 and it does everything else for you. This experience is also very important for people who just want to try it out for the first time.

                                              1. 1

                                                Checked now (a year after the parent comment): there are 39 packages on cppget.org: https://cppget.org/

                                              1. 1

                                                  As per this article: https://janmr.com/blog/2015/04/time-budgets-for-the-web/

                                                  The user feels that the system reacts instantaneously if the response time is at most 0.1 s.

                                                1. 2

                                                    Well, just yesterday, I gave up on my 4th attempt in 1.5 years to set up Emacs for JavaScript+Flow development.

                                                    I failed configuring js2-mode + LSP for JavaScript + eslint + the flow-for-js2 minor mode, while fighting Emacs’s confusion about where the directories are and where node_modules packages can be… (and relying on the projectile module to guess some of it…)

                                                  Very negative experience (again).

                                                    I have been using Emacs for ages, and I only know the Emacs keymap. I very much appreciate that it works in a terminal; I prefer to use it that way…

                                                    But I would not recommend that somebody new to the field learn Emacs…

                                                    It is a dead end, unless Emacs reinvents itself with modern project-awareness capabilities, build/release/troubleshoot pipelines, and some structured friendliness for plugin developers.

                                                    I looked at the quick rise of VisualCode, and IntelliJ before it. I ignored it at first, but Emacs’s extensibility is really more like anarchy.

                                                    It does not take into consideration composability between multiple ‘plugins’/‘extensions’.

                                                    So different modes/extensions sort of choose how they work with each other (if at all), which means that when an overall solution requires stitching together different extensions (and most use cases now do), things do not work…

                                                    The only dev work I use Emacs for now is where stitching/integration of different modes is not needed: for example, modifying Ansible yml files, or using org-mode to write status reports… everything else is either VisualCode or IntelliJ.

                                                  1. 2

                                                    I might have missed it on the page, but where are the sources for the tests located? I would like to run some of them.

                                                    1. 3

                                                      There are plenty of good comments on this story on HN: https://news.ycombinator.com/item?id=20695806. I like reading the experiences of people trying to use code-sharing solutions on mobile; the consensus seems to be that it usually isn’t worth it.

                                                      1. 2

                                                        I am reading the comments (though, judging from my own experience, I see it slightly differently).

                                                        • If the app has sufficiently complex UIs that leverage platform-specific code, which prevents usage of something like Xamarin or React Native, then coding for each platform will be needed. In this scenario, trying to share ‘business logic’ modules in C++ will not work well. That’s the experience of the article’s authors, and I suspect most will agree.

                                                        • If, on the other hand, an app has sufficiently ‘standard’/‘vanilla’ UI needs that can be covered by Xamarin or React Native, then that’s the path that should be taken.

                                                        In other words, what drives the ability to share ‘business logic’ between iOS and Android is the basic ability to share UI code. And if UI code cannot be shared, neither can business logic.

                                                        Unlike Xamarin, React Native can also be embedded into an otherwise native application.

                                                        Therefore, with React Native, a ‘hybrid’ app can be created, where some UI code is native widgets and some ‘activities’ or ‘screens’ are done in React Native. This lets developers compose the application from the two UI stacks (which look native to the end user). The screens that share the UI code can then also share the logic, which minimizes code duplication yet gives the flexibility to decide if something really needs to be done using native widgets, without recoding the whole solution.

                                                      1. 6

                                                          Running your own mail server (and a basic website) is an exceptionally great idea. I am running YunoHost, using 379 MB out of the 1.2 GB available.

                                                          YunoHost is a Debian 9-based self-hosting ‘cloud in a box’ that offers email, a personal site server, and many other programs that allow one to manage their web presence.

                                                          It is free, but please donate.

                                                          It includes the Mail-in-a-Box app that the author has tried. Besides the mail server, I am running a few other small programs, and it allows me to install my own web apps, to be served by its preconfigured NGINX (and it also automagically gets TLS certificates from Let’s Encrypt!).

                                                          It also allows servicing multiple domains in one instance (and it manages TLS certs and user accounts for each site independently), so I can point, say, 3 different domains at one YunoHost.

                                                          I also went through an update process with YunoHost and it worked fine; I just clicked a button in the UI to update the underlying platform, then the installed plugin apps.

                                                          I am using the cheapest $5 instance offered by the prgmr hosting service: https://billing.prgmr.com/index.php/order/main/packages/xen/?group_id=10 It is similar in capacity to the VPS that the author was looking at.

                                                          PRGMR also donates its hosting to lobste.rs (from what I read), which made my choice of VPS easier, at least. My instance has been very stable and the IP addresses are not blacklisted by spam protection software, so my emails go through with no problems to Outlook and other email hosts.

                                                          I am not sure why the author of the article did not look at YunoHost as a whole package, and instead tried to install individual programs…