Threads for vladislavp

  1. 1

    Yes, the delineation is correct with some nuances.

    A developer can be a ‘Cost center’ or a ‘Profit center’.

    If a developer works for ‘Internal IT’, that means they are a ‘Cost center’. If they work for a company that sells software or software services, they are a ‘Profit center’.

    There are companies (usually large investment banks, online retailers, healthcare conglomerates, etc) that claim they are ‘just like a technology company’ – implying that the software developers there are ‘profit centers’ – because it is the technology that differentiates them from their competitors.

    In investment banks, if a person works close to a ‘profit center’ – eg the trading desk/traders – then they will receive the benefits of being in a ‘profit center’.
    But in the same org, a developer working somewhere in regulatory/compliance, or in the middle or back office (much further from the trading desk) – maybe even one with higher skills – will be a ‘cost center’.

    In healthcare (but with much lower bonuses) something similar happens.

    Cost-center engineers are treated differently from a salary and promotion perspective.

    Profit-center engineers will make significantly more, but they also have to switch companies/places to stay competitive, and, often, they ‘trail’ their business bosses and move with them…

    1. 3

      In general, we recommend regularly auditing your dependencies, and only depending on crates whose author you trust.

      Ok thanks?

      I can do this in C++, because the language makes it effectively impossible to ship libraries that have their own dependencies, so you don’t get the exponential blowup – but the Rust ecosystem makes this completely intractable.

      Last time I messed with Rust, I wrote a very basic piece of software (connect to Discord, FlashWindow if anyone says your name, like IRC highlights) and it pulled in over 1 million lines of dependencies. Even if I could audit all of that, what can I do if I find a crate I don’t like?

      1. 6

        I think this is a false dichotomy, as outlined by (as just one example) https://wiki.alopex.li/LetsBeRealAboutDependencies

        C/C++ stuff has dependencies too, sometimes enormous amounts of them. This mantra that dependency-tree hell is an NPM/Cargo/etc.-specific problem is – frankly – ridiculous. I can’t audit all of, say, Boost or LibreSSL much better than I can audit, say, Tokio or rustls. Arguably, it’s harder to audit something like LibreSSL than rustls on account of the memory management, but that’s a vastly different discussion. Let’s imagine I’m instead talking about NodeJS’s “lodash” or something, which I also haven’t read every line of.

        1. 1

          But with C/C++ and most other popular languages, you have the system package managers as a line of defense. I don’t know why this is so hard for many people to understand.

          1. 1

            Some C++ libraries have very high standards for contributions, and for the overall review and release process. Boost is one of them ( https://www.boost.org/development/requirements.html ).

            Perhaps, in some ways, the argument can be made that one big (but composable/de-composable), thoughtfully managed library with a high bar for entry (like Boost) is, in the longer term, significantly better than 100s or 1000s of crates or NPM packages (it also helps that the quality of the library informs upcoming language standards).

            The article you linked, to me, reads a bit more like a false dichotomy itself (as it does not take into account the technical and release-process maturity of the C and C++ libraries).

            These complaints are valid, but my argument is that they’re also not NEW, and they’re certainly not unique to Rust. Rather, the difference in dependencies between something written in Rust vs. “traditional” C or C++ is that on Unix systems all these dependencies are still there, just handled by the system instead of the compiler directly.

        1. 3

          You won’t find it yet in the release notes, but GCC 12.1 is finally able to compile recent D code, thanks to a big bump of the version of the D frontend integrated in GCC.

          The maintainer, Iain Buclaw, said that it will be documented more officially later.

          1. 1

            Oh, thank you for pointing this out. This is a huge accomplishment for D!

          1. 2

            Is Podman a complement to Kubernetes or ECS, or a replacement? Or both? I’ve been trying to wrap my head around the container ecosystem as the current jack-of-all-trades at a small company. Kubernetes intimidates me, and all the AWS alternatives (ECS, Beanstalk, Fargate) mean I have to read AWS documentation 😆

            1. 9

              It is a reimplementation of Docker, with a cleaner design. It is mostly developed and maintained by Red Hat rather than Docker Inc.

              1. 1

                Got it - thanks for explaining that

              2. 1

                @vosper you might also find the thread:

                https://www.mail-archive.com/users@dragonflybsd.org/msg05686.html

                to be informative (although it touches on areas outside of the Linux-only ecosystem)

              1. 1

                “.. Twelve-factor processes are stateless and share-nothing. Any data that needs to persist must be stored in a stateful backing service. This factor remains relevant as written. …”

                I am thinking that actor-based systems (like Erlang, or future Java Loom-based systems) will not qualify as 12-factor apps, given that they would not rely on a specialized caching service to store their state (or maybe I am misinterpreting this point).

                1. 3

                  looking forward to it!

                  I have tried to use it as my dev env (for a Java backend, Postgres) – but could not get a typical Java dev toolchain to work on NetBSD 9.x (Gradle in our case [1]).

                  [1] https://github.com/gradle/gradle/issues/16568

                  1. 3

                    Will try it out once 10.0 is released.

                  1. 2

                    SAP HANA has been applying CPU-level vectorization instructions to make their hybrid (OLAP+OLTP) db engine faster. I cannot find all the write-ups that were published (some of the ones I had seen were done together with Intel).

                    But here is one (2014):

                    https://blogs.sap.com/2014/10/13/high-performance-application-using-vectorization/ – discussing details in areas similar to the posted article.
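
                    As a toy illustration of the kind of code such vectorization work targets (my sketch, not SAP’s code): a tight, branch-free scan over a contiguous column is exactly the shape that compilers can turn into SIMD instructions, whether automatically or via hand-written intrinsics.

                        #include <cstdint>
                        #include <vector>

                        // Toy column scan: count rows where price > threshold.
                        // A branch-free comparison over contiguous data is what
                        // auto-vectorizers (or hand-written SIMD) speed up.
                        std::size_t count_matches(const std::vector<int32_t>& price,
                                                  int32_t threshold)
                        {
                            std::size_t count = 0;
                            for (int32_t p : price)
                                count += (p > threshold); // 0/1 per row, no branch
                            return count;
                        }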

                    1. 3

                      I find that Clojure has become an API to a bunch of different platforms for me now. I can leverage the JVM, JS runtimes, Python, and now Dart using the same language. This reduces a lot of cognitive overhead since I don’t have to remember how to juggle a bunch of different syntaxes, semantics, language quirks, and build tools.

                      Typically, all the platform interop stuff lives at the edges of the application and most of the interesting stuff happens in the core business logic. So, even though I still have to figure out how to interop with each different platform from Clojure, that effort tends to be front loaded and relatively small when compared to the scope of the overall project.

                      1. 2

                        Python

                        I know there’s an interop library, but have I missed a “Compiles to Python” Clojure dialect?

                        Beyond that, I have been feeling the same. Babashka has become my go-to scripting language/platform, and I never want to go back to bash (or Perl or Python or whatever else I’ve used over the years).

                        1. 2

                          It seems like babashka is not going to be able to support platforms (eg BSDs) not officially supported by GraalVM

                          https://github.com/babashka/babashka/issues/721

                          Perhaps initially the inconveniences are minor, but over time they could become hindrances for scripts/tools that expect the ubiquitous presence of a particular command-line shell.

                          1. 1

                            I use Void Linux as my daily driver, with servers on Void or OpenBSD. If I want babashka, I’ll need to package it. If I want to package it, I’ll need a fully-open, from-source build that works across libcs and (ideally) in cross-building.

                            I have not yet had, nor do I expect, a chance to use this really cool tool.

                            1. 1

                              Oh yeah, I meant that for my personal/one-off scripts I reach for babashka. I wouldn’t rely on it for anything that would be “production” or more generally open source for those very reasons.

                            2. 1

                              There’s clj-python for calling out to the Python ecosystem from Clojure.

                          1. 3

                            I wonder if that exact problem would have been easier if C++ had compile-time reflection built into the core language.

                            1. 1

                              It is not clear from the release notes, but does anybody know if Loom (fibers) made it into JDK 18?

                              https://wiki.openjdk.java.net/display/loom/Main

                              1. 2

                                It did not. The JEP still hasn’t targeted a version. There were a couple changes here that help enable Loom, including the reflection via method handles and the address resolution SPI.

                              1. 2
                                • Coding assignments during an interview (eg 15 min to code an algorithm for strings, graphs, dynamic programming, etc.) are always bad. Bad for the employer, for the candidate, and for the industry as a whole.

                                Mostly because the employer is aiming to hire people to write ‘novels’ – often multi-volume novels (if I may use that analogy) – but instead tests the candidates with ‘Jeopardy-style’ questions.

                                • Take-home coding is better, because (a) it reduces the effects of the already-wrong filtering, and (b) it gives the employer an opportunity to see how the candidate structures and expresses their thoughts after they have had time to think – which is closer to the type of environment they will actually work in.

                                  What is still bad about take-home interviews is the following:

                                  • it does not elicit the candidate’s ability to structure a larger system (if you are hiring a person with 10+ years of experience, most likely that is what you are looking for)

                                  • it does not elicit the candidate’s interest areas and the progress they have made on their own in those areas (eg self-study competencies)

                                  • it does not allow the candidate to demonstrate their particular strengths (which may be well outside the particular coding exercise)

                                  • I get that all those additional qualities can somehow be assessed by other means, but the coding question is always used as a ‘gateway’. In my view, if a take-home coding question is still used, it should be up to the candidate to decide at which stage of the interview to do it.


                                As I posted in some other replies on similar topics: there is ample space for a technology + social-sciences startup that could make the hiring process much better (and not just for the software-dev industry).

                                What we have today in this space is really, really bad. It slows down progress in the industry by suffocating creativity and long-term vision in software products, and by narrowing employment down to folks who can prepare well for tests and who have good/quick thinking ability.

                                The good part is that the companies that practice the ‘code-within-X-minutes’ style of exercises are filtering out really great candidates, who then become available to other employers with much better interview hygiene.

                                1. 5

                                  I’ve been playing with Dear ImGui recently. The documentation is terrible and probably should be listed in the ‘cons’, but the good news is that the abstractions are so simple that it doesn’t really need them. I’ve been using the ImTui back end, which renders with ncurses. There’s also one for rendering via WebSocket to a browser, along with many others for displaying in a more conventional GUI. I’d probably list a couple of other cons:

                                  • No accessibility features (yet - I think it’s planned)
                                  • No native look and feel (it’s mostly used in games, where this doesn’t matter)
                                  • Still a fairly young / small project, so lacks things like rich text layout APIs that you’d find in a more mature project.

                                  The docs spend so much time telling you what an immediate mode GUI isn’t that they forget to tell you what it is. Coming from OpenStep, I expected to hate it, and I’ve been really surprised at how much I’ve enjoyed programming with it. The idea behind an immediate-mode GUI is that you treat the GUI as stateless. Each frame, you call a function for each control, which creates the control if it doesn’t exist and tells you if the user interacted with it. The framework is free to cache things between frames (and probably needs to for performance), but this means that memory management is trivial. In the framework, there’s a map from control identifiers to controls; at the end of the frame, any controls that weren’t referenced can be deleted. Outside of the framework, there are no references to controls held by your code and no references to your code held by the framework.

                                  Rather than building a hierarchy of objects that represent controls, you execute code that builds the controls each frame. If you want a new window, you call a begin-window function, then a load of other functions that build controls in that window, and then end-window. Similarly, if you want any other kind of container view, you call a start function, then call functions to create things inside the list, then an end function. All of the functions that define containers (windows, tabs, combo boxes, and so on) return true if the thing is visible, false otherwise. This means that you can skip all of the code that displays things that aren’t visible.
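
                                  Roughly, a per-frame function following that pattern looks like this (a minimal sketch: the ImGui:: calls are the real Dear ImGui API, but the window, widgets, and variables are mine, and all backend/frame-loop setup is omitted):

                                      #include "imgui.h"

                                      // Called once per frame, between ImGui::NewFrame() and ImGui::Render().
                                      void draw_simulation_window(bool* show_details, float* speed)
                                      {
                                          // Begin() returns false when the window is collapsed, so all
                                          // the work of building its contents can be skipped.
                                          if (ImGui::Begin("Simulation"))
                                          {
                                              ImGui::Checkbox("Show details", show_details);   // true when toggled
                                              ImGui::SliderFloat("Speed", speed, 0.0f, 10.0f); // true while edited
                                              if (*show_details)
                                                  ImGui::Text("Current speed: %.2f", *speed);
                                              if (ImGui::Button("Reset"))                      // true on the click’s frame
                                                  *speed = 1.0f;
                                          }
                                          ImGui::End(); // always paired with Begin()
                                      }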

                                  If you want two different controls to bind to the same data, read from the data source when you instantiate them. If the user interacts with either control, you will be notified (by the return value) when you create the control for the next frame and so you can update the data source then and it will be propagated to any controls drawn later in the frame and into all controls in the next frame.
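
                                  Concretely, that is just two widgets handed the same pointer (again a sketch; save_to_model is a hypothetical write-back to whatever the data source is):

                                      // Two controls bound to the same data source: read it when submitting
                                      // each widget, write back when either one reports a change.
                                      void save_to_model(float v); // hypothetical model write-back

                                      void draw_volume_controls(float* volume)
                                      {
                                          bool changed = false;
                                          changed |= ImGui::SliderFloat("Volume (slider)", volume, 0.0f, 1.0f);
                                          changed |= ImGui::DragFloat("Volume (drag)", volume, 0.01f, 0.0f, 1.0f);
                                          if (changed)
                                              save_to_model(*volume); // propagates to anything drawn later
                                      }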

                                  All of this completely eliminates most of the complexity of MVC frameworks. You write functions that take on the complete role of a controller, with a very narrow interface to the views. Each frame they (logically) create the view and write back to the model if anything has changed. If you call a view function to create a control, then any notification of state change (button pressed, slider moved, text entered in a text field, and so on) will be provided either by the return value or by a callback that’s invoked on the top of the stack, so your state management is trivial.

                                  The immediacy of the feedback means that you don’t need to track any state. The shape of your GUI directly reflects the shape of the code, and any bit of your model that you’re reading for display is the same bit that you’d update for changes from the UI. If you want to display a load of items in a tree / outline view, then you’ll probably have a recursive function call for each layer in the tree and then a loop inside it for displaying each item at a given level. If you put buttons there and the user clicks one, then the function that creates the button will return true and you can handle it at the point where you’re already traversing that bit of your data structure.
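
                                  The tree / outline case might look something like this (a sketch; Node is a stand-in for the application’s own data, not a Dear ImGui type):

                                      #include "imgui.h"
                                      #include <string>
                                      #include <vector>

                                      struct Node {                 // hypothetical application data
                                          std::string name;
                                          std::vector<Node> children;
                                      };

                                      // One recursive call per tree level, one loop per set of children:
                                      // the GUI code has the same shape as the data it displays.
                                      void draw_tree(Node& node)
                                      {
                                          // TreeNode() returns true while this node is expanded; the
                                          // pointer overload keeps widget IDs unique when names repeat.
                                          if (ImGui::TreeNode(&node, "%s", node.name.c_str()))
                                          {
                                              if (ImGui::Button("Do something"))
                                              {
                                                  // Clicked this frame: we are already ‘at’ this node in
                                                  // the traversal, so we can act on it directly here.
                                              }
                                              for (Node& child : node.children)
                                                  draw_tree(child);
                                              ImGui::TreePop(); // only when TreeNode() returned true
                                          }
                                      }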

                                  1. 5

                                    Big fan of immediate mode GUIs and contributor to Nuklear.

                                    No accessibility features

                                    One of the big hurdles of most non-native GUI toolkits, not just the IMGui variants. Doing this properly can be done, but ohh dear is it a big chunk of work. Not exactly accessibility per se, but smaller steps like improving multi-language input were the missing piece in my use-case. As long as you stick to a use-case where being accessible isn’t required (embedded, machine touchscreens), immediate mode GUIs are a godsend. PS: Mainstream screen readers for the visually impaired have an OCR fallback, though this is no excuse of course.

                                    Still a fairly young / small project

                                    Even with all the crazy industry support Dear ImGui pulled in, this will always be the case. It’s a fairly niche style to program in. When I tell seniors “this paradigm mixes code and GUI definitions in a very pleasant way” they take out a crucifix and try to repel the sinful idea of mixing code like I’m the devil himself. MVC is the standard I learned in uni, and it will never leave bigger team sizes, I think.

                                    All of this completely eliminates most of the complexity

                                    When doing very complex layouts, IMGui can backfire hard. A straight shotgun to the foot, as you start to re-implement all the state and abstractions you got rid of in the first place. But for anything with a more straightforward purpose and a simpler design, the IMGui style is unparalleled in simplicity and comfort. Niche, but when it fits, it’s glorious.

                                    1. 1

                                      Big fan of immediate mode GUIs and contributor to Nuklear.

                                      I had a brief look at Nuklear but when it said that it used ANSI C89 I ran away.

                                      MVC is the standard I learned in uni and will never leave bigger team sizes, I think.

                                      That’s definitely true now, although model-view-update is increasingly fashionable and I think immediate-mode GUIs fit that model very cleanly.

                                      When doing very complex layouts, IMGui can backfire hard. A straight shotgun to the foot, as you start to re-implement all the state and abstractions you got rid of in the first place. But for anything with a more straightforward purpose and a simpler design, the IMGui style is unparalleled in simplicity and comfort. Niche, but when it fits, it’s glorious.

                                      I haven’t reached that point yet, but creating a tree view containing a row of different views that all update the part of the data structure that they’re expanding is an order of magnitude less code than it is with Cocoa / GNUstep. My normal litmus test for a GUI toolkit is whether it can smoothly scroll a table view with ten million rows. I haven’t tried that with Dear ImGui yet. I presume that I’d need to draw the visible rows myself, which is fairly easy if it can tell me which ones need to be visible. GNUstep does a bit of caching here, rendering more rows and clipping. I believe Cocoa now renders a larger range to a texture and just composites it in the right place.
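
                                      For what it’s worth, Dear ImGui does ship a helper for exactly that: ImGuiListClipper works out which rows fall inside the visible region so that you only submit those. Something like this sketch (the window name and row drawing are mine):

                                          #include "imgui.h"

                                          // Submit only the visible rows of a ten-million-row list.
                                          void draw_big_list()
                                          {
                                              if (ImGui::Begin("Big table"))
                                              {
                                                  ImGuiListClipper clipper;
                                                  clipper.Begin(10000000); // total row count
                                                  while (clipper.Step())
                                                      for (int row = clipper.DisplayStart; row < clipper.DisplayEnd; row++)
                                                          ImGui::Text("Row %d", row); // fetch/draw only these rows
                                              }
                                              ImGui::End();
                                          }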

                                      1. 1

                                        ANSI C89

                                        You may use it from any C or C++ standard. The C89 thing was originally done by vurtun back in the day to ensure compatibility with even the most obscure embedded compiler. Since most of the work was done, the C89 thing was kept going, to keep that quite unique part of the project. Some of the C tricks used to make it happen are quite interesting and were mentioned in C-related articles from time to time.

                                        model-view-update

                                        Ohh, didn’t know such a thing existed. Gonna read up on it, thanks!

                                    2. 2

                                      Is there a layout engine that computes (and re-computes) the relative positions of controls when the window gets resized?

                                    1. 2

                                      The Single Page App is a response to the failure of OS makers – over the last 30 years (before iOS/Android and SPAs) – to recognize the need for, and come up with, uniform, secure, performant UI platforms, and expressive, non-vendor-locked, secure methods of application distribution, deployment, and updates.

                                      If there had been:

                                      • a decentralized ‘app store’ – not just for apps, but for ‘suits’, bundles, and ‘gardens’,
                                      • a uniform UI platform + identity management platform, that worked without installing a bunch of ‘extra’ things,
                                      • without vendor lock-ins,
                                      • with commerce-friendly licensed tech

                                      then there would not have been ‘SPAs’. Until then, SPAs are needed; it is not about ‘history’, it is about the ability to see, interact with, and compose what you need – on one screen, with minimal (or no) network bandwidth requirements.

                                      1. 3

                                        This article isn’t about SPAs vs. apps, it’s about SPAs vs. multi-page apps, and how many of the contortions involved in an SPA are already handled by the browser, and are generally handled more efficiently and correctly w.r.t. expected site UX and accessibility.

                                        For your other points:

                                        Different platforms have drastically different core UI concepts; the obsession with having a single cross-platform UI framework is an obsession with there being a single OS that everyone has to use. Every “cross platform” UI framework screws up a bunch of things on its non-primary platforms, because different platforms are, you know, different. Changing the UX of every platform (minus one) to match one primary platform (which would, out of necessity, be Windows) would represent a significant usability break for the majority of platforms.

                                        what the heck are “suits and gardens”?

                                        Vendor lock-in: e.g. there is one OS to rule them all. Different OSs are different, and people generally like their OS of choice.

                                        what is a “commerce friendly” license?

                                      1. 3

                                        I have a question: for folks who do not like IDEs and use non-language specific editors.

                                        How do you refactor code (eg restructuring function relations, or changing the type or order of arguments of some common function that needs to change)?
                                        Say, for a code base that’s 100K+ lines and above, or a code base of 1.5 million lines that has maybe 5-8 devs working on it, concurrently.

                                        I cannot imagine doing that without the help of a type/language-aware IDE.

                                        I am sure that in the future, formal methods for things like global state and data-flow verification will be built into IDEs. Property-based testing will be done (optionally) at compile time, and security assessment of dependencies will be made visible through the IDE…

                                        Perhaps I agree, to a degree, with the author that statistics/ML-based code suggestions as a ‘central element’ of coding are not that useful. But that’s not a fault of the IDE as a concept; it is just that the particular focus of GitHub’s Copilot might not be that visionary.

                                        1. 5

                                          90% of my refactoring work is contained to a couple of files, max. For those times when I need to refactor at the scale you describe, it’s usually not feasible to do so in a single pass anyway. That means I need to keep the old interface around. There are also a lot of automated refactoring tools that aren’t IDE-specific. This is very much a matter of preference, though.

                                          1. 4

                                            The last time I did a refactor to push an additional parameter onto a stack of methods in golang, it was laborious. Next time I try that, I’ll be setting up the Go language server for Emacs, and if that doesn’t have it, I’ll try an IDE.

                                            1. 4

                                              I would say neovim is “non-language specific”, but you can get refactoring support, live diagnostics, and such IDE features through LSP.

                                              I find it more convenient than “real” IDEs, since I don’t have to switch between different ones for working on different projects in different languages.

                                              I use it, among other things, at work on a large monorepo with various Java services and also frontend clients, for which IntelliJ IDEA tells me to download another IDE.

                                              1. 1

                                                also frontend clients, for which IntelliJ IDEA tells me to download another IDE

                                                This is surprising to me. I regularly use a single instance of IntelliJ IDEA Ultimate to work in Kotlin, Python, Java, and TypeScript. Sometimes I use it to edit HCL (Terraform) too. I had to install language plugins but everything coexists without much trouble in my experience, both single projects with multiple languages and separate single-language projects.

                                                What other IDE was it telling you to download, if you happen to remember?

                                                1. 1

                                                  What other IDE was it telling you to download, if you happen to remember?

                                                  I think it was WebStorm, when I opened up an Angular file.

                                                  There’s also the issue that it’s all pixelated on a scaled Wayland display, and that I can’t use a pointing device, and even with the vim plugin there’s many things I don’t know how to do, or do comfortably, without a pointing device. And the sluggishness.

                                                  I tried to fix those things but it was easier for me to add capabilities to Neovim.

                                              2. 1

                                                We avoid having codebases of that size. The largest we got to was 75 kloc before we broke it up into smaller components. It was still maintainable, because it was decomposed into sub-areas that would talk to each other over well-defined interfaces, and it was supported by a bunch of helper code that was pulled out into libraries.

                                                Generally speaking if you can make any change and it will affect an appreciable fraction of the entire codebase, you’re either developing a compiler or your repository’s architecture is questionable.

                                              1. 44

                                                Name popular OSS software, written in Haskell, not used for Haskell management (e.g. Cabal).

                                                AFAICT, there are only two, pandoc and XMonad.

                                                This does not strike me as being an unreasonably effective language. There are tons of tools written in Rust you can name, and Rust is a significantly younger language.

                                                People say there is a ton of good Haskell locked up in fintech, and that may be true, but a) fintech is weird because it has infinite money and b) there are plenty of other languages used in fintech which are also popular outside of it, eg Python, so it doesn’t strike me as being a good counterexample, even if we grant that it is true.

                                                1. 28

                                                  Here’s a Github search: https://github.com/search?l=&o=desc&q=stars%3A%3E500+language%3AHaskell&s=stars&type=Repositories

                                                  I missed a couple of good ones:

                                                  • Shellcheck
                                                  • Hasura
                                                  • Postgrest (which I think is a dumb idea, lol, but hey, it’s popular)
                                                  • Elm
                                                  • Idris, although I think this arguably goes against the not used for Haskell management rule, sort of

                                                  Still, compare this to any similarly old and popular language, and it’s no contest.

                                                  1. 15

                                                    Also Dhall

                                                    1. 9

                                                      I think postgrest is a great idea, but it can be applied in very wrong situations. If you’re not familiar with Postgres, you might be surprised by how much application logic can be modelled purely in the database without turning it into spaghetti. At that point, you can make the strategic choice of modelling a part of your domain purely in the DB and letting the clients work directly with it.

                                                      To put it differently, postgrest is an architectural tool, it can be useful for giving front-end teams a fast path to maintaining their own CRUD stores and endpoints. You can still have other parts of the database behind your API.

                                                      1. 6

                                                        I don’t understand Postgrest. IMO, the entire point of an API is to provide an interface to the database and explicitly decouple the internals of the database from the rest of the world. If you change the schema, all of your Postgrest users break. An API is an abstraction layer serving exactly what the application needs and nothing more, and it provides a way to maintain backwards compatibility if you need it. You might as well just send an SQL query to a POST endpoint and eliminate the need for Postgrest – not condoning it, but saying how silly the idea of Postgrest is.

                                                        1. 11

                                                          Sometimes you just don’t want to make any backend application, only to have a web frontend talk to a database. There are whole “as-a-Service” products like Firebase that offer this as part of their functionality. Postgrest is self-hosted that. It’s far more convenient than sending bare SQL directly.

                                                          1. 6

                                                            With views, one can largely get around the ‘break the schema, break the API’ problem. Even so, as long as the consumers of the API are internal, you control both ends, so it’s pretty easy to just schedule your cutovers.

                                                            But I think the best use-case for Postgrest is old stable databases that aren’t really changing stuff much anymore but need to add a fancy web UI.

                                                            The database people spend 10 minutes turning up Postgrest and leave the UI people to do their thing and otherwise ignore them.

                                                            1. 1

                                                              Hah, I don’t get views either. My philosophy is that the database is there to store the data. It is the last thing that scales. Don’t put logic and abstraction layers in the database; there is plenty of compute available outside of it, and APIs can do the precise data abstraction the apps need. Materialized views, maybe, but it still feels wrong. SQL is a pain to write tests for.

                                                              1. 11

                                                                Your perspective is certainly a reasonable one, but not one I or many people necessarily agree with.

                                                                The more data you have to mess with, the closer you want the messing to be to the data – i.e. in the same process, if possible :) Hence PL/pgSQL and all the other languages that can be embedded into SQL databases.

                                                                We use views mostly for 2 reasons:

                                                                • Reporting
                                                                • Access control.
                                                                1. 2

                                                                  Have you checked row-level security? I think it creates a good default, and then you can use security definer views for when you need to override that default.

                                                                  1. 5

                                                                    Yes, that’s exactly how we use access control views! I’m a huge fan of RLS, so much so that all of our users get their own role in PG, and our app(s) auth directly to PG. We happily encourage direct SQL access for our users, since all of our apps use RLS for their security.

                                                                    Our biggest complaint with RLS: none(?) of the reporting front ends out there have any concept of RLS, or really of DB security in general; they AT BEST offer some minimal app-level security that’s usually pretty annoying. I’ve never been upset enough to write one… yet, but I hope someone someday does.

                                                                    1. 2

                                                                      That’s exactly how we use access control views! I’m a huge fan of RLS, so much so that all of our users get their own role in PG

                                                                      When each user has their own role, usually that means ‘Role explosion’ [1]. But perhaps you have other methods/systems that let you avoid that.

                                                                      How do you do, for example: user ‘X’, when operating at location “Poland”, is not allowed to access report data ‘ABC’ before 8am and after 4pm UTC-2 – in Postgres?

                                                                      [1] https://blog.plainid.com/role-explosion-unintended-consequence-rbac

                                                                      1. 3

                                                                        Well, in PG a role IS a user; there is no difference. But I agree that RBAC is not ideal when your user count gets high, as management can be complicated. Luckily our database includes all the HR data, so we know this person is employed in this job on these dates, etc. We utilize that information in our, mostly automated, user controls and accounts. When someone is a supervisor, they have the permission(s) given to them, and they can hand them out like candy to their employees, all within our UI.

                                                                        We try to model the UI around “capabilities”, although it’s implemented through RBAC obviously, and is not a capability-based system.

                                                                        So each supervisor is responsible for their employees’ permissions, and we largely try to stay out of it. They can’t define the “capabilities”; that’s on us.

                                                                        How do you do, for example: user ‘X’, when operating at location “Poland”, is not allowed to access report data ‘ABC’ before 8am and after 4pm UTC-2 – in Postgres?

                                                                        Unfortunately PG’s RBAC doesn’t really allow us to do that easily, and we luckily haven’t yet had a need to do something that detailed. It is possible, albeit non-trivial. We try to limit our access rules to more basic stuff: supervisor(s) can see/update data within their sphere but not outside of it, etc.

                                                                        We do limit users based on their work location, but not their logged-in location. We do log all activity in an audit log, which is just another DB table, and it’s in the UI for everyone with the right permissions (so a supervisor can see all their employees’ activity whenever they want).

                                                                        Certainly different authorization system(s) exist, and they all have their pros and cons, but we’ve so far been pretty happy with PG’s system. If you can write a query to generate the data needed to make a decision, then you can make the system authorize with it.

                                                                2. 4

                                                                  My philosophy is “don’t write half-baked abstractions again and again”. PostgREST & friends (like Postgraphile) provide selecting specific columns, joins, sorting, filtering, pagination, and more. I’m tired of writing that again and again for each endpoint, except each endpoint is slightly different, as it supports sorting on different fields or different styles of filtering. PostgREST does all of that once and for all.

                                                                  Also, there are ways to test SQL, and databases supporting transaction isolation actually simplify running your tests. Just wrap your test in a BEGIN; ROLLBACK; block.

                                                                  1. 2

                                                                    Idk, I’ve been bitten by this. It’s probably OK in a small project, but this is a dangerous, tight coupling of the entire system. Next time a new requirement comes in that requires changing the schema – RIP; you wouldn’t even know which services would break and how many things would go wrong. Write fully-baked, well-tested, requirements-contested, exceptionally vetted, and excellently thought-out abstractions.

                                                                    1. 6

                                                                      Or just use views to maintain backwards compatibility and generate typings from the introspection endpoint to typecheck clients.

                                                              2. 1

                                                                I’m a fan of tools that support incremental refactoring and decomposition of a program’s architecture w/o major API breakage. PostgREST feels to me like a useful tool in that toolbox, especially when coupled with procedural logic in the database. Plus there’s the added bonus of exposing the existing domain model “natively” as JSON over HTTP, which is one of the rare integration models better supported than even the native PG wire protocol.

                                                                 With embedded subresources and full SQL view support, you can quickly get to something that’s as straightforward for a FE project to talk to as a bespoke REST or GraphQL backend. Keeping the schema definitions in one place (i.e., the database itself) means less mirroring of the same structures and serialization approaches in multiple tiers of my application.

                                                                I’m building a project right now where PostgREST fills the same architectural slot that a Django or Laravel application might, but without having to build and maintain that service at all. Will I eventually need to split the API so I can add logic that doesn’t map to tuples and functions on them? Sure, maybe, if the app gets traction at all. Does it help me keep my tiers separate for now while I’m working solo on a project that might naturally decompose into a handful of backend services and an integration layer? Yep, also working out thus far.

                                                                There are some things that strike me as awkward and/or likely to cause problems down the road, like pushing JWT handling down into the DB itself. I also think it’s a weird oversight to not expose LISTEN/NOTIFY over websockets or SSE, given that PostgREST already uses notification channels to handle its schema cache refresh trigger.

                                                                Again, though, being able to wire a hybrid SPA/SSG framework like SvelteKit into a “native” database backend without having to deploy a custom API layer has been a nice option for rapid prototyping and even “real” CRUD applications. As a bonus, my backend code can just talk to Postgres directly, which means I can use my preferred stack there (Rust + SQLx + Warp) without doing yet another intermediate JSON (un)wrap step. Eventually – again, modulo actually needing the app to work for more than a few months – more and more will migrate into that service, but in the meantime I can keep using fetch in my frontend and move on.

                                                            2. 2

                                                              I would add Shake:

                                                              https://shakebuild.com

                                                              Not exactly a tool, but a great DSL.

                                                            3. 21

                                                              I think it’s true that, historically, Haskell hasn’t been used as much for open source work as you might expect given the quality of the language. I think there are a few factors in play here, but the dominant one is simply that the open source projects that take off tend to be ones that a lot of people are interested in and/or contribute to. Haskell has, historically, struggled with a steep on-ramp, and that means that the people who persevered and learned the language well enough to build things with it were self-selected to be the sorts of people who were highly motivated to work on Haskell and its ecosystem, but it was less appealing if your goal was to do something else and get it done quickly. It’s rare for Haskell to be the only language that someone knows, so even among Haskell developers I think it’s been common to pick a different language if the goal is to get a lot of community involvement in a project.

                                                              All that said, I think things are shifting. The Haskell community is starting to think earnestly about broadening adoption and making the language more appealing to a wider variety of developers. There are a lot of problems where Haskell makes a lot of sense, and we just need to see the friction for picking it reduced in order for the adoption to pick up. In that sense, the fact that many other languages are starting to add some things that are heavily inspired by Haskell makes Haskell itself more appealing, because more of the language is going to look familiar and that’s going to make it more accessible to people.

                                                              1. 15

                                                                There are tons of tools written in Rust you can name

                                                                I can’t think of anything off the dome except ripgrep. I’m sure I could do some research and find a few, but I’m sure that’s also the case for Haskell.

                                                                1. 1

                                                                You’ve probably heard of Firefox, and maybe also Deno. When you look through the GitHub Rust repos by stars, there are – weirdly – a bunch of ls clones, lol.

                                                                2. 9

                                                                  Agree … and finance and functional languages seem to have a connection empirically:

                                                                  • OCaml and Jane St (they strongly advocate it, mostly rejecting polyglot approaches, doing almost everything within OCaml)
                                                                  • the South American bank that bought the company behind Clojure

                                                                  I think it’s obviously the domain … there is simply a lot of “purely functional” logic in finance.

                                                                  Implementing languages and particularly compilers is another place where that’s true, which the blog post mentions. But I’d say that isn’t true for most domains.

                                                                  BTW, git-annex appears to be written in Haskell. However, my experience with it is mixed. It feels like git itself is more reliable, and it’s written in C/Perl/Shell. I think the dominating factor is just the number and skill of developers, not the language.

                                                                  1. 5

                                                                    OCaml also has a range of more or less (or once) popular non-fintech, non-compiler tools written in it. LiquidSoap, MLDonkey, Unison file synchronizer, 0install, the original PGP key server…

                                                                    1. 3

                                                                      Xen hypervisor

                                                                      1. 4

                                                                        The MirageOS project always seemed super cool. Unikernels are very interesting.

                                                                        1. 3

                                                                          Well, the tools for it, rather than the hypervisor itself. But yeah, I forgot about that one.

                                                                      2. 4

                                                                        I think the connection with finance is that making mistakes in automated finance is actually very costly in expectation, whereas making mistakes in a social network or something is typically not very expensive.

                                                                      3. 8

                                                                        Git-annex

                                                                        1. 5

                                                                          Not being popular is not the same as being “ineffective”. Likewise, something can be “effective”, but not popular.

                                                                          Is JavaScript a super effective language? Is C?

                                                                          Without going too far down the language holy war rabbit hole, my overall feeling after so many years is that programming language popularity, in general, fits a “worse is better” characterization where the languages that I, personally, feel are the most bug-prone, poorly designed, etc, are the most popular. Nobody has to agree with me, but for the sake of transparency, I’m thinking of PHP, C, JavaScript, Python, and Java when I write that. Languages that are probably pretty good/powerful/good-at-preventing-bugs are things like Haskell, Rust, Clojure, Elixir.

                                                                          1. 4

                                                                            In the past, a lot of the reason I’ve seen people turned away from using Haskell-based tools has been the perceived pain of installing GHC, which admittedly is quite large, and it can sometimes be a pain to figure out which version you need. ghcup has improved that situation quite a lot by making the process of installing and managing old compilers significantly easier. There’s still an argument that GHC is massive, which it is, but storage is pretty cheap these days. For some reason I’ve never seen people make similar complaints about needing to install multiple versions of Python (though this is less of an issue these days).

                                                                            The other place where large Haskell codebases are locked up is Facebook – Sigma processes every single post, comment, and message for spam, at 2,000,000 req/sec, and is all written in Haskell. Luckily the underlying tech, Haxl, is open source – though few people seem to have found a particularly good use for it; you really need to be working at quite a large scale to benefit from it.

                                                                            1. 2

                                                                              hledger is one I use regularly.

                                                                              1. 2

                                                                                Cardano is a great example.

                                                                                Or Standard Chartered, which is a very prominent British bank, and runs all their backend on Haskell. They even have their own strict dialect.

                                                                                1. 2

                                                                                  GHC.

                                                                                  1. 1

                                                                                    https://pandoc.org/

                                                                                    I used pandoc for a long time before even realizing it was Haskell. Ended up learning just enough to make a change I needed.

                                                                                  1. 3

                                                                                    For those who, like me, had a hard time finding a summary of what this is all about, here it is:

                                                                                    What Phantom OS is

                                                                                    To be short:

                                                                                    • Orthogonal persistence. An application does not notice OS shutdown and restart – even an abrupt restart. It is guaranteed that the application will be restarted in a consistent state that is not too old. As long as you have a reference to any variable, its state is the same between OS reboots. You don’t have to (though you can) save program state to files. It is persistent.

                                                                                    • Managed code. Native Phantom applications run in a bytecode machine. (But it is worth mentioning that Phantom has a simple POSIX compatibility subsystem too.)

                                                                                    • Global address space. Phantom OS is an application server. All applications can communicate directly, by sharing objects.

                                                                                    Phantom OS persistence is achieved not by serializing data to files, but by running all applications in persistent RAM. You can think of the Phantom memory subsystem as a persistent paging engine (and that will be true): all of the memory is paged to disk in a way that lets the OS restore the whole memory image on restart. Consistently.

                                                                                    (From the developer’s guide, which might be a better starting point for introducing something like this than the particular kind of README in this repo.)

                                                                                    1. 1

                                                                                      I like this idea. I don’t know if I would switch my main OS to this, but it could be useful for virtual machines, like setting up a dev environment that has a lot of watcher scripts.

                                                                                      1. 4

                                                                                        Resetting application state via OS reboots has been the ‘repair’ tool of our trade for more than two decades. :-) Some technologies (eg Erlang) made it into a feature.

                                                                                        But yes, this is very interesting. Perhaps another area where this would be very helpful is running in a ‘dually-verifiable mode’ (my naming), where apps that are similar in functionality are separately designed and run – even on different hardware – yet periodically compare their state, to ensure that the results are not diverging. If they do diverge, the instance with the ‘correct state’ should take over.

                                                                                    1. 1

                                                                                      Very good to know, thank you. Especially because it works in environments without exceptions.

                                                                                      Is there such a thing for C, by any chance?

                                                                                      1. 4

                                                                                        I’m tickled that it uses AA NiMH batteries.

                                                                                        I’ve read about the old Tandy Model 100 from the early 1980s, which ran on AA batteries. Apparently they were somewhat popular with journalists, because one could write and either save to a cassette or hook up to a modem and upload to the office. They could run for a day or so on a fully charged set of batteries. That concept always fascinated me.

                                                                                        I doubt you could get anywhere near that sort of battery life with a modern Linux device, because of all the stuff that it does in the background.

                                                                                        1. 5

                                                                                          My M1 MacBook Air lasts all day, especially under light-ish usage, and having a computer with that kind of battery life has definitely been a game changer! I usually don’t even bring a charger with me when I leave the house any more.

                                                                                          1. 5

                                                                                            I had several PalmOS devices – a PalmPilot Pro, a III, a Handspring Visor – and they all ran on 3 AAA batteries for about a month of use. Then I got a Treo, which had a rechargeable battery pack and a cell voice/data modem.

                                                                                            As I recall, the upgraded III was basically on par with a Macintosh SE/30: a 32-bit Motorola 68K-series CPU, 2 MB of RAM – which was static, used for both working memory and storage – a greyscale screen of just a little less resolution than the Mac’s monochrome screen, and a serial port and an IrDA port.

                                                                                              There’s not much that a Linux box has to be doing all the time. Eight to ten processes, mostly idle, is what you get at boot time. Write software with an eye towards power efficiency and you can do lots of useful stuff on constrained hardware.

                                                                                            1. 2

                                                                                              Actually the (pre-Lithium) Palms all took two AAA batteries. And yeah, they would run for weeks. And that was with keeping the DRAM alive 24/7 so that your data wouldn’t be lost.

                                                                                              Palm III series had between 2 and 8 MB of RAM depending on which model, a 16MHz 68k, and 160x160 LCD. Later models on the same architecture went as far as 33MHz CPUs and 16MB of RAM, and some devices had color and/or higher-res screens, although that became more common once they went ARM.

                                                                                              A semi-forgotten Palm device is the AlphaSmart Dana, which takes a 2001-era Palm (33MHz DragonBall, 16MB of RAM) and puts it in a laptop-ish form-factor with a real keyboard, and widens the screen to 560x160 (though apps not written specifically for it run in the center 160x160). One model even had WiFi.

                                                                                            2. 4

                                                                                              I owned an Amstrad NC100 for a while. Never put it to any serious use, but it was great - acceptable keyboard, all-day battery life from AAs, and PCMCIA card support.

                                                                                              https://duncan.bayne.id.au/photos/Retro_Computers/Amstrad_NC-100_and_case_Original.jpg

                                                                                              1. 4

                                                                                                The Psion 5 series (and its descendants) of the late nineties could also get a day out of a set of AAs, and could be made to run Linux. They had great keyboards, too.

                                                                                                1. 5

                                                                                                  I had a Series 3, which got 2-4 weeks of moderate use out of a pair of AAs. The crappy battery life in comparison was the thing that put me off ever getting a Series 5. The Series 3 had quite similar specs to the original IBM PC. It used a RAM disk for most persistent storage (it also had a little Lithium battery that would protect the RAM if the AAs ran out and while you were changing them).

                                                                                                  It was a fantastic machine. I wrote a load of essays for school on it and also learned a lot about how to write terrible code (it had a built-in compiler for a BASIC-like language called OPL). I probably used it more than my desktop. In some respects, computers are like cameras: the best one is the one you have access to. The Psion fitted in my jacket pocket and so was with me all of the time.

                                                                                                    I had an RS-232 adaptor for mine that let me copy files to a big computer easily, so I could write things in the simple word processor (which wasn’t WYSIWYG, though it could do some styling and, I think, export to RTF) and then spell-check and format them on a desktop. The word processor used around 10 KiB of RAM, most of which was the open document – it couldn’t fit a spell-checking dictionary in that footprint. I think the version for the larger 3a or 3c might have had one.

                                                                                                  There’s a DOS emulator for the Series 3a, which runs well in DOSBox. If you tweak the ini file, you can get it to use a full 640x480 screen. I still use it periodically because I prefer the 3a’s spreadsheet to anything produced subsequently for simple tasks.

                                                                                                  1. 3

                                                                                                      In retrospect, all of these were pleasant devices to use that have stood the test of time very well. The use of AA batteries also gives them a kind of longevity that I doubt modern devices will have.

                                                                                                    1. 2

                                                                                                        I think I got mine in 1993. The mother of a rich friend had upgraded to the 3a and sold hers quite cheaply (I think it was £120? They were £250 at launch). It came with the spreadsheet on a ROM disk and I also bought a flash SSD (I can’t remember if it was 128 KiB or 256 KiB). The flash disk was a single cell, so you could store files there but you couldn’t reclaim space until you did a complete erase. I mostly used it to store text adventures from the Lost Treasures of Infocom (which I think I still have somewhere, on 5.25” floppies. Unfortunately, I haven’t seen any off-the-shelf USB 5.25” floppy drives. At some point, I’ll have to find an early Pentium that still has the right controller).

                                                                                                      I don’t remember when I stopped using it. I was definitely using it on a daily basis in 1998. It might have died around then. I don’t remember using it at university when I went in 2000. For the amount of use and abuse (it was carried around in the pocket of a teenage boy for 5 years) it got, the purchase price was incredibly low. I don’t think I’ve owned a pocket-sized device that’s been as useful since then.

                                                                                                      I did manage to get on the Nokia 770 open source developers programme a few years later. Nokia gave a 2/3 discount on these machines to a load of people who were doing open source work. Unfortunately, a machine running Linux and X11 in 64MiB of RAM with no swap was… not a great experience. It was fine running vim in a full-screen xterm, anything else and the OOM killer would come along. The OOM killer’s policy was to kill the largest application, which usually meant the app with the most unsaved data. Or, if you were really unlucky, the X server. I used it with a ThinkOutside folding keyboard (which I still have and which still works well) to write a load of articles and a few book chapters. It wasn’t nearly as versatile as the Psion though.

                                                                                                      My phone is now something on the order of three orders of magnitude more powerful than the Psion but I don’t find I use it as much as I used the Psion. I wouldn’t write a 3,000 word doc on my phone with the on-screen keyboard but I did that several times on the Psion with its built-in keyboard without any problems.

                                                                                                      1. 1

                                                                                                          These days the ‘test of time’ is probably considered a bug.

                                                                                                          • Electronic devices come with non-replaceable batteries.
                                                                                                          • Android phone manufacturers pride themselves on ‘two years of OS upgrades’ as the ‘limit’, while companies like Slack rapidly discontinue support for 4+-year-old OS versions, so that ‘business users’ keep buying new devices every 2-3 years.
                                                                                                          • This practice of ‘2-3 year’ usage seems to propagate through almost every sector of manufacturing. Economic growth is linked to the sale of ‘new things’, not to the maintenance/upgradeability of the old. The ‘quality of architecture or design’ is measured not by how long those decisions last, but by how easily they can be changed.
                                                                                                        1. 4

                                                                                                          iOS devices seem to have a much longer update lifetime. The iPhone 6 (released 2014) seems to still get OS security updates and the 6S (2015) can run the latest OS. LineageOS now does OTA updates, so (after the initial, quite painful, install which requires unlocking bootloaders and doing things that can potentially brick the device) it’s quite easy to get third-party OS support for a lot of devices. I’m using a OnePlus 5T (2017) and it happily runs Android 11 via LineageOS (presumably it will support 12 at some point, it usually takes a few months for a new AOSP release to make it into LineageOS).

                                                                                                          The EU is currently in the process of rolling out labelling requirements that will mandate device manufacturers commit up-front to how long they’ll provide security updates and the maximum interval between vulnerability disclosure and patch for Internet-connected devices. This should help the incentives a bit.

                                                                                                          Software for the Psion Series 3 was mostly delivered on ROM, a few things were provided on floppy disks and required you to own the serial adaptor so that you could copy them into the (scarce) RAM or external flash disks. There was a (quite small) print catalogue of all of the available software. I never had a software update for any of the software that I ran on my Series 3.

                                                                                                          1. 2

                                                                                                            My recent experience with Android:

                                                                                                            • Bought a flagship LG phone at the end of 2016 with Android 6
                                                                                                            • 2017 Received Android 7 update
                                                                                                            • 2021 LG exited Mobile business. As part of the exit, they stopped providing the free bootloader unlocking service (discontinued Dec 2021) and OS upgrades.
                                                                                                              • 2022 I desperately need to install Slack on my phone. Slack stopped supporting Android 7 in August 2021; it now requires Android 8 and up (and they disable the ability to use the app from a mobile browser too!)
                                                                                                            • Now I cannot unlock LG bootloader, therefore cannot even try to upgrade the phone, therefore cannot install Slack. Therefore, need a new device.

                                                                                                              On a separate occasion, I recently had to throw away expensive Bluetooth headsets because the batteries could no longer hold a charge. Last year I had to do the same with HP tablets whose non-replaceable batteries no longer held a charge.

                                                                                                            I am not even talking about appliances where doors break, etc.

                                                                                                              There seems to be something in the global manufacturing stance, the financial incentives, and the environmental non-concern that allows and actively promotes this constant ‘replace-the-whole-item’ mentality. At least that’s how it feels to me.

                                                                                                              I am now looking for an 8-9 inch Windows tablet that I can carry around and have Slack on, instead of Android, but most major manufacturers have stopped making tablets with Windows (because, at least up to Windows 10, the tablet UI and the CPU/battery-life ratios are subpar).

                                                                                                              What you mentioned about labelling in the EU makes good sense, but is probably not anywhere near enough.

                                                                                                              I hope that, in general, the longevity of devices, appliances and other consumables receives significant attention from policy makers across the world. It seems that leaving it to manufacturers and financial systems did not produce reasonable outcomes…

                                                                                                            1. 3

                                                                                                              I’m in a similar situation. I accidentally bought a Prime “exclusive” Moto G6 back in 2018. Well, my GF bought it for me with my money and didn’t read the fine print. You can’t unlock the bootloader on this particular phone through Motorola, because it was sold as an Amazon Prime “exclusive”. It hasn’t gotten an update since April of 2020.

                                                                                                              I’d love to install a custom ROM on this. I greatly extended the life of a previous Android device by installing Cyanogen Mod on it several years back. But I can’t, because I don’t control what I supposedly own. The whole situation is utterly ludicrous.

                                                                                                              1. 2

                                                                                                                To add irony to insult and injury, I chose the Moto G6 primarily to avoid yet another user-hostile anti-feature popularized by Apple: the lack of a 3.5 mm headset jack.

                                                                                                              2. 3

                                                                                                                I had an Asus Transformer Prime TF700, ran the stock firmware, and then some corruption in the flash caused it to get stuck in a boot loop. I never unlocked the bootloader and apparently I can’t do that without the device in a bootable state, so it became a paperweight. From that experience, I learned that the first thing that I do with an Android device is unlock the bootloader and replace the firmware with LineageOS.

                                                                                                                The problem in the Android ecosystem is the way that the incentives are aligned. If you buy an iPhone, Apple makes money in two ways:

                                                                                                                • The iPhone has a reasonably large markup.
                                                                                                                • They take a 30% cut of every app you install.

                                                                                                                This means that they have an incentive to keep devices supported because if you can’t run new apps then you won’t buy more apps. A lot of people also sell their iPhones every 1-2 years to buy the new flagship ones and the people who buy the second-hand ones often couldn’t afford a new one. Apple still gets revenue from the second-hand sales.

                                                                                                                  With the Android ecosystem, the first of these goes to the device manufacturer, the second to Google. This means that, once a device has shipped, there’s no incentive for the manufacturer to do anything, and the sooner the device stops working, the sooner the customer buys another one. I proposed a simple fix for this to the Android security team about 8 years ago when they were complaining about hardware vendors not deploying security updates: divert 5-10% of app sale and ad revenue to the device vendor for every app that’s purchased on a device with the latest OS and all security patches installed. If your handset is fully up to date, the manufacturer gets 5-10% of the revenue (Google gets 20-25%); if it isn’t, then Google gets the full 30%.

                                                                                                        2. 2

                                                                                                            I bought the Planet Gemini phone, which is in the Psion form factor. It’s a great little computer, but a rather expensive phone.

                                                                                                          1. 2

                                                                                                              Thanks for mentioning this. The Astro Slide 5G interests me (although I would prefer an 8 inch device).

                                                                                                              How good are they with long-term support (e.g. OS updates, unlocking, battery replacement, etc.)?

                                                                                                              I like my devices to last 1 year for every $100 spent (or much better than that). So $800 means at least 8 years to me.

                                                                                                            1. 2

                                                                                                                The Gemini had a couple of updates, but it is currently on Android 8.1.0. The boot loader allows you to use your own OS, and you could boot multiple ROMs. I think if you wanted to get 8 years out of it you might need something like Sailfish OS… I keep planning on playing with PostMarket OS on it.

                                                                                                      2. 2

                                                                                                          My Newton 2000 MessagePad ran quite well on 4 AA batteries. Not as long as a Palm Pilot device, but long enough.

                                                                                                        The father of a very close friend of mine was a journalist using one of those Tandy models. He would save the articles to little cassette tapes and express mail them to the newsroom from the field.

                                                                                                      1. 1

                                                                                                        I am not experienced in this area. Question for those who are - is event sourcing essential to any microservice project? Or is logging sufficient to diagnose/troubleshoot problems?

                                                                                                        Coming from my background with single tenant deployments of monolithic server APIs, I can get a stack trace when the server runs into a problem. I can generally figure out what happened by reading said stack trace and maybe inspecting the database logs or server logs. But we may not have a single stack trace in the distributed microservice context. So, is event sourcing a substitute?

                                                                                                        1. 9

                                                                                                          Event sourcing is more about application data representation - rather than store the effects of user actions (“there is a post with content x”) you store the action itself (“user y posted content x”).
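
                                                                                                    A minimal sketch of that difference, with hypothetical TypeScript types:

                                                                                                        // State-oriented: store the effect of the action.
                                                                                                        interface PostRow {
                                                                                                          id: string;
                                                                                                          content: string; // "there is a post with content x"
                                                                                                        }

                                                                                                        // Event-sourced: store the action itself ("user y posted content x");
                                                                                                        // the current state is derived by replaying the events in order.
                                                                                                        type PostEvent =
                                                                                                          | { kind: 'PostCreated'; userId: string; postId: string; content: string }
                                                                                                          | { kind: 'PostEdited'; postId: string; content: string };

                                                                                                        function replay(events: PostEvent[]): Map<string, PostRow> {
                                                                                                          const posts = new Map<string, PostRow>();
                                                                                                          for (const e of events) {
                                                                                                            if (e.kind === 'PostCreated') {
                                                                                                              posts.set(e.postId, { id: e.postId, content: e.content });
                                                                                                            } else if (e.kind === 'PostEdited') {
                                                                                                              const post = posts.get(e.postId);
                                                                                                              if (post) post.content = e.content;
                                                                                                            }
                                                                                                          }
                                                                                                          return posts;
                                                                                                        }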

                                                                                                          Distributed tracing is indeed a problem in microservice systems. Easiest fix is to build monoliths instead. But if you must, the fix is distributed tracing systems like OpenTelemetry.
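
                                                                                                    As a sketch with OpenTelemetry’s JavaScript API (the service and downstream call are made-up names): each service wraps its work in a span, and an instrumented client carries the trace context across process boundaries so the spans join into one trace:

                                                                                                        import { trace, SpanStatusCode } from '@opentelemetry/api';

                                                                                                        const tracer = trace.getTracer('order-service'); // hypothetical service name

                                                                                                        // Hypothetical downstream call; with an instrumented HTTP/gRPC client
                                                                                                        // the trace context travels along in request headers, so all spans
                                                                                                        // across services share one trace id.
                                                                                                        declare function chargePayment(orderId: string): Promise<void>;

                                                                                                        async function handleOrder(orderId: string): Promise<void> {
                                                                                                          await tracer.startActiveSpan('handleOrder', async (span) => {
                                                                                                            try {
                                                                                                              span.setAttribute('order.id', orderId);
                                                                                                              await chargePayment(orderId);
                                                                                                            } catch (err) {
                                                                                                              span.setStatus({ code: SpanStatusCode.ERROR });
                                                                                                              throw err;
                                                                                                            } finally {
                                                                                                              span.end();
                                                                                                            }
                                                                                                          });
                                                                                                        }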

                                                                                                          1. 2

                                                                                                            Often I wish there were “Team Monolith” shirts and other paraphernalia.

                                                                                                          2. 2

                                                                                                      Event sourcing, or technical messaging (in-memory or persisted event queues), is almost always necessary for microservices in business apps.

                                                                                                      The reason is simple: the microservices need to communicate with each other. That communication must include ‘commands’ and ‘data’. Some teams use ‘messaging’ or ‘events’ or gRPC calls to send ‘commands’ only. Then they require all the microservices to get data from a ‘central database’. That’s a big problem: essentially the database becomes a critical integration point (because the ‘data’ needed to execute the commands lives there, in that central database).

                                                                                                            That kind of approach eventually becomes a bottleneck for microservices (unless the database becomes an in-memory data cluster…).

                                                                                                      So the alternative is to send ‘commands’ plus the external data that’s required to execute the command – sort of like how, within a language, we call a ‘function’ (the command) with arguments (the data). But the data can be complex, and there can be lots of it, and you need a mechanism that makes sure a ‘command’ is processed ‘just once’ (unless we are dealing with idempotent commands).

                                                                                                      When you want the invocation to be asynchronous, you use a message bus, or thalo, or Kafka, or ZeroMQ-type systems, or UDP-style message passing. When you need the invocations to be synchronous, you use RPC/REST/etc. (or you can use TCP-style message passing).

                                                                                                      In that model, where the necessary external ‘data’ is sent together with the commands, the microservices can still have their own databases, of course (to manage their own state) – but they no longer rely on a centralized database for data exchange. The other benefit is that enterprises avoid the ‘schema change’ bottleneck of a centralized database (message schemas are much easier to evolve than database schemas).
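
                                                                                                      A sketch of that model with hypothetical names: the envelope carries the command, the data needed to execute it, and an id that lets consumers deduplicate, which approximates ‘just once’ processing:

                                                                                                          // The envelope travels over whichever transport was chosen
                                                                                                          // (message bus, gRPC, REST, ...); all names here are made up.
                                                                                                          interface CommandEnvelope<T> {
                                                                                                            commandId: string; // unique id; consumers record it to drop duplicates
                                                                                                            command: string;   // e.g. 'SettleTrade'
                                                                                                            payload: T;        // external data needed to execute the command
                                                                                                          }

                                                                                                          interface SettleTradePayload {
                                                                                                            tradeId: string;
                                                                                                            counterparty: string;
                                                                                                            amount: number;
                                                                                                          }

                                                                                                          const processed = new Set<string>(); // in real systems: a durable store

                                                                                                          function handle(msg: CommandEnvelope<SettleTradePayload>): void {
                                                                                                            if (processed.has(msg.commandId)) return; // 'just once' via deduplication
                                                                                                            processed.add(msg.commandId);
                                                                                                            // ...execute against this service's own database; no central DB involved
                                                                                                          }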

                                                                                                      A message bus also, in some limited sense, solves the ‘service registration/service naming’ question (consumers are registered and unregistered as needed). But in the more general case, when microservices need to scale up and shrink elastically across VMs (depending on demand), you will also end up using a software-defined network + naming service + container manager. Those things are done by Kubernetes, or by nomad+envoy+consul.

                                                                                                            1. 2

                                                                                                              Event sourcing can help with examining application behavior in the wild. If you have a full log of the semantic mutations made to your system state, you should be able to figure out when (and whence) unexpected changes happened.
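
                                                                                                        As a small sketch of that debugging flow (with a hypothetical event shape): slice the log by entity, and the ‘when’ and ‘whence’ of every change are right there:

                                                                                                            interface StoredEvent {
                                                                                                              at: Date;         // when the mutation happened
                                                                                                              source: string;   // which service or user emitted it ("whence")
                                                                                                              entityId: string; // the aggregate/record the event touched
                                                                                                              kind: string;     // e.g. 'PriceUpdated' (hypothetical)
                                                                                                              payload: unknown;
                                                                                                            }

                                                                                                            // All mutations to a suspect entity, in order: the point where state
                                                                                                            // started diverging from expectations is visible in this slice.
                                                                                                            function historyOf(log: StoredEvent[], entityId: string): StoredEvent[] {
                                                                                                              return log.filter((e) => e.entityId === entityId);
                                                                                                            }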

                                                                                                              You’ll still need tracing if you want to determine why an update failed to apply. If your backing queue for the events is backed up, for example, you probably want a fallback log to see what proximal faults are/were happening. As the old guidance goes, figuring out why something is “slow” is often the hardest problem in a microservice architecture, esp. in the presence of metastable faults.

                                                                                                              IMHO event sourcing is largely a better option than generic trigger-based DB audit logs. The latter tends to be noisy and often does a poor job of indicating the initial cause of a change; putting things into an event log can provide some context and structure that makes debugging and actually reviewing the audit logs tractable.

                                                                                                            1. 3

                                                                                                              I’m talking about situations where you know how to solve your problem, but you’ve chosen to implement that solution in some additional layer of code on top, rather than changing the original problematic code.

                                                                                                      Working in the Android/React-native/Expo ecosystem, 80% of my maintenance time is spent doing workarounds for build-pipeline changes that the Expo folks introduce on top of Android and on top of React-native.

                                                                                                      An analogy for C++ programmers: imagine that you use a library throughout a good portion of your code, and to use it you have to include several of the library’s makefiles (a top-level makefile, a library makefile, and a pre-compiler/transformer makefile).

                                                                                                      Now imagine that every 3-4 months they change their makefiles. Every time you are up for an upgrade to leverage new features of the library (or of the underlying OS platforms that the library abstracts), you end up spending 80% of your time trying to adjust your build process. You spend weeks on it. The Stack Overflow answers refer to stuff from 3-4 releases back. You cannot ‘estimate this time’. You are not adding any new features for the customers of your product. This is what it is like using the community CLI tools and Expo (at least on Android).

                                                                                                      So you end up creating patch files, workarounds, etc., just so your app’s build can compile…