1. 35

  2. 4

    So what I didn’t get: What is FaaS? What exactly is the use case for this service? Defining some function which can then be called from anywhere (and probably “sold”)?

    1. 9

      It’s like CGI, billed per request, on a managed server.

      1. 7

        NearlyFreeSpeech.net does something sort of like that. Not per request, but based on RAM/CPU minutes. I find it pretty convenient.

        1. 5

          Yeah, I was hoping to bill by resource use (RAM, CPU, and data transferred through syscalls) in a way that would give you a more precise view into how long your programs take to run. This would also give people an incentive to write faster code that uses less RAM, which I would really love to see happen.

      2. 3

        I think this whole FaaS movement is very interesting. Combined with more edge pods deployed through Fastly / Cloudflare, we are onto something quite different from the big cloud services we see with Facebook / Twitter / Gmail today.

        Imagine how you would do email today (or really, any personalized messaging system like WhatsApp). These edge pods, with deployment bundles like Wasm, enable us to implement an always-online inbox, potentially with high availability right at the edge. Device syncing would be extremely fast, and at the same time it is hosted online, so everyone can reach you with minimal central coordination.

        It is unlikely that these things will be successful on their own in the beginning. Decentralization is not a big consideration at this time. But this could deliver benefits today if implemented correctly, even for existing big players like Facebook. You could have a Wasm edge node close to the device materialize the feed’s GraphQL fragments in anticipation of a new user request. And since the edge node knows the last synced feed fragment, it can also do so incrementally.

        I am optimistic about this. I would love to see Wasm-based deployment take off, especially for edge nodes.

        1. 1

          This is an approach and idea that DFINITY (https://dfinity.org/) is pursuing, to provide a fully decentralized computing platform. The system runs Wasm as the basic unit of execution and charges for the cycles, memory, and bandwidth used. It is currently in beta, but should become available next year.

          Disclaimer: I work for DFINITY.

          1. 1

            Thanks! Yep, I looked at DFINITY before. One thing that would be compelling to me is closeness to the customers. With cloud computing moving into low-latency territory (most significantly, cloud gaming), closeness of the edge nodes is a necessity. This is often overlooked by many decentralization movements from the cryptocurrency space (probably because those dapps have different focuses).

        2. 2

          Functions as a Service. Basically, the use case is for people who want to run code that doesn’t run often enough to justify having a dedicated box for it, and just often enough that you don’t want to set anything up for it beforehand. In this case, I plan to start using it for webhook handlers for things like GitHub and Gitea.
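
          For concreteness, a webhook handler mostly has to validate the signature before doing anything else; GitHub signs the payload with HMAC-SHA256 and puts the result in the `X-Hub-Signature-256` header. A minimal sketch (the secret and payload here are made up):

```python
import hashlib
import hmac

def verify_github_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Check a GitHub webhook payload against its X-Hub-Signature-256 header."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid leaking the signature via timing.
    return hmac.compare_digest(expected, signature_header)

# Example: a signature computed with the same secret should verify.
secret = b"my-webhook-secret"            # hypothetical secret
body = b'{"action": "opened"}'
good = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
print(verify_github_signature(secret, body, good))         # True
print(verify_github_signature(secret, body, "sha256=00"))  # False
```

          Everything after the signature check (dispatching on the event type, kicking off a build, etc.) is ordinary application code, which is exactly what makes it a good fit for a function that runs on demand.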

          1. 2

            So then you plan to be administering/running Wasmcloud? The idea is that people can just upload code to you? What hosting service are you using?

            This reminds me that I need to write about shared hosting and FastCGI. And open source the .wwz script that a few people are interested in here:

            https://lobste.rs/s/xl63ah/fastcgi_forgotten_treasure

            Basically I think shared hosting provides all of that flexibility (and more, because the wasm sandbox is limited). I do want to stand my scripts up on NearlyFreeSpeech’s FastCGI support to test this theory though…

            I think the main problem with shared hosting is versioning and dependencies – i.e. basically what containers solve. And portability between different OS versions.

            I think you can actually “resell” shared hosting with a wasmcloud interface… that would be pretty interesting. It would relieve you of having to manage the boxes at least.

            1. 4

              So then you plan to be administering/running Wasmcloud?

              I have had many back-and-forth thoughts about this; all of the options seem horrible. I may do something else in the future, but it’s been fun to prototype a Heroku-like experience. As for actually running it, IDK if it would be worth the abuse risk of doing it on my own.

              The idea is that people can just upload code to you?

              If you are either on a paid tier, uploading the example code, or have talked with me to get “free tier” access, yes. This really does turn into a logistical nightmare in practice, though.

              What hosting service are you using?

              Still figuring that part out to be honest.

              I think the main problem with shared hosting is versioning and dependencies – i.e. basically what containers solve.

              The main thing I want to play with in this experiment is something like “what if remote resources were as easy to access as local ones?” Sort of the Plan 9 “everything is a file” model taken to a logical extreme, just to see what it’s like if you do that. Static linking against the platform API should make versioning and dependencies easy to track down (at the cost of actually needing to engineer a stable API).

              I think you can actually “resell” shared hosting with a wasmcloud interface… that would be pretty interesting. It would relieve you of having to manage the boxes at least.

              I may end up doing that, it’s a good idea.

              1. 1

                (late reply)

                FWIW I have some experience going down this rabbit hole, going back 10 years: basically trying to make my own hosting service :) In my case, part of the inspiration was looking for answers to the “polyglot problem” that App Engine had back in 2007. Heroku definitely did interesting things around the same time period.

                Making your own hosting service definitely teaches you a lot, and it goes quite deep. I have a new appreciation for all the stuff we build on top of. (And that is largely the motivation for Oil, i.e. because shell is kind of the “first thing” that glues together the big mess we call user space.)


                To be a bit more concrete, I went down more of that rabbithole recently. I signed up for NearlyFreeSpeech because they support FastCGI. I found out that it’s FreeBSD! I was hoping for a “portable cloud” experience with Dreamhost and NearlyFreeSpeech. But BSD vs. Linux probably breaks that.

                It appears there are lots of “free shell” providers that support CGI, but not FastCGI. There are several other monthly providers of FastCGI, like a2hosting, but I’m not sure I want another account yet, since the only purpose would be to test out my “portable cloud”.

                Anyway, this is a long subject, but I think FastCGI could be a decent basis for “functions as a service”. And I noticed there is some Rust support for FastCGI:

                https://dafyddcrosby.com/rust-dreamhost-fastcgi/

                (I’m using it from Python; I don’t use Rust)

                It depends on how long the user functions last. If you want very long-running background functions, then FastCGI doesn’t really work there, and shared hosting doesn’t work either. But then you have to do A LOT more work to spin up your own cloud.

                It’s sort of the “inner platform problem” … To create a platform, you have to solve all those same problems AGAIN at the level below. I very much got that sense with my prior hosting project. This applies to packaging, scheduling / resource management, and especially user authentication and security. Security goes infinitely deep… wasm may help with some aspects, but it’s not a complete solution.

                And even Google has that problem – running entire cluster managers, just to run another cluster manager on top! (long story, but it is interesting)

                Anyway I will probably keep digging into FastCGI and shared hosting… It’s sort of “alternative” now, but I think there is still value and simplicity there, just like there is value to shell, etc.

          2. 1

            So what I didn’t get: What is FaaS ?

            FaaS is a reaction to the fact that the cloud has horrendous usability. If I own a server and want to run a program, I can just run it. If I want to deploy it in the cloud, I need to manage VMs, probably containers on top of VMs (that seems to be what the cool kids are doing), and some orchestration framework for both. I need to make sure I get security updates for everything in my base OS image and everything that runs in my container. What I actually want is to write a program that sits on top of a mainframe OS and runs in someone’s mainframe^Wdatacenter, with someone else being responsible for managing all of the infrastructure: if I have to maintain most of the software infrastructure, I am missing a big part of the possible benefit of outsourcing maintenance of the hardware infrastructure.

            Increased efficiency from dense hosting was one of the main selling points for the cloud. If I occasionally need a big beefy computer, but only for a couple of hours a month, and need only a tiny trickle of work done that wouldn’t even stress a first-generation RPi the rest of the time, I can reduce my costs by sharing hosting with a load of other people and having someone else manage load balancing across a huge fleet of machines. If, however, I have to bring along a VM, container runtime, and so on, then I’m bringing a fixed overhead that, in the mostly-idle phases, is huge in comparison to my actual workload.

            FaaS aims to provide a lightweight runtime environment that runs your program and nothing else, can be scaled up and down based on load, and is billed by RAM MB-seconds, CPU-seconds, and network traffic (often with some rounding). It aims to be a generic and scalable version of old-school shared hosting, where a load of people would use the same Apache instance with CGI: the cost of administering the base environment that executes the scripts is shared across all users, and the cloud provider can run the scripts on whatever node(s) in the datacenter make sense right now. The older systems typically used the filesystem for read-only data and a database for persistent data. With FaaS, you typically don’t have a local filesystem but can use cloud file / object stores and databases as you need them. Again, someone else is responsible for providing a storage layer that can scale up and down on demand; you pay for the amount of data that’s stored there and how often you access it, but you don’t need to overprovision (as you do for cloud VM disks, where you’re paying for the maximum amount of space you might need for any given VM).
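
            The billing model can be made concrete with a little arithmetic. The prices below are invented for illustration, not any provider’s real rates; the point is only the shape of the comparison between a spiky workload and an always-on box:

```python
# Illustrative only: these prices are made up, not any provider's real rates.
MB_SECOND_PRICE = 0.0000000166  # $ per MB-second of RAM (hypothetical)
REQUEST_PRICE = 0.0000002       # $ per invocation (hypothetical)

def monthly_cost(invocations: int, mem_mb: int, avg_seconds: float) -> float:
    """Cost of a workload billed by RAM MB-seconds plus a per-request fee."""
    mb_seconds = invocations * mem_mb * avg_seconds
    return mb_seconds * MB_SECOND_PRICE + invocations * REQUEST_PRICE

# A webhook-style workload: 100k requests/month, 128 MB, 50 ms each ...
spiky = monthly_cost(100_000, 128, 0.05)
# ... versus keeping a 128 MB instance busy for the whole month:
always_on = monthly_cost(1, 128, 30 * 24 * 3600)

print(f"FaaS: ${spiky:.4f}/mo vs always-on: ${always_on:.2f}/mo")
```

            The mostly-idle workload pays for roughly 640,000 MB-seconds a month, while the always-on equivalent pays for hundreds of millions, which is the fixed overhead the comment above is pointing at.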

            TL;DR: FaaS is an attempt to expose the cloud as a useful computer instead of as a platform on which you can simulate a bunch of computers.

          3. 3

            Looks like the default mode of NATS being at-most-once has bitten again, with the options for at-least-once being missed. When smart developers miss this, it’s a communication problem on the project’s part.

            [disclaimer: I work for the main company behind NATS]

            1. 1

              I wasn’t aware there were other modes. Is there a way to configure NATS so that it will act like Kafka?

              1. 5

                For what you are doing you don’t need that, and you don’t really need at-least-once semantics either. Treat NATS like HTTP and make every message a request, meaning you wait for the other side to confirm. I also used NATS to create the control plane for Cloud Foundry and never used anything but at-most-once semantics for scheduling, command and control, addressing and discovery, and telemetry.

                1. 2

                  Does this potentially run into issues with missed responses? If a response message doesn’t show up, you’re not sure whether it failed to arrive or your message was never received. Feels like this might be easier to avoid with HTTP.

                  1. 3

                    What happens when you don’t get a response from HTTP? That is also possible if the network stack hangs or breaks. It’s the same thing, TBH.

                    1. 2

                      Right. I guess I don’t know enough about TCP/HTTP failure or message queue send failure. It seems like NATS might drop messages under load, which would have different failure characteristics than TCP (maybe not true?). Would retrying on timeout also have very similar characteristics to at-least-once?

                      1. 2

                        NATS will protect itself at all costs and will cut off subscribers who cannot keep up. However, that really does not apply to request/reply semantics, so for request/reply I would say NATS and HTTP behave similarly.
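
                        Retrying on timeout does behave much like at-least-once, with the usual caveat that the server may process a request more than once. A generic sketch (plain Python, not the NATS client API; a real client’s request call with a timeout would play the role of `send`):

```python
def request_with_retry(send, payload, retries=3):
    """Retry a request/reply call when no reply arrives in time.

    If only the *reply* was lost, the server-side handler may run more
    than once, so handlers should be idempotent -- this is exactly the
    at-least-once trade-off discussed above.
    """
    last_exc = None
    for _ in range(retries):
        try:
            return send(payload)
        except TimeoutError as exc:
            last_exc = exc
    raise last_exc

# A fake transport that times out once, as a lossy network might:
calls = {"n": 0}
def flaky_send(payload):
    calls["n"] += 1
    if calls["n"] == 1:
        raise TimeoutError("no reply within deadline")
    return payload.upper()

print(request_with_retry(flaky_send, "ack me"))  # prints: ACK ME
print(calls["n"])                                # prints: 2
```

                        The same pattern applies whether the underlying transport is NATS request/reply or an HTTP call with a client timeout.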

                        1. 1

                          cool, thanks for the knowledge

                          1. 2

                            np

                            1. 1

                              The other half of it was that the Rust client for NATS wasn’t as mature as I was hoping.

                              1. 1

                                You should look again; it’s a first-class citizen now. But at the time, you were probably right.

            2. 2

              Oh I did not know that someone had posted Animal Crossing’s soundtrack, thanks for sharing that (also +1 for lofi hip hop radio for focus)… :)

              1. 2

                The FromRequest instance I have on my database user model allows me to inject the user associated with an API token purely based on the (validated against the database) claims associated with the JSON Web Token that the user uses for authentication. This then allows me to make API routes protected by simply putting the user model as an input to the handler function. It’s magic and I love it.

                I’m really curious about how this is set up. Poking around the code a bit, I don’t see an implementation of FromRequest on the User model. Is what you’re describing in the most recent version that’s pushed to your public Gitea? It sounds like a very natural way to handle API authz!

                1. 2

                  Thanks! Sorry, I was just looking in the models and the API definition.

                  1. 1

                    it’s all good, that code is not organized very well lol