Would have been an extra jitsu session this morning but for child[0]’s football, and later probably lots of indoor activities for children[1,2] as it’s British Wet outside
You may be wondering what new features are coming, but we’ll have to keep that a secret until release time (stuff isn’t even integrated yet, you’re not going to get a sneak peek even if you install early).
Very curious as to what these are 👀
My bet is that speaker support is one of the features
Brilliant
Heck, probably more child-wrangling
Can I offer “serverless”? I’m clearly far too long in the tooth since I’ve never understood how server-side apps could be serverless, but then maybe it’s just me?
Serverless means “you don’t need to manage servers”, not “servers don’t exist”. Given the absurdity of the latter premise, I would think the former premise would be pretty obvious, but clearly I’m mistaken since it’s such a commonly cited point of confusion.
If what it means is different from what it says, it’s just a stupid phrase, be it by chance or on purpose.
It’s not different from what it says. There are just multiple interpretations, and some literal-minded people cling to the least plausible.
I see, less is more.
Well, a lot of computing-involved folk lean toward literal-mindedness. I hear the term “serverless”, and my brain flags it as a logical contradiction. Serverless? That does not compute. I feel like that Nomad probe in the original Star Trek episode The Changeling. “Non sequitur, non sequitur, your facts are in error”. And as people go, I’m not usually all that literal minded.
Serverless sounds like a term cooked up by business people to sell yet another cloud thing.
Pure client-side web applications are definitely possible, and could rightfully be called serverless because they actually don’t need a server.
How does the web application get to the browser? :)
firefox /mnt/floppy/myapp.html ;)
lol, my mother does it all the time. She just uses myapp.htm by habit.
touché
It’s you :)
If you use things like AWS Lambda, Google AppEngine, or even the venerable Heroku, you’re deploying your application “serverless” because while yes, SURE, there are servers churning along behind the curtain, you don’t get to see them, and in fact you don’t get to know they exist.
You upload your code and it starts magically running without any intervention on your part.
It’s about the abstraction presented to the developer, not the back-end implementation.
Yes. I don’t remember where I saw this, but it really should be called “on-demand.”
I recall a discussion about this on the other site (Embedding binary objects in C – about a post on Ted Unangst’s site), with loads of interesting methods.
This post has a worrying tendency to refer to “messages”, as in

If I send() a message, I have no guarantees that the other machine will recv() it
which may indicate the author is making a really bad newbie error: thinking that TCP honors write boundaries. It doesn’t. TCP doesn’t transmit messages, it transmits byte streams. The fact that you sent a particular string of bytes in one call doesn’t mean the receiver will read that same string in one call.
What makes this conceptual error nasty is that during development TCP will often appear to work this way, because it’s common for each send() or write() to result in one packet if the two peers are on the same machine or same LAN and thus have very high bandwidth and low latency. (And assuming of course that your writes are smaller than an Ethernet frame.) Then you try your program across a longer distance and it breaks horribly.
Even if you know better, this can cause problems, because over a LAN the code that decodes frames often isn’t being completely exercised, so you only test the “easy” case where you read complete frames… Then later on you find that your code doesn’t reassemble partial frames correctly. I’ve been bitten by this before.
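To make the framing point concrete, here’s a minimal sketch of reading “messages” off a TCP byte stream by looping until a whole frame has arrived. The helper names and the frame format (a 4-byte big-endian length prefix) are my own assumptions for illustration, not anything from the post:

```c
/* Minimal sketch, not production code: reassembling length-prefixed
 * "messages" from a TCP byte stream. Since recv() may return any
 * prefix of what was sent, we must loop and buffer ourselves.
 * Assumed frame format: 4-byte big-endian length, then the payload. */
#include <arpa/inet.h>   /* ntohl */
#include <stdint.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Keep calling recv() until exactly `len` bytes arrive (or EOF/error). */
static int recv_all(int fd, void *buf, size_t len) {
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, (char *)buf + got, len - got, 0);
        if (n <= 0) return -1;   /* peer closed, or error */
        got += (size_t)n;
    }
    return 0;
}

/* Read one frame; caller frees *payload. Returns payload size or -1. */
static long recv_frame(int fd, char **payload) {
    uint32_t netlen;
    if (recv_all(fd, &netlen, sizeof netlen) < 0) return -1;
    uint32_t len = ntohl(netlen);
    char *buf = malloc(len ? len : 1);
    if (!buf || recv_all(fd, buf, len) < 0) { free(buf); return -1; }
    *payload = buf;
    return (long)len;
}
```

The key point is that recv_all keeps looping whether the kernel hands back the frame in one piece or in twenty; code that only ever sees the one-piece case on a LAN is exactly the untested path described above.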
The more I work with networking code, the more I feel that no app developer should use the C socket APIs directly. They look straightforward, and you can build a trivial app without much fuss, but there are so many subtle behaviors and edge cases and platform variations that make it quite difficult to get solid shippable code.
I totally made that mistake as recently as three years ago. I looked at libUV and said “ugh, this looks big and complex, I don’t need all that for my purposes”, and started rolling my own C++ code. I really wish I hadn’t. ☠️
I worked with a warehouse automation vendor who had operated with this misconception for decades. We asked them to implement a JSON protocol that resulted in messages bigger than 1500 bytes, and the ensuing back and forth trying to get them to understand you have to call read() in a loop was one of the most frustrating technical conversations I’ve ever had! They thought TCP was message-oriented, but didn’t even have the vocabulary to say that.

The more I work with networking code, the more I feel that no app developer should use the C socket APIs directly.

I don’t mean to jump on this as I appreciate the sentiment. I think I’m just easily triggered by more and more areas coming under the “don’t roll your own crypto” banner. Of course: you really shouldn’t roll your own crypto. But do write your own implementation of well-defined systems so you can understand exactly how complicated it all is. And do write networking code using the C APIs, maybe even use raw sockets and see if you’re up to putting the various protocols’ headers together too. Don’t for goodness’ sake deploy it into production …
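If you do practise against the raw C APIs, note that the write side has the same wrinkle as the read side: a single send() may accept fewer bytes than you pass it. A minimal sketch of the complete-write loop (the helper name is mine, error handling simplified):

```c
/* Sketch: send() on a plain blocking socket can still perform a
 * short write, so loop until the kernel has taken everything. */
#include <stddef.h>
#include <sys/socket.h>
#include <sys/types.h>

static int send_all(int fd, const void *buf, size_t len) {
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(fd, (const char *)buf + sent, len - sent, 0);
        if (n < 0) return -1;    /* error; caller checks errno */
        sent += (size_t)n;
    }
    return 0;
}
```

Short writes are rare on an idle LAN and routine under back-pressure, which is the same “works in dev, breaks in production” trap as partial reads.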
We need to keep practising these arts in some safe space so that we can learn where the sharp edges are.
Yeah, I’m implicitly assuming ‘the reader’ wants to build something serious. Of course you can and should play with whatever APIs you want!
Personally, learning and using the C socket APIs is not something I’d do for fun. I did it because I was being paid to design and build a cross-platform product with a small footprint and the higher level libraries looked too big (and I think I was wrong there.)
Ideally we get rid of these APIs someday in favor of better ones, but Unix APIs seem to be damn near immortal unfortunately.
Thanks for your advice. I wish these man pages came with this sort of info as well. It’s really quite hard to write decent code that covers all the bases.
Let me recommend “Unix Network Programming”, a book I wish I’d had when I started working on this stuff — partly because it probably would have convinced me not to DIY. It makes a great doorstop too.
Oh, and FYI things get even more “fun” when you try to integrate TLS. Obviously you’d grab an existing library, probably OpenSSL, but it’s quite confusing to integrate into your own socket code. (Plus there are good reasons to use a platform’s integrated TLS lib instead, like Apple’s SecureTransport, because it’s tied into things like the device’s root-cert list and cert revocation. So that gives you multiple APIs to figure out…)
I’ve seen a few systems that didn’t behave well when I synthetically flushed bytes to them one at a time.
In practice they seem to work more reliably. If you disable Nagle’s algorithm, and your message fits within the TCP segment size, I think you can get relatively reliable message semantics.
That said, I haven’t written an application myself that relies on such behavior. Systems with strange maximum segment size settings might break things.
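For what it’s worth, disabling Nagle’s algorithm is just a socket option. A sketch (the helper name is mine); note this only stops the kernel from coalescing small writes while waiting for ACKs, it does not turn TCP into a message-oriented transport:

```c
/* Sketch: turn off Nagle's algorithm with TCP_NODELAY so each small
 * send() tends to go out promptly instead of being coalesced. It is
 * still a byte stream; nothing guarantees one send() == one recv(). */
#include <netinet/in.h>   /* IPPROTO_TCP */
#include <netinet/tcp.h>  /* TCP_NODELAY */
#include <sys/socket.h>

static int disable_nagle(int fd) {
    int one = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
}
```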
I hacked together a webserver in C recently by following beej’s networking guide, and the level of appreciation I have for libraries that handle all the open/recv/select stuff for me increased immensely.