1. 1

    It’s funny that everyone is optimizing wc; I’ve seen two posts on wc today, one beating wc with Go and now this.

    1. 12

      It started with “Beating C with 80 lines of Haskell”, which kicked off the fad. I particularly like this one because everybody’s been comparing optimized $LANG to unoptimized C. This one is about what happens when you try to optimize the C.

      1. 1

        Yes, it seems the single-threaded optimized C version is roughly 20x faster than the unoptimized C version, and roughly 60x faster using the multithreaded optimized C version.

        If I have read correctly, that is; I just skimmed the article quickly. But you know… with undefined behaviour, there can be some really difficult mistakes from time to time ;-)

    1. 2

      I’ve been wanting to write one in Go backed by BoltDB or Badger, just for fun. So far I haven’t had the time to do so, but I hope I do soon, and this article will be helpful.

      1. 1

        I’ve been thinking about the same thing. I also wanted to make it multi-tenant (only certain topics are visible depending on client credentials) and possibly distributable among many nodes.

        The biggest challenge right now is finding the time for it.

      1. 1

        Would love to see a Python implementation :P (hint hint CPython package)

        1. 3

          If your middlewares represent simple, independent operations, I still think middleware is a poor way of expressing these operations, but it is mostly benign. The trouble begins when the operations become complex and interdependent.

          I would disagree, and I think what you are struggling with is language/framework choice, not the middleware concept. Just to expand on your example of the 100%-admin request: over time your main dispatch middleware will become so complicated that nobody will understand what is going on. I’ve seen complicated projects where these “grand central dispatchers” bloat to the point of unmanageable code. I would still keep an auth middleware at the top that reads the user and adds it as “context” on the request (however express.js might do it), and let the chain continue. That one dead-straight list of middleware will save you nightmares of testing and keep your code simple. Your central dispatch middleware is a bad smell to me.
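
          For illustration, here’s roughly the shape I mean, assuming an express-style API; readUser, the route, and the role check are made up for the sketch, not taken from the article:

          import express from 'express';

          const app = express();

          // Hypothetical credential lookup; swap in the real auth check.
          async function readUser(authHeader?: string): Promise<{ id: string; role: string } | null> {
            return authHeader ? { id: 'u1', role: 'admin' } : null;
          }

          // Auth middleware at the top: resolve the user once, attach it as context,
          // and let the rest of the chain continue.
          app.use(async (req, res, next) => {
            const account = await readUser(req.headers['authorization']);
            if (!account) return res.sendStatus(401);
            (req as any).context = { account, isAdmin: account.role === 'admin' };
            next();
          });

          // Downstream handlers just read the context; no central dispatcher needed.
          app.get('/admin/stats', (req, res) => {
            if (!(req as any).context.isAdmin) return res.sendStatus(403);
            res.json({ ok: true });
          });

          app.listen(3000);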

          1. 1

            I’ve seen non-trivial middleware stacks in Ruby, PHP, and Node.js and would apply my analysis to all of them. I don’t think it’s language specific.

            “Context” is a better name than “request” for an arbitrary grab bag of properties, but req.context.isAdmin = … or req.context.account = ... isn’t really morally different from req.isAdmin = ... or req.account = ....

            If your “grand central dispatcher” is complicated, that means that the operations you are performing on your requests are complicated. Breaking your dispatcher up into middleware won’t make the complexity go away – it will just make the complexity implicit in the assumptions that each middleware makes on the structure and meaning of the universal “context” object, rather than having it be expressed explicitly through the parameters and return types of functions, and the control flow of your dispatcher.

            But I don’t necessarily advocate one big “grand central dispatcher”. You can break it up. But if you break it up, I just advocate against decomposing it into multiple middleware. Instead, decompose it into functions with meaningful return values, whose parameters reflect their actual dependencies, where the control flow and interdependencies between these functions are explicit, instead of into crippled “middleware” functions that are not allowed to have a meaningful return value and can only communicate via implicit interactions inside an ill-typed, arbitrary grab bag of properties.

            Such functions, I would argue, are easier to test than middleware. In order to test a middleware, you must artificially construct an HTTP request and response, when likely the operation your middleware performs only cares about some parts of the request, and affects some parts of the request (or response).

            In order to test e.g.

            const rateLimitingMiddleware = async (req, res) => {
              const ip = req.headers['ip']
              db.incrementNRequests(ip)
              if (await db.nRequestsSince(Date.now() - 60000, ip) > 100) {
                return res.send(423)
              }
            }
            

            You have to

            const req = {headers: {ip: '1.1.1.1'}}
            db.incrementNRequests = sinon.stub()
            db.nRequestsSince = sinon.stub().returns(101)
            const res = { send : sinon.stub() }
            await rateLimitingMiddleware(req, res)
            sinon.assert.calledOnce(db.incrementNRequests)
            sinon.assert.calledWith(res.send, 423)
            

            whereas for

            const shouldRateLimit = async (ip) => {
              db.incrementNRequests(ip)
              return await db.nRequestsSince(Date.now() - 60000, ip) > 100
            }
            

            the test has one less mock, at least, and doesn’t require you to construct those nested request and response data structures.

            db.incrementNRequests = sinon.stub()
            db.nRequestsSince = sinon.stub().returns(101)
            const result = await shouldRateLimit('1.1.1.1')
            sinon.assert.calledOnce(db.incrementNRequests)
            expect(result).toEqual(true)
            
          1. 4

            Nim is, IMHO, one of the most underrated languages. It is as efficient as C and as clean as Python. I just wish it had some solid tooling and companies backing it. Without that, it’s extremely hard to convince people to pick up something that will only get good tooling in the future.

            1. 1

              I have been building a reverse-proxy sidecar for a while; I always wanted scripting support but hated to add a full-fledged JS engine or a low-performing scripting language. I considered alternatives like Duktape, Lua, etc.; my end result was precompiled loadable modules. But this could change that: I could have middleware running in a Go interpreter.

              1. 5

                I suddenly see so many programming languages popping up: Zig, Inko, Janet, Crystal, etc., each tailored to solve a problem. Seems like people are done with interpreting and in some cases GC as well. Renaissance of compiled languages folks!

                1. 2

                  Seems like people are done with interpreting and in some cases GC as well. Renaissance of compiled languages folks!

                  Aren’t Janet and Inko interpreted?

                  1. 2

                    It’s a good time!

                    1. 1

                      I don’t see it that way; interpreted languages will be the next phase, but for now we haven’t seen awesome enough things from the current compiled languages, so people are trying to fill that space.

                    1. 4

                      The last time I wrote C++ was probably 10 years ago. It was a server with libpurple that could do chat on multiple platforms. I have since then intentionally kept myself isolated and away from jobs/projects that involve C++. Not to say it is a bad language, but the compile errors enter unreadable territory and the syntax sometimes yields hard-to-trace code.

                      D or Go are modern alternatives, with Go obviously being more widely adopted. However, my eyes are on Rust, which in my opinion is still not a silver bullet but is a departure in the right direction.

                      1. 3

                        We use Kotlin on our backend with Spring (WebFlux + coroutines) and Hibernate. Other than initial nuances with Jackson and some basic issues, everything works pretty smoothly. IMHO it’s an underrated language for backends so far, but with projects like Ktor I think we might see wider adoption soon.

                        1. 1

                          So what does this project do? The readme says it’s a “fire-and-forget geolocation server”; what does that mean exactly?

                          +1 for crystal-lang tag

                          1. 1

                            Agreed, I should add more description to the readme. It’s an IP-to-geolocation server that uses the MaxMind GeoIP database and keeps it updated (hence fire-and-forget).

                          1. 2

                            Nim keeps getting better and better. I am planning a weekend hobby project writing a URL shortener; hopefully I will be able to share my experience with it.

                            1. 1

                              I wonder how much space you have on your hosting :D It would be really nice if you could show the size of the SQLite db files on the site as well. I love SQLite due to its robust nature and solid code base!

                              1. 1

                                Not much space, which is why I have a small disclaimer about jott.live not being used for important things. I’m wondering if there are any security concerns associated with displaying the storage used by SQLite, but I’ll work on adding that now.

                                1. 2

                                  Just to help you out :)

                                  pragma page_size;
                                  pragma page_count;
                                  

                                  Multiply the first by the second and you will have the file size :D
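
                                  And if you’d rather read it from the app itself, a rough sketch in TypeScript, assuming the better-sqlite3 driver and a file named notes.db (both are guesses; your setup may differ):

                                  import Database from 'better-sqlite3';

                                  // Open read-only; we only need the two pragmas.
                                  const db = new Database('notes.db', { readonly: true });

                                  // page_size * page_count = size of the database file in bytes.
                                  const pageSize = db.pragma('page_size', { simple: true }) as number;
                                  const pageCount = db.pragma('page_count', { simple: true }) as number;

                                  console.log(`sqlite file is ${pageSize * pageCount} bytes`);
                                  db.close();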

                              1. 12

                                Nicer community, better quality posts, and 0 click bait.

                                1. 1

                                    Just makes me wonder how many people use Mercurial at their company?

                                    Edit: by company I mean at work.

                                  1. 2

                                      Which company? Octobus? As per their website, there are only 2 people, and Pierre-Yves David (“marmoute”) is a well-known core Mercurial contributor. The company itself is about providing commercial support for Mercurial, along with other stuff like Python contracting.

                                    1. 1

                                        I think Mercurial definitely has a niche in the corporate space. It’s easier to train new people on than git, scales better for monorepo setups, is more easily extensible via Python, and allows richer customization.

                                      1. 2

                                        It’s easier to train new people on than git

                                          I am curious about this. While Mercurial definitely has less initial surface area and a far more consistent way of interacting, it also tends to have lots of customizations that add a lot of complexity right back in, mixed and matched in ways that are often unique per Mercurial setup.

                                          Git, while far uglier, also has more training resources, both professional and free. Additionally, while Git is far less consistent in terms of interaction, to a large degree once you know it, you know it. You are unlikely to go to a site where Git has lots of customizations making it behave differently than the “git” you used at your last organization.

                                        1. 2

                                            Well, you pretty much summed it up :) Mercurial is nicer/easier to use, but Git has more resources out there. I think at that point, whether one is better than the other for a particular person or team depends less on the pros/cons of each tool and more on the person’s or team’s mindset/culture/available support/etc.

                                          I’d add that Git having more resources, while helpful, is as much a proof of its success as a symptom of one of its main problems. Having to look up help pages and other tutorial pages on a regular basis becomes tedious quickly, and they still need to fix the core problem (they can’t quite fix the broken CLI at this point, but I did note several helpful messages being added to the output in the last few versions, so there’s progress).

                                            Finally, yeah, Mercurial has a problem with the amount of customization it forces on users because of its very strict backwards-compatibility guarantees (resulting in a good portion of new features being turned off by default). This tends to be mitigated by the fact that teams will generally provide a central .hgrc that sets up good defaults for their codebase. Also, Mercurial extensions almost never change Mercurial’s behaviour (evolve is an outlier there but is still considered “experimental”) – they just add to it, so I’ve never come across (so far) any Mercurial resource that was potentially invalidated by an extension (feel free to point to counter-examples!).

                                          1. 1

                                              I suspect my issue might be more in my head (and my unique experience) than in reality. I have contracted with lots of Git shops and a fair number of Mercurial ones. Most of the Git shops worked very similarly; they differed in master-branch versus feature-branch development, mono-repo or multi-repo, but they all felt similar and I could use very minor changes to my workflow to integrate with them, which is great for contracting.

                                              Each Mercurial shop has been a wild adventure in unique workflows and brand-new extensions I had never seen or used. One used TWO different sub-repo extensions; another used THREE configuration extensions! On top of that, most of them had annoying/wonky authentication mechanisms (some hand-rolled). The reason I use those examples (which are only a fraction of what I have seen) is that they are all basically non-optional. I needed to use them to be able to work on the project… and of course mq versus non-mq. Never used evolve (yet).

                                              During the “will Mercurial or Git win?” era, I was firmly on the Mercurial side because I did work on Windows and Git early on was non-functional there. But now when I hear a client is a Mercurial shop, I dread it. But I realize that is probably just my unique experience.

                                            1. 2

                                                Huh, well it’s very probable I’m just not aware of all the wild things people do out there with Mercurial. I frankly had no idea there were sub-repo extensions (outside of the core subrepo feature), and I don’t know why anybody would do custom authentication when SSH works everywhere (although I understand people might want to set up ActiveDirectory for Windows-only environments instead, but that’s it). What do you mean by “configuration extensions”? As for MQ, I don’t think it matters for the central repo, no? It should only matter for local workflows?

                                              1. 2

                                                According to https://www.mercurial-scm.org/wiki/UsingExtensions – there are at least 6 sub-repo extensions. And, yes, ActiveDirectory logins, other SSO variations and then on top of those multiple ACL layers.

                                                  As for MQ – absolutely, you can avoid it with other tools that produce the same sort of history… rebase, graft, strip, etc. The issue is that if all the “how we work” docs are written in MQ style, it is a bit of mental gymnastics to convert over.

                                                1. 1

                                                  Ah I see. And yeah I never really scrolled down past the non-core extensions :) (The only non-core extensions I have are a couple I wrote myself…)

                                                  1. 1

                                                    are a couple I wrote myself…

                                                    you… you are part of the problem! runs scared hehe

                                                    1. 1

                                                        Haha, but that’s fine, I don’t think anybody besides myself is using them :)

                                          2. 2

                                            Might it instead be the other way around: that customization-seeking companies are more likely to choose Mercurial? This could be either because adventurousness promotes both non-Git and customization, or because Mercurial has the better architecture when you need to customize. IIRC the latter is true for both Mozilla and Facebook. Anyway, at my second job we used vanilla Mercurial, and we did fine. It was basically the same as any Git workflow, for that matter.

                                            1. 2

                                              Absolutely. Additionally, Mercurial is just more accessible in terms of customization. On top of that more than a handful of these shops had heavy Python contingents internally.

                                              1. 1

                                                Haha, yes, knowing the language certainly makes it easier to stray off the common path and into the woods of in-shop customization :-D

                                      2. 1

                                        I use Mercurial at work. My company uses Git, but I use Mercurial and clone, push, and pull transparently thanks to hg-git. I’ve noticed I am generally more aware than my Git-using colleagues of recent changes to the repo, because I’ve got a pre-pull hook set up to run hg incoming (with a tweak to avoid double network talk).

                                      1. 3

                                          Every time I see a post about Nim I am hoping for a Go competitor that can actually bring something new to the table. But then I look at the library support and community and walk back disappointed. I am still hoping for Nim to take off and attract Python enthusiasts like me to a really fast compiled language.

                                        1. 12

                                          But then I look at the library support and community and walk back disappointed.

                                            It’s very hard to get the same momentum that Go achieved, simply because it is supported and marketed by Google. All I can say is: please consider helping Nim grow its community and library support. If everyone sees a language like Nim and gives up because the community is small, then all new mainstream languages will be owned by large corporations like Google and Apple. Do you really want to live in a world like that? :)

                                          1. 3
                                            1. 1

                                                I have tried it; the GC is way too optimistic, so under high load you would see memory being wasted. I love the syntax and power of the language, but it still falls short when you can’t compile a single binary (like in Go) and end up with weird cross-compile issues. Nim is way more efficient in terms of memory and GC overhead.

                                              1. 1

                                                Cannot compile single binary? What do you mean by that?

                                                1. 1

                                                    Let me rephrase: the binary is not standalone with everything statically linked (libssl and some other dependencies). I had to recompile my binaries on the server to satisfy the dynamically linked libraries at the particular versions required.

                                                  1. 5

                                                    I think that’s more a result of Go having the manpower to develop and maintain an SSL library written in Go. As far as I understand, if you were to write an SSL library in 100% Crystal you wouldn’t have this problem.

                                                    By the way, Nim goes a step further. Because it compiles to C you can actually statically embed C libraries in your binary. Neither Go nor Crystal can do this as far as I know and it’s an awesome feature.

                                                    1. 3

                                                      Is there a distinction between “statically embed C libraries in your binary” and “statically link with C libraries”? Go absolutely can statically link with C libraries. IIRC, Go will still want to link with libc on Linux if you’re using cgo, but it’s possible to coerce Go into producing a full static executable—while statically linking with C code—using something like go install -ldflags "-linkmode external -extldflags -static".

                                                      1. 2

                                                        There is a difference. Statically linking with C libraries requires a specially built version of that library: usually in the form of a .a or .lib file.

                                                          In my experience, there are many libraries out there which are incredibly difficult to statically link with; this is especially the case on Windows. In most cases it’s difficult to find a version of the library that is statically linkable.

                                                        What I mean by “statically embed C libraries in your binary” is: you simply compile your program’s C sources together with the C sources of all the libraries you depend on.

                                                        As far as Go is concerned, I was under the impression that when you’re creating a wrapper for a C library in Go, you are effectively dynamically linking with that library. It seems to me that what you propose as a workaround for this is pretty much how you would statically compile a C program, i.e. just a case of specifying the right flags and making sure all the static libs are installed and configured properly.

                                                    2. 2

                                                      I suppose you built with --static?

                                                      1. 2

                                                        You have to jump through quite a few hoops to get dynamic linking in go.

                                                        By default it statically links everything, doesn’t have a libc, etc.

                                                      2. 1

                                                        It’s not uncommon or difficult in go to compile a webapp binary that bakes all assets (templates, images, etc) into the binary along with a webserver, HTTPS implementation (including provisioning its own certs via ACME / letsencrypt), etc.

                                                        1. 1

                                                              I only have a passing familiarity with Go’s tooling; how do you bake in assets?

                                                          1. 1

                                                                There are different approaches; https://github.com/GeertJohan/go.rice, for example, supports 3 of them (see “tool usage”).

                                                      3. 1

                                                          I think he means the ability to statically build [1] binaries in Go. I’d note that this is a feature that is not so common and is hard to achieve. You can do this with C/C++ (maybe Rust), but it has some limits, and it’s hard to achieve with big libraries. Not having statically built binaries often means that you need a strong sense of what you need and to what extent, or that you need good packaging/distribution workflows (fpm/docker/…).

                                                          It’s a super nice feature when distributing software (for example tooling) to the public, so it feels like “here is your binary, you just have to use it”.

                                                        [1] https://en.wikipedia.org/wiki/Static_build

                                                  2. 1

                                                    The “programming by duct taping 30 pip packages together” method of development is pretty new, and it isn’t the only way to program. Instead, you grow the dependencies you need as you build your app, and contribute them back once they’re mature enough.

                                                    More time consuming, but you have total control.

                                                  1. 2

                                                    Interesting article!

                                                    The method getRestaurantMenus, when simultaneously invoked by many coroutines, will result in one of the coroutines winning the race condition and successfully entering the body to execute fetchMenuFromRemoteCacheOrDatabase.

                                                    It looks like this is solving the cache stampede problem with the locking approach, but using deferred coroutines for the locking. Couple of questions for the author:

                                                    1. Have you considered working with a CDN cache to eliminate stampedes? With a one second cache, DoorDash should be able to reduce the number of incoming requests to a single menu to the number of CDN PoPs per second.
                                                    2. For the other requests that are waiting, do they serve stale data and return, or just wait until the winning coroutine’s database read completes?
                                                    1. 2

                                                      Hey, if you look closely we are using the Deferred not as a locking mechanism but as a grouping mechanism. The best part about this approach is the latecomers: if your reads are expensive, the readers arriving towards the end (when the Deferred is about to be fulfilled) see lower latency. To answer your questions:

                                                      1. The above-mentioned scenario is just an example; of course one can use a CDN for it. We have done something similar where it was applicable. We use this technique in various places, including identity systems, where putting up such information would be a bad idea.
                                                      2. The other coroutines just wait for the winning coroutine to complete its read. You can have all sorts of variations on top of it, e.g. some sort of timeout that returns stale data if the scenario permits, or starting your own DB read. The gist resides in using promises to avoid repeated reads (a rough sketch follows below).
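
                                                      For readers who want to see the shape of it outside Kotlin, here is a rough sketch of that grouping idea with plain promises in TypeScript; the Menu type and the body of fetchMenuFromRemoteCacheOrDatabase are placeholders, not the real implementation:

                                                      type Menu = { restaurantId: string; items: string[] };

                                                      // Stand-in for the expensive remote cache / database read.
                                                      async function fetchMenuFromRemoteCacheOrDatabase(restaurantId: string): Promise<Menu> {
                                                        return { restaurantId, items: [] };
                                                      }

                                                      // One in-flight promise per key: the first caller starts the read,
                                                      // latecomers just await the same promise instead of repeating it.
                                                      const inFlight = new Map<string, Promise<Menu>>();

                                                      function getRestaurantMenu(restaurantId: string): Promise<Menu> {
                                                        const pending = inFlight.get(restaurantId);
                                                        if (pending) return pending;

                                                        const read = fetchMenuFromRemoteCacheOrDatabase(restaurantId)
                                                          .finally(() => inFlight.delete(restaurantId)); // allow a fresh read afterwards

                                                        inFlight.set(restaurantId, read);
                                                        return read;
                                                      }

                                                      The timeout or stale-data variations from point 2 would wrap the returned promise in the same way.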
                                                    1. 2

                                                      I don’t want to burst your bubble, but there are sub-200-byte implementations in C ([this one](http://j.mearie.org/post/1181041789/brainfuck-interpreter-in-2-lines-of-c), for one).

                                                      With whitespace, that’s something like (according to this):

                                                      s[999], *r=s, *d, c;
                                                        
                                                        main(a, b)
                                                        {
                                                            char *v=1[d=b];
                                                            for(;c = *v++ % 93;)
                                                                for(b = c%7 ? 
                                                                        a &&
                                                                            (c & 17 ? 
                                                                                  c & 1 ? 
                                                                                      (*r -= c - 44)
                                                                                      :(r += c - 61)
                                                                                   :c & 2 ?
                                                                                      putchar(*r)
                                                                                      :(*r = getchar())
                                                                            ,0)
                                                                        :v;
                                                                    b&&c | a * *r;
                                                                    v=d)
                                                                        main(!c,&b-1);
                                                            d = v;
                                                        }
                                                      

                                                      Given that JS is a C-style language, surely some of these tricks would port fine to reduce your byte count even further? Of course, at the cost of readability – yours is still quite readable. But if golfing is the goal…

                                                      1. 1

                                                        Thanks for the code. I have already seen these minimal implementations; I am trying to push hard on minimizing while keeping the code readable. I liked the recursive approach here; I’m still looking at what I can pick up to minimize the code even further, or what I can write so the code stays readable while the uglifier compresses it hard.

                                                      1. 5

                                                        I am using Windows 10 full time and have never used Edge. I don’t think Microsoft ever recovered from the IE curse, even with the browser rebranding.

                                                        1. 2

                                                          Even with the rebranding it’s still lagging behind on features and support.

                                                        1. 7

                                                          Well, what can I say; there is already a reply article: http://blog.breakthru.solutions/re-moving-from-php-to-go-and-back-again/

                                                          1. 11

                                                            I find arguments of the style “why did Facebook do X if there weren’t issues” (in this case, build HHVM) or “Uber uses it for service development” pretty useless. It is interesting from the perspective of someone building an ecosystem; it’s not interesting for users that aren’t building the next Facebook or Uber.

                                                            Facebook is, in the scope of all software development happening, a fringe case. Their practices and decisions are hard to apply at smaller scales, even if their tech speakers say otherwise.

                                                            1. 4

                                                              Yeah, his analysis of Go as a language revealed a highly limited understanding of it. I suspect he kept trying to write OO PHP and then got frustrated when it didn’t work the way he thought it would.

                                                            1. 5
                                                              Raspchat

                                                              I am working on refactoring the frontend of http://raspchat.com/. Previously it was written in Vue 1 without any kind of packaging tool (webpack or rollup). I am refactoring the frontend to be simpler, with a new idea around chatting in multiple rooms, this time with hyperapp and rollup. Staying away from React :) and trying to keep it minimal. A few weeks back I switched the backend from Go to Node.js (for simplicity in the codebase). I hope pretty soon I will finish the pending tasks and hit v1.0.

                                                              1. 2

                                                                Looks nice; however, the download button doesn’t work. Have you considered making the server actually IRC-compatible? It’s quite a simple protocol.

                                                                1. 1

                                                                  The download link will point to GitHub. About IRC, I have similar ideas, but every time I think about it, it raises the question of whether I should just use an IRC server with Node.js doing the WebSocket relaying, which leads me down the path of https://github.com/kiwiirc/webircgateway/. I think I will keep it simple for now and gradually evolve it into something bigger.