1. 1

    HAProxy is great, never gave me any trouble, did its tasks perfectly.

    1. 9

      Since I’m sure I’m not the only one who missed the memo on Gemini: https://en.wikipedia.org/wiki/Gemini_(protocol)

      1. 3

        Just curious, how did you like using Svelte for this, Elton/1ntEgr8?

        1. 8

          It was my first time using Svelte, and I really enjoyed it. Can’t say that my code is idiomatic, but I thought Svelte was a refreshing take on frontend dev

          1. 1

            Thanks. I started to port a small project over. There are some limitations, but it’s definitely interesting (I also use esbuild. I feel so modern).

        1. 5

          Best bonus situation I’ve ever seen, management handed the head of the engineering dept a pile of money for bonuses and said “figure out how to divide this money up among your department based on merit”, then the head of engineering said “everyone here works their ass off” and split it evenly among everyone.

          Think that’s a pretty good example to set.

          1. 3

            In my experience, people can be divided into two categories:

            1. People who do work.
            2. People who do very little or no work.

            At many smaller companies it’s 100% the first category, but the larger the company gets, the more of category 2 seems to seep in. It’s just easier to hide that you’re a fuckup, I suppose.

            Barring a few rare exceptions, I find it very difficult to really rank the merit of people in category 1.

            1. 2

              People who do very little or no work.

              I don’t mind people who do little work. Not everyone has the same background and experience and work speed. I do, however, really have zero patience for people who do a negative amount of work. They slowly drain a company.

          1. 39

            This article is full of misinformation. I posted details on HN: https://news.ycombinator.com/item?id=26834128.

            1. 10

              This really shouldn’t be needed; even someone without any exposure to Go can see this is bunk with a minimal application of critical thinking. It’s sad to see it so highly upvoted on HN.

              When I was in high school, one of my classmates ended up with a 17A doorbell in some calculations. I think he used the wrong formula or swapped some numbers; a simple mistake we all make. The teacher, quite rightfully, berated him for not actually looking at the result of his calculation and judging whether it was roughly in the right ballpark. 17A is a ludicrous amount of current for a doorbell, and anyone can see that’s just spectacularly wrong. The highest-rated domestic fuses we have are 16A.

              If this story had ended up with 0.7%, sure, I can believe that. 7%? Very unlikely and I’d be skeptical, but still possible I suppose. 70%? Yeah nah, that’s just as silly as a 17A doorbell. The author should have seen this, and so should anyone reading this, with or without exposure to Go. This is just basic critical thinking 101.

              Besides, does the author think the Go authors are stupid blubbering idiots who somehow missed this huge elephant-sized low-hanging fruit? Binary sizes have been a point of attention for years, and somehow missing 70% wasted space of “dark bytes” would be staggeringly incompetent. If Go were written by a single author then I suppose it would have been possible (though still unlikely), but an entire team missing this for years?

              Everything about this story is just stupid. I actually read it twice because surely someone can’t make such a ludicrous claim with such confidence, on the cockroachdb website no less? I must be misunderstanding it? But yup, it’s really right there. In bold even.

              1. 6

                I think this is really interesting from a project management and public perception point of view. This is slightly different from your high school classmate, because they might not have been aware of the ridiculousness of their result. Of course, this situation could be the same, but I think it is more interesting if we assume the author did see this number, thought it was ridiculous, and wrote the article anyway.

                Someone doesn’t write a post like this without feeling some sort of distrust toward the tool they are using. For some reason, once that trust is lost, people will start making outlandish claims without giving any benefit of the doubt. I feel like this is similar to the Python drama which ousted the BDFL and to Rust’s actix-web drama which ousted the founding developer. Once trust is lost in whoever is making the decisions, logic and reason seem to just go out the window. Unfortunately this can lead to snowballing and people acting very nasty for no real reason.

                I don’t have much knowledge of the Go community or its drama, and in some sense this is at least put much more nicely than some of Rust’s actix-web drama (which really threw good intent out the window), but I’d be curious to know what happened that lost the trust here. The fix might be as simple as being upfront about the steps being taken to reduce binary size; even if they are not impactful, that might win back trust in this area.

                1. 3

                  It’s my impression that the Python and actix-web conflicts were quite different; with Python, Guido just quit because he got tired of all the bickering, and actix-web was more or less similar (AFAIK neither was “ousted”, though; both quit on their own?) I only followed those things at a distance, but that’s the impression I had anyway.

                  But I think you may be correct about the lack of trust – especially when taking the author’s comments on the HN story into account – though it’s hard to say for sure as I don’t know the author.

                  1. 2

                    Perhaps I am over-generalizing, but I think they are all the same thing. With Rust’s actix-web it essentially boiled down to some people having a mental model of Rust that involves no unsafe code (which differed from the primary developer’s mental model). At some point, this went from “let’s minimize unsafe” to “any unsafe is horrible and makes the project and developer a failure”, regardless of the validity of the unsafe blocks. Unfortunately it devolved to the point where the main developer left.

                    In the Go situation it seems very similar. Some people have a mental model that any binary bloat is unacceptable, while the core devs see the situation differently (obviously balancing many different requirements). It seems like this article is that disagreement boiling over to the point where any unaccounted-for bits in a binary are completely unacceptable, leading to outlandish claims like 70% of the binary is wasted space. Hopefully no Go core developers take this personally enough to leave, but it seems like a very similar situation where different mental models and lack of trust lead to logic and benefit of the doubt getting thrown out the window.

                    It is hard to say what is going on for sure, and in many ways I’m just being an armchair psychologist with no degree, but I think it is interesting how “common” this trend is. At some point, projects that are doing a balancing act get lashed out at for perceived imbalances being misconstrued as malicious intent.

                    1. 1

                      I don’t think you’re correctly characterizing the actix situation. I think the mental model was “no unnecessary unsafe”. There were some spots where the use of unsafe was correct but unnecessary, and others where it was incorrect and dangerous. I think there was poor behavior on both sides of that situation. The maintainer consistently minimized the dangerous uses and closed issues, meanwhile a bunch of people on the periphery acted like a mob of children and just kept piling on the issues. I personally think someone should have forked it and moved on with their lives instead of trying to convince the maintainer to the point of harassment.

                2. 2

                  on the cockroachdb website no less

                  CockroachDB is on my list to play with on a rainy afternoon, but this article did knock it down the list quite a few notches.

                  1. 2

                    We use it as our main database at work and it’s pretty solid. The docs for it are pretty good as well. But I definitely agree, this is a pretty disappointing article.

              1. 3

                That was fun. Great idea, great execution!

                1. 3

                  It’s very rare, but I would actually like to see a video explaining this better.

                  (Good job with the minimal JS dependencies! uMatrix is very happy with this)

                  1. 1

                    Prometheus with Grafana (or similar)?

                    1. 2

                      Just FYI, if you run Postgres CI tests, you can set this in postgresql.conf:

                      fsync = off
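
                      If durability doesn’t matter at all (as with a throwaway CI database), a couple of related settings are commonly flipped alongside it for a further speedup:

                      synchronous_commit = off
                      full_page_writes = off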
                      
                      1. 1

                        I believe many other DBMSes support this. For example, Tarantool allows a similar speedup (see https://www.tarantool.io/en/doc/latest/reference/configuration/#confval-wal_mode)

                      1. 2

                        Oh, the nostalgia. To think that I’m so old that I’ve experienced a big chunk of computer history is mind-blowing. I started out with a Commodore 64 and a VIC-20. I used an Intel 8086 and an Intel 8088; my 66 MHz 486 I remember fondly, as I remember my Pentium from Digital (what a beast it was). After that point it went fast, and from then on I cannot really remember any particular computer as very special, up until my first Mac with OS X.

                        1. 1

                          OS X

                          … which also turned 20 a week ago.

                          1. 3

                            I’ve never been a Mac user, but I wonder if the upgrade path/user experience feels much different over these 20 years of OS X compared to Windows (either 3.11 up to 5 years ago, or Win98/2000 up till Win 10)…

                            Because despite having used all these Windows systems (3.11, NT 4, 95 A/B/C, 98, 98 SE, Me, 2000, XP, 7, and 10, but not Vista or 8/8.1), and while some people might say the GUI is kinda samey or had a clear evolution, my /experience/ was so vastly different.

                            3.11 was basic but worked.

                            95A was a complete shitshow; it crashed daily and I had to reinstall once a month, at least

                            95 B and C were tolerable

                            98 was somehow fresher but less stable again

                            98 SE was pretty good

                            Me I don’t really remember

                            2000 was awesome (after the first few months with driver problems for some games)

                            XP was ok

                            7 was solid

                            10 is a step back in my opinion but it’s close to 7 in quality

                            1. 1

                              I have been using Windows since 3.11 and used only Windows (and DOS) up until around Mac OS X. Never used a Mac before that point.

                              But to me it has seemed like Windows releases have been more discrete, while OS X releases have been more continuous. I mean, if I think back to my original OS X, I kind of remember it being just the same as what I am using today (Big Sur), which it obviously wasn’t. Windows releases, however, have each been more distinct from the previous version, in my mind.

                              I also used OS/2 (was that what it was called?) alongside Windows 3.11. But to be frank, back in those days I was mostly using DOS. Windows 3.11, to me as a gamer at the time, didn’t really add anything for my needs.

                              1. 1

                                3.11 was basic but worked.

                                I worked at the helpdesk in a university library back then. I can’t remember how many people lost their complete dissertations to crashing Windows 3.11 machines (combined with having no idea that you need to keep multiple backups on those slow and unreliable floppy disks). Whatever came after might have been bad, but all of it has been better than 3.11.

                                1. 1

                                  Interesting. I mean, we only had it for like 2 years (on one PC) and it was mostly used for Word and Excel, but I can’t remember any crashes at all; that’s why I was so surprised that 95A was so bad…

                                2. 1

                                  The 1984 original Mac was “the first [UI] worth criticizing”, to misquote Alan Kay. Once you upgraded the RAM it was very capable, and quickly launched desktop publishing once PageMaker was released.

                                  The later 80s brought color and bigger screen support, some limited multitasking, networking, and a huge filesystem improvement.

                                  System 7 in 1991 was a big step with a fully-color GUI, multitasking, IAC, and tons of usability improvements. But under the hood it was still quite primitive with no memory protection or pre-emptive scheduling.

                                  The rest of the 90s saw only incremental improvements since Apple kept working on a series of failed attempts to build a better OS from scratch and/or port to x86 (Pink/Taligent, Star Trek, Maxwell/Copland).

                                  Finally in 2001 came Mac OS X, which was a NeXT-derived OS using the Mach microkernel, BSD Unix, the “AppKit” evolution of OpenStep, the “Carbon” porting layer for the old Mac APIs, and the “blue box” classic OS emulator to run unported apps. 10.0 was buggy and incomplete, but by 10.2 in 2002 it was solid.

                                  1. 1

                                    When I started working we had a lot of OS 9 Macs; I only used them to test web pages in Internet Explorer. They crashed often, and to a casual Windows/Linux user they weren’t great, but they were usable.

                                    When a coworker showed me OS X (must have been 10.0) it was kinda amazing, but I didn’t use it a lot, so I can’t really comment. But I’ve always felt that Mac users have sometimes lamented good and bad releases, with hardly any deal breakers making them switch away over a certain release; more of a “been sick of it for a while”…

                            1. 8

                              Brilliant, thanks for posting!

                              Apart from the main point, promoting interest in Alexander’s book A Pattern Language (which I just ordered), I was surprised to finally find the original raison d’être of Design Patterns: to paste in bits of code that overcame the lack of convenience mechanisms in earlier versions of C++ and Java.

                              I realize now that I had never placed the GoF book in its original context, especially its time. Back then, this made a lot of sense. As much sense as the magazines with printed program code before the internet. It might even have made sense to call these bits of code ‘patterns’.

                              But this also points out very clearly that clinging to these patterns today is absurd. Believing they hold some sort of fundamental truth about programming is a big fallacy. In fact, claiming that any aspect of structuring a program relies on recurring patterns makes no sense. Common terminology can be very useful, yes, but recurring design structures, no. Every programming scenario, and thus every design, is different almost by definition: if a problem occurred before, it will already have been solved, so you don’t need to solve it again.

                              In other words: it’s finally time to decisively ditch the religious status of the GoF book and develop new ideas about structuring code that fit our current times.

                              1. 15

                                I think it was a great disservice to the industry as a whole, and to their students, when instructors started teaching design patterns as if they were first principles of programming. If you read the GoF book it is pretty clear that even in the context of working around C++ shortcomings, the patterns are primarily descriptive, not prescriptive: “We see that a common thing people already do when confronted with a problem shaped like X is to use a solution shaped like Y. Here’s a name for Y so we can have conversations about it and not have to explain what we’re talking about every single time.” I never got the sense from the GoF book that the authors intended it to be a rulebook.

                                1. 11

                                  I was super frustrated when I was asked to cite and explain GoF design patterns while interviewing for a (senior) Go SE position. I mean, OK, a few of them kinda maybe still carry over, and thus could be useful vocabulary when mentoring juniors, but the majority are not relevant at all in Go! Basically, just use a closure, most of the time… Fortunately, I’ve recently learnt that, for other reasons as well, it’s probably good for me that they didn’t want to hire me in the end.
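
                                  To make “just use a closure” concrete, here is a minimal, purely illustrative Go sketch (all names are made up) of how the GoF Strategy pattern tends to collapse into a func type plus a closure:

                                  ```go
                                  package main

                                  import "fmt"

                                  // In classic GoF style, Strategy is an interface plus one type per
                                  // algorithm. In Go, a func type does the same job, and a closure
                                  // supplies any per-strategy state.
                                  type Pricer func(amount float64) float64

                                  func checkout(total float64, price Pricer) float64 {
                                  	return price(total)
                                  }

                                  func main() {
                                  	// The closure captures the discount rate; no interface,
                                  	// no concrete strategy struct.
                                  	rate := 0.1
                                  	discounted := func(amount float64) float64 { return amount * (1 - rate) }
                                  	fmt.Println(checkout(100, discounted)) // 90
                                  }
                                  ```

                                  The whole “family of interchangeable algorithms” idea is just first-class functions here.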

                                  1. 4

                                    I always feel that I learn more about the company from the questions they ask than they learn about me from my answers. That’s good for me, though :)

                                2. 4

                                  I really don’t think Design Patterns were “to paste in bits of code that overcame the lack of convenience mechanisms in earlier versions of C++ and Java”. First, they’re not language specific — when the book came out, I recognized a lot of them from Smalltalk-80. Second, they’re not as simple as copy/paste. Third, good frameworks incorporate the appropriate patterns for you so you don’t have to re-implement them.

                                  The author really seems to misunderstand some of this stuff. His statement that (paraphrasing) “the Iterator pattern is for sucky languages” misses the point that the reason iterating is so easy in higher level languages is because their collections already have iteration baked in. If you implemented a custom collection like, say, a b-tree in Perl, you’d need to do some work to be able to iterate it simply. And that work would probably look just like the Iterator pattern.
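
                                  As a sketch of that point (a hypothetical collection with illustrative names, in Go rather than Perl): once you write a custom container, an explicit iterator with exactly the GoF shape tends to fall out on its own.

                                  ```go
                                  package main

                                  import "fmt"

                                  // A tiny custom collection (a linked list standing in for the
                                  // b-tree example). The built-in range doesn't know how to walk it.
                                  type node struct {
                                  	val  int
                                  	next *node
                                  }

                                  type List struct{ head *node }

                                  func (l *List) Push(v int) { l.head = &node{val: v, next: l.head} }

                                  // Iterator is the GoF shape: traversal state held in a separate
                                  // object, exposed via HasNext/Next.
                                  type Iterator struct{ cur *node }

                                  func (l *List) Iter() *Iterator    { return &Iterator{cur: l.head} }
                                  func (it *Iterator) HasNext() bool { return it.cur != nil }
                                  func (it *Iterator) Next() int {
                                  	v := it.cur.val
                                  	it.cur = it.cur.next
                                  	return v
                                  }

                                  func main() {
                                  	l := &List{}
                                  	for _, v := range []int{1, 2, 3} {
                                  		l.Push(v)
                                  	}
                                  	for it := l.Iter(); it.HasNext(); {
                                  		fmt.Println(it.Next()) // 3, 2, 1 (pushes go to the head)
                                  	}
                                  }
                                  ```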

                                  1. 2

                                    Maybe you want to read it again, especially the Postscript, although I’ll warn there are spoilers in there.

                                    You’re arguing with something the author isn’t saying. I agree that a book about designing a town is more useful (and less harmful) to programmers than the book about iterators.

                                    If you implemented a custom collection like, say, a b-tree in Perl, you’d need to do some work to be able to iterate it simply.

                                    So don’t do that? One important part of Alexander’s patterns is (paraphrasing) that the implementation serves the needs instead of the other way around. Most problems don’t need a b-tree (and most people that use one could choose something better), and even those that do have a b-tree don’t need to (or even shouldn’t) iterate it.

                                    1. 2

                                      Most software design needs good data structures, they’re critical for efficiency and they also frame how you think about the problem. A language that makes designing a new data structure difficult is a problem.

                                      That said, although I like the core idea overall, I think the author picked a terrible example in iterators. As he says, Alexander’s design patterns were about delegating aspects of design[1]. Iterators do exactly that: they exist to allow someone who is an expert in designing an efficient data structure to build something that can then be used by someone who is focusing on high-level application design. That’s exactly the same idea that Alexander has in, for example, his nook or niche patterns: allowing someone with fine-grained local knowledge to design a space, without needing the person laying out the floor to understand that.

                                      There’s a lot more to Alexander’s ideas, particularly in how to build cohesive overall structures, but I don’t think it’s fair to say that GoF-style patterns are something completely different, they’re just a subset of Alexander’s vision. And, to be fair to the software engineering community, they’ve done better than architects by embracing even part of Alexander’s vision. I’ve never worked in a building that wouldn’t have been improved by first hitting the architect repeatedly with The Timeless Way of Building and then making them read A Pattern Language.

                                      [1] I’d thoroughly recommend reading everything he wrote, but if you don’t have time and want to learn something directly relevant to your day-to-day job, the chapter in Peopleware on office design cites Alexander and gives one of the best one-page summaries I’ve ever read of his core ideas.

                                      1. 2

                                        Most software design needs good data structures, they’re critical for efficiency …

                                        I have heard this before, but I think it’s a myth; it’s repeated so often that people believe it to be true without ever evaluating it for themselves, and that’s a shame. Spend a few years in an Iverson language and you will be convinced otherwise: all you need are arrays and good tools for operating on arrays.

                                        • Instead of a b-tree, use an array and binary search

                                        • Instead of a hash table, use two arrays of the same length, and order them by the hash position of the value in the first.

                                        • Instead of an ordered hash table, keep the hash positions as a permutation index in a third array.

                                        • Instead of a bloom filter, just use one array and the same hash-position function you used for the hash table.

                                        • Instead of an LSM tree, use an array of arrays, and an array of lengths.

                                        And so on.
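
                                        For instance, the first substitution looks like this (an illustrative sketch in Go rather than an Iverson language, just to keep it readable here):

                                        ```go
                                        package main

                                        import (
                                        	"fmt"
                                        	"sort"
                                        )

                                        // A sorted slice plus binary search covers the lookup side of a
                                        // b-tree: O(log n) search over a flat, cache-friendly array.
                                        func lookup(keys []int, k int) (int, bool) {
                                        	i := sort.SearchInts(keys, k) // smallest index with keys[i] >= k
                                        	return i, i < len(keys) && keys[i] == k
                                        }

                                        func main() {
                                        	keys := []int{2, 3, 5, 7, 11, 13}
                                        	i, ok := lookup(keys, 7)
                                        	fmt.Println(i, ok) // 3 true
                                        	_, ok = lookup(keys, 6)
                                        	fmt.Println(ok) // false
                                        }
                                        ```

                                        The trade-off, of course, is that inserts into a sorted array are O(n), which is part of why b-trees exist at all.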

                                        Arrays work better than anything else for performance. Operating systems have excellent support for mapping arrays to permanent storage. CPUs have special logic and operators for dealing with arrays. The fastest, highest-performing databases in the world don’t use anything more exotic than arrays.

                                        … and they also frame how you think about the problem.

                                        I think you should take a look at Notation as a Tool of Thought. I believe so strongly that this is a better way to think about problems than “data structures” that I would prefer arrays even if they weren’t faster.

                                        Thankfully I don’t have to choose.

                                        There’s a lot more to Alexander’s ideas, … I don’t think it’s fair to say that GoF-style patterns are something completely different

                                        I think we must’ve read different books. I enjoy a humanist and organic design philosophy, and I can’t agree that “design patterns” is it. I’m also not sure Christopher Alexander would agree with you:

                                        When I look at the object-oriented work on patterns that I’ve seen, I see the format of a pattern (context, problem, solution, and so forth). It is a nice and useful format. It allows you to write down good ideas about software design in a way that can be discussed, shared, modified, and so forth. So, it is a really useful vehicle of communication. And, I think that insofar as patterns have become useful tools in the design of software, it helps the task of programming in that way. It is a nice, neat format and that is fine.

                                        However, that is not all that pattern languages are supposed to do. The pattern language that we began creating in the 1970s had other essential features. First, it has a moral component. Second, it has the aim of creating coherence, morphological coherence in the things which are made with it. And third, it is generative: it allows people to create coherence, morally sound objects, and encourages and enables this process because of its emphasis on the coherence of the created whole.

                                        1. 2

                                          Arrays work better than anything else for performance.

                                          But the ways APL uses arrays are terrible for performance. A lot of common idioms do huge amounts of work on large intermediate arrays to produce a particular transformation, kind of like those Rubik’s Cube macros where you flip six or seven edges to rotate one pair of corners.

                                          My college compilers class was taught by Jim Kajiya, who had worked on an APL compiler. During the discussion of APL he told us how getting good performance from the language was terribly difficult — it relied on a lot of knowledge of those idioms used in the language, using pattern recognition to optimize those into a more efficient direct implementation of their effect. And such a compiler is only as good as its library of idioms, so I imagine that if you use one it doesn’t know, or use one with slightly different syntax, your performance plummets.

                                          1. 2

                                            Spend a few years in an Iverson language and you will be convinced otherwise: All you need are arrays and good tools for operating on arrays.

                                            I think you’re disagreeing with people because you use words to mean different things. All of the things you say you can use instead of data structures are implementations of those data structures.

                                            1. 1

                                              All of the things you say you can use instead of data structures are implementations of those data structures.

                                              I’m not sure I’ve ever heard anyone suggest that a b-tree is the same as a sorted array. The performance difference is striking.

                                              In any event, they’re definitely different in the critical way: they all use the same “iterator”, which should prove that an iterator isn’t a pattern but a single operator.

                                              I think you’re disagreeing with people because you use words to mean different things.

                                              I’m agreeing with the author of the linked post. I’m disagreeing with you. I also think abuse is a poor form of debate.

                                          2. 1

                                            There is also Alexander’s OOPSLA keynote. Here is a quote which shows that he aspired to a lot more than GoF:

                                            I understand that the software patterns, insofar as they refer to objects and programs and so on, can make a program better. That isn’t the same thing, because in that sentence “better” could mean merely technically efficient, not actually “good.” Again, if I’m translating from my experience, I would ask that the use of pattern language in software has the tendency to make the program or the thing that is being created morally profound—actually has the capacity to play a more significant role in human life. A deeper role in human life. Will it actually make human life better as a result of its injection into a software system?

                                        2. 1

                                          Which GoF patterns can’t be reasonably implemented as a library of abstract classes (or language equivalent), distributed with a language’s package manager? After reading these slides I went back over the GoF patterns and realized that pretty much all of them could. Distributing them in a book — it even came with “sample” implementations in C++ and Smalltalk! — and calling them “patterns” made sense in the 90s, before any reasonably-modern package managers existed (and when the Internet barely existed), but it doesn’t feel like it still makes much sense today.

                                          Plenty of the “patterns” are fairly… dated, as well. Memento for undo, for example, rather than a lens-like approach of collapsing an immutable series of state updates into the current state (and undo is simply popping off the last state update). GoF feels more like a 1994-era utility library that got some things right — and thus those parts got ported to other languages — and plenty of things wrong, too.

                                          1. 1

                                            So one implementation of a pattern in one language makes the book unnecessary? That makes no sense to me. The next language implementor has to just copy the existing implementation and translate it to their language, even if the languages are wholly different? This is kind of like saying we don’t need texts on, say, B-trees because you can just go get a B-tree class from a library.

                                            Maybe the patterns in the book seem so obvious to you that descriptions and explanations are unnecessary? I’d say that’s more a result of the book (and other usage of those patterns) having done its job well.

                                      1. 3

                                        I haven’t used this feature of Postgres in production before; I’d be curious what use cases other people have found for it (ones that are actively used)?

                                        I suppose, from reading the post and looking online, that it’s possible for messages to be lost? I.e., if a client isn’t connected and listening when a NOTIFY fires, then the message is just lost?

                                        Interesting. Thanks for sharing.

                                        1. 3

                                          I’ve used it in various situations to hook into other systems, for example to avoid repeatedly querying for new rows. One can even combine this with triggers and notify only under certain conditions, and maybe even mention what has happened, for example that “row with ID X has been updated”.

                                          So a piece of software would start, check the database and do its thing; then it would only continue doing its thing when a notify comes in, and otherwise it can idle instead of querying all the time.

                                          1. 2

                                            I used it about a decade ago before things like RabbitMQ were in vogue. We had a bunch of workers that sat and listened for notifications. The web interface would insert rows into the work queue, the workers got notified, and then updated the work rows with a success/fail. Worked awesome and meant that the crusty old PHP web app didn’t need any new dependencies added to it.

                                            1. 2

                                              We use it for webhooks. We interact with a third-party which may not have the data right away, and will instead call a webhook. We save the data to cache it for later, and as part of that we trigger a notify that allows the original caller that was requesting the data to await the update to that row.

                                              1. 1

                                                Same as @tonyarkles noted below: I use them for queues, which saves adding a whole new system.

                                                In addition I used them to monitor balances of users. If the balance gets under a certain threshold a trigger fires the notify, and some service picks that up.

                                                In both cases the payload is empty. The receiver of the notify will figure out what needs to happen, and the receivers are also woken up every once in a while with a timer, so it doesn’t matter much if a notify gets lost.

                                                Works very well, super easy to test, no dependencies to maintain.

                                                1. 1

It’s super useful for refreshing configuration shared between multiple systems.

We at https://hanami.run have a mail server and a web app. Users use the web app to configure their email forwarding rules; the mail server listens to Postgres to refresh its configuration.

Losing messages is totally fine in our case, because upon restarting the mail server queries Postgres and gets fresh data anyway.

As in, listen/notify is there to signal changes so something can be done. If a client isn’t connected, then upon reconnecting it knows what it has to do (reload the whole configuration).

                                                1. 6

                                                  Can’t wait for Generics and I don’t think I am ever going back to a JVM based language!

                                                  1. 2

Maybe Go will be worth another look after generics. I guess it should make implementing a sequence/stream API more feasible, although I suspect the performance would suffer: Go probably can’t optimize the virtual function calls as much as a JIT can.
Coming from a JVM background, and having recently written a CLI app in Go, I found the experience extremely painful, and I don’t quite understand why one would give up a higher-level language to work in Go for non-trivial applications.
Being able to easily build and cross-compile native binaries is a great feature, especially for CLIs, but if running a JVM isn’t a major constraint, I’d take any major JVM language over Go.

                                                    1. 2

This kind of reflects my views about Go as well. I think once you step outside the “simplicity” dogma, you quickly realize how messy the code gets with interface{} casts everywhere. I use generics on a daily basis! Even a basic cache requires generic support. I don’t want to litter my code with casts and ifs when there is a decent solution that does all of the manual work for you; that is what compilers were invented for, rather than just generating plain code. You can obviously ignore generics if you don’t need them, but not having them is a big pain in the a**.

                                                      1. 7

                                                        you quickly realize how messy the code gets with interface{} casts everywhere.

                                                        It should essentially never happen that you use interface{} in day-to-day code. If you’re having that experience, I can understand why you’d be frustrated! You’re essentially fighting the language.

                                                        interface{} and package reflect are tools of last resort that really only need to be used in library code.

                                                    2. 1

                                                      Two more releases, likely :)

                                                      I’m curious to see how the generics will work out in practice. But I do look forward to having a sane assert.Equal().

                                                    1. 13

This release is a big deal for apps running in containers. It’s complicated to explain and I’m on mobile, but previously memory was lazily reclaimed by the OS, leading to containers being killed for exceeding memory limits. This definitely affected Borg jobs at Google, and presumably affects k8s too.

                                                      Now the kernel will eagerly reclaim that memory, allowing the container to stay within memory limits more easily.

                                                      1. 3

                                                        And they also cleaned up how you read memory usage. 1.16 is a really nice release, improving a lot of small things.

                                                      1. 2

                                                        This is great!

While I completely understand the sentiment of having that one tool that everyone uses and knows about, I think it comes at the price of approaching problems in fewer ways, and perhaps missing out from a technological point of view. I don’t think that’s the case for Prometheus, but sometimes this also makes APIs specific to a single client implementation.

Having more than one option, and therefore also more than one interest group, can be very beneficial, so I really appreciate this project.

                                                        That’s not to say that creating many Grafana clones is the right approach, but having some alternatives is certainly a good thing. Also this very much doesn’t seem to be a clone, as mentioned in the Readme.

Really nice. I think this could also be very useful for projects that measure things that aren’t your server instances, or for single-instance applications, like SoCi projects, where one simply wants to visualize some time series without user management, etc. I also really like the straightforward minimalism on the server side.

                                                        Just one nitpick, maybe it would make sense to specify the protocol as part of the source instead of enforcing HTTPS?

                                                        1. 1

                                                          Thank you, I’m glad you find this useful!

I do like having more options, and I was pleasantly surprised that it is possible to implement these kinds of alternative frontends for complex projects with relative ease. There’s a definite tradeoff with this project (being more lightweight and quicker to write a new board vs. being feature-complete and easier to use), but I think it’s a neat thing to exist.

                                                          In that vein, we also wrote an alternative frontend to ElasticSearch with similar tradeoffs and design ideas in mind.

                                                          Just one nitpick, maybe it would make sense to specify the protocol as part of the source instead of enforcing HTTPS?

                                                          Good point! I created an issue to at least default to the protocol the frontend is using.

                                                          1. 2

                                                            relative ease

And that it’s possible to write them in modern, plain JavaScript. No need for ES6 compilers or overly insane browser hacks.

                                                        1. 9

Nomad is cool because it works with technologies other than Docker containers. For example, Nomad can be used to orchestrate FreeBSD jails: https://papers.freebsd.org/2020/fosdem/pizzamig-orchestrating_jails_with_nomad_and_pot/

                                                          1. 3

And its exec driver is isolated with a chroot, which makes it super useful when migrating non-containerised workloads too.

                                                            1. 2

                                                              Anyone got any experience with that? Seems like a nice way to run plain binaries without having to use docker images (For example when the binaries are compiled with Go).

                                                              1. 4

I used the java driver and the exec one. They worked great, especially if you don’t require any special libraries to already be on the system.

                                                                1. 3

We’ve been using the java driver in production for over 2 years now; we also use the exec driver for smaller tools, basically shell scripts to back up the consul and nomad databases.

                                                                  1. 2

I’ve used the exec driver in presentation demos, where I am running a cluster of nomad VMs and have a directory mounted from the host with the apps to run.

I could of course host a docker registry on the host, but it’s not worth the hassle; I’d rather have simpler demos with less to go wrong!

                                                              1. 5

                                                                I’ve always had problems with RSS because I always found the list of “unread articles” stressful. I know you don’t need to read everything, but somehow my mind doesn’t cope well with the concept of “letting unread things remain unread” 🤷

                                                                I find that Lobsters/HN/Reddit works fairly well.

                                                                1. 2

I’m in the same boat. When I used newsboat a while ago, I had to mark everything as “read” weekly to keep out the noise.

                                                                  I decided to make my own feed reader a few months ago. It ended up becoming a CLI that I pipe into fzf to read feeds. No read/unread, no complex navigation, and notifications on new items. It’s worked really well for me. If read/unread is overwhelming, you may consider it.

                                                                  1. 1

                                                                    list of “unread articles”

                                                                    That’s for the “main news” sites, where I also find RSS doesn’t work so well. But it works great for following low-traffic blogs from people where I want to see everything they post.

                                                                    1. 1

                                                                      Same, so I made my own that doesn’t have that.

                                                                    1. 1

The first time I heard about io/fs was in the context of adding something like go-bindata to the default toolchain. Is that still the plan, or was that delayed/canceled?

                                                                      1. 5
                                                                      1. 1

                                                                        You should consider reworking your description fields. You should not be including the full post in the description.

                                                                        My website landing page is a feed, and as you can see, it includes all posts I’ve ever made, and remains tiny: http://len.falken.ink . My description fields are 1 sentence, describing my content.

                                                                        1. 18

                                                                          You should not be including the full post in the description

Why not? I prefer sites I can read in my aggregator completely (so I don’t have to deal with whatever fonts, colors, and font sizes the “real” site uses). The feed doesn’t need to include every article ever posted, though; the last few is fine. Keeping old articles around (or not) is up to the aggregator.
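For what it’s worth, a full-content item stays well within RSS 2.0. A minimal sketch of such an item (URLs and text are placeholders):

```xml
<item>
  <title>Example post</title>
  <link>https://example.com/posts/example</link>
  <guid>https://example.com/posts/example</guid>
  <description><![CDATA[
    <p>The entire post body goes here, HTML and all, so aggregators
    can render it without fetching the page.</p>
  ]]></description>
</item>
```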

                                                                          1. 7

                                                                            This is exactly why I put the article text in the description. I don’t think readers handle the Mara stickers that well though :(

                                                                            1. -1

                                                                              And that’s why you had problems, you used it for what it was not intended for.

                                                                            2. 1

                                                                              Because it’s for a description, what else do I have to say?

                                                                              1. 5

                                                                                Neither common practice nor the RSS 2.0 spec support your assertion that the description element should only be used for a short description.

                                                                                1. 1

                                                                                  It literally says “The item synopsis”….

                                                                                  Are we reading the same thing?

                                                                                  1. 5

I am reading: “An item may also be complete in itself”, which I interpret to mean that the whole post is allowed to be in there.

But even if you were technically right, it feels unnecessary and wasteful to require the user to fire up a browser to get the remaining two paragraphs of a three-paragraph post, because the first one was regarded as the intro/synopsis and is the only one allowed in the feed. When people do that, I always get the sense that they force you to do it to boost their egos through their pageview counters.

Text is easy to compress. If it is still too much, one can always limit the number of items in the feed and possibly refer to a sitemap as the last item, for the users who really use the feed to learn about everything on your site.

                                                                                    1. 1

If you read the spec’s entry for the description field, it says what I wrote…

I agree with the logic that if you include only some of the text and then require launching a browser to read the rest, it’s a waste.

If you’re delivering web content, though, you’ll need a browser; you just can’t get around that. On my website I don’t serve web content, I only serve plain text, for the exact reason you mention: I don’t want my readers to have to launch a browser to read my content.

                                                                            3. 5

                                                                              You should not be including the full post in the description.

Your root-page-as-feed idea is nifty, and I think there are plenty of scenarios where concise descriptions along those lines make good sense. Still, for probably the majority of blog-like things, having the full content of posts in the feed offers a better user experience.

                                                                            1. 4

I also love putting emojis and null in forms to see if they’re handled correctly :)

                                                                              1. 1

                                                                                don’t forget to add some tags, like <b>s!