Threads for tca

  1. 20

    I disagree.

    For better or for worse, NodeJS is just a runtime for Javascript. We similarly don’t have cython, rubinius, rails, or phoenix tags. People are too derpy to even mark software release articles correctly, so I’m bearish on asking them to properly mark javascript vs nodejs.

    However!

    You do have an excellent point that there is a category of articles whose primary component is a sort of “what the fuck?” feature. We pick on NodeJS–and especially npm–but we’ve seen the same thing in other ecosystems.

    Towards that end, I would suggest a wtf tag.

    This tag would help filter out all categories of articles whose main theme is “why the hell did this make it to production?” or “what the hell were these people thinking?”. That flavor extends far beyond the Node ecosystem examples you’ve given.

    Potential names for the tag:

    • wtf
    • clownshoes
    • schadenfreude
    • tirefire
    • shitshow
    • CADT
    • clusterfuck

    I would also suggest a hotness mod to make such articles dissipate a bit more quickly so we don’t get clogged up with news stories pointing out the latest disaster.

    1. 28

      I’m honestly not that enthusiastic in general about stories where the intended purpose is to laugh at the absurdity of someone else’s well-meaning hard work. Adding a tag means making it explicitly on-topic, and I would be happier if we don’t do that.

      1. 7

        I’m not sure I agree with the characterization that the purpose is laughing at someone’s well-meaning work. Maybe sometimes, but these are also cautionary tales about unintended consequences that all programmers can relate to on some level.

        Now if you are fatigued by that sort of thing and don’t want to see it, won’t a tag help with that? As it stands these stories get a fair amount of votes and visibility with no effective way to avoid them.

        My humble suggestion for such a tag name: facepalm

        1. 11

          Your response makes sense to me, and I certainly feel that it’s generally some of both.

          I wonder how anyone would feel about tag names like regrets or oops? The point being to cast it in the voice of the person whose bad decisions they were, to suggest a position of empathy.

          I kind of don’t expect a lot of people to seriously want either of those tags for this category of story, and I hope that thinking about why that might be illustrates why I feel this way.

          1. 16

            I like oops. I think it captures the essence of the stories without encouraging outright mockery.

            We’ve all gone oops.

        2. 5

          I understand where you’re coming from, and I respect that we might want to not have this as explicitly on-topic.

          That said, one of the best ways of learning tends to be with humor, and one of the most ubiquitous forms of humor is the misfortune of others. The slow dawning realization of how near one’s own actions are to somebody else’s comically large foulup sometimes only happens after joining in the mockery. “Hahaha wow what an idiot they didn’t even–wait, we don’t either…shit”.

          I think that having a tag both lets us mark stories as having this as a primary character in them, and also lets people who are sick of seeing reposts of schadenfreude filter it out. If we set the hotness correctly, they will fade away so that even if they are on-topic they are on-record as being something of fleeting interest, as we have with the rant tag.

          1. 11

            Terrible.

            From the about section of lobsters:

            It borrows some ideas from, while also attempting to fix problems specific to, websites such as Hacker News, Reddit, and Slashdot.

            This is the kind of stuff that falls under problems, and belongs on those sites. Over here, the aim is to be above it.

            Very disappointing to see more comments like this one popping up lately.

            1. 6

              Is the aim to be “above it”, though? One of the better tedu articles is basically just laughing at all the absurd and stupid things people do to seed random. There are doubtless other articles past and future that work in the same way.

              I completely agree that we don’t want to replicate the toxic environment of other communities: we want commenters to always be courteous and civil towards one another, even in disagreement. I humbly also assert that we want to avoid falling prey to content marketing and news spam. I understand your concern about the precedent this sort of thing would set.

              That said, people are going to post this sort of thing whether we like it or not. If you want it to stop, you have to flag such articles and say in the comments why and that you had done so…and then people will kvetch and downvote for meta interruptions. Ask me how I know this.

              So, we can label these stories and hotmod them so they dissipate rapidly, burn karma policing the submission comment threads, or do nothing and watch them roll in now and again.

              1. 4

                One thing I like about the tedu article you linked, compared to some others, is that at least it’s an article with some investigation, even if perhaps not my favorite style of investigation. The thing that I find very “HN-style” and less useful is when someone links to a random no-context mailing list post, bug report, or changelog entry that is supposed to outrage us for some implied reason, and I sorta worry that having an oops tag will encourage people to link that kind of thing under that label.

                1. 3

                  someone links to a random no-context mailing list post, bug report, or changelog entry that is supposed to outrage us for some implied reason

                  This already happens, and there’s already a dedicated tag for it: “systemd”

                2. 3

                  I agree about courtesy and civility. I would add that it would be nice to be considerate of people who aren’t around to see it, also.

                  This is, of course, my personal ideal, and I understand that in some ways it’s unrealistic. I’m certainly not going to police anything; as you say, that would be bad karma.

                  1. 1

                    That was a great tedu article. Epic actually. Mjn beat me to a central aspect of it: the mockery is a series of examples of problems you see in the real world that teach important lessons, followed by at least one positive recommendation.

                    As far as the last paragraph goes, that’s a decent summary of the options at first glance. Might have to think on it more if I find time.

            2. 11

              I like CFIT for these sorts of rubbernecking stories.

              1. 6

                That’s a hilarious term, although perhaps somewhat opaque for a tag.

                1. 4

                  Yeah, true. A former coworker used it as the code name for a terrible last minute release of some software we were working on; that’s how I learned it.

              2. 7

                I think a community codifying mocking other people, other people’s work and other groups is a dangerous route for it to go down. Even if there are valuable lessons to be learned in such stories, none of the suggested names for the tag even allude to that.

                1. 5

                  As long as we’re listing names for that concept, I’m partial to mickey mouse, but in all honesty, I think rant almost always covers it.

                  However!

                  I don’t think our tags are or should be delineated by category boundaries but by potential utility. Very few lobsters care to distinguish e.g. cpython versus pypy; people either care about both or neither. Conversely, there are apparently a significant number of people who care about javascript but not node.js or vice versa. (I don’t particularly care for either, except as a rich source of mickey mouse bullshit, so my personal inclination is to let the people who care argue about it.) If enough people care enough about the distinction, it’s probably worth tagging. Restricting our tags along category boundaries arbitrarily limits their usefulness with no meaningful benefit aside from possibly shortening the arguments in tag request threads.

                  1. 4

                    Restricting our tags along category boundaries arbitrarily limits their usefulness with no meaningful benefit aside

                    I’m not sure that I follow your reasoning here…would you mind elaborating?

                    I think that category-tagging instead of utility-tagging is a better principle because it helps establish a way of thinking about future articles that people haven’t complained about yet. It also helps us go back and clean up labels on things that could use it.

                    I do like the mickey mouse tag, though. :)

                  2. 2

                    NodeJS is just a runtime for Javascript.

                    It is a runtime, but also a set of extensions that give the JS language things like file IO. V8 fits much better into the “just a runtime” category imo.

                    1. 2

                      No extensions to the language–just libraries and modules that support file IO.

                      Language != runtime environment in most cases.

                      1. 2

                        I guess saying “JS API” is more correct. Not being able to drop arbitrary node.js programs (even the js-all-the-way-down ones) into other JS runtimes and have them function correctly must count for something though, right?

                    2. 2

                      That’s a good idea. We have a rant tag, but a wtf-like tag may also fit many submissions. Being a bit more politically correct, how about just critique?

                      1. 2

                        fubar?

                        I do like tirefire though.

                        1. 1

                          I’ll add to my other comment that the tag might just be “fad” or some variant of it. Point being that it’s a popular idea that might help someone, might even be the next big thing (though it rarely is), and probably is worth filtering out for most, as fads usually are.

                        1. 2

                          WebKit solved this. People still echo about debugging and other myths to support their ulterior motive of easier binary compatibility with C.

                          1. 1

                            ShadowChicken looks interesting! I hadn’t seen it before.

                          1. 22

                            I think complaints about down votes degrade the discussion much more than the down votes do.

                            1. 0

                              The problem is when everyone starts using unicode when they don’t need to, because they are assuming it’s going to be supported. We already see this with web pages using unicode characters and custom fonts for icons.

                              This can also be compared to how sending malformed HTML became standard and accepted. Now everyone has to use gigantic xml/html parsing libraries like libxml2 (try and guess how many lines of code and bugs are in that thing) to have any chance at writing web clients. It’s the same with unicode or any other similar standard.

                              It’s not a good idea to just “use it everywhere”. This is the mentality that keeps pushing the ability to control our own computing experience even further out of reach.

                              1. 7

                                net/html for go is a complete parser in 6700 lines (and that’s in go, a language that sacrifices expressiveness for readability).

                                libxml2 is huge because of the batteries-included mindset behind it (it includes an ftp client - I kid you not - to parse XML)

                                1. 10

                                  … using unicode when they don’t need to …

                                  The problem is that some people still think that ASCII is enough and nobody would ever use unicode in the XY context. Then, out of the blue, CKAN fails to decode some of our uploads and even fails to tell the user what is wrong with them. UTF-8 everywhere, please.

                                  1. 3

                                    Simply handling/not mangling unicode adds very little complexity. At their core, Unicode codepoints are the same as ASCII, just with more values. UTF-8 is a simple packing scheme for them. Somehow segregating and differentiating between Unicode and non-Unicode text is far more complex than simply handling Unicode everywhere.

                                    Doing unicode text rendering is difficult, but it’s no harder than supporting the myriad of locale specific encodings that existed before Unicode.
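
                                    The “simple packing scheme” point can be illustrated directly in Erlang, whose binary syntax builds and matches UTF-8 natively (a sketch; the module name utf8_demo is made up):

                                    ```erlang
                                    %% UTF-8 encodes the same values ASCII does, plus more.
                                    -module(utf8_demo).
                                    -export([demo/0]).

                                    demo() ->
                                        %% ASCII characters encode to themselves under UTF-8...
                                        <<"h">> = <<$h/utf8>>,
                                        %% ...while higher codepoints simply take more bytes.
                                        <<195,169>> = <<16#E9/utf8>>,    %% U+00E9 (é) -> two bytes
                                        %% Decoding is the same pattern in reverse.
                                        <<Cp/utf8>> = <<226,130,172>>,   %% three bytes -> one codepoint
                                        Cp = 16#20AC,                    %% U+20AC (€)
                                        ok.
                                    ```

                                    A codepoint below 128 round-trips as plain ASCII, so nothing about the representation itself forces the Unicode/non-Unicode segregation described above.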

                                  1. 3

                                    When I tested this myself the tail recursive version was substantially faster.

                                    code

                                    -module(tco).
                                    -compile(export_all).
                                    
                                    map_body([], _Func) -> [];
                                    map_body([Head | Tail], Func) ->
                                      [Func(Head) | map_body(Tail, Func)].
                                    
                                    map_reversed([], Acc, _Func) -> Acc;
                                    map_reversed([Head | Tail], Acc, Func) ->
                                      map_reversed(Tail, [Func(Head) | Acc], Func).
                                    

                                    in the erlang shell

                                    Erlang/OTP 18 [erts-7.0] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:10] [hipe] [kernel-poll:false]
                                    
                                    Eshell V7.0  (abort with ^G)
                                    1> c(tco).
                                    {ok,tco}
                                    2> Data = lists:seq(1,1000000).
                                    [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,
                                     23,24,25,26,27,28,29|...]
                                    3> Succ = fun(X) -> X + 1 end.
                                    #Fun<erl_eval.6.54118792>
                                    4> timer:tc(tco, map_reversed, [Data, [], Succ]).
                                    {2844687,
                                    [1000001,1000000,999999,999998,999997,999996,999995,999994,
                                     999993,999992,999991,999990,999989,999988,999987,999986,
                                     999985,999984,999983,999982,999981,999980,999979,999978,
                                     999977,999976,999975|...]}
                                    5> timer:tc(tco, map_body, [Data, Succ]).
                                    {4678078,
                                    [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,
                                     24,25,26,27,28|...]}
                                    
                                    1. 2

                                      You did not reverse the list, and for some reason the first measurement of timer:tc can often be off. Plus, it might be that garbage collection triggered while benchmarking map_body. Benchee measures multiple runs and runs garbage collection in between. Might also be something else, though - as the erlang page mentions, architecture can also have an impact.
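
                                      The discard-the-cold-run idea can be sketched as a tiny helper (the bench module and run/2 are made-up names): do one unmeasured warmup call, then aggregate several measured runs.

                                      ```erlang
                                      %% bench: time a zero-arity fun N times, skipping the cold first call.
                                      -module(bench).
                                      -export([run/2]).

                                      run(F, N) when N > 0 ->
                                          _Warmup = timer:tc(F),   %% first call pays warmup costs; discard it
                                          Times = [element(1, timer:tc(F)) || _ <- lists:seq(1, N)],
                                          %% {fastest, mean} in microseconds
                                          {lists:min(Times), lists:sum(Times) div N}.
                                      ```

                                      Calling e.g. bench:run(fun() -> tco:map_body(Data, Succ) end, 10) instead of a single timer:tc would sidestep the off first measurement.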

                                      1. 4

                                        Here it is with also reversing it after, and it’s still faster. There is a consistent 2 second difference; this is not random fluctuation.

                                        map_tco(List, Func) -> lists:reverse(map_reversed(List, [], Func)).
                                        
                                        5> timer:tc(tco, map_tco, [Data, Succ]).
                                        {2776833,
                                         [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,
                                          24,25,26,27,28|...]}
                                        6> timer:tc(tco, map_body, [Data, Succ]).
                                        {4498311,
                                         [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,
                                          24,25,26,27,28|...]}
                                        
                                        1. 1

                                          In the shell it indeed seems to behave like map_body is slower on the first run (at least with the input list you used, 1000000 elements; I originally ran the benchmark with 10000 elements)

                                          iex(1)> list = Enum.to_list 1..1000000
                                          [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22,
                                           23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42,
                                           43, 44, 45, 46, 47, 48, 49, 50, ...]
                                          iex(2)> my_fun = fn(i) -> i + 1 end
                                          #Function<6.50752066/1 in :erl_eval.expr/5>
                                          iex(3)> :timer.tc fn -> MyMap.map_tco(list, my_fun) end
                                          {458488,
                                           [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22,
                                            23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41,
                                            42, 43, 44, 45, 46, 47, 48, 49, 50, ...]}
                                          iex(4)> :timer.tc fn -> MyMap.map_body(list, my_fun) end
                                          {971825,
                                           [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22,
                                            23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41,
                                            42, 43, 44, 45, 46, 47, 48, 49, 50, ...]}
                                          

                                          However, running it more often in the same iex session map_body gets faster:

                                          iex(5)> :timer.tc fn -> MyMap.map_tco(list, my_fun) end 
                                          {555394,
                                           [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22,
                                            23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41,
                                            42, 43, 44, 45, 46, 47, 48, 49, 50, ...]}
                                          iex(6)> :timer.tc fn -> MyMap.map_tco(list, my_fun) end
                                          {505423,
                                           [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22,
                                            23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41,
                                            42, 43, 44, 45, 46, 47, 48, 49, 50, ...]}
                                          iex(7)> :timer.tc fn -> MyMap.map_tco(list, my_fun) end
                                          {467228,
                                           [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22,
                                            23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41,
                                            42, 43, 44, 45, 46, 47, 48, 49, 50, ...]}
                                          iex(8)> :timer.tc fn -> MyMap.map_body(list, my_fun) end
                                          {636665,
                                           [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22,
                                            23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41,
                                            42, 43, 44, 45, 46, 47, 48, 49, 50, ...]}
                                          iex(9)> :timer.tc fn -> MyMap.map_body(list, my_fun) end
                                          {493285,
                                           [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22,
                                            23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41,
                                            42, 43, 44, 45, 46, 47, 48, 49, 50, ...]}
                                          iex(10)> :timer.tc fn -> MyMap.map_body(list, my_fun) end
                                          {490130,
                                           [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22,
                                            23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41,
                                            42, 43, 44, 45, 46, 47, 48, 49, 50, ...]}
                                          

                                          That might be it. Benchee runs a warmup of 2 seconds where it doesn’t measure (to simulate a warm/running system) and then takes measurements for 5 seconds so that we have lots of data points.

                                          Might also still be something with elixir and/or hardware :) Maybe I should retry it with Erlang, but not this morning :)

                                          1. 2

                                            Never benchmark in the shell, it’s an interpreter. Compile the module with the benchmarker included and run that.
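
                                            Since the thread never shows it, one way the compile-it-in suggestion could look (a sketch; tco_bench and benchmark/0 are made-up names, with the two map implementations copied from upthread) is to wrap the timing funs inside a compiled module, so only the top-level call is interpreted:

                                            ```erlang
                                            %% tco_bench: compiled benchmark harness for the two map implementations.
                                            -module(tco_bench).
                                            -export([benchmark/0]).

                                            map_body([], _Func) -> [];
                                            map_body([Head | Tail], Func) ->
                                                [Func(Head) | map_body(Tail, Func)].

                                            map_reversed([], Acc, _Func) -> Acc;
                                            map_reversed([Head | Tail], Acc, Func) ->
                                                map_reversed(Tail, [Func(Head) | Acc], Func).

                                            benchmark() ->
                                                Data = lists:seq(1, 1000000),
                                                Succ = fun(X) -> X + 1 end,
                                                %% print a label, then five timings (microseconds) for a fun
                                                Run = fun(Name, F) ->
                                                          io:format("~p~n", [Name]),
                                                          [io:format("~p~n", [element(1, timer:tc(F))])
                                                           || _ <- lists:seq(1, 5)]
                                                      end,
                                                Run(map_tco, fun() -> lists:reverse(map_reversed(Data, [], Succ)) end),
                                                Run(map_body, fun() -> map_body(Data, Succ) end),
                                                ok.
                                            ```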

                                            1. 1

                                              Not sure what exactly you mean; the original benchmark was run compiled, not in the shell. This here was just done for comparison with the reported erlang benchmarks. With my erlang-fu being not very existent at this point in time, I currently fail to do the same there and don’t have the time to look it up atm :)

                                              1. 1

                                                Ok, it’s not an elixir script executable like I’d want, but I wrote a benchmark function that benchmarks it and then called that in the shell - good enough for now I guess. There it is all much faster, and map_body seems to be about as fast as the non-reversed tco version or even faster. I’d still need a proper benchmark to determine it all, though.

                                                3> c(tco).
                                                {ok,tco}
                                                4> tco:benchmark().
                                                map_tco
                                                23412
                                                18666
                                                18542
                                                19709
                                                20939
                                                map_body
                                                19908
                                                20046
                                                19854
                                                19753
                                                18869
                                                ok
                                                4> tco:benchmark().
                                                map_tco
                                                23729
                                                21282
                                                24711
                                                23922
                                                18387
                                                map_body
                                                19274
                                                19624
                                                18598
                                                19073
                                                18685
                                                ok
                                                

                                                code

                                            2. 1

                                              Ok, I ran your code in erlang and I also get consistently faster results for the TCO version. I don’t get it; it is the same function I wrote in Elixir. The interesting thing for me, comparing Erlang and Elixir, is that with the same list size and what I think are equivalent implementations, map_body seems to be much slower in Erlang. E.g., compare the numbers here to the other post where I do the same in Elixir and iex. In Elixir map_body settles at around 490k microseconds; the erlang version is between 904k microseconds and 1500k microseconds.

                                              1>  c(tco).
                                              {ok,tco}
                                              2> Data = lists:seq(1,1000000).
                                              [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,
                                               23,24,25,26,27,28,29|...]
                                              3>  Succ = fun(X) -> X + 1 end.
                                              #Fun<erl_eval.6.50752066>
                                              4> timer:tc(tco, map_reversed, [Data, [], Succ]).
                                              {477397,
                                               [1000001,1000000,999999,999998,999997,999996,999995,999994,
                                                999993,999992,999991,999990,999989,999988,999987,999986,
                                                999985,999984,999983,999982,999981,999980,999979,999978,
                                                999977,999976,999975|...]}
                                              5> timer:tc(tco, map_body, [Data, Succ]).
                                              {826180,
                                               [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,
                                                24,25,26,27,28|...]}
                                              6> timer:tc(tco, map_reversed, [Data, [], Succ]).
                                              {472715,
                                               [1000001,1000000,999999,999998,999997,999996,999995,999994,
                                                999993,999992,999991,999990,999989,999988,999987,999986,
                                                999985,999984,999983,999982,999981,999980,999979,999978,
                                                999977,999976,999975|...]}
                                              7> timer:tc(tco, map_reversed, [Data, [], Succ]).
                                              {471386,
                                               [1000001,1000000,999999,999998,999997,999996,999995,999994,
                                                999993,999992,999991,999990,999989,999988,999987,999986,
                                                999985,999984,999983,999982,999981,999980,999979,999978,
                                                999977,999976,999975|...]}
                                              8> timer:tc(tco, map_reversed, [Data, [], Succ]).
                                              {461504,
                                               [1000001,1000000,999999,999998,999997,999996,999995,999994,
                                                999993,999992,999991,999990,999989,999988,999987,999986,
                                                999985,999984,999983,999982,999981,999980,999979,999978,
                                                999977,999976,999975|...]}
                                              9> timer:tc(tco, map_body, [Data, Succ]).        
                                              {904630,
                                               [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,
                                                24,25,26,27,28|...]}
                                              10> timer:tc(tco, map_body, [Data, Succ]).
                                              {970073,
                                               [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,
                                                24,25,26,27,28|...]}
                                              11> timer:tc(tco, map_body, [Data, Succ]).
                                              {1485897,
                                               [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,
                                                24,25,26,27,28|...]}
                                              
                                          2. 2

                                            I reran your benchmark and got similar results (consistently over 10 runs), that is, around 35-40% faster:

                                            1> Data = lists:seq(1,1000000).
                                            [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,
                                             23,24,25,26,27,28,29|...]
                                            2> Succ = fun(X) -> X + 1 end.
                                            #Fun<erl_eval.6.50752066>
                                            3> timer:tc(tco, map_reversed, [Data, [], Succ]).
                                            {810879,
                                             [1000001,1000000,999999,999998,999997,999996,999995,999994,
                                              999993,999992,999991,999990,999989,999988,999987,999986,
                                              999985,999984,999983,999982,999981,999980,999979,999978,
                                              999977,999976,999975|...]}
                                            4> timer:tc(tco, map_body, [Data, Succ]).
                                            {1250838,
                                             [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,
                                              24,25,26,27,28|...]}
                                            

                                            Does it have something to do with the fact he used Elixir?

                                          1. 48
                                            • The process turns a request for binary DNS data into XML, feeds it into the systemd/dbus ecosystem, which turns it into binary DNS to send it to the forwarder. The binary DNS answer then gets turned into XML, goes through systemd/dbus, then is turned back into binary DNS to feed back into glibc.

                                            That’s certainly one way to do things.

                                            1. 27

                                              It’s things like that which make me question if people understand that software is entirely man made and doesn’t need to be complicated. The Standard Model isn’t forcing XML on us.

                                              1. 17

                                                “It was like this when I got here.”

                                                1. 1

                                                  “It just works.”

                                                2. 5

                                                   Apropos, one [of many] great Henry Baker quotes:

                                                  Physicists, on the other hand, routinely decide deep questions about physical systems–e.g., they can talk intelligently about events that happened 15 billion years ago. Computer scientists retort that computer programs are more complex than physical systems. If this is true, then computer scientists should be embarrassed, considering the fact that computers and computer software are “cultural” objects–they are purely a product of man’s imagination, and may be changed as quickly as a man can change his mind. Could God be a better hacker than man?

                                                3. 19

                                                  Where does XML supposedly come in? D-Bus does not use XML for serialization.

                                                  Also the original announcement at https://lists.ubuntu.com/archives/ubuntu-devel/2016-May/039350.html says resolved does not require D-Bus.

                                                  1. 5

                                                     It’s on the internet, it must be true. :)

                                                    1. 19

                                                       I’ve thought about this some more. (As a small matter, the choice of serialization format wasn’t really the big wtf for me.) But it does illustrate that systemd has an image problem. I’m willing to believe just about anything. Its detractors have certainly been hard at work, and they haven’t been entirely fair. But then Lennart “haha, fuck BSD and tmux too for good measure” has been a rather poor defender of his choices. Everything I’ve read by him leads me to conclude he doesn’t believe software can be too complicated, only not complicated enough. So presented with a claim that systemd does something extraneously silly, my default response is not to reject it.

                                                      Asking for evidence is exactly what one should do.

                                                      1. 8

                                                        But then Lennart “haha, fuck BSD and tmux too for good measure” has been a rather poor defender of his choices.

                                                        He also has very poor attackers. Most of the criticism I read basically boils down to “everyone hates on systemd and believes it’s not POSIX”. (from our recent discussions, I’d happily exclude you there)

                                                        No one wants to engage with that crowd in a nuanced argument, lowering the quality of support and the quality of criticism at the same time.

                                                         This is also why I regularly call out non-complex arguments, because that is the road they lead down.

                                                         We happily use systemd in a lot of deployments and like it in practice. It works and is approachable to newcomers. Software, and new software especially, has bugs (critical ones included), so it doesn’t help to call out “systemd implemented a base service” - that’s the way the project works, deal with it. All of the components systemd now replaces will be replaced at some point.

                                                         Criticism must be phrased in terms of whether the pace is healthy, whether different approaches would work better, or whether platform-wide solutions were lost along the way.

                                                         You have to break an egg to make an omelette, but there’s always the question of what kind of omelette it should be.

                                                        1. 4

                                                          Yeah, it’s been more heat than light all around.

                                                    2. 3

                                                      According to this post on lwn:

                                                      is really as easy as it gets

                                                       But looking at the source, it is using lots of sd_bus_message* calls, so for something that doesn’t require D-Bus it seems to have a dependency problem…

                                                      1. 2

                                                        I was wondering this myself.

                                                      2. 14

                                                        To be fair, turning things into an internal representation for processing before serializing back into the original format is not at all uncommon.
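
                                                         As an illustrative sketch of that pattern (a toy wire format, not resolved’s actual code): decode to an internal record, process, then re-encode to the same binary format:

                                                             %% Toy wire format: a 16-bit id followed by payload bytes.
                                                             -module(roundtrip).
                                                             -export([bump_id/1]).
                                                             -record(msg, {id, payload}).

                                                             decode(<<Id:16, Payload/binary>>) -> #msg{id = Id, payload = Payload}.
                                                             encode(#msg{id = Id, payload = Payload}) -> <<Id:16, Payload/binary>>.

                                                             %% Binary in -> internal record -> modified -> binary out.
                                                             bump_id(Bin) ->
                                                                 M = decode(Bin),
                                                                 encode(M#msg{id = M#msg.id + 1}).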

                                                        1. 1

                                                          This is true, and I expect this to be done especially when the original format is a binary blob. But there are better formats than XML! Especially if this is only used internally for processing, why not make it some kind of object? XML is rigid and prone to breakage, and is meant to be something barely amenable to both humans and machines. Seems extraneous here.

                                                          1. 1

                                                            “some kind of object” still has to be serialized which was the point of contention.

                                                        2. -4

                                                          *drops mic*

                                                        1. 1

                                                          Sort of related: Jason Hemann recently posted a port of miniKanren that runs on it: https://github.com/jasonhemann/mini-over-micro-extempore That combination sounds particularly fun :)