1. 4

    The same can be said about git grep vs grep.

    1. 10

      This makes me a little sad, but it’s totally understandable. The “hallway track” is always one of the best parts of the event.

      1. 21

        Someone shared a visualization of the sorting algorithm on ycombinator news.

        PS: Really, don’t enable the sound; it’s loud and awful

        1. 8

          Yeah this page is cool, and it shows that this “naive sort” (custom) is close but not identical to insertion sort, which is mentioned at the end of the paper.

          And it also shows that it’s totally different than bubble sort.

          You have to click the right box and then find the tiny “start” button, but it’s useful.


          I recommend clicking these four boxes:

          • quick sort
          • merge sort
          • custom sort (this naive algorithm)
          • bubble sort

          Then click “start”.

          And then you can clearly see the difference side by side, including the speed of the sort!

          Quick sort is quick! Merge sort is better in the worst case but slower on this random data.

          1. 1

            cycle sort is pretty interesting too!

            1. 1

              I thought (this is 15 year old memories) that what made merge sort nice is that it isn’t particularly dependent on the data, so the performance isn’t really affected if the data is nicely randomized or partially sorted or whatever, whereas quicksort’s performance does depend to some extent on properties of the input sequence (usually to its benefit, but occasionally to its detriment).

            2. 7

              If you are playing with this website: when you change your selected sorts, press “stop” before you press “start” again. Otherwise both sorts will run at the same time, undoing each other’s work, and you will wind up with some freaky patterns.

              This comment is brought to you by “wow I guess I have absolutely no idea how radix sort works.”

              1. 7

                Yeah the radix sort visualization is cool!

                The intuition is if you have to sort 1 million numbers, BUT you know that they’re all from 1 to 10. What’s the fastest way of sorting?

                Well, you can do it in guaranteed linear time if you just create an array of 10 “buckets”, then make a single pass through the array, incrementing the counter in the corresponding bucket.

                After that, print out each value as many times as its bucket’s count, like

                [ 2 5 1 ... ]   ->
                1 1 2 2 2 2 2 3 ...
                

                etc.

                I think that lines up with the visualization because you get the “instant blocking” of identical colors. Each color is a value, like 1 to 10. (Watching it again, I think it’s done in 3 steps, like they consider the least significant digits first, then the middle digits, then the most significant digits. It’s still a finite number of distinct values.)

                There are variations on this, but that’s the basic idea.

                And it makes sense that it’s even faster than QuickSort when there are a limited number of distinct values. If you wanted to sort the words in a text file, radix sort wouldn’t work as well. There are too many distinct values.

                It’s not a general purpose sorting algorithm, which is why it looks good here.
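
                If it helps to see the bucket idea in code, here’s a tiny counting sort in Python (my own sketch of the single-pass version described above, not the multi-digit radix variant the visualization actually runs):

                def counting_sort(values, max_value=10):
                    # One "bucket" (counter) per possible value, 0..max_value.
                    counts = [0] * (max_value + 1)
                    # Single pass: tally how many times each value occurs.
                    for v in values:
                        counts[v] += 1
                    # Emit each value as many times as it was counted.
                    result = []
                    for value, count in enumerate(counts):
                        result.extend([value] * count)
                    return result

                print(counting_sort([3, 1, 2, 1, 2, 2, 2, 2]))   # [1, 1, 2, 2, 2, 2, 2, 3]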

                1. 4

                  Oh, yeah — I meant that I started radix sort while the custom sort was still running, and it just kind of scrambled the colors insanely, and it took me a few minutes of thinking “dang and here I thought radix sort was pretty simple” before I realized they were both running at the same time :)

              2. 1

                Nice visualisation, though it does make some algorithms (selection sort) look better than they are!

              1. 1

                Monkey is popular because it fills a real need. […] The feature is useful for tests where, say, a repo that talks to a database is swapped out for a component returning test data. The lack of this feature in Go is a testament to the language’s resistance to adding the trendy latest features that have bloated languages like Python and C#.

                Not only is monkey patching (a.k.a. guerrilla patching) highly discouraged in Python, but the term was invented in the Python community to disparage the practice. And I’m pretty sure C# devs aren’t fond of the practice either. OTOH, Ruby devs seem to love it, mainly due to its prevalence in Rails.

                The author should be careful where he takes potshots.

                1. 3

                  Ruby inherited it from Smalltalk, which is one of the few pure imperative languages in existence. You create a class in Smalltalk by sending it a message that creates a new subclass and then sending it messages to replace method implementations with blocks that you provide. The ability to do monkey patching is inherent to such a language, because there’s no declarative definition of a class to override, classes are just data structures that you can modify and the canonical way of creating a new one is identical to the ‘monkey patching’ method.

                  This has been part of Smalltalk since Smalltalk-71, from 1971. Calling it a ‘trendy latest feature’ is displaying a huge amount of ignorance (unless the author has been in a coma for the last 50 years).

                1. 34

                  I had to stop coding right before going to bed because of this. Instead of falling asleep, my mind would start spinning incoherently, thinking in terms of programming constructs (loops, arrays, structs, etc.) about random or even undefined stuff, resulting in complete nonsense but mentally exhausting.

                  1. 12

                    I dreamt about 68k assembly once. Figured that probably wasn’t healthy.

                    1. 4

                      Only once? I might have gone off the deep end.

                      1. 3

                        Just be thankful it wasn’t x86 assembly!

                        1. 3

                          I said dream, not nightmare.

                          1. 2

                            Don’t you mean unreal mode?

                            being chased by segment descriptors

                            only got flat 24bit addresses, got to calculate the right segment bases and offsets, faster than the pursuer

                      2. 6

                        One of my most vivid dreams ever was once when I had a bad fever and dreamed about implementing Puyo Puyo as a derived mode of M-x tetris in Emacs Lisp.

                        1. 19

                          When I was especially sleep-deprived (and also on call) in the few months after my first daughter was born, I distinctly remember waking up to crying, absolutely convinced that I could solve the problem by scaling up another few instances behind the load balancer.

                          1. 4

                            Oh my god.

                            1. 2

                              Wow that’s exactly what tetris syndrome is about. Thanks for sharing!

                          2. 5

                            Even if I turn off all electronics two hours before bed, this still happens to me. My brain just won’t shut up.

                            “What if I do it this way? What if I do it that way? What was the name of that one song? Oh, I could do it this other way! Bagels!”

                            1. 4

                              even undefined stuff

                              Last thing you want when trying to go to sleep is for your whole brain to say “Undefined is not a function” and shut down completely

                              1. 4

                                Tony Hoare has a lot to answer for.

                              2. 2

                                Different but related: I’ve found out (the hard way) that I need to stop coding one hour before sleeping. If I go to bed less than one hour after coding, I spend the remainder of that hour not being able to sleep.

                                1. 1

                                  I know this all too well. Never heard of the tetris syndrome before. I need to investigate this now right before going to bed.

                                1. 3

                                  Great, let’s switch every protocol to legacy SSL mode instead so it can suck as much as HTTPS and rely on terrible hacks like SNI…

                                  1. 5

                                    How is SNI a terrible hack? How is direct usage of TLS “legacy”? How would SMTP and IMAP ever need SNI if they just, uhhh, don’t have anything like the Host header? i.e. there’s no “virtual hosting” in email.

                                    1. 3

                                      i.e. there’s no “virtual hosting” in email.

                                      Well, that’s a different problem, that’s because no SMTP server expects the hostname of the cert to match the hostname of the email addresses. For historical reasons this will probably never get added, so in SMTP land SNI might never be needed.

                                      1. 4

                                        Not just “for historical reasons”, it’s all by design – the hostname will never match for anyone using hosted mail for custom domains.

                                        1. 5

                                          the hostname will never match for anyone using hosted mail for custom domains

                                          It could, though, and if we wanted authenticity on the link it needs to (if using TLS for the auth and not DNSSEC, of course), but there’s just so much already-deployed the other way it’ll never get changed.

                                    2. 3

                                      What sucks most about HTTPS in your opinion?

                                      1. 3

                                        That you have to type https to get it. And we have layers of hacks (redirects, HSTS, HTTPS Everywhere) to try to compensate for this. If it was starttls on the main port you would always get it when available, like other protocols do.

                                        1. 3

                                          If it was starttls on the main port you would always get it when available, like other protocols do.

                                          …until an active attacker comes along! Opportunistic encryption reduces to plaintext.

                                      2. 2

                                        Port 465 isn’t legacy, and you don’t need to care about it unless you’re running a mail server or writing a mail client. STARTTLS was a hack to allow for opportunistic encryption, and that’s about it. Any competent mailserver admin should be publishing SRV records for this stuff, and any properly written mail client should be looking for them.

                                        SNI isn’t a hack either. It was rather daft that TLS went so long without a way to pass vhost information along. And it turns out that it’s really useful if you’re running a service mesh, regardless of what the underlying application protocol is.
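
                                        For illustration, the client-side lookup could look roughly like this (a sketch in Python using the RFC 6186 service names and the third-party dnspython package; the domain and the result shown are made up):

                                        # Hypothetical RFC 6186-style lookup a mail client might do (dnspython).
                                        import dns.resolver

                                        def find_imaps_server(domain):
                                            answers = dns.resolver.resolve(f"_imaps._tcp.{domain}", "SRV")
                                            # Lowest priority wins; weight breaks ties (simplified here).
                                            best = min(answers, key=lambda r: (r.priority, -r.weight))
                                            return str(best.target).rstrip("."), best.port

                                        # find_imaps_server("example.com") -> ("imap.example.com", 993), say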

                                      1. 8

                                        I’m slightly disappointed to see that this article is mostly about making Firefox look faster rather than actually making it faster.

                                        I’m also curious, what does XUL.dll contain? I remember reading articles about replacing XUL with HTML for interfaces, why is XUL.dll still needed?

                                        1. 22

                                          The visual and perceived performance wins are arguably easier to explain and visualize, and were an explicit focus for the major release in June. This isn’t just lipstick on a pig though. An unresponsive UI is a bug, regardless of whether the browser is doing work under the hood or not.

                                          But the IOUtils stuff has some really clear wins in interacting with the disk. Process switching and process pre-allocation also have some really good wins that aren’t just “perceived performance”.

                                          1. 5

                                            But the IOUtils stuff has some really clear wins in interacting with the disk. Process switching and process pre-allocation also have some really good wins that aren’t just “perceived performance”.

                                            No numbers were provided for these unfortunately. :’(

                                          2. 11

                                            I’m also curious, what does XUL.dll contain? I remember reading articles about replacing XUL with HTML for interfaces, why is XUL.dll still needed?

                                            That’s basically “the rendering engine”. The Gecko build system uses libxul / xul.dll as the name for the core rendering code in Firefox. There’s no real connection between the file name and whether XUL elements are still used or not.

                                            Not sure why it’s not just named “Gecko”, but that probably requires even more archaeology…

                                            1. 3

                                              It’s because XUL refers to ‘XML User Interface Language’, which is how Gecko was originally meant to be interfaced with. Gecko sits under XUL, and XUL hasn’t been completely replaced yet.

                                              “There is no Gecko, only XUL”

                                              1. 2

                                                I see, thanks!

                                              2. 4

                                                I’m slightly disappointed to see that this article is mostly about making Firefox look faster rather than actually making it faster.

                                                User-perceived performance can be just as important as actual performance. There are tons of tricks for this and many go back decades while still being relevant today. For example: screenshotting your UI to instantly paint it back to the screen when the user reopens/resumes your app. It’ll still be a moment before you’re actually ready for user interaction, but most of the time it’s actually good enough to offer the illusion of readiness: a user will almost always spend a moment or two looking at the contents of the screen again before actually trying to initiate a more complex interaction, so you don’t actually have to be ready for interaction instantly.

                                                IIRC this is how the multitasking on many mobile operating systems works today – apps get screenshotted when you switch away from them, and may be suspended or even closed in the background while not being used. But showing the screenshot in the task switching UI and immediately painting it when you come back to that app gives just enough illusion of continual running and instant response that most people don’t notice most of the time.

                                                1. 1

                                                  Yeah but what’s better, implementing complex machinery to make your slow software look faster, or implementing complex machinery to make your slow software faster? I’d argue that making the software actually faster is always better, and if it is faster, it’ll look faster too, no need to trick the user.

                                                  I agree that there comes a point where you made your software as fast as it can be and all that remains is making it look faster, but that still makes for disappointing articles to me. I prefer reading about making software faster than reading about making software perceptually faster.

                                                  1. 5

                                                    What’s better is for it to be faster and more usable to the user, regardless of the method. The above noted screenshotting/painting is more than a trick. It gives users the ability to read and ingest what was already on the screen, which gets them back to what they were doing faster. That’s much more important than, say, a 50% reduction in load time from 2s to 1s. Those numbers are satisfying for people who love to look at numbers, but they really don’t mean anything to the end-user experience.

                                                    1. 2

                                                      That’s the thing: sometimes speed isn’t a good thing. For instance, you could have your UI draw to the screen as fast as possible, but if you do that, you’ll end up with screen tearing, which makes the user experience worse. If you slow things down a tad (which doesn’t consume any resources, because the software is just waiting), the UI gives the perception of working better. Also, some slowdowns are there to give feedback to the user, such as animations when you click buttons, or resize things: these give the perception that something is happening, and create a causal link in the user’s head between what they just did and what’s happening, which is harder to get when something just appears out of nowhere.

                                                      It’s not about tricking the user, even if there happens to be some smoke and mirrors involved, but about giving the user feedback. People like things to be fluid (which is what screenshotting a window for fast starts gives you), not abrupt. You might say that you’d be OK with this, but to give you a real-world example: if you were taking a taxi, would you be OK with your driver taking hard turns even if it got you to your destination a bit faster? Unless you were under severe time pressure, probably not.

                                                      If you want to be genuinely disappointed, there are user interfaces out there that introduce delays for other reasons. You’ve probably encountered UIs in the wild that seem to take a longer time to do things than seems reasonable, such as giving the result of some sort of calculation or some search results for flights or hotel booking. Those delays are there not because the work actually takes that long, but to increase trust in the result. This is because people’s brains are broken, and if you give them an answer straight away, it seems as if you’re not doing any work, which makes the result less trustworthy. However, if you introduce a short delay or give the results back in chunks, it gives the perception that the machine is doing real work, thus making the results more “trustworthy”.

                                                      So no, faster is not always better, much as we might wish it to be.

                                                      1. 1

                                                        For instance, you could have your UI draw to the screen as fast as possible, but if you do that, you’ll end up with screen tearing, which makes the user experience worse. If you slow things down a tad (which doesn’t consume any resources, because the software is just waiting)

                                                        This is a bad example. Doing things as fast as possible and then waiting for the next frame is the best thing to do, it allows the CPU to go back to idling and preserves battery. Making the software faster here means more time idling means more battery saved.

                                                        Also, some slowdowns are there to give feedback to the user, such as animations when you click buttons, or resize things

                                                        I hate animations and always disable them when I can. I understand that other people feel differently about them, but I don’t care, that still makes reading articles about perceptual performance improvements disappointing when I go in expecting actual performance improvements.

                                                        You might say that you’d be OK with this, but to give you a real-world example: if you were taking a taxi, would you be OK with your driver taking hard turns even if it got you to your destination a bit faster?

                                                        That’s a bad example. Having abrupt screen changes is different from being thrown around in a car.

                                                        such as giving the result of some sort of calculation or some search results for flights or hotel booking. Those delays are there not because they serve a purpose, but to increase trust in the result.

                                                        Making things perceptually slower is not what we are talking about. We are talking about making things perceptually faster.

                                                        So no, faster is not always better, much as we might wish it to be.

                                                        Making your software actually faster when you want it to be perceptually faster is better than just making it perceptually faster. That was my point and I don’t think any of your arguments proved it wrong.

                                                  2. 2

                                                    It’s a legacy name.

                                                  1. 1

                                                    I don’t hate Perl, but I’m deeply ambivalent towards it. It had a niche which was basically “imagine if we took awk, sed, and shell scripting and turned it into something resembling a proper language”, and to be fair, it managed to do a lot with those foundations. But it never quite outgrew them, and the last attempt to do so almost killed it as a language.

                                                    For text munging, few languages come close, but I haven’t felt the need to reach for it in over a decade.

                                                    1. 1

                                                      I tried doing the same (edit: or rather, I tried this, but just with a custom case, and no SSD), and it was surprisingly effective and helped with focus, but the one thing that killed it for me was the SD card dying.

                                                      I really wish there was some way to trivially attach an NVMe drive to it. Maybe this case would make it worth trying again?

                                                      1. 1

                                                        Jeff Geerling has a write up on the case: https://www.jeffgeerling.com/blog/2021/argon-one-m2-raspberry-pi-ssd-case-review - that review was what made me buy and try it. Otherwise I hate SD cards, back in 2015 I already wrote about issues with the Pi and SD cards: http://raymii.org/s/blog/Broken_Corrupted_Raspberry_Pi_SD_Card.html

                                                      1. 1

                                                        Python really just needs to support dynamic typing at this point. Duck typing and the corresponding ecosystem that’s emerged just to do basic type system work is kind of a circus.

                                                        1. 4

                                                          Don’t you mean “static typing”? Python is a strongly typed, but dynamically typed language.

                                                          Anyway, Python’s never going to have compulsory static typing. Gradual static typing, sure, but it’s never going to be compulsory.

                                                        1. 44

                                                          There is very little in this I can agree with, except the last part.


                                                          Re: “Leadership under attack”:

                                                          Neither RMS nor ESR has ever had any significant involvement in Linux, so why bring them up in the context of the Linux kernel? Seems odd.

                                                          Anyway:

                                                          • RMS has been highly controversial for a long time, as he mentions himself. He probably turned off more people from Free Software than attracted. He was always a highly problematic figure.

                                                          • Eric S. Raymond hasn’t been meaningfully involved in the OSI for a very long time, and when he came back it was for little more than a weirdly misplaced rant about “Vulgar Marxists”. Add to this the context that ESR has significantly crazified in the last 10/15 years and is now advocating for literal terrorism … yeah (site seems offline, archive, and as with most of his posts the craziest stuff from ESR is in the comments he posts). The reason few people have noticed is probably because ESR was long ago relegated to the crazy internet kooks corner by most, and few people have been paying attention to him.

                                                            Besides, the OSI seems to do little more than write blogspam and discuss some licensing issues on a mailing list. It’s certainly not significantly involved in the development of Linux as far as I can tell.

                                                          • I’m not aware of any serious efforts to “oust” Linus. The only sources I can find are from techrights.org, and that’s basically the InfoWars of OSS. By and large, people are happy with him as he’s been doing a pretty good job for the last 30 years.

                                                          Lunduke has been harping on about these things for ages, and always just pretends that the context of all of this doesn’t matter, as if “ousted == bad == LINUX WILL DIE!!!!”. I find it a complete non sequitur.


                                                          Re: “Linux companies”

                                                          • Who cares that IBM “killed off CentOS” (there’s also a bit more nuance to that IMO, but let’s leave that to the side)? There were immediately a bunch of replacements available. Doesn’t sound like “dying” to me.

                                                          • SUSE’s realignment to more “cloud stuff” seems to fit a general trend. Microsoft is doing the same for example: where Windows was once the core product, now e.g. Azure is increasingly seen as its “core product”. For better or worse, the OS in itself has become less important a more “abstracted”.

                                                          • Linux Journal is back, which he conveniently leaves out. LWN is still alive and strong. A single publication running into a spot of trouble strikes me as very little evidence of anything. I don’t see the “onslaught” he’s talking about.


                                                          Re: “Linux complexity”

                                                          Well, the world of computing is more complex than it was in 1992 🤷 It doesn’t seem to me that Linux has it worse than any other mainstream general purpose OS.

                                                          I think his point about maintenance and security are overly simplistic as well; Linux isn’t a monolithic entity where you run every line of code that gets committed to the kernel; it’s probably more useful to see Linux as a sort of “monorepo”.


                                                          Re: “Linux events”

                                                          He is complaining that in-person events were cancelled throughout 2020 and 2021 and that this means “our community is dying” and “in hospice with every known disease on the planet”.

                                                          I kept waiting for him to mention the pandemic.

                                                          He doesn’t. He simply asserts that there are fewer events and that they’re not coming back.

                                                          lol?


                                                          Re: “Fuchsia”

                                                          Yeah, this might replace Linux. We’ll see.

                                                          This is pretty much the only argument that makes any sense: something better will come along and it will displace Linux. It may not even be Fuchsia but something else. I have some ideas in which way things will probably move, but they’re probably wrong. We’ll see what happens.

                                                          Will Linux still be around in 25 years? I don’t think it’s as clear-cut as “Operating systems in the past have come and gone”. Overall, the world of computing is a lot less in its infancy than it was in the 70s and 80s, so it makes sense that systems last longer. There is also the issue that there is a lot more software, so compatibility and inertia are more important than ever. You see this with programming languages as well, which seem to have a much longer average longevity than they had in the past.

                                                          It may not even be a bad thing if something were to come along and incorporate all the lessons from the last 25 years. Apple didn’t do too badly with OS X, right? But MacOS Classic was kind of horrible, and Linux seems “good enough” for a lot of things. It’s not uncommon that “good enough” blocks “better”, but again, we’ll have to see what happens.

                                                          1. 12

                                                            Regarding the first point, I definitely remember people trying to oust Linus because his behaviour was too brash.

                                                            He went on a small break and came back largely because of those attacks.

                                                            But he’s back now so it largely doesn’t matter, and I think people have stopped attacking him though I’m not certain of that.

                                                            1. 16

                                                              “Oust” is a strong word. I don’t believe I saw anyone credibly suggest that Linus step down from leading kernel development altogether. At most I saw claims that Linus’ interactions in mailing lists etc. was unprofessional, reflected poorly on the Linux project in particular, and could potentially exclude people from wishing to contribute.

                                                              As you said, Linus took these viewpoints to heart and the complaints have died down.

                                                              1. 3

                                                                At most I saw claims that Linus’ interactions in mailing lists etc. was unprofessional, reflected poorly on the Linux project in particular, and could potentially exclude people from wishing to contribute.

                                                                Those tend to be the opening moves of the ousting playbook.

                                                                1. 2

                                                                  I try to give people the benefit of the doubt. If they state they’re trying to change Linus’ behavior out of concern for him personally, his legacy, and the health of the kernel development process, I’d accept that, absent any proof of nefarious intent.

                                                              2. 7

                                                                Nobody was trying to oust Linus, just trying to get him to understand that “management by perkele” wasn’t working anymore. Take a read over what he himself posted on LKML on the issue: https://lkml.org/lkml/2018/9/16/167

                                                                1. 6

                                                                  FWIW, I think that break did wonders for Linus and was a moment of significant personal growth. The stuff he writes nowadays still has an edge to it, but in a much more mature way. I was recently reading some old Linus rants on the LKML, and I actually cringed each time he wrote that someone “should be retroactively aborted”. I think it’s helpful to think about what happened back then as more of an intervention than an ousting.

                                                                  1. 6

                                                                    It should also be added that Linus has said things to the effect of “Yeah, I don’t really like this angry temperamental side of my personality either; I wish it was different and I tried to change it and failed, guess it’s just how I am” years earlier already. This wasn’t some sort of magic epiphany moment but just one (large) step in a long process, and neither was it forced upon him by SJW beta cuck feminazi dangerhair Marxists trying to “cancel” him, or some such.

                                                                    And while I don’t want to excuse any of his more, ehm, angry behaviour, I also feel that he’s been portrayed a bit unfairly. Some people seem to have the impression that he is (or was) some sort of angry madman ranting and raving at everyone, because every ridiculous outburst got media attention with a picture of him giving nvidia the finger, because 🍿 Again, not excusing this, but it is a very one-sided and incomplete picture.

                                                                    1. 1

                                                                      Change is often slow. If over a period of 10 years you manage to reduce non-constructive inflammatory phrasings from being present in 10% of your communications to only 0.1% of communications (while probably also reducing edge cases and improving the general quality), then the outside world will still only get transgressions pointed out to them and observe no change. Even those closer to the fire may draw the same conclusion through confirmation bias.

                                                                2. 9

                                                                  You’ve spent a lot of time and effort responding to something that I think (despite the speaker’s claims otherwise) is basically a clickbait troll. I think that’s laudable but I definitely wouldn’t be spending my own time in this way! It’s a deliberately inflammatory headline and the presentation as a whole is mostly performative incredulousness.

                                                                  One thing I would disagree with you about:

                                                                  [RMS] probably turned off more people from Free Software than attracted

                                                                  I don’t dispute that RMS has variously been unhelpful and has certainly alienated people but I think that still his net attraction to free software has been huge. For me, I first became interested in becoming a computer programmer at all because of his political essays. I don’t think that people who were convinced by his political ideas would change their mind about those ideas even if they later came to dislike him.

                                                                  1. 2

                                                                    You’ve spent a lot of time and effort responding to something that I think (despite the speaker’s claims otherwise) is basically a clickbait troll.

                                                                    Perhaps; but I’ve seen Lunduke’s stuff around often enough to warrant writing something down, and it wasn’t that time-consuming :-)

                                                                    Re: RMS. I don’t want to handwave away your comments, but I’m also a little bit weary of talking about him, so I’ll defer to my post from a few months ago for that, adding to it that “convinced by his political ideas” is not an “on/off switch”, and that he mostly turned off people who were broadly sympathetic, but not on every detail, and who (strongly) disliked his hard-line no-compromise stance. This was certainly the case for me, and actually quite a few people I know.

                                                                    1. 1

                                                                      Thanks for linking me to that. Very comprehensive and I think you have me convinced. Having X11 under the GPL would have been an enormous thing for FOSS and I suppose it hadn’t occurred to me that FOSS might have been bigger had Stallman been more charismatic.

                                                                    2. 1

                                                                      I try to separate RMS the person (whom I have never met) from the ideals he (and the FSF) espouses.

                                                                      Many people who are attracted to those ideals can be frustrated that RMS’ personality and communication choices can hinder the wider dissemination of them.

                                                                      1. 3

                                                                        I don’t even find it particularly hard (and this is not to say that I have taken a dislike to RMS). There is no shortage of people who I’ve gotten ideas from who I don’t much like.

                                                                    3. 5

                                                                      There’s also a few “inaccuracies” in the complexity part. Completely omitting the fact that most of the new code comes with drivers. Not sure if the complaint there is “complex hardware is a problem for Linux survival” or … ?

                                                                      There’s also the “million lines just in systemd bootloader”. I did not run cloc, but if there’s a million lines just in https://github.com/systemd/systemd/tree/main/src/boot I’ll eat my hat. (Edit: 8.5k lines including comments/whitespace)

                                                                      1. 2

                                                                        There’s also a few “inaccuracies” in the complexity part. Completely omitting the fact that most of the new code comes with drivers.

                                                                        Yeah, that’s my feeling as well, but I didn’t feel like doing an examination of the Linux source and what exactly is in those “2 million lines code”; my comment was already long enough 😅 Would be interesting though! Maybe I’ll do it later.

                                                                        As for systemd bootloader, I think he may have been referring to the entire EFI process, but I’d have to go back to see what he said exactly.

                                                                        1. 1

                                                                          I did a cloc on Linux. It’s big, with 20,670,238 LoC in total[1].

                                                                          14,157,243 of those 20MLoC are drivers. Another 993,777 are filesystems. Another 1,729,484 are architecture support code.

                                                                          Ext2 + Ext4 + common filesystem code is 109,636 LoC. The ARM64 support code is just 9,040 LoC. The size of a reasonable Linux system (ARM64, Ext4) would be 3,908,410 LoC[2].

                                                                          Around 4MLoC seems extremely reasonable for the core of a kernel like Linux, in my opinion. And of course there are gonna be drivers and filesystems and additional platforms supported, and each of those things will add a whole bunch of code, but all of that code will be neatly sectioned off in its own area and just isn’t the kind of code which creates a lot of maintainability issues long term.

                                                                          I didn’t bother to get numbers for exactly where code has gone in recent years, but given that the entire core of a reasonable ARM64 kernel is just 4MLoC, those 2 million LoC per year are definitely going into either drivers, additional CPU architectures, or additional filesystems.

                                                                          [1]: I’ve counted lines labelled as C/C++/headers/Assembly. There is some more code in Perl scripts, makefiles, shell scripts, RST files for documentation, etc. But I think it’s fair to say that C, headers and assembly are the meat of the code that actually gets compiled into a Linux kernel. If anything, I over-counted by including a few hundred kLoC of tools and sample code and such.
                                                                          [2]: I arrived at the 4MLoC number by doing (Linux LoC - drivers LoC - architecture LoC - filesystems LoC + ARM64 architecture LoC + Ext2 LoC + Ext4 LoC + filesystem common code LoC).
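
                                                                          For anyone who wants to check [2], the arithmetic with the figures above works out like this (no new measurements, just re-adding the numbers already given):

                                                                          # Plugging the numbers from the comment above into the formula in [2].
                                                                          total        = 20_670_238   # all counted C / headers / assembly
                                                                          drivers      = 14_157_243
                                                                          filesystems  =    993_777
                                                                          architecture =  1_729_484
                                                                          arm64        =      9_040   # ARM64 support code
                                                                          ext_and_vfs  =    109_636   # Ext2 + Ext4 + common filesystem code

                                                                          print(total - drivers - architecture - filesystems + arm64 + ext_and_vfs)
                                                                          # 3908410, i.e. the ~3.9 MLoC quoted above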

                                                                        2. 2

                                                                          Completely omitting the fact that most of the new code comes with drivers.

                                                                          Yeah, obviously Intel is going to own the SGX code, and it will have little impact on linux development generally.

                                                                        3. 2

                                                                          He simply asserts that there are fewer events and that they’re not coming back.

                                                                          This is the only point I tentatively agree with, factually at least; I don’t necessarily agree that it means the community is dying. My feeling is that the number of in-person events has been steadily decreasing in the last decade. I think one of the biggest blows was the 2008 recession though. It may be just my feeling and/or a regional thing, because I don’t have good data.

                                                                          Are there any good catalogs of FOSS events?

                                                                          1. 2

                                                                            Probably; but this seems to be across the board: couchsurfing and meetup.com are also not what they used to be for example. And then there were the forum meetups that I attended, a concept which seems pretty much dead too (just as forums are).

                                                                        1. 1

                                                                          Minor correction: Freescape was developed by Incentive Software, not Alternative Software.

                                                                          1. 17

                                                                            This kind of thing is why I judge GUI frameworks entirely on the quality of their text editing widget. I’d estimate that the complexity of a good text editing widget is around double the complexity of the rest of the GUI framework combined. I think this is one of the big reasons that the user experience on OS X is often better than Windows. NSTextView is sufficient for building a rich DTP application so everyone uses it, including having a spell checking dictionary shared across every single text editing component in the system. The Windows RichText control is barely suitable for writing an email and so there’s a proliferation of custom ones and all of them behave subtly differently. For example, the multi-click selection behaviour in Notepad is different to most other text widgets in Windows, the paste behaviour in Teams (Electron using Chrome’s text view) is different from Word (its own text view).

                                                                            1. 9

                                                                              Apple made a deliberate choice to be as consistent as possible, and it sure paid off.

                                                                              The near-universal proliferation of native text widgets is one of the biggest things keeping me on MacOS, because they all have emacs bindings.

                                                                              1. 7

                                                                                Most of the functionality of the text view dates back to OpenStep, some even to NeXTSTEP. There’s a reason that there’s a 1:1 mapping between the things that NSAttributedString Additions can represent and the set of things that HTML 1.0 can represent: HTML was just a serialisation of the things NSTextView can render. There were a bunch of DTP things for *STEP for this reason.

                                                                                I recall reading back in the ’90s that the Office team prevented the Windows team from adding better functionality to the rich text control on Windows because they were worried about making it too easy for someone to write a competing word processor. No idea how true this is (I joined the company 20 years after any of that would have happened) but it would explain a lot.

                                                                                1. 4

                                                                                  I worked for a company that makes a web-based document editor. At one point, I “snuck” some of the Emacs bindings into the text editing code, conditioned on the user being on a Mac, because I hated having to use the product without them. Most of our paying customers were on Windows, so they probably never noticed, but I did!

                                                                                2. 3

                                                                                  I’m guessing the Chromium edit control is the most full-featured fully accessible one that’s available on Windows to anyone outside the Office team. I wonder if anyone disagrees.

                                                                                  1. 4

                                                                                    Not agreeing or disagreeing with your question. But WebKit/Chromium content-editable mode has tended to be really flaky. Even very recently I’ve been able to reliably mess up editable rich text in nearly any web page by dragging bullet-list items up and down: it generally fucks up the nesting and often the styles of nearby text.

                                                                                    It turns out that a DOM tree is not the right data model for a text editor — it’s far, far better to use a mutable string/rope with attributes attached to character ranges. Which is exactly what Cocoa uses (NSMutableAttributedString.) With a DOM, many very simple editing operations often turn into super-complex tree manipulations. Unfortunately a web browser is inextricably tied to a DOM.
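
                                                                                    To make the contrast concrete, here’s a toy Python version of that model (my own illustration, nothing like Cocoa’s actual API): the text is one flat mutable string, attributes live in ranges next to it, and an insertion is just slicing plus shifting offsets rather than tree surgery.

                                                                                    # Toy sketch of the "flat string + attribute ranges" model.
                                                                                    class AttributedString:
                                                                                        def __init__(self, text, runs):
                                                                                            self.text = text
                                                                                            # Each run is (start, end, attrs), e.g. (0, 5, {"bold": True}).
                                                                                            self.runs = runs

                                                                                        def insert(self, pos, s):
                                                                                            """Insert plain text: slice the string, shift affected ranges."""
                                                                                            self.text = self.text[:pos] + s + self.text[pos:]
                                                                                            self.runs = [(start + len(s) if start >= pos else start,
                                                                                                          end + len(s) if end > pos else end,
                                                                                                          attrs)
                                                                                                         for start, end, attrs in self.runs]

                                                                                    doc = AttributedString("hello world", [(0, 5, {"bold": True})])
                                                                                    doc.insert(5, ", cruel")
                                                                                    print(doc.text, doc.runs)   # hello, cruel world [(0, 5, {'bold': True})]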

                                                                                    1. 1

                                                                                      Interesting; I wasn’t aware of those problems with DOM editing.

                                                                                      For native applications, it’s too bad that as far as I know, no clone of the OpenStep APIs (e.g. GNUstep) has accessibility support. I suppose that leaves us with Microsoft’s RichEdit on Windows.

                                                                                      1. 1

                                                                                        I got that information from the horse’s mouth, i.e. Ken Kocienda, who did the first iteration of content-editable in WebKit. I was really excited by it but disappointed by how buggy it was, and he explained what made it so hard compared to an ordinary text editor.

                                                                                        I also had a related experience in an experimental listserv I was writing a few years later. I was trying to prettify emails by cutting out the quoted stuff and the “On such-and-such day foo@bar.com said:” headings and boilerplate signatures. This became super difficult with HTML messages due to messing about with DOM nodes.

                                                                                  2. 1

                                                                                    I’d readily agree that Apple got a lot of UX things right. However, as a daily Linux user, something that always rubs me the wrong way is: in OSX, pressing Home and End in a multi-line text widget cannonballs the cursor all the way to the extreme beginning and end of the widget contents. On Linux, it only moves to the beginning and end of the current soft-wrapped line, which is so much better to work with. Going to the extrema is made possible with Ctrl-Home and -End. I realize I can use the trackpad on OSX to get to a specific spot in a text widget; but I don’t want to have to.

                                                                                    1. 3

                                                                                      Have you tried cmd-left/right?

                                                                                      1. 1

                                                                                        Oh, thank you! I did not know those were available. Now the hard part: Remembering these exist the next time I need them.

                                                                                        1. 1

                                                                                          Option-arrow moves between words.

                                                                                      2. 1

                                                                                        in OSX, pressing Home and End in a multi-line text widget cannonballs the cursor all the way to the extreme beginning and end of the widget contents

                                                                                        I’m not on a Mac right now, but as I recall, it moves the view not the cursor. You can get back by pressing any arrow key (which moves the view back to the cursor location). As with everything else in macOS, command and option are modifiers. As I recall, option moves one word, command one line. On an Apple keyboard, command, option, and function are all in a row and home / end and page up / down are function + arrow key, so all movement in a text box is left-hand modifier + right-hand arrow key.

                                                                                      3. 1

                                                                                        Isn’t Notepad just a thin wrapper around a native Windows TextBox control?

                                                                                        1. 1

                                                                                          It doesn’t seem to be (though I’ve not looked at the code). TextEdit on macOS is a tech demo for NSTextView (it’s actually open source, so developers can see how to use NSTextView most effectively).

                                                                                          1. 1

                                                                                            I have no way of checking myself, but I have a vague recollection that it is based on some sample code I think was bundled with Visual Studio 6. I also recall that Notepad had severe file size limitations for a long time on account of it just being a built-in control.

                                                                                            It’s over a decade since I did any Windows development in anger or even really used it, but I guess it’s something that could be checked by inspecting the window class of the control to see if it corresponds to that of a regular textbox control.

                                                                                            1. 1

                                                                                              Notepad isn’t, but WordPad was shipped as an example application for both MFC and RichEd.

                                                                                      1. 2

                                                                                        Great post, until it goes off the rails at the end.

                                                                                        1. 1

                                                                                          Author’s definitely technically capable but, sure enough, nobody is flawless.

                                                                                          1. 1

                                                                                            Yeah but when their flaws encourage you to do things that are medically dangerous or ignore existential threats to large parts of human civilization or yet again shove their religion down your throat, it’s a little bit more than “oh well…”. It’s tiresome and makes me not trust anything else they have to say.

                                                                                        1. 1

                                                                                          The title says esoteric languages, but the post is almost entirely about obfuscated languages.

                                                                                          1. 3

                                                                                            From the Oxford Languages definition of “esoteric”:

                                                                                            intended for or likely to be understood by only a small number of people with a specialized knowledge or interest.

                                                                                            I think this covers these languages well.

                                                                                            “Esolang” is as far as I can see an accepted neologism for these languages, and the Esolang wiki lists most of them:

                                                                                            https://esolangs.org/wiki/Language_list

                                                                                            1. 0

                                                                                              Of course it covers these languages as well, but if you are only going to talk about the obfuscated ones, you can at least admit to it. Taking a tiny subset of esoteric languages, cherry-picking the obfuscated ones, and then calling it “esoteric languages” is like starting a food blog and only talking about bubble gum.

                                                                                              1. 3

                                                                                                That’s just what we ended up calling them back in the day. With a bit of effort, somebody might be able to dig out the details from the old sange.fi mailserv, or in Archive.org, but that was decades ago. The community just needed a term for the kinds of toy languages people were playing with and “esoteric” was the least worst fit. Something like APL might be “esoteric” in some senses, but as you mentioned elsewhere, it was built with a practical purpose in mind. “Toy” wouldn’t have worked because not all esolangs are toys. “Joke” or “Parody” wouldn’t have worked for similar reasons. They’re also not necessarily obfuscated: they may instead be aiming for a kind of minimalism to subvert expectations: Brainfuck, Shelta, and Q*Bal fit this bill, with the former two being designed to be implementable in a tiny executable, and the latter operating off of queues rather than stacks.

                                                                                                So, “esoteric” is what stuck. If you have suggestions for a better term that covers the field better, I’m all ears, but you’re two or three decades too late.

                                                                                                1. 2

Thanks for this. Looking at esolangs.org, it hit me that the “esoteric” in esoteric programming languages didn’t really mean the same thing as what you’d find if you looked up “esoteric” in the dictionary - and you kind of made sense of it all.

                                                                                                2. 2

                                                                                                  At least Jelly and Brainfuck are not obfuscated; their source languages are as readable as assembly code. Are you thinking of languages more like Iota and Jot?

                                                                                                  Ultimately, what’s obfuscated and what’s plain is culturally dependent. Qalb is extremely readable if one knows Arabic, but to me it only forms gorgeous nonsense patterns.

                                                                                                  1. 1

I’m not that familiar with this space.

                                                                                                    What languages in your opinion are esoteric but not obfuscated?

                                                                                                    1. 0

One example could be APL (https://en.wikipedia.org/wiki/APL_(programming_language)). It’s a pretty esoteric language, but wasn’t made with the sole purpose of being a bitch to use. It was made to solve a problem.

                                                                                                      1. 6

                                                                                                        APL isn’t an esoteric language.

                                                                                                        (Also, Orca and GolfScript aren’t “obfuscated”. They are very well designed for their intended goals.)

                                                                                                3. 1

                                                                                                  Plenty of languages in this area are not “obfuscated” or at least not intentionally. Brainfuck was designed to be a tiny language, not an obfuscated language. (Its name in fact should be seen as an expression of frustration rather than one of its goals.) Likewise, Befunge and Piet were designed to be 2-D languages – if they are obfuscated, that was not a design goal. Ditto Orca, or languages like Golfscript.

In fact, I would argue that the only esoteric languages covered here that focus on obfuscation are INTERCAL and the Brainfuck derivatives. A language that is hard to read as a side effect of its actual goal is no more obfuscated than APL.

                                                                                                  1. 2

I see your point. And I was thinking about “esoteric” in the dictionary definition sense of the word. talideon had an explanation of how that word got to be used the way it did, which kind of made sense of the fact that there are different opinions on this.

                                                                                                1. 11

As mainly a Go and Rust programmer, I’m all for generics. But I would love some people in this github issue to travel back in time and talk to their past selves, back when those past selves were explaining that “we don’t need generics in Go” and “generics make the language hard to approach, and not as simple as Go is designed to be”. (I’m paraphrasing.)

Anyway, I love this idea, this is good news. As my father used to tell me as a kid: “only morons never change their mind.”

                                                                                                  1. 6

                                                                                                    My first foray into Go was in 2009 or 2010 when the first version became public. Couple years after that I was actually paid for writing it, building some pretty cool microservices that are still being used at my previous employer (maybe).

I didn’t ever accept Go not having or needing generics. It was blindingly obvious from the start! The 63rd time I had to write a Sort implementation or do a map/reduce operation, I was already bleeding from my retinas, staring at what felt like a completely moronic attitude to programming language design, feeling just abject despair at the boneheadedness of its designers. “Yeah, we aren’t sure how to do them”, they said. Well, I don’t know, Java had them, how about you ask the same guy who implemented them for Java (Philip Wadler)? And that’s what they did!

                                                                                                    I am glad the Go designers reduced their ultimatum from a never to a maybe, because this is finally a programming language worth taking seriously.

                                                                                                    1. 8

The Go team’s position has never been “never”. This is a proposal for adding generics, written by a member of the Go team back in 2010: https://go.googlesource.com/proposal/+/master/design/15292/2010-06-type-functions.md

Their positions were generally of the form “let’s use the language for a bit before we add generics, because they change a lot” and “all the proposals so far have been bad for one reason or another”. The idea that the Go team has a history of saying Go will never have generics is not true and has never been true.

                                                                                                      1. 2

                                                                                                        You’re right, it was not the Go team itself that presented this attitude! It was the community that developed a cargo cult of sorts around it. That was the weirdest part – the language authors were neutral, if slightly reserved, about generics, but the community for some strange reason had this die-hard anti-generics stance that made no sense at the time.

                                                                                                        1. 2

True. The weird rejection of the idea of generics has largely been a community thing, not the core team. I have issues with some elements of Go’s design (mainly around the lack of sum types necessitating all kinds of silly hackery), but the Go team themselves have always been pragmatic and non-dogmatic.

                                                                                                    1. 26

                                                                                                      Very similar story from a few weeks ago: SQLite is not a toy database – I won’t repeat my full comment from there.

                                                                                                      SQLite is very fast. [..] The only time you need to consider a client-server setup is: [..] If you’re working with very big datasets, like in the terabytes size. A client-server approach is better suited for large datasets because the database will split files up into smaller files whereas SQLite only works with a single file.

                                                                                                      SQLite is pretty fast compared to fopen(), sure, but PostgreSQL (and presumably also MariaDB) will beat it in performance in most cases once you get beyond the select * from tbl where [..], sometimes by a considerable margin. This is not only an issue with “terabytes” of data. See e.g. these benchmarks.

                                                                                                      Is it fast enough for quite a few cases? Sure. But I wouldn’t want to run Lobsters on it, to name an example, and it’s not like Lobsters is a huge site.

                                                                                                      Well, first of all, all database administration tasks becomes much easier. You don’t need any database account administration, the database is just a single file.

                                                                                                      Except if you want to change anything about your database schema. And PostgreSQL also comes with a great deal of useful administrative tools that SQLite lacks AFAIK, like the pg_stats tables, tracking of slow queries, etc.

                                                                                                      And sure, I like SQLite. I think it’s fantastic. But we need to be a tad realistic about what it is and isn’t. I also like my aeropress but I can’t boil an egg with it.

                                                                                                      1. 9

                                                                                                        SQLite is pretty fast compared to fopen(), sure, but […] MariaDB will beat it in performance

I would actually be interested in knowing whether SQLite handles the query that broke Lobste.rs’ “Replies” feature better than MySQL/MariaDB.

                                                                                                        But I wouldn’t want to run Lobsters on it, to name an example, and it’s not like Lobsters is a huge site.

I think Lobste.rs would run fine. It would probably be more of an issue with the limited amount of SQL SQLite supports.

                                                                                                        1. 7

                                                                                                          The replies query broke because the hosted MySQL Lobste.rs relies on doesn’t do predicate push down. SQLite does do predicate push down, so it wouldn’t have the same problem.

                                                                                                          However SQLite doesn’t have as many execution strategies as MySQL, so it may be missing a key strategy for that query.
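To make the push-down point concrete, here’s a rough sketch using Python’s built-in sqlite3 module. The schema and view are made up for illustration (this is not the actual Lobste.rs query); EXPLAIN QUERY PLAN shows whether the outer story_id filter gets pushed inside the aggregating view:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE comments (id INTEGER PRIMARY KEY, story_id INTEGER, body TEXT);
        CREATE INDEX comments_story ON comments (story_id);
        CREATE VIEW comment_counts AS
            SELECT story_id, COUNT(*) AS n FROM comments GROUP BY story_id;
    """)

    # With predicate push-down, the WHERE below moves inside the view, so the
    # planner can use the story_id index instead of aggregating the whole table.
    for row in conn.execute(
        "EXPLAIN QUERY PLAN SELECT n FROM comment_counts WHERE story_id = ?", (42,)
    ):
        print(row)  # recent SQLite versions should show a SEARCH ... USING INDEX step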

                                                                                                          1. 5

                                                                                                            SQLite’s query planner is honestly a bit smarter than MySQL’s in certain ways. For example, MySQL, as recently as 2017, did temporary on-disk tables for subselects. SQLite instead usually managed to convert them to joins. Maybe that’s been fixed in the last four years, but I wouldn’t assume that MySQL would be faster/that SQLite would be slower.

                                                                                                          2. 1

Lobsters uses some fairly complex queries; usually those kinds of things tend to do less well on SQLite, although I didn’t run any benchmarks or anything. I found that SQL support in SQLite is actually pretty good and don’t necessarily expect that to be a major issue.

From what I understand, the biggest problem with the Lobsters hosting is that it’s running MySQL rather than MariaDB. While MySQL is still being developed, from what I can see it’s not developed very actively, and MariaDB is leaps ahead of it. At this point we should probably stop grouping them together as “MySQL/MariaDB”.

                                                                                                            1. 1

Aside from the operations perspective of migrating data, converting things that are not 1:1 between MySQL and MariaDB, etc., are there any features in lobste.rs that prevent the use of MariaDB?

                                                                                                              1. 1

                                                                                                                It used to run on MariaDB until there was a handover of the servers. AFAIK it runs well on both (but not PostgreSQL, and probably also not SQLite).

                                                                                                                1. 1

I guess their current host only provides MySQL (for unknown reasons).

                                                                                                                  I asked about offering hosting, but never got a reply.

                                                                                                            2. 12

                                                                                                              I also like my aeropress but I can’t boil an egg with it.

                                                                                                              I bet you could poach an egg with it, with some inventiveness and a slightly severe risk of getting scalded. ;)

                                                                                                              1. 3

                                                                                                                When I posted that comment I was thinking to myself “I bet some smartarse is going to comment on that” 🙃

                                                                                                                1. 2

                                                                                                                  Joking aside, I think a better analogy would be comparing the Aeropress to an espresso machine: the Aeropress is going to get you really good coffee that you’re going to use every day, costs very little, is easy to maintain, and you can bring with you everywhere, but it’s never going to give you an espresso. But then again, it’s not really trying to.

                                                                                                                  (The analogy falls apart a bit, as one of the original claims was that it could produce espresso. I think they stopped claiming that though.)

                                                                                                                2. 1

                                                                                                                  LOL

                                                                                                                  …and audible laughter was emitted. Thanks for that.

                                                                                                                  1. 1

                                                                                                                    On the other hand if you had to set up and supply your password to obtain admin rights every time you just wanted to make coffee….

                                                                                                                    …because some nutjob might want to use it for boiling eggs and the company wanted to stop that….

…the device that just lets you get on with making coffee (or boiling eggs) is a helluva lot faster for many jobs!

                                                                                                                  2. 5

                                                                                                                    Except if you want to change anything about your database schema.

                                                                                                                    SQLite has supported ALTER TABLE ADD COLUMN for years, and recently added support for dropping columns. So I’d amend your statement to “…make complex changes to your db schema.”

                                                                                                                    SQLite has stats tables, mostly for the query optimizer’s own use; I haven’t looked into them so I don’t know how useful they are for human inspection.
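For what it’s worth, here’s a small sketch of both points (Python’s built-in sqlite3 module, made-up table): the ALTER TABLE support and the optimizer’s stats table. Note that DROP COLUMN needs SQLite 3.35 (2021) or newer:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users (name) VALUES (?)", [("a",), ("b",), ("c",)])

    # ADD COLUMN has been supported for a long time.
    conn.execute("ALTER TABLE users ADD COLUMN email TEXT")

    # DROP COLUMN needs SQLite 3.35 or newer; older builds raise OperationalError here.
    conn.execute("ALTER TABLE users DROP COLUMN email")

    # The stats tables: ANALYZE populates sqlite_stat1 for the query planner,
    # and you can inspect it yourself, though it is fairly terse.
    conn.execute("CREATE INDEX users_name ON users (name)")
    conn.execute("ANALYZE")
    print(conn.execute("SELECT * FROM sqlite_stat1").fetchall())
    print(sqlite3.sqlite_version)  # version of the linked SQLite library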

                                                                                                                    1. 2

                                                                                                                      SQLite has supported ALTER TABLE ADD COLUMN for years, and recently added support for dropping columns. So I’d amend your statement to “…make complex changes to your db schema.”

                                                                                                                      Yeah, the drop column is a nice addition, but it’s still a pain even for some fairly simple/common changes like renaming a column, changing a check constraint, etc. I wouldn’t really call these complex changes. It’s less of a pain than it was before, but still rather painful.

                                                                                                                      SQLite has stats tables, mostly for the query optimizer’s own use; I haven’t looked into them so I don’t know how useful they are for human inspection.

As far as I could find a while ago there’s nothing like PostgreSQL’s internal statistics, for example keeping track of things like the number of seq scans vs. index scans. You can use explain query plan of course, but query plans can differ based on which parameters are used, table size, etc., and the query planner may surprise you. It’s good to keep a bit of an eye on these kinds of things for non-trivial cases. Logging slow queries is similarly useful, and AFAIK not really something you can do in SQLite out of the box (although you can write a wrapper in your application; see the sketch at the end of this comment).

                                                                                                                      None of these are insurmountable problems or show-stoppers, but as I mentioned in my other comment from a few weeks ago, overall I find the PostgreSQL experience much smoother, at the small expense of having to run a server.
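A minimal sketch of that application-side wrapper, assuming Python’s built-in sqlite3 module and a made-up threshold; it only times the Connection.execute shortcut (not cursors or fetch time), so treat it as an illustration rather than a complete solution:

    import sqlite3
    import time

    SLOW_MS = 50  # hypothetical threshold; tune to taste

    class TimingConnection(sqlite3.Connection):
        """Log any statement that takes longer than SLOW_MS milliseconds."""

        def execute(self, sql, parameters=()):
            start = time.monotonic()
            try:
                return super().execute(sql, parameters)
            finally:
                elapsed_ms = (time.monotonic() - start) * 1000
                if elapsed_ms > SLOW_MS:
                    print(f"slow query ({elapsed_ms:.1f} ms): {sql}")

    conn = sqlite3.connect(":memory:", factory=TimingConnection)
    conn.execute("CREATE TABLE t (x INTEGER)")
    conn.execute("SELECT * FROM t")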

                                                                                                                      1. 6

                                                                                                                        it’s still a pain even for some fairly simple/common changes like renaming a column

                                                                                                                        https://sqlite.org/lang_altertable.html :

ALTER TABLE RENAME COLUMN

The RENAME COLUMN TO syntax changes the column-name of table table-name into new-column-name. The column name is changed both within the table definition itself and also within all indexes, triggers, and views that reference the column.
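In Python’s built-in sqlite3 module that looks like this (made-up table; RENAME COLUMN needs SQLite 3.25 from 2018 or newer):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, fullname TEXT)")

    # Renames the column in the table definition and in any indexes,
    # triggers, and views that reference it.
    conn.execute("ALTER TABLE users RENAME COLUMN fullname TO display_name")

    print(conn.execute("PRAGMA table_info(users)").fetchall())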

                                                                                                                  1. 3

Another thing that helps, at least with marginally well-behaved clients, is to add the header Cache-Control: public, max-age=3600.
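For example, with nothing but the Python standard library (the handler and feed content here are made up, and any web framework has an equivalent knob):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class FeedHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<rss>...</rss>"  # placeholder feed content
            self.send_response(200)
            self.send_header("Content-Type", "application/rss+xml")
            # Lets well-behaved clients reuse the response for an hour
            # instead of re-fetching the feed on every poll.
            self.send_header("Cache-Control", "public, max-age=3600")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8080), FeedHandler).serve_forever()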

                                                                                                                    1. 2

                                                                                                                      I have this:

                                                                                                                      cache-control: public, max-age=86400, stale-if-error=60
                                                                                                                      

                                                                                                                      Is this sufficient? My feed isn’t updated more than once per day.

                                                                                                                      1. 1

                                                                                                                        Is this sufficient? My feed isn’t updated more than once per day.

                                                                                                                        I think that should be plenty! It blows my mind how clients can fall down on simple stuff like this.

                                                                                                                    1. 1

                                                                                                                      There’s a minor historical error in that document. Acorn were never an ARM licensee, but VLSI was, and the ARM processors used by Acorn after ARM was spun out from Acorn were produced under license from ARM by VLSI.

                                                                                                                      1. 3

                                                                                                                        I feel rather strongly that this is intentional obfuscation. Many companies want to share as little information as possible with users. In part this is because transparency and truthfulness become weaponized by competitors. But also because companies view customers as adversaries, or at least as frenemies, when their changes break your workflow.

                                                                                                                        1. 3

                                                                                                                          That and a huge number of app devs just lack either the skill or the ambition to write decent change notes. I complained about uninformative app changelogs years ago on reddit and got a pile of responses telling me that I was an idiot, because users don’t want to read a bunch of stuff about how you refactored the WindowStrategyFactoryObserver. When I answered that of course you don’t write about your code changes, you write about what they actually mean for users, I got two kinds of responses:

                                                                                                                          1. “Coming up with a user-centric list of improvements sounds like a lot of work, I’m not doing that!” (If you don’t already have bug reports, user stories, or some kind of thing that captures users’ problems and desires, that you can cross-reference your finished work to, how are you prioritizing work in the first place?)
                                                                                                                          2. “What about when a release doesn’t have user-visible changes?” (Okay, so this version has no significant changes from the user POV but it was important for you to push a release anyway? If that happens more than once in a blue moon then I think your app is doing something shady. And no, I don’t think something like an API upgrade falls under this category — “updated the Frobozz API to v3 so that Splorch Effects don’t stop working in March 2021” addresses a user concern [people want Splorch Effects to work], and is potentially useful to someone looking through the release notes in April wondering why Splorch Effects are broken and whether an upgrade might fix them.)
                                                                                                                          1. 3

                                                                                                                            And you’re almost guaranteed that those same devs have commit histories that are a complete and utter mess.