1. 12

    I used to be a fan of Python but almost all the stuff here seems weirdly tacked on. Added with good intentions for sure but a lot of it seems short-sighted, implemented in the least-effort way.

    • f strings are like u strings in Python 2: a compatibility hack to make old code not go bust. Maybe these should’ve been regular strings in Python 3 from the start, but for the language’s entire lifespan format strings were apparently unnecessary, yet suddenly we need them midway through Python 3? Odd.
    • Pathlib uses the division operator in a weird way. I guess this is similar to how + concatenates strings, but it seems overly cute for cuteness’ sake. Granted, this is a minor point.
    • Type hinting. I guess it is nice, but it doesn’t seem fully formed: annotations were introduced in 3.0 with no guidelines on how to use them. I’m not necessarily a fan of adding features without a clear idea of how to use them, but okay, we seem to be getting somewhere now.
    • Enumerations look like a weird hack, not a language feature. Without matching/dispatching on Enums they don’t feel all that useful. This is like ADTs but without the good parts.
    • Data classes, or as we in ML call them, records. A useful feature, again implemented as a decorator hack, but I can certainly see it being helpful. Again, there is no destructuring on them, which would make them much more useful (see the sketch after this list).
    • Implicit Package Namespaces. Could you make Python packages more obscure with more special rules? Turns out you can.
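
    For concreteness, here is a minimal sketch of what the Enum and data class features from this list look like (names made up for illustration):

        from dataclasses import dataclass
        from enum import Enum

        class Color(Enum):       # a class with metaclass magic, not new syntax
            RED = 1
            GREEN = 2

        @dataclass               # a decorator that generates __init__, __repr__, __eq__
        class Point:
            x: int
            y: int

        p = Point(1, 2)
        x, y = p.x, p.y          # no built-in destructuring; you unpack by hand
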
    1. 17

      Why do you feel that Py3’s f strings are like Py2’s u strings? The f stands for “format”, meaning you can avoid the .format() method. The u stands for “unicode”, and is something else entirely. I don’t see how the two compare, would you care to elaborate?

      The division operator used in pathlib merely mimics the directory separator in unices. If we have p = pathlib.Path('/path'), then p / 'to' / 'file' is the same as /path/to/file. Pretty easy and convenient, if you ask me.
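
      In case it helps, here is a runnable version of that example (on unices; Windows paths print with backslashes):

          import pathlib

          p = pathlib.Path('/path')
          print(p / 'to' / 'file')   # /path/to/file
          # Path overloads __truediv__, so / joins path segments.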

      1. 2

        Why do you feel that Py3’s f strings are like Py2’s u strings?

        Because both are what strings should’ve been by default. But Python 2 couldn’t be changed compatibly, so Python 3 strings are what u strings were in Python 2. Similarly with f strings: in languages that do format strings this way (Ruby, Shell, PHP come to mind) it is the default behaviour, with a way to opt out. But that can’t be added to Python 3, because all existing code that happens to contain formatting characters would start doing unexpected things, so f strings had to be created. A hypothetical Python 4 would probably use f strings as the default.

        My complaint here is: if a new formatting syntax had to be introduced, why was it added now and not in a clean syntactic way when Python 3.0 was introduced?
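
        For reference, a minimal sketch of the three generations of formatting side by side:

            name = "world"
            print("Hello, %s!" % name)         # percent formatting, Python 2 era
            print("Hello, {}!".format(name))   # str.format, since 2.6/3.0
            print(f"Hello, {name}!")           # f strings, only since 3.6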

        The division operator used in pathlib merely mimics the directory separator in unices.

        I understand that it is for that purpose, but it is a “cutesy” misuse of division. The inverse of division would be multiplication, but what would multiplication on a path even mean? As said, though, this is a minor complaint, since Python already has + on strings with no symmetrical - operation.

        Fun fact: Elixir has ++ on lists and also -- on lists. What do they do? I’m leaving this as an exercise to the reader.

        1. 1

          Thanks, I understand what you’re getting at.

        2. 1

          Operator overloads, yeah, brilliant. Let’s bless this kind of crap so we can be surprised by more un-obvious, difficult-to-introspect garbage.

        3. 2

          I kind of agree. It seems like a lot of recent changes make Python more complicated for not much gain. The language is starting to feel bloated instead of elegant and focused, to me at least. Thankfully they’re almost all opt-in features, so I can continue using Python as I like it and only pick up the new “features” from which I genuinely get some benefit.

          1. 2

            The data classes implementation is so much nastier than namedtuple, and given that it’s yet another decorator hack, I don’t understand why it has a PEP and is in the standard library.
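
            For comparison, a minimal sketch of the two approaches (names made up):

                from collections import namedtuple
                from dataclasses import dataclass

                PointNT = namedtuple("PointNT", ["x", "y"])
                x, y = PointNT(1, 2)     # it's a tuple, so destructuring works

                @dataclass
                class PointDC:           # generated __init__/__repr__/__eq__,
                    x: int               # but no unpacking out of the box
                    y: int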

            Type hinting, asyncio (or Twisted), f strings (an abomination)… no wonder Guido bounced. The whole place is going crazy! What happened to the Zen of Python? All of the aforementioned seem like direct contradictions of it.

            Fuck. I’ll be writing C and Lua if you need me.

            1. 1

              Implicit Package Namespaces. Could you make Python packages more obscure with more special rules? Turns out you can.

              The article is wrong; you should not use this feature the way it suggests. It does solve a real problem — packages like zope that are broken into multiple distributions on PyPI — see PEP 420. In this advanced use case it replaces a disgusting setuptools hack, so I’d count it as progress.
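
              For the curious, a sketch of the difference (package names are made up):

                  # Before PEP 420, every distribution shipping a piece of zope.*
                  # needed a zope/__init__.py containing the setuptools incantation:
                  __import__('pkg_resources').declare_namespace(__name__)

                  # With PEP 420 implicit namespace packages, the distributions
                  # simply omit zope/__init__.py and Python merges the directories:
                  #   dist-one/zope/interface/...
                  #   dist-two/zope/component/...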

            1. 2

              This post is obviously written from the perspective of someone who cares about safety and security. Safety is a very important ‘top’ to the system but there are others which can be more important depending on what the user values. The software can be as safe as you want, but if it doesn’t solve the problem I need the software to solve, then it’s useless to me. If safety concerns are preventing me from writing software that is useful to people, then it’s not valuable. In other words, sometimes ‘dangerous code’ isn’t what we need saving from.

              Personally, I feel what we need saving from is people building software who have zero consideration for the user. So the better I can directly express mental models in software, the better the software is IMO. Modern C++ is actually really good at allowing me to say what I mean.

              1. 3

                This is based on the assumptions that safety is only useful as an end in itself, and that safety decreases a language’s usefulness. The counterpoint is that safety features eliminate entire classes of bugs, which reduces the amount of time spent on debugging and helps ship stable and reliable programs to users.

                Rust also adds fearless concurrency. Thread-safety features decrease the amount of effort required to parallelize a program correctly. For example, parallel iterators are simple to use and can guarantee their usage won’t cause memory corruption anywhere (including in dependencies and 3rd-party libraries!).

                So thanks to safety features you can solve users’ problems quickly and correctly.

                1. 1

                  I feel that one day both C and C++ will be relegated to academic “teaching languages” that students will dread, used only to explain the history and motivations behind the more complex (implementation-wise) but better languages that overtake them.

                  1. 1

                    I am not sure why that would ever happen. As teaching languages both are pretty much useless, whether for the surface simplicity and hidden complexity of C or for the sheer size of C++. We currently don’t teach BCPL or ABC or any other predecessor of today’s popular languages, because while interesting from a historical perspective they don’t teach you all that much.

                    1. 2

                      Late response, but I totally agree with you. I was thinking of it more in terms of the way assembly is typically taught to CS students. It’s good to know that your code will be run as these machine instructions eventually, but it’s not strictly necessary for developing useful applications.

              1. 4

                 Is there any comparison between the different statically typed options for the Erlang VM? I know there is Alpaca, and I faintly remember stumbling across another language but can’t remember its name.

                1. 7

                   There is Alpaca, PureScript (purerl), Gradualizer, and Elchemy that I know of. I’ve not written a comparison, as I’m wary of seeming unfriendly or competitive when really I would like all these projects to succeed!

                  Are there more specific questions I could answer for you?

                  1. 3

                    Thanks for the list!

                    I guess my main question would be which of these would map my existing OCaml workflow best onto the Erlang VM.

                    1. 7

                       From where I’m standing I would say that Purerl is the most mature and usable right now, making it a good choice. It is more of a Haskell than an OCaml, though; Gleam is in some ways closer to OCaml there.

                       On the whole they’re all rather immature projects, so if you want real-world use I probably wouldn’t use any of them (except possibly Purerl). I intend to continue developing Gleam and the ecosystem around it, so hopefully it’ll fill that niche in the not-too-distant future.

                1. 10

                  I have found quite a number of jobs in my career that I found to be meaningful. Working for the company that did a lot of the donation processing for the Barack Obama campaign was meaningful to me. I will always be proud of the work I did there.

                  Ditto working for the human genome project.

                  To be honest, much though I expect to be tarred and feathered for saying so, I find my work at AWS to be incredibly meaningful. I feel like we’re advancing the state of the art in commercial fault tolerant distributed filesystems. I realize the ‘commercial’ bit means that the understanding of how to run such a thing won’t get contributed back to the commons, and while I wish it wasn’t so I also feel like I’m at peace with living in a world where making money is how we keep score, and as such closed source is a thing we choose to live with.

                  I like this article, because I think it’s helpful for some people who lack meaning in their work to understand some of the things that can contribute to finding it.

                  1. 2

                    To be honest, much though I expect to be tarred and feathered for saying so, I find my work at AWS to be incredibly meaningful.

                     I think not, since the point of the post is that the job should be meaningful to you first and foremost, not to other people.

                     That said, I see the conflict with the proprietary (not so much commercial) bit and would prefer that this knowledge find its way into the commons. Over time it probably gradually will, since things that were very difficult years ago are a commodity nowadays. You can also view creating fault-tolerant distributed filesystems as an enabler for other potentially meaningful work on top of them.

                    1. 5

                      That’s a shortsighted definition of ‘meaningful’.

                      Someone working at Unilever can seriously influence the amount of plastic used worldwide, by working on a project to reduce the amount used in detergent packaging, even if they internally sell it as a cost saving measure. They can find that immensely meaningful and it may well turn out to be one of the small things that in the end prevents us from ruining the environment.

                       If you work on a filesystem for Amazon that improves their FS reliability, which reduces their storage costs, which enables them to offer a few products slightly cheaper, then you improve the prosperity of all users of Amazon, which includes poor people for whom the prosperity improvement is relatively large. That is meaningful to them, even if for a different reason than why the work is meaningful to you.

                  1. 1

                    Is it just me or do things work very differently in the Windows world than the Linux world? Things like not caring about (upper/lower) case so much so that they decided to toggle that flag for their OSX filesystems… Any time you map multiple characters together you are asking for trouble.

                    1. 1

                      The case insensitivity is on by default on macOS filesystems, so unless you want every user of your software to switch their FS settings it makes sense to go with the flow. A coworker of mine turned on case sensitivity and it turns out lots of software on macOS can’t handle it properly.

                    1. 3

                      I was alarmed by this paragraph:

                      I fixed numerous instances of a particular suppressed compiler warning on Windows because g++ didn’t have a convenient way to suppress all the ways that warnings could be triggered.

                      I have so many questions about the development culture in the team that produced that code.

                      1. 2

                        I mean, also this:

                        When a build error occasionally snuck past pre-validation and was submitted, the Linux CI pipeline was so fast that frequently the branch owners would see the build error on Linux, assume it was a Linux-only problem, and assign the resulting defect to the Linux porting team.

                        No wonder the port took so long when people who regressed the port just dropped code over the fence for others to fix.

                      1. 5

                        I just wish that I could get DuckDuckGo to return the results I’m looking for more than 30% of the time.

                         I switched full bore to DDG for ~3 months and, after a reasonable amount of flailing, found myself reverting to Google 60+% of the time :(

                        1. 2

                          How long ago was this? I’ve found DDG now gives me what I want at least 90% of the time. When I tried a few years ago, it was much worse. Might be time to start tracking it, actually.

                          1. 5

                            I’ve been using DDG as my “first search engine” for five years or so (probably the only instance where I was an “early adopter”!) and find it’s consistently worse than Google. I regularly fall back to adding !g (maybe 15-25% of the time?)

                            For example, searching “Dunedin” on DDG gives me many results about Dunedin, Florida rather than Dunedin, New Zealand (where I live). Combine Dunedin with any other term (e.g. “Dunedin trash pickup”, “Dunedin concerts”, etc.) and it’s consistently worse.

                             I had the same problem when I lived in Bristol, UK. Because English settlers had the imagination of the average dining table, there are about 15 cities named Bristol in the United States, too. Google always gave me better results when I wanted to find out anything about the One True Bristol.

                            Trying to search for stuff in Dutch (rather than English) has always been tricky.

                            Occasionally DDG will give me “no results”, whereas going to Google will give me what I want. I can’t recall a specific search term right now.

                            1. 1

                              For example, searching “Dunedin” on DDG gives me many results about Dunedin, Florida rather than Dunedin, New Zealand (where I live).

                              While I think you raise very valid points I think the idea of DDG is to not create a personalized filter bubble, so just picking whichever Dunedin has the best SEO is sort of the idea behind DDG.

                               I am not sure I am on board with that idea either; I always get frustrated when GMaps suggests places on the other side of the world when for the most part it is painfully obvious I don’t actually want to cycle across the Atlantic Ocean to buy ice cream.

                        1. 4

                           I wonder if there is an introduction to imperative programming for the pragmatic programmer. This hidden state changing in the background without notice seems terribly difficult and impractical to program with. How would refactoring even work when you aren’t clear about what goes in and what comes out?

                          1. 3

                             This is precisely why I think that FP should be taught first. It really makes you appreciate the value of avoiding mutable shared state, and it provides you with the tools to do that. Having developed this awareness really helps when you’re working in an imperative language, where you have to be much more careful in this regard.

                          1. 7

                             There are web frameworks in OCaml, Ocsigen being the most mature. But there are also other typed functional languages with web frameworks! Servant is a new one in Haskell, but there are others; the same goes for F# and Scala.

                             I don’t think that types and functional programming are just missing a good web framework to win the day. Web frameworks are fundamentally for building GUIs; they just so happen to be GUIs downloaded on the fly. And GUI programming has historically been the least typeful sort of programming. Why would I need a compiler to tell me that I’m right when I can see right here that I am? There’s a lot to be said for confidence in refactoring and ease of checking, but it’s not quite as necessary as it is for systems programming, distributed systems, etc.

                            That said, there is a lot of cool new stuff going on in the Elm / Virtual Dom space — and OCaml’s certainly part of it.

                            1. 6

                              I wouldn’t call Servant a web framework. It’s more for building APIs - you don’t get templating or any other MVC niceties that you’d get with Rails. Instead, you get typed, declarative APIs that in some cases you can then use to generate client code or documentation.

                              1. 4

                                 Why would you say GUIs are any less “typeful”? How are you seeing that you are right? Maybe on perfectly static pages I could see that it might be easy to visually check the page, but most pages are not strictly static HTML. I’ve been using Giraffe + Fable-Elmish for my F# webdev, which lets me have types on both client and server. Having the same data representation on the client and server allows very simple client-side validation, while also potentially running the same validation on my server. I can use the same code for both.

                                1. 3

                                   Yeah, I don’t think that GUIs are inherently typeful or not: just pointing out that historically most GUIs have been written in dynamically typed languages: JavaScript, Tcl, Ruby on Rails. Even some statically typed frameworks like Vala/GObject and Objective-C have an “Any” type. My guess is that this tendency is because it’s a lot easier to manually test GUIs, even ones that aren’t static, than it is to manually test distributed systems, kernels, compilers, etc. I’m not saying that types aren’t useful for writing GUIs too, though: having statically checked data across your server and client is awesome — your F# stack sounds great!

                                  1. 1

                                     I see! You weren’t saying that they are inherently typeful or non-typeful; you were describing the languages that are used for front end. That makes a lot more sense. I think part of it is that for a long time front-end dev was seen as needing to be accessible to designers. Types have had a reputation for being “hard” because they tell you when you’ve boned it up, and that sometimes comes across as harsh, I think. Really, though, a good type system is like bowling with bumpers.

                                2. 2

                                   If Servant is like Erlang’s Webmachine or Clojure’s Liberator, then there is a pretty decent OCaml alternative: webmachine. I’ve used it, and the biggest learning curve was the OCaml object system.

                                1. 12

                                  A brief, very short, no good history of email sender verification:

                                   1. 90s: PGP encryption and signing get developed but never gain mainstream adoption due to their brittle design
                                   2. Early 2000s: SPF gets created to link sender addresses to whitelisted IP address ranges. The proposed standard is undermined by the SMTP envelope FROM being allowed to differ from the user-visible “From” header, making it trivial for spammers to evade restrictions, and by the lack of policy controls defining how receivers should react to validation failure.
                                   3. Late 2000s: DKIM appears, which digitally signs emails so they can be verified against a public key published in a DNS record. By itself DKIM is not actionable, as it’s not clear what the receiving server should do about invalid or unsigned emails. No large mail provider takes action based on SPF or DKIM alone.
                                   4. Early 2010s: DMARC gets developed: a policy framework that lets domain owners declare what receiving mail servers should evaluate incoming emails from their domain on (validate based on DKIM? SPF? Both? Neither?), what to do on validation failure, and a reporting mechanism so domain owners can look at reports in aggregate.

                                   So to recap, the solution already exists: deploy SPF+DKIM, define your DMARC policy, and you have email sender validation. Ignore PGP and S/MIME; those are very 90s standards that are not feasible to deploy at scale.
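
                                   A sketch of how to inspect those records yourself, using the third-party dnspython package (the domain is a placeholder):

                                       import dns.resolver  # pip install dnspython

                                       domain = "example.com"  # placeholder
                                       for label in (domain, "_dmarc." + domain):
                                           for rdata in dns.resolver.resolve(label, "TXT"):
                                               print(label, rdata)

                                       # Typical answers look like:
                                       #   example.com         "v=spf1 include:_spf.example.com -all"
                                       #   _dmarc.example.com  "v=DMARC1; p=reject; rua=mailto:dmarc@example.com"
                                       # DKIM public keys live at <selector>._domainkey.<domain>; the
                                       # selector comes from a message's DKIM-Signature header.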

                                  Yeah, most organizations (90%-ish) decline to set a hard-reject DMARC policy. That is a story for another day though.

                                  1. 6

                                    Yeah, most organizations (90%-ish) decline to set a hard-reject DMARC policy. That is a story for another day though.

                                     That’s probably due to broken mailing lists. IETF MLs correctly handle that case by changing the From header, and lists.sr.ht doesn’t modify the e-mail at all, but too many mailing lists just modify the e-mail and relay it further, thus breaking DKIM signatures.

                                    1. 1

                                      Good points, I’ve set up SPF and DKIM on my server but will set up DMARC and make SPF stricter!

                                    1. 17

                                       The problem is we have two bad solutions, bad for different reasons. Neither of them works transparently for the user.

                                      GnuPG was built by nerds who thought you could explain the Web of Trust to a normal human being. S/MIME was built to create a business model for CAs. You have to get a cert from somewhere and pay for it. (Also for encryption S/MIME is just broken, but we’re talking signatures here, so…) And yeah, I know there are options to get one for free, but the issue is, it’s not automated.

                                      Some people here compare it to HTTPS. There’s just no tech like HTTPS for email. HTTPS from the user side works completely transparently and for the web admin it’s getting much easier with ACME and Let’s Encrypt.

                                      1. 7

                                        We don’t need WoT here though. WoT exists so you can send me a signed/encrypted email. Nice, but that’s not what’s needed here.

                                        1. 3

                                           Of course you need some measure of trust like a WoT or CA, because how else are you going to verify that the sender is legitimate? Without that you can only really do xkcd authentication.

                                          1. 5

                                            Yes, you need some way to determine what you trust; but WoT states that if you trust Alice and I trust you, then I also trust Alice, and then eventually this web will be large enough I’ll be able to verify emails from everyone.

                                            But that’s not the goal here; I just want to verify a bunch of organisations I communicate with; like, say, my government.

                                            I think that maybe we’ve been too distracted with building a generic solution here.

                                            Also see my reply to your other post for some possible alternatives: https://lobste.rs/s/1cxqho/why_is_no_one_signing_their_emails#c_mllanb

                                            1. 1

                                               Trust On First Use goes a long way, especially when you have encryption (all its faults notwithstanding) and the communication is bidirectional, as the recipient will notice that something is off if you use the wrong key to encrypt for them.

                                          2. 1

                                            Also for encryption S/MIME is just broken

                                            It is? How?

                                            1. 2

                                              The vulnerability published last year was dubbed EFAIL.

                                              1. 1

                                                Gotcha. Interesting read. I’ll summarize for anyone who doesn’t want to read the paper.

                                                The attack on S/MIME is a known plaintext attack that guesses—almost always correctly—that the encrypted message starts with “Content-type: multipart/signed”. You then can derive the initial parameters of the CBC encryption mode, and prepend valid encrypted data to the message, that will chain properly to the remainder of the message.

                                                To exfiltrate the message contents you prepend HTML that will send the contents of the message to a remote server, like an <img> tag with src="http://example-attacker-domain.com/ without a closing quote. When the email client loads images, it sends a request to the attacking server containing the fully decrypted contents of the message.

                                                S/MIME relies on the enclosed signature for authenticity AND integrity, rather than using an authenticated encryption scheme that guarantees the integrity of the encrypted message before decryption. Email clients show you the signature is invalid when you open the message, but still render the altered HTML. To stop this attack clients must refuse to render messages with invalid signatures, with no option for user override. According to their tests, no clients do this. The only existing email clients immune to the attack seem to be those that don’t know how to render HTML in the first place.
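
                                                The CBC malleability at the core of this is easy to demonstrate. A minimal sketch with the third-party cryptography package (toy key and a single block, not the actual exploit):

                                                    import os
                                                    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

                                                    key, iv = os.urandom(16), os.urandom(16)
                                                    known = b"Content-type: mu"  # 16 bytes the attacker can guess
                                                    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
                                                    ct = enc.update(known) + enc.finalize()

                                                    # Without knowing the key, flip the first block via the IV:
                                                    chosen = b'<img src="http:/'  # 16 attacker-chosen bytes
                                                    evil_iv = bytes(a ^ b ^ c for a, b, c in zip(iv, known, chosen))

                                                    dec = Cipher(algorithms.AES(key), modes.CBC(evil_iv)).decryptor()
                                                    print(dec.update(ct) + dec.finalize())  # b'<img src="http:/'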

                                                The GPG attack is similar. Unlike S/MIME, GPG includes a modification detection code (MDC). The attack on GPG thus relies on a buggy client ignoring errors validating the MDC, like accepting messages with the MDC stripped out, or even accepting messages with an incorrect MDC. A shocking 10 out of 28 clients tested had an exploitable form of this bug, including the popular Enigmail plugin.

                                          1. 5

                                             I don’t understand why the author dismisses DKIM so quickly. The thing you want to know is whether the mail was sent by the organization it claims to come from.

                                            So, was the gift card sent by the University of Queensland or a scammer? Was that gift card sent by Amazon or a scammer? Was the bank email sent by your bank?

                                             You don’t actually need to verify the signature end to end for this to be a massive improvement over the status quo. Yes, someone could’ve gotten my bank to send scam emails, but then it is my bank’s responsibility to solve that problem. Yes, signing would be better, but let’s start with the easy things to fix, and DKIM so far has the most reasonable chance of doing so.

                                            1. 4

                                               I don’t understand why the author dismisses DKIM so quickly. The thing you want to know is whether the mail was sent by the organization it claims to come from.

                                              But that is not what DKIM really does? If I send an email from mail.amazon-account-security.com or amazonn.com then it just verifies that it was sent from that domain, not that it was sent from the organisation Amazon Inc.

                                               What I am proposing is subtly different. In my (utopian) future every serious organisation will sign their email with PGP (just like every serious organisation uses HTTPS). For large organisations, email clients can bake the keys in. Then every time I get an email which claims to be from Amazon I can see that it’s either not signed, or not signed by a key I know.

                                              I think this makes sense?

                                              1. 4

                                                Oh, now I see where you’re coming from. Thanks for explaining.

                                                But that is not what DKIM really does? If I send an email from mail.amazon-account-security.com or amazonn.com then it just verifies that it was sent from that domain, not that it was sent from the organisation Amazon Inc.

                                                 That is true. But then orgs could have a policy that you can trust emails sent from “amazon.com” (as long as DKIM verification succeeds). Because right now, to verify, you have to look at the email and decide whether you want to click a link that goes to login.amazonn.com, which is hidden in the HTML part of the mail anyway.

                                                 A field saying “yes, this mail really came from amazon.com” or “yes, it came from paypal.com” would make my life a lot easier, because I could immediately discard phishing emails that fail this check. But so far adoption of DKIM seems rather low, and I myself only use it as a weight in my spam filter, since organisations might not be strict about setting up DKIM properly.
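
                                                 For what it’s worth, receiving servers already record that verdict in the Authentication-Results header; a sketch of reading it (file name and header value are illustrative):

                                                     import email
                                                     from email import policy

                                                     with open("message.eml", "rb") as f:  # a saved raw message
                                                         msg = email.message_from_binary_file(f, policy=policy.default)
                                                     print(msg["Authentication-Results"])
                                                     # e.g. mx.example.net; dkim=pass header.d=amazon.com;
                                                     #      spf=pass smtp.mailfrom=bounces.amazon.com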

                                                 The problem with your suggestion is that you’ve now shifted the trust problem to some kind of CA. Email clients can bake that in and give Amazon emails preferred treatment, while my own domain is just as untrusted as any scammer’s. Or we have CAs like the existing TLS ones, where everyone can get a certificate and the signed email will say “yes, this email is signed and is guaranteed to be sent by accounts@amazonn.com”.

                                                That’s sort of how S/MIME works in big orgs, where users trust the internal CA to only give out certificates to authorized people. But it can’t be scaled to the Internet.

                                                 A similar problem exists for HTTPS just as well, since you can go to https://www.amazonn.com and get your account information stolen just fine; there is no authority to tell you that the URL is most likely not what you want. I mean, Safe Browsing does something like this, again giving priority to big established sites.

                                                1. 1

                                                   Yes, a CA giving out certificates to anyone would defeat the point. Keys baked into email clients would be a convenience, or “starter kit”, not the entire solution.

                                                   I’m not completely sure what the best possible solution is. One solution might be to simply relax: once you sign up for a service you get a “welcome aboard” message (or even email) which allows you to easily import their key. The crypto-fanatics will be quick to point out this isn’t a secure key exchange, but that’s okay. We’re just concerned with “every-day security”, not “I am guarding the nuclear launch codes” security. For those purposes this is more than secure enough (and a lot better than doing nothing!).

                                                   Words like “signing” or even “signature” don’t need to be mentioned; the email client might show a message like “Outlook verified that this email was sent by Amazon, Inc.”, and the initial key import might be something like “Allow Outlook to verify that emails are sent by Amazon, Inc.”

                                                  CAs could still work, provided that they do proper/real verification instead of the “we’ll give anyone a certificate if they ask”-approach.

                                                   Even if we limit ourselves to just baking keys into clients, that would still be a gain. Most mass-phishing campaigns seem limited to just a few large websites (PayPal, Amazon, Google, Apple, Facebook, Twitter, etc.) with millions of accounts. Protecting these is a good start.

                                                2. 3

                                                  Who decides which signatures are baked in?

                                                  This is exactly the same problem as CAs.

                                                  1. 2

                                                    Okay; so what alternative do you propose?

                                                    1. 1

                                                      If I could solve this problem in the comment section, I would have done so already :)

                                              1. 3

                                                Very interesting to see an alternative to WeTransfer that is hopefully less shady!

                                                1. 3

                                                  Not a lot of technical content. The issue with modern zip archives is apparently new addressing modes to handle larger files that aren’t universally supported in unzip tools.

                                                  A different issue I’ve come across with zip files on Linux is filename encoding. Some compression tools still don’t use UTF-8 and some of the decompression tools on Linux don’t handle the filename encoding at all and just dump whatever bytes they get into the file system. There doesn’t seem to be an easy way to fix the filenames then.
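
                                                   CPython’s own zipfile behaves the same way: when the UTF-8 flag bit isn’t set, it decodes names as cp437. A sketch of the usual repair trick (the archive name and the real encoding are assumptions):

                                                       import zipfile

                                                       with zipfile.ZipFile("archive.zip") as zf:
                                                           for info in zf.infolist():
                                                               name = info.filename
                                                               if not info.flag_bits & 0x800:  # UTF-8 flag unset
                                                                   # undo the cp437 decode, re-decode with a guess
                                                                   name = name.encode("cp437").decode("gbk", errors="replace")
                                                               print(name)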

                                                  1. 2

                                                    The issue with modern zip archives is apparently new addressing modes to handle larger files that aren’t universally supported in unzip tools.

                                                     It also seems to note that if you store your ZIP on a FAT32 disk it might get truncated and mistakenly be called corrupt, or the file it decompresses to might be larger than 2GB, which leads to problems.

                                                     I don’t know, it seemed mostly like a rant about stuff failing that the ZIP format can do pretty little about. Plus, most of these old files are way smaller than 2GB (back in the day your HDD was smaller than 2GB to begin with), so it seems like quite an edge case.

                                                    1. 5

                                                      I don’t think this piece is advocating for anyone to abandon the ZIP format. You could make similar complaints about literally any format, including plain text. If the article has a point, it’s to spread awareness of the challenges that archivists face, for the purpose of highlighting the importance of funding that work appropriately.

                                                  1. 5

                                                     I have tried a number of keyboards. I have two Pok3rs and an ErgoDox EZ, and boy do I not like the ErgoDox. I first got it without the legs and it was borderline unusable for me. With the legs it is slightly better, but reaching all the keys is way too difficult for me. Also, compared to the nicely machined aluminium case of the Pok3r it feels intensely cheap (quite a feat for a keyboard that easily costs 3x as much as the Pok3r). I did like the extreme customization options. Being able to control the mouse cursor was cool, and being able to use ‘A’ as a control key would be amazing if I had it on my 60% keyboards.

                                                    Funnily enough a co-worker asked to borrow the ErgoDox since I just had it in my drawer and a few weeks later he returned it with the comment “oh man, how do I get rid of the ErgoDox set I bought for at home”.

                                                     I mostly code with the computer on the desk and the keyboard on my lap, so a split keyboard is not really an option. I tried the Atreus a friend of mine has, and to me it seemed amazing. The fact that it is one piece but still ergonomic and ortholinear is really exciting. I really hope I can get around to building one some day. There is a European seller offering nice bamboo cases - turns out I really appreciate solid keyboard cases.

                                                    1. 2

                                                       I am a bit confused why Joy is supposedly a descendant of FP while Factor descends from Forth. I haven’t used FP, but Joy as a concatenative language feels very similar to Factor (or rather, the other way round), and I suspect there is some inspiration going on one way or another.

                                                      Also, JavaScript descends directly from first order logic, instead of a bastardized Scheme with bits of syntax stolen from Java?

                                                       Still, I spent a lot of time staring at the diagram and it was fun.

                                                      1. 2

                                                        Supposedly, the concatenative approach was a parallel evolution rather than direct influence. The author talks about the origins of his language here. Forth influence is still notated with dashed lines in the complex version. But I should indeed note that Joy influenced Factor.

                                                      1. 9

                                                        Right on, so it’s not called Sir Hat!

                                                        1. 7

                                                          Allow me to take your code, m’lady.

                                                          1. 1

                                                            I’m pretty sure it was always just called ‘ess arr dot eitch tee’.

                                                            1. 3

                                                              I’m happy to announce today that I’m opening sr.ht (pronounced “sir hat”, or any other way you want) […]

                                                              https://drewdevault.com/2018/11/15/sr.ht-general-availability.html

                                                              But Source Hut sounds really nice! And it’s great that hosted users can continue using the short form :).

                                                          1. 3

                                                            essentially squat the username lest someone else take it over and cause me trouble down the line

                                                             As far as I know Gmail does not reassign local parts anymore, so if you feel like it you can delete it. But maybe it is still wise to keep it, in case someone contacts you there or something.

                                                            1. 2

                                                               Well, I feel dumb now… I reverse-engineered a compression scheme some time ago (by staring at hex dumps until it clicked), and it turns out it’s just a rather simple variant of LZ.

                                                              1. 1

                                                                 Apart from the effort you could have saved by just assuming it was some variant of LZ, I think that is still pretty impressive, and I’m sure you learned a lot from it. So no reason to feel dumb.

                                                              1. 7

                                                                 For my part, after using TypeScript for a small project, I realized that while it does have some benefits over plain ES6, the underlying semantics are still those of JavaScript and therefore the ROI is never going to be high. Which is why I turned to researching ReasonML and ClojureScript as languages that compile to efficient JavaScript but also bring different semantics to the table.

                                                                1. 4

                                                                   This is always the problem with retrofitting type systems onto untyped languages. While somewhat possible, it makes the type system very complicated and slow. Implementing basic ML-style type checking is a simple task that already nets you a somewhat usable language, whereas the effort for solutions like core.typed is way larger, requiring PhD students and, in the case of TypeScript and Flow, multiple man-years.

                                                                  So I somewhat agree with the blog post but for completely different reasons. If I were to write JavaScript code these days I would probably compile it out of OCaml with js_of_ocaml. Or maybe PureScript.

                                                                  1. 2

                                                                     I can’t really argue the pros/cons of dynamic vs static types, as I’ve never used a “proper” statically typed language “in anger”. My argument against using TypeScript is that it (by design) doesn’t try to address the biggest pains of JavaScript: equality semantics and a standard library.

                                                                    On the topic of Clojure, I know that the ClojureScript compiler does do some type inferencing to generate more efficient JS code (remove a bunch of guards etc), and also gives you some warnings if you try to add an int to a string, for example. I’m interested to see where that goes…

                                                                  2. 2

                                                                    I have found pretty much the same thing. I prefer writing plain JavaScript myself (but I’ve been doing it for a LONNNNNG time and have so much familiarity that it never bothers me anymore). I can understand why someone would switch to something like ClojureScript or ReasonML (or Haxe or…), but TypeScript doesn’t really offer that much back given the investment.