Threads for skade

  1. 1

    Standard ML can be seen as “minimal Rust” if you want. I know, hot-take.

    Given that Rust is an ML, I’m not sure what the hot take is here?

      1. 4

        “Not Rocket Science” from Graydon Hoare describes a system from 2001 doing commit gating (the post was published in 2014, but places the system in 2001).

        https://graydon2.dreamwidth.org/1597.html

        1. 3

          I thought CruiseControl was the first one. 2010.

          1. 4

            CruiseControl came out in 2001. I don’t remember it having native support for gated commits.

            Regardless, even in open source, projects were doing “gated commits” before Drizzle.

        1. 14

          I’m very curious how these companies address the fact that there are countries where smartphones are not universally owned (because of cost, or lack of physical security for personal belongings).

          1. 8

            At least Microsoft has multiple paths for 2FA - an app, or a text sent to a number. It’s hard to imagine them going all in on “just” FIDO.

            Now, as to whether companies should support these people - from a purely money-making perspective, if your customers cannot afford a smartphone, maybe they’re not worth that much as customers?

            A bigger issue is if public services are tied to something like this, but in that case, subsidizing smartphone use is an option.

            1. 24

              if your customers cannot afford a smartphone, maybe they’re not worth that much as customers?

              I had a longer post typed out, and I don’t think you meant this at all, but at a certain point we need to stop thinking of people as simply customers and recognize that we’re taking over functions typically subsidized or heavily regulated by the government, like phones or mail. It was not that long ago that you could share a phone line (telcos were heavily regulated) with family members or friends when looking for a job or waiting to be contacted about something. Or pay bills using the heavily subsidized USPS. Or grab a paper and go through the classifieds to find a job.

              Now you need LinkedIn/Indeed, an email address, Internet access, your own smartphone, etc. to do anything from paying bills to getting a job. So sure, if you’re making a throwaway clickbait game you probably don’t need to care about this.

              But even on this very website, do we want someone who is not doing so well financially to be deprived of keeping up with news in their industry, or someone too young to have a cellphone to be kept from participating? I don’t think it is a god-given right, but the more people are denied access to things you or I have access to, the greater the divide becomes. Someone might have a laptop and no Internet, but be able to borrow a neighbor’s wifi. Similarly, a family of four might not have a cell phone for every family member.

              I could go on, but like discrimination or dealing with people of various disabilities, it is something that’s really easy to forget.

              1. 15

                I should have been clearer. The statement was a rhetorical statement of opinion, not an endorsement.

                Viewing users as customers excludes a huge number of people, not just those too poor to have a computer/smartphone, but also people with disabilities who are simply too few to economically cater to. That’s why governments need to step in with laws and regulations to ensure equal access.

                1. 11

                  I think governments often think about this kind of accessibility requirement exactly the wrong way around. Ten or so years ago, I looked at the costs that were being passed on to businesses and community groups to make buildings wheelchair accessible. It was significantly less than the cost of buying everyone with limited mobility a motorised wheelchair capable of climbing stairs, even including the fact that those were barely out of prototype and had a cost that reflected the need to recoup the R&D investment. If the money spent on wheelchair ramps had been invested in a mix of R&D and purchasing of external prosthetics, we would have spent the same amount and the folks currently in wheelchairs would be fighting crime in their robot exoskeletons. Well, maybe not the last bit.

                  Similarly, the wholesale cost of a device capable of acting as a U2F device is <$5. The wholesale cost of a smartphone capable of running banking apps is around $20-30 in bulk. The cost for a government to provide one to everyone in a country is likely to be less than the cost of making sure that government services are accessible by people without such a device, let alone the cost to all businesses wanting to operate in the country.

                  TL;DR: Raising people above the poverty line is often cheaper than ensuring that things are usable by people below it.

                  1. 12

                    Wheelchair ramps help others than those in wheelchairs - people pushing prams/strollers, movers, emergency responders, people using Zimmer frames… as the population ages (in developed countries) they will only become more relevant.

                    That said, I fully support the development of powered exoskeletons to all who need or want them.

                    1. 8

                      The biggest and most expensive problem around wheelchairs is not ramps; it’s turning space and door sizes. A wheelchair is wider (especially the battery-driven ones you are referring to) and needs more space to turn around than a standing human. Older buildings often have pathways and doors that are too narrow.

                      Second, all the wheelchairs and exoskeletons here would need to be custom, making them inappropriate for short-term disability or smaller issues, like walking problems that only need crutches. All that while changing the building (or building it right in the first place) is as close to a one-size-fits-all solution as it gets.

                      1. 5

                        I would love it if the government would buy me a robo-stroller, but until then, I would settle for consistent curb cuts on the sidewalks near my house. At this point, I know where the curb cuts are and are not, but it’s a pain to have to know which streets I can or can’t go down easily.

                      2. 7

                        That’s a good point, though I think there are other, non-monetary concerns that may need to be taken into account as well. Taking smartphones for example, even if given out free by the government, some people might not be real keen on being effectively forced to own a device that reports their every move to who-knows-how-many advertisers, data brokers, etc. Sure, ideally we’d solve that problem with some appropriate regulations too, but that’s of course its own whole giant can of worms…

                        1. 2

                          The US government will already buy a low cost cellphone for you. One showed up at my house due to some mistake in shipping address. I tried to send it back, but couldn’t figure out how. It was an ancient Android phone that couldn’t do modern TLS, so it was basically only usable for calls and texting.

                          1. 2

                            Jokes aside - it is basically a requirement in a certain country I am from; if you get infected by Covid you get processed by the system, and outdoor cameras are monitored so you don’t go outside. But to be completely sure you’re staying at home during recovery, it is mandatory to install a government-issued application on your cellphone/tablet that tracks your movement. Officials also check up on you several times per day at random hours with video calls in said app, to verify your location.

                            If you fail to respond in time, or geolocation shows you left your apartment, you’ll automatically get a hefty fine.

                            Now, you say, it is possible to just tell them “I don’t own a smartphone” - then you’ll get a cheap but working government-issued Android tablet, or at least you’re supposed to; as with lots of other things, “the severity of the laws is compensated by their optionality”, so quite often devices don’t get delivered at all.

                            By law you cannot decline the device - you’ll get fined, or they promise to bring you to a hospital as a mandatory measure.

                        2. 7

                          Thank you very much for this comment. I live in a country where “it is expected” to have a smartphone. The government is making everything into apps which are only available on Apple Appstore or Google Play. Since I am on social welfare I cannot afford a new smartphone every 3-5 years and old ones are not supported either by the appstores or by the apps themselves.

                          I have a feeling of being pushed out by society due to my lack of money. Thus I can relate to people in similar positions (larger families with low incomes etc.).

                          I would really like more people to consider that not everybody has access to new smartphones or even a computer at home.

                          I believe the Internet should be for everyone not just people who are doing well.

                      3. 6

                        If you don’t own a smartphone, why would you own a computer? Computers are optional supplements to phones. Phones are the essential technology. Yes, there are weirdos like us who may choose to own a computer but not a smartphone for ideological reasons, but that’s a deliberate choice, not an economic one.

                        1. 7

                          In the U.S., there are public libraries where one can use a computer. In China, cheap internet cafés are common. If computer-providing places like these are available to non-smartphone-users, that could justify services building support for computer users.

                          1. 1

                            In my experience growing up in a low-income part of the US, most people there now only have smartphones. Folks there mostly use laptops in office or school settings, which remains a difficulty for those going to college or getting office jobs. It was the same when I was growing up there, except there were no smartphones, so folks had flip phones. Parents often try to save up to buy their children nice smartphones.

                            I can’t say this is true across the US, but it is at least true of where I grew up.

                            1. 1

                              That’s a good point, although it’s my understanding that in China you need some kind of government ID to log into the computers. Seems like the government ID could be made to work as a FIDO key.

                              Part of the reason a lot of people don’t have a computer nowadays is that if you really, really need to use one to do something, you can go to the library to do it. I wonder though if the library will need to start offering smartphone loans next.

                            2. 5

                              How are phones the “essential technology”? A flip phone is 100% acceptable these days if you just have a computer. Nothing about a smartphone is required in order to exist, let alone survive.

                              A computer, on the other hand (which a smartphone is a poor approximation of), is borderline required to access crucial services outside of phone calls and direct visits. The “essential technology” is not a smartphone.

                              1. 2

                                There’s very little I can only do on a computer (outside work) that I can’t do on a phone. IRC and image editing, basically. Also editing blog posts because I do that in the shell.

                                I am comfortable travelling to foreign lands with only a phone, and relying on it for maps, calls, hotel reservations, reading books, listening to music…

                                1. 1

                                  Flip phones were all phased out years ago. I have friends who deliberately use flip phones, and it is very difficult to do unless you are ideologically committed to it.

                                2. 3

                                  I’m curious about your region/job/living situation, and what about it is making phones “the essential technology”? I barely need a phone to begin with, let alone a smartphone. To me it’s really only good as car navigation and an alarm clock.

                                  1. 1

                                    People need other people to live. Most other people communicate via phone.

                                    1. 1

                                      It’s hardly “via phone” if it’s Signal/Telegram/FB/WhatsApp or some other flavor of the week instant messenger. You can communicate with them on your PC just as well.

                                      1. 4

                                        I mean I guess so? I’m describing how low income people in the US actually live, not judging whether it makes sense. Maybe they should all buy used Chromebooks and leech Wi-Fi from coffee shops. But they don’t. They have cheap smartphones and prepaid cards.

                                        1. 2

                                          You can not connect to WhatsApp via the web interface without a smartphone running the WhatsApp app, and Signal (which does not have this limitation) requires a smartphone as the primary key with the desktop app only acting as a subkey. I think Telegram also requires a smartphone app for initial provisioning.

                                          I think an Android Emulator might be enough, if you can manually relay the SMS code from a flip phone, maybe.

                                    2. 2

                                        Your reasoning is logical if you’re presented with a budget and asked what to buy. But purchasing does not happen in a vacuum. You may inherit a laptop, borrow a laptop, no longer be able to afford a month-to-month cell phone bill, etc. Laptops also have a much longer life cycle than phones.

                                      1. 4

                                        I’m not arguing that this is good, bad, or whatever. It’s just a fact that in the USA today if you are a low income person, you have a smartphone and not a personal computer.

                                  1. 6

                                    100 versions later

                                    This seems to be playing a little loose with the facts. At some point Firefox changed their versioning system to match Chrome, I assume so that it wouldn’t sound like Firefox was older or behind Chrome in development. Firefox did not literally travel from 1.0 to 100. So it probably either has fewer or more than 100 versions, depending on how you count. UPDATE: OK I was wrong, and that was sloppy of me, I should have actually checked instead of relying on my flawed memory. There are in fact at least 100 versions of Firefox. Seems like there are probably more than 100, but it’s not misleading to say that there are 100 versions if there are more than 100.

                                    That said, this looks like a great release with useful features. Caption for picture-in-picture video seems helpful, and I’m intrigued by “Users can now choose preferred color schemes for websites.” On Android, they finally have HTTPS-only mode, so I can ditch the HTTPS Everywhere extension.

                                    1. 6

                                      Wikipedia lists 100 major versions from 1 to 100.

                                      https://en.m.wikipedia.org/wiki/Firefox_version_history

                                      What did happen is that Mozilla adopted a 4 week release cycle in 2019 while Chrome was on a 6 week cycle until Q3 2021.

                                      1. 4

                                        They didn’t change their version scheme, they increased their release cadence.

                                        1. 7

                                          They didn’t change their version scheme

                                          Oh, but they did. In the early days they used a more “traditional” way of using the second number, so we had 1.5, and 3.5, and 3.6. After 5.0 (if I’m reading Wikipedia correctly) they switched to increasing the major version for every release regardless of its perceived significance. So there were in fact more than 100 Firefox releases.

                                          https://en.wikipedia.org/wiki/Firefox_early_version_history

                                          1. 3

                                            I kinda dislike this “bump major version” every release scheme, since it robs me of the ability to visually determine what may have really changed. For example, v2.5 to v2.6 is a “safe” upgrade, while v2.5 to v3.0 potentially has breaking changes. Now moving from v99 to v100 to v101, well, gotta carefully read release notes every single time.

                                            Oracle did something similar with JDK. We were on JDK 6 for several years, then 7 and then 8, until they ingested steroids and now we are on JDK 18! :-) :-)

                                            1. 7

                                              Sure for libraries, languages and APIs, but Firefox is an application. What is a breaking change in an application?

                                              1. 4

                                                I got really bummed when Chromium dropped the ability to operate over X forwarding in SSH a few years ago, back before I ditched Chromium.

                                                1. 1

                                                  Changing the user interface (e.g. keyboard shortcuts) in backwards-incompatible ways, for one.

                                                  And while it’s true that “Firefox is an application”, it’s also effectively a library with an API that’s used by numerous extensions, which has also been broken by new releases sometimes.

                                                  1. 1

                                                    My take is that it is the APIs that should be versioned because applications may expose multiple APIs that change at different rates and the version numbers are typically of interest to the API consumers, but not to human users.

                                                    I don’t think UI changes should be versioned. Just seems like a way to generate arguments.

                                                2. 6

                                                  It doesn’t apply to consumer software like Firefox, really. It’s not a library for which you care if it’s compatible. I don’t think version numbers even matter for consumer software these days.

                                                  1. 5

                                                    Every release contains important security updates. Can’t really skip a version.

                                                    1. 1

                                                      Those are all backported to the ESR release, right? I’ve just noticed that my distro packages that; perhaps I should switch to it as a way to get the security fixes without the constant stream of CADT UI “improvements”…

                                                      1. 2

                                                        Most. Not all, because different features and such. You can compare the security advisories.

                                                  2. 1

                                                    Oh, yeah, I guess that’s right. I was focused in on when they changed the release cycle and didn’t think about changes earlier than that. Thank you.

                                              1. 22

                                                For the uninitiated: Style insensitivity is a unique feature of the Nim programming language that allows a code base to follow a consistent naming style, even if its dependencies use a different style. It works by comparing identifiers in a case-insensitive way, except for the first character, and ignoring underscores.

                                                Another advantage of style insensitivity is that identifiers such as itemId, itemID or item_ID can never refer to different things, which prevents certain kinds of bad code. An exception is made for the first letter to allow the common convention of having a value foo of type Foo.

                                                There’s a common misconception that this feature causes Nim programmers to mix different styles in a single codebase (which, as mentioned, is precisely the opposite of what it does), and it gets brought up every time Nim is mentioned on Lobsters/HackerNews/etc, diverting the discussion from more valuable topics.
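
                                                 To make the rule concrete, here is a rough sketch in Rust of the comparison described above (normalize and nim_style_eq are made-up helper names for illustration, not Nim’s actual implementation):

                                                 // Sketch of the described rule: the first character is compared exactly,
                                                 // the rest case-insensitively, with underscores ignored.
                                                 fn normalize(ident: &str) -> String {
                                                     let mut chars = ident.chars();
                                                     let first: String = chars.next().map(|c| c.to_string()).unwrap_or_default();
                                                     let rest: String = chars
                                                         .filter(|c| *c != '_')
                                                         .flat_map(|c| c.to_lowercase())
                                                         .collect();
                                                     first + &rest
                                                 }

                                                 fn nim_style_eq(a: &str, b: &str) -> bool {
                                                     normalize(a) == normalize(b)
                                                 }

                                                 fn main() {
                                                     assert!(nim_style_eq("itemId", "item_id")); // same identifier
                                                     assert!(nim_style_eq("itemId", "itemID"));  // same identifier
                                                     assert!(!nim_style_eq("foo", "Foo"));       // first letter stays significant
                                                 }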

                                                1. 3

                                                  There’s a common misconception that this feature causes Nim programmers to mix different styles in a single codebase (which, as mentioned, is precisely the opposite of what it does)

                                                     But… isn’t that exactly what the feature does? If my coding habits lead me to write itemId and a coworker’s habits lead them to write item_id, style insensitivity makes it likely that I would accidentally use a different style than my coworker for the same variable in the same codebase, right? Most languages would make this impossible by treating item_id as a different name from itemId.

                                                  How is this a misconception?

                                                  To be clear, I’m not saying it’s a huge deal or that it warrants all the attention it’s getting (that’s a different discussion), but since you brought it up…

                                                  1. 3

                                                    Thanks for providing some context. Is this a thing that gets applied by default any time you use any library, or a feature you can specifically invoke at the point during which the library is imported?

                                                    The former seems … real bad. The latter seems … kinda neat? but a bit silly.

                                                    1. 6

                                                      Currently it’s always on, and there’s an opt-in compile flag --styleCheck:error that makes it impossible to use an identifier inconsistently within the same file. The linked issue discusses if and how this behavior should be changed in Nim 2.

                                                      Personally, I wouldn’t mind if it was removed, as long as:

                                                      • --styleCheck:error was on by default
                                                      • there was a mechanism to restyle identifiers when importing a library.
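
                                                       For reference, the check mentioned above can also be requested explicitly at compile time; a hypothetical invocation (the file name is made up) would look like:

                                                       nim c --styleCheck:error myprog.nim
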
                                                      1. 2

                                                         I agree. People outside the Nim community can add real value to this discussion, since otherwise it is just speculation what they really think, based on a few loud complainers.

                                                        1. 4

                                                           I’m someone who looked at Nim, really liked it, then saw the “style insensitivity” and thought “this isn’t for me”. (Not coincidentally, I’ve been involved in a major, CEOs-getting-involved fiasco that was ultimately due to SQL case-insensitivity.)

                                                          Nim occupies a nice space - compiled but relatively “high level” - with only really Go as a competitor (zig/rust/c++ all seem a little too low level.) I personally recoil at the idea of “style insensitivity”, but hopefully in a friendly, lobste.rs manner.

                                                          1. 4

                                                             I’ve been involved in a major, CEOs-getting-involved fiasco that was ultimately due to SQL case-insensitivity.

                                                            You can’t just say this and leave us hanging 😆 tell us the story! How did that cause a fiasco?

                                                            1. 4

                                                              Our software wouldn’t start for one customer (a large bank). The problem was unreproducible and had been going on for weeks. The customer was understandably very unhappy.

                                                              The ultimate cause was a “does the database version match the code” check. The database had a typical version table that looked like:

                                                              CREATE TABLE DB_STATE (VERSION INT, ...);
                                                              

                                                              Which was checked at startup using something like select version from db_state. This was failing for the customer because in Turkish, the upper case of “version” is “VERSİON” (look closely). Case-insensitivity is language-specific and the customer had installed the Turkish version of SQL Server.

                                                              Some java to demonstrate:

                                                              public class Turkish {
                                                                public static void main(String[] args) {
                                                                  System.out.println("version".toUpperCase(java.util.Locale.forLanguageTag("tr")));
                                                                }
                                                              }
                                                              

                                                               If you look at the Java documentation for toUpperCase, they specifically mention Turkish - others have been bitten by this same issue, I’m sure.

                                                               Which makes me wonder - how does Nim behave if compiled on a machine in Turkey, or Germany?

                                                              1. 5

                                                                This is making me wonder if anyone has ever used this as a stack-smashing attack. Find some C code that uppercases the input in place and send a bunch of “ß”s. Did the programmer ensure the buffer is big enough?

                                                                1. 3

                                                                  I’m pretty sure Nim’s style insensitivity is not locale-specific. That would be very dumb.

                                                              2. 2

                                                                Seems plenty friendly to me. Programmers can get awfully passionate about style guides/identifier conventions.

                                                                 I think it tracks individual, personal history a lot - what confusions someone had, what editor/tool support they had for “what meta-kind of token is this?” questions, and so on. In extreme cases it can lead to, e.g., Hungarian-notation-like moves. It can even be influenced by font choices.

                                                                 Vaguely related to case insensitivity: early on, Nim only used '\l' to denote a newline character. I’m not sure why it was not '\n' from the start, but because a lower case “l” (ell) and the numeral “1” often look so similar, '\L' was the dominant suggestion, and all special character literals were made case insensitive. (Well, one could argue it was just the general insensitivity that made them insensitive…)

                                                                1. 2

                                                                   Personally I’d say this compares unfavorably to Go, where style arguments are solved by the language shipping a single blessed formatting style.

                                                                   Confusions/disagreements over formatting style are - imo - a waste of the team’s engineering time, so I see the Go approach as inherently better.

                                                                  1. 2

                                                                    Go doesn’t enforce a style for identifiers. Try it online!

                                                                    1. 2

                                                                      Thanks, I hate it.

                                                                      Fair point to nim though!

                                                                2. 2

                                                                   There’s also Crystal, but it is failing to reach critical mass, in my opinion.

                                                                   I think Crystal did a great job providing the things people usually want upfront. I want a quick and direct way to make an HTTP request. I want to extract a value from a JSON string with no fuss. I want to expose functionality via a CLI or a web interface with minimal effort. Crystal got these right.

                                                                  I agree that above-mentioned languages are too low level.

                                                                  1. 1

                                                                    Not sure about the others, but I think exposing functionality via CLI is pretty easy in Nim. cligen is not in the Nim stdlib, though.

                                                                    1. 1

                                                                       I was not comparing to Nim directly, just giving examples of the kinds of things I believe are the strongest drivers of a language’s success.

                                                                       But one example of something I found lacking in Nim was concurrency primitives. Crystal makes it relatively simple and direct, with a fairly simple and familiar fiber API.

                                                                      A quick way to spin up an HTTP service was another one. It even had support for websockets.

                                                            2. 4

                                                              It is always on - even for keywords like f_Or in a loop. I was trying to perhaps help guide the group towards a usage like you articulate.

                                                              EDIT: The main use case cited is always “use a library with an alien convention according to your own convention”. So, being more precise and explicit about this as an import-time construct would seem to be less feather ruffling (IMO).

                                                            3. 3

                                                              Just for the record - the Z shell (Zsh) had style insensitivity for its setopt builtin waaay back in the very early 90s. They did not make the first letter sensitive, though. :-)

                                                              As this seems to be a very divisive issue and part of what is divisive is knowing how those outside the community (who do not love/have not made peace with the feature) feel, it might be helpful if Lobster people could weigh in.

                                                              1. 2

                                                                As this seems to be a very divisive issue and part of what is divisive is knowing how those outside the community (who do not love/have not made peace with the feature) feel, it might be helpful if Lobster people could weigh in.

                                                                  I looked into Nim and was at least partially dissuaded by style insensitivity. I don’t think it’s fatal per se, but it did hit me very early in my evaluation. I would liken it to the dot calls in Elixir: something that feels wrong and makes you question other decisions in the language. That said, Elixir is a fabulous language and I powered through. I imagine others feel similarly.

                                                                1. 2

                                                                  What specifically do you not like about style insensitivity?

                                                                  1. 1

                                                                    Here’s the thing: I haven’t used style insensitivity so I can’t really say I dislike it. However, it struck me as unnecessarily inconsistent. I don’t care about snake case or camel case. I just want code to be consistent. Of course, code I write can be consistent with style insensitivity, but code I read probably won’t be.

                                                                    Additionally, I imagined that working in a team could have issues: repos use the original author’s preferred styling. Of course, having a clear style guide helps, but in small teams sometimes people are intractable and resistant to change. In a way, it triggers a feeling of exasperation: a memory of all the stupid little arguments you have with other developers.

                                                                    So here I am kicking the tires on a new exciting language and I am already thinking about arguing with people. Kind of takes the wind out of your sails. It may be a great feature, but I imagine it’s a barrier to adoption for some neurotic types like myself. (Maybe that’s a blessing?)

                                                                    1. 1

                                                                      You have it the other way around. With style insensitivity, code is much more likely to follow a consistent style — because it can’t be forced into inconsistency by using libraries from different authors.

                                                                      1. 1

                                                                        I can see how code I write is consistent, but code I read is going to be more inconsistent. If it wasn’t then why would we need style insensitivity in the first place?

                                                                        1. 1

                                                                          I can see how code I write is consistent, but code I read is going to be more inconsistent.

                                                                          Can you show me a serious Nim project that uses an inconsistent style?

                                                                          If it wasn’t then why would we need style insensitivity in the first place?

                                                                          Because libraries you’re using may be written in different styles.

                                                                          1. 1

                                                                            I’m saying it’s inconsistent across projects. Not within projects. Sometimes you have to read other people’s code. Style insensitivity allows/encourages people to pick the nondefault style.

                                                                            Ultimately, I’m not in the nim ecosystem. I posted my comment about why style insensitivity made me less interested in nim. I can tell you that this is the exact sort of argument I was looking to avoid so you have proven my initial concerns correct.

                                                                            1. 1

                                                                              I don’t see what the problem is with reading code written in a different style, as long as it’s consistent. And in practice, most Nim projects follow NEP1.

                                                              2. 2

                                                                Doesn’t JRuby have something similar around Java native methods?

                                                                1. 2

                                                                  I think it’s an interesting comparison, but it’s important to keep in mind that a language based around message passing is fundamentally different from what’s going on here where the compiler itself is collapsing a bunch of different identifiers during compile-time. When you call a method in Ruby, you’re not really supposed to care how it’s resolved, but when you make a call in a language like Nim, you expect it to be resolved at compile-time.

                                                                  1. 3

                                                                     I’d disagree there. The mechanism in JRuby is that the method is made available under multiple names to the application after it’s loaded. That’s not extremely different from what Nim does, unless we go down to the level of saying we can’t compare languages with different runtime and module-loading models.

                                                                    https://github.com/jruby/jruby/wiki/CallingJavaFromJRuby

                                                                    1. 2

                                                                      I guess what I meant was that even if the implementation works the same way, Rubyists fundamentally have different expectations around what happens when you call foo.bar(); they’ve already given up on greppability for other reasons.

                                                              1. 5

                                                                As always, a nice and thorough report. Good work @aphyr!

                                                                Is anybody using Redpanda instead of Kafka at work?

                                                                1. 5

                                                                   No, but I’ve definitely been tempted, at least for low-risk/dev environments - I just haven’t pulled the trigger yet. I’m waiting to give it a good spin first. I’ve been looking for something like this post, though.

                                                                   FWIW I’ve been reluctant mostly because having new folks working with Kafka is tricky enough without additional “Kafka compatibility” issues popping up. Similar to the reasoning that had us decide to do Kafka Streams in Java instead of Scala or Clojure. I’d file it as “nice, but will break the innovation budget”.

                                                                  1. 3

                                                                    This may be the first time I hear the term: “innovation budget”. Astonishingly fitting.

                                                                    1. 6

                                                                       Sidenote if this topic is interesting to you: it was a principle in Rust’s development under the name of “strangeness budget”.

                                                                      https://steveklabnik.com/writing/the-language-strangeness-budget

                                                                      1. 6

                                                                        I believe it was popularized by this blog post from 2015: Choose Boring Technology. There it was conceptualized as innovation “tokens” like in friendlysock’s comment earlier.

                                                                        1. 3

                                                                          The concept of an “innovation budget” is something that is super important for all engineers, especially folks at a startup.

                                                                          Every team has some finite number of innovation tokens, which may or may not replenish over time. If you spend all of your innovation tokens on, say, building your webserver in Rust then you really can’t afford to do more exotic things like microservices or blue/green deploys or whatever else.

                                                                          Similarly, the model goes, if you pick a really boring webstack and ops stuff (say, modern-day Rails on Heroku), then you can spend those innovation tokens on more interesting problems like integer programming for logistics or something. The business version of this would be deciding to support multiple vendors or go with a new payment processor instead of just trusting Stripe or Braintree or whoever.

                                                                          My extension to the model is that, in a startup, you can exchange innovation tokens for efficiency tokens or cost coupons…if you build your stack in bespoke Julia it’s a lot harder to hire (and a lot harder to backfill departures!), whereas if you go with python or javascript then you can easily find interns or bodyshops to round out the workforce.

                                                                          One of the great open questions to me about the model is: how does an org deliberately replenish innovation tokens?

                                                                      2. 2

                                                                        Is anybody using Redpanda instead of Kafka at work?

                                                                        I’d run a PoC at work (and even found a minor bug). In my use-case the performance was exactly the same. In the end this did not move forward as I switched teams inside the organization and the old team went ahead with a managed service.

                                                                      1. 12

                                                                        I think the Unicode consortium made a huge mistake giving in to adding emojis to Unicode. It’s a bottomless pit, very politically charged and definitely ambiguous (compare for example the different emoji-styles across operating systems/fonts).

                                                                        It severely complicates most of the Unicode algorithms (grapheme cluster detection, word/sentence/line-segmentation, etc.) and, compared to dead and alive languages, feels very short-lived, like a fashion.

                                                                        How will emojis be seen in 50 years? I can already feel the second-hand-embarassment.

                                                                        1. 15

                                                                          It looks like people were already using emoji, and Unicode had to add them for compatibility. https://unicode.org/emoji/principles.html

                                                                          1. 12

                                                                            Every thread about emoji has a “Unicode shouldn’t have added them” comment (or several), and I feel like I then always step in to remind those commenters that basically every single chat/message system humans have built in the internet era has reinvented emoticons in some form or another, whether purely textual (“:-)” and “:/“ and friends) or custom graphics, or a mix of text abbreviations that get replaced by graphics.

                                                                            This suggests that they are a non-negotiable part of how humans conduct written communications in this era. Which means Unicode must find a way to capture them, by the nature of Unicode itself.

                                                                            1. 4

                                                                              This suggests that they are a non-negotiable part of how humans conduct written communications in this era. Which means Unicode must find a way to capture them, by the nature of Unicode itself.

                                                                              You might as well use the same argument to claim that Unicode should capture all words, too.

                                                                              1. 3

                                                                                Doesn’t it try? Morally, is there any difference between a code sequence of letters representing a word, and a code sequence of letters and combining characters that come together to create a single glyph?

                                                                              2. 1

                                                                                This is solved well with ligatures at the font level.

                                                                                Solving it at the font level has the additional benefit of not blocking the addition of new emoji on a standards body, as well as allowing graceful degradation to character sequences that anyone, including those on older software, can view.

                                                                                1. 8

                                                                                  Ligatures can’t and don’t solve all the traditional emoticons, let alone emoji.

                                                                                  Emoji are a part of written communication, no matter how much someone might personally dislike them, and as such belong in Unicode.

                                                                                  1. 1

                                                                                    Ligatures can’t and don’t solve all the traditional emoticons, let alone emoji.

                                                                                     Why not? This approach is more or less used for flags, where flag emoji are – for political reasons, like ‘TW’ – ligatures of country codes in a special Unicode range. If you happen to put ‘Flag{T}’ beside ‘Flag{W}’, you may get the letters ‘TW’, or you may get a flag that enrages China, depending on your font.

                                                                                     If you want to keep ASCII ‘bar:(foo)’ from being interpreted as a smiley emoji, maybe Unicode could standardize non-rendering ‘emoji brackets’ as a way of hinting to a font system that it could render a sequence of characters as an emoji ligature.

                                                                                    There’s no need to restrict emoji to the slow pace of the unicode consortium, when dropping in a new font will get you the new hotness, especially since using text sequences will render legibly for everyone not using that font.

                                                                                    This is win/win. It makes things more usable for those that dislike emoji, and it makes more emoji available to those that like emoji.
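
                                                                                     To make the flag mechanism concrete, here is a small Rust sketch using the Regional Indicator code points; whether it renders as a single flag glyph or as two letter-like symbols is entirely up to the font:

                                                                                     // A "flag emoji" is not one code point: it is two Regional Indicator symbols
                                                                                     // that a font may (or may not) ligate into a flag glyph.
                                                                                     fn main() {
                                                                                         let t = '\u{1F1F9}'; // REGIONAL INDICATOR SYMBOL LETTER T
                                                                                         let w = '\u{1F1FC}'; // REGIONAL INDICATOR SYMBOL LETTER W
                                                                                         let sequence = format!("{}{}", t, w);
                                                                                         println!("{}", sequence); // flag or plain letters, depending on the font
                                                                                     }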

                                                                                    1. 5

                                                                                      Because fonts cannot change emoticons into images? They have different meaning, so font ligature processing, which is essentially replaceAll(characters/glyphs/whatever, graphic), does not work.

                                                                                       No one can adopt a system font that magically turns one set of characters into another. Because it can’t be adopted as the system font, no apps get emoji. A person can’t simply change the default, for the same reason the system couldn’t: you made ligatures that potentially change the meaning of bytes.

                                                                                       As far as a font is concerned, there is no difference between :) in “see you :)” and in “(: I’ve seen this comment format somewhere :)”, but your ligature “solution” makes the latter nonsense.

                                                                                       Emoji also include characters that have no equivalent emoticon, whether because of the number of characters required or the lack of color.

                                                                                      Now, you may not like emoji, but arguing “we didn’t need it before” is pretty weak sauce: we didn’t have it. The goal of text is to communicate, and it is clear that a vast proportion of all people alive use emojis in their communication. So computers should be able to facilitate that communication rather than requiring workarounds.

                                                                                      The use of semagrams in alphabetic languages is nothing new - even hieroglyphics used semagrams.

                                                                                      1. 2

                                                                                        Because fonts cannot change emoticons into images?

                                                                                        That’s… just untrue.

                                                                                        You ignored the entire paragraph where I pointed out that flags ALREADY work this way. Then, you ignored the second paragraph which addresses the problem you mentioned in the third paragraph, where something like an RTL marker could mark emoji. Then you invented me saying “we didn’t need it before”.

                                                                                        In fact, you seem to have ignored everything I wrote.

                                                                                        It would be nice if you responded to what I said, rather than what you imagined I said.

                                                                                      2. 3

                                                                                        The simple counterpoint to this is to imagine the Unicode Consortium declaring that all the writing systems and characters which ever will be needed have been invented already — anything new will just be a variant or a ligature of something existing!

                                                                                        That would be dangerously incorrect, and would not work at all.

                                                                                        So, look. I get that some people really really really really don’t like emoji and wish they didn’t exist. But they do exist and they are a perfectly valid form of written communication and they are not sufficiently captured by ligatures or other attempts to layer on top of ASCII emoticons, any more than an early-2000s forum would have been happy with just the ASCII forms. For decades we’ve been used to a richer set of these, and it is right and proper for Unicode to include them. Complaints about them, to me, feel like ranting that kids these days say ”lol” instead of typing out the fully-punctuated-and-capitalized sentence “That is funny!”

                                                                                        1. 1

                                                                                          and they are not sufficiently captured by ligatures or other attempts to layer on top of ASCII emoticons, any more than an early-2000s forum would have been happy with just the ASCII forms.

                                                                                          So far in this thread, I’ve seen this asserted – but I don’t see why flags are appropriately captured by ligatures, while emoticons are not. What is the technical difference that allows one to work while the other does not?

                                                                                          Again, I’m arguing that for emoji lovers, ligatures are BETTER and MORE FUNCTIONAL than encoding emoji individually into unicode. That this would be an improvement in availability and usability, not a regression.

                                                                                          We already have messaging programs ignoring the emoji range and adding their own custom :emoji: sequences because Unicode moves too slowly for them. We can wait years for unicode to standardize animated party parrots, or we can add :party_parrot: as text that gets interpreted by our application. Slack, and most others programs, chose the latter. Not to mention adding stickers – which arguably need the same position in Unicode as emoji.

                                                                                           Unicode’s charter is to standardize existing practice. Why not let Unicode standardize the way that emoji ranges are worked around in practice today, with standardized “emoji brackets” that allow clients to mark any text sequence as an emoji ligature? This matches the way things actually work, and fills the need for custom emoji (and stickers) that the Unicode consortium is not serving.

                                                                                          1. 1

                                                                                            I offer the following counter proposal: since you seem to think it’s at least possible and perhaps even easy, I challenge you to pick, say, 20 code points at random from among the emoji and come up with distinct, memorable ASCII sequences you think would suffice to be ligature’d into those emoji. I think that this will help you to understand why I don’t think “just ligature them” is going to work.

                                                                                    2. 2

                                                                                      This is solved well with ligatures at the font level.

                                                                                      Demonstrably false by the number of systems that screw up trying to auto detect smileys from colons and parentheses. 🙂 is unambiguous semantically; “:)” is not.

                                                                                      1. 3

                                                                                         I actually feel the opposite. “:)” is unambiguously a smiling face, and is mostly uniform in appearance across system UI fonts. The icon “🙂” is rendered differently depending on not only the operating system but also the specific app being used. The recipient of my message may see a completely different image than I intend for them to see. Even worse, the meaning and tone of my past emoji messages can completely change whenever Apple or Google or Telegram decides to redesign their emoji.

                                                                                        Too many apps have no way to disable auto-replacement of ascii faces.

                                                                                  2. 4

                                                                                    I think the Unicode consortium made a huge mistake giving in to adding emojis to Unicode. It’s a bottomless pit, very politically charged and definitely ambiguous (compare for example the different emoji-styles across operating systems/fonts).

                                                                                    This applies to other planes in Unicode, due to https://en.wikipedia.org/wiki/Han_unification

                                                                                    Also, any kind of character system is politically charged; an interesting read here is https://www.hastingsresearch.com/net/04-unicode-limitations.shtml (I do not agree with the points there, and history has proven the author wrong, but it’s a good specimen of political Unicode arguments pre-emoji).

                                                                                    1. 4

                                                                                      I was going to mock your post by pointing out all of the other stuff in Unicode which is “politically charged”, from Tibetan to Han unification to the Hangul do-over to that time that a single character was added just for Japan’s government. But this is a grand understatement of exactly how political and pervasive the Consortium’s work is. Peruse the list of versions of Unicode and you’ll see that we already have a “bottomless pit” of natural writing systems to catalogue.

                                                                                      I think that the most inaccurate part of your claim is that emoji are “like a fashion”. Ideograms are millennia old and have been continuously used for communicating mathematics.

                                                                                      1. 2

                                                                                        It severely complicates most of the Unicode algorithms (grapheme cluster detection, word/sentence/line-segmentation, etc.)

                                                                                        If there were no emojis in Unicode, but everything else remained, would any of these things really be simpler? The impression I get is there are corner cases across the languages Unicode covers for all of the complexity, independent of emoji; emoji just exposes them to westerners more.

                                                                                      1. 2

                                                                                        There’s so many things that excite me about this, so I’ll leave it at: this is such a nerdfest for me as a programming language fan.

                                                                                        1. 3

                                                                                          In Swift, this would be:

                                                                                          struct Test {
                                                                                              let `in`: String
                                                                                          }
                                                                                          
                                                                                          let a = Test(in: "asdf")
                                                                                          

                                                                                          I think the example doesn’t really show the point of r#. You could just as well change the name to _in or __in instead, and it would probably be more readable than r#in.

                                                                                          1. 3

                                                                                            As always, the RFC gives a lot of motivation. https://rust-lang.github.io/rfcs/2151-raw-identifiers.html

                                                                                            (E.g. the ability to name a function like a keyword, particularly useful for FFI use)
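
                                                                                            As a minimal sketch of that (the function is made up): `try` has been reserved as a keyword since the 2018 edition, so a plain `fn try` won’t parse, but a raw identifier keeps the name available.

                                                                                                // `try` is a reserved keyword in the 2018+ editions, so the raw
                                                                                                // identifier is the only way to keep this exact name.
                                                                                                fn r#try(times: u32) -> u32 {
                                                                                                    times + 1
                                                                                                }
                                                                                                
                                                                                                fn main() {
                                                                                                    // Call sites need the r# prefix as well.
                                                                                                    assert_eq!(r#try(2), 3);
                                                                                                }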

                                                                                            1. 2

                                                                                              But if you always call it using r#, you have essentially renamed the function. It would be acceptable if it was only at declaration or where disambiguation was otherwise needed, but here it seems to surface at every point of use.

                                                                                              1. 4

                                                                                                Imagine function:

                                                                                                #[no_mangle]
                                                                                                pub extern "C" fn r#match() {
                                                                                                }
                                                                                                

                                                                                                because a dynamic library needs to export this symbol. I agree in general: r# is not to be used in interfaces intended for humans.

                                                                                                1. 6

                                                                                                  Hm, I don’t think that’s what is happening here. For FFI purposes, we have a dedicated attribute, link_name

                                                                                                  https://doc.rust-lang.org/reference/items/external-blocks.html#the-link_name-attribute

                                                                                                  Unlike r#, it’s not restricted to valid Rust identifiers (i.e., it allows symbols that would never be valid in an identifier).

                                                                                                  My understanding is that 90% of the motivation for r# was the edition system and the desire to re-purpose existing idents as keywords. Hence, unlike Swift or Kotlin, Rust deliberately doesn’t support arbitrary strings as raw identifiers, only stuff which is lexically an ident ((_|XID_Start)XID_Continue*).
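
                                                                                                  A rough declaration-only sketch of link_name (the symbol names and signatures here are made up, and nothing is actually linked): the Rust-side name and the foreign symbol are fully decoupled, and the foreign name doesn’t have to be a Rust identifier at all.

                                                                                                      #![allow(dead_code)]
                                                                                                      use std::os::raw::{c_char, c_int};
                                                                                                      
                                                                                                      extern "C" {
                                                                                                          // The Rust-side name is c_match; the linked symbol is literally "match".
                                                                                                          #[link_name = "match"]
                                                                                                          fn c_match(pattern: *const c_char) -> c_int;
                                                                                                      
                                                                                                          // The foreign name can contain characters no Rust ident could.
                                                                                                          #[link_name = "db$lookup.v2"]
                                                                                                          fn db_lookup(key: *const c_char) -> c_int;
                                                                                                      }
                                                                                                      
                                                                                                      fn main() {}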

                                                                                            2. 3

                                                                                              The example uses debug serialisation (#[derive(Debug)]), which perhaps isn’t the best example of why it matters, but at least proves the point.

                                                                                              The name matters in serialisation, and this could be generated code. I’ve had this exact problem in two unrelated protocol generators that happened to generate C++, and got funny build errors when I tried to define messages with fields like delete and static.
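
                                                                                              A small illustration of why the name itself matters (the struct and fields here are hypothetical): the derived Debug output uses the field’s actual name, so renaming the field to _in would change what downstream tooling sees.

                                                                                                  #[derive(Debug)]
                                                                                                  struct Filter {
                                                                                                      r#in: Vec<String>, // "in" is a keyword, so the field needs r#
                                                                                                      limit: u32,
                                                                                                  }
                                                                                                  
                                                                                                  fn main() {
                                                                                                      let f = Filter { r#in: vec!["users".into()], limit: 10 };
                                                                                                      // Prints something like: Filter { in: ["users"], limit: 10 }
                                                                                                      // i.e. the keyword itself is the field name, without the r# prefix.
                                                                                                      println!("{:?}", f);
                                                                                                  }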

                                                                                              1. 1

                                                                                                OK, but that option hasn’t gone anywhere. You can still name it _in if you want. There’s plenty of niche cases where it would be nice to keep the identifier, mostly when interfacing with code you don’t control.

                                                                                                1. 1

                                                                                                  Yes, exactly. I found out about raw identifiers while checking a PR on the sqlparser crate, where the author used in for parsing one of the statements.

                                                                                              1. 4

                                                                                                I did some experimentation with Android, and I found out that even if you don’t install a custom ROM you can still increase the level of privacy.

                                                                                                The best way to get more privacy is to quit the social networks who feed on your data.

                                                                                                My rules of making my phone more secure and private:

                                                                                                • Never use a closed-source browser;
                                                                                                • Never use a browser controlled by an organisation that profits from your data;
                                                                                                • Use ad blockers and a VPN;
                                                                                                • Use Tor Browser or Orbot, or a browser with good protection against trackers;
                                                                                                • Clear everything every time you close the browser;
                                                                                                • Don’t use apps if you can use the website. Just add it to your home screen as a shortcut; some sites will behave exactly like an app;
                                                                                                • Don’t install apps from the Play Store if you can find alternatives on F-Droid;
                                                                                                • Avoid saving stuff on ☁️;
                                                                                                • Log off from Waze, and never use Maps;
                                                                                                • Install PCAPdroid to check which apps are calling home when they shouldn’t;
                                                                                                • Don’t use Google if you are logged into the browser with a Gmail account. In any case, prefer other search engines, and use more than one if possible: Peekier, DuckDuckGo, Ecosia, Mojeek, Brave, Marginalia, etc. The results are not that bad; sometimes they are better, because you don’t have all the commercial junk pushed first;
                                                                                                • Instead of Reddit, use Teddit;
                                                                                                • Use an Invidious instance instead of directly using the YouTube app;
                                                                                                • Install deedum and look at the Gemini content; it will remind you of the old internet if you are nostalgic about it.
                                                                                                1. 2

                                                                                                  Install deedum and look at the Gemini content; it will remind you of the old internet if you are nostalgic about it.

                                                                                                  What effect does this have on security or privacy?

                                                                                                  1. 1

                                                                                                    Looking at my comment now, I must admit that the last point doesn’t make a lot of sense in that context. So it has zero effect on security and privacy.

                                                                                                  2. 1

                                                                                                    Don’t use apps if you can use the website.

                                                                                                    I find the exact opposite to be better from both a privacy and a reliability standpoint. Typically there is not as much ad junk in apps, there are far fewer beacons, and many apps (at least the ones I use) let you cache stuff locally so your activity is harder to track (because there’s no network activity). Just my 2¢.

                                                                                                    1. 2

                                                                                                      From my perspective, the jury is out on this one. Websites tend to be well-inspected, and there are plugins that analyse which trackers are used. Common trackers are well known and you can do the inspection yourself, even with simple methods. Apps - not so much.

                                                                                                      1. 2

                                                                                                        I was in the same boat until I realized that some apps are making constant requests home, even if I don’t use them.

                                                                                                    1. 3

                                                                                                      I would like a convention-over-configuration framework for Rust.

                                                                                                      Also, I don’t have time to take a real look at it right now.

                                                                                                      But wow, that name is super cute.

                                                                                                      1. 4

                                                                                                        The blog post is published now and locked in time. Time always moves forward, so this article starts diverging immediately.

                                                                                                        Frameworks sometimes have upgrade guides going from version to version. Unless someone packages this blog post as a tool, with generators and CLIs, it’s copy-and-paste, which is where forking and bit rot start.

                                                                                                        Even generators have a tricky problem of revisiting vs heirloom configs. If I generate a project using FooFramework 1.0 and then follow the upgrade path from 2.0 -> 3.0 -> 4.0, what do I expect to happen? I generated on 1.0; now the world is on 4.0. Who tells me where I’m at with my mix of libraries and decisions? Deprecation warnings along the way? Even the most battle-hardened frameworks, docs and communities have bit-rotted comments in this situation.

                                                                                                        I would not want to be running “stuff from a blog post 0.0.0”. However, that doesn’t mean this blog post is bad. It’s got the steps and recommendations (which are valuable). Next step, make a script or a template for people to use. But now it’s heading towards a framework.

                                                                                                        It’s like script iteration:

                                                                                                        • Write down the steps, curate things, collect knowledge
                                                                                                        • Put those steps in scripts
                                                                                                        • Polish the scripts into programs
                                                                                                        • Apply software rigor etc

                                                                                                        I like all the Rails copycats. It’s good for everyone. Next, Redwood, Blitz, Remix are all very familiar. Would be great to see a low level language pull this off. My current thoughts are that it can’t be done for whatever reason or I’m wrong and it just hasn’t been done yet. Very hard to do tippy-top abstractions in assembly. Rocket and Buffalo (Go) are the nearest I’ve tried.

                                                                                                        Someone on twitter said something like

                                                                                                        rails for rust? be careful what you wish for

                                                                                                        I think if the idea is solid, implementations will converge.

                                                                                                        1. 5

                                                                                                          As an aside, there is one generator/template-builder/cookiecutter alternative that actually supports updates: https://copier.readthedocs.io/en/stable/updating/

                                                                                                          1. 1

                                                                                                            Very cool. Bookmarked.

                                                                                                            When updating, Copier will do its best to respect your project evolution by using the answers you provided when copied last time. However, sometimes it’s impossible for Copier to know what to do with a diff code hunk. In those cases, you will find *.rej files that contain the unresolved diffs. You should review those manually before committing.

                                                                                                            macOS does this on updates: it makes a directory on the desktop called Relocated Items, full of files it doesn’t know what to do with. Red Hat/CentOS uses .rpmsave (IIRC), Debian/Ubuntu uses .dpkg-old.

                                                                                                            Note that this is only a problem with code comments (not annotations) that don’t execute. There’s a way to verify you don’t have regressions on upgrades: testing on many levels. And deprecation warnings (annotations). But not docs and comments (without some hoops).

                                                                                                            I tried to update my PC BIOS recently (excuse the example) and tried to go from A -> C. It said my .zip was corrupted. I had to go from A -> B -> C, and it worked with the same file. It can’t even error cleanly: A has no idea what the file C from the future is, and B had a breaking change.

                                                                                                            1. 1

                                                                                                              Interesting. I have my own Cookiecutter clone. Maybe I should start dumping answers to interactive questions to the user cache directory so that it can be rerun easily.

                                                                                                          2. 3

                                                                                                          I tried building one years ago, but wasn’t able to find collaborators. It had a good name though: gerust.

                                                                                                          1. 1

                                                                                                            Title should have (2016)

                                                                                                            1. 3

                                                                                                              Most users of this site can suggest a title change or addition. If enough do, the change is automatically applied.

                                                                                                              1. 2

                                                                                                                Oh, neat, thanks - I think I missed the button showing up.

                                                                                                              2. 2

                                                                                                                Is the implication that the integer overflow situation in Rust has changed since then? If so, pointers to more up-to-date info would be cool.

                                                                                                                1. 1

                                                                                                                  No, it’s just common lobste.rs practice.

                                                                                                              1. 14

                                                                                                                Important context: Author is involved in the development of Rust.

                                                                                                                1. 28

                                                                                                                  Gankra (she/they, by the way, for the other replies) has also been involved in the development of Swift. For example, they have written about the different approaches that Rust and Swift take towards ABI (e.g. how Swift makes different tradeoffs than Rust to make sure it can stay ABI stable in the face of generics).

                                                                                                                  https://gankra.github.io/blah/swift-abi/

                                                                                                                  1. 19

                                                                                                                    He does have a point… an OS shouldn’t be bound to a language, so its ABI should be well defined.

                                                                                                                    And if you look at the very very long history of security flaws at the OS syscall design level…. Poettering has a point. An ABI designed as a security barrier is a very different thing to a C function signature.

                                                                                                                    1. 4

                                                                                                                      He does have a point… an OS shouldn’t be bound to a language, so its ABI should be well defined.

                                                                                                                      Yes. But there’s no changing existing systems. This is a whole-system re-architecture.

                                                                                                                      I understand Fuchsia was designed with this in mind from the start. They’re not the first to attempt this either.

                                                                                                                      1. 4

                                                                                                                        Umm.

                                                                                                                        https://www.freedesktop.org/wiki/Software/systemd/kdbus/

                                                                                                                        PS: I’ll add that D-Bus sucks. But it proves the point that something good could be retrofitted, and it has the one truly excellent redeeming feature…

                                                                                                                        It exists, it works.

                                                                                                                        1. 4

                                                                                                                          While I find these quite unrelated to the topic, I do absolutely welcome efforts to make Linux IPC suck less.

                                                                                                                    2. 1

                                                                                                                      Wanna spend a few words explaining why that is important context?

                                                                                                                    1. 2

                                                                                                                      Fun bit: mimalloc uses ASLR for randomness as a weak fallback if secure randomness isn’t available. https://github.com/microsoft/mimalloc/blob/15220c684331d1c486550d7a6b1736e0a1773816/src/random.c#L255
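
                                                                                                                      If I read the linked code right, the idea is roughly the following (a hand-wavy Rust sketch, not mimalloc’s actual code): addresses themselves vary per process under ASLR, so they can seed a weak fallback when the OS entropy source is unavailable.

                                                                                                                          // Weak fallback seed derived from ASLR-randomized addresses. Fine as a
                                                                                                                          // last resort for shuffling heap layout, useless for anything cryptographic.
                                                                                                                          fn weak_aslr_seed() -> u64 {
                                                                                                                              let stack_probe = 0u8;
                                                                                                                              let stack = &stack_probe as *const u8 as u64; // randomized stack base
                                                                                                                              let heap_probe = Box::new(0u8);
                                                                                                                              let heap = &*heap_probe as *const u8 as u64;  // randomized heap base
                                                                                                                              stack.rotate_left(17) ^ heap
                                                                                                                          }
                                                                                                                          
                                                                                                                          fn main() {
                                                                                                                              println!("{:#x}", weak_aslr_seed());
                                                                                                                          }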

                                                                                                                      1. 11

                                                                                                                        You could argue that $7/mo isn’t the most expensive thing in the world, but paying $7/mo for doing essentially nothing is quite expensive.

                                                                                                                        I spent a couple more hours getting the subdomain for the widget set up correctly with Route 53 and ACM, and wiring that up to the API Gateway custom domain configuration.

                                                                                                                        I’m not hating, we’ve all been there - I just hope all parties were happy with trading a couple hours of developer time for the same cost as hosting the thing for 10 years ;)

                                                                                                                        1. 2

                                                                                                                          It’s a bit weird there’s no “grandfather clause” where data gathered before the introduction of the GDPR is exempt from explicit consent. But I do remember that one fear when it was implemented in Sweden was that scofflaws and trolls would tie up government agencies from day 1 with more or less frivolous attempts to “get ’em” violating the GDPR.

                                                                                                                          1. 13

                                                                                                                            This is why there was such a long introductory period. You had a couple of years before the GDPR came into effect to contact everyone about whom you were storing PII and request consent.

                                                                                                                            I am not sure that this is actually a compliant implementation. You have to provide a mechanism for withdrawing consent, as well as for granting it, and individuals can require that you delete all PII associated with them. Holding their email address without the opt-in flag would put you in violation. If you have any mechanism for adding people that isn’t their direct submission of their email address, then you need to retain some hashes to prevent you from accidentally adding them back. I came across this case in the context of a college, which has a legitimate interest rationale for being able to keep the names of alumnae, but which needed to be able to ensure that ones that had opted out of having their contact information stored never had contact information added as the result of merging the alumnae list with some other public databases.
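
                                                                                                                            For the “retain some hashes” point, a rough sketch of what that can look like (this assumes the sha2 crate; the salt handling and normalisation rules are placeholders, not a compliance recipe): store only a keyed hash of the normalised address, so an opted-out contact can be recognised during a merge without keeping their PII.

                                                                                                                                use sha2::{Digest, Sha256};
                                                                                                                                
                                                                                                                                /// Derive a suppression-list key from an email address without storing
                                                                                                                                /// the address itself. `salt` is a per-deployment secret (placeholder).
                                                                                                                                fn suppression_key(email: &str, salt: &[u8]) -> String {
                                                                                                                                    let normalized = email.trim().to_lowercase();
                                                                                                                                    let mut hasher = Sha256::new();
                                                                                                                                    hasher.update(salt);
                                                                                                                                    hasher.update(normalized.as_bytes());
                                                                                                                                    format!("{:x}", hasher.finalize())
                                                                                                                                }
                                                                                                                                
                                                                                                                                fn main() {
                                                                                                                                    let salt = b"example-only-salt";
                                                                                                                                    let suppressed = suppression_key("Alice@Example.com ", salt);
                                                                                                                                    // On import/merge, drop any record whose key is already suppressed.
                                                                                                                                    assert_eq!(suppressed, suppression_key("alice@example.com", salt));
                                                                                                                                }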

                                                                                                                            1. 2

                                                                                                                              Thank you for raising this point and of course, you are correct. The current solution is by no means perfect. We’ve sort of solved the first half of the issue, getting the opt-in action logged somewhere and surfaced in the CRM. The opt-out flow is currently fairly rocky — the person could either navigate to the consent form again and rescind their consent, or get in touch with the company and ask for their consent to be revoked, or indeed to have their PII deleted.

                                                                                                                              Still, I’m surprised that the CRM software does not handle this. It would be such a value-add to have this functionality built-in, compliant and correct.

                                                                                                                              1. 2

                                                                                                                                Still, I’m surprised that the CRM software does not handle this. It would be such a value-add to have this functionality built-in, compliant and correct.

                                                                                                                                I’m a bit surprised at that too. I’m pretty sure it’s been a thing that the Dynamics 365 marketing stuff has been shouting about for a while. One of the advantages of SaaS-type CRM offerings (versus on-premises offerings) is that the seller, as well as the user, has responsibilities under the GDPR and so has a much bigger incentive to care about compliance.

                                                                                                                                By the way, did you check for Schrems II compliance? It looks as if your hosting provider is in the US, which may be a problem.

                                                                                                                            2. 9

                                                                                                                              Aside from what david mentions, for a lot of things, you had to get consent before the GDPR.

                                                                                                                              A particularly visible example is newsletters. You had to use and present opt-in before the GDPR. What the GDPR did in that area is introduce enforcement that has teeth and hurts.

                                                                                                                              1. 2

                                                                                                                                I don’t think this article has anything to do with the GDPR at all. Opt-in for e-mail marketing is regulated by the e-Privacy directive.

                                                                                                                                1. 2

                                                                                                                                  Thanks for this explanation, I was eternally grateful not to have to deal with this stuff when it was coming along the pipe.

                                                                                                                                2. 3

                                                                                                                                  That would’ve caused a lot of companies to start selling/acquiring their marketing data like crazy, in order to be “grandfathered in”

                                                                                                                                3. 1

                                                                                                                                  Haha — indeed. Everyone was happy with the deal in this case. :-)

                                                                                                                                1. 5

                                                                                                                                  I wonder which could happen faster: Rust becoming as easy to use as other modern languages while retaining the wild goodwill it’s earned, or Swift satisfying its memory ownership goals and earning goodwill outside its core user group of Apple platforms. Although I think Swift nailed async and has the ergonomics down, I worry about the mass appeal aspect.

                                                                                                                                  1. 24

                                                                                                                                    Swift so far has proven that it’s Apple’s language for Apple’s requirements, and everything else is an afterthought. Other platforms are second-class citizens. IBM’s attempt to use Swift has failed. Swift for Tensorflow looks mothballed too. It would take effort for Apple to fix these issues, and fix the perception, but I don’t think Apple is interested. Swift is already unusually open for an Apple project.

                                                                                                                                    1. 6

                                                                                                                                      And this is really no different than Apple’s Objective-C ecosystem before Swift. It was always possible to use Obj-C outside of Apple’s ecosystem (even before it was Apple’s ecosystem), but it was annoying and there were a lot of nice things that you just didn’t get to use in that case. In fairness to Apple, though, .NET has only recently become (reasonably) pleasant to use outside of Windows, and then only because Microsoft felt vulnerable.

                                                                                                                                    2. 5

                                                                                                                                      One problem with Swift on Linux is Swift’s LLVM fork. You can’t build Swift against released versions of LLVM, and hence not against a Linux distribution’s system LLVM. I really understand why it is the way it is, but they just need to bite the bullet. It took a year or so of effort for Rust to build against LLVM releases (I helped), but before that no Linux distributions packaged Rust. Until Swift’s LLVM fork problem is fixed, no Linux distributions will package Swift.

                                                                                                                                      1. 1

                                                                                                                                        Rust’s reasonably well-developed backend system and its ability to use multiple code generators (including multiple LLVM versions) is actually a strength that’s too often ignored.

                                                                                                                                        Sure, it needs an expert to use it, but it sees a ton of use out in the open if you know where to look.

                                                                                                                                      2. 3

                                                                                                                                      I think the biggest problem with Swift expanding is the half-assedness of non-Apple platform support, specifically Windows.

                                                                                                                                      1. -1

                                                                                                                                        In Poland we speak Polish, which is a really difficult language, it’s actually considered one of the top-10 most difficult languages to learn in the world.

                                                                                                                                        I get annoyed with this kind of comment.

                                                                                                                                        I bet if you ask the Chinese, English is the hardest language to learn in the world. Yet it’s the language most widely spoken around the world.

                                                                                                                                        I don’t know what kind of ethnic pride comes into play when people say “my language is difficult” or “my language is easy”. Usually they just mean the orthography matches the pronunciation closely or not. For example, most “English is difficult” jokes come from its spelling; almost no English speakers complain about phrasal verbs or even know about them, and that’s arguably a far less predictable aspect of spoken English.

                                                                                                                                        Polish is really easy to learn if you know, say, Russian. Polish is extremely easy to learn if you’re a Polish child. I don’t know by what kind of metric one could reasonably classify the 10 most difficult natural languages of the world.

                                                                                                                                        1. 5

                                                                                                                                          It’s not entirely out of thin air. What he’s referring to is the list put out by the Foreign Service Institute, which lists Polish as one of the harder languages (more granular lists exist, and they frequently do place it in the top 10). https://www.state.gov/foreign-language-training/

                                                                                                                                          Adding nuance to that would break the storytelling.

                                                                                                                                          1. 4

                                                                                                                                            The assumption of that list is you are starting from English. “Hardness” is relative to knowing some first language, not generalizable.

                                                                                                                                            1. 4

                                                                                                                                              There are also metrics like the average age at which a child knows their mother tongue proficiently; based on that metric, you can see that some languages are indeed very hard.

                                                                                                                                            2. 1

                                                                                                                                              What he’s referring to

                                                                                                                                              This seems like an over-application of the principle of charity. I don’t feel that charitable. I’m sure there’s some list out there that puts Polish in the top 10, but I put little faith in it.

                                                                                                                                          1. 33

                                                                                                                                            I find it so interesting thinking about the way that programming languages can reflect natural language with dialects.

                                                                                                                                            I feel like you could also think about writing with a particular ‘accent’ when you are writing code that isn’t idiomatic to the language you are using. For example, if you came from Go to Ruby you might start writing a bunch of for loops, which is valid but not typically how Ruby is written. You are speaking Ruby, with a Go accent.

                                                                                                                                            1. 14

                                                                                                                                              The world has seen so much bad Fortran code that the name of the language is now a synonym for bad coding. Many of us have never seen real Fortran code, but we know what coders mean when they say, “You can write Fortran in any language.”

                                                                                                                                              How Not to Write Fortran in Any Language

                                                                                                                                              1. 8

                                                                                                                                                This has been my exact experience with Ruby. I started using it within Amazon, where they utilize live pipeline templates (LPTs) that are packages built out with layers and layers of monkey-patched Ruby, and these in turn spit out generated build artifacts, e.g. Cloudformation templates.

                                                                                                                                                Now my current role makes use of a Rails monolith and even after some time, the differences are still jarring and I’m still trying to rid myself of the muscle memory from my AWS experience, speaking that LPT dialect of Ruby, as you put it.

                                                                                                                                                1. 4

                                                                                                                                                  Having maintained LPTs, Octane templates, etc., you’re exactly correct. Horrifying kludge, but we’re steadily replacing it with.. TypeScript, Java/Kotlin and Python. ;)

                                                                                                                                                2. 6

                                                                                                                                                  programming languages can reflect natural language

                                                                                                                                                  I think a lot about how Perl was designed by a linguist, and it shows.

                                                                                                                                                  1. 4

                                                                                                                                                    In a good way, or in a bad way?

                                                                                                                                                    1. 8

                                                                                                                                                      Yes.

                                                                                                                                                    (In all seriousness, I suspect a bad way for actually implementing it; does a mostly compatible independent implementation of Perl 5 even exist? But it is fascinating, less from a PLT angle and more from the “you can phrase it like that?” angle you see in NLP…)

                                                                                                                                                  2. 4

                                                                                                                                                    A lot of programming languages have constructs that “infect” the code base. Once they’re in use, you have to keep using them. Some I can think of:

                                                                                                                                                    • async in Rust (see the sketch after this list)
                                                                                                                                                    • Rc vs owned in Rust
                                                                                                                                                    • null vs Optional in Java
                                                                                                                                                    • FP vs OOP style in Scala
                                                                                                                                                    • Akka in Scala
                                                                                                                                                    • naming schemes (tends to be a problem in older languages like Python or C++)
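
                                                                                                                                                    As a small Rust sketch of the async bullet (the function names are made up): once one function in a call chain becomes async, every caller either turns async too or has to block on an executor at the boundary.

                                                                                                                                                        // Hypothetical async leaf; imagine a database or HTTP call here.
                                                                                                                                                        async fn fetch_user(id: u64) -> String {
                                                                                                                                                            format!("user-{id}")
                                                                                                                                                        }
                                                                                                                                                        
                                                                                                                                                        // Callers are pulled along: to await, they must be async themselves...
                                                                                                                                                        async fn greeting(id: u64) -> String {
                                                                                                                                                            format!("hello, {}", fetch_user(id).await)
                                                                                                                                                        }
                                                                                                                                                        
                                                                                                                                                        // ...or the spread stops at a boundary that blocks on an executor
                                                                                                                                                        // (futures::executor::block_on here; tokio etc. works the same way).
                                                                                                                                                        fn main() {
                                                                                                                                                            println!("{}", futures::executor::block_on(greeting(42)));
                                                                                                                                                        }
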
                                                                                                                                                    1. 5

                                                                                                                                                      I’m confused about the mention of Rc here? Rcs are owned. What I think is an infecting problem is that you can’t be generic over Arc vs Rc.

                                                                                                                                                  1. 4

                                                                                                                                                    The post sadly ignores the whole dimension of why the aggravation happened particularly around :focus-visible. The problem is that Safari has fallen behind on accessibility features, something that Apple has generally touted as one of the strong points of its products. It’s accessibility advocates becoming grumpy with their former champion.

                                                                                                                                                    I’m pretty sure most of the people this post rants about understand the mechanics of Igalia and Open Prioritization; they disagree with Apple slacking for so long and don’t appreciate it. Yes, every team has their priorities and that’s their right, as much as it is the right of their users and developers to become grumpy about that prioritisation.

                                                                                                                                                    1. 16

                                                                                                                                                      It kinda adds to the point that the post itself doesn’t highlight why this may be unsound and why MaybeUninit is needed. The problem is that given a value of a type that expresses constraints on its representation - say, the NonNull pointer - it is absolutely crucial that none of the forbidden representations is ever accidentally produced at any point in time. So let n: NonNull<T> = ... expresses that the right-hand side is never NULL. mem::zeroed or uninitialized memory may break that contract. The compiler relies on this for optimisations, for example for optimising Option<NonNull<T>>. The reason I said may above is that for some types (say u32) all bit patterns are legal representations, so this doesn’t matter and everything is actually fine.

                                                                                                                                                      It boils down to this difference: in Rust, expressing “A is NonNull” while it isn’t is immediate UB; in C, it is UB on use.
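
                                                                                                                                                      A small sketch of both halves of that point: the niche optimisation that leans on the “never null” promise, and the MaybeUninit route for holding the bytes before a valid value exists.

                                                                                                                                                          use std::mem::{self, MaybeUninit};
                                                                                                                                                          use std::ptr::NonNull;
                                                                                                                                                          
                                                                                                                                                          fn main() {
                                                                                                                                                              // The compiler uses NonNull's "never zero" bit pattern as the
                                                                                                                                                              // niche for None, so the Option costs nothing extra.
                                                                                                                                                              assert_eq!(
                                                                                                                                                                  mem::size_of::<Option<NonNull<u32>>>(),
                                                                                                                                                                  mem::size_of::<*mut u32>()
                                                                                                                                                              );
                                                                                                                                                          
                                                                                                                                                              // Immediate UB, even if the value is never read:
                                                                                                                                                              // let n: NonNull<u32> = unsafe { mem::zeroed() };
                                                                                                                                                          
                                                                                                                                                              // The sanctioned route: keep the bytes behind MaybeUninit until
                                                                                                                                                              // a valid value has actually been written.
                                                                                                                                                              let mut slot: MaybeUninit<NonNull<u32>> = MaybeUninit::uninit();
                                                                                                                                                              let mut x = 42u32;
                                                                                                                                                              slot.write(NonNull::from(&mut x));
                                                                                                                                                              let n = unsafe { slot.assume_init() };
                                                                                                                                                              assert_eq!(unsafe { *n.as_ptr() }, 42);
                                                                                                                                                          }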

                                                                                                                                                      One isn’t necessarily that much harder than the other - the base rule is easy, conceptually. But Rust up until now has not done a stellar job of teaching people the structure of unsafe, because it’s currently mostly focused on teaching people how to use safe Rust proper.

                                                                                                                                                      1. 3

                                                                                                                                                        (I don’t mean this in an accusing way.) I think this is a bit of having the cake and eating it too with Rust’s whole narrative on unsafe.

                                                                                                                                                        There’s an acknowledgement that Rust without unsafe is restrictive (see the whole “can’t write linked lists” thing). So what do people say? They say “you can use unsafe to get around this!”

                                                                                                                                                        I think the blog’s example (not being able to fully initialize an object at its definition site) is an extremely common code organization thing in basically any language, with its pitfalls etc. But in the hypothetical where you have your case that you have “proven to be right” or whatever, you can in a theoretical world say “OK make space in memory for this object. I’ll fill it out over several statements for code cleanliness or because I need this weird circular reference or whatever”

                                                                                                                                                        Of course you can probably write that stuff, but the post makes a really convincing argument that you have to negotiate with the tooling to get it to do that. So you’re futzing around with all of this to prove a guarantee that you, in your case, might not even need.

                                                                                                                                                        I imagine that having the rules apply even in unsafe makes a lot of stuff a lot easier, and I bet there’s some foundational reasoning here, but I feel like this requires you to kinda restructure your code even if you know the theoretical machine code the compiler could generate to do what you want safely.

                                                                                                                                                        Kinda aligns with the general “Rust is harder than C”, which… I feel is noncontroversial. I never need unsafe for the kinds of stuff I write, but now I’m going to be even more careful cuz it seems like it might be real easy to mess up.

                                                                                                                                                        (EDIT: “prove your code to be right” is of course a bit wrong in Rust, because of your point about Rust’s rules about initialized memory. More of a conceptual idea based on how you would imagine your code to be as machine code…)

                                                                                                                                                        1. 1

                                                                                                                                                          Yes, uninitialized memory is an extremely common situation, but Rust does have an extremely simple solution: use Option. MaybeUninit and unsafe are there for cases when Option isn’t good enough, but you can program in Rust for years without encountering such cases.

                                                                                                                                                          1. 4

                                                                                                                                                            As a data point, I’ve been programming in Rust for 7ish years, including 2 professionally, and I’ve never wanted to partially initialize an object, nor noticed anyone else wanting that. If you don’t have all the information required to construct a Whichamahoosit, why would you want an invalid half of a Whichamahoosit? Maybe I just always solve this by using multiple pieces of data, or by using the builder pattern, and in C people are used to doing things a different way?

                                                                                                                                                            1. 1

                                                                                                                                                              I feel like initialization and “tying the knot” between related data structures after some bookkeeping is an extremely common code-organization concern in every programming language. Fortunately we tend to have good patterns for dealing with it, and I think that Rust offers good data structures for many scenarios that help to avoid exposing this issue too much.

                                                                                                                                                              Option kinda sucks because you now have this idea that you need to handle partially-init’d data structures everywhere. Builder pattern definitely feels like the cleaner solution in general, it’s sometimes futzy but that’s fine, and you can encapsulate any trickiness with good API design.
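
                                                                                                                                                              A minimal sketch of that builder shape (the type name is borrowed from the comment above; everything else is made up): the Options live inside the builder and are resolved exactly once, in build(), so the finished value never exists half-initialized.

                                                                                                                                                                  struct Whichamahoosit {
                                                                                                                                                                      name: String,
                                                                                                                                                                      size: u32,
                                                                                                                                                                  }
                                                                                                                                                                  
                                                                                                                                                                  #[derive(Default)]
                                                                                                                                                                  struct WhichamahoositBuilder {
                                                                                                                                                                      name: Option<String>,
                                                                                                                                                                      size: Option<u32>,
                                                                                                                                                                  }
                                                                                                                                                                  
                                                                                                                                                                  impl WhichamahoositBuilder {
                                                                                                                                                                      fn name(mut self, name: impl Into<String>) -> Self {
                                                                                                                                                                          self.name = Some(name.into());
                                                                                                                                                                          self
                                                                                                                                                                      }
                                                                                                                                                                      fn size(mut self, size: u32) -> Self {
                                                                                                                                                                          self.size = Some(size);
                                                                                                                                                                          self
                                                                                                                                                                      }
                                                                                                                                                                      // The Option handling lives here, once, not at every use site.
                                                                                                                                                                      fn build(self) -> Result<Whichamahoosit, &'static str> {
                                                                                                                                                                          Ok(Whichamahoosit {
                                                                                                                                                                              name: self.name.ok_or("name is required")?,
                                                                                                                                                                              size: self.size.ok_or("size is required")?,
                                                                                                                                                                          })
                                                                                                                                                                      }
                                                                                                                                                                  }
                                                                                                                                                                  
                                                                                                                                                                  fn main() {
                                                                                                                                                                      let w = WhichamahoositBuilder::default()
                                                                                                                                                                          .name("gizmo")
                                                                                                                                                                          .size(3)
                                                                                                                                                                          .build()
                                                                                                                                                                          .unwrap();
                                                                                                                                                                      assert_eq!(w.name, "gizmo");
                                                                                                                                                                      assert_eq!(w.size, 3);
                                                                                                                                                                  }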