1. 1

    It is nice to see resources like this. I did some of this back in university for a basic raytracer I wrote. But I remember games like Doom with fondness.

    1. 1

      I haven’t ever done anything with entity-component systems. I am curious about how broadly this could be applied.

      So I understand you have an entity and you give it a position, so presumably this component is like a property of the entity. Why not just give the entity a position directly?

      1. 1

        There are a few reasons. On the software engineering side of things, it avoids a lot of issues where class hierarchies are too rigid or code becomes too tightly coupled, but there are also performance benefits, which is largely why the game development universe is so into the idea.

        If you have a system that is calculating collisions, for example, perhaps you only need that position to do the calculation. If you “just” give the entity a position “directly” (assuming Entity is a class and you just jam fields in there), then you will also “just” give it other things directly, and eventually it grows to have a huge number of fields. So, your collision algorithm is scanning huge chunks of fragmented memory only to read a single position variable, which is extremely cache-inefficient.

        In contrast, with an ECS you can implement that so that scanning all of the positions is just a linear scan of a contiguous array. Depending on the data type it may even be vectorized. The way you realize this is to not make the component a property of the entity in the sense that it is stored “in” the entity, but to instead store the components by type, completely separate from entities, and associate them with IDs. In the simplest ideal vision, there is no Entity class at all: an entity is simply an integer.
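
        A rough sketch of that layout (in TypeScript just for brevity; the fixed capacity and the names are illustrative, not any particular engine’s API):

          // Entities are plain integers; components live in per-type stores, not "inside" an entity.
          type Entity = number;

          class PositionStore {
            // Structure-of-arrays: coordinates are packed contiguously, which is what makes the
            // linear scan cache-friendly (and potentially vectorizable).
            xs = new Float32Array(1024);
            ys = new Float32Array(1024);
            owner: Entity[] = []; // which entity each slot belongs to

            add(e: Entity, x: number, y: number): void {
              const i = this.owner.length;
              this.owner.push(e);
              this.xs[i] = x;
              this.ys[i] = y;
            }
          }

          // A "system" only touches the component type it cares about, in one tight loop.
          function collisionSystem(positions: PositionStore): void {
            for (let i = 0; i < positions.owner.length; i++) {
              const x = positions.xs[i];
              const y = positions.ys[i];
              // ...broad-phase collision checks using only x and y...
            }
          }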

        1. 1

          In contrast, with an ECS you can implement that so that scanning all of the positions is just a linear scan of a contiguous array. Depending on the data type it may even be vectorized.

          I definitely get this along with the cache argument.

          The thing I am not really sure about is how this applies more generally, outside of games. I don’t really do game design, but I do work on web applications (Angular).

          1. 1

            It’s a technique that’s mainly for performance; it’s also kinda specific to languages (read: C/C++/Java/C#) that don’t have an easy way of doing dynamic compositions/mixins. Like, in Ruby or JS I don’t think it’s as big a win.

        2. 1

          It’s a framework that helps reinforce a good separation of concerns. You can mix and match any type of components across your entities, and your systems only care about their specific types of components. It’s a lifesaver in instances where you need an oddball case later in development. “Gee, I really need this Sword Item class to be able to talk to the player, but only the Character class has “Talk()”!” Rather than trying to figure out how to shoehorn your Sword into a different class hierarchy, you would start by just adding a Talk component to the Sword entity.

          It flattens out the logic: any “thing” in your world has the capability to do any action.
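
          A sketch of that move (TypeScript, made-up names): because the Sword is just an ID, “give it Talk” is a single map insert rather than a change to any class hierarchy.

            type Entity = number;

            interface Talk {
              say(line: string): void;
            }

            // Components are stored per type, keyed by entity ID.
            const talkComponents = new Map<Entity, Talk>();

            const sword: Entity = 42;
            talkComponents.set(sword, { say: line => console.log(`The sword whispers: ${line}`) });

            // The dialogue system iterates whatever happens to have a Talk component.
            for (const [, talk] of talkComponents) {
              talk.say('Hello.');
            }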

        1. 3

          So in short: the browser uses operating system libraries to do image decoding, like most every other application on macOS.

          1. 3

            This is definitely a rant, but I think it is something a lot of people have felt to some degree.

            I don’t buy all of what they are selling though. I happen to like JSON and the term API seems to mean something where I am.

            1. 6

              Yeah, I’m wondering what the author’s prescriptions would be based on these critiques. Does the author think we should use CSV or XML instead of JSON? Or do they like JSON but want to criticize it for being a “glorified CSV” anyways? I don’t understand the point.

              And they say APIs are for people who are “idiots” and “cannot code”. Does the author think we should somehow stop using “APIs”? What the fuck does that even mean?

              I think you could write a well-reasoned argument for the problem with managers and APIs; maybe you could write about how managers may think using an external web API is a panacea, how they may be unknowledgeable about the short-term costs in terms of development time it might take to integrate with a web API or the long-term costs of making the product depend on a third party’s services. You could even fit that argument in a rant format if you wanted. But “you are an idiot, you cannot code, here, use this line here, and when it runs, it will retrieve something from a server somewhere” is so far away from that it’s not even funny. This strikes me as a really bad rant by someone who doesn’t actually understand what they’re criticizing.

              1. 2

                The article is obviously a rant and some of the wording is a bit too much, with that said:

                I think the point is that maybe we didn’t need JSON and XML was just fine, or at least that JSON did not fix all the UX issues and in some cases introduced new ones. Now that we have JSON we might as well use it - but that is not what the article is about.

                If we look at the history of configuration languages, let’s say it goes XML -> JSON -> YAML -> TOML (the exact transitions don’t matter - I want to show there were a lot of them), I think it is fair to say that they are all bad in some way. For example, XML was very wordy, but since it was all S-expression-like and editors were smart about it, it was easy to get it right. YAML is sometimes hard to read because (at least for me) even with editor guides it is difficult to figure out the indentation, etc.

                So if all configuration languages have UX problems, why have new ones? The article claims most of this work is done for the benefit of developers, who were able to spend their time coding new standards, new libraries, etc. instead of solving the original problems.

                How should the file format problem be solved? I don’t know, to be honest, but I agree that what we needed from new formats was proof that they actually made things better before they were applied everywhere at huge cost.

                In the same way, the author obviously didn’t say we should not use APIs. The thing is that before APIs, programs could communicate through interfaces as well. Most of them could be called APIs, and maybe some were; it does not matter. What matters is that at some point APIs became a “word”, used by business types as a nice abstract idea that sells, and by developers, again, as an abstract idea that is easy to use: “we need to provide this API”, “we need to support this API”.

                I think the argument in the article was about something I often see in release notes: “new export format” or “full support for new Gesture API”. Maybe the end user does not care which format we use to export their data if it is not faster than it was 2 years ago? Maybe they do not care if we support the latest Swipe Gesture API if the app loads slower than it did 1 year ago?

              2. 3

                but I think it is something a lot of people have felt to some degree.

                I agree, but I’m not sure that’s a sufficient bar to be worth posting in itself.

                For instance, if I focus specifically on the Firefox-related part of the rant, I understand the frustration (and it’s a common sentiment) but there’s zero actual investigation into why the extensions system had to be replaced. And frankly, the reason is underwhelming: the XUL-based extensions were about as secure as permitting users to install arbitrary kernel modules, and as such were insecure by design.

                If we agree that people ought to be able to do banking over the web in Firefox, then it’s a foregone conclusion that XUL-based extensions had to go.

                Also, based on this:

                When I go to a restaurant, I don’t want to know what the staff is doing with my food. I’m paying for the service and ignorance. And I expect the same from software. I’m paying, I’m the boss. It’s time the software industry started serving its boss, the user.

                Cheers.

                I expect that if banking wasn’t available on Firefox (or was notoriously insecure), this guy would be outraged at why such a primal need wasn’t fulfilled. So honestly I don’t see any solution here when this guy 1) doesn’t want to hear what the actual problems are, and 2) isn’t satisfied with any of his options. There is literally no way of satisfying this person.


                But circling back to your comment: if it’s something that a lot of people have felt, that’s a topic worth discussing, if only to check whether there’s a method of fixing it. I think a more useful approach would be if someone were to explicitly start a discussion in the form of “A lot of people feel X. What is the cause of this, and what could be done about it?”, instead of trying to discuss the drivel in OP’s link.

                1. 2

                  This reminds me of the thing Chrome was trying to do with extensions, specifically involving ad blocking, a few years ago. I don’t really know what came of that. But the argument for it, I think, was made on the basis of performance and security.

                  You could make the argument that letting a website do whatever it wants with JavaScript isn’t a good idea.

              1. 2

                Now I want to see how it does the scrolling bit, but the visualizations are interesting too!

                1. 1

                  one day I’ll learn how to do this fancy stuff and not some CRUD app, I swear

                1. 19

                  I think the greatest concern I have is the potential for governments to pressure Apple into using the tech for other purposes since we know Apple loves them some iPhone sales. That being said though, their technical summary seems to rule out every other concern since it only scans images before uploading them, only flags matches against a known set of images, and those matches are manually reviewed. You couldn’t really fake a match unless you knew the existing dataset so it’s almost impossible that you could “SWAT” someone, and even then it would be trivial to demonstrate that the image was sent to you by someone else. Perhaps if someone had your credentials they could upload images to your account but then that was a risk long before this.

                  I don’t like the idea of this being used for other types of images, but as implemented and for the purpose given it seems like a pretty well thought-out system. I am totally fine with the pushback since it makes Apple be as transparent as possible, but I don’t like that people are making some false claims about how the tech works. I think the focus of criticism deserves to be squarely on the issue of whether Apple bends to pressure from more restrictive governments when their profits are on the line.

                  1. 3

                    and those matches are manually reviewed.

                    I hope they are prepared for the toll on the people reviewing. I recall reports of police officers doing this sort of work having mental health issues and not doing it for very long.

                    1. 2

                      Agreed. I have heard that the Facebook team that handles these sorts of things has extremely high turnover.

                      1. 3

                        It looks like the manual review process involves low-resolution versions of the image to protect reviewers.

                        1. 1

                          I’d be far more worried if they didn’t have high turnover.

                      2. 3

                        You couldn’t really fake a match unless you knew the existing dataset so it’s almost impossible that you could “SWAT” someone

                        This was also true of DVD private keys until it wasn’t. (This is not to negate the second part of your sentence, only the first.)

                      1. 2

                        One thing I was curious about:

                        I had a few attempts to learn it, but I failed every time. It is fair to say that I don’t understand it.

                        For some context, I use Angular at work every day. I am learning a little bit of React for a personal project. I am not exactly sure why it would be difficult to learn.

                        1. 3

                          Sometimes people just don’t click with a certain way of doing things.

                          At a former job we had a rather small web app written in emberjs. A year after it was created, two developers tried to understand and improve it and had major problems with it. They proposed to rewrite it in angularjs within a single week, because they (and most of the team) knew angularjs. They finished in less than a week and we never had any problems, even with onboarding new people.

                          That is not to knock on emberjs, but in this case it simply didn’t work out. Had we had different developers we might have been happy with emberjs.

                        1. 2

                          Interesting article. Do we have similar things in other languages?

                          The concept of promises I am relatively familiar with, and I have done quite a bit with observables.

                          1. 3

                            There are structured concurrency libraries for other languages (C, Python, Swift, maybe Kotlin are the mature implementations I know about).

                            The originator of the “structured concurrency” label summarized progress since their seminal post back in 2018, but I think it’s come a lot farther since then: https://250bpm.com/blog:137/

                            It’s linked from the article, but “Notes on structured concurrency” is probably the best summary of the idea yet written: https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/
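
                            The core idea, for anyone who doesn’t want to click through first: tasks spawned inside a scope are not allowed to outlive that scope. A toy sketch of that shape in TypeScript (not any of the libraries above; real implementations also propagate errors and cancel siblings):

                              // Toy "nursery": every task spawned in the scope must finish before the scope returns.
                              async function withNursery<T>(
                                body: (spawn: (task: () => Promise<unknown>) => void) => Promise<T>,
                              ): Promise<T> {
                                const children: Promise<unknown>[] = [];
                                const spawn = (task: () => Promise<unknown>) => {
                                  children.push(task());
                                };
                                try {
                                  return await body(spawn);
                                } finally {
                                  // The scope does not exit until every child has settled.
                                  await Promise.allSettled(children);
                                }
                              }

                              // Usage (inside an async function): both fetches have completed by the time this resolves.
                              await withNursery(async (spawn) => {
                                spawn(() => fetch('https://example.com/a'));
                                spawn(() => fetch('https://example.com/b'));
                              });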

                            1. 3

                              Do we have similar things in other languages?

                              This article and the Swift proposal are very, very close to how Ada’s built-ins handle concurrency.

                              You don’t deal with threads and promises; the language provides tasks, which are active and execute concurrently, and protected objects, which are passive, provide mutual exclusion, and allow complicated guard conditions on shared data. Tasks are like threads, but have procedure-like things (called “entries”) you can write and call, which block until that task “accepts” them. You can implement the concurrency elements common in other languages (like promises) using these features, but you usually don’t need to.

                              Execution doesn’t proceed out of a block until all tasks declared in the scope are complete, unless those tasks are allocated on the heap. You can declare one-off tasks or reusable tasks, even within functions, and they can share regular state. Tasks don’t just have to accept a single “entry”: queueing and selection of one of many entries is built in, and the select block also supports timeouts, delays, and proceeding if no entry is available. For long-running tasks which might not complete on time, there’s also a feature called “asynchronous transfer of control” which aborts a task if a computation exceeds a time threshold. Standard library functions provide pinning of tasks to CPUs, prioritization, and controlling which CPUs a task runs on using “dispatching domains”.

                              I’ve spent days of my life debugging async/await in other languages, but I feel like the Ada concurrency built-ins help describe the intent of what you’re trying to accomplish in a very natural way.

                            1. 32

                              When an error in your code base can take down millions of users who depend upon it for vital work you should

                              1. Have good CI
                              2. Have extensive tests
                              3. Make small changes at a time
                              4. Have at least one set of extra eyes looking at your changes
                              1. 15
                                1. Make use of language features that push you towards correctness, for example static typing.
                                1. 8

                                  I find it shocking how many people love “dynamic languages”

                                  1. 7

                                    I don’t. There are a lot of neat tricks you can do at runtime in these systems that would require 10x more work to do at build time, because our build tools are awful and far too difficult to work with. The problem is that we only have the build-time understanding of things while we’re actually programming.

                                    Don’t get me wrong, I disagree with taking this side of the trade-off and I don’t think it’s worth it. But I also realise this is basically a value judgement. I have a lot of experience and would expect people to give my opinions weight, but I can’t prove it, and other rational people who are definitely no dumber than me feel the opposite, and I have to give their opinions weight too.

                                    If our tooling was better (including the languages themselves), a lot of the frustrations that lead people to build wacky stuff that only really works in loose languages would go away.

                                    1. 7

                                      I don’t, because I used to be one of those people. Strong type systems are great if the type system can express the properties that I want to enforce. They’re an impediment otherwise. Most of the popular statically typed languages only let me express fairly trivial properties. To give a simple example: how many mainstream languages let me express, in the type system, the idea that I give a function a pointer to an object and it may not mutate any object that it reaches at an arbitrary depth of indirection from that pointer, but it can mutate other objects?

                                      Static dispatch also often makes some optimisations and even features difficult. For example, in Cocoa there is an idiom called Key-Value Coding, which provides a uniform way of accessing properties of object trees, independent of how they are stored. The generic code in NSObject can use reflection to allow these to read and write instance variables or call methods. More interestingly, this is coupled with a pattern called Key-Value Observing, where you can register for notifications of changes before and after they take place on a given object. NSObject can implement this by method swizzling, which is possible only because of dynamic dispatch.

                                      If your language has a rich structural and algebraic type system then you can do a lot of these things and still get the benefits of static type checking.

                                      1. 2

                                        Regarding your example, honestly I am not 100% sure that I grasp what you are saying.

                                        In something like C++ you can define a constant object and then explicitly define mutating parts of it. But I don’t think that quite covers it.

                                        I enjoyed using Haskell a few years back and was able to grasp at least some of it. But it gets complicated very fast.

                                        But usually I am using languages such as c# and typescript. The former is getting a lot of nice features and the latter has managed to model a lot of JavaScript behaviour.

                                        But I have no problem admitting that type systems are restrictive in their expressiveness. Usually I can work within them without too many issues. I would love to see the features of Haskell, Idris, and others become widely available, but those languages don’t seem interested in that wider adoption.

                                        1. 3

                                          Regarding your example, honestly I am not 100% sure that I grasp what you are saying.

                                          In something like C++ you can define a constant object and then explicitly define mutating parts of it. But I don’t think that quite covers it.

                                          I don’t want an immutable object, I want an immutable view of an object graph. In C++ (ignoring the fact that you can cast it away) a const pointer or reference to an object can give you an immutable view of a single object, but if I give you a const std::vector<Foo*>&, then you are protected from modifying the elements by the fact that the object provides const overloads of operator[] and friends that return const references, but the programmer of std::vector had to do that. If I create a struct Foo { Bar *b ; ... } and pass you a const Foo* then you can mutate the Bar that you can reach via the b field. I don’t have anything in the type system that lets me exclude interior mutability.

                                          This is something that languages like Pony and Verona support via viewpoint adaptation: if you have a capability that does not allow mutation then any capability that you load via it will also lack mutation ability.

                                          But usually I am using languages such as c# and typescript. The former is getting a lot of nice features and the latter has managed to model a lot of JavaScript behaviour.

                                          Typescript is a dynamic language, with some optional progressive typing, but it tries really hard to pretend to be a statically typed language with type inference and an algebraic and structural type system. If more static languages were like that then I think there would be far fewer fans of dynamic languages. For what it’s worth, we’re aiming to make the programmer experience for Verona very close to TypeScript (though with AoT compilation and with a static type system that does enough of the nice things that TypeScript does that it feels like a dynamically typed language).
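
                                          For example, the “read-only view of an object graph” from upthread can at least be approximated with a recursive mapped type. A sketch (the type names are made up, and like C++ const it only holds until someone casts it away):

                                            // Compile-time-only deep immutability over a whole object graph.
                                            type DeepReadonly<T> =
                                              T extends (...args: any[]) => any ? T :
                                              T extends (infer U)[] ? ReadonlyArray<DeepReadonly<U>> :
                                              T extends object ? { readonly [K in keyof T]: DeepReadonly<T[K]> } :
                                              T;

                                            interface Bar { n: number }
                                            interface Foo { b: Bar }

                                            function inspect(foo: DeepReadonly<Foo>): void {
                                              // foo.b.n = 1; // error: cannot assign to 'n' because it is a read-only property
                                              console.log(foo.b.n);
                                            }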

                                          1. 1

                                            I really like the sounds of Verona.

                                        2. 1

                                          Strong type systems are great if the type system can express the properties that I want to enforce. They’re an impediment otherwise.

                                          It’s not all-or-nothing. Type systems prevent certain classes of errors. Tests can help manage other classes of errors. There’s no magic bullet that catches all errors. That doesn’t mean we shouldn’t use these easily-accessible, industry-proven techniques.

                                          Now, static typing itself has many benefits beyond just correctness: documentation, tooling, runtime efficiency, and enforcing clear contracts between modules, to name just a few. And yes, it does actually reduce bugs. This is proven.

                                        3. 4

                                          We have somewhat believable evidence that CI, testing, small increments, and review help with defect reduction (sure, that’s not the same thing as defect consequence reduction, but maybe a good enough proxy?)

                                          I have yet to see believable evidence that static languages do the same. Real evidence, not just “I feel my defects go down”, because I feel that too, but I know I’m a bad judge of such things.

                                          1. 1

                                            There are a few articles to this effect about migrations from JavaScript to TypeScript. If memory serves they’re tracking the number of runtime errors in production, or bugs discovered, or something else tangible.

                                            1. 1

                                              That sounds like the sort of setup that’d be plagued by confounders, and perhaps in particular selection bias. That said, I’d be happy to follow any more explicit references you have to that type of article. It used to be an issue close to my heart!

                                              1. 1

                                                I remember this one popping up on Reddit once or twice.

                                                AirBNB claimed that 38% of their postmortem-analysed bugs would have been avoidable with TypeScript/static typing.

                                            2. 1

                                              Shrug, so don’t use them. They’re not for everyone or every use case. Nobody’s got a gun to your head. I find it baffling how many people like liquorice.

                                              1. 1

                                                Don’t worry, I don’t. I can still dislike the thing.

                                          2. 6

                                            And if you have millions of users, you also have millions of user’s data. Allowing unilateral code changes isn’t being a good steward of that data, either from a reliability or security perspective.

                                          1. 15

                                            Whenever I see people hating on pull requests and code reviews, I question the people and culture of their teams. I’ve been doing the PR+CR thing for many years in more than one company, and have never really thought negatively of PR+CR. If hiring is good, the people are good, and the culture is good, then I think PR+CR is not only fine, but is beneficial to a dev team.

                                             When I say “good”, I mean things like: everyone assumes positive intent by default; no quarrelling over style; people know that blocking a PR is a heavy hammer that should be used sparingly; politeness; respect; humility. Stuff that just comes naturally for a company or department that has great culture.

                                            Obviously, if your team’s PRs are a minefield of finger pointing, accusation, ad hominem, style wars, frequent PR blockage, intelligence insulting (even unintentional), and arrogance… nobody would like that. But that’s a people problem, not a process problem.

                                            1. 2

                                               I remember using TFS, and what code reviews were back then was nightmarish compared to PRs. It certainly does help that I have a great team to interact with.

                                               Stuff like style is enforced by ESLint and similar technologies, which reduces the chances of simple formatting showing up in the diff.

                                            1. 39

                                              Personally, fastmail - web client.

                                              1. 9

                                                +1 - Fastmail’s web client is several orders of magnitude less annoying than Gmail for me, and unlike GOOG where you ARE the product, Fastmail is straight up selling me a service and hosting my email in exchange for $$$. Very simple and compelling value prop there. How novel :)

                                                I switched after GMail went through a redesign a number of years back that made it all but unusable for people with any kind of vision issue at all. I’ve heard from multiple sources that the internal Google group for older folks absolutely had a fit but were summarily ignored. It was so low contrast and hard to read that I was actually ending up with severe eyestrain headaches at the end of the day on the regular.

                                                1. 2

                                                  How do you find Fastmail? Their service looks like it provides almost everything I want (in an ideal world, everything would be encrypted server side and search run from a CVM so that they couldn’t access my email even in the event of a compromise, but that’s not something that’s offered by any provider), but they seem very expensive. They seem okay if you’re a single person and probably fine if you’re a company, but they’re missing any kind of family plan, which is what I’d need to stop hosting my own email.

                                                   Microsoft Office 365 has a family plan that costs about as much as two of the 30 GB Fastmail plans and comes with 50 GB of email storage for each of up to 6 users, along with all of MS Office and 1 TB of OneDrive, which makes Fastmail seem incredibly costly in comparison (even with the ability to add $30/year users to your $50/year subscription with the domain, 2 GB of email space is nothing these days and $30/year for 2 GiB is insane: 2 GiB of geographically redundant cloud storage with a decent SLA is <$1/year).

                                                  1. 3

                                                    I have been using it for years. I am paying directly for my email, so I am the customer rather than the product. This is also all they do.

                                                    I have an office subscription too.

                                                    My needs are pretty basic.

                                                    1. 2

                                                      Respectfully you can’t compare the economies of scale in a company like Fastmail to a behemoth like Microsoft. While I can appreciate the value prop you describe, and I’m told Outlook365 is actually a formidable mail environment, I personally am happy to support a small company that does one thing and does it VERY well. That’s why I put my money on Fastmail.

                                                      1. 2

                                                        Respectfully you can’t compare the economies of scale in a company like Fastmail to a behemoth like Microsoft

                                                        I agree in general, but I’m comparing the mail hosting parts of the two offerings, the MS bit also includes a load more things that are more expensive to develop (the office suite). It looks as if Fastmail builds their own physical infrastructure, so I’d expect that they have pretty solid economies of scale. If not, then they could outsource their storage to a cloud provider and benefit from the provider’s economies of scale.

                                                        I personally am happy to support a small company that does one thing and does it VERY well

                                                        To be honest, if it were just me, I’d be very tempted to move, but for a family it’s a lot more expensive than self hosting. Once you’re up to four users, you’re on $200/year and there are quite a few providers that will offer a VM that can handle far more than four users for that much. I presume that if you’re building racks of infrastructure for serving thousands of customers, it’s cheaper. That means that the majority of the costs of Fastmail are some combination of:

                                                        • Administrative overheads (should be low - managing email infrastructure is very low overhead per person at scale; my old university computer society manages email for a few hundred people with a handful of admin folks giving up an hour or so of volunteer time periodically)
                                                        • Cost of first-party software development (particularly the client, which I’ve heard is very polished - I believe a lot of their server software is open source things like Cyrus that they contribute to).
                                                        • Profit.

                                                        I don’t know how it’s split between these three. The interesting thing is that the first of these is really the only cost that doesn’t benefit (much) from economies of scale. If they doubled the number of users, the cost of software development would be roughly the same, but would be amortised over twice as many customers. I would have thought that about $20/year would be the sweet spot for the 30 GB plan to maximise profit and I’m really curious how their economists picked the price points they have. They’ve been around for over 20 years, so I can’t imagine it’s because they are limited in the speed at which they can manage growth.

                                                      2. 1

                                                        2 GiB is insane: 2 GiB of geographically redundant cloud storage with a decent SLA is <$1/year).

                                                        Doesn’t that heavily depend on your traffic volume and structure, i.e. number of requests?

                                                        1. 1

                                                          Yes, though full downloads of a mail spool are pretty rare. Looking at Azure’s pricing, there’s no per-GB transfer cost, and the cost of read / write operations is a few cents per 10,000 operations ($0.13 for writes, $0.005 for reads). Assuming that your full-text search isn’t implemented using grep, I’d expect the cost there to be well under another dollar. You probably won’t hit 10,000 writes/year (probably 2-3 for every incoming mail: write the message, update the index, update metadata, plus another one [though these could be batched] when you mark something as read or add tags / move it to a folder, even at 10 per email, 1,000 emails per year is maybe slightly low for a low-volume user, assuming no batching of updates) for a 2 GiB mail spool and you need a lot of reads at half a cent per 10,000 for that cost to matter. If you use the native blob indexing facility for tags / folders with JMAP then that probably adds a bit more cost, but I’d imagine that they’d use something custom for that. Possibly if you have a lot of clients all downloading all messages you’ll hit a lot of reads, but I’d expect the cost of reads to be in the noise.
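
                                                          To put rough numbers on that (a sketch; the per-10,000-operation prices are the ones quoted above, so treat them as assumptions rather than current list prices):

                                                            // Back-of-envelope yearly cost for a low-volume mailbox.
                                                            const writePricePer10k = 0.13;  // USD, as quoted above
                                                            const readPricePer10k = 0.005;  // USD, as quoted above

                                                            const yearlyWrites = 10_000;    // generous, per the estimate above
                                                            const yearlyReads = 1_000_000;  // very generous

                                                            const yearlyCost =
                                                              (yearlyWrites / 10_000) * writePricePer10k +
                                                              (yearlyReads / 10_000) * readPricePer10k;

                                                            console.log(yearlyCost.toFixed(2)); // "0.63" -- well under a dollar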

                                                        2. 1

                                                          Is there a way to bring your own domain to MS 365? That was what stopped me last time I looked for my family stuff.

                                                          1. 1

                                                            I’ve not tried but there is some documentation that suggests it’s possible.

                                                            1. 1

                                                              Sure, you just need to opt for a business plan or whatever it’s called. I don’t think you can do it with the lowest tier of family plan, but I’ve used my own domain (utf9k.net) with Office 365/Outlook in the past.

                                                              Err, I assume you mean sending email from O365 and using it as your login address rather than, say, transferring your actual domain registration. I assume the latter isn’t possible, but I would be surprised.

                                                              1. 1

                                                                Yeah, I meant using O365 to send/receive mail for my own domain. Last time I looked, I couldn’t figure out how to make it work on a family plan, and the price bump for the lowest enterprise plan where it’d work was more than I was willing to do.

                                                                Thankfully, I’m grandfathered into the old free G Suite plan, so we just use that for email and use O365 for other things.

                                                                1. 1

                                                                  Officially, you can only host your own domain if you use GoDaddy as a registrar, but there is a way to get around that limitation.

                                                                  However, I would suggest that you don’t host your primary email address with Microsoft as part of the Office 365 subscription. The main reason is that if anything happens to the subscription you will most likely lose access to the premium benefits, which means that you won’t be receiving any emails from Microsoft either.

                                                            2. 1

                                                              Ah, but you can’t host your own domains on O365 unless your registrar is … GoDaddy, which, no.

                                                              1. 1

                                                                I use Fastmail for my personal email+calendar+etc and work uses Office 365, so I’ve used both web clients.

                                                                FWIW I like the web client for Fastmail more than the Outlook Web one. Fastmail is a bit more “power user friendly”, i.e. setting up new email filters and things like that is easier, both to get to and to configure, and I find it isn’t any more complex for beginner users either. The Fastmail UI is also faster in my experience. Outlook Web does some fancy AI sentence completion features and stuff, but I don’t really want this - so not having it is probably a plus in my book.

                                                                With my amateur marketing hat on, I think the 2GB Basic plan is mostly there to make the 30GB Standard Plan look better value. And for those folks who actually don’t need much at all by way of email.

                                                              2. 1

                                                                Same, but mostly for lack of a better alternative on Windows.

                                                                1. 2

                                                                  It remains shocking to me that Windows doesn’t have a CalDAV/CardDAV client by default.

                                                              1. 1

                                                                I’m surprised there is no keyword for function.

                                                                1. 1
                                                                  def
                                                                  
                                                                1. 8

                                                                  Today I wrote a little table printer, you know, stuff like this:

                                                                       a     │           b           │
                                                                  ───────────┼───────────────────────┤
                                                                  asd        │ 4,434,341,321,312,321 │
                                                                  asdasdsad  │               443,434 │
                                                                  

                                                                  Before I knew what was going on I had added options to use +---+---+ instead of the Unicode box-drawing characters, options to configure the box-drawing characters per cell, a CSV output mode, and a bunch of other options I don’t need, and all sorts of methods and constants to set all of this. All I wanted was a simple table printer for a little program I wrote that’s a bit more advanced/nicer than Go’s stdlib tabwriter.

                                                                  I ended up removing most of it; it more than doubled the code size, and I don’t need any of these features. I suspect most people don’t. And if you do: well, use something else then.

                                                                  And I’m actually fairly conscious about limiting the scope of libraries and such. It’s so easy to get carried away.

                                                                  I wouldn’t really phrase it as “opinionated” though, but rather more along the lines of “this only solves use case X; other use cases are entirely reasonable, but not solved by this library”, although that doesn’t have such a nice ring to it. It’s fine to solve only 20% of use cases, and if you do it well then those 20% will be solved a lot better than with a much larger generic library/tool!
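
                                                                  For what it’s worth, the “solves only use case X” core really can be tiny. A sketch of the shape of it (in TypeScript rather than Go, and with no separator row, no options, nothing configurable):

                                                                    // Minimal table printer: left-align the first column, right-align the rest.
                                                                    function printTable(rows: string[][]): string {
                                                                      const widths = rows[0].map((_, col) => Math.max(...rows.map(r => r[col].length)));
                                                                      return rows
                                                                        .map(row =>
                                                                          row
                                                                            .map((cell, col) => (col === 0 ? cell.padEnd(widths[col]) : cell.padStart(widths[col])))
                                                                            .join(' │ ') + ' │')
                                                                        .join('\n');
                                                                    }

                                                                    console.log(printTable([
                                                                      ['a', 'b'],
                                                                      ['asd', '4,434,341,321,312,321'],
                                                                      ['asdasdsad', '443,434'],
                                                                    ]));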

                                                                  1. 3

                                                                    I ended up removing most of it; it more than doubled the code size, and I don’t need any of these features. I suspect most people don’t. And if you do: well, use something else then.

                                                                    How about forking your code instead? Thanks to your simple and focused design, I can more easily modify it to suit my own needs.

                                                                    While your more flexible version had a better chance of solving my problem, I would have a harder time adapting it if it didn’t.

                                                                    1. 1

                                                                      This kind of goes back to the MVP philosophy. It even works with Open Source in my opinion.

                                                                      In fact, it works even better. If you publish a utility or a library that works for you, and people start using it, 2 things will happen.

                                                                      1. People will open Feature Requests against your code, saying “Can we have it do x?” You can then decide to take another look at that idea.
                                                                      2. Less often, some hero will come along and say: “I used your code and found myself needing to do x, so I wrote the following patch to have it do x. Can you merge it into your code”?

                                                                      Every time #2 happens to me, it makes my day.

                                                                      PS: Linux was the original example of a small piece of opinionated software that grew this way. I would argue Ruby on Rails is a later example.

                                                                    2. 1

                                                                      Clay Shirky called it Situated Software.

                                                                      1. 1

                                                                        It is an interesting issue that comes up often in code.

                                                                        I was making an interface for some middleware (testing out writing something to handle the incoming HTTP request in Node). The code I ended up writing for some flexibility was rather long, but as soon as I wrote it closer to the interface that most people end up writing, the code ended up being something like five lines long.

                                                                        I have been looking at the CanActivate interface in angular, which is a guard on a route:

                                                                        canActivate(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<boolean | UrlTree> | Promise<boolean | UrlTree> | boolean | UrlTree
                                                                        

                                                                        Just look at those return types! This is a perfect example of something that went down a rabbit hole, but one that I think is pretty justified. You definitely want pretty much all those cases at some point.
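
                                                                        For instance, a guard that only needs the synchronous corner of that union might look like this (a sketch; AuthService and its isLoggedIn flag are made up):

                                                                          import { Injectable } from '@angular/core';
                                                                          import { ActivatedRouteSnapshot, CanActivate, Router, RouterStateSnapshot, UrlTree } from '@angular/router';
                                                                          import { AuthService } from './auth.service'; // hypothetical service

                                                                          @Injectable({ providedIn: 'root' })
                                                                          export class AuthGuard implements CanActivate {
                                                                            constructor(private auth: AuthService, private router: Router) {}

                                                                            // boolean to allow or block, UrlTree to redirect instead.
                                                                            canActivate(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): boolean | UrlTree {
                                                                              return this.auth.isLoggedIn ? true : this.router.parseUrl('/login');
                                                                            }
                                                                          }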

                                                                        But where it becomes a real issue is when there end up being edge cases where the documentation isn’t specific about what happens. Although I am failing to come up with an example just now.

                                                                      1. 2

                                                                        Halfway down the screen and I still don’t know what it is you are actually talking about. Now, having read the whole thing, I get what you are trying to say about it, but still have no idea what it is.

                                                                        1. 3

                                                                          Hi, I wrote this post with this assumed knowledge (it’s hosted on Gemini and I didn’t submit it here; when I wrote it I assumed only people familiar with Gemini would see it): https://gemini.circumlunar.space/

                                                                          I’ll add this link to the post!

                                                                        1. 1

                                                                          Writing a micro web framework for node from scratch in typescript. Learning more about stuff.

                                                                          1. 1

                                                                            Never would I do that.

                                                                            1. 2

                                                                              This is interesting, because if I read it right it explains all of the pain I had in the 90s with C++Builder’s collections. You had to create the list item type derived from some base list item, add your items, and then you could add them to the collection. Obviously not a generic solution like the one that would come along with standardization.

                                                                              1. 3

                                                                                I find it fascinating, but more because it is a Prolog-like language written in Forth.

                                                                                I do have trouble following the count example in the manual.

                                                                                1. 3

                                                                                  Although logic programming and stack-based programming are very different paradigms from a user perspective, fast Prolog implementations typically compile to stack-based bytecode, the Warren Abstract Machine. So on the implementation side it’s less of an impedance mismatch than one might expect.

                                                                                  1. 2

                                                                                    ‘Stack-based VM’ and ‘stack-based programming language’ are two very different things.

                                                                                    1. 3

                                                                                      I’m not saying you’re wrong but your claim is unsubstantiated. Generally it’s more interesting if you explain the differences.

                                                                                      From my pov it’s programming languages all the way down so the difference is mainly whether you consider your language an IR or a UI.

                                                                                      1. 2

                                                                                        In terms of their goals yes, but you can treat a stack-based programming language as basically just a stack-based VM target if you want. For example here’s a paper from 1987 showing how to use a WAM-style compilation strategy to compile Prolog to Forth.

                                                                                    2. 2

                                                                                      As a prolog enthusiast, I did enjoy that mind bending aspect of it.

                                                                                      1. 2

                                                                                        It doesn’t have all the fun Prolog features like unification, but it may be easier to reason about, and it definitely makes its multithreaded nature easier to implement. Disclaimer: I helped with the ppc64le port.

                                                                                    1. 1

                                                                                      It really depends on what you mean by assembly language.

                                                                                      WebAssembly isn’t an assembly language in the traditional sense; the binary form is a bytecode - same with .NET IL.

                                                                                      1. 1

                                                                                        That was my first thought, too. Then again, you could argue that byte code is really just machine code for virtual machines. In that case wasm/ILAsm would probably fit the common definition of assembly as a low level language that provides a 1:1 mapping to a machine’s instruction set. By that definition however, modern day x86 assembly would fail the test[1][2].

                                                                                        What’s really pushing it too far in my opinion though is the fourth example. Such a level of abstraction makes it nearly impossible to reason about the underlying architecture, which defeats the purpose of using assembly in the first place. By the same logic, you could call home-computer era BASIC dialects assembly, since you can infer the resulting machine code by looking at the interpreter implementation. It reminds me of my own history of learning programming, though. Assembly was the first language I learned properly (aside from a few brief excursions into Pascal in high school). As I progressed and started to get more comfortable with macros, I got a genius idea: How about creating a big library of macros that provides commonly needed functionality like printing text, mathy things, etc? Fortunately, before I got started on that project, I discovered that other people had already had the same idea, and had come up with this magic thing called C (among other things, of course).

                                                                                        [1] https://www.youtube.com/watch?v=eunYrrcxXfw [2] https://xlogicx.net/index.html

                                                                                        1. 1

                                                                                          The mapping between assembly and machine code for x86 is not quite 1:1, but it’s close enough. Yes, you can try to choose encodings that are short, or don’t contain ROP gadgets; but the assembler does not make any major decisions on your behalf, and there is a 1:1 correspondence between instructions in assembly and instructions in machine code (one assembly instruction never gets turned into multiple machine code instructions, nor vice versa). There is never a major security, performance, or debuggability impact resulting from choices made by the assembler.

                                                                                          Even for cases like ARM immediates, the choices made by the assembler are very simple and completely mechanical. (Though, by the time you get to something like the Go assembler, you may have a point.)


                                                                                          C is a completely different kettle of fish. Register allocation alone places it in a completely different category from assembly (even with macro libraries).


                                                                                          Regarding .NET and wasm, I don’t think there’s an argument there either. It would, of course, be possible to implement a .NET or a wasm CPU. It would also be possible to make a hardware implementation of Python (not bytecode, but Python source code), yet I don’t think it would make sense to consider Python machine code.

                                                                                          We have to consider the intent behind these systems. x86 assembly was intended as a (fairly-)direct textual representation of x86 machine code. Python was intended as a high-level scripting language. And ‘.net assembly’ was intended as a (fairly-)direct textual representation of a bytecode implemented by a virtual machine.

                                                                                        2. 1

                                                                                          And what? x86 binary form is also quite literally a byte code. As are 6502 and z80 and VAX.

                                                                                          There is no essential difference between bytecodes intended to be interpreted by a regular program and bytecodes intended to be interpreted by microcode.

                                                                                        1. 11

                                                                                          I’m not sure to what extent this is still the case, but 5 or so years ago when I was more actively interested in Haskell, I was frustrated by the prevalence of academics in the Haskell community. People seemed to want to intentionally complicate things such that a thesis explaining them is necessary. It seemed like everyone was building abstractions or trying to understand abstractions and nobody had any time left to build applications.

                                                                                          1. 3

                                                                                            I’m building lots of fun command line things if you like learning from small useful projects.

                                                                                            1. 2

                                                                                              When I was at university half a decade ago, I was interested in Haskell.

                                                                                              But I was interested in it to actually make things. I think I felt some of this pain - building abstractions is pretty much the definition of the web programming model that a lot of the libraries had built up. You almost needed a degree in math.