Lil blisters and oozes with a remarkable and innovative collection of Bad Ideas.
- JSON can’t represent NaN or Infinities, so Lil can’t represent them in its numbers, either.
If you are doing this, then also get rid of -0 as a special number distinct from 0, but also mysteriously equal to 0. The IEEE -0 value is not an integer, it is not a real number, and the behaviour of various common mathematical operations on this value is entirely arbitrary and impossible to predict based on a knowledge of mathematics. Just keep things simple for the user, I say.
Uniform operator precedence. PEMDAS be damned, I say- expressions evaluate right-to-left unless acted upon by parentheses or brackets.
Good idea. Most new languages think it is a good idea to have 20 or so levels of operator precedence, and there’s no way I will ever memorize all those levels. Much better to keep the grammar simple enough that you can learn it.
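For example, under that right-to-left rule an expression like 2*3+4 groups as 2*(3+4) and yields 14, where PEMDAS would give (2*3)+4 = 10. There is only one rule to remember, and you add parentheses when you want the other grouping.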
- Absolutely no runtime errors. Except for, y’know, the kind that are caused by interpreter bugs, the host OS, sunspot activity, or a general malaise. Lil has excruciatingly straightforward control flow and operates upon the highly dubious yet daring, even brave, premise that any consistent behavior at runtime is conceivably more useful than a crash.
- A mushy, coercion-happy closed type system that generally aims at that lofty and deeply problematic goal of “doing the right thing” and generalizing operations over all reasonable datatypes.
This is the opposite of simple. Based on personal experience using such languages, this is a terrible idea, for several reasons.
With a stricter type system, I can assume that x + y means we are adding numbers, and that + supports the usual identities (x + y can be changed to y + x without changing the value computed, x + 0 == x for any valid argument of +, etc).

I do see one possible reason for making this terrible design decision, which is ease of implementation: you don’t have to add a lot of error handling code to your interpreter, design good error messages, present errors to the user in a helpful way, etc. In my experience, good error handling adds a non-trivial amount of complexity to a hobby language interpreter.
The IEEE -0 value is not an integer, it is not a real number, and the behaviour of various common mathematical operations on this value is entirely arbitrary and impossible to predict based on a knowledge of mathematics.
Like what? As far as I can tell, -0 behaves equivalently to +0 in basically all cases, most importantly comparisons. There’s a few weird edge cases that can produce -0 instead of +0, but since -0 is equal to +0 you aren’t going to notice unless you do a bitwise compare.

Section 6.3 of the 2008 standard appears to specify all the edge cases quite clearly, and there’s basically three ways to produce -0: rounding a negative with the result being 0, the rounding implicit in the fused-multiply-add operation, and sqrt(-0), which equals -0. As far as I’ve read, -0 as an input to basically any operation functions identically to +0.
The primary edge case I’ve seen cited as a problem is how they behave in division: x/0 = Inf, x/-0 = -Inf.
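For the record, that asymmetry is easy to reproduce in any IEEE-754 language; a quick JavaScript check:

```javascript
console.log(1 / 0);            //  Infinity
console.log(1 / -0);           // -Infinity
console.log(0 === -0);         // true: the two zeros compare equal...
console.log(1 / 0 === 1 / -0); // false: ...but division tells them apart
```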
Let’s turn the question around. Suppose you started using a language similar to Python or Javascript, except that there is no difference between +0 and -0. They are the same number: -0 as an input to any numeric operation returns the same result as +0, without any exceptions. In fact, the expressions -0 and +0 both print as the string “0”. This violates the IEEE standard, but would you even notice the difference? Would you care? My theory is that most people don’t know about the magic properties of -0, and don’t care. For an end-user programming language like Lil, or like the new language I’m designing, getting rid of the special magic associated with -0 simplifies the language and removes a footgun.
Let’s answer the question instead. I’m not arguing that anyone wants or needs a sign bit, I’m asking what nasty edge cases its current implementation introduces.
-0 behaves equivalently to +0 in basically all cases
-0 as an input to basically any operation functions identically to +0
You are using the word “basically” as a weasel word, meaning that -0 is equivalent to +0 most of the time, except when it isn’t. The nasty edge cases all lie in those cases where -0 isn’t equivalent to +0. Those are the cases where your code can go wrong.
People learn the properties of the real numbers in elementary and high school. The problem with the IEEE standard is that it contains two elements, NaN and -0, which are not real numbers, and which violate the axioms of important arithmetic operations. Having learned real numbers in high school, and not being trained in the correct use of floating point numbers, most people will probably just write code as if floats were real numbers, and not think through the consequences of “what if this expression returns NaN” or “what if this expression returns -0” every time they write an arithmetic expression. That creates opportunities for code to go wrong if these values arise. If you eliminate NaN and -0 from a programming language, then these footguns go away.
The infinities are also not real numbers, but they cause fewer problems in practice, because there is a sensible way to extend the real number system with +infinity and -infinity in a way that doesn’t break the axioms of real arithmetic.
One of the laws of real arithmetic is the law of trichotomy. Every real number is either negative, zero, or positive.
The NaN and -0 values in IEEE floats violate the law of trichotomy. -0 sometimes represents a small negative number, and sometimes represents zero, depending on the context. This context is not stored in the number itself, so you may need to track it externally in order for your code to work. As a result, -0 by itself, without this context, cannot unambiguously be treated as definitely zero or definitely a small negative number. Instead it is something different, just its own thing. This creates a problem when defining new numeric operations that aren’t specified by the IEEE standard. What happens when you pass -0 as an input? Often there is no right answer that produces the correct behaviour for all use cases.
For example, how would you implement the sign(x) operator, which returns 0 if the input is 0, -1 if the input is negative, or +1 if the input is positive? If the input is a real number, then the input obeys the law of trichotomy, and the code is trivial. There are multiple ways to write the code that produce the same results. If the input is -0, then different ways to write the code (see previous sentence) may produce different results, and these different behaviours may be unexpected if you have internalized the law of trichotomy. No matter what sign(-0) returns, it will be incorrect for some reasonable use cases. Different languages produce different results for sign(-0), and in some cases this might be by accident: maybe the library code is written by somebody who unconsciously assumes that the law of trichotomy is true. Ultimately, the way you work around this problem is by providing multiple different versions of the sign operator and training people on which version to use in different circumstances. This complexity isn’t necessary if you don’t include -0 in your language.
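As a rough JavaScript sketch of that problem (signA and signB are made-up names; both are reasonable trichotomy-style implementations):

```javascript
// Version A: test for zero first.
function signA(x) {
  if (x === 0) return 0;        // -0 === 0, so -0 takes this branch
  return x < 0 ? -1 : 1;
}

// Version B: test the ordered cases first and fall through otherwise.
function signB(x) {
  if (x < 0) return -1;
  if (x > 0) return 1;
  return x;                     // returns 0, -0, or NaN unchanged
}

console.log(signA(-0));                       // 0
console.log(signB(-0));                       // -0
console.log(Math.sign(-0));                   // -0 (the built-in keeps the sign bit)
console.log(Object.is(signA(-0), signB(-0))); // false: the results are distinguishable
```

For every real-number input the two versions agree; only -0 exposes the difference.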
Another issue is the equality operator. In the real number system, the equality operator x==y satisfies the following requirements:
- x == x is true for every value x.
- If x == y, then x and y are interchangeable: any operation gives the same result when applied to x as when applied to y.

NaN violates the first requirement, and -0 violates the second requirement. This matters when writing code, because people internalize these axioms and may subconsciously assume that they are true when reasoning about code.
The new language I’m working on is very simple. All data is represented by immutable values. There is a single generic equality operator that satisfies the above requirements and works on all values, and there is code that doesn’t work correctly unless the equality operator satisfies the requirements. If I allow the NaN and -0 values into my language, then I need a second floating point equality operator that satisfies the IEEE requirements. I need to give different names to the two equality operators, and I need to train users on which equality operator to use in which cases. This is similar to the problem of needing two sign operators.
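For what it’s worth, JavaScript already ships both flavours of equality, which is roughly the two-operator situation described here:

```javascript
console.log(NaN === NaN);         // false: IEEE equality, NaN is not equal to itself
console.log(Object.is(NaN, NaN)); // true:  reflexive, "generic" equality
console.log(-0 === 0);            // true:  IEEE equality treats the zeros as equal
console.log(Object.is(-0, 0));    // false: generic equality can tell them apart
```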
Other language designers care less about these issues. They put multiple equality operators in their language (Scheme has =, eq, eql and equal, for example). They provide generic abstractions that fail in various ways when you put -0 or NaN into the abstraction, but so what, the code does what it does, floating point numbers are evil, just deal with it. Eliminating footguns is less important than other issues, such as simplicity of implementation and conformance to the IEEE standard. I’m not saying other people are wrong if their priorities are different than mine, I’m just saying that things don’t have to be this way, there are other ways to design a programming language.
Other language designers care less about these issues. They put multiple equality operators in their language (Scheme has =, eq, eql and equal, for example).
Scheme has multiple equality relations because of mutability (and because floating-point numbers are inexact, but that’s a whole clusterfuck I don’t want to touch right now), not because of -0 or NaN. Take for example Common Lisp, which has the same equality relations as Scheme, but which has no infinities or NaNs.
-0 sometimes represents a small negative number, and sometimes represents zero, it depends on the context …
This is a highly misleading thing to say, when exactly the same thing applies to +0 (or unsigned 0, if it is the only 0). Toggle the inexact bit if you care.
Augh, my head. XD I think I managed to extract some particles of meaning out of that though, thanks.
Kahan doesn’t speak for all numeric analysts. The Posit standard doesn’t have a signed zero, and fixes all the problems I described in my extended reply elsewhere in this thread.
Given his track record of identifying performance problems in Windows apps, and not just minor ones, which mostly boil down to “the app is doing way too much, often unnecessary, work”, I think he is entitled to use this title.
That may well be, but this specific article in isolation (which is what most of us are judging by) does not substantiate the claim. The author even acknowledges it in the first paragraph:
I apologize for this title because there are many things that can make modern software slow. Blindly applying one explanation without a bit of investigation is the software equivalent of a cargo cult. That said, this post describes one example of why modern software can be painfully slow.
I don’t understand the apology and think the title is fine. I read it like: “Why criminals are caught (part 38) – (the case of) The Hound of the Baskervilles”, where the parenthetical parts are optional. You wouldn’t think a blog post with that title would claim all criminals are caught because of the specific reason in that case.
for this user.
Are there users for whom waiting an extra 20 seconds before they can start using a program is acceptable?
I really like this project, but I really don’t like the curl | sh pattern of installing things. We should make an effort to make packaging a more universal and easy process for projects like this.
I even went to do my due diligence and read the shell script, but it was in a minified format that made it difficult to look at. I know I can trivially load it into my editor and replace the semicolons with newlines and read it that way, but I’d rather have an install that works with my package manager. I understand that code installed by package managers isn’t foolproof and has its own issues, but there has to be something better than a curl | sh pattern for it.
I guess this gets to the bigger problem of properly packaging things for multiple systems without the need to manually create the packaging for each system. I recently attempted to package an application for Mac and Windows (leaving the Linux users to figure out how to run a binary for themselves) and found it to be very difficult, requiring more knowledge than I think should be necessary, Windows particularly. Is anyone aware of a system where I can just drop my Windows, Mac, and Linux binaries (for each architecture supported by each) in a folder and have the packages generated by an automatic system?
I’d rather have a shell script that I can curl > install.sh and then less than add a new package repository to my system-wide settings. I don’t think a system package is any better than curl | sh over HTTP from a security standpoint. A hobby or poorly maintained system package repo is much more complex than a simple 14 line shell script.
True, but a system package can also add a zillion dependencies that somehow put the system into a weird state. I learned my lesson with third party packaging on Debian and Redhat already - for something simple like Bun, much better to pop it into ~/prefix/bun than somehow end up with a conflict about what version of OpenSSL should be installed system-wide.
Problem is you’re the 1/5th of people using the program, the fifth who are going to take a cursory look at the script as opposed to the other four-fifths who will simply run curl | sh and not notice their local library has a fake “Free WiFi” MITM installed by some skid.
.debs can be signed, but are not in general, so for the most part they’re trusted to exactly the same extent as the repository is. That means that curl | sh over HTTPS has basically the exact same threat model as installing a .deb does, and it always makes me wonder if people who lament the security failings of the former process are happily making use of the latter one. The same doesn’t hold for RPMs, though.
In what sense are RPMs different (it has been a very long time since I dealt with anything other than initial Linux setup - my wife is the one installing terrible bioinformatics software and complaining about the code quality there :))
RPMs are much more likely to be signed than DEBs (where only the repo is usually signed).
But both points are moot anyways. If I were to ship malware to you via curl | bash, I might as well do it via a malicious .DEB or .RPM which I have signed with my private key and told you to add the corresponding public key to your configuration.
Only, the curl’ed shell script is easily audited, whereas the same isn’t true for a .DEB or .RPM package. Yes, they can be extracted, but while I know the tools needed to inspect a file downloaded by curl, I would have to look up the commands to unpack a .DEB, and I would also need an understanding of the files inside a .DEB to know what gets executed at install time.
I think much less than 1/5th of people will examine a script before installing it. That also goes for language dependencies, like NPM, PyPI, Bundler, Cargo, Go modules, etc.
Is your concern about the security implications of running untrusted code? If so, wouldn’t you have the same concern when you actually run the installed program as well?
On macOS binaries are by default required to be code signed, which means that the default behaviour requires some real identity from the authors (they have to pay Apple for the signing cert), and, especially if the authors have historically signed the package, a fake update that isn’t signed is something you could in principle notice. The signing requirement can be bypassed, but again that requires extra steps that one would hope protect lay folk.
Interestingly (for hilarious reasons) you can codesign a shell script on macOS, but the signature isn’t checked - presumably because the code running is the bash/zsh/whev shell which is signed.
So the solution is to centralize software distribution and make it impossible for people to independently publish software?
No, though that does come with very large security benefits.
But a lot of malware relies on users simply double clicking something, which is a path that is broken by the default (and bypassable) Mac setup.
Packaging a Mac app has to be done locally on your own Mac because it involves code-signing using your developer credentials.
If it’s a developer/geek oriented app you might get away without signing it, since your users will probably have enabled running unsigned apps, but here in a thread complaining about insecure installation that doesn’t seem like a good suggestion!
I really hope they don’t disable code signing requirements, and I hate with a passion these sites that say “just disable this core malware protection to run our app, making you vulnerable to binaries from other sites, not just ours”.
You can run unsigned apps with the default signing rules: it requires that you know to context-menu click and open, in which case it asks if you’re sure you want to run the app. It really is that simple, and means that a site can’t make a binary with an image or zip file icon that then silently installs malware when a user “opens” it.
You can run unsigned apps with the default signing rules:
I think that’s changed recently…as far as I can tell, recent macOS now says something like “this app is damaged and can’t be run”, with no option to run anyways if it isn’t signed (and further shows a warning if it’s only signed, but not notarized; quite a pain)
I believe an incorrect signature isn’t bypassable (though obviously you could simply remove the signature if you were malicious?)
Gonads is a real word that has nothing to do with Go nor monads, and I can’t decide if I think this was intentional or not.
Mara: What is that package name?
Cadey: It’s a reference to Haskell’s monads, but adapted to Go as a pun.
Pretty sure it’s intentional 😉
possibly a reference to a talk by someone who didn’t know what they were talking about
This is one of my biggest gripes with rust. I wish the author(s) would also consider adding an executor in std as a possibility (merging smol for example?)
Making one executor the “blessed official” one would be a little divisive, especially as different ones have different design goals. The other reply asking why not another executor is a perfect example of this.
One of rust’s official design goals was to avoid the ossification of the standard library, by leaving big things like this out of it. We are definitely suffering the unfortunate consequence of that: fragmentation in areas where interop is hard.
Standardizing on an interface and keeping that in the stdlib would ideally let us have the best of both worlds.
Making one executor the “blessed official” one would be a little divisive, especially as different ones have different design goals
It makes sense, given that one of Rust’s foundational pillars is “abstraction without overhead”. Any concrete executor is unavoidably specialized to particular class(es) of workload, and will unavoidably impose overhead for others. That’s inherent complexity to the domain.
Following from that, and alas, I’m afraid that
Standardizing on an interface and keeping that in the stdlib would ideally let us have the best of both worlds.
is essentially impossible. I think any abstraction sufficiently general to support even the existing set of implementations will necessarily prevent the end-to-end optimizations that make each of those implementations suitable for their use cases. Executors aren’t, like, JSON parsers, or regexp engines; effective parallelism requires deep integration with the full language stack.
If smol was merged, then many Rust projects would still replace it with tokio. You’d still have fragmentation, dependencies, and keep explaining to users unhappy with async Rust’s speed/features that std is not the best option. Or if std merged tokio instead, then you’d have a similar talk with microcontroller people who say Rust’s async is too bloated. In browser WASM you have no choice but to use the browser’s runtime. In GUI applications you’d rather use the GUI event loop as your executor, not a custom runtime.

A one-size-fits-all runtime, together with std’s promise of never making any breaking changes ever, is a recipe for being stuck with a deprecated sub-par solution.
. . . a recipe for being stuck with a deprecated sub-par solution.
For some workloads, sure. Is there a runtime that would best serve the needs of the majority of Rust users? Honest question!
For most users evidently tokio works fine.
But keep in mind that Rust in general targets a minority of users not served well by more popular languages — for most programs using a garbage collector is fine. So in this case I’d also be concerned that even though for most users tokio is fine, it’d leave out an important minority that really really needs something else.
(FWIW: I’m one of the original project members of async-std)
I would say async-std, smol and tokio all serve the needs of the majority. The reason for that is that most users are interested in the execution model, but are not using it so heavily that all of the small details matter.
From the async-std side, we do see users picking async-std over tokio, though, for performance profile reasons (tokio is better tuned for latency, async-std better tuned for throughput - note that this is as reported on their workloads).
I hear really great things about smol but was also wondering why not async-std? That also seems really capable, no?
From what I understand async-std and smol both use async-executor. I believe async-std is effectively a wrapper around async-io and blocking, which smol encourages you to use directly. Both were developed by the same person.
This feels very similar to my experiences at a large software company. A million teams with vast, overlapping and frequently changing areas of ownership, resulting in nobody having the time to spend on any project.
I completely moved from vim/neovim to vis, very satisfied.
I ended up moving off Vis because of the lack of Language Server support. For Rust development, that’s the difference between a ~5s feedback cycle and a ~200ms feedback cycle.
Likewise, though its syntax highlighting system is simple, it is pretty bad for Ruby, where every variable ends up highlighted like a method name (since it’s impossible to tell the difference). Turn that off, and method signatures aren’t highlighted anymore.
There’s also helix which I think is shaping up rather nicely.
Shopify doesn’t support Git.
This isn’t really correct. See their docs on Github integration.
That is nice and all, but one thing that bothers me is that Shopify doesn’t minify the code for you. The same goes for JavaScript.
Shopify does automatically minify assets when they are served, including CSS and JS.
I quite like the writing style. It reminds me of Aphyr’s “Technical Interview” series.
Yes, and it does a pretty good job of being fun without getting in the way. I love that we have another emergent flavour of whimsy in our field, alongside the Hacker Koans from the older generation.
A quick, rough timeline:
There’s something about “we can’t change this, what about all the people using this” in the early days becoming an issue for far, far longer and for far more people, that feels like a failure mode.
I’m reminded of the anecdote about make: Its original author used tab characters for indentation in Makefiles without much thought. Later when they decided to add make to UNIX, they wanted to change the syntax to something more robust. But they were afraid of breaking the ten or so Makefiles in existence so they stuck with the problematic tab syntax that continues to plague us today.
Your comment reminds me of the origin of C’s confusing operator precedence rules:
Eric Lippert’s blog post “Hundred year mistakes”
There were several hundred kilobytes of existing C source code in the world at the time. SEVERAL HUNDRED KB. What if you made this change to the compiler and failed to update one of the & to &&, and made an existing program wrong via a precedence error? That’s a potentially disastrous breaking change. …
So Ritchie maintained backwards compatibility forever and made the precedence order &&, &, ==, effectively adding a little bomb to C that goes off every time someone treats & as though it parses like +, in order to maintain backwards compatibility with a version of C that only a handful of people ever used.
But wait, it gets worse.
I think this article includes a logical fallacy. It assumes that whatever you’re doing will be successful, and because it is successful, it will grow over time. Since it will grow over time, the best time for breaking changes is ASAP.
What this logic ignores is that any tool that embraces breaking changes constantly will not be successful, and will not grow over time. It is specifically because C code doesn’t need continual reworking that C has become lingua franca, and because of that success, we can comment on mistakes made 40+ years ago.
Sure, but this error propagated all the way into Javascript.
I’m not saying C should have changed it. (Though it should.) But people should definitely not have blindly copied it afterwards.
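JavaScript inherited that exact precedence table, so the trap is still reproducible today:

```javascript
const flags = 5;
// Intended: test the low bit. Because === binds tighter than &,
// this actually parses as flags & (1 === 0), i.e. flags & false.
console.log(flags & 1 === 0);   // 0 (falsy), not the intended test
console.log((flags & 1) === 0); // false, which is what was meant
```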
I’m curious why you think it is problematic? Just don’t like significant whitespace? But make also has significant newlines…
For me it’s because most editors I’ve used (thinking Vim and VSCode) share their tab configs across file types by default.
So if I have soft tabs enabled, suddenly Make is complaining about syntax errors and the file looks identical to if it was correct. Not very beginner friendly.
IIRC Vim automatically will set hardtabs in Makefiles for you. So it shouldn’t be a problem, at least there (as long as you have filetype plugin on).
I always make sure to have my editor show me if there are spaces at the front of a line. Having leading spaces look the same as a tab is a terrible UX default that most editors unfortunately have.
Not just these two, but all methods. Why in the world I can’t <form method=“buy”> after all these years is baffling. This is a huge contributing factor to people giving up on HTTP, when browsers refuse to implement it sensibly.
Assuming you mean adding custom methods to HTTP, how would a browser know which of these methods are idempotent?
I don’t think the lack of custom methods harms HTTP’s usage personally, though this may be because I’ve internalized a record-centric way of modelling. I would probably implement that as a POST /products/123/purchases, or maybe PUT /products/123/purchases/<transaction_id>.
Provide an attribute on the form which tells the browser if the request is idempotent.
A POST can be idempotent too, but the browser assumes it is not.
This feature would contradict the basic design principles of HTTP. The spec has this to say on the HTTP verbs:
HTTP was originally designed to be usable as an interface to distributed object systems…
Unlike distributed objects, the standardized request methods in HTTP are not resource-specific, since uniform interfaces provide for better visibility and reuse in network-based systems [REST]. Once defined, a standardized method ought to have the same semantics when applied to any resource, though each resource determines for itself whether those semantics are implemented or allowed.
Fielding also addresses this point:
The central feature that distinguishes the REST architectural style from other network-based styles is its emphasis on a uniform interface between components (Figure 5-6). By applying the software engineering principle of generality to the component interface, the overall system architecture is simplified and the visibility of interactions is improved. Implementations are decoupled from the services they provide, which encourages independent evolvability. The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application’s needs.
That is, the basic idea is that every application has resources, identified by URIs, that you interact with via this tiny set of very general verbs. The specifics of your application are not defined by adding custom verbs, but by custom media types. This article walks through a simple example of doing this.
A whole world-view is implied here: The world is made up of resources, each with a canonical location. You can view them, create them, update them, and delete them. Want to log in? Wrong. You are actually creating a new session resource. Want a refund on your purchase? Create a new refund request resource. It is a noun-centric world view.
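For instance, a rough sketch of what “log in” looks like under that worldview (the /sessions URL and field names are invented for illustration):

```javascript
// "Logging in" is modelled as creating a session resource,
// not as invoking a custom LOGIN verb.
async function logIn(username, password) {
  const response = await fetch("/sessions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ username, password }),
  });
  // The new resource's canonical location comes back in a header.
  return response.headers.get("Location");
}
```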
The merits of this design are debatable, of course, but understanding the intention will at least make things less confusing, and you don’t have to wonder why this feature hasn’t been implemented yet.
The noun-centric worldview is precisely why GET and POST are insufficient. There are things you can do with a resource besides get it or create it.
This is still verb-think :)
As another poster mentioned: In your example, you don’t buy something – you create a new purchase resource (or order, or whatever). Yes, this is unnatural. It clearly doesn’t click for most people. Probably it was not the best design. Still, it is absolutely fundamental to the principles of REST and HTTP, and I think it’s easier to just embrace it. I don’t think it’s going anywhere.
So what is the go-to for that, then? Easy device-aware frontends that use REST APIs, without Bootstrap, that actually look good and not like some bare HTML stuff from the 2000s? I just want to throw the logic inside, not hire a frontend dev for my hobby projects. Qt frontends feel easier.
Writing HTML and CSS, maybe using a CSS framework (other than bootstrap if you prefer). Using JavaScript fetch() calls when you need to contact an API.
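As a rough sketch of that approach (the /api/products endpoint and element id are invented for the example):

```javascript
// A plain HTML/CSS page, progressively enhanced with one fetch() call.
async function loadProducts() {
  const response = await fetch("/api/products");
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  const products = await response.json();
  document.querySelector("#products").textContent =
    products.map((p) => p.name).join(", ");
}
loadProducts();
```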
If you weren’t writing CSS with a React site, then you were already using a CSS framework just via React components.
using a CSS framework just via React components
which is something you’ll do sooner or later, as you reach reactive components that need JS to run (date pickers, search bars with dropdown etc). At which point you’ll throw your hands in the air trying to marry the bootstrap code with your react render pipeline.
For teams of a significant size I’m sure this is the case, it’s been a while since I did frontend development in that context.
I still believe it’s entirely possible to write those yourself if needed though, or to bring in React for only a few elements on the page. People made interactive websites before React, of course 😅.
Is my position the most practical? Maybe not. I prefer avoiding react as a first choice because I think that it encourages not understanding the software you are writing.
The commit says it’s an experiment so I don’t see what the problem is. And I wouldn’t expect Google to create a new OSD spec and involve all search engine providers for something they’re only experimenting with.
I don’t think “experiment” is a magic word that lets you blatantly create unfair advantages for your own services. Anti-trust law doesn’t (as far as I know) have an “experiment” exception.
I think if Google was experimenting by suddenly getting all Android phones to automatically connect to search engines, they’d be blamed too.
This could also have been an opt-in behavior on the search engines side, though. Perhaps a non-standard header or meta tag.
but at least it wouldn’t be potentially illegal.
anyway they always have the option to not implement the optimization.
I see your point but I honestly see that as an experiment by developers who want to see if their idea makes any sense. And the best way for them to test is with the Google search engine, since they can’t reasonably start spamming other search engines with pre-fetch requests.
what’s the relevance of it being an experiment? does it exempt them from legal or social obligations?
From another point of view, by creating a spec so that other search engines would benefit from the same optimization, Google would actually be sponsoring better usability for their competition. That gives the competition an unfair advantage, because Google has actually paid for developers’ time when developing this feature, and competitors didn’t.
That maybe sounds like a stretch, but I fail to see why it’s an “unfair” advantage, since Google is paying for the development of Chrome. It’s like saying that Apple plays unfair because it wants iPhone to be integrated with macOS, but leaves Android behind.
Since Google is the most popular search engine, and Chrome is the most popular browser, then optimizing their interaction seems like a sensible thing to do (everything is cheaper and faster). And if competition doesn’t want to stay behind, they should negotiate with Google to be included in the idea.
That’s exactly not how it should be. First of all, Google paid for additional developer time to add to the general “connect to the default search engine” feature a “but only if it’s our search engine” restriction. So the point is not valid here. Also, Google and Chrome are not part of a closed ecosystem, so the Apple iOS example doesn’t work either. And the whole “they paid for it, so why would they allow competitors to benefit?” is a very short-sighted idea. Imagine Google and big companies would generally be allowed to use their power and money to make everything a closed system that only works with their own services. We would be back to the dark ages of the web, aka the nineties, where Microsoft’s IE was the driving force.
And in case you need another point of view: Google, as well as most if not all IT companies, benefited a lot from open source software and contributions. That is the kind of software ecosystem Microsoft wanted to crush back in the day. Without those, Google, their competitors and most of today’s IT companies would not be feasible. Of course they don’t care. They take what they can take and make the best possible profit out of it. This is why we need laws that make competition fair.
I don’t really like entering such response-to-an-argument debates, because they tend to never end, but here goes.
“but only if it’s our search engine”. So the point is not valid here.
It’s cheaper for them to only care about their search engine; they can actually measure how this feature affects their expectations and then pull out the functionality without anyone complaining about it. Also, if a bug pops out because this function doesn’t work with DuckDuckGo, but it still works with Google, then who should fix it? Should it support Yandex, Baidu and Ecosia? Should it support your own search engine? Should Google hire additional support people for this function alone? Who should create documentation for it? What QA team should deploy testing for it? Suddenly this function starts to be bigger (and more expensive) than initially thought.
Also Google and Chrome are not part of a closed eco system. So the Apple iOS example doesn’t work either.
Well, it’s really popular, and there’s no alternative with a similar quality for it, but I don’t think it makes it “public”. Can you extend google.com or use it somehow differently than it’s sold by Google? You can take Chrome and extend it, but that just means that it’s possible to implement this functionality on one’s own (in a fork).
Imagine Google and big companies would generally be allowed to use their their power and money to make everything a closed system that only works with their own services. […] This is why we need laws that make competition fair.
They do it all the time. If not in the products themselves, then with lobbying and with “buying laws” that make their life easier and pose problems to the rest. And then people can’t have nice things, because legally they’re not allowed to, only big corporations can.
And in case you need another point of few: Google, as well as most if not all IT companies benefited a lot from Open Source software and contributions.
I would argue that if an open source project declares its license as BSD, but has a different idea how it should be used, then I think it simply has applied the wrong license. You can’t say “you can do everything with the code”, but at the same time have some expectations that result in a situation that some people are using the code “in the wrong way”. State it in the license and the problem will go away.
The key for me is that much like Internet Explorer before it, Chrome is essentially a pit into which Google throws huge amounts of engineering resources in order to ensure it controls the direction of the entire browser market. There’s little to argue that Chrome brings in any revenue at all for Google in of itself as a product - it exists to dictate the direction of web standards and to provide a vehicle which guarantees the continued dominance and tight integration of Google’s Search.
It really can’t be overstated just how much engineering is continuously going into Chrome. Microsoft couldn’t keep up with Chrome’s release cycles and feature additions. Firefox is continuing to haemorrhage market share due to its increasing inability to keep up with Chrome. Safari is always behind in standards support compared to Chrome, and that’s considering they often decline to support more complex features at all (though, like many others, I suspect they could if they weren’t more interested in preserving their App Store hegemony). Google won’t stop until it burns out or crushes what’s left of Firefox, and I imagine it will be deeply interested in lobbying efforts to “crack” iOS too.
Unfairness is ill-defined, but here it seems to be a proxy for what should be forbidden by anti-trust law. Those laws are specifically designed to curtail actions that would be the “sensible thing to do” for a company with monopolistic powers. Do you reject anti-trust altogether?
The thing is that I trust governments even less than I trust corporations. And with corporations at least I know they are driven by profit, which makes them more predictable. With governments, the politics are just hidden deals between powerful people. So if a government wants to control the market, then the first question I think of is who will control the government.
It probably would save time to come out and say that you don’t believe in anti-trust, so that people can recognize that there is a more fundamental disagreement rather than some other confusion.
It is in fact disabled by default. This whole thing has been full of misinformation from what I’ve seen. A popular post was claiming the new versions recorded you through your microphone. Others said it would phone home if it saw you editing copyrighted audio.
This entire affair brought out the worst in the community. I’m embarrassed by it.
I only read the CLA discussion some time ago, and it was horrible. The entire reason they wanted a CLA in the first place is to have some more flexibility in licensing: GPL2 is incompatible with GPL3, which causes practical problems, and it prevents redistribution on e.g. Apple platforms.
That this is an organisation that has built their entire business on GPL3 software should give them some benefit of the doubt. But nope! Random conspiratorial nonsense all over the place.
Like most projects of this kind the people actually working on it are very few: 169 in the last 11 years (the furthest back the git repo goes), and this includes all the trivial “typo fixes” and such. The meat of the work is done by just 15 people or so, with most of it concentrated in just three. And all of the people who actually did the work actually signed the CLA.
You have people commenting “I will not be contributing any code under those terms” or “I think most contributors will not be okay with this, can say so for myself right now” and you check, and they have not contributed a single line of code. What the hell are you talking about? You’re not contributing code already. In that entire discussion I could find only two people: a translator who threatened to remove their translations (which you can’t do…), and someone who made some substantial contributions in 2010-2011 who had a nuanced in-between position.
The rest: just random people from the internet on a horse so high they need a space suit.
It’s kind of ironic that the fork got “raided” by 4chan, because something not too dissimilar happened to Audacity.
I only read the CLA discussion some time ago, and it was horrible. The entire reason they wanted a CLA in the first place is to have some more flexibility in licensing: GPL2 is incompatible with GPL3, which causes practical problems, and it prevents redistribution on e.g. Apple platforms.
The project seems to be licensed under GPLv2 or later; wouldn’t that clear up any issues with GPLv3 compatibility? Also curious about the issue with Apple platforms: GIMP is GPL and works on Macs, so there is probably some nuance here.
Someone might want to tell them that, as their readme has said otherwise for (at least) the last 12 years:
https://github.com/audacity/audacity/blame/master/README.txt#L48
The LICENSE text has just GPL 2 and there are no file headers, so 🤷 Lawyers can argue which one “applies more”, but probably best to avoid that.
Either way, the entire effort seems to be in good faith. I see no reason to doubt it.
It’s not permissible to alter the GPL itself and still call it the GPL, so putting the “or later version” text in the statement of license in README and/or source files has been the long established practice. If there is some legal challenge to that, it would likely invalidate the intentions of many projects that use that. You’re right that’s a question for the lawyers if it comes to that, but my gut feeling is that they pretty clearly did exactly what people do when they intend to allow later versions, so would expect that to be the default interpretation.
I haven’t followed this enough to have any clear sense of whether I’d consider what’s happening “in good faith,” but I certainly don’t assume that when companies swoop in to pay a bunch of money to acquire a community run project rather than working to establish/improve the existing governance.
putting the “or later version” text in the statement of license in README and/or source files has been the long established practice
It’s more than that - it’s described in section 9 of the GPL[v2] itself, which is what gives those READMEs force.
Agree that it’s unclear for Audacity in particular since different pieces of texts seem to be saying different things. I’m not at all familiar with this project, but the LICENSE.txt appears to explicitly indicate “version 2” with no “or later” clause for the last four years. Applying section 9 of GPLv2, prior to that point the user could have chosen GPLv3, but this change appears to negate that intention. https://github.com/audacity/audacity/blame/master/LICENSE.txt
“I only read the CLA discussion some time ago, and it was horrible. The entire reason they wanted a CLA in the first place is to have some more flexibility in licensing”
I don’t blame people who see this as hostile. What a company did yesterday is not a guarantee for tomorrow. Maybe MuseScore is staffed and owned entirely by people who are pure of heart and have nothing but the best intentions towards FOSS. It’s a company that can be acquired by another company at some point with less Good Intent. CLAs like this should be treated as radioactive.
The CLA says that you grant MUSECY SM LTD “the ability to use the Contributions in any way” which is not just flexibility, that’s “we can take this and distribute it as proprietary software.” If they only wanted flexibility to re-license under more acceptable FOSS licenses they could have written the CLA that way. Never sign a CLA or any legal agreement based on the assumption that the other party will Do The Right Thing. Assume that the other party is going to do any and everything the contract allows. In this case that would include deciding at some point in the future that they’re going to make Audacity closed source or open core or whatever.
“And all of the people who actually did the work actually signed the CLA.”
I wonder, did the people in question get paid for signing over this work? It might have something to do with their willingness to do so. (I’m not saying it’s bad for them to be paid. I’m happy to see people get paid for work on FOSS. But if we’re pointing to their signing the CLA as a justification for its goodness, it’d be good to know whether there was compensation for it or not.)
Yes, people on the Internet who haven’t contributed also have opinions about it. Users do have an interest in the direction that Audacity takes.
@proctrap said that the crash reporting is opt-out, not opt-in. Can we get some sources in here?
Also worth pointing out that Audacity does in fact record you through your microphone. A link to the post would make it possible to evaluate whether it was malicious misinformation, or a joke.
Here is the original telemetry PR - the original implementation was opt-in.
Here is the followup to address the public reaction to #835
Ignore me re: opt-in vs opt-out. I don’t have time to verify either way, but I had seen that correction posted several times elsewhere. And yes, I realize this is me doing what I complained about.
Re: the microphone, this is mostly bad phrasing on my part. The author has since deleted the tweet, but from memory it was roughly “PSA: If you don’t want Audacity to send your microphone input to their new owners then don’t upgrade to 3.0.”, followed by a link to a news article. It had >2,500 retweets some time yesterday. The author later admitted in a reply that they made that up.
The opt-in is just in the telemetry PR, in the PR description, in big bold letters. It’s not hard to verify. In fact, it’s quite hard to claim anything else.
I understand the appeal, and certainly would like more plain HTML/CSS websites around, but this doesn’t resonate with me.
Part of what I like about Gemini is that it has aspects of a creative art project, quaint in its artificial limitations.
As well, Gemini is new, and shows that we can move forward with the lessons we’ve learned. Restricting yourself to specific outdated versions of HTML etc just strikes me as regressive.
Multiple of these are just the standard “I don’t understand floating point” nonsense questions :-/
That doesn’t explain why a script language uses floating point as its default representation, let alone why that is its only numeric type.
JavaScript has decimal numbers now fwiw, though I agree. Honestly I’ve been convinced that IEEE floating point is just a bad choice as a default floating point representation too. I’d prefer arbitrary size rationals.
Arbitrary size rationals have pretty terrible properties. A long chain of operations where the numerator and denominator are relatively prime will blow the representation up in size.
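A small JavaScript sketch (using BigInt; the helper names are made up) of how fast that blow-up happens when the terms stay relatively prime:

```javascript
function gcd(a, b) { return b === 0n ? a : gcd(b, a % b); }

// a/b + c/d, kept in lowest terms
function add([a, b], [c, d]) {
  const num = a * d + c * b;
  const den = b * d;
  const g = gcd(num < 0n ? -num : num, den);
  return [num / g, den / g];
}

let sum = [0n, 1n];
for (const p of [2n, 3n, 5n, 7n, 11n, 13n]) {
  sum = add(sum, [1n, p]);
  console.log(`${sum[0]}/${sum[1]}`);
}
// Denominators grow as 2, 6, 30, 210, 2310, 30030: the product of every prime seen so far.
```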
well at least JS users have 0o10 - 0o5 now, if they find leading 0 octal notation to be confusing.
Thanks for note, wasn’t aware of the ES2015 notation and MDN is helpful as always.
I mean if you want fun 08 is valid JS, and that’s absurd :) (it falls back to decimal, nothing could go wrong with those semantics)
Amusing, I’ve seen people write PHP with leading 0s before. Newer PHP rejects it if there are invalid octal digits - fun! Putting the leading zeroes is common for people used to e.g. COBOL/RPG and SQL; business programming where they’ve never seen C.
really? only like ~5 of the 25 appeared to be floating point related: 0.1 + 0.2, x/0 behavior, 0 === -0 and NaN !== NaN. Correct me if I’m wrong. Most of them seem to be about operators and what type of valueOf/toString behavior one gets when faced with such operators. The only two I got wrong were because I forgot +undefined is NaN, and I was a bit surprised that one could use postfix increment on NaN (and apparently undefined?).
Any arithmetic operation can be performed on NaN, but it always yields another NaN.
The undefined one is a bit weird but kinda makes sense, it is indeed not a number.
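Both of those are easy to confirm in a console:

```javascript
console.log(+undefined);  // NaN: unary plus converts undefined to a number
let x = undefined;
x++;                      // postfix increment converts first, then adds...
console.log(x);           // NaN: ...and arithmetic on NaN stays NaN
console.log(NaN + 1);     // NaN
```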
I actually think what’s weirder is how javascript will sometimes give you basically an integer, x|0 for example. The behavior makes a lot of sense when you know what it is actually doing with floating point, but it is still just a little strange that it even offers these things.
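A couple of examples of what x|0 actually does:

```javascript
// Bitwise operators convert their operands to 32-bit integers first,
// so x|0 truncates toward zero and wraps at 32 bits.
console.log(3.7 | 0);      // 3
console.log(-3.7 | 0);     // -3
console.log(2 ** 31 | 0);  // -2147483648
```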
But again I actually think it is OK. I’m a weirdo in that I don’t hate javascript or even php.
i don’t see where is the contradiction there. JS numbers are IEEE 64-bit floating point numbers, so any weirdness/gotcha in IEEE floating point is also a weirdness/gotcha in JS too
i know that many (most) languages also use floating point numbers by default, but that doesn’t make floating point gotchas any less weird, maybe just more familiar to already-seasoned programmers :)
It’s almost like Google is acknowledging that a browser isn’t the best place for non-trivial applications.
I have yet to encounter any GUI API (web or otherwise) that does digital typography decently. Does anyone know of one that, for example, supports the notion of a baseline grid? See also: font size is useless; let’s fix it.
Because digital typography for the masses is a bad idea. The whole concept of showing a paper page on a screen as a canvas (no pun intended) and use typographic elements as your artist brush is intricate per se.
I think the average Joe would be better served with something in the lines of markdown if only it was what they first had exposure to. WYSIWYG editors have this aura of being simple and direct but their complexity explodes in your face after less than a handful of elements.
I’m using “API” as an umbrella term. Over the years I’ve played with a variety of tools, sometimes called “APIs” or “SDKs” or “toolkits” or, in the case of the web, an amalgam of standards… which include APIs. Whatever you call them, I’m thinking of tools developers use to build software applications with graphical user interfaces (GUI). Here are some examples of what I mean:
There are others that I’m curious about but am less familiar with (SwiftUI comes to mind). I’m genuinely curious to know if any of them give developers the means to lay out text using principles that have been established in the graphic design world for almost a hundred years now by luminaries such as Robert Bringhurst or Josef Müller-Brockmann. All of the tools I’ve used seem to treat typography as an afterthought.
I think that’s overly pessimistic. The specific problem here is trying to embed one document layout system in another. Few apps need to customize the specifics of e.g. text layout to nearly the same extent as Google Docs.
And though I empathize with the idea that it needn’t be this way, I haven’t found many better systems for application distribution than the web. Though maybe I really do just need to sell my soul to QT.
Your argument about “I haven’t found many better systems for application distribution than the web” is somewhat defeated by the very nature of web browsers.
Google distributes an application to multiple platforms with regular automated updates. It’s called Chrome. It’s a POS memory hogging privacy abusing whore of satan, but that’s not really related to it being native or not - Google manages to push those qualities into browser based ‘apps’ too.
But instead of knocking a few layers off the stack and starting again from something akin to the webrender part of what would have been Servo, they’re just re-inventing a lower layer on top of the tower of poop that is the DOM.
The bit about GOTO is literally the last thing mentioned in the transcript, and that’s all it is—mentioned as being “great” without going into any real detail. Personally, I think it sounds horrible, but since it’s not described much past the concept, I can’t say for sure.
They are referring to Algebraic Effects, which are something like goto or call/cc and are getting a lot of attention in the PL community atm.
Because Javascript is broken by design and possibly the only language of its class with such limited numerics.
Javascript has actually had big integers for some time now, though it did not at the time that JSON was designed.
JSON was initially described as a subset by Crockford (this page from 2006). I think it would be common to assume that if you have two values from data of the subset that are unequal, they’d be unequal in the superset too. But with the JSON/JavaScript pair this isn’t the case. Not even after proposal-json-superset, which Wikipedia claims makes JSON a “strict subset”.
I think it would be best if we just explicitly pointed out that JSON isn’t a subset, but it is difficult when even TC-39 uses the terminology like this.
Or am I wrong, and we can’t talk about “equality” of JSON documents at all?
Correct.
JSON defines an encoding, and two specific JSON-encoded byte sequences are comparable for sure. But two abstract JSON objects are not directly comparable, for many reasons, among them (topically to this post) that JSON doesn’t define numbers as precise values, and allows implementations to assert arbitrary precision.
That is, given abstract JSON objects obj1 {v: 9123372036854000123} and obj2 {v: 9123372036854000000}, then obj1 == obj2 is literally undefined. Of course the encoded JSON strings "{"v": 9123372036854000123}" and "{"v": 9123372036854000000}" are comparable, but encoding isn’t bijective, so equality of data doesn’t imply equality of encoded forms: e.g. {v: 1} can be encoded to either "{"v":1}" or " { "v": 1}", which are not equivalent strings, even though they represent equivalent information.

JSON was actually “designed” without any such limitations to numerics, hence the problem here.
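Concretely, a double-based JSON consumer loses the distinction the moment it decodes:

```javascript
// Both encoded documents are valid JSON, but a parser that uses IEEE doubles
// maps them to the same in-memory value.
const a = JSON.parse('{"v": 9123372036854000123}');
const b = JSON.parse('{"v": 9123372036854000000}');
console.log(a.v === b.v);        // true: both rounded to the same double
console.log(JSON.stringify(a));  // '{"v":9123372036854000000}' - the original digits are gone
```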
Also, “quite some while” is not quite some while, with bigints being clumsy instead of the default.
Even tools that don’t use JavaScript have this limitation, for example jq.

Thinking of literals as something that can be stored in constant memory goes back further than JavaScript.