One of the principles of Unicode is that it should be possible to re-encode old documents from legacy character sets to Unicode, so this seems entirely sensible to me.
The proposal addresses this concern on page 5’s section “8. Finiteness”:
We have received concerns that there may be no end to the number of unencoded characters found in old microcomputers and terminals, leading to no end of future proposals should these characters be accepted. We believe this is not the case, for the following reasons.
That is a nice example! This is what I really like about working with Ada, where you have lots of control over your types, even including static and dynamic type predicates with which you can even define a “prime number” type or something.
Making invalid states impossible in the first place makes much more sense than trying to consistently catch degenerate cases at the beginning of functions.
Ada and Pascal (which is almost a subset of Ada) both have really useful ranged integer types (the Ada ones, as you say, are more general, but Pascal has a subset that covers 90% of my requirements). It’s something I miss in other languages. C now has arbitrary bit-width types. C++ lets you build these things, and if you have a constexpr constructor, with C++20’s std::is_constant_evaluated (or C++23’s if consteval) you can get compile-time failures if your constant expressions don’t fit that range, with clamping or wrapping for overflow in the dynamic case. It really feels like something that should be a core part of the standard library, even if it isn’t baked into the language.
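To make that concrete, here is a minimal sketch of the idea in C++20 (RangedInt is a made-up name, not a standard or library type, and a real implementation would offer clamping/wrapping policies): a constexpr constructor that rejects out-of-range values fails at compile time when used in a constant expression and at run time otherwise.

```cpp
#include <cstdio>
#include <stdexcept>

// Made-up ranged integer type for illustration only.
template <int Lo, int Hi>
class RangedInt {
    int value_;
public:
    constexpr RangedInt(int v) : value_(v) {
        // Reaching a throw during constant evaluation makes the program ill-formed,
        // so out-of-range constants fail at compile time; at run time you get an
        // exception (a real library might clamp or wrap instead).
        if (v < Lo || v > Hi)
            throw std::out_of_range("value outside range");
    }
    constexpr int get() const { return value_; }
};

int main() {
    constexpr RangedInt<1, 12> month{7};     // checked at compile time
    // constexpr RangedInt<1, 12> bad{13};   // would fail to compile
    RangedInt<1, 12> runtime_month{7};       // checked at run time instead
    std::printf("%d\n", runtime_month.get());
}
```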
You gave a great overview, much better than I could’ve written it. Just one remark: Arbitrary-bit-width is not enough, in my opinion. The strong suit of Ada, regarding numerical types, is that you have full control. To give an example, you can define a fixed point type with prespecified range and precision, which is unheard of and very cool. You can also add type predicates to run automatic proofs.
Even though I love working with C and try to use the type system as much as possible (just “const” parameter correctness alone saved me from a lot of bugs over the years), having tasted the strength of the Ada/Pascal/Haskell/etc. type systems leaves you wanting in many places.
By the way, your name is somehow really familiar to me, but I can’t wrap my head around it.
A lot of the design of Common Lisp came to be in the same sort of milieu as Ada, and unsurprisingly has arbitrary range types, but no one ever uses them, because, of course, their utility is somewhat limited in a fully dynamically typed language ^_^
Not entirely true. There is a mathematical structure known as a wheel (https://en.wikipedia.org/wiki/Wheel_theory) in which division by zero is well-defined and maps to a special “bottom” element. I have never seen anyone use this, or seriously develop “wheel theory”, but it is fun to know that it exists!
I actually wrote my Bachelor thesis about this. See chapter 3.1, where I actually proved that division by zero is well-defined in terms of infinite limits in this projectively-extended form of real numbers (a “wheel”). See Definition 3.1, (3.1e) and Theorem 3.6. It all works, even dividing by infinity! :D
What I noticed is that you actually do not lose much when you define only one “infinity”, because you can express infinite limits to +infinity or -infinity as limits approaching the single infinity from below or above (see chapter 3.1.1).
Actually this number wheel is quite popular with next-generation computer arithmetic concepts like unums and posits. In the current Posit standard, the single infinity was replaced with a single symbol representing “not a real” (NaR). It’s all very exciting and I won’t go into it here, because I couldn’t do justice to just how much better posits are compared to floats!
One thing I’m pretty proud of is the overview in Table 2.1, which shows the problem: You really don’t need more than one bit-representation for NaN/NaR, but the IEEE floating-point numbers are very wasteful in this regard (and others).
While NaN representations make up only 0.05% of 64-bit (double) floats, they make up 0.39% of 32-bit (single) floats and 3.12% of 16-bit (half) floats! The formula to calculate the ratio is simple: if n_e is the number of bits in the exponent and n_m is the number of bits in the mantissa, then there are 2^(1+n_e+n_m) floating-point numbers and 2^(n_m+1)-2 NaN representations. The NaN percentage is therefore p(n_e, n_m) = 100 / 2^(n_e) * (1 - 2^(-n_m)).
Mixed precision is currently a hot topic in HPC, as people move away from using doubles for everything, given they are often overkill, especially in AI. However, IEEE floats are bad enough at >=32 bits, let alone in small-bit regimes. In some cases you want to use 8 bits, which is where IEEE floats just die. An 8-bit minifloat (4 exponent bits, 3 mantissa bits) wastes 5.5% of its bit representations on NaN. This is all wasted precision.
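If it helps to see the numbers, here is a small sketch that plugs the formula above into the formats mentioned (the bit splits are the standard IEEE ones plus the 1-4-3 minifloat from the previous paragraph; purely illustrative):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    struct Fmt { const char* name; int ne; int nm; };
    const Fmt formats[] = {
        {"binary64 (double)", 11, 52},
        {"binary32 (single)",  8, 23},
        {"binary16 (half)",    5, 10},
        {"minifloat (1-4-3)",  4,  3},
    };
    for (const Fmt& f : formats) {
        // p(n_e, n_m) = 100 / 2^n_e * (1 - 2^-n_m)
        double p = 100.0 / std::pow(2.0, f.ne) * (1.0 - std::pow(2.0, -f.nm));
        std::printf("%-18s %5.2f%% of encodings are NaN\n", f.name, p);
    }
}
```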
The idea behind posits is to use tapered precision, i.e. use a mixed-bit-length exponent. This works really well, as you gain a lot of precision with small exponents (i.e. values near 1 and -1, which are important) but have a crazy dynamic range as you can actually use all bits for the exponent (and have implicit 0-bits for the mantissa). In the face of tapered precision, the custom float formats by the major GPU manufacturers just look comically primitive.
You might think that posits would be more difficult to implement in hardware, but actually the opposite is true. IEEE floats have crazy edge cases (subnormals, signed 0, etc.) which all take up precious die space. There is a lot of low-hanging fruit for proposing a better number system.
Sorry for the huge rant, but I just wanted to let you know that “wheel theory” is far from obscure or unused, and actually at the forefront of the next generation of computer arithmetic concepts.
Though if I understand the history correctly, the original intent of leaving tons of unused bits in NaN representations was to stash away error codes or other information that might be generated at different steps in a numerical pipeline, right? They never ended up being actually used for that, but what did happen eventually is that people started stashing other info into them instead, like pointers, and we got the NaN-boxing now ubiquitous in dynamic language runtimes like JS and LuaJIT. So it’s less a mistake and more a misdesign that turned out ok anyway, at least for the people doing things other than hardcore numeric code. That said, you can’t really NaN-box much interesting info inside an 8-bit float, so the IEEE repr is indeed wasteful there, especially when the entire goal of an 8-bit float is to squeeze as much data into as little space as possible.
Out of curiosity, does the posit spec at all suffer for having a single NaR value instead of separate NaN and infinity values? I vaguely understand how you can coalesce +inf and -inf into a single infinity and it works out fine, but when I think about it in terms of what error cases produce what results, to me infinity and NaN express different things, with NaN being the more severe one. Is it just a matter of re-learning how to think about a different model, or are there useful distinctions between the two that posits lose?
As far as I know, there was no original intent to allow metadata in NaN representations; JS/LuaJIT just made smart use of them. It’s always a question of what you want your number format to be: should it be able to contain metadata, or only information on a represented number? If you outright design the format to be able to contain metadata, you force everybody’s hand, because you sacrifice precision and dynamic range in the process. If you want to store metadata on the computation, I find it much more sensible to have a record type of a float plus a bitstring for flags or something. I see no context, outside of fully controlled arithmetic environments (where you could go with the record type anyway), in which one would be able to make use of the additional information.
Regarding your other point: Posits are not yet standardized and there’s some back and forth regarding infinity and NaR and what to use in posits, because you can’t really divide by zero, even though it’s well defined. I personally don’t see too much of an issue with having no infinity-representation, because from my experience as a numerical mathematician, an unexpected infinity is usually the same as a NaN condition and requires the same actions at the end of the day, especially because Infs very quickly decay into NaNs anyway. This is why I prefer NaR and this is what ended up in the standard.
The only thing I personally really need in a number system is a 100% contagious NaR which indicates to me that something is afoot. An investigation of the numerical code would then reveal the origin of the problem. I never had the case where an infinity instead of a NaN would have told me anything more.
To be completely clear: Posit rounding is defined such that any number larger than the largest posit is rounded to the largest posit (and the smallest accordingly). So you never have the case, in contrast to IEEE floats, where an input is rounded to NaR/+-infinity. Given +-infinity is by construction “not a real”, I find it to be somewhat of a violation to allow this transition with arithmetic operations that are well-defined and defined to yield only reals.
Dropping infinity also infinitely reduces the necessary complexity for hardware implementations. IEEE floats are surreal with all their edge cases! :D
I vaguely understand how you can coalesce +inf and -inf into a single infinity and it works out fine
But you lose some features. My vague understanding is that +/-0 and +/-inf exist to support better handling of branch cuts. Kahan says:
Except at logarithmic branch points, those functions can all be continuous up to and onto their boundary slits when zero has a sign that behaves as specified by IEEE standards for floating-point arithmetic; but those functions must be discontinuous on one side of each slit when zero is unsigned. Thus does the sign of zero lay down a trail from computer hardware through programming language compilers, run-time support libraries and applications programmers to, finally, mathematical analysts.
Yes, this was the post-justification for signed zero, but it creates many more problems than it solves, creating many many special rules and gotchas. If you do proper numerical analysis, you don’t need such things to hold your hand. Instead, because it’s totally unexpected for the mathematician, it leads to more errors.
It’s a little known fact that Kahan actually disliked what the industry/IEEE did to his original floating point concepts (I don’t know how he sees it today), and this is not the only case where he apparently did some mental gymnastics to justify bad design afterwards to save face in a way.
I had never heard of that concept. I want to share how I understand dividing by zero from calc2 and then relate that back to what you just shared.
In calc 2 you explore “limits” of an equation. This is going to take some context, though:
Understanding limits
To figure out the limit of 1/x as x approaches 1, you would imagine starting at some number slightly greater than 1, say 1.1 and gradually getting smaller and checking the result:
1/1.1
1/1.01
1/1.001
etc.
But that’s not all. You also do it from the other direction so 0.9 would be:
1/0.9
1/0.99
1/0.999
etc.
The answer for the “limit of 1/x as x approaches 1” is 1. This is true because the results converge to the same number whether you approach 1 from above or from below (even if they never actually quite reach it). Wolfram Alpha agrees: https://www.wolframalpha.com/input?i=limit+of+1%2Fx+as+x+-%3E+1
But, limits don’t have to converge to a number, they can also converge to negative or positive infinity.
Limit of dividing by zero
Now instead of converging on 1, let’s converge on zero. What is the “limit of 1/x as x approaches zero”?
We would check 0.1 and go down:
0.1
0.01
0.001
etc.
And as it goes up:
-0.1
-0.01
-0.001
etc.
The problem here is that coming from the top and going down, x approaches (positive) zero, while starting at the bottom and going up, x approaches negative zero, which is a thing I promise (https://en.wikipedia.org/wiki/Signed_zero), and the results 1/x from the two sides head off in opposite directions. Since they don’t converge to the same value, it cannot be both answers, therefore division by zero (under this model) is unknowable, and Wolfram Alpha agrees: https://wolframalpha.com/input?i=limit+of+1%2Fx+as+x+-%3E+0.
As a “well actually” technical correctness note: I’m explaining this as I intuit it, whereas in reality 1/x as x approaches 0 goes to positive infinity and negative infinity. I know it has something to do with Taylor expansion, but I’ve been out of school too long to explain or remember why. Even so, my explanation is “correct enough” to convey the underlying concept.
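For anyone who likes seeing the numbers, a tiny throwaway loop that tabulates both cases from above (purely illustrative):

```cpp
// 1/x near x = 1 settles on 1 from both sides, while 1/x near x = 0
// runs off in a different direction on each side.
#include <cstdio>

int main() {
    std::puts("x -> 1:");
    for (double d = 0.1; d >= 1e-4; d /= 10.0)
        std::printf("  1/%g = %-10g   1/%g = %g\n",
                    1.0 + d, 1.0 / (1.0 + d),    // from above: 1.1, 1.01, ...
                    1.0 - d, 1.0 / (1.0 - d));   // from below: 0.9, 0.99, ...
    std::puts("x -> 0:");
    for (double x = 0.1; x >= 1e-4; x /= 10.0)
        std::printf("  1/%g = %-10g   1/%g = %g\n",
                    x, 1.0 / x,                  // from above: heads toward +infinity
                    -x, 1.0 / -x);               // from below: heads toward -infinity
}
```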
Wheel
As you get into higher mathematics, I find that it feels more philosophical than concrete. There are different ways to look at the world, and if you define the base rules differently then you can have different mathematical frameworks (somewhat like, but not exactly the same as, how there is “regular” and “quantum” physics).
It looks like wheel theory said “negative and positive infinity aren’t different”, which is quite convenient for a lot of calculations, and then suddenly 1/x does converge to something when it goes to zero.
limits don’t have to converge to a number, they can also converge to negative or positive infinity.
“As a “well actually” technical correctness note”: if one is working in the real numbers, which don’t include infinities, a limit returning infinity is more strictly called “diverging” to positive or negative infinity. A limit can also “diverge by oscillation”, if the value of the expression of which the limit is taken keeps changing forever, like sin(x) as x tends to infinity.
What this says is that the limit of 1/x as x approaches 0 is not defined, not that 1/0 itself is undefined. Consider sin(x)/x. If you do the math, the limit as x → 0 is 1, but sin(0)/0 = 0/0.
Another interesting example is the Heaviside function, x < 0 ? 0 : 1. The limit as x → 0 from the left is 0, the limit from the right is 1, so the limit doesn’t exist. But the function is well defined at 0!
You can express limits as approaching from a direction though, can’t you? So you can say that lim -> +0 is 1 and lim -> -0 is 0. It’s not that the limit doesn’t exist, but a single limit doesn’t exist, right?
Why is this all way more fun to think about now than when I was taking calc 1 and actually needed to know it?
To me, a more noteworthy one is that the limit of pow(x, x) as x approaches zero (from the right) is 1. x/x = 1 for almost all values of x, but pow(x, x) = 1 is much rarer.
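For what it’s worth, the usual way to see why that limit is 1 (a standard argument, taking the limit from the right since x^x isn’t generally real-valued for negative x):

```latex
\lim_{x \to 0^{+}} x^{x}
  = \lim_{x \to 0^{+}} e^{x \ln x}
  = e^{0} = 1,
\qquad\text{since}\quad
x \ln x = \frac{\ln x}{1/x}
  \;\xrightarrow{\text{L'H\^{o}pital}}\;
  \frac{1/x}{-1/x^{2}} = -x \;\to\; 0 .
```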
It all boils down to the fact that we will soon need “proof of life” on the web to impose “digital scarcity”. If you spend hours writing a blog post, it is easy for you to invest the tokens you accumulated in that time to mark your blog post as “valuable”. The problem is that this still does not protect the web from paid spammers in third world countries churning out masses of bullshit blog posts and them investing their “proof of life” into their spam work.
You could introduce the element that any human reading your blog post will “invest” their “proof of life” for the duration taken to read the post (leading to bullshit posts receiving less consideration than useful ones). However, this does not protect the web from people selling their “proof of life” to the highest bidder, which is what we also already see with click farms and paid surveys.
Even if this problem is solved based on the “proof of life” idea, one still has to wonder how to actually implement it such that it is not replicable by machines. I am certain, though, that there is no way around having to use authoritative data on humans in some way, just like we need certificate authorities.
I think it’s burning out people left and right who are trying to chase all the technological trends and fads in highly active fields of our trade. This is most prominently seen with web developers, who change their preferred software stack every 6-12 months. Have websites become better over the years given the amount of time poured into this technological tumbler? No, they have become worse, e.g. more bloated, less accessible, slower, more energy-consuming, etc.
This is why I think it makes more sense to build a career on top of established technologies and more stable departments. One department to avoid is probably frontend development, which sees many shifts of core technologies over the years and is usually not well-paid: it is almost impossible to mature in this field, and given the frequent technology shifts, companies are effectively unable to accumulate technical debt in a specific technology that would warrant paying experts to fix and maintain their stuff.
Backend development on the other hand, with some exceptions, tends to be conservative in terms of employed technologies. With just PHP, MySQL, Java, C/C++, Go (and others) making up the majority of the software stack, you could have lived under a rock since 2020 and still be probably good to go with a very flat learning curve. Companies will have invested in a much smaller and more stable set of technologies, and they will have accumulated technical debt motivating them to pay you good money to take care of their software for them.
Just look at the job listings: most are looking for C#, Java, C/C++ developers; the few web development gigs are usually not well-paid, but backend can still keep you afloat. I recently noticed how few developers in my age group (late 20s) actually know how to program in C (not C++). Considering how much software is still written in this language (often for good reasons), this niche could become very profitable in the future, just like COBOL turned into a well-paid niche after the ’80s.
So yeah, to briefly reflect on the article’s topic: It’s your choice how quickly your knowledge diminishes, and your own fault if you choose a profession that does not value your significant time investments to master a given technology.
True mastery is not an end in itself, as it heavily depends on the context it is formed and used in.
Typically not ones that break backward compatibility, just ones that offer a new (usually better) way to do specific things.
Old-style React code generally keeps working fine and interoperates with new-style code, and most of your existing React knowledge still applies. Some of my company’s web app code was written in 2019 and still works fine with the current version.
Every active ecosystem has paradigm shifts, including on the backend. Java is in the middle of one right now with virtual threads and structured concurrency, and it follows an earlier paradigm shift that resulted in a lot of reactive-style Java code being written. Java 8 introduced lambda functions and some people hated them but now they’re ubiquitous.
(Disclaimer: I do way more backend work than UI work, so my view on React isn’t as well-informed as someone more frontend-focused, but I’ve ended up needing to write React code often enough over the years to be familiar with the ecosystem.)
With just PHP, MySQL, Java, C/C++, Go (and others) making up the majority of the software stack, you could have lived under a rock since 2020 and still be probably good to go with a very flat learning curve.
This is one reason I flat-out refuse to touch JavaScript with a 10-foot pole. I went with backend Python + Go for exactly this reason. It’s exhausting when the job market wants you to stay up to date with a tech stack that changes completely every six months or so.
A wise choice then, sir! I taught myself some vanilla JavaScript, because it helps in some cases and is a core web technology, but the real poison are all those frameworks on top of it, which are also the ones switching all the time and burning people out.
It’s amazing how much you can achieve with plain old JavaScript. An added bonus is the fact that websites usually remain light and fast with handcrafted JavaScript code.
You do not need Cloudflare for DDoS protection. Many hosters offer it as part of their packages – Hetzner, for instance, includes it with their VPS and hosting plans.
Great story, but probably the most important lesson to learn from this remained un-remarked on in the conclusion:
I was into my unofficial second shift having already logged 12 hours that Wednesday. Long workdays are a nominal scenario for the assembly and test phase.
Whoever was responsible for the crunch-time situation really deserves the blame for the problem, not the person who wired the breakout box.
I am in this world. The deadline is set by the orbit of Mars; if you miss it you are delayed for two years, so there is an extreme amount of pressure to hit the launch window. Secondly, every space mission of this class is a Fabergé egg with 217 separate contractors contributing their custom jewels. There are always integration issues, even assuming there wasn’t some fundamental subsystem issue that delayed delivery for integration. Even when rovers are nominally the same platform, they still have quirks and different instruments that mean they are still firmly pets rather than cattle. Given the cost per kg of launch, every subsystem has to be incredibly marginal and fragile weight-wise, else it’s a gramme taken away from science payloads, which is ultimately the whole purpose of these missions. As a result, things are delicate and fussy and have very un-shaken-down procedures. It’s the perfect storm for double shifts.
There’s also an often-ignored aspect that’s easy to miss outside regulated fields: there is an ever-present feeling that there is one more thing to verify, and it’s extremely pressing because that may be your last chance to check it and fix it. About half the double shifts I’ve worked weren’t crunch time specifically, we weren’t in any danger of missing a deadline (I was in a parallel world where deadlines were fortunately not set by the orbits of celestial objects). It’s just you could never be too sure.
Also, radiation-hardened instruments and electronic components have a reputation for ruggedness that gives lots of people surprisingly incorrect expectations of ruggedness and replicableness (is that even a word?) about many spacecraft. These aren’t serially-manufactured flying machines, they’re one-, maybe two-of-a-kind things. They work reliably not because they’ve gone through umpteen assembly line audits that result in a perfect fabrication flow, where everything that comes off the assembly line is guaranteed to work within six sigma. Some components on these things are like that, but the whole flying gizmo works reliably only because it’s tested into oblivion.
Less crunch would obviously be desirable. But even a perfectly-planned project with 100% delay-free execution will still end up with some crunch, if only because test cycles are the only guarantee of quality so there will always be some pressure to use any available time to do some more of those and to avoid mishaps by making procedures crunch-proof, rather than by avoiding the crunch.
I did a little searching about this. The project was green-lit in mid 2000 with a launch window in the summer of 2003, so about 3 years to build not one but two rovers and get them to Mars for a 90-day mission. Check out this pdf of a memo from what would be riiiiight smack in the middle of that schedule:
The NASA Office of Inspector General (OIG) conducted an audit of the Implementation of Faster, Better, Cheaper (FBC) policies for acquisition management at NASA. By using FBC to manage programs/projects, NASA has attempted to change not only the way project managers think, but also the way they conduct business. Therefore, we considered FBC a management policy that should be defined, documented in policy documents, and incorporated into the strategic planning process. Although NASA has been using the FBC approach to manage projects since 1992, NASA has neither defined FBC nor implemented policies and guidance for FBC. Without a common understanding of FBC, NASA cannot effectively communicate its principles to program/project managers or contractor employees. In addition, the Agency has not incorporated sufficient FBC goals, objectives, and metrics into NASA’s strategic management process. Therefore, missions completed using FBC are outside the strategic management and planning process, and progress toward achieving FBC cannot be measured or reported. Finally, NASA has not adequately aligned its human resources with its strategic goals. As a result, the Agency cannot determine the appropriate number of staff and competencies needed to effectively carry out strategic goals and objectives for its programs.
My paraphrase: Y’all told everyone to do stuff faster, better, and cheaper, but then didn’t actually make any policies for how to do that, or how to measure your success at doing that. Oh, and y’all suck out loud at staffing.
They include the management response which was basically: Well… yeah that’s a fair point. Also it’s not Faster Better Cheaper’s fault we suck at staffing! We just suck at staffing in general. We plan to develop plans to fix that next year!
I’m not joking about that “plan to develop plans” part btw. Here’s the full quote:
NASA also only partially concurred with the recommendations to align staffing with strategic goals because management does not view FBC as the cause for the staffing issues identified. However, NASA plans to develop a workforce plan for each Center that will link staffing, funding resources, mission and activities and core competencies. In addition, the fiscal year 2002 Performance Plan will include a discussion of Agency human resources.
Big oof.
Despite all of this, the rovers meant to last like 3 months lasted 6 years and 14 years respectively. ¯\_(ツ)_/¯
Good point! Another aspect is that you should design systems in such a way that inadvertent misconnections become impossible, even for low-level testing. If that’s not possible with the hardware, a very simple pre-test in the given case would have been to measure the impedance and resistance and abort on any excessive reading.
To build a bridge to programming: Design your interfaces such that they cannot be broken with bogus input. This especially applies to low-level functions that are only explicitly called in tests, because you can mess up test inputs easily by accident. One approach is to use a strong type system, e.g. a function “{Real, NaR} log10(x:Real)” is much more fragile than “Real log10(x:StrictlyPositive)”, which is constrained by the type system not to yield NaR (not a real) in any case.
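For illustration, a minimal C++ sketch of that idea (StrictlyPositive and log10_checked are invented names; a real version would likely throw or return an error type rather than assert): the constructor is the single place that enforces the invariant, so the function itself can never produce a NaN.

```cpp
#include <cassert>
#include <cmath>
#include <cstdio>

class StrictlyPositive {
    double value_;
public:
    explicit StrictlyPositive(double v) : value_(v) {
        assert(v > 0.0 && "StrictlyPositive requires v > 0");  // invariant enforced once, here
    }
    double get() const { return value_; }
};

double log10_checked(StrictlyPositive x) {
    return std::log10(x.get());  // a value that passed the check can never yield NaN here
}

int main() {
    std::printf("%f\n", log10_checked(StrictlyPositive{1000.0}));  // prints 3.000000
    // log10_checked(StrictlyPositive{-1.0});  // trips the assert instead of silently giving NaN
}
```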
I like this distinction; I’ve met the same thing in OCaml (and wrote my own thread pool that can be used for both, on OCaml multicore). Just like the article says, I’d typically have both a pool for CPU-bound tasks (one thread per CPU core) and a pool with many more threads to handle queries (e.g. HTTP queries) in direct style. If I used an event loop for IO there would still be a need for the first kind of pool to deal with CPU-heavy tasks.
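A minimal sketch of that two-pool setup (toy code, not production quality; a real project would use an existing pool implementation):

```cpp
// A small pool sized to the CPU for compute-heavy work, plus a larger one
// for tasks that mostly block on I/O.
#include <algorithm>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class Pool {
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
public:
    explicit Pool(unsigned n) {
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this] {
                for (;;) {
                    std::function<void()> task;
                    {
                        std::unique_lock<std::mutex> lock(m_);
                        cv_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                        if (done_ && tasks_.empty()) return;   // drain queue, then exit
                        task = std::move(tasks_.front());
                        tasks_.pop();
                    }
                    task();
                }
            });
    }
    void submit(std::function<void()> f) {
        { std::lock_guard<std::mutex> lock(m_); tasks_.push(std::move(f)); }
        cv_.notify_one();
    }
    ~Pool() {
        { std::lock_guard<std::mutex> lock(m_); done_ = true; }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }
};

int main() {
    // One thread per core for CPU-bound work, many threads for blocking queries.
    Pool cpu_pool(std::max(1u, std::thread::hardware_concurrency()));
    Pool io_pool(64);
    cpu_pool.submit([] { /* heavy computation goes here */ });
    io_pool.submit([]  { /* blocking HTTP/DNS query goes here */ });
}
```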
For event loops there’s also typically a need (at least on Linux) to hand off DNS lookups to a thread pool because the libc APIs are blocking. You can get around this by implementing DNS in a non-blocking manner, but then you lose all the other name lookups available via the system API (e.g. mDNS/zeroconf).
The really annoying thing there is that DNS-style lookups are best done with something that looks like a stackless coroutine (send the request, keep a continuation for what to invoke on response), but the libc APIs require a stack for the duration of their call. It would be nice to have an asynchronous getaddrinfo that returned an opaque token and a mechanism for registering it with kqueue / epoll / whatever, and a function to call to process the results.
Right, as far as I understand that’s what libuv does, for example. It has a thread pool to handle DNS and disk IO (at least when not using io_uring), and the actual event loop handles networking and timers.
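As a rough sketch of that hand-off (std::async stands in for a dedicated pool; resolve_async is a made-up name), the blocking getaddrinfo call is pushed onto another thread and the caller gets a future back:

```cpp
#include <future>
#include <memory>
#include <netdb.h>
#include <string>
#include <sys/socket.h>

using AddrList = std::unique_ptr<addrinfo, void (*)(addrinfo*)>;

std::future<AddrList> resolve_async(std::string host, std::string service) {
    return std::async(std::launch::async, [host, service] {
        addrinfo hints{};
        hints.ai_socktype = SOCK_STREAM;
        addrinfo* res = nullptr;
        if (getaddrinfo(host.c_str(), service.c_str(), &hints, &res) != 0)
            res = nullptr;                        // real code would propagate the error
        return AddrList(res, freeaddrinfo);
    });
}

int main() {
    auto pending = resolve_async("example.com", "https");
    AddrList result = pending.get();              // blocks here, but not the event loop thread
}
```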
The way things should work is that there should be a DNS server on localhost which can resolve any kind of name lookup, and then applications should use some actually good and useful DNS library instead of the libc nonsense.
You’re not wrong, but your test is broken. Both versions are just storing a constant into memory n times, because the value of small is known a priori, so the computation of big is optimized out entirely. The DoNotOptimize enforces that the value of big is considered “used” (otherwise the loop would have no observable effects and could be removed entirely), but movw $0x2727, 0xe(%rsp) is enough to satisfy that. It doesn’t force the computation of big to be executed.
Ah, you are right. I redid the code by making small a random number continually changed each pass of the loop. It does still come out to the same assembly with either implementation, but now the all-important shl $0x8,%eax is there.
That’s a cool way to think about it, thank you for bringing it up. I think both direct bit copying and multiplication need an explanation anyway if you aren’t familiar with the problem and its solution, so it’s not clearly a win clarity-wise.
When it comes to performance, well, bit operations are always fast, so at least you’ll get peace of mind when opting for those, even if it doesn’t matter in the end.
I played around with this and seems like you can’t do it with multiplication if the high bit depth isn’t divisible by the low one. For example RGB565 pixel formats are common and you need to expand 5-bit channels to 8-bit ones to display them on screen.
I don’t think you can do that with integer multiplication because you need to “fill” 3 low bits and you only have integer factors of a 5-bit number at hand. I added a mention to the article.
Multiplying by 257 looks like magic (although less so if you write it as 0x101 or 0b100000001). Shift-and-or tells you exactly what you really need to know: 00 becomes 0000, FF becomes FFFF, everything in between is monotonic.
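For reference, a small sketch of both tricks from this subthread (the function names are mine):

```cpp
#include <cassert>
#include <cstdint>

// 8-bit -> 16-bit: x * 257 == (x << 8) | x; 0x00 -> 0x0000, 0xFF -> 0xFFFF.
uint16_t expand8to16_mul(uint8_t x)   { return static_cast<uint16_t>(x * 257u); }
uint16_t expand8to16_shift(uint8_t x) { return static_cast<uint16_t>((x << 8) | x); }

// 5-bit -> 8-bit (e.g. an RGB565 channel): no integer factor lands exactly on the
// endpoints, so replicate the top bits into the low 3 bits instead.
uint8_t expand5to8(uint8_t x) { return static_cast<uint8_t>((x << 3) | (x >> 2)); }

int main() {
    for (unsigned x = 0; x <= 0xFF; ++x)
        assert(expand8to16_mul(uint8_t(x)) == expand8to16_shift(uint8_t(x)));
    assert(expand5to8(0) == 0 && expand5to8(31) == 255);  // endpoints map exactly
}
```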
I guess I’m a bit curious on a meta level where the aversion to paying for content comes from. Like, I get that someone is inevitably going to reply with “I live in a country/situation where it is impossible to pay, you insensitive clod!” but take as a given that I am asking this question of people who have access to an accepted payment method and who have sufficient income to afford to do so.
My partner and I watch a lot of stuff together on YouTube, mostly cooking and crafts stuff, and I watch a lot of stuff related to my own hobbies. So I pay for premium to get it ad-free. I’ve historically also watched a fair amount of Twitch streamers related to my hobbies, so I’ve paid for subscriptions to them to get ad-free viewing. I support a podcast that I like, and get ad-free episodes in return. I make a recurring monthly donation to the admin of the Mastodon server I use. I’ve bought merch or other products from artists I liked.
I have the means to do this. Other people who have the means to do this: why don’t you?
(I also run heavy ad-blockers on all my devices, of course, if nothing else as a security/privacy measure, but when I find something I like I still tend to seek out a way to pay to support it so that the thing I like will continue to exist)
where the aversion to paying for content comes from
I have the means to do this. Other people who have the means to do this: why don’t you?
I don’t have an aversion to paying for content: I happily do that for music (on bandcamp or on CDs), books, games and would be happy to do that for films and shows given the opportunity.
I do, however, have an aversion to paying for services. Take Spotify, for example - $10 per month, and you get a pretty good selection of music, but it’s tied to a pretty terrible music player. On top of that, from those $10 I pay, most of it goes to Spotify itself and the top played artists globally, not the ones I actually listen to. So I’m theoretically paying for content, but in practice most of it is for the service (which I’d rather not have – I’d prefer to have my music offline, in a media player that doesn’t suck), and content creators I don’t care about. $10 for a service I don’t like, and supporting people I don’t want to support. Compared to Bandcamp which takes ~20% and just gives me the files (and streaming options if I want them), this is a terrible value proposition for either of my goals – supporting the artists and getting a good product out of it.
Now for YouTube this is a bit of a different story. I don’t know what the revenue share is like when it comes to the Premium subscriptions, and how much my favourite channels would really get out of it. But YouTube has become a monopolist and annihilated the competition by being the good, free product with no ads in it. Now that it’s comfy on its throne, it’s pulling the rug out and abusing its position to push whatever it wants – unskippable ads, adblock-blockers, idiotic copyright and monetization policies… and we have nowhere else to go. Is this something you’d want to support? I see a product actively becoming worse over the years, and I’m supposed to believe that once I start paying it’ll become better?
Were it a good product that becomes better if you pay for it – that’d be something worth considering. That reminds me of Twitter in its prime, in its golden days. If it asked for money back then and gave extra stuff in return – new functionality, unrestricted Client API access etc, I’d happily throw my money at them in exchange for something extra. But nowadays it’s been on an almost comical downward spiral, getting worse and worse every week (not to mention all the people I cared about leaving), it’s taken most of the good things about it away and now it asks for money? It locked me out of Tweetdeck, leaving me with the absolutely abhorrent default client and promises that if I pay I’ll get it back? No thanks!
And it’s the same with YouTube. Lock in 120Hz+ streams behind premium and I’ll happily pay extra for the smoother videos. But with what they’ve been doing over the years, paying to get some of the good old days back just doesn’t sit right with me. And if I wanted to support the content (creators), then paying for YouTube is a very suboptimal way of going about it.
Take Spotify, for example - $10 per month, and you get pretty good selection of music, but it’s tied to a pretty terrible music player
I don’t think this is true. I don’t use Spotify, but my partner does, so I have set up spotifyd (runs at least on FreeBSD and Linux) and it seems to work well. We control it with the official client, but I believe there are other things that replace the control interface.
I do, however, have an aversion to paying for services. Take Spotify, for example
I guess this is a trendy thing to claim, but I’m not sure I understand the logic behind it.
For example: I have bookcases full of books that I’ve bought, and also an ebook reader with even more books I’ve bought. But I also have a library card. I only buy a book if I think I’m going to want to re-read it, or be able to instantly refer to it, multiple times. If I think I’m only ever going to read a particular book once, well, I’m probably not going to buy a copy; instead I’ll look to borrow one from the library.
I see streaming music services as being similar to a library card. They let me sample a lot of things that I wouldn’t ever listen to if my only option were to buy and forever own a copy. And when they do turn up something I like enough to re-listen many times, that’s when I do go buy a copy.
Were it a good product that becomes better if you pay for it
I’ve never understood why media needs to go above and beyond to get someone to pay for it. In the old days I could watch a movie when it was shown on a broadcast TV channel and accept the ads the TV channel would insert, or I could pay to watch the movie in a theater without any ad breaks, or pay to rent or own a copy of the movie on VHS without any ad breaks in it.
I look at YouTube the same way: the cost of “free” content is advertising, and I can pay to remove the ads. I don’t need it to also offer a bunch of other above-and-beyond features on top of that.
My library has rules for what I can do with the books I borrow. Which makes sense given that they remain the property of the library.
So I’m not really sure what your point was here. Yes, libraries impose terms and conditions on their patrons. If you want to argue the nuances of which terms and conditions are morally acceptable to you personally, that’s a completely different topic than what was being discussed.
I guess this is a trendy thing to claim, but I’m not sure I understand the logic behind it
I don’t know if it’s trendy or not, but I’m not surprised. And it’s connected to your followup points and examples: library cards, VHSes etc – it’s hard to shake off the feeling that things used to be better. Library cards are free. VHSes I can buy and keep. Subscription services give me the worst of both worlds: I have to keep paying, I don’t get to keep anything, and they impose a ton of restrictions on how I get to consume the content.
If I have some euros to spare, I can buy myself a music album (physical or digital), which I can then listen to whenever I want to and give it away (discreetly) or sell it (if it’s physical) if I don’t want it anymore. I can buy a book in a similar way. I used to be able to do that with video games too – as a kid I bought a game, played it for a while, then traded it for a comic book with a rabbit samurai which I still have on my shelf (and which has since gone up in value :)). That’s a pretty good deal! Not so much with the subscription-based alternatives though. I guess the upside is that I have access to a vast library and I can access it anywhere I want, but it’s not only not that important to me personally, it also has to compete with free services that do the exact same thing.
This is where we get to the “above and beyond”. Things, in my view, have become worse for the consumers. They may have become slightly more convenient in some cases, but the experience is massively inferior in others. This is why, if I have to splash out my money on something, it’d better be really good, not just acceptably mediocre. This is the case with video games for me – they’re DRM’d and locked to my account forever, but there’s quite a lot of added value (automatic updates, streamlined installations, save syncing etc) that makes it an attractive proposition even compared to the good old days. With music, videos, books? Not so much.
I would argue that digital distribution (of music, books etc) has opened up a huge market for creators. Physical books and music are heavy and costly to manufacture. Only certain stores could carry them (postal distribution helped a bit). Nowadays an electronic book or music set is among the smallest pieces of digital content there is. This makes it much easier to seek and find a big market.
As to why gatekeepers and middlemen appear despite the early internet theoreticians saying they wouldn’t, that just means those people didn’t know how economics works.
Subscriptions services give me the worst of both worlds: I have to keep paying, I don’t get to keep anything, and they impose a ton of restrictions on how I get to consume the content.
With a library card you also don’t get to keep the books you borrow, and there are restrictions (return within a certain time, late fees if you don’t, etc.). Again, the problem is you’re treating the streaming service as if it’s your own private owned-by-you record cabinet in your home, when it really is more like the library that has a far larger catalog of stuff but that you explicitly agree is not and won’t become your owned-by-you property.
I’m also not sure what “restriction” I’m suffering under. I open the app, I browse around until I find something that looks good, I hit “play” and it plays. What do you think I ought to be able to do that I’m not?
What do you think I ought to be able to do [in Spotify] that I’m not?
While I’ve never used Spotify, I understand that it does not let you download tracks as DRM-free standalone audio files. Therefore, if I subscribed to Spotify instead of buying standalone audio files, I would miss these features:
- Creating playlists that contain both tracks on Spotify and tracks unavailable on Spotify
- Opening tracks in alternative music players
  - for a better listening experience:
    - iTunes (now Apple Music) tracks the Play Count and Last Played date of each track. Spotify may track this too, but I bet you can’t view those records if your subscription isn’t active.
    - Some may prefer the keyboard shortcuts of other media apps. mpv’s shortcuts are customizable to jump forward or backward any number of seconds with arbitrary keypresses.
  - to understand the music better:
    - Viewing the track’s spectrogram to learn the notes using Audacity or Amadeus Pro
    - Marking measures and listening to sections on repeat or slowed-down using Transcribe!
- Creating derivative works under fair use (e.g. only for personal use)
  - Mashing up the music with other tracks in dJay Pro
  - Sampling the track while creating new music in a digital audio workstation
While I’ve never used Spotify, I understand that it does not let you download tracks as DRM-free standalone audio files.
I’m not trying to be harsh here, but: have you checked the context of the discussion you’re replying to? My point was that I see streaming services as similar to having a library card which lets me temporarily borrow and enjoy many things that I would never ever go out and buy personally-owned copies of for myself.
And through streaming services I’ve discovered quite a few things that I later did buy copies of because I liked them enough, but I never would have even listened to them once if full-price up-front fully-owned purchasing was the only way to do so.
So complaining that you do not obtain full “owner” rights over tracks from streaming services does not really have any relevance to what was being discussed. That’s already been acknowledged, and I see streaming and purchasing as complementary things. I was asking for opinions on why they wouldn’t be, or what restrictions streaming has – in the context of the streaming-and-purchasing-are-complementary point – that make it untenable as a complement to purchasing.
I think my comment’s definition of “restrictions” is consistent with the definition in its grandparent comment, tadzik’s comment. That definition is “the restrictions that come with not owning content”.
I derive that definition from the sentence in tadzik’s comment that you quoted earlier:
“Subscription services give me the worst of both worlds: I have to keep paying, I don’t get to keep anything, and they impose a ton of restrictions on how I get to consume the content.”
The worst of the world of paid ownership:
- “I have to keep paying”

The worst of the world of free subscription:
- “I don’t get to keep anything”
- “they impose a ton of restrictions on how I get to consume the content”
You replied to that comment asking for examples of “restrictions”. I think I successfully gave them. If you think the restrictions deriving from lack of ownership are off-topic, it would have been clearer to state that in your reply to tadzik’s comment instead of asking for examples.
Now, if I try to answer the question you said you meant – why streaming could not be complementary to purchasing – I think we already agree that they are complementary. Streaming is indeed useful for browsing in a way that purchasing is not. Streaming for free and without ads is nicer than streaming with a paid subscription, but depending on the price and the value of the paid content relative to available free content, paying for a streaming subscription may be worth it.
I have the means to do this. Other people who have the means to do this: why don’t you?
I try to support the artists who made the art that I like. I try not to support Google. If I had to choose between them I guess I’d reluctantly choose not to support Google, but I don’t have to choose: I can give money (less in)directly (via Patreon or whatever) to the YouTubers I want to support.
More broadly, though, I just really don’t like ads. They make the internet worse to look at, and I don’t like business models based on the premise that my attention is a commodity that people are free to auction off. If there were only a few people doing that I’d just avoid them, but that’s pretty much impossible today. If I had to pay to make the ads go away, I think it would probably harden my stance against the creator in question, since I’d feel like I was rewarding a business model I disapprove of. As it is, I feel good about giving money to the creators I give money to precisely because it feels both mutually beneficial and entirely voluntary.
If you believe it’s unethical to support Google, then I don’t see how it could be ethical to watch YouTube at all. You still appear in the metrics that they use in their pitches to advertisers, and indeed you still appear in the aggregate metrics even when using an ad blocker. There is no ethical way to consume YouTube in that situation, that I can see.
And that’s why I try to watch videos on Nebula, when they’re available. But for the most part, YouTube is a monopoly and all the content is there; there’s being principled and then there’s being a masochist like Richard Stallman.
Let me give you my take on this. What does an ad view on YouTube cost a company? From what I read it’s $0.010-$0.030 (let’s say $0.020 on average). The average time spent on YouTube is ~20 minutes per day. If the viewer were shown an ad every 4 minutes, this amounts to 5 ads per day and 150 ads per month, which comes out to about $3.
How much is YouTube Premium? $14. It’s a rip-off. At those rates, $14 buys roughly 700 ad views a month, so for it to be reasonably priced the average YouTube Premium subscriber would have to spend about 1.5 hours per day on the platform, which is quite a lot and way beyond the ~20-minute average.
The content companies really nailed it with Music in the early 2010’s. You could buy an album DRM free and were left alone. Then they messed it up with movies, where you “bought” a movie but only bought a right-to-use depending on the platform. Music streaming is a way to disown people again in a way and to incentivize users to pay monthly for what is often a relatively static music collection which would have been much cheaper to buy once and extend incrementally. The detrimental effects on the music as a medium as a whole are another topic. Movie/Series streaming is a bit of grey zone which I won’t discuss here. What we see, though, is that the once simple streaming landscape has become more and more diverse, leading to immense costs per month if you want to follow a diverse offering of shows. Surely this is a first-world-problem, but you would’ve spent much less back in the day just buying a set of series on DVD, and you would have owned those DVDs forever. You could have also borrowed DVDs and BluRays at your local library.
To keep it short, I know many people who would be willing to pay for content, however, they don’t like being held hostage or becoming suckers of the greedy companies. Music has proven that content can be DRM free and still be profitable, but it has to be accessible to the user.
YouTube loses a lot of money to ad-blocking users. Even conservative estimates assume a 20% market share for ad blockers. If you factor this in, YouTube could reasonably ask for a plan of $1.99/month or something to be ad-free. If they did it in a way that’s not a hassle and did it fairly, everyone would be happy.
No, that’s the cost to place an ad, of which some is taken by Google as profit for their ad business, some is payment processing fees, some goes to YouTube to run the service, and some goes to the content creator. In theory, a company then makes more in increased sales than they’ve spent, but most are really bad at tracking this.
The ad price is set based on an auction, so varies a lot depending on how many other companies want to place an ad in the same video. I suspect that most of the ads that I see are incredibly cheap because they’re not on mass-market content and few companies are competing for the space.
Ah, I see, thanks. I was thinking of how much each ad placement earns the video creator, but I see now that the correct analysis is to compare the cost of premium/ad free to the cost of placing the ad.
Your math is extremely suspect given that AFAIK it’s a competitive bidding process where the most-watched videos command higher rates. And since your whole argument is based on that math, I don’t think it holds up.
Based on the comments regarding who has access, I’d say using Cloudflare would help in this scenario. It seems that this website has decided to block entire parts of the globe to mitigate bad behavior, something Cloudflare would let them do without resorting to geoblocking.
We want to set the right example for the ecosystem: the way forward is implementing open standards, conformant to the specifications, without compromises for “portability”.
Getting OpenGL® ES 3.1 to run on Asahi Linux is an achievement, but mixing it up with normative statements like “the right example” and “the way forward” is a bit strange, for lack of a better term. It’s adding ideology to what is a pragmatic issue. Open standards aren’t inherently good for being open per se; the quality and the feasibility of the standard are what essentially matter. Apple has done relatively well without OpenGL® ES 3.1 support, despite OpenGL® ES 3.1 being open and Metal being closed.
Normative arguments in programming communities are not a new phenomenon. Zed Shaw had a recent talk about how this type of rhetoric operates in programming communities specifically.
The only reason Metal does well is because Apple is a locked down ecosystem.
Over time, people start to love and defend the prisons they are willingly caught up in, up to the point that they are ridiculing, questioning and ultimately despising those with the energy and commitment to break free from them.
Godspeed to Alyssa Rosenzweig and her team for this marvelous achievement!
The only reason Metal does well is because Apple is a locked down ecosystem.
There’s something of a history in graphics APIs of closed ones being superior. OpenGL didn’t lose out to DirectX because Microsoft pushed Direct3D, it lost because you could write Direct3D code that ran on half a dozen GPU makers’ cards and on Xbox, whereas the OpenGL code had to have a load of different paths for different spellings of the same thing via vendor extensions, and even people like John Carmack, who had a lot of non-technical reasons for wanting to use OpenGL, used Direct3D and wrote rants about the Khronos Group’s failure to make OpenGL competitive. Before that, 3dfx’s Glide was more popular for games than OpenGL, even though there was an OpenGL implementation available for their cards.
From what I’ve seen of Metal, it’s a fairly nice API that is only superficially different to Direct3D 12 or Vulkan in a handful of ways. There’s an implementation of Vulkan that runs on top of Metal, so nothing is stopping you using Vulkan on Apple platforms, but it looks as if few people choose to do this if they don’t need portability, which suggests that Metal is doing something right.
Outside of the graphics arena, CUDA is vastly more popular for compute on GPUs than anything open.
The problem with open standards is that you need to please multiple vendors, each of which has some things that they do well and some less well than competitors. You either make a huge API that can do everyone’s fast things (but has no performance portability) or a small API that doesn’t expose all of the fast things on some hardware. With Glide and CUDA, 3dfx and NVIDIA just made APIs for their hardware and ignored everyone else. With Metal, Apple can make an API for what they want to do and co-design it with their GPUs so that they can avoid spending any silicon area or power budget on things that they don’t need. With Direct3D, Microsoft tries a bit to build consensus between the GPU vendors and then uses its dominant market position to say ‘this is what will be in the next version, if you can’t do it you can’t put the DirectX logo on your marketing, if you can’t do it fast then you’ll look bad in the benchmarks that we publish’. With OpenGL and Vulkan, there’s no one empowered to make the hard calls.
OpenGL didn’t lose out to DirectX because Microsoft pushed Direct3D, it lost because you could write Direct3D code that ran on half a dozen GPU makers’ cards and on Xbox, whereas the OpenGL code had to have a load of different paths for different spellings of the same thing via vendor extensions
For those interested in a full history of this time period, it’s covered in this epic StackOverflow answer.
I would also like to claim that there is a lot of general incompetence and lack of will to even try useful stuff among the vendors. Look at AMD’s current compute effort, ROCm and HIP: despite HIP being a straight clone of CUDA, the whole ROCm stack sucks and makes mistakes Nvidia managed to avoid 15 years ago. And this is AMD’s own system, that isn’t a collaboration with anyone. It’s no wonder OpenCL never went anywhere, with vendors like that.
A lot of people like to blame Nvidia for OpenCL 2.0’s failure, but neither AMD nor Intel ever actually produced functional OpenCL 2.0 drivers, they made partial and buggy drivers and then sorta just gave up.
(And indeed, Apple’s Metal was from the start intended as an OpenCL replacement just as much as an OpenGL replacement, hence Metal compute shaders are roughly equivalent in power and ease of use to CUDA, while Vulkan compute shaders are very limited.)
The only reason Metal does well is because Apple is a locked down ecosystem.
That is precisely my point. The openness of an API, or lack thereof, does not seem to critically affect the success, usefulness, or capability of an API.
People building and promoting systems often mention the ideology that drives them and claim that ideology as a feature of the system, not merely incidental.
For example in the west we go on about democracy and human rights but there are some very successful countries that don’t embrace those values. Personally I don’t want to live somewhere like China or Qatar, but I also don’t want to use macOS. I have chosen to run macOS in the past and I live in a country with worse human rights and democracy than where I grew up, so I’m not claiming any kind of ideological purity - just that ideology factors into my pragmatic choices.
just that ideology factors into my pragmatic choices.
Overall I thought your comparison of Apple to China or Qatar was an interesting point but this last statement missed some nuance in my comment. I believe you are characterizing the decision itself of where to live as a pragmatic choice whereas my comment was more about pragmatic or ideological decision-making. I would definitely concede that the decision of where to live is a personal one and just like you, the material conditions of China and Qatar wouldn’t automatically override my personal preferences. On the other hand, I believe personal preference is generally less relevant when choosing an API to base your application on. I will use Metal if I have to, I will use DirectX if I have to, both because one of my goals as a software developer is to make sure my software is as accessible as possible.
As an aside, I don’t think the decision to implement some version of OpenGL on the M1 was ideological at its core. I don’t think there was any other option for them really, outside of implementing some version of Vulkan, so it seemed more like a hard constraint.
I don’t think there was any other option for them really, outside of implementing some version of Vulkan, so it seemed more like a hard constraint.
Vulkan didn’t exist when they developed Metal, and once they’d started implementing all their things in Metal and teaching it to the developers in their ecosystem it’d be a very big investment wasted to switch to Vulkan. Also, although Vulkan is mostly a success now, that wasn’t certain to happen when Apple needed to decide on a course of action.
Additionally, Khronos standards have tended to be insufficient for the needs of developers in the past, with various vendor extensions being essentially required to run games. This seems to be happening again with Vulkan (AFAIK, being just a bystander in this). By using their own API, they’re not at the mercy of the whims and trends of the developer community to the same extent.
What was one revelation of the Snowden leaks? NSA/five eyes really hate encryption and lamented about the fact more and more web traffic is encrypted. It would be very convenient for them to have a honeypot MITM that strips encryption and can see all traffic, while also preventing Tor users from effectively browsing the clearnet.
Cloudflare sees all traffic between you and the website you want to visit in clear text. Cloudflare is located in San Francisco, CA, USA. 20% of clearnet internet traffic goes through Cloudflare. Every US company can be forced by secret federal court order to allow the NSA to tap into their communications and no one at such a company who knows about it may talk about it to anyone unless they want to spend the next 10-20 years behind bars. It doesn’t matter if Cloudflare was an NSA-thing from the start or turned into one later, it very surely is given its size and market share.
DDoS protection is nothing special. Hosting providers like Hetzner have first-rate DDoS protection, and it’s included free of charge with their VPS packages. With some very few exceptions, I think it’s nonsense that companies think they have to use Cloudflare for DDoS protection.
Please think twice before using services like Cloudflare, especially when they’re “free”. Who is the product?
Please think twice before using services like Cloudflare, especially when they’re “free”. Who is the product?
While I agree with that, it’s often not even the choice of most tech people, unless it’s their own company. Similar things are true for cloud usage at large. There’s very little incentive to care about privacy and that kind of security in most companies: ignoring it doesn’t cost companies anything, while using these services brings them certain benefits. It’s just not how your typical company operates.
Of course this also explains why companies, large and small, are being “hacked” all the time. But the response is usually some mandatory security courses for employees and hoping it doesn’t happen next time. Security is barely a worthwhile endeavor for most companies, outside of marketing and similar things. It sounds good both in ads and in internal presentations, projects, etc. But it’s rarely meant sincerely in commercial contexts.
It’s more like companies showing you a “Your privacy is important to us” banner, when the only reason they are required to have that banner up is precisely because they couldn’t care less about it.
Companies will still eagerly provide your data to CDNs, analytics tools, and all sorts of other third parties, embed Facebook widgets, not read the docs closely enough to opt out of sending non-Facebook users’ data to FB, and so on. It’s simply not an objective for a company that exists to increase profit. It’s not just about privacy; it’s a general theme. It’s all about incentives.
If a website uses Cloudflare, the traffic between you and the website is 100% readable by Cloudflare. If you don’t believe me, read this:
CF does see all of the passwords, OAuth tokens, secrets, and PII that go through its systems, however, Cloudflare operates in accordance with the GDPR and isn’t an advertising or data collection company giving them little to no incentive to steal any PII or steal the passwords of customers/website operators.
It’s not a question of belief. It was simply a technical question. As @edk mentions, the CDN functionality relies on being able to terminate the TLS connection on a Cloudflare server.
It certainly is a security puzzle worth thinking about. For example, there are protocols (designed before TLS was widespread) that use nonces and do not pass plain text passwords or even login identities (see “userhash”), even within TLS protected streams, e.g. https://datatracker.ietf.org/doc/html/rfc7616
A lot of Cloudflare’s (and other CDNs’) features depend on MITMing: reading data, but also things like modifying headers, sometimes compressing or re-encoding images, etc. And of course they cache the data. Tunneling through Cloudflare wouldn’t be a big problem, but also wouldn’t gain you anything.
You could of course do that just for passwords, but the thing you protect against by having an account and a password could still be done by Cloudflare (reading content, and even modifying requests and responses).
Cloudflare is a CDN at heart. Like any CDN it needs to think in plaintext so it can cache things. So Cloudflare’s reverse proxy terminates TLS and (optionally!!) re-establishes TLS in order to talk to whatever is behind it. Setting aside any internal policy/security measures, which I hope exist but have no way of knowing for sure, someone with access to Cloudflare’s infrastructure could snoop on traffic while it’s between TLS connections, so to speak.
I should note that unlike parent I am not totally convinced Cloudflare is the NSA, although I would imagine they’ve seen more FISA orders than most companies their size.
They don’t really need to “be” NSA. If they operate in the US, as they do, any employee can be compelled to do their bidding through a National Security Letter, and it might even be a punishable offense for that employee to tell his boss.
That’s the happy case. There are many governments far more malign than the US government; I’d bet that some of them (e.g. the Chinese and Russian governments) have at least attempted to compromise individual employees of Cloudflare.
The “happy case” depends entirely on who exactly has their privacy infringed by a Cloudflare compromise, and it will likely not be the same answer for everyone involved.
What was one revelation of the Snowden leaks? NSA/five eyes really hate encryption and lamented about the fact more and more web traffic is encrypted.
This was a published issue long before Snowden. Clipper chip arguments from 1994 or so, and even earlier with James Bamford’s Puzzle Palace: all these supposed revelations were in the clear. https://a.co/d/8KBvKPL
I think (pretty much aligned with your point) that “people” in your sentence really means “people who didn’t read Bamford’s The Puzzle Palace from 1983, or read any Freedom of Information Act documents since then about the NSA, or ever visit the NSA”, because most of the people I knew were like “no duh… should be obvious”.
And, again to your point, the number of such people was adequately large to create a sustained reaction to Snowden’s leaks.
I do think the co-opting of NSA equipment to watch domestic cellphone network traffic was the only previously unemphasized thing (because it’s outside NSA’s charter, unless one side of the conversation crosses the US border).
I have been developing a unicode library (libgrapheme) for a few years now, which especially allows grapheme segmentation of UTF-8 strings. It’s a freestanding (i.e. no stdlib) library, so you can compile it into wasm and use it from there without having to rely on browser functions.
They threw way too much shade on tiling window managers. Instead of lamenting the fact that many applications require a certain aspect ratio, the rational answer would be to improve that aspect instead of trying to reinvent the wheel at the WM level. It worked on the web too (mobile vs. desktop, responsive design).
The strongest point of most tiling WMs (dwm in my case) is the fact that I can control everything with my keyboard and don’t waste screen-space. Before using a tiling WM I had two monitors, but I am perfectly fine with one monitor and 10 workspaces now.
GNOME’s position is not favourable: they have to address the non-power-users and dumb everything down so nobody is required to learn anything new. However, the fundamental axiom of good UX/UI design is a tight link between human and machine, and nothing beats the keyboard. It, however, requires a bit of learning and getting used to.
indicated that while many users “feel” faster using the keyboard for interactions, for things other than text entry, when timed by an observer, most of them are actually faster with the mouse.
So I’d find some sourcing for your “fundamental axiom” really interesting, especially if accompanied by measurements.
The tog “study” isn’t. It makes reference to some study that’s never really explained or cited. A good write up and refutation of the famous tog page and the myths it’s spawned: http://danluu.com/keyboard-v-mouse/
While it’s just speculation since, again, we know nothing about the study that was conducted, I suspect that the result was based on users using new software they had no familiarity with. Two seconds to grab something out of a menu seems long. Two seconds to input a chord is very obviously not how experienced users use their software today. You can easily do a bunch of little experiments here yourself; I have, since the tog thing always bothered me. Personally (according to a watch), a familiar keychord is significantly and consistently faster than most operations that require actual pointing.
Key chords do have pretty poor discoverability, and memorizing them takes time, so in unfamiliar software, yeah, I probably would be faster digging through menus. But for something like a WM, the UI of which is engaged essentially 100% of the time you use a computer, the investment in keyboard-driven operation makes all the sense in the world.
Thanks for the interesting source! I was a bit unclear/misleading with my wording: Obviously the mouse is much better than the keyboard for any GUI stuff, but what I meant was in regard to the single task of window management. I cannot point to any sources, but reordering windows with the mouse all the time takes a lot of time, whereas a tiling WM does it for you and you are lightning fast with reordering windows.
GNOME/ish apps are in fact adopting responsive design en masse (thanks to the whole open-Linux-smartphone movement). But it’s more complicated than just the aspect ratio. I don’t see any lamenting; rather, the post is capturing new, very interesting notions that should be useful for building good UX:
While an app may technically work at mobile sizes that’s probably not the best way to use that app if you have a large display. To stay with our chat example, you probably want to avoid folding the sidebar if it can be avoided, so the range of ideal sizes would be between the point where it becomes single pane and its maximum usable size.
I use NewPipe every day, have an adblocker on my computer and phone and am absolutely shocked when I watch YouTube at other places and see how many ads they are pushing. Even more shocking is how people have gotten used to it.
Of course Alphabet/YouTube has to finance itself somehow, but 11,99€/month for YouTube Premium is definitely overpriced. If you consider that YouTube only makes a fraction of a cent per ad view per person, they could probably be profitable with 0,99€/month, and I would be willing to pay that.
This would not resolve one other major issue I have with YouTube: The website is bloated as hell. NewPipe is smooth and native, runs on F-Droid and allows easy downloading (audio or video).
absolutely shocked when I watch YouTube at other places and see how many ads they are pushing
…
11,99€/month for YouTube Premium is definitely overpriced.
Are you sure? ;-P
So, I pay for a family premium account, primarily to disable ads on stuff my kids watch. I have several motivations:
I want them to grow up assuming that it’s reasonable to pay for content (albeit indirectly in this case).
Ad content is so often mental poison.
I want them to grow up unused to omnipresent advertising. It should stand out, and feel intrusive and horrible.
Now that they’re starting to find content they really like (Explosions and Fire is great for kids, if a bit sweary) I’m going to fund them to choose their favourite creator each, via Patreon or similar.
Edited to add: I also run ad-blocking, at a network level. I have no qualms at all about ad-blocking in general, and think that ad-tech is a blight on the world. Please don’t mistake my willingness to pay for YouTube Red (or whatever it is called) as a criticism of adblocking.
Intentionally systematic do you think, or an unintended consequence of the technology?
I’ve not had a lot to do with video creators professionally. But when we did engage one to create some YouTube videos capturing our company and what it was like to work there, I was super impressed. Worth every cent and so much better than anything most amateurs could produce.
I don’t know that I’m interested in trying to divine if people intend to exploit or just accidentally help build systems which do. The purpose of a thing is what it does.
So when the thing does something you consider undesirable, how then do you determine whether to incrementally improve the thing, reform the thing, or burn the thing to the ground?
So you don’t consider what the intended purpose of a thing was, in that process? Even if only for the purpose (heh) of contemplating what to replace it with?
That would be a useful principle for me if you replaced the need to understand the intentionality of systems with understanding the material circumstances and effects of them. Spinoza, Marx, and the second-order cyberneticists had the most useful views on this in my opinion.
Ah, yeah - what I meant was, if you want YouTube content and loathe the ads, paying to remove them probably isn’t a rip-off.
At least, in the proximate transaction. I do wonder how much of that $11 or whatever goes to the content creators. I know it’s zero for some of the content my kids watch, because it turns out swearing Australian chemistry post-docs blowing things up isn’t “monetizable” :)
Hence paying the creators through a platform like Patreon. Although I’d rather not Patreon specifically.
(Edited to add: I remember how much I treasured my few Transformers toys as a child. Hours of joyous, contented, play alone and with others. Sure they were expensive for what they were, physically, and cartoon TV advertising was a part of what enabled that. It’s a similar deal with my kids and Pokemon and MTG cards … total rip-off from one angle (“it’s just a cardboard square”) but then you see them playing happily for hours, inventing entire narratives around the toys. Surely that’s no rip-off?)
This convinced me to give Newpipe a try and omg it is so much better than using Firefox even with uBlock Origin, let alone the Android Youtube app. Thank you so much for the recommendation!
Remember that YouTube Premium also comes with other things, like a music streaming service. Apparently YouTube tentatively thinks 6,99€ is around what the ads are worth.
Much like any creative endeavor, the platform/middleman almost certainly takes a big cut, and then what’s left is probably a Pareto-type distribution where a small number of top-viewed channels get most of the payout.
On the other hand, if you don’t pay for it and don’t watch any ads, then it’s likely that the people creating the videos get nothing as a result.
I am really let down every time I see people giving in to the NSA’s pet-project for circumventing web traffic encryption that is Cloudflare. I hope I’m not the only one extremely alarmed by Cloudflare’s origins and obvious implications if you have seen the Snowden leaks.
A very simple VPS on Hetzner with 40GB disk space and 20TB traffic (easily enough for the author’s case) costs around 4,51€/month. It also includes DDoS protection. If you do not need custom server software, 10GB of web hosting costs 2,09€/month.
Yeah the “move to CloudFlare because it’s free!!!!” caused me to eyeroll really hard, and then I recognized that the problem is just going to grow. Can we stop supporting CloudFlare? Please?!?
I guess the first thing is realizing that you probably -don’t- need that in the first place. But if you do - personally, I’m using Cloudfront for some of my things that /do/ require a bit more caching and distribution, and I fit neatly into the free tier for that. I think when CDNs do become a concern, a paid one is probably worth it.
I don’t know if Vercel is a Cloudflare CDN reseller or if they’re an AWS CDN reseller, but their free tier offers 100GB of bandwidth a month. I switched from a VPS to Vercel when UNIX sysadmin stopped being a fun hobby for me.
The first link in the lobsters search results makes the clearest accusations, especially in the last two sections of the page. These are clearly stated, and link to sources. They have the same rhyme as other accusations. See earlier history on Wikipedia: Room 641A, PRISM, e.g.
The “SSL removed here” sketch comes to mind.
This is a weird place to make that case. The OP is just hosting a public static website with a few technical articles. Unless you’re suggesting that the NSA is man-in-the-middling this encrypted traffic to some nefarious purpose, I don’t see the concern.
It’s pretty easy with an actual VPS (which is not much more expensive). Push to the VPS, have a post-receive hook there which forwards to your actual git host and does a deploy.
Wayland the protocol is actually pretty nice and well-defined. The problem is that it defines too little, so when saying “Wayland is pretty good”, this heavily depends on the compositor of choice. Every compositor has to bring along around 10.000 SLOC of boilerplate and support dozens of “proprietary” extensions (in the sense of “follow the most popular compositors and libraries”, like wlroots and sway, which define quite a few custom extensions) for basic stuff. Wayland itself is merely a very, very thin layer for basic client and buffer management, because the designers were trying to accommodate any possible way a GUI might be implemented apart from the default rectangular forms we are used to.
X may be large and dated, but at least it comes with everything you mostly need and an X window manager does not need to reinvent the wheel.
The Wayland team, in my opinion, really dropped the ball in this regard. Wayland could have been the incubator for a proper Linux desktop with e.g. native color space support, native high-DPI, a vector-based model (going all-rendered was a step back imho), even better network transparency for full remote access and plenty of other things that are missing in X. Instead we got this mess.
It would have been much better to keep the “freeform”-approach and allow crazy compositors/window-managers, but the burden would’ve been for the special cases to deselect certain default assumptions rather than the opposite. To give a concrete example, if your compositor/window-manager does not allow spawning a window at the top or in the center, simply ignore the request or print a warning (which might be well-reflected in a good API). With Wayland, window-placement-requests are simply not supported by default and you instead have to resort to one of the said proprietary extensions. This just leads to a horrible monoculture, as if we hadn’t learnt anything from the browser wars.
And we wouldn’t have needed standardization, just a proper foundational library by Freedesktop.org with well-defined interfaces that encourage reimplementation, if wanted, in a server-client model (i.e. the compositor/window manager being a client to a proper server with a broad, well-defined API) and not a util-library like wlroots that sits on top of a thin API. Such a reimplementation would still only be at most as complex as implementing a single bloody compositor in Wayland!
It is an antipattern that libraries like wlroots have to exist! If you replace wlroots and sway above with Webkit and Chrome, my point should become even clearer.
Keep in mind that they are 100% compatible with the git-data-format, so it’s just a way simpler frontend everyone is free to use without limiting anybody else within the git-ecosystem.
What next? A complete spritemap for Mario on NES? If the Unicode Consortium accepts this they, in my view, will have completely lost their minds.
If these were actually used in text streams, I think they pretty obviously fall within Unicode’s mission. NES sprites never did, as far as I know.
Also, many of them seem useful for modern applications too.
One of the principles of unicode is that it should be possible to re-encode old documents from legacy character sets to unicode, so this seems entirely sensible to me.
The proposal addresses this concern on page 5’s section “8. Finiteness”:
That is a nice example! This is what I really like about working with Ada, where you have lots of control over your types, even including static and dynamic type predicates with which you can even define a “prime number” type or something.
Making invalid states impossible in the first place makes much more sense than trying to consistently catch degenerate cases at the beginning of functions.
Ada and Pascal (which is almost a subset of Ada) both have really useful ranged integer types (the Ada ones, as you say, are more general, but Pascal has a subset that covers 90% of my requirements). It’s something I miss in other languages. C now has arbitrary bit-width types. C++ lets you build these things, and if you have a constexpr constructor then, with C++20’s std::is_constant_evaluated (or C++23’s if consteval), you can get compile-time failures if your constant expressions don’t match that range, with clamping or wrapping for overflow in the dynamic case. It really feels like something that should be a core part of the standard library, even if it isn’t baked into the language.
You gave a great overview, much better than I could’ve written it. Just one remark: Arbitrary-bit-width is not enough, in my opinion. The strong suit of Ada, regarding numerical types, is that you have full control. To give an example, you can define a fixed point type with prespecified range and precision, which is unheard of and very cool. You can also add type predicates to run automatic proofs.
Even though I love working with C and try to use the type system as much as possible (just “const” parameter correctness alone saved me from a lot of bugs over the years), having tasted the strength of the Ada/Pascal/Haskell/etc. type systems leaves you wanting in many places.
By the way, your name is somehow really familiar to me, but I can’t wrap my head around it.
I agree, I just meant to say that it is the closest that C comes.
I am everywhere!
A lot of the design of Common Lisp came to be in the same sort of milieu as Ada, and unsurprisingly has arbitrary range types, but no one ever uses them, because, of course, their utility is somewhat limited in a fully dynamically typed language ^_^
Not entirely true. There is a mathematical structure known as a wheel (https://en.wikipedia.org/wiki/Wheel_theory) where division by zero is well-defined, where it maps to a special “bottom” element. I have never seen anyone use this, or to seriously develop “wheel theory”, but it is fun to know that it exists!
I actually wrote my Bachelor thesis about this. See chapter 3.1, where I actually proved that division by zero is well-defined in terms of infinite limits in this projectively-extended form of real numbers (a “wheel”). See Definition 3.1, (3.1e) and Theorem 3.6. It all works, even dividing by infinity! :D
What I noticed is that you actually do not lose much when you define only one “infinity”, because you can express infinite limits to +infinity or -infinity as limits approaching the single infinity from below or above (see chapter 3.1.1).
Actually this number wheel is quite popular with next generation computer arithmetic concepts like unums and posits. In the current Posit standard, the single infinity was replaced with a single symbol representing “not a real” (NaR). It’s all very exciting and I won’t go into it here, because I couldn’t do it justice just how much better posits are compared to floats!
One thing I’m pretty proud of is the overview in Table 2.1, which shows the problem: You really don’t need more than one bit-representation for NaN/NaR, but the IEEE floating-point numbers are very wasteful in this regard (and others).
While NaN representations make up only 0.05% of the 64-bit (double) floats, they make up 0.39% of the 32-bit (single) floats and 3.12% of the 16-bit (half) floats! The formula to calculate the ratio is simple: if n_e is the number of bits in the exponent and n_m is the number of bits in the mantissa, then we have 2^(1+n_e+n_m) floating point numbers and 2^(n_m+1)-2 NaN representations. To get the NaN percentage, you obtain the function p(n_e, n_m) = 100 / 2^(n_e) * (1 - 2^(-n_m)).
Mixed precision is currently a hot topic in HPC, as people move away from using doubles for everything, given they are often overkill, especially in AI. However, IEEE floats are bad enough for >=32 bits, let alone for small bit regimes. In some cases you want to use 8 bits, which is where IEEE floats just die. An 8 bit minifloat (4 exponent bits, 3 mantissa bits) wastes 5.5% of its bit representations for NaN. This is all wasted precision.
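If you want to sanity-check those percentages, here is a tiny C snippet (my own, purely illustrative) that just evaluates the formula above for the exponent/mantissa layouts mentioned:

```c
/* Evaluate p(n_e, n_m) = 100 / 2^n_e * (1 - 2^-n_m), the share of bit
 * patterns spent on NaN, for a few IEEE-style layouts. */
#include <math.h>
#include <stdio.h>

static double nan_percentage(int ne, int nm) {
    return 100.0 / pow(2.0, ne) * (1.0 - pow(2.0, -nm));
}

int main(void) {
    printf("double    (ne=11, nm=52): %.2f%%\n", nan_percentage(11, 52)); /* ~0.05% */
    printf("single    (ne= 8, nm=23): %.2f%%\n", nan_percentage(8, 23));  /* ~0.39% */
    printf("half      (ne= 5, nm=10): %.2f%%\n", nan_percentage(5, 10));  /* ~3.12% */
    printf("minifloat (ne= 4, nm= 3): %.2f%%\n", nan_percentage(4, 3));   /* ~5.47% */
    return 0;
}
```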
The idea behind posits is to use tapered precision, i.e. use a mixed-bit-length exponent. This works really well, as you gain a lot of precision with small exponents (i.e. values near 1 and -1, which are important) but have a crazy dynamic range as you can actually use all bits for the exponent (and have implicit 0-bits for the mantissa). In the face of tapered precision, the custom float formats by the major GPU manufacturers just look comically primitive. You might think that posits would be more difficult to implement in hardware, but actually the opposite is true. IEEE floats have crazy edge-cases (subnormals, signed 0, etc.) which all take up precious die space. There are many low-hanging fruits to propose a better number system.
Sorry for the huge rant, but I just wanted to let you know that “wheel theory” is far from obscure or unused, and actually at the forefront of the next generation of computer arithmetic concepts.
Oh this is absolutely fascinating, thank you!
Though if I understand the history correctly, the original intent of leaving tons of unused bits in NaN representations was to stash away error codes or other information that might be generated at different steps in a numerical pipeline, right? They never ended up being actually used for that, but what did happen eventually is that people started stashing other info into them instead, like pointers, and we got the NaN-boxing now ubiquitous in dynamic language runtimes like JS and LuaJIT. So it’s less a mistake and more a misdesign that turned out ok anyway, at least for the people doing things other than hardcore numeric code. That said, you can’t really NaN-box much interesting info inside an 8-bit float, so the IEEE repr is indeed wasteful there, especially when the entire goal of an 8-bit float is to squeeze as much data into as little space as possible.
Out of curiosity, does the posit spec at all suffer for having a single NaR value instead of separate NaN and infinity values? I vaguely understand how you can coalesce +inf and -inf into a single infinity and it works out fine, but when I think about it in terms of what error cases produce what results, to me infinity and NaN express different things, with NaN being the more severe one. Is it just a matter of re-learning how to think about a different model, or are there useful distinctions between the two that posits lose?
Thanks for your remarks!
As far as I know, there was no original intent to allow metadata in NaN representations, and JS/LuaJIT just made smart use of it. It’s always the question what you want your number format to be: should it be able to contain metadata, or only contain information on a represented number? If you outright design the format to be able to contain metadata, you force everybody’s hand because you sacrifice precision and dynamic range in the process. If you want to store metadata on the computation, I find it much more sensible to have a record type of a float and a bitstring for flags or something. Outside of fully controlled arithmetic environments, where you could go with the record type anyway, I see no context in which one would be able to make use of the additional information.
Regarding your other point: Posits are not yet standardized and there’s some back and forth regarding infinity and NaR and what to use in posits, because you can’t really divide by zero, even though it’s well defined. I personally don’t see too much of an issue with having no infinity-representation, because from my experience as a numerical mathematician, an unexpected infinity is usually the same as a NaN condition and requires the same actions at the end of the day, especially because Infs very quickly decay into NaNs anyway. This is why I prefer NaR and this is what ended up in the standard.
The only thing I personally really need in a number system is a 100% contagious NaR which indicates to me that something is afoot. An investigation of the numerical code would then reveal the origin of the problem. I never had the case where an infinity instead of a NaN would have told me anything more.
To be completely clear: Posit rounding is defined such that any number larger than the largest posit is rounded to the largest posit (and the smallest accordingly). So you never have the case, in contrast to IEEE floats, where an input is rounded to NaR/+-infinity. Given +-infinity is by construction “not a real”, I find it to be somewhat of a violation to allow this transition with arithmetic operations that are well-defined and defined to yield only reals.
Dropping infinity also infinitely reduces the necessary complexity for hardware implementations. IEEE floats are surreal with all their edge cases! :D
The original intended use case for NaNs was that they should store a pointer to the location that created them, to aid in debugging.
But you lose some features. My vague understanding is that +/-0 and +/-inf exist to support better handling of branch cuts. Kahan says:
Yes, this was the post-justification for signed zero, but it creates much more problems than it solves, creating many many special rules and gotchas. If you do proper numerical analysis, you don’t need such things to hold your hand. Instead, given it’s totally unexpected for the mathematician, it leads to more errors.
It’s a little known fact that Kahan actually disliked what the industry/IEEE did to his original floating point concepts (I don’t know how he sees it today), and this is not the only case where he apparently did some mental gymnastics to justify bad design afterwards to save face in a way.
I had never heard of that concept. I want to share how I understand dividing by zero from calc2 and then relate that back to what you just shared.
In calc 2 you explore “limits” of an equation. This is going to take some context, though:
Understanding limits

To figure out the limit of 1/x as x approaches 1, you would imagine starting at some number slightly greater than 1, say 1.1, and gradually getting smaller while checking the result. But that’s not all: you also do it from the other direction, so from 0.9 going up.

The answer for the “limit of 1/x as x approaches 1” is 1. This is true because approaching 1 from both directions converges to the same number (even if they never actually quite reach it). Wolfram Alpha agrees: https://www.wolframalpha.com/input?i=limit+of+1%2Fx+as+x+-%3E+1. But limits don’t have to converge to a number, they can also converge to negative or positive infinity.

Limit of dividing by zero

Now instead of converging on 1, let’s converge on zero. What is the “limit of 1/x as x approaches zero”? We would check 0.1 and go down toward zero, and then do the same from the negative side, going up toward zero (a quick sketch of this follows).
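Here is a little stand-in for those value tables, in C (my own sketch, with arbitrary starting values), that evaluates 1/x as x closes in on zero from both sides:

```c
/* Evaluate 1/x while x approaches 0 from above (0.1, 0.01, ...) and
 * from below (-0.1, -0.01, ...). */
#include <stdio.h>

int main(void) {
    for (double x = 0.1; x > 1e-5; x /= 10.0)
        printf("1/%g = %g\n", x, 1.0 / x);
    for (double x = -0.1; x < -1e-5; x /= 10.0)
        printf("1/%g = %g\n", x, 1.0 / x);
    return 0;
}
```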
The problem here is that coming from the top and going down the result approaches (positive) zero, and starting at the bottom and going up, the result approaches negative zero, which is a thing, I promise: https://en.wikipedia.org/wiki/Signed_zero. Since these values don’t converge to the same value, the answer is unknown; it cannot be both answers. Therefore division by zero (under this model) is unknowable, and Wolfram Alpha agrees: https://wolframalpha.com/input?i=limit+of+1%2Fx+as+x+-%3E+0.
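For what it’s worth, negative zero is easy to observe from C (a tiny illustrative snippet of my own):

```c
/* -0.0 compares equal to +0.0, but it is visible through division and copysign(). */
#include <math.h>
#include <stdio.h>

int main(void) {
    double pz = 0.0, nz = -0.0;
    printf("pz == nz: %d\n", pz == nz);                   /* 1: they compare equal */
    printf("1/pz = %g, 1/nz = %g\n", 1.0 / pz, 1.0 / nz); /* inf and -inf          */
    printf("copysign(1, nz) = %g\n", copysign(1.0, nz));  /* -1                    */
    return 0;
}
```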
As a “well actually” technical correctness note: I’m explaining this as I intuit it; in reality, 1/x as x approaches 0 goes to positive infinity and negative infinity. I know it has something to do with Taylor expansion, but I’ve been out of school too long to explain or remember why. Even so, my explanation is “correct enough” to convey the underlying concept.

Wheel

As you get into higher mathematics I find that it feels more philosophical than concrete. There are different ways to look at the world, and if you define the base rules differently then you can have different mathematical frameworks (somewhat like, but not exactly the same as, how there is “regular” and “quantum” physics).
It looks like wheel said “negative and positive infinity aren’t different” which is quite convenient for a lot of calculations and then suddenly 1/x does converge to something when it goes to zero.
“As a “well actually” technical correctness note”: if one is working in the real numbers, which don’t include infinities, a limit returning infinity is more strictly called “diverging” to positive or negative infinity. A limit can also “diverge by oscillation”, if the value of the expression of which the limit is taken keeps changing for ever, like sin(x) as x tends to infinity.
What this says is that the limit of 1/x as x approaches 0 is not defined, not that 1/0 itself is undefined. Consider sin(x)/x. If you do the math, the limit as x → 0 is 1, but sin(0)/0 = 0/0. Another interesting example is the Heaviside function, x < 0 ? 0 : 1. The limit → 0 from the left is 0, the limit from the right is 1, so the limit doesn’t exist. But the function is well defined at 0!

You can express limits as approaching from a direction though, can’t you? So you can say that lim → +0 is 1 and lim → -0 is 0. It’s not that the limit doesn’t exist, but a single limit doesn’t exist, right? Why is this all way more fun to think about now than when I was taking calc 1 and actually needed to know it?
Yeah, that’s right!
Another weird limit argument is that the limit of x/x as x goes to zero is 1.
To me, a more noteworthy one is that the limit of pow(x, x) as x approaches zero is 1. x/x = 1 for almost all values of x, but pow(x, x) = 1 is much rarer.

At least from a cursory reading of the wikipedia page, you have basically traded x/0 being defined for 0*1 != 0, and x/x != 1.

Yeah, you can also make a mathematical structure where division by zero is five. It’s not very useful though.
It all boils down to the fact that we will soon need “proof of life” on the web to impose “digital scarcity”. If you spend hours writing a blog post, it is easy for you to invest the tokens you accumulated in that time to mark your blog post as “valuable”. The problem is that this still does not protect the web from paid spammers in third world countries churning out masses of bullshit blog posts and them investing their “proof of life” into their spam work.
You could introduce the element that any human reading your blog post will “invest” their “proof of life” for the duration taken to read the post (leading to bullshit posts receiving less consideration than useful ones). However, this does not protect the web from people selling their “proof of life” to the highest bidder, which is what we also already see with click farms and paid surveys.
Even if this problem is solved based on the “proof of life” idea, one still has to wonder how to actually implement this such that it is not replicatable by machines. I am certain, though, that there is no way around having to use authoritative data on humans in some way, just like we need certificate authorities.
I think it’s burning out people left and right who are trying to chase all the technological trends and fads in highly active fields of our trade. This is most prominently seen with web developers, who change their preferred software stack every 6-12 months. Have websites become better over the years given the amount of time poured into this technological tumbler? No, they have become worse: more bloated, less accessible, slower, more energy-consuming, etc.
This is why I think it makes more sense to build a career on top of established technologies and more stable departments. One department to avoid is probably frontend development, which sees many shifts of core technologies over the years and is usually not well-paid: it is almost impossible to mature in this field, and given the frequent technology shifts, companies are literally unable to accumulate technical debt in a specific technology that would warrant paying experts to fix and maintain their stuff.
Backend development, on the other hand, with some exceptions, tends to be conservative in terms of employed technologies. With just PHP, MySQL, Java, C/C++, Go (and others) making up the majority of the software stack, you could have lived under a rock since 2020 and probably still be good to go with a very flat learning curve. Companies will have invested in a much smaller and more stable set of technologies, and they will have accumulated technical debt motivating them to pay you good money to take care of their software for them.
Just look at the job listings: most are looking for C#, Java, C/C++ developers; the few web development gigs are usually not well-paid, but backend can still keep you afloat. I recently noticed how few developers in my age group (late 20s) actually know how to program in C (not C++). Considering how much software is still written in this language (often for good reasons), this niche could become very profitable in the future, just like COBOL turned into a well-paid niche after the 80s.
So yeah, to briefly reflect on the article’s topic: It’s your choice how quickly your knowledge diminishes, and your own fault if you choose a profession that does not value your significant time investments to master a given technology.
True mastery is not an end in itself, as it heavily depends on the context it is formed and used in.
I think “web developers swap out their entire stacks every 6-12 months” was somewhat true for a while but has kind of stopped these days. React won.
But aren’t there even paradigm shifts within React itself?
Typically not ones that break backward compatibility, just ones that offer a new (usually better) way to do specific things.
Old-style React code generally keeps working fine and interoperates with new-style code, and most of your existing React knowledge still applies. Some of my company’s web app code was written in 2019 and still works fine with the current version.
Every active ecosystem has paradigm shifts, including on the backend. Java is in the middle of one right now with virtual threads and structured concurrency, and it follows an earlier paradigm shift that resulted in a lot of reactive-style Java code being written. Java 8 introduced lambda functions and some people hated them but now they’re ubiquitous.
(Disclaimer: I do way more backend work than UI work, so my view on React isn’t as well-informed as someone more frontend-focused, but I’ve ended up needing to write React code often enough over the years to be familiar with the ecosystem.)
This is one reason I flat-out refuse to touch JavaScript with a 10-foot pole. I went with backend Python + Go for exactly this reason. It’s exhausting when the job market wants you to stay up to date with a tech stack that changes completely every six months or so.
A wise choice then, sir! I taught myself some vanilla JavaScript, because it helps in some cases and is a core web technology, but the real poison is all those frameworks on top of it, which are also the ones changing all the time and burning people out.
It’s amazing how much you can achieve with plain old JavaScript. An added bonus is the fact that websites usually remain light and fast with handcrafted JavaScript code.
Same. I don’t mind vanilla JS and messing around with HTML and CSS in the browser, but I hate JS build tools and reactive frameworks.
You do not need Cloudflare for DDoS protection. Many hosting providers offer it as part of their packages; Hetzner, for instance, includes it with their VPS and web hosting packages.
Great story, but probably the most important lesson to learn from this remained un-remarked on in the conclusion:
Whoever was responsible for the crunch-time situation really deserves the blame for the problem, not the person who wired the breakout box.
I am in this world. The deadline is set by the orbit of Mars; if you miss it, you are delayed for two years, so there is an extreme amount of pressure to hit the launch window. Secondly, every space mission of this class is a Fabergé egg with 217 separate contractors contributing their custom jewels. There are always integration issues, even assuming there wasn’t some fundamental subsystem issue that delayed delivery for integration. Even when rovers are nominally the same platform, they still have quirks and different instruments that mean they are still firmly pets rather than cattle. Given the cost per kg of launch, every subsystem has to be incredibly marginal and fragile weight-wise, else it’s a gramme taken away from science payloads, which are ultimately the whole purpose of these missions. As a result, things are delicate and fussy and have very un-shaken-down procedures. It’s the perfect storm for double shifts.
There’s also an often-ignored aspect that’s easy to miss outside regulated fields: there is an ever-present feeling that there is one more thing to verify, and it’s extremely pressing because that may be your last chance to check it and fix it. About half the double shifts I’ve worked weren’t crunch time specifically, we weren’t in any danger of missing a deadline (I was in a parallel world where deadlines were fortunately not set by the orbits of celestial objects). It’s just you could never be too sure.
Also, radiation-hardened instruments and electronic components have a reputation of ruggedness that gives lots of people a surprisingly incorrect expectation of ruggedness and replicableness (is that even a word?) about many spacecraft. These aren’t serially-manufactured flying machines; they’re one-, maybe two-of-a-kind things. They work reliably not because they’ve gone through umpteen assembly line audits that result in a perfect fabrication flow, where everything that comes off the assembly line is guaranteed to work within six sigma. Some components on these things are like that, but the whole flying gizmo works reliably only because it’s tested into oblivion.
Less crunch would obviously be desirable. But even a perfectly-planned project with 100% delay-free execution will still end up with some crunch, if only because test cycles are the only guarantee of quality so there will always be some pressure to use any available time to do some more of those and to avoid mishaps by making procedures crunch-proof, rather than by avoiding the crunch.
I did a little searching about this. The project was green-lit in mid-2000 with a launch window in the summer of 2003, so about 3 years to build not one but two rovers and get them to Mars for a 90-day mission. Check out this pdf of a memo from what would be riiiiight smack in the middle of that schedule:
My paraphrase: Y’all told everyone to do stuff faster, better, and cheaper, but then didn’t actually make any policies for how to do that, or how to measure your success at doing that. Oh, and y’all suck out loud at staffing.
They include the management response which was basically: Well… yeah that’s a fair point. Also it’s not Faster Better Cheaper’s fault we suck at staffing! We just suck at staffing in general. We plan to develop plans to fix that next year!
I’m not joking about that “plan to develop plans” part btw. Here’s the full quote:
Big oof.
Despite all of this, the rovers meant to last like 3 months lasted 6 years and 14 years respectively. ¯\_(ツ)_/¯
Good point! Another aspect is that you should design systems in such a way that inadvertent misconnections become impossible, even for low-level testing. If that’s not possible with the hardware, a very simple pre-test in the given case would have been to measure the impedance and resistance and abort on any excessive reading.
To build a bridge to programming: Design your interfaces such that they cannot be broken with bogus input. This especially applies to low-level functions that are only explicitly called in tests, because you can mess up test inputs easily by accident. One approach is to use a strong type system, e.g. a function “{Real, NaR} log10(x:Real)” is much more fragile than “Real log10(x:StrictlyPositive)”, which is constrained by the type system not to yield NaR (not a real) in any case.
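A rough sketch of that idea in C (the StrictlyPositive wrapper and the function names are made up for illustration; C can’t fully hide the field the way Ada’s type predicates can, so treat this as a convention rather than a guarantee):

```c
/* The only sanctioned way to obtain a StrictlyPositive is the checked
 * constructor, so safe_log10() never has to handle x <= 0 or NaN itself. */
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct { double value; } StrictlyPositive;

static bool strictly_positive(double x, StrictlyPositive *out) {
    if (!(x > 0.0))        /* also rejects NaN, since comparisons with NaN are false */
        return false;
    out->value = x;
    return true;
}

static double safe_log10(StrictlyPositive x) {
    return log10(x.value); /* cannot yield NaN for values admitted by the constructor */
}

int main(void) {
    StrictlyPositive x;
    if (strictly_positive(1000.0, &x))
        printf("log10 = %f\n", safe_log10(x));   /* 3.000000 */
    if (!strictly_positive(-1.0, &x))
        printf("rejected -1 at the boundary, not deep inside the math\n");
    return 0;
}
```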
I haven’t seen NaR before.
In my mind I imagine a NaI (not an integer) could be useful to handle overflow/underflow/divide by zero.
NaR is used for some next generation computer arithmetic concepts like Posits.
I like this distinction, I’ve met the same thing in OCaml (and wrote my own thread pool that can be used for both, on OCaml multicore). Just like the article says, I’d typically have both a pool for CPU-bound tasks (one thread per CPU core); and a pool with many more threads to handle queries (e.g. HTTP queries) in direct style. If I’d use an event loop for IOs there would still be a need for the first kind of pool to deal with CPU heavy tasks.
For event loops there’s also typically a need (at least on Linux) to hand off DNS lookups to a thread pool because the libc APIs are blocking. You can get around this by implementing DNS in a non-blocking manner, but then you lose all the other name lookups available via the system API (e.g. mDNS/zeroconf).
The really annoying thing there is that DNS-style lookups are best done with something that looks like a stackless coroutine (send the request, keep a continuation for what to invoke on response), but the libc APIs require a stack for the duration of their call. It would be nice to have an asynchronous getaddrinfo that returned an opaque token and a mechanism for registering it with kqueue / epoll / whatever, and a function to call to process the results.
This can be done with a pipe+fork+dup2+exec, kqueuing/polling on the file descriptor for data.
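A minimal sketch of that approach in C (my own simplification: it just forks and does the blocking lookup in the child rather than dup2+exec’ing a helper, and error handling is mostly omitted). The parent ends up with a plain file descriptor it can hand to poll/epoll/kqueue:

```c
#include <arpa/inet.h>
#include <netdb.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    if (pipe(fds) < 0)
        return 1;

    pid_t pid = fork();
    if (pid == 0) {                        /* child: do the blocking lookup */
        close(fds[0]);
        struct addrinfo hints = { .ai_family = AF_INET, .ai_socktype = SOCK_STREAM };
        struct addrinfo *res = NULL;
        char buf[INET_ADDRSTRLEN] = "lookup failed";
        if (getaddrinfo("example.com", NULL, &hints, &res) == 0 && res) {
            struct sockaddr_in *sin = (struct sockaddr_in *)res->ai_addr;
            inet_ntop(AF_INET, &sin->sin_addr, buf, sizeof(buf));
            freeaddrinfo(res);
        }
        write(fds[1], buf, strlen(buf));
        close(fds[1]);
        _exit(0);
    }

    close(fds[1]);                         /* parent: fds[0] is event-loop friendly */
    struct pollfd pfd = { .fd = fds[0], .events = POLLIN };
    poll(&pfd, 1, -1);                     /* stand-in for the real epoll/kqueue loop */

    char buf[64] = {0};
    if (read(fds[0], buf, sizeof(buf) - 1) > 0)
        printf("resolved: %s\n", buf);
    close(fds[0]);
    waitpid(pid, NULL, 0);
    return 0;
}
```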
Right, as far as I understand that’s what libuv does, for example. It has a thread pool to handle DNS and disk IO (at least when not using io_uring), and the actual event loop handles networking and timers.
The way things should work is that there should be a DNS server on localhost which can resolve any kind of name lookup, and then applications should use some actually good and useful DNS library instead of the libc nonsense.
Why not just write it as a multiplication by 257, as is usual and easily derivable mathematically for other depth transforms ((2^16-1)/(2^8-1) = 257)?
256+1 = 257, so we can easily see the bit shift plus the added original value. This is not magic.
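For the record, a quick check (my own) that the shift-and-or form and the multiplication agree over the whole 8-bit range:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    for (uint32_t small = 0; small <= 0xFF; small++) {
        uint16_t shifted    = (uint16_t)((small << 8) | small);
        uint16_t multiplied = (uint16_t)(small * 257);  /* 257 = (2^16-1)/(2^8-1) */
        assert(shifted == multiplied);
    }
    printf("0x00 -> 0x0000, 0x7F -> 0x7F7F, 0xFF -> 0xFFFF\n");
    return 0;
}
```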
Because bit shifts can be computed faster than multiplication. This is especially important in computer graphics contexts.
Benchmark it. The compiler will probably turn the multiplication into bitshift+or anyway, or leave it.
A quick check on quick-bench shows that it compiles to the same assembly with -O3 on the latest clang and GCC.
You’re not wrong, but your test is broken. Both versions are just storing a constant into memory n times, because the value of small is known a priori, so the computation of big is optimized out entirely. The DoNotOptimize enforces that the value of big is considered “used” (otherwise the loop would have no observable effects and could be removed entirely), but movw $0x2727, 0xe(%rsp) is enough to satisfy that. It doesn’t force the computation of big to be executed.

Ah, you are right. I redid the code by making small a random number continually changed on each pass of the loop. It does still come out to the same assembly with either implementation, but now the all-important shl $0x8,%eax is there. https://quick-bench.com/q/VS7of8NLsjf60uH3XF_M010wFwY
That’s a cool way to think about it, thank you for bringing it up. I think both direct bit copying and multiplication need an explanation anyway if you aren’t familiar with the problem and its solution, so it’s not clearly a win clarity-wise.
When it comes to performance, well, bit operations are always fast, so at least you’ll get peace of mind when opting for those, even if it doesn’t matter in the end.
I played around with this, and it seems like you can’t do it with multiplication if the high bit depth isn’t divisible by the low one. For example, RGB565 pixel formats are common and you need to expand 5-bit channels to 8-bit ones to display them on screen.
I don’t think you can do that with integer multiplication because you need to “fill” 3 low bits and you only have integer factors of a 5-bit number at hand. I added a mention to the article.
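For illustration, one common way to handle the 5-bit case (a sketch, not necessarily what the article ended up with) is to shift and replicate the top bits into the freshly vacated low bits:

```c
#include <stdint.h>
#include <stdio.h>

static uint8_t expand5to8(uint8_t v) {          /* v in 0..31 */
    return (uint8_t)((v << 3) | (v >> 2));      /* copy the top 2 bits into the bottom */
}

int main(void) {
    printf("0  -> %u\n", expand5to8(0));        /* 0   */
    printf("16 -> %u\n", expand5to8(16));       /* 132 */
    printf("31 -> %u\n", expand5to8(31));       /* 255 */
    return 0;
}
```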
Multiplying by 257 looks like magic (although less so if you write it 0x101 or 0b100000001). Shift-and-or tells you exactly what you really need to know: 00 becomes 0000, FF becomes FFFF, everything in between is monotonic.

I guess I’m a bit curious on a meta level where the aversion to paying for content comes from. Like, I get that someone is inevitably going to reply with “I live in a country/situation where it is impossible to pay, you insensitive clod!” but take as a given that I am asking this question of people who have access to an accepted payment method and who have sufficient income to afford to do so.
My partner and I watch a lot of stuff together on YouTube, mostly cooking and crafts stuff, and I watch a lot of stuff related to my own hobbies. So I pay for premium to get it ad-free. I’ve historically also watched a fair amount of Twitch streamers related to my hobbies, so I’ve paid for subscriptions to them to get ad-free viewing. I support a podcast that I like, and get ad-free episodes in return. I make a recurring monthly donation to the admin of the Mastodon server I use. I’ve bought merch or other products from artists I liked.
I have the means to do this. Other people who have the means to do this: why don’t you?
(I also run heavy ad-blockers on all my devices, of course, if nothing else as a security/privacy measure, but when I find something I like I still tend to seek out a way to pay to support it so that the thing I like will continue to exist)
I don’t have an aversion to paying for content: I happily do that for music (on bandcamp or on CDs), books, games and would be happy to do that for films and shows given the opportunity.
I do, however, have an aversion for paying for services. Take Spotify, for example - $10 per month, and you get pretty good selection of music, but it’s tied to a pretty terrible music player. On top of that from those $10 I pay, most of it goes to Spotify itself and the top played artists globally, not the ones I actually listen to. So I’m theoretically paying for content, but in practice most of it is for the service (which I’d rather not have – I’d prefer to have my music offline, in a media player that doesn’t suck), and content creators I don’t care about. $10 for a service I don’t like, and supporting people I don’t want to support. Compared to Bandcamp which takes ~20% and just gives me the files (and streaming options if I want them), this is a terrible value proposition for either of my goals – supporting the artists and getting a good product out of it.
Now for YouTube this is a bit of a different story. I don’t know what the revenue share is like when it comes to the Premium subscriptions, and how much my favourite channels would really get out of it. But YouTube has become a monopolist and annihilated the competition by being the good, free product with no ads in it. Now that it’s comfy on its throne, it’s pulling the rug out and abusing its position to push whatever it wants – unskippable ads, adblock-blockers, idiotic copyright and monetization policies… and we have nowhere else to go. Is this something you’d want to support? I see a product actively becoming worse over the years, and I’m supposed to believe that once I start paying it’ll become better?
Were it a good product that becomes better if you pay for it – that’d be something worth considering. That reminds me of Twitter in its prime, in its golden days. If it asked for money back then and gave extra stuff in return – new functionality, unrestricted Client API access etc, I’d happily throw my money at them in exchange for something extra. But nowadays it’s been on an almost comical downward spiral, getting worse and worse every week (not to mention all the people I cared about leaving), it’s taken most of the good things about it away and now it asks for money? It locked me out of Tweetdeck, leaving me with the absolutely abhorrent default client and promises that if I pay I’ll get it back? No thanks!
And it’s the same with YouTube. Lock in 120Hz+ streams behind premium and I’ll happily pay extra for the smoother videos. But with what they’ve been doing over the years, paying to get some of the good old days back just doesn’t sit right with me. And if I wanted to support the content (creators), then paying for YouTube is a very suboptimal way of going about it.
I don’t think this is true. I don’t use Spotify, but my partner does, so I have set up spotifyd (runs at least on FreeBSD and Linux) and it seems to work well. We control it with the official client, but I believe there are other things that replace the control interface.
I guess this is a trendy thing to claim, but I’m not sure I understand the logic behind it.
For example: I have bookcases full of books that I’ve bought, and also an ebook reader with even more books I’ve bought. But I also have a library card. I only buy a book if I think I’m going to want to re-read it, or be able to instantly refer to it, multiple times. If I think I’m only ever going to read a particular book once, well, I’m probably not going to buy a copy; instead I’ll look to borrow one from the library.
I see streaming music services as being similar to a library card. They let me sample a lot of things that I wouldn’t ever listen to if my only option were to buy and forever own a copy. And when they do turn up something I like enough to re-listen many times, that’s when I do go buy a copy.
I’ve never understood why media needs to go above and beyond to get someone to pay for it. In the old days I could watch a movie when it was shown on a broadcast TV channel and accept the ads the TV channel would insert, or I could pay to watch the movie in a theater without any ad breaks, or pay to rent or own a copy of the movie on VHS without any ad breaks in it.
I look at YouTube the same way: the cost of “free” content is advertising, and I can pay to remove the ads. I don’t need it to also offer a bunch of other above-and-beyond features on top of that.
Does your library require you to read the borrowed books under their supervision?
My library has rules for what I can do with the books I borrow. Which makes sense given that they remain the property of the library.
So I’m not really sure what your point was here. Yes, libraries impose terms and conditions on their patrons. If you want to argue the nuances of which terms and conditions are morally acceptable to you personally, that’s a completely different topic than what was being discussed.
I don’t know if it’s trendy or not, but I’m not surprised. And it’s connected to your followup points and examples: library cards, VHSes etc – it’s hard to shake off the feeling that things used to be better. Library cards are free. VHSes I can buy and keep. Subscriptions services give me the worst of both worlds: I have to keep paying, I don’t get to keep anything, and they impose a ton of restrictions on how I get to consume the content.
If I have some euros to spare, I can buy myself a music album (physical or digital), which I can then listen to whenever I want and give away (discreetly) or sell (if it’s physical) if I don’t want it anymore. I can buy a book in a similar way. I used to be able to do that with video games too – as a kid I bought a game, played it for a while, then traded it for a comic book with a rabbit samurai which I still have on my shelf (and which has since gone up in value :)). That’s a pretty good deal! Not so much with the subscription-based alternatives though. I guess the upside is that I have access to a vast library and I can access it anywhere I want, but not only is that not that important to me personally, it also has to compete with free services that do the exact same thing.
This is where we get to the “above and beyond”. Things, in my view, have become worse for the consumers. They may have become slightly more convenient in some cases, but the experience is massively inferior in others. This is why, if I have to splash out my money on something, it’d better be really good, not just acceptably mediocre. This is the case with video games for me – they’re DRM’d and locked to my account forever, but there’s quite a lot of added value (automatic updates, streamlined installations, save syncing etc) that makes it an attractive proposition even compared to the good old days. With music, videos, books? Not so much.
I would argue that digital distribution (of music, books etc) has opened up a huge market for creators. Physical books and music are heavy and costly to manufacture. Only certain stores could carry them (postal distribution helped a bit). Nowadays an electronic book or music set is among the smallest pieces of digital content there is. This makes it much easier to seek and find a big market.
As to why gatekeepers and middlemen appear despite the early internet theoreticians saying they wouldn’t, that just means those people didn’t know how economics works.
With a library card you also don’t get to keep the books you borrow, and there are restrictions (return within a certain time, late fees if you don’t, etc.). Again, the problem is you’re treating the streaming service as if it’s your own private owned-by-you record cabinet in your home, when it really is more like the library that has a far larger catalog of stuff but that you explicitly agree is not and won’t become your owned-by-you property.
I’m also not sure what “restriction” I’m suffering under. I open the app, I browse around until I find something that looks good, I hit “play” and it plays. What do you think I ought to be able to do that I’m not?
While I’ve never used Spotify, I understand that it does not let you download tracks as DRM-free standalone audio files. Therefore, if I subscribed to Spotify instead of buying standalone audio files, I would miss these features:
I’m not trying to be harsh here, but: have you checked the context of the discussion you’re replying to? My point was that I see streaming services as similar to having a library card which lets me temporarily borrow and enjoy many things that I would never ever go out and buy personally-owned copies of for myself.
And through streaming services I’ve discovered quite a few things that I later did buy copies of because I liked them enough, but I never would have even listened to them once if full-price up-front fully-owned purchasing was the only way to do so.
So complaining that you do not obtain full “owner” rights over tracks from streaming services does not really have any relevance to what was being discussed. That’s already been acknowledged, and I see streaming and purchasing as complementary things. I was asking for opinions on why they wouldn’t be, or what restrictions streaming has – in the context of the streaming-and-purchasing-are-complementary point – that make it untenable as a complement to purchasing.
I think my comment’s definition of “restrictions” is consistent with the definition in its grandparent comment, tadzik’s comment. That definition is “the restrictions that come with not owning content”.
I derive that definition from the sentence in tadzik’s comment that you quoted earlier:
You replied to that comment asking for examples of “restrictions”. I think I successfully gave them. If you think the restrictions deriving from lack of ownership are off-topic, it would have been clearer to state that in your reply to tadzik’s comment instead of asking for examples.
Now, if I try to answer the question you said you meant – why streaming could not be complementary to purchasing – I think we already agree that they are complementary. Streaming is indeed useful for browsing in a way that purchasing is not. Streaming for free and without ads is nicer than streaming with a paid subscription, but depending on the price and the value of the paid content relative to available free content, paying for a streaming subscription may be worth it.
I try to support the artists who made the art that I like. I try not to support Google. If I had to choose between them I guess I’d reluctantly choose not to support Google, but I don’t have to choose: I can give money (less in-) directly (via Patreon or whatever) to the YouTubers I want to support.
More broadly, though, I just really don’t like ads. They make the internet worse to look at, and I don’t like business models based on the premise that my attention is a commodity that people are free to auction off. If there were only a few people doing that I’d just avoid them, but that’s pretty much impossible today. If I had to pay to make the ads go away, I think it would probably harden my stance against the creator in question, since I’d feel like I was rewarding a business model I disapprove of. As it is, I feel good about giving money to the creators I give money to precisely because it feels both mutually beneficial and entirely voluntary.
If you believe it’s unethical to support Google, then I don’t see how it could be ethical to watch YouTube at all. You still appear in the metrics that they use in their pitches to advertisers, and indeed you still appear in the aggregate metrics even when using an ad blocker. There is no ethical way to consume YouTube in that situation, that I can see.
And that’s why I try to watch videos on Nebula, when they’re available. But for the most part, YouTube is a monopoly and all the content is there; there’s being principled and then there’s being a masochist like Richard Stallman.
Let me give you my take on this. What does an ad view on YouTube cost a company? From what I read it’s $0.01-$0.03 (let’s say $0.02 on average). The average time spent on YouTube is ~20 minutes per day. If the viewer is shown an ad every 4 minutes, that amounts to 5 ads per day and 150 ads per month, which comes to about $3. How much is YouTube Premium? $14. It’s a rip-off. For it to be reasonably priced, the average YouTube Premium subscriber would have to spend roughly 1.5 hours per day on the platform, which is quite a lot and way beyond the ~20-minute average.
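Roughly, the arithmetic looks like this; every figure below is one of the assumptions above, not anything YouTube publishes:

    /* Back-of-the-envelope check of the numbers above; all input figures
     * are assumptions from the comment, not published YouTube numbers. */
    #include <stdio.h>

    int main(void)
    {
        double revenue_per_ad_view = 0.02; /* assumed $0.01-$0.03 per ad view */
        double minutes_per_ad      = 4.0;  /* assumed one ad every 4 minutes  */
        double avg_minutes_per_day = 20.0; /* assumed average daily watch time */
        double premium_per_month   = 14.0; /* assumed Premium price in $      */

        double ads_per_day      = avg_minutes_per_day / minutes_per_ad;
        double ad_value_month   = ads_per_day * 30.0 * revenue_per_ad_view;
        double revenue_per_hour = revenue_per_ad_view * 60.0 / minutes_per_ad;
        double breakeven_hours  = premium_per_month / 30.0 / revenue_per_hour;

        printf("ad value of an average viewer: $%.2f/month\n", ad_value_month);
        printf("watch time needed to be worth $%.2f/month: %.1f h/day\n",
               premium_per_month, breakeven_hours);
        return 0;
    }

With these assumptions it prints an ad value of about $3/month for the average viewer and a break-even watch time of roughly 1.5-1.6 hours per day.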
The content companies really nailed it with music in the early 2010s. You could buy an album DRM-free and were left alone. Then they messed it up with movies, where you “bought” a movie but only bought a right-to-use depending on the platform. Music streaming is, in a way, a means of disowning people again and of incentivizing users to pay monthly for what is often a relatively static music collection which would have been much cheaper to buy once and extend incrementally. The detrimental effects on music as a medium as a whole are another topic. Movie/series streaming is a bit of a grey zone which I won’t discuss here. What we see, though, is that the once simple streaming landscape has become more and more fragmented, leading to immense costs per month if you want to follow a diverse offering of shows. Surely this is a first-world problem, but you would’ve spent much less back in the day just buying a set of series on DVD, and you would have owned those DVDs forever. You could have also borrowed DVDs and Blu-rays at your local library.
To keep it short, I know many people who would be willing to pay for content, however, they don’t like being held hostage or becoming suckers of the greedy companies. Music has proven that content can be DRM free and still be profitable, but it has to be accessible to the user.
YouTube loses a lot of money to ad-blocking users. Even conservative estimates put the market share of ad-blocking users at 20%. If you factor this in, YouTube could reasonably ask for something like $1.99/month to be ad-free. If they did it fairly and without hassle, everyone would be happy.
Do you mean “earn” rather than “cost”?
No, the cost to place an ad, of which some is taken by Google as profit for their ad business, some is payment processing fees, some goes to YouTube to run the service and some goes to the content creator. In theory, a company then makes more in increased sales than they’ve spent, but most are really bad at tracking this.
The ad price is set based on an auction, so varies a lot depending on how many other companies want to place an ad in the same video. I suspect that most of the ads that I see are incredibly cheap because they’re not on mass-market content and few companies are competing for the space.
Ah, I see, thanks. I was thinking of how much each ad placement earns the video creator, but I see now that the correct analysis is to compare the cost of premium/ad free to the cost of placing the ad.
Your math is extremely suspect given that AFAIK it’s a competitive bidding process where the most-watched videos command higher rates. And since your whole argument is based on that math, I don’t think it holds up.
What the actual…?
Same, bloody Cloudflare! I can’t believe people are still unironically using this Five Eyes honeypot.
I do not think that this is a Cloudflare-related issue.
Based on the comments regarding who has access, I’d say using Cloudflare would help in this scenario. It seems that this website has decided to block entire parts of the globe to mitigate bad behavior, something Cloudflare would have let them do without resorting to geoblocking.
I was blocked too. This isn’t how the internet is supposed to work.
Getting OpenGL® ES 3.1 to run on Asahi Linux is an achievement but mixing it up with normative statements like “the right example” and “the way forward” is a bit strange, for lack of a better term. It’s adding ideology into what is a pragmatic issue. Open standards aren’t inherently good for being open per se, the quality and the feasibility of the standard is what essentially matters. Apple has done relatively well without OpenGL® ES 3.1 support, despite OpenGL® ES 3.1 being open and Metal being closed.
Normative arguments in programming communities are not a new phenomenon. Zed Shaw gave a recent talk about how this type of rhetoric operates in programming communities specifically.
The post is pretty explicitly a call to action though, right? This isn’t an ‘accident’, it’s fully intended. Nothing strange about it.
The only reason Metal does well is because Apple is a locked down ecosystem.
Over time, people start to love and defend the prisons they are willingly caught up in, up to the point that they are ridiculing, questioning and ultimately despising those with the energy and commitment to break free from them.
Godspeed to Alyssa Rosenzweig and her team for this marvelous achievement!
There’s something of a history in graphics APIs of closed ones being superior. OpenGL didn’t lose out to DirectX because Microsoft pushed Direct3D, it lost because you could write Direct3D code that ran on half a dozen GPU makers’ cards and on Xbox, whereas the OpenGL code had to have a load of different paths for different spellings of the same thing via vendor extensions. Even people like John Carmack, who had a lot of non-technical reasons for wanting to use OpenGL, used Direct3D and wrote rants about the Khronos Group’s failure to make OpenGL competitive. Before that, 3dfx’s Glide was more popular for games than OpenGL, even though there was an OpenGL implementation available for their cards.
From what I’ve seen of Metal, it’s a fairly nice API that is only superficially different to Direct3D 12 or Vulkan in a handful of ways. There’s an implementation of Vulkan that runs on top of Metal, so nothing is stopping you using Vulkan on Apple platforms, but it looks as if few people choose to do this if they don’t need portability, which suggests that Metal is doing something right.
Outside of the graphics arena, CUDA is vastly more popular for compute on GPUs than anything open.
The problem with open standards is that you need to please multiple vendors, each of which has some things that they do well and some less well than competitors. You either make a huge API that can do everyone’s fast things (but has no performance portability) or a small API that doesn’t expose all of the fast things on some hardware. With GLide and CUDA, 3dfx and NVIDIA just made APIs for their hardware and ignored everyone else. With Metal, Apple can make an API for what they want to do and co-design it with their GPUs so that they can avoid spending any silicon area or power budget on things that they don’t need. With Direct3D, Microsoft tries a bit to build consensus between the GPU vendors and then uses its dominant market position to say ‘this is what will be in the next version, if you can’t do it you can’t put the DirectX logo on your marketing, if you can’t do it fast then you’ll look bad in the benchmarks that we publish’. With OpenGL and Vulkan, there’s no one empowered to make the hard calls.
For those interested in a full history of this time period, it’s covered in this epic StackOverflow answer.
I would also like to claim that there is a lot of general incompetence and lack of will to even try useful stuff among the vendors. Look at AMD’s current compute effort, ROCm and HIP: despite HIP being a straight clone of CUDA, the whole ROCm stack sucks and makes mistakes Nvidia managed to avoid 15 years ago. And this is AMD’s own system, that isn’t a collaboration with anyone. It’s no wonder OpenCL never went anywhere, with vendors like that.
A lot of people like to blame Nvidia for OpenCL 2.0’s failure, but neither AMD nor Intel ever actually produced functional OpenCL 2.0 drivers, they made partial and buggy drivers and then sorta just gave up.
(And indeed, Apple’s Metal was from the start intended as an OpenCL replacement just as much as an OpenGL replacement, hence Metal compute shaders are roughly equivalent in power and ease of use to CUDA, while Vulkan compute shaders are very limited.)
That is precisely my point. The openness of an API, or lack thereof, does not seem to critically affect the success, usefulness, or capability of an API.
People building and promoting systems often mention the ideology that drives them and claim that ideology as a feature of the system, not merely incidental.
For example in the west we go on about democracy and human rights but there are some very successful countries that don’t embrace those values. Personally I don’t want to live somewhere like China or Qatar, but I also don’t want to use macOS. I have chosen to run macOS in the past and I live in a country with worse human rights and democracy than where I grew up, so I’m not claiming any kind of ideological purity - just that ideology factors into my pragmatic choices.
Overall I thought your comparison of Apple to China or Qatar was an interesting point but this last statement missed some nuance in my comment. I believe you are characterizing the decision itself of where to live as a pragmatic choice whereas my comment was more about pragmatic or ideological decision-making. I would definitely concede that the decision of where to live is a personal one and just like you, the material conditions of China and Qatar wouldn’t automatically override my personal preferences. On the other hand, I believe personal preference is generally less relevant when choosing an API to base your application on. I will use Metal if I have to, I will use DirectX if I have to, both because one of my goals as a software developer is to make sure my software is as accessible as possible.
As an aside, I don’t think the decision to implement some version of OpenGL on the M1 was ideological at its core. I don’t think there was any other option for them really, outside of implementing some version of Vulkan, so it seemed more like a hard constraint.
Vulkan didn’t exist when they developed Metal, and once they’d started implementing all their things in Metal and teaching it to the developers in their ecosystem it’d be a very big investment wasted to switch to Vulkan. Also, although Vulkan is mostly a success now, that wasn’t certain to happen when Apple needed to decide on a course of action.
Additionally, Khronos standards have tended to be insufficient for the needs of developers in the past, with various vendor extensions being essentially required to run games. This seems to be happening again with Vulkan (AFAIK, being just a bystander in this). By using their own API, they’re not at the mercy of the whims and trends of the developer community to the same extent.
My comment was referring to the Asahi Linux team, not Apple.
What was one revelation of the Snowden leaks? NSA/five eyes really hate encryption and lamented about the fact more and more web traffic is encrypted. It would be very convenient for them to have a honeypot MITM that strips encryption and can see all traffic, while also preventing Tor users from effectively browsing the clearnet.
Cloudflare sees all traffic between you and the website you want to visit in clear text. Cloudflare is located in San Francisco, CA, USA. 20% of clearnet internet traffic goes through Cloudflare. Every US company can be forced by secret federal court order to allow the NSA to tap into their communications and no one at such a company who knows about it may talk about it to anyone unless they want to spend the next 10-20 years behind bars. It doesn’t matter if Cloudflare was an NSA-thing from the start or turned into one later, it very surely is given its size and market share.
DDoS protection is nothing special. Hosting providers like Hetzner offer first-rate DDoS protection and it’s included free of charge with their VPS packages. With very few exceptions, I think it’s nonsense that companies believe they have to use Cloudflare for DDoS protection.
Please think twice before using services like Cloudflare, especially when they’re “free”. Who is the product?
While I agree with that, it’s often not even the choice of most tech people, unless it’s their own company. Similar things are true for cloud usage at large. There’s very little incentive to care about privacy and that kind of security in most companies. Ignoring it doesn’t cost them anything, while the convenience brings them clear benefits. It’s just not how your typical company operates.
Of course this also explains why companies, large and small are being “hacked” all the time. But the response is using some mandatory security courses for employees and hoping it doesn’t happen next time. Security is barely a worthwhile endeavor for most companies, outside of marketing and similar things. It sounds good both in ads and in internal presentations, projects, etc. But it’s rarely meant sincerely in commercial contexts.
It’s more like companies showing you a “Your privacy is important to us” banner, when the only reason they are required to have it up in the first place is precisely because they couldn’t care less about it.
Companies will still eagerly provide your data to CDNs, analytics tools, and all sorts of other third parties, embed Facebook, not read the docs closely enough to stop even non-Facebook tools from sending their data to FB, and so on. It’s simply not an objective for a company that exists to increase profit. It’s not just about privacy; it’s a general theme. It’s all about incentives.
Please explain this claim.
If a website uses Cloudflare, the traffic between you and the website is 100% readable by Cloudflare. If you don’t believe me, read this:
trust us™
It’s not a question of belief. It was simply a technical question. As @edk mentions, the CDN functionality relies on being able to terminate the TLS connection on a Cloudflare server.
It certainly is a security puzzle worth thinking about. For example, there are protocols (designed before TLS was widespread) that use nonces and do not pass plain text passwords or even login identities (see “userhash”), even within TLS protected streams, e.g. https://datatracker.ietf.org/doc/html/rfc7616
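For illustration, here is a minimal sketch of the qop=auth response computation from RFC 7616 with SHA-256 (via OpenSSL); the credential, nonce and URI values are made-up placeholders. The point is that only a hash bound to the server-chosen nonce crosses the wire, never the password itself:

    /* Sketch of an RFC 7616 Digest response (qop=auth, SHA-256).
     * All field values are made-up placeholders. Build: cc digest.c -lcrypto */
    #include <openssl/sha.h>
    #include <stdio.h>
    #include <string.h>

    static void sha256_hex(const char *in, char out[2 * SHA256_DIGEST_LENGTH + 1])
    {
        unsigned char md[SHA256_DIGEST_LENGTH];

        SHA256((const unsigned char *)in, strlen(in), md);
        for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
            sprintf(out + 2 * i, "%02x", (unsigned)md[i]);
    }

    int main(void)
    {
        /* placeholders: the nonce comes from the server's challenge,
         * the cnonce is chosen by the client */
        const char *user = "Mufasa", *realm = "http-auth@example.org",
                   *pass = "Circle of Life", *method = "GET", *uri = "/dir/index.html",
                   *nonce = "dcd98b7102dd2f0e8b11d0f600bfb0c093", *cnonce = "0a4f113b",
                   *nc = "00000001", *qop = "auth";
        char a1[256], a2[256], ha1[65], ha2[65], buf[512], response[65];

        snprintf(a1, sizeof(a1), "%s:%s:%s", user, realm, pass); /* A1 */
        snprintf(a2, sizeof(a2), "%s:%s", method, uri);          /* A2 */
        sha256_hex(a1, ha1);
        sha256_hex(a2, ha2);

        /* response = H( H(A1):nonce:nc:cnonce:qop:H(A2) ) */
        snprintf(buf, sizeof(buf), "%s:%s:%s:%s:%s:%s", ha1, nonce, nc, cnonce, qop, ha2);
        sha256_hex(buf, response);

        printf("response=%s\n", response);
        return 0;
    }

The server repeats the same computation with its stored credentials and compares; a passive observer only ever sees hashes tied to that particular nonce.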
It doesn’t seem like a security puzzle to me.
A lot of Cloudflare’s (and other CDNs’) features depend on MITMing: reading data, but also things like modifying headers, sometimes compressing or re-encoding images, etc. And of course they cache the data. Tunneling through Cloudflare wouldn’t be a big problem, but it also wouldn’t gain you anything.
You could of course do that just for passwords, but the thing you protect against by having an account and a password could still be done by Cloudflare (reading content, and even modifying requests and responses).
Cloudflare is a CDN at heart. Like any CDN it needs to think in plaintext so it can cache things. So Cloudflare’s reverse proxy terminates TLS and (optionally!!) re-establishes TLS in order to talk to whatever is behind it. Setting aside any internal policy/security measures, which I hope exist but have no way of knowing for sure, someone with access to Cloudflare’s infrastructure could snoop on traffic while it’s between TLS connections, so to speak.
I should note that unlike parent I am not totally convinced Cloudflare is the NSA, although I would imagine they’ve seen more FISA orders than most companies their size.
They don’t really need to “be” NSA. If they operate in the US, as they do, any employee can be compelled to do their bidding through a National Security Letter, and it might even be a punishable offense for that employee to tell his boss.
That’s the happy case. There are many governments far more malign than the US government; I’d bet that some of them (e.g. the Chinese and Russian governments) have at least attempted to compromise individual employees of Cloudflare.
The “happy case” depends entirely on who exactly has their privacy infringed by a Cloudflare compromise, and it will likely not be the same answer for everyone involved.
This was a published issue long before Snowden: the Clipper chip arguments from 1994 or so, and even earlier with James Bamford’s The Puzzle Palace. All these supposed revelations were already out in the open. https://a.co/d/8KBvKPL
Yeah, but Snowden demonstrated that the surveillance was an order of magnitude or two larger than what people realistically expected.
I think (pretty much aligned with your point) that “people” in your sentence really means “people who didn’t read Bamford’s The Puzzle Palace from 1983, or read any Freedom of Information Act documents about the NSA since then, or ever visit the NSA”, because most of the people I knew were like “no duh… should be obvious”.
And, again to your point, the number of such people was adequately large to create a sustained reaction to Snowden’s leaks.
I do think the co-opting of NSA equipment to watch domestic cellphone network traffic was the only previously unemphasized thing (because it’s outside NSA’s charter, unless one side of the conversation crosses the US border).
There is nothing simple about systemd, but otherwise nice guide.
I have been developing a unicode library (libgrapheme) for a few years now, which in particular supports grapheme segmentation of UTF-8 strings. It’s a freestanding (i.e. no stdlib) library, so you can compile it into wasm and use it from there without having to rely on browser functions.
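A rough usage sketch (following the grapheme_next_character_break_utf8(3) man page; treat the exact signature as an assumption and check the headers): you walk the UTF-8 string and the library tells you how many bytes the next grapheme cluster spans.

    /* Sketch of grapheme-cluster iteration with libgrapheme, based on its
     * man pages; treat the exact signature as an assumption.
     * Build: cc example.c -lgrapheme */
    #include <grapheme.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* "u" + combining diaeresis, then a multi-codepoint emoji sequence:
         * many codepoints and bytes, but only two user-perceived characters */
        const char *s = "u\xCC\x88"
                        "\xF0\x9F\xA4\xA6\xF0\x9F\x8F\xBC\xE2\x80\x8D\xE2\x99\x82\xEF\xB8\x8F";
        size_t len = strlen(s), off = 0, clusters = 0;

        while (off < len) {
            /* byte length of the grapheme cluster starting at s + off */
            size_t g = grapheme_next_character_break_utf8(s + off, len - off);
            printf("cluster %zu: %zu bytes\n", clusters++, g);
            off += g;
        }
        printf("%zu grapheme clusters in %zu bytes\n", clusters, len);
        return 0;
    }

With a conforming segmenter this should report two clusters: 3 bytes for the “ü” and 17 bytes for the emoji sequence.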
They threw way too much shade on tiling window managers. Instead of lamenting the fact that many applications require a certain aspect ratio, the rational answer would be to improve that, rather than reinvent the wheel at the WM level. It worked on the web too (mobile vs. desktop, responsive design).
The strongest point of most tiling WMs (dwm in my case) is the fact that I can control everything with my keyboard and don’t waste screen-space. Before using a tiling WM I had two monitors, but I am perfectly fine with one monitor and 10 workspaces now.
GNOME’s position is not favourable: they have to address non-power-users and dumb everything down so nobody is required to learn anything new. However, the fundamental axiom of good UX/UI design is a tight link between human and machine, and nothing beats the keyboard. It does, however, require a bit of learning and getting used to.
Source?
All the studies I’ve seen, going back to the one reported here:
https://www.asktog.com/TOI/toi06KeyboardVMouse1.html
indicated that while many users “feel” faster using the keyboard for interactions, for things other than text entry, when timed by an observer, most of them are actually faster with the mouse.
So I’d find some sourcing for your “fundamental axiom” really interesting, especially if accompanied by measurements.
The tog “study” isn’t. It makes reference to some study that’s never really explained or cited. A good write up and refutation of the famous tog page and the myths it’s spawned: http://danluu.com/keyboard-v-mouse/
While it’s just speculation since, again, we know nothing about the study that was conducted, I suspect that the result was based on users using new software they had no familiarity with. Two seconds to grab something out of a menu seems long. Two seconds to input a chord is very obviously not how experienced users use their software today. You can easily do a bunch of little experiments here yourself; I have, since the tog thing has always bothered me, and personally (according to a watch) a familiar key chord is significantly and consistently faster than most operations that require actual pointing.
Key chords do have pretty poor discoverability, and memorizing them takes time, so in unfamiliar software, yeah, I probably would be faster digging through menus. But for something like a WM, the UI of which is engaged essentially 100% of the time you use a computer, the investment in keyboard-driven operation makes all the sense in the world.
Thanks for the interesting source! I was a bit unclear/misleading with my wording: obviously the mouse is much better than the keyboard for general GUI work, but what I meant was the single task of window management. I cannot point to any sources, but rearranging windows with the mouse all the time takes a lot of effort, whereas a tiling WM does it for you and you are lightning fast at it.
GNOME/ish apps are in fact adopting responsive design en masse (thanks to the whole open-Linux-smartphone movement). But it’s more complicated than just the aspect ratio. I don’t see any lamenting; rather, the post is capturing new, very interesting notions that should be useful for building good UX:
I use NewPipe every day, have an adblocker on my computer and phone and am absolutely shocked when I watch YouTube at other places and see how many ads they are pushing. Even more shocking is how people have gotten used to it.
Of course Alphabet/YouTube has to finance itself somehow, but 11,99€/month for YouTube Premium is definitely overpriced. If you consider that YouTube only makes a fraction of a cent per ad view per person, they could probably be profitable at 0,99€/month, and I would be willing to pay that.
This would not resolve one other major issue I have with YouTube: the website is bloated as hell. NewPipe is smooth and native, is available on F-Droid and allows easy downloading (audio or video).
Are you sure? ;-P
So, I pay for a family premium account, primarily to disable ads on stuff my kids watch. I have several motivations:
Now that they’re starting to find content they really like (Explosions and Fire is great for kids, if a bit sweary) I’m going to fund them to choose their favourite creator each, via Patreon or similar.
Edited to add: I also run ad-blocking, at a network level. I have no qualms at all about ad-blocking in general, and think that ad-tech is a blight on the world. Please don’t mistake my willingness to pay for YouTube Red (or whatever it is called) as a criticism of adblocking.
I will teach my children to know when they’re being ripped off and how they can protect themselves and their valuable time.
But they’re pretty obviously not being ripped off, if they want to watch the content, right?
They may not be “ripped off” but video creators will be either way, for a skillset that was once highly valued.
Can you elaborate on that? Do you mean, ripped off by adblocking, YouTube / other tech aggregators’ models in general, … ?
I am referring directly to the systematic devaluation of their otherwise professional labor.
Intentionally systematic do you think, or an unintended consequence of the technology?
I’ve not had a lot to do with video creators professionally. But when we did engage one to create some YouTube videos capturing our company and what it was like to work there, I was super impressed. Worth every cent and so much better than anything most amateurs could produce.
I don’t know that I’m interested in trying to divine if people intend to exploit or just accidentally help build systems which do. The purpose of a thing is what it does.
So when the thing does something you consider undesirable, how then do you determine whether to incrementally improve the thing, reform the thing, or burn the thing to the ground?
with its measurable circumstances and effects
So you don’t consider what the intended purpose of a thing was, in that process? Even if only for the purpose (heh) of contemplating what to replace it with?
Not interesting.
Consider Chesterton’s Fence.
That would be a useful principle for me if you replaced the need to understand the intentionality of systems with understanding the material circumstances and effects of them. Spinoza, Marx, and the second-order cyberneticists had the most useful views on this in my opinion.
As a child I wanted lots of plastic toys which cost a lot of money. Advertising works really well for doing that!
I wanted them and they were a rip off.
Ah, yeah - what I meant was, if you want YouTube content and loathe the ads, paying to remove them probably isn’t a rip-off.
At least, in the proximate transaction. I do wonder how much of that $11 or whatever goes to the content creators. I know it’s zero for some of the content my kids watch, because it turns out swearing Australian chemistry post-docs blowing things up isn’t “monetizable” :)
Hence paying the creators through a platform like Patreon. Although I’d rather not Patreon specifically.
(Edited to add: I remember how much I treasured my few Transformers toys as a child. Hours of joyous, contented, play alone and with others. Sure they were expensive for what they were, physically, and cartoon TV advertising was a part of what enabled that. It’s a similar deal with my kids and Pokemon and MTG cards … total rip-off from one angle (“it’s just a cardboard square”) but then you see them playing happily for hours, inventing entire narratives around the toys. Surely that’s no rip-off?)
True, but I hope you understood what I meant.
This convinced me to give Newpipe a try and omg it is so much better than using Firefox even with uBlock Origin, let alone the Android Youtube app. Thank you so much for the recommendation!
I’m glad my recommendation was helpful to you! :)
Remember that YouTube Premium also comes with other things, like a music streaming service. Apparently YouTube tentatively thinks 6,99€ is around what the ads are worth.
How much of that makes it back to the artists and people creating videos for Google’s platform?
Much like any creative endeavor, the platform/middleman almost certainly takes a big cut, and then what’s left is probably a Pareto-type distribution where a small number of top-viewed channels get most of the payout.
On the other hand, if you don’t pay for it and don’t watch any ads, then it’s likely that the people creating the videos get nothing as a result.
Enough that they find it worth their time to do so, rather than… not.
I am really let down every time I see people giving in to the NSA’s pet project for circumventing web traffic encryption that is Cloudflare. I hope I’m not the only one extremely alarmed by Cloudflare’s origins and their obvious implications if you have seen the Snowden leaks.
A very simple VPS on Hetzner with 40GB disk space and 20TB traffic (easily enough for the author’s case) costs about 4,51€/month. It also includes DDoS protection. If you do not need custom server software, 10GB of web hosting costs 2,09€/month.
Yeah the “move to CloudFlare because it’s free!!!!” caused me to eyeroll really hard, and then I recognized that the problem is just going to grow. Can we stop supporting CloudFlare? Please?!?
Could you recommend an alternative CDN that offers comparable low latency worldwide, for a cheap price?
I guess the first thing is realizing that you probably -don’t- need that in the first place. But if you do – personally, I’m using CloudFront for some of my things that /do/ require a bit more caching and distribution, and I fit neatly into the free tier for that. I think when CDNs do become a concern, a paid one is probably worth it.
I don’t know if Vercel is a Cloudflare CDN reseller or if they’re an AWS CDN reseller, but their free tier offers 100GB of bandwidth a month. I switched from a VPS to Vercel when UNIX sysadmin stopped being a fun hobby for me.
Would you have a good resource where I can educate myself about the relationship between cloudflare and the NSA?
HN history search results or lobste.rs search results.
The first link in the lobste.rs search results makes the clearest accusations, especially in the last two sections of the page. These are clearly stated and link to sources. They rhyme with other accusations. See the earlier history on Wikipedia: Room 641A, PRISM, and e.g. the “SSL added and removed here” sketch.
This is a weird place to make that case. The OP is just hosting a public static website with a few technical articles. Unless you’re suggesting that the NSA is man-in-the-middling this encrypted traffic to some nefarious purpose, I don’t see the concern.
I mean, we know for a fact that the NSA is scooping up literally all the traffic they can for nefarious purposes, soooooooo.
The NSA’s data collection programs, and any nefarious uses, are wildly off topic in a discussion of static, public website hosting on Cloudflare.
If you can make the connection to have it be on-topic, by all means, please go for it.
Lobsters does not have downvoting. This is a feature. https://lobste.rs/about#ranking
I see. I can get behind that.
How do I set that up to automatically rebuild when I push to git?
It’s pretty easy with an actual VPS (which is not much more expensive). Push to the VPS, have a post-receive hook there which forwards to your actual git host and does a deploy.
The higher web hosting tiers even have SSH access, making it trivial to set up push hooks. FTP always works.
Wayland, the protocol, is actually pretty nice and well-defined. The problem is that it defines too little, so whether “Wayland is pretty good” heavily depends on the compositor of choice. Every compositor has to bring along around 10,000 SLOC of boilerplate and support dozens of “proprietary” extensions (proprietary in the sense of “follow the most popular compositors and libraries”, like wlroots and sway, which define quite a few custom extensions) for basic stuff. Wayland itself is merely a very thin layer for basic client and buffer management, because the designers were trying to accommodate every possible way a GUI might be implemented apart from the default rectangular forms we are used to.
X may be large and dated, but at least it comes with everything you mostly need and an X window manager does not need to reinvent the wheel.
The Wayland team, in my opinion, really dropped the ball in this regard. Wayland could have been the incubator for a proper Linux desktop with e.g. native color-space support, native high-DPI, a vector-based model (all-rendered was a step back imho), even better network transparency for fully remote access, and plenty of other things that are missing in X. Instead we got this mess.
It would have been much better to keep the “freeform” approach and allow crazy compositors/window managers, with the burden on the special cases to opt out of certain default assumptions rather than the opposite. To give a concrete example, if your compositor/window manager does not allow spawning a window at the top or in the center, simply ignore the request or print a warning (which could be reflected nicely in a good API). With Wayland, window-placement requests are simply not supported by default and you instead have to resort to one of the said proprietary extensions. This just leads to a horrible monoculture, as if we hadn’t learnt anything from the browser wars.
And we wouldn’t have needed standardization, just a proper foundational library by Freedesktop.org with well-defined interfaces that encourage reimplementation, if wanted, in a server-client model (i.e. the compositor/window manager being a client to a proper server with a broad, well-defined API), and not a util library like wlroots that sits on top of a thin API. Such a reimplementation would still be at most as complex as implementing a single bloody compositor in Wayland!
It is an antipattern that libraries like wlroots have to exist! If you replace wlroots and sway above with WebKit and Chrome, my point should become even clearer.
This seems like Fossil. I guess they want to have their own thing to be unique.
Keep in mind that they are 100% compatible with the git data format, so it’s just a much simpler frontend everyone is free to use without limiting anybody else within the git ecosystem.