So apparently a subset of MathML was extracted, named “MathML Core”, and is now generally available for use! This is news to me! I’ve been looking at MathML every couple of years, and walked away each time as I wasn’t a fan of adding heavy runtime polyfills like MathJax. But it seems now you can just use the thing?
What is currently the recommended shorthand syntax for authoring math and lowering it to the MathML Core subset?
AFAIK https://katex.org is still the most popular.
I’d love to see an https://asciimath.org backend translating directly to MathML, though.
For my CMS I use Temml, since KaTeX had a bunch of bugs with MathML-only mode.
From the project description of Temml, it seems it’s basically the MathML export code ripped out and improved upon, so it makes sense that it works a bit better.
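For anyone wanting to try this route, here’s a minimal sketch of MathML-only output with KaTeX, assuming its documented renderToString API and output option (I haven’t checked Temml’s equivalent here):

```typescript
import katex from "katex";

// Render a TeX expression straight to a MathML string (no HTML layout spans),
// which is what you'd ship to browsers that implement MathML Core.
const mathml: string = katex.renderToString(
  "\\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}",
  {
    output: "mathml",    // emit a <math> tree instead of KaTeX's HTML output
    throwOnError: false, // render the raw source instead of throwing on parse errors
  },
);

console.log(mathml); // "<math ...>…</math>"
```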
Following a sudden rant I wrote on a recent submission about IPv6 (https://lobste.rs/s/sm7pk7/ipv6_transition#c_vvdzob), I thought this article might interest y’all.
Companies that hoarded a bunch of IPv4 addresses have no incentive to transition; if anything, the reverse: they can profit from the monopoly they are slowly building. The Internet becoming less and less decentralized isn’t helping, and again, companies have no incentive to reverse this trend; why bother, if you’re the only kingdom and everyone has to pay you to access essential services? That’s a reasonable business plan.
On another note, IPv6 is nice… but it’s mostly a patch over the broken paradigm that IP is… Top-down networks are nice when your computers are mainframes, but when everything moves at high speed everywhere, it fundamentally becomes a huge pile of patches, hardware and software stacked up to counteract this unnatural behavior…
Instead, why wouldn’t we build networks from the bottom up? Why should we stick with committee-issued IPs when we have hardware able to handle cryptographic keypairs? Why can’t we have software that communicates with any other software without even thinking about what logical or physical layer they’re on?
IP is but a broken paradigm pushed by telecom companies trying to sell phone calls at a premium, and we kept it because of history… but that didn’t stop research before and after ARPANET…
I’ll stop this weird hot take on a good article by just giving some resources to dig further if you’re curious: https://en.m.wikipedia.org/wiki/Recursive_Internetwork_Architecture https://www.notion.com/blog/louis-pouzin https://ouroboros.rocks/
I don’t have much hope that, without politics getting involved, we’ll move to something decent; companies and industry players don’t really care.
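To make the keypair idea a bit more concrete, here’s a toy sketch of deriving a node address from a public key instead of getting one from a registry. This is my own illustration, not any specific protocol, though cjdns and Yggdrasil derive addresses in a similar spirit:

```typescript
import { generateKeyPairSync, createHash } from "node:crypto";

// Generate an Ed25519 keypair; the public key itself becomes the identity.
const { publicKey } = generateKeyPairSync("ed25519");

// Derive a fixed-size address by hashing the DER-encoded public key.
// Ownership of the address is provable by signing with the private key,
// so no central registry has to allocate it.
const der = publicKey.export({ type: "spki", format: "der" });
const address = createHash("sha256").update(der).digest("hex").slice(0, 32);

console.log(`node address: ${address}`);
```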
Protocol evolution, deployment, and lock-in is a fascinating interplay of economics, historical path-dependence, and technical capability. The “network effects” of communication protocols have such massive scale and power, and yet few seem able to think clearly about them.
What would it take to re-architect the Internet? How can you build an Ouroboros or RINA system at any scale? It’s a heck of a thought experiment.
Alloy is a language for describing structures and a tool for exploring them. It has been used in a wide range of applications from finding holes in security mechanisms to designing telephone switching networks.
TIL
OpenSSL reported that local side-channel attacks (i.e., ones where an attacker process runs on the same machine) fall outside of their threat model.
[…]
Taken at face value, this means that OpenSSL is not fit for purpose if you have any untrusted code on your machine, i.e. any code that you cannot be sure not to be malicious. This includes any third-party code that could contain a Trojan, or any code that has a bug that allows it to be hijacked.
I mean, what program does protect itself from other programs that can potentially write or even read its memory? I’m used to distros with AppArmor/SELinux that kind of enforce boundaries, AFAIK, but a program can’t expect to shield itself from something that can fiddle with its bits, can it?
It’s not a “side-channel attack” if an attacker can directly do this, and that’s not what the article is about.
It would now be interesting to see a comparison against TCP+TLS+HTTP/1.1 under these circumstances…
According to the paper’s conclusion, you should see results similar to HTTP/2, given that most of QUIC’s reported issues seem to be too many kernel<->userland crossings and not using the kernel’s available UDP optimizations, whereas TCP stacks are already quite fast and mostly contained in the kernel.
That makes sense, thank you!
Ah damn it. My 12th Gen Intel Framework 13 has been working perfectly for almost 2 years now, and now I wish something would break soon so I have a good excuse to get that screen… :D
You can buy the screen and add it to your 12th Gen Framework 13 :P And reuse the old one as a regular screen with one of those cheap eDP-to-DisplayPort controllers!
Reminds me that a little company with not many users and a pretty low-complexity application called Figma ran on top of one PostgreSQL server until recently, eh.
Why do we even have to have these discussions and make these decisions? There should be an abstract interface for converting bytes to bitmaps+metadata, with implementations for various formats. Content/format negotiation is already present in the HTTP protocol. Servers offering still-uncommon formats can transcode on the fly, or provide an alternative download as a fallback if a given format is not accepted by the client. Users and providers willing to install a more advanced codec could benefit from it, and others will just use something more standard. Such an ecosystem would change smoothly on the basis of supply and demand, not on the basis of bureaucratic decisions and agreements by a small group of people.
Browsers already support that (https://developer.mozilla.org/en-US/docs/Web/HTML/Element/picture); you can implement it on your server to serve other sources “on the fly” if you want.
Thing is, browsers need to support an “advanced codec” to be able to use it, which is why this discussion and these decisions are important.
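As a rough sketch of that server-side negotiation in plain Node.js (the toAvif transcoder and cat.jpg file here are placeholders I made up; in reality you’d call something like sharp or a wasm codec):

```typescript
import { createServer } from "node:http";
import { readFileSync } from "node:fs";

// Placeholder transcoder: a real server would convert JPEG -> AVIF here.
const toAvif = (jpeg: Buffer): Buffer => jpeg;

createServer((req, res) => {
  const original = readFileSync("cat.jpg");
  const accept = req.headers.accept ?? "";

  if (accept.includes("image/avif")) {
    // The client advertised AVIF support, so serve the more advanced codec.
    res.writeHead(200, { "Content-Type": "image/avif" });
    res.end(toAvif(original));
  } else {
    // Fallback for everyone else: something more standard.
    res.writeHead(200, { "Content-Type": "image/jpeg" });
    res.end(original);
  }
}).listen(8080);
```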
You mean, like AmigaOS’s datatypes? (1992, AmigaOS 3.0)
Yes, it is nothing new. Develop against abstract interfaces instead of particular implementations… it is a common concept and good practice, quite an age-old truth, related to modularity. Thanks for the Amiga link, I will look at it.
However, this is not just a matter of design or technology; it is quite a bit broader, rather a social thing. The question is whether certain people should make decisions for others, or whether they should just maintain the environment and laissez faire.
To remove the need for server support for various formats, browsers should have something like this:
With the MIME type of the image and a link to a wasm/js decoder (with some standardized API for putting in raw data and getting back image data), every browser would support any format, either using a built-in decoder or the provided wasm/js fallback, without each website having to do a lot of server-side or JS nonsense to do it manually.
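A hypothetical shape for that standardized decoder API, to make the idea concrete; every name below is invented for illustration, as no such browser interface exists today:

```typescript
// What a browser-standardized pluggable image decoder could look like.
// The browser would fetch the decoder module declared by the page, sandbox it,
// and call decode() whenever it encounters the corresponding MIME type.
interface PluggableImageDecoder {
  readonly mimeType: string; // e.g. "image/my-new-format"
  decode(encoded: Uint8Array): Promise<{
    width: number;
    height: number;
    pixels: Uint8ClampedArray; // RGBA, 4 bytes per pixel
  }>;
}

// A page might then advertise it roughly like:
//   <img src="photo.mnf" decoder="https://example.com/mnf-decoder.wasm">
```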
https://www.youtube.com/watch?v=VJaa1Le4W7c
Interesting! I’m curious why in the simulation I’m being charged so little when I do 100k reads on a single 1GB file in my “bucket”. If I take Backblaze, for example, even with free egress worth 3 times the bucket’s size, I should be charged around ~1k dollars.
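(To spell out that arithmetic, assuming Backblaze’s ~$0.01/GB egress rate: 100,000 reads × 1 GB = 100,000 GB of egress, and the free allowance of 3 × 1 GB stored barely dents that, so roughly 100,000 GB × $0.01/GB ≈ $1,000.)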
Also, I think you might be interested in Scaleway’s object storage offering; it sits around ~€0.0146/GB/month with a €0.01/GB egress fee. (Disclaimer: I work on this product.)
Oof. I remember looking at the egress charge for Backblaze and thinking it was a weirdly small fraction of a cent. It turns out I forgot I was representing “operations” as “millions of operations”, and thus was massively discounting egress across the board. The conclusions to draw here are wildly different now that egress is correctly computed. Thanks!
I don’t see a minimum charge on Scaleway, so works for me! Added to the table.
Wow, the “On your machine in just ~60 seconds” really works. Clone the repository, run zig build run-deferred-rendering, and it’s running!
I was hoping for a miracle, but no, it doesn’t work on NixOS:
We have NixOS docs here: https://machengine.org/about/nixos-usage/
I get the same and I’m a bit confused because I thought Mach was an alternative to GLFW?
The differences between mach-core and GLFW are:
We expose a unified WebGPU API, while GLFW gives you the underlying Vulkan/Metal/Direct3D/OpenGL API and you’re left on your own for how to use those, unify them, compile shaders for each, etc., whereas it’s a single consistent experience with Mach.
We give you multi-threaded rendering by default, without you having to really think about it.
We are working on a WebAssembly/browser backend; with GLFW you can only get this through Emscripten shenanigans, and you’d have to use another rendering API (OpenGL ES / WebGL), while we’ll always just use WebGPU everywhere.
We will support Android/iOS in the near future; GLFW doesn’t support them.
GLFW is just one backend of mach-core; we may replace it later, but it’s not a high priority for us as it gets the job done for now.
Then there’s the whole engine part we’re working on separately.
Gotcha, so even mach-core is a somewhat higher-level abstraction than GLFW. That makes sense, thanks for the explanation!
As far as I understand, Mach’s goal is much bigger than being a GLFW alternative.
GLFW’s goals are “creating windows, contexts and surfaces, receiving input and events”.
While Mach is targeting the full game engine space.
(Mach currently uses GLFW for the windowing part and maintains Zig bindings for it: https://github.com/hexops/mach-glfw)
I can’t find the usability/UX of Paint.NET anywhere in the Linux or macOS world, and it makes me really sad / miss Windows often.
I’ve heard Pinta (a cross-platform fork) is good but don’t use it myself.
Thank you. Yeah, I’ve also heard of it, but sadly it’s nowhere near Paint.NET and it was buggy AF last time I tried 😢
Annoying that I can’t copy and paste from the page.
Oh? That’s odd. I haven’t done anything to prevent that, or anything weird to the HTML. What OS did this happen on?
Same 🧐 (Firefox 110.0 on macOS 13.2.1)
Oh, I remember what’s going on! I’m using the selectable/unselectable CSS attributes to get reasonable behavior, i.e. article text is selectable but not the space around it. IIRC those attributes aren’t totally standardized yet, and I may need to add some variants to support all browsers. So far I’ve just been using Safari and WKWebView, as this layout has mostly been used in an app, but now that I’m publishing as a blog I need to test more broadly! Thanks for the bug report, I’ll fix it soon.
They recently released their runtime’s sources, so it seemed like a good time to share here: https://github.com/socketsupply/socket
We are all experiencing what happened when politicians regulated the web.
[…]
The industry should fix email interoperability before politicians do. We will all win.
This is the result of the almost total absence of politics in the IT sector, or of politics driven by this oligopolistic lobby money.
There is nothing new in this imho; the industry is very young, people understand it less at the moment, and it is difficult to have a popular opinion on what is good for society, which leads to poor political ownership and a lack of interest in regulating it.
Yes. There’s this tidbit just under that one:
We are all experiencing what happened when politicians regulated the web. I hope you are enjoying your cookie modals; browsing the web in 2022 is an absolute hell.
That’s not exactly the politicians’ fault. They’re not serving malware through ads. They DID go and try to regulate things. They just made an off-by-one error: instead of opt-in, they made cookies opt-out.
Actually, the GDPR explicitly requires cookies and all forms of tracking to be opt-in.
It’s just that it takes so long for the courts to punish the websites that they can still keep the old opt-out solution until the courts rule on their specific case.
But at least Google now has cookies and tracking opt-in.
Are you sure? I was certain that everybody was pissed about it.
Also, they said “legitimate requirement” meaning “I need to keep this info by law”, but the operators took it as “I have a legitimate desire to spy on people in order to satisfy my master, the VC”. That’s been challenged recently, I think, but everyone still does it.
If everyone was pissed, I must have missed it. I’ve implemented the GDPR since it passed into law in 2016 (the law had a two-year tolerance period during which non-compliant sites would not be fined, which is why everyone remembers the introduction as 2018).
And during this entire time it was clearly obvious from the text that all tracking had to be opt-in.
You’re spot on with your analysis of the “legitimate need”, though. There’s only a handful of legitimate needs that are not caused by laws enforcing storage, e.g. storing IPs in logs for a few days to measure DDoS attempts.
But it’s obvious companies are trying to abuse that definition for their own benefit.
Tailwind & co kinda target something the author seems to forget: many web developers today are deeply entrenched in “component-based frameworks” that kinda prevent such repetition of “atomic CSS snippets”, since, well, you develop a “component” that gets repeated as needed.
Classic class-based CSS already does this, ofc, but only for CSS + HTML; when you try to bring this component/class system to JavaScript, you often end up with something like React or Vue or idk what’s trendy today.
And then you have 2 competing “component” systems: CSS classes and “JS components”.
Tailwind kinda allows you to merge them again, reducing overhead when having to work with those JS systems.
I personally prefer the CSS Modules approach to solve this problem, rather than bringing in a whole framework like Tailwind. CSS Modules give you the isolation of components but don’t force you to leave CSS itself behind, and when compiled correctly can result in a much smaller CSS payload than something hand-written. Sure, you lose the descriptive and semantically-relevant class names, but if you’re building your whole app with components, you really don’t need them.
That said, if I didn’t use something like React, or I just needed CSS that followed a similar modular approach, I guess I would reach for Tailwind. But realistically, CSS is so lovely these days that you really don’t need much of a framework at all.
I find Tailwind much easier to use than CSS Modules when you stick to the defaults.
CSS Modules is an abstraction at a lower level than Tailwind. The former can do everything Tailwind can do in terms of the end result. The latter provides really nice defaults/design tokens/opinions.
Definitely, and that’s why I prefer it. The way I normally organize my applications is by using components, so I don’t really need a whole system for keeping all the styles in order. Ensuring that styles don’t conflict with one another when defined in different components is enough for me. But if I was doing something with just HTML and I didn’t have the power of a component-driven web framework, I probably would try out Tailwind or something like it.
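For readers who haven’t used CSS Modules, a minimal sketch of the isolation being described here; the file and class names are my own example:

```tsx
// Button.module.css would contain:
//   .button { padding: 0.5rem 1rem; border-radius: 4px; }
//
// The bundler rewrites ".button" into a unique hashed class name at build
// time, so styles declared in one component can never clash with another's.

import styles from "./Button.module.css";

export function Button({ label }: { label: string }) {
  // styles.button resolves to the generated, collision-free class name.
  return <button className={styles.button}>{label}</button>;
}
```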
Some try to trim the “top” part of the hourglass too, with projects like https://en.m.wikipedia.org/wiki/Recursive_Internetwork_Architecture
Or its, imho, more advanced and mature (while a bit divergent) iteration: https://ouroboros.rocks/docs/concepts/problem_osi/
My experience using sql.js was not awesome. It runs out of memory so quickly when you do naive queries like SELECT * FROM VALUES (...), ... where the VALUES are a bunch of JavaScript objects you want to inject in order to perform SQL on them.
I just wanted to be able to do basic grouping, filtering, and joining on data in memory using SQL syntax. But after a few hundred rows, sql.js would always run out of memory.
I switched to alasql which can handle the 80MB of data I was trying to work with. It’s not as complete as SQLite but at least the basics work.
There must be more intelligent ways to use sql.js that work around the memory limits I had. They just weren’t obvious to me.
The whole post is about not running sql.js with the memory backend thanks to James’ work 😅
Yes that’s true. My point is that I wasn’t able to just use it on its own very easily. There are clearly other people using it correctly/more efficiently than I was able to figure out.
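For what it’s worth, the pattern that usually avoids the giant-VALUES blowup is loading rows through a prepared INSERT and then querying a real table; a minimal sketch, assuming sql.js’s documented initSqlJs/Database API:

```typescript
import initSqlJs from "sql.js";

type Row = { name: string; amount: number };

async function loadAndQuery(rows: Row[]) {
  const SQL = await initSqlJs();
  const db = new SQL.Database();

  db.run("CREATE TABLE data (name TEXT, amount REAL)");

  // Insert row by row with a prepared statement inside one transaction,
  // instead of building one huge "SELECT * FROM VALUES (...)" string.
  db.run("BEGIN");
  const stmt = db.prepare("INSERT INTO data VALUES (?, ?)");
  for (const r of rows) stmt.run([r.name, r.amount]);
  stmt.free();
  db.run("COMMIT");

  // Now grouping/filtering/joining works as plain SQL over the table.
  const result = db.exec("SELECT name, SUM(amount) FROM data GROUP BY name");
  db.close();
  return result;
}
```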
I like the design goals. Even in the context of Flutter, they maintain a certain KISS philosophy.