Iām working on the OCaml compiler. The runtime code in C has been similarly stuck with C89 for MSVC support, but we recently merged the āMulticore OCamlā project, which relies on C11 atomics. For now the plan is to stick with C11-aware compilers and let our MSVC users hope that Microsoft adds support for atomics quickly enough (I hear it has been announced as a feature coming soon for some years now). I spend most of my own time outside the C runtime, but I still find it super relaxing to be able to declare variables wherever, and in particular for (int i = 0; ...). Yay!
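For readers who have not had to live with C89, here is a tiny illustrative sketch (not an excerpt from the OCaml runtime) of the two C99 conveniences mentioned above:

#include <stdio.h>

int main(void) {
    puts("a statement first");

    int declared_later = 3;                    /* declaration after a statement: fine in C99, an error in C89 */

    for (int i = 0; i < declared_later; i++) { /* loop-scoped declaration: also C99-only */
        printf("i = %d\n", i);
    }
    return 0;
}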
MSVC, GCC, and Clang all have pretty good support for C++20, and the gaps in C++17 support are so small that Iāve not encountered them in real-world code that has to build with all three. I believe XLC and ICC are in a similar state, though Iāve not used either. C11 atomics are a port of C++11 atomics to C, with some very ugly corner cases. I donāt know how much effort it would be to make the OCaml compiler build as C++, but youād have at least three production-grade compilers to choose from.
C++ is a totally different standard, and code can be safe in C but unsafe in C++, and vice versa (IIRC there was a malloc difference such that Cās interpretation of the code is safer, and a zeroing-related bug too; itās been about half a decade since Iāve seen them come up and apparently my google-fu does not exist anymore). The languages are close enough lexically to trick you into thinking they are compatible, but they are not. Such a change is likely to break semantics unnoticeably, meaning OCaml might in later years be found to be less secure, safe, and stable.
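Two commonly cited divergences (not necessarily the exact ones the parent is thinking of) where the same source means different things to a C compiler and a C++ compiler:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* malloc returns void *, which converts implicitly to int * in C.
       In C++ this line is a compile error unless a cast is added. */
    int *p = malloc(10 * sizeof *p);

    /* Character constants have type int in C but char in C++, so the same
       expression evaluates differently under the two languages. */
    printf("%zu\n", sizeof('a'));  /* typically 4 when compiled as C, always 1 as C++ */

    free(p);
    return 0;
}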
The runtime code in C has been similarly stuck with c89 for MSVC support
MSVC has supported C17 for a few years now.
Actually the ādeclarations after statementsā feature I mentioned was already available in Visual Studio 2013 (which still didnāt have all of C99, of course), but the problem is that when VS2013 was released, most Windows machines around did not have it installed. I donāt use Windows myself, but my understanding is that some OCaml users sell software that packages/extends the OCaml compiler, and they want to be able to compile their software on their clientsā machines, and those may only have old versions of MSVC / Visual Studio available.
Long story short: for years Microsoft has been lagging a decade behind on C compiler support, and Windows users routinely use decades-old operating systems (even enterprise users, thanks to long-term-support from Microsoft), and the combination of the two creates incentives to stick with super-old versions of C.
(Again, now we need C11 atomics, and MSVC support for C11 atomics is⦠not yet released? Well, itās only been 11 years now for the most important extension of C in the last two decades⦠So maybe Microsoft will support C11 atomics in a year or so, and then itās only ten more years of Windows users complaining that the software doesnāt build on their own systems.)
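For context, what the multicore runtime now needs is the standard <stdatomic.h> interface from C11. A minimal sketch (illustrative only, with made-up names, not actual OCaml runtime code) of the kind of code an atomics-less MSVC cannot build:

#include <stdatomic.h>
#include <stdio.h>

/* A shared counter with explicit memory ordering, as introduced by C11. */
static atomic_long minor_collections = ATOMIC_VAR_INIT(0);

void record_minor_collection(void) {
    /* Relaxed ordering is enough for a statistics counter. */
    atomic_fetch_add_explicit(&minor_collections, 1, memory_order_relaxed);
}

long read_minor_collections(void) {
    /* Acquire so readers observe writes made before the increment. */
    return atomic_load_explicit(&minor_collections, memory_order_acquire);
}

int main(void) {
    record_minor_collection();
    printf("%ld\n", read_minor_collections());
    return 0;
}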
Who owns the web?
The users.
Hence, if someone finds that a given standard puts users at unreasonable risk, the right thing to do is change it, even if that creates discomfort for developers (with real empathy for them).
The real question here isnāt whether it is hard for developers or fair to them. The question is how high the risk is of leaving things as they are.
The users should own the web, but in the current ecosystem (ad-supported, walled gardens, etc.), the people paying for a large portion of the web are the browser vendor(s) and the major sites that can influence them (Facebook, Amazon, etc.).
Very good writing.
For me, I wish Zig would take a step towards being friendlier to scientific programming:
Another option, based on Pluto.jl, is Neptune.jl. It removes the always-on interactivity, which I find to be a better option for larger documents with more operations. For me, Pluto.jl is the interactive next step from Jupyter, while Neptune.jl is a real replacement for Jupyter.
I notice that Neptune.jl has ripped out not just the reactivity, but also the dependency analysis. The sane file format is still a big improvement over .ipynb, but there is another question to which Neptuneās answer is not as good as Jupyterās:
How do I notice that Iāve run cells out of order (i.e. my session works now, but will fail if I rerun the notebook)?
It would have been nice if Neptune had kept the dependency analysis: then it could be like Jupyter, but also highlight the cells that are now troubled because they depend on a cell whose value you just changed. In other words:
Inspiration: Mercurial/evolve makes the commit graph easier to shape by allowing intermediate inconsistent states. That lets you, for example, rebase commits B-D out of the middle of a branch:
For me, I donāt want any dependency analysis. This is just a script like any other script, with some HTML syntactic sugar to be able to produce plots and other visualizations within the document.
The code should be written to run serially like a native Julia script.
(Disclaimer: Iām a Microsoft employee.)
The way to think about this is that there are multiple reasons for a BSOD: (1) a bug in Windows itself, (2) a bug in a driver, or (3) a hardware fault.
The reason people disagree over stability is that (2) & (3) are much more likely than (1), so crashes can plague particular configurations while leaving others completely unaffected. Itās very hard for mortals to pinpoint a driver or hardware bug, so all users see is the result, not the cause.
The part that always frustrates me a little is people who overclock devices, causing hardware errors, and blame the result on Windows. Itās not that Windows is worse than any other kernel, itās that the people who overclock hardware all seem to run Windows.
My impression is that the Windows kernel is really top notch these days (as opposed to, say, the drivers, display manager, etc., etc.).
I agree. The one thing I think Windows must improve is its modularity, letting the user choose which applications and services are installed.
There are too many services and features Iād like to be able to remove (or better, choose not to install). There was some talk about a Windows mini kernel; I want that. I want efficiency.
Have you tried Windows Embedded? Server Core? WinPE?
The guts of Windows are fairly modular and composable. The issue is that each of those services provides something, so removing them will affect applications or scenarios in ways that may not be obvious. The monolithic nature of Windows is really a result of trying to ensure that programs work, and work the same way, on each machine.
Personally I do a lot of command line development, so I thought Server Core would be an interesting option. Hereās what happened:
It sounds like a mess. Maybe I should take back my words :-).
One of the issues of Windows is the baggage it carries. It is time to put all the prehistoric compatibility under a VM and be done with it.
Moreover, I get what you say, and still Iād be happy to have a choice about what to install. Windows is bloated. 30 GB for an OS is too much. The RAM consumption is too much. Performance is getting better, and hopefully one day weāll have a file system as fast as Linuxās and the margin will be negligible.
Iād love to pay for a gaming build of Windows that only includes necessary components and presumes that Iām competent enough to steward maintenance of my own machine.
If you want a gaming build of Windows, you can buy that. It even comes bundled with a computer optimised for running it.
I worked as a repair tech in a computer shop for about three years; this was over ten years ago so most of my experience is with XP, Vista, and 7. In this time I saw a lot of BSODs.
In my experience the overwhelming majority of BSODs are caused by faulty hardware or driver bugs. For example, the Dutch version of AT&T (KPN) handed out these Realtek wireless dongles for a while, but after some update to XP they caused frequent BSODs. Iām going to guess this was Realtekās fault and not Microsoftās, and it just happened to work prior to this update (they never released an update to fix this; they also never made Vista/7 drivers). Plenty of customers were quick to blame Microsoft for this though; in some cases, even after I explained all of this to them, they still blamed Microsoft.
By far the most common problem though was just faulty memory. By my rough estimate it caused at least half of all problems, if not more, during this time. The rest were a combination of other hardware faults (mainboard, hard drive, etc.) or bad (often third-party) drivers.
No doubt BSODs happen due to Windows bugs, but itās a lot less often than some people think. The biggest issue was actually the lack of tooling. Windows leaves small āminidumpā core dumps, but actually reading them and getting an error isnāt easy. I actually wrote a Python script to read them all and list all reasons in a Tkinter window, and this usually gave you a pretty good idea what the problem was.
Even if I despise Windows nowadays, I agree with you: BSOD stability isnāt a problem anymore. There are a lot of problems, but kernel stability aināt one.
I think it is fair that Windows still gets some criticism. A microkernel would not suffer a systemic failure from a buggy audio driver, for instance. Linux is also another insane system, where driver code for dozens of architectures is effectively maintained on a budget, but I rarely see any crashes on the commodity development box that corporate procured. My Dell laptops running Win7 and Win10 have all crashed frequently.
I think some of the stability that you see on Linux is that the drivers are upstreamed, and so face the same discipline as the rest of the kernel, whereas Windows drivers are often vendor-supplied, and potentially very dodgy. You can easily crash Linux with out-of-kernel-tree drivers, but there are only a few of those that are in common use.
Much of the audio stack in Windows runs in userspace. You can often fix audio driver crashes by restarting the relevant services. The troubleshooting wizard does this for you.
Linux and Windows are both moving to more device drivers in userspace. CUSE on Linux, for example, and Windows also has a framework for userspace USB drivers. Most GPU drivers are almost entirely userspace, for performance reasons: the devices support SR-IOV or similar and allow the kernel to just map a virtual context directly into the userspace address space, so you donāt need a system call to communicate with the device.
On the one hand itās a bit unfair to blame current Windows for earlier transgressions, but it is what it is.
Regarding your point (3): Iāve had it SO often that a machine in the 98āXP days would crash on Windows and run for a week on Linux, so I donāt really buy that point. Hardware defects in my experience are quite reproducible: āevery time I start a game -> graphics cardā, āevery time it runs for longer than a day -> RAMā, etc. Nothing like āit crashes randomly every odd dayā has ever been a hardware defect for me (except maybe RAM, and that is sooo rare).
I donāt think I have claimed Windows is unstable since Iāve been using 7 or 10 (and 2000 and XP were okish). But 98 (non-SE), Me, Vista, 95a and 95b were hot garbage.
The most useful feature of xmake is the abstraction layer it has. For me, a good build system behaves the same with any compiler (at least for the common compilation flags). So I want to be able to use OpenMP, floating-point precision (fast math), optimization level, etc., without worrying about which compiler is used. I think xmake is the only build system which provides this.
This is a really great project. It is the only build system which truly abstracts the compiler (or tries to do so).
The DistroWatch page on Elementary OS has links to several reviews of 5.0 and 5.1; there may be something for you there. The people who maintain DistroWatch are truly community treasures.
https://www.turris.com/en/omnia/overview/
Open Hardware, Free Software. The only blob is for 5GHz WiFi.
Now I just need some libre power line adapters.
This is great!
Any chance of having the UX of drawing with chalk on a board? Having free drawing + math would make it a real scribbling board.
After that, live collaborative editing :-).
Judging by the badges, there is/was meant to be a project called āNumGo+ā that would be the āNumPy for Go+ā, and which might be the home for that kind of stuff. But that repo has an init commit and nothing else.
There is, however, another project, GoNum, that does have the linear algebra stuff (but isnāt related to Go+).
Release Notes with no mention of performance and memory footprint? Arenāt those important to users and developers?
One of my most recent side-projects was a networked dice roller for my remote table-top sessions
Iām interested in seeing this.
(Donāt know if OP is the article author or if theyāre reading this).
Having recently switched to Windows myself, I do hope to have Windows support in neuron 2.0.
Somebody in fact is already working on it: https://github.com/srid/neuron/pull/586
Neuron itself will continue to be written in Haskell, but I do see the value of using a .NET language for straightforward cross-platform support!
I wonder how fast their sorting algorithm implementations are.
Could anyone link to other similar Toolkits?
Not the same thing, but in terms of hash algorithms the ones they offer are far from state of the art (at least in speed.)
A few years ago Microsoft proposed JPEG XR. It uses an integer compression algorithm, hence decompressing and compressing donāt introduce quantization errors.
I wonder why it didnāt get wide support (Microsoft granted free use of the patents).
There are some weird things in the specification.
A new open bracket will cancel any preceding unmatched open bracket of its kind.
This suggests that, for example, *foo and *bar* will get ācorrectlyā processed into *foo and <strong>bar</strong>. As the user, I would rather get a warning and be invited to escape the first star, because this is likely to be a mistake on my part. (The āimplicit cancellationā rule is not very Strict.)
The only form of links StrictMark supports is full reference links. The link label must be exactly one symbol long.
So you cannot write [foo](https://example.com), you have to write [foo][1]. Fine with me. But then āone symbol longā? [foo][1] is allowed but [foo][12] is not; the document recommends using letters above ten references, so [foo][e] is okay but [foo][example] is not.
I think that this limitation comes from trying to make it easy to parse StrictMark with fairly dumb parser technology. Honestly, while I agree that 10K-lines hand-written parsers are not the way to go for a widely-used document format, I would rather have a good specification that is paired with some tutorials on how to implement decent parsing approaches (for example, recursive-descent on a regex-separated token stream) for unfamiliar programmers, rather than annoying choices in the language design to support poor technical choices.
I totally agree. It would make much more sense to limit labels to a set of digits with no spaces (so [12] and [0001] are acceptable) than to a single symbol.
I agree. To make matters worse, the specification says āone symbol wideā. Sadly, āsymbolā does not have a strict definition when it comes to text encoding or parsing. The text can be UTF-16 encoded, where one symbol is actually 2 or more codeunits. Symbols might be language-dependent: a Czech or Slovak reader might consider āchā to be one symbol, a Dutch reader might consider āijā to be one symbol. UTF-8-everywhere fans might be dismayed to know that certain symbols are encoded as multiple codepoints by Unicode itself, so for example āŃĢā (cyrillic small letter yu with acute) looks, walks and sounds like one symbol, but it is encoded as the sequence U+044E cyrillic small letter yu followed by U+0301 combining acute accent.
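To make the ŃĢ example concrete, here is a small sketch (a deliberately naive UTF-8 scan in C) showing that the single visible symbol is four bytes and two codepoints:

#include <stdio.h>
#include <string.h>

/* Count codepoints in a UTF-8 string by skipping continuation bytes
   (those of the form 10xxxxxx). */
static size_t utf8_codepoints(const char *s) {
    size_t n = 0;
    for (; *s; s++)
        if (((unsigned char)*s & 0xC0) != 0x80)
            n++;
    return n;
}

int main(void) {
    /* U+044E (cyrillic small letter yu) followed by U+0301 (combining acute
       accent): rendered as one symbol, but encoded as two codepoints. */
    const char *yu_acute = "\xD1\x8E\xCC\x81";
    printf("bytes: %zu, codepoints: %zu\n",
           strlen(yu_acute), utf8_codepoints(yu_acute));  /* bytes: 4, codepoints: 2 */
    return 0;
}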
I think the closest thing to what the author intended is āgrapheme clusterā, roughly, whatever you can highlight as one unit of text using your cursor is your one symbol. Good luck implementing that in a parser though.
a dutch reader might consider āijā to the one symbol
Certainly in the context of computers, I think very few people would, if any, since itās always written as the two letters āiā and ājā. Outside of that, things are a bit more complicated and it has a bit of a weird/irregular status, but this isnāt something you really need to worry about in this context.
Thereās a codepoint for it, but thatās just a legacy ligature codepoint, just like ﬀ (U+FB00) for ff, ﬆ (U+FB06) for st, and a bunch of others. These days ligatures are encoded in the font itself and using the ligature codepoints is discouraged.
The text can be UTF-16 encoded, where one symbol is actually 2 or more codeunits
This has nothing to do with UTF-16, which is functionally identical to UTF-8, except that it encodes the codepoints in a different way (2 or 4 bytes, instead of 1 to 4 bytes). I donāt know what you mean by āone symbol is actually 2 or more codeunitsā, as thatās a Unicode feature, not a UTF-16 feature.
UTF-8 everywhere fans might be dismayed to know that certain symbols are encoded as multiple codepoints by unicode itself
Yes, and this works fine in UTF-8?
I think the closest thing to what the author intended is āgrapheme clusterā, roughly, whatever you can highlight as one unit of text using your cursor is your one symbol. Good luck implementing that in a parser though.
Most languages should have either native support for this or a library for it, and itās actually not that hard to implement.
They did mean ācodepointā though, as that is what is in the grammar:
PUNCT = "!".."/" | ":".."@" | "[".."`" | "{".."~";
WS = [ \t\r\n];
WSP = WS | PUNCT;
LINK_LABEL = CODEPOINT - WSP - "]";
You probably want to restrict this a bit more; thereās much more āwhite spaceā and āpunctuationā than just those listed, and using control characters, combining characters, format characters, etc. could lead to some very strange rendering artefacts. All of this should really be based on Unicode categories.
My main point is I can see how a naive implementation might use the built-in length function to check if something is one āsymbolā long, and it will fail in non-obvious ways for abstract characters that one might consider to be one character long.
Most languages should have either native support for this or a library for it, and itās actually not that hard to implement.
Except they donāt. Hereās an example: the following string consists of 16 grapheme clusters (including spaces), but anywhere from 20 to 22 codepoints.
ŠŃивеĢŃ ą¤Øą¤®ą¤øą„ą¤¤ą„ שָ×××Ö¹×
I invite you to use any of your tools that you think would handle this correctly and tell me if any do. And this example is without resorting to easy gotchas, like combining emojis āš©āš©āš¦āš¦ā.
My main point is I can see how a naive implementation might use the built-in length function to check if something is one āsymbolā long
Well in this case that would be correct as the specification says itās a single codepoint.
I invite you to use any of your tools that you think would handle this correctly and tell me if any do.
Searching for āgraphemeā should turn up a library. Some languages have native support (specifically, IIRC Swift has it, and I thought Rust did too but Iām not sure) and others may include some support in the stdlib. Like I said, this is not super-hard to implement; the Unicode specifications always make this kind of stuff seem harder than it actually is because of the way theyāre written, but essentially they just have a .txt file which lists codepoints that are āgrapheme break charactersā and the logic isnāt that hard.
Iām dreaming of Zig, but with operator overloading to make implementing mathematical expressions bearable. Anyone have experience with Odin?
I would take Juliaās style of multiple dispatch. Iād also be happy with vector / matrix / tensor data types built in.
Is there an option to verify the filesā integrity at the source and in the backup during the process? The troubling thing about backup is that it is automated. But what can guarantee that the video I last accessed 8 years ago is still a valid file?
I havenāt tested bupstash on Windows yet, but itās something I plan to make work; I suspect it might need some fixes first.
It also means you have to implement VSS support in bupstash, because backups on Windows without supporting the VSS features wonāt make any sense.
If worse is better, then much worse is much better. :-)
While technically (which is the way we are all oriented) it might not be a good solution, in practice WordPress has done much, much good for the world.
Think of the volume of knowledge shared through it. That by itself is an amazing blessing.
If less is more, too little might be too much. š