Yes, I use Clojure Spec and the Python library Hypothesis.
One thing I really miss from Spec when using Hypothesis is that Hypothesis’s data models are only useful in tests, whereas Spec’s can also be used for conforming and transforming data in the running program. So if you want something similar in the Python world, you could explore using Marshmallow schemas as data models for Hypothesis.
I was unable to view the site due to a cookie warning covering half the screen, with a chat button covering the close button of that warning.
The “chat me” button in the bottom right has the ability to notify. I may look into a way to disable that.
Having worked with TypeScript for a little under a month now, I’m not so convinced. I’m meeting a lot of friction where either there are no types for a library, which isn’t all that bad, or the types are incorrect and actually prevent you from using the library as it’s intended to be used. Combined with JavaScript’s incredibly weak dynamic types, the result is that even though TypeScript thinks your typing is sound, some library may take your array and silently turn it into an object without any errors being raised. During this month TypeScript has helped me once, when I mistyped a name as some existing but very similar name (project -> product). Other than that, the errors have turned up at run time, where TS holds no power.
Thus far I much prefer ClojureScript’s dynamic types to TypeScript’s somewhat ineffective static proofs. But that could of course change as I get further in.
Very surprising that the BSDs weren’t given a heads-up by the researchers. Feels like there would be a list at this point of people who can rely on this kind of heads-up.
The more information and statements that come out, the more it looks like Intel gave the details to nobody beyond Apple, Microsoft and the Linux Foundation.
Admittedly, macOS, Windows, and Linux cover almost all of the user and server space. Still a bit of a dick move; this is what CERT is for.
Plus, the various BSD projects have security officers and secure, confidential ways to communicate. It’s not significantly more effort.
Right.
And it’s worse than that when looking at the bigger picture: it seems the exploits and their details were released publicly before most server farms were given any heads-up. You simply can’t reboot whole datacenters overnight, even if the patches are available and you completely skip the vetting part. Unfortunately, Meltdown is significant enough that it might be necessary, which is just brutal; there have to be a lot of pissed ops out there, not just OS devs.
To add insult to injury, you can see Intel PR trying to spin Meltdown as some minor thing. They seem to be trying to conflate Meltdown (the most impactful Intel bug ever, well beyond f00f) with Spectre (a new category of vulnerability) so they can say that everybody else has the same problem. Even their docs say everything is working as designed, which is totally missing the point…
Wasn’t there a post on here not long ago about Theo breaking embargos?
Note that I wrote and included a suggested diff for OpenBSD already, and that at the time the tentative disclosure deadline was around the end of August. As a compromise, I allowed them to silently patch the vulnerability.
He agreed to the patch on an already extended embargo date. He may regret that but there was no embargo date actually broken.
@stsp explained that in detail here on lobste.rs.
So I assume Linux developers will no longer receive any advance notice since they were posting patches before the meltdown embargo was over?
I expect there’s some kind of risk/benefit assessment. Linux has lots of users so I suspect it would take some pretty overt embargo breaking to harm their access to this kind of information.
OpenBSD has (relatively) few users and a history of disrespect for embargoes. One might imagine that Intel et al thought that the risk to the majority of their users (not on OpenBSD) of OpenBSD leaking such a vulnerability wasn’t worth it.
Even if, institutionally, Linux were not being included in embargos, I imagine they’d have been included here: this was discovered by Google Project Zero, and Google has a large investment in Linux.
Actually, it looks like FreeBSD was notified last year: https://www.freebsd.org/news/newsflash.html#event20180104:01
By late last year you mean “late December 2017” - I’m going to guess this is much later than the other parties were notified.
macOS 10.13.2 had some fixes related to Meltdown and was released on December 6th. My guess is vendors with tighter business relationships with Intel (Apple, MS) started getting info on it around October or November. Possibly earlier, considering the bug was initially found by Google back in the summer.
Windows had a fix for it in November according to this: https://twitter.com/aionescu/status/930412525111296000
A sincere but hopefully not too rude question: Are there any large-scale non-hobbyist uses of the BSDs that are impacted by these bugs? The immediate concern is for situations where an attacker can run untrusted code like in an end user’s web browser or in a shared hosting service that hosts custom applications. Are any of the BSDs widely deployed like that?
Of course given application bugs these attacks could be used to escalate privileges, but that’s less of a sudden shock.
there are/were some large scale deployments of BSDs/derived code. apple airport extreme, dell force10, junos, etc.
people don’t always keep track of them but sometimes a company shows up then uses it for a very large number of devices.
Presumably these don’t all have a cron job doing cvsup; make world; reboot against upstream *BSD. I think I understand how the Linux kernel updates end up on customer devices but I guess I don’t know how a patch in the FreeBSD or OpenBSD kernel would make it to customers with derived products. As a (sophisticated) customer I can update the Linux kernel on my OpenWRT based wireless router but I imagine Apple doesn’t distribute the Airport Extreme firmware under a BSD license.
Is it just me, or is the page only using half of the screen width, making it quite hard to read on a mobile device?
Interestingly, my argument for why I don’t like macros goes along similar lines: if you have something repetitive that people use macros to work around, it’s probably a flaw in the host language.
I’m not saying they are bad, just that I don’t like them.
Languages that are fundamentally based on macros are obviously excluded.
Possible outcome: every project works around the problem in their own slightly incompatible way, and no-one bothers fixing the problem in the host language because it’s easy enough to work around.
I like macros as a way to cheaply prototype proposed language changes. I don’t want to see them in production code; debugging from the output of a (nonstandardised) code generator is awful but still easier than debugging from the input, which is effectively what the choice between code generation and macros boils down to.
This has, by the way, happened with Rust’s “try!()” (which, after some modifications, became the “?” operator).
Reminds me of Rust’s primary use of macros: emulating the varargs that the language lacks.
My cardinal rule about macros is that if I have to know that something is a macro, then the macro is broken and the author is to blame.
Rust also messed up in that regard by giving macro invocations special syntax, which acted as an encouragement to macro authors to go overboard with them because “the user immediately sees that it is a macro” – violating the cardinal rule about macros.
Yup, the alternatives are duplication/boilerplate or external codegen until the language catches up. Macros are a decent way to make problems more tractable in the short term (unless you’re in a wondrous language like Racket), or even to prototype features before they are implemented in the full language. I’d love to see more metaprogramming with a basis in dependent types, but alas, there’s still lots of work to be done before that is workable for the average programmer.
Sure, that’s why I said they aren’t bad, I just don’t like them.
On the other hand, I also don’t have any problem with codegen over macros; it’s basically the same thing at another phase.
Say this flaw only becomes obvious a couple of years after the language’s release. In that case the fix may break some subset of existing code, which is arguably worse than including macros in the language. I don’t know where I want to go with this strawman-like argument, other than to say that language design is hard, and macros let the users of a language make up for the designer’s deficiencies.
I totally appreciate that. I just don’t see “does the language have macros” as the issue people make it. For example, languages with very expressive metaprogramming systems like Ruby have purposefully not included macros and are doing fine.
Macros are often an incredibly complex and problematic fix for this, though. Just the patterns list of the Rust macros book is huge and advanced: https://danielkeep.github.io/tlborm/book/pat-README.html
(Other languages have nicer systems, I know, but the issue persists: textual replacement is a mess)
I totally see their place; for example, we couldn’t define a type-checked println!("{:?}", myvalue) in Rust proper without adding a lot of additional things to the language.
A coworker once pointed out to me that one of the big dangers of putting programs in $HOME like this is that now a malicious program can easily replace your installation of node or ghc or whatever.
I don’t know how worried I am about this, given that the attacker would already have to have home access. But something to think about
If anything it’s proof that file systems aren’t a great abstraction for organizing code that needs to be executed
A malicious program can do this by modifying $PATH in the appropriate startup shell script (you have audited the which command, right?).
So what is the appropriate abstraction for organizing code that needs to be executed?
The problem is more that ideally a user should not have both write and execute permissions to a location, such as $HOME or /tmp.
Under UNIX, a noexec flag on the mount solves a lot of problems, and since Windows XP there have been Software Restriction Policies (including by path) to achieve the same effect.
The result: a lot of malicious software simply stops working.
…or you create a separate mountpoint for your code, such as /usr/src or $HOME/stuff or whatever your preference is.
Yes.
Malicious software (for now) just pokes the common locations such as /tmp or $HOME and doesn’t search everywhere for possible write/exec points. Setting these locations up to stop this can only be a good thing.
Of course this is a kind of security by obscurity but it is free and not intrusive.
After reading that, I wonder why no one has started a fork yet. Perhaps if someone does, people will quickly join.
Most people who could & would implement a fork, use PureScript instead.
Because it is very hard and takes a lot of time I’d wager. Few have the time, money or drive to do such a thing.
There’s not a substantial amount of money in maintaining a language like this, so it would pretty much have to be a labour of love.
Under those circumstances, how many people would choose to fork an existing language and maintain it rather than create their own?
Because the whole reason people use something like this is that they don’t want to develop and maintain it themselves.