It’s funny because the conventional wisdom among Python programmers is that the stdlib is where modules go to die. For example, see this discussion about putting requests in. But really, it’s just unimaginable to me to have to download external modules to delete directories, make temp files, or do reasonable logging. There are non-stdlib implementations of all of these things (py.path, temporary, eliot), but it’s always possible to do a no-dependency stand-alone script. I didn’t know how much I should appreciate that.
The CVE they link is from 2013! Is there a patch that didn’t make it into this distro?
Sometimes a CVE is backdated. It’s the time the bug was first identified, not when it was fixed.
The page mentions git specifically as being vulnerable. While I’m sure that’s true, it seems highly impractical to attempt to move git away from SHA1. Am I wrong? Could you migrate away from SHA1?
[Edit: I forgot to add, Google generated two different files with the same SHA-1, but that’s dramatically easier than a preimage attack, which is what you’d need to actually attack either Git or Mercurial. Everything I said below still applies, but you’ve got time.]
So, first: in the case of both Mercurial and Git, you can GPG-sign commits, and that will definitely not be vulnerable to this attack. That said, since I think we can all agree that GPG signing every commit will drive us all insane, there’s another route that could work tolerably in practice.
Git commits are effectively stored as short text files. The first few lines of these are fixed, and that’s where the SHA-1 shows up. So no, the SHA-1 isn’t going anywhere. But it’s quite easy to add extra data to the commit, and Git clients that don’t know what to do with it will preserve it (after all, it’s part of the SHA-1 hash), but simply ignore it. (This is how Kiln Harmony managed to have round-trippable Mercurial/Git conversions under the hood.) So one possibility would be to shove SHA-256 signatures into the commits as a new field. Perfect, right?
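To make that concrete, here’s a Python sketch of how a commit object gets hashed. The “x-sha256” header is hypothetical (it’s the kind of extra field being proposed, not a real Git feature), and the tree/author values are placeholders (the tree hash is the well-known empty tree); the point is just that the whole text, extra headers included, feeds into the SHA-1:

```python
import hashlib

# Hypothetical sketch: the "x-sha256" header is invented for this
# example, and the tree hash below is the well-known empty tree.
# Everything else mimics Git's commit-object layout.
body = (
    b"tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904\n"
    b"author A U Thor <author@example.com> 1487000000 +0000\n"
    b"committer A U Thor <author@example.com> 1487000000 +0000\n"
    b"x-sha256 " + b"0" * 64 + b"\n"
    b"\n"
    b"example commit message\n"
)

# Git hashes "<type> <length>\0" followed by the raw text, so the
# extra header is covered by the commit's SHA-1 like everything else.
store = b"commit %d\x00" % len(body) + body
print(hashlib.sha1(store).hexdigest())
```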
Well, there are some issues here, but I believe they’re solvable. First, we’ve got a downgrade vector: intercept the push, strip out the SHA-256, replace it with your nefarious content that has a matching SHA-1, and it won’t even be obvious to older tools anything happened. Oops.
On top of that, many Git repos I’ve seen in practice do force pushes to repos often enough that most users are desensitized to them, and will happily simply rebase their code on top of the new head. So even if someone does push a SHA-256-signed commit, you can always force-push something that’ll have the exact same SHA-1, but omit the problematic SHA-256.
The good news is that while the Git file format is “standardized,” the wire format still remains a bastion of insanity and general madness, so I don’t see any reason it couldn’t be extended to require that all commits include the new SHA-256 field. I’m sure this approach also has its share of excitement, but it seems like it’d get you most of the way there.
(The Mercurial fix is superficially identical and practically a lot easier to pull off, if for no other reason than because Git file format changes effectively require libgit2/JGit/Git/etc. to all make the same change, whereas Mercurial just has to change Mercurial and chg clients will just pick stuff up.)
It’s also worth pointing out that in general, if your threat model includes a malicious engineer pushing a collision to your repo, you’re already hosed because they could have backdoored any other step between source and the binary you’re delivering to end-users. This is not a significant degradation of the git/hg storage layer.
(That said, I’ve spent a decent chunk of time today exploring blake2 as an option to move hg to, and it’s looking compelling.)
Edit: mpm just posted https://www.mercurial-scm.org/wiki/mpm/SHA1, which has more detail on this reasoning.
Plenty of people download OSS code over HTTPS, compile it and run the result. Those connections are typically made using command line tools that allow ancient versions of TLS and don’t have key pinning. Being able to transparently replace one of the files they get as a result is reasonably significant.
Right, but if your adversary is in a position that they could perform the object replacement as you’ve just described, you were already screwed. There were so many other (simpler!) ways they could own you it’s honestly not worth talking about a collision attack. That’s the entire point of both the linked wiki page and my comment.
That said, since I think we can all agree that GPG signing every commit will drive us all insane, there’s another route that could work tolerably in practice.
It is definitely a big pain to get gpg signing of commits configured perfectly, but now that I have it set up I always use it, and so all my commits are signed. The only thing I have to do now is enter my passphrase the first time I commit in a coding session.
Big pain? Add this to $HOME/.gitconfig and it works?
[commit]
    gpgsign = true
Getting gpg and gpg-agent configured properly and getting git to choose the right key in all cases even when sub keys are around were the hard parts.
That’s exactly what I did.
Sorry, to rephrase: mechanically signing commits isn’t a big deal (if we skip past all the excitement that comes with trying to get your GPG keys on any computer you need to make a commit on), but you now throw yourself into the web-of-trust issues that inevitably plague GPG. This is in turn the situation that Monotone, an effectively defunct DVCS that predates (and helped inspire) Git, tried to tackle, but it didn’t really succeed, in my opinion. It might be interesting to revisit this in the age of Keybase, though.
I thought GPG signing would alleviate security concerns around SHA1 collisions but after taking a look, it seems that Git only signs a commit object. This means that if you could make a collision of a tree object, then you could make it look like I signed that tree.
Is there a form of GPG signing in Git which verifies more than just the commit headers and tree hash?
You are now looking for a preimage collision, and the preimage collision has to be a fairly rigidly defined format, and has to somehow be sane enough that you don’t realize half the files all got altered. (Git trees, unlike commits, do not allow extra random data, so you can’t just jam a bunch of crap at the end of the tree to make the hash work out.) I’m not saying you can’t do this, but we’re now looking at SHA-1 attacks that are probably not happening for a very long time. I wouldn’t honestly worry too much about that right now.
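For contrast, here’s a sketch of the tree-object layout in Python. The mode and file name are arbitrary for the example; the hash is the well-known SHA-1 of the empty blob. Each entry is a rigid “<mode> <name>\0<raw 20-byte hash>” record, so there’s nowhere to jam extra bytes:

```python
import hashlib

# Sketch of Git's binary tree-entry format. The mode and file name are
# arbitrary; the hash is the well-known SHA-1 of the empty blob.
blob_sha = bytes.fromhex("e69de29bb2d1d6434b8b29ae775ad8c2e48c5391")
entry = b"100644 README\x00" + blob_sha  # "<mode> <name>\0" + raw hash

# Entries are packed back-to-back with no delimiter and no trailer,
# which is why you can't pad a tree with junk to steer its hash.
tree = b"tree %d\x00" % len(entry) + entry
print(hashlib.sha1(tree).hexdigest())
```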
That said, you can technically sign literally whatever in Git, so sure, you could sign individual trees (though I don’t know any Git client that would do anything meaningful with that information at the moment). Honestly, Git’s largely a free-for-all graph database at the end of the day; in the official Git repo, for example, there is a tag that points at a blob that is a GPG key, which gave me one hell of a headache when trying to figure out how to round-trip that through Mercurial.
Without gpg signing, you can get really bad repos in general. The old git horror story article highlights these issues with really specific examples that are more tractable.
Though, I don’t want to start a discussion on how much it sucks to maintain private keys, so sorry for the sidetrack.
I don’t see why GPG-signed commits aren’t vulnerable. You can’t modify the commit body, but if you can get a collision on a file in the repo you can replace that file in-transit and nothing will notice.
Transparently replacing a single source code file definitely counts as ‘compromised’ in my book (although for this attack the file to be replaced would have to have a special prelude - a big but not impossible ask).
Here’s an early mailing list thread where this was brought up (in 2006). Linus’s opinion seemed to be:
Yeah, I don’t think this is at all critical, especially since git really on a security level doesn’t depend on the hashes being cryptographically secure. As I explained early on (ie over a year ago, back when the whole design of git was being discussed), the security of git actually depends on not cryptographic hashes, but simply on everybody being able to secure their own private repository.
the security of git actually depends on not cryptographic hashes, but simply on everybody being able to secure their own private repository.
This is a major point that people keep ignoring. If you do one of the following:
then the argument that SHA3 or SHA256 should be used over SHA1 simply doesn’t matter.
And here’s the new thread after today’s announcement
(the first link in Joey Hess’s e-mail is broken, should be https://joeyh.name/blog/entry/sha-1/ )
I used a similar technique to make my anagram-based conspiracy theory twitter bot. https://twitter.com/Anagrams2spooky
That URL is cheeky - ?utf8=✓&...
By including a character that cannot be expressed in the latin-1 charset, old browsers are forced to use UTF-8 encoding for form submits.
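This is easy to check from Python; U+2713 simply has no latin-1 representation:

```python
# U+2713 (the check mark) has no latin-1 encoding, so a form field
# containing it can't be submitted as latin-1; UTF-8 handles it fine.
try:
    "\u2713".encode("latin-1")
except UnicodeEncodeError:
    print("no latin-1 encoding for U+2713")

print("\u2713".encode("utf-8"))  # b'\xe2\x9c\x93'
```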
This is the standard Rails feature to set the encoding of a form:
The hidden input element with the name utf8 enforces browsers to properly respect your form’s character encoding and is generated for all forms whether their action is “GET” or “POST”.
It was originally a snowman: ☃︎ (Unicode 9731) but that whimsy was replaced.
and if you really miss it, https://rubygems.org/gems/bring_back_snowman
I think it’s interesting that the letters start on …0001, but the numbers start on …0000. I can see that representing 0 with a char that ends in zeros would be convenient for when you’re implementing printf - it means you just do some bit munging once you’ve split your integer into decimal place-values. But I can’t think of any particular reason to make the letters start at 1.
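The bit patterns are easy to inspect: digits sit at 0x30–0x39, so a digit character’s low four bits are its numeric value, which is exactly the printf convenience described above:

```python
# Digits occupy 0x30..0x39, so a digit character's low four bits are
# its numeric value: printf-style code can emit a digit with a single
# OR against '0'. Letters start at 0x41 ('A'), i.e. low bits ...0001.
for ch in "09AZ":
    print(f"{ch!r} = {ord(ch):#04x} = {ord(ch):08b}")

assert ord("0") & 0x0F == 0   # digits end in ...0000
assert ord("A") & 0x0F == 1   # letters start on ...0001
assert chr(0x30 | 7) == "7"   # the printf trick: bit-munge, no lookup
```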
“To maximize the structural stability of the [paper] tape, Murray arranged the characters in his code so that the most frequently used letters were represented by the fewest number of holes in the tape.” Love it - complete disregard for alphabetical order - different priorities!
It seems pretty standard. Appears to be calling for a review by senior administration officials of critical infrastructure.
They define “critical infrastructure” in broad terms.
The term “critical infrastructure” means systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.
Is Twitter or Facebook something that could have a debilitating impact? I mean, if Trump’s account were to be hacked, presumably one could do a ton of damage. My lizard brain is starting to think that this is overly broad, and might be used to strong-arm companies into complying with gag-ordered “monitoring,” a.k.a. Prism-style spying. But IANAL, nor am I an expert on government spying and such. I do know that this administration doesn’t seem to give a shit about the law, and does whatever it feels like, even in the face of federal judgments, in the name of “national security.”
Be fair. The last couple administrations didn’t always give a shit about the law and did whatever they felt like in the name of national security, too. Long before this administration (which has been in power for only 10 days), we got the Patriot Act and Prism and gag orders, etc.
I also feel this is pretty standard. Form some committees, review what’s going on, make recommendations on what to do. The results of those committee findings and the actions that follow are where people will need to pay attention.
I’m certainly not giving anyone a pass here. I understand that “national security” plays an important part in all Administrations. That doesn’t mean we shouldn’t question what seems like overly broad wording.
“We need an excise tax on bubble gum.” “Why?” “National security.” “OH, OK! Gotta keep us safe.”
Yeah, a lot depends on whether the committees end up having competent people on them. Government infrastructure is vast, and just enumerating it is a huge challenge. Securing it to an acceptable level could take years and years, even with good people at the top. It will be very tempting to believe the snake oil sellers, I hope they manage to avoid that.
Yeah, a lot depends on whether the committees end up having competent people on them.
And Trump has shown us that he’s all about picking the best person for the job, and not political cronies that know how to stroke his ego.
Here is some rough and unpopular advice but something you might as well get knocked into your head sooner rather than later:
Very few people care about theoretical computer science, and pure CS is of extremely limited practical utility.
You’ll have plenty of time to goof around with Haskell and other academic subjects if you choose to; in the meantime, maybe focus on a school that has you do a lot of projects and team stuff. That will serve you a lot better in life than another fancy flavor of monad. Oh, and for all that is good and holy, learn some applied linear algebra.
Also, what makes you think that CS is some special snowflake compared to other engineering disciplines? What are these “added complexities” that make it more special than, say, mechanical or chemical engineering?
I think you should broaden your horizons a bit. Like, maybe to include other criteria like “will this school have a good social scene” and “is this school in a part of the country or world outside my comfort zone?”.
That’s the opposite of my experience FWIW. The ML course at university (that most students wrote off as overly academic) is the single thing that’s been most useful in my programming career, by a long way; the Java course by contrast was utterly irrelevant (indeed actively harmful, since it taught an older style of programming that leads to less maintainable code), and the group projects were oriented towards a less realistic style of collaboration (I learnt much more effective modes of collaboration by participating in open-source projects, even though I’ve never worked in a remote-first workplace), and the importance of linear algebra is vastly overrated (particularly in America for some reason?)
My school (Carleton College) didn’t even offer courses in particular programming languages - they said “that would be training, and we don’t do training, we do education.”
Other aspects of the Carleton experience have served me well - I’m much more able to speak the language of my clients than many engineers, for example. But at this school, CS was definitely treated as an academic, rather than an engineering, discipline.
The ML course at university (that most students wrote off as overly academic) is the single thing that’s been most useful in my programming career, by a long way; the Java course by contrast was utterly irrelevant (indeed actively harmful, since it taught an older style of programming that leads to less maintainable code), and the group projects were oriented towards a less realistic style of collaboration (I learnt much more effective modes of collaboration by participating in open-source projects, even though I’ve never worked in a remote-first workplace),
I agree with all of the above.
and the importance of linear algebra is vastly overrated (particularly in America for some reason?)
That I could not disagree more with. Machine learning, high-performance computing, and graphics are inaccessible if you don’t have a strong linear algebra background.
Linear algebra is not loved by many people, mathematicians or computer scientists, but it’s important. I think of it as like an assembly language for math, physics, and computer science. It’s often painfully low level, and almost no one would want to make it a primary tool or area of research, but it lives under so much else that it’s worth learning as soon as one can.
Shrug. I’ve never really had cause to use it; maybe I was better able to understand the machine learning algorithms I worked on in my last job than someone without a linear algebra background, but that was largely a case of “read the spec/paper”. If we’re looking at areas of mathematics to learn, I’d say graph theory comes up more often, as does logic and the kind of abstract nonsense (and comfort with manipulating abstractions) that you can learn from either set theory/category theory or from topology - those are the things that have been useful throughout my career and that I wish I’d learned more of.
You’re right that linear algebra is necessary for the low-level parts of machine learning, traditional HPC, and graphics. But the latter look like ever-shrinking niches to me, and I suspect the former will follow (it looks kind of bubbly); even if it doesn’t, the low-level implementations mostly already exist. Most of all it just doesn’t seem like the kind of mind-expanding subject where you get a lot of advantage from being taught by experts - it’s more a few simple definitions that you pound into yourself, and then a bunch of fiddly tricks for working with them, which is the kind of thing that’s easy enough to teach yourself from a textbook, at least for me.
One of the projects I worked on, CloVR, tried to help with the reproducibility problem by including the run information for a pipeline in the output. When I worked on it, I don’t think we got to the point of automating reruns, but the information was there to do it if so desired.
But CloVR was for existing, standardized, pipelines rather than exploration. For anyone that doesn’t know, bioinformatics is an incredibly interesting field and a lot of opportunities exist there.
I did some bioinformatics in college and would love to do some work in the industry - in fact I follow a lot of bioinformatics projects online, I just haven’t figured out how to get into the field. Do you have any advice for starting?
In my experience, university professors often have projects that need skills that they & their students don’t have. I’ve had very good luck getting interesting contract work using a local university’s online classified ad system. My contracts were all for at least several weeks of work, but it’s probably possible to find smaller things if you want to just work 1 day a week and keep your day job.
So why do we even have word problems? They’re so dull, and they aren’t contextual and real-world!
Solving a real-world math problem (“is this shepherd an appropriate age to be dating my brother?”) is a very complex task. There are many phases - (1) articulating what you want to find out, (2) noticing all the evidence that you have available, (3) choosing the evidence sources that seem useful, (4) articulating the operations that will be used to combine the relevant evidence, (5) executing the operations, (6) interpreting the result, and (7) checking the result for reasonableness. Math teachers have to find ways to drill their students on each phase in isolation. Otherwise, someone who is not good at (e.g.) noticing all the evidence that’s available will never get any practice at the later phases, like executing the operations - they’ll be stuck, fail all their questions, decide they suck at math, and avoid it for the rest of their lives. Math is hard, let’s go shopping.
So I don’t think the issue the article raises is particularly damning of our education system, but of the experimental method. It looks like a question that a math teacher has set up to drill phases 4, 5, and 7, and the students actually did a pretty good job of applying 4, 5, and 7. But it’s really a trick question that tests phase 3. If you really want to evaluate students' grasp of phase 3, then present the same information in the context of someone’s Facebook profile, along with lots of other useless information. It will never occur to them to divide the number of sheep by the number of dogs to find the shepherd’s age. Instead they’ll look at the profile picture, or see how old the person’s name sounds (Agnes? Madison? Wilbur?).
Basically, I am not impressed that someone is smart enough to trick a class full of 6th graders.
If you listen to education policy rhetoric, being interesting, contextual, and real-world are precisely the motivations for giving word problems.
Draw your own conclusions. :) :(
I’ve never written a lick of Rust. I basically understand the premise. That said, I can’t help but get the impression the author needs to step further back from the existing C++ implementation and rethink how the functionality might be implemented anew in Rust.
It’s kind of a ridiculous rant, but it is good to think of it as a case study in what it looks like to fight a language/paradigm. The computer almost always wins, because it is infinitely patient/pedantic.
Frankly, these sorts of experiences are usually the most edifying for me. My recommendation to the author is to suck it up and keep at it. Everyone goes through this and it’s how you learn deeply.
While it is true that this is how you learn, the question is whether you want to learn. To quote OP, “I sincerely hope that people continue to use and improve Rust. But I want to write games. Not wrestle with the compiler to make the language more favourable towards my goals.”
And some people feel while it is reasonable to ask users to learn new idioms in a new programming language, it is not reasonable to ask users to, say, “give up OOP”. In this case, Rust lacks inheritance as in subtyping between struct types, and to some people this is not acceptable, as in “I don’t want to learn any programming language which lacks that”. But that doesn’t sound very good, so the usual expression is “structural inheritance is absolutely necessary to do GUI (or something else), so Rust should add it”.
And it is true all mature GUI systems do use structural inheritance, so it is hard to refute. So the one side says “I don’t want to wrestle with the language to do GUI”, and the other side replies “Did you mean you don’t want to wrestle with the language to do GUI, with the constraint that you program exactly as you programmed in your old language?” And the reply is “No, I mean doing GUI, not doing GUI the same old way. Doing GUI requires inheritance. If you don’t see that, it’s because you don’t know much about GUI.” Then “Is it really required?” “Yes it is. Do you have any counterexample?” And on and on it goes.
And while that may be an interesting theoretical discussion for those who want to write their own GUI toolkit, (but most people don’t, including those who are very loud in these debates) I can’t stop feeling all of these are kind of red herrings. The original formulation, “I don’t want to learn any programming language which lacks inheritance” is perfectly good and defensible position. Even if in the future someone writes a good GUI library without inheritance in Rust, people still may not want to learn a new way to do GUI. So whether it is possible to write a good GUI library without inheritance is not the main point of disagreement. It is there because it sounds better than the main disagreement.
Actually - duh - how does Servo do it? Is someone here able & willing to give a few sentence summary?
I think it’s super interesting how strings replace pointers as the main method of indirection in Tcl. I’m curious how this works from a security perspective - is it easy to accidentally put user data in a string that ends up being evaluated in Tcl?
I’d say it’s not any easier to do it accidentally than it is to do it in most other languages. A really neat aspect of Tcl, however, is that you can pretty much patch any language function in situ. So if you are going to play things fast and loose and try to eval some user-supplied data, you can patch up eval to do some extra sanity filtering for you and to run in a namespace with limited scope.
It’s a really great language for writing programming sandboxes in. Say, for example, a test harness that needs to run individual test procs, but the harness itself needs protection from the test code it is running.
Link to the original blog, avoiding SSL errors: http://varianceexplained.org/r/trump-tweets/
History with a compelling storytelling approach:
Politics: 538 elections, NYT’s The Run-Up, Open Source with Christopher Lydon sometimes when he has an interesting guest.
Tech: I haven’t had good luck with these podcasts, although I will try some of the ones linked here that I haven’t seen before.
Yeah, it’s interesting to see this side of him. We mostly just see that he’s grumpy (if not mean), but the Linux project wouldn’t be so enormous and popular if he didn’t have some skills as a manager.
I think his rants just get more press. From what I’ve seen he’s usually pretty nice, and even when he’s not he usually uses hyperbole which gets misinterpreted. Of course, that’s just my impression.
Nothing sells like conflict.
Any idea why they chose Armistice Day?
Probably just coincidence. It’s two weeks before “black friday” in the U.S. - enough time to sell some and get people excited about them.
I’ve been faced with this problem lately - finding a robust, online way to calculate the mean of a stream of numbers coming in. It is indeed a harder problem than it seems.
My approach is to take an N-ary tree and prune the branches that aren’t needed. So, effectively, for X numbers, I’d be keeping logN(X) nested running averages: of the last 1..N values, updated on each insert; of the last N..N×N values, updated every N inserts (i.e., the running average of the previous complete running averages); and so on, repeated logN(X) times, plus balancing/rollover operations, which boil down to adding a new node at the top. At any point, the mean is an appropriately weighted combination of these running averages.
Each running average involves summing P numbers where P < N and then dividing by N, so you need a double-double, but you should get “minimal” error overall; you shouldn’t get accumulating error and other bad things.
If anyone knows or can think of any holes/gotchas to this approach, I’d love to hear them.
What goes wrong with the naive streaming approach of m_1=x_1, m_n = m_(n-1) * (n-1)/n + x_n/n (possibly with some standard fp math tricks I don’t know about)?
m_n = m_(n-1) * (n-1)/n + x_n/n
The problem I’d see with the naive running average is that as n becomes large, (n-1)/n rounds to 1 and x_n/n becomes effectively zero, and you’re screwed. Plus, errors accumulate with each insertion.
The virtue I’d claim with my approach is that you’re never dividing by more than a fixed constant. And you can make most of the divisions be by this constant, which you can choose to be a power of 2, which should give minimal error if done appropriately.
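For comparison, here’s a minimal Python sketch of the naive update quoted above next to the incremental form m += (x - m)/n (this is not the hierarchical tree scheme, just the two one-pass updates):

```python
# Two one-pass mean updates. The incremental form never scales the
# accumulated mean by (n-1)/n, so the old mean isn't perturbed by a
# rounded multiplication on every step.
def naive_mean(xs):
    m = 0.0
    for n, x in enumerate(xs, 1):
        m = m * (n - 1) / n + x / n
    return m

def incremental_mean(xs):
    m = 0.0
    for n, x in enumerate(xs, 1):
        m += (x - m) / n   # Welford-style update
    return m

xs = [1.0, 2.0, 3.0, 4.0]
print(naive_mean(xs), incremental_mean(xs))  # both exactly 2.5 here
```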
It seems like something that would be in the published literature, but google scholar isn’t finding much for me. This paper has something related, an addition algorithm that minimizes error: http://dx.doi.org/10.1109/ARITH.1991.145549 - maybe it could be inspiration, or there might be something useful in that journal / by that author?
Ah, it seems like anything that emulates arbitrary-precision arithmetic would naturally guarantee exactness. And if you keep a running sum with arbitrary-precision arithmetic and divide only at the end, the resulting algorithm is more or less identical to the approach I have been thinking of - if you break the process down to operations on regular floats.
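One cheaper middle ground between plain floats and an arbitrary-precision sum is Kahan compensated summation, sketched here in Python: carry a correction term alongside the running sum and divide only at the end.

```python
# Kahan compensated summation: feed the low-order bits lost by each
# addition back into the next one, then divide once at the end.
def kahan_mean(xs):
    total = 0.0
    comp = 0.0            # running compensation for lost bits
    count = 0
    for x in xs:
        count += 1
        y = x - comp
        t = total + y
        comp = (t - total) - y   # what this addition actually lost
        total = t
    return total / count

print(kahan_mean([0.1] * 1_000_000))
```

The error of the compensated sum stays at a few ulps roughly independent of how many values have been inserted, at the cost of three extra float operations per element.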
Learning and reading about new technology and futurism. Reading science fiction and non-fiction. Hiking. Guitar and other music. Figuring how to increase my value as a human.
Read any good sci-fi novels recently?
Not the OP but wanted to mention Seveneves - I really enjoyed it.
Added to my “to-read” pile. Thanks.
Yes!!! I just finished this - I liked it a lot but not as much as Snow Crash, still a fantastic novel.
I started this a week ago! Really enjoying it :)
Iain M. Banks’ Consider Phlebas and, even better, The Player of Games. Both part of the Culture series. This is the ONLY science fiction future society that leaves me saying “Sign me up! I’m there!” :)
The Metamorphosis of Prime Intellect
+1 for The Martian
+1 for The Metamorphosis of Prime Intellect
Diaspora by Greg Egan always tops my list.
Just finished Cat’s Cradle (not obscure but a classic), and now starting on the Foundation trilogy (finally). As stated below, Neal Stephenson’s Seveneves was a veritable page-turner. Also anything by William Gibson.
Reading up on Stanislaw Lem. Foundation was awesome, but Lem… Summa Technologiae should be taught in schools!
I tried reading Lem’s Futurological Congress book, but it was just too silly. On the other hand, the Cyberiad somehow hits a sweet spot, it’s one of my favorite books - I made my dad read it to me a lot of times when I was little, and I still really enjoy it.
This is a really useful description of the current state of affairs. I’ve run a bi-weekly Rust Hack & Learn for over a year now and am a member of the rust-community team. After all this time, I’d describe “find an easy description of Rust that tells people why it is cool in 5 minutes” as the hardest problem in Rust.
I believe that problem stems from the fact that many of the features of Rust (safe sharing of mutable and immutable data) click rather late and are rather boring in short examples. Most of these relate to the borrow management and the whole ownership system. The great thing about them is that they scale really well to large codebases and enable a lot of guarantees, especially in the face of concurrency. But that’s something for people that are 3 weeks in. Before that, Rust is a fancy new C dialect with iterators :).
My biggest problem learning Rust isn’t its “cool factor” (I already think it is cool), it’s that the interesting projects I’ve come up with that Rust might be a good fit for are scarily big (ex: writing a unikernel, building the storage layer for a database, game engine/ECS stuff). I think that’s partially because Rust’s benefits are evident for large codebases, so my mind goes there. More than “why is Rust cool”, I need to find a project that doesn’t overlap with my typical usage of other languages like Haskell (mostly web services)… or I need to figure out how to break down one of the bigger ones like a unikernel. /shrug
On the other hand, after writing this comment, maybe I should just get over my “fear of a large project” in a domain I’m unfamiliar with.
i’ve bounced off rust several times because every time i think i’ll start a project in it, and start reading docs, it always ends up being “oh, this is both easier and more concise in ocaml”. i think i just don’t write the sort of programs rust is best suited for.
i think i just don’t write the sort of programs rust is best suited for.
I feel exactly the same way. Also applies to Erlang, where you’d be nuts to use anything else for writing a cluster of non-HTTP network servers, but I can nearly always find an off-the-shelf solution for that kind of problem instead of writing my own.
I can totally feel that. Rust has interesting limitations, but it grows on you over a few months. I really don’t want to code in another language for the time being.
Also, the language does not care so much about being concise, but I also don’t care about conciseness too much. (that used to be worse)
That said though, that’s probably true for many languages and not a feature of Rust.
Is there any reason why you wouldn’t just write a component for a larger system, e.g. the rumprun unikernel (which is a supported target for Rust)?
For contributing to an existing project, I’m more motivated to contribute to MirageOS or HaLVM which unfortunately doesn’t help my Rust interests.
The Mirage project has interest in Rust support but no one to take point. Two birds, one stone.
That looks great :) Thanks for digging that up!
What about a rust backend for GHC?
rustc is already a frontend to LLVM – doesn’t Haskell have an LLVM backend?
GHC does as of 7, and I believe it was improved in 8 (although I haven’t checked in on it)
I recently saw a talk about implementing a TLS library in Rust in such a way that the type system guarantees resistance to timing attacks.
I thought that was cool, but I don’t know how much that will help you.
Could you find a link to it? That sounds super cool!
I don’t think the talk was recorded, and I don’t know if the slides are up, but the paper can be found here.
That is very cool. If I end up going the Rust Unikernel route, I’ll build a TLS stack anyway. Mirage built their own, but afaik it isn’t resistant to timing attacks (and opens up an attack via GC).