Wait, what? Do I understand this correctly?
Cloudflare fixes a problem that they created themselves by inserting Cloudflare in between an app publisher and a user, by asking the user to install a Cloudflare extension so that it can query a Cloudflare endpoint to make sure Cloudflare did not mess with said web app?
And also: do they really direct my Firefox browser to a Chrome extension in the same article where they argue that ‘Security should be convenient’?
I believe you misunderstood and the main picture of the article explains it actually really well.
I suppose the idea is that the connection to Cloudflare is even harder to mess with undetected than the one to WhatsApp, and through their partnership WhatsApp is profiting from Cloudflare’s reach.
Let me explain my thought process a bit, because I still don’t get it.
Without Cloudflare:
With Cloudflare before the solution described in the article:
With the solution described:
So what I mean is: if you suspect that Cloudflare could be compromised in any way, why would this be better?
In the proposed [and current] solution:
So steps (1) and (3) are business as usual. Steps (2) and (4) add a further level of security/verification: to deliver a forged/tampered version of WhatsApp, the attacker would have to compromise both WhatsApp and the CF endpoint. This works under the assumption that the system (at WhatsApp) pushing the hashes to the CF endpoint is somewhat separated from the system (at WhatsApp) serving the web app.
I believe their solution achieves tamper resistance against active man-in-the-middle attacks on TLS (probably not in most people’s threat model). But they say it’s for an “at risk” user population 🤷♂️
I’m just as confused. Once they have an extension to verify the right code is being downloaded, why would CF need to do anything special? It’s just a file mapping the version to a hash, stored in some service - it could live in a tweet if they wanted, as long as the hashes and the code come from different sources/paths.
Interestingly enough, I supposed ChromeOS somewhat restricted programmatic clipboard access (i.e. https://chromeos.dev/en/linux/linux-on-chromeos-faq#can-i-readwrite-the-clipboard-automatically-from-inside-the-vm ) but a quick test proved my assumptions very wrong. I opened a terminal:
sudo apt-get install wl-clipboard
old="";
while true; do
sleep 1;
clipped=$( wl-paste );
if test "$old" != "$clipped"; then
echo "$clipped";
old=$clipped;
fi;
done
This shows the whole system clipboard contents (i.e. if I copy something in Chrome it gets displayed in the terminal).
What would be the advantage over something like YAML? If I want to display a hierarchy in CSV I just add leading commas, so the advantage can’t be that…
The advantage is to retain the convenience (simple to write, by hand, no big spec to look at) and the readability of CSV. As the author states:
One of the trade-offs in this design is to favor a valid representation, even if ambiguous, over syntax errors. (If you prefer more explicit syntax, try a different hierarchical format such as YAML and JSON.)
This makes sense as a super simple hierarchical data format in the context of interactive exploration and visualization.
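For instance (my own toy example of the leading-comma trick mentioned upthread), one extra leading comma per level of nesting gives:

```
fruits
,apple
,,fuji
,banana
vegetables
,carrot
```

Each leading comma pushes the value one column to the right, so a spreadsheet or CSV viewer renders the indentation as a crude tree.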
This reminds me of Depinguinator
I’ve put together some code for building a FreeBSD disk image which will boot into memory, configure the network, set a root password, and enable SSH. This can be used to “depenguinate” a Linux box, without requiring any access beyond a network connection.
Hmm, I can’t seem to find the link (I think it was on Twitter somewhere), but the only person I’ve seen who attempted to actually correlate them, so far, found no correlation between Trump mentioning a company positively/negatively in a Tweet and short-term stock-price movements in either direction. Unless there’s better evidence of a relationship, running this bot is probably about as good as just trading randomly…
The bot was featured in a Mashable article, “Google guy builds bot that earns money from Trump tweets”, which in turn referred to the bot author’s original Medium post, “This Machine Turns Trump Tweets into Planned Parenthood Donations”. Long story short:
But does it actually work? Let’s look at the numbers.
Check out the benchmark report. It’s essentially a test run that shows you how the algorithm performs on past tweets and market data. You’ll see that it sometimes misses a company or gets a sentiment wrong, but it also gets it right a lot. The trading strategy sometimes leaves you up and sometimes down.
Overall, the algorithm seems to succeed more often than not: The simulated fund has an annualized return of about 59% since inception. There are limits to the simulation and the underlying data, so take it all with a grain of salt.
Well you’d make money about 50% of the time, then, and that’s a better return than most investment strategies. ;)
I read an article yesterday that said there was an impact: for example, Lockheed Martin’s price dropped after Trump tweeted that the fighter jet price was too high, but the impact was relatively short-lived and all the companies recovered.
The impact of each tweet is expected to lessen in the next few months as investors work out that he is a scatter-brained moron that doesn’t actually know what he is talking about and won’t actually implement any of his claims, i.e. I wouldn’t bother buying up shares in concrete or wall-building companies.
Yeah, Lockheed did seem to go down in the short term, but when he tweeted negatively about Nordstrom, it went up. Two examples is only slightly better a ‘data set’ than one, but I’m not convinced there’s a real pattern there that I’d bet money on.
I’ve been a fan of baobab for a few years, but others I forget the name of going back to the late 90s. As @peter says, this is well-explored design space.
I used to use http://grandperspectiv.sourceforge.net/, and, well, `du -h . | sort -h`.
There is a chapter on Io in the first Seven Languages in Seven Weeks and it is available online as an excerpt PDF.
I can very much recommend it; playing with Io was a lot of fun as an introduction to OOP concepts taken to the extreme.
Obligatory wat talk.
I don’t think that chart is accurate. According to the chart [] should work as NaN does for reflective, but if you try it both in the game and in the JS console it doesn’t. So I think it may be safe to say, don’t trust that table…
I think the chart is pretty accurate: according to the chart `[] != []`, and in fact it is (the left and right being different instances of Array); only NaN works in reflective because it is the only value that is different from itself, i.e. if `a = []` then `a == a` holds true, while if `a = NaN` it does not.
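A quick session (in Node or a browser console) illustrating the point:

```javascript
// Two distinct Array instances are never ==, but the same instance is:
const a = [];
console.log(a == a);    // true  (same instance on both sides)
console.log([] == []);  // false (two freshly created instances)

// NaN is the only value that compares unequal to itself:
const n = NaN;
console.log(n == n);    // false
```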
Interesting take on sonification; still, I would find it quite difficult to just listen to this “sonification” and get a sense of the data being sonified.
I think a great example, where the sonification enriches the visuals but you can close your eyes and follow along the “narrative” of the data, is the Sonification of Income Inequality on the NYC Subway’s 2 Train.
I’ve been doing a fair amount of AVR work in Ada lately, as well; at some point, I’ll probably switch to using Spark. It’s nice to have strict typing and formal semantics on even these devices.
One of the (many) items on my (ever growing) todo list is to experiment with Ada, SPIN and model-checking embedded-systems software.
@angersock Spin is a model-checker, i.e. you provide a formal specification of a program (usually a concurrent one) and it is able to prove some properties (e.g. absence of deadlocks); it also features tools to aid you in extracting [formal] models from C programs. I was speculating that Ada should be friendlier with respect to formal model extraction and hence use with Spin.
Whoa, it’s quite a trip :) it kind of reminded me of from-nand-to-tetris (for which it would be a nice follow-up).
Previous articles/threads on similar topics: It’s Time To Get Over That Stored Procedure Aversion You Have and Actually Using the Database. It’s worth noting that the database mentioned is always Postgres, which I guess is not by accident, and is indeed related to its expressiveness (rich native data types) and extensibility.
Yeah. Well, MySQL explicitly doesn’t place a priority on this sort of capability. Oracle certainly does, but essentially nobody can afford Oracle.
I’d like to decide that for myself someday, but it doesn’t look likely I’ll ever be able to justify the expense. :)
Sigh. Completely unreadable on iPhone. Locked viewport with half the content off screen. Mobile site design never tested on mobile device considered harmful.
It wasn’t even wide enough for my Galaxy S4. I lost a few words off the side. I tried different pages and it got worse.
And you don’t even have to have a mobile device: just use Chrome’s mobile emulation mode or Firefox Developer Edition.
As someone in charge of ui on a website, I am paranoid about these things and check on my devices as well as chrome.
I should probably digest this a bit further before posting this but… this article seems to take web tech as the canonical UI app development context, and thus makes the assumption that anything bringing that to other platforms is axiomatically good. Has it really come to this? Excuse me, I’m just going to go hide in a cave until something else happens cos I’m so tired of seeing the same square wheel over and over and over.
To me it looks like the OP is contrasting React with the traditional/vanilla way of developing web apps, as he writes:
“The web is fundamentally weird to build apps on: the mess of HTML and CSS get in the way of frameworks instead of helping them”
The question that he poses is how well the React way (i.e. declarative rendering of the UI based on some state) and tooling translate in building native apps, and the answer is “so far so good”.
This may well be the much rumored merger of web and native apps that’s been prophesied for quite some time. The two are having sex, and the result doesn’t appear too bad (to my eyes at least). They’re doing something Java tried but failed to do, and that’s an accomplishment that I’m happy to see.
kudos @shazow, it’s pretty awesome (and neat).
If someone happens to use a dark Solarized theme on the terminal (as I do), `/theme mono` helps with (somewhat hidden text in) system messages.
The data center operating system would not need to replace Linux or any other host operating systems we use in our data centers today. The data center operating system would provide a software stack on top of the host operating system. Continuing to use the host operating system to provide standard execution environments is critical to immediately supporting existing applications.
While this makes sense, I think it’s a very frightening future. Existing host OS’s are so complicated, and distributed systems increase the complexity significantly. Building on top of that leaning tower is going to be fragile. It will work, after a lot of effort and probably some really nasty warts people just accept.
As a comparison, James Hamilton’s re:Invent talk for this year mentioned that Amazon rewrote its networking stack from the ground up and it had better availability than the off-the-shelf alternatives. The reason: it only did what they needed it to do, so millions of lines of code could be tossed out. Millions of lines of code that just add bugs.
It’s frightening, indeed. I was recently (re)reading the unit-testing discussion in Coders at Work, and in particular Bloch’s anecdote about the bug in the assembly implementation of lock/try-lock; I couldn’t help but think how deep the rabbit hole of our current software “stacks” can go, and no wonder everything is broken (all the time).
The hope is that after “immediately supporting existing applications” we move to shaving off cruft; Amazon’s AWS reimplementation of the networking stack has proven that a somewhat evolutionary approach (towards less cruft) is, after all, possible (for organisations with the right resources and motivations).
[slightly related] this reminded me that Darcs, written in Haskell, was one of the first free/open-source DVCSes and is also built around a rather compact kernel of concepts (its “patch theory”), although experimental rather than proven and battle-tested (as in git’s case).
The source of the extension is on GitHub if someone wants to figure out how this actually is supposed to work - https://github.com/facebookincubator/meta-code-verify/
They are currently not using Subresource Integrity but working around that with a combination of `fetch()`, `TextEncoder()` and `crypto.subtle.digest()`. That’s really surprising. I would have assumed that they register a ServiceWorker to handle all `fetch` events and then replace the existing request with a `fetch(sameURL, { integrity: expectedHash })`… The variables have names like `workaround`, so maybe they are dealing with some browser inconsistencies here? (Using TextEncoder is also a bit error-prone. I wrote it up for them in https://github.com/facebookincubator/meta-code-verify/issues/128.)
Does this offer anything over just using subresource integrity?
It adds a further audit point, “independent” from the web app provider. Suppose an attacker compromises the WhatsApp web server/CDN: she would also be able to change the subresource integrity hashes in the HTML source of the page. With Code Verify she would additionally have to compromise the Cloudflare verification endpoint (with the compromised hashes).