Working on the next big update to Learn TLA+. A lot of small changes, but the main one is that I’m ripping out the current reference section (“here’s the set of all automorphic functions over a set!” is cool but not very useful) and replacing it with a ton of example specs (“here’s how to simulate a client-server architecture!” / “here’s how to find bugs in MongoDB!”) and techniques (“here’s how to add cronjobs!” / “here’s how to properly use model values!”). I think that will make it much more useful to people who know the basics but aren’t sure how to apply it.
I don’t believe education helps. Just provide consulting for the tendering and commissioning.
Requirements analysis is better done by a techie learning the problem domain than a domain expert learning technology. Not every techie can, though. Requirements analysis is a skill very different from coding.
If by “education doesn’t help”, you meant to say “educating people who are neither technical nor domain experts to perform requirements analysis does not help”, then you’d be likely right on some practical level.
However, philosophically, I don’t think I can agree.
That kind of thinking is how we write off large swaths of individuals in orgs as simply being “unproductive” (i.e. “education doesn’t work anyways, so why bother educating them to become more self-sufficient?”).
I don’t claim to have the answer either, but I don’t want to write off one possible avenue of the solution: educating both sides of the table of the tendering process.
Besides, “provide consulting” isn’t really a solution… Someone still has to learn to do the job, you’ve simply externalized the cost onto another entity that doesn’t even have a vested interest in your system succeeding (it’s no secret that this is one of the longest standing problems of outsourcing/contracting any expert-skill work in an area that you yourself are not an expert in).
2FA is mostly security theatre [0], and 2FA that uses SMS is most definitely just masquerading as security in 2017.
Even NIST updated their guidelines [1] last year to discourage using public switched telephone networks (PSTN) to deliver multi-factor authentication tokens:
Note: Out-of-band authentication using the PSTN (SMS or voice) is discouraged and is being considered for removal in future editions of this guideline.
Google is “strong arming” (by threat of blacklisting) Certificate Authorities to comply with a “Certificate Transparency” program that Google has pushed through the IETF (Internet Engineering Task Force).
The “Certificate Transparency” (hereon referred to as “CT”) program requires that all issued certificates be logged with 2 separate CT servers that are publicly auditable by anyone. The premise being that we can’t prevent Root CAs from being compromised, but we can do the next best thing, which is to prevent errant certificates from working at all in the popular browsers (starting with Chrome).
If a certificate used by a server doesn’t appear in the 2 CT logs, then Chrome will show a bad certificate warning (the same way it shows a warning for expired or self-signed certificates today).
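To make the mechanism concrete, here is a rough TypeScript sketch of the kind of check a browser could run. The names, data shapes, and the hard-coded two-log threshold are illustrative assumptions, not Chrome’s actual policy or API.

    // Illustrative sketch of the "must be logged with at least 2 CT logs" idea above.
    interface SignedCertificateTimestamp {
      logId: string;     // which CT log vouched for this certificate
      timestamp: number; // when it was logged
    }

    interface CertificateInfo {
      subject: string;
      scts: SignedCertificateTimestamp[];
    }

    function satisfiesCtPolicy(cert: CertificateInfo, minDistinctLogs = 2): boolean {
      const distinctLogs = new Set(cert.scts.map((sct) => sct.logId));
      return distinctLogs.size >= minDistinctLogs;
    }

    // A browser would surface a failure the same way it surfaces an expired or
    // self-signed certificate: with a warning rather than a silent pass.
    const example: CertificateInfo = {
      subject: "example.com",
      scts: [{ logId: "log-a", timestamp: 1500000000000 }],
    };
    console.log(!satisfiesCtPolicy(example)); // true: only one log, so warn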
The other 3 major browser makers (Mozilla, Apple and Microsoft) have yet to comment on whether or not they will follow suit in using the CT logs to blacklist errant certificates.
Here’s Slate’s closing remarks:
There’s really no chance that consumers will ever know enough about these obscure systems to push Google one way or another. So for now, there’s little to stop the company from redesigning the internet’s critical infrastructure however it wants.
I think part of the lack of comments from the other browsers is that they like the changes, but figure they don’t have the market share and/or clout to push changes like this through. So if Google succeeds, hooray, we’ll follow their lead. If Google fails, then no sweat off our backs, only the Chrome team has egg on their face.
Personally, I’d like this initiative to succeed. The biggest concern with TLS was always that every CA could issue any certificate ever and no one could double check that they are behaving. Now that Chrome has a huge dominant position (60% globally I think), they are forcing CAs to behave.
I got scared by the title, as huge companies “improving security” often means screwing over hobbyists (i.e. SecureBoot, locked down phone bootloaders). Relieved to see that it’s just forcing CAs to behave better.
Using Tor for webcams and baby monitors due to Tor’s security design sounds nice and all, but… one thing that’s missing from that PDF is how horrendously low* the throughput of the Tor network as a whole is. It’s dependent on individuals and organizations volunteering bandwidth and compute cycles, and the last time (2 or 3 years ago?) I tried using Tor, it was a terribly slow experience even for regular browsing.
Forget streaming webcams and baby monitors, even highly distributed Youtube videos with edge servers worldwide are sluggish as heck!
[ * ] With loads of caveats. There are fast nodes out there, and you can set up a fast relay of your own to use as the first hop, but the overall throughput is still very much dependent on others in the onion network.
I think what should be taken from the slides is “this is a solved problem”.
Tor as it is today may not be up to serving the throughput and latency needs, but the protocol and near-zero effort for the end user to access their devices is.
I have hope that now we have this thought developers will run with it rather than go all ZOMG WEBSCALE CLOUD BBQ on it or try to re-invent their own version of Tor.
Maybe all that is needed is a private closed Tor service (self hosted on the devices themselves) to push signalling over (think encryption keys) and then you can make a direct connection.
Of course the assumption here is that this is a technical problem. Programming rarely is the hard part, the economics for the manufacturers may simply favour centrally controlled infrastructure.
We don’t need a new form of money. Especially one that is based on stone age ideas like the gold standard. We need something better than money. I don’t know what that looks like, but I personally would love to live in a world where there is no money.
Only reason bitcoin works today is because it is convertible to state money. Can you have a stateless currency? I suspect you can’t. Why? Because historically currency was used as a tool to provision armies and states.
What is bitcoin provisioning? Oh shit, is it skynet? It’s skynet isn’t it.
I don’t know what that looks like, but I personally would love to live in a world where there is no money.
Definitely far from perfect, but a fluid reputation-esque based currency called a “Whuffie” was mentioned in Cory Doctorow’s book, Down and Out in the Magic Kingdom (a fun read, plus it’s available on his website for free!).
These kinds of post-fiat currency “monetary surrogates” are largely predicated on some sort of post-scarcity society, of which we are not anywhere near (though often promised by Singularitarians and Futurists of all stripes…).
Also, have you seen Black Mirror’s 3rd seasons’ episode “Nosedive” (spoiler alert, link goes to Wikipedia article for that episode)?
I immediately thought of Nosedive when I was reading your comment! Reputation based systems are fraught with danger because people are so good at gaming the system - any system.
I think this is absolutely true. Previous job there was a group that wanted to create company wide project stats like “number of refactors, test coverage, errors, lint errors, etc” as a way to motivate employees.
I was like “wow, this is going to be gamed so fast”. Reminds me of the soviet Gosplan. They had an intricate system to monitor the economy using computers to make sure things are going as planned. As expected people gamed the system like making products travel over rail back and forth to increase “rail miles”.
This is why I wonder if we can make a system that’s not money like, or point based, or whatever.
Definitely far from perfect, but a fluid reputation-esque based currency called a “Whuffie” was mentioned in Cory Doctorow’s book, Down and Out in the Magic Kingdom (a fun read, plus it’s available on his website for free!).
I mean… the entire point of the book is that if you only reward people for being popular they start doing really shitty things. The point of the book is kinda that Whuffie is a terrible idea.
Whuffie’s creator describes it as a deliberately “a terrible currency”. It exists to criticize misfeatures of our current system by making them much worse.
Wow. I am confused. This is just mindless rambling yet people seem to like it.
We don’t need a new form of money. Especially one that is based on stone age ideas like the gold standard.
Well we might not ‘need’ it but an alternative is definitely useful. There are things that bitcoin can do better than state-sanctioned money.
Also eating food is a ‘stone age idea’ so I guess we could stop doing that too? Just because something has been around since forever, does not make it bad, stupid or obsolete.
We need something better than money. I don’t know what that looks like, but I personally would love to live in a world where there is no money.
I could literally say this about anything. We need something better than cars. I don’t know what it is but I would love to live in a world where there are no cars.
Only reason bitcoin works today is because it is convertible to state money.
That is also not true. You could definitely buy some stuff with bitcoin without state money. The fact that state money has been around for centuries and the world’s economy has built itself around state money does mean that it is the most easily used money.
Can you have a stateless currency? I suspect you can’t. Why? Because historically currency was used as a tool to provision armies and states.
This is wild speculation that is clearly false because money is useful even without states or state-sanctioned warfare.
And just because states use money to pay for armies therefore money cannot exist without state is just a non-sequitur.
There are things that bitcoin can do better than state-sanctioned money.
Like what? Expensive to do transactions, and at least state money can be truly anonymous.
Just because something has been around since forever, does not make it bad, stupid or obsolete.
Stone age implying we found something better than stone. Stone age does not just imply old.
I could literally say this about anything. We need something better than cars. I don’t know what it is but I would love to live in a world where there are no cars.
I’m saying bitcoin isn’t a new thing. Block chain is novel, but money on it isn’t. I can’t imagine what the better thing is because if I could I would make it.
That is also not true. You could definitely buy some stuff with bitcoin without state money.
I’m making a claim that bitcoin would not work if it wasn’t convertible to state money. In fact, I can’t imagine you could prevent that from happening anyway. In fact, if we lived in a parallel universe where there is no money, nobody in their right mind would think bitcoin solves a problem they have.
This is wild speculation that is clearly false because money is useful even without states
History would like to have a word with you. Yes, money is useful outside of paying taxes, but that’s a side effect; the seeding of it is by states. When states go away, so does the money and its usefulness.
For example, the Soviet ruble was used after the breakup of the Soviet Union only because the former states decided to keep using it until the new ruble took over in 1993. Nobody kept using the Soviet ruble beyond that.
I think money has brainwashed us. We grow up with it, of course it’s normal. It’s part of life. But it’s just an invention that has a very real and focused purpose. It’s there to provision the state.
We have to use a little more imagination to get rid of it, and bitcoin is not that. Bitcoin is a boring version of the same thing. The communists at least had a little more imagination.
Like what? Expensive to do transactions, and at least state money can be truly anonymous.
Let you send money to somebody across the globe quickly without the banks taking 5%.
I’m making a claim that bitcoin would not work if it wasn’t convertible to state money.
A claim which you support with what evidence?
This whole thing doesn’t even make sense. What do you mean exactly by ‘would not work’? The more I think about it the sillier this as a thought is.
When states go away, so does the money and its usefulness.
You do know trading and money has existed before states, right?
For example, the Soviet ruble was used after the breakup of the Soviet Union only because the former states decided to keep using it until the new ruble took over in 1993.
Plenty of times people have made their own currency e.g. scrips for community use when state money was in short supply.
Let you send money to somebody across the globe quickly without the banks taking 5%.
What’s the per-transaction cost (in electricity generation) for BTC?
I’ve seen estimates upwards of $7, which puts it firmly in the ‘not really better than banks’ category; the receiver will transact again, either to use the money or convert it to fiat, so that’d be $14 (assuming the conversion was free, or you spent all the money in a single transaction).
Where does the money to pay for this electricity come from? Inflation.
Technically you can buy in BTC without ever exchanging, and that’s what people are trying to achieve, but the scale of it is a niche of a niche at best.
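Taking the thread’s numbers at face value, the break-even point is easy to work out. Both the roughly $7-per-transaction estimate and the “banks taking 5%” figure come from the comments above, so treat this as illustrative arithmetic rather than verified data.

    // Back-of-the-envelope comparison using the figures quoted in this thread.
    const btcCostPerTransfer = 7 * 2; // ~$7 each way: the receiver spends or converts again
    const bankFeeRate = 0.05;         // the "banks taking 5%" figure from above

    // The flat BTC cost only beats a 5% cut above this transfer size:
    const breakEven = btcCostPerTransfer / bankFeeRate;
    console.log(`break-even transfer size: $${breakEven}`); // $280

    for (const amount of [50, 280, 2000]) {
      const bankFee = amount * bankFeeRate;
      console.log(`$${amount}: bank fee $${bankFee.toFixed(2)} vs BTC ~$${btcCostPerTransfer}`);
    }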
Money measures value and enables trade; you can use anything for that purpose, as long as your counterpart recognizes its value. Certainly monopolization helps governments in taxation for monopolized activity like armies, and I doubt anyone disagrees.
One problem with BTC is completely uneven injection. It’s like the “1%” that gets access to QE rounds and such, and can reap the benefits of this newly expanded monetary base, before it evens out in the market and devalues the currency.
So if BTC were widely adopted, the Chinese mining cabal would be the new 1% and people would rather go back trading in tobacco leaves and squirrel skins instead of putting up with that shit.
Whatever the programming language’s semantics say they mean.
see the state – what is the computer thinking?
Nothing! Computers don’t think.
The create-by-reacting way of thinking could be stated as: start with something, then adjust until it’s right.
Is this guy seriously considering depriving people of the joy of crafting their own beautiful algorithms? It is admittedly a lot of work, and not everyone’s cup of tea, but why deprive people of the chance to even try it?
We expect readers to understand code that manipulates variables, without ever seeing the values of the variables. The entire purpose of code is to manipulate data, and we never see the data.
Working in the head doesn’t scale.
Mutually inconsistent complaints. It is precisely by not fixing values for your variables that you can prove that something works for any possible value. And this is precisely the only technique that automatically scales to infinitely many cases using a finite amount of reasoning.
Is this guy seriously considering depriving people of the joy of crafting their own beautiful algorithms?
I believe his thesis is that people learn crafting algorithms by reusing other algorithms and adjusting them. Any program starts as the Hello World algorithm and is then adjusted into something different.
Cherry picking straw man sentences does not a point make.
Why not address his two bolded thesis statements at the top of the post?
Programming is a way of thinking, not a rote skill. Learning about “for” loops is not learning to program, any more than learning about pencils is learning to draw.
People understand what they can see. If a programmer cannot see what a program is doing, she can’t understand it.
Pretty sure his first sentence nullifies your caricature of his position. He does not advocate divorcing the thinking from the doing, but rather unifying them so that one can think and do, and one’s doing influences the thinking in a virtuous cycle, aided by the programming tools at hand.
The rest of his post goes on to show a few (but not the only!) ways the tools can aid in the visualization of the program execution, thereby helping the programmer understand just exactly how their program is actually being interpreted by the computer.
If I can summarize his position (and my interpretation of yours), it’s that Bret Victor believes people think and learn differently, and teaching students in a purely abstract, symbolic way without ever seeing the concrete effects of their code is not going to be effective for all.
Your counter argument is that coding is symbolic manipulation, and making abstract concepts concrete does not help, but rather hampers an individual’s ability to internalize what’s happening and inhibits the building of a mental model that allows working with the program abstraction. Or did I misrepresent what you’re trying to say?
I don’t actually think you two are in conflict at all. Bret is talking generally, whereas you’re being very specific about algorithmic development. Surely you don’t believe teaching children to program should begin with the same verbosity and tools as those used by working professionals?
Let me attempt to summarize the core argument of the article:
Scripted narration in a medium that’s supposed to champion interactivity is a fool’s errand. Instead, narratives should be emergent via mechanics of the game that foster discovered self-narration.
Put more crudely, the author would like gaming to be akin to a child playing with toys. The toys offer zero narration of their own–it’s all in the player’s head!
Though games like “Minecraft”, “Dreams” and “The Witness” are not mentioned by name, I would imagine the author very much would like to see more of these, and less of the… well, other games.
It’s a positive review of the game What Remains of Edith Finch, which argues that this game helps show us the way forward for the medium.
Secondarily, though this gets more space, headline, and attention, what some other people have argued is the way forward for the medium, the ol’ Interactive Storytelling dream of folks like Chris Crawford, Janet Murray, and David Cage, is maybe a dead-end, which we can definitively realize now that we’ve seen what the better way forward is.
Admittedly, this is reading between the lines a bit and he doesn’t quite make this argument as I’ve reconstructed it (he seems to be hitting in various directions other than David Cage, who I personally would’ve chosen as a better foil). Bogost’s a personal friend who I’ve known for a little over a decade, and I like much of his writing, but this isn’t my favorite piece of his, even if I’m sympathetic to the form of the argument I’ve reconstructed.
OK. So among the games I’ve played, Doom and Bioshock, which is better? I can accede to the idea that a hypothetical Libertarian Atlantis movie would tell a better story than Bioshock. But does the addition of story elements make Bioshock worse than Doom? Would eliminating all the voiceovers from Bioshock and reducing it to “kill stuff and push buttons” like Doom make it a better game? Not inclined to agree.
I don’t think it’s really arguing at the level of “game A is better than game B”, but more about future agendas. It argues that the holodeck “interactive narrative” dream, which views true interactive storytelling as the way to take the medium to the next level, isn’t promising, and is in favor, instead, of an alternative path forward, which it argues the game What Remains of Edith Finch embodies. Now, it’s hard for me to judge this last claim, because I haven’t played that game.
(The “holodeck” reference has an outsized significance in academic game studies, perhaps not obvious to the average reader, because the metaphor was used in an influential 1998 book by Janet Murray entitled Hamlet on the Holodeck: The Future of Narrative in Cyberspace. In addition to referring to the Star Trek holodeck, of course.)
Secondarily, though this gets more space, headline, and attention, what some other people have argued is the way forward for the medium, the ol’ Interactive Storytelling dream of folks like Chris Crawford, Janet Murray, and David Cage, is maybe a dead-end, which we can definitively realize now that we’ve seen what the better way forward is.
I’m curious what the “better way” is, in your opinion?
“Better” is in the eye of the beholder, as well. As much as I’d love (I don’t, actually…) to play Halo 15 and Call of Duty 26’s multiplayer portion and weave my own narrative devoid of any scripted narrative–such that no two players will experience the same arc of encounter and will each walk away with their own unique experience–I’d much rather experience the works of David Cage, et al. rather than play for play’s own sake (I, for one, cannot wait for Detroit to be released!).
I enjoy games the most when they make me think and relate back to something in the real world and cause me to appreciate it more, or see it in a different light.
Without any spoilers, the latest game I completed (Horizon: Zero Dawn), made me truly appreciate the design behind Erlang (yes, a seemingly out-of-the-left-field connection!).
I don’t have strong opinions on the better way personally. I actually started my academic career building AI support for interactive storytelling, with the goal of making games that were non-scripted but still heavily story-based. So I have some sympathy for the Grand Interactive Storytelling dream, enough that I spent a few years working on it (albeit on the backend tech side, since I’m not a writer or game designer), and occasionally still go back to it. But I also have some sympathy for arguments like Bogost’s that argue this is trying to put a square peg in a round hole. I suppose I should stake out a strong opinion on this, given that it’s close to my research area, but I’m somehow just very undecided about it.
I really appreciate you adding context to the article. It sounded like something interesting, but I had a hard time making out the thesis. Clearing up the holodeck reference, in particular, helped
I’m a through-and-through erlang devotee, but I’m not sympathetic to the argument that performance doesn’t matter. It matters a lot; there’s a bunch of hard/interesting problems that you’d prefer to fit on one box or within one programming paradigm, and erlang (and ruby, and python, and clojure, and …) frequently makes that impossible. The fact that erlang has a good multi-box story and an acceptable multi-paradigm story doesn’t really help the fact that multi-box and multi-paradigm are both incredibly holistically costly.
Agreed. This antipathy for speed is also a very odd idol in the realm of programming.
But I think there’s a bigger issue at play here, a tunnel vision of sorts. There are three general problem domains when it comes to performance:
Hard real-time (a process absolutely must finish in time, or else it’s in a failed state)
Soft real-time (a process should finish in time the vast majority of the time. Occasional lapses are not an issue)
Not real-time (“offline” processing, more or less)
These posts are all super focused on the problem domain that Erlang excels at, which is soft real-time, but there are plenty of problems in this world that don’t fall into soft real-time! Hard real-time applications (like video games) are on one end of the interactive spectrum. They must respond at least 30 frames a second, or else it’s essentially game over for the player. Jank-tastic!
And then, on the other end of the spectrum, are offline processes that care about overall throughput, but not immediate responsiveness (e.g. protein folding).
On both ends of the spectrum, speed matters. There happens to be a niche in the middle where reactive/responsiveness happens to matter a lot more than pure throughput, but let’s not mistake the tree for the forest!
I think videogames fall differently into hard or soft real-time categories depending on genre.
A fighting game like Soul Caliber or Street Fighter is hard real-time, because the entire game is considered worthless and customers will pay no money for it if there’s even a rumour that it might occasionally drop a frame.
A game like Skyrim or Mass Effect is definitely soft real-time. They have to give smooth framerates most of the time because in-game combat is real-time-ish. However, these kinds of games can ship with noticeable intermittent (but not frequent) framerate hitches and people will still pay for them. For example, when new areas and scripts get paged in as the player traverses the world, or when a bunch of complicated in-engine scripting kicks in all in one go for some reason.
A game like XCOM or Civilisation is pretty much best-effort. It can skip frames all over the place without breaking gameplay. It’ll feel irksome if animations are constantly choppy. Players can even opt to disable (some|most) animations in these games, and many will.
…I think FPS games fall into the hard real time category for players who pay close attention to how well they do in competitive matches but the soft real time category for players who don’t.
It’s absolutely okay to run an FPS slower than 30 Hz, provided you do so at a constant slow rate (so players can model it in their heads without getting annoyed). Also, note that the game logic is different from the rendering logic–the Quake series ran the actual game logic at between 5 and 20 Hz depending on the game iirc, however fast the rendering itself was happening.
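For anyone who hasn’t seen the pattern, here’s a minimal browser-style TypeScript sketch of decoupling a fixed-rate logic tick from rendering, as the Quake comment above describes. The 20 Hz tick rate, the stub functions, and the use of requestAnimationFrame are my illustrative assumptions, not how Quake actually did it.

    // Minimal fixed-timestep sketch: logic advances at a constant rate,
    // rendering happens as often (or as rarely) as the device allows.
    const TICK_MS = 50; // 20 logic updates per second

    let accumulator = 0;
    let last = performance.now();

    function updateGameLogic(dtMs: number): void {
      // advance the simulation by one fixed step (stub)
    }

    function render(alpha: number): void {
      // draw the latest state, interpolated by `alpha` between ticks (stub)
    }

    function frame(now: number): void {
      accumulator += now - last;
      last = now;

      // Catch up in fixed-size steps so simulation rate is independent of FPS.
      while (accumulator >= TICK_MS) {
        updateGameLogic(TICK_MS);
        accumulator -= TICK_MS;
      }

      render(accumulator / TICK_MS);
      requestAnimationFrame(frame);
    }

    requestAnimationFrame(frame);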
For a single-player (or PvE) FPS, sure. For a multi-player (PvP) FPS, less so. Aiming with 60Hz rendering is qualitatively easier than aiming with 30Hz rendering, even if rock solid.
A game like XCOM or Civilisation is pretty much best-effort. It can skip frames all over the place without breaking gameplay. It’ll feel irksome if animations are constantly choppy. Players can even opt to disable (some|most) animations in these games, and many will.
An interesting middle-ground being RTS. They can run at slow rates, but the interface must stay responsive. I used to play CoH competitively on a box that was always between 10 and 20 frames per second, with occasional drops, and Relic really handled that well.
Yjs is a competing CRDT library that’s quite a bit more performant than Automerge, despite Automerge being compiled to WASM. Take a look at their benchmark suite and judge for yourselves: https://github.com/dmonad/crdt-benchmarks
This is great! I’d love to see an option to have ZFS as the root volume at install time.
For anyone interested in getting Ubuntu to boot from root ZFS volume, there’s this in-depth step-by-step guide which I followed to get there: https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS
It took me the better part of 2 days to get everything working the way I wanted, though, so I would love to see this natively supported in the installer :)!
Emoji Keyboard: it’s built in on most OSes, no need for an extension (macOS: ctrl + cmd + space, windows: win + . (or win + ; ))
Didn’t know that. Thanks.
Wow, cool! Do you know by chance if there’s a builtin shortcut that would let me easily insert an em-dash (—) on Windows? Can I maybe somehow enter it quickly from keyboard through this Win+. dialog?
Yes, on Windows there are Alt code shortcuts. It requires a numpad keyboard, with Numlock turned on. If you’re on a Laptop, most should allow you to simulate the Numpad by pressing Fn + (you’ll have to look it up per manufacturer and device model).
Anyways, specifically for your request:
Left Alt + 0151
That is, hold down left Alt and type 0151 on the numpad and then let go of left Alt. This works for all sorts of other characters, as well. A simple search will find you plenty more.
No numpad on my laptop and no luck for me it seems. I’ll have to stay with third-party tools apparently :/
I already found the more legitimate versions of these articles to be silly, that concerned themselves with, say, timing information for very specific types of hardware.
I find this article and its naming scheme that imitates those other articles to be drivel. I can’t write for anyone, sans myself, but I’ve never used this AWS nonsense and never plan to. I find so many willing to recentralize the Internet to be disconcerting. I suppose these numbers would be fine to have if you actually used this AWS nonsense, but why would you be purchasing something before you knew the cost thereof? Is it normal for people to not only rent from a behemoth instead of a VPS or the more respectable self-hosted server and then, in addition, not know what they’re doing until they get a bill?
That’s fine, but I don’t think you need to be overly antagonistic toward the article author.
I suppose these numbers would be fine to have if you actually used this AWS nonsense, but why would you be purchasing something before you knew the cost thereof? Is it normal for people to not only rent from a behemoth instead of a VPS or the more respectable self-hosted server and then, in addition, not know what they’re doing until they get a bill?
You are assuming a lot here. Are you making the assumption that companies which requisition AWS’s services are not aware of their TCO (total cost of ownership) by the mere presence of this blog post? I have to disappoint you, but when I was pitching to one of my previous CTOs about using Redshift, I had to create a spreadsheet of pricing calculations and projected cost depending on varying usage scenarios. We were very aware of what we were getting ourselves into, and the cost/benefit ratio. I imagine any company that likes to stay in business is also aware of such factors.
You also miss out on one crucial use case that is almost always a net win with cloud infrastructure: extremely bursty workloads.
Netflix–I can only imagine–must save hundreds of millions annually by leveraging AWS’s elastic nature to spin up insane amounts of compute during Friday evenings, only to spin them down come Monday. It would be extremely wasteful to be paying for tens (hundreds?) of thousands of idling machines during Monday -> Thursday, under the scenario that they have to own enough compute power commensurate with their peak load.
Netflix could make a cool buck renting out those servers when they’d otherwise be idle.
They’d also have to pay a buck to cool them ;)
They have an internal spot market afaik
One of the best arguments against (or for) centralizing is cost. This article attempts to give better information about that for people who might want to host things themselves.
Your dismissal of it and, more importantly, the people who might find it useful could drive off more folks than you would have convinced if it had been friendlier.
This filter doesn’t work nearly as well on my phone. Is this limited to certain devices?
My naive assumption as to why that may be is that since Snapchat targets so many devices, the run time for the filter is capped at some number of milliseconds of compute (around 16ms or 33ms for 60/30 FPS, respectively). So if you have a slower/older device, then your device might not have enough time to compute the filter before it’s forced to “render” whatever has been computed so far.
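A purely speculative TypeScript sketch of what such a per-frame cap could look like; the budget value, the placeholder per-pixel work, and the use of performance.now() are my assumptions for illustration, not anything known about Snapchat’s implementation.

    // Do as much filter work as fits in the frame budget, then hand back whatever
    // is done so the frame can still be presented on time.
    const FRAME_BUDGET_MS = 1000 / 30; // ~33 ms on a device targeting 30 FPS

    function applyFilterWithinBudget(
      pixels: Uint8ClampedArray,
      budgetMs: number = FRAME_BUDGET_MS
    ): number {
      const start = performance.now();
      let i = 0;
      while (i < pixels.length && performance.now() - start < budgetMs) {
        pixels[i] = 255 - pixels[i]; // placeholder per-pixel work
        i++;
      }
      // On a slow device the loop exits early; the partially processed buffer is
      // what gets rendered, which would match the degraded result described above.
      return i / pixels.length; // fraction of the filter actually applied
    }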
For me the entire app hangs indefinitely when I try to use it.
According to whom has CoffeeScript served its purpose?
A lot of the good features in CoffeeScript, like fat arrows, made it into ES6.
Three major features certainly made it into ES6:
Fat Arrow => for declaring an anonymous function with scope context preservation
String interpolation.
Splats and destructuring.
But that’s not the extent of CoffeeScript’s ergonomic improvements (in no particular order):
Everything is an expression. I love this the most about Ruby/Rust/Elm/etc. No need for an explicit return keyword (in most cases, except when wanting to short circuit). The last “expression” in a function is automatically returned as its return value.
? to guard against possible undefined keys when doing nested object access (e.g. val = obj?.key?.might?.not?.exist will not crash). In JS you’d have to guard against every level of object access via if (obj && obj.key && obj.key.might && obj.key.might.not). (There’s a short comparison sketch just after this list.)
-> skinny arrow to not preserve scope context when declaring an anonymous function. In legitimate cases where you want a closure’s this (or @ syntactic sugar in CoffeeScript) to actually refer to the new anonymous function scope’s this or arguments, you don’t want to use a =>. In vanilla JS, that means writing out function(); in CoffeeScript, it’s a skinny arrow.
Not requiring parentheses for function calls (e.g. alert "Hey ma, no parenths!").
Control flow expressions can be suffixed to a line (e.g. alert "You should see this if..." if truthy_value).
List comprehension syntax for loops (Python-esque).
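Since the existential access operator is the one people seem to miss most, here’s a small TypeScript comparison sketch. The object shape is made up purely for illustration; the CoffeeScript form appears in a comment.

    // Hypothetical deeply nested object where any level might be missing at runtime.
    const obj: { key?: { might?: { not?: { exist?: string } } } } = {};

    // Pre-ES2020 JS guard, one && per level:
    const a = obj && obj.key && obj.key.might && obj.key.might.not && obj.key.might.not.exist;

    // CoffeeScript: val = obj?.key?.might?.not?.exist
    // Modern JS/TS optional chaining now expresses the same thing:
    const b = obj?.key?.might?.not?.exist;

    console.log(a, b); // both undefined, no crash either way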
Try this in ES6:
(Tagged template literals + coercion from Array to String, har har har.)
This is so cool! I really like the structure of this post: recognizing something one person has done well (and therefore other people have failed) and then explaining it
“done well”
https://github.com/openbsd/src/blob/master/usr.bin/yes/yes.c https://github.com/coreutils/coreutils/blob/master/src/yes.c
optimizing to the extreme for fun is kind of interesting, but to do it at the expense of clarity with nothing really to gain seems like a loss.
I really don’t like GNU’s implementation, NetBSD and COHERENT seem to have the most readable yes out of all the yesses I looked over (BusyBox had the worst). It may be possible to apply this to other utilities like dd and cat, which I plan to look into soon (unless someone else beats me!).
Who on Earth thinks that BusyBox thing is a good idea? I’d hate to see anything even remotely complicated from whomever wrote that.
It’s super compact both in code size and resource consumption (one stack variable!!), and it’s still relatively easy to understand. I’d say it’s doing its job marvellously.
Haven’t had time to look at the code, but Alpine Linux uses it by default. And it’s targeted mostly at embedded Linux, so I’m guessing ultra optimization is more important to them than readability in this case.
Yeah, that isn’t cool. I thought they were just trying to avoid reusing a variable, then I realised they were reusing a variable, and/or moving on to argv[1] :(
One poster on Hacker News suggested this: https://news.ycombinator.com/item?id=14543640
Classic HN. Always reject the mundane explanation that the program is fast because somebody wanted it to go fast in favor of a narrative involving an epic struggle against corporate overlords.
Check the thread again, GNU explicitly asks people to do this: https://www.gnu.org/prep/standards/standards.html#Reading-Non_002dFree-Code
So why did they wait so long to make this change?
I’m rejecting your characterization of that HN comment, because this is a common method for GNU programs. I am not rejecting your assessment of why it changed though.
This wasn’t done “with nothing really to gain” (although the gain might be subjective). It was performed as a reaction to a filed bug: https://debbugs.gnu.org/cgi/bugreport.cgi?bug=20029
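For anyone wondering what the speedup actually consists of: as I understand the coreutils change, the gist is pre-filling one large buffer with repeated "y\n" and writing it out in big chunks, so each write() call covers thousands of lines instead of one. Here’s a rough Node/TypeScript sketch of the same idea; it is not the GNU C code, and the chunk size is an arbitrary assumption.

    // Amortize the per-write() cost by emitting thousands of lines per call.
    const line = Buffer.from("y\n");
    const CHUNK_LINES = 8192; // arbitrary; pick something that comfortably fills the pipe buffer
    const chunk = Buffer.alloc(line.length * CHUNK_LINES);
    for (let i = 0; i < CHUNK_LINES; i++) {
      line.copy(chunk, i * line.length);
    }

    function pump(): void {
      // Write until the pipe applies backpressure, then resume on 'drain'.
      while (process.stdout.write(chunk)) {
        // keep writing
      }
      process.stdout.once("drain", pump);
    }

    pump();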
Interesting, I wonder what the backstory to that is. The example is oddly specific enough (involving a pipeline of yes, echo, a shell range expansion, head, and md5sum) that it looks like an unexpected slowdown someone actually ran into in practice, vs. just a bored person benchmarking yes.
If “yes” was written once, decades ago, and someone spent all of one entire week validating, I’m ok with getting a 10x performance increase on every *nix system in existence ongoing.
I love it when pipelines/shell scripts can scale vertically for a long time before having to rewrite in some native language.
Some enterprising soul out there… please mass manufacture this! I’m dying for an Ergo Dox replacement that doesn’t presume the owner has extremely large hands (the thumb clusters are placed way out of my normal hand reach!)
You might like the Diverge (now at version III with silly LEDs): https://unikeyboard.io/product/diverge/
I have a Diverge II and love it. The more natural thumb placement is one of the reasons I went with it over an ErgoDox. Also, I offset my key map “inward” by one column (i.e., g and h are the keys on the inward side of my home row) so that the thumb clusters are even more convenient and so that the outside columns can be used for symbols and meta keys akin to a standard layout.
Hi; I sell assembled and DIY kits that don’t require a lot of hand movement:
https://atreus.technomancy.us
Not exactly mass-produced of course, since the demand isn’t there in terms of volume. Mine is similar to the one in the link except as a one-piece, so it’s easier to travel with. Also it has a wooden case instead of just using the bare PCB.
Thank you for this!!!!
After using Colemak (3+ years) and then attempting Workman (slightly better than Colemak at reducing discomfort with reduced horizontal index finger travel for me personally), I’m ready for a keyboard that’s optimized for reduced pinky usage (even on Windows/Linux machines, I’ve swapped Ctrl with Alt/Meta such that keyboard shortcuts primarily use my thumb like Mac OSX’s Cmd) while still reducing the horizontal finger motion that was so common with Colemak.
Time to roll up my sleeves and learn QGMLWY!
For anyone who suffers from typing discomfort, I can’t recommend alternative keyboard layouts enough. It’ll likely take a long while to get used to typing in a different keyboard layout, however (I believe Colemak took me well over 8+ months to get decently proficient at [80+ WPM; my QWERTY baseline is about 95WPM], and I never did get proficient to the level I would have liked with Workman…).
However, if you’re not willing to take the plunge to retrain your muscle memory (not a small undertaking!), there are two small changes that really helped me out which I would recommend to anyone:
Swap Capslock with Backspace. No more reaching for the top right side of the keyboard with your right pinky in an awkward motion! Some VIM users have told me they remapped this to Esc… but I’m much more of a Ctrl+C person (plus, after the second tip below, Ctrl+C no longer becomes a torture test on your left pinky!)
Swap Left Ctrl and Left Alt so that hotkeys only require your thumb to hold onto the modifier instead of your pinky! (This is unnecessary if you’re on Mac OSX)
I had pinky problems and have been using QFMLWY for 6 years. It’s one of the best investments I’ve made in my career. If you want a keyboard try the Kinesis Advantage.
I wrote a little more here last time this came up on lobste.rs
Thanks for the testimonial! Btw, what made you choose QFMLWY over QGMLWY (the latter is the one with ZXCV unchanged)? Part of the reason I was attracted to Colemak/Workman was because I didn’t want to have to change my hot key muscle memory/bindings (one of the reasons why I never gave Dvorak a try). I’m guessing you didn’t find that to be a problem?
I’ve demo’d the Kinesis Advantage in person, and wasn’t quite a fan of the bowl size (I have small hands. I’ve also used the Ergo Dox previously and had to sell it because my hands are also too small to reach the keys and the thumb clusters comfortably)–I’m thinking of getting a TypeMatrix 2030 keyboard since I did enjoy the columnar non-staggered layout of the Ergo Dox.
Oops, I actually use QGMLWB, I can never remember which and just copied what I (mistakenly) said last time. They’re similar enough that you can confuse them so I don’t think it matters what you pick :) I’d just go with your intuition.
However, your concern still applies. The answer is that I don’t use keyboard shortcuts outside of my custom Emacs setup in any significant capacity. But even if I did, it wouldn’t have been a consideration–I overhauled everything at once and just resigned to being useless for a few weeks.
The TypeMatrix looks good to me except for Ctrl under the pinky. I think if I had used this keyboard I would have kept with the foot pedals.
Yeah, I’m definitely going to give the “most optimized” version a try… What do I have to lose ;)?
Re: TypeMatrix: Per my own “life pro tip #2” in my GP post, I would personally be swapping Left Ctrl and Left Alt, so that I’d be using my thumb instead of my pinky for Ctrl (I never ever use Right Ctrl anyways, so that’s not much of a big deal, and if I needed to use Alt, for say Alt + Tab, I just use a combination of my right thumb [on R-Alt] and my left ring finger [on Tab]).
I took the hardware route to solving the ‘pinky’ problem and bought a TypeMatrix 2030. It brings the Enter and Backspace keys into the middle so you use your index finger/thumb to press them. The Shift/Ctrl keys are also taller to make them easier to access.
I swapped CapsLock for Ctrl and it’s 1000% more comfortable for my hands to not have to reach for the Ctrl key. Having CapsLock sit on the home row while being such a rare keypress (does anyone use Caps Lock any more?) makes this an easy change for a big win.
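On Linux/X11 you can make the same change without any extra software; here’s a minimal sketch, assuming the stock setxkbmap tool is available (ctrl:swapcaps and ctrl:nocaps are standard XKB options):

    import subprocess

    # Swap CapsLock and Ctrl for the current X11 session.
    # Use "ctrl:nocaps" instead to turn CapsLock into an extra Ctrl
    # without moving Ctrl up to the CapsLock position.
    subprocess.run(["setxkbmap", "-option", "ctrl:swapcaps"], check=True)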
I’ve used Dvorak for a couple of years, and as a programmer I would recommend Colemak to someone interested simply because it leaves the symbol keys alone. Having Dvorak’s home-row vowels is a huge win, but it’s largely offset by putting <>? up at QWE.
Still not a small undertaking, but you feel better even after an hour of fumbling as you learn it. Compared to Colemak and Workman, where I still couldn’t get comfortable with O and I after weeks of practice…
‘A’ being on the other hand entirely will take some getting used to.
wow, great read. this is one hell of a treat ( i actually wrote that even before reading your username :O )
thanks for the references, some more great thoughts in those. (there might be one broken link that led me to a phx.corporate-ir.net domain .. can’t remember which link i had clicked)
The broken link is supposed to point to one of Amazon’s securities filings (also referred to as the “2016 Letter to Shareholders”). It’s the letter where Jeff Bezos lays out his “Day 1” vs “Day 2” philosophy and publicly coins the term “disagree and commit”.
The relevant portion on “disagree and commit”:
Here’s the actual filing as hosted by the SEC:
https://www.sec.gov/Archives/edgar/data/1018724/000119312517120198/d373368dex991.htm
Small correction: that phrase has been part of the Amazon principles for, basically, ever.
https://www.amazon.jobs/principles
I’m pretty sure Andy Grove came up with it at Intel even earlier, but it’s all a part of the cult of management at this point. Disagreements are merely people slowing down “good business activity” from occurring, those bastards. The disease they’re trying to prevent is pretty awful as well: people who think that disagreeing with others is how their voice can be heard, and their value communicated at work.
At least Andy Grove’s catchphrase was “constructive confrontation”. His books also bring up trying to find the Cassandras on your staff, listening to what they have to say, and incorporating it into your strategy. In printed form, at least, Grove was very much for searching for the truth and not just plowing over subordinates.
this is a win-win situation, as far as I can tell. some app makers don’t want their content on user-owned devices. as a user, I don’t want to support such companies. now I have to do less work to avoid them.
the unfortunate irony, though, is that Netflix still runs just fine on my rooted devices and my friends always insist on installing it.
I think this optimism is a good thing, but the biggest hurdle for user migration is application support. The GP mentioned Netflix: How many users do you think will actually use a platform that doesn’t have key “killer” apps like Instagram, Snapchat, Netflix, Facebook, etc?
The irony is that I’m playing the devil’s advocate, as I don’t use any of the aforementioned apps, but I completely understand that the vast majority of people depend on them (and largely for socially benign reasons like catching up on the latest pop-culture info, statuses of friends & family, etc).
Not only that, but there’s also the question of platform stability and usability. It might seem like it was only yesterday that Android was nipping at iOS’s heels, but it wasn’t always like this. Android for years (for the better part of a decade) was a completely atrocious user experience for most (UI jitter, battery life issues, app compatibility issues due to device fragmentation, etc).
Neither of the two platforms you mentioned (Sailfish OS & Boot to Gecko/Firefox OS) is ready for prime time in the above regards, and one of them (B2G/FF OS) is already discontinued…
Working on the next big update to Learn TLA+. A lot of small changes, but the main one is that I’m ripping out the current reference section (“here’s the set of all automorphic functions over a set!” is cool but not very useful) and replacing it with a ton of example specs (“here’s how to simulate a client-server architecture!” / “here’s how to find bugs in MongoDB!”) and techniques (“here’s how to add cronjobs!” / “here’s how to properly use model values!”). I think that will make it much more useful to people who know the basics but aren’t sure how to apply it.
Is the “HEY” the part you’ll be getting to soon ;)? https://www.learntla.com/introduction/
That part is 1000% perfect and nothing will ever change my mind
I don’t believe education helps. Just provide consulting for the tendering and commissioning.
Requirements analysis is better done by a techie learning the problem domain than by a domain expert learning technology. Not every techie can, though. Requirements analysis is a skill very different from coding.
If by “education doesn’t help”, you meant to say “educating people who are neither technical nor domain experts to perform requirements analysis does not help”, then you’d be likely right on some practical level.
However, philosophically, I don’t think I can agree.
That kind of thinking is how we write off large swaths of individuals in orgs as simply being “unproductive” (i.e. “education doesn’t work anyways, so why bother educating them to become more self-sufficient?”).
I don’t claim to have the answer either, but I don’t want to write off one possible avenue of the solution: educating both sides of the table of the tendering process.
Besides, “provide consulting” isn’t really a solution… Someone still has to learn to do the job; you’ve simply externalized the cost onto another entity that doesn’t even have a vested interest in your system succeeding (it’s no secret that this is one of the longest-standing problems with outsourcing/contracting expert-skill work in an area where you yourself are not an expert).
2FA is mostly security theatre [0], and 2FA that uses SMS is most definitely just masquerading as security in 2017.
Even NIST updated their guidelines [1] last year to discourage using the public switched telephone network (PSTN) to deliver multi-factor authentication tokens.
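For contrast with SMS delivery, here’s a minimal sketch of the kind of second factor that doesn’t ride over the PSTN: a time-based one-time password (TOTP, RFC 6238) computed locally from a shared secret. The secret below is a made-up example value, not a real credential:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        """RFC 6238 time-based one-time password (the usual 6-digit authenticator-app code)."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval                  # 30-second time step
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                               # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))                              # example secret for illustration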
tl;dr:
Google is “strong arming” (by threat of blacklisting) Certificate Authorities to comply with a “Certificate Transparency” program that Google has pushed through the IETF (Internet Engineering Task Force).
The “Certificate Transparency” (hereon referred to as “CT”) program requires that all issued certificates be logged with 2 separate CT servers whose logs are publicly auditable by anyone. The premise is that we can’t prevent Root CAs from being compromised, but we can do the next best thing, which is to prevent errant certificates from working at all in the popular browsers (starting with Chrome).
If a certificate used by a server doesn’t appear in the 2 CT logs, then Chrome will show a bad-certificate warning (the same way it shows a warning for expired or self-signed certificates today).
The other 3 major browser makers (Mozilla, Apple and Microsoft) have yet to comment on whether or not they will follow suit in using the CT logs to blacklist errant certificates.
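For the curious: the CT logs expose a small read-only HTTP API (defined in RFC 6962), so anyone can audit them. A rough sketch of how an auditor might poll a log; the log URL below is a placeholder, since each real log publishes its own base URL:

    import json
    import urllib.request

    LOG_URL = "https://ct.example.com/ct/v1"   # placeholder; real logs publish their own URLs

    def get_signed_tree_head(log_url: str = LOG_URL) -> dict:
        # The signed tree head commits the log to everything it has accepted so far.
        with urllib.request.urlopen(f"{log_url}/get-sth") as resp:
            return json.load(resp)

    def get_entries(start: int, end: int, log_url: str = LOG_URL) -> dict:
        # Fetch a range of logged certificates so third parties can check what was issued.
        with urllib.request.urlopen(f"{log_url}/get-entries?start={start}&end={end}") as resp:
            return json.load(resp)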
Here’s Slate’s closing remarks:
I think part of the lack of comment from the other browsers is that they like the changes but figure they don’t have the market share and/or clout to push changes like this through. So if Google succeeds, hooray, we’ll follow their lead. If Google fails, then it’s no sweat off our backs; only the Chrome team has egg on their face.
Personally, I’d like this initiative to succeed. The biggest concern with TLS was always that every CA could issue any certificate ever and no one could double check that they are behaving. Now that Chrome has a huge dominant position (60% globally I think), they are forcing CAs to behave.
I got scared by the title, as huge companies “improving security” often means screwing over hobbyists (i.e. SecureBoot, locked down phone bootloaders). Relieved to see that it’s just forcing CAs to behave better.
Using Tor for webcams and baby monitors due to Tor’s security design sounds nice and all, but… one thing that’s missing from that PDF is how horrendously low* the throughput of the Tor network as a whole is. It’s dependent on individuals and organizations volunteering bandwidth and compute cycles, and the last time (2 or 3 years ago?) I tried using Tor, it was a terribly slow experience even for regular browsing.
Forget streaming webcams and baby monitors; even highly distributed YouTube videos with edge servers worldwide are sluggish as heck over Tor!
[ * ] With loads of caveats. There are fast nodes out there, and you can set up a fast relay of your own to use as the first hop, but the overall throughput is still very much dependent on others in the onion network.
I think what should be taken from the slides is “this is a solved problem”.
Tor as it is today may not be up to the throughput and latency needs, but the protocol, and the near-zero effort for the end user to access their devices, is.
I have hope that now we have this thought developers will run with it rather than go all ZOMG WEBSCALE CLOUD BBQ on it or try to re-invent their own version of Tor.
Maybe all that is needed is a private closed Tor service (self hosted on the devices themselves) to push signalling over (think encryption keys) and then you can make a direct connection.
Of course the assumption here is that this is a technical problem. Programming rarely is the hard part, the economics for the manufacturers may simply favour centrally controlled infrastructure.
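As a sketch of the self-hosted signalling idea: a device can publish an onion service for itself programmatically. This assumes a local Tor daemon with its control port enabled and the stem library installed; the port numbers are arbitrary examples:

    from stem.control import Controller

    # Assumes tor is running locally with "ControlPort 9051" enabled.
    with Controller.from_port(port=9051) as controller:
        controller.authenticate()
        # Expose a local signalling endpoint (port 5000) as port 80 of a fresh onion address.
        service = controller.create_ephemeral_hidden_service(
            {80: 5000}, await_publication=True
        )
        print("Reachable at %s.onion" % service.service_id)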
We don’t need a new form of money. Especially one that is based on stone age ideas like the gold standard. We need something better than money. I don’t know what that looks like, but I personally would love to live in a world where there is no money.
The only reason bitcoin works today is that it is convertible to state money. Can you have a stateless currency? I suspect you can’t. Why? Because historically currency was used as a tool to provision armies and states.
What is bitcoin provisioning? Oh shit, is it skynet? It’s skynet isn’t it.
Definitely far from perfect, but a fluid, reputation-based currency called “Whuffie” features in Cory Doctorow’s book Down and Out in the Magic Kingdom (a fun read, plus it’s available on his website for free!).
These kinds of post-fiat “monetary surrogates” are largely predicated on some sort of post-scarcity society, which we are nowhere near (though it’s often promised by Singularitarians and Futurists of all stripes…).
Also, have you seen Black Mirror’s third-season episode “Nosedive” (spoiler alert: the link goes to the Wikipedia article for that episode)?
I immediately thought of Nosedive when I was reading your comment! Reputation based systems are fraught with danger because people are so good at gaming the system - any system.
I think this is absolutely true. At a previous job there was a group that wanted to create company-wide project stats like “number of refactors, test coverage, errors, lint errors, etc” as a way to motivate employees.
I was like “wow, this is going to be gamed so fast”. It reminds me of the Soviet Gosplan. They had an intricate system to monitor the economy using computers to make sure things were going as planned. As expected, people gamed the system, e.g. by making products travel back and forth over rail to increase “rail miles”.
This is why I wonder if we can make a system that’s not money like, or point based, or whatever.
I mean… the entire point of the book is that if you only reward people for being popular they start doing really shitty things. The point of the book is kinda that Whuffie is a terrible idea.
Whuffie’s creator describes it as a deliberately “a terrible currency”. It exists to criticize misfeatures of our current system by making them much worse.
This whole thread is building on soft ground.
So is whuffie kind of like lobster karma?
Wow. I am confused. This is just mindless rambling yet people seem to like it.
Well we might not ‘need’ it but an alternative is definitely useful. There are things that bitcoin can do better than state-sanctioned money.
Also, eating food is a ‘stone age idea’, so I guess we could stop doing that too? Just because something has been around since forever does not make it bad, stupid, or obsolete.
I could literally say this about anything. We need something better than cars. I don’t know what it is but I would love to live in a world where there are no cars.
That is also not true. You could definitely buy some stuff with bitcoin without state money. The fact that state money has been around for centuries and the world’s economy has built itself around it does mean that it is the most easily used money.
This is wild speculation that is clearly false because money is useful even without states or state-sanctioned warfare.
And “just because states use money to pay for armies, money cannot exist without a state” is a non sequitur.
Like what? Expensive to do transactions, and at least state money can be truly anonymous.
Stone age implying we found something better than stone. Stone age does not just imply old.
I’m saying bitcoin isn’t a new thing. The blockchain is novel, but money on it isn’t. I can’t imagine what the better thing is, because if I could, I would make it.
I’m making the claim that bitcoin would not work if it wasn’t convertible to state money. In fact, I can’t imagine you could prevent that from happening anyway. And if we lived in a parallel universe where there was no money, nobody in their right mind would think bitcoin solves a problem they have.
History would like to have a word with you. Yes, money is useful outside of paying taxes, but that’s a side effect; the seeding of it is by states. When states go away, so does the money and its usefulness.
Example: the Soviet ruble was used after the breakup of the Soviet Union only because the former states decided to keep using it until the new ruble took over in 1993. Nobody kept using the Soviet ruble beyond that.
I think money has brainwashed us. We grow up with it, of course it’s normal. It’s part of life. But it’s just an invention that has a very real and focused purpose. It’s there to provision the state.
We have to use a little more imagination to get rid of it, and bitcoin is not that. Bitcoin is a boring version of the same thing. The communists at least had a little more imagination.
It lets you send money to somebody across the globe quickly without the banks taking 5%.
A claim which you support with what evidence?
This whole thing doesn’t even make sense. What do you mean exactly by ‘would not work’? The more I think about it, the sillier this thought seems.
You do know trading and money have existed since before states, right?
Plenty of times people have made their own currency e.g. scrips for community use when state money was in short supply.
What’s the per-transaction cost (in electricity generation) for BTC?
I’ve seen estimates upwards of $7, which puts it firmly in “not really better than banks” territory; the receiver will transact again, either to use the money or to convert it to fiat, so that’d be $14 (assuming the conversion was free, or you spent all the money in a single transaction).
Where does the money to pay for this electricity come from? Inflation.
Technically you can buy in BTC without ever exchanging, and that’s what people are trying to achieve, but the scale of it is a niche of a niche at best.
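To show where estimates like that come from, here’s a back-of-the-envelope sketch; every input below is an assumption picked purely for illustration, not a measured figure:

    # All inputs are assumed values for illustration only.
    network_power_mw     = 1_500      # assumed total network draw, in megawatts
    electricity_usd_kwh  = 0.05       # assumed average electricity price per kWh
    transactions_per_day = 300_000    # assumed confirmed transactions per day

    energy_kwh_per_day = network_power_mw * 1_000 * 24            # MW -> kWh per day
    cost_usd_per_day   = energy_kwh_per_day * electricity_usd_kwh
    cost_per_tx        = cost_usd_per_day / transactions_per_day

    print(f"~${cost_per_tx:.2f} of electricity per transaction")  # ~$6 with these inputs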
Money measures value and enables trade; you can use anything for that purpose, as long as your counterpart recognizes its value. Certainly monopolization helps governments tax in order to fund monopolized activities like armies, and I doubt anyone disagrees.
One problem with BTC is the completely uneven injection of new coins. It’s like the “1%” that gets access to QE rounds and such, and can reap the benefits of the newly expanded monetary base before it evens out in the market and devalues the currency.
So if BTC were widely adopted, the Chinese mining cabal would be the new 1% and people would rather go back trading in tobacco leaves and squirrel skins instead of putting up with that shit.
Whatever the programming language’s semantics say they mean.
Nothing! Computers don’t think.
Is this guy seriously considering depriving people of the joy of crafting their own beautiful algorithms? It is admittedly a lot of work, and not everyone’s cup of tea, but why deprive people of the chance to even try it?
Mutually inconsistent complaints. It is precisely by not fixing values for your variables that you can prove that something works for any possible value. And this is precisely the only technique that automatically scales to infinitely many cases using a finite amount of reasoning.
I believe his thesis is that people learn crafting algorithms by reusing other algorithms and adjusting them. Any program starts as the Hello World algorithm and is then adjusted into something different.
That’s not how I understood him. What he literally said is:
This isn’t about code reuse. It’s about trial and error.
Cherry picking straw man sentences does not a point make.
Why not address his two bolded thesis statements at the top of the post?
Pretty sure his first sentence nullifies your caricature of his position. He does not advocate divorcing the thinking from the doing, but rather unifying them so that one can think and do, and one’s doing influences the thinking in a virtuous cycle, aided by the programming tools at hand.
The rest of his post goes on to show a few (but not the only!) ways the tools can aid in the visualization of the program execution, thereby helping the programmer understand just exactly how their program is actually being interpreted by the computer.
If I can summarize his position (and my interpretation of yours), it’s that Bret Victor believes people think and learn differently, and teaching students purely abstract symbolic manipulation without their ever seeing the concrete effects of their code is not going to be effective for all.
Your counter argument is that coding is symbolic manipulation, and making abstract concepts concrete does not help, but rather hampers an individual’s ability to internalize what’s happening and inhibits the building of a mental model that allows working with the program abstraction. Or did I misrepresent what you’re trying to say?
I don’t actually think you two are in conflict at all. Bret is talking generally, whereas you’re being very specific about algorithmic development. Surely you don’t believe teaching children to program should begin with the same verbosity and tools as those used by working professionals?
That’s a very narrow minded way to think about thinking.
A great many words, but I have no idea what was said.
Let me attempt to summarize the core argument of the article:
Scripted narration in a medium that’s supposed to champion interactivity is a fool’s errand. Instead, narratives should emerge from the mechanics of the game, fostering discovered self-narration.
Put more crudely, the author would like gaming to be akin to a child playing with toys. The toys offer zero narration of their own–it’s all in the player’s head!
Though games like “Minecraft”, “Dreams” and “The Witness” are not mentioned by name, I would imagine the author very much would like to see more of these, and less of the… well, other games.
My generous summary of this is:
It’s a positive review of the game What Remains of Edith Finch, which argues that this game helps show us the way forward for the medium,
Secondarily, though this gets more space, headline, and attention, what some other people have argued is the way forward for the medium, the ol’ Interactive Storytelling dream of folks like Chris Crawford, Janet Murray, and David Cage, is maybe a dead-end, which we can definitively realize now that we’ve seen what the better way forward is.
Admittedly, this is reading between the lines a bit and he doesn’t quite make this argument as I’ve reconstructed it (he seems to be hitting in various directions other than David Cage, who I personally would’ve chosen as a better foil). Bogost’s a personal friend who I’ve known for a little over a decade, and I like much of his writing, but this isn’t my favorite piece of his, even if I’m sympathetic to the form of the argument I’ve reconstructed.
OK. So among the games I’ve played, Doom and Bioshock, which is better? I can accede to the idea that a hypothetical Libertarian Atlantis movie would tell a better story than Bioshock. But does the addition of story elements make Bioshock worse than Doom? Would eliminating all the voiceovers from Bioshock and reducing it to “kill stuff and push buttons” like Doom make it a better game? Not inclined to agree.
I don’t think it’s really arguing at the level of “game A is better than game B”, but more about future agendas. It argues that the holodeck “interactive narrative” dream, which views true interactive storytelling as the way to take the medium to the next level, isn’t promising, and is in favor, instead, of an alternative path forward, which it argues the game What Remains of Edith Finch embodies. Now, it’s hard for me to judge this last claim, because I haven’t played that game.
(The “holodeck” reference has an outsized significance in academic game studies, perhaps not obvious to the average reader, because the metaphor was used in an influential 1998 book by Janet Murray entitled Hamlet on the Holodeck: The Future of Narrative in Cyberspace. In addition to referring to the Star Trek holodeck, of course.)
Ok, thanks, this helps put the article in perspective.
I’m curious what the “better way” is, in your opinion?
“Better” is in the eye of the beholder, as well. As much as I’d love (I don’t, actually…) to play Halo 15 and Call of Duty 26’s multiplayer portion and weave my own narrative devoid of any scripted narrative–such that no two players will experience the same arc of encounter and each will walk away with their own unique experience–I’d much rather experience the works of David Cage et al. than play for play’s own sake (I, for one, cannot wait for Detroit to be released!).
I enjoy games the most when they make me think and relate back to something in the real world, and cause me to appreciate it more or see it in a different light.
Without any spoilers, the latest game I completed (Horizon: Zero Dawn), made me truly appreciate the design behind Erlang (yes, a seemingly out-of-the-left-field connection!).
I’d really like to see what connection you made there. I don’t know if we have a spoiler tag, but maybe put it behind a link?
I don’t have strong opinions on the better way personally. I actually started my academic career building AI support for interactive storytelling, with the goal of making games that were non-scripted but still heavily story-based. So I have some sympathy for the Grand Interactive Storytelling dream, enough that I spent a few years working on it (albeit on the backend tech side, since I’m not a writer or game designer), and occasionally still go back to it. But I also have some sympathy for arguments like Bogost’s that argue this is trying to put a square peg in a round hole. I suppose I should stake out a strong opinion on this, given that it’s close to my research area, but I’m somehow just very undecided about it.
I really appreciate you adding context to the article. It sounded like something interesting, but I had a hard time making out the thesis. Clearing up the holodeck reference, in particular, helped.
I’m a through-and-through erlang devotee, but I’m not sympathetic to the argument that performance doesn’t matter. It matters a lot; there’s a bunch of hard/interesting problems that you’d prefer to fit on one box or within one programming paradigm, and erlang (and ruby, and python, and clojure, and …) frequently makes that impossible. The fact that erlang has a good multi-box story and an acceptable multi-paradigm story doesn’t really help the fact that multi-box and multi-paradigm are both incredibly holistically costly.
Agreed. This antipathy toward speed is also a very odd attitude in the realm of programming.
But I think there’s a bigger issue at play here, a tunnel vision of sorts. There are three general problem domains when it comes to performance: hard real-time, soft real-time, and offline/batch throughput.
These posts are all super focused on the problem domain that Erlang excels at, which is soft real-time, but there are plenty of problems in this world that don’t fall into the soft real-time bucket! Hard real-time applications (like video games) are on one end of the interactive spectrum. They must render at least 30 frames a second, or else it’s essentially game over for the player. Jank-tastic!
And then on the other end are offline processes that care about overall throughput but not immediate responsiveness (e.g. protein folding).
On both ends of the spectrum, speed matters. There happens to be a niche in the middle where reactivity/responsiveness matters a lot more than pure throughput, but let’s not mistake the tree for the forest!
I think videogames fall differently into hard or soft real-time categories depending on genre.
A fighting game like Soulcalibur or Street Fighter is hard real-time, because the entire game is considered worthless and customers will pay no money for it if there’s even a rumour that it might occasionally drop a frame.
A game like Skyrim or Mass Effect is definitely soft real-time. They have to give smooth framerates most of the time because in-game combat is real-time-ish. However, these kinds of games can ship with noticeable intermittent (but not frequent) framerate hitches and people will still pay for them. For example, when new areas and scripts get paged in as the player traverses the world, or when a bunch of complicated in-engine scripting kicks in all in one go for some reason.
A game like XCOM or Civilisation is pretty much best-effort. It can skip frames all over the place without breaking gameplay. It’ll feel irksome if animations are constantly choppy. Players can even opt to disable (some|most) animations in these games, and many will.
…I think FPS games fall into the hard real-time category for players who pay close attention to how well they do in competitive matches, but the soft real-time category for players who don’t?
It’s absolutely okay to run an FPS slower than 30 Hz, provided you do so at a constant slow rate (so players can model it in their heads without getting annoyed). Also, note that the game logic is different from the rendering logic–the Quake series ran the actual game logic at between 5 and 20 Hz depending on the game iirc, however fast the rendering itself was happening.
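The usual way to decouple the two is a fixed-timestep simulation loop. A rough sketch of the general pattern (not Quake’s actual code; the tick rate is an assumption):

    import time

    TICK_RATE = 20                 # assumed simulation rate, in updates per second
    DT = 1.0 / TICK_RATE

    def run(update, render):
        """Advance game logic in fixed DT steps while rendering as often as possible."""
        previous = time.perf_counter()
        accumulator = 0.0
        while True:
            now = time.perf_counter()
            accumulator += now - previous
            previous = now

            # Game logic always steps by the same DT, regardless of frame rate.
            while accumulator >= DT:
                update(DT)
                accumulator -= DT

            # Render whenever we get here; the leftover fraction can be used for interpolation.
            render(accumulator / DT)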
Isn’t quake actually the poster child for FPS dependent logic because you could jump higher with more FPS due to rounding?
Oh yes. Apparently you jump higher at 125fps than you do at 90fps in Quake 3 (and its predecessors).
For a single-player (or PvE) FPS, sure. For a multi-player (PvP) FPS, less so. Aiming with 60Hz rendering is qualitatively easier than aiming with 30Hz rendering, even if rock solid.
An interesting middle ground is RTS. They can run at slow rates, but the interface must stay responsive. I used to play CoH competitively on a box that was always between 10 and 20 frames per second, with occasional drops, and Relic really handled that well.
Well said!
It’s quite astonishing what a single box can do these days - rare indeed is the dataset that can’t fit into RAM.