Decades later, SF has finally managed to recreate the tenements of the Lower East Side & East Village. But with wifi.
More importantly, the author seems to have found a way to live in a better housing situation in SF without working as a programmer but doesn’t explain how; seems suspicious…
Why not use transactions and (compound?) unique indexes to make the whole operation atomic?
It’s a trade-off; there’s a cost to using transactions and you can avoid that cost if your operations are idempotent. As you say though, if it’s infeasible to make an op idempotent then transactions are an excellent fallback :)
Um, yeah, the author deleted the post. So what do I do now? Delete my post here? Hope that the post might come back?
I had to build a web app a few months ago and instead of using images for icons, I just used emoji. They provide a nice level of visual punch and take up just one Unicode character of space :)
It starts as a unified system to replace an antiquated one, then we keep adding more features.
Soon, we can vote on anything a la direct democracy via the blockchain.
Eventually we ICO and then sell VoteCoins.
Wait…. back up. Nvm. The first two parts sound good though.
Direct democracy existed before computers. But whether it’s a good idea to have citizens directly voting on everything is a separate question. The point of representative democracy is even stronger nowadays, in my view, as things become more complex and the public becomes both less able to comprehend the finer details and more easily swayed by mass media.
Odd, because I didn’t read the XKCD comic as making fun of security people for saying ‘voting machines won’t work, stay away’ at all. I read it as saying voting machines won’t work and that we should stay away from them. And to that I have to say: I totally agree. Voting works fine as it is: done by humans, counted by humans, entirely on paper with not a computer or network in sight.
Elections are really hard regardless of whether they’re done by computers or not, but we never got to the point of figuring out the computer side of it at all. What’s worse is that adding computers into the mix was an excuse to go back on well-tested election-related rules, such as the secret ballot. No, we can’t have voting over the internet or via mobile phones or anything like that.
We should really go back to limiting computer involvement in elections to the UI, with the paper trail as the official record of votes. Involving computers in the actual process adds such a huge leap of complexity that it excludes most people from ever being able to verify results. Everyone can verify paper ballots.
Not really sure why you’d even want computers as UI. The ‘UI’ of a piece of paper you tick a box on really is quite good.
All I can say is that I’m glad that New Zealand has never (at least to my knowledge) involved computers in actual voting. Not even UI. I hope that the complete disaster that was our recent attempt at doing a census online[0] will help dissuade anyone from trying to do elections online as well.
[0]: Somehow they managed to simplify the census, put it online, reduce the number of questions and get fewer responses than before even though it’s still mandatory. What. And in return for significantly reducing the amount of information we get from the census, now they have a mandatory incredibly invasive survey of a randomly selected few percent of the population.
The reason for fewer responses may have little to do with technology and more to do with that notorious citizenship question.
What’s worse is that adding computers into the mix was an excuse to go back on well-tested election-related rules, such as the secret ballot. No, we can’t have voting over the internet or via mobile phones or anything like that.
There are designs and protocols for that. We could even have diverse suppliers on the hardware side to mitigate the oligopoly risks. The question is, “Should we?” I think traditional, in-person methods combined with optical scanning is still the best tradeoff. The remote protocols might still be useful to reduce cost or improve accuracy on some mail-in votes, though.
I absolutely agree. Voting should be as simple for voters to understand as possible. Introducing an electronic device makes it auditable only to experts and even they might have a difficult job given the many layers at which things can go wrong (including hardware vulnerabilities).
One of the reasons people advocate electronic voting is its lower cost. Personally, I think this argument is totally wrong. Cost is a factor but not the most important one - not having elections would be cheaper.
And let’s face it, how significant is the cost of having elections really? The 2008 general election in NZ cost about $36 million. Sounds like a lot, but that’s $12 million per year: 1/1719th of the Government’s budget. Spending 0.058% of the budget to ensure we have safe and fair elections is pretty insignificant really, it’s about as much as is spent on Parliament and its services and buildings etc, and about half as much as the Police earn the Government in fines from summary infringement notices (speeding tickets etc).
100% agree. I counted votes in the last federal election of Germany and that is some serious work, but totally worth it and very hard to tamper with.
Nice article. Also interesting for reasoning about DNS-over-HTTPS.
As far as I can say from my experience in Kenya, it should also be noted that Africans have a very different way of perceiving time. And security. And… everything! :-D
How would I address this issue?
I think I would basically create a reverse proxy serving over HTTP those sites that could benefit most from caching (eg Wikipedia), probably with a custom domain such as wikipedia.cached.local so that people could not be fooled into mistaking the proxied site for the original. Rewriting URIs in the hypertext shouldn’t be an issue, but it could be harder for Ajax pages. I would probably also create a control page so that a page could be prefetched or updated. With a custom protocol and a server in Europe, one could also prefetch several pieces of content at once and send them back together, maximizing bandwidth usage.
Obviously it wouldn’t be safe, but it would be visibly unsafe, and limited to those websites that can benefit from such caches without creating serious threats.
As for service workers, I do not think they would improve the user experience at all, since they are local to the browser and the browser has a cache anyway. The problem is sharing such a cache between different machines.
A local reverse proxy is a clever idea, and a proxy that clients are explicitly set up to trust, a la corporate middleboxes (see Lanny’s comment), seems like it can work in some environments too. Sympathetic to the problem of existing solutions no longer working, sort of surprised the original blog post wasn’t more about how to improve things now.
The point of the machinery I described was to make users explicitly choose between security and access time.
You can make everything smoother (and easier to implement) with a local CA, or by installing proper fake certificates in the clients and using a transparent proxy, but then people cannot easily opt out.
Worse: they might be trusting the wrong people without any benefit, as with sensitive pages that cannot be cached (shopping carts, online banking and similar…)
That’s why using the reverse proxy should be opt-in, not the default, and trivial to opt out of: there’s no need for a proxy if you want to edit a Wikipedia page!
Sympathetic to the problem of existing solutions no longer working, sort of surprised the original blog post wasn’t more about how to improve things now.
Eric Meyer is a legend of HTML, CSS and Web accessibility. A legend, beyond any doubt.
Before HTML5 I used to read his website daily. He taught me a lot.
But he is a client-side guy.
I think his reference to service workers is an attempt to improve things now.
Don’t sacrifice fonts for being “minimal”, though. That font and text size are not great for reading if that’s your primary goal.
Not really. Browser zoom is so that people who need to zoom can do so on the page. However, the page should still be designed with your average user in mind. There’s no reason to force the average user to zoom when it’s unnecessary; that is a usability concern. My other usability concern with this suggestion is that browser text zoom makes the usability of your documents pretty poor on mobile.
Whatever happened to the end user being in control of fonts and colors, anyway? If minimal sites became common again, I’d like to see client-side styling become much more prominent (say, a font & size dropdown right next to the URL bar on every major browser, along with a background & foreground color selector).
Leaving the web designer in charge of the theming is a boon for branding, but the end user doesn’t care about supporting some company’s branding (which in some cases – like the prevalence of blue-heavy designs – does real harm), and it’s ultimately they who use the thing, so they should have full control. Yet overriding default colors and fonts breaks most websites (not just webapps – which we would expect to be fragile against that – but web SITES).
I’m absolutely not suggesting that the user shouldn’t be in control of the fonts and colors - which they completely are even in many modern web documents - but only to suggest that the defaults provided for your document should be reasonable for your average user.
The way to think about font size is based on a rough average line length in characters, because that helps prevent eye strain. My assumption here is that most users are human and using their eyes to read the content. Other cases exist, so the defaults must not be assumed to be the only case - but they should be reasonable for the average user.
they completely are even in many modern web documents
As someone who, for years, set his default font style to monospace & color to orange-on-black for usability reasons & enabled font override on as many sites as possible, this does not track with my experience at all. Even the main google search page was not usable when font colors were overridden – most buttons became invisible.
It was probably a mistake to allow web designers to control the fonts, colors, and positions of elements in the first place. Giving that control to them has only provided shallow benefits & an invitation to implement really bad ideas, while nearly every time they’re taken advantage of, usability & accessibility suffers.
We can’t possibly be talking about the same facility.
Every major browser has, buried in the settings, control over default typeface, size, and color, along with a checkbox indicating that the recommendations by the website itself should be overridden. This configuration (for the past ~10 years, on both chrome and firefox) will not fix hard-coded CSS alignment (which is tragically common) and will also not fix the use of transparent images on top of faux-buttons.
The result: if you increase font size and use a dark background color with a light foreground color, text overruns boxes and sits on top of other text while faux-buttons become totally invisible. This is a behavior that happens in all major browsers (because it’s not a browser behavior but a result of idiomatic use of CSS being fragile), and it’s a huge accessibility issue for people who have poor vision but do not use a screen reader.
It’s trivial to reproduce: go into your browser settings & invert the colors, then visit gmail. This problem is, essentially, the reason extensions like deluminate & features like default zoom exist: normal font & color controls are borderline useless because most existing CSS breaks in response to these controls.
The alignment thing is tragically common because CSS didn’t have any other way to perform alignment until recently.
There is the facility you mentioned, users are allowed to install extensions, and you can block sites from using CSS. If you use the first one, then disabling CSS entirely is probably reasonable.
I’m sure there are other ways to solve this. Either way, you are describing problems with browsers and not problems with the way the website is designed or the web itself?
Still sounds like an issue with your browser. links renders Google fine, for instance.
Links renders Google fine because it ignores all CSS color information (meaning that background & foreground colors cannot be specified through secondary methods in piecemeal ways).
And yes, I consider browsers, web standards, and web developers equally at fault for the state of the world in this respect. These idioms (justified by browser features, made possible by web standards, and used by web developers) are user-hostile.
How the web designer would like something to look is completely irrelevant. A site that doesn’t work if you turn off CSS is broken. But, people who actually modify how sites look in any way are rare enough and quiet enough that it’s possible for web developers to go through life not considering whether or not their sites still work when they’ve been re-styled. That is not a ‘browser problem’ – it’s a culture problem.
I’m trying to convince my workplace to get rid of whiteboarding interviews, does anyone know if there are resources for ideas of alternatives? Anyone have a creative non-whiteboarding interview they’d like to share?
The best that I’ve found is to just ask them to explain some tech that’s listed on their resume. You’ll really quickly be able to tell if it’s something they understand or not.
My team does basic networking related stuff and my first question for anyone that lists experience with network protocols is to ask them to explain the difference between TCP and UDP. A surprising number of people really flounder on that despite listing 5+ years of implementing network protocols.
This is what I’ve done too. With every developer I’ve ever interviewed, we kept the conversation to 30min-1hr and very conversational. A few questions about, say, Angular if it was listed on their resume, but not questions without any context. It would usually be like: “so what projects are you working on right now? Oh, interesting, how are you solving state management?” etc. Then I could relate that to a project we currently had at work so they could get a sense of what the work would be like. The rapid-fire technical questions, I’ve found, are quite off-putting to candidates (and off-putting to me when I’ve been asked them like that).
As a side note, any company that interviews me in this conversational style (a conversation like a real human being) automatically gets pushed to the top of my list.
Seconded. Soft interviewing can go a long way. “You put Ada and Assembler on your CV? Oh, you just read about Ada once and you can’t remember which architecture you wrote your assembly for?”
I often flunk questions like that on things I know. This is because a question like that comes without context. If such a problem comes up when I’m building something, I have the context and then I remember.
I don’t think any networking specialist would not know the difference between TCP and UDP, though. That sounds like a pretty clear case of someone embellishing their CV.
So if you can’t whiteboard and you can’t talk about your experience, what options are left? Crystal ball?
I like work examples, open-ended coding challenges: here’s a problem, work on it when you like, how you like, come back in a week and let’s discuss the solution. We’ve crafted the problem to match our domain of work.
In an interview I also look out for signs of hostility on the part of the interviewer, suggesting that may not be a good place for me to work.
A sample of actual work expected of the prospective employee is fair. There are pros and cons to whether it should be given ahead of time or only shown there, but I lean towards giving it out in advance of the interview and having the candidate talk it through.
Note that this can be a hard sell, as it requires humility on the part of the individual and the institution. If your organization supports an e-commerce platform, you probably don’t get to quiz people on quicksort’s worst-case algorithmic complexity.
I certainly don’t have code just sitting around I could call a sample of actual work. The software I write for myself isn’t written in the way I’d write software for someone else. I write software for myself in Haskell using twenty type system extensions or in Python using a single generator comprehension. It’s for fun. The code I’ve written for work is the intellectual and physical property of my previous employers, and I couldn’t present a sample even if I had access to it, which I don’t.
Yup, the code I write for myself is either 1) something quick and ugly just to solve a problem 2) me learning a new language or API. The latter is usually a bunch of basic exercises. Neither really show my skills in a meaningful way. Maybe I shouldn’t just throw things on GitHub for the hell of it.
Oh, I think you misinterpreted me. I want the employer to give the candidate some sample work to do ahead of time, and then talk it through in person.
As you said, unfortunately, the portfolio approach is more difficult for many people.
I write software for myself in Haskell using twenty type system extensions or in Python using a single generator comprehension. It’s for fun.
Perhaps in the future we will see people taking on side projects specifically in order to get the attention of prospective employers.
I recently went through a week of interviewing as the conclusion of the Triplebyte process, and I ended up enjoying 3 of the 4 interviews. There were going to be 5, but there was a scheduling issue on the company’s part. The one I didn’t enjoy involved white board coding. I’ll tell you about the other three.
To put all of this into perspective, I’m a junior engineer with no experience outside of internships, which I imagine puts me into the “relatively easy to interview” bucket, but maybe that’s just my perception.
The first one actually involved no coding whatsoever, which surprised me going in. Of the three technical interviews, two were systems design questions. Structured well, I enjoy these types of questions. Start with the high level description of what’s to be accomplished, come up with the initial design as if there was no load or tricky features to worry about, then add stresses to the problem. Higher volume. New features. New requirements. Dive into the parts that you understand well, talk about how you’d find the right answer for areas you don’t understand as deeply. The other question was a coding design question, centered around data structures and algorithms you’d use to implement a complex, non-distributed application.
The other two companies each had a design question as well, but each also included two coding questions. One company had a laptop prepared for me to use to code up a solution to the problem, and the other had me bring my own computer to solve the questions. In each case, the problem was solvable in an hour, including tests, but getting it to the point of being fully production ready wasn’t feasible, so there was room to stretch.
By the time I got to the fourth company and actually had to write code with a marker on a whiteboard I was shocked at how uncomfortable it felt in comparison. One of my interviews was pretty hostile, which didn’t help at all, but still, there are many, far better alternatives.
I’m a little surprised that they asked you systems design questions, since I’ve been generally advised not to do that to people with little experience. But it sounds like you enjoyed those?
There are extensive resources to help with the evangelism side of things.
The author is convincingly enthusiastic about Pop OS! My girlfriend thinks so, too; she’s making the boot USB as I write this.
The author does, however, come across as more of an interface rules zealot than an interface expert; at least, that’s what I get when he defends each of the App Menu’s problems as ‘actually the right thing’ or ‘we aren’t enforcing the rules strictly enough’, without reference to users’ lived experiences. I, too, like the App Menu in theory! It could work! But you have to make it work, not say it works.
Anyway, we’re going to try out Pop OS, this should be fun :-D
To make it more complicated, the author is assuming a byte is 8 bits. I do some work on a TI C2000 where a “byte” (or really the addressable unit) is 16 bits.
Human brains are really bad at reasoning about the correctness of concurrent systems. Even when we use files in simple ways we are usually skirting several race conditions that just happen not to pop up while running the code on a single laptop for a few days. After starting to write my networked code in ways that let me do randomized partition testing in-process, I’ve been so humbled by the dramatic number of bugs that pop out when you just start to impose possible-yet-unanticipated orders of execution on concurrent systems.
I’ve started to expand the approach to file interfaces that record writes since the last fsync and can be “crashed” during tests at different points, and to concurrent algorithms that can be deterministically replayed in any interleaving of cross-thread communication points. This has been fairly straightforward on systems that run on top of libpthread, but I’ve been struggling to apply these techniques to existing Go codebases, where scheduling is not controllable from Go programs themselves. External instrumentation of the process at runtime via ptrace gets gnarly really quickly.
I wish that, as a language that encourages the use of concurrency more than most others, Go also embraced understanding its concurrent behavior more. Does anyone know of better options than using rr’s chaos mode on a Go system after forcing all goroutines to LockOSThread?
Do you already know about TSAN? https://golang.org/doc/articles/race_detector.html
TSAN and the Go race detector are similar (I think Dmitry Vyukov was involved in creating TSAN, in addition to the Go race detector) and they work by instrumenting points in the code in much the way I mentioned, but I’m interested in going further: actually causing different scheduled interleavings (and ideally having deterministic replay) to gain more confidence in the implementation. Just running TSAN or go -race will only catch synchronization issues that your execution happens to stumble on while running an instrumented binary. (TSAN has false positives because it can’t reason about bare or implicit memory fences; go -race doesn’t have this issue because you can’t express those memory semantics in native Go.) Sometimes the mere instrumentation of the binary changes timing enough that race conditions pop out less often than they would in production. I want to force diverse executions (ideally fully exhausting the interleaving space for small bounded execution combinations), not just detect issues in the executions that happen to occur.
Jepsen is thousands to millions of times slower than in-process network partition simulation, takes a month to stand up even if you already have Clojure skills, and it does not produce deterministically replayable regression tests for the issues it finds. It’s fairly expensive to run on every commit, and you can’t afford to block devs on Jepsen runs in most cases. Here’s an example of what I’m talking about for network testing, applied to a CASPaxos implementation: in 10 seconds it will find way more bugs than Jepsen usually will in 10 hours, so you can actually have devs run the test locally before opening a pull request.
What you’re describing reminds me of the FoundationDB simulation testing talk.
They instrumented their C++ codebase to remove all sources of non-determinism when running under test, so they could test their distributed database and replay failures, stressing it in exactly the way you’ve described.
Let me try to find a link.
Edit: I believe this is it: video
Yeah! This talk was really inspiring to me, and I’ve been pushing to test more things in this way. There have been some advances since that talk that address some of the complexity downsides he mentions, particularly lineage-driven fault injection. LDFI is the perfect way to make a vast fault space tractably testable, by looking at what goes right and then introducing failures from there.
I was expecting this to knock the stock price down a bit but strangely that did not happen. Was it somehow already priced in?
One thing that the Terminal app in Mac OS X has always done extremely well is to automatically rewrap text when resized - even Linux terminal window apps, numerous though they may be, don’t seem to handle that well. Windows wouldn’t even let you resize the damn window - hopefully that will be fixed now!
The requirement that even racially underrepresented people only count if they’re American does strike even me as weird.
This made me smile. At some point in the recent past I read about someone who had the job of taking some ancient executables written for MS-DOS and writing a web api wrapper for them that would accept an input, run the exe in dos emulation and return the output. This was because the programs themselves still worked but nobody had the original source and it was all for converting ancient documents in obscure formats to something that could be read by a modern computer (or another DOS based file convertor.)
I worked a job where I had to get a 1975-era Fortran contour-map library working inside a 64-bit .NET GUI app. My employer had bought the company which made the library in the 90s, and sat on it until they decided it was time to add it to their newest product. Luckily for me, it had a C interface API for the PDP11, so I hacked up the interface and just pretended to be a really futuristic plotter on a really fast PDP11.
I keep thinking about that article from back in April…
I’ve sponsored quite a few 1-2 person teams, and each team I’ve regretted it. To repeat: I have regretted it every single time.
Interesting. The best teams I’ve ever been on—by any and all metrics: innovation, quality, delivery speed, cohesion—were small, 2 or maximum 3 people. Conversely, the worst teams have uniformly been larger, 6 to 8 people. My experience was that communication overhead and the cost of building even rough consensus were always the bottlenecks, and worth optimizing for. But maybe I work on different problem domains, or in different types of organizations, than the author of this article.
Yes, typically 6–18mo, but shifting responsibilities slightly in that time. The most common pattern was that the small team would build and productionize a project to a steady state, and then fold it into the operational responsibilities of a larger group. The engineers would rotate through that larger group and form new, small teams when business needs arose.
“if the GNI is much higher than the GDP this can mean that the country receives large amounts of foreign aid”
The lone country in their accompanying graph where this inequality holds is Norway. I find it hard to believe that Norway is the recipient of any foreign aid…
In Norway’s case, the reason is probably their gigantic sovereign wealth fund. Income from foreign investments held by nationals is included in GNI (as it’s income flowing into the country) but not included in GDP (because it’s not economic activity that takes place in the country itself).