[Comment removed by author]
Lovely colour scheme. What is it?
I like those top rectangles! What’s that?
As someone who also prefers static typing but went with Flow I’ve been disappointed with its lack of power. It constantly gets in the way yet offers few benefits :(
When making a new path, even in tests, use filepath.Join, not path.Join, and definitely not string concatenation with slashes. filepath.Join will use the right separator for the OS.
I see this advice a lot, for every language, but never get it. Windows supports forward slashes just fine. Use them everywhere. All of the bugs I’ve had resulted from somebody inconsistently using both / and \ in different places, mostly by accident, but also nearly inevitably. Once I changed everything to / everywhere, all the problems went away.
I suppose some things might be outside your control (e.g. a base path specified by the user in a configuration file). If you’re in the habit of always using filepath.Join then you’ll never accidentally mix slashes.
The main reason I would use the “proper” backslashes in a Windows application is if those paths are ever displayed or interacted with by end users. Windows users have been trained to expect paths to have backslashes and the forward slashes might be either confusing or, at least, appear slightly off. (It’s a tiny, trivial thing but it feels like a lack of polish to deviate from the platform convention.)
I’m very disappointed that many of the links are broken.
Well, the post was written in 2009, and the link to the article is actually to web.archive.org, meaning that there’s a very good chance that the original article has been taken down/removed/lost.
Most of the time I could guess how a bug was present in code, but I’m a bit stumped on this one.
I don’t understand how something like this is possible on a code level. Surely you do at least a couple seconds of pre-validation of the payment? Maybe you don’t do any payment logic at first. But how do you end up with code that could properly check credit card payments but not properly check for a payment method that doesn’t exist? There is surely some obtuse code path somewhere that caused this.
I bet this was some silly JS-related error (or some leftover mock code?), but it’s still pretty shocking.
errors = []
if payment_type_id == 'card'
  # validate and append any errors
end
if payment_type_id == 'cash'
  # validate and append any errors
end
# an unrecognized payment_type_id falls through both checks, leaving errors empty
if errors.length == 0
  # book a cab
end
Privacy for the high-tech developed world is dead. What we have now is a zombie: any attention will deconstruct the spell and it’ll fall apart. The future is unevenly distributed, and you can slow it down by not acquiring an Alexa and smartdevices through the house; that will, eventually, not be an available option. Anyway, the implication is that if you would keep your thoughts secret, keep them in your head, unspoken, unwritten.
I worry about the inevitable “you can’t get car insurance unless you attach this gps dashcam to your car”. I’m sure there will be similar problems with “let us track your TV viewing/temperature/electricity/internet usage for a percentage off”.
In my country insurance providers are already offering discounts if you let them install their telemetry device :(
And then a decade later, “The biggest cause of accidents is people texting and driving. Install this rootkit on your phone if you want insurance.”
This needs solving with politics not technology.
Please go and fix this!
If you do not participate in any political movement or party, you are enabling these sociopaths. No amount of technology can fix bad policies. If this continues, more people will just plainly refuse to travel. Eventually, the very same sociopaths will prohibit encrypted cross-border digital communication. Then what?
If you do not participate in any political movement or party, you are enabling these sociopaths. No amount of technology can fix bad policies.
I don’t intend to debate my vaguely anarchist/reactionary political philosophy on lobsters, but I just wanted to point out that it is reasonable to disagree with this. It seems to me that technology (in the software/hardware sense, or institutional/social/etc.) is more or less the only thing that can fix and prevent bad policies in the long term. I am extremely skeptical that most Western democratic processes can do the same; indeed, one can reasonably blame many examples of bad policies or poor governance on democratic process under universal suffrage.
I am very happy to remain passivist in most politics for exactly that reason - I believe that staying away from the fray and working diligently on technology is a far more realistic and peaceful method for effecting lasting positive change. If it’s “you’re either with us or against us”, then the only way to win is not to play.
I don’t intend to debate my vaguely anarchist/reactionary political philosophy on lobsters, but I just wanted to point out that it is reasonable to disagree with this. It seems to me that technology (in the software/hardware sense, or institutional/social/etc.) is more or less the only thing that can fix and prevent bad policies in the long term.
Counterpoint: No amount of technology will save you from rubber-hose cryptanalysis.
Do you believe that the typical lobsters reader’s contribution to a) politics or b) technology is more likely to reduce the incidence of rubber-hose cryptanalysis? Why?
The point is that it doesn’t matter how you’ve hidden your data if you’re required by law to give it up. The typical lobsters user’s contribution to politics may be small, but it is the only way forward.
We need to create a society that supports people keeping their data encrypted. Direct involvement in politics is one possible way to make that more likely, but it’s at least conceivable that e.g. creating more usable encryption tools so that more people use encryption might be more effective.
But without political support for encryption the tech can be rendered useless.
Edit: I may have misunderstood your point. Do you mean that creating more usable encryption could be an approach to bringing it to the general public’s attention and from there it can gain mindshare?
The point is that it doesn’t matter how you’ve hidden your data if you’re required by law to give it up.
How do you figure? There are several obvious technological countermeasures to rubber hose cryptanalysis, including plausibly deniable encryption with different passwords unlocking “fake” or “real” volumes.
If this ever gets to be a common practice, authorities are going to start seeing through it. In particular, if the data you’re protecting is your social media presence, it’s completely implausible to try to claim that you don’t have one. And it does seem that that’s a lot of what these searches are aimed at, right now.
Do you believe that the typical lobsters reader’s contribution to a) politics or b) technology is more likely to reduce the incidence of rubber-hose cryptanalysis? Why?
Politics (ok, real answer: that’s a false dichotomy. Do both. But if you insist that it’s one or the other, I think politics is the more important). At the end of the day, the only thing that stops the government from beating you to death with a rubber hose is making sure the government doesn’t want to beat you to death with a rubber hose.
As long as future governments share the attitudes of the last several (in favor of torture, in favor of surveillance, in favor of compromising civil liberties, convinced that the ends justify the means), I think that even succeeding in making strong encryption ubiquitous would simply encourage them to double down on using detention, force, and intimidation to achieve what they can no longer achieve through passive surveillance. I do not believe that there is a point whereat these people will look at the state of technology and change their behaviors and desires. To paraphrase Swift, you’ll never be able to present them with a set of facts about technology that will cause them to reason their way out of a set of positions they didn’t use reason to reach in the first place.
Couple this with the fact that we’re staring down the barrel of a jobless future which is going to make technologists very convenient scapegoats for an unemployed and desperate populace, and I think you have a recipe for Bad Things.
We’ve seen a broadly ignorant coalition of people with a shaky grasp of reality and a stock of poorly spelled signs successfully take over the Republican party and now the White House inside of a decade. Mass political engagement from the traditionally disengaged tech sector has a real chance at changing the people making decisions.
It’s slow and tedious and not as nice as sitting at home and typing at your computer, but getting involved in local politics is an important and necessary act if we want things to change, in my opinion.
I do not believe that there is a point whereat these people will look at the state of technology and change their behaviors and desires. To paraphrase Swift, you’ll never be able to present them with a set of facts about technology that will cause them to reason their way out of a set of positions they didn’t use reason to reach in the first place.
Oh, I wasn’t expecting it to be a matter of reason. Rather a matter of getting people to love their crypto.
I get your idea there – but I’m skeptical that ubiquity will achieve it. Anecdata: My mom has an iPhone. Its contents are pretty strongly encrypted by default. Her iMessages to me are encrypted. Etc. Apple, for all their faults, have been trying to make that stuff ubiquitous for people like her.
Consequently, because it’s so ubiquitous and easy to use and on by default, it’s completely invisible to her. She doesn’t conceive of herself as someone who even USES encryption, and certainly not as someone who is emotionally invested in its legality. Presented with these facts, her response is along the lines of “I have nothing to hide, so I have nothing to fear”.
Getting her, and the broad populace like her, to emotionally invest in the legality of encryption is an education problem, which is a subset of political problems rather than technical, in my view.
Not just that, but most people in the world are not in the United States, and are not United States citizens. They have very little influence on United States politics (read none), but they can have an influence on the technology.
They can choose not to go there. It would take a pretty deep dip in tourism and very low or negative migration, or hell freezing over, before they revert most of these policies though. The main outcome they will see is people with brand new “empty” phones. I don’t know if the average TSA employee really cares though: “Hey, no bomb schematics or White House plans, get out of my face!”
While we’re on the subject, does anyone have a great recommendation for a book on SQL? I don’t have to write a ton of super complex queries in my job, but once every month or two, some task calls for a good bit of SQL writing, and I’d like to get a better foundation that just “what I’ve picked up over the years plus Google”.
Not a book recommendation but a couple pieces of advice which helped me shift out of the procedural mindset:
Think about the problem in terms of sets and transformations rather than individual instances.
When formulating a query start with the SELECT and write a list of what you want to see in the result set. This is the goal you’re working towards. Add in appropriate joins to build a set of the data you require. This is your starting point. Figure out the intermediate set transformations required to get from start to finish. Coincidentally this made the ordering of SELECT, FROM, and WHERE click. I was previously thinking in terms of FROM, WHERE, and SELECT.
Hopefully that’s not too elementary. Coming from a similar background I’d never really had that spelled out to me.
I came across the advice in Joe Celko’s SQL for Smarties, which I think is probably too basic for your needs. I haven’t read anything else by him so can’t vouch but “Joe Celko’s Thinking in Sets: Auxiliary, Temporal, and Virtual Tables in SQL” might be helpful? I’ve also heard good things about “The Essence of SQL” but it’s out of print so good luck finding a copy!
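As a quick illustration of that SELECT-first approach, the finished query might look something like this (tables and columns invented for the example):

```sql
-- Goal first: the columns we want in the result set.
SELECT c.name, SUM(o.total) AS lifetime_spend
-- Then the joins that assemble the set of data we need.
FROM customers c
JOIN orders o ON o.customer_id = c.id
-- Then the transformations that get us from start to finish.
WHERE o.status = 'paid'
GROUP BY c.name;
```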
I find it amazing how differently you approach this than I do, yet I would assume, we would still end up writing very similar queries.
How do you approach it?
I tend to think of the database like a tree or a perhaps a map (as in google not reduce). I look at the fields I know I need to return, and then mentally map them across the database. I start my query by working from the trunk, or root node if you don’t like wood, of my tree and then map out the query from there. The root node isn’t always the same table; so that can vary based upon the required fields. After selecting all the fields I need from the root, I proceed to join the next table required by my SELECT. That isn’t always direct, and many times, there are tables in between. The process repeats till I have all the tables I need.
This line of thinking has lent itself well to pretty much every dataset I’ve encountered. Words like “set”, “transformation”, and “instance” never even crossed my mind.
Now obviously words like “set” and “instance” have a great deal of meaning in database land, but as far as writing queries go, those aren’t words I tend to think of.
I use CTEs a lot in Postgres, so I find that I work towards the final result in a different way, more like I would in code - by treating the query as a series of data transformations. So, for example, if I want to produce a result for a set of users on a range of days, I write a CTE to get the users, then another one to generate the range of dates, then another one to combine them to get all possible “user-days”, then another to retrieve the data for each and process it, and so on.
This results in very readable queries - way better than having subqueries all over the place. There are performance caveats to using CTEs so sometimes I have to structure the query differently, but it works well for a lot of them.
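A sketch of that CTE pipeline in Postgres-flavored SQL (schema and names are invented: users(id, deleted_at) and logins(user_id, day, count)):

```sql
WITH active_users AS (
    -- step 1: the set of users we care about
    SELECT id FROM users WHERE deleted_at IS NULL
), days AS (
    -- step 2: the range of dates
    SELECT generate_series(DATE '2017-01-01', DATE '2017-01-07',
                           '1 day')::date AS day
), user_days AS (
    -- step 3: all possible "user-days"
    SELECT u.id AS user_id, d.day
    FROM active_users u CROSS JOIN days d
)
-- step 4: retrieve and process the data for each user-day
SELECT ud.user_id, ud.day, COALESCE(l.count, 0) AS logins
FROM user_days ud
LEFT JOIN logins l ON l.user_id = ud.user_id AND l.day = ud.day
ORDER BY ud.user_id, ud.day;
```

Each CTE is one transformation, so the query reads top-to-bottom like the code it replaces.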
The docs for Postgres are amazing, and a good resource for this. It will call out which parts of SQL it’s explaining are Postgres-specific/not in the SQL standard.
I’m a very from first principles thinker and so I hope this recommendation isn’t off the mark for your needs. I really liked “Relational Theory for Computer Professionals” by C.J. Date. The book is roughly broken up into parts, the first is an introduction to relational theory and what that’s all about. This is the best intro to relational algebra that I’ve ever seen, a close second is the Stanford Database online class (you can just study the SQL part of the course). The second part of the book takes what you now know about relational algebra and shows how it fits with SQL.
This helped me peel away some of the syntactic confusion around SQL syntax and made the whole concept make more sense.
My 2 favorite resources over the years have been “SQL for web nerds” (http://philip.greenspun.com/sql/) and the official postgres docs.
You can also check this book: https://twitter.com/BonesMoses/status/832983048266330113. His blog is also full of very good SQL.
In my experience, the developers who make it past the “intermediate” level take the time to learn “deeper” things instead of the latest trending library, API, or “practice”. I don’t have a good definition of “deeper” but it can include algorithms, architecture/systems, distribution, focusing on a niche like security or performance, etc.
I see a lot of developers with several years of experience tasked with similar things as me (&lt;2 years). The ones I know who aren’t, and who have improved their careers, all have the above in common.
In a way, tech news sites are actively harmful to those pursuing mastery, because they continually push beginner-intermediate stuff at the expense of deeper, harder material. They emphasize quantity over quality, and provide a facade of ‘technical relevance,’ which is self-referentially defined as, “what everyone else is talking about,” rather than more durable skills.
“But, HN sometimes links deeply technical stuff!” you contend. Of course it does. It’s also buried amongst the sea of largely-skippable-marketing-disguised-as-technical content. Is your time worth digging through all that?
Ha, I am in complete agreement with you. I quit HN, Twitter, and reddit last month for this very reason. Lobsters (for now) is more than enough :-)
Not wasting my time with poorly written Medium articles anymore feels nice.
Medium is one of the worst offenders. Lots of articles relevant to tech/programming are basically the person patting themselves on the back: “How I made X into a success with over 1,000 GitHub stars.”
Such, erm, self-aggrandizement (if you get my gist) is tedious to read for everyone involved. Instead of writing material to help the reader, they’re writing the article to help themselves. Worse, people lap up this brand PR as if it’s genuinely helpful.
The great thing about Medium is that it’s an effective signal that I shouldn’t waste my time.
If all the content was hosted individually I’d lose that filter.
If we keep downvoting those who try to keep this site on-topic, we will soon have the same problem.
I don’t really agree with that. If a person jumps onto every technology and tool bandwagon they see mentioned, then yes, it will be a problem. But who does that?
I read HN and Lobsters a few times every day, and 90% of the articles (on both sites, FWIW) I just ignore because I can tell from the headline they’re about some technology I have no interest in or some event that I don’t care about.
On the contrary, I’d say reading the sites makes me a better developer because while I might not dump everything and move to Swift or Go or Haskell tomorrow, seeing what’s going on with them can make me more aware of the general software development landscape.
Mindshare is precious. It is a choice to regard the things that people discuss as ‘relevant.’ I don’t; I’m advocating that position.
HN erases scores on comments, which is something they introduced a few years ago so they could “rank ban” contributors deemed controversial without it being obvious, and so moderators could tweak rankings of content favorable/disfavorable to YC startups to the same effect. This actually encourages people to game the system more, insofar as prestige comes from (a) having top comment, and (b) having submissions that get a lot of points. For reasons that you described, this pushes people toward non-technical news and bike-shed topics that drive out the harder, denser material.
The problem with Hacker News isn’t just community decay, one should note. When you consider the salaries of people like Paul Graham, who used to spend several hours per day on it, it’s probably the most expensively moderated web forum in history. HN moderation takes an active role in pursuing the marketing and business interests of Y Combinator and its companies. It’s certainly unethical, and I’m disgusted that such a cool concept (I’m related to the guy who came up with it) is used as the name of that scummy business operation.
I don’t see this danger for Lobsters, because it only appeals to technical people. Lobsters fame has no business value and there’s no economic incentive to game it. I don’t think the points really matter. I’d rather get +4 for a post on a technical topic than +25 for a throwaway joke.
I agree with your larger point, too. Tech news aggregators feed the monkey, and it’s hard to develop expertise without focus. The difference is that Lobsters has less intellectual junk food, and the supply dries up, whereas Hacker News and Reddit serve up an endless buffet of distractions.
One way to go about this is to put emphasis on durable skills, which can be built upon before a trend makes them obsolete. Lower-level stuff tends to change far more slowly, and can often be squeezed for more value per unit of time spent learning.
IMO, software design skills are universally applicable and don’t change very quickly.
You may realize them differently in different languages and paradigms, but the basis of them forms a foundational aesthetic that you can trust to guide you to writing your best code as much as possible, further improving your practice.
I agree, but software design skills are also hard to demonstrate. So, while those skills are durable, the game of getting credit for having them is not always straightforward.
Front-end designers have it easier insofar as average people actually know what a good UI looks like. They might not be able to create them (someone is creating bad UIs, after all) but they can tell beautiful work from ugly, sloppy, crappy work. They can build portfolios, rather than just hoping to find someone of similar intelligence and tastes who can vouch for them.
With software, you have subjectivity but it’s also nearly impossible for a non-programmer to tell good design from bad. Ultimately, shots are often called by non-technical managers who aren’t qualified to be making them, and this is where the constantly shifting fads come in.
Ultimately, shots are often called by non-technical managers who aren’t qualified to be making them
There are firms that value engineering expertise, it is just up to the developer to find them.
I’m off work for the week so I’ve started working through Dasgupta’s Algorithms. My math is pretty terrible so progress has been much slower than I expected :(
I flagged this as off-topic because:
It’s sad to compare the AMP and non-AMP versions of an article on the same domain. They’ve demonstrated that they’re capable of building a fast loading page yet don’t offer that to people visiting their main site.
This. So much! One up-vote is not enough.
One of the comments says:
I’ve heard [it] said repeatedly that if you try and put the developers on call you’ll have a hiring problem, the ‘best’ people won’t want to do that (and have a choice of employers)
Do people here count it as a negative against a potential employer? Seems to me to be a logical part of my responsibilities (and accounted for when determining compensation).
I’ve always found it extremely difficult to convey how stressful being on call can be, regardless of whether one gets paged or not. Compensation really needs to take that into consideration when the team size on rotation is <= 5.
I’m less willing now I’ve got a toddler waking me regularly.
I prefer an on-call rotation to the alternative which, I believe, is that your more senior devs are always on-call. I view the rotation as a gift. I have time that I do not need to worry about having my phone with me or what is going on with the platform.
As an aside, I think it also ensures that issues get fixed. If you are paged on the same issue repeatedly then there is an opportunity for mitigation or automation.
Absolutely. If it’s presented as “we don’t have ops, everyone just does rotation” it falls deep into the immature-management category. Not making a plan for operating the product after you ship it is just an insane amount of immaturity.
If there is a core ops team, and a small number of devs rotate in to help out over time, it’s not so bad if it’s a brand new product and you’re trying to figure out how and where it fails. I say new product because many features and deep product issues haven’t been worked out yet, and thus need development to investigate quickly.
If there is a core ops team, and a lot of devs rotating in and out, but it’s a product that’s been shipping for over a year, then we’re back to poor management. If your engineering team (incl. mgmt) can’t create a concrete plan around shipping a product and operating it with your customers, then you’re probably not in a good place.
Huge caveat: this does not mean the company won’t be successful. Amazon is a great example of somewhere with horrifying management, incredible turnover, and shit quality of life, but they make money hand over fist. Deciding if it’s a place you want to work is another matter.
I used to work somewhere where the on-call policy was: rotate within team for week-long chunks, and you have a monetary bonus when you are on call. In my case, I ended up on call every other week, so it upped my salary by an okay bit.
How much was the bonus?
At Google we do the same thing and the bonus is 33% of your salary.
I count it as a negative against a potential position. I have turned down a job that offered substantially higher salary than the one I took, and where the interviewer was possibly the best programmer I’ve met, because it involved being on-call. Uninterrupted sleep is something I value very highly, and at the time I didn’t quite trust even my own code to never have problems.
absolutely. if I have a choice of jobs, or teams/roles within a job, I’ll always take the one that doesn’t involve being on call
Forcing constraint validation into the database; I’ve written more about this here: https://kev.inburke.com/kevin/faster-correct-database-queries/
Using secretbox for two-way encryption; happy to say I’ve merged documentation examples for both a Node library and the Go standard library
Using hub fork / hub pull-request to open pull requests (instead of using the browser or my gitopen tool)
Preparing/running database statements when you start your app instead of always parsing them when queries run
Spending more time with programming languages that have good standard libraries and make it easy to benchmark (not Node)
Merging PRs as a single commit; I’ve probably merged ~300-400 pull requests this year, and maybe 3 had more than one commit in them
Always merging branches on HEAD, so commits are linear.
Purposely duplicating blocks of code until it gets painful to do so / the abstraction is more obvious
I’d probably use GRPC for a new project, or at least protobufs for sending messages between servers.
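The squash-merge workflow above can be sketched with plain git (everything runs in a throwaway repo; file, branch, and commit names are invented for the demo):

```shell
set -e
# Throwaway repo so the demo is self-contained.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo one > file.txt
git add file.txt
git commit -qm "initial commit"

# A feature branch with messy work-in-progress commits.
git checkout -qb feature
echo two >> file.txt && git commit -qam "wip: first attempt"
echo three >> file.txt && git commit -qam "wip: cleanup"

git checkout -q -                 # back to the default branch
git merge --squash -q feature     # stage the combined diff without committing
git commit -qm "Add feature (one squashed commit)"

git log --oneline                 # two commits total: initial + squashed feature
```

History on the default branch stays linear: one commit per merged branch, regardless of how many commits the branch accumulated.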
Duplicating code until I’m more confident of the appropriate abstraction is something I’ve been doing more lately and I’ve found it really satisfying and interesting every time the eventual refactoring takes a different direction from my initial expectations.
I’ve been tempted to try single-commit PRs for a while but have always refrained for fear of losing useful information. On reflection, I can’t think of a single time this year when the individual commits have been useful, so I could save a small amount of time and effort by squashing rather than organising commits before submitting a PR.
Speaking of single commits (kinda related), my former VP of engineering used to commit at the end of the day, and if the work wasn’t done or ready to be committed, he would just git reset --hard. He would redo all the work the next day, do it way faster, and often find better ways of doing it.
Feels like half the story is missing here. What’s in this for Intel and AMD?
Fewer platforms to support, I suppose?
Hmm yeah could be. Plus maybe they want to drive sales of their new product?
not sure “guys, great news it’s not us, it’s our adopters' misunderstanding” is a particularly powerful analysis.
That’s a pretty pessimistic reading of the blog post. Discovering users’ expectations and shaping the perception of the language correctly is a huge challenge.
It’s much less of a “guys, great news it’s not us, it’s our adopters' misunderstanding”, and more of a “oh crap, people are trying out rust with a mindset that might not be productive or encouraging.”
The author admits that correcting the expectation is a much harder problem than acknowledging that there is a problem, but it’s a good first step.
“oh crap, people are trying out rust with a mindset that might not be productive or encouraging” is merely and exactly rhetorical dressing on the fault lying with the users. Maybe I made that too barefaced for your liking, but the OP rubbed me the wrong way.
The author doesn’t assign fault to anyone but does say “how can we fix this?”
The question is how to correct the expectation. I really have no idea, but maybe explicitly talking about the problem will help.
I don’t think the author was saying “great news, it’s not us; we don’t have to improve ourselves”. I think he was saying “I’ve identified the problem, and it’s ‘how can we improve our communication to C++ users?’” It is totally possible to answer that question, rather than giving up and blaming the users.
One obvious way to mitigate this problem is to change parts of the Rust documentation or home page. Another, unlikely, way would be to change the syntax of Rust to something less C-like so C++ developers don’t think it looks easy to learn. It’s interesting that this perception is a problem, given that I think the syntax was made C-like to drive adoption in the first place.
There’s been talk in the Rust community before about where people are coming to Rust from. Each group “gets” different parts of Rust right away, and each group has their own confusions about and desires for Rust. (Warning: lots of generalization coming.)
People coming from statically-typed C-family languages like C, C++, and Java are usually enticed by Rust’s promise of performance and scalability. People coming from each of these languages have their own problems, but it’s this group that has the “why can’t I make a linked list?” response to Rust. Note that they are (usually) quick to grok ownership and borrowing.
People coming from functional languages like Lisp, OCaml, and Haskell are often enticed by Rust’s type system (which is very similar to Haskell’s), its pattern matching, and its bevy of functional-style interfaces. This is the group that asks when higher-kinded types are coming to the language, and generally pushes for expansion of the Rust type system so it can encode more (and therefore check more) about Rust programs. They generally have the hardest time with ownership and borrowing, as the languages they’re coming from usually just box up everything.
Anyway, sometimes I think it would be worthwhile to create three separate introductory “tracks” for Rust, one for each group, to both highlight the things that the group is most likely to appreciate, and to spend the most time on the parts of Rust most likely to give people in that group trouble. In the end each track would cover all the same material, but the organization and amount of time spent on different parts would vary between the groups. Obviously, this would be a serious undertaking.
It’s not exactly the same, but this reminds me of the situation with Scala where you have three very distinct camps trying to shape the language in their own directions. You have the hardcore FP crowd that really just wants to write what’s been termed “Haskell fanfic” on the JVM; you have the people who just want to write Java in a way that’s not unbearably tedious, then you have the Scala Faithful who follow TypeSafe closely and use Akka for everything.
In the case of Scala it’s led to some pretty massive cultural fragmentation (from what I’ve seen as an outsider to the community). I’m not sure if the same is true of Rust because it does seem to have a more coherent vision; maybe the differences fade away as the learning progresses.
Yeah, in my experience with Rust everyone does end up in roughly the same place. The groups still have their own favorite ideas for what to do next in the language, but I also think everyone is reasonable enough and respectful enough toward each other to recognize which of the many possible ideas for progress is most important (right now that’s MIR, which will enable a bunch of other changes that benefit everyone).
The more interesting question is to ask what causes that misunderstanding, but this is a starting point. Just yesterday I read a presentation from somebody who said “C is really hard because I don’t understand pointers, but Rust was easy.” Now, I do understand pointers, so will I find Rust easy?
Now, I do understand pointers, so will I find Rust easy?
Personally, I found Rust a lot easier once I started thinking of it as a more modern C, instead of a more modern C++ (which, for whatever odd reason, is how Rust seems to get pitched). There’s a lot more you have to ‘unlearn’ coming from C++.
Why does the drop from 14 to 12 times a day result in the probability of death increasing sevenfold?
I’m a little confused by how a link could trigger this. Does Symantec intercept all read() syscalls or something? So you could send a really big link and trigger a buffer overflow?
Because Symantec uses a filter driver to intercept all system I/O.
I read that, but I couldn’t believe operating systems would let an application do that? Is that true of all OSes? What are the legitimate use cases?
For filter drivers specifically, Wikipedia mentions some use cases, although it seems to me that those use cases don’t actually need to be in a driver.
To nitpick your question just a bit, it’s a driver, not the kind of application one might install on their smartphone on a whim.
Because drivers require low-level access to hardware functions in order to operate, drivers typically operate in a highly privileged environment and can cause system operational issues if something goes wrong [wiki/Device_driver]
Is letting users install drivers that run in ring0 (i.e. like-the-kernel) a bad idea? Seems that way. Is it possible to provide “low-level access to hardware” without running code in ring0 unhindered? Yes, but there are trade-offs with performance and complexity.
Remember that these are the same kinds of programs as those which talk to a video or a network card. Surely we want to allow third parties to make hardware with an interface “on an equal playing field” with the kernel. Surely we don’t want to add layers of abstraction or security constraints which slow things down there. Surely it’s easier to make sure everyone who writes code there is really really careful, given the risks.
What I understand least about all this is the lack of a reaction here.
Why should there be one? Is this news realistically expected to impact their sales forecasts? Honestly, even if investors did care about this sort of stuff, Tavis has been hammering away at AV products for a while now and has found horrendous vulnerabilities in the majority of the big players. If there’s a secure alternative AV product out there, I’ve yet to hear about it.
Is this news realistically expected to impact their sales forecasts?
Presumably people buy Symantec products to improve rather than degrade workstation security; that’s their whole value proposition, and this would seem to cast significant doubt on it. You might argue consumers are unlikely to even hear of this bug, but Symantec has a lot of enterprise customers (I’d expect a majority of their net revenue at this point) who should have someone who can both make purchasing decisions and whose confidence in the company is shaken by this finding.
Oh, I’d definitely be disturbed and have lost some/a lot of confidence if I was a Symantec customer. My point is I don’t think they’re substantively worse than the competition. So there’s still going to be the same demand for security software, and I can’t see this news causing people to jump ship. Honestly, the PR impact is going to be far more dictated by their response to the flaw and speed of patching than the flaw itself.
All AV software has remote code execution vulns in ring0 on their main platform? Citation required.
Writing secure software isn’t impossible or even hard; it’s about putting in the effort. Google is putting in the resources to reverse engineer and find errors in Symantec’s shoddy work, ostensibly because they care about security. If Symantec cared enough, they’d have hired someone like Tavis to do code reviews.
The most straightforward conclusion seems to be that Symantec isn’t in the security business, but in the bureaucratic ass covering business. Their flagship products do not add to an organization’s security and are obsoleted by modern methods of sharing documents and distributing applications. There might still be useful work to be done in scanning websites for exploits before a browser renders them, but with evergreen browsers now the norm, the amount of time between an attack being used and fixed is a lot shorter than it used to be.
What am I missing?
Because society cares very little about computer security unfortunately.
To refine this thought, I’d add that not programming (right now) is also an option. It’s a skill, like any other, that gets better with practice, but that doesn’t mean you need to practice if you don’t have a use for it. You don’t need to know programming if you don’t have anything to program.
Just relax, content with the knowledge that if you had a problem, you’d be able to solve it. And don’t worry: you’ll have plenty of problems soon enough.
Thinking back to when I was a new developer and experienced this predicament, the desire to program wasn’t to solve a problem; it was because I had a taste of this fun, intellectually stimulating activity and I wanted to do more of it, in addition to improving my skills. Programming is an enjoyable activity in itself, so I’d argue that if you want to program more, then go ahead, even if it’s not solving a problem :)