IMO there are two main reasons behind bugs:
We will probably improve a bit on the first issue in a couple of centuries, but the second is inherent to the matter.
The motor vehicle industry was at a very early stage and built by humans too, but that stage was very short, not decades of mass use. While you’re right, I think there’s a third reason that supports the first two: the scale of things in IT is so much larger than anything else! This makes the period of maturing much longer.
Reminds me of organic beings. Small insects mature and live quickly, large mammals do so longer, their systems are more complex.
Callously equating people actually dying to software bugs (which, granted, are occasionally that severe):
Per-capita motor vehicle fatalities in the US peaked in 1937 at 29 per 100,000, 37 years after the first statistics in Wikipedia, and about 52 years after the invention of the automobile. They bounce around a bit but don’t change a whole lot for the next 30 years: 26 per 100,000 in 1969. Then we see a decline, with a real drop-off beginning in the 1980s and continuing to today.
If we’re looking to the motor vehicle industry for an analogy we might expect quite a long period of buggy software ahead of us.
Wait, aren’t we mixing terms here? How many people are dead because of actual motor vehicle failure? As in the brakes stopped working or the engine exploded?
I’m pretty sure the majority of those deaths are due to human error, on the part of both drivers and pedestrians. As soon as we implemented things like mandatory seatbelts and airbags, and developed better roads, signals, and laws, the number went down. Cars, of course, became safer themselves, but we see old and vintage cars on the streets today, and I don’t think the death rate among their drivers is on the same level as it was when the car was produced.
A 1950 car in 2018 is safer than a 1950 car in 1950.
So, if what I said is at least remotely true (can’t do research atm), then the main reasons for cars becoming safer are external: things that were added to the cars and to the systems around them. This isn’t possible with buggy software: we can’t create universal patches to make the whole set of software less buggy.
They literally have cars accelerating when they shouldn’t be after a hundred years of use. Their bugs are worse than ours.
They only figured out seatbelts in the most recent quintile of that hundred years. Airbags are as recent. They haven’t yet figured out “don’t crash into stationary object”.
If that’s what software is going to be like, my grandchildren will be born, grow up, and die before progress is made on the front of reliable software.
I agree with every word of this. It’s good to have a suitable label that nicely encapsulates the concept.
I would like to see some examples. I would say that https://www.gov.uk/ would be one of the best examples of brutalist web design. What do you think?
Like the sibling comment, I, too, had my “parked domain” flags triggered. However, the actual practice of using the gov.uk constellation of websites is fantastic. Very easy to use.
The login.gov stuff and related things in the US are catching up to the usability of the UK sites.
Like the sibling comment, I, too, had my “parked domain” flags triggered.
I agree, but once you scroll down it improves a lot. Nice site.
Government jobs tend to be 40 hours or less. State government in my state has a 37.5 hour standard. There is very occasional off-hours work, but overtime is never required except during emergencies – and not “business emergencies”, but, like, natural disasters.
I’m surprised that tech workers turn up their nose at government jobs. Sure, they pay less, but the benefits are amazing! And they really don’t pay too much less in the scheme of things.
How many private sector tech jobs have pensions? I bet not many.
I work in a city where 90% of the folks showing up to the local developer meetup are employed by the city or the state.
It’s taken a lot of getting used to being the only person in the room who doesn’t run Windows.
I feel like this is pretty much the same for me (aside from the meetup bit).
Have you ever worked with windows or have you been able to stay away from it professionally?
I used it on and off for a class for about a year in 2003 at university but have been able to avoid it other than that.
Yeah. I hadn’t used Windows since Win 3.1, until I started working for the state (in the Win XP era). I still don’t use it at home, but all my dayjob work is on Windows, and C#.
they pay less
Not sure about this one. When you talk about pay, you also have to count all the benefits that come with it. In addition, they usually push you out at 5pm, so your hourly rate is very close to the contractual one.
Most people who are complaining that they pay less are the tech workers who hustle hard in Silicon Valley or at one of the big N companies. While government jobs can pay really well and have excellent value especially when considered pay/hours and benefits like pensions, a Google employee’s ceiling is going to be way higher.
There’s a subreddit where software engineers share their salaries and it seems like big N companies can pay anything from $300k–700k USD when you consider their total package. No government job is going to match that.
I do.
Pros: hours, and benefits. Less trend-driven development and red queen effect. Less age discrimination (probably more diversity in general, at least compared to Silicon Valley).
Cons: low pay, hard to hire and retain qualified people. Bureaucracy can be galling, but I imagine that’s true in large private sector organizations, too.
We’re not that behind the times here; we’ve avoided some dead-ends by being just far enough behind the curve to see stuff fail before we can adopt it.
Also, depending on how well your agency’s goals align with your values, Don’t Be Evil can actually be realistic.
I will say, I once did a contract with the Virginia DOT during Peak Teaparty. Never before in my life have I seen a more downtrodden group. Every single person I talked to was there because they really believed in their work, and every single one of them was burdened by the reality that their organization didn’t and was cutting funding, cutting staff, and cutting… everything.
They were some of the best individuals I ever worked with, but within the worst organization I’ve ever interacted with.
Contrast that to New York State: I did a shitton of work for a few departments there. These were just folks who showed up to get things done. They were paid well, respected, and accomplished what they could within the confines of their organization. They were also fine with letting work knock off at 2PM.
Also, depending on how well your agency’s goals align with your values, Don’t Be Evil can actually be realistic.
Agreed. There’s no such thing as an ethical corporation.
Do you mind sharing the minimum qualifications of a candidate at your institution? How necessary is a degree?
I’m asking for a friend 😏
No, not even them.
When you think about what “profit” is (i.e., taking more than you give), I think it’s really hard to defend any for-profit organization. Somebody has to lose in the exchange. If it’s not the customers, it’s the employees.
That’s a pretty cynical view of how trade works & not one I generally share. Except under situations of effective duress where one side has lopsided bargaining leverage over the other (e.g. monopolies, workers exploited because they have no better options), customers, employees and shareholders can all benefit. Sometimes this has negative externalities but not always.
Profit is revenue minus expenses. Your definition, taking more than you give, makes your conclusion a tautology. i.e., meaningless repetition.
Reciprocity is a natural law: markets function because both parties benefit from the exchange. As a nod to adsouza’s point: fully-informed, warrantied, productive, voluntary exchange makes markets.
Profit exists because you can organize against risk. Due to comparative advantage, you don’t even have to be better at it than your competitors. Voluntary exchange benefits both weaker and stronger parties.
Profit is revenue minus expenses. Your definition, taking more than you give, makes your conclusion a tautology. i.e., meaningless repetition.
I mean, yes, I was repeating myself. I wasn’t concluding anything: I was merely rephrasing “profit.” I’m not sure what you’re trying to get at here aside from fishing for a logical fallacy.
a tautology. i.e., meaningless repetition.
Intentionally meta?
Reciprocity is a natural law
Yup. No arguments here. However, reciprocity is not profit. In fact, that’s the very distinction I’m trying to make. Reciprocity is based on fairness and balance, that what you get should be equal to what you give. Profit is expecting to get back more than what you put in.
Profit exists because you can organize against risk.
Sure, but not all parties can profit simultaneously. There are winners and losers in the world of capitalism.
So, if I watch you from afar and realize that you’ll be in trouble within seconds, come to your aid, and save your life (without much effort on my side) in exchange for $10, who’s the one losing in this interaction? Personally, I don’t think there’s anything morally wrong with playing positive-sum games and sharing the profits with the other parties.
For an entry-level developer position, we want either a bachelor’s degree in an appropriate program with no experience required; an associate’s degree and two years of experience; or no degree and four years of experience. The help-desk and technician positions probably require less for entry level, but I’m not personally acquainted with their hiring process.
I would fall into the last category. Kind of rough being in the industry for 5 years and having to take an entry level job because I don’t have a piece of paper, but that’s how it goes.
For us, adding an AS (community college) to that 5 years of experience would probably get you into a level 2 position if your existing work is good. Don’t know how well that generalizes.
Okay cool! I have about an AS in credits from a community college I’d just need to graduate officially. Though, at that point, I might as well get a BS.
Thanks for helping me in my research :)
I don’t, but I’m very envious of my family members who do.
One time my cousin (works for the state’s Department of Forestry) replied to an email on Sunday and they told him to take 4 hours off Monday to balance it out.
That said, from a technological perspective I’d imagine it would be quite behind the times and move very slowly. If you’re a diehard agile manifesto person (I’m not), I probably wouldn’t recommend it.
EDIT: I guess it’s really what you value more. In the public sector, you get free time at the expense of money. In the private sector, vice versa. I can see someone who chases the latest technologies and loves to code all day long being miserable there, but for people who just code so they can live a fulfilling life outside of work it could be a good fit.
Oh my god you’re right. Edit: none of this comment is sarcasm! genuinely dumbfounded
git checkout branch1
git treesame-commit branch2 # a complex program
is the same as
git checkout branch1
branch1ref=$(git log --format=%H -1)
git reset --hard branch2
git reset $branch1ref
git add .
git commit -m "treesame-commit" # no complex program
and this whole time I believed there was no way to do this!
In my literally 7 years of using my own custom tool that writes git commit objects by hand, how have I never realized that? :headdesk:
Of course, to make the latter really solid you’d also want a git clean -dffx, git submodule sync, git submodule update --init --recursive, etc. In some sense, it’s hard to beat the concreteness of writing out the exact tree hash you want into a commit object, so frankly I’ll probably still keep using my tool but I do feel a bit sheepish.
Another person on reddit pointed out git read-tree and git checkout-index which also can solve the problem.
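For the curious, here’s a self-contained sketch of the read-tree/checkout-index variant in a throwaway repo. The branch and file names are made up for illustration, and `git init -b` assumes git 2.28 or newer:

```shell
# Throwaway demo repo with two branches whose trees differ.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main repo && cd repo   # -b requires git >= 2.28
git config user.email demo@example.com && git config user.name demo
echo one > a.txt && git add . && git commit -qm init
git checkout -qb branch2 && echo two > b.txt && git add . && git commit -qm b2
git checkout -qb branch1 main

# The read-tree variant: commit branch2's exact tree onto branch1.
git read-tree branch2      # replace the index with branch2's tree
git checkout-index -fa     # (optional) make the working tree match the index
git commit -qm "treesame-commit"

# branch1's new commit now has the same tree as branch2's tip:
test "$(git rev-parse 'branch1^{tree}')" = "$(git rev-parse 'branch2^{tree}')" && echo same-tree
```

The commit itself only needs the index, so `checkout-index` is strictly for keeping the working tree in sync.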
Nope! Just dumbfounded by my own stupidity. I agree though, I don’t know how to convey the dumbfoundedness without sounding sarcastic. Internetting is hard.
In theory, if people’s fear is greater than the true cost, it seems like there’s an opportunity for people to offer recommendations and then accept the legal risk in exchange for a fee. Since the risk here is posited to be tiny, you could charge them a tiny amount and take all the risk on yourself and make money. Then we can see the true cost of implementation. For instance, I have a blog I haven’t written anything on since college that has comments on it. If the cost to indemnify is $2.5k, I might as well just black hole EU traffic before it hits the site. If the cost to recommend a bunch of Wordpress plugins and then accept indemnification is $25, I might pay it and install the plugins.
Since this blogpost is clearly in reaction to announcements of the sort that monal.im recently made I think it is reasonable to demand something simple from proponents of the law. Here’s an offer: $25 for initial audit, and $5/yr afterwards for indemnification so long as I comply with remediation suggestions. I’ll retain the right to publish the remediation suggestions. What’s your offer?
(Personally, I’m not worried about GDPR for my blog, which is why I value these services so low, but I think the lack of existence of these services is evidence that those arguing that these reactions are hysteria are incorrectly pricing the cost of operating under the increased regulation)
I can’t audit a company for $25. That’s crazy. A big part of GDPR consulting is getting companies to document what data they collect, what they do with it, how long they keep it, and why they think they’re keeping it safe. This takes days, sometimes months, to tease out. If you can write it on a napkin, I could just about read that napkin for $25 and tell you if it’s compliant or not, but really? I’d tell you for a beer.
If you’ve got a Wordpress site, and you respond to reasonable email/post on how to (a) delete themselves (if they’re a commenting user) and/or (b) their comments, as well as (c) get a list of all their comments and log-in attempts, then you’re probably fine.
If you’ve got a Wordpress site without any comments/users, then you’re probably fine.
If you’ve got a Wordpress site without any comments/users, but you use some plugin from some third-party site to do comments; etc, then you might need to talk to that third-party. Maybe.
Chasing down that third party is where most people blow their budget.
Well, then whatever you’d charge is the true cost of compliance, isn’t it? OP’s big argument is that people are hysterical about GDPR: i.e., that the fear of the costs is much higher than the actual costs, and that the costs, after accounting for risk, are not high. If OP is right, then OP could become fabulously wealthy by arbitraging between the two by selling indemnification insurance.
OP is not doing that, and as a matter of fact, there’s no one doing that at a rate these people would use. This means that either people aren’t hysterical or OP has underestimated costs after accounting for risk.
Essentially, to OP I’m saying “if what you’re saying is true, you have a direct path to incredible wealth. That you aren’t taking it makes it apparent to me that what you’re saying isn’t true”. An audit is worthless without the indemnification.
Not really.
It’s more like buying insurance against bad PR: I’m sure you can find someone to sell it to you, but it’s much cheaper to simply not be an asshole.
That’s not true. It’s like buying insurance against bad PR that requires you to behave in some fashion, and it assumes the fallout of the PR is measurable.
If the cost to make this happen is low but I think it’s high, and I’m willing to pay some amount above that cost, then you can get rich. The article argues that the cost is low; I think it’s high. The only wiggle room is whether I’ll pay some amount above the cost.
So far no one has offered the service while taking on the responsibility of having done a good job. Not one person.
There is no shortage of people saying it’s easy and low risk and a huge shortage of people who’ll do it. Chances are it’s what you’re saying: it’s months of work. It’s expensive, and people are pricing accordingly when they exclude the EU.
Yes it is true.
If the cost of the audit and assurances from the regulators is lower than this indemnification insurance (which I can only offer if I audit and speak to the regulators), then nobody will offer the indemnification you’re looking for except snake-oil salesmen.
Same thing for PR: once you’ve accurately evaluated your specific risk, there’s no point in buying insurance. Just fix the problems.
Here’s another way to look at it: people buy insurance when they can’t evaluate the risk themselves, and actuaries price that insurance from the mean projected risk. If the variance on that risk is low, it’s mass-market insurance. If the variance is high but still lower than the market mean, you can buy revenue protection (like futures). If the variance exceeds the market mean, however, you’re not selling the work of an actuary; you’re making bets, not selling insurance. The audit is what does the market segmentation in this case, and since I can accurately determine compliance/risk, it’s much better for the company to just fix the issues.
Here’s yet another way to look at it: the risk of your being in a plane crash isn’t 1:11million if you don’t fly. A sucker is someone who buys plane crash insurance — even at those rates — when they don’t fly.
Also: the point of the article is that the GDPR is not so complex, and that few companies actually need an audit. Not that “risk is low”.
Love the idea, but I’d prefer to abstract one level higher. Overhead when reading code comes from recurring structural patterns that have to be mentally re-parsed, not individual tokens. I’d love to have definable sugar over stuff like
x, err := fnCall("wat")
if err != nil {
    return err
}
or
var new []int
for _, x := range old {
    new = append(new, fn(x))
}
Being able to spot those common patterns in my code and refer to them as reterr x := fnCall("wat") or new := map(old, fn) would be awesome. I get that it’s much more difficult, but I like the idea of writing in something with more control but reading in something higher level. Can always “dive in” to the actual impl much like you’d do with a function call.
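For what it’s worth, Go generics (1.18+) already let you write the second pattern as a plain function. `Map` here is a hypothetical helper name, not anything in the standard library:

```go
package main

import "fmt"

// Map captures the append-loop pattern from the comment above as a
// reusable generic function (a hypothetical helper, not stdlib).
func Map[T, U any](old []T, fn func(T) U) []U {
	out := make([]U, 0, len(old))
	for _, x := range old {
		out = append(out, fn(x))
	}
	return out
}

func main() {
	double := func(x int) int { return x * 2 }
	fmt.Println(Map([]int{1, 2, 3}, double)) // [2 4 6]
}
```

It doesn’t help with the `if err != nil` pattern, which still has no sanctioned shorthand, but it does show the “read at a higher level, dive in when needed” idea is partly achievable with ordinary functions.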
That sounds something like macro expansion. rustc with --pretty=expanded will expand, though it may be too much detail.
It does! Just kind of…backwards. Macro contraction? I’d love the actual file and what I write to be the full lang, but to have a way to apply some kind of semantic compression for skimming
Do you think that some open source project running a bunch of servers so people can use a free and open social networking thing outside of Facebook has the skills and resources to comply with the regulation?
Yes, because in my experience (I’m running a few small open services, which do comply with it) it’s been quite easy. To comply with German privacy law I already anonymize IPs in most logs (the rest will be changed by the end of next week), and I don’t collect any analytics or tracking data. The only data I have is what users submitted themselves, and not only can users view and delete it through the software, I also wrote a small tool to easily export all data for a user as a PDF, so if a user sends a “nightmare letter”, I can quickly respond by letter/fax with all their data.
I don’t use any external APIs or services in my software unless the user explicitly opts in, which means no data is shared with third parties, also making compliance easy.
In general, it seems to me like the only people that have to worry about the GDPR are the a*holes that are stuffing their sites with analytics, tracking, ads, and selling userdata.
Up to 4% of global revenue, or 20 million EUR, whichever is higher.
But I trust that the courts will act reasonably if I’m being cooperative and focus on privacy from the get go.
There are some good descriptions of some patterns at the Azure Cloud Design Patterns site.
Why would I apply the patches against this on my home computer? One userspace process can steal from another userspace process? That’s just me and my one user.
And any javascript in your web browser, happily served to you without your knowledge by untrusted third parties.
Out of curiosity, why? I’ve always considered it superior to have build configuration versioned directly with the code. You can guarantee lockstep changes between build and code, and you only need a single repo to understand the code and how it’s built.
We use Jenkins with configuration solely in Jenkins at the moment and it’s inferior to if we used Jenkins files in our repos.
I hope I answered in my other comment, at least to some extent.
The build (and test) recipe shouldn’t be part of the CI configuration file; it should be part of the Makefile or whatever your language’s toolchain/build framework uses. You should be able to build the application without having your own CI instance. Sure, the Makefile won’t install the packages needed to build the software (dependencies should be mentioned in the README file), as that’s a distro-specific thing; that’s why CI environments need a preparation step, and it needs to be stored somewhere. I just don’t like storing such things in the project’s repository, as they relate strictly to the tooling wrapping the project, and this tooling may change at some point. It’s a CI-specific (and therefore often distro-specific) thing, a meta project-workflow thing. If you want it versioned, I think it should be versioned separately.
If you care about compatibility, you should be able to avoid lockstep changes (your older source code should build fine with a newer build-preparation configuration, and your newer source code should build fine with an older one). And if that’s not possible, then it’s good to have a clear indicator in the form of a build failure, because it will most likely hit users too.
README (or INSTALL) file should be enough to let user/developer know how to prepare building environment in their own distro (which may not necessarily be your distro or distro used by CI).
tl;dr (simplistic): today you may be using one CI, but it may be another in the future. It’s not a crucial part of the project (even if very useful), thus it shouldn’t pollute the development repository. That’s my view.
This is my pet hate with Jenkins, every time we open a new longer-lived branch that deserves a CI build, it’s an exercise in copying config from one browser tab to another. I’d love to just be able to modify a .jenkins.toml file.
If you use a newer Jenkins with Pipelines, you can use a Jenkinsfile.
I also agree with major version belonging in the name. For version 4 and 5 of SBJson I renamed all classes, enums, and identifiers so that you can install version 3.x.x, 4.x.x and 5.x.x in the same app without conflicts. I did this because I wanted to ease the upgrade path for people. If they use SBJson in different parts of their app (which is likely, in big apps) this allows them to upgrade parts of their app at a time, rather than be forced to upgrade all uses in one go. More importantly though: it also allows people to upgrade their own usage in their app, even as dependencies they rely on have not yet upgraded their usage of the library.
The Apache Commons Java libraries practise your method and I think it’s fantastic for precisely the reasons you mention. Guava does not and that last sentence of yours is a huge ticket in Hadoop.
That sounds more like a workaround to avoid the issues of runtimes not being able to handle versions and the lack of reasonable migration tooling.
I disagree somewhat. Renaming was simple to do, and is simple to understand & deal with for users and machines alike. There’s no special case migration tooling or runtime support required at all. One could argue that requiring a runtime that is able to handle versions of libraries and requiring migration tooling is a workaround for poor versioning on behalf of library authors. However, I’ll admit renaming has its problems too. It would make back porting fixes across major versions much more annoying, but luckily my project is mature and small enough that it has not been a problem.
Wow, these are actually surprisingly convincing. My position on fake reviews has always been that they are a great signal to permit on your website so long as you penalize them silently in search results. When Amazon started removing the “honest and unbiased” reviews, they actually made it harder to avoid fake products, and so I was in favour of leaving them in. They’re obvious enough even for people without Fakespot and stuff like that.
But something like this:
I love this place. I love their asparagus. The scallops and pasta are also delicious. I will continue to come here anytime I am in town.
That’s almost convincing, except who on earth remarks about the asparagus? I’d still chalk it up to people just being oddly specific in their tastes.
This is pretty crazy.
That’s almost convincing except who on earth remarks about the asparagus.
Germans.
We’ve got places here where asparagus is grown with heating under the plants to make sure it reaches the market ahead of the season. People pay up to 20 Euro a kilo.
I think the takeaway here is a) don’t conflate all kinds of errors from an HTTP request with invalid tokens (I’m not familiar with the GitHub API, but I suppose it correctly returns 401 Unauthorized), and b) don’t delete important data, but flag it somehow.
It returns a 404 which is a bit annoying since if you fat finger your URL you’ll get the same response as if a token doesn’t exist.
https://developer.github.com/v3/oauth_authorizations/#check-an-authorization
Invalid tokens will return 404 NOT FOUND
I’ve since moved to a pattern of wrapping all external requests in objects whose state we can explicitly check, instead of relying on native exceptions coming from the underlying HTTP libraries. It makes things like checking the explicit status code in the face of a non-200 response easier.
I might write on that pattern in the future. Here’s the initial issue with some more links https://github.com/codetriage/codetriage/issues/578
Why not try to get issues, and if it fails with a 401, you know the token is bad? You can double check with the auth_is_valid method you’re using now…
That’s a valid strategy.
Edit: I like it, I think this is the most technically correct way to move forwards.
Then there’s your problem. Your request class throws RequestError on every non-2xx response, and auth_is_valid? treats any RequestError as meaning the token is invalid. In reality you should only take 4xx responses to mean the token is invalid, not 5xx responses, network-layer errors, etc.
I think the takeaway is that programmers are stupid.
Programs shouldn’t delete/update anything, only insert. Views/triggers can update reconciled views so that if there’s a problem in the program (2) you can simply fix it and re-run the procedure.
If you do it this way, you can also get an audit trail for free.
If you do it this way, you can also scale horizontally for free if you can survive a certain amount of split-brain.
If you do it this way, you can also scale vertically cheaply, because inserts can be sharded/distributed.
If you don’t do it this way – this way which is obviously less work, faster and simpler and better engineered in every way, then you should know it’s because you don’t know how to solve this basic CRUD problem.
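The insert-only idea above can be sketched in a few lines (illustrative, not any particular schema): every change, including a delete, is appended as an event, and the “current” table is just a replay.

```go
package main

import "fmt"

// event is one row in an append-only log. Deletes are tombstone
// events; nothing in the log is ever updated or removed.
type event struct {
	id    int
	field string
	value string
	del   bool
}

// currentView derives the reconciled "current" state by replaying the
// log, the role a view or trigger plays in the database version.
func currentView(log []event) map[int]map[string]string {
	view := map[int]map[string]string{}
	for _, e := range log {
		if e.del {
			delete(view, e.id)
			continue
		}
		if view[e.id] == nil {
			view[e.id] = map[string]string{}
		}
		view[e.id][e.field] = e.value
	}
	return view
}

func main() {
	log := []event{
		{1, "email", "a@example.com", false},
		{1, "email", "b@example.com", false}, // an "update" is just another insert
		{2, "email", "c@example.com", false},
		{2, "", "", true}, // a "delete" is a tombstone; the data stays in the log
	}
	fmt.Println(currentView(log)[1]["email"]) // b@example.com
}
```

A bug in the replay logic can be fixed and the view rebuilt from the log, which is exactly the re-run property being claimed, and the log itself doubles as the audit trail.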
Of course, the stupid programmer responds with some kind of made-up justification, like saving disk space in an era where disk is basically free, or enterprise, or maybe this is something to do with unit tests or some other garbage. I’ve even heard a stupid programmer defend this crap because the unit tests need to be idempotent, and all I can think is this fucking nerd ate a dictionary and is taking it out on me.
I mean, look: I get it, everyone is stupid about something, but believing this is a specific, isolated problem to do with 503 errors, instead of a systemic, chronic problem that boils down to a failure to actually think, really makes it hard to discuss the kinds of solutions that might actually help.
With a 503 error, the solution is “try harder” or “create extra update columns” or whatever. But we can’t try harder all the time, so there’ll always be mistakes. Is this inevitable? Can business truly not figure out when software is going to be done?
On the other hand, if we’re just too fucking stupid to program, maybe we can work on trying to protect ourselves from ourselves. Write-only-data is a massive part of my mantra, and I’m not so arrogant to pretend it’s always been that way, but I know the only reason I do it is because I deleted a shit-tonne of customer data on accident and had the insight that I’m a fucking idiot.
I agree with the general sentiment. It took me about 3 read-throughs to parse all the “fucks” and “stupids”. I think there’s perhaps a more positive and less hyperbolic way to frame this.
Append-only data is a good option, and basically what I ended up doing in this case. It pays to know what data is critical and what isn’t. I referenced acts_as_paranoid, and it pretty much does what you’re talking about. It makes a table append-only: when you modify a record, it saves an older copy of that record. Tables can get HUGE, like really huge, as in the largest tables I’ve ever heard of.
/u/kyrias pointed out that large tables have a number of downsides, such as making maintenance and backups harder.
You can do periodic data warehousing, though, to keep the tables as arbitrarily small as you’d like, but that introduces the possibility of programmer error when doing the data warehousing. It’s an easier problem to solve than making sure every destructive write is correct in every scenario, though.
Tables can get HUGE, like really huge, as in the largest tables i’ve ever heard of
I have tables with trillions of rows in them, and while I don’t use MySQL most of the time, even MySQL can cope with that.
Some people try to do indexes, or they read a blog that told them to 1NF everything, and this gets them nowhere fast, so they’ll think it’s impossible to have multi-trillion-row tables, but if we instead invert our thinking and assume we have the wrong architecture, maybe we can find a better one.
/u/kyrias pointed out that large tables have a number of downsides such as being able to perform maintenance and making backups.
And as I responded: /u/kyrias probably has the wrong architecture.
Of course, the stupid programmer responds with some kind of made up justification, like saving disk space in an era where disk is basically free
It’s not just about storage costs though. For instance at $WORK we have backups for all our databases, but if we for some reason would need to restore the biggest one from a backup it would take days where all our user-facing systems would be down, which would be catastrophic for the company.
You must have the wrong architecture:
I fill about 3.5 TB of data every day, and it absolutely would not take days to recover my backups (I have to test this periodically due to audit).
Without knowing what you’re doing I can’t say, but something I might do differently: Insert-only data means it’s trivial to replicate my data into multiple (even geographically disparate) hot-hot systems.
If you do insert-only writes from multiple split-brain nodes, it’s usually possible to get hot/cold easily, with the risk of losing (perhaps only temporarily) a few minutes of data in the event of catastrophe.
Unfortunately, if you hold any EU user data, you will have to perform an actual delete if an EU user asks you to delete their stuff, if you want to be compliant. I like the idea of the persistence being an event log and then constructing views as necessary. I’ve heard that it’s possible to use this for almost everything by storing an association of random-id to person, and then just deleting that association when asked in order to be compliant, but I haven’t actually looked into that carefully myself.
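The random-id association idea can be sketched like this (names are illustrative, and this is a sketch of the idea as described, not a vetted GDPR compliance mechanism): events are keyed by an opaque ID, and “deleting” a person means dropping only the ID-to-person mapping.

```go
package main

import "fmt"

// store keeps an append-only event log keyed by opaque IDs, plus a
// separate mapping from those IDs to personal data. Illustrative only.
type store struct {
	events     map[string][]string // opaqueID -> immutable event log
	identities map[string]string   // opaqueID -> personal data (name, email, ...)
}

// forget honors an erasure request by severing the association only;
// the event log itself stays append-only and untouched.
func (s *store) forget(id string) {
	delete(s.identities, id)
}

func main() {
	s := &store{
		events:     map[string][]string{"u-42": {"signed_up", "posted_comment"}},
		identities: map[string]string{"u-42": "Jane Doe <jane@example.com>"},
	}
	s.forget("u-42")
	// The event log is intact; the identity lookup now comes back empty.
	fmt.Println(len(s.events["u-42"]), s.identities["u-42"])
}
```

Whether severing the association counts as deletion for compliance purposes is exactly the open question in this thread, so treat the pattern as an engineering option, not legal advice.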
That’s not true. The ICO recognises there are technological reasons why “actual deletion” might not be performed (see page 4). Having a flag that blinds the business from using the data is sufficient.
Very cool. Thank you for sharing that. I was under the misconception that having someone in the company being capable of obtaining the data was sufficient to be a violation. It looks like the condition to be compliant is weaker than that.
No problem. A big part of my day is GDPR-related at the moment, so I’m unexpectedly well versed in this stuff.
There’s actually a database out there that enforces the never-delete approach (together with some other very nice paradigms/features). Sadly it isn’t open source:
Well, it’s finally happened.
So in my opinion, what should now happen is that the manufacturer is fined out of existence, the executive team and board are all fined into destitution, and the fear of god is put into everyone else in the market so they actually take security seriously. What I suspect actually will happen is that 465,000 patients pay out of their own pocket or insurance to rectify the incompetence of their medical device supplier, who ends up facing no meaningful repercussions. I really really hope it doesn’t come to actual deaths from poor security practices for us to fix this.
Karen Sandler has a great talk about this topic.
What should happen is that the software in these things should be treated like the public good that it is: free for everyone to inspect, audited, with paid and properly trained government employees or contractors to inspect it. When it’s a matter of life and death, this should not be left in the hands of secretive corporations.
I watched 20 min or so of that talk last night. It was good. The problem is that they’re certainly not going to go FOSS on this stuff. If we want to improve security/quality, we can instead make them go through one or more evaluations by independent pentesters. We can also let academics or new people in pentesting at reputable firms review the code itself under NDA, with responsible disclosure like we ordinarily do. They do this stuff before the product is released, maybe while it’s being developed, with the last one allowed to happen after. This is in order to reduce costs, increase industry acceptance, and (best for last) kill fewer people by spotting problems early.
Just a regulation forcing high-quality development practices followed by strong review will drive quality up. That was evidenced by the TCSEC for security and currently DO-178C for safety in aerospace. The latter created an entire ecosystem of tooling including model-based development, static analyzers, certified compilers (or insert tool here), safer languages, graphics stacks, and even companies expediting certification-oriented tasks. Each component is done as well as they can manage, since re-certification likely costs more than removing the defect before certification. Then, they re-use what components they can to further reduce certification costs to just new components plus integration specs/code. It’s a proven model.
I’ll note that neither safety- nor security-critical stuff allowed wireless by default unless it was absolutely necessary as part of the system. Having radio in military or aerospace is an example. Even then, many products mediate it in some way with middleware or at the switch. Type 1-certified WiFi adapters even address side channels. None of this shit is new in sub-fields with good regulation for safety or security. The FDA just seems to have terrible regulation on the software side, based on everything I’ve seen. That’s on top of the companies’ own BS. Bad combo.
The problem is that they’re certainly not going to go FOSS on this stuff
Why the fuck not? We should be demanding it instead of saying “meh, it’ll never happen”. It certainly won’t happen if we don’t even demand it.
The FDA audits other things about medical devices, why not the software?
You should do a survey on how many people are “demanding” these things be FOSS, versus how many told their elected representatives they wouldn’t vote for them unless it happened, versus how many raised funds to pay (lobby) those representatives to make it happen. The tiny, tiny number you’ll get out of that survey of active participants, compared to comments or likes on social media, is why I’m assuming it won’t happen. I’m instead going for compromises with vendors that help consumers with little or no damage to vendors’ bottom line. Those kinds of compromises happen all the time just as a part of doing business (i.e. being competitive).
Of course, most of them won’t care. So, I try to push quality plus resulting PR as a good differentiator on the companies. On the consumers, I tell them to try to force it. I’m barely active in doing that now past educating them about options when they ask. The participation level is so low on average it’s not even worth trying. Occasionally, like with SOPA or Snowden, something becomes a hot media topic where I can try to push something with folks taking action. Otherwise, I’m focusing on ways to incentivize creating and maintaining better things now. Plus doing it myself with R&D.
Note: This is the U.S. I’m talking about. The environment in other countries might be better for reformers.
I can’t think of anything more out of step with current US politics than asking for a major increase in the FDA budget to hire technical staff capable of auditing proprietary software in medical devices. I mean, it’s rational and good policy, but we’ve had 50 years of marketing of the idea that regulation is evil.
I agree with you, however, software in vehicles affects billions of people a year and we haven’t achieved this so far :(
Why does that help anyone? Perhaps we’d have better results in the industry if we took the same approach that the NTSB takes to people who make mistakes piloting a plane without killing anyone.
The patient didn’t have to buy the medical device, did they?
If I found somebody dying in a desert and gave him some water, do I need to make him sign a waiver that says he won’t sue me if my water isn’t up to WHO drinking water quality standard?
People need to stop acting as if personal agency ceases to exist as soon as you step into a hospital.
The patient didn’t have to buy the medical device, did they?
Are you fucking serious?
I mean, no, technically they could have chosen to die instead of getting a pacemaker implanted, but if you’re so sociopathic you think that’s a real choice, I feel fully comfortable dismissing everything you have to say out of hand.
… This is an exploit that allows anybody within fifty feet to untraceably kill you. How the fuck is that the patient’s fault?
If the patient knew that, then buying it would be their fault if they had alternatives. In her case, she got an older one without wireless. If the patient didn’t know that, blame gets more complicated given how much the demand side had to do with us having no or shitty INFOSEC requirements in this area. I still blame the manufacturer by default, though, since safety-critical markets are supposed to assess and mitigate risks where possible. They’re barely trying.
The patient didn’t have to buy the medical device, did they?
The patient was not properly informed that the device about to be placed inside his meat is a malfunctioning and unsecured piece of shit.
Go read some Austrians writing about customary law, IDK.
The patient was not properly informed
Why didn’t the patient demand a security audit, or to see the result of one?
If the patient had, then he was able to make the best decision available to him. If he hadn’t, then who else could be blamed?
Why was an insecure device the only option available? Why weren’t the patients or would-be patients pressuring the manufacturers or the insurers to secure their devices before 465k of them were produced and implanted?
We all know people don’t care about security until it kills them or their company. Then they just cry and whine about how the government or the ‘industry’ should have prevented this, when all it needed was consumer demand for secured stuff; but then the blame would have been placed squarely on themselves.
Do you audit each and every item that can potentially kill you? That heated toilet seat in the hotel? ABS controller in your vehicle or your Uber ride? Baby formula for your child?
Or do you count on manufacturers and regulatory bodies doing their job properly?
It’s called Consumer Reports. They have a large number of subscribers who read up on their functionality and quality assessments of various products before they buy. We call it being a responsible or informed consumer. If it’s a life-critical product, then making the wrong choice can kill you. In that case, it’s certainly the consumer’s responsibility to at least try to assess the risk. I noted in another comment manufacturers were working hard to make that hard to cover up their BS. Regulations, lawsuits, and vote with wallet are solutions to that when such is revealed as in this case.
I will note a subset of people can still avoid trouble by just making the connection that an Internet or wireless connection can equal being hacked. There’s a lot of lay people in the Mid-South US that know that. They see hacks in the media all the time. So, they avoid stuff with “too much technology” or Internet-connected if it’s about risk reduction. They’re being forced to make concessions balancing their needs as consumers in things like automobiles where every manufacturer seems to be cramming more computers into their vehicles. A lot of folks still buy older vehicles or appliances, though. I exclusively do with my appliances being more reliable, too. That’s another thing lay people know and can act on: “They don’t build things like they used to. Stuff used to last forever.”
Consumer Reports, do you seriously suggest reading them for about any possibly dangerous item you use?
I will note a subset of people can still avoid trouble by just making the connection that an Internet or wireless connection can equal being hacked.
I don’t know. Radio has been around for over a century and doesn’t get hacked. TV with bunny ears doesn’t get hacked in probably a trillion years of cumulative use worldwide. Why should the users (and potential pacemaker users surely peak past middle age) expect a certified medical device to be vulnerable?
OK, let’s take any medical item that can kill you. Did you ever have a CAT scan or an X-ray? Do you guys read up on them in Consumer Reports? Do you call up the vendor and audit the source code?
“Consumer Reports, do you seriously suggest reading them for about any possibly dangerous item you use?”
You’re pulling that, and then using it for X-ray machines, out of thin air. I offered them as evidence that a lot of people do research on the pros and cons of major buying decisions, especially on reliability or safety, with Underwriters Laboratories testing a lot of them. People also use reviews. Your comment indicated we should expect consumers to do nothing in a market where many suppliers have actively harmed them, with that being widespread knowledge among the same consumers. That’s irrational.
Consumers should be doing research on something as serious as a pacemaker. They should also be collectively pushing for safety and security regulations. They’re already abuzz on Twitter or Facebook over various device hacks, but that’s the limit of most of their participation in democracy, research, or buying decisions. I blame them as much as the suppliers, given that suppliers respond to changes in regulation or demand. Like they did with DO-178B in aerospace, or quality in automotive after Toyota proved it was profitable.
“Radio has been around for over a century, doesn’t get hacked. “
Sure it does. People snooping on radio channels, interference with signals, or messing with people’s WiFi are known risks to large amounts of the general public due to media and/or personal experience. Movies and news stories also periodically feature direct attacks that happen wirelessly. That so many people without technical background connected the dots on this makes me think it’s pretty obvious to anyone paying attention. Many don’t though since apathy or laziness prevails. Some just miss the information or can’t understand it. I don’t judge them.
“Did you ever had a CAT scan or an X-ray? Do you guys read up on them in Consumer Reports? Do you call up the vendor and audit the source code?”
I ask about the risks. Doing so made me get less of them where possible (not often possible…). If I do take one, I’m also aware we can at least punish the manufacturer in court with a class-action. I also push everyone from consumers to legislators to force effective regulations on those companies for safety or security of built-in electronics. I’ve even recommended specific tools to people who claim to work for such companies to cost-effectively improve safety/security. So, yeah, I do what I can. Just not enough of us doing it with this problem really needing voters to force the suppliers to prevent risks or at least be honest about them. Like we do for food labels with a lot of benefit.
When you asked about the risks of an X-ray, did your radiologist include the machine malfunctioning and killing you? Mine did not, despite the possibility of software or hardware tampering.
Even with no tampering, https://en.wikipedia.org/wiki/Therac-25
We all know people don’t care about security until it kills them or their company. Then they just cry and whine about how the government or the ‘industry’ should have prevented this.
To support that point, they also won’t buy it most of the time. The security-focused products usually either die in the market or get minimal sales. The highly-usable software with better security (eg encrypted chat) is avoided for either network effects or just frivolous reasons. The demand side is strongly in favor of taking on risk for convenience or even just what an app looks like. They’ll take on a known risk, then blame others when shit happens. Or they’ll not even try to assess the risk of something life-critical, followed by saying it’s other people’s fault.
This case is trickier to assess in terms of the big picture. The medical vendors added functionality that could get people killed. They didn’t disclose the risks. Based on her report, I’m assuming the doctor was getting paid extra to push that vendor’s product. These are both common things in the market that get patients killed. The pay-offs in particular are so widespread that a major journal gave up one time on trying to find doctors to do independent reviews of medical studies who weren’t taking bribes from one of the companies. I think they raised it to $10k in that situation. Also, Karen Sandler had to work her ass off to even get close to talking to engineers who then blew her off. She’s a persistent, tech-oriented person who couldn’t get the risk assessment.
It’s clear that the market failed miserably if it’s about delivering value to the customers, esp. saving their lives. Alternatively, the market succeeded if it’s about delivering value to the doctors and medical companies while killing lots of their patients or causing recalls. This is a good example of why regulations on security precautions, or at least risk disclosures, are a good thing. Why those don’t happen is partly the consumers’ fault, though, as they’re rarely taking democratic action against these problems. Only a few of the hundred-plus people I’ve talked to about these things even wrote a letter to Congress. The companies’ lobbyists are active as hell, though.
This seems like an odd response to me. Most others responding to you seem more interested in being tribal, which is unfortunate. But surely you must recognize that blame isn’t necessarily on the purchaser, no? If I sell you a device and claim it does x, y and z, and if it falls short of my claim, then I’ve committed fraud. In which case, purchasers of my device would be entitled to some form of restitution.
Your analogy falls flat for me, because the relevant bits here are what the seller is claiming they’re selling. And depending on the context, it is easy to see how even private courts can treat different situations based on precedent and standards of reasonableness.
Stop whining about things you don’t understand. Yeah, civilization is complex and it won’t get simpler just because you cry like a baby every time an actual strategy is required. If you can’t cope, learn alone or ask for help.
Ancap rulez byatch, I don’t have to understand a shit about the world or care about stupid fucker’s opinion. Sup.
Your ideology doesn’t scale and will lead to a series of wars ending with, once again, consolidated power or total annihilation. Either improve it or shut the fuck up.
This is a great usability improvement. Thank you Peter Hessler :)
That said, it’s still a little bit sad that this is only just being introduced in 2018.
Technically - OpenBSD has had various toolings (1, 2, 3 and others) to do this very task for quite a long time. But none of them were considered the correct approach.
Also, this is something that’s pretty unique to OpenBSD IMO. The end result is the same as with other systems.. sure. But this is unique among the unix world.
Q: What’s the difference?
Glad I asked! This is entirely contained within the base system and requires no tools beyond ifconfig! Linux has ip, iw, networkmanager, iwconfig… (likely others), and they are all using some weird combo of wpa_supplicant, autogen’d text files, and likely other things. Have you ever tried to manually configure wireless on Linux? It’s a nightmare. Always has been.
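For anyone who hasn’t seen the feature being discussed, the auto-join workflow looks roughly like this. The interface name iwm0 and all SSIDs/keys are placeholders, and the syntax is from the OpenBSD 6.4-era ifconfig(8), so check your release’s man page:

```shell
# Add networks to the auto-join list; the system picks an available
# one and reconnects as you move around. Names/keys are placeholders.
ifconfig iwm0 join homenet wpakey 'home-passphrase'
ifconfig iwm0 join worknet wpakey 'work-passphrase'
ifconfig iwm0 join cafenet              # open network, no key

ifconfig iwm0 join                      # with no argument, shows the list
ifconfig iwm0 -join worknet             # remove one entry

# The same "join ..." lines can live in /etc/hostname.iwm0 so the
# list survives reboots.
```

This is the whole interface: no daemon, no generated config files, just ifconfig and hostname.if.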
NetworkManager does a really good job of making it feel like there isn’t a kludge going on behind the scenes.. It does this by gluing all the various tools together so you don’t have to know about them. IMO this is what happens when you “get it done now” vs “do it right”.
With great simplicity comes great security: compare NetworkManager@6c3174f6e0cdb3e0c61ab07eb244c1a6e033ff6e vs ifconfig@1.368.
Anyway - I guess my point is this:
No. The Linuxes I use come with an out-of-the-box experience that makes wireless as easy as clicking a box, clicking a name, and typing in the password; it works, and it reconnects when nearby. They have been like that since I bought an Ubuntu-specific Dell a long time ago. They knew it was a critical feature that needed to work easily with no effort, with some doing that during installation so parts of the install could be downloaded over WiFi. Then, they did whatever they had to do within their constraints (time/talent/available code) to get it done.
And then I was able to use it, with the only breaks being wireless driver issues that had answers on Q&A sites. Although that was annoying, I didn’t have to think about something critical I shouldn’t have to think about. Great product development in action for an audience that has other things to do than screw around with half-built wireless services. That’s a compliment about what I used rather than a jab at OpenBSD’s, which I didn’t use. I’m merely saying quite a few of us appreciate stuff that saves us time once or many times. If common and critical, adoption can go up if it’s a solved problem with minimal intervention out of the box.
That said, props to your project member who solved the problem with a minimally-complex solution in terms of code and dependencies. I’m sure that was hard work. I also appreciate you illustrating that for us with your comparisons. The difference is almost comical in the work people put in with very different talents, goals and constraints. And m4 isn’t gone yet. (sighs)
And then something goes wrong in the fragile mess of misfeatures, and someone has to dig in and debug, or a new feature comes along and someone has to understand the stack of hacks to understand it, before it can be added. There’s something to be said for a system that can be understood.
There is something to be said for a system to be understood. I totally agree. I also think there’s something to be said for a reliable, more-secure system that can be effortlessly used by hundreds of millions of people. A slice of them will probably do things that were worth the effort. The utilitarian in me says make it easy for them to get connected. The pragmatist also says highly-usable, effortless experience leads to more benefits in terms of contributions, donations, and/or business models. These seemingly-contradicting philosophies overlap in this case. I think end justifies the means here. One can always refactor the cruddy code later if it’s just one component in the system with a decent API.
The problem isn’t the code, it’s the system that it’s participating in.
This just leads to systemd, and more misfeatures…
There are Linuxes without systemd. Even those that have it didn’t before they got massive adoption/impact/money. So, it doesn’t naturally lead to it. Just bad decision-making in the groups controlling popular OS’s, from what I can tell. Then, there’s also all the good stuff that comes with their philosophy that strict OS’s like OpenBSD haven’t achieved. The Linux server market, cloud, desktops, embedded, and Android are worth the drawbacks if assessing by benefits gained by many parties.
Personally, I’m fine with multiple types of OS being around. I like and promote both. As usual, I’m just gonna call out anyone saying nobody can critique an option, or someone else saying it’s inherently better than all alternatives. Those positions are BS. These things are highly contextual.
This is really great. I wish all other projects could do that, preferring elegance to throwing code at the wall, but sometimes life really takes its toll and we cave and just build a Frankenstein to get shit done.
I really appreciate all the work by the OpenBSD folks. Do you have any idea how the other *BSDs deal with wireless?
I don’t - sorry :D
What’s really sad is that the security of other operating systems can’t keep up despite their having more manpower.
It’s almost like if you prioritize the stuff that truly matters, and be willing to accept a little bit of UX inconvenience, you might happen upon a formula that produces reliable software? Who would have thought?
That’s what I told the OpenBSD people. They kept on with a poorly-marketed monolith in an unsafe language, without the methods from CompSci that were knocking out whole classes of errors. They kept having preventable bugs and adoption blockers. Apparently, the other OS developers have similarly hard-to-change habits and preferences, with less focus on predictable, well-documented, robust behavior.
I think this is just a matter of what you think matters. There’s no sadness here. The ability to trade off security for features and vice versa is good. It lets us accept the level of risk we like.
On the other hand, it’s really sad, for instance, that OpenBSD has had so many public security flaws compared to my kernel ;P
What’s your kernel?
It’s a joke. Mine is a null kernel. It has zero code, so no features, so no security flaws. Just like OpenBSD has fewer features and fewer known security flaws than Linux, mine has fewer features but no security flaws.
Unlike OpenBSD, mine is actually immune to Meltdown and Spectre.
Not having public flaws doesn’t mean you don’t have flaws. Could mean not enough people are even considering checking for flaws. ;)
Oh OK lol.
Would you like to clarify what you mean by this comment? Cause right now my interpretation of it is that you feel entitled to have complicated features supported in operating systems developed by (largely unpaid) volunteers.
I’m getting a bit tired of every complaint and remark being reduced to entitlement. Yes, I know that there is a lot of unjustified entitlement in the world, and it is rampant in the open source world, but I don’t feel entitled to anything in free or open source software space. As someone trying to write software in my spare time, I understand how hard it is to find spare time for any non-trivial task when it’s not your job.
Though I am not a heavy user, I think OpenBSD is an impressive piece of software, with a lot of thought and effort put into the design and robustness of the implementation.
I just think it’s somewhat disheartening that something this common (switching wireless networks) was not possible without manual action (rewriting a configuration file, or swapping configuration files, and restarting the network interface) every time you needed to switch or moved from home to the office.
Whether you feel like this is me lamenting the fact that there are so few contributors to important open source projects, me lamenting the fact that it is so hard to make time to work on said project, or me being an entitled prick asking for features on software I don’t pay for (in money or in time/effort) is entirely your business.
Just for the record I didn’t think you sounded entitled. The rest of the comment thread got weirdly sanctimonious for some reason.
Volunteers can work on whatever they want, and anybody’s free to comment on their work. Other operating systems have had the ability to switch wifi networks now for a long time, so it’s fair to call that out. And then Peter went and did something about it which is great.
Previously I’ve been using http://ports.su/net/wireless for wifi switching on my obsd laptop, but will use the new built-in feature when I upgrade the machine.
Some of the delay for the feature may be because the OS, while very capable, doesn’t seem designed to preemptively do things on the user’s behalf. Rather the idea seems to be that the user knows what’s best and will ask the OS to do things. For instance when I dock or undock my machine from an external monitor it won’t automatically switch to using the display. I have a set of dock/undock scripts for that. I appreciate the simple “manual transmission” design of the whole thing. The new wifi feature seems to be in a similar spirit, where you rank each network’s desirability and the OS tries in that order.
Interesting, I didn’t know about that either. I used my own bash script to juggle config files and restart the interface, but the new support in ifconfig itself is much easier.
I think the desire for OpenBSD not to do things without explicit user intent is certainly part of why this wasn’t added before, as well as its limited use as a laptop OS until relatively recently.
Thanks for taking the time to respond.
To be clear, I don’t believe you’re some sort of entitled prick – I don’t even know you. But, I do care that people aren’t berating developers with: “That’s great, but ____” comments. Let’s support each other, instead of feigning gratitude. It wasn’t clear if that’s what you were doing, hence, my request for clarification.
That being said, my comment was poorly worded, and implied a belief that you were on the wrong side of that. That was unfair, and I apologize.
Well, I’m just not going to touch this…. :eyeroll:
I apologize if my response was a little bit snide. I’ve been reading a lot of online commentary that chunks pretty much everything into whatever people perceive as wrong with society (most commonly: racism, sexism, or millennial entitlement - I know these are real and important issues, but not everything needs to be about them). I read your remark in that context and may have been a little harsh.
Regarding the last segment - how WiFi switching worked before - there may have been better ways to do this, but I’m not sure they were part of the default install. When I needed this functionality on OpenBSD, I basically wrote a bash script to do these steps for me on demand, and that worked alright for me. It may not have been the best way, so my view of the OpenBSD WiFi laptop landscape prior to the work of Peter may not be entirely appropriate or accurate.
I’m more blunt here: leaving that to be true in a world with ubiquitous WiFi was a bad idea if they wanted more adoption and donations from the market segment that wanted good, out-of-the-box support for WiFi. If they didn’t want that, then it might have been a good choice to ignore it for so long to focus on other things. It all depends on what their goals were. Since we don’t know them, I’ll at least say that it was bad, neutral, or good depending on certain conditions, like with anything else. The core userbase was probably OK with whatever they had, though.
First, both free speech and hacker culture say that person can gripe about what they want. They’re sharing ideas online that someone might agree with or act on. We have a diverse audience, too.
Second, the project itself has developers that write cocky stuff about their system, mock the other systems, talk that one time about how they expect more people to be paying them with donations, more recently talk about doing things like a hypervisor for adoption, and so on. Any group doing any of that deserves no exception to criticism or mockery by users or potential users. It’s why I slammed them hard in critiques, only toning it down for the nice ones I met. People liking critiques of other projects, or wanting adoption/donations, should definitely see others’ critiques of their projects, esp if it’s adoption/donation blockers. I mean, Macs had a seamless experience called Rendezvous or something in 2002. If I’m reading the thread right, that was 16 years before OpenBSD had something similar they wanted to make official. That OpenBSD members are always bragging when they’re ahead of other OS’s on something is why I’m mentioning it. Equal treatment isn’t always nice.
“But, I do care that people aren’t berating developers with: “That’s great, but ____” comments. Let’s support each other, instead of feigning gratitude. It wasn’t clear if that’s what you were doing, hence, my request for clarification.”
I did want to point out that we’ve had a lot of OpenBSD-related submissions and comments with snarky remarks about what other developers or projects were doing. I at least don’t recall you trying to shut them down with counterpoints assessing their civility or positivity toward other projects (say NetBSD or Linux). Seems a little inconsistent. My memory is broken, though. So, are you going to counter every negative remark OpenBSD developers or supporters make about projects with different goals, telling them to be positive and supportive only? A general rule of yours? Or are you giving them a pass for some reason but applying the rule to critics of OpenBSD choices?
I’m not the Internet Comment Police, but you seem to think you are for some reason… Consider this particular instance “me griping about what I want.”
This wasn’t about OpenBSD at all. This started out as a request for clarification on the intent of an ambiguous comment that seemed entitled. There seems to be a lot of that happening today, and a lot of people defending it for whatever reason, which is even worse.
Your comments came off that way to me, between the original and the follow-ups. As far as it not being about OpenBSD: it’s in a thread on it, with someone griping it lacked something they wanted. The OpenBSD members griping about third-party projects not having something they wanted to see more of typically got no comment from you. The inconsistency remains. I’m writing it off as you just being a fan of their style of thinking on code, quality, or something.
i think he’s sad that there haven’t been enough volunteers to make it happen sooner
That’s certainly one possibility, but not how I took it initially, and why I asked for clarification. I’ve seen too many people over the years attempt to disguise their entitlement by saying “thanks.”
I’d have liked to see this comment worded as:
Now, it’s also possible that the OP has ties to OpenBSD, and the comment was self-deprecating. But, one can’t infer that from the information we see without investigating who the OP is, and their affiliations…
one can’t infer anything beyond what they said
I’m not sure you understand what infer means. One certainly can infer meaning from a comment, based on previous actions, comments, etc..
My point remains: It’d be nice if the OP would clarify what they mean. My interpretation of the OP’s comment is just as likely as your interpretation. My interpretation is damaging to the morale of existing volunteer contributors to FOSS, and gives potential contributors to FOSS reasons not to contribute altogether. I don’t know about you, but I want to encourage people to contribute to FOSS, as doing so moves us closer to a free and open society. And that alone is the reason I’m even bothering to continue responding to this thread…
he said “it’s sad.” that’s all we know. the leap is that this means “entitlement.”
“It’s pretty sad that it took someone else so long to prioritize work I think is necessary.”
I think it’s pretty easy to take what was written and read it this way. But maybe my glass is half empty today.
One can infer based on a comment, but the inference will most likely be dimwitted bullshit.
Without the magic trifecta of body language, vocal intonation, and facial expression us monkeys are just shit at picking up on any extra meaning. So take the comment at face value.
It expresses gratitude, it focuses on a specific recipient, and it lauds the feature. After, it regrets that it couldn’t/didn’t happen earlier.
There’s no hidden meaning here, and if the commenter intended a hidden meaning he’s a dufus too, because there’s no unicode character for those. U+F089A6CDCE ZERO WIDTH SARCASTIC FUCK YOU MARK notwithstanding.
At some point we all need to stop insisting that we have near-telepathic powers, especially outside of meatspace.
So, what you’re saying is that I can write anything I want, and since you can’t see or hear other clues, there’s no way you can downvote (in good faith) this comment as trolling?
Not sure text works that way…
They had the solution to do it all the time, but it wasn’t invented here, so it’s bad.