All aboard the follow train - https://mastodon.social/@mulander
Followed. I just set up https://mastodon.social/@munyari
I, too, am on this network, at https://icosahedron.website/@mjn
Something that I think would fit the federated model well, if @jcs wanted to do it (or let someone else do it with the domain), is if there were a lobste.rs server. In addition to being able to follow individual people who use any federated server, the client also supports viewing a global firehose (all federated servers) or a local firehose (just this server). The latter, on smallish servers, can provide a kind of IRC-like community, while also interoperating with the broader Twitter-like usage.
I think people are still working out what the social configuration of a federated system would look like, though. Some tech exists, but a lot is still up in the air.
I’m https://mastodon.social/@stevelord if anyone wants to follow me. Mostly posting AVR development and security stuff.
I’m there at pnathan.
My Twitter tends to be a blend of liberal retweets, bad jokes, and occasional tech remarks.
I also have some kind of other gnu social account I forgot somewhere, but it wasn’t interesting enough there…
I’m there, https://mastodon.social/@balrogboogie
He doesn’t highlight this strongly, but I will: this is for x86 CPUs. Other CPUs have different designs for handling interrupts, different expectations, etc.
It’s a fascinating journey when you start digging into the deeper parts of different CPUs. Highly recommend to anyone serious about our art.
I don’t feel about this as strongly as @michaelochurch (I don’t know that I feel about anything as much as he does about everything) but yeah, this is pretty clearly at best a neutral fact, and an artifact of the truth that, to a first approximation, nobody knows how to deliver good software.
nobody knows how to deliver good software
This is provably false with data available, but with a caveat I think you’d probably agree with me on.
We definitely know how to deliver good software, and we figured it out somewhere in the mid-80s. Capers Jones even has a textbook, “The Economics of Software Quality”, showing that it’s cheaper in the long term to produce high-quality software than low-quality software built with less definition and less explicit effort on quality. High-quality projects also ship sooner, with fewer bugs, according to the data.
EXCEPT: We don’t know how to define our work, and most people in management fear committing to any outcome, as they can be judged by it. Without a definition of what we’re trying to achieve with any serious amount of design work, any other attempts at quality are largely a joke. When management says something like “we should be flexible in the ever changing landscape of <insert completely static requirements>”, call them out.
Finally, individuals have made enormous sums of money shipping absolute piles of steaming shit because they are defining new markets of which they are monopolies. I strongly believe they could have made MORE with higher quality software.
You’re absolutely right on the point that we know how to make good software. It’s not mathematically impossible. It’s just infeasible on the schedule and under the resource budget that corporations, even when software is their main product, expect.
I think the main issue pertains to where the costs and risks are put. The people who thrive in the corporate world are those who externalize negatives (costs, risks, embarrassments) in a way that they’ll stick to someone else. One easy way to do this is to externalize into the future, since any politically capable manager will be promoted away from whatever he’s working on before anything bad happens.
With good software, the risks are that schedules will slip, hiring will be slow because it’s hard to find good people, and the engineers might end up knowing more than their bosses. These threaten a middle manager’s position in the short term. With bad software, the costs and risks are greater, but the probability that they manifest in a way that harms the manager’s career is very low. More likely than not, the shortfall will be detected years later, and that will provide enough time for the savvy corporate social climber (i.e. the middle manager) to blame someone else; if nothing else, he can always blame his subordinates… and he can always say, “I may not have communicated the importance of quality clearly enough, but that was 5 years ago and I’ve grown as a manager; I’m an SVP now!”
The only thing that business executives have insight into is how long things take, not how well they were done. Consequently, they’re always going to favor the shitty thing delivered early over the great thing delivered late.
If you care about quality software, your best bet is to work for a government or for a corporation that has been around for 50 years and plans to be around for another 50. This doesn’t guarantee technical excellence, but there is a shot at it.
Well, if you want to go this way, you could go all the way. If a company is committed to ship high-quality software, that means that some plan can fail to be adopted because it is incompatible with shipping quality software. And most likely the person who can determine this incompatibility would be more technical than the current set of top managers. This means one more person with effectively company-wide veto power, and that slightly devalues the power of the top managers, not just threatens the middle managers.
I agree. We know how to deliver quality software. See the CMM level 5 companies on this list:
I don’t know that an SEI rating can actually prove that a company knows how to deliver quality software. It proves that they can work in a prescriptive manner. But that’s just process, not output.
Specifically they must prove that they have metrics to measure the quality of their output, especially with defect rates per function point (line of code, etc), for 4 or 5 levels.
Maybe I don’t understand your point, but I don’t feel that it’s “just process” if they have requirements around what you measure (as opposed to what you do).
Accenture is CMM level 5 per the above link. I would not hire them to come within 1000 yards of a quality software project.
In practice this process model doesn’t correlate well with anyone actually being satisfied with the quality of software produced.
Note that it’s a very specific division of Accenture. Not trying to argue your personal experience.
Yes, exactly. I think many on here are probably not old enough to remember the CMM craze. There was a LOT to getting CMM certified, and IMO attaining a level 5 CMM certification has weight.
That’s not true. Airplanes, air traffic control systems, spacecraft, and most other safety-critical embedded systems all have “good” software that doesn’t fail very often. The problem is that it takes a lot of time and is very expensive, and most people choose cheap and broken over expensive and correct.
Open office plans make all kinds of development more expensive because nobody can concentrate for long and that makes everything take longer.
Speaking with embedded dev hat on. One reason for a lot of the systems working properly is very limited interaction surface which is easy enough to test semi-comprehensively. Another is limited, conservative functionality: e.g. PLCs use ladder logic since forever.
That said, there’s rarely rocket science inside all that, and it’s not particularly pretty. I’ve seen an industrial Ethernet switch vendor run Node, seizing the switching fabric when you refresh the stats page; a certain large PLC/automation vendor who can’t TCP/IP properly and so on. The reason one would think things are smooth there is one never sees the ugly side of it :)
I mean, I did say, “to a first approximation.” Sure, we could all work like NASA did on the shuttle flight control software, but then much less software would be produced. I think I’d probably be OK with that, but there are other imperatives at work.
And yes, open plan offices are a catastrophe.
I think that proves the point right? We do know how to deliver quality software, it’s just not a tradeoff we’re willing to make for software that isn’t critical to safety.
You can argue that OpenSSL et al. are critical to safety too, but they aren’t in direct control of a car or the space shuttle.
Sure. But the incentive structure we labor under is such that the choice is between shitty software and no software, realistically. I do wonder about the implicit warranty waiver and other shitty language in license agreements, as this problem is not one of science, but rather of law.
A lot of those systems are riddled with bugs that are avoided not because the software is robust but because the only users of the software are highly trained, quite literally, in what not to do with it. You don’t have Joe Random User just sitting down in the front of an A340; you have a guy/gal who’s been told specifically, “if you put these inputs into the flight computer you cannot trust the output, so don’t do that.”
It’s gotten better as time has gone on but it’s not as perfect as you’d think.
I’d take it a step further. For the most part we don’t really have an agreement on what good software is, which is pretty much a pre-requisite for reliably delivering it.
Is the facebook android app “good” software?
Is the linux kernel “good” software?
Einstein once said time is what you measure with a clock and space is what you measure with a stick. For most of us, good software is defined to be software our superiors think is good and our customers are willing to use.
That’s provably wrong. There’s companies that have been delivering good software for years. The high-assurance field all do it. Then there’s companies in competitive industries that charge a bit more for delivering better stuff. Comments downthread act like you need a NASA budget. They didn’t see the evidence clearly.
Cleanroom’s empirical results had it delivering very low defect rates at an overhead ranging from slightly less than that of ordinary, broken software to not much more:
http://infohost.nmt.edu/~al/cseet-paper.html
Old, high-assurance systems showed that reaching that level of formal verification of design and near-exhaustive testing of a system’s security (their focus) carried a 30-40% premium over a regular software process:
https://cryptosmith.files.wordpress.com/2014/10/lock-eff-acmp.pdf
A modern shop that does similar stuff claimed a 50% premium for software with almost no defects whose TCB could often provably avoid common defects:
http://www.anthonyhall.org/c_by_c_secure_system.pdf
Then there’s niche operators doing things such as using logic programming to encode the specs of the problem in a way that bypasses much of the coding and flexibility problems:
https://dtai.cs.kuleuven.be/CHR/files/Elston_SecuritEase.pdf
So, yes, there’s companies delivering good software. Some Cleanroom companies and Altran/Praxis even warranty their software to a specific defect rate. They fix any other problems at their own cost. Most of the problems are from either (a) industry not knowing the methods available to deliver low-defect software or (b) bad management making sure that doesn’t happen. Michael writes about the latter. My stuff is about the former mostly. There’s a lot of both going on.
hey NickP, I’ve seen you around, and you are pretty knowledgeable about this stuff. (Didn’t you post on Schneier’s blog a lot a few years ago?)
Do you perchance have an essay where you could elucidate “How To Do The Job Right”, along with sources?
Yep, it’s me. I switched to nickpsecurity because NickP wasn’t available on many sites. I stayed for years posting my designs on Schneier’s blog since it used to have many talented engineers and businessmen delivering great peer review. Many left after multiple trolling operations, one maybe state-sponsored, drowned out all the signal. I had to mostly back off. Now on Hacker News and here.
As for the paper, I’m not sure what that title would mean. Are you talking about corporate issues? I recommend the book Peopleware as a start on that. Are you talking about one of my essays on security engineering, certification, or embedding into businesses? With more detail, I can try to dig it up. I have a text file with links to many of them.
What is the specific advantage of being patch-based vs. file/object-based? I haven’t dug into pijul at length, but a familiar problem with git/hg that, e.g., ClearCase solved was renames, by abstracting files out into first-class objects.
Happy to read any academic research on the differences, btw. :)
An interesting tidbit: https://en.wikipedia.org/wiki/SRGB - the color space an sRGB monitor can display is far smaller than what normal human eyes can perceive.
This becomes an interesting thought exercise when you consider that images these days typically reach us through screens: we’re literally losing pieces of information between source reality and eyeball sensing.
If you take up painting, you will, I think, realize that the nuances of color in paint exceed the capacity of the monitor to display, as I did.
Yes, it’s even more fascinating if you think about it: cameras nowadays are very much capable of capturing all perceivable colours. This information is still present in the raw files, but once you do raw postprocessing, it gets lost.
This is one reason why, if you take a photo of a sunset with rich, almost spectral reds and yellows, it turns out dull and almost uninspiring on your screen. Display technology is a joke, and I’m sure that in twenty years we’ll look back at today’s displays the way we now look back at CRT screens of the mid-90s. It’s hard to find good fluorescents in screens, but OLEDs and mLEDs might change this dramatically (research is underway).
Linux really needs better colour management. Everything is assuming sRGB, and we missed the chance in Wayland to do it right. Hell, even GIMP didn’t have proper colour management until a few years ago. Like in the Allegory of the Cave, we might be shocked in a few years when we realize that the entire infrastructure needs to be “reworked” in the interest of being able to actually see the colours our cameras are capturing. And up to this point, I hope everybody keeps their RAW data so they might actually be able to leverage the screens of the future. For what it’s worth, nobody would only keep the b/w prints if they had taken the images in colour in the first place.
Yes, the infrastructure is nearly completely absent outside of highly specialist equipment & software. You have to ensure the camera picks up everything, but cameras these days render to a screen (phone or on-device LED); then, that has to be examined on the computer. The pipeline is flawed: raw data -> software to display/manipulate it -> device drivers / OS display software -> monitor. Then as you manipulate it, you are adjusting an incorrect file with an incorrect rendering.
I would guess that Windows will eat this space alive in about 5 years, as Apple commodifies and shifts downmarket and the Linux world keeps falling off the cliff of locking into the server market. :-/
Real color theory is very complex; this author, however, just gave up and dealt with the HSL color wheel, further teaching another generation that we need not look beyond RGB color spaces, which in my opinion will prove a mistake once display technology gets better.
What resources do you recommend as an alternative? I’m interested in color theory and alternate color spaces, but haven’t really found any accessible resources for learning about it.
I would recommend getting your hands dirty and diving into the theory. There’s lots of handwaving in most sources available. Try to understand the motivation behind the XYZ primaries (to keep it short, they are three imaginary colors that, combined linearly, can create any perceivable colour), which followed from the fact that there are colours which cannot be “mixed” from RGB. I found the video series “Color Vision” by Craig Blackwell 1 2 3 4 5 to be very helpful and enlightening.
Now, once you’ve understood that, you have to understand that the “perfect” spectral colors (i.e., monochromatic light of different wavelengths) are the “boundaries” of human vision; so to speak, the convex hull of the XYZ coordinates of all visible spectral colours (modulo lightness) is the realm of human vision. You can get those coordinates here. It turns out these “coordinates” X(lambda), Y(lambda), Z(lambda) are called color matching functions (you can get them there as CSVs).
The final piece in the puzzle is to convert these XYZ coordinates to common normalized formats like xyY or even CIELab or CIELuv, or even polar coordinate forms like HCL(uv). All these formulas can be found here, an excellent resource of transformation functions between all kinds of different formats, including XYZ <-> xyY and so on, by Bruce Lindbloom. He is a really awesome guy. Now the only thing left to do is plot the xyY data in ParaView or something and if you look at just the xy-plot you’ll see the familiar horseshoe. :)
If you look at the color transformations from RGB to XYZ, you’ll see that RGB data includes a lot of implicit information, including the reference white and which RGB primaries you actually choose. Most people, talking about RGB, are talking about sRGB, but you can “invent” an RGB color space easily by just defining three primaries. If you manage to convert an RGB coordinate to an XYZ, Luv or Lab coordinate, the latter are all “universal” in describing color perception. These spaces allow you to express any visible colour. With RGB, they over the years defined new colour spaces (Adobe RGB, ProPhoto RGB and so on), moving the primaries a bit to make the range of colours bigger, but in the end there are still quite a lot of colours left which just cannot be expressed within currently “popular” RGB colour spaces. There are drawbacks to working with anything other than sRGB coordinates (imaginary colours, no good software backing, lots of superfluous nomenclature, …), but you gain a lot of things. For instance, CIELuv is perceptually uniform, which means that if you select equidistant colors on a “line” within the uv-diagram, these colours will all be equidistant with regard to perception. If you work with HSL based on RGB, this does not work, which is very cumbersome if you want to select nice colours for diagrams and such. The reason for that is for instance that sRGB has a nonlinear gamma curve. You might want to work with Linear RGB then, but if you do any of that stuff on your data, you might as well work with the real thing.
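To make the RGB-to-XYZ conversion discussed above concrete, here is a minimal Python sketch of the sRGB -> XYZ -> xyY chain. The matrix and gamma curve are the standard sRGB (D65) ones; the function names are my own.

```python
def srgb_to_linear(c):
    # Undo the sRGB gamma curve; input is a channel value in [0, 1]
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_xyz(r, g, b):
    # Linearize each channel, then apply the sRGB-to-XYZ matrix (D65 white)
    rl, gl, bl = (srgb_to_linear(c) for c in (r, g, b))
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return x, y, z

def xyz_to_xyy(x, y, z):
    # Project to chromaticity coordinates, i.e. the horseshoe diagram
    s = x + y + z
    return x / s, y / s, y

# sRGB white should land near the D65 chromaticity (x ~0.3127, y ~0.3290)
white = xyz_to_xyy(*srgb_to_xyz(1.0, 1.0, 1.0))
```

Plotting the (x, y) pairs of many colors with this code gives you the familiar horseshoe mentioned above.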
I hope this introduction was sufficient to get you started. Send me a mail if you want to see some code. I might give a talk about it this year, but it takes some preparation.
Woah, thanks for the intro - I remember seeing your slcon talk a few months ago on color spaces and farbfeld, which is initially what got me interested. I’ll probably dive into all of it a bit more soon when I start fiddling with 3D graphics again. Thanks for the resources - I’ll definitely check them out :D
not OP, but https://www.handprint.com/HP/WCL/wcolor.html is probably the most accessible treatment I’ve found.
Color theory is interesting, but (I feel) a bit of a rabbit hole. I learned a little about color spaces a few years back while writing a blog post about how the difference in color space and gamma value between older Mac and Windows computers might have influenced some of the popular color themes.
I found Danny Pascale’s “A Review of RGB Color Spaces” a useful (if somewhat terse) source of information about computer color spaces.
Are there non-RGB displays coming down the pipeline? It makes sense to design for the displays that exist rather than the ones that might. I think this article is, as the title says, going for practical color theory rather than comprehensive color theory.
I assume you are referring to displays that can display colours beyond sRGB. Yes, these displays exist already. In fact, the current iPhone and most AMOLED displays fall within that group.
Real color theory is very complex
I think the issue here isn’t really “real color theory” as in “all there is to know about color”, but rather “I’m making a blog; what colors should I make all the stuff in it?”, and choosing 2 colors, then making a light and dark version of each, is a solid strategy for an otherwise ‘design-illiterate’ developer.
This is both intelligent and moronic at the same time. Simplicity is better, but ignorance of what you’re doing and needing is worse. Don’t farm off the understanding of your problem and your solutions to someone else, some vendor happily offering those solutions for a low, low cost… you’ll get hurt.
I deal with this a lot: don’t use NoSQL reflexively. Use a dang Postgres instance until it’s clear that the reasons for using a NoSQL database ( e.g., dynamodb) outweigh using a SQL database (approximately this looks like “very high volume with very few relations”, but YMMV - do the due diligence).
I think this is backwards. If you’re using Postgres you’re signing up to a lot of constraints and non-obvious failure modes (“you want to do a 5-way unindexed join? Sure, knock yourself out. Oh, you put a few more rows in that table and your query’s running slowly?”) when you aren’t necessarily reaping the benefits (e.g. your database may be doing a lot of work enforcing ACID when you haven’t actually set up your transaction boundaries to correspond to something with business meaning; your inserts may commit slowly because your indices have to be updated every time when actually those indices are only used by daily batch queries; your database server may spend most of its CPU time parsing SQL syntax when you don’t even do any ad-hoc queries).
one might take a look at … https://www.mercurial-scm.org/wiki/ChangesetEvolution
I think human curated directories for specific niches are an idea worth pursuing. It’s essentially a link aggregator without the time domain.
The dump is mostly junk from some contractor’s home directory, no code or exploits, but lots of PDFs of file lists for possibly interesting stuff. I can’t help but feel like this is a distraction from the Russia story to discredit one of the agencies that has a painful relationship with Trump. I’m generally in favor of fairly radical transparency efforts, but I’m suspicious that the intention of whoever gave this material to WL was to distract, rather than illuminate.
I agree that this dump is crap. https://wikileaks.org/ciav7p1/cms/files/Sassy-Cat-Pic-640x607.jpg
Unit Tests! The CIA’s secret NOFORN weapon!
Eh. The James Coplien rant that went by awhile back.
Of course, it occurs to me there are all these documents in a trove of documents from a bunch who are world class experts in exploiting bugs in document readers to infect systems….
I do sort of wonder whether we haven’t been epically trolled by the CIA…
I don’t use this generic enterprise service bus pattern with queues. Some orgs do, but I’ve not been there.
What I tend to do is use it as a replacement for HTTP when I don’t need a response - the queue service is HA with durable messages (SQS, if you’re curious), and the consumers are unreliably there. The messages get axed when correctly processed, or sent off to a dead letter queue after a certain amount of time. Generally I strive for idempotent operations as a matter of design, just in case, but of course you can’t always do that. The key idea here is that it allows a buffer for spikes, as well as transient failures on the consumer side. There are drawbacks, of course - the added complexity is not always trivial in this design, but it’s far simpler than the article’s pubsub system. And, while it centralizes a lot of things into a SPOF system, it allows the contrarian advice of “putting all your eggs in one basket, then watching that basket very hard”.
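The pattern described above can be sketched as a toy in-memory version: delete on success, redeliver on failure, dead-letter after too many attempts, and track processed IDs for idempotency. All names and the retry limit are illustrative; a real deployment would lean on SQS or similar rather than this.

```python
import collections

MAX_ATTEMPTS = 3  # illustrative redrive limit

queue = collections.deque()
dead_letters = []
processed_ids = set()  # idempotency guard: skip already-handled messages

def consume(handler):
    attempts = collections.Counter()
    while queue:
        msg = queue.popleft()
        if msg["id"] in processed_ids:
            continue  # idempotent: already handled, just drop it
        attempts[msg["id"]] += 1
        try:
            handler(msg)
            processed_ids.add(msg["id"])  # "delete" only after success
        except Exception:
            if attempts[msg["id"]] >= MAX_ATTEMPTS:
                dead_letters.append(msg)  # give up, park for inspection
            else:
                queue.append(msg)  # redeliver later

def flaky_handler(msg):
    if msg["body"] == "boom":
        raise ValueError("simulated transient failure")

queue.extend([{"id": 1, "body": "ok"}, {"id": 2, "body": "boom"}])
consume(flaky_handler)
# message 1 is processed; message 2 ends up in dead_letters
```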
Unplanned outages are far worse than planned outages for risk management. And it was prime-time US time. And most orgs aren’t willing to incur the costs for being multicloud because it’s expensive to transition to that, so a very low ROI. ::shrug:: It is what it is.
If you want to guarantee reliability and invest deeply in your infrastructure for long-term reliably, build multicloud from the getgo, using open source components and an interface layer that permits multicloud rebalancing and switching. Not trivial, and it’s mostly a long-term play.
Sounds like there’s room for an open source solution that handles abstracting over the major cloud providers.
Privacy for the high-tech developed world is dead. What we have now is a zombie: any attention will deconstruct the spell and it’ll fall apart. The future is unevenly distributed, and you can slow it down by not acquiring an Alexa and smartdevices through the house; that will, eventually, not be an available option. Anyway, the implication is that if you would keep your thoughts secret, keep them in your head, unspoken, unwritten.
I worry about the inevitable “you can’t get car insurance unless you attach this gps dashcam to your car”. I’m sure there will be similar problems with “let us track your TV viewing/temperature/electricity/internet usage for a percentage off”.
In my country insurance providers are already offering discounts if you let them install their telemetry device :(
And then a decade later, “The biggest cause of accidents is people texting and driving. Install this rootkit on your phone if you want insurance.”
shared cultural rituals help establish tribal identity. news at 11.
but, by peering into the swirls of the tea and unpacking the leaves, we can start to reflect on our own world and how we choose shibboleths and rituals. what kind of tribe are we building and sustaining? never doubt that we build a tribal culture…
it’s a very good blog post.
And what is it that programmers are supposed to do to prevent defects? I’ll give you one guess. Here are some hints. It’s a verb. It starts with a “T”. Yeah. You got it. TEST!
The closed-mindedness of this statement astonishes me. How to reliably write correct software is by and large an unsolved problem, especially when you add the constraint “cheaply enough”. Anything that increases the likelihood and reduces the cost of finding errors must be considered welcome. Rejecting one approach because we are too emotionally invested in another is dangerously irresponsible.
This is a lesson I had to learn the hard way. At some point in time, I was rabidly anti-testing. It offended my sensibilities (and, frankly, it still does) to run my code against a few test cases, when I had already proven it correct on paper for all infinitely many cases. Until one day I made an error while transcribing a proven correct program: I replaced a variable called v with another called w. As luck would have it, both variables were of type int, so the type checker couldn’t find the error. The consequences were completely hilarious in retrospect - but back then I was furious.
All these constraints, that these languages are imposing, presume that the programmer has perfect knowledge of the system; before the system is written.
Now we know who hasn’t experienced the joy of refactoring typeful code.
And how do you avoid being punished? There are two ways. One that works; and one that doesn’t. The one that doesn’t work is to design everything up front before coding. The one that does avoid the punishment is to override all the safeties.
This contradicts my experience. Although I don’t use Swift or Kotlin, I use languages that are arguably even more typeful than them (Standard ML and Rust), and the last thing that would cross my mind is to work around the safety checks. Au contraire! I design my programs so that types catch as much as can reasonably be caught. (And, of course, manually prove correct and test the rest.)
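A minimal sketch of “designing so that types catch things”, in the spirit of the v/w transcription story above: wrapping raw ints in distinct types turns that slip into a type error a checker like mypy flags, instead of a silent bug. The names here are purely illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Velocity:
    value: int

@dataclass(frozen=True)
class Weight:
    value: int

def velocity_label(v: Velocity) -> str:
    # Only accepts a Velocity; passing a Weight is a static type error
    return f"velocity={v.value}"

v = Velocity(10)
w = Weight(10)

label = velocity_label(v)     # fine
# velocity_label(w)           # rejected by mypy: Weight is not Velocity
```

The runtime cost of the wrapper is small, and the two types also compare unequal at runtime, so the mix-up can’t slip through an equality check either.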
This is a lesson I had to learn the hard way…
Knuth’s hilarious chestnut “beware of bugs in the above code; I have only proved it correct, not tried it” is, as typical, both useful and on-point. =)
I think this Knuth quote is going to get really annoying in a decade or two when dependent types and proof systems mature from academic experiments into industry tools.
Dependent types are beautiful, but there’s a real danger that we end up locking ourselves into a mindset where types are the only legitimate verification technique. One of the main points of my original message is that we must keep our options open.
Types solve some problems beautifully:
But types also have serious weaknesses:
Type systems tend to be more convenient for proving things about (“no program in this language ever deadlocks”) than proving things in (“this specific program doesn’t deadlock”). So type systems normally tackle the kinds of problems that language designers, rather than language users, want to rule out by construction.
Type theory is fundamentally based on natural deduction: typing rules are deduction rules, not axioms. This is great for implementing a type system on a computer (which is dumb and can only execute pre-programmed rules anyway), but awful for humans to calculate in (since natural deduction proofs tend to be looooong, whereas axiomatic systems can be explicitly designed to shorten proofs).
The people using Esterel SCADE, SPARK Ada, Atelier-B, and Mercury are already annoyed. They figure it’s easier to build productivity or usability on top of their tooling for safe software than to turn usable, productive crud into something safe. Most of industry either disagrees or refuses to really assess the tooling. Consistently for years or over a decade, depending on the method. Hence the… aggravation.
already annoyed
I think of Rust/SML/Haskell as free editions of 6-figure installs of Coverity/CodeSonar/Klocwork. And comparing a reasonably sized well-typed program’s maintenance reliability/cost vs a puddle of Python (etc) is just laughable.
Mercury
Do you have a link to that? A couple searches doesn’t turn up anything.
Lol. That’s one way to look at it. Might even restate it similarly in the future when marketing such languages.
Regarding Mercury, it’s like a better Prolog with functional programming and performance boost.
At least one company uses it below. Many companies use Prolog since it’s essentially executable specifications for the domain. I’m sure Mercury could have similar use.
Oh, it’s that Mercury! I was thinking it was a provable software framework. Nifty! I’ll take a look into it again, I keep wanting to write Prolog for things, but get stuck in the “boring business code that isn’t easily statable as horn clauses” bit.
And again, I’m not saying it’s verified code so much as cheating around verification a bit by executing your specs in the logic. Changing the specs also involves no separate recoding step, thanks to that property.
One other thing that might help you is to remember there can be error in any technique of quality assurance. It can come from a lot of places. The important thing is that it might show up. Knowing this, it’s wise to have some redundancy: one technique catching problems that another missed counters a lot of that. It’s why the A1 class of the Orange Book required formal specs, proofs, reviews, info-flow analysis, testing, and pentesting. In high-assurance systems, authors would find that each of these caught at least one unique defect the others missed. Precise specs, human reviews, and tests discovered the most, consistently over time and at reasonable cost. Proofs discovered the most obscure corner cases, at high cost. Note that type systems are a form of formal specification, but far weaker than what those teams were doing: quick and dirty specs that catch common, small issues.
Exactly. Help in finding errors must be appreciated from wherever it might come, because problems can come from a lot of sources too. Back then, I was so obsessed with one tool (proofs) that handles some problems (coming up with the right design) that I had neglected that other problems were possible too (in this case, making sure that the implementation agrees with the design).
Shockingly, centralized decryption system controlled by third party allows snooping by other third party. This should be absolutely expected from Facebook, since Facebook’s perspective on privacy (from FB) has been well established, and since the system allows it.
Signal seems to be better. But, I look forward to having a fully independent crypto real-time messaging system that federates (i.e., allowing you and your friends to stand up a server and control its software), which would overcome the well-documented issues with Signal’s design.
I look forward to having a fully independent crypto real-time messaging system that federates (i.e., allowing you and your friends to stand up a server and control its software)
Check out https://conversations.im. It’s federated (based on XMPP), and supports multi-end to multi-end encryption using OMEMO, an improvement on the Off-The-Record protocol that Signal and WhatsApp use.
Has OMEMO been audited or formally proven secure? Until that’s been done, you probably should assume it’s broken. Innovations in security protocols tend to be of the disastrous kind when rolled out into production.
Yes, Radically Open Security published an analysis last June.
The OMEMO standard provides a protocol for secure communication with multiple devices. This protocol is only secure if both users apply good operational security in securing their devices and in adding devices of the other party. When both users are careful, they can set up a secure multi-device session. However, if one of the users makes a mistake and adds a malicious device, or if just one device of the users gets compromised, the authentication of all messages is compromised
Like matrix.org/riot.im? I’ve been using it with a few friends. They have E2E encryption with a system to easily trust/distrust other clients’ keys, and they’ve put a lot of work toward bridges to external networks, such as a really well-working IRC bridge and an upcoming full Slack bridge.
This will sound like a pedantic, weasel way of saying things, but “like” has no place in the area of crypto. “What are the qualities provided by the solution, have these qualities been vetted, and do they suit the needs of the operators?” is the question set I ask.
Matrix has gotten some press and has some momentum. I have not evaluated it. I would prefer to see a transparent audit by a trusted third party before I recommend or suggest it to people. I haven’t heard of such. It may be a rotten heap of vulns. It may be amazing. Assume it’s compromised until demonstrated otherwise; layer that compromise into your threat model.
Seems odd to announce intention to add more dependencies without stating what those dependencies are, and to say it’ll only support sbcl without a list of issues / problems that they’ll get to close.
I appreciate a maintainer’s need for sanity (WONTFIX4LIFE), but it seems odd for a project’s stated goal to be things that are largely seen as negative: less compatible, more interdependent.
Does anyone have other resources, ml posts, etc about this? I switched back to Ion after playing with stumpwm for a few months.
sbcl is the consensus standard libre Common Lisp on Linux. It’s not a bad thing to support other implementations, but when your time is limited, locking to SBCL is absolutely the right thing to do.
One dependency I’ve been trying to add for a long time is Alexandria, so we can use parse-body to correctly process the declarations and documentation string in defcommand.
If you look into the code base, you’ll find a worse (re)implementation of split-sequence (split-seq) and of other utilities that Alexandria already provides.
SBCL-only will simplify the event-loop implementation, which currently has a separate code path for SBCL, and it will allow easy access to OS features through sb-unix and sb-posix. Another way to achieve this would be using iolib, but that requires installing libfixposix, which would be a burden on users.
Well, as a StumpWM user and Common Lisp developer, I welcome both of these changes. They make it much more likely that I’ll contribute to the project in the future.
I follow StumpWM development a little bit on GitHub, and I really don’t think these changes are a surprise to anybody or will affect very many people.
I’m pretty sure all of the Linux distros that have StumpWM in their repositories already build with SBCL anyway.
Submitted because just about every opinion in it is wrong, but Martin is still influential so we’re going to see this parroted.
Sadly yes. Most bizarre is that he seems to be directly contradicting some positions he’s held re:professionalism and “real engineering”.
A sampler to save people having to read through the thing:
If your answer is that our languages don’t prevent them, then I strongly suggest that you quit your job and never think about being a programmer again; because defects are never the fault of our languages. Defects are the fault of programmers. It is programmers who create defects – not languages.
At some point, we agreed to stop using lead paint in our houses. Lead paint is perfectly harmless–defect-free, we might even say–until some silly person decides it’s paintchip-and-salsa o'clock or sands without a respirator, but even so we all figured that maaaaybe we could just remove that entire category of problem.
My professional and hobbyist experience has taught me that if a project requires a defect-free human being, it will probably be neither on-time nor under-budget. Engineering is the art of the possible, and part of that is learning how to make allowances for sub-par engineers. Uncle Bob’s complaint, in that light, suggests he doesn’t acknowledge the realities of real-world engineering.
You test that your system does not emit unexpected nulls. You test that your system handles nulls at it’s inputs. You test that every exception you can throw is caught somewhere.
Bwahahahaha. The great lie of system testing, that you can test the unexpected. A suitably cynical outlook I’ve seen posited is that a well-tested system will still exhibit failures, but that they’ll be really fucking weird because they weren’t tested for (in turn, because they weren’t part of the mental model for the problem domain).
We now have languages that are so constraining, and so over-specified, that you have to design the whole system up front before you can code any of it.
Well, yes, that sort of up-front design is the difference between engineers and contractors.
More seriously, while I don’t agree with the folks (and oh Lord there are folks!) who claim that elaborate type systems will save us from having to write tests, I do think that we can make tremendous advances in certain areas with these really constrained tools.
It’s not as bad as having to up-front design “the whole system”. We can make meaningful strides at the layer of abstractions and system boundaries that we normally do, we can quickly stub in and rough in those things as we’ve always done, and still have something to show for it.
I’ve discussed and disagreed at length with at least @swifthand about this: the degree to which up-front design is required for “Engineering” and the degree to which that is even desirable today. But something we both agree on is that these type systems have a lot to offer in making life easier when used with some testing. That’s probably a blog post for another day, though.
And so you will declare all your classes and all your functions open. You will never use exceptions. And you will get used to using lots and lots of ! characters to override the null checks and allow NPEs to rampage through your systems.
And furthermore, frogs will fill the streets, locusts will devour the crops, the water will turn to blood and sulfur will rain from the sky! You know, the average day of a modern JS developer.
More likely, you’ll start at the bottom, same as we’ve always done, and build little corners of your codebase that are as safe as possible, and only compromise in the middle and top levels of abstraction. A lot of people will write shitty unsafe code, but it’s gonna be a lot easier to check it automatically and say “Hey, @pushcx was drunk last night and made everything unsafe…maybe we shouldn’t merge this yet” than it is to read a bunch of tests and say “yep, sure, :shipit:”.
~
In general, this kinda feels like Uncle Bob is starting to be the Bill O'Reilly of software development–and that makes me sad. :(
You test that your system does not emit unexpected nulls. You test that your system handles nulls at it’s inputs. You test that every exception you can throw is caught somewhere.
For some languages that step is called compiling.
I’m generally not a fan of the “but types are tests”-argument, but you rightly call that out.
“Nullness” is something that can be modeled for the compiler to easily analyse, so I don’t understand why he calls it out (especially as non-null is the prevalent default case, and most errors of not passing a value are accidents).
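For concreteness, here is a minimal Rust sketch (purely illustrative, not from the article) of what “nullness modeled for the compiler” looks like: absence is an explicit `Option<T>`, non-null is the default, and an unhandled case is a build failure rather than a test failure.

```rust
// References in Rust are non-null by default; possible absence is an
// explicit Option<T>, so forgetting to handle the "null" case is a
// compile-time error, not a runtime NPE.
fn greeting(name: Option<&str>) -> String {
    match name {
        Some(n) => format!("Hello, {}!", n),
        // Deleting this arm makes the program fail to compile:
        // the check the article wants tests for happens at build time.
        None => String::from("Hello, stranger!"),
    }
}

fn main() {
    assert_eq!(greeting(Some("Ada")), "Hello, Ada!");
    assert_eq!(greeting(None), "Hello, stranger!");
}
```

No test suite had to enumerate the “forgot to pass a value” cases; the exhaustiveness check did.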
I wish I could upvote this comment a thousand times. Concise, funny, but also brutally true. You nailed it.
… plus a thorough type system lets the compiler make a whole bunch of optimizations which it might not otherwise be able to do.
Thank you for the thorough debunking I didn’t have the heart for.
In general, this kinda feels like Uncle Bob is starting to be the Bill O'Reilly of software development–and that makes me sad. :(
Clean Code was a great, rightly influential book. But the farther we get from early 90s tools and understandings of programming, the less right Martin gets.
This post makes total sense if your understanding of types and interfaces is C++ and your understanding of safety is Java’s checked exceptions and both are circa 1995. I used them, they were terrible! But also great because they recognized a field of potential problems and attempted to solve them. Even if they weren’t the right solution (what first system is?), it takes years of experience with different experiments to find good solutions to entire classes of programming bugs.
This article attacks decent modern systems with criticisms that either applied to the problem 20 years ago or fundamentally misunderstand the problem. His entire case against middle layers of his system needing to explicitly list exceptions they allow to wander up the call chain is also a case in favor of global variables:
Defects are the fault of programmers. It is programmers who create defects – not languages.
Why prevent goto, pointer math, null? Why provide garbage collection, process separation, and data structures?
I guess that’s why this article’s getting such a strong negative reaction. The argument boils down to Martin not understanding the benefits of features that are now really obvious to the majority of coders, then writing really high-flying moral language opposed to that understanding.
It’s like if I opened a news site today to read an editorial about how not using restrictive seatbelts in cars is the only sane way to drive, and drivers who buckle their kids into car seats are monsters for deliberately crashing their cars. It’s so wrong I can barely figure out where I’d even start if I hoped to explain the misunderstanding, and the braying moral condemnation demolishes my desire to engage. Martin’s really wrong, but he’s not working toward shared understanding, so he’s only going to get responses from people who think that makes for a worthwhile conversation.
Java’s checked exceptions and both are circa 1995. I used them, they were terrible!
Interestingly for me, I came from a scripting-language background and hated Java checked exceptions with a passion, because they felt tedious. It seemed lame that a large part of my programming involved IDE-generated lists of exceptions. As I got more experienced and started writing software that I really wanted not to crash, I started spending a lot of mental effort tracking down what exceptions could be thrown in Python and making sure I caught them all, relying on (and hoping) the documentation was accurate. I began to yearn for checked exceptions.
Ironically, it seems like in Java land they’ve mostly gone the route of magic frameworks and unchecked exceptions, so things like person.getName() can be used easily without worrying about whether the underlying runtime-generated bytecode is doing a straight-up property access or whether the attribute is being lazily initialized.
It seems like one of the simplest ways to retain your sanity is to uncouple I/O from your values and operate on simple Collections of POJOS. This gets into the arena of FP and monads, which use language level features to force this decoupling.
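A small Rust sketch of that yearning, under the usual analogy (my illustration, not code from any project discussed here): `Result` plays the role of a checked exception, with the error type declared in the signature and the compiler complaining if a caller drops it, while the logic itself runs on plain values, decoupled from I/O.

```rust
use std::num::ParseIntError;

// Pure function over plain values: no I/O, easy to test in isolation.
// The error type in the signature acts like a checked exception:
// Result is #[must_use], so the compiler warns if a caller simply
// discards it, and the value can't be used as a u32 until the error
// case is handled.
fn parse_age(input: &str) -> Result<u32, ParseIntError> {
    input.trim().parse::<u32>()
}

fn main() {
    // Every caller is forced to confront both outcomes explicitly.
    match parse_age(" 42 ") {
        Ok(age) => println!("age = {}", age),
        Err(e) => println!("bad input: {}", e),
    }
    assert_eq!(parse_age("42"), Ok(42));
    assert!(parse_age("forty-two").is_err());
}
```

Unlike documentation you hope is accurate, the possible failure is part of the type, so it can’t silently fall out of date.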
I also prefer the checked exception approach. Spent a lot of time with exceptions being thrown uncaught, got tired of it.
Why prevent goto, pointer math, null? Why provide garbage collection, process separation, and data structures?
I would say that Go has shown there is a middle ground somewhere between 100% type-proven safety and unsafe-yet-efficient paradigms.
I’m pretty fond of Rust, or Haskell, but also enjoy less strict tools like JS or Ruby. Of course, I would rather my auto-cruiser were written in Rust than Node, but one tool’s success does not mean the others are trash. I may be mistaken, but if Martin’s point is “type-safety sucks”, it seems you are just saying “non-type-safety sucks more”. I’m not convinced by either argument.
My point was that the language designers had deliberate reasons for the features they included or removed. I’m repeatedly asking “why” because Martin’s article dismisses the creators’ reasons with an argument about personal responsibility and by characterizing the features as punishments. The arguments Martin makes against these particular features also apply broadly to features he takes for granted.
I was writing entirely on the meta level of flaws in the article, not trying to argue for a personal favorite blend of safety/power features.
More seriously, while I don’t agree with the folks (and oh Lord there are folks!) who claim that elaborate type systems will save us from having to write tests, I do think that we can make tremendous advances in certain areas with these really constrained tools.
Yes. This. Exactly. Evolutionary language features AND engineering discipline. No need for either or, that’s just curmudgeonly.
then I strongly suggest that you quit your job and never think about being a programmer again; because defects are never the fault of our languages. Defects are the fault of programmers. It is programmers who create defects – not languages.
Another argument to be made is productivity, since he brought up a job. Productive programmers create maximum output in terms of correct software with minimum labor, where labor includes both time and mental effort. The things good type systems catch are tedious things that take a lot of time to code or test for; they get scattered all throughout the codebase, which adds effort when changing things in maintenance mode. Strong typing of data structures or interfaces, versus manually managing all that, will save time.
That means he’s essentially saying that developers using tools that boost their productivity should quit so less productive developers can take over. Doesn’t make a lot of business sense.
Bwahahahaha. The great lie of system testing, that you can test the unexpected. A suitably cynical outlook I’ve seen posited is that a well-tested system will still exhibit failures, but that they’ll be really fucking weird because they weren’t tested for (in turn, because they weren’t part of the mental model for the problem domain).
I wrote this a couple weeks ago, but I figure it’s worth repeating in this thread. I wrote a prototype in Rust to determine if using conjunctive-normal form to evaluate boolean expressions could be faster than naive evaluation. I created an Expr data type that represents regular, user-entered expressions and CNFExpr which forces conjunctive-normal form at the type system level. In this way, when I finished writing my .to_cnf() method, I knew that the result was in the desired form, because otherwise the type system would have whined. Great! However, it did not guarantee that the resulting CNFExpr was semantically equivalent to the original expression, so I had to write tests to give myself more confidence that my conversion was correct.
Testing and typing are not antagonists, they’re just different tools for making better software, and it’s extremely unnerving that someone like Uncle Bob, who has the ear of thousands of programmers, would dismiss a tool as powerful as type systems and suggest that people who think they are useful find a different line of work.
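The two types described above might look roughly like this. This is a hypothetical reconstruction, not the actual prototype code: `Expr` nests arbitrarily, while `CnfExpr`’s shape makes anything other than conjunctive-normal form unrepresentable.

```rust
// Hedged reconstruction; names and shapes are guesses. `Expr` can
// nest arbitrarily, while `CnfExpr` can only represent an AND of
// clauses, each an OR of (possibly negated) variables, so being in
// conjunctive-normal form is guaranteed by the type itself.
#[allow(dead_code)]
enum Expr {
    Var(String),
    Not(Box<Expr>),
    And(Box<Expr>, Box<Expr>),
    Or(Box<Expr>, Box<Expr>),
}

#[allow(dead_code)]
struct Literal {
    var: String,
    negated: bool,
}

// Outer Vec = conjunction of clauses; inner Vec = disjunction of literals.
struct CnfExpr {
    clauses: Vec<Vec<Literal>>,
}

fn main() {
    // (a OR NOT b) AND (c): well-formed CNF by construction. What the
    // type cannot promise is semantic equivalence to the source Expr;
    // a to_cnf conversion still needs tests for that, exactly as
    // described in the comment above.
    let cnf = CnfExpr {
        clauses: vec![
            vec![
                Literal { var: "a".into(), negated: false },
                Literal { var: "b".into(), negated: true },
            ],
            vec![Literal { var: "c".into(), negated: false }],
        ],
    };
    assert_eq!(cnf.clauses.len(), 2);
}
```

The division of labor is the whole point: the type rules out malformed output for free, and the tests cover the property the type can’t express.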
Thanks for the summary. Seems The Clean Coder has employed some dirty tricks to block Safari’s Reader mode, making this nigh on unreadable on my phone.
And furthermore, frogs will fill the streets, locusts will devour the crops, the water will turn to blood and sulfur will rain from the sky! You know, the average day of a modern JS developer.
As a modern JS developer, I’ve started using Flow and TypeScript and have found that the streets have far fewer frogs now :)
More like the Ann Coulter of programming, in that it’s increasingly clear they spout skin-deep, ridiculous lines of reasoning to trigger people so that they get more publicity!
Remember, when one retorts the troll has already won. Don’t feed the troll!
~
A passing thought
defects are never the fault of our languages. Defects are the fault of programmers. It is programmers who create defects – not languages.
This brings to mind one of Henry Baker’s taunting remarks about our computing environments:
computer software are “cultural” objects–they are purely a product of man’s imagination, and may be changed as quickly as a man can change his mind. Could God be a better hacker than man?
Has it not occurred to him that these languages come from programmers themselves? Of course it has. So sure, defects are always the responsibility of people. Some are the fault of the application programmer; some are the fault of the people responsible for the language design. (And when one is knee-deep in writing some CRUD app whose technological choices are already set in stone, determining whose fault it is is of little use.)
The entire point of software is to do stuff that people used to do by hand. Why on earth should we spend boatloads of hours writing tests to prove things that can be proved in milliseconds by the type system? That’s what type systems are for. If we were clever enough to write all the right tests all the time, we’d be clever enough to just not introduce NPEs in the first place.
I had the same reaction reading this. He’s off his rocker. The whole point of Swift being so strongly typed is that we’ve learned that if the language does not enforce it, then it’s not a matter of whether those bugs will happen but of how often we will waste time dealing with them.
The worst part to me is that right off the bat he recognizes these languages aren’t purely functional; implying that there is a big difference between a language that enforces functional programming and one that doesn’t. Of course there is, and the same thing goes for typing.
He has just posted a follow up… http://blog.cleancoder.com/uncle-bob/2017/01/13/TypesAndTests.html
Alas, he says this…
Types do not specify behavior. Types are constraints placed, by the programmer, upon the textual elements of the program. Those constraints reduce the number of ways that different parts of the program text can refer to each other.
No Bob, Types are a name you give for a bundle of behaviour. It’s up to you to ensure that the type you name has the behaviour you think it has.
But whenever you refer to a type, the compiler ensures you will get something with that bundle of behaviour.
What behaviour exactly? That’s for you to decide, and for tests to verify or illustrate.
Whenever you loosen the type system…. you allow something that has almost, but not quite, the requested behaviour.
In every case I have investigated “Too Strict Type System”, I have come away with the feeling the true problem is “Insufficiently Expressive Type System” or “Type System With Subtle Inconsistencies” or worse, “My design has connascent coupling, but for commercial reasons I’m going to lie to the compiler about it rather than explicitly make the modules dependent.”
I know him as a standard name in the Agile and Ruby communities, I think he’s well-known in Java but am not close enough to it to judge.
My college advisor loved talking about him and referencing him, but I think he’s mostly lost his influence with programmers today. At least, most people I know generally disagree with everything he’s written in the past decade.
Such blatant refusal (“everything is wrong”) seasoned with mockery (“parroted”) is exactly what has been stopping me from writing posts on this very topic.
Declaring that the responsibility for your inaction belongs to strangers leaps over impolite into outright manipulation. I pass.
This sort of thing is a powerful argument for a macro system…