Thanks to you and the team for making RuboCop so flexible. I'm glad that I have the choice to enforce the rules that I think matter, instead of debating whether to use the tool or not.
Everybody agrees coding style is Very Very Important…
Nobody agrees which style is Correct.
Sigh.
There is a life lesson in that somewhere.
It's that making (carefully considered, openly discussed with relevant peers and evidence weighed) choices and staying with those choices is what matters, while the exact choice doesn't matter.
I just had a child. I believe I would have an equally happy, perhaps happier, life without a child. But we chose to have this child and will nourish and love it forever. There was no Correct choice, but there was a choice.
Note: this article contains inline images of marked classified documents.
This comment is not intended to spark a discussion; simply put, some people may want to avoid the article for this reason.
I'm pro having this warning. Maybe we should have a specific tag if people object to this helpful comment text.
I've posted this text before and hate to copy-pasta verbatim. If it's inappropriate, please indicate and/or suggest an alternative method to flag.
Can you explain why?
Not having the flags is a problem for US government contractors and employees because they're required to segregate classified and non-classified information onto different machines. Accidentally getting classified material (and a leak does automatically cause declassification) on an unclassified, government-owned, machine can lead to jail in the worst case. In the best case it can lead to your hard drive being wiped and re-imaged, losing you any work that isn't backed up. The latter happened to several folks I knew at government contractors after they clicked on links to news articles not realising that they included copies of the Snowden leaks.
Accidentally getting classified material (and a leak does automatically cause declassification) on an unclassified, government-owned, machine can lead to jail in the worst case.
I believe you missed a load-bearing not there. A leak does not automatically cause declassification.
I would differentiate between accidentally seeing what I am not supposed to see and (intentionally or not) breaking measures to see what I am not supposed to see.
Having your device wiped sounds unfortunate, but I don't understand why solving that on the news aggregation site would be better than in some IT policy.
I would differentiate between accidentally seeing what I am not supposed to see and (intentionally or not) breaking measures to see what I am not supposed to see.
You might differentiate, but David's point is that the US government (and presumably others) does not. Unless you're planning to change how they handle such instances, it seems appropriate to warn people early.
Wait, I can go to jail because I clicked on the wrong link, and some document my government wanted to keep secret ends up in my browser cache? Like, really? What's next, being arrested because my set top box recorded the news, which happened to contain footage from Collateral Murder?
Or is it specific to unclassified, government-owned machines?
If you hold a security clearance from the government, as part of the process whereby they clear you to handle classified information you agree that putting it onto systems which are not classified has certain consequences. Some of these consequences may include criminal punishment. I don't think that is common for accidental spillage.
It will, at a minimum, involve an unpleasant conversation with your facility's security officer.
I fully understand why some people appreciate warnings like this.
I was not talking about accidental spillage. I was talking about storing, on a computer I'm using, information that was already spilled, and doing that from a publicly available source. Like, clicking on a link from the front page of Hacker News, and ending up downloading the image of a slide from some secret NSA surveillance project.
The way @david_chisnall was writing seemed to encompass that case. If it's a genuine accidental spillage, of course I should be in trouble.
I was referring to getting classified data onto an unclassified machine, accidentally, by clicking a link from the front page of HN or some such, as "accidental spillage." The fact that the data does already exist in a public channel doesn't really matter. The cleared person putting it onto an unclassified machine is still party to spillage.
If you hold a security clearance from the government that marked the documents in the linked post, you received clear training on the possible consequences for this. If you don't, as @pushcx mentioned, you don't have anything to worry about from viewing them.
That's a very strange definition of "spillage", to be honest. One that we may suspect is tailored to facilitate prosecution if a document ends up where it should not end up. Training or not, the mistake is unavoidable. Literally, flat out unavoidable.
Here's how it plays out:
OK, OK, people with security clearance should never browse the web outside of private mode. The cache should be cleared after each session, cookies erased, the whole shebang. You're still not out of the woods yet:
What silly precautions must people take to avoid being in trouble? Because if that's the kind of risk involved, I don't even want security clearance, to the point of being okay with losing a contract over this.
The system is designed around the principle that everyone should know the least amount of classified info necessary, siloed as strictly as is feasible. If you have a US security clearance, you are part of this system and swear to keep it this way under risk of heavy penalties. The concept is that if you, say, find an open folder full of documents marked "secret" lying on a table in a coffee shop, then you do not casually leaf through it to see what all the fuss is about; you close it immediately and take it to your security officer.
How this translates to a computerized world, where it is possible to very easily copy stuff by accident, is imperfect to say the least. But if someone has a US security clearance, they keep their lives much, much simpler by adhering to the strictest possible interpretation. The Gestapo won't break down your door and drag you off with a bag over your head, but if someone asks "have you ever seen classified documents you shouldn't have", it's a lot simpler to be able to honestly say "no" without qualifications.
Because if that's the kind of risk involved, I don't even want security clearance, to the point of being okay with losing a contract over this.
Correct, you probably don't want it! That is the system working as intended.
No. This conversation is about the U.S. government's system for classified documents, which includes training on scenarios like this. For those of us who don't hold a security clearance, there's nothing to worry about, though there is the still-developing topic of law and journalism in their publishing; the Pentagon Papers are a good starting point, and continue with the Snowden disclosures.
I do not intend to litigate anything. I am honestly confused about the scenario I describe. What is the purpose of taking such measures with respect to information that has effectively become publicly available?
The answers to why the system works that way are off-topic, a current hot-button topic for political argument, the subject of ongoing litigation, and are addressed in the links as well as many excellent government regulations, law articles, and books. I don't think I understand the intended design and current state of the system well enough to provide a worthwhile answer.
It's silly, but individuals wisely refrain from throwing themselves in the wood chipper just for the sake of demonstrating that. ;P
(I assume actual reasons revolve around having a simple bright-line rule.)
Well, I graduated from a film school. I have minimal knowledge of math and engineering (I dropped out of engineering after my second year). I still find a ton of value in SICP.
I think it is completely OK to recommend SICP. It is just that it is not an easy book; it requires effort. The effort required varies from reader to reader, and it may require you to go fetch another book and study for a while before coming back. It is OK to be challenged by a good book. A similar thing happens with TAoCP as well; heck, that book set was so far above my pay grade that sometimes I had to go wash my face with cold water and think.
Now, what I think is important is to know when to recommend SICP or not. If someone says they want to learn programming basics fast because they're in some bootcamp and need a bit of background to move on, then you'd be better off recommending something more suitable.
As for alternatives to SICP for those who don't want to dive too deep into math and engineering, I really enjoy How To Design Programs, which I've seen described as "SICP but for humanities".
As for alternatives to SICP for those who don't want to dive too deep into math and engineering, I really enjoy How To Design Programs, which I've seen described as "SICP but for humanities".
A side note, but another book that can work well as a prequel (or alternative) to SICP is Simply Scheme. In fact, that's exactly how the authors describe the book:
The only trouble with SICP is that it was written for MIT students, all of whom love science and are quite comfortable with formal mathematics. Also, most of the students who use SICP at MIT have already learned to program computers before they begin. As a result, many other schools have found the book too challenging for a beginning course. We believe that everyone who is seriously interested in computer science must read SICP eventually. Our book is a prequel; it's meant to teach you what you need to know in order to read that book successfully. Generally speaking, our primary goal in Parts I-V has been preparation for SICP, while the focus of Part VI is to connect the course with the kinds of programming used in "real world" application programs like spreadsheets and databases. (These are the last example and the last project in the book.)
Some of Brian Harvey's lectures on Scheme are available on YouTube. There used to be more, as I recall, but some are private now. A shame - I remember enjoying his lectures a lot.
I found the first chapter or two of SICP to be uncomfortably math heavy. But my recollection is that after those, itâs relatively smooth sailing.
I have to say comments like these and the GP are reassuring. I'm working through it now as an experienced programmer trying to formalize my foundational computer science knowledge, and I'm just having a hard time digging in at the start. Not that I'm uncomfortable with recursion or Big O stuff; it's just very information dense and hard to "just read" while keeping all the moving parts in your head space.
Brian Harvey is amazing.
He also wrote a series of books called "Computer Science Logo Style" or similar (he's the author of UCB Logo).
I really enjoyed those books as I'm an avid Logo fan. I'm still kinda sad that dynaturtles are effectively going to die because the only implementation still even remotely extant is LCSI's Microworlds.
I've been wanting to learn Racket for a while. It's next on my list after JavaScript. Sadly it takes me years to truly grok a programming language to any level of real mastery :)
In the realm of alternatives to SICP to teach programming, I've really enjoyed The Little Schemer and its follow-up books.
I love that book. Did you read through the latest one, The Little Typer? I haven't yet moved past The Seasoned Schemer.
The latest one is in the virtual book pile, but I'd like to get to it eventually. Thanks for the reminder. :)
The Little LISPer and its descendants are seriously pure sheer delight in book form.
They embody all the beautiful playfulness and whimsy I LOVE in computers that has been sucked out of so much happening today.
Bonus points: Even if you could care less about Scheme/LISP you learn recursion!
Bonus points: Even if you could care less about Scheme/LISP you learn recursion!
After struggling hard at trying to understand recursion in my freshman year with Concurrent Clean (the teacher's pet language, a bit Haskell-like), this book made everything click. It also made me fall in love with Scheme because of its simplicity and functional, high-level approach. After the nightmare of complexity and weird, obscure tooling of Clean, this was such a breath of fresh air.
I don't get The Little Schemer. There doesn't seem to be a point to it, something it's working towards. I feel like I should be enlightened in some way at the end, but it just seemed to end without doing anything special. What am I missing?
I like this approach. Recommend SICP, but make it SUPER clear that it's a difficult book. It's OK to get to a point where you say "This is over my head", and for those who are up for it, there's an additional challenge there of "OK, now take the next step! Learn what you need in order to understand".
Not everyone has the time or the patience for that, but as long as we're 1000% clear in our recommendations, I think recommending SICP is just fine.
However, IMO it is THE WORST to recommend SICP to people who are looking for a quick and easy way to learn to program or learn the rudiments of computer science.
Note that its code generating quality isn't any better than this, but seeing it in writing makes it more obvious.
This is entirely subjective, but I have looked at Zig a few times and it does feel enormously complicated, to the point of being unapproachable. This is coming from someone with a lot of experience using systems programming languages. Other people seem to really enjoy using it, though… to each their own.
Huh, that's interesting to hear. Out of curiosity, what were the features you found the most complicated?
I've had the exact opposite experience, actually. I'm comparing against Rust, since it's the last systems language I tried learning. Someone described Rust as having "fractaling complexity" in its design, which is true in my experience. I've had a hard time learning enough of the language to get a whole lot done (even though I actually think Rust does the correct thing in every case I've seen to support its huge feature set).
Zig, on the other hand, took me an afternoon to figure out; I was able to start building stuff once I figured out option types and error sets. (@cImport is the killer feature for me. I hate writing FFI bindings.) It's a much smaller core language than Rust, and much safer than C, so I've quite enjoyed it. Although the docs/stdlib are still a bit rough, so I regularly read the source code to figure out how to use certain language features…
Item.c => |*item| blk: {  // switch prong: match `Item.c`, capture the payload by pointer
    item.*.x += 1;        // `.*` dereferences the captured pointer
    break :blk 6;         // the labeled block `blk` evaluates to 6
}
It's sigil soup. You cannot leverage old knowledge at all to read this. It is fundamentally newcomer hostile.
Any of these things in isolation is quite simple to pick up and learn, but altogether it's unnecessarily complex.
Someone said something about common lisp that I think is true about Rust as well: the language manages to be big but the complexity is opt-in. You can write programs perfectly well with a minimal set of concepts. The rest of the features can be discovered at your own pace. That points to good language design.
I'm thinking of anyopaque, all the alignment stuff…
Maybe your systems programming doesn't need that stuff, but an awful lot of mine does.
what does errdefer add to the language and why does e.g. golang not need it?
A lot! Deferring only on error return lets you keep your deallocation calls together with allocation calls, while still transferring ownership to the caller on success. Go being garbage-collected kinda removes half of the need, and the other half is just handled awkwardly.
I didn't quite see the point for a while when I was starting out with Zig, but I pretty firmly feel error sets and errdefer are (alongside comptime) some of Zig's biggest wins for making a C competitor not suck in the same fundamental ways that C does. Happy to elaborate.
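For readers who haven't seen it, here is a minimal sketch of the pattern being described (the function names, buffer size, and error name are invented for illustration, and it assumes a recent Zig): errdefer registers cleanup that runs only if the function returns an error, so the allocation and its failure-path cleanup sit on adjacent lines, while on success ownership transfers to the caller.

const std = @import("std");

// Hypothetical fallible initialisation step, purely for illustration.
fn fillWithDefaults(buf: []u8) !void {
    if (buf.len == 0) return error.EmptyBuffer;
    for (buf) |*b| b.* = 0;
}

fn makeBuffer(allocator: std.mem.Allocator) ![]u8 {
    const buf = try allocator.alloc(u8, 1024);
    errdefer allocator.free(buf); // runs only if we return an error below

    try fillWithDefaults(buf);
    return buf; // success: the errdefer does not fire, the caller owns `buf`
}

test "caller owns the buffer on success" {
    const buf = try makeBuffer(std.testing.allocator);
    defer std.testing.allocator.free(buf); // now the caller's responsibility
    try std.testing.expectEqual(@as(usize, 1024), buf.len);
}

In Go, by contrast, the non-memory version of this pattern (say, closing a file only on failure) needs a success flag checked inside a plain defer, since defer alone can't express "clean up only on the error path".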
Someone said something about common lisp that I think is true about Rust as well: the language manages to be big but the complexity is opt-in. You can write programs perfectly well with a minimal set of concepts.
Maybe, if you Box absolutely everything, but I feel that stretches "perfectly well". I don't think this is generally true of Rust.
I think Zig's a pretty small language once you actually familiarise yourself with the concepts; maybe the assortment of new ones on what looks at first blush to be familiar is a bit arresting. orelse is super good (and it's not like it comes from nowhere; Erlang had it first). threadlocal isn't different to C++'s thread_local keyword.
I get that it might seem more unapproachable than some, but complexity really isn't what's here; maybe just a fair bit of unfamiliarity and rough edges still in pre-1.0. It's enormously simplified my systems programming experience, and continues to develop into what I once hoped Rust was going to be.
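A quick sketch of what orelse does, for the unfamiliar (the values here are invented): it unwraps an optional if one is present, or evaluates the fallback expression, so there's no separate null check at the use site.

const std = @import("std");

test "orelse unwraps an optional or supplies a fallback" {
    const configured: ?u16 = null; // optional: either a u16 or null
    const port = configured orelse 8080; // unwrap, or fall back to 8080
    try std.testing.expectEqual(@as(u16, 8080), port);
}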
Being unfamiliar, nonstandard, and having features for what you consider an "oddly shaped use case" may be exactly what makes it a worthwhile attempt at something "different" that may actually solve some problems of other languages, instead of being just another slight variant that doesn't address the core problems?
I personally think it's unlikely that another derivative of existing languages will improve matters much. Something different is exactly what is needed.
what does errdefer add to the language and why does e.g. golang not need it?
This is a joke, right? Please tell me this is a joke.
Go does not need errdefer because it has (used to have?) if err != nil and all of the problems that came with that.
The developer of these libraries intentionally introduced an infinite loop that bricked thousands of projects that depend on "colors" and "faker".
I wonder if the person who wrote this actually knows what "bricked" means.
But beyond the problem of not understanding the difference between "bricked" and "broke", this action did not break any builds that were set up responsibly; only builds which tell the system "just give me whatever version you feel like, regardless of whether it works", which, like… yeah, of course things are going to break if you do that! No one should be surprised.
Edit: for those who are not native English speakers, "bricked" refers to a change (usually in firmware on an embedded device) which not only causes the device to be non-functional, but also breaks whatever update mechanisms you would use to get it back into a good state. It means the device is completely destroyed and must be replaced, since it cannot be used as anything but a brick.
GitHub has reportedly suspended the developer's account
Hopefully this serves as a wakeup call for people about what a tremendously bad idea it is to have all your code hosted by a single company. Better late than never.
There have been plenty of wakeup calls for people using GitHub, and I doubt one additional one will change the minds of very many people (which doesn't make it any less of a good idea for people to make their code hosting infrastructure independent from GitHub). The developer was absolutely trolling (in the best sense of the word) and a lot of people have made it clear that they're very eager for GitHub to deplatform trolls.
I don't blame him, certainly; he's entitled to do whatever he wants with the free software he releases, including trolling by releasing deliberately broken commits in order to express his displeasure at companies using his software without compensating him in the way he would like.
The right solution here is for any users of these packages to do exactly what the developer suggested and fork them without the broken commits. If npm (or cargo, or any other programming language ecosystem package manager) makes it difficult for downstream clients to perform that fork, this is an argument for changing npm in order to make that easier. Build additional functionality into npm to make it easier to switch away from broken or otherwise-unwanted specific versions of a package anywhere in your project's dependency tree, without having to coordinate this with other package maintainers.
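For what it's worth, recent npm versions (8.3 and later, if I remember right) already support part of this via an overrides field in package.json, which forces a chosen version of a package anywhere in the dependency tree without coordinating with intermediate maintainers. A rough sketch, with an illustrative version number:

{
  "overrides": {
    "colors": "1.4.0"
  }
}

Yarn has a similar resolutions field.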
The developer was absolutely trolling (in the best sense of the word)
To the extent there is any good trolling, it consists of saying tongue-in-cheek things to trigger people with overly rigid ideas. Breaking stuff belonging to people who trusted you is not good in any way.
I don't blame him, certainly; he's entitled to do whatever he wants with the free software he releases, including trolling by releasing deliberately broken commits in order
And GitHub was free to dump his account for his egregious bad citizenship. I'm glad they did, because this kind of behavior undermines the kind of collaborative trust that makes open source work.
to express his displeasure at companies using his software without compensating him in the way he would like.
Take it from me: the way to get companies to compensate you "in six figures" for your code is to release your code commercially, not open source. Or to be employed by said companies. Working on free software and then whining that companies use it for free is dumbshittery of an advanced level.
It's not foolish to trust, initially. What's foolish is to keep trusting after you've been screwed. (That's the lesson of the Prisoner's Dilemma.)
A likely lesson companies will draw from this is that free software is a risk, and that if you do use it, you should stick to big-name reputable projects that aren't built on a house of cards of tiny libraries by unknown people. That's rather bad news for ecosystems like node or RubyGems or whatever.
Working on free software and then whining that companies use it for free is dumbshittery of an advanced level.
Thank you. This is the point everybody seems to be missing.
I mean, it did. Hopefully companies will start moving to software stacks where people are paid for their effort and time.
Not if you're being responsible and pinning your deps, though?
Even if that weren't true, though, the maintainer doesn't have any obligation to companies using their software. If the company used the software without acquiring a support contract, then that's just a risk of business that the company should have understood. If they didn't, that's their fault, not the maintainer's - companies do this kind of risk/reward calculus all the time in other areas, successfully.
I know there are news reports of a person with the same name being taken into custody in 2020 where components that could be used for making bombs were found, but as far as I know, no property damage occurred then. Have there been later reports?
Yeah, like proprietary or in-house software. Great result for open source.
Really, if I were a suit at a company and learned that my product was DoS'd by source code we got from some random QAnon nutjob - that this rando had the ability to push malware into his Git repo and we'd automatically download and run it - I'd be asking hard questions about why my company uses free code it just picked up off the sidewalk, instead of paying a summer intern a few hundred bucks to write an equivalent library to printf ANSI escape sequences or whatever.
That's inflammatory language, not exactly my viewpoint, but I'm channeling the kind of thing I'd expect a high-up suit to say.
There have been plenty of wakeup calls for people using GitHub, and I doubt one additional one will change the minds of very many people
Each new incident is another straw. For some, it's the one that breaks the camel's back.
in order to express his displeasure at companies using his software without compensating him in the way he would like.
This sense of entitlement is amusing. These people totally miss the point of free software. They make something that many people find useful and use (very much thanks to the nature of being released with a free license, mind you), then they feel they are within their rights to some sort of material/monetary compensation.
This is not a Miss Universe contest. It's not too hard to understand that had this project been non-free, it would probably not have gotten anywhere. This is the negative side of GitHub. GitHub has been an enormously valuable resource for free software. Unfortunately, when it grows so big, it will inevitably also attract the kind of people who only like the free aspect of free software when it benefits them directly.
These people totally miss the point of free software.
An uncanny number of companies (and people employed by said companies) also totally miss the point of free software. They show up in bug trackers all entitled, like the license they praise in all their "empowering the community" slides doesn't say THE SOFTWARE IS PROVIDED "AS IS" in all fscking caps. If you made a list of all the companies to whom the description "companies that only like the free aspect of free software when it benefits them directly" doesn't apply, you could apply a moderately efficient compression algorithm and it would fit in a boot sector.
I don't want to defend what the author did - as someone else put it here, it's dumbshittery of an advanced level. But if entitlement were to earn you an iron "I'm an asshole" pin, we'd have to mine so much iron ore on account of the software industry that we'd trigger a second Iron Age.
This isn't only on the author; it's what happens when corporate entitlement meets open source entitlement. All the entitled parties in this drama got exactly what they deserved, IMHO.
Now, one might argue that what this person did affected not just all those entitled product managers who had some tough explaining to do to their suit-wearing bros, but also a bunch of good FOSS "citizens", too. That's absolutely right, but while this may have been unprofessional, the burden of embarrassment should be equally shared by the people who took a bunch of code developed by an independent, unpaid developer, in their spare time - in other words, a hobby project - without any warranty, and then baked it into their super professional codebases without any contingency plan for "what if all that stuff written in all caps happens?". This happened to be intentional, but a re-enactment of this drama is just one half-drunk evening hacking session away.
It's not like they haven't been warned - when a new dependency is proposed, that part is literally the first one that's read, and it's reviewed by a legal team whose payment figures are eye-watering. You can't build a product based only on the good parts of FOSS. Exploiting FOSS software only when it benefits yourself may also be assholery of an advanced level, but hoping that playing your part shields you from all the bad parts of FOSS is naivety of an advanced level, and commercial software development tends to punish that.
They show up in bug trackers all entitled, like the license they praise in all their "empowering the community" slides doesn't say THE SOFTWARE IS PROVIDED "AS IS" in all fscking caps
Slides about F/OSS don't say that because expensive proprietary software has exactly the same disclaimer. You may have an SLA that requires bugs to be fixed within a certain timeframe, but outside of very specialised markets you'll be very hard pressed to find any software that comes with any kind of liability for damage caused by bugs.
Well… I meant the license, not the slides :-P. Indeed, commercial licenses say pretty much the same thing. However, at least in my experience, the presence of that disclaimer is not quite as obvious with commercial software - barring, erm, certain niches.
Your average commercial license doesn't require proprietary vendors to issue refunds, provide urgent bugfixes, or stick by their announced deadlines for fixes and features. But the practical constraints of staying in business are pretty good at compelling them to do some of these things.
I've worked both with and without SLAs, so I don't want to sing praises to commercial vendors - some of them fail miserably, and I've seen countless open source projects that fix security issues in less time than it takes even competent large vendors to call a meeting to decide a release schedule for the fix. But expecting the same kind of commitment and approachability from Random J. Hacker is just not a very good idea. Discounting pathological arseholes and know-it-alls, there are perfectly human and understandable reasons why the baseline of what you get is just not the same when you're getting it from a development team with a day job, a bus factor of 1, who may have had a bad day, and no job description that says "be nice to customers even if you had a bad day, or else".
The universe npm has spawned is particularly susceptible to this. It's a universe where adding a PNG to JPG conversion function pulls forty dependencies, two of which are different and slightly incompatible libraries which handle emojis just in case someone decided to be cute with file names, and they're going to get pulled even if the first thing your application does is throw non-alphanumeric characters out of any string, because they're nth order dependencies with no config overrides. There's a good chance that no matter what your app does, 10% of your dependencies are one-person resume-padding efforts that turned out to be unexpectedly useful and are now being half-heartedly maintained, largely because you never know when you'll have to show someone you're a JavaScript ninja guru in this economy. These packages may well have the same "no warranty" sticker that large commercial vendors put on theirs, but the practical consequences of having that sticker on the box often differ a lot.
Edit: to be clear, I'm not trying to say "proprietary = good and reliable, F/OSS = slow and clunky"; we all know a lot of exceptions to both. What I meant to point out is that the typical norms of business-to-business relations just don't uniformly apply to independent F/OSS devs, which makes the "no warranty" part of the license feel more… intense, I guess.
The entitlement sentiment goes both ways: companies expect free code and get upset if the maintainer breaks backward compatibility. Since when is there an obligation to behave responsibly?
When open source started, there wasn't that much money involved and things were very much in the academic spirit of sharing knowledge. That created a trove of wealth that companies are just happy to plunder now.
releasing deliberately broken commits in order to express his displeasure at companies using his software without compensating him in the way he would like.
Was that honestly the intent? Because in that case: what hubris! These libraries were existing libraries translated to JS. He didn't do any of the hard work.
There is further variation on the "bricked" term, at least in the Android hacker community. You might hear things like "soft bricked", which refers to a device where the normal installation/update method is not working, but which could be recovered through additional tools, or perhaps by using JTAG to reprogram the bootloader.
There is also "hard bricked", which indicates something completely irreversible, such as changing the fuse programming so that it won't boot from eMMC anymore, or deleting necessary keys from the secure storage.
this action did not break any builds that were set up responsibly; only builds which tell the system "just give me whatever version you feel like, regardless of whether it works", which, like… yeah, of course things are going to break if you do that! No one should be surprised.
OK, so, what's a build set up responsibly?
I'm not sure what the expectations are for packages on NPM, but the changes in that colors library were published with an increment only to the patch version. When trusting the developers (and if you don't, why would you use their library?), not setting the patch version in stone in your dependencies doesn't seem like a bad idea.
When trusting the developers (and if you don't, why would you use their library?), not setting the patch version in stone in your dependencies doesn't seem like a bad idea.
No, it is a bad idea. Even if the developer isn't actively malicious, they might've broken something in a minor update. You shouldn't ever blindly update a dependency without testing afterwards.
Commit package-lock.json like all of the documentation tells you to, and don't auto-update dependencies without running CI.
And use npm shrinkwrap if you're distributing apps and not libraries, so the lockfile makes it into the registry package.
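To make the distinction concrete (package names and versions here are only illustrative): in package.json, npm's default caret range accepts any new minor or patch release at install time, which is exactly how a malicious "patch" version walks into an unpinned build, while a bare version is an exact pin. A committed package-lock.json additionally pins the whole transitive tree.

{
  "dependencies": {
    "colors": "1.4.0",
    "left-pad": "^1.3.0"
  }
}

Here "1.4.0" only ever installs 1.4.0, while "^1.3.0" will happily pick up 1.3.1 or 1.9.0 the next time someone installs without a lockfile.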
Do you really think that a random developer, however well intentioned, is really capable of evaluating whether or not any given change they make will have any behavior-observable impact on downstream projects they're not even aware of, let alone have seen the source for, and have any idea how it consumes their project?
I catch observable breakage coming from "patch" revisions easily a half dozen times a year or more. All of it accidental "oh, we didn't think about that use-case, we don't consume it like that" type stuff. It's truly impossible to avoid for anything but the absolute tiniest of API surface areas.
The only sane thing to do is to use whatever your tooling's equivalent of a lock file is to strictly maintain the precise versions used for production deploys, and only commit changes to that lock file after a full re-run of the test suite against the new library version, patch or not (and running your eyeballs over a diff against the previous version of its code would be wise as well).
It's wild to me that anyone would just let their CI slip version updates into a deploy willy-nilly.
This neatly shows why semver is a broken religion: you can't just rely on a version number to consider changes non-breaking. A new version is a new version and must be tested without any assumptions.
To clarify, I'm not against specifying dependencies to automatically update to new versions per se, as long as there's a CI step to build and test the whole thing before it goes to production, to give you a chance to pin the broken dependency to a last-known-good version.
Semver doesn't guarantee anything, though, and doesn't promise anything. It's more of an indicator of what to expect. Sure, you should test new versions without any assumptions, but that doesn't say anything about semver. What that versioning scheme allows you to do, though, is put minor/revision updates straight into CI and an automatic PR, while blocking major ones until manual action.
The general form of the solution is this:
Download whatever source code you are using into a secure versioned repository that you control.
Test every version that you consider using for function before you commit to it in production/deployment/distribution.
Build your system from specific versions, not from âlast updateâ.
Keep up to date on change logs, security lists, bug trackers, and whatever else is relevant.
Know what your back-out procedure is.
These steps apply to all upstream sources: language modules, libraries, OS packages… dependency management is crucial.
Amazon does this. Almost no one else does this, but that's a choice with benefits (saving the set-up effort, mostly) and consequences (all of this here).
When trusting the developers (and if you don't, why would you use their library?)
If you trust the developers, why not give them root on your laptop? After all, you're using their library, so you must trust them, right?
There's levels to trust.
I can believe you're a good person by reading your public posts online, but I'm not letting you babysit my kids.
How do they ban them? They're not paying them. Unless you mean the people who did not pin the dependencies?
I think it is bannable on any platform, because it is malicious behavior - that means he intentionally caused harm to people. It's not about an exchange of money; it's about intentional malice.
The behavior was intentionally malicious. It's not about violating a contract or guarantee. For example, if he had just decided that he was being taken advantage of and removed the code, I don't think that would require a ban. But he didn't do that - he added an infinite loop to purposefully waste people's time. That is intentional harm; that's not just providing a library of poor quality with no guarantee.
Beyond that, if that loop went unnoticed on a build server and cost the company money, I think he should be legally responsible for those damages.
What's painful is the following:
% ps ax | grep -c firefox
24
No way to tell which ones actually use memory and whether I might want to reduce the number of processes created.
My motivation for limiting memory use over other aspects is that I use a dedicated profile for work stuff that involves Outlook web and Teams web, plus Jira and Confluence. I absolutely don't care if something crashes there, but I care that these memory hogs are somehow constrained. They could even be twice as slow if they used even 10% less memory. Right now, with FF 95, I'm completely at a loss regarding memory usage.
Try any of about:processes or about:performance.
Unfortunately, we need this many processes to mitigate Spectre vulnerabilities. See https://hacks.mozilla.org/2021/05/introducing-firefox-new-site-isolation-security-architecture/ for more.
Oh. I had already gone to about:processes, but your comment made me spend more time in it and now I understand it better. TBH the UX could really be improved. At least the PID shouldn't only be something at the end of the Name field, because you might want to find by PID (if you're looking at this because of something you've seen in another tool).
What I'd like to know is what the current model is. It's not one process per tab plus one process for each domain per frame for each tab. I have a single process for two of my tabs (same domain), and I have one process for each of my three lobste.rs tabs.
Also, is there a way to have more sharing, or is that a thing of the past? In other words, is there any hope that my two awful Outlook and Teams tabs can share more?
about:performance
Interesting: I noticed Ghostery being quite active… even after having paused it. Not sure what it was doing, but I'm having none of it any more. NoScript and uBlock are probably enough anyway.
The more script and content blockers you have, the more code will run when a script attempts to load. If performance is dear to you, set the Firefox built-in tracking protection to strict mode. It's a bit more performant to do fewer JS/C++ context switches, and it adds up.
When Firefox is dealing with the 70-90% of undesired scripts, your addons will be less busy.
I've always been under the impression that FF's tracking protection ran after add-ons such as uBlock, because nothing is ever blocked on my machines while uBlock blocks stuff.
Normal mode doesn't block at all. It loads stuff, but with a separate, isolated cookie jar. This has proven to be the best balance between privacy protection and web compatibility. It turns out that most of our users would blame the browser if some important iframe doesn't work or show up.
Now with a loading tab the user has something to interact with.
So for power users the recommendation is to set it to "strict mode", which doesn't isolate but actually blocks.
Good point. I've been wondering if FF blocks first, then addons, or vice versa, or even a mix of the two. Any thoughts?
This is incredible. Thoughts going out to all the enterprise devops people forced to update their Magneto installs on the Friday two weeks before Christmas!
Ah, I've seen at least one security researcher popping a Magneto install online with this, so I just assumed it was Java based - presumably there was a Java component somewhere behind the scenes!
Which is exactly the problem, of course - if any user-supplied input can reach a backend Java component that logs it through log4j, then you're vulnerable.
Note the difference in spelling between your comments and soatok's comment. Maybe you're referring to two different pieces of software?
Yeah, at the MANGA that I work at, tickets were filed on all affected teams with instructions to treat this like a Sev 1.
Maybe one company started it, but this expression is what I heard in many places, including some that don't use the same system, but people knew what Sev 1 means. (I'd probably write P1, even though that's not a thing at my current job.)
I doubt that. https://www.google.com/search?hl=en&q=Sev%201%20incident
Looks like the employee is based in the UK. As you might expect, most of the responses to his announcement are Bad Legal Advice. This comment is also going to be Bad Legal Advice (IANAL!), but I have some experience and a little background knowledge, so I hope I can comment more wisely…
The way FOSS (and indeed all private-time) software development works here for employees is that, according to your contract, your employer will own everything you create, even in your private time. Opinions I've heard from solicitors and employment law experts suggest that this practice might constitute an over-broad, "unfair" contract term under UK law. That means you might be able to get it overturned if you really tried, but you'd have to litigate to resolve it. At any rate the de facto status is: they own it by default.
What employees typically do is seek an IP waiver from their employer, where the employer disclaims ownership of the side project. The employer can refuse. If you've already started, they could take ownership, as apparently is happening in this case. Probably, in that scenario, what you should not do is try to pre-emptively fork under some idea that your project is FOSS and that you have that right. The employer will likely take the view that because you aren't the legal holder of the IP, you aren't entitled to release either the original or the fork as FOSS - so you'd be improperly releasing corporate source code. Pushing that subject is a speedy route to dismissal for "gross misconduct", which is a sufficient reason for summary dismissal, no process except appeal to tribunal after the fact.
My personal experience seeking IP waivers, before I turned contractor (after which none of the above applies), was mixed. One startup refused it and even reprimanded me for asking - the management took the view that any side project was a "distraction from the main goal". Conversely, ThoughtWorks granted IP waivers pretty much blanket: you entered your project name and description in a shared spreadsheet, and they sent you a notice when the solicitor saw the new entry. They took professional pride in never refusing unless it conflicted with the client you were currently working with.
My guess is that legal rules and practices on this are similar in most common law countries (UK, Australia, Canada, America, NZ).
The way FOSS (and indeed all private-time) software development works here for employees is that, according to your contract, your employer will own everything you create, even in your private time.
This seems absurd. If I'm a chef, do things I cook in my kitchen at home belong to my employer? If I'm a writer, do my kids' book reports that I help with become privileged? If I'm a mechanic, can I no longer change my in-laws' oil?
Why is software singled out like this and, moreover, why do people think it's okay?
There have been cases of employees claiming to have written, in their spare time, some essential piece of software their employer relied on. Sometimes that was even plausible, but still, it's essentially taking your employer hostage. There have been cases of people starting competitors to their employer in their spare time; what is or is not competition is often subject to differences of opinion and is often a matter of degree. These are grey areas that are threatening to business owners, which they want to blanket-prevent with such contractual stipulations.
Software isnât singled out. Itâs exactly the same in all kinds of research, design and other creative activities.
There have been cases of people starting competitors to their employer in their spare time;
Sounds fine to me, what's the problem? Should it be illegal for an employer to look for a way to lay off employees or otherwise reduce its workforce?
what's the problem?
I think it's a pretty large problem if someone can become a colleague, quickly hoover up all the hard-won knowledge we've accumulated together over the past decade, then start a direct competitor to my employer, possibly putting me out of work.
You're thinking of large faceless companies that you have no allegiance to. I'm thinking of the two founders of the company that employs me and my two dozen colleagues, whom I feel loyal towards.
This kind of thing protects smaller companies more than larger ones.
…start a direct competitor to my employer, possibly putting me out of work.
Go work for the competitor! Also, people can already do pretty much what you describe in much of the US, where non-competes are unenforceable. To be clear, I think this kind of hyper-competitiveness is gross, and I would much rather collaborate with people to solve problems than stab them in the back (I'm a terrible capitalist). But I'm absolutely opposed to giving companies this kind of legal control over (and "protection" from) their employees.
Go work for the competitor!
Who says they want me? Also, I care for my colleagues: who says they want them as well?
where non-competes are unenforceable
Overly broad non-competes are unenforceable when used to attempt enforcement against something that is not clearly competition. They are perfectly enforceable if you start working for, or start, a direct competitor, profiting from very specific relevant knowledge.
opposed to giving companies this kind of legal control
As I see it, we don't give "the company" legal control: we effectively give humans, me and my colleagues, legal control over what new colleagues are allowed to do, in the short run, with the knowledge and experience they gain from working with us. We're not protecting some nameless company: we're protecting our livelihood.
And please note that my employer does waive rights to unrelated side projects if you ask them, waives rights to contributions to OSS, etc. Also note that non-compete restrictions are only for a year anyway.
Who says they want me? Also, I care for my colleagues: who says they want them as well?
Well then get a different job and get over it; someone produced a better product than your company. That's the whole point of capitalism!
They are perfectly enforceable if you start working for, or start, a direct competitor, profiting from very specific relevant knowledge.
Not in California, at least; it's trivially easy to Google this.
As I see it, we don't give "the company" legal control: we effectively give humans, me and my colleagues, legal control over what new colleagues are allowed to do, in the short run, with the knowledge and experience they gain from working with us.
Are you a legal party to the contract? If not, then no, it's a contract with your employer, and if it suits your employer to use it to screw you over, they probably will.
I truly hope that you work for amazing people, but you need to recognize that almost no one else does.
Even small startups routinely screw over their employees, so unless I've got a crazy amount of vested equity, I have literally zero loyalty, and that's exactly how capitalism is supposed to work: the company doesn't have to care about me, and I don't have to care about the company; we help each other out only as long as it benefits us.
Go work for the competitor?
Why would the competitor want/need the person they formerly worked with/for?
Why did the original company need the person who started the competitor? Companies need workers, and if the competitor puts the original company out of business (I was responding to the "putting me out of work" bit) then presumably it has taken on the original company's customers and will need more workers, and who better than people already familiar with the industry!
Laying off and reducing the workforce can be regulated (and are, in my non-US country). The issue with having employees start competitor products is that they benefit from an unfair advantage, and it creates a huge conflict of interest.
Modern Silicon Valley began with employees starting competitor products: https://en.wikipedia.org/wiki/Traitorous_eight
If California enforced non-compete agreements, Silicon Valley might well not have ended up existing. Non-enforcement of non-competes is believed to be one of the major factors that resulted in Silicon Valley overtaking Boston's Route 128 corridor, formerly a competitive center of technology development: https://hbr.org/2016/11/the-reason-silicon-valley-beat-out-boston-for-vc-dominance
I don't think we are talking about the same thing. While I agree that any restriction on post-employment should be banned, I don't think it is unfair for an organization to ask its employees not to work on competing products while on its payroll. These are two very different situations.
If the employee uses company IP in their product, then sure, sue them; that's totally fair. But if the employee wants to use their deep knowledge of an industry to build a better product in their free time, then it sucks for their employer, but that's capitalism. Maybe the employer should have made a better product so it would be harder for the employee to build something to compete with it. In fact, it seems like encouraging employees to compete with their employers would actually be good for consumers and the economy / society at large.
An employee working on competing products in their free time creates an unfair advantage, because the employee has access to the organization's IP to build the new product while the organization does not have access to the competing product's IP. So what's the difference between industrial espionage and employees working on competing products in their free time?
If the employee uses company IP in their product, then sure, sue them; that's totally fair.
That was literally in the comment you responded to.
Joel Spolsky wrote a piece that frames it well, I think. I don't personally find it especially persuasive, but I think it does answer the question of why software falls into a different bucket than cooking at home or working on a car under your shade tree, and why many people think it's OK.
Does this article suggest that employers view contracts as paying for an employee's time, rather than just paying for their work?
Could a contract just be "in exchange for this salary, we'd like $some_metric of work", with working hours just being something to help with management? It seems irrelevant when you came up with something, as long as you ultimately give your employer the amount of work they paid you for.
Why should an employer care about extra work being released as FOSS if they've already received the amount they paid an employee for?
EDIT: I realise now that $some_metric is probably very hard to define in terms of anything except number of hours worked, which ends up being the same problem.
Does this article suggest that employers view contracts as paying for an employee's time, rather than just paying for their work?
I didn't read it that way. It's short, though. I'd suggest reading it and forming your own impression.
Could a contract just be "in exchange for this salary, we'd like $some_metric of work", with working hours just being something to help with management? It seems irrelevant when you came up with something, as long as you ultimately give your employer the amount of work they paid you for.
I'd certainly think that's one of many possible reasonable work arrangements. I didn't link the article intending to advocate for any particular one, and I don't think its author intended to with this piece, either.
I only linked it as an answer to the question that I read in /u/lorddimwit's comment as "why is this even a thing?", because I think it's a plausible and cogent explanation of how these agreements might come to be as widespread as they are.
Why should an employer care about extra work being released as FOSS if they've already received the amount they paid an employee for?
As a general matter, I don't believe they should. One reason I've heard given for why they might is that they're afraid it will help their competition. I, once again, do not find that persuasive personally. But it is one perceived interest in the matter that might lead an employer to negotiate an agreement that precludes releasing side work without concurrence from management.
I only linked it as an answer to the question that I read in /u/lorddimwit's comment as "why is this even a thing?", because I think it's a plausible and cogent explanation of how these agreements might come to be as widespread as they are.
I think so too, and hope I didn't come across as assuming you (or the article) were advocating anything that needs to be argued!
I didn't read it that way. It's short, though. I'd suggest reading it and forming your own impression.
I'd definitely gotten confused because I completely ignored that the author is saying that the thinking can become "I don't just want to buy your 9:00-5:00 inventions. I want them all, and I'm going to pay you a nice salary to get them all". Sorry!
There is a huge difference: we're talking about creativity and invention. The company isn't hiring you to change some oil or swap some server hardware. They're hiring you to solve their problems, to be creative and think of solutions. (Which is also why I don't think it's relevant how many hours you actually coded; the result and the time you thought about it matter.) Your company doesn't exist because it's changing oil; the value is in the code (hopefully) and thus their IP.
So yes, that's why this stuff is actually different. Obviously you want to have exemptions from this kind of stuff when you do FOSS things.
I think the chef and mechanic examples are a bit different, since they're not creating intellectual property, and a book report is probably not interesting to an employer.
Maybe a closer example would be a chef employed to write recipes for a book/site. Their employer might have a problem with them creating and publishing their own recipes for free in their own time. Similarly, maybe a writer could get in trouble for independently publishing things written in their own time while employed to write for a company. I can see it happening for other IP that isn't software, although I don't know if it happens in reality.
I think the "not interesting" bit is a key point here. I have no idea what Bumble is or the scope of the company, and I speak out of frustration with these overarching "legal" restrictions, but it sounds like they are an immature organization trying to hold on to anything interesting their employees do, core to the current business or not, in case they need to pivot or find a new revenue stream.
Frankly, if a company is so fearful that a couple of technologies will make or break their company, their business model sucks. Technology != product.
Similarly, maybe a writer could get in trouble for independently publishing things written in their own time while employed to write for a company
I know of at least one online magazine's contracts which forbid exactly this. If you write for them, you publicly only write for them.
This is pretty much my (non-lawyer) understanding and a good summary, thanks.
If you find yourself in this situation, talk to a lawyer. However, I suspect that unless you have deep pockets and a willingness to litigate "is this clause enforceable" through several courts, your best chance is likely to be reaching some agreement with the company that gives them what they want whilst letting you retain control of the project, or at least a fork.
One startup refused it and even reprimanded me for asking - the management took the view that any side project was a "distraction from the main goal"
I think the legal term for this is "bunch of arsehats". I'm curious to know whether you worked for them after they started out like this?
I think the legal term for this is "bunch of arsehats".
https://www.youtube.com/watch?v=Oz8RjPAD2Jk
I'm curious to know whether you worked for them after they started out like this?
I left shortly after, for other reasons.
The way FOSS (and indeed all private-time) software development works here for employees is that, according to your contract, your employer will own everything you create, even in your private time.
Is it really that widespread? Itâs a question that we get asked by candidates but our contract is pretty clear that personal-time open source comes under the moonlighting clause (i.e. donât directly compete with your employer). If it is, we should make a bigger deal about it in recruiting.
I would think the solution is to quit, then start a new project without re-using any line of code from the old project - but I guess the lawyers thought of this too and added clauses giving them ownership of the new project too…
How about a private leaderboard for lobste.rs? Join with this code: 400344-db76bd5d.
And here's my repo: https://github.com/Scorpil/aoc2021
Just linked to last year's thread in a top level comment; that mentions
There are 2 leaderboards for this site; here are the codes:
989653-afc97283 (already full?)
400344-db76bd5d
Last year's thread: https://lobste.rs/s/3uxtgb/advent_code_2020_promotion_thread
I'm not a security expert, but I found it fascinating to read about all this ceremony. The lengths they go to were eyebrow-raising for me: the modified air-gapped laptop with no hard drive, booting from CD-ROMs stored in a tamper-proof bag in a safe, and the whole OS being reproducible. Incredible.
If you're looking for more, the Internet Assigned Numbers Authority has put recordings of all of their key ceremonies on YouTube :) https://www.youtube.com/channel/UChND9hEeJQjtLDFZ-m8U47A
On the one hand it sounds like paranoid overkill. On the other hand, the cost of it is low relative to the risk.
From the "update" at the bottom:
Go on, bag on me for being ignorant. I know what that really means.
What does that really mean?
I took it as a gender bias comment. This is a successful woman in tech who has faced harsh criticism from her peers, over the years, for being a woman in a male dominated industry.
I agreed with you since it made sense, but after reading THE ONE from the other comment, I no longer agree. It seems just a screed against internet trolls who work in positions where they never could break prod or, in this case, make database design decisions.
Her post THE ONE, which she links to in the first paragraph, should make it clearer.
That the commenter is more interested in putting someone down to make themselves feel/appear better than in actually engaging with the content of the article.
I keep saying Clojure is not a lisp. It's a productivity-focused language that uses a lot of parentheses.
People don't seem to like lisp. We keep seeing stories about tools written in CL being rewritten so coders can be hired to work on them. This might be the main reason people leave lisp, and one reason people don't start lisp projects.
Clojure somehow avoids these stories. Maybe because, from my perspective, it splits with lisp tradition while focusing on the best bits.
I keep trying to love Clojure, but I still feel happiest and most productive in Common Lisp. That said, I'd use Clojure in a heartbeat if I were operating in a Java shop, where shipping as a JAR would be a win.
Funny anecdote: at a client some years ago, my team split in two to try out two new JVM languages. As part of a data migration project, one group wrote a migration tool in Scala, and the other a validation tool in Clojure. We met up at the end of the sprint to showcase our tools.
The Clojure demo was straightforward - hands-on demo followed by a walk through the code.
The Scala group started their walkthrough with a primer on type theory o_O
I don't think they wandered off, necessarily - one just had to understand a fair bit about the subject before reading the code.
It'd have been the same with Lisp macros if we'd needed them on the Clojure project.
Have you tried programming in ABCL? I have a bit. With much respect to the maintainers, it's pretty slow and not incredibly mature.
Because Clojure was seen as a more natural fit for the JVM than ABCL. I suggested it at the time but the team chose not to evaluate it.
What exactly makes it not a Lisp in your book? It's very different from Scheme (more CL-like things like nil punning and the namespace/package system) and very different from CL (more Scheme-like things like a functional focus and a single namespace for functions and variables), but that's valid - Scheme and CL are different enough from each other to leave enough design space for another Lisp-like between the two.
I read them as saying "Clojure is not a Lisp" with the goal of not dissuading people from trying/using Clojure, which they fear would otherwise happen, since "people don't seem to like Lisp".
To be clear, I also don't think real Lisp people like Clojure. It doesn't "feel" the same. Even the attempt to put Clojure on Racket doesn't feel the same, to me, as Clojure/ClojureScript. As such, focusing on Clojure as a Lisp misses the boat. It isn't the same sort of language. Sure it has powerful macros, sure it has lists, and sure it overloads on the parentheses… but that's not all Lisp languages are. R.H., author of Clojure, has said he tried to split with Lisp tradition on several things… because not everything that's Lisp is in Clojure.
A practical NEGATIVE example: I have errors in a Clojure function I'm working on. I can't (currently in my workflow; surely there is a library…) just stop execution there and investigate the run stack at that point in time. I want to. I could do that in a Lisp. Clojure isn't a Lisp.
I could do that in a Lisp.
As far as I know, CL is the only lisp which has restarts as part of the language; are you saying Racket, LFE, Emacs, etc are also not lisps because they donât have restarts?
I've heard a lot of shaky gatekeeping arguments from CL fans over the years but I think this is a new one.
I've heard a lot of shaky gatekeeping arguments from CL fans over the years but I think this is a new one.
I mean, I like the conditions & restarts system, but I wouldn't say lack of it makes Racket, Clojure, etc. not Lisps. I'd just argue that it makes CL one that I prefer using.
I've heard a lot of shaky gatekeeping arguments from CL fans over the years but I think this is a new one.
I'm not a CL fan. Just putting that out there. I'm a major Clojure fan.
I know you say you need to look at languages as a whole rather than just obsessing over the cons cell, so I won't discuss that further.
I also know you work on other lisps that try to be as fun to write as Clojure.
My primary purpose in saying Clojure is not a Lisp is that I never get anywhere near the same "feeling" from other Lisps as from Clojure (and Fennel, yes). Meanwhile, Lisps have a bad rap, to the point that saying "it's written in a Lisp" leads many people to turn about and go "NEXT!" Rather than fighting the conception of Lisps as powerful yet saddled with a bad rap, I prefer to say Clojure is not a Lisp, since it doesn't fail in the way other Lisps are perceived to have failed.
It's easy to get people to write Clojure. It's easy to get people to read Clojure. Yes, there are some performance problems with idiomatic Clojure, but you can fix all of those if they become actual problems. I still want to see it succeed in more places (so I can then use it for work in more places).
I enjoyed this quite a bit when I first saw it, and I still do kinda enjoy the original - it was never meant to be taken too seriously, of course, and it succeeds as a joke - but since then, the word "wat" has become a trigger of rage for me.
These things tend to have a reasonable explanation if you take the time to understand why it does what it does. They tend to be predictable consequences of actually generally useful rules, just used in a different way than originally intended and coming out a bit silly. You might laugh but you can also be educated.
But do you know how often I see people taking the time to try to understand the actual why? Not nearly as often as I see people saying "WAT", dismissing things, and calling the designers all kinds of unkind things. And that attitude is both useless and annoying.
These things tend to have a reasonable explanation if you take the time to understand why it does what it does. They tend to be predictable consequences of actually generally useful rules, just used in a different way than originally intended and coming out a bit silly.
For me the most important takeaway is that rules might make sense by themselves, but you have to consider them in the bigger picture, as part of a whole. When you design something, you must keep this in mind to avoid bringing about a huge mess in the completed system.
Exactly. It is generally underappreciated how incredibly hard language design is. The cases Bernhardt points out are genuine design mistakes and not just the unfortunate side effects of otherwise reasonable decisions.
That's why there are very few languages that don't suffer from ugly corner cases which don't fit into the whole or turn out to have absurd oddities. Programming languages are different in this respect, contrary to the "well, does it really matter?" mindset.
I don't know. What I always think about JS is that R4RS Scheme existed at the time JS was created, and the world would be tremendously better off if they had just used that as the scripting system. Scheme isn't perfect, but it is much more regular and comprehensible than JS.
I think one has to remember the context in which JavaScript was created; I'm guessing the main use case was to show some funny "alert" pop-ups here and there.
In that context a lot of the design decisions start to make sense: avoid crashes whenever possible and have a "do what I mean" approach to type coercion.
But yeah, I agree; we would've all been better off with a Scheme as the substrate for maybe 80% of today's end-user applications. OTOH, if someone had told Mozilla how successful JS would become, we could very well have ended up with some bloated, Java-like design-by-committee monstrosity instead.
I don't think I know a single (nontrivial - thinking of brainfuck/assembly, maybe) programming language with no "unexpected behaviour".
But some just have more of it than others. Haskell, for example, has a lot of these unexpected behaviours, but you tend not to fall into those corner cases by mistake, while in JavaScript and Perl it is more common to see such "surprise behaviour" in the wild.
Another lesson I gather from this talk is that you should stick as much as possible to well-known territory if you want to predict the behaviour of your program. In particular, try not to play too much with "auto coercion of types". If a function expects a string, I tend not to give it a random object, even if, when I tried it, it performed string coercion that was most of the time what I would expect.
Well, there are several non-trivial languages that try hard not to surprise you. One should also distinguish between "unexpected behaviour" and convenience features that turn out to be counterproductive by producing edge cases. This is a general problem with many dynamically typed languages, especially recent inventions: auto-coercion removes opportunities for error checking (and run-time checks are what make dynamically typed languages type-safe). By automatic conversion of value types, and also by using catch-all values like the pervasive use of maps in (say) Clojure, you effectively end up with untyped data. If a function expects a string, give it a string. The coercion might save some typing in the REPL, but hides bugs in production code.
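To make the "coercion hides bugs" point concrete, here's a minimal JavaScript sketch (the function names are made up for illustration, not from any of the linked material):

```javascript
// Implicit coercion silently accepts the wrong type...
function greet(name) {
  return "Hello, " + name + "!";
}
greet({ first: "Ada" }); // "Hello, [object Object]!" - no error, just garbage
greet(undefined);        // "Hello, undefined!"

// ...whereas an explicit run-time check surfaces the bug at the call site:
function greetChecked(name) {
  if (typeof name !== "string") {
    throw new TypeError("expected a string, got " + typeof name);
  }
  return "Hello, " + name + "!";
}
```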
In JavaScript, I would call the overloading of the + operator and the optional-semicolon rules unforced errors in the language, and those propagate through to a few other places. Visual Basic used & for concatenation, and it was very much a contemporary of JS when it was new, but they surely just copied Java's design (which I still think is a mistake, but less so given Java's type system).
Anyway, the rest of the things shown in the talk I actually think are pretty useful and not much of a problem when combined. The NaNNaN Batman one is just directly useful - it converts a thing that is not a number to a numeric type, so NaN is a reasonable return, then it converts to string to join them, which is again reasonable.
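For anyone who hasn't seen the talk, that bit goes roughly like this (my reconstruction):

```javascript
// Subtraction forces numeric conversion; "wat" isn't a number, so NaN.
"wat" - 1; // NaN

// Array(16) has 16 empty slots; join() glues them together with
// 15 copies of the separator, after converting it to the string "NaN".
Array(16).join("wat" - 1) + " Batman!";
// => "NaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaN Batman!"
```

Each step follows an ordinary rule; only the combination looks absurd.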
People like to hate on == vs === but… == is just more useful. In a dynamic, weakly typed language, things get mixed. You prompt for a number from the user and technically it is a string, but you want to compare it with numbers. So that's pretty useful. Then if you don't want that, you can coerce or be more specific, and they made === as a shortcut for that. This is pretty reasonable. And the [object Object] thing comes from these generally useful conversions.
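A quick sketch of both sides of that trade-off:

```javascript
// The useful case: user input is always a string.
const input = "5"; // what prompt("pick a number") would return
input == 5;  // true  - == coerces the string to a number first
input === 5; // false - === compares type and value, no coercion

// The trap: the coercion rules of == aren't even transitive.
"" == 0;   // true
"0" == 0;  // true
"" == "0"; // false
```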
== vs ===
It definitely makes sense to have multiple comparison operators. Lisp has = (numeric equality), eq (object identity), eql (union of the previous two), equal (structural equality).
The problem is that JS comes from a context (C) in which == is the "default" comparison operator. And since === is just == plus a bit more typing, it is difficult to be intentional about which comparison you choose to make.
Well, a lot of these things boil down to implicit type coercion and strange results due to mismatched intuitive expectations. It's also been shown time and again (especially in PHP) that implicit type coercions are lurking security problems, mostly because intuition does not match reality (especially regarding == and the bizarre coercion rules). So perhaps the underlying issue of most of the WATs in here simply is that implicit type coercion should be avoided as much as possible in languages, because it results in difficult-to-predict behaviour in code.
Yeah, I prefer a stronger, static type system, and that's my first choice in languages. But if it is dynamically typed… I prefer it weaker, with this implicit coercion. It is absurd to me to get a runtime error when you do something like var a = prompt("number"); a - whatever;
A compile-time error, sure. But a runtime one? What a pain, just make it work.
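To make that concrete (assuming the user types 41):

```javascript
var a = prompt("number"); // the user enters 41, so a is the string "41"
a - 1; // 40    - subtraction has no string meaning, so "41" is coerced to a number
a + 1; // "411" - but + means concatenation when either side is a string
```

The asymmetry between - and + here is exactly the overloaded-+ problem mentioned above.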
Lots of dynamic languages do this (e.g. Python, Ruby, all Lisp dialects that spring to mind), and IME it's actually helpful in catching bugs early. And like I said, it prevents security issues due to type confusion.
Yeah, I think that this talk was well-intentioned enough, but I definitely think that programmers suffer from too much "noping" and too little appreciation for the complexity that goes into real-world designs, and that this talk was a contributor… or maybe just a leading indicator.
There was a good talk along these lines a couple of years ago, explaining why JavaScript behaves the way it does in those scenarios, and then presenting similar "WAT"s from other languages and explaining their origins. Taking the attitude of "OK, this seems bizarre and funny, but let's not just point and laugh; let's also figure out why it's actually fairly sensible in context".
Sadly I can't find it now, though I do remember the person who delivered it was associated with Ruby (maybe this rings a bell for somebody else).
Isn't the linked talk exactly the talk you're thinking about? Gary is "associated with" Ruby and does give examples from other languages as well.
No. I was thinking of this, linked else-thread.
While things might have an explanation, I do strongly prefer systems and languages that stick to the principle of least surprise: if your standard library has a function called "max" that returns the maximum value in an array and a function called "min" that returns the position of the minimum element instead, you are making your language less discoverable and putting a lot of unnecessary cognitive load on the user.
As someone who has been programming for over 20 years and is now a CTO of a small company that uses your average stack of some 5 programming languages on a regular basis, I don't want to learn why anymore; I just want to use the functionality and be productive. My mind is cluttered with useless trivia about inconsistent APIs I learned 20, 15, 10 years ago; the last thing I need is learning more of that.
The article says:
A "final encoding" is one where you encode information by how you intend to use it
This tends to simplify your program if you know in advance how the information will be used
It's worth noting that "knowing what you'll do in advance of writing the program" is not an inherent property of a final encoding; a final encoding can still be generic over how the data is used.
That's the essence of tagless final style. I have some examples in http://catern.com/tfs.html
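The linked article uses its own examples; here's a minimal JavaScript sketch of the initial-vs-final distinction (the names are mine, not the article's):

```javascript
// Initial encoding: build a data structure now, interpret it later.
const initialTerm = { op: "add", l: { op: "lit", n: 1 }, r: { op: "lit", n: 2 } };

// Final encoding: a term is a function over a set of operations, so it
// stays generic over how it is used - just pass a different interpreter.
const lit = (n) => (ops) => ops.lit(n);
const add = (l, r) => (ops) => ops.add(l(ops), r(ops));

const term = add(lit(1), lit(2));

const evaluate = { lit: (n) => n, add: (a, b) => a + b };                 // use: compute
const render = { lit: (n) => String(n), add: (a, b) => `(${a} + ${b})` }; // use: pretty-print

term(evaluate); // 3
term(render);   // "(1 + 2)"
```

The same term runs under either interpreter, which is the sense in which the encoding stays generic over its use.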
Three main things identified as improvements:
Note: Iâm the author of the linked article.
premature optimization is the root of all evil
I do not think this can be the takeaway.
There is no way to not prematurely optimize if you are trying for anything. You read the documents and choose the options that seem nicest.
I don't agree that was a case of premature optimisation. You had a parameter you were choosing the value for. You considered the options and chose the one that made sense.
I think premature optimisation would be if you spent time analysing just that bit and improving it (how about a kernel timer that causes thread activation to do the work?) without any evidence that this area needs attention/improvement.
Sounds exactly like what you should run into and learn from. Makes you understand what TCP does and what application protocols do and why. Someone like me can't learn this by reading about it and has to run exactly this gauntlet.
The article by Aaronson is interesting, but when I read about the work he's rebutting I thought: ah, the old academic clickbait trick of naming your boring cost function something interesting so as to get more reads.
Also, I have sat in on some of Koch's lectures. My main question in all of them is how someone brought up in the American heartland can have such a pronounced accent. I then heard rumors that he cultivated it, also as a form of academic clickbait.
I do have a quibble. Aaronson states, as do many researchers:
The most obvious thing a consciousness theory could do is to explain why consciousness exists
I think there is one step before this. I think the most important thing a theory of consciousness should do is first define what consciousness is and isn't, before it puts the rest of its clothes on.
I agree with your quibble. In fact, I suspect that's the real meat of the problem here: there is a concept floating around that we call consciousness, and this particular attempt to nail something concrete to it doesn't seem to hit all that close. Although I would never have known if I hadn't read this takedown, and I agree with Aaronson that a bad attempt that follows the rules of science is better than a scientist writing a non-scientific book of poetry about what consciousness is, implicitly appealing to their own expertise.
Consciousness is an internal hallucinatory synesthetic experience. While it is impossible to rely on reports of internal worlds, many humans report consciousness, and so it is worth investigating in a manner similar to pain, which is also intangible but associated with reports of internal worlds.
A theory of quantum mechanics doesn't begin by defining what QM "is". We would never have gotten anywhere if people had gotten stuck on that point.
Definition is particularly important when talking about "consciousness" because "consciousness" can refer to a wide variety of related phenomena, many of which have totally disjoint implications.
Etc. I'm sure we could come up with more given time.
An important thing Newton did was to define what inertia meant in his system, which let others build off of it. We need that for consciousness too. I see tons of bad writing about consciousness which just jumps from "not asleep" to "whatever it is that medium-size animals are doing" to "adult-like reasoning" with no consciousness (heh) of the slippage.
An important thing Newton did was to define what inertia meant in his system
Yes, and that's different from "defining what inertia is". A word only has a meaning in a context. Subsequently, his definition and use of "inertia" turned out to be so useful that it drowned out most other definitions. Not because it was somehow more "correct", just because it was more useful.
We need that for consciousness too.
I think you need to taboo the word "consciousness" if you want to get anywhere. We'd be better off with different words, or even the short explanatory sentences you just gave, for whatever specifically you are talking about when you would use that word.
But even that doesn't rid you of the "how do you define a game" problem. Take "being awake as opposed to asleep". There is a continuum of states between "awake" and "asleep" (sleepwalking, drugs, psychiatric patients; the list of confounding phenomena is endless). Trying to define when someone should be considered "awake" or "asleep" is hopeless and useless, as no succinct definition will ever match our lived experience of whether we would consider someone "awake" or "asleep".
The only thing separating science from nonsense is the clear definition of what we are measuring.
Quantum Mechanics is extreme in that what we measure is extremely well defined but the interpretation of these measurements is difficult.
The field of "consciousness" studies is extreme in that it is full of interpretations but extremely fluffy on what we measure.
Surprisingly, the two topics are directly connected by the Free Will theorem. Start with the KS lemma and build up to Conway's lectures. To sum up the connection: is a particle conscious when it makes a choice in a quantum context?
Always first check which files find returns before operating on them. 🤷