I sensed that the author is pushing his own agenda, so I went to see what’s going on.
This is actually post-rationalization: although it gives a good rationale, that is not really what’s going on. What is actually going on is that modulus and division are connected.
The way they are connected can be described as: a = (a / b) * b + a % b.
Division gives a different result depending on rounding. The C99 spec says that rounding goes toward zero, but we have also had floor-division implementations and systems where you can choose the rounding mode.
With floor division, 19 / -12 gives you -2; that is consistent only when the modulo operator gives you -5. With round-toward-zero division, 19 / -12 gives you -1, so 19 % -12 must give you 7.
On positive numbers, rounding toward zero and floor rounding give the same results.
I also checked the x86 spec. It’s super confusing on this point. If the online debugger I tried was correct, then the x86 idiv instruction is doing floor division.
Forgive my extreme mathematical naivety, but a = (a / b) * b + a % b doesn’t make much sense to me. Given (a / b) * b will always equal a, doesn’t this imply that a % b is always 0?
The division operator in this case is not division in the algebraic sense, so it does not cancel with the multiplication to give (a / b) * b = a (for b != 0). Otherwise your reasoning would be correct.
To make this super clear, let’s look at 19 / -12. Real-number division would give you -1.58… But what we actually have is division rounded toward negative infinity (floor division) or division rounded toward zero, and it’s not always clear which one it is. Floor division returns -2 and division rounded toward zero returns -1.
The modulus is connected by the rule that I gave earlier. Therefore 19 = q * -12 + (19 % -12). If you plug in q = -2 here, you’ll get 19 % -12 = -5, but if you plug in q = -1 then you get 19 % -12 = 7.
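To make the identity concrete, here’s a quick sketch (Python is just used as a calculator here; its `//` already floors, and the `trunc_divmod` helper is my own, mimicking C99-style truncation):

```python
# Two integer-division conventions, and the remainder each one forces
# through the identity a == q * b + r.

def floor_divmod(a, b):
    q = a // b            # Python's // rounds toward negative infinity
    return q, a - q * b

def trunc_divmod(a, b):
    q = int(a / b)        # int() truncates toward zero, like C99's /
    return q, a - q * b

print(floor_divmod(19, -12))   # (-2, -5)
print(trunc_divmod(19, -12))   # (-1, 7)
print(floor_divmod(19, 12))    # (1, 7) -- both agree on positive operands
print(trunc_divmod(19, 12))    # (1, 7)
```

Both pairs satisfy a == q * b + r; they only disagree on which (q, r) to pick when the operands’ signs differ.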
Whatever intuition there was gets lost in the constraint of sticking to integers or approximate numbers, so it’s preferable to always treat modulus as if it were connected to floor division, because the floor-division modulus carries more information than the truncating remainder. But this does not hold on every system, because hardware and language designers are fallible just like everybody else.
We don’t want to get submissions for every CVE and, if we do get CVEs, we probably want them tagged security.
while I agree with you in this case, I don’t particularly like the “I speak for everyone” stance you seem to be taking here.
This one is somewhat notable for being the first (?) RCE in Rust, a very safety-focused language. However, the CVE entry itself is almost useless, and the previously-linked blog post (mentioned by @Freaky) is a much better article to link and discuss.
Second. There was a security vulnerability affecting rustdoc plugins.
Do you think an additional CVE tag would make sense? Given the upvotes, some people seem to be interested.
Yeah, I’d rather not have them at all. Maybe a detailed, technical write-up of the discovery, implementation, and mitigation of a new class of vulnerability with wide impact; Meltdown/Spectre or return-oriented programming are examples. Then we’d see only the deep stuff here, with vulnerability-listing sites carrying the routine stuff for people who need it.
There are a lot of potential-RCE bugs (type confusion, use-after-free, buffer overflow writes); if there were a Lobsters thread for each of them, there’d be no room for anything else.
Here’s a short list from the past year or two, from one source: https://bugs.chromium.org/p/oss-fuzz/issues/list?can=1&q=Type%3DBug-Security+label%3AStability-Memory-AddressSanitizer&sort=-modified&colspec=ID+Type+Component+Status+Library+Reported+Owner+Summary+Modified&cells=ids
I’m fully aware of that. What I was commenting on was Rust having one of these RCE-type bugs, which, to me, is worthy of discussion. I think it’s weird to police these like they’re some kind of existential threat to the community, especially given how much enlightenment can be gained by discussing their individual circumstances.
But that’s not Rust, the perfect language that is supposed to save the world from security vulnerabilities.
Rust is not and never claimed to be perfect. On the other hand, Rust is and claims to be better than C++ with respect to security vulnerabilities.
It claims a few things - from the rustlang website:
Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.
None of those claims are really true.
It’s clearly not fast enough if you need unsafe to get real performance - which is the reason this CVE was possible.
It’s clearly not preventing segfaults - which this CVE shows.
It also can’t prevent deadlocks, so it is not guaranteeing thread safety.
I like rustlang but the claims it makes are mostly incorrect or overblown.
Unsafe Rust is part of Rust. I grant you that “safe Rust is blazingly fast” may not be “really true”.
Rust prevents segfaults. It just does not prevent all segfaults. For example, a DOM fuzzer was run on Chrome and Firefox and found segfaults, but the same fuzzer run for the same time on Servo found none.
I grant you deadlocks. But “Rust prevents data races” is true.
I’m just going to link my previous commentary: https://lobste.rs/s/7b0gab/how_rust_s_standard_library_was#c_njpoza
I passed this around a few friends in my professional network who have used AWS extensively in the past, and some of whom still use it now. Results ranged from 3/20 (me, anti-bragging rights, lol) to 7/20. AWS’s visual and interface design is truly in a league of its own in terms of utter hostility to users.
I’m an operations professional who set up the AWS infrastructure for a “hip, well-funded startup,” and I’m here to join the 3/20 club.
I did get the color of the Node.js SDK right, which I’m proud of, having never noticed the logo consciously.
OK this site has THE MOST obnoxious “HEY AGREE TO ACCEPT OUR COOKIES, DAMMIT!” mechanism EVER.
I’m partially blind, and all I saw when surfing to this article was that the screen was way dimmed and nothing I did seemed to fix it.
Then, at the VERY top of the screen, in TINY text was the !@#$ thing to accept cookies.
https://github.com/ryanbr/fanboy-adblock/blob/master/fanboy-cookiemonster.txt
Integrated in at least uBlock Origin as “Fanboy’s Cookiemonster List”.
hey at least they asked right?
cause it’s totally impossible to just set an option in your web browser to not save cookies (…well, it used to be possible, but it’s no use anymore because then you can never get past these nag boxes)
That’s like saying “They beat me down, tied me up and then asked if I’d like to come with them.” :)
An IBM/Lenovo SK-8835 ThinkPad desktop keyboard. I’ve also used the similar, newer SK-8855, and the SK-8845, which is the same but has no numpad.
I use it because I’ve used Thinkpads since high school; I am highly acclimated to the touch and layout of the keyboard, and I use the trackpoint regularly (even though, at my desk, I have a separate mouse—a Logitech Trackman Wheel, model T-BB18—which I use primarily).
It has zero programmable features and is in all ways other than physical layout unremarkable. It supports an adequate range of hotkeys I’ve configured with xbindkeys. I find the comfort of using the keyboard far more important than the number of bells and whistles it comes with.
(As an aside, I’ve never liked the feel of any mechanical keyswitch I’ve tried. My preferred force profile is high initial resistance followed by collapse with a soft landing—in other words, exactly what good rubber-dome switches provide—which I have never seen in a mechanical switch. Buckling-spring switches have the right force profile but are in all other ways far too harsh. Topre switches are better, but I still significantly prefer my high-quality laptop keyboard rubber-dome scissor switches.)
I use the SK-8845. It’s a pretty good keyboard, but I still miss the one in my old ThinkPad T500. The SK-8845 has some flaws:
I’ve been considering converting a ThinkPad T500 keyboard/touchpad to USB for a while, but never quite had the motivation to do it.
The internet search experience suffered a setback when the major browsers abandoned the separate search box for the combined address/search box. Only Firefox retains this feature, where your default search engine is the first choice in a list.
In the days before Alta Vista became better than Yahoo, and then Google crushed all other search options, there were meta-search engines that combined, filtered, and formatted results from several search engines of your choice. IIRC Magellan was one of these. I’ve toyed with reviving this concept for my own use. Google and Bing are pretty similar, but not identical, and provide different results depending on whether you are signed in or anonymous. DDG usually provides different enough results to be important. There’s a lot of room for innovation in meta-search.
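As a toy sketch of the merging step of that idea (everything below is made up for illustration; a real meta-search would fetch live results from each engine), a simple reciprocal-rank fusion might look like:

```python
# Merge ranked result lists from several engines: de-duplicate by URL
# and score each URL by the sum of its reciprocal ranks.

def merge_results(*ranked_lists):
    scores = {}
    for results in ranked_lists:
        for rank, url in enumerate(results):
            # Lower rank = better; accumulate 1/(rank+1) across engines
            scores[url] = scores.get(url, 0.0) + 1.0 / (rank + 1)
    # Highest combined score first
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical top-3 results from three engines for the same query
engine_a = ["a.com", "b.com", "c.com"]
engine_b = ["b.com", "a.com", "d.com"]
engine_c = ["d.com", "b.com", "e.com"]

print(merge_results(engine_a, engine_b, engine_c))
# ['b.com', 'a.com', 'd.com', 'c.com', 'e.com']
```

Reciprocal-rank fusion is just one scoring choice; the filtering and formatting the old meta-engines did would layer on top of this.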
Finally there are still all sorts of specialized search options. In this category I would start with Amazon and Wikipedia. There are also sites like noodle.com, specializing in education related searches.
DuckDuckGo is my go-to search.
It is simple and doesn’t have the Google bloat, and those smart searches, like generating an md5 hash or doing number-system conversions right in the query, are pretty cool.
DuckDuckGo owns; it’s my configured default search on all devices. When I need something specific from Google, I use the bang feature for Google: !g.
I never knew that bang was available, my word. Is there a !b for Bing too? (Update: there is, wow)
I don’t know. I switched to DDG at home and I’ve always been able to find what I’m looking for. I still use Google at work so I’m able to compare and contrast. About the only place where Google is better (in my opinion) is in image search, and that may be due to how Google displays them vs. DDG.
Here’s a concrete example. Let’s say I’m trying to remember the name of the project that integrates Rust with Elixir NIFs.
First result for me for the query “elixir rust” on Google is the project in question: https://github.com/hansihe/rustler
After scrolling through three pages of DDG results, that project doesn’t seem to be listed or referenced at all, and there are several Japanese- and Chinese-language results despite the fact that I have my location set to “United States”. I will forgive all the results about guitar strings, since DDG doesn’t have tracking data to determine that I’m probably not interested in those (although the word “rust” in those results appears in the term “anti-rust”, which seems like a bad match for my query).
That query is admittedly obtuse, but that’s what I’ve become accustomed to using with Google. These results feel generally characteristic of my experience using DDG. I end up using the !g command a lot rather than trying to figure out how to reframe my query in a way that DDG will understand.
I think you did that wrong. You were specifically interested in NIFs but left that key word off. Even the Lobsters search engine, which is often really off for me, gets to Rustler in the first search when I use these: elixir rust nif. Typing it into DDG like this gives me Rustler on page 1, result 2.
Just remember these high-volume, low-cost engines are pretty dumb when not backed by a company the size of Google or Microsoft. You gotta tell them the words most likely to appear together; “NIF” was critical in that search. Also remember that you can put quotes around a word if you know for sure it will appear, and a minus in front of one to eliminate bogus results. Put “site:” in front if you’re pretty sure which place or places you might have seen it. Another trick is thinking of other ways to say something that authors might use. These tricks from 1990s-to-early-2000s searching get me the easy finds I submit here.
I disagree that “NIF” was essential to that query. There are a fair number of articles and forum posts on Google about the Rustler library. It’s one of the primary contexts in which those two languages would be discussed together, yet DDG has only one of those results as far as I can see. Why? Even if I wasn’t looking for Rustler specifically, I should see discussions of how those two languages can be integrated when I search for them together.
There are a fair number of pages where Elixir and Rust show up without Rustler, too, especially all the posts about new languages. NIF is definitely a keyword, because you want a NIF library specifically instead of a page about Rust and Elixir without NIFs. It’s a credit to Google’s algorithms that it can make the extra connection to Rustler and push it to the top.
That doesn’t mean I expect it or any other search engine to be that smart. So, I still put every key word in to get consistently accurate results. Out of curiosity, I ran your keywords to see what they produce. The results at the top suck. DuckDuckGo is usually way better than that in my daily use. However, instead of three pages in, DuckDuckGo has Rustler on page 1, result 6. It takes about 1 second after hitting enter to get to it. Maybe your search was bad luck or something.
I did exactly that search and found it at the 5th position.
While “elixir rust github” put it at 1st position. Maybe you have some filters? I have it set to “All Regions”.
Google has so many repeated results for me that I feel they have worse quality for most of my queries than ddg or startpage. Maybe I’ve done something wrong and gotten myself into a weird bubble, but these days I find myself using Google less and less.
Guess so. I have been using it at uni for a long time, though, and gotten at least what I needed.
But I admit that googs has more in their indexes.
Searx is a fairly nice meta search engine.
Finally there are still all sorts of specialized search options. In this category I would start with Amazon and Wikipedia.
DuckDuckGo has a feature called “bangs” that lets you access them. Overview here. Even if not using DDG, their list might be a nice reference for what to include in a new search engine.
I thought that was clear. What I like about the old-style dedicated search box is that it is so easy to switch between search engines.
I believe that you can use multiple search engines in an omnibar by assigning each search engine a keyword, and typing that keyword (and then space) before your search.
With keyword searching (a feature I first used in Opera, and which is definitely present in Firefox; I can’t speak to any other browsers), it’s “so easy” to switch between search engines—in fact, far easier than with a separate search box. I type “g nephropidae” to search Google, or “w nephropidae” for Wikipedia, “i nephropidae” for image search, or even “deb nephropidae” for Debian package search (there’s no results for that one).
This is not completely obvious from the user experience. Without visual cues, much available functionality is effectively hidden. You must have either taken the initiative to research this, been told by someone, or stumbled upon it some other way. This also effectively requires you to have CLI-like commands memorized, the exact opposite of what GUIs purport to do. And adding new search engines? That’s non-obvious.
1900: people going around on horses, public lighting using gas.
1960: cars, jet and nuclear powered airplanes, satellites, semiconductors, computers with LISP and COBOL compilers, antibiotics, fiber optics, nuclear fusion experiments (tokamak)
2020 - another 60 years later, and what do we really have to show?
Compared to “commonplace” things like cars and antibiotics? Internet, GPS, maglevs, a vast array of surgical techniques, the absence of smallpox…
Compared to “works but government and academia only” things like satellites and compilers? Hololens, quantum computers, drones, railguns, graphene, carbon nanotubes, metamaterials…
Compared to “wildly experimental and probably won’t ever happen” things like tokamak and nuclear airplanes? Probably a lot of classified shit. Antimatter experiments at LHC. Arguably a lot of work with AI
Maglevs were invented in the 1950s and first operated in the 1970s. I also don’t have anything made from graphene, or know anyone who knows anyone owning a graphene artefact.
More importantly, none of that is imagination-shattering from a 1960s point of view. We do not have things mid-century people couldn’t come up with.
More importantly, none of that is imagination-shattering from a 1960s point of view. We do not have things mid-century people couldn’t come up with.
Antibiotics, heavier-than-air flight, cars, and computers (if you count Jacquard looms) were all demonstrated before the 1900s. They weren’t imagination-shattering from an 1890s point of view.
Even the internet isn’t imagination-shattering from an 1890s point of view.
Antibiotics, heavier-than-air flight, and a programmable computer were not demonstrated before the 1900s.
Do any of these look close to our modern conceptions of these things? Not really. But it shows that the evolution from first demonstrations of an idea to widespread use of a polished version takes time.
There’s a huge difference between observation of mold and a concept of antibiotics, no matter how trivial that sounds with hindsight.
The “uncontrolled hop” does not qualify as a flight, except in the most trivial sense.
The loom is not a computer, but I’d love to see a fizzbuzz with Jacquard patterns to prove me wrong.
It still means that all of the “imagination-shattering” stuff of the 1960s had precedents more than half a century old. We do not have things mid-century people could not have come up with. They did not have things 1800s people could not have come up with, so we shouldn’t be thinking that our era is particularly barren.
I think it is reasonable to say that the reworking of daily life has slowed.
The stove, the refrigerator and the car changed the routine of life tremendously.
The computer might be more impressive by any number of measures but it didn’t rework daily life so much as add another layer on top of ordinary life. We still must cook meals and drive around.
The linear extension of the car and the stove would be the auto-chef and the flying/auto-driving car.
Both things are still further off than the press sometimes claims, but they seem a bit closer than in 2012. However, the automation offered by externally available power, which began in the 1800s, has definitely reached a point of diminishing returns.
We may experience further progress through computers, AI, and such. But this seems to be hampered by a “complexity barrier”: delivering the same amount of daily-life automation that earlier technologies got from external power now requires systems that are much more computationally complex. Folding towels really does turn out to be the hard part of doing laundry, etc., and even with vast advances in computational ability, we may still be at diminishing returns.
There have been significant advances since then (for instance, in medical treatments like cancer therapies and surgery—life expectancy in the US has risen from 70 to 79 since 1960), but nothing revolutionary, that would seem remotely as magical as the developments across the first half of the century.
My first suggestion would be to install VirtualBox, put your distro of choice inside that, and then run it full screen most of the time.
You could also try the Windows 10 Linux stuff. I haven’t tried it (because I don’t use windows) but those who do say it is pretty great.
Cygwin would be a very, very remote third. I used it once in $corporate_office_job and the best thing I could say about it was that it was better than nothing.
Worst case, if your manager can’t or won’t provide you with the tools you need to do your job, you need to move on. Never stay long in a job you don’t love, that’s how you lose your soul.
My first suggestion would be to install VirtualBox, put your distro of choice inside that, and then run it full screen most of the time.
This is my life right now. It is decidedly second class; all the corporate-mandated bloatware is still there wasting utterly ludicrous amounts of memory and CPU time, but at least I don’t have to look at it and I can use a decent window manager and terminal environment. I certainly consider this preferable to the Bash-on-Windows features (and far, far better than Cygwin).
That was my choice for years, too. Most of the time, I ssh’d into that machine using PuTTY or another terminal emulator. They are not good, but you can get the job done. SSH’ing in also circumvents any input lag. Plus side: suspend and resume worked flawlessly with Windows.
Anyways, I switched jobs since then. Doing Linux only now. Macs serve as ridiculously expensive SSH terminals now. No more Windows.
Have you noticed significant input lag when running linux fullscreen? Whenever I’ve tried it it’s been too laggy to be usable, but I was running on an AMD FX 8350.
My 2 cents - a few years ago I had a powerful laptop (one of the WS Lenovos, maybe 16-32GB RAM, i7, etc.) and I tried VMWare (paid edition - company paid for it), Microsoft Hyper-V, and VirtualBox. I could never get rid of, or stop being bothered by, input lag.
Can’t install VirtualBox. This is a workspace, think something like remote desktop. See above for a link.
You could also try the Windows 10 Linux stuff. I haven’t tried it (because I don’t use windows) but those who do say it is pretty great.
WSL is good, but not great. I’ve been using it semi-seriously on my home machine for the past year and a half and it’s better than it was, but I’m consistently disappointed in all the terminal emulators. The only setup I’m happy with is running urxvt on Xming. You will be disappointed in file system speed, but that seems to be a Windows thing regardless of WSL.
Have you tried ConEmu? https://conemu.github.io/
I suspect one of the fundamental problems at play here is the fact that many of these tools want to be able to embed things like CMD.EXE or PowerShell and don’t have the native characteristics we associate with UNIX terminals.
Possibly, but I just found ConEmu to be hideously ugly. Personal preference thing, really. Other than its grotesque UI, it seems like a capable terminal emulator.
Ah. Yeah. I’m not so concerned about that :) When you’re trying to make a home in the malarial swamps, first you ensure that you have shelter, then you worry about whether the drapes match the tablecloth :)
Nice article. I must admit that I am a systemd fan. I much prefer it to the soup of raw text in rc.d folders. Finally, an init system for the 1990s.
I’ve never had a problem doing anything with systemd myself - I think a lot of the hate towards it stems from the attitude of the project owners, and how they don’t make any effort to cooperate with other projects (most notably, IMO, the Linux kernel folks). Here’s a couple of interesting mailing list messages that demonstrate that:
Exactly.
I was initially skeptical about the debuggability of a systemd unit, but the documentation covers things in great depth, and I’m a convert to the technical merits. Declarative service files, particularly when you use a ‘drop-in’, are a definite step up from the shell scripts of sysvinit.
The way the project tries to gobble up /everything/ is a concern though, given their interactions (or lack thereof) with other parts of the community.
My impression is that the resistance to systemd stems from it not being unixy. Not being Debiany, even.
I use for i in ..., sed, grep, awk, find, kill -SIGHUP, lsof, inotify, tee, and tr all damned day to manage my system, and systemd has left me blind and toothless.
I’m still working on my LFS-based replacement for my various Debian desktops, vms, and laptop.
Declarative service files, particularly when you use a ‘drop-in’, are a definite step up from the shell scripts of sysvinit.
I’ve never found “systemd vs sysvinit shell scripts” to be a particularly compelling argument. “Don’t use sysvinit shell scripts” is a perfectly fine argument, but doesn’t say much about systemd. There are loads of init systems out there, and it seemed to me that systemd was never in competition with sysvinit scripts, it was in competition with other new-fangled init systems, especially upstart which was widely deployed by Ubuntu.
That’s still not much of an argument for systemd; it’s just passing the buck to the Debian developers, and going with whichever they chose. That’s an excellent thing for users, sysadmins, etc. to do, but doesn’t address the actual question (i.e. why did the Debian devs make that choice?).
According to Wikipedia, the initial release of systemd was in 2010, at which point Ubuntu (a very widely-deployed Debian derivative) had been using upstart by default for 4 years.
Debian’s choice wasn’t so much between sysvinit or systemd, it was which non-sysvinit system to use; with the highest-profile contenders being systemd (backed by RedHat) and upstart (backed by Canonical). Sticking with sysvinit would have been an abstain, i.e. “we know it’s bad, but the alternatives aren’t better enough to justify a switch at the moment”. In other words sysvinit’s only “feature” is the fact that it is already in widespread use, with all of the benefits that brings (pre-existing code snippets, documentation, blogposts, troubleshooting forums, etc.).
These days systemd has that “feature” too, since it’s used by so many distros (including Debian, as you say), which was the last nail in sysvinit’s coffin: at this point sysvinit is mostly hanging on as a legacy option (Debian in particular cares very deeply about stability and compatibility). Choosing between Debian sysvinit and Debian systemd isn’t so much a choice of init system, it’s a choice of whether or not to agree with the Debian developers’ choice to switch init system. And that choice was between systemd, upstart, initng, runit, daemontools, dmd, etc. They abstained (stuck with sysvinit) for many years, until around 2015 when the systemd vs upstart competition was resoundingly won by systemd, with Ubuntu switching away from upstart and Debian switching away from sysvinit.
As I saw all of this going on, my interpretation was:
To me, comparing systemd to sysvinit is like those shampoo adverts which claim their product gives an X% improvement, but the fine-print says that’s compared to not washing ;)
OpenRC is drop-in and works perfectly fine. I dropped it in and I’m using it on all my installs with no issues.
I think maybe you’ve misunderstood me.
I don’t mean you can install systemd and it will continue to work with sysvinit scripts.
I’m referring to systemd’s “drop-in” unit configurations. You can override specific parameters of a unit without having to replace the whole thing.
https://coreos.com/os/docs/latest/using-systemd-drop-in-units.html
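For anyone who hasn’t seen one, a drop-in is just a small fragment layered on top of the unit; the `myapp.service` name and values below are hypothetical, purely for illustration:

```ini
# /etc/systemd/system/myapp.service.d/override.conf
# Only the keys listed here override the original unit;
# everything else in myapp.service is inherited unchanged.
[Service]
Restart=always
RestartSec=5
```

Run `systemctl daemon-reload` afterwards so systemd picks up the drop-in.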
This is getting more and more common since GDPR. A way to “bypass” these kinds of tactics is to enable GDPR/cookie-consent blocking with an ad blocker (at least this is possible with uBlock Origin). It automatically hides these annoying banners/popups without forcing you to opt in.
It’s even more fun when you consider how many of these websites then set the cookies that you’d actually have to opt in to…
How do you do this with uBlock Origin? I didn’t see a setting about GDPR or cookie/consent blocking.
If you go in uBlock Origin preferences → Filter lists, under “Annoyances” there’s “Fanboy’s Cookiemonster List” which hides “we use cookies” banners (and apparently will also hide GDPR banners).
This news caused the public release for XSA-267 / CVE-2018-3665 (Speculative register leakage from lazy FPU context switching) to be moved to today.
These embargoed and NDA’d vulnerabilities need to die. The system is broken.
edit: Looks like cperciva of FreeBSD wrote a working exploit and then emailed Intel and demanded they end embargo ASAP https://twitter.com/cperciva/status/1007010583244230656?s=21
Prgmr.com is on the pre-disclosure list for Xen. When a vulnerability is discovered, and the discoverer uses the responsible disclosure process, and the process works, we’re given time to patch our hosts before the vulnerability is disclosed to the public. On balance I believe participating in the responsible disclosure process is better for my customers.
Pre-disclosure gives us time to build new packages, run through our testing process, and let our users know we’ll be performing maintenance. Last year we found a showstopping bug during a pre-disclosure period: it takes time and effort to verify a patch can go to production. With full disclosure, we would have to do so reactively, with significantly more time pressure. That would lead to more mistakes and lower-quality fixes.
This is a bad response to the issue. The bad guys probably already have knowledge of it and can use it. A few players deemed important should not get advance notification.
Prgmr.com qualifies for being on the Xen pre-disclosure list by a) being a vendor of a Xen-based system, b) being willing and able to maintain confidentiality, and c) asking. We’re one of six dozen organizations on that list; the criteria for membership are technical and needs-based.
If you discover a vulnerability you are not obligated to use responsible disclosure. If you run Xen you are not obligated to participate in the pre-disclosure list. The process consists of voluntary coordination to discover, report, and resolve security issues. It is for the people and organizations with a shared goal: removing security defects from computer systems.
By maintaining confidentiality we are given the ability, and usually the means, to have security issues resolved before they are announced. Our customers benefit via reduced exposure to these bugs. The act of keeping information temporarily confidential provides that reduced exposure.
You have described a voluntary process with articulable benefits as “needing to die,” and my response as “bad.” As far as I can tell from your comments, you claim “the system is broken” because some people “should not get advance notice.” I’ve described what I do with that knowledge, and why it benefits my users. I’m thankful the security community tells me when my users are vulnerable and works with me to make them safer.
Can you improve this process for us? Have I misunderstood you?
Some bad guys might already have knowledge of it. Once it’s been disclosed, many bad guys definitely have knowledge of it, and they can deploy exploits far, far faster than maintainers, administrators and users can deploy fixes.
You’re treating “the bad guys” like they’re all one thing. In actuality, there’s a spectrum of bad guys, from people who will use a free attack tool, to people who will pay a few grand for one, to people who can customize a kit if given just a sploit, to people who can build a sploit from a description, to the rare people who had it already. There’s also a range in attacker intent, from DoS to data integrity to leaking secrets. The folks who had it already often just leak secrets in a stealthy way instead of doing actual damage. They also use the secrets in a limited way compared to the average black hat. They’re always weighing use versus detection of their access.
The process probably shuts down quite a range of attackers even if it makes no difference for the best ones who act the sneakiest.
The process probably shuts down quite a range of attackers even if it makes no difference for the best ones who act the sneakiest.
I believe the process is so effective at shutting down “quite a range of attackers” that it works despite: a) accidental leaks [need for improvement of process] b) intentional leaks [abuse] c) black hats on the pre-disclosure list reverse engineering an exploit from a patch. [fraud] In aggregate, the benefit from following the process exceeds the gain a black hat would have from subverting it.
Well, it’s complicated. (Disclosure: we were under the embargo.)
When a microprocessor has a vulnerability of this nature, those who write operating systems (or worse, provide them to others!) need time to implement and test a fix. I think Intel was actually doing an admirable job, honestly – and we were fighting for them to broaden their disclosure to other operating systems that didn’t have clear corporate or foundation backing (e.g., OpenBSD, Dragonfly, NetBSD, etc). That discussion was ongoing when OpenBSD caught wind of this – presumably because someone who was embargoed felt that OpenBSD deserved to know – and then fixed it in the worst possible way. (Namely, by snarkily indicating that it was to address a CPU vulnerability.) This was then compounded by Theo’s caustic presentation at BSDCan, which was honestly irresponsible: he clearly didn’t pull eager FPU out of thin air (“post-Spectre rumors”), and should have considered himself part of the embargo in spirit if not in letter.
For myself, I will continue to advocate that Intel broaden their disclosure to include more operating systems – but if those endeavoring to write those systems refuse to honor the necessary secrecy that responsible disclosure demands (and yes, this means “embargoed and NDA’d vulnerabilities”), they will make such inclusion impossible.
We could also argue Theo’s talk was helpful in that the CVE was finally made public.
Colin Percival tweeted in his thread overview about the vulnerability that he learned enough from Theo’s talk to write an exploit in 5 hours.
If Theo and the OpenBSD developers pieced enough together from rumors to make a presentation that Colin could turn into an exploit in hours, how long have others (i.e., bad guys) who also heard rumors had working exploits?
Theo alone knows whether he picked up eager FPU from developers under NDA. Even if he did, there’s nothing outside of the law he lives under (or contracts he might’ve signed) that makes him part of the embargo. As to the “spirit” of the embargo, his decision to discuss what he knew might hurt him or OpenBSD in the future. That was his call to make. He made it.
Lastly, I was at Theo’s talk. Caustic is not how I would describe it, nor would I categorize it as irresponsible. Theo was frustrated that OpenBSD developers who had contributed meaningfully to Spectre and Meltdown mitigation had been excluded. He vented some of that frustration in the talk. I’ve heard more (and harsher) venting about Linux in a 30 minute podcast than all the venting in Theo’s talk.
On the whole Theo’s talk was interesting and informative, with a sideshow of drama. And it may have been what was needed to get the vulnerability disclosed and more systems patched.
Disclosure: I’m an OpenBSD user, occasional port submitter, BSDCan speaker and workshop tutor, FreeNAS user and recommender, and have enjoyed many podcasts, some of which may have included venting.
If Theo and the OpenBSD developers pieced enough together from rumors to make a presentation that Colin could turn into an exploit in hours, how long have others (i.e., bad guys) who also heard rumors had working exploits?
It was clear to me the day Spectre / Meltdown were disclosed that there would be future additional vulnerabilities of the same class based on that discovery. I think there is circumstantial evidence suggesting the discovery was productive for the people who knew about it in the second half of 2017 before it was publicly disclosed. One can safely assume black hats have had the ability to find and use novel variations in this class of vulnerability for at least six months.
If Theo did pick up eager FPU from a developer under embargo that demonstrates just how costly it is to break embargo. Five hours, third hand.
If Theo did pick up eager FPU from a developer under embargo that demonstrates just how costly it is to break embargo. Five hours, third hand.
I have absolutely no idea what point you’re trying to make. Certainly, everyone under the embargo knew that this would be easy to exploit; in that regard, Theo showed people what they already knew. The only new information here is that Theo is every bit as irresponsible as his detractors have claimed – and those detractors would (of course) point out that that information is not new at all…
With respect, how is Theo irresponsible for reducing the time the users of his OS are vulnerable?
Like, the embargo thing sounds a lot to the ill-informed like some kind of super-secret clubhouse.
Theo definitely wasn’t part of the embargo, but it’s also unquestionable that Theo was relying on information that came (ultimately) from someone who was under the embargo. OpenBSD either obtained that information via espionage or via someone trying to help OpenBSD out; either way, what Theo did was emphatically irresponsible. Of course, it was ultimately his call – but he is not the only user of OpenBSD, and it is unfortunate that he has effectively elected to isolate the community to serve his own narcissism.
As for the conjecture that Theo served any helpful role here: sorry, that’s false. (Again, I was under the embargo.) The CVE was absolutely going public; all Theo did was marginally accelerate the timeline, which in turn has resulted in systems not being as prepared as they otherwise could be. At the same time, his irresponsible behavior has made it much more difficult for those of us who were advocating for broader inclusion – and unfortunately it will be the OpenBSD community that suffers the ramifications of any future limited disclosure.
Espionage? You’re suggesting one of:
Someone stole the exploit information, leaked it to the OpenBSD team, a team known for proactively securing their code, on the off-chance Theo would then further leak it (likely with mitigation code), causing the embargoed details to be released sooner than expected,
OpenBSD developers stole the exploit information, then leaked it (while committing mitigation code), causing the embargoed details to be released sooner than expected.
The first doesn’t seem plausible. The second isn’t worthy of you or any of the developers on the OpenBSD team.
I’m sure you’ve read Colin’s thread. He contacted folks under embargo after he wrote his exploit code based on Theo’s presentation. The release timeline moved forward. OSs that had no knowledge of the vulnerability now have patches in place. Perhaps those users view “helpful” in a different light.
Edit: Still boggling over the espionage comment. Had to flesh that out more.
Theo has replied:
In some forums, Bryan Cantrill is crafting a fiction.
He is saying the FPU problem (and other problems) were received as a leak.
He is not being truthful, inventing a storyline, and has not asked me for the facts.
This was discovered by guessing Intel made a mistake.
We are doing the best for OpenBSD. Our commit is best effort for our user community when Intel didn’t reply to mails asking for us to be included. But we were not included, there was no reply. End of story. That leaves us to figure things out ourselves.
Bryan is just upset we guessed right. It is called science.
He’s also offered to discuss the details with Bryan by phone.
Intel still has 7 more mistakes in the Embargo Execution Pipeline™️ according to a report^Wspeculation by Heise on May 3rd.
Let the games begin! 🍿
What’s (far) more likely: that Theo coincidentally guessed now, or that he received a hint from someone else? Add Theo’s history, and his case is even weaker.
While everyone is talking about Theo, the smart guys figuring this stuff out are Philip Guenther and Mike Larkin. Meet them over beer and discuss topics like ACPI, VMM, and Meltdown with them and you won’t doubt anymore that they can figure this stuff out.
In another reply you claim your approach is applied Bayesian reasoning, so let’s go with that.
Which is more likely:
or
Show me the observed distribution you based your assessment on. Show me all the times Theo lied about how he came to know something.
Absent meaningful data, I’ll go with team of smart people knowing their business.
Absent meaningful data
Your “meaningful data” is 11 minutes and 5 seconds into Theo’s BSDCan talk: “We heard a rumor that this is broken.” That is not guessing and that is not science – that is (somehow) coming into undisclosed information, putting some reasonable inferences around it and then irresponsibly sharing those inferences. But at the root is the undisclosed information. And to be clear, I am not accusing Theo of lying; I am accusing him of acting irresponsibly with respect to the information that came into his possession.
Here is at least one developer’s comment on the matter. He points to the heise.de article about Spectre-NG as an example of the rumors that were floating around. That article is a long way from “lazy FPU is broken”.
Theo has offered to discuss your concerns, what you think you know, what he knew, when and how. He’s made a good-faith effort to get his cellphone number to you. If you don’t have it, ask.
If you do have his number, call him. Ask him what he meant by “We heard a rumor that this is broken.” Ask him what rumor they heard. Ask him whether he was referring to the Spectre-NG article.
Seriously, how hard does this have to be? You engaged productively with me when I called you out. You’ve called Theo out. Talk to him.
And yes, I get it. Your chief criticism at this point is responsible disclosure. But as witnessed by the broader discussion in the security community, there’s no single agreed-upon solution.
While you’ve got Theo on the phone you can discuss responsible disclosure. Frankly, I suggest beer for that part of the discussion.
Edit: Clarify that Florian wasn’t saying he knew heise.de were the source.
It is Bayesian reasoning, pure and simple.
That said, this is a tempest in a teacup, so call it whatever you want; I’m gonna go floss my cat.
Sorry – I’m not accusing anyone of espionage; apologies if I came across that way.
What I am saying is that however Theo obtained information – and indeed, even if that information didn’t originate with the leak but rather by “guessing” as he is now apparently claiming – how he handled it was not responsible. And I am also saying that Theo’s irresponsibility has made the job of including OpenBSD more difficult.
The Spectre paper made it abundantly clear that additional side channels will be found in the speculative execution design.
This FPU problem is just one additional bug of this kind. What I’d like to learn from you is:
What was the original planned public disclosure date before it was moved ahead to today?
Do you really expect that a process with long embargo windows has a chance of working for future spectre-style bugs when a lot of research is now happening in parallel on this class of bugs?
The original date for CVE-2018-3665 was July 10th. After the OpenBSD commit, there was preparation for an earlier disclosure. After Theo’s talk and after Colin developed his POC, the date was moved in from July 10th to June 26th, with preparations being made to go much earlier as needed. After the media attention today, the determination was made that the embargo was having little effect and that there was no point in further delay.
Yes, I expect that long embargo windows can work with Spectre-style bugs. Researchers have been responsible and very accommodating of the acute challenges of multi-party disclosure when those parties include potentially hypervisors, operating systems and higher-level runtimes.
Thanks for disclosing the date. I must say I am happy that my systems are already patched now, rather than in one month from now.
I’ll add that some new patches with the goal of mitigating spectre-class bugs are being developed in public without any coordinated disclosure:
http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/9474cbef7fcb61cd268019694d94db6a75af7dbe
Thanks for the clarification.
I don’t think early disclosure is always irresponsible (the details of what and when matter). Others think it’s never irresponsible; and some that it’s always irresponsible. Good arguments can be made for each position that reasonable people can disagree about and debate.
One thing I hope we can all agree on is that we need clear rules for how embargoes work (probably by industry). We need clear, public criteria covering who, what, when and how long. And how to get in the program, ideally with little or no cost.
It’s a given that large companies like Microsoft will be involved. Open-source representatives should have a seat at the table as well. But “open source” can’t just mean Red Hat and a few large foundations. OSs like OpenBSD have a presence in the ecosystem. We can’t just write the rules with a “You must be this high to ride” sign at the door.
And yeah, Theo’s talk might make this more difficult going forward. Hopefully both sides will use this event as an opportunity to open a dialog and discuss working together.
Right, I completely agree: I’m the person that’s been advocating for that. I was furious with Intel over Spectre/Meltdown (despite our significant exposure, we learned about it when everyone else did), and I was very grateful for the work that OpenBSD and illumos did together to implement KPTI. This time around, I was working from inside the embargo to get OpenBSD included. We hadn’t been able to get to where we needed to get, but I also felt that progress was being made – and I remained optimistic that we could get OpenBSD disclosure under embargo.
All of this is why I’m so frustrated: the way Theo has done this has made it much more difficult to advocate this position – it has strengthened the argument of those who believe that OpenBSD should not be included because they cannot be trusted. And that, in my opinion, is a shame.
Look at it from OpenBSD’s perspective though. They (apparently) tried emailing Intel to find out more, and were told “no”. What were they supposed to do? Just wait on the hope that someone, somewhere, was lobbying on their behalf to be included, with no knowledge of that lobbying?
why do people have the need to use a framework for everything, like the BDD testing frameworks in this article? i really don’t see the value of it. it’s just another dependency to carry around, and i can’t just read and understand what is happening.
what is gained by writing:
Expect(resp.StatusCode).To(Equal(http.StatusOK))
instead of
if resp.StatusCode != http.StatusOK {
t.Fail()
}
I don’t use that particular testing framework, but the thing I’d expect to gain by using it is better test failure messages. I use testify at work for almost precisely that reason. require.Equal(t, valueA, valueB) provides a lot of value, for example. I tried not to use any additional test helpers in the beginning, probably because we have similar sensibilities. But writing good tests that also have good messages when they fail got pretty old pretty fast.
ok, i can see that good messages may help, though i’d still rather use t.Fatal/Fatalf/Error/Errorf, maybe paired with a custom type implementing error (admitting that it’s a bit more to type) if a custom DSL is the alternative :)
testify looks interesting though!
testify is nice because it isn’t really a framework, unless maybe you start using its “suite” functionality, which is admittedly pretty lightweight. But the rest of the library drops right into the normal Go unit testing harness, which I like.
I did try your methods for a while, but it was just untenable. I eventually just stopped writing good failure messages, which I just regretted later when trying to debug test failures. :-)
testify is a nice middle ground that doesn’t force you to play by their rules, but adds a lot of nice conveniences.
The former probably gives a much better failure message (e.g. something like “expected value ‘200’ but got value ‘500’”, rather than “assertion failed”).
That’s obviously not inherent to the complicated testing DSL, though. In general, I’m a fan of more expressive assert statements that can give better indications of what went wrong; I’m not a big fan of heavyweight testing frameworks or assertion DSLs because, like you, I generally find they badly obfuscate what’s actually going on in the test code.
yeah, with the caveats listed by others, I sort of think this is a particularly egregious example of strange library usage/design. in theory, anyone (read: not just engineers) is supposed to be able to write a BDD spec. However, for that to be possible, it should be written in natural language. Behat specs are a good example of this: http://behat.org/en/latest/. But this one is just a DSL, which misses the point I think…
However, for that to be possible, it should be written in natural language. Behat specs are a good example of this: http://behat.org/en/latest/. But this one is just a DSL, which misses the point I think…
I’d say that the thing behat does is a real DSL (like, with a parser and stuff). The library from the article just has fancy named functions which are a bit of a black box to me.
Just a thought: One could maybe write a compiler for a behat-like language which generates stdlib Go-tests, using type information found in the tested package, instead of using interface{} and reflect. That’d be a bit of work though ;)
I liked this article and appreciate the examples. But I was left wondering: under what conditions, if any, should I not follow the advice in this article?
I think it’s all good as rules of thumb, but I see two potential downsides.
First, there’s a key phrase that I think indicates some limitations:
It’s not a general purpose BMP library. It only supports 24-bit true color, ignoring most BMP features such as palettes.
As you start adding features, this design approach may rapidly become intractable. It’s most suitable for libraries that are minimalist in the other sense, and unfortunately, we live in an extremely complicated world where such libraries often aren’t actually useful. It’s all great ideals, but if taken as absolute gospel, I suspect it will strongly interfere with getting stuff done.
Second, for instance, I don’t like how the author has bmp_size return 0 in the case of an error. I know it’s a C-ism that’s very difficult to work around in C, but that’s a totally valid size to pass to calloc, which may return a totally valid non-NULL pointer, and bmp_init may then scribble over totally arbitrary memory. (Or it might segfault. “Might” is the operative word here. When a segfault is the best possible outcome, something has gone badly, badly wrong.) This approach forces the caller to check, which is unfriendly and error-prone. Of course, I don’t actually know a better way in C to handle it; returning a negative signed value won’t work because of conversion rules (though trying to allocate almost SIZE_MAX bytes of memory will probably fail).
Similarly,
…the library returns the power of two number of uint32_t it needs to allocate….
Why? The caller will definitely forget to 1ULL << z eventually, and then they’ll have far, far too small a table for the number of elements they want to put in it (which, by the way, the library also has no way to signal). I guess it saves the library from an “error state”, but at the cost of obfuscating its own interface and pushing the check for that error state (which, of course, didn’t actually go away) into the application where it will need to be written many times and thus have many opportunities to be done incorrectly.
…I guess I’m complaining that C isn’t Rust here. =/ But in both these cases, I wonder if a less “minimalist” interface might have been less error-prone to actually use, and (hopefully) uses of a library greatly outnumber implementations thereof, and thus should be a preferable target for minimization.
Guideline #2 (No dynamic memory allocations) can have a significant effect on error detection.
If the library allocates memory then it can use structure marking to detect use after free, uninitialized variables, and other programming errors.
Here is a quick sketch of a version that uses structure marking. I added bmp_try_get_bytes so that the resulting bitmap can be written to a file.
typedef enum { BMP_OK = 0 /* other codes omitted for brevity */ } bmp_error_code;
struct bmp_data; //incomplete type
bmp_error_code bmp_try_create(long width, long height, struct bmp_data **result);
bmp_error_code bmp_try_set(struct bmp_data *, long x, long y, unsigned long color);
bmp_error_code bmp_try_get(const struct bmp_data *, long x, long y, unsigned long *color);
bmp_error_code bmp_try_free(struct bmp_data **);
bmp_error_code bmp_try_get_bytes(const struct bmp_data *, size_t *n_bytes, void **bytes);
What I don’t get about UB in these discussions: isn’t it possible to get the tooling to bail on optimizations that cross a UB barrier?
Like OK you might dereference a null pointer here, this is UB, but don’t take that as a “well let’s just infer that anything’s doable here”. Instead take a “the result is unknowable, so don’t bother optimizing”.
Is there some sort of reason UB-ness can’t be taken into account in these optimization paths? Is there some… fundamental thing about C optimization that requires this tradeoff?
I think the problem with this idea is that many optimisations rely on something being undefined behaviour.
In any case, the result is not just “unknowable” - there is no result. If I have two signed integer values for which the sum exceeds the largest possible signed integer value, and I add them together, there is not some unknown value that is the result. There is in fact a known value which is the result but which cannot be represented by the type which the result of the operation needs to be expressed in. What does it mean to “not bother optimising” here? Yes, the compiler could decide that the result of such an operation should be an indeterminate value, rather than undefined behaviour, but that will inhibit certain optimisations that could be possible even in cases where the compiler cannot be sure that the indeterminate value will actually ever result.
People keep saying this, but I still have not seen a well worked out example of how UB produces real optimizations let alone a respectable study.
But do “optimizations” actually produce speedups for benchmarks? Despite frequent claims by compiler maintainers that they do, they rarely present numbers to support these claims. E.g., Chris Lattner (from Clang) wrote a three-part blog posting about undefined behaviour, with most of the first part devoted to “optimizations”, yet does not provide any speedup numbers. On the GCC side, when asked for numbers, one developer presented numbers he had from some unnamed source from IBM’s XLC compiler, not GCC; these numbers show a speedup factor 1.24 for SPECint from assuming that signed overflow does not happen (i.e., corresponding to the difference between -fwrapv and the default on GCC and Clang). Fortunately, Wang et al. [WCC+12] performed their own experiments compiling SPECint 2006 for AMD64 with both gcc-4.7 and clang-3.1 with default “optimizations” and with those “optimizations” disabled that they could identify, and running the results on a Core i7-980. They found speed differences on two out of the twelve SPECint benchmarks: 456.hmmer exhibits a speedup by 1.072 (GCC) or 1.09 (Clang) from assuming that signed overflow does not happen. For 462.libquantum there is a speedup by 1.063 (GCC) or 1.118 (Clang) from assuming that pointers to different types don’t alias. If the other benchmarks don’t show a speed difference, this is an overall SPECint improvement by a factor 1.011 (GCC) or 1.017 (Clang) from “optimizations”. http://www.complang.tuwien.ac.at/kps2015/proceedings/KPS_2015_submission_29.pdf
I still have not seen a well worked out example of how UB produces real optimizations
Some examples here (found by a search just now): https://kristerw.blogspot.com/2016/02/how-undefined-signed-overflow-enables.html
SPECint isn’t necessarily a good way to assess the effect of these particular optimisations.
OMG! You find those compelling?
Compelling for what? They are examples of how UB produces real optimisations, something that you said you hadn’t seen.
They don’t show any real optimization at all. They are micro-optimizations that produce incorrectly operating code.
When the compiler assumes multiplication is commutative and at the same time produces code that has non-commutative multiplication, it’s just terrible engineering. There’s no excuse for that.
As steveklabnik says above “UB has a cost, in that it’s a footgun. If you don’t get much optimization benefit, then you’re introducing a footgun for no benefit.”
They don’t show any real optimization at all. They are micro-optimizations
This is self-contradictory, unless by “real optimization” you mean “not-micro-optimization”, in which case you are just moving the goal posts.
that produce incorrectly operating code.
This is plainly false, except if you mean that the code doesn’t behave as you personally think it should behave in certain cases, even though the language standard clearly says that the behaviour is undefined in those cases. In which case, sure, though I’m not sure why you think your own opinion of what the language semantics should be somehow trumps that of the committee responsible for actually deciding them.
Real optimization means substantive. Micro-optimizations like “we take this arithmetic calculation and replace it with a faster one that produces the wrong answer” are ridiculous.
I’m totally uninterested in this legalist chopping or this absurd deference to a committee which is constantly debating and revisiting its own decisions. It is just absurd for people to argue that the C Standard wording can’t be criticized.
“we take this arithmetic calculation and replace it with a faster one that produces the wrong answer”
And there you have the core of the problem.
You say “wrong answer”, implying there is A Right One, as defined by the current C standard. Sometimes there are no Right Answers to certain operations, only “undefined”.
So Define your Right One, proclaim your New Standard, implement a Conforming compiler and everybody is happy.
I’m entirely on board with you saying the C committee lacks guts to abandon ancient marginal CPU’s and just define things…..
So I look forward to programming in VyoDaiken C.
Alas… I warn you though… you will either end up with something that just isn’t C as we know it, or you will still have large areas “undefined”.
Machine integers are not a field. Anything we call “arithmetic” on them is defined arbitrarily (usually to be kind of like arithmetic on the mathematical integers, if you ignore the fact that they’re finite), so in fact there’s not a right answer—rather, there are several reasonable ones.
You could define them to behave however the machine behaves; but this is obviously not consistent from machine to machine. You could define them in a particular way (two’s-complement with signaling overflow, perhaps), but if this definition doesn’t match up to what the target machine actually does, you have potentially expensive checks and special-casing to shore up the mismatch, and you picked it arbitrarily anyway. (Case in point: did you agree with my suggestion? Do you think you could convince all of a room full of C programmers to?)
There’s a reasonable argument to be made that C should have said integer overflow is implementation-defined rather than undefined, but it’s hard to claim there’s a single obviously correct implementation-independent definition it should have adopted.
My suggestion is that when you feel like you should tell me something I obviously already know, you should think about what point you are trying to make.
C has a great deal of room for machine and implementation dependent behavior - necessarily. Implementation defined would have prevented surprises.
arithmetic.
Sounds like you’ll love Scheme and Ruby then ;-)
They have this Numeric Tower concept that does The Right Thing by arithmetic.
ps: Have a look at this, the ultimate Good News, Bad News story and cry…
https://lobste.rs/s/azj8ka/proposal_signed_integers_are_two_s#c_xo9ebu
…if we ever meet in person, we can drown our sorrows together and weep.
It is just absurd for people to argue that the C Standard wording can’t be criticized.
Just to be clear, I’m not arguing that (and I don’t think anybody here is doing so). However if you continue to dismiss logical arguments as “legalist chopping”, and suggest that we all defer to you instead of the language committee, I think the discussion will have to end here.
This is plainly false, except if you mean that the code doesn’t behave as you personally think it should behave in certain cases, even though the language standard clearly says that the behaviour is undefined in those cases
I have no idea how to interpret that except as an argument that one is not permitted to question the wisdom of either the standard or the interpretation chosen by compiler writers. As several people on the standards bodies have pointed out, there is certainly no requirement in the standard that compiler developers pick some atrocious meaning. “Optimizing” code that produces a correct result to produce code that does not, produces incorrectly operating code. You can claim that the unoptimized code was in violation of the standard, but it worked.
Specifically, the example you pointed to starts off by “optimizing” C code that calculates according to the mathematical rules and replaces it with code that computes the wrong answer. 2s complement fixed length multiplication is not commutative. Pretending that it is commutative is wrong.
I have no idea how to interpret that except as an argument that one is not permitted to question the wisdom of either the standard or the interpretation chosen by compiler writers.
I do not see how pointing out what the language standard does say is the same as saying that you are not permitted to question the wording of the standard. Good day.
No. There is nothing about C that requires compilers behave irrationally. The problem is that (1) the standard as written provides a loophole that can be interpreted as permitting irrational compilation and (2) the dominant free software compilers are badly managed (as someone pointed out, it’s not as if they have paying customers who will select another product). There’s a great example: a GCC UB “optimization” was introduced that broke a lot of code by assuming an out-of-bounds access to an array could not happen. It also broke a benchmark - creating an infinite loop. The GCC developers specifically disabled the optimization for the benchmark but not for other programs. The standard does not require this kind of horrible engineering, but it doesn’t forbid it.
There’s a great example: a GCC UB “optimization” was introduced that broke a lot of code by assuming an out of bound access to an array could not happen
You know, I’m quite happy with that.
Every new gcc release comes out with a lot of new optimizations and a lot of new warnings.
Every time I go through our code base cleaning up the warnings, often fixing bugs as I go, I’m ok with that.
Sometimes we only find the borken code on the unit tests or test racks. I’m OK with that. That code was fragile and was going to bite us in the bum sooner or later.
Old working code maybe working, but as soon as you’re into undefined behaviour it’s fragile, and changes in optimization are only one of many ways which can make it break.
In my view, the sooner I find it and fix it the better.
I deliberately wiggle the optimization settings and run tests. If working code breaks… I fix.
I run stuff on CPU’s with different number of cores… that’s a really good one for knocking loose undefined behaviour bugs.
Sure my code “Works for Me” on my desk.
But I don’t control the systems where other people run my code. Thus I want the behaviour of my code to always be defined.
I run stuff on CPU’s with different number of cores… that’s a really good one for knocking loose undefined behaviour bugs.
What do you mean by that? Just something like 2 vs 4 cores, or as different as a 3- or 6-core part that’s not a multiple of 2? Now that you mention it, I wonder if other interaction bugs could show up in SoCs with a mix of high-power and low-energy cores interacting. Maybe running on them could expose more bugs, too.
The C standard has a bunch of fine-grained wording around volatile and atomic and sequence points, with undefined behaviour if you understand them wrong.
Threading is dead easy…. any fool can (and many fools do) knock up a multithreaded program in a few minutes.
Getting threading perfectly right seems to be extraordinarily hard, and I haven’t met anybody who can, nor seen anybody’s code that is perfect.
So any fools code will “Work For Them” on their desk… now deploy it on a different CPU with a different number of cores, with a different load ….
….and the bugs come crawling out in all directions.
Actually gcc has been remarkably good about this… The last few releases I have dealt with, there has been a pairing of new warnings with new optimization passes.
Which makes sense, because a new optimization pass tends to mean the compiler has learnt more about the structure of your code.
Where they have been gravely deficient is with function attributes… there be dragons, and you’re unlikely to get a warning if you screw up.
Curiously enough they will suggest attributes if you haven’t got one… but won’t warn you if you have the wrong one. :-(
Bottom line. Beware of gcc function attributes. They are tricksy and easy to screw up.
Are you reading any of the critiques of UB or even the defenses? The core issue is silent transformations of code - e.g. a silent deletion of a null pointer check because UB “can’t happen”.
There is no fundamental reason. It’s just a lot of work and no optimizing compiler is implemented that way right now.
It looks like part of the problem is an engineering flaw in the compilers where there are multiple optimization passes that can produce errors in combination.
Normally, I’d say this is off-topic for lobste.rs but the writer IS pretty entertaining and I LOL-ed at
Sure, Google and Facebook and Apple do have to worry about this, because they’ve domiciled their foreign HQ’s in Ireland so that they can shelter all that foreign revenue from US taxation. Karma’s a bitch.
Also, way to go to break that stereotype about Canadians being polite doormats.
But for those of us here who are lawyers (or are close to the law, preferably not on the broke-the-law side) how accurate is this position?
The thing with collecting the taxes reminded me that Amazon now collects state taxes. I’m totally ok with this, but it is a state law Amazon is having to comply with, without which they would have to cease operations in that state. So I’m surprised that easyDNS can serve UK customers without collecting taxes - they must be violating UK law, right?
I also see, in principle, how this translates to having to start obeying contradictory laws. Say the Saudis say women can’t access the internet and all internet providers now have to track gender of the user. What happens to a US based company that is prohibited from denying services on the basis of gender. I guess they’ll have to create a new company in Saudi that’s a wholly owned subsidiary but is a Saudi company and so on and so forth.
Ah the joys of being one big happy planet.
I guess they’ll have to create a new company in Saudi that’s a wholly owned subsidiary but is a Saudi company and so on and so forth.
Or they just don’t trade in Saudi Arabia.
That’s an option for many people dealing with the GDPR: If you don’t have a website in Europe, or a business in Europe, and you don’t trade in European data, then the GDPR doesn’t apply to you. However Facebook – even if they weren’t in Ireland does trade in European data by selling advertisements to European businesses.
They could choose not to- they could refuse to do any business with any company in Europe. This kind of structuring would probably make them safe, but it’s not realistic: There’s simply too much money in Europe.
They would have to cease trading with any company AND not have any European “customers” (users). Having the data of any entity (person or company) that resides in a European country makes you liable according to GDPR.
Problem is, without perfect geo-blocking and more, users will “slip” through and then they are in the same situation.
I think the point is that if you have no company footprint in EU, not business partners there, etc, then the GDPR is unenforceable against you. Yes, they can sue you in an EU court and bring a judgement against your corporation, but if your corporation will never have any footprint there then there is no power to enforce the judgement.
This is true. You can violate any law until you get caught.
However, I wonder what the impact of such an approach is on the value of a company.
Having the data of any entity (person or company) that resides in a European country makes you liable according to GDPR.
Ehm… no.
The “data subject” is always a European citizen, a person, not a company.
Can you point me to the GDPR article that led you to this conclusion?
You are completely right in that sense. However, companies who are handling personal EU data will make any company that they in turn hand (parts of) that data to liable as well (and require a data processor / data manager agreement). As you say, handling data for an EU company that holds no personal data does not make you liable under the GDPR, but it is a slippery slope, because handling pay slips, staff management, etc. will very often involve personal data.
Problem is, without perfect geo-blocking and more, users will “slip” through and then they are in the same situation.
An IP Address isn’t “personal data”, a name isn’t “personal data”, even a login name isn’t “personal data”. What exactly are the circumstances that you believe you would be “slipped” some personal data without realising it?
Problem is, without perfect geo-blocking and more, users will “slip” through and then they are in the same situation.
What exactly are the circumstances that you believe you would be “slipped” some personal data without realising it?
Frankly, that sentence sounds a lot like FUD, but IP addresses and names are personal data according to GDPR.
Frankly, that sentence sounds a lot like FUD,
“FUD” means “fear, uncertainty and doubt” and refers to a specific kind of marketing campaign where the goal is to spread enough misinformation about a subject so that people are afraid of engaging further with a subject.
Telling people they’re going to be accidentally breaking the law for being connected to the Internet is FUD. Please stop spreading it.
but IP addresses and names are personal data according to GDPR.
False.
The GDPR doesn’t mention IP addresses at all. It never once says that a “name” is personal data.
The ICO (GDPR Regulator in the UK) even gives the example of Names not being personal data:
By itself the name John Smith may not always be personal data because there are many individuals with that name.
It never once says that a “name” is personal data.
Dude, you really need to read the law:
(1) ‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;
I urge anyone using your consulting to hire a competent European lawyer instead.
That doesn’t disagree with what the ICO said.
The key language is an “identified or identifiable natural person”.
If you can’t identify a natural person with it, and you have no normal business practice that would enable you to do so, it’s not personal data.
For a consistent ruling of this, see opinion 4 which teaches that a dynamic IP address cannot identify a person. Why would anyone think a name would?
I urge anyone using your consulting to hire a competent European lawyer instead.
I do the same. I’m not a lawyer. I’m an SME who tells companies what they can do, and then invites outside legal to review my advice. I’m significantly more expensive than a European lawyer (in billings), but companies who want to understand what exactly they can do need someone like me instead of some guy on the Internet.
If you can’t identify a natural person with it, and you have no normal business practice that would enable you to do so, it’s not personal data.
For a consistent ruling of this, see opinion 4 which teaches that a dynamic IP address cannot identify a person. Why would anyone think a name would?
Because you cannot know if a specific name can be used to identify the user.
You just need one identificable name to violate the GDPR for that user.
Your “normal business” practices means nothing in this regards.
Article 33 explicitly states that:
The controller shall document any personal data breaches, comprising the facts relating to the personal data breach, its effects and the remedial action taken. That documentation shall enable the supervisory authority to verify compliance with this Article.
This means that a company is accountable for any personal data leak, be it due to a bad employee or a smart crew of hackers using a zero day.
The law says that any information that can be used to identify a user directly or indirectly is personal data. And it includes data related to “one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person”
So if a company holds my dynamic IP address with the time of my connection in its database, and a third party can use these pieces of information together to learn my identity (as the ISP that assigned me the IP could do), then this information is personal data.
Same for a login name: if somebody can establish my identity from the pair username + host, that username is personal data per the GDPR.
I’m not a lawyer.
Neither am I.
But I can read a law, as any other European citizen can.
What you said about name and IP is simply misleading.
I’m significantly more expensive than a European lawyer
Really, I have no doubt.
If that is the problem I can suggest pretty expensive and competent European lawyers.
But while I have no economic interest in this, as a European whose personal data is protected by the GDPR, I’m not happy to read you giving technical advice without a minimal understanding of the law.
I’d like to have a list of the companies taking your advice, to avoid using their services.
What you said about name and IP is simply misleading.
The court decision you’re referring to (and you should read it, since it’s clear you haven’t) considers an IP Address and timestamp identifying to the ISP, since they can look up their customer’s name.
You just need one identificable name to violate the GDPR for that user.
That is nonsense.
Go away troll.
You just need one identificable name to violate the GDPR for that user.
That is nonsense.
That is the GDPR law. Literally. Article 4.
If my name is unique, and your db stores my name, you are holding my personal data.
The court decision you’re referring to considers an IP Address and timestamp identifying to the ISP, since they can look up their customer’s name.
And if an ISP employee breaches into a system and gets the IP address and timestamp of the users, she will be able to identify such people and gain sensitive information about them from the system.
Now, if the system’s controller doesn’t notify the European users about the data breach, thinking he is not collecting personal data subject to the GDPR, he will violate Article 33.
Go away troll.
Fine, I guess I cannot convince you to admit a mistake on this topic, as it seems a good source of revenue.
But please, try to read and understand the law. It’s pretty simple and clear.
You just need one identificable name to violate the GDPR for that user.
That is nonsense.
That is the GDPR law. Literally. Article 4.
Stop trolling. The GDPR never uses the string “identificable”
If my name is unique, and your db stores my name, you are holding my personal data.
The ICO disagrees. They’re the one responsible for regulating me (I’m in the UK) and they’ve given no further guidance on the subject. It is however consistent with their other positions on identifying personal data.
And if an ISP employee breach into a system…
What exactly do you think the normal person should think the risk is of someone who works at an ISP breaking into their website? You’re being absurd.
Stop trolling.
Stop trolling. The GDPR never uses the string “identificable”
However, correcting the obvious typo shows the word “identifiable” appears eight times in that article.
Nowhere does it say “one identifiable” or a “single identifiable” or anything related to that.
What is your point?
Both texts you refer to predate the GDPR, and the GDPR never refers to them.
So they are both off-topic in this thread.
But, actually, I think that everyone can compare your statements with the GDPR text and easily see how well grounded your advice is.
So they are both off-topic in this thread.
The ICO’s opinion is all that matters.
Not yours.
But, actually, I think that everyone can compare your statements with the GDPR text and easily see how well grounded your advice is.
Yes. I’m telling people don’t panic, and you’re shouting panic; pointing to articles you haven’t read, with interpretations that aren’t shared by the regulators or even most professionals working in this space.
Then there’s that weird thing you’re saying about ISP employees breaching people’s sites…
Go away.
I might not have been clear; my point is that a company/website/service cannot reliably avoid European users (by geo-blocking, asking them to swear that they are not from the EU, etc.), and once those users are on the platform their data is subject to the GDPR.
You’re not.
Having a European visit your website doesn’t necessarily mean you have any extra burdens.
If you don’t trade with Europeans and aren’t trading data specifically about Europeans[1], then you aren’t in-territory.
If you don’t know who they are, cannot find out who they are, and the information you have doesn’t, through your normal business practices, identify a natural person[2], then your data is not material.
I still cannot see how you can collect personal data accidentally if you know what personal data means, or what the GDPR is attempting to accomplish. The law doesn’t talk about “users” or “platform” in this way, and the regulators do not provide guidance in ethereal cases like yours.
[1]: For example, if you sell targeted advertising on your website and allow your buyers to break down by Geography, then you’re in-territory.
[2]: That last one might seem tricky, but it’s designed to catch companies who make behavioural profiles of people using cookies and IP addresses. If you’re not doing anything like that, then you’re probably fine, but I’d need a specific example to say.
through your normal business practices
Please @cpnielsen, compare this to the definitions for “personal data breach” in Article 4 of GDPR:
‘personal data breach’ means a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed
and to the definition of “personal data” in the same article:
‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;
Neither definition mentions in any way the use you make of the personal data in your business practices.
A certain set of data is personal independently of the use or the inferences that you can draw from it.
Any information relating to an identified or identifiable natural person is personal data.
Did you mean to tag me or was that meant for @geocar? Either way, I think we agree.
To exemplify my point: Let us pretend you are Bookface. You explicitly block any European user from signing up for your site (and since you opened on the day of the GDPR launch there are no users already signed up). Because your blocking is not perfect, Gerard from France stumbles across Bookface.com, signs up and gives you his full name, e-mail, date of birth and street address. You are now subject to the GDPR as you are holding personal information about him. You can try to ignore it, and actual enforcement might be difficult (especially for individual cases), but the EU is very clear on this: You are subject to the GDPR.
Depending on how you use this data and whether it is required for your platform to operate, you may have to ask Gerard to explicitly opt-in (or not use the service at all, if presented at sign-up).
Did you mean to tag me or was that meant for geocar?
Comment was for both of you.
But I realized from his last answer that @geocar is not talking about the GDPR as generally applied in Europe for European citizens, but about the UK implementation that protects UK citizens only.
This explains his lack of understanding of the GDPR, but it also means that you can (probably, IANAL) safely take his advice for data relating to UK citizens. Not for data relating to other Europeans.
A relevant example is the name of a user (or her IP address), which is notoriously personal data according to the European GDPR, but which, according to geocar, is not to be considered as such in the UK.
To exemplify my point […]
Yes, we agree.
In your example, once Gerard’s data is in your system, you are subject to the GDPR. Even if Gerard agrees to the processing you do, you have several obligations towards him, such as adopting proper security measures to protect his data and informing him if his data gets disclosed by an accidental data breach. You should read the law for a full list of the obligations.
And, AFAIK, you can only avoid such obligations by completely removing Gerard’s data from your system (including from logs and backups).
I encourage you and everybody else to read the law. It is really clear and well written.
And while a competent European lawyer might help, anybody in good faith can easily understand it.
So I’m surprised that easyDNS can serve UK customers without collecting taxes - they must be violating UK law, right?
I would imagine there is an amount of “Okay, so come and get it” involved with the VAT taxes and other laws. There’s no mechanism for enforcement of that decision if you hold no assets within EU member states. Now, the EU could attempt to block access to that website, but we all know how effective that is.
If you can’t hit someone with a stick what incentive do they have to follow your orders? Especially if there is no reward for doing so other than a pat on the head? Doubly so if following those orders is a pain in the butt.
If you can’t hit someone with a stick what incentive do they have to follow your orders?
Can you elaborate?
Are you saying people can violate US laws (e.g. a US company’s copyrights) as long as they stay outside the USA?
Normally, I’d say this is off-topic for lobste.rs
This is something I sincerely do not understand.
Why is it off-topic if its tags are [law] and [privacy]?
The way to detect a true lobste.rs topic is to find one whose title you barely understand, which has one upvote and which has few replies. The replies, however, are substantial, mind opening and mind blowing. After reading the comments you can go back to the article and perhaps understand the title. To understand the article you might have to write some code yourself.
That’s how we started out.
I’m not that much of an old grouch to deny people their party line talk, but frankly, there’s still that YCombinator powered bar fight site, right? Why clone it here?
That said, I’m okay with a writeup like this appearing once in a blue moon. But I do find myself aggressively hiding stories more and more.
The way to detect a true lobste.rs topic is to find one whose title you barely understand… To understand the article you might have to write some code yourself.
It’s an amazing high standard.
But I’d say it would exclude 99% of the posts here and anything related to law, privacy, practices and culture.
Also I’d have some issues posting anything I wrote myself, because I only write about topics I understand myself.
because I only write about topics I understand myself
Beginner mistake;)
Beginning expert’s mistake… ;-)
Something I’ve been wondering about (and this is probably the wrong forum to ask about) is whether or not doing this would result in employees or executives having issues if they go to Europe?
I think the question is something along the lines of “could a company be prosecuted for violations of the GDPR if its employees visit or work in Europe”.
I assume the answer is “no”, as long as they’re not actually doing business in Europe. (Which would be the primary reason to have employees there, but with the increased prevalence of remote work, it’s not necessarily the case.)
I am fairly certain you could even go to the EU and work in an office on data for non-EU customers and still not be subject to the GDPR. As long as you are not dealing with any EU entities, your physical location should not matter.
“It applies to all companies processing and holding the personal data of data subjects residing in the European Union, regardless of the company’s location.”
https://www.eugdpr.org/gdpr-faqs.html
So if you are working in the EU, your company would probably need to comply with the GDPR, as they likely have personal information on you in their systems. I guess it comes down to how lawyers would interpret “residence”. Enforceable? Idk.
Suppose I work for a company in Canada and that company flagrantly violates the GDPR. I later leave the company and move to Europe.
Is it possible for Europe to come after me personally, instead of (or as well as) the company?
What if I’m the CTO? CEO? Owner? Just an employee but directly responsible for the GDPR violations?
What if I don’t leave the company and just go to Europe on a vacation?
Is it possible for Europe to come after me personally, instead of (or as well as) the company?
This is the entire point of the legal fiction of a “corporate person”. If a corporation is doing bad things, you go after the corporation. It’s very rare that anyone within the company directly is charged with a crime unless they’re knowingly and intentionally violating something. GDPR is fairly lenient with remediation and other things.
What if I don’t leave the company and just go to Europe on a vacation?
They’d more or less have to issue a warrant for you, and you would know.
Maybe if it were egregious enough.
The US has been known to go after employees of money launderers and copyright violators in other countries, so it’s not without an international precedent, but I’d need more information to give better advice.
This is yet another project that has piqued my curiosity, only to find it participates in open source vendor lock-in by requiring Docker. Due to that, I’m unable to use it.
it participates in open source vendor lock-in by requiring Docker
Can you explain what is “vendor lock-in” about Docker?
Isn’t Docker now part of an “open container initiative” or something?
AFAIK, it’s usually not too difficult to de-Dockerify something.
Because Docker isn’t supported everywhere and won’t be. It’s not supported on the BSDs. Unless there’s a business requirement to run Linux, I only run BSD.
I’ve never heard of “vendor lock-in” meaning “it doesn’t run everywhere”. By that definition almost all software is “vendor lock-in”. Mostly I’ve heard the phrase used to refer to data formats and data in general. But whatever the case may be, the Dockerfile doesn’t mean Docker is required. You’re free to try building it and running it on BSD without Docker.
The old definition of cross-platform code meant it runs on the widely-used platforms regardless of what a vendor chooses. The project itself controls it. This code, if tied to Docker, will only use the hosts and targets Docker supports. It’s locked into what that project chooses. I haven’t heard of open-source vendor lock-in before, but it makes sense: many OSS foundations are easier to use than modify heavily.
These people probably have no intention to put Docker on BSD or take over Docker development. They’ll depend on upstream to do that or not do that. So, they’re locked in if the Docker dependency isn’t easily replaceable by them or their users. If it is easily replaceable, I’d not call it lock-in: just a project preference for development and distribution, with cross-platform being limited to Docker’s definition of platforms. Which may be enough for this project. I can’t say any more than that since I’m just glancing at it.
This code, if tied to Docker, will only use the hosts and targets Docker supports
For probably the third time now: this project is not “tied to Docker”, and the concept of “tied to Docker” for a single piece of code is borderline nonsensical.
There are projects that are “tied to Docker”, but that most likely means they assemble multiple software pieces together via a docker-compose.yml file, not a Dockerfile file.
It does appear that I misread the project. I looked at their deployment guide and Docker is front-and-center. However, it appears that the project does not have a hard dependency on Docker.
For those projects that do have a hard dependency on Docker, my statement still stands. Docker, in those cases, is a form of open source vendor lock-in due to deliberate non-portability.
Docker, in those cases, is a form of open source vendor lock-in due to deliberate non-portability.
Let’s Internet rage at non-portable BSD-specific features as well then.
Docker, in fact, literally only runs on Linux. It uses a wide variety of Linux-specific functionality, and all extant Docker images contain Linux x86 binaries. On Windows and OS X, Docker runs on Linux in a VM (a setup which is impressively fragile and introduces an incredible variety of weird edge cases and complications).
The architecture specifics of Electron have helped it succeed, but what really matters are results: a developer can make a desktop application using a single JavaScript codebase, and it compiles into a binary that works on every OS.
This is obviously bullshit.
“Works” and “every” obviously require scare quotes, at the very least, but I don’t think the overall point is wrong. Electron succeeded (as far as it has succeeded, at any rate, which IMO is certainly too far to be dismissed) because there are massive numbers of incurious web programmers in the world, and Electron enables them to ship an application that will at least execute on several distinct, popular OSs without having to learn anything at all about those OSs or any languages or concepts other than Javascript.
Do you see a different high-level reason for its success?
I think it’s disingenuous to call these web developers “incurious”. There are other practical reasons to use Electron. I had to make a similar decision at my company: how do I allocate limited resources when I need to provide a web app plus iOS and Android apps? Guess what: I chose to go with hybrid mobile apps which shared a lot of code with the web app and between iOS/Android.
Does that make me incurious, or pragmatic?
I’m not sure you understand the effort involved in developing applications for multiple platforms. It’s all very nice to proclaim that these lazy developers just need to learn another two-three languages and another two-three monstrous sets of APIs, damnit, but in reality, what’s needed is another two or three teams of developers.
As a matter of fact, I don’t like Electron and I think companies like Slack can afford the extra developers, but that’s a different argument.
In terms of content and moderation, each instance would be kind of like a “view” over the aggregate data. If you want stricter moderation you could sign up for one instance over another. Each instance could also cater to a different crowd with different focuses, e.g. Linux vs. BSD vs. business-friendly non-technical vs. memes vs. …. Stories not fitting an instance could be blocked by the instance owner. Of course you could also get the catch-all instance where you see every type of story; it might feel like HN.
The current Lobsters has a very specific focus and culture, and also locked into a specific moderation style. Federating it would allow a system closer to Reddit and its subreddit system where each instance has more autonomy, yet the content from the federated instances would all be aggregated.
So of course such a system wouldn’t be a one-to-one replacement for Lobsters but a superset. Ideally an individual instance could be managed and moderated such that it would feel like the Lobsters of today.
The current Lobsters has a very specific focus and culture, and also locked into a specific moderation style. Federating it would allow a system closer to Reddit and its subreddit system where each instance has more autonomy, yet the content from the federated instances would all be aggregated.
If federation results in a reddit-like site, I’d much rather that lobste.rs doesn’t federate. It’s a tech-news aggregator with comments; there’s no real benefit in splitting it up, especially at its current scale.
I get what you’re saying. I think OP framed the idea wrong. People come to Lobsters because they like Lobsters. The question is whom would the federated Lobsters benefit – it would mostly benefit people who aren’t already Lobsters users.
It’s just that the Lobsters code base is open source and actively developed, and much simpler than Reddit’s old open source code. So it’s not unreasonable to want to build a federated version on top of Lobsters’ code rather than start somewhere else.
it would mostly benefit people who aren’t already Lobsters users.
Well, that was my point. Any spammer or shiller can create and recreate reddit and hacker-news accounts, thereby decreasing the quality and standard of the platform and making moderation more difficult. This is exactly what the invite-tree concept prevents, which is quite the opposite of (free) federation.
He came back and I banned him again.
Based on my experience in community management, including here on Lobsters, I do not believe it’s possible for an individual instance in a system like you describe to have a coherent culture which is different from the top-level culture in substantial ways, unless you’re okay with participants feeling constantly under siege. The top-level culture always propagates downward, and overriding it takes an enormous amount of resources and constant effort.
Have you used Mastodon at all? If that’s used as a model, it seems each instance can have a distinct personality, as Mastodon instances do today. Contrast with traditional forums, and Reddit to some extent, which do more-or-less have a tree structure and where your concern definitely applies. With federation there doesn’t necessarily need to exist a top-down structure, even if that might be the easiest to architect (although I don’t know if it is the easiest).
I have used Mastodon, but not enough to have a strong opinion on it. It’s been a challenge for me to pay enough attention to it to keep up with what’s happening; it’s kind of an all-or-nothing thing, and right now Twitter is still taking the attention that I would have to give to Mastodon.
Biggest argument in favor is probably for people that want to leech off of the quality submissions/culture here but who don’t want to actively participate in the community or follow its norms. That and the general meme today of “federated and decentralized is obviously better than the alternative”.
Everybody wants the fruit of tilled gardens, but most people don’t want to put in the effort to actually do the work required to keep them running.
The funny thing is that we’d probably just end up with a handful (N < 4) of lobster peers (after the novelty wears off), probably split along roughly ideological lines:
And sure, that’d scratch some itches, but it’d probably just result in fracturing the community unnecessarily and creating the requirement for careful monitoring of what gets shared between sites. As a staunch supporter of Lobsters Classic, though, I’m of course biased.
I’d be quite interested to see lobsters publish as ActivityPub/OStatus (so I could, for instance, use a mastodon account to follow users / tags / all stories). I don’t see any reason to import off-site activity; one of the key advantages of lobsters is that growth is managed carefully.
Lobsters actually already does this with Twitter, so that seems both entirely straightforward to add and in line with existing functionality.
(Note that I don’t use Twitter, so I can’t speak to how well that feed actually works.)
It won’t go away entirely if the one, special person who happens to own this system decides to make it go away for whatever reason of their own. It won’t die off if this specific instance gets sold or given to someone who can’t handle it and who runs it into the ground.
Can we not post scuttlebutt on twitter from a thread in the dedicated SomethingAwful technology shitposting forum?
How many comments of yours do you think are policing what people post here? 10%, 20%? Before you respond with something along the lines of “eternal september” or “hacker news”, just know I’ve lurked at HN for almost as long as it’s been around and I had a computer in the late 80s.
It is kind of a garbage source. friendlysock is doing people a favor by pointing that out, and I wish I’d read his comment before I read the thread.
If you have any evidence that any of these claims are untrue (a rebuttal from Musk, Tesla, etc.), please share it with us.
Legal systems generally (not the French) go with innocent until proven guilty for a reason. CEOs would not have a lot of time in the day if they had to personally prove every accusation made against them or their company.
Funny, he seems to have time to respond to random twitter accounts all day.
Obviously means regular boring old CEOs, not the visionary ones aimed at Mars…
Taking your jab at French jurisprudence seriously, what do you mean by that? Is this some recent court case?
Because France basically invented the modern Continental legal framework (well, Napoleon overhauled the ancient Roman system) which is used all over Europe (and beyond!) today.
Sure, it is a well known fact that France is the European Guantanamo. 😏
I don’t think Tesla as a corporate entity or Musk as a private individual / CEO will dignify this source with any sort of acknowledgement. That’s a PR no-no.
However, if a person actually trained in ferreting out the truth and presenting it in a verifiable manner (these people are usually employed as journalists) were to pull on this thread, who knows where it might lead?
The standards of evidence in most places, including science, are that you present evidence for your claims since (a) you should already have it and (b) it saves readers time. Bullshit spreads fast, as both the media and Facebook’s experiment show. Retractions and thorough investigations often don’t reach the same audience. So, strong evidence for a source’s identity or claims should be there by default. It’s why you often see me citing people as I make controversial claims, to give people something to check them with.
There’s nothing surprising about the employee’s claims. It’s like asking for evidence that Google spies on users. They admit to it, and so does Tesla. So there’s your evidence, and I think it’s sad that you’re taking these trolls here seriously.
Thanks for the link. Key point:
“Every Tesla has GPS tracking that can be remotely accessed by the owner, as well as by Tesla itself. That means that people will always know where a Tesla is. This feature can be turned off, by entering the car and turning off the remote access feature. I am not sure why you would want to do this, but you can. Unfortunately, there are ways for a thief to turn off the remote access feature, and this will blind you to the specific information about the car. It will not stop Tesla from being able to track the car. They will retain that type of access no matter what, and have the authority to use it in the instances of vehicle theft.”
re taking trolls seriously. We’re calling you out about posting more unsubstantiated claims via Twitter. If your goal is getting info out, then you’ll achieve it better by including links like the one you gave me in the first place. Most people aren’t going to endlessly dig to verify stuff people say on Twitter. They shouldn’t, since the BS ratio is through the roof. Also, that guy didn’t just make obvious claims like they could probably track/access the vehicle: he made many about their infrastructure and management that weren’t as obvious or verifiable. He also made them on a forum celebrated for trolling. So, yeah, links are even more helpful here.
But the point isn’t to even say that everything written here is true. The point is to share a very interesting data point that likely constitutes primary source material, and force a reaction from Tesla to stop their dangerous practices (or offer them a chance to set the record straight if any of this is untrue, which we’ve established is unlikely).
“Dangerous” compared to what? Force how?
Low-effort regurgitation of screencaps is not some big act of rebellion, it is just a way of lowering quality and adding noise.
If we wanted to read fiction we could go enjoy the sister Lobster site devoted to that activity.
Being a troll is “a way of lowering quality and adding noise”.
Which is why several people are asking you to stop it.
Is there any evidence your tweets or Lobsters submissions have changed security or ethical practices of a major company?
If not, then that’s either not what you’re doing here or you should be bringing that content to Tesla’s or investors’ attention via mediums they look at. It’s just noise on Lobsters.
I agree with you in general, but this specific “article” is just garbage. (As far as I’m concerned, Twitter in general should be blacklisted from lobste.rs. Anything there is either content-free or so inconvenient to read as to be inaccessible.)
I agree. I did at least learn from your link that Arnnon Geshuri, Vice President of HR at Tesla, was a senior HR executive at Google that some reports said was involved in the price fixing and abusive retention of labor there. That’s a great hire if you’re an honest visionary taking care of employees who enable your world-changing vision. ;)