Can we not post scuttlebutt on twitter from a thread in the dedicated SomethingAwful technology shitposting forum?
how many comments of yours do you think are policing what people post here? 10%, 20%? Before you respond with something along the lines of “eternal september” or “hacker news”, just know I’ve lurked at HN for almost as long as it’s been around, and I had a computer in the late 80s.
It is kind of a garbage source. friendlysock is doing people a favor by pointing that out, and I wish I’d read his comment before I read the thread.
If you have any evidence that any of these claims are untrue (a rebuttal from Musk, Tesla, etc.), please share it with us.
Legal systems generally (not the French) go with innocent until proven guilty for a reason. CEOs would not have a lot of time in the day if they had to personally prove every accusation made against them or their company.
CEOs would not have a lot of time in the day…
Funny, he seems to have time to respond to random twitter accounts all day.
Taking your jab at French jurisprudence seriously, what do you mean by that? Is this some recent court case?
Because France basically invented the modern Continental legal framework (well, Napoleon overhauled the ancient Roman system) which is used all over Europe (and beyond!) today.
I don’t think Tesla as a corporate entity or Musk as a private individual / CEO will dignify this source with any sort of acknowledgement. That’s a PR no-no.
However, if a person actually trained in ferreting out the truth and presenting it in a verifiable manner (these people are usually employed as journalists) were to pull on this thread, who knows where it might lead?
The standards of evidence in most places, including science, are that you present evidence for your claims since (a) you should already have it and (b) it saves readers time. Bullshit spreads fast, as both the media and Facebook’s experiment show. Retractions and thorough investigations often don’t make it to the same audience. So, strong evidence for a source’s identity or claims should be there by default. It’s why you often see me citing people as I make controversial claims, to give people something to check them with.
There’s nothing surprising about the employee’s claims. It’s like asking for evidence that Google spies on users. They admit to it, and so does Tesla. So there’s your evidence, and I think it’s sad that you’re taking these trolls here seriously.
Thanks for the link. Key point:
“Every Tesla has GPS tracking that can be remotely accessed by the owner, as well as by Tesla itself. That means that people will always know where a Tesla is. This feature can be turned off, by entering the car and turning off the remote access feature. I am not sure why you would want to do this, but you can. Unfortunately, there are ways for a thief to turn off the remote access feature, and this will blind you to the specific information about the car. It will not stop Tesla from being able to track the car. They will retain that type of access no matter what, and have the authority to use it in the instances of vehicle theft.”
re taking trolls seriously. We’re calling you out for posting more unsubstantiated claims via Twitter. If your goal is getting info out, you’ll achieve it better by including links like the one you gave me in the first place. Most people aren’t going to endlessly dig to verify stuff people say on Twitter. Nor should they, since the BS ratio is through the roof. Also, that guy didn’t just make obvious claims like they could probably track/access the vehicle: he made many about their infrastructure and management that weren’t as obvious or verifiable. He also made them on a forum celebrated for trolling. So, yeah, links are even more helpful here.
But the point isn’t to even say that everything written here is true. The point is to share a very interesting data point that likely constitutes primary source material, and force a reaction from Tesla to stop their dangerous practices (or offer them a chance to set the record straight if any of this is untrue, which we’ve established is unlikely).
“Dangerous” compared to what? Force how?
Low-effort regurgitation of screencaps is not some big act of rebellion, it is just a way of lowering quality and adding noise.
But the point isn’t to even say that everything written here is true.
If we wanted to read fiction we could go enjoy the sister Lobster site devoted to that activity.
…it is just a way of lowering quality and adding noise.
Being a troll is “a way of lowering quality and adding noise”.
Is there any evidence your tweets or Lobsters submissions have changed security or ethical practices of a major company?
If not, then that’s either not what you’re doing here or you should be bringing that content to Tesla’s or investors’ attention via mediums they look at. It’s just noise on Lobsters.
I agree with you in general, but this specific “article” is just garbage. (As far as I’m concerned, Twitter in general should be blacklisted from lobste.rs. Anything there is either content-free or so inconvenient to read as to be inaccessible.)
I agree. I did at least learn from your link that Arnnon Geshuri, Vice President of HR at Tesla, was a senior HR executive at Google who, according to some reports, was involved in the price fixing and abusive retention of labor there. That’s a great hire if you’re an honest visionary taking care of employees who enable your world-changing vision. ;)
Ah they tricked me with this one, it’s a Medium article hidden behind another domain.
(Whenever I see “medium.com” next to lobsters articles I know not to click, since the result will be a weak thinkpiece by a frontend developer, wrapped in obtrusive markup.)
“The medium is the message” ;)
I have to admit, though, that seeing a Medium link is generally a negative signal for me. I still click on many of them.
Many confuse Marshall McLuhan’s original meaning of that phrase. It didn’t really mean that the way a message was delivered was part of the message itself. It actually meant that the vast majority of messages were medium or average.
It would have been better said, “meh, the message is average.”
Interesting interpretation. I am not sure how he originally came to that phrase, but his book certainly spent a lot of time and effort arguing for the now prevalent meaning.
This didn’t really make sense to me, so I looked it up, and I don’t think that’s right. The original meaning is exactly what we’ve come to understand it as:
The medium is the message because it is the medium that shapes and controls the scale and form of human association and action. The content or uses of such media are as diverse as they are ineffectual in shaping the form of human association. Indeed, it is only too typical that the “content” of any medium blinds us to the character of the medium. (Understanding Media: The Extensions of Man, 1964, p.9)
I wonder where you’ve heard your interpretation?
I do not trust the Software Freedom Conservancy (and with good reason), but I agree with most of what is written here, except:
Copyright and other legal systems give authors the power to decide what license to choose […]
In my view, it’s a power which you don’t deserve — that allows you to restrict others.
As an author of free software myself, I think that I totally deserve the right to decide who can use my work and how.
I read the article you linked to but didn’t really understand how that means SFC can’t be trusted. Because a project under their umbrella git rebased a repo?
No.
I cannot trust them anymore, because when the project joined Conservancy, I explicitly asked them how my copyright was going to change and Karen Sandler replied that it was not going to change.
One year later I discovered that my name had been completely removed from the sources.
According to the GPLv2 this violation causes a definitive termination of the project’s rights to use or modify the software.
Now, I informed Sandler about that mess (before the rebase) and never heard back from her, despite several of my contributions getting “accidentally squashed” during the rebase.
That’s why I cannot trust them anymore.
Because they are still supporting a project that purposely violated the GPLv2 (causing its definitive termination), and despite the fact that I gave them the chance to fix this, they didn’t… and they tried to remove all evidence of this violation and of the license termination with a rebase… (which still squashed some of my commits).
He’s objecting to restricting others in the way that proprietary software does; that’s the right he says you shouldn’t have. I think your quote edited out the part where bkuhn explains what he was talking about.
But more to your point, I also think that your right to decide how others can use your work should be very limited. With software, an unlimited number of people can benefit from using your work in ways you may disagree with, while you would be the only one who would object. As a bargain with society, your authorial rights should be given smaller weight than the rights of your users.
As a bargain with society, your authorial rights should be given smaller weight than the rights of your users.
Is this a principle that you believe should be only applied to software?
Because if not, one could argue that a person’s special skills (say, as a doctor) are so valuable to society that that person should work for free to assure that the greatest number of people have access to their skill.
If the principle is restricted to expression, a photograph I take of a person could be freely used by a political party that I despise to further their cause through propaganda. I am only one person, and they are many. My pretty picture can help them more than it helps me. So according to the principle above (as I read it) they should have unrestricted access to my work.
I believe that the current regime of IP legislation is weighted too much towards copyright holders, but to argue that a creator should have no rights to decide how their work is used is going too far.
Software is different from doctors because software can be reproduced indefinitely without inconveniencing the author. Photographs are more similar to software than to doctors.
I also didn’t say an author should have no rights. I just said their rights should weigh less. For example, copyrights should expire after, say, 10 years, instead of lasting forever as they de facto do now.
Thanks for clarifying your position in this matter.
I think we are broadly in agreement, especially with regards to the pernicious effects of “infinite copyright”.
It’s funny that I’m taking the part of copyright here…
Let’s put it this way: if I invented a clean energy source I would do my best to ensure it was not turned to a weapon.
Same with software.
It’s my work, thus my responsibility.
An update from the second of the linked articles:
Update August 23rd: The court has confirmed that the searches and seizures were illegal. (Allegedly, no documents or equipment was analysed/evaluated. We are trying to get that confirmed in writing.)
A much better example, IMO, are these Markov-generated Tumblr posts, trained on Puppet documentation and a collection of H.P. Lovecraft stories.
Poetic.
And this one looks like one of those quotes that become historical, even though almost no one who uses it knows what it means:
“Any reasonable number of resources can be specified in a way I can never hope to depict.”
I like King James Programming. Example: Exercise 3.63 addresses why we want a local variable rather than a simple map as in the days of Herod the king
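For the curious, the trick behind these generators is a plain word-level Markov chain: map every n-word prefix to the words that follow it in the corpus, then walk the map. A minimal sketch in Python (the tiny mixed corpus here is my own stand-in, not the actual training data):

```python
import random
from collections import defaultdict

def train(words, order=2):
    """Map each `order`-word prefix to the words that follow it."""
    chain = defaultdict(list)
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        chain[prefix].append(words[i + order])
    return chain

def generate(chain, length=10, seed=None):
    """Walk the chain from a random prefix until it dead-ends."""
    rng = random.Random(seed)
    prefix = rng.choice(list(chain))
    out = list(prefix)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(prefix):]))
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("the resource declaration sleeps in sunken R'lyeh "
          "the dead resource waits dreaming of catalog runs").split()
print(generate(train(corpus), length=10, seed=1))
```

Mixing two corpora (say, Puppet docs and Lovecraft) just means concatenating their word lists before training; the overlapping prefixes are what produce the uncanny crossovers.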
My blog has been more or less actively updated (albeit just with book reviews and monthly photo roundups these last years) since 2004. There have never been any ads on it.
I recently removed the Google Analytics tracker code.
I’ve removed the Disqus integration - I never got comments that were worthwhile.
The blog runs on a $5/mo VPS that provides me with a lot of useful services, so it’s essentially “free”.
That said, while I’d happily declare my blog to be “ad-free”, I do not agree with the statement
I feel the use of corporate advertising on blogs devalues the medium.
Also, the logo is pretty damn ugly. Sorry.
Reminded me of FOAF: http://www.foaf-project.org
I remember being so excited about FOAF when I learned about it around 2005. Those were the heady days of blogs and RSS feeds and open APIs.
Guess that would be
http://gerikson.com/blog/comp/
It’s mostly just my adventures with Advent of Code and Project Euler.
I get that we’re supposed to keep original titles but I’d argue that the spacing is a result of the site’s “styling” and doesn’t have to be reproduced (just like we wouldn’t prepend a # to every submission’s title that links to a markdown doc)
In Swedish typography this is called “spärrning”, from the German Sperrsatz.
https://en.wikipedia.org/wiki/Emphasis_(typography)#Letter_spacing
It’s basically the only emphasis you have available if you can’t underline or use italics/bold.
Related tweetstorm: https://twitter.com/www_ora_tion_ca/status/1028010688650911746
It was interesting reading the reactions.
For as long as I can remember, people who have observed software development (insiders as well as outsiders) have compared it unfavorably with other professions when it comes to stuff like safety, verification, and adhering to standards.
But the thread is full of “well, actually”s from people who seem to think that the current way of designing software is just fine, and it’s bad people attacking software that’s the problem.
Astronomers use the Julian period because it is convenient to express long time intervals in days rather than months, weeks and years. It was devised by Joseph Scaliger, in 1582, who named it after his father Julius, thus creating the confusion between the Julian (Caesar) calendar and the Julian (Scaliger) period.
I consider myself a time and date nerd and I had no idea this was the case. I thought the two Julians were one and the same.
Steelseries 6GV2 at work.
Some older Keytronics (the one with the grayish spacebar with the lightning bolt on it) at home.
The only customization I do is to convert Caps Lock to Control.
I’ve heard the “binary logs are evil!!!” mantra chanted against systemd so many times that it isn’t funny anymore. It’s a terrible argument. With so many big players putting their logs into databases, and the popularity of the ELK stack, it is pretty clear that storing logs in a non-plaintext format works. Way back in 2015, I wrote two blog posts about the topic.
The gist of it is that binary logs can be awesome, if put to good use. That journald is not the best tool is another matter, but journald being buggy doesn’t mean binary logs are bad. It just means that journald is possibly not the most carefully engineered thing out there. There are many things to criticize about systemd and the journal, and they both have their fair share of issues, but binary storage of logs is not one of them.
Okay, so can we just assume all complaints about “binary logs” are just about these binary logs and get on with things?
The journald/systemd people don’t act like they have any clue what’s going on in the real world: people can’t use the tools they used to, and the new tools evidently suck. Plain text sucked less, so what’s the plan to get anything better?
I don’t think that’s entirely reasonable. It’s converting a complaint about principle (“don’t do binary logs”) into a complaint about practice, and that makes a big difference. If journald is a bad implementation of an ok idea, that requires very different steps to fix than if it’s a fundamentally bad idea.
What you’re describing makes sense for people on the systemd project to say (“woah, people hate our binary logs, maybe we should work on them”[0]), but not for the rest of us trying to understand things.
[0] I fear they’re not saying that, as they seem somewhat impervious to feedback
I feel like @geocar is against binary logs as a source format, but not as an intermediate or analytics format. Even if your application uses structured logging, it can still be stored in a text file, for example as JSON, at the source. It can be converted to a binary log later in the chain, for example on a centralized logging server, using ELK, SQL, MongoDB, Splunk or whatever. The benefit is that you keep a lot of flexibility at the source (in terms of supporting multiple formats depending on the source application) and are still able to go back to the plain text log if you encounter a problem.
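A minimal sketch of what that source format could look like (the `log_event` helper and its field names are invented for illustration; any real structured-logging library does the same thing):

```python
import io
import json
import time

def log_event(stream, level, message, **fields):
    """Write one structured log record as a single JSON line."""
    record = {"ts": time.time(), "level": level, "msg": message, **fields}
    stream.write(json.dumps(record) + "\n")

# The source stays plain text (one JSON object per line); a collector
# further down the chain can re-index it into ELK, SQL, etc.
buf = io.StringIO()  # stands in for an open log file
log_event(buf, "info", "user login", user="alice", ip="192.0.2.1")
print(buf.getvalue(), end="")
```

The point is that the on-disk source remains greppable text, while everything downstream can treat it as structured data.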
I’m not even against binary logs “as a source format.”
Firstly: I recognise that “complaints about binary logs” is directed at journald and isn’t the same thing about complaints about logs in some non-text format.
I think getting systemd adopted so deeply forced sysadmins to retool on top of journald, and that hurt a lot for very little gain (if there was any gain at all; for most workflows I suspect there wasn’t). This has almost certainly put people off binary logs, and has almost certainly got people complaining about binary logs.
To that end: I don’t think those feelings around binary logs are misplaced.
Some humility is [going to be] required when trying to win people over with binary logs, but appropriating the term “binary logs” to include tools the sysadmin chooses is like pulling the rug out from under somebody, and that’s not helping.
Thank you very much for clarifying. I agree that forcing sysadmins “to retool on top of journald” hurts.
No, it’s recognising that when enough people are complaining about “the wrong thing”, telling them it’s the wrong thing doesn’t help them. It just causes them to dig in.
What’s the right thing?
I think that’s the point of the bug…
Okay, so can we just assume all complaints about “binary logs” are just about these binary logs and get on with things?
As soon as the complaints start to be about journald and not “binary logs”, and the distinction is made explicit, yeah, we can. It’s been four years, so I’m not going to hold my breath.
and these tools evidently suck
For a lot of use cases, they do not suck. For many, they are a vast improvement over text logs.
what’s the plan to get anything better?
Stop logging unstructured text to syslog or stdout, and either log to files or to a database directly. Pretty much what you’ve been (or should have been) doing the past few decades, because both syslog and stdout are terrible interfaces for logs.
As soon as the complaints start to be about journald and not “binary logs”, and the distinction is made explicit, yeah, we can. It’s been four years, so I’m not going to hold my breath.
People complain about things that hurt, and between Windows and journald it should not be a surprise that “binary logs” is getting the flak. journald has a lot of outreach work to do if they want to fix it.
For a lot of use cases, [the tools] do not suck. For many, they are a vast improvement over text logs.
And yet when programmers make mistakes implementing them, the sysadmins are left cleaning up after them.
Text logs have the massive material advantage that the sysadmin can do something with them. Binary logs need tools to do things, and the journald implementation has a lot of work to do.
Most of the “big players” use a transparent structuring layer rather than making binary logs their golden source of knowledge. This allows people to get a lot of the advantages of binary logs with few disadvantages (and given how cheap disk is, the price is basically zero).
Stop logging unstructured text to syslog or stdout, and either log to files or to a database directly. Pretty much what you’ve been (or should have been) doing the past few decades, because both syslog and stdout are terrible interfaces for logs.
These are directions to developers, not to sysadmins. Sysadmins are the ones complaining.
Are we really to interpret this as “refuse to install any software that doesn’t follow this rule”?
I’m willing to whack some perl together to get the text log data queryable for my business, but you give me a binary turd I need tools and documentation and advice.
Most of the “big players” use a transparent structuring layer rather than making binary logs their golden source of knowledge.
What do you mean by a “transparent structuring layer”?
Something to structure the plain text logs into some tagged format (like JSON or protocol buffers).
Splunk e.g. lets users create a bunch of regular expressions to create these tags.
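To illustrate the idea (this is plain Python `re`, not Splunk’s actual extraction config): a regex with named groups turns one free-form access-log line into tagged fields, leaving the text log itself untouched.

```python
import re

# One "field extraction" rule: each named group becomes a tag.
EXTRACT = re.compile(
    r'(?P<client>\S+) - - \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3})'
)

line = '192.0.2.1 - - [10/Aug/2018:13:55:36 +0000] "GET /index.html HTTP/1.1" 200'
fields = EXTRACT.match(line).groupdict()
print(fields["method"], fields["status"])
```

The original line stays the golden source; the extracted fields are just a queryable view on top of it.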
Text logs have the massive material advantage that the sysadmin can do something with them. Binary logs need tools to do things, and the journald implementation has a lot of work to do.
For some values of “can do”, yes. Most traditional text logs are terrible to work with (see my linked blog posts; I’m not going to repeat them here again). Besides, as long as your journal files aren’t corrupt (which happens less and less often these days, I’m told), you can just use journalctl to dump the entire thing and grep the logs, just like you grep text files. Or filter them first, or dump to JSON and use jq, and so on. Plenty of options there.
Most of the “big players” use a transparent structuring layer rather than making binary logs their golden source of knowledge.
Clearly our experience differs. Most syslog-ng PE customers (and customers of related products) made binary logs (either PE’s LogStore, or an SQL database) their golden source of knowledge. A lot of startups - and bigger businesses - outsourced their logging to services like loggly, which are a black box like binary logs.
These are directions to developers, not to sysadmins. Sysadmins are the ones complaining.
These are directions to sysadmins too. The majority of daemons support logging to files, or use a logging framework where you can set them up to log directly to a central collector, or to a database directly. For a huge list of applications, bypassing syslog has been there since day one. Apache, Nginx, pretty much any Java application can all do this, just to name a few things. There are some notable exceptions such as postfix which will always use syslog, but there are ways around that too.
You can bypass the journal with most applications, some support that easily, some require a bit more work, but it has been doable by sysadmins all these years. I know, because I’ve done it without modifying any code.
I’m willing to whack some perl together to get the text log data queryable for my business, but you give me a binary turd I need tools and documentation and advice.
With the journal, you have journalctl, which is quite well documented.
Clearly our experience differs. Most syslog-ng PE customers…
Do you believe that syslog-ng has even significant market share of users responsible for logging? Even excluding SMB/VSMB?
outsourced their logging to services like loggly, which are a black box like binary logs.
I would be surprised to find that most people that use loggly don’t keep any local syslog files.
What exactly are you arguing here?
Plenty of options there.
And?
You can bypass the journal with most applications, some support that easily, some require a bit more work, but it has been doable by sysadmins all these years. I know, because I’ve done it without modifying any code.
Right, and the goal is to get people using journald right?
If journald doesn’t want to be used, what is its reason for existing?
Do you believe that syslog-ng has even significant market share of users responsible for logging? Even excluding SMB/VSMB?
Yes.
I would be surprised to find that most people that use loggly don’t keep any local syslog files.
Most I’ve seen only keep local logs because they’re too lazy to clean them up, and just leave them to the default logrotate. In the past… six or so years, all loggly (& similar) users I worked with, never looked at their text logs, if they had any to begin with.
Right, and the goal is to get people using journald right?
For systemd developers, perhaps. I’m not one of them. I don’t mind the journal, because it’s been working fine for my needs. The goal is to show that you can bypass it, if you don’t trust it. That you can get to a state where your logs are processed and stored efficiently, in a way that is easy to work with - easier than plain text files. Without using the journal. But with it, it may be slightly easier to get there, because you can skip the whole getting around it dance for those applications that insist on using syslog or stdout for logging.
Do you believe that syslog-ng has even significant market share of users responsible for logging? Even excluding SMB/VSMB?
Yes.
I think you’re completely wrong.
There are a lot of Debian/RHEL/Ubuntu/*BSD (let alone Windows) machines out there, and they’re definitely not using syslog-ng by default…
Debian publishes install information: syslog-ng versus rsyslogd. It’s no contest.
A big bank I’m working with has zero: all rsyslogd or Windows.
Also, the world is moving to journald…
So, why exactly do you believe this?
In the past… six or so years, all loggly (& similar) users I worked with, never looked at their text logs, if they had any to begin with.
Most I’ve seen only keep local logs because they’re too lazy to clean them up, and just leave them to the default logrotate.
Okay, but why do you think this contradicts what I say?
You’re talking about people who have built a custom (text based!) logging system, streaming via the syslog protocol. The golden source was text files.
The goal is to show that you can bypass it, if you don’t trust it.
Ah well, this is a very different topic than what I’m replying to.
I can obviously bypass it by not using it.
I was simply trying to explain why people who complain about binary logging aren’t ignorant/crackpots, and are complaining about something important to them.
I think you’re completely wrong.
I think I know better how many syslog-ng PE customers there are out there (FTR, I work at BalaBit, who make syslog-ng). It has a significant market share. Significant enough to be profitable (and growing), in an already crowded market.
A big bank I’m working with has zero: all rsyslogd or Windows.
…and we have big banks who run syslog-ng PE exclusively, and plenty of other customers, big and small.
Also, the world is moving to journald…
…and syslog-ng plays nicely with it, as does rsyslog. They nicely extend each other.
You’re talking about people who have built a custom (text based!) logging system, streaming via the syslog protocol. The golden source was text files.
I think we’re misunderstanding each other… What I consider the golden source may be very different from what you consider. For me, the golden source is what people use when they work with the logs. It may or may not be the original source of it.
I don’t care much about the original source (unless it is also what people query), because that’s just a technical detail. I don’t care much how logs get from one point to another (though I prefer protocols that can represent structured data better than the syslog protocol). I care about how logs are stored, and how they are queried. Everything else is there to serve this end goal.
Thus, if an application writes its logs to a text file, which I then process and ship to a data warehouse, I consider that to be binary logs, because that’s what it will ultimately end up as. Since this warehouse is the interface, the original source can be safely discarded once it has shipped. As such, I can’t consider it the golden source.
If we restricted “binary logs” to stuff that originated as binary from the application, then we should not consider the Journal to use binary logs either, because most of its sources (stdout and syslog) are text-based. If the Journal uses binary logs, then anything that stores logs as binary data should be treated the same. Therefore, everything that ends up in a database, ultimately makes use of binary logs. Even if their original form, or the transports they arrived there, were text.
(Transport and storage are two very different things, by the way.)
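A toy version of that pipeline (SQLite standing in for the warehouse; the log format is invented for the example) shows the shape of it: text at the source, a binary store as the query interface.

```python
import sqlite3

# Binary store that becomes the query interface once logs are shipped.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE logs (ts TEXT, level TEXT, msg TEXT)")

# Plain text lines as they arrived from the source application.
source_lines = [
    "2018-08-23T10:00:01 ERROR disk full",
    "2018-08-23T10:00:02 INFO backup done",
]
for line in source_lines:
    ts, level, msg = line.split(" ", 2)
    db.execute("INSERT INTO logs VALUES (?, ?, ?)", (ts, level, msg))

# After shipping, queries hit the structured store, not the text file.
rows = db.execute("SELECT msg FROM logs WHERE level = 'ERROR'").fetchall()
print(rows)  # [('disk full',)]
```

Whether transport was text or binary is irrelevant here; what matters is which store people actually query.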
I was simply trying to explain why people who complain about binary logging aren’t ignorant/crackpots, and are complaining about something important to them.
I never said they are. All I said is that storing logs in binary is not inherently evil, linked to blog posts where I explain pretty much the same thing, and give examples for how binary storage of logs can improve one’s life. (Ok, I also asserted that syslog and stdout are terrible interfaces for logs, and I maintain that. This has nothing to do with text vs binary though - it is about free-form text being awful to work with; see the linked blog posts for a few examples why.)
I think I know better how many syslog-ng PE customers there are out there
Or we just have different definitions of significant.
Significant enough to be profitable (and growing), in an already crowded market.
Look, I have an advertising business that makes enough money to be profitable, and is growing, but I’m not going to say I have a “significant” market share of the digital advertising business.
But whatever.
All I said is that storing logs in binary is not inherently evil
And I didn’t disagree with that.
If you try and re-read my comments knowing that, maybe it’ll be more clear what I’m actually pointing to.
At this point, we’re just talking past each other, and there’s no point in that.
Yeah, I know someone who runs a keyserver and they are getting absolutely sick of responding to the GDPR troll emails.
Love the idea of using ActivityPub (the same technology behind Mastodon) for keyservers. That’s really smart!
Offtopic: Excuse me.
I think it depends on some conditions, so not everybody is going to see this every time. But when I click on Medium links I tend to get this huge dialog box come up over the entire page saying something about registering. It’s really annoying. I wish we could host articles somewhere that doesn’t do this.
My opinion is that links should be links to some content. Not links to some kind of annoyware that I have to click past to get to the real article.
Could you give an example? That sounds like a pleasant improvement, but i don’t know exactly what you mean by a cached link.
I started running uMatrix and added rules to block all 1st party JS by default. It does take a while to whitelist things, yes, but it’s amazing when you start to see how many sites use JavaScript for stupid shit. Imgur requires JavaScript to view images! So do all Squarespace sites (it’s for those fancy hover-over zoom boxes).
As a nice side effect, I rarely ever get paywall modals. If the article doesn’t show, I typically plug it into archive.is rather than enable javascript when I shouldn’t have to.
I do this as well, but with Medium it’s a choice between blocking the pop-up and getting to see the article images.
I think if you check the “Spoof <noscript> tags” option in uMatrix then you’ll be able to see the images.
How timely! Someone at the office just shared this with me today: http://makemediumreadable.com
From what I can see, the popup is just a begging bowl, there’s actually no paywall or regwall involved.
I just click the little X in the top right corner of the popup.
But I do think that anyone who likes to blog more than a couple of times a year should just get a domain, a VPS and some blog software. It helps decentralization.
I use the kill sticky bookmarklet to dismiss overlays such as the one on medium.com. And yes, then I have to refresh the page to get the scroll to work again.
On other paywall sites when I can’t scroll, (perhaps because I removed some paywall overlay to get at the content below,) I’m able to restore scrolling by finding the overflow-x CSS property and altering or removing it. …Though, that didn’t work for me just now on medium.com.
Actually, it’s the overflow: hidden; CSS that I remove to get pages to scroll after removing some sticky div!
I run an SKS keyserver, have some patches in the codebase, wrote the operations documents in the wiki, etc.
Each keyserver is run by volunteers, peering with each other to exchange keys. The design was based around “protection against government attempts to censor keys”, dating from the first crypto wars. They’re immutable append-only logs, and the design approach is probably about dead. Each keyserver operator has their own policies.
I am a US citizen, living in the USA, with a keyserver hosted in the USA. My server’s privacy statement is at https://sks.spodhuis.org/#privacy but that does not cover anyone else running keyservers. [update: I’ve taken my keyserver down, copy/paste of former privacy policy at: https://gist.github.com/philpennock/0635864d34a323aa366b0c30c7360972 ]
You don’t know who is running keyservers. It’s “highly likely” that at least one nation has some acronym agency running one, at some kind of arms-length distance: it’s an easy and cheap way to get metadata about who wants to communicate privately with whom, where you get the logs because folks choose to send traffic to you as a service operator. I went into a little more depth on this over at http://www.openwall.com/lists/oss-security/2017/12/10/1
Thanks for this info.
Fundamentally, GDPR is about giving the right to individuals to censor content related to themselves.
A system set out to thwart any censorship will fall afoul of GDPR, based on this interpretation.
However, people who use a keyserver are presumably A-OK with associating their info with an append-only immutable system. Sadly, GDPR doesn’t really take this use case into account (I think, I am not a lawyer).
I think what’s important to note about GDPR is that there’s an authority in each EU country that’s responsible for handling complaints. Someone might try to troll keyserver sites by attempting to remove their info, but they will have to make their case to this authority. Hopefully this authority will read the rules of the keyserver and decide that the complainant has no real case based on the stated goals of the keyserver site… or they’ll take this as a golden opportunity to kneecap (part of) secure communications.
I still think GDPR in general is a good idea - it treats personal info as toxic waste that has to be handled carefully, not as a valuable commodity to be sold to the highest bidder. Unfortunately it will cause damage in edge cases, like this.
gerikson you make really good points there about the GDPR.
Consenting people are not the focus of this, though; it’s about current and potential abuse of the servers, and about people who have not consented to their information being posted and have no way to remove it.
The Supervisory Authorities won’t ignore that; this is why the keyservers need to change, to prevent further abuse and their own extinction.
They also won’t treat this case specially, just like the recent ICANN case, where the requirement to store your information publicly with your domain was rejected outright. The keyservers are not necessary for the functioning of the keys you upload, and a big part of the GDPR is processing data only as long as necessary.
Someone recently made a point about the below term non-repudiation.
Non-repudiation, in digital security, means:
A service that provides proof of the integrity and origin of data.
An authentication that can be asserted to be genuine with high assurance.
Keyservers don’t do this! You can have the same email address as anyone else, and even the maintainers and creator of the SKS keyservers state this, recommending you verify through other means (such as telephone or in person) that keys are what they appear to be.
I also don’t think this is an edge case; I think it’s a wake-up call to rethink the design of the software and catch up with the rest of the world, quickly.
Lastly, I don’t approve of trolling; if you’re doing it just for the sake of doing it, DON’T. But if you genuinely feel the need to submit a “right to erasure” request because you did not consent to having your data published, please do it.
Thank you for the link: http://www.openwall.com/lists/oss-security/2017/12/10/1 — it’s a fantastic read and makes some really good points.
It’s easy for anyone to get hold of recent dumps from the SKS servers; I just hunted through a recent dump of 5 million+ keys yesterday looking for interesting data. I’ll be writing an article about it soon.
I totally agree; it has been bothering me as well, and I’m in the middle of considering starting up my own self-hosted blog. I also don’t like Medium’s method of charging for access to people’s stories without giving them anything.
I’m thinking of setting up a blog platform, like Medium, but totally free of bullshit for both the readers and the writers. Though the authors pay a small fee to host their blog (it’s a personal website/blog engine, as opposed to Medium which is much more public and community-like).
If that could be something that interests you, let me know and I’ll let you know :)
Correction: turns out you can get paid if you sign up for their partner program, but I think it requires approval n shit.
hey @pushcx, is there a feature where we can prune a comment branch and graft it on to another branch? asking for a friend. Certainly not a high priority feature.
No, but it’s on my list of potential features to consider when Lobsters gets several times the comments it does now. For now the ‘off-topic’ votes do OK at prompting people to start new top-level threads, but I feel like I’m seeing a slow increase in threads where promoting a branch to a top-level comment would be useful enough to justify the disruption.
Unwinding after my penultimate week before vacation.
Doing some minor work around the house and garden.
Doing a longer (relatively, around 10km) bike ride.
Watching the World Cup final (on Sunday).
The license for this software is unclear.
Eschewing normal practice, there’s no LICENSE file in the source distribution.
I’m asking this because DJB seems to have views on software licensing that are at odds with the majority of the FOSS community. I’m not sure if this is still the case though.
From djb’s previous writings and software, he probably intends this to be license-free software.
And I know licensing is an interesting, complex topic that’s fun to armchair lawyer, so if folks want to pick up this topic please start by linking to and building your comment on the 20+ years of previous discussion, and avoid moralizing/shaming others’ licensing choices.
I wouldn’t necessarily qualify many of djb’s works as “license free”. He has explicitly put many of them into the public domain. See some of the license related notations on https://cr.yp.to/distributors.html as well.
Complicated software breaks. Simple software is more easily understood and far less prone to breaking: there are fewer moving pieces, fewer lines of code to keep in your head, and fewer edge cases.
Sometimes code is complicated because the problem is complicated.
And sometimes the simple solution is wrong, even for something as basic as averaging two numbers.
But there’s a difference here: code is simple when it doesn’t introduce (too much of) its own accidental complexity. The innate complexity of the field is out of the equation; you can’t do anything about it. But the code must strive to express its intent as simply as possible. It’s not a contradictory goal.
No, it’s not the problem that’s complicated, it’s the underlying platform on which they chose to develop. You wouldn’t have that bug in Common Lisp, Ruby, or any environment with bignums.
Funny as he is, there were systems- and performance-oriented variants of LISP that either stayed low-level or produced C output that was low-level. They were really fast with many advantages of LISP-like languages. PreScheme was an early one. My current favorite, since they beat me to my idea of C-like Scheme, was ZL where authors did C/C++ in Scheme. With Scheme macros, one might pick the numeric representation that worked best from safest to fastest. Even change it over time if you had to start with speed on weak hardware but hardware got better.
These days, we even have type-safe and verified schemes for metaprogramming that might be mixed with such designs. So, you get clean-looking code that generates the messy, high-performance stuff maybe in verified or at least type-checked way. People are already doing similar things for SIMD. And you’re still using x86! And if you want, you can also use a VIA x86 CPU so you can say hardware itself was specified and verified in LISP-based ACL2. ;)
I’m not sure this really rebuts the claim. Is complicated code that solves complicated problems immune from breaking?
Also, I don’t think he recommended stopping at simple and ignoring correct.
Is complicated code that solves complicated problems immune from breaking?
It’s more that some problem cannot be solved with simple code, because you can’t capture the whole complexity of the problem without writing a lot of code to capture it.
Consider tax code. Accurately following tax law is going to be messy because tax law itself is messy.
Definitely not. Simple but wrong is no good by definition. You can do without “fast”, but you still need “fast enough”: if you can accomplish the task simply and correctly but can’t run it quickly enough for real-life workloads, you might as well call it broken.
I am put in mind of the quote:
You can make simple software with obviously no defects, or complicated software with no obvious defects.
I don’t even think “correct” software is required–for many lucrative things, you have a human being interpreting the results, and oftentimes incorrect or wrong output can still be of commercial value.
That’s why we have customer support folks. :)
This is a misquote of C.A.R. Hoare:
I conclude that there are two ways of constructing a software design: one way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.
Nothing wrong with a misquote btw, as it means you internalized the statement rather than regurgitating the words :).
If I understand correctly (hah), his point is that if you aim for simplicity, it’s easier to ensure correctness.
I think what really hurts generative art is obscurity.
I was vaguely aware that this genre was “a thing” but I’ve never seen an exhibit advertised in my home town (Stockholm).
Like what?
The mod correctly removed my commentary from the story because, per the guidelines (which I missed), it should be in a separate comment. So in reference to your question I’m copying the removed comment here for context:
As far as what cars you can buy, there are many cars, new and old, that don’t have an Internet connection. Shop around. I personally plan to stick to used petrol based cars until auto manufacturers are able to design an electric car that I actually like.
Really? There are many new cars that don’t have internet connections? And software quality in most automobiles is appreciably better? Care to cite a source?
https://www.wired.com/brandlab/2016/02/how-connectivity-is-driving-the-future-of-the-car/
Indeed. People in cars represent a lucrative, and increasingly “captive” market for advertising.
This, coupled with the obvious interest of insurance companies and local tax authorities to know exactly where cars are and how fast they’re going will drive increasing addition of connectivity to cars. Note I did not say “adoption”, as it will be increasingly difficult to opt out of such connectivity.
It’s your choice to live in a Ferengi dystopia.
Lacking off planet travel options, …
You can buy older cars that are in good shape. The one I drive has no tracking devices. It’s pretty good on gas. Maintenance has been a few hundred this year. (Shrugs)
You gotta look carefully, though. Even low-end stuff might have tracking they don’t advertise. At least they’re not remote-controlled death machines.
The next frontier will be active emanation attacks on the computers, trying to glitch them. Police in one area had something like that mounted on a helicopter. Low-cost RF boards combined with high-output components will make those attacks cheaper. Might need TEMPEST shielding for car computers, even on older cars, if expecting a targeted attack.
Also, an older, common car will be cheap to fix due to being simpler (usually), part availability, commodity parts, and technician familiarity. There’s even junkyards out here like U-Pull-It that let you get parts out of wrecked or dead cars dirt cheap. Many parts are still fine even in a totalled vehicle.
https://www.bbc.co.uk/news/technology-25197786
Thanks. I can’t remember if it’s same company but same effect. The story also has this point supporting my recommendation of older vehicles in other comment:
“But because the device works on electronic systems, he acknowledged that it would not work on all older vehicles. ‘Certainly if you took a 1960s Land Rover, there’s a good chance you’re not going to stop it,’”
Might need really older vehicles for this one, though. Analog and mechanical systems to the rescue. :)
Let’s go back to those old slant-6s or straight-8s - 12 mpg, spewing leaded gas fumes, heavy, none of that fancy electronic safety stuff like airbags, real distributors with points that could wear down, etc. Sadly, all engineering involves tradeoffs - if we are lucky.
Most of the stuff you’re mentioning can be done without electronics, or with minimal use of them. They’re simple enough that they might also be able to use hardened electronics. There’s just nobody building cars that way due to no demand for RF-proof cars. We might see it happen on the armored-car side, though, if attackers start trapping important people in their cars.