I’ve just finished Infomocracy by Malka Older. It was entertaining as “political intrigue + gadgets” but really lacking in terms of scientific plausibility, character development and other kinds of depth, so it didn’t live up to the recommendations I’ve seen for it. But then, I have a strong preference for hard sci-fi so it’s not totally unexpected.
Before that, I read The Arrows of Time by Greg Egan, the last book in his Orthogonal trilogy. That was fascinating, like all of his books I’ve read, and it takes place in a universe with different physics. Egan also wrote a ~250 page primer on those physics. It blows my mind that somebody can take literary research to such levels.
Now I’m back to Thinking, Fast and Slow by Daniel Kahneman. I keep stepping away from it because it’s just so uncomfortable to see how deeply irrational we are. Lots to think about.
What would you like to have improved in generative testing in general?
Smart ways to generate recursive data structures with low risk of them blowing up exponentially as the size parameter increases
Smart ways to direct generator distributions to problematic inputs.
Smart ways to generate recursive data structures with low risk of them blowing up exponentially as the size parameter increases
Most automated implementations suffer from this (e.g. generic deriving libraries in Haskell). The problem comes down to the expected number of recursive calls: for example, if a binary tree generator has a 50/50 chance of picking a leaf or a node, the leaf makes no recursive calls and the node makes two, so we can expect 0.5 * 0 + 0.5 * 2 = 1 recursive call per step; hence the expected size of our data is unbounded. When a constructor can have many sub-expressions, like a list, this number grows even faster.
The naive way to tackle this is to adjust the probabilities so that a leaf is chosen more often and the expected number of recursive calls is < 1. Unfortunately this causes exponential decay in the size of the generated data, so we may never see values more than a few levels deep.
I tend to avoid this by passing a “fuel” parameter through the generator. This is conserved, so if we want to generate multiple pieces of data (e.g. elements in a list) we must divide it up. The original QuickCheck paper mentions this, but says it’s undesirable since it couples together different parts of the generated data (if some values are large, the others will be small).
There are some smarter approaches too although I’ve not used them.
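To make the fuel idea concrete, here is a minimal sketch in JavaScript (the names and the exact fuel-splitting policy are my own, not from the QuickCheck paper): a node costs one unit of fuel and divides what remains between its subtrees, so the fuel is a hard upper bound on tree size.

```javascript
// Fuel-conserving generator for binary trees. A node spends one unit of
// fuel and splits the remainder between its children, so a tree generated
// with `fuel` units can never contain more than `fuel` nodes.
function genTree(fuel) {
  // Out of fuel, or the coin flip says stop: emit a leaf.
  if (fuel <= 0 || Math.random() < 0.5) return { tag: "leaf" };
  // Spend one unit on this node, divide the remainder between subtrees.
  const leftFuel = Math.floor(Math.random() * fuel);
  return {
    tag: "node",
    left: genTree(leftFuel),
    right: genTree(fuel - 1 - leftFuel),
  };
}

// Number of internal nodes, to check the bound.
function size(t) {
  return t.tag === "leaf" ? 0 : 1 + size(t.left) + size(t.right);
}
```

Note the coupling the QuickCheck paper warns about is visible here: a large left subtree necessarily starves the right one of fuel.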
Yeah, I know someone who runs a keyserver and they are getting absolutely sick of responding to the GDPR troll emails.
Love the idea of using ActivityPub (the same technology behind Mastodon) for keyservers. That’s really smart!
Offtopic: Excuse me.
I think it depends on some conditions, so not everybody is going to see this every time. But when I click on Medium links I tend to get this huge dialog box come up over the entire page saying something about registering. It’s really annoying. I wish we could host articles somewhere that doesn’t do this.
My opinion is that links should be links to some content. Not links to some kind of annoyware that I have to click past to get to the real article.
Could you give an example? That sounds like a pleasant improvement, but I don’t know exactly what you mean by a cached link.
I started running uMatrix and added rules to block all first-party JS by default. It does take a while to whitelist things, yes, but it’s amazing when you start to see how many sites use JavaScript for stupid shit. Imgur requires JavaScript to view images! So do all Squarespace sites (it’s for those fancy hover-over zoom boxes).
As a nice side effect, I rarely ever get paywall modals. If the article doesn’t show, I typically plug it into archive.is rather than enable JavaScript when I shouldn’t have to.
I do this as well, but with Medium it’s a choice between blocking the pop-up and getting to see the article images.
I think if you check the ‘spoof <noscript> tags’ option in uMatrix then you’ll be able to see the images.
How timely! Someone at the office just shared this with me today: http://makemediumreadable.com
From what I can see, the popup is just a begging bowl, there’s actually no paywall or regwall involved.
I just click the little X in the top right corner of the popup.
But I do think that anyone who likes to blog more than a couple of times a year should just get a domain, a VPS and some blog software. It helps decentralization.
I use the kill sticky bookmarklet to dismiss overlays such as the one on medium.com. And yes, then I have to refresh the page to get the scroll to work again.
On other paywall sites when I can’t scroll, (perhaps because I removed some paywall overlay to get at the content below,) I’m able to restore scrolling by finding the overflow-x CSS property and altering or removing it. …Though, that didn’t work for me just now on medium.com.
Actually, it’s the overflow: hidden; CSS that I remove to get pages to scroll after removing some sticky div!
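For anyone curious, the combination of the two tricks described above (remove sticky/fixed elements, then undo the overflow: hidden) fits in a few lines. This is a hypothetical sketch factored as a function over a document-like object; a bookmarklet would simply call killSticky(document).

```javascript
// Remove fixed/sticky overlays and restore scrolling. A sketch of the
// "kill sticky" bookmarklet idea; call as killSticky(document).
function killSticky(doc) {
  const view = doc.defaultView; // the window, for computed styles
  for (const el of Array.from(doc.querySelectorAll("*"))) {
    const pos = view.getComputedStyle(el).position;
    if (pos === "fixed" || pos === "sticky") el.remove();
  }
  // Sites often pair the overlay with `overflow: hidden` on <html>/<body>.
  doc.documentElement.style.overflow = "visible";
  doc.body.style.overflow = "visible";
}
```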
I run an SKS keyserver, have some patches in the codebase, wrote the operations documents in the wiki, etc.
Each keyserver is run by volunteers, peering with each other to exchange keys. The design was based around “protection against government attempts to censor keys”, dating from the first crypto wars. They’re immutable append-only logs, and the design approach is probably about dead. Each keyserver operator has their own policies.
I am a US citizen, living in the USA, with a keyserver hosted in the USA. My server’s privacy statement is at https://sks.spodhuis.org/#privacy but that does not cover anyone else running keyservers. [update: I’ve taken my keyserver down, copy/paste of former privacy policy at: https://gist.github.com/philpennock/0635864d34a323aa366b0c30c7360972 ]
You don’t know who is running keyservers. It’s “highly likely” that at least one nation has some acronym agency running one, at some kind of arms-length distance: it’s an easy and cheap way to get metadata about who wants to communicate privately with whom, where you get the logs because folks choose to send traffic to you as a service operator. I went into a little more depth on this over at http://www.openwall.com/lists/oss-security/2017/12/10/1
Thanks for this info.
Fundamentally, GDPR is about giving the right to individuals to censor content related to themselves.
A system set out to thwart any censorship will fall afoul of GDPR, based on this interpretation
However, people who use a keyserver are presumably A-OK with associating their info with an append-only immutable system. Sadly, GDPR doesn’t really take this use case into account (I think; I am not a lawyer).
I think what’s important to note about GDPR is that there’s an authority in each EU country that’s responsible for handling complaints. Someone might try to troll keyserver sites by attempting to remove their info, but they will have to make their case to this authority. Hopefully this authority will read the rules of the keyserver and decide that the complainant has no real case based on the stated goals of the keyserver site… or they’ll take this as a golden opportunity to kneecap (part of) secure communications.
I still think GDPR in general is a good idea - it treats personal info as toxic waste that has to be handled carefully, not as a valuable commodity to be sold to the highest bidder. Unfortunately it will cause damage in edge cases, like this.
gerikson you make really good points there about the GDPR.
Consenting people are not the entire focus here, though; it’s about current and potential abuse of the servers, and about people who have not consented to their information being posted and have no way to remove it.
The supervisory authorities won’t ignore that; this is why the keyservers need to change, to prevent further abuse and their own extinction.
Nor will they accept that argument in this case, just as in the recent ICANN case, where ICANN wanted a requirement that your information be stored publicly with your domain, and that was rejected outright. The keyservers are not necessary to the functioning of the keys you upload, and a big part of the GDPR is processing data only for as long as necessary.
Someone recently made a point about the term ‘non-repudiation’.
In digital security, non-repudiation means:
A service that provides proof of the integrity and origin of data.
An authentication that can be asserted to be genuine with high assurance.
Keyservers don’t do this! You can have the same email address as anyone else, and even the maintainers and creator of the SKS keyservers say as much, recommending you verify through other means, such as by telephone or in person, that keys are what they appear to be.
I also don’t think this is an edge case; I think it’s a wake-up call to rethink the design of the software and to catch up with the rest of the world, quickly.
Lastly, I don’t approve of trolling; if you’re doing it just for the sake of doing it, don’t. But if you genuinely feel the need to submit a “right to erasure” request because you did not consent to having your data published, please do.
Thank you for the link: http://www.openwall.com/lists/oss-security/2017/12/10/1. It’s a fantastic read and makes some really good points.
It’s easy for anyone to get hold of recent dumps from the SKS servers; just yesterday I hunted through a recent dump of 5 million+ keys looking for interesting data. I’ll be writing an article about it soon.
I totally agree; it has been bothering me as well, and I’m in the middle of considering starting up my own self-hosted blog. I also don’t like Medium’s method of charging for access to people’s stories without giving them anything.
I’m thinking of setting up a blog platform, like Medium, but totally free of bullshit for both readers and writers, though authors would pay a small fee to host their blog (it’s a personal website/blog engine, as opposed to Medium, which is much more public and community-like).
If that could be something that interests you, let me know and I’ll let you know :)
Correction: it turns out you can get paid if you sign up for their partner program, but I think it requires approval and whatnot.
hey @pushcx, is there a feature where we can prune a comment branch and graft it onto another branch? Asking for a friend. Certainly not a high-priority feature.
No, but it’s on my list of potential features to consider when Lobsters gets several times the comments it does now. For now the ‘off-topic’ votes do OK at prompting people to start new top-level threads, but I feel like I’m seeing a slow increase in threads where promoting a branch to a top-level comment would be useful enough to justify the disruption.
Reposted from HN where author, gnl, announced:
“Fellow Clojurians, I present to you, Ghostwheel. It makes the writing, reading, refactoring, testing, instrumentation and stubbing of function specs easy; introduces a concise syntax for inline fspec definitions; helps keep track of side effects; and makes general debugging almost fun, with full evaluation tracing of function I/O, local bindings and all threading macros.
It’s Alpha software and the Clojure port, while essentially functional, still needs some love to achieve full feature parity with ClojureScript – most prominently missing at the moment is the tracing functionality.
To steal a quote from Nathan Marz’ Specter in a blatant attempt at a so far likely undeserved comparison to its indispensability, I like to think of Ghostwheel as clojure.spec’s missing piece. It has become quite essential to me and I hope that you will find it useful too.
Feedback, PRs, and issue reports are welcome and much appreciated.”
Also:
“This is basically generative testing using clojure.spec in a nice package with a cherry on top and a few extras.”
I rarely ever care about progression of time and version, and the author doesn’t make a good case for why I should in the case of FreeBSD. It seems like a very fussy distinction.
At first I thought I was going to miss svn “r1234” numbers and considered putting a server-side update hook on a central server there which automatically tagged pushes to ‘master’ with sequential numbers.
I never ended up feeling the need, but given it’s just a few lines of bash perhaps it’d be worth trying and see if people use the sequential numbers or if they’re just a distraction.
Not the specifics, but the over-arching ideas pretty much hold up I’d say.
I was mainly referring to the title claim of “2^(Year-1984) Million Instructions per Second” because OP was asking for a graph.
http://propertesting.com/ is a nice one, targeted at Erlang.
The blog for the Hypothesis Python library has a lot of great articles about how to use this stuff in “enterprise-y” software.
To be honest, it was way more convincing to me than most other articles as to the utility of this stuff for higher-level applications.
I’m curious what book people would recommend for someone to pick up C++. I already can program, but I’ve avoided C++ because of its reputation and also the syntax, but it’s something I’d really like to get at least comfortable in.
A Tour of C++ isn’t bad, particularly if you’re already familiar with C. After that it’s mostly practice – write a ray-tracer or some such.
The second edition comes out in a month.
I think it’s more than practice. There’s no way I was going to learn all the wrinkles in C++ without reading Scott Meyers’ Effective C++ series.
Sure, something like Effective Modern C++ is a fine choice after becoming competent at C++. That’s advanced material though, more for the kind of people who set coding guidelines for teams.
IME, without a detailed understanding of C++ ownership semantics you are going to hit some utterly impenetrable bugs pretty quickly.
thanks for posting this, I’ll definitely check out the book, especially if a new edition is right around the corner.
I like C++ Primer. It’s a whole lotta book, but it’s a whole lotta language and the book does an excellent job running you through a relatively recent version of the language. I’m currently working through Introduction to Design Patterns in C++ with QT. It’s a little dated but I’ve heard good things. Accelerated C++ is another I’ve picked up recently that seems to be well regarded. I’ve worked a bit with older C++98 style code in the past, but things have changed a bit with the advent of C++11 and especially later…
thanks for taking the time to reply. One of the reasons I’ve avoided the language so far is just the massive size of it in comparison to my other languages.
It’s an abstract for a workshop talk of work that is quite early. So, perhaps it shouldn’t have been shared at this point (but on the other hand, we believe in openness…). Regardless, it’s certainly not finished work, and if it sounds somewhat interesting, that’s basically the goal, as it is just a workshop talk!
I had no idea modern C++ syntax is implemented like this. I think the “pp” in the URL may mean “pre-processor insights.”
This is a good intuition pump, but to me it is no more troubling than the existence of well-ordering of real numbers.
I think the confusion is between existence and computability. I am not troubled by the existence of a strategy for the uncountably-many-hats case, because it is obviously uncomputable. It is actually a theorem of ZFC that the well-ordering of the real numbers, which exists, is uncomputable. I think when people say “existence of a strategy”, they implicitly assume computability of the strategy.
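For reference, the usual choice-based construction behind that existence claim (sketched from memory, so details may differ from the article’s presentation) starts from an equivalence relation on colour sequences:

```latex
x \sim y \;\iff\; \left|\{\, i \in \mathbb{N} : x_i \neq y_i \,\}\right| < \infty
```

Each equivalence class is countable, and by the axiom of choice one fixes a representative r([x]) of every class. Each prisoner sees all entries but finitely many, hence knows the class [x], and guesses r([x]) at their own position; since x \sim r([x]), only finitely many guesses are wrong. The choice function r is exactly the uncomputable ingredient.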
I find it hard to accept both the well-ordering of the reals and the example here: out of uncountably many sets, each of countable size, pick one value (and have all of the inmates memorise this choice, lol).
This just seems weird, way too big.
I don’t exactly reject the axiom of choice but when a result depends on such a ginormous ass-pull, I kind of lose interest and look for more manageable things.
Yeah, the Axiom of Choice is never necessary for practical calculations. If weird results arise from its application, it’s a sign that the attempt to approximate a real-world situation with infinite sets has broken down somehow.
Serious question from a fully paid up member of the tinfoil hat brigade: Why protonmail?
Don’t forget that if you’re sending messages you also have to consider the recipient. If your hardware is hosed, theirs may well be too. On that basis perhaps mail should be avoided depending on your threat model.
One option would be to use something like a BeagleBone Black, as it’s open source and, I believe, verifiable.
Another option would be to use a disconnected host for creating, encrypting and viewing messages then a separate host for relaying. This was the basis for a project I did (and cancelled) a few years back.
Yeah, I guess I’m imagining that I’d be able to give my correspondents their own copy of the setup, and instructions on how to use it. I’m definitely not expecting that emails I send to random people will magically be safe from now until the end of time.
Signal’s proprietary central server and TOFU-oriented protocol add a lot of attack surface that doesn’t exist in other approaches.
The answer to the question “why Protonmail” is mainly that I’m not sure what else to do. I have long given up hope that I’ll ever convince anyone to use PGP manually. Protonmail is a platform that I might be able to convince people to use; it has a nice UI and can conform to people’s existing habits and tools.
Edit: Reading this next to my other response does seem to make it clear that I’m confused about how other people ought to relate to the hypothetical system involved here. Obviously they can’t be allowed to just use their phones to read messages, so I’m not sure in what sense they should be allowed to stick with their existing habits.
if it’s just secure email wouldn’t spiped suffice?
This post presents quite a distorted picture. For instance,
Proof-of-stake is a bit too obviously “thems what has, gets”…
You need some way to qualify transaction validators. Bitcoin has demonstrated that economic commitment is a valid qualifier. It’s sensible to experiment with other forms of economic commitment, because as you point out, PoW is increasingly harmful as the network value grows. Has there ever been a resource allocation system where the allocators were not “them what has” in some sense, and no economic advantage accrued to them? At least in a cryptocurrency, this only applies to the validators, and the economic advantage is explicit, transparent, and accountable.
— so you have to convince the users to go along with it.
So, a completely voluntary system… Quite alien, compared to previous financial systems, but I kind of like that aspect.
It’s also as naturally centralising as proof-of-work, if not more so.
I’m not sure what you’re referring to here.
The other problem is that people routinely spend up to $99.99 to get $100 — thus, proof-of-stake will rapidly approximate proof-of-work.
That link describes stake grinding, and the long-since identified solution is described in the FAQ you link elsewhere in the post.
There’s a lot of similar distortions throughout the post, but it doesn’t seem worthwhile to point them out, since they are so blatant and the overall tone is so hostile and dismissive that I suspect most of them are deliberate propaganda.
There’s a lot of similar distortions throughout the post, but it doesn’t seem worthwhile to point them out, since they are so blatant and the overall tone is so hostile and dismissive that I suspect most of them are deliberate propaganda.
It would be worthwhile to point them out nonetheless, OP is not the only person reading these comments.
So, a completely voluntary system… Quite alien, compared to previous financial systems, but I kind of like that aspect.
I’m speaking in terms of convincing people the system is fair enough to bother participating in. As the rest of the post details, “number go up” has so far been sufficient.
Remember: the market doesn’t care about your ideology, only its own.
That link describes stake grinding
No, it’s general to economics. If there’s any way to spend toward even the slightest profit in any enterprise, someone will think that’s a viable business. An entry in a FAQ doesn’t make that go away.
There’s a lot of similar distortions
By “distorted picture” and “distortions” I’m pretty sure you mean “doesn’t agree with me”. “Distortions” is what a believer says when they can’t quite support “incorrect”, let alone “meaningfully incorrect.”
I suspect most of them are deliberate propaganda.
If more than a negligible proportion of the populace agreed with you, then their interest in cryptocurrency would be greater than “number go up” and interest wouldn’t have gone away with the bubble. Jumping to conspiracy theories is unlikely to get you anywhere useful.
If there’s any way to spend toward even the slightest profit in any enterprise, someone will think that’s a viable business. An entry in a FAQ doesn’t make that go away.
There isn’t any such way in a well-designed proof-of-stake system. The FAQ explains this, as you would see if you read it with care. Such systems are described in detail in the Ouroboros Praos and Algorand papers. (I’m sure Casper has a similarly detailed specification, I’m just not as familiar with it.)
By “distorted picture “ and “distortions” I’m pretty sure you mean “doesn’t agree with me”.
No. You could convince me otherwise by making a cogent response to my initial objections.
There isn’t any such way in a well-designed proof-of-stake system.
Has any such system been deployed in the real world, and tested with real users? Plans have an unfortunate tendency not to survive contact with the enemy.
Nothing big, yet, and caution about such plans is sensible. But that’s a long way from claiming proof-of-stake is fundamentally broken for transparent economic reasons.
No, but there is no evidence that such a PoS system is impossible in principle. Saying, as David Gerard does, that it’s an axiom of general economics is like saying that because it’s profitable to break public key cryptography, all public key cryptography will be broken.
Gerard says no such thing. The entire point of the article is that Casper is soon here, and it will probably work as designed.
You are deliberately misquoting a comment regarding a specific argument against PoS, namely stake grinding.
That link describes stake grinding
No, it’s general to economics. If there’s any way to spend toward even the slightest profit in any enterprise, someone will think that’s a viable business.
Since I’d already pointed out that stake grinding is no longer an issue, the only sensible way to interpret this is that Paul’s general economic arguments had some independent relevance.
We shall see. I am very interested to see how Casper turns out when it’s exposed to the real world.
Whether Casper enables Ethereum to make a successful transition from proof-of-work to proof-of-stake!
We will also see whether “PoS will rapidly approximate PoW”. My current opinion is that it is possible, but unlikely.
To respond to your subsequent edit:
If more than a negligible proportion of the populace agreed with you
A negligible proportion of the populace has read your post, and their opinion has no impact on its accuracy.
Those are some great results, but I still think that Python and NumPy are more readable, once you get used to them, than the implicit looping Nim suggests. While it may be shocking at first, vector operations and broadcasting usually result (IMHO) in incredibly concise code.
incredibly concise code.
There’s a notion in programming circles (especially functional ones?) that conciseness is a good thing. I guess it’s a reaction to some “anti-conciseness” styles in languages like Java and C++, but I think we have gone too far in a few places.
“Concision” is a broad term. Concise semantics, concise syntax, concise tokens, and concise programs in terms of character counts are all very different and have different trade offs. The fact that they are lumped together during discussion is a shame.
It’s meant to be “one thing” at the current level of abstraction, right? Which is totally subjective, but this is a design principle, so it can’t help but be fuzzy.
Am I the only one that:
I’m with you. The typical model is open core, with premium add-ons for enterprise. You can also license open-source software to enterprises. Many actually prefer to pay a company to be responsible for what they depend on. Finally, some are offering hosting or cloud containers for their solutions. And those are on top of the usual support and service revenue for OSS.
So, yeah, I think they could make it work profitably with core product being open source. Quite a few companies do. I can’t guarantee that, though. Proprietary is still the safest route for monetizing software.
Look at all the free-software politics I’m not doing?
Code dumps are not the preferable solution for releasing open source, but they are still infinitely better than nothing.
This is why I’m pretty excited about datahike. It might actually turn into an open source Datomic-like database.
Very cool! I had seen datascript before, but it’s nice to see this address my point about feeling uncomfortable with a non-free database.
It’s not fashionable, but Perl took from Larry Wall’s linguistics background the concept of ‘it’ - i.e. the “thing we are talking about”.
It’s spelt “$_” explicitly in perl, but also many operations use $_ implicitly if no arg is given (e.g. regex/sub, print, etc). Also the looping constructs (for/map/grep) bind $_ to the ‘current element’. So you can:
print for @lines;
and have each line printed. Or:
while (<STDIN>) { # Loop over each line on input
chomp; # remove trailing \n
s/foo/bar/; # regex-and-replace
say; # Print to stdout with \n
}
The schwartzian transform (https://en.wikipedia.org/wiki/Schwartzian_transform) uses this, and also the convention/feature that in a block/lambda passed to ‘sort’, the two items under comparison are ‘$a’ and ‘$b’. Which are hence sufficiently magic that you should never use them for any other purpose in perl :-)
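For readers outside Perl, the Schwartzian transform is just decorate-sort-undecorate: compute each (possibly expensive) sort key once, sort on the cached key, then throw the keys away. A sketch in JavaScript (the helper name is my own):

```javascript
// Decorate-sort-undecorate: each item's key is computed exactly once,
// instead of on every comparison inside the sort.
function sortBy(items, keyFn) {
  return items
    .map((item) => [keyFn(item), item]) // decorate with the key
    .sort((a, b) => (a[0] < b[0] ? -1 : a[0] > b[0] ? 1 : 0)) // compare keys
    .map(([, item]) => item); // undecorate
}
```

So sortBy(lines, (l) => l.length) sorts by length while calling the key function only once per line.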
Anaphoric macros create a name that you must still write out to refer to the value, though. I guess in Perl it’d be roughly equivalent to this super-lisp pseudocode:
(defparameter *it* (make-parameter #f))
;; str defaults to *it* if called without arg
(define (chomp (str *it*))
....)
(define (say (str *it*))
...)
(let loop ((*it* (get-line)))
(if *it*
(begin
(set *it* (chomp))
(set *it* (sub ...))
(say)
...)))
The jQuery library for JavaScript also supports a similar feature. In a function passed to $.each, this will be the current array element, and in an event handler, this will be the element that the event was fired on (which I think matches the browser’s DOM event handlers). The handlers can also take those same variables as optional function parameters if you want to name them.
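The binding jQuery uses is plain Function.prototype.call; a toy each in that style (hypothetical, not jQuery’s actual implementation) shows the idea:

```javascript
// $.each-style iteration: the callback gets (index, element) as arguments
// and the current element as `this`, mirroring jQuery's convention.
function each(array, callback) {
  for (let i = 0; i < array.length; i++) {
    callback.call(array[i], i, array[i]);
  }
  return array;
}
```

Note this only works with function callbacks; arrow functions ignore the this supplied by call.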
Can we not post scuttlebutt on twitter from a thread in the dedicated SomethingAwful technology shitposting forum?
How many comments of yours do you think are policing what people post here? 10%, 20%? Before you respond with something along the lines of “eternal September” or “Hacker News”, just know I’ve lurked at HN for almost as long as it’s been around and I had a computer in the late 80s.
It is kind of a garbage source. friendlysock is doing people a favor by pointing that out, and I wish I’d read his comment before I read the thread.
If you have any evidence that any of these claims are untrue (a rebuttal from Musk, Tesla, etc.), please share it with us.
Legal systems generally (not the French) go with innocent until proven guilty for a reason. CEOs would not have a lot of time in the day if they had to personally prove every accusation made against them or their company.
Funny, he seems to have time to respond to random twitter accounts all day.
Obviously means regular boring old CEOs, not the visionary ones aimed at Mars…
Taking your jab at French jurisprudence seriously, what do you mean by that? Is this some recent court case?
Because France basically invented the modern Continental legal framework (well, Napoleon overhauled the ancient Roman system) which is used all over Europe (and beyond!) today.
Sure, it is a well known fact that France is the European Guantanamo. 😏
I don’t think Tesla as a corporate entity or Musk as a private individual / CEO will dignify this source with any sort of acknowledgement. That’s a PR no-no.
However, if a person actually trained in ferreting out the truth and presenting it in a verifiable manner (such people are usually employed as journalists) were to pull on this thread, who knows where it might lead?
The standards of evidence in most places, including science, are that you present evidence for your claims since (a) you should already have it and (b) it saves readers time. Bullshit spreads fast as both media and Facebook’s experiment show. Retractions and thorough investigations often don’t make it to same audience. So, strong evidence for source’s identity or claims should be there by default. It’s why you often see me citing people as I make controversial claims to give people something to check them with.
There’s nothing surprising about the employee’s claims. It’s like asking for evidence that Google spies on users. They admit to it, and so does Tesla. So there’s your evidence, and I think it’s sad that you’re taking these trolls here seriously.
Thanks for the link. Key point:
“Every Tesla has GPS tracking that can be remotely accessed by the owner, as well as by Tesla itself. That means that people will always know where a Tesla is. This feature can be turned off, by entering the car and turning off the remote access feature. I am not sure why you would want to do this, but you can. Unfortunately, there are ways for a thief to turn off the remote access feature, and this will blind you to the specific information about the car. It will not stop Tesla from being able to track the car. They will retain that type of access no matter what, and have the authority to use it in the instances of vehicle theft.”
Re: taking trolls seriously. We’re calling you out about posting more unsubstantiated claims via Twitter. If your goal is getting info out, then you will better achieve it by including links like the one you gave me in the first place. Most people aren’t going to endlessly dig to verify stuff people say on Twitter. They shouldn’t, since the BS ratio is through the roof. Also, that guy didn’t just make obvious claims like that they could probably track/access the vehicle: he made many about their infrastructure and management that weren’t as obvious or verifiable. He also made them on a forum celebrated for trolling. So, yeah, links are even more helpful here.
But the point isn’t to even say that everything written here is true. The point is to share a very interesting data point that likely constitutes primary source material, and force a reaction from Tesla to stop their dangerous practices (or offer them a chance to set the record straight if any of this is untrue, which we’ve established is unlikely).
“Dangerous” compared to what? Force how?
Low-effort regurgitation of screencaps is not some big act of rebellion, it is just a way of lowering quality and adding noise.
If we wanted to read fiction we could go enjoy the sister Lobster site devoted to that activity.
Being a troll is “a way of lowering quality and adding noise”.
Which is why several people are asking you to stop it.
Is there any evidence your tweets or Lobsters submissions have changed security or ethical practices of a major company?
If not, then that’s either not what you’re doing here or you should be bringing that content to Tesla’s or investors’ attention via mediums they look at. It’s just noise on Lobsters.
I agree with you in general, but this specific “article” is just garbage. (As far as I’m concerned, Twitter in general should be blacklisted from lobste.rs. Anything there is either content-free or so inconvenient to read as to be inaccessible.)
I agree. I did at least learn from your link that Arnnon Geshuri, Vice President of HR at Tesla, was a senior HR person at Google whom some reports said was involved in the price fixing and abusive retention of labor there. That’s a great hire if you’re an honest visionary taking care of the employees who enable your world-changing vision. ;)