1. 7

Worth noting that s-expressions avoid a lot of the legibility problems discussed in the article. Take the first example under the “providing immediate feedback” section, where traditional notation reads:

50.04 + 34.57 + 43.22 / 3


this would be expressed as:

(+ 50.04 34.57 (/ 43.22 3))


which would be hard to confuse with:

(/ (+ 50.04 34.57 43.22) 3)


A lot of people seem to have the impression that s-expressions are harder to read than traditional syntax, but I find the opposite to be the case. With s-expressions you have simple and predictable rules that remove a lot of mental overhead around figuring out what the code is doing.

1. 2

Similarly, just having the same precedence and associativity for everything would give you an easy-to-predict and easy-to-read syntax. This way you gain terseness, but you have to get used to the associativity of whatever mechanism you’re using, whereas s-expressions (or *shudder* XML, etc.) are more portable, but require you to explicitly state the tree with more characters.

For example, with uniform precedence and right associativity:

50.04 + 34.57 + 43.22 / 3


And for the sum of everything over three, it would be:

(50.04 + 34.57 + 43.22) / 3


This is the style that APL/J/K and various languages inspired by them tend to use (they also add different precedence for certain operators that take another operation as one of their inputs, such as fold). Many people use such languages as an enhanced calculator (there are plotting utilities made for them, etc.). For example, in K, where % is division and : is assignment:

force: (6.67e-11*mymass*collidingmass)%radius*radius
yearlybill: 12*rent+electric+internet


Or with functions, where / is fold:

force:{[m1;m2;radius](6.67e-11*m1*m2)%radius*radius}
yearlybill:{[monthlyutilities]12*+/monthlyutilities}
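For readers without a K interpreter handy, the uniform right-to-left rule can be sketched with a toy evaluator in Python (a hypothetical illustration, not how K is actually parsed; `eval_rtl` is my own name):

```python
def eval_rtl(tokens):
    """Evaluate a flat expression where every operator has the same
    precedence and associates to the right, as in APL/J/K."""
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '%': lambda a, b: a / b}  # K writes division as %
    if len(tokens) == 1:
        return float(tokens[0])
    # first operand, first operator, then recurse on everything to the right
    return ops[tokens[1]](float(tokens[0]), eval_rtl(tokens[2:]))

# 43.22 % 3 binds first, then the additions, giving the "sum plus a third" reading
print(eval_rtl("50.04 + 34.57 + 43.22 % 3".split()))
```

With this rule the earlier `yearlybill` example needs no parentheses: the sum happens before the multiplication simply because it is to the right of it.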

1. 1

Then you get the situation that 1 * 2 + 3 and 3 + 1 * 2 mean different things, which is horrible, because people will always assume that they don’t.

I don’t know why people have such a problem with a + b + c / 3 meaning a + b + (c / 3). It’s just something you have to get used to; it’s not really that difficult, and there are much bigger problems that need solving. But if it’s really such a big deal, just make it a function: \frac{a + b + c}{3} in LaTeX is good enough for mathematicians, so frac(a + b + c, 3) should be good enough for programmers.
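Making it a function side-steps precedence entirely. A one-line Python sketch (`frac` is a hypothetical name, mirroring LaTeX’s \frac):

```python
def frac(numerator, denominator):
    """Explicit fraction, like LaTeX's \\frac{...}{...}."""
    return numerator / denominator

# the average of the three numbers, stated without relying on precedence rules
print(frac(50.04 + 34.57 + 43.22, 3))
```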

1. 1

Then you get the situation that 1 * 2 + 3 and 3 + 1 * 2 mean different things, which is horrible, because people will always assume that they don’t.

I don’t know why people have such a problem with 1 * 2 + 3 and 3 + 1 * 2 meaning different things. It’s just something you have to get used to when using a different language; it’s not really that difficult, and there are much bigger problems that need solving.

1. 2

The universal rules of mathematical expressions create a strong precedent. People expect them to hold. They get confused when they don’t. Even if they are arbitrary.

I’m not aware of any language anywhere in all of programming or mathematics that uses different rules and has sustained any kind of popularity. In my experience it seems like a hard requirement for ever being successful.

1. 1

They aren’t “universal”. See my other comment. “Sustained any kind of popularity” is a vacuous statement: Forth is used extensively in embedded applications. Your calculator uses left-to-right operator precedence, and yet you don’t struggle to translate from PEMDAS or whatever system you use.

1. 2

They are absolutely universal. All mathematicians agree on the order of operations here.

1. 2

Funny, because every mathematician I’ve talked to, and listened to, about order ambiguity agrees with me and says you should put in parentheses to disambiguate.

The reality is that because it is cultural, it does not matter whether you have a solution to the problem if not everyone is using it. In my opinion, abandoning order of operations is much simpler: the order is arbitrary, needlessly convoluted, and does not accommodate the expansion of operators. You can make things abundantly clear by using Polish notation.

- / 2x 3y 1

Before you throw your arms up in frustration yes there are proofs done in this format, and they’re great.
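To make the prefix idea concrete, here is a minimal Polish-notation evaluator in Python (a hypothetical sketch, with plain numbers standing in for the 2x and 3y terms):

```python
def eval_prefix(tokens):
    """Evaluate Polish (prefix) notation: an operator followed by its
    two operands, each of which may itself be a prefix expression.
    No precedence rules or parentheses are ever needed."""
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    token = tokens.pop(0)
    if token in ops:
        left = eval_prefix(tokens)
        right = eval_prefix(tokens)
        return ops[token](left, right)
    return float(token)

# "- / 6 3 1" reads unambiguously as (6 / 3) - 1
print(eval_prefix("- / 6 3 1".split()))  # 1.0
```

The reading order is fixed by the notation itself: each operator consumes exactly the next two complete expressions, so no two people can parse the same string differently.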

1. 0

because it is cultural

Yeah, but it isn’t cultural. It’s universal, as I’ve explained.

1. 1

I suppose that even if it is universal, there are severe pedagogical deficiencies, which doesn’t surprise me terribly. It still would have been completely avoided with a simpler and clearer precedence system. It took me a while to realize that you were talking strictly about mathematicians, whereas I was talking about all people. Apologies for my poor communication.

2. 1

“Order of operations” has been an arbitrary curse on mathematics since its creation; different cultures don’t actually agree, and in addition it restricts the creation of new operators. I’m not particularly invested in left-to-right or right-to-left, but either would be much simpler than the random format we have now.

1. 2

Cultures that don’t use ÷ and × often don’t write sentences left-to-right and pages top-to-bottom. They might not even use Arabic numerals.

I don’t see how it restricts the creation of new operators. Mathematicians seem to have no problem introducing new operators: ∧, ∨, →, ↔, dots, existing operators in circles and all sorts of silly new operators are used all over algebra without any real issue. If it’s not obvious from context, you put brackets in.

1. 1

What precedence does modulus have? Is it the same as division, or should it be done first, or last? If we had an order of precedence that could accommodate new operators, this question wouldn’t need to be asked and I wouldn’t have to use parentheses, which, let’s be honest, are a hack.

1. 1

Modulus isn’t a standard mathematical operator. But if you defined it, you could just say what its precedence is.

3. 1

wait are you using PEMDAS or BODMAS?

1. 1

Same thing. Brackets = parentheses; multiplication and division are done at the same time, so their order is whatever sounds better when reading out the abbreviation. What synonym of exponent does the ‘O’ stand for?

1. 1

Multiplication and division are not done at the same time. The ‘O’ stands for Orders, I believe. http://www.math.harvard.edu/~knill/pedagogy/ambiguity/

1. 1

Multiplication and division are always done at the same time (with left-associativity: a÷b÷c = (a÷b)÷c) in mathematics, and this follows over into programming languages that use * and / to emulate × and ÷.

2x/3y-1 is not well-defined notation. It’s not mathematics, because mathematics doesn’t use a slash in the middle of linear text for division (it uses a horizontal line or ÷ depending on the context – really depending on the level, because I haven’t seen anyone use ÷ since primary school), and it’s not any programming language I’m aware of either. Randomly writing down some text and then claiming it’s ambiguous is pretty silly.

2 × x ÷ 3 × y - 1 is completely unambiguous, on the other hand: (((2 × x) ÷ 3) × y) - 1. Try putting it into google, or asking someone what 2 × 9 ÷ 3 × 2 - 1 is. Their answer is 11.

Mathematicians almost never use ÷ anyway, we write (2 x) / (3 y) where the line is horizontal (not possible on this platform as far as I can tell). But the same rule applies to addition and subtraction: 2 + x - 3 + y - 1 is universally agreed to be (((2 + x) - 3) + y) - 1.

Programming languages usually approximate ÷ and × with / and * for the sake of ASCII, so the same rules apply as with those operators. I’m not sure I know of any programming language where you can multiply variables by juxtaposition.
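The left-to-right claim is easy to check in most such languages; in Python, for instance:

```python
# * and / share one precedence level and associate left to right,
# so this matches the answer predicted earlier in the comment:
print(2 * 9 / 3 * 2 - 1)        # 11.0
print((((2 * 9) / 3) * 2) - 1)  # same thing with the grouping made explicit
```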

I once saw a proposal that it should be based on whitespace: 1+x * 3+y would be (1 + x) * (3 + y), while 1 + x*3 + y would be 1 + x * 3 + y. I thought it was quite a cute proposal, if perhaps prone to error.

1. 2

Americans use a slash in the middle of linear text to mean division. You clearly didn’t even read the article. Just because you can do multiplication and division from left to right doesn’t mean that’s what people do.

1. 1

Americans use a slash in the middle of linear text to mean division.

Don’t think so.

You clearly didn’t even read the article.

The article has a bunch of monospace ASCII.

Just because you can do multiplication and division from left to right doesn’t mean that’s what people do.

It’s what literally everybody in the entire world does.

1. 2

I think it’s fine to make fun of things. Also, I think it’s a bad idea, but mostly because it really should be a different type so I can account for dividing by 0 – usually when I divide by zero it’s by accident and I probably meant to do something else. Maybe in some domain that isn’t true. They also claim not to have the concept of null, so as a casual passerby it’s hard to understand why you wouldn’t want to be able to express a space for a specific type that is yet to be filled.

1. 3

I think this is a great idea. I wonder how many people besides the OP have put it into practice?

1. 4

My workplace does, but we develop software for internal staff. We almost never work over 40 hours. It’s great, and we get to focus on what’s most important and work down from there. Occasionally something that wasn’t important becomes important and we reorganize the list. There’s always like an infinite well of work (obviously), but we manage the incoming work pretty well and the “customers” are quite happy.

1. 3

For your submission “What Men and Women Consider Hardcore Gaming Are Not The Same”:

I suggested removing cogsci; it should probably have the datascience label instead. The article isn’t really cognitive science research, in part because it doesn’t attribute the findings to cognition. It would take more research to see whether the effect is cognitive, social, or cultural – for example, whether the findings hold in China as well as in the US. The cogsci label is therefore misleading, and I personally feel a “culture” label would also be putting words in the author’s mouth. Datascience is probably the most correct label.

1. 1

I agree with your analysis – could @pushcx or @Irene take a look at this and fix it for me? :)

1. 9

This article is a good argument against treating a lack of gender diversity in video games as a problem to be solved. Men and women are systematically interested in different types of video game experiences, and game creators who cater to one type of experience or the other will naturally have a gender imbalance in the sorts of players who want to play that type of game.

1. 11

It’s a sign of bizarre times that this isn’t obvious. Boys and girls have always preferred playing with different toys since the dawn of time.

1. 17

There’s nothing obvious about it, and re-examining unfounded claims is not bizarre. We know that, historically, plenty of claims made were just plain wrong (consider the anabolic-catabolic “theory”).

Boys and girls have had very different /roles/ since the dawn of time, for obvious reasons. If you tried, as a girl, to play with the “wrong” toys, you could see quite a bit of resistance.

1. 16

I’m not saying this is wrong (I haven’t done any research, so I don’t know), but it seems very likely that kids are pushed to play with specific toys by society. We label toys as boys’ or girls’, we market toys as being played with by either boys or girls, and we give kids toys that we associate with their gender.

I saw a video this year where young babies were placed in a room full of a range of toys. Each time, the baby was dressed in either pink or blue and given a female or male name regardless of their actual gender, and a babysitter was in the room to help them play with the toys. Each time, the babysitter would tend to help the baby play with toys stereotypical for the perceived gender. Afterwards, the babysitter was asked which toys they thought the baby liked, and they would say the baby seemed to prefer the toys of the perceived gender, regardless of the baby’s actual gender.

Now, that’s not really a scientific study, but it does seem to suggest that things are not as “obvious” as they seem. It’s a little hard to test, because really you would have to raise a kid in an alternative society to see what difference it makes.

1. 2

There’s also evidence that toy choice is gendered along the same lines that we in our culture are familiar with among chimpanzees, suggesting that toy choice has something to do with biological mechanisms of gendering bodies that are older than the human-chimpanzee split.

Anyway, this entire article is already presupposing that gendered differences in toys (well, video game tastes, but is a video game not just a more sophisticated toy?) exist and are important. As per the title, what men and women consider hardcore gaming are not the same.

1. 2

It could just as well be that the kids wanted to be nice to the babysitter who helped them play. The type of play also needs to be accounted for: there are studies which show that very young kids tend to gravitate to certain types of play.

Of course there’s going to be some overlap and gray areas, but what’s the harm in acknowledging the idea that maybe play and preferences have something to do with biology?

1. 10

but what’s the harm in acknowledging the idea that maybe play and preferences have something to do with biology?

There is no harm in thinking maybe it might be true and maybe it might not. There is harm in things like the OP’s comment stating “It’s a sign of bizarre times that this isn’t obvious,” when it’s extremely complex and not obvious at all.

1. 7

There is no harm in acknowledging that they “have something to do with biology”. The difference is how much weight is put on it, and the problems are caused when that is used as an excuse for things like exclusion – whether that’s the subtle coercion of “oh, I wouldn’t bother with that, because it’s been shown that people like me are bad at that sort of thing”, or the deep personal exclusion of “I will never be able to do X well because of my biology, so I should not try”.

Equally, what is the harm in acknowledging the idea that maybe play and preferences have something to do with culture?

1. 1

I don’t know where coercion or exclusion came from here.

And surely society has some effect, but reading something like The Blank Slate makes me think it’s not such a huge factor.

Next someone will probably point out Pinker is a white supremacist or something and I’m done with this already.

1. 3

I don’t know where coercion or exclusion came from here.

Do societal consequences not matter, just because they’re societal?

reading something like The Blank Slate makes me think it’a not such a huge factor.

The Blank Slate, last I checked, ignores a lot of hard evidence from the social sciences in favour of bashing Pinker’s strawman of the subjects. In addition, I’m not sure how someone can hold up a single reasonably cited book as justification for ignoring 70 years of hard evidence, especially when that book’s argument is strongly contested.

Next someone will probably point out Pinker is a white supremacist or something and I’m done with this already.

Do someone’s political views not have any bearing on their research? Surely years of study have shown that bias in study construction is extremely easy to introduce. I take the attitude that it must be so, for politics is how we view and frame all manner of parts of the world. Whether or not someone is a racist matters deeply to the purpose behind the arguments they make and the way they approach certain details. Likewise, if I were a monarchist, you would surely wish to know that when arguing about matters of state, since my arguments might be led by conscious or unconscious motivations.

1. 1

I don’t think Pinker has a political horse in the race, but I do understand he can be misunderstood to have one even if he doesn’t. So as far as anyone should care, the discussion could be limited to the science.

I’m just not particularly interested anymore, because something like infant behavior, sex vs gender, toy preference, biology, anthropology, primatology and who knows what “always” gets conflated with coercion and exclusion.

It’s essentially impossible to discuss such matters online: text-based, time-delayed, and without real interaction. More so when it starts to feel like something someone wants to win. The easiest win is to claim the other party doesn’t care about something not immediately related yet important, and that he’s therefore a bad person by implication.

That’s why I’m done.

Sometimes a cigar is just a cigar and men and women choose different toys, ways to play, subjects to study and careers to follow.

2. 6

Yes – as a hypothetical, and in a context where social coercion doesn’t exist, your statement would be totally fine and good. Saying it with certainty, even though it runs contrary to the scientific consensus, lacks epistemological responsibility. It’s fine to say “I’m not sure I agree with the scientific consensus”; it’s irresponsible to say that the scientific consensus is certainly wrong without any evidence. Once you add in the fact that some people will try to use such claims as a way to pressure a demographic out of an activity, you have the risk of real harm. I’m not saying you’re the kind of person who would do that, but it’s important to be aware that people will try to use your message to exclude others who are wholly capable.

3. 0

I mean, there’s no reason to believe that there’s real sexual dimorphism in the toys children choose to play with. I’ve seen boys play with dolls and girls play with trucks. Gender is a construct – that’s the scientific consensus, and those saying otherwise value tradition over evidence.

1. 3

There are also a lot of arbitrarily gendered items that change over time or across cultures. For example, skirts of some form have been either male or female clothing depending on the culture/location. Pants have been male clothing but are now neutral.

There are no doubt very real differences between genders. The obvious one being physical strength/body shapes but I am willing to bet that a majority of the differences between genders today are formed by tradition and not biology.

1. 2

The differences in gender, as you said, are formed by tradition. When you talk about physical strength and body shapes, however, that’s sexual dimorphism – unless you are referring to the cultural mores that pressure men to bulk up and pressure women not to. Sex, informally speaking, is the bits between your legs; sexual dimorphism covers the physiological differences that often (but not always) come along with that, like testosterone or estrogen production; gender is the cultural construct we have around sex. You can have sexes without having gender, which I’m sure has existed, and you can have many genders within a single sex if you’re, say, creating a sci-fi culture.

You weren’t wrong in any way I just thought it would be useful to be clear.

2. 4

The reason there’s a push to solve it is the profit motive. Given that roughly 50% of women play games, if you could create an experience that caters to both cultures, you could make a lot more money than if you didn’t.

Though I personally also enjoy playing games with people from different backgrounds. A different cultural outlook can bring refreshing, outside-the-box ideas. For example, it looks like, according to this survey, that while women value competition and challenge, they also value looking good while doing it, and going all the way to completion. That would mean that if you want to hook women, make sure to add robust customization options or ways to build or design things. I think the completion aspect is already in most games: cheevos. Notice that they don’t disvalue destruction, but they find it less interesting than a well-written story.

1. 2

Indeed, it is like complaining chick flicks get chick viewers, which is absurd.

1. 7

I haven’t heard that particular complaint, but one I hear often is that it’s quite absurd to have a genre lineup that resembles something like “action,” “comedy,” “drama,” and “not for men,” as if “not for men” were its own genre (it’s obviously not literally called that, but you provided your own example above). Deciding to use a “not for men” genre immediately creates its counterpart, “for men,” which is every other genre.

You logically have two choices here:

1. Accept the dichotomy and make explicit the implicit labels: “action for men,” “comedy for men,” “drama for men,” and “not for men.” You’ll have to train your brain to see this everywhere, as the implicit labels are extremely implicit. Along with appeal to the targeted demographic comes license to exclude the other – after all, if your genre is “not for men” then you don’t care if your movie makes men uncomfortable (this is different than making it desirable for not-men). If your genre is “action for men,” you don’t care if your movie makes women feel uncomfortable. It’s not for them.
2. Reject the dichotomy, and distribute the “not for men” qualities into the core genres – “action for men” just becomes “action”. Along with this comes the lack of license to exclude. This has made some movie watchers/videogame players mad – even though there is still plenty of content around (and more being made every day), the consumers of the previously “for men” genres see this as dilution and loss. Some of the things they liked excluded people, and instead of trying to untangle the good from the bad (or learn to coexist with new expressions of things they liked before) they’ve decided to double down and defend everything.

Whichever decision you make will impact how you see the modern media landscape.

1. 7

I think the author is conflating dependent types and refinement types.

1. 3

Could you elaborate on why the author should use a broader term? Refinement types are an even broader category than dependent types, and do not necessarily include types which depend on a value. In this case the author is specifically referring to types which depend on a value.

1. 11

In his example he’s not encoding the length of the arrays into the type, just adding a type contract that the length of the function’s output matches the lengths of the inputs. This is more in line with how refinement type systems work than with how dependent type systems work.

1. 2

But… is this a computer? The article is rather vague, and if it just prints out the numbers of the pulled strings… The Wikipedia article about computers says the first electromechanical computers were built at the end of the 1930s, yet this device is from 1913: more than 20 years earlier! And the Wikipedia article about Grand Central mentions the hidden basement and the German sabotage, but there is not a single word about any so-called computer.

Does anyone here have more information about it?

1. 2

If you read carefully, it doesn’t say “first” anywhere.

1. 1

I might be in the minority here, but I don’t mind whiteboard puzzles. I’m not saying they’re effective as a hiring tool (I’m also not not saying that), but I’m always surprised when people say they stress specifically about them over other interview methods. I’m genuinely curious what exactly people dislike (other than ‘it’s not representative of the job’, which I agree with). Is it the stress and time pressure? Or the reliance on past knowledge? Would it be possible to construct a good whiteboard interview for you, or is the format itself distasteful?

1. 4

I think for the majority it’s because the internet has become an extension of their working memory: a lot of their knowledge lives there, with the working-memory copy simply being how to quickly look up the documentation to get the right answer. Without it they flounder, because they haven’t needed to commit the details to memory – only where to find them when needed.

The best analogy I can come up with is mental arithmetic. It used to be that you could take someone to the whiteboard and ask them to work out a long division or a complex multiplication, and most would be able to do so with little to no stress. With the prevalence of calculators, nobody bothers to remember how to do long division or hard multiplication by hand, because they know how to use a calculator that does it quicker (we are animals of the path of least resistance, after all).

A whiteboard interview where you’re describing abstract concepts would probably solve a lot of the worries, because that tends to be what most people remember, with the details filled in by a few searches of the documentation.

1. 4

A whiteboard interview where you’re describing abstract concepts would probably solve a lot of the worries, because that tends to be what most people remember, with the details filled in by a few searches of the documentation.

This makes sense to me – I feel like if you handed someone a marker and said “explain {something from their resume} to me,” a whiteboard interview would be a lot less intimidating.

1. 2

I’d certainly expect people to be able to do a long multiplication on a whiteboard, though. It’s pretty standard stuff. If they couldn’t, I’d hope it was due to a mind blank under stress and not because they literally don’t understand how multiplying numbers works.

And I think that if you can’t do basic programming without the internet you’ll struggle to be productive. That’s not to say that you should just know everything, but far too many people I’ve seen can’t do any little basic bit of programming without googling the most basic things. I like that programming competitions, if nothing else, at least force you to learn to write the basic ‘glue’ code quickly without having to look up e.g. how to print something to two decimal places or how to read a float from standard input. Basic stuff you should just know. They also are an okay litmus test. I’ve never met anyone that did well in programming contests that was a bad programmer. But I’ve definitely met good programmers that didn’t do programming contests. It has a high false negative rate and a very low false positive rate, I expect.

1. 3

I personally haven’t had to do long multiplication on paper in well over a decade, and while I can describe three different methods in the abstract, I wouldn’t be able to do them without looking them up, simply because I have forgotten the details over years of reaching for a calculator.

The same can be said for some programmers: maybe they use a framework that provides verbose abstractions, but they no longer remember how to do such things (e.g. sessions, HTTP, file handling) on “bare metal” without first looking it up.

Having a decent memory isn’t a bad thing but as Einstein supposedly once said “Never memorize something that you can look up.”

1. 5

Einstein probably never said that. However, he did say (in response to not knowing the speed of sound, as included in the Edison Test; New York Times, 18 May 1921):

“[I do not] carry such information in my mind since it is readily available in books. … The value of a college education is not the learning of many facts but the training of the mind to think.”

Basically, don’t labor over learning dumb facts. I also think, though, that it’s wise, if you find a free moment, to understand how and why things work.

1. 3

That is a much better quote; it also encompasses what milesrout was saying in a separate thread.

1. 2

It’s also likely that Einstein never said “Never memorize something you can look up”, so it’s not really fair to call it a quote. If you had to create a pithy phrase from his actual quote, it might be something like “Facts are no substitute for reason and understanding.”

1. 2

That is why I wrote “Einstein supposedly once said”; I wasn’t trying to pass off something that may not be true as fact. However, it is certainly a popular phrase attributed to Einstein – a small amount of searching confirms that – and that is why I quoted it.

The New York Times reference you provided was much better not only in being easily traced back to the man himself but also because it better conveyed the point I was trying to make. Thank you for sharing it :)

1. 2

Yes, sorry for being pedantic. I just wanted to make sure I wasn’t being misunderstood there. I had assumed that you said it in good faith. Thank you for being patient.

2. 2

The ‘details’ are essentially that 2134 * 34 = 2134 * 30 + 2134 * 4. I really don’t think you’d have any trouble if you thought about it for a few seconds.

The problem I’ve seen is that people don’t even think about it. Either they know it and do it, or they convince themselves they don’t know it and don’t try to work it out. That predilection for giving up is the danger sign, not that they don’t know it.

The same can be said for some programmers: maybe they use a framework that provides verbose abstractions, but they no longer remember how to do such things (e.g. sessions, HTTP, file handling) on “bare metal” without first looking it up.

I mean, if you’re doing high-level stuff you shouldn’t be expected to know the details of writing low-level code. If people are testing your algorithm knowledge at a JavaScript webapp gig, that’s just bad interviewing. But people testing your algorithm knowledge at a routing algorithm gig seems pretty fair.

Having a decent memory isn’t a bad thing but as Einstein supposedly once said “Never memorize something that you can look up.”

But if you asked Einstein some basic physics he wouldn’t look it up, he’d know it. Because having fully internalised the basic principles of physics is just part and parcel of understanding physics at the level he understood physics. Like, if you asked a mathematician the epsilon-delta definition of a limit, they’d be able to explain what it is, and what it meant, even if perhaps they couldn’t write it down formally left to right in one go if they hadn’t recently taught a course on analysis. Not because they’re geniuses that remember everything but because it’s just the most basic fundamental knowledge that everything else is based on.

1. 5

I think we are both on the same page, possibly I am bad at explaining what I am trying to say.

people testing your algorithm knowledge at a routing algorithm gig seems pretty fair

Agreed. The problem, I think, that many see with whiteboard interviews is that they are largely used to test knowledge that isn’t pertinent to the job at hand – for example, testing your algorithm knowledge for a job where you’re largely expected to write high-level web apps in which everything is wrapped in a closed-source abstraction you’re going to need to learn on the job anyway.

1. 1

Exactly. They used to test how an individual candidate goes about problem solving, but nowadays they’re just a litmus test to see if you’ve memorized all the graph algorithms you might be asked to regurgitate.

2. 3

But if you asked Einstein some basic physics he wouldn’t look it up, he’d know it.

All of my physics professors and PIs looked basic stuff up all the time. There’s a reason we were allowed to take two pages of equations into exams.

1. 1

This is way too much attention for a teen. I mean, what if their idea is wrong in some small way? Pop.

1. 8

If someone of the caliber of Scott Aaronson is sufficiently convinced you are right to put his name on the paper, there is no shame in being wrong. No one in the field would hold it against Tang if it turned out to be wrong.

I fail to see what being a teen has to do with anything. If he was a year older, or already in grad school, it would have been fine?

To me that sounds like an argument from the same visceral response I have to these kinds of stories: jealousy. I really have to suppress the urge to rain on his parade. And doing that via arguments that seem to have someone’s best interest in mind is the socially safest way to do so. It’s a way of stealthily sabotaging someone’s accomplishment.

1. 3

I think what voronoipotato may have been alluding to is that, aside from that age largely being emotional turmoil as teens learn to deal with their feelings, teenagers aren’t as battle-hardened as adults, and having so much attention plastered on them only for it to suddenly turn nasty can be a massive emotional blow; without the right support it can end up dissuading them from continuing.

It sounds like the teen will be fine, with someone like Scott Aaronson on the same team - even if the idea turns out to be flawed they will have all the support they need to continue.

1. 3

Exactly. To be precise, the problem I had was with the article pushing Tang into the limelight, not with the research Tang contributed to. The research is fine.

2. 3

I’m not jealous at all. I don’t have any desire to get into academia or compete in that way; I stopped with a community college degree. The difference a few years can make at that age in emotional development is pretty big. Adults also have other past successes, however small, to fall back on. Some people have one big (perceived) catastrophic failure at the beginning, they give up, and they never come back. More importantly, notoriety isn’t Tang’s accomplishment. Saying that they shouldn’t be forced into the limelight on their first attempt isn’t saying they should never have had the opportunity to submit or contribute to scientific progress.

3. 6

Once Tang had completed the algorithm, Aaronson wanted to be sure it was correct before releasing it publicly. “I was still nervous that once Tang put the paper online, if it’s wrong, the first big paper of [Tang’s] career would go splat,” Aaronson said.

The article specifically mentions that point.

1. 2

Yeah I don’t see how that makes it any better.

1. 1

I wouldn’t mind seeing a black & white version that used e-ink for superior battery life.

1. 9

Learn the basics not someone else’s abstraction….

Except that the basics ARE just someone else’s abstraction.

1. 9

Not that I’m complaining; I want to see more APL discussion but…please can we do something other than the exact same Game of Life one-liner in every article? APL is not Life: The Language. I’m not exaggerating when I say that something like 99% of pop-APL articles are dissecting the exact same Game of Life example.

1. 5

It’s like Quicksort in Haskell.

1. 1

It’s because they can’t actually write anything meaningful in the language so they just quote the example they found. So I’d argue it’s likely that we can’t do something other than the exact same Game of Life one-liner, because we’re not capable.

1. 1

At this point you can’t easily lease non open-plan office space because landlords believe it’s undesirable.

1. 5

The landlords believe it’s undesirable because they really appreciated everyone paying more for a less developed space.

1. 6

Yeah, I know someone who runs a keyserver and they are getting absolutely sick of responding to the GDPR troll emails.

Love the idea of using ActivityPub (the same technology involved in Mastodon) for keyservers. That’s really smart!

1. 16

Offtopic: Excuse me.

I think it depends on some conditions, so not everybody is going to see this every time. But when I click on Medium links I tend to get this huge dialog box come up over the entire page saying something about registering. It’s really annoying. I wish we could host articles somewhere that doesn’t do this.

My opinion is that links should be links to some content. Not links to some kind of annoyware that I have to click past to get to the real article.

1. 11

Use the cached link for Medium articles. It doesn’t have the popup. Just the content.

1. 1

Could you give an example? That sounds like a pleasant improvement, but I don’t know exactly what you mean by a cached link.

1. 3

There is a ‘cached’ link under each article title on lobste.rs.

1. 1

Thanks.

2. 7

I started running uMatrix and added rules to block all 1st-party JS by default. It does take a while to whitelist things, yes, but it’s amazing when you start to see how many sites use Javascript for stupid shit. Imgur requires Javascript to view images! So do all Squarespace sites (it’s for those fancy hover-over zoom boxes).

As a nice side effect, I rarely ever get paywall modals. If the article doesn’t show, I typically plug it into archive.is rather than enable javascript when I shouldn’t have to.
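For reference, a ruleset along those lines might look like this in uMatrix’s “My rules” pane (syntax from memory and hostnames are examples, so treat this as a sketch; exact behavior depends on uMatrix’s built-in rules and scoping):

```
* * script block
imgur.com imgur.com script allow
example-squarespace-site.com example-squarespace-site.com script allow
```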

1. 2

I do this as well, but with Medium it’s a choice between blocking the pop-up and getting to see the article images.

1. 6

I think if you check the ‘spoof <noscript> tags’ option in uMatrix then you’ll be able to see the images.

1. 1

Nice trick, thanks!

2. 6

How timely! Someone at the office just shared this with me today: http://makemediumreadable.com

1. 4

From what I can see, the popup is just a begging bowl, there’s actually no paywall or regwall involved.

I just click the little X in the top right corner of the popup.

But I do think that anyone who likes to blog more than a couple of times a year should just get a domain, a VPS and some blog software. It helps decentralization.

1. 1

And I find that I can’t scroll down.

1. 3

I use the kill sticky bookmarklet to dismiss overlays such as the one on medium.com. And yes, then I have to refresh the page to get the scroll to work again.

On other paywall sites when I can’t scroll (perhaps because I removed some paywall overlay to get at the content below), I’m able to restore scrolling by finding the overflow-x CSS property and altering or removing it. …Though, that didn’t work for me just now on medium.com.

1. 1

Actually, it’s the overflow: hidden; CSS that I remove to get pages to scroll after removing some sticky div!
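The override involved is typically something like this (a sketch; exactly which element carries the overflow: hidden varies per site):

```css
/* Restore scrolling after a sticky overlay has been removed; some sites
   set overflow: hidden on the root elements while a modal is up. */
html, body {
  overflow: visible !important;
}
```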

2. 3

What is the keyserver’s privacy policy?

1. 5

I run an SKS keyserver, have some patches in the codebase, wrote the operations documents in the wiki, etc.

Each keyserver is run by volunteers, peering with each other to exchange keys. The design was based around “protection against government attempts to censor keys”, dating from the first crypto wars. They’re immutable append-only logs, and the design approach is probably about dead. Each keyserver operator has their own policies.

I am a US citizen, living in the USA, with a keyserver hosted in the USA. My server’s privacy statement is at https://sks.spodhuis.org/#privacy but that does not cover anyone else running keyservers. [update: I’ve taken my keyserver down, copy/paste of former privacy policy at: https://gist.github.com/philpennock/0635864d34a323aa366b0c30c7360972 ]

You don’t know who is running keyservers. It’s “highly likely” that at least one nation has some acronym agency running one, at some kind of arms-length distance: it’s an easy and cheap way to get metadata about who wants to communicate privately with whom, where you get the logs because folks choose to send traffic to you as a service operator. I went into a little more depth on this over at http://www.openwall.com/lists/oss-security/2017/12/10/1

1. 5

Thanks for this info.

Fundamentally, GDPR is about giving the right to individuals to censor content related to themselves.

A system set out to thwart any censorship will fall afoul of GDPR, based on this interpretation.

However, people who use a keyserver are presumably A-OK with associating their info with an append-only immutable system. Sadly, GDPR doesn’t really take this use case into account (I think; I am not a lawyer).

I think what’s important to note about GDPR is that there’s an authority in each EU country that’s responsible for handling complaints. Someone might try to troll keyserver sites by attempting to remove their info, but they will have to make their case to this authority. Hopefully this authority will read the rules of the keyserver and decide that the complainant has no real case based on the stated goals of the keyserver site… or they’ll take this as a golden opportunity to kneecap (part of) secure communications.

I still think GDPR in general is a good idea - it treats personal info as toxic waste that has to be handled carefully, not as a valuable commodity to be sold to the highest bidder. Unfortunately it will cause damage in edge cases, like this.

1. 3

gerikson, you make really good points there about the GDPR.

Consenting people are not the entire focus here, though; it’s about current and potential abuse of the servers, and about people who have not consented to their information being posted and have no way to remove it.

The supervisory authorities won’t ignore that; this is why the keyservers need to change, to prevent further abuse and their own extinction.

They also won’t be sympathetic to this case, just like the recent ICANN case, where ICANN wanted publicly storing your information with your domain to remain a requirement and was rejected outright. The keyservers are not necessary to the functioning of the keys you upload, and a big part of the GDPR is processing data only as long as necessary.

Someone recently made a point about the term non-repudiation, which in digital security means:

• A service that provides proof of the integrity and origin of data.
• An authentication that can be asserted to be genuine with high assurance.

Keyservers don’t do this! You can have the same email address as anyone else, and even the maintainers and creator of the SKS keyservers state this, recommending you check through other means, such as telephone or in person, whether keys are what they appear to be.

I also don’t think this is an edge case; I think it’s a wake-up call to rethink the design of the software and catch up with the rest of the world, quickly.

Lastly, I don’t approve of trolling; if you’re doing it just for the sake of doing it, don’t. If you genuinely feel the need to submit a “right to erasure” request because you did not consent to having your data published, please do it.

2. 2

Thank you for the link: http://www.openwall.com/lists/oss-security/2017/12/10/1; it’s a fantastic read and makes some really good points.

It’s easy for anyone to get hold of recent dumps from the SKS servers; just yesterday I hunted through a recent dump of 5 million+ keys looking for interesting data. I’ll be writing an article about it soon.

2. 3

I totally agree; it has been bothering me as well, and I am in the middle of considering starting up my own self-hosted blog. I also don’t like Medium’s method of charging for access to people’s stories without giving them anything.

1. 3

I’m thinking of setting up a blog platform, like Medium, but totally free of bullshit for both the readers and the writers. Though the authors pay a small fee to host their blog (it’s a personal website/blog engine, as opposed to Medium which is much more public and community-like).

If that could be something that interests you, let me know and I’ll let you know :)

1. 2

lmao you don’t even get paid when someone has to pay for your article?

1. 1

Correction: turns out you can get paid if you sign up for their partner program, but I think it requires approval n shit.

2. 2

hey @pushcx, is there a feature where we can prune a comment branch and graft it on to another branch? asking for a friend. Certainly not a high priority feature.

1. 3

No, but it’s on my list of potential features to consider when Lobsters gets several times the comments it does now. For now the ‘off-topic’ votes do OK at prompting people to start new top-level threads, but I feel like I’m seeing a slow increase in threads where promoting a branch to a top-level comment would be useful enough to justify the disruption.

1. 31

Software correctness is not a developer decision; it’s largely a business decision guided by cost management. I mean, depending on where you work and what you work on, the software may be so stable that when you try to point out a problem, the business will simply point out that the software is correct because it’s always correct, and that you’re probably just not understanding why it is correct. Apps are buggy mostly when the costs of failure to the business are low or not felt by management.

1. 5

Came here to say exactly this.

There is no barrier to entry or minimum bar for consideration in software.

So you end up with thousands of businesses saying variations of “our budget is $1000 and we want you to make software that …”. Then of course you are going to see lots of failure in the resulting software. The choice often ends up being “spend 10,000x and make it super robust” or “live with bugs”. No business chooses the first option when you can say “oops, sorry, that was a bug, we just fixed it. thank you! :)”. This pattern persists even as the cost of developing software comes down. Meaning if you reduce the cost of producing flawless software to $X, the market will choose a much more buggy version that costs a fraction of $X, because the cost of living with those bugs is still much lower than the cost of choosing a flawless one.

1. 15

I recently moved to financial software development, and it seems everybody has real-life experience of losing huge sums of money to a bug, and everybody, including management and trading, is willing to try practices to reduce bugs. So I became more convinced that it is the cost of bugs that matters.

1. 1

While this is true, don’t you think this is sort of… pathetic? Pretty harsh, I couldn’t come up with a better word on the spot. What I mean is, this is basically “those damn suits made us do it”.

1. 1

Not really. Would you like your mobile phone screen to be made bullet proof and have it cost $150M?

Would you like an atomic bedside alarm clock for $500k? A light bulb that is guaranteed to not fail for 200 years for $1,000?

It’s a real trade-off and there’s a line to be drawn about how good/robust/reliable/correct/secure you want something to be.

Most people/businesses can live with software with bugs and the cost of aiming for no bugs goes up real fast.

Taking serious steps towards improving software quality is very time consuming and expensive, so even those basic first steps won’t be taken unless it’s for something critical such as aircraft or rocket code.

For non-critical software often there’s no huge difference between 0 bugs or 5 bugs or 20 bugs. So there isn’t a strong incentive to try so hard to reduce the bugs from their initial 100 to 10 (and to keep it there).

The case that compels us to eliminate bugs is where it is something to the effect of “no bugs or the rocket crashes”.

Also you have to consider the velocity of change/iteration in that software. You can spend tons of resources and have your little web app audited and certified as it is today, but you have to think of something for your future changes and additions too.

As the technology improves the average software should become better in the same way that the average pair of shoes or the average watch or the average tshirt becomes better.
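The trade-off described above can be sketched as a simple expected-cost comparison (illustrative numbers only, invented for this sketch, not taken from the thread):

```python
# Illustrative sketch: a business picks whichever option has the lowest
# total expected cost, not the one with the fewest bugs.
def total_cost(dev_cost, expected_bugs, cost_per_bug):
    """Development cost plus the expected cost of bugs that ship."""
    return dev_cost + expected_bugs * cost_per_bug

# Non-critical web app: a bug costs an apology and a quick fix.
buggy = total_cost(dev_cost=10_000, expected_bugs=100, cost_per_bug=50)
robust = total_cost(dev_cost=1_000_000, expected_bugs=1, cost_per_bug=50)
assert buggy < robust  # the market picks the buggy version

# Rocket firmware: one bug can destroy the vehicle.
buggy_rocket = total_cost(10_000, 100, 5_000_000)
robust_rocket = total_cost(1_000_000, 1, 5_000_000)
assert robust_rocket < buggy_rocket  # now robustness wins
```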

1. 1

Would you like your mobile phone screen to be made bullet proof and have it cost $150M?

Quite exaggerated, but I get your point. The thing is — yes, I personally would like to pay 2-3x for a phone if I can be SURE it won’t degrade software-wise. I’m not worried about hardware (as long as the battery is replaceable), but I know that in 2-3 major OS updates it will feel unnecessarily slow and clunky.

Also you have to consider velocity of change/iteration in that software

Oh, man, that’s a whole other story… I can’t remember the last time I wanted software to update. And the only two reasons I usually do update are:

1. It annoys me until I do;
2. It will hopefully fix some bugs introduced due to this whole crazy update schedule in the first place.

Most people/businesses can live with software with bugs and the cost of aiming for no bugs goes up real fast.

Which brings us back to my original point: we got used to it and we don’t create any significant pressure.

2. 1

Businesses that allow buggy code to ship should probably be shamed into better behavior. They exist because the bar is low, and would cease to exist with a higher bar. Driving them out of business would be generally desirable.

A boycott would need to start or be organized by developers, since developers are the only people who know the difference between a circumstance where a high-quality solution is possible but difficult, a circumstance where a high-quality solution is trivial but rare for historical reasons, and a situation where all solutions are necessarily going to run up against real, mathematical restrictions.

(Also, most code in existence isn’t being developed in a capitalist-corporate context, and the most important code – code used by everybody – isn’t being developed in that context either. We can and should expect high quality from it, because there’s no point at which improving quality becomes “more than my job’s worth”.)

3. 3

it’s largely a business decision guided by cost management.

I don’t agree about the cost management reasoning. Rather it is a business decision that follows what customers actually want. And customers actually do prefer features over quality. No matter how much it hurts our pride in craftsmanship…

The reason we didn’t see it before software is that other fields simply don’t have this trade off as an option: buildings and cars can’t constantly grow new physical features.

1. 3

Speed / Quality / Cost

Pick two

You can add on features to cars, and buildings, and the development process does sometimes go on and on forever. The difference is if your cow clicker game has a game breaking bug, typically nobody literally dies. There exists software where people do die if there are serious bugs and in those scenarios they either compromise in speed or cost.

We’ve seen this before software in other fields, and they do have this trade-off as an option; you just weren’t in charge of building it. The iron triangle predates software. Though I agree scope creep is a bigger problem in software, it is also present in other industries.

2. 4

I agree. I suppose this is another thing that we should make clear to the general public.

But the problem I’m mostly focusing on is the problem of huge accidental complexity. It’s not business or management who made us build seemingly infinite layers and abstractions.

1. 12

It’s not business or management who made us build seemingly infinite layers and abstractions.

Oh it definitely was. The waterfall process, banking on IBM/COBOL/RPG, CORBA, endless piles of objects everywhere, big company apps using obfuscated formats/protocols, Java/.NET… these were middle managers and consultants forcing bullshit on developers. Those bandwagons are still going strong. Most developers stuck on them move slower as a result. The management solution is more bullshit that looked good in a PowerPoint or sounded convincing in a strip club with costs covered by a salesperson. The developers had hardly any say in it at all.

With that status quo, we typically are forced to go with two options: build the new thing on top of or within their pile of bullshit, or find new niches or application areas that let us clean-slate stuff. Then, we have to sell them on these, whether internally or externally. Doing that for stuff that’s quality-focused rather than feature/buzzword-focused is always an uphill battle. So, quality-focused software with simple UI’s isn’t the norm. Although developers and suppliers cause problems, the vast majority of the status quo is from the demand side of consumers and businesses.

1. 3

It isn’t? Most managers I’ve met come and see me saying: we don’t want to have to think about this, so build on top of this abstraction of it. They definitely do not want us wiping the slate clean and spending a lot of time rebuilding it anew; that would be bad for business.

1. 3

Brutalism as an architectural style is disgusting and oppressive as shit (intentionally). I spent quite a bit of time in a brutalist building, and I felt like shit. Like, how did intentional hostility ever become a trend?

1. 10

While the term certainly originates from concrete, the author is not trying to advocate making websites out of concrete (figuratively). I think the main point can be seen in the paragraph mentioning Truth to Materials. That is, don’t try to hide what the structure is made out of - and in the case of a website it is a hypertext document.

This website could be seen in that light. It is very minimally styled and operates exactly how the elements of the interface should (be expected to). The points of interaction are very clear.

The styling doesn’t even have to be minimal, but there is certainly a minimalism implied.

1. 9

I respect your opinion, but I personally really enjoy brutalist architecture. I like the minimalism and utilitarian simplicity of the concrete exteriors, and I like how the style emphasizes the structure of the buildings.

1. 2

I think if you added a splash of color it would make the environment much more enjoyable while still embracing the pragmatism and the seriousness.

2. 5

It isn’t intentionally being oppressive or hostile. It represents pragmatism, modernity, and moral seriousness. However, it doesn’t take a large logical jump to realize that pragmatism, modernity, and moral seriousness could feel oppressive. In the same way, to the architects who designed Brutalism, the indulgent designs of the 1930s-1940s might have felt like a spit in the face if you’re struggling to make ends meet. Neither was trying to hurt anyone, yet here we are.

1. 3

I consider the 1930s designs (as can be seen in shows such as Poirot) to be rather elegant styling. But I also see the pragmatism that was prompted by the war shortages.

I am not a great fan of giant concrete structures that have no accommodation for natural lighting, but I also dislike the “glass monstrosities” that have been built after brutalist designs.

I find myself respecting the exterior of some of the brick buildings of the 19th Century and possibly early 20th. Western University in London Canada has many buildings with that style.

Some of the updates done to the Renaissance Center in Detroit have mitigated some of the problems with Brutalism - ironically, with a lot of glass.

1. 2

This might be true of Brutalism specifically, but (at least some) modern (“Modern”, “Post-modern”, etc.) architecture is deliberately hostile.

2. 3

I found this article on that very topic pretty interesting.

1. 2

In my home town, the public library and civic center (pool, gymnasium) are brutalist. It was really quite lovely. Especially the library was extremely cozy on the inside, with big open spaces with tables and little nooks with comfortable chairs.

1. 1

My pet theory is that brutalism is a style that looks good in black-and-white photographs at the expense of looking good in real life. So it was successful in a time period when architects were judged mainly on black-and-white photographs of their buildings.

1. 2

I think the closer financial construct is a toxic asset.

1. 2

agreed. toxic assets or perhaps investments that are operating at a loss. because the software actually takes money away from you, it’s not like the money is “in purgatory”, it’s being siphoned away. it’s more like a house or some other physical property that is vacant, and huge, and thus costs a lot of money each month just to keep up without foreclosure.

1. 11

a chipmonger kills its webshit propaganda after some employees complain

If you can easily n-gate a submission, maybe it shouldn’t be here.

Spam about ad campaigns and counterreactions is not a core value prop of lobsters. :(

1. 18

on the other hand, this story is currently on the front page with an above-median vote score, and the other riscv-basics story is the highest voted story currently on the front page, so evidently the users of lobsters found both relevant to their interests.

Yours is some low-quality gatekeeping.

1. 23

News is the mindkiller. Humans are hardwired to be really interested in new things regardless of their utility, usefulness, or healthiness–you need look no further than the 24-hour news cycle or tabloids or the HN front page to observe this phenomenon.

If you look at any given submission, it has a bunch of different things it’s “good” at: good in terms of educating about hardware, good in terms of talking about the math behind some compiler optimization, good in whatever. Submissions that are news are good primarily in terms of how new they are, and have other goodness that is tangential if it exists at all. The articles may even have a significant off-topic component, such as politics or gossip or advertising.

This results in the following pathologies:

• Over time, if a community optimizes for news, they start to normalize those other components, until the scope of the submissions expands to encompass the formerly off-topic material…and that material is usually something that is at best duplicated elsewhere and at worst pure flamebait.
• The industry we’re in specializes in spending loads of money on attractive clickbait and advertising presenting as news, and so soon the submissions become flooded with low-quality crap or advertising that takes up community cycles to process without ever giving anything substantial in return.
• The perceived quality of the site goes down for everybody and the community fragments, because news is available elsewhere (thus, the utility of the site is diminished) and because the valuable discussion is taken up with nitpicking news stories. This is why, say, c2 wiki is still around and useful and a lot of news sites…aren’t.

What you dismiss as gatekeeping is an attempt to combat this.

EDIT:

A brief note–your example of the two ARM articles being on the front page illustrates the issue. Otherwise intelligent lobsters will upvote that sort of stuff because it’s “neat”, without noting that with everybody behaving that way we’ve temporarily lost two good spots for technical content–instead, we have more free advertising for ARM (all press is good press) and now slightly more precedent for garbage submissions and call-response (news thing, rebuttal to news thing, criticism/analysis of rebuttal). It’s all so tiresome.

1. 5

ugggh, you leveled up my brain regarding what belongs on lobste.rs. “I like this!” is not only not necessarily an argument ‘for’, it is sometimes an argument ‘against’. Mind-blown.

1. 2

I bookmarked and often shared this post since it seemed like a nice set of guidelines. Had a lot of votes in favor, too.

1. 1

I thought we concluded that votes in favour represent anti-signal.

1. 1

Haha. Depends on the context. They’re important for meta threads since it can determine site’s future.

2. 5

This is interesting news, it’s not just drama or clickbait. The big chip makers have maintained an oligopoly through patents on abstract math: an ISA. It’s insane that innovation can only come from a few big players because of their lawyers. RISC-V is the first serious dent that the open source movement has been able to make in this market because (unlike ARM, OpenPOWER, and OpenSPARC) it has a serious commitment to open source and it is technologically superior.

ARM will be the first player to fall to RISC-V because they have a monopoly on lower-end chips. Samsung, Qualcomm, NVidia, Apple, Google, etc. are all perfectly capable of making competitive chips without having to pay a 1% tax to ARM. We are already seeing this with Western Digital’s switch to RISC-V; there is no advantage to paying ARM for simple microcontrollers … which are a huge portion of ARM’s business.

That they are resorting to FUD tactics shows that ARM execs know this. People interested in the larger strategic moves, like myself, find this article about how their FUD tactics backfired very interesting. I would appreciate it if you didn’t characterize this sort of news as spam and the people who follow how big industry players are behaving as just being into drama.

1. 6

With respect, a good deal of your post is kremlinology.

That they are resorting to FUD tactics shows that ARM execs know this.

The ARM execs cannot be guaranteed to “know” anything of the sort–it’s more likely that there is a standard playbook to be run to talk about any competing technology, RISC-V, OSS, or otherwise. Claiming that “oh ho obviously they feel the heat!” is speculation, and without links and evidence, baseless speculation at that.

the people who follow how big industry players are behaving as just being into drama.

The people who “follow” big industry players are usually just people who want to feel informed, and are quite unlikely to be anybody with any actual actions available given this information. Thus, just because something is interesting to them doesn’t make it necessarily useful or actionable.

characterize this sort of news as spam

Again, all news is spam on a site with historically more of a bent towards information and non-news submissions. Further, it’s not like this hasn’t been covered extensively elsewhere, on Slashdot and HN and Gizmodo. It’s not like it isn’t being shown on many fronts.

Please understand that while in this specific case you might have an interest, if all lobsters follow this idea, it trashes the site.

1. 2

With respect, a good deal of your post is kremlinology.

I’m not allowed to infer basic information about the internal state of an organization based on its public actions?

That they are resorting to FUD tactics shows that ARM execs know this.

The ARM execs cannot be guaranteed to “know” anything of the sort–it’s more likely that there is a standard playbook to be run to talk about any competing technology, RISC-V, OSS, or otherwise. Claiming that “oh ho obviously they feel the heat!” is speculation, and without links and evidence, baseless speculation at that.

Do you understand why I might feel frustrated when someone mocks arguments defending a topic but then demands others provide extensive context to the conversation s/he inserted themselves into?

It’s not like ARM hasn’t spoken out on this subject before; a high level ARM technology fellow debated RISC-V foundation members a couple of years ago. The debate sounds a lot like an early draft of the arguments presented on the FUD website: RISC-V can’t possibly replicate ARM’s ecosystem and design services.

If you go look at the RISC-V foundation membership list, you will find a lot of ARM licensors and competitors including Qualcomm, Samsung, NVidia, IBM, Huawei, and Google. They are using RISC-V as a vehicle to jointly fund high-quality replacements of ARM’s IP, much of which consists of ISA patents and tooling. RISC-V has a very thorough patent review process, making it difficult to sue RISC-V manufacturers based on the ISA. There is a lot I don’t understand about the value ARM adds in terms of chip design and industry collaborations, but NVidia alone is worth 3x what SoftBank paid for ARM just two years ago.

If ARM execs aren’t worried about RISC-V taking market share, they should be. ARM creating a FUD website is very strong, direct evidence that this is the case.

The people who “follow” big industry players are quite usually just people that want to feel informed, and are quite unlikely to be anybody with any actual actions available given this information. Thus, just because something is interesting to them doesn’t make it necessarily useful or actionable.

It feels like you are talking down to me and other interested readers. Are kernel hackers the only people allowed to be interested in kernel development news? I don’t get a lot of actionable information based on the latest scheduler drama, but (as a UX engineer) I am interested in the outcome of these debates.

I came to Lobste.rs for a deeper understanding of the underlying technical and political factors at play here.

Again, all news is spam on a site with historically more of a bend towards information and non-news submissions.

I am open to this argument and I probably wouldn’t have perceived your comments so negatively had I not started from the standard definition of spam. Of course, I also understand that it is hard to justify the time to fit such nuance into a comment on an article : )

You clearly have thought a lot about this and discussed it with others, but new and casual readers haven’t. Perhaps you could use less incendiary language? Just say that Lobste.rs focuses on non-news submissions and that you feel industry news is offtopic.

Further, it’s not like this hasn’t been covered extensively elsewhere, on Slashdot, HN, Gizmodo, and so on. It’s not like it isn’t being shown on many fronts.

The technical analysis on HN and other sites is … non-existent. I would love to hear more from experts with informed opinions on chip design and manufacture and that’s what I expected of the comments here.

Please understand that while you might have an interest in this specific case, if all lobsters follow this idea, it trashes the site.

Well, I’m kinda peeved that the comments section of both stories turned into a slow-burn flamewar : /

2. 2

ARM will be the first player to fall to RISC-V because they have a monopoly on lower end chips.

They actually don’t. A good chunk of the chip market is 8-16 bitters. Billions of dollars worth. In the 32-bit category, there’s a lot of players licensing IP and selling chips. ARM has stuff from low end all the way up to smartphones with piles of proven I.P. plus great brand, ecosystem, and market share. They’re not going anywhere any time soon. MIPS is still selling lots of stuff in low-end devices including 32-bit versions of MCU’s. Cavium used them for Octeon I-III’s for high-performance networking with offload engines.

With most of these, you’d get working hardware, all the peripherals you need, toolchain, books/training on it, lots of existing libraries/code, big company to support you, and maybe someone to sue if the I.P. didn’t work. RISC-V doesn’t have all that yet. Most big companies who aren’t backers… which are most big companies in this space… won’t use it without a larger subset of that or all of that depending on company. I’m keeping hopes up for SiFive’s new I.P. but even it probably has to be licensed for big money. If paying an arm and a leg, many will choose to pay the company known to deliver.

From what I see, ARM’s marketing people or whatever are just reacting to a new development that’s in the news a lot. There’s some threat to their revenues given that some big companies are showing interest in RISC-V. So, they’re slamming the competition and protecting their own brand. Just business news, or ops as usual.

1. 3

The 16-bit category has been almost totally annihilated by small 32-bit designs. The 8-bit category still stands.

(I’m also deeply doubtful of RISC-V while hardware beyond SiFive suffers critical existence failure, but that remains to be seen…)

1. 2

ARM will be the first player [large monopoly] to fall [lose lots of market-share] to RISC-V because they have a monopoly on lower end chips.

Argh, I thought “fall” was too strong a choice of words while writing this, I should’ve listened to myself.

My line of thought was that it’s really hard to create a competitive server platform, as evidenced by the niche market SPARC, OpenPOWER, and ARM occupy in the server space. However, there are plenty of low-power, low-complexity ARM cores out there that are up for grabs. I’m hoping that Samsung, Qualcomm, and other RISC-V backers are supporting RISC-V in hopes that they can take their CPU designs in-house and cut ARM out of the equation.

I am largely ignorant of the (actual) lower-end chip market, thanks for the insight.

With most of these, you’d get working hardware, all the peripherals you need, toolchain, books/training on it, lots of existing libraries/code, big company to support you, and maybe someone to sue if the I.P. didn’t work. RISC-V doesn’t have all that yet.

The RISC-V foundation was very intentional in their licensing and wanted to ensure that designers and manufacturers would have plenty of secret sauce they could layer on top of the core spec. This is one of the reasons OpenSPARC failed and why so many different frenemies are collaborating on RISC-V.

From what I see, ARM’s marketing people or whatever are just reacting to a new development that’s in the news a lot.

Their marketing people made the site, but an ARM technology fellow pitched similarly bad arguments in a debate ~2 years ago. Or maybe I’ve just drunk too much Kool Aid.

2. 3

I upvoted both submissions. I consciously bought a Lobsters frontpage spot for RISC-V advertising and accepted the loss of technical content in exchange. I acknowledge other negative externalities but I think they are small. Sorry about that.

I think RISC-V advertising is as legitimate as BSD advertising, Rust advertising, etc. here. Yes, technical advertising would have been better. I have a small suspicion that RISC-V (or hardware) is being gatekept in favor of established topics, which you can dismiss by simply saying so in your reply.

1. 4

Thanks for keeping up the effort to steer the submissions in a more cerebral direction, away from news. I totally agree with you and appreciate it.

1. 2

I almost never upvote these kinds of submissions, but seeing as it can be hard to get them off the main page, maybe it would be interesting for Lobsters to have some kind of merging feature that groups stories that are simply different stages of the same news item into a single story, thus only blocking one spot.

1. 3

Now that is interesting. It could be some sort of chaining or hyperlinks that go in the text field. If not done manually, the system could add it automatically in a way that was clearly attributed to the system. I say the text field so the actions next to stories and comments stay uncluttered.

1. 3

It’s been done before for huge and nasty submissions; usually for hot takes.

1. 2

It would also allow the group to act as a timeline of sorts. Done correctly, it could even apply quasi-automatically to tech release posts as well, making it easier to read prior discussions.

The main question right now would be how to handle the comments UI for those grouped stories.

2. 1

“All publicity is good publicity” is actually totally false. The actual saying should be something like “Not all bad publicity is bad for you, if it aligns with your identity.” Fighting OSS definitely doesn’t align with the ARM identity/ethos.

3. 5

It’s so easy to just react and click that upvote button without thinking; the score is a reflection of emotional appeal, not of this submission’s relevance. “But it’s on the front page” is also a tired argument that comes up in every discussion like this one. @friendlysock makes excellent points in his reply to you, I totally agree with him and appreciate that he takes the time to try to steer the submissions away from news. There are plenty of news sites, I don’t want another one.

4. 8

or maybe n-gate is a worthless sneer masquerading as a website that doesn’t need to be used as a referent on topical material? Especially given that literally anything posted to HN is going to be skewered there? I’m not the go-to guy on HN cheerleading (at all, in any way) but n-gate is smirky petulant crap and doesn’t exactly contribute to enlightenment on tech topics.

1. 11

worthless sneer masquerading as a website that doesn’t need to be used as a referent on topical material

El Reg could be described the exact same way!

1. 2

that’s… actually a good point.

1. 16

So, to summarize, “Distributed Version Control isn’t a good replacement for a release process”?

1. 5

Yes, “Nearly all users of version control are non-developers.” is the author’s thesis for why DVCS is bad. I personally also find it frustrating when someone writes a book about something they could communicate quickly and clearly.

Their argument for why it’s bad for development is that their drive must be clean when crossing the US border as though that’s a problem that most people have.

Their argument for why it’s bad for long-lived development is “your checkouts will necessarily become too big and slow someday, if the project stays on this system long enough”, which the author really has no evidence for.

1. 1

Libraries are the compilers of 2018. They make things a lot easier, you can always go in and fix one if it doesn’t work, and yet the developer refrain is that somehow they’re the erosion of the human mind. For historical context, this is what developers once thought of other developers who used compilers. Yet today very, very few people would refer to someone who used a compiler as a simpleton. In fact, we’ve learned that having access to a compiler actually allows us to express even more complex abstractions. Yet we still go through the same fearful conversation about libraries.