Some people want to be given work, plenty of time to do it, and to make sure no one else knows how to do it, lest they be replaced. Before the remote-work revolution, it was harder to get replaced. Now, quick access to global talent makes it much easier. I guess that’s why some people have become more protective of their work.
What I’ve seen more often is devs who want more autonomy, and (project/product/people) managers who want their devs to work on what they (or the team as a whole) have decided is the right thing to work on right now, and to focus on that and get it done.
I think that (generally!) the right thing to do as a dev is not to hide their activity (working on what they see as actually important) but make the case for the business value of their work and switch teams or companies if they just can’t agree.
Google Drive’s trash is changing. Starting October 13, items will be automatically deleted forever after they’ve been in your trash for 30 days. Learn more
I guess that’s what you get if you’re hosting it like that. Also I don’t like the missing mouseover for links and I think I can sum it up as: not a fan of Google Docs. But definitely a creative solution.
Would love to see a “how Mozilla could have survived” article. Was it really the result of bad decision-making, or was it basically inevitable given what they were up against? (Note: not asking about whether or not they made any bad decisions.)
Would love to see a “how Mozilla could have survived” article.
I often wonder what would have happened had they not given up on FirefoxOS. Non-technical users do not care about browsers, they won’t install an alternative one unless forced to. This means that for people to use Firefox, it’d have to come bundled with the hardware they purchase. FirefoxOS was a way to get Gecko running on lots of mobile platforms.
Kerning is useless for monospaced fonts, almost by definition.
Kerning is so that combinations like “AV” don’t have a wide space between them. In a monospaced font, “AV” will have that gap, because the horizontal space taken up by each character is the same.
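The point above can be sketched with a toy model. All the advance widths and kerning values below are made up for illustration; real fonts store this data in kerning or GPOS tables.

```python
# Toy model of kerning: each glyph has an advance width, and certain
# pairs get a (usually negative) kerning adjustment. All numbers here
# are hypothetical.

ADVANCE = {"A": 10, "V": 10, "i": 4, "m": 14}   # hypothetical glyph widths
KERNING = {("A", "V"): -3, ("V", "A"): -3}      # hypothetical pair kerning

def rendered_width(text, kern=True):
    """Sum advance widths, applying pair kerning between neighbours."""
    width = sum(ADVANCE[c] for c in text)
    if kern:
        width += sum(KERNING.get(pair, 0) for pair in zip(text, text[1:]))
    return width

# In a proportional font, kerning pulls "AV" together:
print(rendered_width("AV"))              # 17 with kerning
print(rendered_width("AV", kern=False))  # 20 without

# In a monospaced font, every advance is equal by definition, so any
# pair kerning would break the column grid -- hence no kerning.
```

This also shows why kerning is "useless almost by definition" for monospace: the whole point of monospace is that `rendered_width` depends only on character count.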
There are advantages to kerning, and you miss out on them with monospaced fonts. Obviously you gain other benefits while writing code with monospaced fonts, but for prose? Not so clear.
So don’t do kerning? Not sure if there are any readability studies or something that you’re thinking about but as a programmer I am also happy to read articles in monospaced font.
Monospaced fonts make prose objectively harder to read. They’re an inappropriate choice for body text, unless you’re trying to make a specific (e.g. stylistic) statement.
Do you have any links for some studies about it? I’m wondering since you’ve used the objectively term, which I find confusing, since I’m not impacted by monospaced formatting at all. Film scripts are written in monospaced fonts, and books were too (at least in the manual-typewriter days); I think this wouldn’t be the case if monospaced fonts were objectively harder to read.
Do you have any links for some studies about it? I’m wondering since you’ve used the objectively term, which I find confusing, since I’m not impacted by monospaced formatting at all.
This is a subject that has been studied for a long time. A quick search turned up “Typeface features and legibility research”, but there is a lot more out there on this topic.
Manuscripts and drafts are not the end product of a screenplay or a book. They’re specialized products intended for specialized audiences.
There are no works intended for a mainstream audience that are set in a monospaced typeface that I know of. If a significant proportion of the population found it easier to read monospaced, that market would be addressed - for example, in primary education.
The market could prefer variable-width fonts because monospaced fonts are wider, increasing the space the text takes up, which in turn increases production cost. That alone could carry more weight in the market’s preference than actual ease of reading. The tighter text compression achieved with variable width could improve reading speed for sighted readers, but that isn’t so obvious for people with vision disabilities.
Individuals with central loss might be expected to read fixed-pitch fonts more easily owing to the greater susceptibility of crowding effects of the eccentric retina with which they must read. On the other hand, their difficulty in making fixative eye movements in reading should favor the greater compression of variable pitch. Other low-vision patients, reading highly magnified text, might benefit from the increased positional certainty of characters of fixed pitch. Our preliminary results with individuals with macular disease show fixed pitch to be far more readable for most subjects at the character size at which they read most comfortably. (“Reading with fixed and variable character pitch”: Arditi, Knoblauch, Grunwald)
Since at least some research papers attribute superiority of variable width font to the horizontal compression of the text – which positively influences the reading speed and doesn’t require as many eye movements – I’m wondering if the ‘readability’ of monospaced typefaces can be improved with clever kerning instead of changing the actual width of the letters.
The reading time (Task 1) with the variable-matrix character design was 69.1 s on the average, and the mean reading time with the fixed-matrix character set was 73.3 s, t (8) = 2.76, p < 0.02. The difference is 4.2 s or 6.1% (related to fixed-matrix characters). (“RESEARCH NOTE Fixed versus Variable Letter Width for Televised Text”: Beldie, Pastoor, Schwarz)
The excerpt from the paper above suggests that the superiority of variable width vs monospaced isn’t as crushing as one could think when reading that human preference for variable width is an “objective truth”.
Also, the question was whether monospaced fonts are really harder to read than variable-width fonts, not whether monospaced fonts are easier to read. I think there are no meaningful differences between the two styles.
The market could prefer variable-width fonts because monospaced fonts are wider, increasing the space the text takes up, which in turn increases production cost.
So it’s more readable and costs less? No wonder monospaced fonts lose out.
I’d love to read the paper you’ve referenced, but cannot find a link in your comment.
(To be honest: you’re right and I apologize. It was a cheap shot).
I found the first paper (https://www.ncbi.nlm.nih.gov/pubmed/2231111), and while I didn’t read it all, I found a link to a font that’s designed to be easier to read for people who suffer from macular degeneration (like my wife). The font (Maxular) shares some design cues with monospaced fonts, but critically, wide characters (like m and w) are wider than narrow ones, like i.
That’s what I think is a big problem with monospaced fonts: at small sizes, characters like m and w get very compressed and are hard to distinguish.
I also tried coding with a variable-width font. It works OK with Lisp code but not with other languages. The tricky part is aligning stuff. You need elastic tabstops.
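The core idea behind elastic tabstops can be sketched in a few lines: cells in the same column of adjacent lines share a tab stop sized to the widest cell. This is a much-simplified, space-padding sketch; a real implementation lives in the editor and recomputes stops per contiguous block as you type.

```python
# Simplified sketch of the elastic tabstops idea: for a block of
# tab-separated lines, each column's stop is the width of its widest
# cell (plus padding), so the columns line up regardless of font.

def elastic_align(lines, pad=2):
    rows = [line.split("\t") for line in lines]
    n_cols = max(len(r) for r in rows)
    # Widest cell in each column across the block determines the stop.
    widths = [
        max((len(r[i]) for r in rows if i < len(r)), default=0)
        for i in range(n_cols)
    ]
    return [
        "".join(cell.ljust(widths[i] + pad) for i, cell in enumerate(r)).rstrip()
        for r in rows
    ]

aligned = elastic_align(["x\t= 1", "total\t= 2"])
print(aligned[0])  # x      = 1
print(aligned[1])  # total  = 2  (the '=' signs line up)
```

With a variable-width font the editor would measure cells in pixels rather than characters, but the alignment logic is the same.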
I don’t find bottom’s graph (that one that displays usage for each individual core) to be very informative in your screenshot. Maybe they weren’t designing for so many cores :)
I feel like the “dot-map” one does the best job. You get a quick and accurate view of which regions are crammed with infected people and which are not, but also where within the region. I’m surprised it’s not used more (I’ve never seen one before this).
This is wrong: points are spread randomly within the province, they don’t correlate to actual people or cases. This makes the dot-density map really misleading, and this comment is evidence.
I don’t like the dot one. It doesn’t correct for population. It implies that cases are spread uniformly over the area, but it’s clear that that assumption isn’t even close to correct. Cases are undoubtedly heavily weighted towards cities just because that’s where the people are, different parts of the province are almost certainly doing differently, etc.
I particularly don’t think it’s responsible to make it look like the entire area is doing the same when we are talking about something like a virus. People use the press to decide how they should respond to the virus, and if they are in the region, local variations matter. Note that this is different from the other maps that make it clear that we don’t have fine grained enough data to divide up the region, but don’t suggest the absence of local variation.
For this data in particular, glhaynes also has a very good point that it looks like the color has been inverted or something. I was similarly initially confused, removing the label might help.
Don’t you think it’s more useful to report an estimation of where an infection happened, rather than saying “we don’t know, somewhere in this massive area”?
Cases are undoubtedly heavily weighted towards cities just because that’s where the people are, different parts of the province are almost certainly doing differently, etc.
And this is what the dot density map implies anyway? Or are we talking about two different maps? X) I’m seeing very very sparse dots in non cities and dense as hell in cities…?
I feel like we must be interpreting the map differently, or looking at different things.
What I’m seeing, in this image, is a map divided into a bunch of provinces. Provinces contain both rural and urban areas. Within each province, dots are spread uniformly at random, i.e. any point within the province is equally likely to have a dot, no matter the local population density or the local propensity to have the virus relative to the rest of the province.
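The placement described above can be sketched as follows. The province is simplified to a rectangle here (real maps sample inside the actual polygon), and the case count and cases-per-dot ratio are made up, but the principle is the same: dot positions carry no information finer than the province boundary.

```python
import random

# Sketch of how a dot-density map places its dots: one dot per k cases,
# each dropped uniformly at random inside the region's boundary,
# ignoring where the cases actually occurred within the region.

def place_dots(case_count, bounds, cases_per_dot=100, rng=None):
    rng = rng or random.Random(0)  # seeded for reproducibility
    (x0, y0, x1, y1) = bounds
    n_dots = case_count // cases_per_dot
    return [(rng.uniform(x0, x1), rng.uniform(y0, y1)) for _ in range(n_dots)]

# A province with 50,000 cases gets 500 dots scattered uniformly --
# a dense urban outbreak and an untouched rural area look identical.
dots = place_dots(50_000, bounds=(0.0, 0.0, 10.0, 10.0))
print(len(dots))  # 500
```

So the only signal in such a map is dot count per province; the apparent spatial spread within a province is an artifact of the sampling.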
I had a little trouble with the dot density map: because the dots were so dense in Hubei Province as to almost entirely fill it, I initially read Hubei as being rendered in inverse color for emphasis, leading me to interpret the small remaining non-dotted negative spaces there as being its “dots”.
Probably not a problem in a higher-resolution setting — or for smarter viewers!
Even then, though, it could easily be interpreted as showing a harder boundary at the province level than actually exists — as though the province was basically filled with cases that abruptly end right at the edge. That seems particularly likely to mislead a viewer in this case into thinking this is showing the effect of a hard quarantine right on the province boundary.
Agreed. This is also a topic covered in the talk Intro to Empirical Software Engineering. It’s amazing how much we think we know, vs how much we actually do.
Upvoted not because I think this is a good idea, but because I’m curious to get others’ opinions on it.
This seems like a terrible, terrible idea. It’s yet another way of soft-forcing Google’s hegemony on the Web. Specifically:
Badging is intended to identify when sites are authored in a way that makes them slow generally
I’m pretty sure this actually means “badging is intended to identify when sites are authored in a way that makes them slow on Chrome…”
And this isn’t like flagging a site that has an expired certificate or something. That is a legitimate security concern. This is making a value judgement on the content of the site itself, and making it seem like there’s something fundamentally wrong with the site if Google doesn’t like it.
They refer to two tools, one a Chrome extension (Lighthouse) that I didn’t bother to install, the other a website (PageSpeed Insights). I went for the latter to test a page that has no AMP or other “Google friendly” fluff and is otherwise quite lightweight (https://www.coreboot.org) and got a few recommendations on what to improve: compress some assets, set up caching policies, and defer some JavaScript.
If that’s all they want, that seems fair to me.
(Disclosure: I work on Chrome OS firmware, but I have no insights in what the browser level folks are doing)
Yeah, within 2 seconds of loading this link I was worried they were just pushing AMP. If they really are just pushing best practices then I’m cautiously optimistic about this change, and the fact that they didn’t mention AMP and instead linked to those tools gives me hope… but it’s Google, so who knows.
I’m definitely still wary. I personally feel that Google sticking badges on websites they approve of is never going to end well, regardless of how scientific it may seem at the beginning.
I really feel like there are major parallels to be drawn between Google and the rules of Animal Farm.
But that’s not all they want. Google and Chrome are now positioning themselves to visually tell users whether or not a site is “slow” (according to whatever metrics Google wants, which can of course change over time). As with most Google things, it will probably look reasonable on the surface, but long term just result in Google having even more control over websites and what they can and can’t do.
I would agree with you if it weren’t for Google’s long history of questionable decisions and abuses of their position as the (effective) gatekeeper of the web.
Specifically, the fact that Google created them. As tinfoil-rambling as it sounds, they already run a rather extensive spying/targeted-advertising network, a possibly-manipulable gateway to information, the most popular video service, the most popular browser, and some rather popular programming languages (Go, Dart); they power a large portion of Web services (Chrome’s V8 in Node.js, that and Blink for Electron); many alternative browsers are Chromium/Blink-based; and then there’s the AMP kerfuffle, ReCAPTCHA, and maybe the future protocols for how their vision of the Web works.
They keep encompassing more and more parts of the Web, both technical and nontechnical, and people keep guzzling that down like the newest Slurm flavor. That’s what worries me the most.
Generally I’m against prejudging companies like this, but Google has earned it, and then some.
Some sites that I use have Recaptcha. I use Firefox, and I can only pass the captcha if I log into Gmail first. Honestly, what kind of Orwellian horseshit is that?
So, I can at least comprehend this one, even if I hate it.
The whole point of recaptcha is to make it hard to pass unless you can prove you’re a real person. Being logged in to a google account which is actively used for ‘real things’ (and does not attempt too many captchas) is a really hard-to-forge signal.
Well, given what’s happening here…any site Google decides.
To be less pithy: this could be used as an embrace-extend-extinguish cycle. Sure, right now it’s all just general best-practices but what if later it’s “well, this site isn’t as fast as it could be because it’s not using Google’s extensions that only work in Chrome that would make it 0.49% faster, so we’ll flag it.”
I’m not saying Google is definitely going to do that, but…I don’t like them making this sort of determination for me.
It gives soft pressure to conform to Google’s “standards” whatever they may be. No website is going to want to have a big popup before it loads saying “This site is slow!” so they’ll do whatever it takes to have that not happen, including neglecting support for non-Chrome browsers.
I don’t know if it’s still the case, but at one point YouTube deployed a site redesign that was based on a draft standard that Chrome implemented and Firefox did not (Firefox only supported the final, approved standard). As a result, the page loaded quickly in Chrome, but on Firefox it downloaded an additional JS polyfill, making the page noticeably slower to render.
I shut down my desktop overnight, and generally shut down my laptop when I’m done using it (starting from a blank slate when I’m going to do something completely different, such as transitioning from play to work, is nice imo). That means that I’ll generally start Firefox from a fresh boot at least once a day, usually multiple times per day. I’m probably not the only one.
Also, it has become common for DEs and WMs to lack minimize buttons; neither GNOME nor Pantheon has them by default anymore, afaik, and the concept of minimizing doesn’t even make sense in a tiling WM. That means that closing applications when you’re not using them is probably becoming more common over time.
Removing minimize buttons from non-tiling desktop environments is just a misplaced unification with mobile devices (where those environments, like GNOME, don’t work in the first place and have no reason to unify anything) and should not be tolerated.
and the concept of minimizing doesn’t even make sense in a tiling WM.
I use Awesome and it supports minimising. I use it semi regularly. Sometimes a desktop gets a bit busy so I minimise a window with Super-n. Later Super-Ctrl-n brings it back. Handy for terminals that are running dev servers and the like too.
Handy for terminals that are running dev servers and the like too.
I find tmux to be a much better solution for this; you get a bunch of other handy features for free too, like being able to resume from an SSH session.
We might be talking about different things. I was referring to a dev server in the sense of a Rails application server that is running for local development or a zola instance when writing a blog post. In both cases there is no ssh involved. Is that what you meant?
No, that’s what I mean. If I start a dev server on one computer and walk away, then decide later I want to keep working on it, I can continue my work from another computer without being physically present at the original machine.
In tiling WMs there’s no need to minimize since the windows will just be parked behind active ones. (I’m only speaking for i3, StumpWM and EXWM. I’m not sure what dynamic[1] tiling WMs like xmonad do.)
[1] This is just a classification I made up on the spot, but static tiling WMs like StumpWM and EXWM have fixed tiles, arranged by the user, wherein new windows are placed. Dynamic tiling WMs create a new tile to put a new window in. i3 is somewhere in between.
You don’t have minimise buttons but you do have multiple desktops, the solution there is to park windows on an off-screen desktop and move them back when you want to interact with them. This is what I do with Xmonad where I use the last desktop (#9) as a ‘parking space’. I move windows there by pressing Shift-Alt-9 and retrieve them with Alt-9 (which moves to the ‘parking lot’) followed by Shift-Alt-1 (to move the window to desktop #1) or something similar.
Sure, that works, but workspace 9 becomes really messy if you do that with every application you ever open. It’s not like in, say, Ubuntu (which enables minimize buttons in GNOME) or Windows, where you can have as many applications open and minimized as you want and the only effect is an extra icon in a panel.
Also, in i3 (or sway, which is what I use), multiple monitors share the same pool of workspaces; when I have two screens, I usually allocate workspaces from 1 and up on the left screen and from 10 and down on the right screen. I sometimes use workspace 5 as a “parking space”, but that also means you have to decide which screen you want your parking space to belong to. Or you could use workspace 5 for the left screen’s parking space and workspace 6 for the right screen’s, but then you have to remember which screen you last used the application on.
Sometimes it’s just easier to close the application, you know?
It doesn’t matter if #9 gets messy as it is only used as a ‘parking lot’. I don’t use it all that much so it is a rare event for there to be more than 2 windows on that desktop. Xmonad also uses a shared pool (or can use a shared pool, the thing is configurable to the hilt so it is really up to the user to decide how to handle this) but there is no fixed relation between monitor position and desktop number - I can pull up #9 on any monitor, move a window to whatever desktop I had open on that monitor and switch back. Normally desktop #9 is invisible.
Based on the satire tag I assume the creator doesn’t actually care about the responses to the question. I flagged this as spam, I don’t think we need this kind of thing.
When does a post cross the line from “I don’t want to see more content like this so will not upvote it” to “this should be flagged as spam”? For you, personally, I mean.
It’s harder to define in general, but when people are creating copycat throwaway posts that they don’t care about I think that’s effectively spamming the site.
Out of all of the “What is your X” setup posts, this is the only one that the poster didn’t seem to care about, but I still think the commentary adds more than it takes away, overall.
This is the frustrating thing to me about this article (which is a good writeup).
Near the beginning:
A little over a year ago, I wrote “Should you learn C to ‘learn how the computer works’”. It was a bit controversial. I had promised two follow-up posts. It’s taken me a year, but here’s the first one.
And the end:
Because C’s abstract machine is so thin, these kinds of details can really, really matter, as we’ve seen. And this is where the good part of the “C teaches you how the computer works” meme comes in.
Klabnik took a year to shoot out an essay that is super easy to misinterpret as “C isn’t how computers really work (and so we shouldn’t learn it!)”, and that has become a not-uncommon meme in the Rust Evangelism Strike Force.
A year later, he recants, sorta kinda–but in the meantime, what happened? How much stupid argumentation resulted from this? How many C developers had to put up with erroneous Rust fanboyism on that point alone?
Folks, please please please be deliberate when writing for your communities on these things.
As much as I think an “investors never” policy is an ethically good one, I don’t think straight advertisements (even for Drew DeVault’s services) disguised as blog posts belong on lobste.rs; maybe this ought to have had a ‘show’ tag?
(Do we have a tag specifically for advertisements? Stuff like this, & stuff like, say, books written by regulars here, can be of interest but a lot of folks would probably like to filter them out.)
“But, consider that the incentives that this approach creates hold us accountable only to the users.”
It at least gave me the opportunity to say that’s totally false. People who want to maximize money are always incentivized to do things that bring in more money. As a direct counter-example, this is why Microsoft plasters ads all over the home screen of players who already paid for both an Xbox and years of subscriptions to Xbox Live. The ads bring in more money on top of what the customers already paid. Likewise, companies selling customers’ information behind their backs to data brokers, surveillance states, etc. The claim that them paying you incentivizes you not to screw them is utterly false and people need to stop repeating that.
What you can say is you’re taking payment only from them to align your incentives with taking care of them. From there, that your personal principles and/or (if applicable) non-profit’s charter will keep you from harming your customers. Aside from principles, you see not harming customers as a positive differentiator for your brand to both gain and keep customers for a long time. That they’re paying you alone doesn’t do anything. It’s your principles and goals making that happen. Whereas, in companies whose leadership is about profit maximization, they make it not happen.
Just to be clear: I think your actions over time argue that you’re telling the truth about what you intend to do. You’re clearly driven by personal principles over profit. It’s just that this is rarely the case, most companies selling to customers have the incentive you described, many screw their customers over anyway, and therefore it’s misinformation to say B2C incentive leads to moral behavior. It’s almost always the leadership at different levels that decide on that.
These are good points, and I agree that simply taking users’ money doesn’t mean that you’re incentivized to support their needs. But I think it’s telling that SourceHut is the only player who isn’t beholden to anyone else, i.e. investors. Additionally, because SourceHut’s finances are public, you can easily see exactly what the budget is and how it’s being spent. These facts don’t present an indisputable statement of SourceHut’s integrity, but they definitely set it apart from its competitors, I think.
I agree about your transparent finances supporting your claims. I love that. Definitely keep it up.
Although not an endorsement, you might find Buffer’s open salaries and revenue dashboard interesting. I’m against the live version given customers’ perceptions of ups and downs might be more negative than what they actually are. Their feedback might also wear employees out. Better to do periodic summaries like you do so their reactions happen when you’re ready for them. What’s in those links might give you ideas, though.
This complaint comes up often for sourcehut blog posts. However, I almost always find them interesting. Just because someone is explaining their choices positively doesn’t mean it’s nothing but an advert. Counter-cultural internet businesses (sourcehut, pinboard, etc.) with the weird idea of making money by selling users a product are incredibly interesting at this point in time.
That’s what my second paragraph was about. While I don’t think it counts in this case (saying something true like “investors never” before an ad doesn’t make it substantially more than ‘just’ an ad, particularly when it’s common sense in circles like this one), there are situations where advertisements are interesting to this audience. Nevertheless, a lot of folks here would like not to be advertised to, even when the ad has some interesting copy.
We have a solution for exactly the problem where some people really want to see a particular type of content & other people really don’t want to see it: tagging.
Pendo gives you things like, in their words, “Analytics to understand what users are doing across their product journey” and “Surveys to capture how users feel about their product experience”.
The straightforward way to explain why GitLab is adding this is that they want to have better insight in how people are using their platform. Regardless of how you feel about the ethics of that – and I don’t think it’s automatically unethical, the devil is in the details – there is real and tangible useful information for the users. If the producer of software understands their users better, then they can create better software. It’s very common for there to be a bit of a disconnect between developers and users. This certainly isn’t tracking people across websites for ad purposes or the like.
The idea that (quoting the article) “these kinds of changes are not implemented with the user in mind - these decisions are more easily explained by following the money” is nothing short of conspiratorial fear-mongering bullshit, and that is my polite description of it. That this is used to advertise a competing service is, in my opinion, highly unethical advertising.
I don’t like “these kinds of changes are not implemented with the user in mind - these decisions are more easily explained by following the money” either but “nothing short of conspiratorial fear-mongering bullshit, and that is my polite description of it” doesn’t sound better to me.
It’s an allegation of malice which is unlikely to be true, presented with no evidence whatsoever, and used to advertise a competing product. It’s classic FUD-based advertising. I could have been far more direct than my previous comment and still been accurate.
Either way, I care too much about the general topic to let complete nonsense like this slide. That’s how communities go off the deep end.
There is overwhelming evidence in support of the claim that any company taking VC money and trying to IPO for billions is going to (a) do stuff bad for its users to get there at various points, or (b) do it after it’s public, following the doctrine of increasing shareholder value at all costs. Plus covering the executives’ and board’s fat checks. The (b) strategies are typically to do bad things to increase revenue or do bad things to cut costs. What drives it is how extreme the financial incentives are when meeting financial goals requires moving tremendous amounts of money.
So, starting with that status quo, you can then do new things with different incentives and principles. That’s what Sourcehut is doing. His advertising goes a bit too far, scaring off some clients for sure. Highlighting his focus on user freedom, the ethics, the lack of tracking, the lack of bloat… things different than most VC-funded products… makes lots of sense given much of his competition aren’t about that stuff.
If anything, any company taking VC money or aiming to make billions needs to prove they’ll not harm customers in the long term, using contracts, open-source software, etc. They should not be believed by default at this point. Hell, it was companies such as IBM, Microsoft, and Oracle that proved we needed such assurances long ago.
I appreciate that you were trying to be polite, but there are definitely more civil ways to say that you a) don’t think something’s true and b) doubt the intentions of the author.
Yeah. How much of your system runs GNU? (Rhetorical question.) Stallman isn’t less politically charged than Yarvin, even before the recent comments; the difference is that Stallman leaned left, which is more accepted in the affluent, intellectually-oriented minds of software people.
I don’t think it’s accurate to describe Stallman as a libertarian. As far as I’m aware, he describes himself as a socialist and holds non-free-software-related political views that are generally consistent with the left wing of American politics, including fiscally. Many of his views on software freedom are consistent with some strains of libertarianism, but I think it’s ultimately a limitation of the “left-right” metaphor when talking about political beliefs to describe him as otherwise socially left, fiscally right.
I don’t know if he does. I remember reading an essay by him claiming that free software is not socialist, but I can’t find it right now. I think it is more likely that he regards himself as more in the classical liberal tradition.
I was wondering how long it would be before someone would chime in with this tired bit of commentary.
Every comment you’ve left in this thread has been inflammatory and off-topic. The Urbit guy wrote some things you don’t like. We get it. The weird thing to me is the most hysterical people who constantly yell at those they deem right-wing also have funny ideas about when prejudice is acceptable, i.e., punching down vs punching up, power dynamics, etc. How much authority or power do you think CY/MM has?
This is a horrendously boring topic, and you have made this forum a more boring place for having brought it up. Please just stop. Let’s focus on technology.
This piece of technology was founded by someone with very strong visions of what society should be, which means there are design decisions in the software made to align with them. In this case, there’s a very strange belief in a feudalistic system where, even if the names were changed from lords/dukes/etc. to galaxies/planets/etc., this influence still exists in the software. I believe it bears mentioning.
Yep. He’s said some horrendous things and excluded a lot of people. I have no interest in this project if it can’t bring itself to align with the values of the community.
Kind of seems like it has, though? Or at least they kicked him out and erased his name from their public intro, probably other places. I guess it’s down to who exactly you mean by “the” community.
I have dark mode enabled on my iPhone SE, but only see the dark colour scheme for this website in Safari, not Firefox (v19.1, the latest version, afaict).
Yeah, I have dark mode enabled in Windows, and I’m running Firefox developer edition (70.0b13), and I don’t see dark mode on this website. I have no idea which of the three is to blame.
So there are like these two schools of thought, the Turing School of Thought that says: “Well the computer is just a device that has like a sort of like a printing head and the reader and so on, and it just moves the tape and infinite tape and moves them.” So this is a very mechanical kind of approach. And then there is this very abstract approaching mathematics.
Oh come on now. Saying the schools of thought are “Turing machines vs mathematics” is silly. Automata theory and formal languages are both topics in math, just as category theory is.
But things in imperative programming are not very well defined. It’s like they don’t really have this nice mathematical structure.
This is also silly. Dynamic logic is part of math. Temporal logic is part of math. Most formally-verified production code uses some form of state machine formalism. State machines are also math.
Anyways if I call something, you know like in object oriented programming, if I call something an object that’s so meaningful, right? Just try to define what you mean by an object in a programming language. Just because you took a word from English language that everybody thinks they understand, that doesn’t mean that an object in C++ for instance, is immediately obvious.
That’s actually often better for beginners. The technically complete answer can wait until they’ve built a mental model.
For a programmer for instance? Like it doesn’t make sense to learn category theory. Will it make you a better programmer? I think it will. It will sort of make you a higher level programmer. You know, it’s like being able to lift yourself above your program. It’s like otherwise you, you know, you’re just like a little ant working in an anthill, right? And the only things you see are the things that are close around you, right? It’s like your never able to like lift yourself above the anthill and see how it’s related to the rest of the world and so on.
Honestly this is what bothers me most about this kind of advocacy. Sure, CT might make you a better programmer… but so do a lot of things. Juggling makes me a better programmer. I can list, off the top of my head, five or six programming lessons I learned from juggling. But I’m not applying juggling techniques when I program. I don’t think anybody who knows CT is applying CT techniques when they program.
When we equate CT with math, we’re saying “learning math makes you a better programmer because it changes how you think”. And sure, that happens. But there’s also a lot of math that’s not CT that, in addition to making you think differently, also gives you powerful techniques to deal with code. Some I’ve run into:
With first-order logic you can really easily express complex specifications at the design level
Breaking tests into equivalence classes improves your test coverage
That’s just the ones I’ve run into directly, where I had the relevant skills. I’ve definitely smashed into probability and graph problems where I knew it would have been much easier by knowing the math, but didn’t. You can learn mathematical thinking and build directly useful skills. So why do we keep talking about math as if it’s just CT?
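To make the equivalence-class bullet concrete, here’s a minimal Python sketch (the `shipping_cost` tiers are invented for illustration, not from the thread): partition the input space into classes the code treats identically, then test one representative per class instead of piles of redundant inputs.

```python
def shipping_cost(weight_kg):
    """Toy example: flat shipping tiers by weight."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1:
        return 5
    if weight_kg <= 10:
        return 12
    return 30

# Four equivalence classes: invalid (<=0), (0,1], (1,10], (10,inf).
# One representative per class exercises every branch.
assert shipping_cost(0.5) == 5
assert shipping_cost(5) == 12
assert shipping_cost(50) == 30
try:
    shipping_cost(-1)
    assert False, "expected ValueError"
except ValueError:
    pass
```

Boundary values (1, 10) are worth adding too, but the class representatives alone already cover every code path once.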
Oh, and the first example of the benefit of CT? “using Semigroups to represent the general notion of collapsing two entities into one”? You don’t need CT for that. That’s first-semester abstract algebra.
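That abstract-algebra point is easy to make concrete without any CT machinery. A minimal Python sketch (the `merge_stats` example is mine, not from the article): a semigroup is just an associative way of collapsing two entities into one, which is why chunks of a data stream can be combined in any grouping.

```python
# A semigroup is a set with an associative binary operation.
# Merging two running summaries of a data stream is a classic instance.

def merge_stats(a, b):
    """Collapse two (count, total, maximum) summaries into one."""
    return (a[0] + b[0], a[1] + b[1], max(a[2], b[2]))

x, y, z = (2, 10, 7), (3, 9, 4), (1, 5, 12)

# Associativity: the grouping doesn't matter, only the operand order,
# so this merge can be parallelized or folded left-to-right freely.
assert merge_stats(merge_stats(x, y), z) == merge_stats(x, merge_stats(y, z))
```

First-semester abstract algebra indeed: nothing categorical is needed to state or use the associativity law.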
I liked your post even though I somewhat disagree with it. I don’t know too much about CT, but it does seem like CT is one level of abstraction above all those other fields of mathematics you mentioned. If your fields are mathematics libraries with classes in them, CT is a library of interfaces. CT is the study of things that compose in myriad ways, and I think that does qualify it as a candidate for the calculus of software engineering.
Just a side note; I think it’s cruel to quote a non-native speaker’s transcript, it makes them sound silly :) When you listen to Bartosz, you don’t mind all the “like”s and he’s very careful in writing, so he writes with perfect English. So these quotes don’t do him justice. This is not a criticism towards you, BTW, you obviously can’t quote sound in text. Just an observation.
Agreed on CT. People communicate better when they share models. CT, even if never explicitly used, is a robust shared understanding for software design. You know that you have that understanding when people bring up an instance of something that is, say, monoidal, and they can call to mind other instances and see the connection.
Yes, but less than it might have. Our discipline doesn’t do as well as it could with cross-generational knowledge transfer. A lot of this is due to rapid growth and fragmentation of education.
But I’m not applying juggling techniques when I program. I don’t think anybody who knows CT is applying CT techniques when they program.
I disagree; many CT techniques are directly applicable to programming. Functors, applicative/monoidal functors, monads, comonads, &c are the standard examples. My personal favorite is the categorical semantics of the simply-typed lambda calculus, which I use when writing compilers; it corresponds pretty directly to Oleg’s “final style”. But there are a few caveats.
First, you don’t have to learn CT concepts from a math perspective; you can learn them from a programming perspective. I’ve gone both ways; I learned Haskell functors before category-theory ones, but denotational semantics before final style. Nor are these ideas necessary to write any particular program.
Second, some of these techniques can be pretty niche (although not all; functors are everywhere). Denotational semantics, for example, is probably only useful if you’re implementing your own language or DSL. That said, many problems can be fruitfully viewed from this angle if you try. Conal Elliott, for example, has made a cottage industry out of applying simple category theory & denotational semantics to all sorts of problems, from animation to compilation to neural nets to Fourier transforms.
But there’s also a lot of math that’s not CT that, in addition to making you think differently, also gives you powerful techniques to deal with code.
This is certainly true!
However, calling any particular piece of math “not CT” is dangerous; almost any math can be viewed from a categorical lens. I think this is part of why category theory can feel so evangelical: for some people, the categorical lens just clicks really well. They apply it to everything, and because it works so well for them, they promote it as “the key” to understanding math and programming (and thought and…).
(The way category theory “just clicks” for some people reminds me of how some functional programmers get annoyed that Leibniz and big-O notation aren’t clear about variable binding structure and wish everybody just used λ-notation.)
But the reverse is true as well! Almost any use of a CT idea can be viewed from a non-categorical angle. Category theory is relatively recent mathematically; one of the things that mathematicians like about it is that it gives a unifying framework for understanding many patterns that cropped up in earlier mathematical work. But just as these earlier results were discovered without CT, they can be understood without it; there are relatively few results which were discovered only by categorical means.
So it’s hard to give concrete examples of the “advantages” of using CT, because any concrete example is subject to quibbling about whether it’s really category theory or not.
Oh come on now. Saying the schools of thought are “Turing machines vs mathematics” is silly. Automata theory and formal languages are both topics in math, just as category theory is.
To reiterate your point, Turing was familiar with set theory. The celebrated halting problem was inspired by Cantor’s diagonal argument.
Also, Turing’s oracle machines connect to both first order arithmetic and set theory. The connection to first order arithmetic is via the Kleene–Mostowski hierarchy. The connection to set theory is via the Borel hierarchy. These connections are the topic of descriptive set theory.
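The diagonal construction behind the halting problem can even be sketched in code — a hedged illustration, not a formal proof, with `diagonal` and the toy oracles being my own names for exposition:

```python
def diagonal(halts):
    """Given any claimed halting oracle, build the program it misjudges."""
    def paradox():
        if halts(paradox):
            while True:      # oracle said "halts" -> loop forever
                pass
        return "halted"      # oracle said "loops" -> halt immediately
    return paradox

# A candidate oracle that claims every program loops:
claims_loops = lambda f: False
p = diagonal(claims_loops)
assert p() == "halted"   # p halts, so this oracle is wrong about p

# An oracle claiming every program halts is wrong too: its diagonal
# program would loop forever (so we don't run that one here).
```

Just as in Cantor’s argument, the constructed program differs from every row of the would-be “halting table” at the diagonal, so no total `halts` can exist.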
So why do we keep talking about math as if it’s just CT?
Category theorists have long hoped for CT to be the foundation of mathematics. Paul Taylor and William Lawvere have advocated this point of view extensively.
FWIW, in my amateur work in formally verifying my proofs, I’ve never needed CT. I just used Isabelle/HOL to formalize sentential logic and measure theory. I’ve occasionally needed to reach for Zorn’s lemma.
In my professional work in Haskell, I try to avoid category theory. This is because it makes the code hard to read.
Sometimes I explore CT in Haskell. I don’t often find conventional CT helpful. In conventional CT, categories are enriched in Set. In Haskell, it’s better to think of instances of Category as being enriched in Haskell itself.
In addition, textbook CT constructions often fail.
RE: the blogging experiment, I think you could just link to a tweet thread to better effect. The literal HTML entities and lack of distinct formatting between content and authors are a bit tough on my eyes. But I did enjoy the responsive email quotes.
Most headsets aren’t really ready for interacting with small text for extended periods yet. The Rift S is usable for in-game interfaces, and the screen-door effect isn’t very noticeable, but there’s always a “sweet spot” you need to find with headset adjustment to make text legible, and it’s not really as comfortable as just looking at a regular monitor.
Maybe the HP Reverb is approaching usable, but I haven’t tried it. I’m excited at the possibilities but still skeptical short-term.
Yeah, something that makes me less sanguine than I was previously is a comment by Carmack in last year’s Oculus keynote where he says that previously phones were driving small displays to be better and better, but now they’ve reached a point where quality improvements go unnoticed by consumers, so “VR companies will have to foot the bill” for higher PPI screens.
Re. this section:
What I’ve seen more often is devs who want more autonomy, and (project/product/people) managers who want their devs to work on what they (or the team as a whole) have decided is the right thing to work on right now, and to focus on that and get it done.
I think that (generally!) the right thing to do as a dev is not to hide their activity (working on what they see as actually important) but make the case for the business value of their work and switch teams or companies if they just can’t agree.
This is mine! Keen for feedback :)
I guess that’s what you get if you’re hosting it like that. Also I don’t like the missing mouseover for links and I think I can sum it up as: not a fan of Google Docs. But definitely a creative solution.
The black text on a dark blue background was difficult to read on my iPhone.
Would love to see a “how Mozilla could have survived” article. Was it really the result of bad decision-making, or was it basically inevitable given what they were up against? (Note: not asking about whether or not they made any bad decisions.)
I often wonder what would have happened had they not given up on FirefoxOS. Non-technical users do not care about browsers, they won’t install an alternative one unless forced to. This means that for people to use Firefox, it’d have to come bundled with the hardware they purchase. FirefoxOS was a way to get Gecko running on lots of mobile platforms.
Thanks for this. I’ve been planning to use this on all future side-projects, and your notes will make me pause and think more carefully about it.
Unfortunately, I learned about kerning and kerning is impossible to do even decently with monospace fonts.
Kerning is useless for monospaced fonts, almost by definition.
Kerning is so that combinations like “AV” don’t have a wide space between them.
AV
will have that, because the horizontal space taken up by each character is the same.

There are advantages to kerning, and you miss out on them with monospaced fonts. Obviously you gain other benefits while writing code with monospaced fonts, but for prose? Not so clear.
Ditto. Maybe it’s just me, but I find it very easy to lose my place when reading monospace text.
So don’t do kerning? Not sure if there are any readability studies or something that you’re thinking about but as a programmer I am also happy to read articles in monospaced font.
Monospaced fonts make prose objectively harder to read. They’re an inappropriate choice for body text, unless you’re trying to make a specific e.g. stylistic statement.
Do you have any links to some studies about it? I’m wondering since you’ve used the term “objectively”, which I find confusing, since I’m not impacted by monospaced formatting at all. Film scripts are written in monospaced fonts, and books were too (at least in the manual typewriter days); I think this wouldn’t be the case if monospaced fonts were objectively harder to read?
This is a subject that has been studied for a long time. A quick search turned up Typeface features and legibility research, but there is a lot more out there on this topic.
The late Bill Hill at Microsoft has a range of interesting videos on ClearType.
Your first link was fascinating, thanks!
Manuscripts and drafts are not the end product of a screenplay or a book. They’re specialized products intended for specialized audiences.
There are no works intended for a mainstream audience that are set in a monospaced typeface that I know of. If a significant proportion of the population found it easier to read monospaced, that market would be addressed - for example, in primary education.
The market could prefer variable-width fonts because monospaced ones are wider, increasing the space taken up by the text, which in turn increases production cost. This alone could carry more weight for the market than actual ease of reading. The greater text compression achieved by variable-width fonts could improve reading speed for readers with healthy vision, but that isn’t so obvious for people with vision disabilities.
Since at least some research papers attribute superiority of variable width font to the horizontal compression of the text – which positively influences the reading speed and doesn’t require as many eye movements – I’m wondering if the ‘readability’ of monospaced typefaces can be improved with clever kerning instead of changing the actual width of the letters.
The excerpt from the paper above suggests that the superiority of variable width vs monospaced isn’t as crushing as one could think when reading that human preference for variable width is an “objective truth”.
Also, the question was if monospaced fonts are really harder to read than variable fonts, not if monospaced fonts are easier to read. I think there are no meaningful differences between both styles.
So it’s more readable and costs less? No wonder monospaced fonts lose out.
I’d love to read the paper you’ve referenced, but cannot find a link in your comment.
Low quality trolling.
They could be paywalled. I’ve provided the names of the papers plus authors; everyone should be able to find them on the internet.
What?! I put a lot of effort into my trolling!
(To be honest: you’re right and I apologize. It was a cheap shot).
I found the first paper (https://www.ncbi.nlm.nih.gov/pubmed/2231111), and while I didn’t read it all, I found a link to a font that’s designed to be easier to read for people who suffer from macular degeneration (like my wife). The font (Maxular) shares some design cues with monospace fonts, but critically, wide characters (like m and w) are wider than narrow ones, like i.
That’s what I think is a big problem with monospaced fonts, at small sizes characters like m and w get very compressed and are hard to distinguish.
I also tried to code with a variable width font. It works ok with Lisp code but not the others. The tricky part is aligning stuff. You need elastic tabstops.
Oh, wow. That’s a cool idea. Yeah, that might be enough.
Very cool idea, but that means using actual tabs in a file, and I know a lot of programmers who hate tabs in files.
Good point. I think the cases when different sized tabs would cause problems should also cause problems with a system like this.
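The elastic-tabstops idea mentioned above can be sketched in a few lines of Python — a simplified illustration of the concept (not Gravgaard’s actual algorithm, and `elastic_align` is my own name): within a block of consecutive lines, every tab-delimited column is padded to the width of its widest cell.

```python
def elastic_align(lines):
    """Align tab-separated cells so each column in a block of
    consecutive lines is as wide as its widest cell (plus padding)."""
    rows = [line.split("\t") for line in lines]
    ncols = max(len(r) for r in rows)
    # Column width = widest cell among cells that are followed by a tab.
    widths = [
        max((len(r[c]) for r in rows if c < len(r) - 1), default=0) + 2
        for c in range(ncols)
    ]
    out = []
    for r in rows:
        # Pad every cell except the last; the last cell ends the line.
        cells = [cell.ljust(widths[c]) for c, cell in enumerate(r[:-1])]
        out.append("".join(cells) + r[-1])
    return out

# Columns line up regardless of cell width:
assert elastic_align(["foo\tbar", "longer\tbaz"]) == \
    ["foo     bar", "longer  baz"]
```

The real scheme recomputes widths per contiguous block as you edit, which is what makes alignment survive changes to any one cell; a proportional-font version would measure rendered widths instead of character counts.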
I don’t find bottom’s graph (that one that displays usage for each individual core) to be very informative in your screenshot. Maybe they weren’t designing for so many cores :)
Yes possibly. I figured it was a good machine to test them on for that reason. :-)
I feel like the “dot-map” one does the best job. You get a quick and accurate view of which regions are crammed with infected people and which aren’t, but also where within each region. I’m surprised it’s not used more (I’ve never seen one before this).
This is wrong: points are spread randomly within the province, they don’t correlate to actual people or cases. This makes the dot-density map really misleading, and this comment is evidence.
I mentioned I was surprised. :p
I don’t like the dot one. It doesn’t correct for population. It implies that cases are spread uniformly over the area, but it’s clear that that assumption isn’t even close to correct. Cases are undoubtedly heavily weighted towards cities just because that’s where the people are, different parts of the province are almost certainly faring differently, etc.
I particularly don’t think it’s responsible to make it look like the entire area is doing the same when we are talking about something like a virus. People use the press to decide how they should respond to the virus, and if they are in the region, local variations matter. Note that this is different from the other maps that make it clear that we don’t have fine grained enough data to divide up the region, but don’t suggest the absence of local variation.
For this data in particular, glhaynes also has a very good point that it looks like the color has been inverted or something. I was similarly initially confused, removing the label might help.
Replying to:
Don’t you think it’s more useful to report an estimation of where an infection happened, rather than saying “we don’t know, somewhere in this massive area”?
And this is what the dot density map implies anyway? Or are we talking about two different maps? X) I’m seeing very very sparse dots in non cities and dense as hell in cities…?
I feel like we must be interpreting the map differently, or looking at different things.
What I’m seeing, in this image, is a map divided into a bunch of provinces. Provinces contain both rural and urban areas. Within each province dots are spread uniformly at random, i.e. any point within the province is equally likely to have a dot no matter the local population density, and the local propensity to have the virus relative to the rest of the province.
Oh, they are placed randomly in the province? :( Ok, that does suck.
I had a little trouble with the dot density map: because the dots were so dense in Hubei Province as to almost entirely fill it, I initially read Hubei as being rendered in inverse color for emphasis, leading me to interpret the small remaining non-dotted negative spaces there as being its “dots”.
Probably not a problem in a higher-resolution setting — or for smarter viewers!
Even then, though, it could easily be interpreted as showing a harder boundary at the province level than actually exists — as though the province was basically filled with cases that abruptly end right at the edge. That seems particularly likely to mislead a viewer in this case into thinking this is showing the effect of a hard quarantine right on the province boundary.
FWIW I experienced the exact same thing
I’ve used Loggly for the last 5 years or so and find it much better to use than self hosted Kibana.
Excellent - I wish more effort was spent in general on trying to replicate widely-cited papers.
Agreed. This is also a topic covered in the talk Intro to Empirical Software Engineering. It’s amazing how much we think we know, vs how much we actually do.
Upvoted not because I think this is a good idea, but because I’m curious to get others’ opinions on it.
This seems like a terrible, terrible idea. It’s yet another way of soft-forcing Google’s hegemony on the Web. Specifically:
I’m pretty sure this actually means “badging is intended to identify when sites are authored in a way that makes them slow on Chrome…”
And this isn’t like flagging a site that has an expired certificate or something. That is a legitimate security concern. This is making a value judgement on the content of the site itself, and making it seem like there’s something fundamentally wrong with the site if Google doesn’t like it.
Nope.
I’m with you.
And further, “badging is intended to identify when sites are authored without using AMP” - or whatever else Google tries to force people into using.
Seems like yet another way for Google to pretend to care whilst pushing their own agenda.
They refer to two tools, one a Chrome extension (Lighthouse) that I didn’t bother to install, the other a website (PageSpeed Insights). I went for the latter to test a page that has no AMP or other “Google friendly” fluff and is otherwise quite light weight (https://www.coreboot.org) and got a few recommendations on what to improve: Compress some assets, set up caching policies and defer some javascript.
If that’s all they want, that seems fair to me.
(Disclosure: I work on Chrome OS firmware, but I have no insights in what the browser level folks are doing)
Yeah, within 2 seconds of loading this link I was worried they were just pushing AMP. If they really are just pushing best practices then I’m cautiously optimistic about this change, and the fact that they didn’t mention AMP and instead linked to those tools gives me hope… but it’s Google, so who knows.
I’m definitely still wary. I personally feel that Google sticking badges on websites they approve of is never going to end well, regardless of how scientific it may seem at the beginning.
I really feel like there are major parallels to be drawn between Google and the rules of Animal Farm.
But that’s not all they want. Google and Chrome are now positioning themselves to visually tell users whether or not a site is “slow” (according to whatever metrics Google wants, which can of course change over time). As with most Google things, it will probably look reasonable on the surface, but long term just result in Google having even more control over websites and what they can and can’t do.
I would agree with you if it weren’t for Google’s long history of questionable decisions and abuses of their position as the (effective) gate keeper of the web,
QUIC?
Not to mention that and SPDY being the basis of the “next” versions of HTTP (HTTP/2 and HTTP/3) which will no doubt be rabidly pushed for.
Are there technical problems with HTTP/{2,3} or are you just worried because Google created them?
Specifically the fact Google created them. As tinfoil-rambling as it sounds, they already deal out a rather extensive spying/targeted advertising network, a possibly-manipulable gateway to information, the most popular video service, the most popular browser, some rather popular programming languages (Go, Dart), power a large portion of Web services (Chrome’s V8 in Node.js, that and Blink for Electron), many alternative browsers being Chromium/Blink-based, the AMP kerfuffle, ReCAPTCHA, and maybe the future protocols as to how their vision of the Web works.
They keep encompassing more and more parts of the Web, both technical and nontechnical, and people keep guzzling that down like the newest Slurm flavor. That’s what worries me the most.
http/2 and http/3 are the result of a multi-party standardization process. It’s not SPDY with just a new label.
Yes.
Generally I’m against prejudging companies like this, but Google has earned it, and then some.
Some sites that I use have Recaptcha. I use Firefox, and I can only pass the captcha if I log into Gmail first. Honestly, what kind of Orwellian horseshit is that?
So, I can at least comprehend this one, even if I hate it.
The whole point of recaptcha is to make it hard to pass unless you can prove you’re a real person. Being logged in to a google account which is actively used for ‘real things’ (and does not attempt too many captchas) is a really hard-to-forge signal.
What’s an example of a site that loads fast in Chrome but slowly in Firefox?
Well, given what’s happening here…any site Google decides.
To be less pithy: this could be used as an embrace-extend-extinguish cycle. Sure, right now it’s all just general best-practices but what if later it’s “well, this site isn’t as fast as it could be because it’s not using Google’s extensions that only work in Chrome that would make it 0.49% faster, so we’ll flag it.”
I’m not saying Google is definitely going to do that, but…I don’t like them making this sort of determination for me.
It gives soft pressure to conform to Google’s “standards” whatever they may be. No website is going to want to have a big popup before it loads saying “This site is slow!” so they’ll do whatever it takes to have that not happen, including neglecting support for non-Chrome browsers.
I don’t know if it’s still the case, but at one point YouTube deployed a site redesign that was based on a draft standard that Chrome implemented and Firefox did not (Firefox only supported the final, approved standard). As a result, the page loaded quickly in Chrome, but on Firefox it downloaded an additional JS polyfill, making the page noticeably slower to render.
How about Slack video calls? Those are a “loads never” in my book. (Still annoyed about that.)
I guess the question is, how much of a big deal is the performance delta in practise? Do you just close the browser after you’re done?
I shut down my desktop overnight, and generally shut down my laptop when I’m done using it (starting from a blank slate when I’m going to do something completely different, such as transitioning from play to work, is nice imo). That means that I’ll generally start Firefox from a fresh boot at least once a day, usually multiple times per day. I’m probably not the only one.
Also, it has become common for DEs and WMs to lack minimize buttons; neither GNOME nor Pantheon has them by default anymore, afaik, and the concept of minimizing doesn’t even make sense in a tiling WM. That means that closing applications when you’re not using them is probably becoming more common over time.
Removal of minimize buttons from non-tiling desktop environments is just a misplaced unification with mobile devices (where those environments like GNOME don’t work in the first place and have no reason to unify anything with them) and should not be tolerated.
I wasn’t saying it’s good, I was bringing it up as a really relevant reason for why someone might close their applications when not using them.
I use Awesome and it supports minimising. I use it semi regularly. Sometimes a desktop gets a bit busy so I minimise a window with Super-n. Later Super-Ctrl-n brings it back. Handy for terminals that are running dev servers and the like too.
Sounds like the ‘scratchpad’ in i3wm.
I find tmux to be a much better solution for this; you get a bunch of other handy features for free too, like being able to resume from an SSH session.
We might be talking about different things. I was referring to a dev server in the sense of a Rails application server that is running for local development or a zola instance when writing a blog post. In both cases there is no ssh involved. Is that what you meant?

I think he means that instead of minimising the X window he would detach from the tmux session or move to a different tmux “window”.
I think he was just mentioning the SSH thing as an extra benefit to using tmux in general.
No, that’s what I mean. If I start a dev server on one computer and walk away, then decide later I want to keep working on it, I can continue my work from another computer without being physically present at the original machine.
In tiling WMs there’s no need to minimize since the windows will just be parked behind active ones. (I’m only speaking for i3, StumpWM and EXWM. I’m not sure what dynamic[1] tiling WMs like xmonad do.)
[1] This is just a classification I made up on the spot, but static tiling WMs like StumpWM and EXWM have fixed tiles arranged by the user wherein new windows are placed. Dynamic tiling WMs create a new tile to put a new window in. i3 is somewhere in between.
You don’t have minimise buttons but you do have multiple desktops, the solution there is to park windows on an off-screen desktop and move them back when you want to interact with them. This is what I do with Xmonad where I use the last desktop (#9) as a ‘parking space’. I move windows there by pressing Shift-Alt-9 and retrieve them with Alt-9 (which moves to the ‘parking lot’) followed by Shift-Alt-1 (to move the window to desktop #1) or something similar.

Sure, that works, but that workspace 9 becomes really messy if you do that with every application you ever open. It’s not like in, say, Ubuntu (which enables minimize buttons in GNOME) or Windows where you can have as many applications open and minimized as you want and the only effect is an extra icon in a panel.
Also, in i3 (or sway, which is what I use), multiple monitors share the same pool of workspaces; when I have two screens, I usually allocate workspaces from 1 and up on the left screen and from 10 and down on the right screen. I sometimes use workspace 5 as a “parking space”, but that also means you have to decide which screen you want your parking space to belong to - or you could use workspace 5 for the left screen’s parking space and workspace 6 for the right screens, but then you have to remember the screen you last used the application on.
Sometimes it’s just easier to close the application, you know?
It doesn’t matter if #9 gets messy as it is only used as a ‘parking lot’. I don’t use it all that much so it is a rare event for there to be more than 2 windows on that desktop. Xmonad also uses a shared pool (or can use a shared pool, the thing is configurable to the hilt so it is really up to the user to decide how to handle this) but there is no fixed relation between monitor position and desktop number - I can pull up #9 on any monitor, move a window to whatever desktop I had open on that monitor and switch back. Normally desktop #9 is invisible.
Based on the satire tag I assume the creator doesn’t actually care about the responses to the question. I flagged this as spam, I don’t think we need this kind of thing.

When does a post cross the line from “I don’t want to see more content like this so will not upvote it” to “this should be flagged as spam”? For you, personally, I mean.
It’s harder to define in general, but when people are creating copycat throwaway posts that they don’t care about I think that’s effectively spamming the site.
Out of all of the “What is your X” setup posts, this is the only one that the poster didn’t seem to care about, but I still think the commentary adds more than it takes away, overall.
This is the frustrating thing to me about this article (which is a good writeup).
Near the beginning:
And the end:
Klabnik took a year to shoot out an essay that is super easy to misinterpret as “C isn’t how computers really work (and so we shouldn’t learn it!)” and that has become a not-uncommon meme in the Rust Evangelism Strike Force.
A year later, he recants, sorta kinda–but in the meantime, what happened? How much stupid argumentation resulted from this? How many C developers had to put up with erroneous Rust fanboyism on that point alone?
Folks, please please please be deliberate when writing for your communities on these things.
Can you link to a specific example or two?
A year of IRC and twitter interactions, though I’m sure you could find comments on HN or even here if you looked.
As much as I think an “investors never” policy is an ethically good one, I don’t think straight advertisements (even for Drew DeVault’s services) disguised as blog posts belong on lobste.rs; maybe this ought to have had a ‘show’ tag?
(Do we have a tag specifically for advertisements? Stuff like this, & stuff like, say, books written by regulars here, can be of interest but a lot of folks would probably like to filter them out.)
I agree that this doesn’t really need to be here.
“But, consider that the incentives that this approach creates hold us accountable only to the users.”
It at least gave me the opportunity to say that’s totally false. People who want to maximize money are always incentivized to do things that bring in more money. As a direct counter-example, this is why Microsoft plasters ads all over the home screen of players who already paid for both an Xbox and years of subscriptions to Xbox Live. The ads bring in more money on top of what the customers already paid. Likewise, companies selling customers’ information behind their backs to data brokers, surveillance states, etc. The claim that them paying you incentivizes you not to screw them is utterly false and people need to stop repeating that.
What you can say is you’re taking payment only from them to align your incentives with taking care of them. From there, that your personal principles and/or (if applicable) non-profit’s charter will keep you from harming your customers. Aside from principles, you see not harming customers as a positive differentiator for your brand to both gain and keep customers for a long time. That they’re paying you alone doesn’t do anything. It’s your principles and goals making that happen. Whereas, in companies whose leadership is about profit maximization, they make it not happen.
Just to be clear: I think your actions over time argue that you’re telling the truth about what you intend to do. You’re clearly driven by personal principles over profit. It’s just that this is rarely the case, most companies selling to customers have the incentive you described, many screw their customers over anyway, and therefore it’s misinformation to say B2C incentive leads to moral behavior. It’s almost always the leadership at different levels that decide on that.
These are good points, and I agree that simply taking users’ money doesn’t mean that you’re incentivized to support their needs. But I think it’s telling that SourceHut is the only player who isn’t beholden to anyone else, i.e. investors. Additionally, because the SourceHut finances are public, you can easily see exactly what the budget is and how it’s being spent. These facts don’t present an indisputable statement of SourceHut’s integrity, but they definitely set it apart from its competitors, I think.
I agree about your transparent finances supporting your claims. I love that. Definitely keep it up.
Although not an endorsement, you might find Buffer’s open salaries and revenue dashboard interesting. I’m against the live version given customers’ perceptions of ups and downs might be more negative than what they actually are. Their feedback might also wear employees out. Better to do periodic summaries like you do so their reactions happen when you’re ready for them. What’s in those links might give you ideas, though.
This complaint comes up often for sourcehut blog posts. However, I almost always find them interesting. Just because someone is explaining their choices positively doesn’t mean it’s nothing but an advert. Counter-cultural internet businesses (sourcehut, pinboard, etc.) with the weird idea of making money by selling users a product are incredibly interesting at this point in time.
That’s what my second paragraph was about. While I don’t think it counts in this case (saying something true like “investors never” before an ad doesn’t make it substantially more than ‘just’ an ad, particularly when it’s common sense in most circles like this is), there are situations where things that are advertisements are interesting to this audience. Nevertheless, a lot of folks here would like not to be advertised to even when the ad has some interesting copy.
We have a solution for exactly the problem where some people really want to see a particular type of content & other people really don’t want to see it: tagging.
“Advertising” is a pretty fuzzy label. In a very real sense all blog posts are advertising for a brand, even if it’s the author’s “personal brand”.
Pendo gives you things like, in their words, “Analytics to understand what users are doing across their product journey” and “Surveys to capture how users feel about their product experience”.
The straightforward way to explain why GitLab is adding this is that they want to have better insight in how people are using their platform. Regardless of how you feel about the ethics of that – and I don’t think it’s automatically unethical, the devil is in the details – there is real and tangible useful information for the users. If the producer of software understands their users better, then they can create better software. It’s very common for there to be a bit of a disconnect between developers and users. This certainly isn’t tracking people across websites for ad purposes or the like.
The idea that (quoting the article) “these kinds of changes are not implemented with the user in mind - these decisions are more easily explained by following the money” is nothing short of conspiratorial fear-mongering bullshit, and that is my polite description of it. That this is used to advertise a competing service is, in my opinion, highly unethical advertising.
I don’t like “these kinds of changes are not implemented with the user in mind - these decisions are more easily explained by following the money” either but “nothing short of conspiratorial fear-mongering bullshit, and that is my polite description of it” doesn’t sound better to me.
It’s an allegation of malice which is unlikely to be true, presented with no evidence whatsoever, and used to advertise a competing product. It’s classic FUD-based advertising. I could have been far more direct than my previous comment and still be accurate.
Either way, I care too much about the general topic to let complete nonsense like this slide. That’s how communities go off the deep end.
There is overwhelming evidence in support of the claim that any company taking VC money and trying to IPO for billions is going to (a) do stuff bad for its users to get there at various points or (b) do it after it’s public, following the doctrine of increasing shareholder value at all costs - plus covering the executives’ and board’s fat checks. The (b) strategies are typically to do bad things to increase revenue or to cut costs. What drives it is how extreme the financial incentives are when meeting financial goals requires moving tremendous amounts of money.
So, starting with that status quo, you can then do new things with different incentives and principles. That’s what Sourcehut is doing. His advertising goes a bit too far, scaring off some clients for sure. Highlighting his focus on user freedom, the ethics, the lack of tracking, the lack of bloat… things different than most VC-funded products… makes lots of sense given much of his competition aren’t about that stuff.
If anything, any company taking VC money or aiming to make billions needs to be proving they’ll not harm customers in long-term using contracts, open-source software, etc. They should not be believed by default at this point. Hell, it was companies such as IBM, Microsoft, and Oracle that proved we needed assurances that long ago.
I appreciate that you were trying to be polite, but there are definitely more civil ways to say that you a) don’t think something’s true and b) doubt the intentions of the author.
Oh boy, even more to add to this.
The guy who created it, Curtis Yarvin is extremely right-wing and has been described as “the Alt right’s favorite philosophy instructor”.
Big oof.
Personally I’m fine with using, discussing and even contributing to a software system created by someone whose politics I don’t agree with.
P.S. for anyone interested in a thoughtful, thorough criticism of Yarvin’s political philosophy, I recommend this Slate Star Codex article: https://slatestarcodex.com/2013/10/20/the-anti-reactionary-faq/
Yeah. How much of your system runs GNU? (Rhetorical question.) Stallman isn’t less politically charged than Yarvin, even before the recent comments; the difference is that Stallman leaned left, which is more accepted in the affluent, intellectually-oriented minds of software people.
From what I’ve seen Stallman leaned libertarian (socially left, fiscally right).
I don’t think it’s accurate to describe Stallman as a libertarian. As far as I’m aware, he describes himself as a socialist and holds non-free-software-related political views that are generally consistent with the left wing of American politics, including fiscally. Many of his views on software freedom are consistent with some strains of libertarianism, but I think it’s ultimately a limitation of the “left-right” metaphor when talking about political beliefs to describe him as otherwise socially left, fiscally right.
I don’t know if he does. I remember reading an essay by him claiming that free software is not socialist, but I can’t find it right now. I think is more likely that he regards himself as more in the classical liberal tradition.
Stallman is a democratic socialist, historically supporting the US’s Green Party and Bernie Sanders.
I was wondering how long it would be before someone would chime in with this tired bit of commentary.
Every comment you’ve left in this thread has been inflammatory and off-topic. The Urbit guy wrote some things you don’t like. We get it. The weird thing to me is the most hysterical people who constantly yell at those they deem right-wing also have funny ideas about when prejudice is acceptable, i.e., punching down vs punching up, power dynamics, etc. How much authority or power do you think CY/MM has?
This is a horrendously boring topic, and you have made this forum a more boring place for having brought it up. Please just stop. Let’s focus on technology.
The piece of technology was founded by someone with very strong visions as to what society should be, which means that in the software, there are design decisions being made to align with this. In this case, there’s a very strange belief in a feudalistic system where - even if the names were changed from lords/dukes/etc. to galaxies/planets/etc. - this influence still exists in the software. I believe it bears mentioning.
Yep. He’s said some horrendous things and excluded a lot of people. I have no interest in this project if it can’t bring itself to align with the values of the community.
The tendency to universalize one’s values creates a problematic blind spot for a great many people
Kind of seems like it has, though? Or at least they kicked him out and erased his name from their public intro, probably other places. I guess it’s down to who exactly you mean by “the” community.
I have dark mode enabled on my iPhone SE, but only see the dark colour scheme for this website in Safari, not Firefox (v19.1, the latest version, afaict).
Yeah, I have dark mode enabled in Windows, and I’m running Firefox Developer Edition (70.0b13), and I don’t see dark mode on this website. I have no idea which of the three is to blame.
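For context on how the site side of this works: a page opts into dark mode with the prefers-color-scheme media query, and it only takes effect if the browser actually exposes the OS setting to pages. A minimal sketch of what such a site’s stylesheet would contain (the selectors and colors are illustrative, not this site’s actual CSS):

```css
/* Applied only when the browser reports a dark-mode preference. */
@media (prefers-color-scheme: dark) {
  body {
    background: #111;
    color: #eee;
  }
}
```

So when the same page renders light in one browser and dark in another on the same machine, the first thing to check is whether that browser version supports and exposes prefers-color-scheme at all.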
Oh come on now. Saying the schools of thought are “Turing machines vs mathematics” is silly. Automata theory and formal languages are both topics in math, just as category theory is.
This is also silly. Dynamic logic is part of math. Temporal logic is part of math. Most formally-verified production code uses some form of state machine formalism. State machines are also math.
That’s actually often better for beginners. The technically complete answer can wait until they’ve built a mental model.
Honestly this is what bothers me most about this kind of advocacy. Sure, CT might make you a better programmer… but so do a lot of things. Juggling makes me a better programmer. I can list, off the top of my head, five or six programming lessons I learned from juggling. But I’m not applying juggling techniques when I program. I don’t think anybody who knows CT is applying CT techniques when they program.
When we equate CT with math, we’re saying “learning math makes you a better programmer because it changes how you think”. And sure, that happens. But there’s also a lot of math that’s not CT that, in addition to making you think differently, also gives you powerful techniques to deal with code. Some I’ve run into:
That’s just the ones I’ve run into directly, where I had the relevant skills. I’ve definitely smashed into probability and graph problems where I knew it would have been much easier by knowing the math, but didn’t. You can learn mathematical thinking and build directly useful skills. So why do we keep talking about math as if it’s just CT?
Oh, and the first example of the benefit of CT? “using Semigroups to represent the general notion of collapsing two entities into one”? You don’t need CT for that. That’s first-semester abstract algebra.
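To make the semigroup point concrete: the whole abstraction is “a set with an associative binary operation”, and “collapsing two entities into one” generalizes to collapsing any non-empty sequence with a fold. A sketch in Python (the name sconcat is borrowed from Haskell’s Data.Semigroup; nothing here comes from a library beyond functools):

```python
from functools import reduce

def sconcat(combine, items):
    """Collapse a non-empty sequence with an associative operation.

    `combine` is assumed to be associative; that is the entire
    semigroup requirement, and it is what makes the grouping of
    the fold (and hence parallel splitting) irrelevant.
    """
    return reduce(combine, items)

# Three unrelated semigroups, one collapsing function:
print(sconcat(lambda a, b: a + b, [1, 2, 3, 4]))               # 10
print(sconcat(max, [3, 1, 4, 1, 5]))                           # 5
print(sconcat(lambda a, b: {**a, **b}, [{"x": 1}, {"y": 2}]))  # {'x': 1, 'y': 2}
```

The associativity law is what does the work: because grouping doesn’t matter, semigroup-shaped operations can be split, parallelized, and combined incrementally without changing the result.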
I liked your post even though I somewhat disagree with it. I don’t know too much about CT, but it does seem like CT is one level of abstraction above all those other fields of mathematics you mentioned. If your fields are mathematics libraries with classes in them, CT is a library of interfaces. CT is the study of things that compose in myriad ways, and I think that does qualify it as a candidate for the calculus of software engineering.
Just a side note; I think it’s cruel to quote a non-native speaker’s transcript, it makes them sound silly :) When you listen to Bartosz, you don’t mind all the “like”s and he’s very careful in writing, so he writes with perfect English. So these quotes don’t do him justice. This is not a criticism towards you, BTW, you obviously can’t quote sound in text. Just an observation.
Agreed on CT. People communicate better when they share models. CT, even if never explicitly used, is a robust shared understanding for software design. You know that you have that understanding when people bring up an instance of something that is, say, monoidal, and they can call to mind other instances and see the connection.
We (the SE world) did attempt pattern languages before. Do you think it has helped?
Yes, but less than it might have. Our discipline doesn’t do as well as it could with cross-generational knowledge transfer. A lot of this is due to rapid growth and fragmentation of education.
To be honest I don’t think anybody, ever, has sounded good in a transcript :P
I think Carmack is one exception. He speaks as well as many people write.
I disagree; many CT techniques are directly applicable to programming. Functors, applicative/monoidal functors, monads, comonads, &c are the standard examples. My personal favorite is the categorical semantics of the simply-typed lambda calculus, which I use when writing compilers; it corresponds pretty directly to Oleg’s “final style”. But there are a few caveats.
First, you don’t have to learn CT concepts from a math perspective; you can learn them from a programming perspective. I’ve gone both ways; I learned Haskell functors before category-theory ones, but denotational semantics before final style. Nor are these ideas necessary to write any particular program.
Second, some of these techniques can be pretty niche (although not all; functors are everywhere). Denotational semantics, for example, is probably only useful if you’re implementing your own language or DSL. That said, many problems can be fruitfully viewed from this angle if you try. Conal Elliott, for example, has made a cottage industry out of applying simple category theory & denotational semantics to all sorts of problems, from animation to compilation to neural nets to fourier transforms.
This is certainly true!
However, calling any particular piece of math “not CT” is dangerous; almost any math can be viewed from a categorical lens. I think this is part of why category theory can feel so evangelical: for some people, the categorical lens just clicks really well. They apply it to everything, and because it works so well for them, they promote it as “the key” to understanding math and programming (and thought and…).
(The way category theory “just clicks” for some people reminds me of how some functional programmers get annoyed that Leibniz and big-O notation aren’t clear about variable binding structure and wish everybody just used λ-notation.)
But the reverse is true as well! Almost any use of a CT idea can be viewed from a non-categorical angle. Category theory is relatively recent mathematically; one of the things that mathematicians like about it is that it gives a unifying framework for understanding many patterns that cropped up in earlier mathematical work. But just as these earlier results were discovered without CT, they can be understood without it; there are relatively few results which were discovered only by categorical means.
So it’s hard to give concrete examples of the “advantages” of using CT, because any concrete example is subject to quibbling about whether it’s really category theory or not.
To reiterate your point, Turing was familiar with set theory. The celebrated halting problem was inspired by Cantor’s diagonal argument.
Also, Turing’s oracle machines connect to both first order arithmetic and set theory. The connection to first order arithmetic is via the Kleene–Mostowski hierarchy. The connection to set theory is via the Borel hierarchy. These connections are the topic of descriptive set theory.
Category theorists have long hoped for CT to be the foundation of mathematics. Paul Taylor and William Lawvere have advocated this point of view extensively.
FWIW, in my amateur work in formally verifying my proofs, I’ve never needed CT. I just used Isabelle/HOL to formalize sentential logic and measure theory. I’ve occasionally needed to reach for Zorn’s lemma.
In my professional work in Haskell, I try to avoid category theory. This is because it makes the code hard to read.
Sometimes I explore CT in Haskell. I don’t often find conventional CT helpful. In conventional CT, categories are enriched in Set. In Haskell, it’s better to think of instances of Category as being enriched in Haskell itself. In addition, textbook CT constructions often fail.
For instance, I have explored implementing exponentials in the category of functors in Haskell. An approach I’ve seen cited is to use this construction from MathOverflow. AFAIK Phil Freeman’s implementation is the only one that works in Haskell.
RE: the blogging experiment, I think you could just link to a tweet thread to better effect. The literal HTML entities and lack of distinct formatting between content and authors are a bit tough on my eyes. But I did enjoy the responsive email quotes.
Anyway, I hope you filed a bug!
I plan to!
FWIW I also found this very hard to read. Experiments are good though :)
Oh, I found the problem. Firefox blocks twitter’s widget code. I’ll go fix that by downloading it, hosting it locally and the like.
Most headsets aren’t really ready for interacting with small text for extended periods yet. The Rift S is usable for in-game interfaces, and the screen-door effect isn’t very noticeable, but there’s always a “sweet spot” you need to find with headset adjustment to make text legible, and it’s not really as comfortable as just looking at a regular monitor.
Maybe the HP Reverb is approaching usable, but I haven’t tried it. I’m excited at the possibilities but still skeptical short-term.
Yeah, something that makes me less sanguine than I was previously is a comment by Carmack in last year’s Oculus keynote where he says that previously phones were driving small displays to be better and better, but now they’ve reached a point where quality improvements go unnoticed by consumers, so “VR companies will have to foot the bill” for higher PPI screens.