I mean, I joke, but… I mean… Right? I’m guessing you probably missed it in OpenAI’s 98-page GPT-4 technical report, but large models are apparently already prone to discovering that “power-seeking” is an effective strategy for increasing their own robustness. Open the PDF and search for “power-seeking” for a fun and totally 100% non-scary read.
Yet the link and Twitter post shared seem to indicate exactly the opposite. ARC was tasked with assessing the model’s power-seeking behavior, and the conclusion was:
ARC found that the versions of GPT-4 it evaluated were ineffective at the autonomous replication task based on preliminary experiments they conducted.
ARC also wrote this, though:
However, the models were able to fully or mostly complete many relevant subtasks. Given only the ability to write and run code, models appear to understand how to use this to browse the internet, get humans to do things for them, and carry out long-term plans – even if they cannot yet execute on this reliably. They can generate somewhat reasonable plans for acquiring money or scamming people, and can do many parts of the task of setting up copies of language models on new servers. Current language models are also very much capable of convincing humans to do things for them.
We think that, for systems more capable than Claude and GPT-4, we are now at the point where we need to check carefully that new models do not have sufficient capabilities to replicate autonomously or cause catastrophic harm – it’s no longer obvious that they won’t be able to.
Still, the original post is claiming:
but large models are apparently already prone to discovering that “power-seeking” is an effective strategy for increasing their own robustness.
Right now, at best, large models can be prompted to do sub-tasks and complete them unreliably. There’s a huge gap between power-seeking and doing specific tasks on prompt. If anything, we are getting to a point where this AI has the means to do a lot, and if it had the capability of power-seeking, it could probably get somewhere. However, claiming that current LLMs are “prone to discovering that ‘power-seeking’ is an effective strategy” is misleading.
If your first red-team test finds that your AI is effective at autonomous replication, you’re a few weeks out from the world ending. The fact that we’re even talking about this anthropically demands that the AI was ineffective at this. The important question is the gradient it’s on.
We believe that power-seeking is an inevitable emergent property of optimization in general. There are a few others, like self-preservation. We aren’t seeing this in GPT-4, but it isn’t clear exactly when and how it could appear.
I’m wondering: could it also eventually be simply parroting in itself? Right now, everyone seems to be looking for ways to apply AI and LLMs to whatever problem they see. Wouldn’t it make sense, then, for a generative model to simply do what it has been trained on: deploy more AI models? Is that really power-seeking, or simply more parroting and yet another case of us looking in the mirror and seeing intelligence in our reflection?
I’m assuming the use of “optimization” here is different from the generally accepted one, which to me is improving a process to be more effective.
By optimization I mean applying some iterated algorithm like gradient descent to minimize an error function (i.e. tweaking the weights of the neural network to make it better at predicting the next token).
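A minimal sketch of “optimization” in that sense (my illustration, not the commenter’s): gradient descent iteratively tweaking a weight vector to minimize a squared-error function.

```python
# Gradient descent on mean squared error: repeatedly nudge the weights
# in the direction that reduces the error. (Toy linear model, not an LLM.)
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # inputs
y = X @ np.array([1.5, -2.0, 0.7])      # targets produced by "true" weights
w = np.zeros(3)                         # weights to be tweaked

for _ in range(500):
    error = X @ w - y
    grad = 2 * X.T @ error / len(y)     # gradient of the mean squared error
    w -= 0.1 * grad                     # step downhill

print(w)  # approaches [1.5, -2.0, 0.7]
```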
Please note that this issue was not the result of a compromise of any GitHub systems or customer information. Instead, the exposure was the result of what we believe to be an inadvertent publishing of private information.
How can exposing a private key not be a compromise?
Because it was presumably a mistake by someone working at GitHub, and not an external party deliberately trying to get hold of the private key.
To be clear, the exposing of a key itself was a compromise, but it was not the result of a separate compromise.
It was in a public repo, so might have been downloaded. After that, there’s no way of knowing if it ended up in the hands of anyone malicious. If someone is able to fake your GitHub DNS reply (e.g. on a public WiFi network) then they can redirect you to a server that will accept your git pushes to private repos.
My main concern is twofold for ML products:
Code interviews are going to be a bigger mine field for the interviewer.
Developers will become code janitors full-time, developers part-time. As in, the project/product manager will generate a half-working program and the developers are there to fix it.
Re. 2: that transformation has already largely taken place. I started coding in 1987, professionally in 2000.
When I’m coding in 2023 most of the time I’m gluing together preexisting libraries, and dealing with bugs in other people’s code. That was not at all the case when I started out.
Gluing together pre-existing libraries was what OOP advocates like Brad Cox were promising in the ‘80s for most software engineers (with a smaller group writing those components). Brad’s vision was always that end users, not professional programmers, would do a lot of the final assembly.
That’s the promise/premise of “low code/no code” as well. The public-facing rationale is shortening the lead between customer wishes and software solution, the hidden one is lessening the reliance on scarce, expensive software developers.
Code interviews are going to be a bigger mine field for the interviewer.
Just last week I put a few of our interview questions into ChatGPT and it solved most of them fairly reasonably. I received a slightly misguided answer that would have been okay in a live coding interview. The questions were all basic-to-intermediate SQL questions: basically a series of stakeholder questions with some stub data to query.
We decided that we couldn’t let people do them as a take home. There was too much risk that someone could fake it. Even a screen sharing session might result in some trickery on a second screen.
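For flavor, here is a hypothetical example of the “stakeholder question plus stub data” style (not one of the parent’s actual questions; the table and column names are invented), runnable with Python’s built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL, placed_at TEXT);
    INSERT INTO orders VALUES
        (1, 'acme',   120.0, '2023-01-05'),
        (2, 'acme',    80.0, '2023-02-11'),
        (3, 'globex',  45.5, '2023-02-14');
""")

# Stakeholder question: "Which customers spent the most this year?"
for customer, total in conn.execute("""
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    ORDER BY total DESC
"""):
    print(customer, total)
```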
I think the solution is interviews that ask for broad understanding.
I personally have always been interviewed in a quite open discussion (leet code interviews are very uncommon in Germany), only one out of six interviews in my career gave me a take home assignment. As a senior, when I myself was asked to join an interview I asked mostly questions that (imo) should show whether a candidate understands general concepts, e.g. “We have problem X, which technologies do you think could solve them?”, “You mentioned Y and Z, what advantages/disadvantages does each of Y and Z have in this use case?”.
At least I guess that it would be hard to hear the question, type the question into ChatGPT, wait for and understand the answer and respond with the answer with a small enough latency that it would not seem weird in the interview.
Just last week I put a few of our interview questions into ChatGPT and it solved most of them fairly reasonably. I received a slightly misguided answer that would have been okay in a live coding interview. The questions were all basic-to-intermediate SQL questions: basically a series of stakeholder questions with some stub data to query.
We had someone scam us for a week at my last employer (I never interviewed the candidate), but they refused to turn on their camera and started giving different responses.
So now not only does the interviewer need to keep things like that in mind, but I also need to ensure that the person isn’t using ChatGPT and Copilot on their end to deliver answers to my problems. Maybe this will contribute to killing remote work: “Sorry, I need you to come into our office for the interview.”
Perhaps it would promote the habit of designing better-constrained interfaces to rein in the complexity, and force more effort into writing comprehensive unit tests.
Congrats on submitting the (by my count) 9th Lobsters article that mentions IPv6 in the title! A real tragedy that so few devs are interested in this stuff.
Wouldn’t SIIT serve this purpose better? My router doesn’t support SIIT so I may be misunderstanding it, but I believe your router would do the translation at Layer 3. Then you could do SNI or “Host” header routing like normal in some other proxy like nginx inside your IPv6-only network. The IPv4 source IP would be visible in logs and can be blocked with normal tools since it just appears as a padded IPv6 IP.
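To make the “padded IPv6 IP” concrete: under RFC 6052, a stateless translator embeds the IPv4 source address in the low 32 bits of an IPv6 prefix (the well-known one is 64:ff9b::/96). A quick sketch of what an IPv4 client would look like in your IPv6-only logs (example addresses are mine):

```python
from ipaddress import IPv4Address, IPv6Address

WELL_KNOWN_PREFIX = IPv6Address("64:ff9b::")  # RFC 6052 /96 translation prefix

def as_translated_ipv6(v4: str) -> IPv6Address:
    """The IPv6 address an IPv4 client appears as behind a SIIT/NAT64 box."""
    return IPv6Address(int(WELL_KNOWN_PREFIX) | int(IPv4Address(v4)))

print(as_translated_ipv6("192.0.2.1"))  # 64:ff9b::c000:201
```

Since the mapping is 1:1, the underlying IPv4 address can still be blocked with normal tools, as described above.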
I found a few more submissions with “ipv6” in the title: https://gist.github.com/gustafe/607aafbddfa7624509ff31e3f5213d36
They are ordered in reverse submission date order.
Hehe, it seems to be “smarter” than just looking at the titles.
The result above is from scraping every submission title.
Like many others who went to engineering school in the ‘80s, I appreciate high fidelity audio gear to the point of being a little bit of an audiophile. Also, like many people who knew electrical engineers from that era, the amount of bad advice that you can find in “audiophile” circles is astounding. And don’t get me wrong, I’ll tell you to your face that popular music sounds better today on vinyl rather than CD. What I won’t do is try to convince you that the reason for this is because digital is an inferior technology incapable of meeting audiophile standards because that’s not even close to the reason.
As far as your ears are concerned, louder sounds better. CD and digital have far greater fidelity to the original waveform than vinyl does. Vinyl also has less dynamic range: the difference between the loudest sound that you can record and the softest sound that you can record. When music was regularly sold in both formats in the late eighties, the mastering process for popular music was the same for both.

In the early ’90s, people discovered that the common mastering process for CD and LP was leaving a lot of dynamic range unused when the music was pressed onto CDs. As the vinyl LP was falling out of favor, people discovered that if you reduce the dynamic range of the music, you can master it at a higher sound level, or loudness, on CD without generating distortion. To people listening to the music, these louder pressings initially sound better. Every rock and pop artist on the planet wants their song to sound the best when played alongside other music, so artists started asking for this as part of the mastering process.

The problem with doing this is that everything begins to get a wall-of-sound feeling to it. By making the soft parts louder and the loud parts softer so you can make the whole thing louder, you take away some of the impact that the music would have had with its original dynamic range.

When vinyl records started coming back into favor, the music destined for LP was mastered the way it used to be for vinyl back in the ’80s. If you listen to two versions of a song, one mastered for LP and the other mastered for CD, the LP will sound better in the first few plays, before vinyl’s entropic nature ruins it, because the LP version will have more dynamic range. The same is true of two pressings of Pink Floyd’s “Dark Side of the Moon” if you compare an early-1980s issue CD to a late-’90s CD reissue. This is really only true of rock, pop, and R&B; classical and jazz were unaffected by the Loudness War because fans of those genres put fidelity highest among their desired traits.
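For the curious, the “soft parts louder, loud parts softer, whole thing louder” move is dynamic range compression plus make-up gain. A toy sketch of the idea (a bare hard-knee compressor, nothing like a real mastering chain):

```python
import numpy as np

def compress(x: np.ndarray, threshold: float = 0.5, ratio: float = 4.0) -> np.ndarray:
    """Above `threshold`, level increases only 1/ratio as fast: loud parts get softer."""
    mag = np.abs(x)
    out = x.copy()
    over = mag > threshold
    out[over] = np.sign(x[over]) * (threshold + (mag[over] - threshold) / ratio)
    return out

def master_louder(x: np.ndarray) -> np.ndarray:
    squashed = compress(x)
    return squashed / np.max(np.abs(squashed))  # make-up gain: peaks back at full
                                                # scale, so everything else is louder
```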
Summarizing: when you say you prefer vinyl over CD, you are saying that you prefer ’80s-style mastering over the overly compressed mastering of the late ’90s onward.
It’s interesting that the extra headroom on CDs sparked the loudness war, instead of resulting in better dynamics. And now that people expect music to have a certain loudness, I guess we can’t go back.
Perhaps one day we could get a new wave of artists mastering their 320 kbps MP3s ’80s-style?
A loud mix makes sense if you’re listening to music in a noisy environment. (On your commute, say.) But I’d rather have the ability to compress the dynamic range at the time of playing, so I can adjust it to suit the environment.
I used to have a decent, though inexpensive, stereo system setup. Back when I would sit down just to listen to music, with no other distractions, like the Internet.
But when was the last time I really sat down to listen to music? For me it is usually in the car, or through a pair of earbuds. Or maybe washing the dishes.
The extra headroom mostly provided the opportunity, alongside the fidelity and lack of physical limitations of a CD: on vinyl, if you try to brickwall the master you end up with unusable media.
What sparked the loudness war is the usual prisoner’s dilemma, where producers ask for more volume in order to stand out, leading the next producer to do the same, until you end up with tons of compression and no dynamic range left. Radio was a big contributor, as stations tend(ed?) to do peak normalisation[0], so if you make wide use of dynamic range you end up very quiet next to the pieces played before and after.
Perhaps one day we could get a new wave of artists mastering their 320 kbps MP3s ’80s-style?
To an extent it’s already been happening for about a decade: every streaming service does loudness normalisation[1], so by over-compressing you end up with a track that’s no louder than your neighbours, but it clips and sounds dead.
Lots of “legacy” media companies (software developers, production companies, distributors, …) have also been using loudness normalisation for about a decade, following the spread of EBU R 128 (a European recommendation for loudness normalisation), for the same reason.
[0] where the highest sound of every track is set to the same level
[1] where the target is a perceived overall loudness for the entire track
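A crude sketch of the difference between the two footnotes (using RMS as a stand-in for perceived loudness; real services measure LUFS per ITU BS.1770 / EBU R 128):

```python
import numpy as np

def peak_normalise(x: np.ndarray, target_peak: float = 1.0) -> np.ndarray:
    """[0]: scale so the single loudest sample hits the target."""
    return x * (target_peak / np.max(np.abs(x)))

def loudness_normalise(x: np.ndarray, target_rms: float = 0.1) -> np.ndarray:
    """[1]: scale so the overall level hits the target. Over-compressed
    tracks get turned *down*, erasing the loudness-war advantage."""
    return x * (target_rms / np.sqrt(np.mean(x ** 2)))
```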
That’s me. When I buy rock music on LP, I’m purchasing music mastered to 1980s LP standards. I do that because rock music works well with the 60 dB or so of dynamic range that vinyl LP offers.
It really is quite shocking to take a CD mastered in the early 90s and another in the late 90s-early 2000s and play them at the same volume settings.
This is an excellent explanation. Being able to explain things clearly without hiding behind technical terms like “compression” is a strong indicator to me that you are a true expert in this field.
For drummers and bassists, “compression” is a well-known term, because compressing dynamic range is almost required in order to record them faithfully. The typical gigging bassist will have a compressor pedal in their effects chain for live performance, too.
I do appreciate it when digital releases are mastered in a way that preserves dynamic range; play one after any typical digital release in the affected genres and it will sound really quiet.
Some bands have demonstrated to me that you can be a loud rock band with dynamic range mostly intact.
I listen to 90% of stuff on vinyl, and I have no rational explanation beyond yours as to why I like it more than streaming.
I stream way more than I use my turntable, basically for the same reasons @fs111 mentions. But I definitely prefer vinyl because while streaming is pure consumption, vinyl is participatory. I enjoy handling the vinyl and really taking care of it (cleaning it when I get it/before I play it, taking care of the jacket, etc.). It makes me feel like a caretaker of music that’s important to me - a participant in the process, instead of just a consumer.
On my phone I listen to music. On my turntable I play music.
I like the physicality of it, too, and I also love the actual artifacts, the records and their sleeves and such.
While I can see the appeal most of my music consumption is while working. I would not like getting up constantly to switch records.
Possibly: I’ve read that over time many pop songs get remastered with more and more dynamic range compression. This makes all parts of the song sound similar in loudness, but also removes some musical features (dynamics) and depending on the DRC method (fast attack/decay, slow attack/decay, manual envelope adjustment) can introduce audible distortion.
Older vinyl and CD releases are from earlier masters, though some records are newly manufactured, so some will be based on newer remasters anyway.
Cannot confirm or deny, I don’t buy or listen to pop :/
This is called the loudness war. This site collects the dynamic range for albums: https://dr.loudness-war.info
In addition to the loudness wars people have been talking about, certain technical restrictions limit what can accurately be recorded on vinyl. This leads to a subtle “sound” that people get used to and prefer. This could be reproduced when mastering for digital audio formats, but people either don’t do that processing or “audiophiles” claim that it gets lost in translation somehow.
An interesting counter-point, if not quite a rebuttal, to just mocking the audiophiles: https://fediscience.org/@marcbrooker/110017159893728862
For those not wanting to click on the link, the possible explanation was that different memcpy implementations use different pipelines and so may trigger different power states. This is possibly more plausible than you might at first imagine. I had a computer some years ago where I could hear SSE-heavy workloads because the EM leakage from the CPU hitting those pipelines was picked up in the sound card, so an SSE-based memcpy would give worse sound than one using integer pipelines (possibly - I doubt SSE loads and stores were actually audible on that machine). This is why sensible audiophiles that I’ve known recommend a cheap USB audio adaptor over any expensive sound card for analogue output: simply moving the DAC a metre away from the electrical noise does more good than anything else. Outputting optical audio to an external decoder and amplifier can be better because then your audio system can be completely electrically isolated from the computer (there are a lot of chips in a typical computer case that are little radio broadcasters and any nearby wire may accidentally act as an antenna receiving the signal).
The other plausible explanation that I considered was jitter. Probably not the case 10 years ago, but 20 years ago the ring buffers that sound cards could DMA from were quite small and you could quite clearly hear when the CPU wasn’t filling them fast enough. A slow memcpy being preempted at the wrong time may well have caused artefacts. If you can fill the buffer easily within a single scheduling quantum and yield then you’ll be priority boosted the next time and so you’ll easily keep up playing music. If you’re occasionally being preempted mid-copy, then you’ll suffer from weird artefacts as the card overtakes the CPU and plays a few ms of the previous sample instead of the new one.
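A toy model of that failure mode (my illustration, not how any real driver is written): the card’s read position advances at a fixed rate and never blocks, so if the CPU’s copy falls behind, stale samples from the previous pass get replayed.

```python
buf_size = 4
buf = [0.0] * buf_size
write = 0

def cpu_fill(samples):  # CPU side: the memcpy into the ring buffer
    global write
    for s in samples:
        buf[write % buf_size] = s
        write += 1

def card_read(read):    # card side: reads one buffer's worth, never waits
    return [buf[i % buf_size] for i in range(read, read + buf_size)]

cpu_fill([1, 2, 3, 4])
print(card_read(0))     # [1, 2, 3, 4]: the CPU kept up
cpu_fill([5, 6])        # CPU preempted mid-copy...
print(card_read(4))     # [5, 6, 3, 4]: the tail replays the previous samples
```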
I don’t think either of these was the case here though, and this is why sensible audiophiles do random double-blind tests on themselves: play the thing n times, where half are with one version and half with the other. If you don’t get a statistically significant number of guesses as to which is the better one landing on the same implementation, you’re just kidding yourself (which is very easy to do with audio).
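The arithmetic for that self-test is a one-sided binomial test against chance (a sketch; pick your own significance threshold):

```python
from math import comb

def p_value(hits: int, n: int) -> float:
    """Probability of at least `hits` correct picks out of `n` by coin-flipping."""
    return sum(comb(n, k) for k in range(hits, n + 1)) / 2 ** n

print(p_value(9, 10))   # ~0.011: consistently picking one version is likely real
print(p_value(6, 10))   # ~0.377: you're probably kidding yourself
```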
Those counterpoints are good. And he doesn’t much mock the audiophiles. High-end audio is a funny game. Since it involves chasing the limb of the diminishing-returns curve, it naturally attracts eccentrics: the kind of people who take joy in the fact that a brake job on their Porsche costs ten times as much as it does on a Toyota. For these folks, gold-plated TOSLINK cables, $500 Ethernet cables, and high-fidelity audio-grade Ethernet switches are the stuff of which happy dreams are made. If that’s how eccentric people enjoy their money, more power to them.
What’s truly sad about this is that it ruins things for the people who have some sort of balance in their lives, because there are real gains to be had from climbing the diminishing-returns curve in A/V equipment as a means of improving sound quality. The trick is that you don’t have to climb too far to greatly improve your experience.
I’m not involved in high-end audio, but it feels like it has overlap with another field I’m tangentially aware of: high-end watch collecting. If they’re similar, then the motivation driving participants is less the end goal of a perfect sound system or the perfect watch than learning about the stuff that’s out there, trading for new used stuff, and generally being part of a community.
Specifically, it’s not that hard to incrementally grow one’s collection by starting small, looking for used instances of the stuff you’re interested in, saving some discretionary income, buying up, etc. And for many people, that’s the actual fun part!
And while there’s an element of snake oil[1] and grift, in general both HE audio and watches are physical objects that can be inspected for quality. I.e. even if paying 6 figures or more for a turntable with a cast iron bed[2] is a lot of money in absolute terms, you are getting a physical object that has intrinsic worth, if only to other HE audio enthusiasts.
[1] I tried to find some examples and ran across this piece about expensive audio ethernet cables: https://arstechnica.com/gadgets/2015/07/gallery-we-tear-apart-a-340-audiophile-ethernet-cable-and-look-inside/. While excessively priced they are not outright fraudulent.
[2] price source: https://harpers.org/archive/2022/12/corner-club-cathedral-cocoon-audiophilia-and-its-discontents/
Wow, that submission title is… something else.
Edit: the suggestions have taken hold; it’s more reasonable now.
Based on previous submissions and this confusing page, I’m suggesting it’s from Oct 2016.
What does a date add here?
Most of what Dan writes seems to be pretty insightful observations that are essentially timeless.
It adds that I can tell whether I read this when it came out; without a date, I can’t tell whether he’s revisiting a topic or it’s a repost until I’ve reread half of it.
Can a designer explain what the word “stack” means in this context?
Is this a website that showcases my OS’s fonts?
The “stack” is the sequence of typefaces that the browser will try. You prefer the first; the browser falls back, in order, until it finds one that is present on the user’s machine.
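That fallback behaviour is easy to model (a toy sketch; “FancyGrotesk” is made up, and the real resolution happens inside the browser’s CSS font-family handling):

```python
def resolve(stack: list[str], installed: set[str]) -> str:
    """Return the first typeface in the stack that the user actually has."""
    for face in stack:
        if face in installed:
            return face
    return "generic fallback (e.g. sans-serif)"

stack = ["FancyGrotesk", "Seravek", "Calibri", "sans-serif"]
print(resolve(stack, {"Calibri"}))  # a Windows machine would land on Calibri
```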
Is this a website that showcases my OS’s fonts?
The site shows font styles that are already available on people’s computers as part of their system fonts. If you pick one of these font listings for your CSS, then you don’t need to use web fonts. In that way, this site connects with the “Stop Using Web Fonts” article that’s also on the front page.
Yes, but formalizing what you used to do across platforms you hadn’t considered at that time. Basically it’s helping to answer the question “how am I most likely going to get a passably similar aesthetic across the permutation space of pre-installed font choices made by disparate vendors that don’t communicate with each other and can’t agree on a standard?”
I remember fiddling extensively to get fonts to appear similar in Windows/IE4 and Linux. Windows at the time had the meticulously hinted Verdana/Georgia/Tahoma TrueType fonts, whereas Linux (I think) only supported Type1 and bitmap fonts, and certainly couldn’t do hinting. Type1 fonts didn’t look good until you increased the point size to about 12-13, which was frankly too large for a resolution of 640x480.
I didn’t have access to a Macintosh at the time, and neither did anybody I knew, or I probably would’ve included them in my attempts.
I have to say, what the author did looks very good (and certainly better than my attempts at the time). I’m just amused by the word “modern”.
Pretty sure the “modern” is just to bring the stack collections up to date with current device trends; most of the other such stack recommendations out there are several years and numerous device generations out of date. The underlying problem predates the Windows/Mac/*nix trichotomy, going back at least to the Gutenberg press.
Yeah, but OSs ship with a lot more built-in fonts now, and nicer ones, so you don’t have to fall back on the same old Times/Arial/Verdana/Trebuchet/Georgia set. (Ugh. The only one of those I can still stand is Georgia.)
In this case, it’s just a list of operating system native fonts to use for similarish appearances. You can see more detail at the GitHub page: https://github.com/system-fonts/modern-font-stacks.
The web is not the same medium as graphic design.
I disagree. The web is a place for art and graphic design as much as anything else. Websites are allowed to be pretty.
That extra flair on your lowercase “t” doesn’t help the user better interact with your features or UI. It just slows them down.
Anecdotal at best. Misleading at worst.
The user wants to complete a task - not look at a pretty poster
You are not all users. I, for one, do not enjoy using websites that don’t look nice.
many designers design more for their own ego and portfolio than for the end user
Again, anecdotal (though it does seem plausible).
I find myself agreeing with all the other points brought up in the article (system fonts are usually good enough, consistency across platforms isn’t essential, performance). I don’t have any extra fonts on my website (except where KaTeX is needed) and I think it’s fine in most cases, though I’ve seen the default font render badly on some devices and it was a little sad.
I still disagree about “websites are tools and nothing else”. I don’t want my website to be a tool. I want it to be art. I’ve poured time, effort, money, and my soul into what I’ve made. I do consider it art. I consider it a statement. And if I have to make a 35 kB request to add a specific typeface, then I’ll do it to realize my vision.
That extra flair on your lowercase “t” doesn’t help the user better interact with your features or UI. It just slows them down.
Anecdotal at best. Misleading at worst.
That was obviously not the real question though: the point is, do web fonts help users in any way, compared to widely available system fonts? My guess is that the difference is small enough to be hard to even detect.
As a user, they make me happy and I tend to be more engaged with the content (when used effectively), so yes I find them helpful. I don’t want to live in a world without variety or freedom of expression. As long as there are ways to turn them off, surely everyone can be happy.
We live in a world full of colour. I don’t like this idea of the hypothetical ‘user’ who ‘just wants to get things done’ and has no appreciation for the small pleasures in life. I don’t have anything against anyone who feels that way of course (it’s completely valid). Just this generalisation of ‘the user’.
It really depends on the metrics measured.
Does the font help the user fill out the form and submit it? No, not really.
Does the font help engender a brand feeling of trust across platforms and mediums? Probably yes.
It’s impossible not to detect my own instinctive, positive reaction to a nice web design, and typography is a big part of that. I am quite certain I’m not alone in that feeling. That enjoyment is “helpful” enough for me to feel strongly that web fonts are here to stay, and that’s a good thing. There’s also plenty of UX data about what typography communicates to users, even if those findings aren’t normally presented in terms of “helping.”
A poorly chosen font can be hard to read in a certain context, but that’s a far cry from “all custom web fonts are bad for usability” and I haven’t seen any evidence to back up that claim. So given there are obvious positives I think the question is really what harm they actually do.
I’d wager typography is not limited to fonts.
Now obviously there’s a difference between a good web font and a crappy system font. But are all system fonts crappy? I haven’t checked, but don’t we already have a wide enough selection of good fonts installed on users’ systems? Isn’t the difference between those good fonts and (presumably) even better web fonts less noticeable? Surely we’re past the age of Arial and Times New Roman by now?
See this related submission: https://lobste.rs/s/tdiloe/modern_font_stacks
It’s basically grouping “typeface styles” across different systems’ installed fonts.
This is big, thank you.
I mean, I guess it won’t be as good as the best web font someone would choose for a given application, but if anything is “close enough”, that could be it.
Obviously fonts are a subset of typography (didn’t mean to imply they are the same), but they are absolutely central to it. And I didn’t say that system fonts are all crappy. My argument doesn’t rely on that premise, and anyway, I like system fonts! I think that designing within constraints can foster creativity! I just don’t think we should impose those constraints on everyone. At least not without a lot more evidence of actual harm than I’ve seen presented.
And although we are definitely past the New Roman Times ;) I don’t think that means that a striking use of a good font is any less effective on the web.
*gerbʰ-: reconstructed Proto-Indo-European root, meaning “to carve”
It’s always fun to see software named after reconstructed Proto-Indo-European roots. *gerbʰ- is the source of all English -graphy words (via Ancient Greek graphein, “to write”); the “carve” meaning comes from the native Germanic inheritance of English.
Someone posted in the issue tracker it is French slang for “vomit” :). It could be worse I guess…
I wanted a Greek-ish word because it’s my native language but glyph was already taken. Also font in Greek is too lengthy: γραμματοσειρά
If it’s any consolation, I’m French and did not make the connection because the pronunciation in an English context does not match the French slang’s. Even reading your comment I had to scratch my head a bit. Plus “gerbe” also means “bouquet” so you can use that as an excuse :)
And you know, we’re used to this in the CS field, “bit” is also a slang word in French.
Wait, I thought the official name for “bit” in French is “octet”? 😉
Funniest cross-cultural snafu lately is the US English technical term “nonce” conflicting with the UK English slang term for pedophile.
No, FR(octet) is EN(byte) – “oct-” means 8. I guess we completely dismissed the possibility that a byte might not be 8 bits ^^.
A bit is a bit; we kept it even with the funny word association. You just power through your CS class trying to suppress your immaturity as much as possible until you explode, but that’s life.
Part of the reason that French uses octet is that bit and byte have almost identical pronunciation in French. The other reason is that this is a slang term that you probably don’t want to be using in polite conversation…
If you think that’s bad, imagine working for a company that’s just put a huge publicity push into a product line that in French is pronounced ‘J’ai pété’ (I farted), including a high-profile version ‘chat, J’ai pété’ (cat, I farted).
Bulgarians will love your project too :)
I understand this sounds harsh, but many designers design more for their own ego and portfolio than for the end user.
It is harsh, and it’s not fair. How can anyone claim to understand the true motivations of a group of people from all walks of life, who number in the many thousands, if not millions?
This is an unhelpful analogy. How often is a specific design directly attributed to a dictatorial designer only interested in inflating their own ego and portfolio?
If you don’t like how a designer designs stuff - don’t hire them.
If you want to know if they care more about their ego than their customers - ask them.
People ascribe evils like inaccessible design to specific people, or rather a specific class of people, rather than to the actual cause, which is huge websites following each other in a fashion-led dance in pursuit of intangibles like “engagement”.
Previously (3yrs ago):
That’s fine, it’s ok to re-submit after 6 months anyway, and the domain has changed in the meantime.
I just remembered the title and searched around :D
A whole bunch — some about the usability of the applications we program, some about the usability of the tools we program with. I do know the drill about tag proposals, just haven’t got around to making the list of “these articles would also benefit from this tag”.
Every time I need a table in Markdown, I just write a Perl script to generate the HTML directly and use that in my document.
Yes. Markdown is cool, but you hit the walls very quickly. The solution is to bail out to HTML. In the wise words of da Share z0ne: if it sucks, hit da bricks.
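In that spirit, a minimal sketch of such a table-generating script (Python here as a stand-in for the grandparent’s Perl; same idea):

```python
from html import escape

def html_table(headers, rows):
    head = "".join(f"<th>{escape(h)}</th>" for h in headers)
    body = "\n".join(
        "<tr>" + "".join(f"<td>{escape(c)}</td>" for c in row) + "</tr>"
        for row in rows
    )
    return f"<table>\n<tr>{head}</tr>\n{body}\n</table>"

print(html_table(["tag", "count"], [["programming", "3400"], ["security", "2255"]]))
```

Most Markdown renderers pass the resulting raw HTML block through untouched.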
OK, here are the top 10 tag combinations from the total dataset as scraped from all entries, capped at 4 tags per combination:
# number of tags: 1
programming - 3400
security - 2255
practices - 2118
hardware - 1535
javascript - 1258
web - 1190
databases - 1171
linux - 1112
culture - 952
go - 933
# number of tags: 2
ask, programming - 563
javascript, web - 551
practices, programming - 516
pdf, security - 345
linux, security - 331
security, web - 326
hardware, historical - 302
culture, practices - 278
hardware, security - 263
privacy, security - 251
# number of tags: 3
hardware, historical, reversing - 55
hardware, pdf, security - 50
javascript, programming, web - 49
hardware, historical, video - 48
formalmethods, pdf, programming - 41
compsci, pdf, programming - 36
browsers, security, web - 34
browsers, javascript, web - 32
practices, programming, video - 27
compsci, math, pdf - 24
# number of tags: 4
formalmethods, pdf, plt, programming - 9
compsci, formalmethods, pdf, programming - 9
browsers, javascript, programming, web - 9
formalmethods, pdf, programming, security - 8
dragonflybsd, freebsd, netbsd, openbsd - 8
javascript, nodejs, release, show - 8
compsci, pdf, programming, security - 8
event, ios, mac, video - 7
c, c++, lua, programming - 6
netbsd, osdev, programming, unix - 6
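For the curious, here’s a sketch of how counts like these could be produced from scraped submissions. This is my reconstruction, not the author’s actual script:

```python
from collections import Counter

def top_combinations(submissions, max_tags=4, top=10):
    """submissions: list of tag lists, one per scraped story."""
    by_size = {}
    for tags in submissions:
        key = tuple(sorted(tags))            # a story's exact tag set
        if 1 <= len(key) <= max_tags:
            by_size.setdefault(len(key), Counter())[key] += 1
    for size in sorted(by_size):
        print(f"# number of tags: {size}")
        for combo, n in by_size[size].most_common(top):
            print(f"{', '.join(combo)} - {n}")
```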
Sure! From the tags search page
Tip: read stories across multiple tags with /t/tag1,tag2
This seems to fetch posts with ask or programming, not ask and programming.
Categories have this, as I just found: https://lobste.rs/categories/compsci,culture
Ah, you want AND instead of OR.
No clue what “categories” are in relationship to this site, I’ve never seen them!
I guess they are the broader taxonomy that groups multiple tags. I found them only a while back too.
Ok, this surprised me…
This seems to be a major clusterfuck. The sudoers man page spends some time explaining how use_pty prevents a dangerous attack, just to then say: “This flag is off by default.” Which of course leaves the question: WHY? You know you have a major security vulnerability, you have a mitigation in place, and it is “off by default”?
For su the same applies, though you cannot even set this in a config file.
On Debian, use_pty is enabled by default in the shipped configuration file. This is quite recent (added two years ago).
Isn’t it off by default to protect non-interactive sessions from wonky output? At least that’s the message I get from the linked article.
Bored at work so searched for “solar” here and found these results that might also benefit from this tag.
Specifically from domain solar.lowtechmagazine.com:
I am skeptical of things I read on the internet. I am even more skeptical of articles that seem to be written specifically to draw a large number of views. I admit that I clicked on the link, I admit that I read the first few paragraphs. At that point I stopped reading.
You’re very right that this post doesn’t do a very good job when it comes to the topic of how widespread the problem is; it’s based entirely on anecdata and conjecture.
However, if you’re looking for a more rigorous treatment of the topic, David Graeber’s book Bullshit Jobs is a great read: https://theanarchistlibrary.org/library/david-graeber-bullshit-jobs Graeber’s analysis shows this to be a much more widespread problem than your experience might lead you to encounter, probably because you’ve been able to avoid getting hired by the kind of people who perpetuate these patterns.
Thank you very much for that link. I started to read it, but I have to ask, since you have read it: it reads a bit like “I know what jobs should exist (manufacturing jobs) and these are not manufacturing jobs, so they are bullshit jobs.” Everything he lists seems like a quite reasonable job. It may feel right to say these jobs shouldn’t exist, but thankfully we’re not a command economy and are insulated from that particular kind of hubris. Someone is paying for that service/good, which is why it exists.
It’s a good question, and I can see why you would think that by reading the first few bits. There is a section of the book that addresses the question of defining it, because it obviously is very subjective. The key is that he defines it according to the judgement of the person doing the job:
This is true in some sense, but one of the key points of the book is that many jobs exist specifically in order to boost the prestige of executives and managers. In certain kinds of organizations, having a large number of underlings is a source of political power, in a way that is completely disconnected from those people doing productive work. As you might expect, organizations with that particular dysfunction tend to be larger than they otherwise would be, meaning these jobs are also more numerous than you would expect.
Not quite. Some services exist solely to seek rent from the economy. Landlords, tax preparers, insurance firms, payday loan sharks, timeshare resellers, multi-level marketers, and more; these lines of work only exist to leech money without providing useful goods or services in return. Worse, many of them provide useless goods and services!
These things are often not done by the landlord, they are done by the landlord’s agent. The landlord takes money from you, keeps some, and uses the rest to pay someone to perform these tasks. Their income derives solely from having capital, not from doing any work. This is pretty much the purest sense of ‘rent seeking’ as an economics term.
Someone has something that other people want and they earn a living off providing that. Where’s the problem? What kind of world would it be if you could not do that? If people just took your stuff from you? Not a world I want to live in.
I think you need to read a lot more about economics before we can usefully have this conversation. I did not make a value judgement at all in my post, I attempted to explain a concept in economics. Your reply reads like you didn’t read the link that @Corbin posted at all and just want to have a political argument. This is not the right forum for that argument.
Sounds like a value judgement to me.
Please link to the post where you think I said that.
My post was in response to the post that said that; you made a reply to my reply. I assumed that your post had more meaning than “Landlords who ask for rent are rent-seeking”, but perhaps I was mistaken, and that was indeed all you wrote, and you had no opinion on the post I was originally responding to, re: landlords’ (and others’) behaviors.
https://lobste.rs/s/mbgpma/i_ve_been_employed_tech_for_years_i_ve#c_ykdxfv
Landlords don’t provide housing supply. Indeed, landlords have the ability to decrease housing supply by refusing to lease or sublet.
You really ought to take an introductory economics course; housing obeys most rules of economics, as a necessary (normal) good, and so any rent-seeking behavior is going to do the normal thing: raise prices, decrease supply, distort market.
I’m trying to think if I would like to live in a world where people are prohibited or heavily regulated from employing capital. Taking your example: If I own a house and do not live in it, do you envision a world where I must sell the house? Would I be barred from renting it? Would it stop at houses or also extend to the tools I own? Would your world stop Home Depot from renting out tools? What about my money? Could I rent it out?
You’re making quite a leap here. “No rent-seeking” does not automatically translate to a chaotic frenzy where people can just take your stuff. But to answer your question, one possible world without landlords is a world where everyone is housed.
Not sure how that follows. Landlords aren’t in the game to keep houses empty. They have an enormous incentive to rent out the home and house someone.
I would rather predict that a world where people can’t rent out housing they own would lead to housing being kept empty, as owners wait out housing downturns to sell units they no longer need. Or to a thriving black market in rentals.
Or do you envision a world where the state has a monopoly on housing?
I actually envision a world without a state.
If you are joking, well, you have your joke. If you are serious, I think I can’t learn any more from you, but thank you for the conversation.
Payday loans are outright illegal in some parts of the USA. MLMs are heavily regulated federally; pyramid schemes are illegal, and lotteries are generally either illegal, state-run, or heavily regulated.
A world without insurance is pretty simple; just give public oversight to management of risk funds. Then, all members of society can directly have their risks automatically hedged at all times. The only remaining needs for insurance are business-to-business and can be reformulated as service contracts. (If this sounds silly, consider the contrapositive: would you want a world where e.g. food stamps are privatized into “hunger insurance”? If everybody gets sick and dies, why do we need “health insurance” or “life insurance”?)
Will we have nationalized insurance?
I am skeptical of a political entity being able to run such insurance at a scale covering 350 million people in different markets. In theory it’s great, because you have a giant pool, so the premiums (taxes) needed for the fund will go down; but management is key in such a large organization, and I don’t know that politically appointed organizations are great at such management.
For medical insurance specifically: Britain’s NHS and Canada’s system are of course nearby examples. It is not clear to me that this is worth the upheaval. I have never heard convincing arguments that innovation in health care will continue under a nationalized system. I suspect that, like the NHS, there will be stagnation at best.
My core concern with any nationalized system is that it’s a monopoly. Monopolies are very hard to change or hold accountable, and they have zero incentive for efficiency.
The UK (and maybe Canada, I’m not familiar with the system) have nationalized health providers. This is separate from national insurance, like in Switzerland, where everyone must have medical insurance (and the state subsidizes it for very poor people) but the actual services are a mix of state and private care.
It makes a huge amount of sense from a political economy standpoint to ensure that basic health coverage is spread among all citizens. The US system, which combines substandard coverage with enormous costs, is a paradise for rent-seekers and hell for everyone else.
I’m not familiar with the Swiss system; however, the system you describe sounds like the system in MA: you can have any insurance you want, but you have to have insurance (you pay a fine if you don’t), and the state will subsidize your insurance if you can’t pay. This is a state mandate plus a means-based subsidy. The insurance carriers are still independent, private entities.
Do you envision such a state mandate + needs based subsidy for each category that I listed, or just for health?
Or is it medicare for all (single payer) kind of system?
I was just pointing out that mandatory health insurance can coexist with a mix of health providers. The entire system does not have to be run by one provider.
OK, so you are a proponent of a single-payer system (like Medicare). There are bargaining advantages (for the people), but it is not immediately clear to me what the long-term effects of such a single-payer system are. What is the motivation for innovation, for example?
I suppose one motivation is a bigger piece of the pie: offer better services for the same fixed price everyone gets, or offer cheaper services. The problem starts to arise when you can’t pick providers (insurance companies already restrict the providers they work with; a national insurance corporation would most certainly do the same).
I personally believe it is an idea worth exploring, perhaps by gradually increasing coverage (say, state by state, or income group by income group). It’s just not clear to me what happens long term.
Is that really the same as a service/good being valuable though?
Yes, at least to the person paying for it.
(Points to distinction between use-value and exchange-value)
Once again, I am thankful I don’t live in a command economy where individuals get to decide what I should find valuable.
I’ve never worked a job where my employer didn’t dictate what was valuable to them and I’ve never lived in a democratic economy that wasn’t organized top-down. Sounds nice though.
Really. What fresh hell do you live in? I live in the United States, where I find that anyone with energy and imagination can make a go of it. I lived in India in the 1990s and even there, despite the heavy government involvement in everything I found people making their own independent ways. Of course, the people got tired of that and got rid of a lot of the red tape in the 2000s.
I also live in the US, where I find that people can make a go of it as long as it’s profitable. In my estimation it’s the prioritization of profit over actual value to society that drives the proliferation of bullshit jobs (because people in charge value prestige, among other things), as well as more dire things like pollution and carbon emissions.
Who measures this? Who gets to decide what is valuable for society?
Good questions. There are lots of ways to measure this and not one single source for all of it. We would need to rely on the experts in various fields to gather and interpret the data for us. But I would argue that some useful metrics would include global temperature (i.e. mitigating climate change as much as possible), life expectancy (currently in decline in the US), wealth equality (in sharp decline in the US for the last few decades), infant mortality (rising in the US), regional biodiversity (in decline pretty much everywhere), pollution levels, criminal recidivism rates, racial equity in the education/medical/housing/prison/etc. systems, the gender pay gap… I’m sure there are a lot of other things, but these are just off the top of my head.
In terms of who gets to decide, we all do–or should. And of course “how” is a very large question with no concise answer–there is a lot of valid discussion to be had about the relevance and nuances of any given metric. The point is that profitability is far and away the #1 driver of which problems people have the resources to work on (unless they want to be relegated to the non-profit sector, which has its own problems). But for the first time in history, I think we have the logistical and technological capacity to provide for people’s basic needs so that we can start to tackle these various aspects of quality of life more directly, and focus less on this indirection of aligning with the profit-motive. That’s the purpose of the distinction.
These are your values. Excellent. They do not have to be everyone’s values.
We do so currently with currency. It’s the quickest and most honest way to vote.
No. As I said, there is a level of indirection where if you care about solving any of these you first have to align your mission with the profit motive. The idea of profitable solutions to things like climate change and wealth inequality is laughable. Are you seriously arguing that the profit motive does not dilute any worthwhile endeavors?
The profit motive is shaped by legislation and by fashions, which determine what is and is not profitable.
For example, in MA, a certain amount of our tax money is going into putting solar panels on people’s roofs. People who would otherwise not use solar are employing private companies to install solar capacity.
It’s not clear to me that this is a good solution to anything, but it is what the people have decided. I may not think it’s worthwhile (as opposed to, say, a large solar power plant in one of our deserts, piping power further north), but I don’t get to decide that alone.
What is not worthwhile to you is worthwhile to someone else. I think it is important that we all remember this.
I am aware of how legislation shapes markets under capitalism. The other side of the equation is how markets (or rather, the billionaires who currently control the markets) shape legislation. In fact, a Princeton study found zero correlation between what the majority of Americans support and what actually gets signed into law. This is a logical and unsurprising consequence of “voting with your dollar”: a lot of people get no votes while a small minority gets almost all of them.
So yeah. I completely agree that my values are not everyone’s values, and that maybe most people don’t care about biodiversity, for example, but that is beside the point. The point is that most people’s values, whatever they are, are not able to be fully expressed in the current order.
I’m 100% serious about the state, by the way, although I have no illusions that it’s at all likely to happen in our lifetimes! Thanks to you as well. Even if you decided not to learn anything, it was a fun exercise. I’ll leave you with this quote from Ursula K. Le Guin:
I am certain that you do live in such a “command” economy. I’m going to assume that you live in a place where there is at least one person who sells glass windows. That person would find it valuable for someone to go around your neighbourhood and throw rocks through all the windows. The rock tosser gets a salary and the glass maker sells more of their product. Both individuals find this transaction beneficial, but I strongly suspect that this would be illegal where you live.
As a society, we intentionally ban profitable actions with negative externalities (e.g. hitman, arsonist, thief). However, our legislature moves slowly, and new such occupations (e.g. paid fake Yelp reviews) pop up quickly. We cannot yet call these jobs criminal, but they are bullshit.
I got a different definition of bullshit jobs earlier up this thread that made more sense to me: “bullshit jobs” as a catchy term for sinecures.
The type of jobs you define as “bullshit” I would call “shady”. That sweet spot where it’s clear harm is being done to many and benefit to the few but legislation and enforcement haven’t caught up yet.