This makes it dramatically easier to distribute and run LLMs. It also means that as models and their weights formats continue to evolve over time, llamafile gives you a way to ensure that a given set of weights will remain usable and perform consistently and reproducibly, forever.
This is a lie. If it links to CUDA, it’s not going to survive an upgrade. If it doesn’t, does it use the Metal framework, and does Apple guarantee backward compatibility? If it uses neither, why bother using it?
On Apple Silicon, everything should just work if Xcode is installed.
On Linux, Nvidia cuBLAS GPU support will be compiled on the fly if (1) you have the cc compiler installed, (2) you pass the --n-gpu-layers 35 flag (or whatever value is appropriate) to enable GPU, and (3) the CUDA developer toolkit is installed on your machine and the nvcc compiler is on your path.
On Windows, that usually means you need to open up the MSVC x64 native command prompt and run llamafile there, for the first invocation, so it can build a DLL with native GPU support. After that, $CUDA_PATH/bin still usually needs to be on the $PATH so the GGML DLL can find its other CUDA dependencies.
In the event that GPU support couldn't be compiled and dynamically linked on the fly for any reason, llamafile will fall back to CPU inference.
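For concreteness, here is a shell sketch of just the two PATH prerequisites the text names; the actual detection logic inside llamafile may well differ, so treat this only as an illustration of the checks described above:

```shell
# Check the two build prerequisites named above: a C compiler and the
# CUDA compiler (nvcc) both reachable on $PATH. If either is missing,
# llamafile is described as falling back to CPU inference.
if command -v cc >/dev/null 2>&1 && command -v nvcc >/dev/null 2>&1; then
  echo "GPU build prerequisites present"
else
  echo "falling back to CPU inference"
fi
```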
It’s not clear to me what would happen if my system CUDA libraries or Metal framework updated to a new version. Does it automatically recompile? Does it recompile every time it’s executed? I guess I just don’t trust any document that says some executable “will remain usable and perform consistently and reproducibly, forever” without substantial proof.
Great story, but probably the most important lesson to learn from this remained un-remarked on in the conclusion:
I was into my unofficial second shift having already logged 12 hours that Wednesday. Long workdays are a nominal scenario for the assembly and test phase.
Whoever was responsible for the crunch-time situation really deserves the blame for the problem, not the person who wired the breakout box.
I am in this world. The deadline is set by the orbit of Mars; if you miss it, you are delayed for two years, so there is an extreme amount of pressure to hit the launch window. Secondly, every space mission of this class is a Fabergé egg, with 217 separate contractors contributing their custom jewels. There are always integration issues, even assuming there wasn’t some fundamental subsystem issue that delayed delivery for integration. Even when rovers are nominally the same platform, they still have quirks and different instruments that make them firmly pets rather than cattle. Given the cost per kg of launch, every subsystem has to be incredibly marginal and fragile weight-wise, else it’s a gramme taken away from the science payloads, which are ultimately the whole purpose of these missions. As a result, things are delicate and fussy and have very un-shaken-down procedures. It’s the perfect storm for double shifts.
There’s also an often-ignored aspect that’s easy to miss outside regulated fields: there is an ever-present feeling that there is one more thing to verify, and it’s extremely pressing because that may be your last chance to check it and fix it. About half the double shifts I’ve worked weren’t crunch time specifically, we weren’t in any danger of missing a deadline (I was in a parallel world where deadlines were fortunately not set by the orbits of celestial objects). It’s just you could never be too sure.
Also, radiation-hardened instruments and electronic components have a reputation for ruggedness that gives lots of people a surprisingly incorrect expectation of ruggedness and replicability for many spacecraft. These aren’t serially manufactured flying machines; they’re one-, maybe two-of-a-kind things. They work reliably not because they’ve gone through umpteen assembly-line audits resulting in a perfect fabrication flow, where everything that comes off the line is guaranteed to work within six sigma. Some components on these things are like that, but the whole flying gizmo works reliably only because it’s tested into oblivion.
Less crunch would obviously be desirable. But even a perfectly planned project with 100% delay-free execution will still end up with some crunch, if only because test cycles are the only guarantee of quality: there will always be pressure to use any available time to run more of them, and to prevent mishaps by making procedures crunch-proof rather than by avoiding the crunch.
I did a little searching about this. The project was green-lit in mid-2000 with a launch window in the summer of 2003: about three years to build not one but two rovers and get them to Mars for a 90-day mission. Check out this PDF of a memo from what would be riiiiight smack in the middle of that schedule:
The NASA Office of Inspector General (OIG) conducted an audit of the Implementation of Faster, Better, Cheaper (FBC) policies for acquisition management at NASA. By using FBC to manage programs/projects, NASA has attempted to change not only the way project managers think, but also the way they conduct business. Therefore, we considered FBC a management policy that should be defined, documented in policy documents, and incorporated into the strategic planning process. Although NASA has been using the FBC approach to manage projects since 1992, NASA has neither defined FBC nor implemented policies and guidance for FBC. Without a common understanding of FBC, NASA cannot effectively communicate its principles to program/project managers or contractor employees. In addition, the Agency has not incorporated sufficient FBC goals, objectives, and metrics into NASA’s strategic management process. Therefore, missions completed using FBC are outside the strategic management and planning process, and progress toward achieving FBC cannot be measured or reported. Finally, NASA has not adequately aligned its human resources with its strategic goals. As a result, the Agency cannot determine the appropriate number of staff and competencies needed to effectively carry out strategic goals and objectives for its programs.
My paraphrase: Y’all told everyone to do stuff faster, better, and cheaper, but then didn’t actually make any policies for how to do that, or how to measure your success at doing that. Oh, and y’all suck out loud at staffing.
They include the management response which was basically: Well… yeah that’s a fair point. Also it’s not Faster Better Cheaper’s fault we suck at staffing! We just suck at staffing in general. We plan to develop plans to fix that next year!
I’m not joking about that “plan to develop plans” part btw. Here’s the full quote:
NASA also only partially concurred with the recommendations to align staffing with strategic goals because management does not view FBC as the cause for the staffing issues identified. However, NASA plans to develop a workforce plan for each Center that will link staffing, funding resources, mission and activities and core competencies. In addition, the fiscal year 2002 Performance Plan will include a discussion of Agency human resources.
Big oof.
Despite all of this, the rovers meant to last like 3 months lasted 6 years and 14 years respectively. ¯\_(ツ)_/¯
Good point! Another aspect is that you should design systems in such a way that inadvertent misconnections become impossible, even for low-level testing. If that’s not possible with the hardware, in the given case a very simple pre-test would have been to test the impedance and resistance and abort in any case of excessive measurements.
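To make the suggested pre-test concrete, here is a small sketch; the nominal resistance band and the `pretest` function are hypothetical, invented purely for illustration:

```python
# Hypothetical pre-test: before applying power through a breakout box,
# measure the harness and abort on any out-of-band reading.
# The nominal band below is made up for illustration only.
EXPECTED_OHMS = (90.0, 110.0)

def pretest(measured_ohms: float) -> None:
    """Raise if the measured resistance falls outside the expected band."""
    lo, hi = EXPECTED_OHMS
    if not lo <= measured_ohms <= hi:
        raise RuntimeError(
            f"abort: measured {measured_ohms} ohms, expected {lo}-{hi}"
        )

pretest(100.0)  # in band: the real test may proceed
```

A miswired or shorted connector would show up as an extreme reading and abort before any power is applied.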
To build a bridge to programming: Design your interfaces such that they cannot be broken with bogus input. This especially applies to low-level functions that are only explicitly called in tests, because you can mess up test inputs easily by accident. One approach is to use a strong type system, e.g. a function “{Real, NaR} log10(x:Real)” is much more fragile than “Real log10(x:StrictlyPositive)”, which is constrained by the type system not to yield NaR (not a real) in any case.
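As a rough sketch of the same idea in Python (which enforces the constraint at construction time rather than statically, but the shape carries over; the `StrictlyPositive` type here is invented for illustration):

```python
import math

class StrictlyPositive(float):
    """A float subtype whose instances are guaranteed to be > 0."""
    def __new__(cls, value: float) -> "StrictlyPositive":
        if not value > 0:
            raise ValueError(f"expected a strictly positive number, got {value!r}")
        return super().__new__(cls, value)

def log10(x: StrictlyPositive) -> float:
    # The precondition is enforced when the argument is constructed, so
    # this can never be handed 0 or a negative and never yields NaN/-inf.
    return math.log10(x)

print(log10(StrictlyPositive(1000.0)))  # → 3.0
```

Bogus input fails loudly at the constructor, long before it can silently poison a test through a "not a real" result.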
I’m working on a system for autogenerating content for a new twitch channel using ChatGPT around the concept of TTRPGs like D&D and Pathfinder. It’s live on twitch now, but much of the content is still very basic. We’ve got both some prompt tuning to do and some tweaks to our interface with Unreal. It’s been an interesting sort of crash course in LLMs, Text to Speech, game engines, all kinds of stuff.
This might be a silly question, but… why would you WANT to treat JSON as a YAML subset? Is it just a fun exercise, or are there real use cases/needs this solves?
There are two schools of thought in technology development.
School one is that tech development is full of little robots working at little stations. Your job is to optimize the robots, work, queues, and pathways in order to optimize flow. In this school, tech development is a big factory: ideas come in on one side, finished tech goes out the other. You can see this in product development in “The Principles of Product Development Flow”. For DevOps the keystone book is probably “The Goal”. DevOps even goes so far as to literally call the thing a “pipeline”. I guess we should be lucky nobody used the phrase “conveyor belt”.
School two is that tech development is all about scoping, identifying, and understanding a problem well enough that it “goes away”, i.e., computers do everything and people are free. In this school, you want to kill the robots, the flow, the workstations, and the rest of it. The more complicated it is, the more complicated it gets, and it’ll only get worse. Tech development is about combining creativity, logic, and problem-solving. It’s nothing like building Toyotas at all. We’re destroying the factory, not trying to make it run better.
Traditional managers, directors, and C-level folks are all taught school one. After all, they couldn’t understand or do school two even if they wanted to. So whatever BigCorp you’re in, you’re most likely going to be stuck in some kind of Toyota Production System knockoff. Sometimes it doesn’t hurt too much, but it’s always suboptimal. Building cars ain’t creating new tech solutions. It never will be.
And if you’re a cofounder of a bootstrapped product company, it seems obvious to me that you should implement school two all the way. Any recommended resources for learning to do this?
Anecdotes follow, but I’d love to hear data both for and against!
They forget the art of engineering. The science is supposed to be repeatable, but the art is knowing what will be easy to enhance and maintain while also being satisfying to work on, so that you don’t suffer turnover driven by negative emotions.
You also have to factor in “wanderlust” that occurs from doing the same basic thing for a while. Be ready for your team members to just get bored with what they’re doing, and encourage them to branch out and move!
Generally speaking, the teams I led this way would be more relaxed, didn’t have to work nights and weekends, and were CONSISTENTLY better and faster than the “people are robots driven by arbitrary quotas and dates” crews.
Any time someone started throwing out constant arbitrary fixed goals, we did better when I REFUSED to dump that pressure on the team. If they’re possible to hit, they’ll get hit whether we focus on the random date or not. Most of the time arbitrary quotas/dates make people rush to the quota and stop when they hit it, or slack until the date looms and rush to hit it. In both cases quality or cost suffers.
Yes, all the time. I spend at least a couple of hours per week tinkering with something. I ran my own Jenkins to tinker with automation. I fired up a postgres database to try storing some data in it. I’ve built systems to download and analyze data automatically. I blow away the OS on a machine and install it from scratch now and then.
Thank goodness I do, too. This actually came in very handy when I started tinkering with Selenium for web UI automation recently. A few weeks later we needed to automate something in a web-based application that wasn’t exposed via the API, and Selenium gave me a quick-n-dirty option to get it done!
It’s probably an expression of political distaste for overt references to furrydom rather than an authentic opinion that this article’s content is off-topic. I think this is absolutely topical content myself, but I’ve seen plenty of articles posted that I also thought were entirely topical (some of which I posted myself) that got off-topic or other flags because they were triggering to the political sensibilities of other users.
I get off-topic downvotes for my posts with Mara too. Some of the graybeards here really dislike furries for some reason I can’t comprehend. I hope they can find something better to do than downvote furry-adjacent content. Anyways, keep up the good work!
I’m that kind of person, though I don’t have a gray beard. To me it’s just cringe (for lack of a better word), just like an unironic “euphoric” atheist, a gun-obsessed anarcho-capitalist, a “My Little Pony” fanboy, or a Western-anime otaku. I honestly don’t see what the difference is.
Any blog that mixes in that kind of usually-fringe subculture is fine by itself (people are strange), but I have my doubts about how relevant it is to a general-public site like Lobsters.
That being said, I didn’t flag it, I’ll just be hiding it.
In principle, yes, but we often have discussions on the form of sites (don’t post Twitter threads, avoid Medium, not loading without JS, too-low contrast, automatically playing videos), and interspersing a page with furry imagery is just something that some people are used to (apparently this is an American thing), and others are not.
Eurofurence, Nordic Fuzz Con, and FurDU are just a few of the international furry conventions that attract thousands of attendees every year (COVID notwithstanding).
Honestly that comes off as saying that McDonald’s isn’t an American thing because they have joints all over the world. Have you ever wondered why we are writing in English? I think everyone knows that American culture has a kind of dominance that no other culture has, because of Hollywood, TV series, and media in general. It’s always the de facto standard, and almost anything that is a thing in the US has a following somewhere else. That has only intensified with the internet. But if anywhere in this thread, this is the point where we would be crossing over into off-topic territory, so I’d suggest we agree to disagree.
The furry fandom has its roots in the underground comix movement of the 1970s, a genre of comic books that depicts explicit content.[5] In 1976, a pair of cartoonists created the amateur press association Vootie, which was dedicated to animal-focused art. Many of its featured works contained adult themes, such as “Omaha” the Cat Dancer, which contained explicit sex.[6] Vootie grew a small following over the next several years, and its contributors began meeting at science fiction and comics conventions.
So it literally comes from the US. But setting that aside, even if I didn’t know that, it’s something so inherently American that I would have been really surprised if something that at the same time desexualizes bestiality (by removing the inherent link) and sexualizes animals (by giving them human cues of attractiveness and anatomy) could have come from anywhere else.
Edit: Also, I was curious and looked it up: “Nordic Fuzz Con” had 1499 attendees in 2020, but considering how many countries these people came from, that’s approximately 0.000008% of the population. It’s common that when people are too online, they overestimate how large their bubble really is. “Eurofurence”, with almost twice as many attendees, isn’t much better off.
That’s super off-topic for the discussion, but I’ve recently changed my mind about “American culture”. I now feel that a significant part of it is just universal, liberal culture, and not specifically American (hamburgers, pizza, and sushi being fun gastronomical examples). This post changed the way I think about this.
I don’t know why you think it is [an American thing].
Probably due to mako’s comment, which said they “always considered it an American subculture”. I hadn’t heard of it being American before… thanks to your comment I’ll unlearn that.
I think you could tack on just about any group and the content would be pretty much the same. “…for punks,” “…for people with a pulse,” or whatever. I’ve no strong opinion on furries. As long as their hobbies are not hurting anybody, I’ll just file it in the “not my thing, but not hurting me” bucket and see if the rest of what they have to say is interesting or not.
Technology doesn’t exist in a vacuum. Practitioners, users, researchers, and creators are people whose experiences of technology will be informed by their lifestyle preferences, race, gender, queerness (or not), positionality in society, past experiences, mental health, hobbies, friends and so on.
It’s ridiculous and downright depressing to me that anyone would consider a blog off topic because the writer chose to make their technical narrative their own. It strikes me as the kind of narrow thinking that leads the tech industry to not be a very accessible or diverse place in general.
Divorcing technology from the real world leads to isolation and atrophy (to borrow the words of Courant). It reduces diversity, leads to moral atrophy, and systems built without empathy for users.
The cringe is a reaction of your own, not the content itself. I would avoid downvoting a post just because of my relationship to it, so I’m glad you made the same call.
Lobste.rs caters to a very specific subculture that exists in the IT sector that is in itself part of a broader subculture of technology creators and maintainers. It’s just that you think your subculture is important enough to be let in and others are not.
You’re right that “technology” is a subculture, but my claim is that we are perpendicular/stochastically independent to “furry culture”.
It’s just that you think your subculture is important enough to be let in and others are not.
I would very kindly ask you not to be this elitist about this; this is explicitly a technology site, with no further designations. The community has its tendencies, one way or another, but that doesn’t change the fact that the average reaction to something as obscure as a “furry” will be one of some hesitation. This isn’t anything personal; I can imagine that if I went to some “normal” site like Facebook and started talking about the need for Free Software, most people would consider me crazy.
It’s the exact opposite of being elitist, it’s about being inclusive. You call “technological community” a thing that is aligned to your culture and values and it’s just a very small fraction of the people that produce digital technology. You universalize it because you cannot conceive that there might be different ways than yours of producing technology together. You believe your way is THE way and you reject other ways.
I don’t think it’s greybeards, rather non-Americans. I’m in the UK, London, and if there’s a furry subculture here it is so microscopic that I’m not aware of it. I’ve always considered it an American subculture, and possibly mostly silicon valley, but certainly for non-Americans I think it’s very obscure. I didn’t vote either way, and have no idea what the furry thing is about, just glimpse it once in a while.
For what it’s worth, in America you don’t just see people walking around expressing as furries while they shop for groceries. Most of us have never run across the culture in person. I think it’s not that this is an American phenomenon but that online spaces are safer, so that’s where you (and we) see them.
I really enjoy most of the aesthetic of your pages, and the technical content! I just don’t like the random stuff being jammed in between it. I don’t need a bunch of reading space occupied by a full color, artistic, glorified selfie 6 times. Or in the case of Mara’s first appearance, 16 times.
Folks, this is a nice high-effort post about implementing security, with code and references and the whole shebang. It isn’t shilling a service, it isn’t navel-gazing on politics, it isn’t even some borderline case of spamming a blog to get more views without care for the community.
Anybody who flagged this as off-topic either didn’t read the article or is a tremendous asshole.
Anyone who flagged this as spam either didn’t read the article or is a tremendous asshole.
If the reference to furries in the title rustled your jimmies, despite the site policy here being to use the original title as close as possible, and you were unable to evaluate the quality of the article on its own merits, you’re a tremendous asshole.
Furry is my blog’s aesthetic and theme, and a significant chunk of the content, but the focus is 99% encryption. The parts that are furry-relevant are:
A lot of tech workers are furries (or furry-adjacent).
I’ve found that furries are generally more comfortable with the abstraction of “identity” from “self” than non-furries. I generally attribute this to the prevalence of roleplay in our culture. (I remarked on this detail in the post.)
Implied but never stated in this particular article: Since roughly 80% of furries are LGBTQIA+, and queer folks are likely to be discriminated against in many locales, improving furry technology will likely have a net positive impact on queer privacy in oppressive societies.
This page isn’t so much for furries as it is from a furry, published on a furry blog, and with a bad furry pun in the title.
You don’t actually need to entertain anti-furry sentiment. And do not worry either, there’s also people who appreciate this. I’d rather see furries than most common traits of the modern web.
A lot of tech workers are furries (or furry-adjacent).
I don’t doubt that a lot of furries (or furry-adjacent) might be tech workers, but I’m not sure your statement is accurate, given just how many tech workers there are.
The main problem with this kind of title phrasing is the forced communication of a political/sexual/whatever message, which is off-topic for the site; most people don’t care, and don’t want to have to care, about it.
Anybody visiting the link would see that the page has a furry aesthetic. Then they would have the chance to read the article, or close the page. This way a message is promoted on the main page. I think identity politics are already too emphasized and destructive in discussions, and have a bad effect on communities and society. Consider seeing things like a Heterosexual christian father’s guide to unit testing on the front page. Without judging anybody’s identity, this is not the place and form for that topic and that kind of statement.
For some reason you failed to understand my point, and are accusing me of something instead of arguing my points. Most likely this is because of my inability to phrase my point effectively.
But in the same spirit: I wonder why I even need to know anybody’s affiliation at all in the context of a technical discussion?
I wonder why I even need to know anybody’s affiliation at all in the context of a technical discussion?
Because the author decided that their “affiliation” is relevant to their content; that’s it. You don’t need to follow that thinking: you can opt out of reading their article, or even hide it on sites like lobste.rs.
Any article tells you something about the author’s identity and cultural affiliations. And most of us just fill in the blanks with defaults where details are missing, e.g., the author of technical content is often assumed to be male if not stated otherwise. Most of us who grew up in societies with Christian majorities just assume that most guides to unit testing are a variation of the “Heterosexual christian father’s guide to unit testing”.
That’s bad because it taints our perspective, even on the already factual diversity of tech and the net. So IMHO it’s a good thing if more of us make our affiliations explicit and maybe even reflect on how those influence our perspectives.
Your points aren’t worth arguing. You assert several things (“most people don’t care,” “have a bad effect on communities”) without any supporting evidence. To the first about whether people care and “don’t want to care” – I don’t find that persuasive even if you can provide evidence that a majority of people don’t want to be confronted with the identities of people who’re considered outside the mainstream. But I also suspect you’re making an assertion you want to be right but have no evidence to back up.
Likewise, what even is a “bad effect on communities and society”?
You also express an opinion (“I think identity politics are already too emphasized”) which I heartily disagree with, but that’s your opinion and I don’t see any point arguing about that. OK, you think that. I think too many craft beers are over-hopped IPAs and not enough are Hefeweizens. The market seems to disagree with me, but you’re not going to convince me otherwise. :-)
Start with a thought-terminating cliché. Then you start arguing my points. :) No problem.
To the first about whether people care and “don’t want to care” – I don’t find that persuasive even if you can provide evidence that a majority of people don’t want to be confronted with the identities of people who’re considered outside the mainstream.
I understand your points, but you didn’t really grasp what I was trying to say. IMHO “mainstream” and other identities should not confront each other here unless they are technically relevant ones, about which technical discussion can be carried on. There are other venues for those kinds of discussions.
Lucky someone has managed to phrase my ideas better than I could above:
As I understand @kodfodrasz, they were bothered not inherently by the reminder of the group’s existence, but by the broadcasting of that reminder to the Lobsters front page. When an article title on the front page asserts the author’s voluntary membership of a group, that is not only a reminder that the group exists—it’s also implicitly an advocation that the group is a valid, normal, defensible group to join. One can agree with the content of such advocacy while also disliking the side effects of such advocacy.
What side effects would those be? @kodfodrasz said that “identity politics are already too emphasized and destructive in discussions, and have a bad effect on communities and society”. I think they are referring to way advocacy for an identity can encourage an “us vs. them” mindset. Personally, I see the spread of that mindset as a legitimate downside which, when deciding whether to post such advocacy, must be balanced against the legitimate upside that advocacy for a good cause can have.
My assertion is that currently I see a trend where legitimate topics are not discussed because some participants in the discussion have specific opinions on topics other than the one discussed. Dismissing on-topic opinions over off-topic opinions is an everyday occurrence, and if bringing our off-topic identities to the site gradually became more accepted, that trend would also creep in from other parts of society, where it has already done its harm.
I hold this opinion as a guide for every off-topic identity. With regard to this forum, I think of it a bit like the separation of church and state that has happened in most of the Western world.
by the broadcasting of that reminder to the Lobsters front page
The submitter (author in this case) has one “vote” in promoting their content on this site. Usually one net upvote keeps stuff in /new and outside the front page. What’s promoted this content to the front page is the site’s users, who have upvoted it enough to appear on it.
At time of my writing this comment, the current standing is
50, -7 off-topic, -4 spam
Also note that comments themselves contribute to visibility, so everyone commenting complaining about this being off-topic and “in your face” aren’t helping their cause…
When an article title on the front page asserts the author’s voluntary membership of a group, that is not only a reminder that the group exists—it’s also implicitly an advocation that the group is a valid, normal, defensible group to join.
Are you (or @kodfodrasz) implying that identifying as a furry is in some way so dangerous as to be suppressed by society at large?
What if it’s just interlaced with drawings of BDSM activities, like that old GIMP splash screen? I wouldn’t be caught dead scrolling that (nor opening GIMP) at work.
Yep. There are people, for example, for whom submission is not a sexual thing but instead about being safe and there are people for whom having a little (in the subcategory of dd/lg) is about having somebody to support and take care of and encourage in self-improvement.
That’s not everyone, the same way that there are in fact furries who are all about getting knotted.
My point is just that if you want to go Not All Furries, you should be similarly rigorous about other subcultures.
o/ I’m asexual but still very into BDSM (and also a furry!). I know what something being sexualised feels like — took a while to get here — and while a lot of people do link the two intimately (as many do for furry things), they aren’t dependently linked.
Actually, I know a real example. There is a Python-related French blog named Sam et Max. The technical articles are generally considered high-quality by the French-speaking Python programmers. But there are also BDSM- and sex-related articles alongside the Python articles. Even within a Python-related article, the author sometimes makes some references about his own fantasies or real past experience.
As long as there’s no overt pornography, sure. I’d read a good article on crypto that had “by someone currently tied up” on it. What’s the point of writing if you get shamed for putting your personality in it.
Already mentioned elsewhere but it’s my understanding that being a furry isn’t inherently sexual / about sex, though there can be that aspect. I certainly wouldn’t mind a post that was something like “a lesbian’s guide to…” or “a gay person’s guide to..” because those identities encompass more than sexual practices. (Someone elsewhere says that BDSM isn’t strictly speaking sexual, which … is news to me, but I admit my ignorance here. If there’s a non-sexual aspect to BDSM identity then sure, I’m OK with a BDSM-themed post on tech.)
Consider seeing things like a Heterosexual christian father’s guide to unit testing on the front page.
That goes without saying, because that’s the default viewpoint.
The way the author clarifies and establishes their viewpoint does not make their technical content any more off-topic than someone submitting something titled “A Hacker’s Guide to MFA” or “An SRE’s Guide to Notifications”. The lens that they are using to evaluate a technical topic is an important piece of information that we oftentimes forget in tech, with disastrous outcomes.
No, it is not necessarily the default. But even if it were, articulating that off-topic identity on the front page would be unnecessarily divisive, and I’m pretty convinced that people of other identities would flock to the comment section claiming that the post is racist (sic!), is not inclusive, and hurts their feelings, and I think they’d be right (on this site).
Hacker or SRE are on-topic tech identities themselves, while sexuality, political stand, religion are not really.
Hacker is a political identity. For instance, it’s one that I find really degrading when applied to the whole profession. The nerd identity, and the general infantilizing of programmers, is degrading as well. These are tolerated because they are the majority’s identity in this specific niche and presented as “neutral” even though they are not.
Well, I see some positive vibe around the hacker word in the IT sector; remember, there was that hacker glider logo thingie around the millennium. I’m not one of them, and I agree with you: I also find hacker somewhat negative, not because of the “evil hacker” sense but because of the unprofessional meanings of the phrase (e.g. quick hack). Still, lots of fellow professionals don’t agree with us on this one.
Regarding Nerd: I also find the phrase degrading, and I don’t understand those who refer to themselves as nerds in a positive context.
I don’t understand those who refer to themselves as nerds in a positive context.
The best way of removing the degrading connotation of a word is to rewrite its meaning. The best way to do that is to unironically use it in a neutral-to-positive context.
yeah but the problem is what you want to reappropriate. The word “slut” has been reappropriated to defend the right of men and women to have sex freely without judgement. The word “nigger” has been reappropriated because black people are proud of being black. But the word “nerd”? “Nerd” means being obsessed with stuff and having very poor social skills and connections. Reappropriating the word flirts very closely with glorifying social dysfunction, exclusion and individualism.
but nerd is imho all negative. The positive connotations, like being dedicated and consistent in a practice, are not exclusive to being a nerd. Being a nerd is not even stigmatized anymore: now it’s cool to be a nerd, and still it’s degrading, like being a circus freak. You reappropriate a word to remove a stigma towards a category, but the stigma is already gone and what is left is a very distorted portrayal of knowledge workers.
That the stigma is gone is precisely because people took the term and ran with it.
Besides, I have no problem with assholes (whose opinion of me is no concern of mine) considering me a circus freak: it makes them keep themselves at a distance which means less work for me to get the same desirable result.
(Also: I disagree with the term “nerd” glorifying “social dysfunction” - normalizing, maybe, but that’s a very inclusive stance, especially when these “dysfunctions” are called by their proper name: neurodiversity. And what precisely is the problem with individualism again? And another tangent: knowledge workers aren’t necessarily nerds and nerds aren’t necessarily knowledge workers)
I agree with all your values, but it doesn’t seem like this is what’s happening in the real world. Inclusion of neurodiversity is happening only in a small bubble in the USA/NE: if anything, neurodiverse people are just more aware of being different. Good for coping, not that good for social inclusion. Really neurodiverse people are still rejected by society at large, and at best they get tokenized and made into heroes but not really included. Also, this appropriation of the word has detached the concept of nerd from neurodiversity; if that link was ever a thing, it’s not a thing now. Today being a nerd is wearing glasses and a checkered shirt. If you flirt flawlessly with girls, entertain complex social networks and work as a hairdresser, it’s enough to say your hobby is building radios and boom, you’re a nerd. I don’t see how this process helps neurodiverse people, and I don’t see how it’s good to have to live up to this stereotype to be included in the IT industry (because in most places, if you are not some flavor of nerd/geek, you’re looked at with suspicion).
For most people, “Furries” is “that weird sex thing”. I can see a lot of people wanting to make it clear that sexual references are out of place in order to make tech a more comfortable and welcoming place for everyone. I suspect that famous Rails ‘pr0n star’ talk has (rightly) made people feel uncomfortable with sexual imagery in tech.
I’ve upvoted because the content is good, but I’m also not really one for keeping things milquetoast. I’d like to see more content like this. The technical parts are worth reading, even though I have no interest whatsoever in furries, and mildly dislike the aesthetic.
And yes – I’ve discovered today via google that it’s only a sex thing for 30% to 50% of the people in the subculture, but as an outsider, the sexual aspect is the only aspect I had ever heard people mention.
Going forward, I’d just suggest ignoring the downvotes and moving on – they’ll always be there on anything that’s not boring corporate talk, and the threads like these just suck the air out of interesting conversation.
Is it, though? If it was written as “a teacher’s guide to end-to-end encryption” would anybody be flagging it or carping about the title just because the intended / primary audience was teachers but the content could be abstracted to anybody who cared about end-to-end encryption?
That’s a good type of question to ask, but your example title “A Teacher’s Guide …” is not equivalent. The author being a teacher could be highly relevant to the content of the article; for example, the article might especially focus on the easy-to-teach parts of encryption. The author being a furry, however, is likely to affect only the theme.
Analogous titles would change “furry” to another subculture that is not innately connected to tech and that people choose rather than being born with. Two examples:
“Hide my Waifu: An Otaku’s Guide to End-to-End Encryption”
“Communication is Key: A Polyamorous Person’s Guide to End-to-End Encryption”
Would people complain about those titles? I predict that yes, some people would, though fewer than those who are complaining about the furry-related title.
Obviously it’s great that someone wants to give us this information. In return we should give them respect and thanks.
Showcasing their identity not only gives personal color to the post, it also donates some of the credit to the community they identify with, rather than to some default security engineer type we might imagine.
Thanks to this personal touch, some readers can no longer say furries are unintelligent, or never did anything for them.
Why do people say hello to docker in the first place?
Because unix, Make, compiled code, and shell script glue isn’t powerful enuf?
Or to make devops a job title that means something.
–Sarcastic and jaded, systemd still sucks
Well, for example, when you’re not working with compiled code (or you’re working with loads of shared libraries), the system configuration starts playing a big role in how your program behaves.
For example, you have a Python webserver. Firstly, do you have the right Python available? And if this server does something like PDF rendering it might call out to another application to do so, so is that installed properly?
I think that this stuff is more avoidable with, like, proper compilation artifacts, but it can be painful to reproduce your production system locally when you are doing something more novel than just querying a DB and generating text.
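The Python-server scenario above can be sketched as a Dockerfile. This is a hypothetical example: wkhtmltopdf stands in for whatever external PDF renderer the server shells out to, and the file names are illustrative.

```dockerfile
# Pin the interpreter so "do you have the right Python?" is answered by the image.
FROM python:3.8-slim

# Install the external PDF-rendering tool the server calls out to
# (wkhtmltopdf here is a stand-in for the real dependency).
RUN apt-get update \
    && apt-get install -y --no-install-recommends wkhtmltopdf \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /srv
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "server.py"]
```

The point is that both the language runtime and the system-level helper are captured in one reproducible artifact instead of being implicit host configuration.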
Well for example when you’re not working with compiled code
Have you actually ever worked with a bigger C/C++ application? :) I’ve never seen any ecosystem where it is harder to separate your system’s library versions in /usr from custom-installed stuff. Everything from Python, Ruby, PHP is so much easier to work with in this regard.
One of the reasons is because “we have a Kubernetes / OpenShift / Frozbinitz Special cluster” that is supported by “the enterprise”, and I can do my initial work on my laptop with Docker.
I don’t like Docker the company, but Docker the command line tool is fine. Docker Desktop has the potential to be great, but it seems like it would require Docker the company to be purchased by someone less dependent on registered users as a metric for survival.
Here’s one example. At my last job, I first introduced Docker to our environment because, sadly, we had a few Node.js applications written at various points in the company’s history each of which required a particular version of the Node runtime and also particular versions of various modules that weren’t compatible with all the versions of the runtime we were using.
It made the production configuration a bit of a nightmare to keep track of. If you wanted to run two of these on the same host, you could no longer just install a Node system package, etc.
Packaging the applications in Docker containers eliminated all those headaches in one step.
We were not running Kubernetes or any other orchestration system; we just stopped deploying directory trees to our servers. We still used scripts to do our deployment, but they became much simpler scripts than before.
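A minimal sketch of that packaging idea, assuming each app gets its own image with its runtime version pinned (the version number and file names here are illustrative, not the actual ones):

```dockerfile
# Each app pins its own Node runtime, so two apps with incompatible
# requirements can run side by side on the same host.
FROM node:10.24.1-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci          # install the exact module versions from the lockfile
COPY . .
CMD ["node", "server.js"]
```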
Can you elaborate on this? I’ve mostly used Docker to simplify dependency management, nothing whatsoever to do with any kind of scaling concern. How is it a “hardware solution” when it is a piece of software?
K8s (which is just fancy Docker orchestration) allows you to string containers together. You can find many predefined configs that allow rapid scaling up and down, allowing an “app” to add additional workers when needed, add those to the load balancer, and tear them down when they are idle. It’s really “neat”, but by adding (and subtracting) machines as needed, it is, and this is my point, a hardware solution to a software problem.
Instead of worker threads and processes and LPC you end up with many machines and RPC. The plus side of going with RPC of course is that you do get to scale on multiple physical hosts easily, while a thread aware app now needs to manage LPC and RPC to scale.
It’s “expensive” hardware-wise, but look at Rancher: it’s a quick and easy k8s that deploys painlessly, and once you set up shared storage you can take some predefined app setup, hit the plus button in the UI and watch it auto-scale, then hit minus and watch it right-size.
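The plus/minus-button behavior can also be expressed declaratively. A minimal sketch (the app name is hypothetical) using the standard Kubernetes HorizontalPodAutoscaler:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-workers            # hypothetical app name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-workers
  minReplicas: 2               # "right-sized" floor when idle
  maxReplicas: 10              # ceiling under load
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add workers above ~70% CPU, drop them below
```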
Sure, that makes total sense, but you’re talking about Kubernetes, not Docker, no? How does any of that apply if you’re running a fixed set of standalone Docker containers on a fixed set of hosts, say, or running Docker in development environments to get a consistent build toolchain?
Nice to see the wave of defectors from OSX back to more open platforms continue!
I agree. I’ve always found Xcode frustrating to use and obscenely ponderous to set up as well. It’s not so much that it’s a GUI or that it’s complicated (I think you could say the same of Visual Studio Code or PyCharm, both of which I love); it’s that it gets in the way of what I’m trying to do 100% of the time.
And yeah, the MagSafe connector was a thing of beauty. My wife is clinging to her 2012 Macbook Pro for dear life :)
You also didn’t mention keyboard, although I realize I’m more sensitive to that than most. I find it interesting that the keyboard on my $200 Pinebook Pro is several orders of magnitude better than the one on my work issued 2018 Macbook Pro which feels like typing on squishy oatmeal.
That doesn’t surprise me at all, and I assume that’s the largest migration direction between the “three” platforms. Most average Joe developers are already using Windows for most of their computing needs (games) and probably only used macOS because they “had to” because work involves either a Linux server or an iOS app.
I’m not sure where you get the idea that there is a “wave of defectors from OSX”, the post was most likely written to be posted specifically here. I like the community and I don’t want to be that guy but this community is a small, irrelevant echo chamber in terms of world-wide OS adoption.
I totally agree. There HAS been a wave of people leaving the OSX platform for various reasons, but as another poster wrote they’re not all migrating to FLOSS environments.
A number of them are choosing Windows instead. I myself dual boot and enjoy the best of both worlds :)
Also, nobody other than Apple actually has hard numbers on this.
I just bought the 2020 MacBook Pro and the keyboard is from a different planet than 2018’s; I really like it! The price was painful, but there is music software I need daily that doesn’t run on Linux, so I don’t really have a choice.
Macs come with a lot out of the box, so I don’t do much. My advice would be to install as few things as possible for the first few days/weeks just to see what the builtin stuff has to offer.
From hour one on any Mac I buy, I remap caps lock to control, turn on all of the touchpad and mouse checkboxes, and install homebrew (which pulls in some Xcode utilities). From there I install games and a few packages. I set up iCloud because I use iCloud Drive and iMessage. Other than that, I have about 61 packages installed via homebrew, and most of those are dependencies.
Examples of stuff I brew install or download and install from a 3rd party:
python3 because it’s easier to pip install to the non-system python
screen because it handles emojis for homebrew better than the included default version
pv for viewing progress and throughput on long-running pipes
rust (via the rustup-init package)
CKAN for Kerbal Space Program (MECHJEB 2 for life!)
ncdu for storage usage analysis
neofetch for nerd cred
PrusaSlicer because I have a mk3s + mmu2 and mini
Blender (I don’t know how to use it, but I’m going to learn)
WireShark because I’m a 1337 h4x0r
Visual Studio Code (Surprisingly good!)
IntelliJ (now largely replaced by Visual Studio Code)
Firefox for playing in another browser
For mainstream applications, I tend to use the Apple version if it exists. Terminal instead of a replacement, Pages/Numbers/Keynote instead of MS Office, Mail instead of Outlook or Thunderbird. Pretty much everything else is game clients.
Haskell and Scala are #14 and #15 on the “most loved” list, and nowhere to be seen in the “wanted” list.
They actually are #17 and #18 in the “most wanted” list. Also, I find this a bit shocking, but both of these languages are actually ranking higher up the “most dreaded” languages list: #11 and #12.
Meh. They’re over-selling the emotion there. The question was apparently more akin to “I use this language and I am / am not interested in continuing to use it.”
When you said you used a language, If you reported you wanted to keep using it they labelled it “loved” and if not, they labelled it “dreaded.”
I used to use Scala a lot. I was more than happy to keep using it, but that doesn’t mean I loved it. Once I started using Rust, I dropped Scala. That doesn’t mean I dread it.
Trey Harris (in response to a similar question at the time) explained: “I used to populate my units.dat file with tons of extra prefixes and units.” In any case, lightseconds (and therefore millilightseconds) is in the standard definitions.units file on Linux these days, so perhaps you could grab a better definitions.units out of https://www.gnu.org/software/units/ if nothing else. (On my machine, the standard units starts up with 2919 units, 109 prefixes, and 88 nonlinear units.)
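As a quick sanity check of what the lightsecond entry converts to (independent of which definitions.units you have), here is the arithmetic in Python, using the defined speed of light:

```python
# The speed of light is exactly 299,792,458 m/s by definition of the metre.
C_M_PER_S = 299_792_458

lightsecond_km = C_M_PER_S / 1_000            # one lightsecond in km
millilightsecond_km = C_M_PER_S / 1_000_000   # one millilightsecond in km

print(lightsecond_km)        # 299792.458
print(millilightsecond_km)   # 299.792458, i.e. roughly 300 km
```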
It does to a degree; for example, Mozilla’s sccache is a ccache like tool with optional support for sharing build artefacts on cloud storage services which ships with support for the Rust toolchain (it works as a wrapper around rustc, so it should work even if you don’t use Cargo to build Rust code). Linking times can still be slow though, and obviously the cache isn’t useful if you’re running beta or nightly and frequently updating your compiler.
«Who’s going to support any Elixir code? What if you get hit by a bus?» (which reads: «please stop what you’re doing, you’re harming the team and the company»)
It’s a little more subtle than that I think. OK, the author is a Star Trek fan, right? Remember when they had a critical mission to get help for a kid on the ship, and then Data goes nuts and takes over the ship, flying it to a different destination? That could be what the author is doing from their team’s perspective.
I’ve observed two major projects where a single engineer decided on a different language from what the rest of the team was using without getting at least a few other people on board, and the result has been consistent: the project either dies, or gets rewritten in the language the rest of the group already knows.
What is in this that isn’t either out of date or already rehashed endlessly at this point? As of May 2020, is there any new insight to be gained in talking about how math-y and hard Haskell is, or in framing Haskell vs. Go as The Right Thing vs. worse-is-better? And finally, do we have to take terribly conceived diagrams seriously if the author self-identifies as a “Haskell ninja”?
Plenty of people are writing Haskell and using it in production or for their hobby projects. Stack and cabal and nix taken as a whole have made major inroads into dealing with “cabal hell” (or “DLL hell” as this piece calls it). Go continues to be both decidedly mediocre and inflexible, easy to pick up, and more popular than ever. Haskell remains pretty different from a lot of other languages and has a steep learning curve. News at 11
Since 2011…? Off the top of my head, as a casual (and no-longer-zealous) Haskell user:
Space leaks are still a serious and deep-rooted problem. Better tooling to deal with them now exists, though.
There are now two major package management systems, not quite compatible with each other, and a fair amount of lingering ill-will from the split. Three if you count Nix.
Hackage is kind of like NPM: tons of low-quality and half-finished libraries with no easy way to find the good ones.
The proliferation of language extensions has fragmented the language even further, and many local dialects have emerged. Libraries can be inscrutable. Reminds me of C++.
Learning curve steeper than ever. Plenty of beginner books of dubious quality. Plenty of academic papers pushing the state of the art in FP. Not much in the middle.
Culture still largely privileges cleverness “elegance” above practicality. Oh, academia.
So, overall, I think the post isn’t exactly a brilliant epiphany, but presents a fairly even-handed perspective, and has aged pretty well. You might have seen these points rehashed endlessly, but I don’t think everyone on Lobsters has.
The language itself has many great points, certainly rewards the effort put into learning, and for a few domains it’s by far the best choice despite these drawbacks. I wouldn’t fault anyone for not wanting to use it, though, especially if they’re not in one of those domains. Go or something else might very well be a better choice, especially if you need to hire.
The community, as such, is kinda bleh in my highly subjective opinion, but hardly the worst. Hard to climb that steep curve without engaging with the community. Might just be my perception, but it seems net language snobbery has increased along with industrial uptake. The established academic users are as magnanimous as ever, but now surrounded by a rising population of younger, more competitive and hungrier users, some of whom are maybe just a little insecure about the secret-weapon superpowers they haven’t quite mastered.
I was objecting to the context-free drive-by posting of an old, fluffy, somewhat out-of-date article on a topic that is already discussed a lot here, or at least feels that way to me. I don’t necessarily disagree with any of your points but they weren’t raised in the piece and only seem to be relevant because they are criticisms of Haskell, of which there are plenty to go around. It doesn’t seem like you or other posters are responding to my actual comment, and more disappointingly it seems like my comment has become yet another excuse for some to go off on negative rants about Haskell. Damned if I do, damned if I don’t…
edit: forgot the quote that triggered this… well quasi-rant. I think it’s related to the perception of secret-weapon superpowers that people seem to have about Haskell. I used to think Haskell was Superman: An alien from another world that has all of these incredible powers that other mere earthlings cannot achieve. Now I think it’s more Batman.
The established academic users are as magnanimous as ever, but now surrounded by a rising population of younger, more competitive and hungrier users, some of whom are maybe just a little insecure about the secret-weapon superpowers they haven’t quite mastered.
I keep finding myself looking for a nice, cozy functional language to roll around in. Every time I look I hear people speak of the beauty and elegance of type systems, or the lack of side effects, or laziness, and it all sounds just so magical!
Often I sit down to start with one of these languages and immediately find myself a bit lost in the syntax. It’s not the mathy parts - it’s the quirks of how the language changes the names of common things.
Let’s use an intro CS example:
putStrLn (Haskell)
IO.puts (Elixir)
text (Elm)
Yeah, you can probably guess what these things do. Yet how many of them would you guess on day 1 of switching to that language? How many computer science, or even basic programming courses start with a language that uses anything except a variant of “print”? (Before you go “MIT with Scheme!”, they’ve switched to Python.)
There’s a thing authors do when they’re trying to immerse you in a world that is supposed to feel alien, and that’s changing familiar things to make them feel unfamiliar. That’s fine and all, but not particularly accessible. I’m not looking for some unique, immersive experience, I’m looking to get stuff done. I get it: It’s not a line printer anymore. The save icon is still a floppy disk though, and the idiom of “Print” is well established and has been for like decades now.
Uuhh, print and putStrLn are two sides of the same coin; normally I’d use print since it has the more flexible signature (Show a => a -> IO () vs String -> IO ()).
So in Haskell print = putStrLn . show; a distinction is drawn, but it’s not missing.
TryHaskell on Haskell.org shows putStrLn first, and you have to dig into the I/O functions to find print.
In the Haskell wiki, they even call the action of putStrLn “printing”, then switch to print later on without explaining it. Clear as mud. OK, so let’s play with some output to see if we can figure it out.
Prelude> putStrLn 4
error: No instance for (Num String) arising from the literal ‘4’
Prelude> putStrLn "4"
4
Prelude> putStrLn "四"
四
Got it. Integers aren’t strings, so that no worky. OK, so print = putStrLn . show:
Prelude> print "四"
"\22235"
Wait, what happened there? Why wasn’t it 四? Well, it turns out that show’s signature is show :: a -> String, but the devil’s in the details. The byline is “Conversion of values to readable Strings.”, but the detail is that show outputs syntactically correct Haskell expressions, not plain strings. That’s why I get the quotes around the string (even though they’re not part of the string) and the character escaped to its decimal code point rather than the character itself.
Everything fed into show keeps its types though, right?
Prelude> show [1,2,3]
"[1,2,3]"
Prelude> show ["a","b","c"]
"[\"a\",\"b\",\"c\"]"
Prelude> show ['a','b','c']
"\"abc\""
Wait, what? Ohhhh, strings are just a list of chars in Haskell, and they get auto-converted to a string, losing their “listyness” for basic output.
Just for giggles, let’s try something similar in python:
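The Python example got elided above; here is a reconstruction of the sort of comparison presumably intended, showing that Python’s print does no Show-style quoting or escaping:

```python
print("四")              # prints: 四  (no quotes, no code-point escape)
print(["a", "b", "c"])   # prints: ['a', 'b', 'c']
print(list("abc"))       # prints: ['a', 'b', 'c']  (a list of chars keeps its listyness)
```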
Most people would probably just shrug at it and keep coding, in all fairness. Haskell is far from the only language with that weirdness too! It just seems so unnecessary, and so prevalent in the popular functional languages.
Yes, but it was designed by a committee decades ago, so in my mind it gets a pass on some of these legacy papercuts. In return I get a rock-solid language with lots of mature libraries.
There’s now an edit indicating that the comparison is wrong anyhow: The Rust code was parsing the string many more times than Go because of the way the code was written.
Plus the comparisons were different. Also, the code was a weird mix of looking at bytes and unicode codepoints.
This sloppy-approach-to-everything is sadly a thing I have started to expect from Go users, and one of the reasons why I automatically penalize projects using Go in terms of quality and reliability unless proven otherwise.
This is a lie. If it links to CUDA, it’s not gonna survive after an upgrade. If it’s not, does it use metal framework, and does Apple guarantee backward compatibility? If it’s still not, why bother using it?
Straight from the readme in the link:
GPU Support

On Apple Silicon, everything should just work if Xcode is installed.

On Linux, Nvidia cuBLAS GPU support will be compiled on the fly if (1) you have the cc compiler installed, (2) you pass the --n-gpu-layers 35 flag (or whatever value is appropriate) to enable GPU, and (3) the CUDA developer toolkit is installed on your machine and the nvcc compiler is on your path.

On Windows, that usually means you need to open up the MSVC x64 native command prompt and run llamafile there for the first invocation, so it can build a DLL with native GPU support. After that, $CUDA_PATH/bin still usually needs to be on the $PATH so the GGML DLL can find its other CUDA dependencies.

In the event that GPU support couldn't be compiled and dynamically linked on the fly for any reason, llamafile will fall back to CPU inference.

It’s not clear to me what would happen if my system CUDA libs or Metal frameworks update to a new version. Does it automatically recompile? Does it recompile every time it’s executed? I guess I just don’t trust any document that says some executable “will remain usable and perform consistently and reproducibly, forever” without substantial proof.
Justine told me that in that case it will fall back to CPU inference.
Even that’s quite a claim.
Great story, but probably the most important lesson to learn from this remained un-remarked on in the conclusion:
Whoever was responsible for the crunch-time situation really deserves the blame for the problem, not the person who wired the breakout box.
I am in this world. The deadline is set by the orbit of Mars: if you miss it, you are delayed for two years, so there is an extreme amount of pressure to hit the launch window. Secondly, every space mission of this class is a Fabergé egg, with 217 separate contractors contributing their custom jewels. There are always integration issues, even assuming there wasn’t some fundamental subsystem issue that delayed delivery for integration. Even when rovers are nominally the same platform, they still have quirks and different instruments that mean they are still firmly pets rather than cattle. Given the cost per kg of launch, every subsystem has to be incredibly marginal and fragile weight-wise, else it’s a gramme taken away from the science payloads, which are ultimately the whole purpose of these missions. As a result, things are delicate and fussy and have very un-shaken-down procedures. It’s the perfect storm for double shifts.
There’s also an often-ignored aspect that’s easy to miss outside regulated fields: there is an ever-present feeling that there is one more thing to verify, and it’s extremely pressing because that may be your last chance to check it and fix it. About half the double shifts I’ve worked weren’t crunch time specifically, we weren’t in any danger of missing a deadline (I was in a parallel world where deadlines were fortunately not set by the orbits of celestial objects). It’s just you could never be too sure.
Also, radiation-hardened instruments and electronic components have a reputation for ruggedness that gives lots of people a surprisingly incorrect expectation of ruggedness and replicableness (is that even a word?) about many spacecraft. These aren’t serially manufactured flying machines; they’re one-, maybe two-of-a-kind things. They work reliably not because they’ve gone through umpteen assembly-line audits that result in a perfect fabrication flow, where everything that comes off the line is guaranteed to work within six sigma. Some components on these things are like that, but the whole flying gizmo works reliably only because it’s tested into oblivion.
Less crunch would obviously be desirable. But even a perfectly-planned project with 100% delay-free execution will still end up with some crunch, if only because test cycles are the only guarantee of quality so there will always be some pressure to use any available time to do some more of those and to avoid mishaps by making procedures crunch-proof, rather than by avoiding the crunch.
I did a little searching about this. The project was green-lit in mid 2000 with a launch window in Summer of 2003, so about 3 years, to build not one but two rovers and get them to mars for a 90 day mission. Check out this pdf of a memo from what would be riiiiight smack in the middle of that schedule:
My paraphrase: Y’all told everyone to do stuff faster, better, and cheaper, but then didn’t actually make any policies for how to do that, or how to measure your success at doing that. Oh, and y’all suck out loud at staffing.
They include the management response which was basically: Well… yeah that’s a fair point. Also it’s not Faster Better Cheaper’s fault we suck at staffing! We just suck at staffing in general. We plan to develop plans to fix that next year!
I’m not joking about that “plan to develop plans” part btw. Here’s the full quote:
Big oof.
Despite all of this, the rovers meant to last like 3 months lasted 6 years and 14 years respectively. ¯\_(ツ)_/¯
Good point! Another aspect is that you should design systems in such a way that inadvertent misconnections become impossible, even for low-level testing. If that’s not possible with the hardware, in the given case a very simple pre-test would have been to measure the impedance and resistance and abort on any excessive measurement.
To build a bridge to programming: Design your interfaces such that they cannot be broken with bogus input. This especially applies to low-level functions that are only explicitly called in tests, because you can mess up test inputs easily by accident. One approach is to use a strong type system, e.g. a function “{Real, NaR} log10(x:Real)” is much more fragile than “Real log10(x:StrictlyPositive)”, which is constrained by the type system not to yield NaR (not a real) in any case.
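A rough sketch of the same idea in Python; the type names mirror the comment above, and Python enforces the constraint at construction time rather than in the type system proper:

```python
import math

class StrictlyPositive:
    """Wrapper whose constructor rejects anything that isn't > 0."""
    def __init__(self, value: float):
        if not value > 0:
            raise ValueError(f"expected a strictly positive number, got {value!r}")
        self.value = value

def log10(x: StrictlyPositive) -> float:
    # No NaR/NaN branch needed: the argument type guarantees x.value > 0.
    return math.log10(x.value)

print(log10(StrictlyPositive(1000.0)))
# log10(StrictlyPositive(-5.0)) never runs: the bogus input is rejected
# at construction, i.e. at the test's call site, not deep inside the function.
```

The benefit for test code is exactly the one described: a messed-up test input fails loudly at the boundary instead of silently producing a NaR deep in the computation.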
I haven’t seen NaR before.
In my mind I imagine a NaI (not an integer) could be useful to handle overflow/underflow/divide by zero.
NaR is used for some next generation computer arithmetic concepts like Posits.
I’m running my trivia question test that I did on GPT-3.5 (https://www.sliceofexperiments.com/p/chatgpt-vs-50000-trivia-questions) but now on GPT-4
I think they’re looking for evals to submit for early GPT-4 API access!
Luckily, I was able to get API access a few days after the announcement!
I’m working on a system for autogenerating content for a new twitch channel using ChatGPT around the concept of TTRPGs like D&D and Pathfinder. It’s live on twitch now, but much of the content is still very basic. We’ve got both some prompt tuning to do and some tweaks to our interface with Unreal. It’s been an interesting sort of crash course in LLMs, Text to Speech, game engines, all kinds of stuff.
This might be a silly question, but… why would you WANT to treat JSON as a YAML subset? Is it just a fun exercise, or are there real use cases/needs this solves?
So that if you make a tool that can be configured with YAML, you can say “if you don’t know YAML, just write JSON”.
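Concretely, a sketch with PyYAML (assuming it’s installed; YAML 1.2 defines every JSON document as valid YAML, and PyYAML handles the common cases the same way):

```python
import yaml  # PyYAML, assumed installed

# A user who doesn't know YAML writes their config as plain JSON...
json_config = '{"name": "mytool", "retries": 3, "verbose": true}'

# ...and the YAML-based tool loads it unchanged.
config = yaml.safe_load(json_config)
print(config["retries"])  # 3
```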
There are two schools of thought in technology development.
School one is that tech development is full of little robots working at little stations. Your job is to optimize the robots, work, queues, and pathways in order to optimize flow. In this school, tech development is a big factory. Ideas come in on one side, finished tech goes out on the other. You can see this in Product Development in “The Principles of Product Development Flow”. For DevOps the keystone book is probably “The Goal”. DevOps even goes so far as to literally call the thing a “pipeline”. I guess we should be lucky nobody used the phrase “conveyor belt”.
School two is that tech development is all about scoping, identifying, and understanding a problem well enough that it “goes away”, ie, computers do everything and people are free. In this school, you want to kill the robots, the flow, the workstations, and the rest of it. The more complicated it is, the more complicated it is. And it’ll only get worse. Tech development is about combining creativity, logic, and problem-solving. It’s nothing like building Toyotas at all. We’re destroying the factory, not trying to make it run better.
Traditional managers, directors, and C-level folks are all taught school one. After all, they couldn’t understand or do school two even if they wanted to. So whatever BigCorp you’re in, you’re most likely going to be stuck in some kind of Toyota Production System knockoff. Sometimes it doesn’t hurt too much, but it’s always suboptimal. Building cars ain’t creating new tech solutions. It never will be.
And if you’re a cofounder of a bootstrapped product company, it seems obvious to me that you should implement school two all the way. Any recommended resources for learning to do this?
Anecdotes follow, but I’d love to hear data both for and against!
They forget the art of engineering. The science is supposed to be repeatable, but the art is knowing what will be easy to enhance and maintain, while also being satisfying to work on so that you don’t encounter negative emotion turnover.
You also have to factor in “wanderlust” that occurs from doing the same basic thing for a while. Be ready for your team members to just get bored with what they’re doing, and encourage them to branch out and move!
Generally speaking, the teams I led this way would be more relaxed, didn’t have to work nights and weekends, and were CONSISTENTLY better and faster than the “people are robots driven by arbitrary quotas and dates” crews.
Any time someone started throwing out constant arbitrary fixed goals, we did better when I REFUSED to dump that pressure on the team. If they’re possible to hit, they’ll get hit whether we focus on the random date or not. Most of the time arbitrary quotas/dates make people rush to the quota and stop when they hit it, or slack until the date looms and rush to hit it. In both cases quality or cost suffers.
Yes, all the time. I spend at least a couple of hours per week tinkering with something. I ran my own Jenkins to tinker with automation. I fired up a postgres database to try storing some data in it. I’ve built systems to download and analyze data automatically. I blow away the OS on a machine and install it from scratch now and then.
Thank goodness I do, too. This actually came in very handy when I started tinkering with Selenium for web UI automation recently. A few weeks later we needed to automate something in a web-based application that wasn’t exposed via the API, and Selenium gave me a quick-n-dirty option to get it done!
To whomever downvoted this as off-topic:
…so which topic is it off-?
It’s probably an expression of political distaste for overt references to furrydom rather than an authentic opinion that this article’s content is off-topic. I think this is absolutely topical content myself, but I’ve seen plenty of articles posted that I also thought were entirely topical (some of which I posted myself), that had off-topic or other flags because they were triggering to the political sensibilities of other users.
I get off-topic downvotes for my posts with Mara too. Some of the graybeards here really dislike furries for some reason I can’t comprehend. I hope they can find something better to do than downvote furry-adjacent content. Anyways, keep up the good work!
I’m that kind of a person, though I don’t have a gray beard. To me it’s just cringe (for lack of a better word), just like an unironic “euphoric” atheist, a gun-obsessed anarcho-capitalist, a “My Little Pony” fanboy, or a western-anime otaku. I honestly don’t see what the difference is.
Any blog that mixes in that kind of usually-fringe subculture is fine by itself; people are strange. But I have my doubts about how relevant it is to a general-public site like Lobsters.
That being said, I didn’t flag it, I’ll just be hiding it.
Setting aside how cringe or not it is, we should evaluate the article on its technical merits.
In principle, yes, but we often have discussions on the form of sites (don’t post Twitter threads, avoid Medium, not loading without JS, too low contrast, automatically playing videos), and interspersing a page with furry imagery is just something that some people are used to (apparently this is an American thing), and others are not.
It’s not an American thing.
I don’t know why you think it is.
Eurofurence, Nordic Fuzz Con, and FurDU are just a few of the international furry conventions that attract thousands of attendees every year (COVID notwithstanding).
Honestly that comes off as saying that McDonald’s isn’t an American thing because they have joints all over the world. Have you ever wondered why we are writing in English? I think everyone knows that American culture has a kind of dominance that no other culture has, because of Hollywood, TV series, and media in general. It’s always the de facto standard, and almost anything that is a thing in the US has a following somewhere else. That has only intensified with the internet. But if anywhere in this thread, this is the point where we would be crossing over into off-topic territory, so I’d suggest we agree to disagree.
And regarding
First of all, Wikipedia says
So it literally comes from the US. But setting that aside, even if I didn’t know that, it’s something so inherently American that I would have been really surprised if something that at the same time desexualizes bestiality (by removing the inherent link) and sexualizes animals (by giving them human cues of attractiveness and anatomy) could come from anywhere else.
Edit: Also I was curious and looked it up. “Nordic Fuzz Con” had 1,499 attendees in 2020, but considering how many countries these people came from, that’s approximately 0.000008% of the population. It’s common that when people are too online, they overestimate how large their bubble really is. “Eurofurence”, with almost twice as many attendees, isn’t much better off.
That’s super off topic for the discussion, but I’ve recently changed my mind about “american culture”. I now feel that a significant part of it is just universal, liberal culture, and not specifically American (hamburgers, pizzas and sushi being fun gastronomical examples). This post changed the way I think about this.
Probably due to mako’s comment, which said they “always considered it an American subculture”. I hadn’t heard of it being American before… thanks to your comment I’ll unlearn that.
Lobsters is general public? :-)
I think you could tack on just about any group and the content would be pretty much the same. “…for punks,” “…for people with a pulse,” or whatever. I’ve no strong opinion on furries. As long as their hobbies are not hurting anybody, I’ll just file it in the “not my thing, but not hurting me” bucket and see if the rest of what they have to say is interesting or not.
Technology doesn’t exist in a vacuum. Practitioners, users, researchers, and creators are people whose experiences of technology will be informed by their lifestyle preferences, race, gender, queerness (or not), positionality in society, past experiences, mental health, hobbies, friends and so on.
It’s ridiculous and downright depressing to me that anyone would consider a blog off topic because the writer chose to make their technical narrative their own. It strikes me as the kind of narrow thinking that leads the tech industry to not be a very accessible or diverse place in general.
Divorcing technology from the real world leads to isolation and atrophy (to borrow the words of Courant). It reduces diversity, leads to moral atrophy, and systems built without empathy for users.
And it leads to gatekeeping. Don’t do that.
The cringe is a reaction of your own, not the content itself. I would avoid downvoting a post just because of my relationship to it, so I’m glad you made the same call.
Lobste.rs caters to a very specific subculture that exists in the IT sector that is in itself part of a broader subculture of technology creators and maintainers. It’s just that you think your subculture is important enough to be let in and others are not.
You’re right that “technology” is a subculture, but my claim is that we are perpendicular/stochastically independent to “furry culture”.
I would very kindly ask you not to be this elitist about this; this is explicitly a technology site, with no further designations. The community has its tendencies, one way or another, but that doesn’t change the fact that something as obscure as a “furry” will on average be received with some hesitation. This isn’t anything personal; I can imagine that if I went to some “normal” site like Facebook and started talking about the need for Free Software, most people would consider me crazy.
It’s the exact opposite of being elitist, it’s about being inclusive. You call “technological community” a thing that is aligned to your culture and values and it’s just a very small fraction of the people that produce digital technology. You universalize it because you cannot conceive that there might be different ways than yours of producing technology together. You believe your way is THE way and you reject other ways.
I don’t think it’s greybeards, rather non-Americans. I’m in the UK, London, and if there’s a furry subculture here it is so microscopic that I’m not aware of it. I’ve always considered it an American subculture, and possibly mostly silicon valley, but certainly for non-Americans I think it’s very obscure. I didn’t vote either way, and have no idea what the furry thing is about, just glimpse it once in a while.
For what it’s worth, in America you don’t just see people walking around expressing as furries while they shop for groceries. Most of us have never run across the culture in person. I think it’s not that this is an American phenomenon but that online spaces are safer, so that’s where you (and we) see them.
just how microscopic would it have to be for you to not be aware of it? do you keep tabs on all… culture… in London?
It’s honestly not very hard.
I really enjoy most of the aesthetic of your pages, and the technical content! I just don’t like the random stuff being jammed in between it. I don’t need a bunch of reading space occupied by a full color, artistic, glorified selfie 6 times. Or in the case of Mara’s first appearance, 16 times.
Just posting in support of this.
Folks, this is a nice high-effort post about implementing security, with code and references and the whole shebang. It isn’t shilling a service, it isn’t navel-gazing on politics, it isn’t even some borderline case of spamming a blog to get more views without care for the community.
Anybody who flagged this as off-topic either didn’t read the article or is a tremendous asshole.
Anyone who flagged this as spam either didn’t read the article or is a tremendous asshole.
If the reference to furries in the title rustled your jimmies, despite the site policy here being to use the original title as close as possible, and you were unable to evaluate the quality of the article on its own merits, you’re a tremendous asshole.
I’m not going to flag it, but the “furry’s” bit certainly is off-topic
Furry is my blog’s aesthetic and theme, and a significant chunk of the content, but the focus is 99% encryption. The parts that are furry-relevant are:
This page isn’t so much for furries as it is from a furry, published on a furry blog, and with a bad furry pun in the title.
You don’t actually need to entertain anti-furry sentiment. And do not worry either, there’s also people who appreciate this. I’d rather see furries than most common traits of the modern web.
For certain values of “a lot”. I’d guess that this kind of stuff is more popular in the US than in India.
I don’t doubt that a lot of furries (or furry-adjacent) might be tech workers, but I’m not sure your statement is accurate, given just how many tech workers there are.
The main problem with this kind of title phrasing is the forced communication of a political/sexual/whatever message, which is off-topic for the site; most people don’t care, and don’t want to have to care.
Anybody visiting the link would see that the page has a furry aesthetic. Then they would have the chance to read the article, or close the page. This way a message is promoted on the main page. I think identity politics are already too emphasized and destructive in discussions, and have a bad effect on communities and society. Consider seeing something like a “Heterosexual Christian father’s guide to unit testing” on the front page. Without judging anybody’s identity, this is not the place or form for that topic and that kind of statement.
I wonder why the simple reminder of a group’s existence bothers you so.
For some reason you failed to understand my point, and are accusing me of something instead of arguing against my points. Most likely this is because of my inability to phrase my point effectively.
But in the same spirit: I wonder why I even need to know anybody’s affiliation at all in the context of a technical discussion?
One could make the same argument to flag “Beej’s Guide to Network Programming” or any post about how company X solves their problems.
And usually they do so, considering it as spam, a form of advertisement… Only not of the political, but of the business kind.
I don’t think you are familiar with at least the first example.
But at least I can be familiar with the second example…
Your style is not that of a Friendly engineer.
There was a time he went by a different name…:p (angrysock)
Because the author decided, that their “affiliation” is relevant to their content, that’s it. You don’t need to follow that thinking, you can opt-out of reading their article, even hide it on sites like lobste.rs.
Any article tells you something about the author’s identity and cultural affiliations. And most of us just fill in the blanks with defaults where details are missing; e.g., an author’s gender on technical content is often assumed to be male if not stated otherwise. Most of us who grew up in societies with Christian majorities just assume that most guides to unit testing are a variation of the “Heterosexual Christian father’s guide to unit testing”. That’s bad because it taints our perspective, even on the already factual diversity of tech and the net. So IMHO it’s a good thing if more of us keep our affiliations explicit and maybe even reflect on how those influence our perspectives.
Your points aren’t worth arguing. You assert several things (“most people don’t care,” “have a bad effect on communities”) without any supporting evidence. To the first about whether people care and “don’t want to care” – I don’t find that persuasive even if you can provide evidence that a majority of people don’t want to be confronted with the identities of people who’re considered outside the mainstream. But I also suspect you’re making an assertion you want to be right but have no evidence to back up.
Likewise, what even is a “bad effect on communities and society”?
You also express an opinion (“I think identity politics are already too emphasized”) which I heartily disagree with, but that’s your opinion and I don’t see any point arguing about that. OK, you think that. I think too many craft beers are over-hopped IPAs and not enough are Hefeweizens. The market seems to disagree with me, but you’re not going to convince me otherwise. :-)
Start with a thought-terminating cliché. Then you start arguing my points. :) No problem.
I understand your points, but you didn’t really grasp what I wanted to phrase. IMHO “mainstream” and other identities should not confront each other here unless being technically relevant ones, about which technical discussion can be carried on. There are other mediums for those kind of discussions.
Lucky someone has managed to phrase my ideas better than I could above:
https://lobste.rs/s/mn1am1/going_bark_furry_s_guide_end_end#c_xndsrl
As I understand @kodfodrasz, they were bothered not inherently by the reminder of the group’s existence, but by the broadcasting of that reminder to the Lobsters front page. When an article title on the front page asserts the author’s voluntary membership in a group, that is not only a reminder that the group exists; it is also implicit advocacy that the group is a valid, normal, defensible group to join. One can agree with the content of such advocacy while also disliking the side effects of such advocacy.
What side effects would those be? @kodfodrasz said that “identity politics are already too emphasized and destructive in discussions, and have a bad effect on communities and society”. I think they are referring to the way advocacy for an identity can encourage an “us vs. them” mindset. Personally, I see the spread of that mindset as a legitimate downside which, when deciding whether to post such advocacy, must be balanced against the legitimate upside that advocacy for a good cause can have.
^ this
My assertion is that currently I see a trend where legitimate topics are not discussed because some participants in the discussion have specific opinions on other topics than the one discussed. Dismissing on-topic opinions over off-topic opinions is an everyday trend, and if bringing our off-topic identities to the site gradually became more accepted, that trend would also creep in from other parts of society, where it has already done its harm.
I hold this opinion as a guide for every off-topic identity. I think of it with regards to this forum a bit similarly to the separation of church and state has happened in most of the western world.
The submitter (author in this case) has one “vote” in promoting their content on this site. Usually one net upvote keeps stuff in /new and outside the front page. What’s promoted this content to the front page is the site’s users, who have upvoted it enough to appear on it.
At time of my writing this comment, the current standing is
Also note that comments themselves contribute to visibility, so everyone commenting complaining about this being off-topic and “in your face” aren’t helping their cause…
Are you (or @kodfodrasz) implying that identifying as a furry is in some way so dangerous as to be suppressed by society at large?
Would you be fine with a BDSM-themed blog post on a tech topic?
It depends how the theme is explored.
If it uses BDSM culture to explore the nuances of consent in order to explain a complicated technical point, I’m all for it.
What if it’s just interlaced with drawings of BDSM activities, like that old GIMP splash screen? I wouldn’t be caught dead scrolling that (nor opening GIMP) at work.
If you work at a place that cares more about some bullshit policing of imagery than technical merit, that’s a yikes from me.
There’s an inherent sexual quality to BDSM that isn’t inherent to furry culture.
You do realize that, correct?
Strictly speaking that isn’t necessarily true about BDSM.
Oh? This is news to me.
Yep. There are people, for example, for whom submission is not a sexual thing but instead about being safe and there are people for whom having a little (in the subcategory of dd/lg) is about having somebody to support and take care of and encourage in self-improvement.
That’s not everyone, the same way that there are in fact furries who are all about getting knotted.
My point is just that if you want to go Not All Furries, you should be similarly rigorous about other subcultures.
o/ I’m asexual but still very into BDSM (and also a furry!). I know what something being sexualised feels like — took a while to get here — and while a lot of people do link the two intimately (as many do for furry things), they aren’t dependently linked.
Actually, I know a real example. There is a Python-related French blog named Sam et Max. The technical articles are generally considered high-quality by the French-speaking Python programmers. But there are also BDSM- and sex-related articles alongside the Python articles. Even within a Python-related article, the author sometimes makes some references about his own fantasies or real past experience.
As long as there’s no overt pornography, sure. I’d read a good article on crypto that had “by someone currently tied up” on it. What’s the point of writing if you get shamed for putting your personality in it.
Already mentioned elsewhere but it’s my understanding that being a furry isn’t inherently sexual / about sex, though there can be that aspect. I certainly wouldn’t mind a post that was something like “a lesbian’s guide to…” or “a gay person’s guide to..” because those identities encompass more than sexual practices. (Someone elsewhere says that BDSM isn’t strictly speaking sexual, which … is news to me, but I admit my ignorance here. If there’s a non-sexual aspect to BDSM identity then sure, I’m OK with a BDSM-themed post on tech.)
That goes without saying, because that’s the default viewpoint.
The way the author clarifies and establishes their viewpoint does not make their technical content anymore off topic than someone submitting something titled “A Hacker’s Guide to MFA” or “A SRE’s Guide to Notifications”. The lens that they are using to evaluate a technical topic is an important piece of information that we often-times forget in tech with disastrous outcomes.
No, it is not necessarily the default. But even if it were, articulating that off-topic identity on the front page would be unnecessarily divisive, and I’m pretty convinced that people of other identities would flock to the comment section claiming that the post is racist (sic!), is not inclusive, and hurts their feelings, and I think they’d be right (on this site).
Hacker or SRE are on-topic tech identities themselves, while sexuality, political stand, religion are not really.
Hacker is a political identity. For instance, it’s one that I find really degrading when associated with the whole profession. The nerd identity, or the general infantilizing of programmers, is degrading as well. These are tolerated because they are the majority’s identity in this specific niche and presented as “neutral” even though they are not.
Well, I see some positive vibe around the hacker word in the IT sector; if you remember, there was that hacker glider logo thingie around the millennium. I’m not one of them, and agree with you: I also find hacker somewhat negative, not because of the “evil hacker”, but because of the unprofessional meanings of the phrase (e.g. quick hack). Still, lots of fellow professionals don’t agree with us on this one.
Regarding Nerd: I also find the phrase degrading, and I don’t understand those who refer to themselves as nerds in a positive context.
The best way of removing the degrading connotation of a word is to rewrite its meaning. The best way to do that is to unironically use it in a neutral-to-positive context.
yeah but the problem is what you want to appropriate. The word “slut” has been reappropriated to defend the right of men and women to have sex freely without judgement. The word “nigger” has been reappropriated because black people are proud of being black. But the word “nerd”? “Nerd” means being obsessed with stuff and having very poor social skills and connections. Reappropriating the word flirts very closely with glorifying social dysfunction, exclusion, and individualism.
Reappropriating is done because there are negative connotations that we want to take out of focus; that’s the whole point.
but nerd is imho all negative. The positive connotations, like being dedicated and consistent in a practice, are not exclusive to being a nerd. Being a nerd is not even stigmatized anymore: now it’s cool to be a nerd, and still it’s degrading, like being a circus freak. You reappropriate a word to remove a stigma towards a category, but the stigma is already gone and what is left is a very distorted portrayal of knowledge workers.
That the stigma is gone is precisely because people took the term and ran with it.
Besides, I have no problem with assholes (whose opinion of me is no concern of mine) considering me a circus freak: it makes them keep themselves at a distance which means less work for me to get the same desirable result.
(Also: I disagree with the term “nerd” glorifying “social dysfunction” - normalizing, maybe, but that’s a very inclusive stance, especially when these “dysfunctions” are called by their proper name: neurodiversity. And what precisely is the problem with individualism again? And another tangent: knowledge workers aren’t necessarily nerds and nerds aren’t necessarily knowledge workers)
I agree with all your values, but it doesn’t seem like this is what’s happening in the real world. Inclusion of neurodiversity is happening only in a small bubble in the US/NE: if anything, neurodiverse people are just more aware of being different. Good for coping, not that good for social inclusion. Really neurodiverse people are still rejected by society at large, and at best they get tokenized and made into heroes but not really included. Also, this appropriation of the word detached the concept of nerd from neurodiversity, which, if it was ever a thing, is not a thing now. Today being a nerd is wearing glasses and a checkered shirt. Then if you flirt flawlessly with girls, entertain complex social networks, and work as a hairdresser, it’s enough to say your hobby is building radios and boom, you’re a nerd. I don’t see how this process would help neurodiverse people, and I don’t see how it is good to have to live up to this stereotype to be included in the IT industry (because in most places, if you are not some flavor of nerd/geek, you’re looked at with suspicion).
For most people, “Furries” is “that weird sex thing”. I can see a lot of people wanting to make it clear that sexual references are out of place in order to make tech a more comfortable and welcoming place for everyone. I suspect that famous Rails ‘pr0n star’ talk has (rightly) made people feel uncomfortable with sexual imagery in tech.
I’ve upvoted because the content is good, but I’m also not really one for keeping things milquetoast. I’d like to see more content like this. The technical parts are worth reading, even though I have no interest whatsoever in furries, and mildly dislike the aesthetic.
And yes – I’ve discovered today via google that it’s only a sex thing for 30% to 50% of the people in the subculture, but as an outsider, the sexual aspect is the only aspect I had ever heard people mention.
Going forward, I’d just suggest ignoring the downvotes and moving on – they’ll always be there on anything that’s not boring corporate talk, and the threads like these just suck the air out of interesting conversation.
[edit: content moved to different post, this was accidentally off-by-one click]
Yiff it bothers you, why not just read it without the images? Firefox reader view works great fur me.
It doesn’t claim to be for furries, it claims to be by one.
Is it, though? If it was written as “a teacher’s guide to end-to-end encryption” would anybody be flagging it or carping about the title just because the intended / primary audience was teachers but the content could be abstracted to anybody who cared about end-to-end encryption?
That’s a good type of question to ask, but your example title “A Teacher’s Guide …” is not equivalent. The author being a teacher could be highly relevant to the content of the article; for example, the article might especially focus on the easy-to-teach parts of encryption. The author being a furry, however, is likely to affect only the theme.
Analogous titles would change “furry” to another subculture that is not innately connected to tech and that people choose rather than being born with. Two examples:
Would people complain about those titles? I predict that yes, some people would, though fewer than those who are complaining about the furry-related title.
Belatedly, but I’m following up on these flags. I missed this story and am reading through it now.
Obviously it’s great that someone wants to give us this information. In return we should give them respect and thanks.
Showcasing their identity not only gives personal color to the post, it also donates some of the credit to the community they identify with, rather than to some default security engineer type we might imagine.
Thanks to this personal touch, some readers can no longer say furries are unintelligent, or never did anything for them.
Why do people say hello to docker in the first place? Because unix, Make, compiled code, and shell script glue isn’t powerful enuf? Or to make devops a job title that means something. –Sarcastic and jaded, systemd still sucks
Well, for example, when you’re not working with compiled code (or you’re working with loads of shared libraries), the system configuration starts playing a big role in how your program behaves.
For example, you have a Python webserver. Firstly, do you have the right Python available? And if this server does something like PDF rendering it might call out to another application to do so, so is that installed properly?
I think that this stuff is more avoidable with like…. proper compilation artifacts but it can be painful to reproduce your production system locally when you are doing something more novel than just querying a DB and generating text
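A minimal Dockerfile sketch of that scenario (the wkhtmltopdf renderer and file names are illustrative assumptions): it pins the interpreter version and installs the external PDF tool, so production and local runs see the same environment.

```dockerfile
# Pin the interpreter version the app was developed against.
FROM python:3.11-slim

# Install the external tool the server shells out to for PDF rendering.
RUN apt-get update && apt-get install -y --no-install-recommends wkhtmltopdf \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "server.py"]
```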
Have you actually ever worked with a bigger C/C++ application? :) I’ve never seen an ecosystem where separating your system’s library versions in /usr from custom-installed stuff is worse. Everything from Python, Ruby, and PHP is so much easier to work with in this regard.
One of the reasons is because “we have a Kubernetes / OpenShift / Frozbinitz Special cluster” that is supported by “the enterprise”, and I can do my initial work on my laptop with Docker.
I don’t like Docker the company, but Docker the command line tool is fine. Docker Desktop has the potential to be great, but it seems like it would require Docker the company to be purchased by someone less dependent on registered users as a metric for survival.
Here’s one example. At my last job, I first introduced Docker to our environment because, sadly, we had a few Node.js applications written at various points in the company’s history each of which required a particular version of the Node runtime and also particular versions of various modules that weren’t compatible with all the versions of the runtime we were using.
It made the production configuration a bit of a nightmare to keep track of. If you wanted to run two of these on the same host, you could no longer just install a Node system package, etc.
Packaging the applications in Docker containers eliminated all those headaches in one step.
We were not running Kubernetes or any other orchestration system; we just stopped deploying directory trees to our servers. We still used scripts to do our deployment, but they became much simpler scripts than before.
Because their code doesn’t scale. Docker is a hardware solution to a software problem.
Can you elaborate on this? I’ve mostly used Docker to simplify dependency management, nothing whatsoever to do with any kind of scaling concern. How is it a “hardware solution” when it is a piece of software?
K8s (which is just fancy Docker orchestration) allows you to string containers together. You can find many predefined configs that allow rapid scaling up and down, allowing an “app” to add additional workers when needed, add those to the load balancer, and tear them down when they are idle. It’s really “neat,” but by adding (and subtracting) machines as needed, it’s a hardware solution to a software problem, which is my point.
Instead of worker threads, processes, and LPC, you end up with many machines and RPC. The plus side of going with RPC, of course, is that you get to scale across multiple physical hosts easily, while a thread-aware app now needs to manage both LPC and RPC to scale.
It’s “expensive” hardware-wise, but look at Rancher: it’s a quick and easy k8s that deploys painlessly. Once you set up shared storage, you can take some predefined app setup, hit the plus button in the UI and watch it auto-scale, then hit minus and watch it right-size.
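The in-process alternative the comment contrasts this with can be sketched minimally in Python (purely illustrative):

```python
from concurrent.futures import ThreadPoolExecutor


def handle(job: int) -> int:
    """Stand-in for real request handling."""
    return job * 2


# Scaling with worker threads inside one process (the "LPC" side),
# instead of adding whole machines behind a load balancer (the "RPC"
# side) the way k8s does: the pool grows up to max_workers under load
# and idles back down, all within a single host.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(handle, range(10)))

print(results)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

The trade-off is the one described above: this version can never outgrow a single physical host.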
Sure, that makes total sense, but you’re talking about Kubernetes, not Docker, no? How does any of that apply if you’re running a fixed set of standalone Docker containers on a fixed set of hosts, say, or running Docker in development environments to get a consistent build toolchain?
Nice to see the wave of defectors from OSX back to more open platforms continue!
I agree. I’ve always found Xcode frustrating to use and obscenely ponderous to set up as well. It’s not so much that it’s a GUI or that it’s complicated (you could say the same for Visual Studio Code or PyCharm, both of which I love); it’s that it gets in the way of what I’m trying to do 100% of the time.
And yeah, the MagSafe connector was a thing of beauty. My wife is clinging to her 2012 Macbook Pro for dear life :)
You also didn’t mention keyboard, although I realize I’m more sensitive to that than most. I find it interesting that the keyboard on my $200 Pinebook Pro is several orders of magnitude better than the one on my work issued 2018 Macbook Pro which feels like typing on squishy oatmeal.
Sadly, in my social bubble, I see more people moving from OSX to Windows/WSL than to Linux/BSD.
That doesn’t surprise me at all, and I assume that’s the largest migration direction between the “three” platforms. Most average Joe developers are already using Windows for most of their computing needs (games) and probably only used macOS because they “had to” because work involves either a Linux server or an iOS app.
I’m not sure where you get the idea that there is a “wave of defectors from OSX”, the post was most likely written to be posted specifically here. I like the community and I don’t want to be that guy but this community is a small, irrelevant echo chamber in terms of world-wide OS adoption.
I totally agree. There HAS been a wave of people leaving the OSX platform for various reasons, but as another poster wrote they’re not all migrating to FLOSS environments.
A number of them are choosing Windows instead. I myself dual boot and enjoy the best of both worlds :)
Also, nobody other than Apple actually has hard numbers on this.
I just bought the 2020 MacBook Pro and the keyboard is from a different planet than 2018’s; I really like it! The price was painful, but there is music software I need daily that doesn’t run on Linux, so I don’t really have a choice.
Which one? The one with no Esc key? No thanks.
They brought back the escape key :)
The new one has a physical Esc key.
Macs come with a lot out of the box, so I don’t do much. My advice would be to install as few things as possible for the first few days/weeks just to see what the builtin stuff has to offer.
From hour one on any Mac I buy, I remap caps lock to control, turn on all of the touchpad checkboxes and mouse checkboxes, and install homebrew (which gets some Xcode utilities). From there I install games and a few packages. I set up iCloud because I use iCloud Drive and iMessage. Other than that, I have about 61 packages installed via homebrew, and most of those are dependencies.
Examples of stuff I `brew install` or download and install from a 3rd party:
For mainstream applications, I tend to use the Apple version if it exists: Terminal instead of a replacement, Pages/Numbers/Keynote instead of MS Office, Mail instead of Outlook or Thunderbird. Pretty much everything else is game clients.
Source: https://insights.stackoverflow.com/survey/2020#most-loved-dreaded-and-wanted
Haskell and Scala are #14 and #15 on the “most loved” list, and nowhere to be seen in the “wanted” list.
They actually are #17 and #18 in the “most wanted” list. Also, I find this a bit shocking, but both of these languages are actually ranking higher up the “most dreaded” languages list: #11 and #12.
Meh. They’re over-selling the emotion there. The question was apparently more akin to “I use this language and I am / am not interested in continuing to use it.”
When you said you used a language, if you reported you wanted to keep using it, they labelled it “loved,” and if not, they labelled it “dreaded.”
I used to use Scala a lot. I was more than happy to keep using it, but that doesn’t mean I loved it. Once I started using Rust, I dropped Scala. That doesn’t mean I dread it.
1311 units, 63 prefixes?! My Mac only comes with 586 units, 56 prefixes and millilightseconds isn’t among them. How do I get more?
Trey Harris (in response to a similar question at the time) explained: “I used to populate my units.dat file with tons of extra prefixes and units.” In any case, lightseconds (and therefore millilightseconds) is in the standard definitions.units file on Linux these days, so perhaps you could grab a better definitions.units out of https://www.gnu.org/software/units/ if nothing else. (On my machine, the standard units starts up with 2919 units, 109 prefixes, and 88 nonlinear units.)
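For a sense of scale, a millilightsecond is easy to compute by hand (the constant below is the standard value for the speed of light in vacuum):

```python
C_M_PER_S = 299_792_458  # speed of light in vacuum, m/s


def millilightseconds_to_km(mls: float) -> float:
    """Distance light travels in the given number of milliseconds."""
    return C_M_PER_S * mls / 1000 / 1000


print(round(millilightseconds_to_km(1), 1))  # 299.8 km per millilightsecond
```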
`brew install gnu-units && gunit`
Yesssssssssssssssss
Thanks Owen!
I haven’t dug into it much yet, but wouldn’t something akin to ccache help with dependency compilation times?
It does to a degree; for example, Mozilla’s sccache is a ccache-like tool with optional support for sharing build artefacts on cloud storage services, and it ships with support for the Rust toolchain (it works as a wrapper around rustc, so it should work even if you don’t use Cargo to build Rust code). Linking times can still be slow, though, and obviously the cache isn’t useful if you’re running beta or nightly and frequently updating your compiler.
Or using a build system like Bazel or Nix.
I do really like Linus’ point earlier in the thread too: “checkpatch” throws a warning about 80 characters… while generating 124 character lines.
This ran perfectly on my GPU on macOS in Safari, and worked great on my iPad. It’s also quite relaxing :)
It’s a little more subtle than that I think. OK, the author is a Star Trek fan, right? Remember when they had a critical mission to get help for a kid on the ship, and then Data goes nuts and takes over the ship, flying it to a different destination? That could be what the author is doing from their team’s perspective.
I’ve observed two major projects where a single engineer decided on a different language from what the rest of the team was using without getting at least a few other people on board, and the result has been consistent: the project either dies or gets rewritten in the language the rest of the group already knows.
What is in this that is not either out-of-date or hasn’t been rehashed endlessly at this point? As of May 2020, is there any new insight to be gained in talking about how math-y and hard Haskell is, or in framing Haskell vs. Go as The Right Thing vs. worse-is-better? And finally, do we have to take terribly conceived diagrams seriously if the author self-identifies as a “Haskell ninja”?
Plenty of people are writing Haskell and using it in production or for their hobby projects. Stack and cabal and nix taken as a whole have made major inroads into dealing with “cabal hell” (or “DLL hell” as this piece calls it). Go continues to be both decidedly mediocre and inflexible, easy to pick up, and more popular than ever. Haskell remains pretty different from a lot of other languages and has a steep learning curve. News at 11
Since 2011…? Off the top of my head, as a casual (and no-longer-zealous) Haskell user:
Cleverness and “elegance” valued above practicality. Oh, academia.
So, overall, I think the post isn’t exactly a brilliant epiphany, but it presents a fairly even-handed perspective and has aged pretty well. You might have seen these points rehashed endlessly, but I don’t think everyone on Lobsters has.
The language itself has many great points, certainly rewards the effort put into learning, and for a few domains it’s by far the best choice despite these drawbacks. I wouldn’t fault anyone for not wanting to use it, though, especially if they’re not in one of those domains. Go or something else might very well be a better choice, especially if you need to hire.
The community, as such, is kinda bleh in my highly subjective opinion, but hardly the worst. Hard to climb that steep curve without engaging with the community. Might just be my perception, but it seems net language snobbery has increased along with industrial uptake. The established academic users are as magnanimous as ever, but now surrounded by a rising population of younger, more competitive and hungrier users, some of whom are maybe just a little insecure about the secret-weapon superpowers they haven’t quite mastered.
I was objecting to the context-free drive-by posting of an old, fluffy, somewhat out-of-date article on a topic that is already discussed a lot here, or at least feels that way to me. I don’t necessarily disagree with any of your points but they weren’t raised in the piece and only seem to be relevant because they are criticisms of Haskell, of which there are plenty to go around. It doesn’t seem like you or other posters are responding to my actual comment, and more disappointingly it seems like my comment has become yet another excuse for some to go off on negative rants about Haskell. Damned if I do, damned if I don’t…
Submitter here. I checked to see if this had been submitted before posting, it had not.
I found it via the author’s Ninja piece, linked here earlier.
I was interested to see what the Lobste.rs community had to say about this piece, 9 years later.
edit: forgot the quote that triggered this… well, quasi-rant. I think it’s related to the perception of secret-weapon superpowers that people seem to have about Haskell: “I used to think Haskell was Superman: an alien from another world that has all of these incredible powers that other mere earthlings cannot achieve. Now I think it’s more Batman.”
I keep finding myself looking for a nice, cozy functional language to roll around in. Every time I look I hear people speak of the beauty and elegance of type systems, or the lack of side effects, or laziness, and it all sounds just so magical!
Often I sit down to start with one of these languages and immediately find myself a bit lost in the syntax. It’s not the mathy parts - it’s the quirks of how the language changes the names of common things.
Let’s use an intro CS example, printing a line: say, `putStrLn` or `print_endline` instead of `print`.
Yeah, you can probably guess what these things do. Yet how many of them would you guess on day 1 of switching to that language? How many computer science, or even basic programming courses start with a language that uses anything except a variant of “print”? (Before you go “MIT with Scheme!”, they’ve switched to Python.)
There’s a thing authors do when they’re trying to immerse you in a world that is supposed to feel alien, and that’s changing familiar things to make them feel unfamiliar. That’s fine and all, but not particularly accessible. I’m not looking for some unique, immersive experience, I’m looking to get stuff done. I get it: It’s not a line printer anymore. The save icon is still a floppy disk though, and the idiom of “Print” is well established and has been for like decades now.
Uhh, `print` and `putStrLn` are two sides of the same coin; normally I’d use `print` since it has the more flexible signature (`Show a => a -> IO ()` vs `String -> IO ()`).
So in Haskell, `print = putStrLn . show`; a distinction is drawn, but it’s not missing.
TryHaskell and Haskell.org show `putStrLn` first, and you have to dig into the I/O functions to find `print`.
In the Haskell wiki, they even call the action of `putStrLn` “printing,” then switch to `print` later on without explaining it. Clear as mud.
OK, so let’s play with some output to see if we can figure it out. Got it. Integers aren’t strings, so that no worky. OK, so `print = putStrLn . show`.
Wait, what happened to that last one? Why wasn’t it `"四"`? Well, it turns out that Show’s signature is `show :: a -> String`, but the devil’s in the details. The byline is “Conversion of values to readable Strings,” but the detail is that `show` outputs syntactically correct Haskell expressions, not plain strings. For some reason, this means I get the quotes around the string (even though they’re not part of the string) and a conversion of UTF-8 to a string of the code point rather than the character itself.
Everything fed into `show` keeps its types though, right? Wait, what? Ohhhh, strings are just a list of chars in Haskell, and they get auto-converted to a string, losing their “listyness” for basic output.
Just for giggles, let’s try something similar in python:
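Something along these lines (my sketch; the original snippet isn’t shown):

```python
# Python's print() goes through str(), not repr(), so strings come out
# bare: no quotes added, and non-ASCII characters survive untouched.
print("四")        # 四
print(5)           # 5
print([1, 2, 3])   # [1, 2, 3]

# repr() is the closer analogue of Haskell's show: it adds the quotes.
print(repr("四"))  # '四'
```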
Yup. Looks like what I asked it to print alright.
Fair point. I suppose if you use Text and OverloadedStrings maybe print would have behaved better but these are indeed not very good defaults.
Most people would probably just shrug at it and keep coding, in all fairness. Haskell is far from the only language with that weirdness too! It just seems so unnecessary, and so prevalent in the popular functional languages.
Yes, but it was designed by a committee decades ago, so in my mind it gets a pass on some of these legacy papercuts. In return I get a rock-solid language with lots of mature libraries.
https://xkcd.com/1053/
There’s now an edit indicating that the comparison is wrong anyhow: The Rust code was parsing the string many more times than Go because of the way the code was written.
Plus the comparisons were different. Also, the code was a weird mix of looking at bytes and unicode codepoints.
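That bytes-vs-codepoints mix is an easy trap; a quick illustration in Python (my example, not code from the benchmark):

```python
s = "héllo"

# Counting code points and counting bytes give different answers as
# soon as any character needs more than one byte in UTF-8.
print(len(s))                  # 5 code points
print(len(s.encode("utf-8")))  # 6 bytes: "é" is two bytes in UTF-8

# Indexing disagrees too, which is exactly how a parser that mixes
# the two views ends up comparing different things.
print(s[1])                    # é
print(s.encode("utf-8")[1])    # 195, the first byte of UTF-8 "é"
```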
This sloppy-approach-to-everything is sadly a thing I have started to expect from Go users, and one of the reasons why I automatically penalize projects using Go in terms of quality and reliability unless proven otherwise.