You mention CUDA for ML, but isn’t that all very machine dependent? I could be wrong but I thought the dependency manager downloads your correct version.
i’m sure if you picked some run-of-the-mill examples of AI generated text, and sent them to the author 5 years ago, they would confidently say “an AI could never write this”, so i’m not going to state too confidently what AI will or won’t do 5 years from now
We have had an AI winter before. We are either at the start of a hockey-stick curve of AI development, or at a local maximum before the start of another winter.
There’s no exponential growth in a finite universe that both consumes any kind of resource and goes on forever, so there is going to be a plateau at some point.
But I’ll readily admit that I’m one of the people who five years ago wouldn’t have guessed an AI would be able to generate text at GPT-4 quality today, so I’m not going to pretend I can confidently predict where that plateau is going to be.
Markov chains could generate plausible text in the ‘90s. In the Cambridge computer lab, they ran a weekly happy hour (beer and snacks) and, after writing a load of announcements every week, one of the organisers got bored and wrote a tiny Python program that built Markov chains of all of the existing ones and wrote new ones. It worked well, right up until it announced a free one by accident.
ChatGPT is not fundamentally different in functionality, only in scale. The model is a denser encoding of the probability space than a Markov chain and the training set is many orders of magnitude larger, but the output is of the same kind: it looks plausibly like something that could have been in the input set but contains clear examples of not being backed by any underlying understanding (ChatGPT will confidently assert things of the form ‘A, therefore not A’).
I’ve written several Markov chain-based text generators myself - the most amusing one used the blog of a famously bullshit-prone local political commentator as its training corpus; the result ended up sometimes being very hard to discern from the real deal. But only on the snippet level: If I had it compose an entire essay, the result was so obviously nonsense that nobody was fooled for a second. That’s what having a “context window” of two words and a blog-sized corpus gets you. :)
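For anyone who has never written one: the whole trick fits in a dozen lines. Here is a minimal sketch of a two-word-context generator of the kind described above (the corpus file name is made up, and a real one would want smarter tokenisation):

```python
import random
from collections import defaultdict

def build_chain(words, context=2):
    # Map each tuple of `context` consecutive words to the words seen after it.
    chain = defaultdict(list)
    for i in range(len(words) - context):
        chain[tuple(words[i:i + context])].append(words[i + context])
    return chain

def generate(chain, length=50):
    # Start from a random context and repeatedly sample a plausible next word.
    state = random.choice(list(chain.keys()))
    output = list(state)
    for _ in range(length):
        followers = chain.get(tuple(output[-len(state):]))
        if not followers:
            break
        output.append(random.choice(followers))
    return " ".join(output)

words = open("announcements.txt").read().split()  # hypothetical corpus
print(generate(build_chain(words)))
```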
I suppose my own inability to predict the quality of current-day text generation was more about underestimating the current scale of datacenter compute (and the practicality of working with truly enormous data sets) than anything else. If anything, LLMs are as much an accomplishment of Big Data as one of AI.
If anything, LLMs are as much an accomplishment of Big Data as one of AI.
I completely agree with this. It’s also worth noticing which companies are pushing them. It’s not just that they’re companies that have senior leadership who are promoted on their ability to spout plausible bullshit and so think that’s what intelligence looks like, they’re also companies that make money selling large compute services.
Back in the ‘80s, if you needed a database for payroll and so on, you wanted to buy something from Oracle or IBM, often with a big piece of hardware to run it. By the late ‘90s, you could do the same thing with PostgreSQL on a cheap PC. Maybe a slightly more expensive PC if you wanted RAID and tape backups. Now, things like payroll, accounting, inventory control, and so on are all handling such tiny amounts of data that you wouldn’t think of running them on anything other than commodity infrastructure (unless you’re operating at the scale of Amazon or Walmart).
This is always the problem for folks selling big iron. Any workload that needs it today probably won’t in 10 years. As such, you need a continuing flow of new workloads. Video streaming was good for a while: it needs a lot of compute for transcoding, a lot of storage to hold the files, and a lot of bandwidth to stream them. As the set of CODECs that you need shrank and they improved, these requirements went down. With FTTP bringing gigabit connections to a lot of folks, there’s a risk that this will go away. With something like BitTorrent streaming, a single RPi on a fast home Internet can easily handle 100 parallel streams of HD video and so as long as 1% of your viewers are willing to keep seeding then you’re good and the demand for cloud video streaming goes away for anyone that isn’t a big streaming company (and they may build their own infrastructure).
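For what it’s worth, the arithmetic behind that claim checks out under a rough assumption of ~5 Mbit/s per 1080p stream (the bitrate is my guess, not a measured figure):

```python
# Back-of-the-envelope: can one seeder on gigabit FTTP feed 100 HD streams?
streams = 100
mbit_per_stream = 5          # assumed bitrate for a typical 1080p stream
total = streams * mbit_per_stream
print(f"{total} Mbit/s upstream needed")   # 500 Mbit/s, within a 1 Gbit/s link
```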
But then AI comes along. It needs big data sets (which you can only collect if you’ve got the infrastructure to trawl a large fraction of the web), loads of storage for them (you use the same data repeatedly in training so need to have enough storage for a large chunk of the web), loads of specialised hardware to train (too expensive to buy for a single model, you need to amortise the cost over a bunch of things, which favours people renting infrastructure to their customers) and then requires custom hardware for inference to be effective. It’s a perfect use case for cloud providers. The more hype that you can drive to it, the more cloud services you can sell. Whether they’re actually useful to your customers doesn’t matter: as long as they’re part of the hype cycle, they’re giving you money.
I disagree. NLP researchers have been experimenting with text generation for decades. I first heard the term “hallucinated text” in 2014, which back then meant any text generated by a model, because it was a given that text models aren’t concerned with the truth. People in our department were convinced that more complex models with more data would generate more coherent text, especially after the leaps and bounds we saw image generators making. The big surprise is that the architecture turned out to be quite simple; an even bigger surprise is how much money is spent on training the models.
“We could never make an AI that can write this with our budget” more like.
There’s nowhere near that optimism in fact-generating AI that I know of. People in expert systems research have been humble realists ever since the AI winter, and nobody serious is jumping on the LLM hype train. All I hear are wishes from business people.
It does look good, but it also looks like they cherry-picked the examples. What happens with a word that has multiple m’s, where one of them can be made bigger but the other cannot, because its neighbours are also wide? I can’t test it right now, but I expect it would look weird and uneven.
Oh, that’s an interesting point. I have to try it some more before I can say whether it would disturb me. But TBH, I’m not sure I actually look at the exact letters or words of my code when I type. I’ve just installed monaspace and will find out. (Though I’m not sure how to switch all those monaspace toggles in, e.g., vscode.)
It messes with tables too. But I’ve been coding in a proportional font for 5 years now and I can count on one hand the times that alignment has been an actual problem.
I still use monospace in the terminal because programs there like to print tables.
And we don’t want to take a page from car companies’ books by asking you to do things no reasonable person would ever do – like reciting a 9,461-word privacy policy to everyone who opens your car’s doors.
This implies it’s affecting passengers, too. Even without actually owning the car you’re in, your privacy might be violated!
The moment you sit in the passenger seat of a Subaru that uses connected services, you’ve consented to allow them to use – and maybe even sell – your personal information. According to their privacy policy, that means things like your name, location, “Audio recordings of Vehicle Occupants”, and inferences they can draw about things like your “characteristics, predispositions, behavior, or attitudes.”
[…]
If you go read Subaru’s privacy policy (or don’t, we did it for you, you can just read our review here), you’ll see at the very start they say this: “This Privacy Policy applies to each user of the Services, including any “Vehicle Occupant,” which includes each driver or passenger in a Subaru vehicle that uses Connected Vehicle Services, such as Subaru Starlink (such vehicle, a “Connected Vehicle”), whether or not such driver or passenger is the vehicle owner or a registered user of the Connected Vehicle Services. For the avoidance of doubt, for purposes of this Privacy Policy, “using” the Services includes being a Vehicle Occupant in a Connected Vehicle.” So yeah, they don’t want there to be any doubt that when you sit in a connected Subaru, you’ve entered the world of using their services.
I like my Subaru (which is why I chose that review to dive into) and I don’t think I use any of the features that would make it a “Connected Vehicle”, but that very decidedly creeps me out.
I don’t know. A quick guess: send phone bluetooth or wifi identifiers to the manufacturer, who can then sell it as extra data to the same companies that are already tracking your phone so that they know even more about you?
“we drone people on metadata” combined with the n degrees of social graph steps from you to a surveillance target used as a metric to expand surveillance to you.
It’s hard for me to imagine not having a car. I mean, it’s obviously doable, but I’m having problems with some scenarios, like: having to buy some furniture and bring it home (only big shops deliver), moving across cities, helping friends move, being sick and needing to go to the doctor, trying to visit some lonely place with a tent, doing big/heavy groceries (e.g. buying 6 bottles of 5L water plus stuff for the week). A bike is not the answer for these.
I, too, have never owned a car. To each of your scenarios:
having to buy some furniture and bring it home (only big shops deliver)
I have never seen a furniture shop that doesn’t deliver. A lot of furniture doesn’t fit in a car, so you end up needing to rent a van anyway. Man with a van services are fairly cheap and come with someone to help you carry things as well.
moving across cities, helping friends move,
How often do you do this? Last time we moved, we rented a van for the day, which cost about as much as a week’s worth of tax and insurance on a car. Moving with a car sounds quite painful.
being sick and needing to go to the doctor,
Are you safe to drive when you’re sick? Again, how often do you do this?
I live in a city, so my GP is about 5 minutes walk from my house. If I need to go further, taxis are available.
trying to visit some lonely place with a tent,
Sure, you might want a car for that but, again, how often do you do it? If it’s every weekend, it might make sense. If it’s once or twice a year then renting a car (or a camper van, or some kind of off-road vehicle) probably makes more sense.
doing big/heavy groceries (e.g. buying 6 bottles of 5L water plus stuff for the week). A bike is not the answer for these.
That’s a lot of bottled water. I live in a civilised country, so the tap water is drinkable, but you might be surprised at how much you can carry on a bike. Two pannier bags will happily carry a week’s shopping for an individual.
That said, I can’t imagine going back to doing a big grocery run in person. For the last 20+ years, I’ve done it online and had it delivered. It takes less time to do the shop than it would take to drive to the supermarket, and these days I can do it on a tablet so I can wander around and check the fridge and cupboards to see if I’m out of something (or, if it’s a non-perishable on special offer, how much I have space for), which is far more convenient. I pick up fresh things every few days from a shop within walking distance.
Mr Money Moustache has some good rants on the economics of car ownership. I am somewhat in awe of an industry that has managed to equate freedom, in the minds of consumers, with ownership of a depreciating asset that has high operating costs.
These arguments always seem a bit circular unfortunately, and can be summarised as ‘if you just do less of the things that need a car, to the point at which you no longer really need a car, then hey presto you don’t actually need a car!!’. I mean yes, sure.
I wouldn’t bother with one in any sort of decent-sized city, I think they’re effectively essential in most rural places here in the UK, and the need to make ‘anti-car’ a sort of religion or identity (I don’t think the parent comment is doing this, I should say) seems like a psychological tic that isn’t very helpful in what needs to be a more sober debate about urban infrastructure and planning.
These arguments always seem a bit circular unfortunately, and can be summarised as ‘if you just do less of the things that need a car, to the point at which you no longer really need a car, then hey presto you don’t actually need a car!!’. I mean yes, sure.
That’s quite reductionist. There’s a question for each of those things in a few dimensions:
Do they actually improve your quality of life? Supermarket shopping is one of the activities I used to really hate doing, for example. Yay, a car enables me to do this, but I can also just do it online and have it delivered. It’s faster and more convenient. The weekly supermarket trip is typically the top thing that people say they need a car for, but not doing it is a big quality of life improvement. I spend five minutes prodding a tablet rather than ten minutes driving to a supermarket, half an hour walking around it in a crowd of stressed people, then ten minutes driving home.
Do they actually justify the cost? A quick search tells me that the average cost of car ownership in the UK is £3406.80/year (tax, insurance, fuel, depreciation). When I was looking to buy my first house, I looked at a couple of places that were sufficiently far out of town to need a car and worked out that, with the price difference, I’d be making a loss after about four years and would then keep making a loss. Buying somewhere a bit more expensive and owning an appreciating asset was better financial sense than buying somewhere cheaper, and it was more convenient (I wouldn’t want to drive home after a pub trip, for example).
What are the alternatives? Taking that £3400/year number, that buys a lot of taxi trips. A trip anywhere in town is about £10, so I could take a taxi every two days for the same price as owning a car, plus I can get a taxi back from the pub drunk, whereas I wouldn’t want to drive back. If I take a taxi every couple of weeks, it’s much cheaper. For unusual things like airport trips, the cost of a taxi is about the same as the cost of parking at the airport, and I don’t have to drive while tired and jet lagged.
Do they justify the externalities? The best thing about 2020 was that the reduction in vehicle traffic meant that I didn’t get a cold from air pollution for the first time in years. If rich people own cars, this incentivises governments to incentivise infrastructure that requires cars, which pushes inequality by forcing poor people to buy vehicles that are expensive to own and operate.
I wouldn’t bother with one in any sort of decent sized city, I think they’re effectively essential in most rural places here in the UK,
I grew up in a small village in the UK and I agree. We had a bus to the city once a day (and it was timed for people visiting the countryside, so if you took it into town you didn’t have one coming back until the next day). Walking to the outskirts of the city took about an hour. With an electric bike, it would probably have been quite easy (they were far too expensive then), but there was a big hill just before the city that was not at all fun on a normal bike, and then the trip into the city was uphill.
The bus went from about two minutes’ walk from my house, though. If it had run hourly, owning a car would have been far less important. When I moved to Swansea, there were regular buses that looped through the nearby villages at least once an hour, so it was possible (just not convenient) to live there without a car. Increased spending on infrastructure would make that easier. The bus service was great back then: students could get a bus pass that gave unlimited trips for under £1/day, you could also buy a day pass for about £2 that gave you unlimited trips (most returns were more expensive, so this was the only ticket you ever bought), and they ran every 5-10 minutes on most of the in-town routes. When I went back about 7 years ago, the buses were so expensive that it was cheaper to take a taxi.
Sorry, I can’t help interpreting your posts as “I don’t need it, therefore I don’t think it’s a good idea to use it”, although you probably don’t mean it this way.
Do they actually improve your quality of life?
You can walk out of home right now and travel 1000km alone with the baggage of your choice. I need to have this option, because otherwise I would feel like I’m in jail.
Are you safe to drive when you’re sick? Again, how often do you do this?
It’s not really about me; I simply wouldn’t want other people who are sick to use the same transit as I’m using right now. That’s why I don’t want to use the transit, or go to the office, when I’m not feeling very healthy.
If it’s once or twice a year then renting a car (or a camper van, or some kind of off-road vehicle) probably makes more sense.
Driving requires skill, and people get rusty when they don’t drive often enough. Some time ago I didn’t need to drive for a month, and I felt the difference when I finally sat behind the wheel. Driving once a year for a thousand kilometers doesn’t sound very safe to me, to be honest.
For the last 20+ years, I’ve done it online and had it delivered
but I can also just do it online and have it delivered
Well, you may have been able to for 20+ years; for me it wasn’t really an option before Covid. Also, small shops don’t have this service, and I like to support smaller shops instead of big malls.
cost of car ownership in the UK is £3406.80/year
Statistics often don’t include the cost optimization each of us can do based on our unique situation. In Poland it costs me ~£1400 per year, according to my own statistics (including fuel). Not sure how that compares to the UK.
If I need to go further, taxis are available.
What are the alternatives? Taking that £3400/year number, that buys a lot of taxi trips.
The last time I tried to use a taxi to get back home after leaving my car for repairs, I couldn’t find one. I had to walk 1 kilometer to a bus stop and then wait 40 minutes for a bus. Another time I had to wait 30 minutes in front of my office building because all the taxis were busy. So that’s my experience with taxis. Also, I don’t like the deadline you need to conform to – with a car you just leave when you’re ready.
For unusual things like airport trips, the cost of a taxi is about the same as the cost of parking at the airport
Sure, if you live close to the airport.
If rich people own cars, this incentivises governments to incentivise infrastructure that requires cars, which pushes inequality by forcing poor people to buy vehicles that are expensive to own and operate.
Wow, what a stretch. And fighting for the electorate by satisfying the majority at the cost of discontenting minorities doesn’t have anything to do with how government operates?
Really? You’re comparing a kook who can feel AC current (cue “electrical oversensitivity”) and who promotes Soylent (whatever happened to them??) with someone living an utterly normal life in an urban environment?
Renting cars doesn’t feel like an “anti-car” strategy, and cargo bikes limit your possibilities to ~20km from the point where you rented the bike. Cargo bikes are only a thing in the biggest cities, unless you have your own. Some of them cost as much as a used car, which means their price/value ratio seems very low.
The U.S. Department of Transportation’s Bureau of Transportation Statistics does calculations for what it costs the average American to own/drive their own car.
This is not anywhere near the cost of even the fanciest of cargo bikes.
They also have this page, which shows a more complete and detailed picture, including average costs by income level. For whatever reason, this only shows info as late as 2021: https://data.bts.gov/stories/s/ida7-k95k/
This is all to say that cars are indeed very expensive and so it is perfectly understandable why someone would want to avoid such an enormous cost burden in their lives even if every level of the U.S government, some of the largest corporations, and a car-brained culture want to make saving that money as difficult as possible.
I have lived in a more typical city in a fairly quiet neighborhood and my grocery store was directly across the street from my apartment. Basically everything else I needed was walkable within a couple blocks. The U.S. has tried its best to be pedestrian-unfriendly almost everywhere but plenty of people do live in places throughout the country where walking or cargo biking is totally possible.
Interesting data, thanks. Although, I’m doing my own statistics, and in my case it’s $142 per month for the last 12 months (including gas, maintenance, paperology stuff, parking, highways, basically anything that has to do with the car is included here).
Also:
Insurance figures are based on a full-coverage policy
Not sure what it looks like in the US, but in my case I can limit my insurance to basic coverage and pay $116 for one year, instead of full coverage for $940 per year.
The average also assumes the car is changed to a new one every 5 years, so it probably includes profits of car dealers. Meaning, these statistics seem to show the absolute worst possible, but still realistic, price of having a car, and it should be pretty easy to optimize it.
Did you include capital expenditure spread across the lifetime of the vehicle in your calculations? But if you buy an old robust and maintainable car like a Toyota or a Volvo the initial purchase and maintenance costs are probably well below average.
I completely acknowledge using a cargo bike is a thing I can do because of where I live; it’s really not for everywhere. And yeah, they get pricey, but I would push back slightly on your cost argument, because the total cost of a cargo bike is so much less than a car once you factor in insurance, maintenance, and gas.
I didn’t think we were being “anti-car” per se, rather, anti car ownership. I’ve spent my whole life driving and only recently have I had the option to use bikes as my primary mode of transportation. Cars are useful! It’s hard to imagine our society without delivery vehicles and ambulances and such, so I’m personally not anti-car as much as I would like to live in cities with viable alternatives.
What’s funny about this discussion is, while privacy on cars is atrocious, privacy on transit is probably so much worse.
If you live in a car-centric city, it’s entirely rational that living without a car is unimaginable. It’s a vicious loop — if everyone must have a car, businesses and services are built around cars, therefore everyone must have a car.
I lived in Warsaw and London which aren’t car-dependent.
All shops selling big/heavy stuff have delivery. Sometimes it’s an option even for groceries. For stuff too small for delivery, but too big to carry, you get a taxi (for occasional things like that taxis are cheaper than TCO of a car).
There are many ways to rent a van if needed. It can be self-drive, or with a driver and people to help. You can order a van to pick up stuff for you from any shop or friend’s house (it’s like Uber Eats for sofas).
For moving, hiring a “man-and-van” is IMHO a great solution. You get someone with muscle to carry all the stuff, quickly and without complaining, and it doesn’t cost much more than IOUs to your friends.
There are local clinics throughout the city within a 15-minute walk. If it’s not serious or infectious, there’s public transport or taxis. If necessary, you can have a home visit or an ambulance (at no extra cost with nationalized healthcare).
In touristy places, for a tent, there’s a train station + minibus that will drop you off at the start of a hiking trail.
Car rental is always an option. It’s still an improvement, e.g. you don’t need to drive a pick-up truck every day if you just need it once to buy a sofa.
There are really excellent cargo bikes these days: recumbent 4-wheel bikes (quikes?) that take 100+ kg of cargo, are still small enough to fit through a standard door into your neighbourhood bicycle storage, and go on the non-car roads with everyone else. I don’t have one, but that’s my dream vehicle, which would let me maybe find a cheap house a bit farther out.
I can do without a car because there’s decent non-car infrastructure in this town of ~90K people. The non-car roads are shared by pedestrians, bicyclists, wheelchair users, and everyone else who can’t or won’t drive a car. In the winter I use the bus more.
I’ve lived in other places where it would be very difficult, if possible at all, to get by with an electric wheelchair or a bicycle. So I recognise that it’s a huge privilege. Not having a car is what lets me afford other things that make my life better. It’s quite expensive to have a car here, and I don’t want to work even more just to afford that too.
I don’t want to come across as better than anyone who has a car. My only wish is that more people would be able to get by without one.
Is there a “rallying cry” for anti-car?
Something short and recognisable, akin to “black lives matter” or “be gay, do crime” or “animals are not property”? Or even a hashtag?
My current job does this, and I don’t have a word for just how confusing it makes things. It would be a very bad word, though. Something evoking religious imagery of eternal suffering, perhaps a reference to Sisyphus.
BUT - and this pains me to say - it’s also not wrong at all. These points are completely valid. Semantics change over time. Names become obsolete. Worse, having to name two very similar things that only differ by a tiny semantic bit leads to really terrible names that don’t make the situation any better.
Also, even if you could successfully rename things from a technical perspective, there’s the human perspective. Everyone is going to use the old name for 2 years anyway. Humans don’t have the ability to erase context overnight.
Basically, against my desire, I firmly believe that at its limit, naming is futile. We shouldn’t purposefully obfuscate things with bad names, but there is no hope for good names either. Behavioral semantics is too information-dense to communicate with a few characters. It needs more bits of info to successfully be transmitted with low entropy.
My approach is this: you name a component, say SomethingSomething, and when you’re explaining it to someone they go “SomethingSomething?” and you respond, “You know, the X-doing service”.
Then you should just rename it to “X-doing-service”. No matter how stupid or wrong it sounds.
Three years ago I made the mistake of introducing the new search engine as Elasticsearch and I’ve been correcting people ever since that it’s just the database.
I use this pattern for branch names. It’s convenient. Rarely it discovers some offensive pair of words. Most often though it results in names that are completely useless and on a number of occasions I’ve struggled to find the right branch for something.
Certainly better branch name than enforced ticketing system ID. You really think I’m gonna bother memorizing a string of integers or that I want to open & search JIRA just to find the ID?
I’ve recently started doing that with my personal projects, using some random generator page I found. It basically spits out “whimsical adjective” “animal or funny object” word pairs. I cycle through them until I find one that sort of kind of matches the project. Examples:
glowing lamp: my effort to keep a restructured text engineering log book to make public
fancy hat: building an FOC-based brushless motor controller. Tenuous connection… a ball cap with a propellor on it is a fancy hat. The BLDC will be spinning a propellor on it.
lost elf: ESP32-based temperature sensor that’s going to live in the crawl space over the winter to make sure the heat tape on the water line is working
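For what it’s worth, a generator like the one described above is only a few lines of Python; a toy version (the word lists here are invented, the real page presumably has far longer ones):

```python
import random

# Invented word lists, purely for illustration.
ADJECTIVES = ["glowing", "fancy", "lost", "sleepy", "rusty", "gentle"]
THINGS = ["lamp", "hat", "elf", "otter", "kettle", "comet"]

def candidate_name():
    """Return one 'whimsical adjective' + 'animal or funny object' pair."""
    return f"{random.choice(ADJECTIVES)} {random.choice(THINGS)}"

# Cycle through candidates until one sort of kind of matches the project.
for _ in range(5):
    print(candidate_name())
```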
A little while ago I was working with someone on a StackExchange site who was really determined to solve a “get data from point A to point B” problem in an unconventional way — namely, 2.4GHz WiFi over coax. It seems like they were working under conditions of no budget but a lot of surplus hardware. Anyway they kept asking RF-design questions, being unsatisfied with the answers (which amounted to “no, what you have in mind won’t work”), and arguing down to the basic theory (like, what it means to have so many dB of loss per meter, and why measurements with an ohmmeter aren’t valid for microwave).
So, the last question they asked was whether they could use some 16mm aluminum pipe (which is a diameter of about 1/8 wavelength at 2.4GHz) as a waveguide. The answer from someone who knows what they’re talking about was: no, that won’t work. 1/8 wavelength is too small a diameter for any waveguide mode to propagate, and so the loss would be ludicrously high (>1000dB/m). The minimum size for 2.4GHz is more like 72-75mm.
Not satisfied with that answer, the OP decided to ask ChatGPT to “design a 1/8 wavelength circular waveguide for 2.4GHz”, and posted the result as a self-answer. And ChatGPT was perfectly happy to do that. It walked through the formulas relating frequency and wavelength, and ended with “Therefore, the required diameter for a circular waveguide for a 2.4 GHz signal at 1/8 wavelength is approximately 1.6 cm.” OP’s reaction was “there, see, look, it says it works fine!”
Of course the reality is that ChatGPT doesn’t know a thing. It calculated the diameter of a 1/8-wavelength-diameter circular thingy for 2.4GHz, and it called the thingy a “waveguide” because OP prompted it to. It has no understanding that a 1/8-wavelength-diameter thingy doesn’t perform the function of a waveguide, but it makes a very convincing-looking writeup.
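The numbers are easy to check: the dominant TE11 mode of a circular waveguide only propagates when the pipe diameter exceeds roughly 1.841·c/(π·f), which is where the ~73 mm figure comes from. A quick sanity check:

```python
import math

c = 2.998e8   # speed of light, m/s
f = 2.4e9     # frequency, Hz

wavelength = c / f                          # ~125 mm
eighth_wave = wavelength / 8                # ~15.6 mm: ChatGPT's "1.6 cm"
te11_cutoff = 1.8412 * c / (math.pi * f)    # ~73 mm minimum diameter

print(f"1/8 wavelength:       {eighth_wave * 1000:.1f} mm")
print(f"TE11 cutoff diameter: {te11_cutoff * 1000:.1f} mm")
# A 16 mm pipe is far below cutoff, so no mode propagates: at 2.4 GHz it is
# not a waveguide, it is just a pipe.
```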
I simply cannot take anyone who anthropomorphises computer programs seriously (i.e. “I asked it and it answered me!”). Attributing agency, personhood, or thinking to a program is naïve and, at this scale, problematic.
I wouldn’t take it that far. I’m fine with metaphor. (I will happily say that something even simpler than a computer program, like a PID controller, “wants” something). But people who can’t tell the difference between metaphor and literal truth are an issue.
But people who can’t tell the difference between metaphor and reality are an issue.
It’s easy and convenient for people with a technical background to talk about this stuff with metaphor. It’s even simpler than that: we talk about abstractions with metaphor all the time. So if I say ChatGPT lies, that’s an entirely metaphorical description and lots of people in tech will recognize it as such. ChatGPT has no agency. It might have power, but power and will / agency are different things.
Let me put it another way. People often say that “a government lied” or “some corporation lied”. Both of these things, governments and corporations, are abstractions. Abstractions with a lot of power, yeah sure, but not agency. A government or a corporation cannot, on its own, decide to do diddly squat, because it only exists in the minds of people and on paper. It is an abstraction, consisting of people and processes. And yet, now we play games of semantics, because corporations and governments lie all the bloody time.
Power without agency is a dangerous thing. We should know that by now. We’re playing with dynamite.
Slap a human face on the chat bot, and it will be even harder for most people to see past the metaphors.
We’re currently within a very small window where tools like this are seen as novelties and thus “cool”, and people will proudly announce “I asked ChatGPT and here is the result”. In about 6 months the majority of newly written text will be generated using LLMs but will not be advertised as such. That’s when the guardrails offered by “search in Google to verify” and “ask Stackoverflow” will melt away, and online knowledge will become basically meaningless.
People are mining sites like alternativeto to generate comparison articles for their blog. Problem is, alternativeto will sometimes list rather incomparable products because maybe you need to solve your problem in a different way. Humans can make this leap, GPT will just invent features to make the products more comparable. It really set me on the wrong track for a while…
There must be a way for these LLMs to sense their “certainty” (perhaps the relative strength of the correlation?), since we are able to do so. Currently I think all they do is look for randomized local maxima (of any value) without evaluating their “strength”. Once it was able to estimate its own certainty about its answer, it could return that as a value along with the textual output.
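To make the idea concrete: models already produce per-token probabilities from the softmax over the vocabulary, and those could be aggregated into a crude “certainty” number (whether that tracks factual correctness is a separate question). A toy sketch with made-up logits:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up scores over a tiny four-word vocabulary at one generation step.
logits = [4.0, 1.5, 0.5, -1.0]
probs = softmax(logits)

top_prob = max(probs)                            # confidence in the chosen word
entropy = -sum(p * math.log(p) for p in probs)   # spread of the distribution

print(f"top-token probability: {top_prob:.2f}, entropy: {entropy:.2f} nats")
# This measures how sure the model is about the wording, not about the facts.
```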
No. “We can do this, therefore LLMs can do this” is nonsense. And specifically on the point of ‘how sure the LLM is’: ‘sureness’ for this kind of thing relates to the degree of ‘support’ for the curve being sampled to generate the text. The whole point of LLMs is being able to ‘make a differentiable million+ dimensional curve from some points and then use that curve as the curve to sample’, but the math means that almost all of the measure of the curve is ‘not supported’. If you only keep the parts of the curve that are supported, you end up with the degenerate case where the curve is only defined at the points, so it isn’t differentiable, you can’t do any of the interesting sampling over it, and the whole thing becomes a not-very-good document retrieval system.
Probably yes. But that’s the point where they really do get as complicated as humans. Evaluating the consistency of your beliefs is more complicated and requires more information than just giving an answer based on what you know. Most humans aren’t all that good at it. And you have to start thinking really hard about motivations. We have the basic mechanism for training NNs to evaluate their confidence in an answer (by training with a penalty term that rewards high confidence for correct answers, but strongly penalizes high confidence for incorrect answers), but it’s easy to imagine an AI’s “owners” injecting “be highly confident about these answers on these topics” to serve their own purposes, and it’s equally easy to imagine external groups exerting pressure to either declare certain issues closed to debate, or to declare certain questions unknowable (despite the evidence) because they consider certain lines of discussion distasteful or “dangerous”.
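That penalty term is essentially a proper scoring rule; a minimal sketch of the idea using the Brier score (the example data is made up):

```python
def brier_penalty(confidence, correct):
    """Small penalty for confident-and-right, large penalty for confident-and-wrong."""
    target = 1.0 if correct else 0.0
    return (confidence - target) ** 2

# Made-up (stated confidence, was the answer actually correct) pairs.
for confidence, correct in [(0.95, True), (0.95, False), (0.55, False), (0.10, False)]:
    print(confidence, correct, brier_penalty(confidence, correct))

# Minimising this over training pushes the model's stated confidence towards
# its actual accuracy, which is the calibration mechanism described above.
```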
but it’s easy to imagine an AI’s “owners” injecting “be highly confident about these answers on these topics” to serve their own purposes, and it’s equally easy to imagine external groups exerting pressure to either declare certain issues closed to debate, or to declare certain questions unknowable (despite the evidence) because they consider certain lines of discussion distasteful or “dangerous”.
I mean… OK, a few thoughts. 1) bad actors using a technology to bad ends is not an argument against a technology IMHO, because there will always be more good actors who can use the same or similar technologies to combat it/keep it under control, 2) this sounds exactly like what humans are subject to (basically brainwashing or gaslighting by bad actors), is that an argument against humans? ;)
That was pretty much exactly my point in the first sentence. This makes them just as complicated to deal with as humans. And humans are the opposite of trustworthy. “The computer will lie to you” becomes a guarantee instead of a possibility. And it will potentially be a sophisticated liar, with a huge amount of knowledge to draw on to craft more convincing lies than even the most successful politician.
There isn’t a “therefore we shouldn’t…” here. It will happen regardless of what you or I think. I’m just giving you a hint what to expect.
You have a good point about “lie sophistication.” Most of the time, actual liars are (relatively) easily detected because of things like inconsistencies in their described worldview or accounting of events. The thing is, the same reasoning that can detect lies in humans can also guide the machine to detect its own lies. Surely you’ve seen this already with one of the LLM’s when you point out its own inconsistency.
Also, I think we should start not calling it “lying” but simply categorize all non-truths as “error” or “noise”. That way we can treat it as a signal to noise problem, and it removes the problem (both philosophical and practical) of assigning blame or intent.
But to your point, if, say, ChatGPT4’s IQ is about 135 as someone has apparently tested, it’s much more difficult to detect lies from a 135IQ entity than a 100IQ entity… I’m just saying that we have to just treat it the same as we treat a fallible human.
The issue is not certainty, but congruence with the real world. Their sensory inputs are inadequate for the task. Expect multimodal models to be better at this, while never achieving perfection.
I think that like humans, they will never achieve perfection, which makes sense, since they are modeled after human output. I do think that eventually, they will be able to “look up” a rigorous form of the answer (such as using a calculator API or “re-reading” a collection of science papers) and thus become more accurate, though. Like a human, except many times faster.
Window management is one of those areas I’m fascinated with because even after 50 years, nobody’s fully cracked it yet.
In my opinion, Windows 95-2000 pretty much nailed it, though.
However, the basic primitives have not changed since the 70s and, as a result, the issues have never gone away.
They did go away, though. The taskbar solved the issue for good. Unlike on MacOS, GNOME, or whatever mobile OS, I never had a problem locating my windows on, well, Windows.
Manually placing and sizing windows can be fiddly work, and requires close attention and precise motor control.
Windows solved that problem too.
You could select multiple windows in the taskbar (ctrl+click), and tile them horizontally/vertically from the context menu.
Or you could open the window menu (alt+space), then choose Move (m) or Size (s), and position the window with the arrow keys in fixed pixel steps.
I think later versions (7?) also added some support for edge snapping.
Messy is the default, and it’s up to you to clean it up.
Well, that’s just life. At least if you spend those 30 seconds to clean up by yourself, you know where things are, and you stay in control.
GNOME has had basic tiling functionality since early in the GNOME 3 series. While this is nice to have, it has obvious limitations:
The tiling in Windows 95 was incredibly clumsy and mostly an afterthought for MDI windows. There are tile commands, but they just put the windows into a new arrangement, with no auto-resize afterwards. I don’t remember ever using them. It took until Windows 7 to get the modern edge-snapping approach, and that’s pretty limited.
Windows isn’t perfect, and pretending it was is peak end-of-history. The Mac has its own tradition of overlapping windows, to say nothing of other systems from the past. We should try new things to see if they work better, and examine past traditions in a way other than “Windows 98 perfect”. I say this as someone who enjoys studying prior art a lot.
I would never say that Windows itself was perfect, but I genuinely believe that the classic 95-2000 design was the peak desktop UX, and everything that came afterwards just kept adding bloat, removing useful features, and wasting screen space for the sake of looking fancy and shiny. Or worse, merely to mimic OSX. That’s why I’m generally skeptical of GNOME “rethinking” desktop again.
I mean, I would say the same thing about Mac OS version 7, which to me says that people prefer the technologies that shaped their expectations. I find Windows and OS X to both be largely terrible, and the less said about the various clones the Unix folks cough up, the better.
I loved the window management in System 7 - windows layered by application, not by window, and a corner menu for application switching rather than a taskbar. To the point that in the Gnome 1 era, I wrote a window menu panel applet, and a raise-by-application extension for Sawfish WM. Just in time for Gnome 2 to come out, and for me to never try anything like that again.
worse, merely to mimic OSX. That’s why I’m generally skeptical of GNOME “rethinking” desktop again
I don’t think that’s fair. Gnome 40 is a redesign. Taking what is nice from others and putting it into a different context is work, and I find the respective approaches fundamentally different: OSX is very floating-window-oriented, while GNOME is more workspace-oriented. The latter is absolutely great to use on laptops, in my personal opinion.
My favourite thing about Windows 95 MDI windowing was how the start button itself was implemented this way, so you could ctrl+ESC ESC alt+- M and wiggle it around the taskbar with arrow keys. :)
In my opinion, Windows 95-2000 pretty much nailed it, though.
The eternal question is whether it’s objectively better or because it’s what I/we grew up with. I’ve decided the answer doesn’t matter any more; I’ve picked my rut and I’m staying there. I remain ever-grateful that there are FOSS projects both for those who like to innovate on UI and those of us who want to remain in stasis.
Except I mostly grew up with Linux ;-) Between 2000-2010 I tried about every remotely popular DE/WM since GNOME 1 / KDE 2, and it was always the same story - cool ideas on the surface, but permanently unfinished when you looked closer.
Which I guess makes sense, if you do it as a hobby, you’d rather jump to the next cool thing than spend your time polishing all the boring details. I certainly don’t claim any right to expect anything from FOSS volunteers.
It’s just disappointing after all these years that Linux desktop keeps heading in the bloated/unfinished/flashy direction rather than mature & productive.
Gnome 2 had a taskbar. Gnome 3 removed it, along with the ability to minimize windows, and now cries that the window management is messy. Well, it took them 12 years, but I’m glad they finally realized that things are not ok.
It is messy, with or without a taskbar. Sure, some people are notorious cleaners both in real life and digitally, and may only have 2 tasks open at any one time. Others have 1000s of tabs open in their browsers, and I have seen the Windows taskbar’s scroll bar plenty of times. Saying that it is a solved problem is just lying to ourselves.
I agree, it’s not perfect. I use awesomewm for the tiling features myself. But gnome2 was ok and they made it worse. I will forever resent gnome3/40/… for that.
There is MATE, which is the spiritual successor to GNOME 2. I don’t think it’s fair to resent a project for going its own way with its own community’s work.
There are an endless number of Linux distros, and plenty come with MATE as the default. You would think they would be more popular if people really wanted that, wouldn’t you?
The real reason almost no one switches to Linux for desktop use cases is simply the lack of software and support (neither official support nor, especially, help from friends/family: one can call the neighbour’s kid for help with Windows, not so much with Linux). You still have no Office, which is unfortunately a must for many, and no proper PDF editors. I dislike Electron apps as much as the next person, but they are great at bringing more programs to Linux.
Honestly, I am only putting up with linux desktop because linux itself is the best developer environment, hands down. But it has plenty missing, not even just for average users, simply for average use cases that are also done by power users.
There are plenty of reasons that people don’t use Linux. There is undoubtedly a segment who try out default Ubuntu and either keep using it or go back to Windows/Mac based on that experience. I for one had my first Linux experience on RHEL 5, and the consistency and calming aesthetics of GNOME 2 were a big factor in my going deeper. On the other hand, people who are willing to try out a range of distros and DEs are already pretty committed.
Besides, MATE is not as consistent or as usable as GNOME 2 (for one, it uses GTK3), and the user base is much smaller than GNOME 3, so it’s less well tested and it’s harder to find answers to your questions. Case in point, in 2010 if your neighbor’s kid was a Linux user there’s a good chance they were running GNOME 2. Obviously MATE doesn’t benefit from that network effect, and even GNOME 3 lacks that level of hegemony because it’s so dissatisfying to so many users.
Exactly, my physical desk is messy and that works just fine, because I made that mess. It’s the skeuomorphism that’s most often forgotten. Neat organizing tools just make you feel good about organizing, and then seconds later there’s mess again. I wish mess was embraced more. I want to group windows so they pop up together when alt-tabbing, messily, without changing the rest of my mess. I want to snap a window to the side and have it take up that space, messily, without changing the rest of my mess. Like on real desks.
I never had problem locating my windows on, well, Windows.
But is it good to have all tasks always visible? Many people report “I started work but accidentally switched to Twitter and started scrolling”.
You could select multiple windows in the taskbar (ctrl+click), and tile them horizontally/vertically from the context menu.
This is not how tiling is usually used, and this capability is not indicated in the UI anyhow; it requires either two hands or sticky keys, and both a mouse and a keyboard 😢️!
A good window system will use the peripherals available to their greatest potential, and even depend on them. Trying to make a window system that will work well on both computers and touchscreen tablets is a doomed endeavor. You can have different window systems for different devices.
Q: Is it possible to make a periodic tiling with this? Is that hard, or would it happen by chance if you’re just laying them down randomly? Is it possible to get stuck when tiling, forcing you to backtrack to avoid overlaps?
Q: is it possible to make periodic tiling with this?
No! Hats (the previous paper) and Spectres (this paper) only admit aperiodic tilings. This is in contrast to Penrose tilings (either rhombuses, or kites & darts), which can be tiled periodically unless you add additional matching rules (lines drawn on top of the tiles that must match up across tiles). A quote from the original paper about this: “In this paper we present the first true aperiodic monotile, a shape that forces aperiodicity through geometry alone, with no additional constraints applied via matching conditions.”
Is it possible to get stuck when tiling, forcing you to backtrack to avoid overlaps?
I suspect so? But very curious if anyone knows for sure.
The base shape (with straight edges) is “weakly aperiodic”, which (in the terminology of the paper) means that it can tile periodically if you allow both the tile and its reflection, but must be aperiodic if you disallow reflections (but allow translation and rotation). The spectre variant has curved edges which prevent tiles from making a tiling with their reflections, so it is aperiodic whatever planar isometries you allow.
The average user won’t care or even notice those small imperfections in the images, the same way that most people are perfectly happy with the photos they get out of their smartphones, despite the lack of detail and sharpness compared to more professional cameras.
I’m not sure that this is a fair comparison. Most people are happy with the photos they get out of their smartphones, but they still hire a photographer with a professional camera for their wedding. So it’s not that people don’t notice the difference. But the difference is also incredibly minor to begin with in many situations—at this point I don’t even think that a camera is required equipment for getting into photography.
I think that when people don’t mind some reduction in quality it’s usually because we can see “through” the crappiness to the thing behind. A slightly blurry, underexposed photo of someone important to you probably does its job just as well as the same photo taken to technical perfection.
On the other hand, an image of someone important to you with, I don’t know, an extra nostril, could be more jarring. It might not be noticeable in every case—I’ve taken lots of photos in which I wouldn’t have noticed if my camera had been giving people extra nostrils—but at least some of the time, in portraits for example, it would be a deal breaker.
I think it’s a similar story with the generated art. When the worst crimes in a given image are a few paths that lead nowhere, a lantern floating in midair, a floor tree, a campfire sitting on top of an unharmed wooden cabinet, etc… you might not notice if that image is just background filler. But if it’s a level in a game that you are supposed to walk around, paths that lead nowhere jump into the mental foreground.
We’re not making average people’s memories with loved ones, we’re making Content for Gamers. You expect gamers to ignore details? In some games, out-of-place elements have caused months’ worth of lore theory videos. Even some that you can only see with hacks and glitches.
I think it depends on the context. For a blog post that has an AI image at the top, the details are all wrong, but no one is going to look at it, because you just skip past it and read the blog post. For a game, you’re going to be spending hours, perhaps hundreds of hours, staring at the screen, so you will notice if the art is weird.
That address bar search thing really took me a moment to notice on the video, but it’s really cool! Doesn’t work for me at all though. And I hope there is a key to toggle it between URL and search term…
It is now possible to override a JavaScript file in the debugger
Here’s a screenshot showing the feature. I have searched ‘test one two three’ using the URL bar on DuckDuckGo. You’ll notice the search terms are there in the URL bar.
I heard on the grapevine that Firefox would start hiding cookie consent banners automatically in the future. Does anyone know if this is now behind some configuration setting or should one still opt for an extension? I’ve held off on trying any extensions to do this, but as of late it seems like it has gotten worse… What would people even recommend to effectively block these banners?
Seconded. I also liked the Self Destructing Cookies extension, which doesn’t yet work with newer Firefox (though there’s a reimplementation that’s making progress). This had the cookie policy that should be default for all browsers:
When you close a tab, cookies are moved to a separate store.
When you visit the site again, you get a notification pop-up saying ‘cookies from a previous visit were deleted, do you want to restore them?’
If it’s a site that you log into, you tell it to keep the cookies and remain logged in. If it’s a site that you don’t want to record anything about you, then you ignore the notification. By default, all cookies are gone as soon as you leave a page, but if that wasn’t the right choice for a particular site then there’s an undo button available the next time you return.
I think the undelete bit was the killer feature for Self Destructing Cookies. As a user of a web site, I don’t always know if a feature that it depends on requires cookies until I return, so being able to say ‘oh, oops, I didn’t actually want to delete the cookies from my last visit, restore them and reload the tab please’ meant that it could have a very aggressive deletion default, without ever losing state that I cared about.
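If it helps to see it spelled out, the policy amounts to a tiny state machine; here’s a toy model of it (emphatically not the extension’s real code):

```python
class SelfDestructingCookieJar:
    """Toy model of the policy described above; not the extension's actual code."""

    def __init__(self):
        self.active = {}    # site -> cookies for tabs that are currently open
        self.deleted = {}   # site -> cookies moved aside when the tab was closed

    def close_tab(self, site):
        # Cookies aren't destroyed outright; they move to a separate store.
        if site in self.active:
            self.deleted[site] = self.active.pop(site)

    def visit(self, site):
        if site in self.deleted:
            print(f"Cookies from a previous visit to {site} were deleted. Restore?")

    def restore(self, site):
        # The undo that makes aggressive deletion safe.
        if site in self.deleted:
            self.active[site] = self.deleted.pop(site)
```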
For sure, it was definitely better. I hope the rewrite for WebExtensions turns out to be viable.
My flow with Cookie AutoDelete is similar, but at the point of realising “oh, oops, I didn’t actually want to delete the cookies from my last visit” I can quickly add the site to the Cookie AutoDelete Whitelist, and log in again (or set up the site again, etc). Then at least I won’t lose it again.
It’s nowhere near as slick, but at least it only happens at most once per site (and one of the things it’s underscored for me is how few sites I actually want persistent cookies for!)
That’s fine for sites where you have an account and you can restore any state fairly easily, but the self destructing cookies model was really nice for places where I had ephemeral state tied to a cookie, even fairly simple things like shopping basket contents. With richer web apps, state is often stored directly in cookies or HTML5 local storage, or in a server-side back end with a cookie as a key to find it, so losing this is annoying.
I don’t want persistent state for 99% of sites that I visit, but the ones where I do, I often don’t realise it until I return.
I haven’t tried it (I mentioned it in my original post) but the description suggests that it doesn’t support the ‘undo’ mode, which is the thing that made this the perfect cookie-management strategy: delete everything aggressively but give users a way of undoing the deletion after they discover that the cookies contained some state that they’ve lost.
I don’t speak for Amazon, but my experience has been that this kind of analysis and architectural refitting is essentially constant and that’s a good thing. As volume and scope change, different approaches are needed. Monoliths are great for some problems and for some length of time; same with SOA, microservices. The lifetime of the problem usually sees one or all of those approaches as conditions change.
Completely agree with you. It’s all about the trade offs, and sometimes a problem is not understood well enough at the start to properly analyze those trade offs.
Given how flexible and malleable software is, it always amazes me how reluctant people are to refactor at scale, especially architecture. Electrical and mechanical engineering (let alone civil!) have massive up-front costs in manufacturing, and existing stock needs to be used or trashed, yet revisions are common. Software has none of that overhead, and everyone overreacts to revisions…
Software does have an up front cost in testing that the replacement is equivalent. If you can’t fully simulate production without risking production, is it any surprise it’s hard to get rework through?
If we’re using other engineering disciplines as our comparison point, the testing requirements are the same (and much higher for electrical/mechanical/civil). We can get far closer to simulating a true production use-case with software, it’s an unfortunate part of our industry that integration testing is mostly an afterthought.
No. At worst it goes via a DERP relay but it’s still encrypted client-side first. (Think WebRTC / TURN servers.)
As a customer, this is awesome because it’s making better use of the full connection whilst keeping the network traffic encrypted and applying permissions based on sender/receiver.
Brainfuck is simple
I do this for Godot games. The whole engine is only 50mb.
This is always the problem for folks selling big iron. Any workload that needs it today probably won’t in 10 years. As such, you need a continuing flow of new workloads. Video streaming was good for a while: it needs a lot of compute for transcoding, a lot of storage to hold the files, and a lot of bandwidth to stream them. As the set of CODECs that you need shrank and they improved, these requirements went down. With FTTP bringing gigabit connections to a lot of folks, there’s a risk that this will go away. With something like BitTorrent streaming, a single RPi on a fast home Internet can easily handle 100 parallel streams of HD video and so as long as 1% of your viewers are willing to keep seeding then you’re good and the demand for cloud video streaming goes away for anyone that isn’t a big streaming company (and they may build their own infrastructure).
But then AI comes along. It needs big data sets (which you can only collect if you’ve got the infrastructure to trawl a large fraction of the web), loads of storage for them (you use the same data repeatedly in training, so you need enough storage for a large chunk of the web), loads of specialised hardware to train (too expensive to buy for a single model, you need to amortise the cost over a bunch of things, which favours people renting infrastructure to their customers), and then requires custom hardware for inference to be effective. It’s a perfect use case for cloud providers. The more hype that you can drive to it, the more cloud services you can sell. Whether they’re actually useful to your customers doesn’t matter: as long as they’re part of the hype cycle, they’re giving you money.
I disagree. NLP researchers have been experimenting with text generation for decades. I first heard the term “hallucinated text” in 2014, which back then meant any text generated by a model, because it was a given that text models aren’t concerned with the truth. People in our department were convinced that more complex models with more data would generate more coherent text, especially after the leaps and bounds we saw image generators making. The big surprise is that the architecture turned out to be quite simple; an even bigger surprise is how much money is spent on training the models.
“We could never make an AI that can write this with our budget” more like.
There’s not nearly such optimism in fact generating AI that I know of. People in expert systems research have been humble realists ever since the AI winter, and nobody serious is jumping on the LLM hype train. All I hear are wishes from business people.
Texture Healing looks amazing.
It has also been done as “smart kerning” in https://commitmono.com
I’m conflicted. It does look really good, but your text is jumpy while you’re typing.
It does look good, but it also looks like they cherry-picked the examples. What happens in a word that has multiple m’s, but only one of them can be made bigger and the other cannot because its neighbours are also wide? I can’t test it right now, but I expect it would look weird and uneven.
https://www.youryoure.com/?its
The same is true for any proportional font that ligates “fi”, though
It’s not, because the fi ligature doesn’t change the f, so both letters when typed appear in their final positions.
Oh, that’s an interesting point. I have to try it some more, before I can say whether it would disturb me. But TBH, I’m not sure I actually look at the exact letters or words of my code when I type. I’ve just installed monaspace and will find out. (Though I’m not sure how to switch all those monaspace-toggles in e.g., vscode)
Yes, but I don’t understand why they still want a fixed-width font if the crammed letters are not good. Why not just go proportional then?
Because texture healing preserves alignment for stuff like ascii art.
It messes with tables too. But I’ve been coding in a proportional font for 5 years now and I can count on one hand the times that alignment has been an actual problem.
I still use monospace in the terminal because programs there like to print tables.
ASCII art shouldn’t be in code anyway, because it’s totally bad on screen readers, and probably braille too.
The graph is confusing. Why doesn’t the matrix version grow logarithmically?
I think it does, just very slowly compared to the linear method.
I really hope I will make it through this life without ever having a car. I made it this far.
Me too
Go down the not just bikes path (see youtube channel) and join the fight to make your city planning more pedestrian/bike/transit friendly!
From the article:
This implies it’s affecting passengers, too. Even without actually owning the car you’re in, your privacy might be violated!
How? What does the car do to a passenger?
From their Subaru review:
[…]
I like my Subaru (which is why I chose that review to dive into) and I don’t think I use any of the features that would make it a “Connected Vehicle”, but that very decidedly creeps me out.
I don’t know. A quick guess: send phone bluetooth or wifi identifiers to the manufacturer, who can then sell it as extra data to the same companies that are already tracking your phone so that they know even more about you?
“We drone people on metadata”, combined with the number of social-graph steps from you to a surveillance target being used as a metric to expand surveillance to you.
It’s hard for me to imagine not having a car. I mean, it’s obviously doable, but I see problems in some scenarios, like: having to buy some furniture and bring it home (only big shops deliver), moving across cities, helping friends move, being sick and needing to go to the doctor, trying to visit some lonely place with a tent, doing big/heavy groceries (e.g. buying 6 bottles of 5L water plus stuff for the week). A bike is not the answer for these.
I, too, have never owned a car. To each of your scenarios:
I have never seen a furniture shop that doesn’t deliver. A lot of furniture doesn’t fit in a car, so you end up needing to rent a van anyway. Man with a van services are fairly cheap and come with someone to help you carry things as well.
How often do you do this? Last time we moved, we rented a van for the day, which cost about as much as a week’s worth of tax and insurance on a car. Moving with a car sounds quite painful.
Are you safe to drive when you’re sick? Again, how often do you do this?
I live in a city, so my GP is about 5 minutes walk from my house. If I need to go further, taxis are available.
Sure, you might want a car for that but, again, how often do you do it? If it’s every weekend, it might make sense. If it’s once or twice a year then renting a car (or a camper van, or some kind of off-road vehicle) probably makes more sense.
That’s a lot of bottled water. I live in a civilised country, so the tap water is drinkable, but you might be surprised at how much you can carry on a bike. Two pannier bags will happily carry a week’s shopping for an individual.
That said, I can’t imagine going back to doing a big grocery run in person. For the last 20+ years, I’ve done it online and had it delivered. It takes less time to do the shop than it would take to drive to the supermarket, and these days I can do it on a tablet so I can wander around and check the fridge and cupboards to see if I’m out of something (or, if it’s a non-perishable on special offer, how much I have space for), which is far more convenient. I pick up fresh things every few days from a shop within walking distance.
Mr Money Moustache has some good rants on the economics of car ownership. I am somewhat in awe of an industry that has managed, in the minds of consumers, to equate freedom with owning a depreciating asset that has high operating costs.
These arguments always seem a bit circular unfortunately, and can be summarised as ‘if you just do less of the things that need a car, to the point at which you no longer really need a car, then hey presto you don’t actually need a car!!’. I mean yes, sure.
I wouldn’t bother with one in any sort of decent-sized city, I think they’re effectively essential in most rural places here in the UK, and the need to make ‘anti car’ a sort of religion or identity (I don’t think the parent comment is doing this, I should say) seems like a psychological tic that isn’t very helpful in what needs to be a more sober debate about urban infrastructure and planning.
That’s quite reductionist. There’s a question for each of those things in a few dimensions:
I grew up in a small village in the UK and I agree. We had a bus to the city once a day (and it was timed for people visiting the countryside, so if you took it into town you didn’t have one coming back until the next day). Walking to the outskirts of the city took about an hour. With an electric bike it would probably have been quite easy (they were far too expensive then), but there was a big hill just before the city that was not at all fun on a normal bike, and then the trip into the city was uphill.
The bus went from about two minutes walk from my house though. If it had run hourly, owning a car would have been far less important. When I moved to Swansea, there were regular busses that looped through the nearby villages at least once an hour, so it was possible (just not convenient) to live there without a car. Increased spending on infrastructure would make that easier. The bus service was great back then. Students could get a bus pass that gave unlimited trips for under £1/day, you could also buy a day pass for about £2 that gave you unlimited trips (most returns were more expensive, so this was the only ticket you ever bought) and they ran every 5-10 minutes on most of the in-town routes. When I went back about 7 years ago, the buses were so expensive that it was cheaper to take a taxi.
Sorry, I can’t help but interpret your posts as “I don’t need it, therefore I don’t think it’s a good idea to use it”, although you probably don’t mean it this way.
You can walk out of home right now and travel 1000km alone with the baggage of your choice. I need to have this option, because otherwise I would feel like I’m in jail.
It’s not really about me; I simply wouldn’t want other people who are sick to use the same transit as I’m using right now. That’s why I don’t want to use the transit, or go to the office, when I’m not feeling very healthy.
Driving requires skill, and people get rusty with driving skills when they don’t drive often enough. Some time ago I didn’t need to drive for a month, and I felt the difference when I finally sat behind the wheel. Driving once a year for a thousand kilometres doesn’t sound very safe to me, to be honest.
Well, you’ve been able to for 20+ years. For me it wasn’t really an option before Covid. Also, small shops don’t have this service, and I like to support smaller shops instead of big malls.
Statistics often don’t include the cost optimization each of us can do, based on our unique situation. In Poland it costs me ~£1400 per year, according to my own statistics (including fuel). Not sure how similar this is to the UK.
Last time I tried to use a taxi to go back home after leaving my car for repairs, I couldn’t find any. I had to walk 1 kilometre to a bus stop and then wait 40 minutes for a bus. Another time I had to wait 30 minutes in front of my office building, because all the taxis were busy. So that’s my experience with taxis. Also, I don’t like the deadline you need to conform to – with a car you just leave when you’re ready.
Sure, if you live close to the airport.
Wow, what a stretch. And fighting for the electorate by satisfying the majority at the cost of discontenting minorities doesn’t have anything to do with how government operates?
I enjoy reading your posts on software topics, but this one has major Rob Rhinehart vibes. http://web.archive.org/web/20150924055227/http://robrhinehart.com/?p=1331
Really? You’re comparing a kook who can feel AC current (cue “electrical oversensitivity”) and who promotes Soylent (whatever happened to them??) with someone living an utterly normal life in an urban environment?
I was going to say something very similar if no one else had.
Soylent is still around. It looks more and more like Ensure, just marketed to a younger crowd.
For me personally, a mix of having a cargo bike and renting vans fills this niche
Renting cars doesn’t feel like an “anti-car” strategy, and cargo bikes limit your possibilities to ~20km from the point where you rented the bike. Cargo bikes are only a thing in the biggest cities, unless you have your own. Some of them cost as much as a used car, which means their price/value ratio seems very low.
“Anti-car” is actually mostly “anti-having-only-car-centered-infrastructure”. Having easy access to rental vans is an essential part of that, not a hindrance.
The U.S. Department of Transportation’s Bureau of Transportation Statistics does calculations for what it costs the average American to own/drive their own car.
The total for the year 2022 is $10,729. That is per year. https://www.bts.gov/content/average-cost-owning-and-operating-automobilea-assuming-15000-vehicle-miles-year
This is not anywhere near the cost of even the fanciest of cargo bikes.
They also have this page, which shows a more complete and detailed picture, including average costs by income level. For whatever reason, it only shows info as late as 2021: https://data.bts.gov/stories/s/ida7-k95k/
This is all to say that cars are indeed very expensive and so it is perfectly understandable why someone would want to avoid such an enormous cost burden in their lives even if every level of the U.S government, some of the largest corporations, and a car-brained culture want to make saving that money as difficult as possible.
I have lived in a more typical city in a fairly quiet neighborhood and my grocery store was directly across the street from my apartment. Basically everything else I needed was walkable within a couple blocks. The U.S. has tried its best to be pedestrian-unfriendly almost everywhere but plenty of people do live in places throughout the country where walking or cargo biking is totally possible.
Interesting data, thanks. Although, I’m keeping my own statistics, and in my case it’s $142 per month for the last 12 months (including gas, maintenance, paperwork and registration, parking, highway tolls – basically anything that has to do with the car is included here).
Also:
Not sure what it looks like in the US, but in my case I can limit my insurance to basic coverage and pay $116 for one year, instead of full coverage for $940 per year.
The average also assumes the car is replaced with a new one every 5 years, so it probably includes car dealers’ profits. Meaning, these statistics seem to show the absolute worst possible (but still realistic) price of having a car, and it should be pretty easy to bring it down.
Did you include capital expenditure spread across the lifetime of the vehicle in your calculations? Though if you buy an old, robust, and maintainable car like a Toyota or a Volvo, the initial purchase and maintenance costs are probably well below average.
I didn’t include it, but after including the initial price, my monthly cost is on average $225 – $2700 per year (Volvo V50).
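Roughly, it’s just spreading the purchase price over the years you expect to keep the car and adding that to the running costs. A quick sketch in Python – the purchase price and ownership length below are made-up illustrative numbers, not my actual figures:

    # Spread the purchase price over the expected ownership period and add it
    # to the monthly running costs. All numbers below are hypothetical.
    purchase_price = 6000.0         # what you paid for the car, USD
    expected_years = 6              # how long you plan to keep it
    running_cost_per_month = 142.0  # fuel, maintenance, insurance, parking, tolls

    capital_per_month = purchase_price / (expected_years * 12)
    total_per_month = running_cost_per_month + capital_per_month

    print(f"capital share: ${capital_per_month:.0f}/month")
    print(f"total:         ${total_per_month:.0f}/month (${total_per_month * 12:.0f}/year)")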
I completely acknowledge using a cargo bike is a thing I can do because of where I live; it’s really not for everywhere. And yeah, they get pricey, but I would push back slightly on your cost argument, because the total cost of a cargo bike is so much less than a car once you factor in insurance, maintenance, and gas.
I didn’t think we were being “anti-car” per se, rather, anti car ownership. I’ve spent my whole life driving and only recently have I had the option to use bikes as my primary mode of transportation. Cars are useful! It’s hard to imagine our society without delivery vehicles and ambulances and such, so I’m personally not anti-car as much as I would like to live in cities with viable alternatives.
What’s funny about this discussion is, while privacy on cars is atrocious, privacy on transit is probably so much worse.
If you live in a car-centric city, it’s entirely rational that living without a car is unimaginable. It’s a vicious loop — if everyone must have a car, businesses and services are built around cars, therefore everyone must have a car.
I lived in Warsaw and London which aren’t car-dependent.
There’s really excellent cargo bikes these days, recumbent 4-wheel bikes (quikes?) that take 100+ kg of cargo, and are still small enough to fit through a standard door for your neighbourhood bicycle storage, and go on the non-car roads with everyone else. I don’t have one, but that’s my dream vehicle, which would let me maybe find a cheap house a bit farther out.
I can do without a car because there’s decent non-car infrastructure in this town of ~90K people. The non-car roads are shared by pedestrians, bicyclists, wheelchair users, and everyone else who can’t or won’t drive a car. In the winter I use the bus more.
I’ve lived in other places where it would be very difficult, if possible at all, to get by with an electric wheelchair or a bicycle. So I recognise that it’s a huge privilege. Not having a car is what lets me afford other things that make my life better. It’s quite expensive to have a car here, and I don’t want to work even more just to afford that too.
I don’t want to come across as better than anyone who has a car. My only wish is that more people would be able to get by without one.
Is there a “rallying cry” for anti-car? Something short and recognisable, akin to “black lives matter” or “be gay, do crime” or “animals are not property”? Or even a hashtag?
The closest I have heard is “cars ruin cities”
“cities aren’t loud, cars are loud” is also a good one that gets people thinking
In all seriousness what you’re looking for is “f*** cars”. There’s a popular subreddit.
My current job does this, and I don’t have a word for just how confusing it makes things. It would be a very bad word though. Something evoking religious imagery of eternal suffering, perhaps a reference to Sisyphus.
BUT - and this pains me to say - it’s also not wrong at all. These points are completely valid. Semantics change over time. Names become obsolete. Worse, having to name two very similar things that only differ by a tiny semantic bit leads to really terrible names that don’t make the situation any better.
Also, let’s say you could even successfully rename things from a technical perspective, there’s the human perspective. Everyone is going to use the old name for 2 years anyway. Humans don’t have the ability to erase context overnight.
Basically, against my desire, I firmly believe that at its limit, naming is futile. We shouldn’t purposefully obfuscate things with bad names, but there is no hope for good names either. Behavioral semantics is too information-dense to communicate in a few characters; it needs more bits of information than a short name can carry.
My approach is this: when you name a component something like SomethingSomething, and you’re explaining it to someone and they go “SomethingSomething?”, and you respond, “You know, the X-doing service”…
Then you should just rename it to “X-doing-service”. No matter how stupid or wrong it sounds.
And what do you do when the service takes on or changes responsibility? I.e. when “X-doing-service” is no longer accurate.
Then you make a new service. (Even if just by forking.)
Three years ago I made the mistake of introducing the new search engine as Elasticsearch and I’ve been correcting people ever since that it’s just the database.
Ever deploy something on Heroku? You get two random names like “dark-resonance” or “sparkling-star”. There you go. Always just take two random words.
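If you want the same thing locally, a minimal sketch in Python (the word lists here are just placeholders – swap in whatever vocabulary you like):

    import random

    # Tiny placeholder word lists; use your own.
    ADJECTIVES = ["dark", "sparkling", "quiet", "crimson", "misty", "bold"]
    NOUNS = ["resonance", "star", "harbor", "meadow", "falcon", "ember"]

    def random_name() -> str:
        """Return a Heroku-style 'adjective-noun' name."""
        return f"{random.choice(ADJECTIVES)}-{random.choice(NOUNS)}"

    print(random_name())  # e.g. "quiet-falcon"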
That’s too flippant for my personal taste.
I use this pattern for branch names. It’s convenient. Rarely, it discovers some offensive pair of words. Most often, though, it results in names that are completely useless, and on a number of occasions I’ve struggled to find the right branch for something.
Certainly a better branch name than an enforced ticketing-system ID. You really think I’m gonna bother memorizing a string of integers, or that I want to open & search JIRA just to find the ID?
I’ve recently started doing that with my personal projects, using some random generator page I found. It basically spits out “whimsical adjective” “animal or funny object” word pairs. I cycle through them until I find one that sort of kind of matches the project. Examples:
glowing lamp: my effort to keep a restructured text engineering log book to make public
fancy hat: building an FOC-based brushless motor controller. Tenuous connection… a ball cap with a propeller on it is a fancy hat. The BLDC will be spinning a propeller on it.
lost elf: ESP32-based temperature sensor that’s going to live in the crawl space over the winter to make sure the heat tape on the water line is working
How about sparklines?
I had some fun with that https://i.imgur.com/eGKs5aG.png
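If anyone else wants to play with the idea, a minimal sketch in Python using the Unicode block characters (assumes a plain, non-empty list of numbers):

    # Map a list of numbers onto the eight Unicode block characters.
    BARS = "▁▂▃▄▅▆▇█"

    def sparkline(values):
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1  # avoid division by zero on flat data
        return "".join(BARS[int((v - lo) / span * (len(BARS) - 1))] for v in values)

    print(sparkline([1, 5, 22, 13, 5, 8, 30, 2]))  # prints ▁▁▆▃▁▂█▁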
A little while ago I was working with someone on a StackExchange site who was really determined to solve a “get data from point A to point B” problem in an unconventional way — namely, 2.4GHz WiFi over coax. It seems like they were working under conditions of no budget but a lot of surplus hardware. Anyway they kept asking RF-design questions, being unsatisfied with the answers (which amounted to “no, what you have in mind won’t work”), and arguing down to the basic theory (like, what it means to have so many dB of loss per meter, and why measurements with an ohmmeter aren’t valid for microwave).
So, the last question they asked was whether they could use some 16mm aluminum pipe (which is a diameter of about 1/8 wavelength at 2.4GHz) as a waveguide. The answer from someone who knows what they’re talking about was: no, that won’t work. 1/8 wavelength is too small a diameter for any waveguide mode to propagate, and so the loss would be ludicrously high (>1000dB/m). The minimum size for 2.4GHz is more like 72-75mm.
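(For anyone curious where that figure comes from: the dominant TE11 mode in an air-filled circular waveguide cuts off when the diameter drops below roughly 0.59 wavelengths. A quick back-of-the-envelope check in Python, ignoring any dielectric filling:)

    import math

    C = 299_792_458.0   # speed of light, m/s
    F = 2.4e9           # frequency, Hz
    X11_PRIME = 1.8412  # first root of J1', which sets the TE11 cutoff

    wavelength = C / F                            # ~125 mm
    min_diameter = X11_PRIME * C / (math.pi * F)  # smallest pipe that propagates TE11

    print(f"free-space wavelength: {wavelength * 1000:.1f} mm")
    print(f"1/8 wavelength:        {wavelength / 8 * 1000:.1f} mm")  # ~15.6 mm
    print(f"TE11 cutoff diameter:  {min_diameter * 1000:.1f} mm")    # ~73 mm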
Not satisfied with that answer, the OP decided to ask ChatGPT to “design a 1/8 wavelength circular waveguide for 2.4GHz”, and posted the result as a self-answer. And ChatGPT was perfectly happy to do that. It walked through the formulas relating frequency and wavelength, and ended with “Therefore, the required diameter for a circular waveguide for a 2.4 GHz signal at 1/8 wavelength is approximately 1.6 cm.” OP’s reaction was “there, see, look, it says it works fine!”
Of course the reality is that ChatGPT doesn’t know a thing. It calculated the diameter of a 1/8-wavelength-diameter circular thingy for 2.4GHz, and it called the thingy a “waveguide” because OP prompted it to. It has no understanding that a 1/8-wavelength-diameter thingy doesn’t perform the function of a waveguide, but it makes a very convincing-looking writeup.
I simply cannot take seriously anyone who anthropomorphises computer programs (i.e. “I asked it and it answered me!”). Attributing agency, personhood, or thinking to a program is naïve and, at this scale, problematic.
I wouldn’t take it that far. I’m fine with metaphor. (I will happily say that something even simpler than a computer program, like a PID controller, “wants” something). But people who can’t tell the difference between metaphor and literal truth are an issue.
It’s easy and convenient for people with a technical background to talk about this stuff with metaphor. It’s even simpler than that: we talk about abstractions with metaphor all the time. So if I say ChatGPT lies, that’s an entirely metaphorical description, and lots of people in tech will recognize it as such. ChatGPT has no agency. It might have power, but power and will/agency are different things.
Let me put it another way. People often say that “a government lied” or “some corporation lied”. Both of these things, governments and corporations, are abstractions. Abstractions with a lot of power, yeah sure, but not agency. A government or a corporation cannot, on its own, decide to do diddly squat, because it only exists in the minds of people and on paper. It is an abstraction, consisting of people and processes.
And yet, now we play games of semantics, because corporations and governments lie all the bloody time.
Power without agency is a dangerous thing. We should know that by now. We’re playing with dynamite.
Slap a human face on the chat bot, and it will be even harder for most people to see past the metaphors.
The “Power without agency” bit instantly reminded me of this, and of algorithmic social media in general.
We’re currently within a very small window where tools like this are seen as novelties and thus “cool”, and people will proudly announce “I asked ChatGPT and here is the result”. In about 6 months the majority of newly written text will be generated using LLMs but will not be advertised as such. That’s when the guardrails offered by “search in Google to verify” and “ask Stackoverflow” will melt away, and online knowledge will become basically meaningless.
People are mining sites like alternativeto to generate comparison articles for their blogs. Problem is, alternativeto will sometimes list rather incomparable products, because maybe you need to solve your problem in a different way. Humans can make this leap; GPT will just invent features to make the products more comparable. It really set me on the wrong track for a while…
There must be a way for these LLMs to sense their “certainty” (perhaps the relative strength of the correlation?), since we are able to do so. Currently I think all they do is look for randomized local maxima (of any value) without evaluating their “strength”. Once a model was able to estimate its own certainty about an answer, it could return that as a value along with the textual output.
No. “We can do this, therefore LLMs can do this” is nonsense. And specifically on the point of ‘how sure the LLM is’: ‘sureness’ here relates to the degree of ‘support’ for the curve being sampled to generate the text. The whole point of LLMs is being able to make a differentiable million+-dimensional curve from some points and then use that curve as the one to sample, but the maths means that almost all of the measure of that curve is ‘not supported’. If you keep only the parts of the curve that are supported, you end up with the degenerate case where the curve is only defined at the points, so it isn’t differentiable, you can’t do any of the interesting sampling over it, and the whole thing becomes a not-very-good document retrieval system.
Probably yes. But that’s the point where they really do get as complicated as humans. Evaluating the consistency of your beliefs is more complicated and requires more information than just giving an answer based on what you know, and most humans aren’t all that good at it. And you have to start thinking really hard about motivations. We have the basic mechanism for training NNs to evaluate their confidence in an answer (by training with a penalty term that rewards high confidence for correct answers but strongly penalizes high confidence for incorrect answers). But it’s easy to imagine an AI’s “owners” injecting “be highly confident about these answers on these topics” to serve their own purposes, and it’s equally easy to imagine external groups exerting pressure to either declare certain issues closed to debate, or to declare certain questions unknowable (despite the evidence), because they consider certain lines of discussion distasteful or “dangerous”.
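To make the penalty-term idea concrete, here is a toy sketch in Python. It assumes a model that emits a confidence value in (0, 1) alongside each answer; the asymmetric weight is just an illustrative knob, not something from any real training setup:

    import math

    def confidence_penalty(confidence: float, is_correct: bool, wrong_weight: float = 5.0) -> float:
        """Cheap when confidently right, expensive when confidently wrong."""
        eps = 1e-9
        if is_correct:
            return -math.log(confidence + eps)                    # reward confidence on correct answers
        return -wrong_weight * math.log(1.0 - confidence + eps)   # punish confidence on wrong ones

    print(confidence_penalty(0.95, True))   # ~0.05
    print(confidence_penalty(0.95, False))  # ~15.0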
I mean… OK, a few thoughts. 1) bad actors using a technology to bad ends is not an argument against a technology IMHO, because there will always be more good actors who can use the same or similar technologies to combat it/keep it under control, 2) this sounds exactly like what humans are subject to (basically brainwashing or gaslighting by bad actors), is that an argument against humans? ;)
That was pretty much exactly my point in the first sentence. This makes them just as complicated to deal with as humans. And humans are the opposite of trustworthy. “The computer will lie to you” becomes a guarantee instead of a possibility. And it will potentially be a sophisticated liar, with a huge amount of knowledge to draw on to craft more convincing lies than even the most successful politician.
There isn’t a “therefore we shouldn’t…” here. It will happen regardless of what you or I think. I’m just giving you a hint what to expect.
You have a good point about “lie sophistication.” Most of the time, actual liars are (relatively) easily detected because of things like inconsistencies in their described worldview or accounting of events. The thing is, the same reasoning that can detect lies in humans can also guide the machine to detect its own lies. Surely you’ve seen this already with one of the LLMs when you point out its own inconsistency.
Also, I think we should start not calling it “lying” but simply categorize all non-truths as “error” or “noise”. That way we can treat it as a signal to noise problem, and it removes the problem (both philosophical and practical) of assigning blame or intent.
But to your point, if, say, ChatGPT4’s IQ is about 135 as someone has apparently tested, it’s much more difficult to detect lies from a 135IQ entity than a 100IQ entity… I’m just saying that we have to just treat it the same as we treat a fallible human.
A relevant paper is Language Models (Mostly) Know What They Know https://arxiv.org/abs/2207.05221
There has also been work on extracting a truth predicate from within these models.
The issue is not certainty, but congruence with the real world. Their sensory inputs are inadequate for the task. Expect multimodal models to be better at this, while never achieving perfection.
I think that like humans, they will never achieve perfection, which makes sense, since they are modeled after human output. I do think that eventually, they will be able to “look up” a rigorous form of the answer (such as using a calculator API or “re-reading” a collection of science papers) and thus become more accurate, though. Like a human, except many times faster.
What does “start free” mean, exactly?
Any chance of implementing it as a Firefox extension? Preferably in the sidebar
“Start free” is just a call-to-action button. Renamed it to “Open app”.
I’m thinking about a Firefox extension as well. One thing I don’t like about Firefox is the number of bugs in its contenteditable implementation.
In my opinion, Windows 95-2000 pretty much nailed it, though.
They did go away, though. The taskbar solved the issue for good. Unlike on MacOS, GNOME, or whatever mobile OS, I never had a problem locating my windows on, well, Windows.
Windows solved that problem too.
You could select multiple windows in the taskbar (ctrl+click), and tile them horizontally/vertically from the context menu.
Or you could open the window menu (alt+space), then click move (m) or resize (s), and position the window with arrow keys, in some pixel intervals.
I think later versions (7?) also added some support for edge snapping.
Well, that’s just life. At least if you spend those 30 seconds to clean up by yourself, you know where things are, and you stay in control.
That’s what you get when you keep thinking and rethinking and never actually finish your product. Windows had tiling in 1995! (https://devblogs.microsoft.com/oldnewthing/20090728-00/?p=17333)
The tiling in Windows 95 was incredibly clumsy and mostly an afterthought for MDI windows. There was a Tile command, but it just dropped the windows into a new arrangement, with no auto-resize. I don’t remember ever using it. It took until Windows 7 to get the modern edge-snapping approach, and that’s pretty limited.
Windows isn’t perfect, and pretending it was is peak end-of-history. The Mac has its own tradition of overlapping windows, to say nothing of other systems from the past. We should try new things to see if they work better, and examine past traditions in a way other than “Windows 98 perfect”. I say this as someone who enjoys studying prior art a lot.
I would never say that Windows itself was perfect, but I genuinely believe that the classic 95-2000 design was the peak desktop UX, and everything that came afterwards just kept adding bloat, removing useful features, and wasting screen space for the sake of looking fancy and shiny. Or worse, merely to mimic OSX. That’s why I’m generally skeptical of GNOME “rethinking” desktop again.
I mean, I would say the same thing about Mac OS version 7, which to me says that people prefer the technologies that shaped their expectations. I find Windows and OS X to both be largely terrible, and the less said about the various clones the Unix folks cough up, the better.
I loved the window management in System 7 - windows layered by application, not by window, and a corner menu for application switching rather than a taskbar. To the point that in the Gnome 1 era, I wrote a window menu panel applet, and a raise-by-application extension for Sawfish WM. Just in time for Gnome 2 to come out, and for me to never try anything like that again.
I don’t think that’s fair. Gnome 40 is a redesign. Taking what is nice from others and putting it into a different context is work, and I find the respective approaches fundamentally different: OSX is very floating-window-oriented, while GNOME is more workspace-oriented. The latter is absolutely great to use on laptops, in my personal opinion.
My favourite thing about Windows 95 MDI windowing was how the start button itself was implemented this way, so you could ctrl+ESC ESC alt+- M and wiggle it around the taskbar with arrow keys. :)
The eternal question is whether it’s objectively better or because it’s what I/we grew up with. I’ve decided the answer doesn’t matter any more; I’ve picked my rut and I’m staying there. I remain ever-grateful that there are FOSS projects both for those who like to innovate on UI and those of us who want to remain in stasis.
Except I mostly grew up with Linux ;-) Between 2000-2010 I tried about every remotely popular DE/WM since GNOME 1 / KDE 2, and it was always the same story - cool ideas on the surface, but permanently unfinished when you looked closer.
Which I guess makes sense, if you do it as a hobby, you’d rather jump to the next cool thing than spend your time polishing all the boring details. I certainly don’t claim any right to expect anything from FOSS volunteers.
It’s just disappointing after all these years that Linux desktop keeps heading in the bloated/unfinished/flashy direction rather than mature & productive.
Gnome 2 had a taskbar. Gnome 3 removed it, along with the ability to minimize windows, and now cries that the window management is messy. Well, it took them 12 years, but I’m glad they finally realized that things are not ok.
It is messy, with or without a taskbar. Sure, some people are notorious cleaners, both in real life and digitally, and may only have 2 tasks open at any one time. Others have 1000s of tabs open in their browsers, and I have seen the Windows taskbar’s scroll bar plenty of times. Saying that it is a solved problem is just lying to ourselves.
I agree, it’s not perfect. I use awesomewm for the tiling features myself. But gnome2 was ok and they made it worse. I will forever resent gnome3/40/… for that.
There is MATE, which is the spiritual successor of GNOME 2. I don’t think it’s fair to resent a project for going their own way with their own community work.
They did a lot of damage. Who knows how many more people would have been won over to Linux if they had stayed with GNOME 2.
There are an endless number of Linux distros, plenty come with Mate as the default. You would think they would be more popular, if people really wanted that, wouldn’t they?
The real reason why almost no one switches to Linux for desktop use cases is simply the lack of software and support (neither official support, nor, especially, help from friends/family – one can call the neighbor’s kid to help with Windows, not so much with Linux). You still have no Office, which is unfortunately a must for many, nor proper PDF editors. I dislike Electron apps as much as the next person, but they are great at bringing more programs to Linux.
Honestly, I am only putting up with the Linux desktop because Linux itself is the best developer environment, hands down. But it has plenty missing, not just for average users, but for average use cases that power users also have.
There are plenty of reasons that people don’t use Linux. There is undoubtedly a segment who try out default Ubuntu and either keep using it or go back to Windows/Mac based on that experience. I for one had my first Linux experience on RHEL 5, and the consistency and calming aesthetics of GNOME 2 were a big factor in my going deeper. On the other hand, people who are willing to try out a range of distros and DEs are already pretty committed.
Besides, MATE is not as consistent or as usable as GNOME 2 (for one, it uses GTK3), and the user base is much smaller than GNOME 3, so it’s less well tested and it’s harder to find answers to your questions. Case in point, in 2010 if your neighbor’s kid was a Linux user there’s a good chance they were running GNOME 2. Obviously MATE doesn’t benefit from that network effect, and even GNOME 3 lacks that level of hegemony because it’s so dissatisfying to so many users.
I think that 3% of desktop linux users is negligible either way, unfortunately. Not even GNOME 3 is well-tested enough.
I don’t know what you mean. What is “negligible” and what is “enough”?
Exactly, my physical desk is messy and that works just fine, cause I made that mess. It’s the skeuomorphism that’s most often forgotten. Neat organizing tools just make you feel good about organizing, and then seconds later there’s mess again. I wish mess was embraced more. I want to group windows so they pop up together when alt-tabbing, messily, without changing the rest of my mess. I want to snap a window to the side and have it take up that space, messily, without changing the rest of my mess. Like on real desks.
but is it good to have all tasks always visible? many people report “i started work but accidentally switched to twitter and started scrolling”
this is not how tiling is usually used, and this capability is not indicated in the ui anyhow and requires either 2 hands or sticky keys, and both a mouse and a keyboard😢️!
A good window system will use the peripherals available to their greatest potential, and even depend on them. Trying to make a window system that will work well on both computers and touchscreen tablets is a doomed endeavor. You can have different window systems for different devices.
it’s good to have one for both keyboard+mouse and keyboard only…
Pretty much this. Gnome’s a constant building site.
So their Java is restricted such that threads are not available? That would be even simpler because you don’t need a “yield”.
I feel like I’ve written single-threaded Java like this with Thread.yield()
Q: is it possible to make periodic tiling with this? Is it hard or would it be likely if you’re just laying them randomly? Is it possible to get stuck when tiling, forcing you to backtrack to avoid overlaps?
No! Hats (the previous paper) and Specters (this paper) only admit aperiodic tilings. This is in contrast to Penrose tilings (either rhombuses, or kits&darts) which can be tiled periodically unless you add additional matching rules (lines drawn on top of the tiles that must match up across tiles). A quote from the original paper about this: “In this paper we present the first true aperiodic monotile, a shape that forces aperiodicity through geometry alone, with no additional constraints applied via matching conditions.”
I suspect so? But very curious if anyone knows for sure.
The base shape (with straight edges) is “weakly aperiodic”, which (in the terminology of the paper) means that it can tile periodically if you allow both the tile and its reflection, but must be aperiodic if you disallow reflections (but allow translation and rotation). The Spectre variant has curved edges which prevent tiles from making a tiling with their reflections, so it is aperiodic whatever planar isometries you allow.
The average user won’t care about or even notice those small imperfections in the images, the same way that most people are perfectly happy with the photos they get out of their smartphones, despite lacking details and sharpness compared to more professional cameras.
I’m not sure that this is a fair comparison. Most people are happy with the photos they get out of their smartphones, but they still hire a photographer with a professional camera for their wedding. So it’s not that people don’t notice the difference. But the difference is also incredibly minor to begin with in many situations—at this point I don’t even think that a camera is required equipment for getting into photography.
I think that when people don’t mind some reduction in quality it’s usually because we can see “through” the crappiness to the thing behind. A slightly blurry, underexposed photo of someone important to you probably does its job just as well as the same photo taken to technical perfection.
On the other hand, an image of someone important to you with, I don’t know, an extra nostril, could be more jarring. It might not be noticeable in every case—I’ve taken lots of photos in which I wouldn’t have noticed if my camera had been giving people extra nostrils—but at least some of the time, in portraits for example, it would be a deal breaker.
I think it’s a similar story with the generated art. When the worst crimes in a given image are a few paths that lead nowhere, a lantern floating in midair, a floor tree, a campfire sitting on top of an unharmed wooden cabinet, etc… you might not notice if that image is just background filler. But if it’s a level in a game that you are supposed to walk around, paths that lead nowhere jump into the mental foreground.
We’re not making memories with loved ones of average people, we’re making Content for Gamers. You expect gamers to ignore details? In some games, out of place elements have caused months worth of lore theory videos. Even some that you can only see with hacks and glitches.
L is Real 2401
I’ve seen plenty of normie friends get an uncanny valley feel from AI art without knowing it was AI art.
I think it depends on the context. For a blog post that has an AI image at the top, the details are all wrong, but no one is going to look at it, because you just skip past it and read the blog post. For a game, you’re going to be spending hours, perhaps hundreds of hours, staring at the screen, so you will notice if the art is weird.
Raccacoonie?
Racccagui
That address bar search thing really took me a moment to notice on the video, but it’s really cool! Doesn’t work for me at all though. And I hope there is a key to toggle it between URL and search term…
Very cool!
Care to explain? I still don’t get what it’s supposed to show.
Here’s a screenshot showing the feature. I have searched ‘test one two three’ using the URL bar on DuckDuckGo. You’ll notice the search terms are there in the URL bar.
Only works for the default search engine, meh. Most www search engines already have sticky headers so this doesn’t add much.
It works for me on Nightly.
If you’re running 113 and it’s not showing, go to
about:config
and enable browser.urlbar.showSearchTerms.enabled
That was enabled, but I’ve had to enable
browser.urlbar.showSearchTerms.featureGate
actually. (WTF is it gated on, region/language or something?!) And yeah, double-pressing Esc gives me the URL. Very cool.
I heard on the grapevine that Firefox would start hiding cookie consent banners automatically in the future. Does anyone know if this is now behind some configuration setting or should one still opt for an extension? I’ve held off on trying any extensions to do this, but as of late it seems like it has gotten worse… What would people even recommend to effectively block these banners?
You want Consent-o-matic.
https://addons.mozilla.org/en-US/firefox/addon/consent-o-matic/
Seconded. I also liked the Self Destructing Cookies extension, which doesn’t yet work with newer Firefox (though there’s a reimplementation that’s making progress). This had the cookie policy that should be default for all browsers:
If it’s a site that you log into, you tell it to keep the cookies and remain logged in. If it’s a site that you don’t want to record anything about you, then you ignore the notification. By default, all cookies are gone as soon as you leave a page, but if that wasn’t the right choice for a particular site then there’s an undo button available the next time you return.
You can kind of get that with Temporary Containers but really the ergonomics aren’t quite there.
I think in general containers are a fantastic concept in want of someone figuring out the UX.
It’s not as sophisticated, but Cookie AutoDelete is pretty good.
(It doesn’t automatically keep old cookie for later, but it makes it pretty easy to whitelist domains you want to keep cookies for.)
I think the undelete bit was the killer feature for Self Destructing Cookies. As a user of a web site, I don’t always know if a feature that it depends on requires cookies until I return, so being able to say ‘oh, oops, I didn’t actually want to delete the cookies from my last visit, restore them and reload the tab please’ meant that it could have a very aggressive deletion default, without ever losing state that I cared about.
For sure, it was definitely better. I hope the rewrite for WebExtensions turns out to be viable.
My flow with Cookie AutoDelete is similar, but at the point of realising “oh, oops, I didn’t actually want to delete the cookies from my last visit” I can quickly add the site to the Cookie AutoDelete whitelist, and log in again (or set up the site again, etc). Then at least I won’t lose it again.
It’s nowhere near as slick, but at least it only happens at most once per site (and one of the things it’s underscored for me is how few sites I actually want persistent cookies for!)
That’s fine for sites where you have an account and you can restore any state fairly easily, but the self destructing cookies model was really nice for places where I had ephemeral state tied to a cookie, even fairly simple things like shopping basket contents. With richer web apps, state is often stored directly in cookies or HTML5 local storage, or in a server-side back end with a cookie as a key to find it, so losing this is annoying.
I don’t want persistent state for 99% of sites that I visit, but the ones where I do, I often don’t realise it until I return.
Is this the reimplementation? https://addons.mozilla.org/en-US/firefox/addon/self-destructing-cookies-webex/ Doesn’t seem very active but gets good reviews
I haven’t tried it (I mentioned it in my original post) but the description suggests that it doesn’t support the ‘undo’ mode, which is the thing that made this the perfect cookie-management strategy: delete everything aggressively but give users a way of undoing the deletion after they discover that the cookies contained some state that they’ve lost.
It’s in nightly. I just updated to 113 and the setting is not yet in stable.
Here’s an article from your favourite grapevine ;). https://lobste.rs/s/igqvhd/firefox_may_soon_reject_cookie_prompts (linked article has instructions)
I think that’s a harsh conclusion from the outcome of a single team out of many at Amazon.
I don’t speak for Amazon, but my experience has been that this kind of analysis and architectural refitting is essentially constant and that’s a good thing. As volume and scope change, different approaches are needed. Monoliths are great for some problems and for some length of time; same with SOA, microservices. The lifetime of the problem usually sees one or all of those approaches as conditions change.
Completely agree with you. It’s all about the trade offs, and sometimes a problem is not understood well enough at the start to properly analyze those trade offs.
Given how flexible and malleable software is, it always amazes me how reluctant people are to refactor at scale, especially architecture. Electrical and mechanical engineering (let alone civil!) all have massive up-front costs in manufacturing, and existing stock needs to be used or trashed, yet revisions are common. Software has none of that overhead, and yet everyone overreacts to revisions…
Software does have an up front cost in testing that the replacement is equivalent. If you can’t fully simulate production without risking production, is it any surprise it’s hard to get rework through?
If we’re using other engineering disciplines as our comparison point, the testing requirements are the same (and much higher for electrical/mechanical/civil). We can get far closer to simulating a true production use-case with software, it’s an unfortunate part of our industry that integration testing is mostly an afterthought.
For video processing, no less.
Why do they need this? Does all my traffic go through their servers?
No. At worst it goes via a DERP relay but it’s still encrypted client-side first. (Think WebRTC / TURN servers.)
As a customer, this is awesome because it’s making better use of the full connection whilst keeping the network traffic encrypted and applying permissions based on sender/receiver.