Thank you for explaining the differences in detail. I’ve always seen device trees as scary.
Now in practice: how portable are device trees between different OSs and distros? I believe a lot of Linux SBCs still have custom code burned into them as well as a DTB?
The upstream device tree spec is supposed to be cross-compatible. For example, on Macs with the Asahi m1n1 bootloader, Linux and OpenBSD both use the same FDTs. Even though not all of the DT additions are upstream yet, the team tries to keep them reasonable and upstreamable so there’s no need to break compatibility.
The problem is most SBCs are running downstream vendor fork kernels with completely uncontrolled and unreviewed FDT additions (sometimes really badly designed ones), and then those aren’t standard or kept compatible with anything else.
Basically, if you are running upstream kernels, then the FDTs are compatible regardless of your distro. If you are running downstream kernels, then it depends on how well (or not) that downstream kernel upholds FDT compatibility requirements and how well its additions are designed.
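If you want to poke at what a given board actually ships, decompiling its FDT is a quick way to check (a sketch: the dtc package name and the .dtb path vary by distro, and board.dtb is a stand-in):

dtc -I dtb -O dts -o board.dts /boot/dtbs/board.dtb   # decompile a flattened device tree back to source for inspection
dtc -I fs -O dts /proc/device-tree | less             # or dump the live tree the running kernel is actually using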
So it sounds like there was a battery concern? To avoid physical issues they severely capped the max charge on one of the two battery models used in these devices. However, it seems that they wanted to keep this quiet, so they got some engineer to do a hack build rather than the usual CI?
Some obvious questions:
They are harming these users who they sold a defective device to. It is out of warranty but the right thing to do here would be a recall.
They aren’t being open about this; the honest thing would be to share information about whatever risk they are trying to mitigate. This could be important if these batteries are being used with third-party OSes or have been repurposed for uses outside of the original phone.
How can a random engineer sign and ship an update outside of the regular CI process. Shouldn’t this signing key be very locked down?
How can a random engineer sign and ship an update outside of the regular CI process. Shouldn’t this signing key be very locked down?
It’s a little more complicated than that. Apparently, only the kernel build was from a random engineer. The OS as a whole went through normal release processes. Apparently the kernel builds being separate and then getting vendored into the OS build is normal.

That… doesn’t make me feel much better 😅
Yes, it’s terrible practice. Not going through CI usually means no control over what actually went into the binary because the environment it’s built in is not controlled. It’s hard to reproduce this build and there is an endless source of potential errors, like embedding the wrong version numbers (because CI usually takes care of version numbering), using the wrong branch, using the wrong compiler, using the wrong build flags, etc. There’s a reason why this stuff is automated and under strict version control.
It makes me feel marginally better. At least that indicates the signing key isn’t available to just anyone - presumably the config to include the hand-built kernel had to go through normal review and CI processes.
But it doesn’t make me feel that much better. The whole thing is still sketchy as hell.

Yeah, I’m sure that binary kernel they checked in was well reviewed. I guess it is at least traceable to a human.
Going from 4.45V to 3.95V is a massive jump. For context: most of the usable energy of a lithium-ion battery is between its max charge voltage (e.g. 4.2V) and about 3.4V. Below that there is only a tiny bit of capacity and the voltage plummets quickly (look up “lithium-ion discharge curves”); the exact choice of low cutoff voltage (2.6-3.0V is common) is a bit arbitrary and only buys you a few more percent at best.
It could be a well-researched change with actual data behind it. But my vote is corporate laziness: Google probably just doesn’t care and put pressure on someone to do something quickly and cheaply.
I don’t know if this was the case elsewhere, but in the UK Google offered free battery swaps at EE stores or £40 per device (I think regardless of the condition), which is probably fair for a 5-year-old phone?
Most lithium batteries I’ve worked with have around 4.2V as the upper limit, so 4.45V was a bit surprising to me, but that is what the manufacturer states.
4.2V is the general recommendation, but indeed some cells are specced by the manufacturer to go higher. They like doing this in phones because it gets them a much better energy/volume ratio.
I have accidentally overcharged some small lipo pouch cells to 4.5V once before (faulty USB charger). I used them immediately to drain them back down, so at a minimum they didn’t immediately explode :P
Interesting! I’ve been rocking a device from https://www.gl-inet.com/ which comes with OpenWRT + some extensions. Would definitely consider switching next time I’m due for an upgrade.
I also got some GL.iNet devices this Black Friday and honestly it’s been such a joy to just unbox a router, plug it in, and then instantly SSH to it and start installing packages.
So far I kind of like the GL.iNet approach of “user friendly UI” + “full openwrt UI if you want it” just because so many of the simple things (like Tailscale setup) are as easy as flicking a switch instead of running a bunch of scripts, but I’m definitely rooting for any company building on top of OpenWRT instead of more closed-source, super janky, proprietary ones.
I also enjoy GL.iNet devices for the same reason. I’ve rolled out dozens in a few previous jobs and use them at home.
Just be warned that only SOME of their devices are supported by the OpenWRT project, others you can’t download firmware for except for GL.iNet’s custom fork/builds. My experience with many SBCs is that means they are dead ends, but I might be wrong in this case and GL.iNet might keep providing updates, not sure.
I’ve only written a bit of GLSL, but enough to know I hated interfacing with shaders and data more than the shader language itself :P Looking forward to trying out these new APIs.
Some of the items on the linked page explain why those choices were made, others not so much. E.g.:
There is not (currently) a “double” type
Why? Is that a driver/hardware/backend compat issue? Is there a way of intentionally saying “double if available otherwise float is fine and I pinky promise I tested that it doesn’t turn everything into clip spaghetti”?
(Doubles are nice for Z-buffers because you get less/no Z-fighting without needing to split a scene into separate foreground & background renders)
The language will not implicitly dither an int down to a bool. If you need to test an integer is non-zero, you should do if (x != 0) instead of if (x).
Why? Is there some performance or compatibility advantage here because of the nature of GPUs? E.g. do you want us to explicitly only check one bit instead or something?
I’m not involved with this project but I would guess it comes down to some combination of doubles being slow on GPUs and general gamedev sentiment being that if you need doubles you are doing it wrong.
The bool thing is just because implicit conversions are footgunny.
general gamedev sentiment being that if you need doubles you are doing it wrong.
:(
My little games run at hundreds of FPS. Most little games from other people use Unity or Unreal and do not even make 60FPS. Doubles “doing it wrong” seems like a sentiment from another planet.
I hope SDL intends to support people like me, not just bigger and more formal game developers.
The bool thing is just because implicit conversions are footgunny.
Specifically if (<float>)? Yeah I guess that would be a pain, especially with the different (non-IEEE conformant) float formats that might exist out there in GPUs and other oddities.
I was thinking of if (<int>), but I have not done that in my simple shaders and perhaps it’s not that common anyway.
I’m mostly a polygonal modeller, not a parametric CAD person, but I’ve dipped my toes into Solvespace and FreeCAD a tiny bit before. I’m giving this new FreeCAD release a go.
The AppImage works great. It’s big, but I think they stuffed a Python runtime in there.
The splash screen asking what navigation controls you use is nice (thou shalt use middle click to drag and shift+middle to rotate, heathen, else thou is different to 2D editors). There are so many standards out there that people are used to from different software (but mine is correct and you are all wrong).
I can’t seem to change how rotating the view works. It rolls the camera (ie tilts your head to the left or right), which makes me feel ill. Who does that in real life? Rolls their head onto their shoulders (and then further, snapping their neck) and leaves it in those positions for hours whilst working on something they have made? Actually that could explain modern appliance design.
I am used to 3D editors where the camera pans, yaws and orbits around your workpiece, but never rolls unless you specifically ask it to. In FreeCAD you can’t avoid rolling, the mouse doesn’t have enough degrees of freedom (it’s a 2D input device for 2 axes, not 3) so they magic up some weird relationship that causes you to constantly end up with arbitrary roll angles when you try to move your viewpoint. I sometimes find that moving my mouse in large circles around the orbit origin sometimes helps undo (or increase) the roll… sometimes.
If most people should use the Microsoft fork via .NET, who should be using Wine’s Mono? Is it only for running stuff through Wine?

Yeah that’s really confusing. Given that Microsoft is sponsoring them I get the impression that they want to steer everyone to MS’s .NET. It would be interesting to see Wine’s take on this.
In practice Mono has not been enough to run many apps for a long time; instead I’ve had to install Microsoft’s dotnet frameworks under Wine for quite a lot of things to work (mainly game related, like W3DHub and STALKER Gamma). And they’re buggy, oh god they’re buggy. The Wine team have done an amazing job to get more and more of them working, but I can’t shake the feeling that incompatibility with non-Windows OS might be an intentional business strategy.
I can’t shake the feeling that incompatibility with non-Windows OS might be an intentional business strategy.
It might be hard to believe, but it actually isn’t. In fact, I believe .NET was created specifically (besides the legal stuff with Sun, of course) so MS could have their own portable platform in case they ever changed their base OS (which they already had to do once when going from 9x to NT, so they absolutely knew the value of having that kind of technology).

The problem with .NET Framework is that a lot of Windows’ mechanisms are deeply integrated into the CLR, from COM to the registry to how Windows loads DLLs. This wasn’t an intentional act of sabotage; they were just using the technology stack they had available. It was easy enough to rip out for .NET Core, but originally they just never expected to have to develop the platform for any OS besides ones MS made.

This was back in 2000: people were just barely starting to take Linux seriously, and I doubt either Apple or MS cared enough to port it over to Mac OS X, especially since .NET was specifically made to create Windows software, and Apple already had their own solutions to application development. So making .NET for “any other OS” seemed both moot and more difficult than just using what APIs Windows provided. But by and large, the core design of .NET and the CIL bytecode was designed to be as portable as Java, just that the implementation didn’t have much opportunity to be super portable at the time.
The same actually goes for the entire Windows API: the whole reason WINE (and ReactOS) can exist is that the Windows API was specifically designed to be implemented on top of any OS. MS took advantage of that both to backport some APIs to Windows 3.1 and to allow 9x software to work on NT, which was part of a deliberate plan to gradually replace their DOS software ecosystem with WinAPI software; the entire 9x architecture was based off NT and deliberately made as a disposable bridge from DOS to NT.
I believe .NET was created specifically (besides the legal stuff with Sun, of course) so MS could have their own portable platform in case they ever changed their base OS
It is more specific, but also simpler, than that.
Between 1998 and 2001, it looked quite likely that the US DOJ might split Microsoft up into (at least) 2 different companies:

https://en.wikipedia.org/wiki/United_States_v._Microsoft_Corp.
The likely split was an applications division, and an operating systems division.
However MS knew very well that some tools spanned that divide: for example its lucrative development tools product line. (It also posed difficult questions, e.g. Outlook is clearly an app and would go to the apps company, but Exchange Server is effectively an OS component and would go to the OS company. But Outlook is primarily the native Exchange client.)
So MS frantically did several things.
It embedded IE into Windows 98 as part of the UI. Explorer was rejigged to render content as HTML and display it using IE, called “active desktop”, and it was put out as part of Win98 and as an update for Win95 and NT4. It also invented a new Help file format rendered thru’ IE.
Reason: It wasn’t illegally bundling IE if IE was a necessary part of the OS, right?
It also put together a sort of MS JVM: a runtime and tooling that would let MS tools be used to build apps that could run on any OS. And thus the dev tools could be rebuilt as MS VM apps, and they could run on anything with the MS VM, and produce apps that would target that MS VM.
That was named .NET.
It only existed because MS was afraid that it’d be split up, and this was a solution it whipped up to build an MS runtime platform that the apps division could target, and which would enable MS AppCo to produce apps for Windows, MacOS, and UNIX.
Bear in mind this is before Mac OS X. At the time it offered Internet Explorer 4 on Solaris and other Unixes, for instance. Linux was barely a thing yet but was on the radar.
But the fierce DoJ judge Thomas Penfield Jackson was taken off the case, and replaced with the much meeker Colleen Kollar-Kotelly, and the split up never happened.
So now what was it going to do with .NET, its big bold future platform for Windows?
Er…
Er, er, er, it’s a safer replacement for C code, er, and it’s in a sandbox, so, er, we call it “managed code” and it’s a better safer way to develop Windows apps than that boring old Win32 API.
And, er, well, no, they’re not actually portable, no, not if they have a GUI, and er, no, it’s not very quick… but, er, but, er, hey, look, here is a new touchscreen version of Windows, with a new UI called Metro, er no, sorry, Modern, and it has an App Store, and we’ll only let you sell apps in the App Store if they’re .NET apps with a Metro er, no, er Modern UI, and we’ll only take a little cut.
Everyone hated Win8 of course. Nobody bought anything much from the Store.
It backed down. Win10 gets rid of much of the Modern UI and you can put Win32 apps in the Store.
My personal impression is that it’s a bit of a lingering remnant now and MS would quite like to slipstream it into the OS and eliminate it as a separate thing.
Wow, that was rather insightful, I never knew that was the reason they bundled IE into Windows, I thought it was just a weird marketing push.
Unfortunately, I have to agree that MS has abandoned .NET. I never really thought about why, but you bring up good points. After the marketing team forced the .NET team to rip out hot reloading from the toolchain and hid it behind Visual Studio, I knew that they were not only done with .NET but in fact still had full control over it, and had lied about it being its own independent organization. Which is why I bailed and switched to Rust. And yes, I know they quickly reversed the decision to lock hot reload behind VS, but they showed their hand and I had no intention of learning the hard way twice.

You forgot that Sun sued them too.
Sun sued over the MS JVM and MS’s embraced-and-extended Java J++ didn’t it?
I don’t think that’s directly connected here… except maybe as another inspiration for MS to create .NET in the first place, as its own totally-not-Java-honest VM.

https://lobste.rs/s/cnpaup/was_javascript_really_made_10_days#c_nfk2vp
It was more directly the reason than the DOJ lawsuit.
I’m curious if there are any sources for the DOJ response theory for IE bundling and .NET? My faulty memory is that functionality like Active Desktop was part of the lawsuit, and the versions of Windows that came out as part of the consent decree specifically were able to turn it off.
It was more directly the reason than the DOJ lawsuit.
I think – I am not sure I can prove, over 25Y later – that you’ve got the cause and the effects mixed up here.
The DOJ stuff took a long time (1-2 years) to happen but it was visible coming down the road. Thus frantic technological flailing. IIRC there was a Netscape lawsuit about anticompetitive bundling that came first.
It invented .NET and then it tried to come up with cool things to do with .NET and a big justification was – as you say – that the earlier Sun lawsuit had shut down J++ and the J++ VM, and so it needed a “better Java than Java” language to offer to the devs it had got on board with J++.
The CLR grew out of actual MS Java… but when I wrote about the MS JVM I was drawing a technological comparison, not saying literally “the MS™ JVM™ version N+1 was renamed the .NET CLR.”
I’m curious if there are any sources for the DOJ response theory for IE bundling and .NET?
The IE part was very widely discussed at the time. .NET is historical hindsight and personal analysis.
maybe as another inspiration for MS to create .NET in the first place, as its own totally-not-Java-honest VM.
As I recall, one of the reasons Sun sued them was that J++ was a faster implementation of Java than Sun’s. This meant that people preferred to target J++, rather than the Sun JDK. The justification for the lawsuit was that J++ put a load of MS-specific things in the java.* namespace and so things written for J++ didn’t necessarily run anywhere else, but that wouldn’t have mattered if no one wanted to use J++.
My understanding is that the first versions of the CLR were based heavily on the J++ runtime and so inherited the performance wins.
They did indeed sue over OS-specific APIs in the standard library, which is the same reason they sued Google over it in Android a few years ago. It’s not some sort of corporate bullying tactic, it’s genuinely against their licensing terms (or something, I know they officially state you can’t do that somewhere.) It’s dumb, definitely, but they don’t try to hide it.
I’m just surprised Google didn’t take a page out of MS’ book and simply make a new language many times superior to Java.
I believe .NET was created specifically (besides the legal stuff with Sun, of course) so MS could have their own portable platform in case they ever changed their base OS
Partly that, but a big(er?) part was moving away from their dependency on Intel and, especially, on x86. Microsoft ported Windows NT to several other architectures but they all died because of a lack of software. Most people don’t buy Windows because they like Windows, they buy Windows because Windows runs all of their software. Windows on Alpha, for example, ran native software much faster than the fastest Intel machine, but almost everything that actually ran on it ran in the x86 emulator and the machines couldn’t run x86 software faster than the fastest x86 chips.
Having an architecture-agnostic distribution format was a good hedge here. If most third-party software avoided native code except for Windows-provided DLLs, then it could be moved to a new architecture trivially. Unfortunately, this never worked and most .NET apps included some custom native components.
.NET launched at the peak of the Itanium hype, when x86 was going away and everyone needed a way of moving from x86 to Itanium, and to whichever alternative came along if Intel bungled the transition (x86-64 was not yet announced). It was expected to be able to emulate x86 reasonably fast, but you would get the biggest benefits only with recompiled code, so Microsoft needed an easy way of getting the most popular Windows apps moved across.
At the launch, Microsoft made a big deal about being able to do the compilation at any point. Most of the early .NET implementations were mostly JITs, but install-time compilation was one of the benefits that they talked about a lot. The idea was that you’d do the final codegen when you installed the app, which would let you tune for the target microarchitecture and avoid any JIT overheads. I don’t think they ever shipped this (various groups did ship .NET AoT compilers).
At the launch, Microsoft made a big deal about being able to do the compilation at any point. Most of the early .NET implementations were mostly JITs, but install-time compilation was one of the benefits that they talked about a lot. The idea was that you’d do the final codegen when you installed the app, which would let you tune for the target microarchitecture and avoid any JIT overheads. I don’t think they ever shipped this (various groups did ship .NET AoT compilers).
They did actually ship this - NGen basically did tuned AOT compilation, and you ran that at install time.
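For anyone who never touched it, the installer-facing side looked roughly like this, run from an elevated developer/SDK prompt: ngen install compiled an assembly and its dependencies into the native image cache, and ngen display listed what was in the cache (MyApp.exe is a stand-in here, and treat the exact invocations as approximate, this is from memory):

ngen install MyApp.exe
ngen display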
Linux existed and was just starting the trend of killing off “Big Unix”. By 2010, when Oracle took over Sun Microsystems, Big Unix was dead. IBM’s AIX and HP’s HP-UX still technically exist, but I haven’t seen either one in forever.
Contrary to popular belief, AIX doesn’t exist. It’s just a substrate for Unix syscall emulation on a more popular object capability system!
(Well, AIX still does exist, but I don’t touch it. Apparently the latest version has bash and Python 3 in the base system, which is a little crazy to hear. And there are indeed more IBM i sites than AIX sites at this point.)
I had to go look up whether they actually still existed! They both have had releases in the past year (according to Wikipedia); I was not willing to try and figure out their actual websites.
I wonder if they have a developer or hobbyist program like OpenVMS does. Another of those that effectively got killed off by the rise of Linux and X86. It might be fun on a boring stormy day to install them and see how much if anything has really changed from what I remember.
There is modern .NET (previously .NET Core), and then the legacy .NET Framework. I imagine Mono in this case might still be useful for the legacy .NET Framework?
Cheapest available Micro-B from JLCPCB comes at $0.0135 for 1500 pieces. Type-C at $0.0297 for the total difference of:
+$0.0162
That’s 37 haléřů for fellow penny-pinching Czechs.
There probably is some profit margin and they would be able to eat the difference for sure. It’s probably so that the board is a 1:1 drop-in for the previous one.
There is a chance that the next revision of the board will use the 48p chip and feature USB-C, since there would not be the backwards compatibility requirement.
Yeah I have to imagine that keeping a specific form factor compatibility is the primary driver.
Tangentially, I’ve no personal experience with 2350-based boards, but I’ve run into some other common microcontrollers whose boards have started shipping with USB-C headers replacing the previous generation’s micro-USB, and I keep running into design issues where a dedicated USB controller chip bleeds too much current back into the microcontroller’s data lines, which prevents those lines from being pulled down. I don’t know enough about circuit design to know how avoidable such a situation is, but I can also appreciate not just jumping to a new connector without taking a thoughtful approach to how it may impact the overall product.
Never thought of that either. It does look like there is a price difference, but I don’t think it’s big enough in absolute terms (compared to the other parts on the board) to be a driving force.
I’m ok with this as long as it’s occasional. If too many appear then it will cause the same problem as memes being upvoted more than in-depth articles on reddit.
I’m a newer community member but I’ve lurked for years (and I am the submitter of this story), so my opinion is basically invalid, BUT I think the occasional trend is quite enjoyable, as long as the trend is evanescent (as opposed to reddit commenters using the same cliche for years straight). It’s interesting to see what different takes on a principle exist.
If this is true, it would be another example of Google making their problem into everybody else’s problem. We have seen various articles here from Android developers trying to convince Google to include their perfectly valid app in Google Play because some stupid rules were not met. Now we are going to see the same with online content.
Your website doesn’t have a privacy policy.
But I don’t… it’s some static html files… OK sure.
Your privacy policy doesn’t mention how you handle deletion requests.
Wut… OK
Your website example.com seems to be substantially similar to testing.example.com, you are de-ranked as a duplicate. Please avoid spamming copies of your website on multiple domains.
!?!?@#@?!
I’d like to see some examples and evidence. That would help me understand.
For example: I just tried searching for “Many indoor air quality sensor products are a scam” which is the title of one of my blog articles from a couple of years back.
On DuckDuckGo: my website doesn’t exist. A ycombinator discussion on my article is first then nothing more for at least several pages of results.
On Google: my article is the first result.
Perhaps I need to do a more topic-based search instead of a specific title-based search? Again some examples would be good to illustrate the point.
A couple of days ago @robey mentioned his Zukte programming language on lobste.rs, without giving a link. I tried searching for it, but couldn’t find it. The actual link is https://code.lag.net/robey/zukte, but this link doesn’t seem to be indexed on any of the search engines I tried.
Is this really a google specific problem? I presume all of the search engines must deal with the same issues described in the article, which is that the bulk of new web pages now contain machine generated junk content.
Once upon a time, Google had a reasonably good solution for this: allowing users to flag spam domains and remove them from their results.
Google removed this feature, presumably because Google profits from ad impressions when people visit spam pages that include Google ads, and Google’s engagement metrics rise when you have to wade through multiple pages of outright junk and sponsored links instead of finding what you asked for immediately.
This is true. But not all search engines work this way.
One of the search engines I use is Kagi.com. They do let you block spam domains, and also, “Ads and trackers on a website can negatively affect its page ranking on Kagi search results. Kagi prioritizes non-commercial sources and penalizes bloated sites with ads and trackers, regardless of their agendas.”
I won’t absolve Google of everything, but I believe that there is a more probable and less cynical reason that the feature was dropped: abuse.
There are many examples where a mob of angry people downvotes or flags content they have beef with, even if the content is perfectly fine for another audience: ecological activists flagging oil-company content, gamers giving bad Steam reviews to a game they didn’t even play to punish a company, business owners giving bad Google Maps reviews to their competitors. Heck, there are anime fanatics downvoting shows so that their own favorites stay on top of a user-rated list.
I can absolutely see activists employing bot farms to flag as spam websites they want delisted from Google, for whatever reasons they fancy.
Good point. However, Google is free to implement the same solution that Kagi.com uses, which is that logged in users may block spam domains to remove them from their own search results (not from other people’s search results).
For some more anecdata: two days ago I published a new article to my blog. Other, substantially more popular posts of mine show up on Google search with a fragment of the title. My post from yesterday has much less traffic, is pretty niche, and has a pretty unique title, and does not show up even with the full title in quotes and my site name in the search.
That firmware reminds me of a network cam I looked at years ago. RW filesystem and loading a new firmware didn’t delete my extra files.
I’m kind of disappointed by modern Android smart devices and the lack of any obvious way of getting a shell, or perhaps I’m wearing blinders and they’re just as bad (suggestions welcome! I hate my TV and want to debug it, but it has no exposed USB for adb).
hate my TV and want to debug it, but it has no exposed USB for adb).
Check if enabling USB debugging on your TV also silently enables networked adb. IIRC, it enabled remote debugging on the default port 5555 on my Coocaa TV at home. Then just use adb connect $tv and you’re in.
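Concretely, something like this (assuming the TV ended up at 192.168.1.50 on your LAN; substitute whatever address your router handed it, and you may need to accept an authorization prompt on the TV the first time):

adb connect 192.168.1.50:5555   # attach to the TV's networked adb on the default port
adb devices                     # the TV should now show up in the device list
adb shell                       # and you have your shell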
Any way of detecting if you have been under such an attack (assuming the attacker has not succeeded and cleaned up their entry)? The timeline says that this was announced on openwall a couple of weeks ago.
Since it requires loads and loads of attempts, I suppose a really large number of SSH connection attempts that don’t seem to care about varying login names and passwords would be a good reason to suspect that someone is trying to exploit this, rather than running a normal password guessing attack.
I just tried to timeout an auth, my logs showed this:
sshd[x]: Timeout before authentication for connection from xxx to xxx, pid = 24747
I only have two Timeouts in my logs (the first being in February). That’s assuming my logs have not been adulterated of course, AND that this is the right message to be looking for.
I would be really curious to see if I start seeing lots of Timeout entries like that now.
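In the meantime, something like this gives a rough per-day count to watch (a sketch assuming a Debian-style /var/log/auth.log and traditional syslog timestamps; adjust the awk fields if your syslog prefixes lines differently, or use journalctl -u ssh / -u sshd on journald-only systems):

# count pre-auth timeouts grouped by month and day, including rotated logs
zgrep -h 'Timeout before authentication' /var/log/auth.log* | awk '{print $1, $2}' | sort | uniq -c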
I have been seeing lots of other stuff over the last few months, including:
sshd[x]: banner exchange: Connection from xxx port 50913: invalid format
auth.err: Jun 27 00:09:10 sshd[x]: error: kex_exchange_identification: read: Connection reset by peer
auth.info: Jun 27 00:09:10 sshd[x]: Connection reset by xxx port 48198
auth.info: Jun 27 00:09:28 sshd[x]: Connection closed by xxx port 42280 [preauth]
There are a ton of Timeouts in my ssh logs going back months. So either this attack was already known, or some ssh botnets tend to timeout connection attempts regardless.
I was looking at a server with ssh on a nonstandard port.
If I instead look at one that is running ssh on port 22: I also see a constant stream of timeouts.
Volume seems about constant for the last few months, so it’s probably unrelated. I’m eager to see if the volume starts ramping up now however :) But I guess they’d probably check the SSH version string before attempting it.
The problem isn’t really open-source printers, it’s open-standard printer interfaces. Decent printers consume PostScript, PCL, or PDF. The printer driver for these can be completely generic, because it’s just something like IPP passing files in a common format.
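That genericity shows up at setup time too: modern CUPS can drive any IPP Everywhere printer with no vendor driver at all (a sketch; “office” and printer.local are stand-ins for your queue name and the printer’s hostname):

# driverless queue: CUPS speaks IPP and ships PDF/raster, no vendor blob involved
lpadmin -p office -E -v ipp://printer.local/ipp/print -m everywhere
lpstat -p office   # confirm the queue is up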
Cheap printers offload a lot more to the host. They often do rasterisation entirely on the host, so need to have a bespoke model of the print head (what kinds of dot patterns it can do). They sometimes even make the host generate all of the control commands and so there’s a tiny microcontroller that just runs a USB device interface to pull off USB encapsulation from the commands and feed them directly to the motors / print heads. This was even easier with parallel-port printers where you could wire the motors directly to the pins in the parallel port and make the computer directly drive everything, but even with USB you can typically get away with a $1-2 microcontroller with a few tens / hundreds of KiBs of RAM, whereas rasterisation on the printer needs a $10-20 SoC with (at least) tens of MiBs of RAM.
I would love to do this, but I’m terrified that the big companies will be less than welcoming to an open source competitor (or really any small competitor).
Is it legal to make your printer compatible with existing toner + drum generics? Or would they still try and sue you for patent infringement anyway?
i am aware of these ‘arguments’ (taking the form ‘look at the assembly! it’s so bad, it has so many instructions!’). i have not been shown a real-world application whose performance was significantly impacted by -fwrapv under llvm/gcc (the famous example from chandler carruth makes no difference in my testing, and the relevant code has garbage performance either way)
(this being doubly true insofar as the requisite transforms can be done by hand at the source level if need be, and insofar as alternate approaches are available to the compilers, should they be pressured to employ them—as mentioned by fabian in the text you link—but even leaving these aside. the obsession of c++ with ‘zero cost abstractions’ is boggling, given they employ a model of implementation which is fundamentally conducive to suboptimal code generation and ignore the fact that fast is actually about queues.)
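for anyone who’d rather measure than argue, the comparison is cheap to run on any translation unit you care about (hot_loop.c is a stand-in for your own code):

# same file, with and without -fwrapv, then compare the generated assembly
gcc -O2 -S -o nowrap.s hot_loop.c
gcc -O2 -fwrapv -S -o wrapv.s hot_loop.c
diff -u nowrap.s wrapv.s
# or build and run both, and compare wall-clock time (what actually matters)
gcc -O2 -o bench_nowrap hot_loop.c && time ./bench_nowrap
gcc -O2 -fwrapv -o bench_wrapv hot_loop.c && time ./bench_wrapv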
I can’t believe I’m taking google’s side here but this is ludicrous. The motivation and proposed correctional measures are too ill-conceived; they’re just going to hurt the ecosystem instead. There is a right way to do this but this ain’t it.
Change and unknowns versus keeping the status quo.
Is there a right way of breaking monopolies? They design themselves to make breaking them up look as unattractive as possible.
The government could fund Firefox or core tech in FF or otherwise contribute to those projects, thus weakening Google’s hold over the company. US gov pours billions into tech startups and companies, seems perfectly reasonable for them to do so here.
Maybe a dim view, but I don’t think I would wish government funding on Firefox. I can only imagine them getting drawn into political fights, spending time justifying their work to the American people, and getting baroque requirements from the feds.
Government funding comes in all shapes and sizes. Most of it has nothing to do with politics. The air force and DoD are constantly investing or pouring money into startups. I myself had a government investor at my startup. No justification to the US needed.
If it’s government funding with few or no strings attached that would be great. I just wouldn’t want to see Firefox become a political football.
Most government funding for tech has no strings attached, or they just own stock, which is ideal for everyone.
I feel like this would open a whole new can of worms and actually wouldn’t be good for Firefox in the longer term.
I don’t think they care particularly much about Google’s hold over Mozilla. They care about Google using their simultaneous ownership of Google Search, Chrome, and all their ad-related stuff to unfairly enrich themselves, and they see Google’s payments to Apple as a method to defend that monopoly power. If Mozilla had an alternate source of funding, it wouldn’t really change anything except maybe make the browser that 5% of people use have a different default search engine. It probably wouldn’t help Firefox to become more popular, and it’d be a much smaller difference than whatever happens with Safari.
It would reduce Google’s ability to exercise this monopolist power over Mozilla.
If the money were spent well, I think it absolutely could.
Nationalizing natural monopolies has not been a popular approach in the US, unfortunately.
Regardless, it seems very likely to me that neither Chrome nor Firefox would survive this. But who knows, maybe that’s a good thing. Maybe that will pave the way for consumers paying for their browsers instead of paying through subjecting themselves to advertisement and propaganda. Doesn’t sound too bad since it’s probably the ad economy that turned the world into the propaganda wasteland that it is today.
Initially I read the title and part of the article, and I was almost quick to dismiss it.

However, although I do consider that I know my bash very well and I write very defensive scripts, I couldn’t believe my eyes when I saw the execution of the following snippet:

a=''
x='a[$( sleep 2s )]'
[[ "$x" -eq 42 ]]

I.e. if one replaces sleep 2s with another side-effect command, it is executed. (I didn’t manage to make it interact with the console, but the command does execute.)

I really can’t state how shocked I am to learn this… I now wonder if there are other places in bash where, although one properly quotes arguments (and in the case of [[ the quoting isn’t even necessary according to the documentation), this hidden evaluation happens?

There are also 10+ other places it happens in the language! It’s anywhere that bash and ksh accept a number, like $(( x )) and ${a[x]}, the argument to printf -v, unset, etc. I listed them all here in 2019 - https://github.com/oils-for-unix/blog-code/blob/main/crazy-old-bug/ss2-demos.sh#L49
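For instance, a minimal sketch of a couple of those spots (touch /tmp/pwned stands in for whatever attacker-controlled command ends up in the string):

# imagine $x arrived from user input
x='a[$(touch /tmp/pwned)]'
echo $(( x ))            # arithmetic expansion evaluates the subscript, so the command runs
printf -v "$x" '%s' hi   # using it as the variable name for printf -v does the same
unset "$x"               # and so does unset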
Back then, I was able to find vulnerable example code on the web, but it was difficult to find a “real” script that was vulnerable – i.e. user input was in fact subject to a hidden eval, aka user-supplied code execution.
Otherwise I would have made a lot more noise about it. (OpenBSD ksh fixed it based on my report, but bash didn’t)
(See my comments elsewhere in this thread)
Did you also report this to zsh? What did they say? It is vulnerable to ciprian’s example and probably some of yours.
I didn’t report it to zsh in 2019, probably because I didn’t find it then. I just tried again and didn’t find it in zsh:
https://lobste.rs/s/mla0ns/til_some_surprising_code_execution#c_odikpl
Though I’d be interested if anyone else sees it
I think it’s a “ksh-ism” that made it into bash. Most bash-isms are actually ksh-isms … i.e. sorry to say but David Korn of AT&T, or somebody on his team, probably deserves the blame for this :-/
Somehow they mixed up parsing and execution, in a way that’s significantly worse than Bourne shell / POSIX shell. (Bourne deserves the blame for word splitting, which OSH fixes as well.)
I was asked to point out that the second issue actually exists in ZSH: you just have to use a previously defined variable (in your example, there is no $a where you want it to evaluate an index within $a). If you re-use the existing PWD definition to smuggle in the evaluation, you get the (un)desired result:

(The “0” isn’t technically necessary, but silences an error so the test just fails.)
Oh wow, thanks for the correction! zsh is indeed vulnerable – I confirmed it on my machine
Yeah, the problem was a[] wasn’t defined, so that error was masking the vulnerability. Interestingly it does not always trigger:
This is terrifying. I’m reading through and testing some public facing scripts right now.
EDIT:
Yes, this triggers because bash’s string-to-int code is actually a full-blown expression evaluator. If it’s not an int operator, you’re fine.
This is a stark reminder of how, despite it feeling like we get a fresh internet every X years with new tech, it’s still built on ancient foundations.
I see the problem in the handling of the abuse complaints. It’s not great to be shut down because of invalid abuse complaints.
It also points out a fundamental flaw of the internet.
A whole category of abuse is now very difficult to deal with.
It always was and always will be. The only real solution is to not have SYN/ACK between origin and target but between all the nodes in between. Not very efficient. UDP would basically be useless.
Pandoc is magic. 100MiB magic, but magic nonetheless. The authors should be proud of their modern spellwork.
Hey, all those monads aren’t free!
…unless they’re free monads, of course, in which case I suppose they are…
If most people should use the Microsoft fork via .NET, who should be using Wine’s Mono? Is it only for running stuff through Wine?
Yeah that’s really confusing. Given that Microsoft is sponsoring them I get the impression that they want to steer everyone to MS’s .NET. It would be interesting to see Wine’s take on this.
In practice Mono has not been enough to run many apps for a long time, instead I’ve had to install Microsoft’s dotnet frameworks under Wine for quite a lot of things to work (mainly game related, like W3DHub and STALKER Gamma). And they’re buggy, oh god they’re buggy. The Wine team have done an amazing job to get more and more of them working but I can’t shake the feeling that incompatibility with non-Windows OS might be an intentional business strategy.
It might be hard to believe, but it actually isn’t. In fact, I believe .NET was created specifically (besides the legal stuff with Sun, of course) so MS could have their own portable platform in case they ever changed their base OS (which they already had to do once when going from 9x to NT, so they absolutely knew the value of having that kind of technology). The problem with .NET Framework is that a lot of Windows’ mechanisms are deeply integrated into the CLR, from COM to the registry to how Windows loads DLLs. This wasn’t an intentional act of sabotage, they were just using the technology stack they had available. It was easy enough to rip out for .NET Core, but originally they just never expected to have to develop the platform for any OS besides ones MS made. This was back in 2000, people were just barely starting to take Linux seriously, and I doubt either Apple or MS cared enough to port it over to Mac OS X, especially since .NET was specifically made to create Windows software, and Apple already had their own solutions to application development. So making .NET for “any other OS” seemed both moot and more difficult than just using what APIs Windows provided. But by and large, the core design of .NET and the CIL bytecode was designed to be as portable as Java, just that the implementation didn’t have much opportunity to be super portable at the time.
This actually goes the same for the entire Windows API, the whole reason WINE (and ReactOS) can exist is because the Windows API was specifically designed to be implemented over any OS, which they took advantage of to both backport some APIs to Windows 3.1 as well as allowing 9x software to work on NT, which was part of a deliberate plan to gradually replace their DOS software ecosystem for WinAPI software, the entire 9x architecture was based off NT and deliberately made as a disposable bridge from DOS to NT.
It is more specific, but also simpler, than that.
Between 1998 and 2001, it looked quite likely that the US DOJ might split Microsoft up into (at least) 2 different companies:
https://en.wikipedia.org/wiki/United_States_v._Microsoft_Corp.
The likely split was an applications division, and an operating systems division.
However MS knew very well that some tools spanned that divide: for example its lucrative development tools product line. (It also posed difficult questions, e.g. Outlook is clearly an app and would go to the apps company, but Exchange Server is effectively an OS component and would go to the OS company. But Outlook is primarily the native Exchange client.)
So MS frantically did several things.
Reason: It wasn’t illegally bundling IE if IE was a necessary part of the OS, right?
That was named .NET.
It only existed because MS was afraid that it’d be split up and this was a solution it whipped up to built an MS runtime platform that the apps division could target, and which would enable MS AppCo to produce apps for Windows, MacOS, and UNIX.
Bear in mind this is before Mac OS X. At the time it offered Internet Explorer 4 on Solaris and other Unixes, for instance. Linux was barely a thing yet but was on the radar.
But the fierce DoJ judge Thomas Penfield Jackson was taken off the case, and replaced with the much meeker Colleen Kollar-Kotelly, and the split up never happened.
So now what was it going to do with .NET its big bold future platform for Windows?
Er…
Er, er, er, it’s a safer replacement for C code, er, and it’s in a sandbox, so, er, we call it “managed code” and it’s a better safer way to develop Windows apps than that boring old Win32 API.
And, er, well, no, they’re not actually portable, no, not if they have a GUI, and er, no, it’s not very quick… but, er, but, er, hey, look, here is a new touchscreen version of Windows, with a new UI called Metro, er no, sorry, Modern, and it has an App Store, and we’ll only let you sell apps in the App Store if they’re .NET apps with a
Metroer, no, er Modern UI, and we’ll only take a little cut.Everyone hated Win8 of course. Nobody bought anything much from the Store.
It backed down. Win10 gets rid of much of the Modern UI and you can put Win32 apps in the Store.
My personal impression is that it’s a bit of a lingering remnant now and MS would quite like to slipstream it into the OS and eliminate it as a separate thing.
Wow, that was rather insightful, I never knew that was the reason they bundled IE into Windows, I thought it was just a weird marketing push.
Unfortunately, I have to agree that MS has abandoned .NET. I never really thought about why, but you bring up good points. After the marketing team forced the .NET team to rip out hot reloading from the toolchain and hide it behind Visual Studio, I knew that they were not only done with .NET but in fact still had full control over it, and had lied about it being its own independent organization. Which is why I bailed and switched to Rust. And yes, I know they quickly reversed the decision to lock hot reload behind VS, but they showed their hand and I had no intention of learning the hard way twice.
You forgot that Sun sued them too.
Sun sued over the MS JVM and MS’s embraced-and-extended Java J++, didn’t it?
I don’t think that’s directly connected here… except maybe as another inspiration for MS to create .NET in the first place, as its own totally-not-Java-honest VM.
https://lobste.rs/s/cnpaup/was_javascript_really_made_10_days#c_nfk2vp
It was more directly the reason than the DOJ lawsuit.
I’m curious if there are any sources for the DOJ-response theory of the IE bundling and .NET? My faulty memory is that functionality like Active Desktop was part of the lawsuit, and the versions of Windows that came out as part of the consent decree were specifically able to turn it off.
I think – I am not sure I can prove, over 25Y later – that you’ve got the cause and the effects mixed up here.
The DOJ stuff took a long time (1-2 years) to happen but it was visible coming down the road. Thus frantic technological flailing. IIRC there was a Netscape lawsuit about anticompetitive bundling that came first.
It invented .NET and then it tried to come up with cool things to do with .NET and a big justification was – as you say – that the earlier Sun lawsuit had shut down J++ and the J++ VM, and so it needed a “better Java than Java” language to offer to the devs it had got on board with J++.
The CLR grew out of actual MS Java… but when I wrote about the MS JVM I was drawing a technological comparison, not saying literally “the MS™ JVM™ version N+1 was renamed the .NET CLR.”
The IE part was very widely discussed at the time. .NET is historical hindsight and personal analysis.
As I recall, one of the reasons Sun sued them was that J++ was a faster implementation of Java than Sun’s. This meant that people preferred to target J++, rather than the Sun JDK. The justification for the lawsuit was that J++ put a load of MS-specific things in the java.* namespace, and so things written for J++ didn’t necessarily run anywhere else, but that wouldn’t have mattered if no one wanted to use J++.

My understanding is that the first versions of the CLR were based heavily on the J++ runtime and so inherited the performance wins.
They did indeed sue over OS-specific APIs in the standard library, which is the same reason they sued Google over it in Android a few years ago. It’s not some sort of corporate bullying tactic, it’s genuinely against their licensing terms (or something, I know they officially state you can’t do that somewhere.) It’s dumb, definitely, but they don’t try to hide it.
I’m just surprised Google didn’t take a page out of MS’ book and simply make a new language many times superior to Java.
Partly that, but a big(er?) part was moving away from their dependency on Intel and, especially, on x86. Microsoft ported Windows NT to several other architectures but they all died because of a lack of software. Most people don’t buy Windows because they like Windows, they buy Windows because Windows runs all of their software. Windows on Alpha, for example, ran native software much faster than the fastest Intel machine, but almost everything that actually ran on it ran in the x86 emulator and the machines couldn’t run x86 software faster than the fastest x86 chips.
Having an architecture-agnostic distribution format was a good hedge here. If most third-party software avoided native code except for Windows-provided DLLs, then it could be moved to a new architecture trivially. Unfortunately, this never worked and most .NET apps included some custom native components.
.NET launched at the peak of the Itanium hype, when x86 was going away and everyone needed a way of moving from x86 to Itanium, and to whichever alternative came along if Intel bungled the transition (x86-64 was not yet announced). It was expected to be able to emulate x86 reasonably fast, but you would get the biggest benefits only with recompiled code, so Microsoft needed an easy way of getting the most popular Windows apps moved across.
At the launch, Microsoft made a big deal about being able to do the compilation at any point. The early .NET implementations were mostly JITs, but install-time compilation was one of the benefits that they talked about a lot. The idea was that you’d do the final codegen when you installed the app, which would let you tune for the target microarchitecture and avoid any JIT overheads. I don’t think they ever shipped this (various groups did ship .NET AoT compilers).
They did actually ship this - NGen basically did tuned AOT compilation, and you ran that at install time.
Was Linux even around then? It was all corporate UNIX variants at that point, no?
Linux was absolutely around then. Both Redhat and VA went public at the end of ‘99.
Linux existed and was just starting to kill off “Big Unix”. By 2010, when Oracle took over Sun Microsystems, Big Unix was dead. IBM’s AIX and HP’s HP-UX still technically exist, but I haven’t seen either one in forever.
Don’t say that, you’ll make @calvin sad!
Contrary to popular belief, AIX doesn’t exist. It’s just a substrate for Unix syscall emulation on a more popular object capability system!
(Well, AIX still does exist, but I don’t touch it. Apparently the latest version has bash and Python 3 in the base system, which is a little crazy to hear. And there are indeed more IBM i sites than AIX sites at this point.)
Wow.
Sorry @calvin!
I had to go look up whether they actually still existed! They both have had releases in the past year (according to Wikipedia); I was not willing to try to figure out their actual websites.
I wonder if they have a developer or hobbyist program like OpenVMS does. Another of those that effectively got killed off by the rise of Linux and x86. It might be fun on a boring stormy day to install them and see how much, if anything, has really changed from what I remember.
There is modern .NET (previously .NET Core), and then the legacy .NET Framework. I imagine Mono in this case might still be useful for the legacy .NET Framework?
New Pico 2 boards from Raspberry Pi are still rocking micro USB 😳
I think those are due to the $5 constraint. USB-C sockets are much more expensive than micro-USB ones.
Cheapest available Micro-B from JLCPCB comes at $0.0135 for 1500 pieces. Type-C at $0.0297 for the total difference of:
+$0.0162
That’s 37 haléřů for fellow penny-pinching Czechs.
There probably is some profit margin and they would be able to eat the difference for sure. It’s probably so that the board is a 1:1 drop-in replacement for the previous one.
There is a chance that the next revision of the board will use the 48p chip and feature USB-C, since there would not be the backwards compatibility requirement.
Yeah I have to imagine that keeping a specific form factor compatibility is the primary driver.
Tangentially, I’ve no personal experience with RP2350-based boards, but I’ve run into some other common microcontrollers whose boards have started shipping with USB-C connectors replacing the previous generation’s micro-USB, and I keep running into design issues with dedicated USB controller chips bleeding too much current back into the microcontroller’s data lines, which prevents those lines from being pulled down. I don’t know enough about circuit design to say how avoidable such a situation is, but I can also appreciate not just jumping to a new connector without taking a thoughtful approach to how it may impact the overall product.
I design all my RP2040 boards with USB-C and it works just fine. Most Pico clones use USB-C connectors nowadays too.
Oh interesting. Never considered that as the reason.
Never thought of that either. It does look like there is a price difference, but I don’t think it’s big enough in absolute terms (compared to the other parts on the board) to be a driving force.
Cheapest I could find on LCSC:
16pin USB C female right-angle PCB mount: $0.029 at quantity 1000
microUSB female right-angle PCB mount: $0.015 at quantity 1000
If you’re looking yourself, note that there are such things as 6-pin USB-C connectors, which only give you power and not data.
Controversially flagged and upvoted.
I’m ok with this as long as it’s occasional. If too many appear then it will cause the same problem as memes being upvoted more than in-depth articles on reddit.
We need a name for the… ubuntu centipede?
I’m a newer community member but I’ve lurked for years (and I am the submitter of this story) so my opinion is basically invalid, BUT I think the occasional trend is quite enjoyable, as long as the trend is evanescent (as opposed to reddit commenters using the same cliche for years straight). It’s interesting to see what different takes on a principle exist.
could just merge them all into one thread. you know, like a singularity.
If this is true, it would be another example of Google making their problem into everybody else’s problem. We have seen various articles here from Android developers trying to convince Google to include their perfectly valid app in Google Play because some stupid rules were not met. Now we are going to see the same with online content.
I can see it now.
Your website doesn’t have a privacy policy.
But I don’t… it’s some static html files… OK sure.
Your privacy policy doesn’t mention how you handle deletion requests.
Wut… OK
Your website example.com seems to be substantially similar to testing.example.com, you are de-ranked as a duplicate. Please avoid spamming copies of your website on multiple domains.
!?!?@#@?!
I’d like to see some examples and evidence. That would help me understand.
For example: I just tried searching for “Many indoor air quality sensor products are a scam” which is the title of one of my blog articles from a couple of years back.
On DuckDuckGo: my website doesn’t exist. A ycombinator discussion on my article is first then nothing more for at least several pages of results.
On Google: my article is the first result.
Perhaps I need to do a more topic-based search instead of a specific title-based search? Again some examples would be good to illustrate the point.
A couple of days ago @robey mentioned his Zukte programming language on lobste.rs, without giving a link. I tried searching for it, but couldn’t find it. The actual link is https://code.lag.net/robey/zukte, but this link doesn’t seem to be indexed on any of the search engines I tried.
Is this really a google specific problem? I presume all of the search engines must deal with the same issues described in the article, which is that the bulk of new web pages now contain machine generated junk content.
Hm, it’s almost like there’s a robots.txt disallowing all crawlers on that domain.
Nice catch.
Once upon a time, Google had a reasonably good solution for this: allowing users to flag spam domains and remove them from their results.
Google removed this feature, presumably because Google profits from ad impressions when people visit spam pages that include Google ads, and Google’s engagement metrics rise when you have to wade through multiple pages of outright junk and sponsored links instead of finding what you asked for immediately.
This is true. But not all search engines work this way.
One of the search engines I use is Kagi.com. They do let you block spam domains, and also, “Ads and trackers on a website can negatively affect its page ranking on Kagi search results. Kagi prioritizes non-commercial sources and penalizes bloated sites with ads and trackers, regardless of their agendas.”
I won’t absolve Google of everything, but I believe there is a more probable and less cynical reason that the feature was dropped: abuse.
There are many examples of mobs of angry people downvoting or flagging content they have beef with, even if the content is perfectly fine for another audience: ecological activists flagging oil company content, gamers leaving bad Steam reviews on games they didn’t even play to punish a company, business owners leaving bad Google Maps reviews for their competitors. Heck, there are anime fanatics downvoting shows so that their own favorites stay on top of a user-rated list.
I can absolutely see activists employing bot farms to flag as spam the websites they want delisted from Google, for whatever reasons they fancy.
Another interpretation is that for-profit businesses generally do not optimize the product for their users, but for their customers.
Good point. However, Google is free to implement the same solution that Kagi.com uses, which is that logged in users may block spam domains to remove them from their own search results (not from other people’s search results).
For some more anecdata: two days ago I published a new article to my blog. Other, substantially more popular posts of mine show up in Google search with just a fragment of the title. My newest post has much less traffic, is pretty niche, and has a pretty unique title, and it does not show up even with the full title in quotes and my site name in the search.
That firmware reminds me of a network cam I looked at years ago. RW filesystem and loading a new firmware didn’t delete my extra files.
I’m kind of disappointed by modern Android smart devices and no obvious way of getting a shell, or perhaps I’m wearing blinders and they’re just as bad (suggestions welcome! I hate my TV and want to debug it, but it has no exposed USB for adb).
Check if enabling USB debugging on your TV also silently enables networked adb. IIRC, it enabled remote debugging on the default port 5555 on my Coocaa TV at home. Then just use adb connect $tv and you’re in.

Any way of detecting if you have been under such an attack (assuming the attacker has not succeeded and cleaned up their entry)? The timeline says that this was announced on openwall a couple of weeks ago.
Since it requires loads and loads of attempts, I suppose a really large number of SSH connection attempts that don’t seem to care about varying login names and passwords would be a good reason to suspect that someone is trying to exploit this, rather than running a normal password guessing attack.
I just tried to time out an auth; my logs showed this:
I only have two Timeouts in my logs (the first being in February). That’s assuming my logs have not been adulterated of course, AND that this is the right message to be looking for.
I would be really curious to see if I start seeing lots of Timeout entries like that now.
I have been seeing lots of other stuff over the last few months, including:
There are a ton of Timeouts in my ssh logs going back months. So either this attack was already known, or some ssh botnets tend to timeout connection attempts regardless.
Ahah.
I was looking at a server with ssh on a nonstandard port.
If I instead look at one that is running ssh on port 22: I also see a constant stream of timeouts.
Volume seems about constant for the last few months, so it’s probably unrelated. I’m eager to see if the volume starts ramping up now however :) But I guess they’d probably check the SSH version string before attempting it.
We need a company that makes open-source printers.
The problem isn’t really open-source printers, it’s open-standard printer interfaces. Decent printers consume PostScript, PCL, or PDF. The printer driver for these can be completely generic, because it’s just something like IPP passing files in a common format.
Cheap printers offload a lot more to the host. They often do rasterisation entirely on the host, so need to have a bespoke model of the print head (what kinds of dot patterns it can do). They sometimes even make the host generate all of the control commands and so there’s a tiny microcontroller that just runs a USB device interface to pull off USB encapsulation from the commands and feed them directly to the motors / print heads. This was even easier with parallel-port printers where you could wire the motors directly to the pins in the parallel port and make the computer directly drive everything, but even with USB you can typically get away with a $1-2 microcontroller with a few tens / hundreds of KiBs of RAM, whereas rasterisation on the printer needs a $10-20 SoC with (at least) tens of MiBs of RAM.
I would love to do this, but I’m terrified that the big companies will be less than welcoming to an open source competitor (or really any small competitor).
Is it legal to make your printer compatible with existing toner + drum generics? Or would they still try and sue you for patent infringement anyway?
Don’t you just need to refill your cartridges? That might solve this problem.
Oh I meant any use of their “design”, official cartridges or not.
I’m pretty sure it is. If I remember correctly Brother released a printer that used LaserJet cartridges back in the nineties.
smdh at playing Nethack and not knowing about the Moon’s influence…
I bet they don’t even know how to wash their rings.
Woah
don’t get your hopes up—overflow is still ub. because ummm uhhhh ummmm..
It’s handy when everything gets promoted to int, but 64-bit CPU addressing modes don’t overflow like int: https://gist.github.com/rygorous/e0f055bfb74e3d5f0af20690759de5a7

i am aware of these ‘arguments’ (taking the form ‘look at the assembly! it’s so bad, it has so many instructions!’). i have not been shown a real-world application whose performance was significantly impacted by -fwrapv under llvm/gcc (the famous example from chandler carruth makes no difference in my testing, and the relevant code has garbage performance either way)
(this being doubly true insofar as the requisite transforms can be done by hand at the source level if need be, and insofar as alternate approaches are available to the compilers, should they be pressured to employ them—as mentioned by fabian in the text you link—but even leaving these aside. the obsession of c++ with ‘zero cost abstractions’ is boggling, given they employ a model of implementation which is fundamentally conducive to suboptimal code generation and ignore the fact that fast is actually about queues.)
Here are a few reasons: https://kristerw.blogspot.com/2016/02/how-undefined-signed-overflow-enables.html
But if you don’t like them, you can always add -fwrapv or -ftrapv to your CFLAGS (even to the whole OS at once, if you use Gentoo or NixOS).
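To make the disagreement above concrete, here is a minimal hand-written C sketch (not from any of the commenters; the function names are just illustrative) of the two effects being argued over: the value-range folding that undefined signed overflow permits, and the 32-bit-index addressing case from the linked gist. Compiling it once normally at -O2 and once with -fwrapv added, then diffing the generated code, shows the difference; whether it ever matters for a real workload is exactly the dispute above.

```c
#include <limits.h>
#include <stdio.h>

/* With the default rules (signed overflow is undefined), an optimizing
 * compiler may fold this comparison to 1, because it is allowed to assume
 * x + 1 never wraps. With -fwrapv it must also consider x == INT_MAX,
 * where x + 1 wraps to INT_MIN and the comparison is false. */
int always_greater(int x) {
    return x + 1 > x;
}

/* Indexing with a 32-bit int plus an int offset: when signed overflow is
 * undefined, the compiler can assume i + k never wraps and fold it into a
 * single 64-bit address calculation (or strength-reduce it to a pointer
 * increment); with -fwrapv it has to perform the possibly wrapping 32-bit
 * addition and sign-extend the result each iteration. */
void shift_store(int *a, int n, int k) {
    for (int i = 0; i < n; i++)
        a[i + k] = i;
}

int main(void) {
    int data[8] = {0};
    shift_store(data, 4, 2);
    /* Typically prints 1 when the compiler folds the comparison,
     * and 0 when built with -fwrapv. */
    printf("%d\n", always_greater(INT_MAX));
    return 0;
}
```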