It’s platform strategy. If Apple stays with OpenGL (or adopts Vulkan) they are at the mercy of Khronos, and Khronos may not do a good job (in the long run – in the short run Vulkan seems pretty good.)
Microsoft is never going to give up on DirectX; it grants incredible market control. $$$. OpenGL is owned by committee and can’t be relied on to be competitive (something the last 20 years of history demonstrated), and since Khronos controls Vulkan there’s a substantial possibility committee politics destroy it too in the long run.
3D engines all require multiple rendering backends anyway (DX9/11/12, OpenGL ES 2, ES 3, ES 3.1, desktop GL), so the incremental cost of implementing another backend for Metal is low. The cognitive cost is also low because Metal/DX12/Vulkan are similar. So Metal is better (for Apple) than Vulkan or OpenGL because they get complete control. Complete control is where Apple likes to be; they have a history of delivering strongly when they have it. Although Apple’s values rarely align with the prosumer’s, they do a good job of fulfilling their own vision.
Really the main disadvantage of deprecating OpenGL on OSX is all the heckling from armchair quarterbacks, but let’s be honest, Apple DNGAF about the nerdygentsia’s opinion ;)
If you’re an indie or OSS dev either you’re not doing high end rendering and OpenGL remains fine, or you can just use MoltenVK/GL and call it a day, it’s not worth getting angry about IMHO.
Doesn’t seem like the problem is ever identifying the flaws in an organization; the flaws are everywhere, easy to find.
And it usually feels like the flaws come from the top of the org. It’s hard (impossible?) to change your boss. The problem is finding and settling on the organization with the most acceptable set of flaws.
I’m kind of excited about Payment Request. Would that integrate with e.g. Apple Pay? Reducing the overhead of paying sites is one of the things that I think could turn the web around. I have wished that e.g. Firefox would put a “$1” button on their toolbar, which would let you just give the site a dollar. Practical problems aside, it could really improve the best parts of the web.
PaymentRequest does support Apple Pay and is also supported by Google, Samsung and Microsoft at least - so building a PWA with in-app purchases is very much possible now
As a side note, I actually built such a browser button when I was at Flattr, and investigated ways to identify the rightful owner of a page so that they could claim the promised donation. We never got it fully working on all sites, but it worked for some of the larger silos, like Twitter and GitHub, and also for those who had added rel-payment links to Flattr. We/I also investigated having it crawl people’s public identity graphs to try to find a connection between the owner of a page (through e.g. a rel-author link) and a verifiable identity – like their Twitter account or maybe some signed thing, Keybase-style. That led to the creation of https://github.com/voxpelli/relspider, but the crawler was never fully finished (e.g. smart recrawling was never implemented) and never put into production. I still like the idea though.
I look forward to a possible future where competitive pressure forces a management shakeup at Qualcomm, or even splitting the radio business from the SoC business.
Why do people think that CDN infrastructure should be radically neutral? Access to the internet does seem increasingly equivalent to free speech, but does that equate to a right to access other people’s syndication systems?
EFF quote: “Because Internet intermediaries, especially those with few competitors, control so much online speech” – eh? No one is stopping anyone from hosting their own content, and it’s not even hard. You don’t need a CDN to exercise speech. You need a CDN to reach a broad consumer audience. Granted I’d feel a little different if their ISP pulled the plug on them, but that’s not what we’re talking about.
I’m actually way more concerned about the fact that DNS (which is a government-operated system) is gated behind the arbitrary control of private registrars, who have recently used that control to censor people. The ideal solution is to switch away from centralized DNS (a la namecoin), but until then there should be rules preventing denial of access to this public system.
I think Intel is running out of gas. Their process lead and their design lead are both smaller than ever. Qualcomm and AMD are almost as good, and perfectly happy undercutting Intel. (Never mind that Apple is miles ahead of all three in terms of low power performance, but is mostly irrelevant to Intel because they don’t compete directly.)
This is just the Intel management lining up the lawyers to try to defend entrenched markets because they don’t have much technological advantage left.
Long ago and far away, we had these things called “shared libraries” which allowed one to build code and reuse it, so that even if the build process was very long and complex, you only had to do it once. An elegant solution from a more civilized time.
I think that’s missing the point I was trying to make. Even if 75% of them happened due to us using C, that fact alone would still not be a strong enough reason for me to reconsider our language of choice (at this point in time).
Ironically this is missing the point that memory safety advocates are trying to make. The response the post got is less about Stenberg’s refusal to switch languages (did anyone actually expect him to rewrite?), and more about how he is downplaying the severity of vulnerabilities and propagating the harmful “we don’t need memory safety, we have Coverity™” meme.
curl is currently one of the most distributed and most widely used software components in the universe, be it open or proprietary and there are easily way over three billion instances of it running in appliances, servers, computers and devices across the globe. Right now. In your phone. In your car. In your TV. In your computer. Etc.
So in other words, any security vulnerabilities that affect curl will have wide impact. How does this support his argument?
(did anyone actually expect him to rewrite?)
I don’t think rewriting in a safe language would be the problem. I have ported a couple of 50k-line codebases between objc, java, and cpp, and the rewrite wasn’t the hard part. It’s the tooling. Even using supported languages, it’s a ton of work to get the build systems working well. Good luck getting your newish-but-safe-language codebase playing nice with the build systems of the big three platforms, consistently, fast, and with good developer support for e.g. debugging, logging, and build variants.
cURL isn’t popular because it’s written in C, per se, it’s popular because it runs freaking everywhere and very nearly just works.
I think if you want safe language adoption you should go to the people that are choosing to use cURL and talk to them, and work on their barriers to adoption. cURL is C.
He has the benefit of being able to promote the virtues of a working and widely used program versus some vaporware.
I think he’s saying the project owner would say the good they’ve done outweighs the bad. That’s the impression I got from OP.
What he’s saying is we’re comparing a tangible C program to a Rust program that does not exist. Faults of C notwithstanding, vaporware has all the attributes of the best software—except for that existence problem.
Nah, if that’s all he wanted to say he wouldn’t have made such a big fuss about how popular his software is. He shoehorned that paragraph in there in an attempt to lend credibility to his arguments. Later he tries this again by implying that his detractors don’t code, or something:
Those who can tell us with confidence how to run our project but who don’t actually show us any code.
All of that is bunk. If maintaining a popular piece of software implied security expertise then PHP’s magic quotes would have never seen the light of day.
Do the economics of their business work out? Or are they just gobbling market with investment money? For the little company I work for CF is dramatically less expensive than MaxCDN, which itself was dramatically cheaper than AWS Cloudfront.
And aside from that, I hate their attempts to differentiate via value-adds like this site scraper shield garbage, or their DDOS shield stuff. It doesn’t work well and isn’t trustworthy. I wish there was a CDN that just took an nginx config and got out of the way.
Did Google do the CPU design? Is Rockchip just doing the fabrication?
Odd world to have Google poised to join Apple as the best mobile CPU vendors. Maybe they got sick of Qualcomm’s relatively lackluster performance.
I’m not sure there’s any evidence for a Google-designed CPU; if it was happening, it’d be pretty hard to hide hiring a team of that size.
Right. Looks like an ARM-designed CPU core for sure.
Last October, a product page for the Plus, then branded the Chromebook Pro, was leaked, ID'ing the chip as the Rockchip RK3399. Some folks benchmarked a dev board with it. Some early announcements about it exist too, also tagging it as based on Cortex-A72/A53 cores and a Mali GPU.
There are also benchmarks out there of another A72-based SoC, the Kirin 950.
There’s reasonable evidence of Google ramping up at least more competence in chip design over the past 3-5 years than they traditionally had, which seems to spawn rumors of a Google CPU every time they hire someone. Anecdotally from the perspective of academia, they do seem much more interested in CE majors than they once were, plus a few moderately high-profile hardware folks have ended up there, which would’ve been surprising in the past. But I agree it’s nowhere near the scale to be designing their own CPU. I don’t know what they’re actually doing, but assumed it was sub-CPU-level custom parts for their data centers.
CPU design is also a really small world; it’s almost all the same people bouncing between teams. You can trace chip designs back to the lineage of the people who made them; there are even entire categories of “pet features” that basically indicate who worked on the chip.
Pet features, that’s neat. Like ISA features or SoC/peripheral stuff? Can you give an interesting example?
One example is the write-through L1 cache, which iirc has a rather IBM-specific heritage. It also showed up in Bulldozer (look at who was on the team for why). A lot of people consider it to be a fairly bad idea for a variety of reasons.
Most of these features tend to be microarchitectural decisions (e.g. RoB/RS design choices, pipeline structures, branch predictor designs, FPU structures…), the kind of things that are worked on by quite a small group, so they show a lot of heritage.
This is probably a slightly inaccurate and quite incomplete listing of current “big core” teams out there:
Intel: Core team A, Core team B, and the C team (Silvermont, I think)? They might have a D team too.
AMD: Jaguar (“cat”) team (members ~half laid off, ~half merged into Bulldozer), not sure what happened after Bulldozer, presumably old team rolled into Zen?
ARM: A53 team, A72 team, A73 team (Texas I think)
Apple
Samsung (M1)
Qualcomm (not sure what the status of this is after the death of mobile Snapdragon, but I think it’s still a thing)
nvidia (not sure what the status of this one is after Denver… but I think it’s still a thing)
Notably when a team is laid off, they all go work for other companies, so that’s how the heritage of one chip often folds into others.
Though cliche, I think nuking the escape key is a courageous thing to do. Use is limited, alternatives are available, the physical cost is relatively high, but it’s steeped in tradition and probably has vocal support.
It depends on the benefits that are made available by removing the key. Doing away with the floppy drive was courageous because it freed up precious space in laptop cases and generally enabled better industrial design. It also helped push people toward more reliable and convenient forms of portable storage. If that OLED strip up at the top doesn’t have some pretty amazing benefits, I don’t think the loss of the Esc and Function keys will have been worth it.
Pretty cutting. Verizon looks like it will have no trouble staying profitable due to their spectrum control, though. Ironically the strength of the dumb pipe (good coverage) will keep the value add afloat.
Though I doubt “afloat” also means “growing.” Would AOL and Yahoo turn into a stone, weighing down Verizon?
For Valve to have any position to negotiate with MS from, they have to have a credible threat of being able to walk away from Windows. Mature Linux support and Steam Machines (which are integrated systems) make a credible threat. You could say it’s dead in the water as a consumer product. Maybe it’s not a consumer product though, maybe it’s a negotiation tool.
And, Valve could step on the gas at any moment. I think they have a good shot at shipping the best possible gaming hardware platform by virtue of having a lot of development resources and a strong track record for delivering quality. Maybe they’re just letting the project idle because they’re close to closing an agreement to be the only other MS endorsed app store on Windows?
If you want to build a program at run time, you can find the addresses of blocks of existing code that each do something you want and end in a ret. These addresses are called gadgets. You then execute your program by laying the gadget addresses out on the stack in the order you want them to run and triggering a return: each gadget’s trailing ret pops the next address and jumps to it. This is called ROP: return-oriented programming.
RAP is “return-address protection”: a scheme where you save an extra cookie someplace before calling a function, then verify that the cookie is still in place before returning. This lets you detect someone using the tail of your function as a gadget.
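To make that concrete, here’s a toy Python simulation of both ideas. This is only a sketch: real ROP chains are machine-code addresses on the actual call stack, and real RAP instruments compiled code; all the names and “addresses” below are made up for illustration.

```python
import secrets

# Toy "gadgets": each does one small operation and then "returns",
# i.e. the dispatcher pops the next address off the fake stack.
GADGETS = {
    0x1000: lambda v: v + 1,   # inc
    0x2000: lambda v: v * 2,   # double
}

def run_rop_chain(stack, value=0):
    """Pop gadget addresses and run each in turn, mimicking how each
    gadget's trailing ret transfers control to the next address."""
    while stack:
        value = GADGETS[stack.pop()](value)
    return value

# Pushed in reverse, so 0x1000 runs first, then 0x1000, then 0x2000:
# (0 + 1 + 1) * 2 == 4
result = run_rop_chain([0x2000, 0x1000, 0x1000])

# RAP-style cookie: stash a secret copy off the "stack" before the
# call, and check the in-frame copy against it before returning.
def prologue(frame, shadow):
    shadow.append(frame["cookie"])           # save cookie someplace safe

def epilogue_ok(frame, shadow):
    return frame["cookie"] == shadow.pop()   # verify before returning

frame = {"cookie": secrets.token_bytes(8)}
shadow = []
prologue(frame, shadow)
frame["cookie"] = b"\x00" * 8                # simulated stack smash
tampering_detected = not epilogue_ok(frame, shadow)
```

The point of the cookie check is exactly what the comment describes: a return path that was hijacked (here, the smashed frame) no longer matches the saved copy, so the “function” can refuse to return.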
Is there an equivalent for J(ump)OP? Will we just see many ROP-using exploits switch to JOP or is there a practical reason that it’s unlikely to be the next iteration in the cat-and-mouse game?
I was curious about this too and found this paper: https://cseweb.ucsd.edu/~hovav/dist/noret.pdf
This paper is one of the best ones I’ve read on it. It also wins points for a fantastic title:
It’s the predominant technique for achieving remote code execution on systems with “data execution prevention” or write-xor-execute (W^X) memory protections, which prevent exploits from executing injected malicious code directly.
That sucks, I like Phoronix a lot. I know the comments aren’t great, but the news Michael puts together is usually pretty interesting and a great way to get caught up on developments in the Linux graphics stacks.
I find the news quite frequent, interesting, focused and on topic. I haven’t subscribed yet though :(
If it’s a thing they do on their own, and in a way that exploits existing security weaknesses which they do not attempt to politically perpetuate, rather than asking for the introduction of new weaknesses…
… then I still disapprove, but only at the level that I disapprove of all their surveillance. :) As far as I can see, that would remove the negative externalities which are specific to this case, leaving only the ones inherent in their mission.
I asked the question because the slant of this article seemed to be piling on the FBI. The FBI is wrong to invoke the All Writs Act. Even the cyber czar says so! But his suggestion for what they should do instead is also disagreeable, no? So perhaps we should discount his advice?
I’m waiting for some former advisor to say that back in the day we’d have tortured the shooters friends and family to get intel. What will ars say then? :)
When professional spies are talking, we need to not accept the package deals implicit in how they frame things. The intelligence community has had its own unique and effective form of PR since its inception. He can say a true thing and a false one in the same breath; it happens. :)
I have to admit it’s frustrating that Ars pretty much just let him use their platform without adding their own critical thought. Thank you for adding yours.
I suggest desoldering the flash memory and attacking it offline, without the phone.
But should they? The NSA and FBI are two different organizations with different goals and boundaries. Who is to say the NSA didn’t falsify evidence? They’d need to provide proof, which would reveal their methods.
That would require having a way to bypass the fact that the phone’s TPM chip is the only component that has the decryption key. A brute-force attack on the entire keyspace isn’t going to work.
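To put rough numbers on why that is: modern phone file encryption uses 256-bit AES keys entangled with a hardware key, and a 2^256 keyspace is beyond any conceivable search. A back-of-the-envelope sketch (the guess rate is an arbitrary, generous assumption):

```python
# Back-of-the-envelope: brute-forcing a 256-bit key.
keyspace = 2 ** 256
guesses_per_second = 10 ** 12            # wildly generous assumption
seconds_per_year = 365 * 24 * 60 * 60
years_for_full_search = keyspace // (guesses_per_second * seconds_per_year)
# Even at a trillion guesses per second, a full search takes more
# than 10**50 years; hence you attack the hardware, not the keyspace.
```

Which is why any practical attack has to go through (or around) the chip holding the key, not the key itself.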
They shouldn’t - unless I’m missing some major change, their mandate is to spy on non-“U.S. persons”. The FBI is the thing we use to investigate citizens and other “U.S. persons”, with different rules and oversight. I still think this distinction is important.
Although I might be convinced that some sharing of expertise and techniques would be acceptable, direct collaboration along the lines of “here NSA, crack this phone” seems too far.
Actually, while we’re here, question for lobsters: should the NSA crack the phone?
Morally, or in what sense?
The question of whether or not they can is interesting from a legal perspective mostly because per US v NY Telephone, if the Government can access the data on their own they must pursue that angle in preference to compelling action from a third-party via All Writs.
So from a legal perspective, if the FBI wants that data then yes, the NSA should crack the phone rather than Apple. But whether they should from a security perspective is a different question.
I wonder if anyone has tried using ART as a server side JVM, seems to have a great GC with super low STW times.
One thing that the Terminal app in Mac OS X has always done extremely well is to automatically rewrap text when resized - even Linux terminal window apps, numerous though they may be, don’t seem to handle that well. Windows wouldn’t even let you resize the damn window - hopefully that will be fixed now!
Windows console resizing is already fixed.
Don’t Gnome’s and XFCE’s Terminals usually do this?
And if they don’t, one can always use dvtm. In combination with st one gets a very lightweight environment (it requires fewer resources than xterm, for example) that’s actually surprisingly nice to use.
ST sounds nice. I’ll give it a whirl. Thanks :)
Yeah, Terminal.app is amazing