I know where the author is heading, but some browsers building with one compiler doesn’t strike me as a monoculture. Not too long ago “everyone” (with few exceptions) using or programming for Linux was using GCC and glibc. Now people use Clang, GCC and probably others (ICC, etc.). So things actually became a lot less of a monoculture, probably thanks mostly to the efforts of BSD and macOS users and developers making sure that software doesn’t only work with GCC.
Yes, it’s at least Mozilla and Chrome now using Clang, but these are neither the only browsers, nor is it very uncommon for big projects to focus mostly on a defined set of tools.
It’s just a guess, but I also think that it will not suddenly become a huge undertaking to try to compile Firefox with another compiler. For the Rust parts maybe, but it’s already like that.
Not to say it’s a good thing, but there are of course up- and downsides. Especially for such a big project, and especially for a project already using said implementation, helping to develop it makes a lot more sense than in the many cases where a project supports only one specific version of GCC. People using source-based approaches to install packages probably know this: compiling some version of some compiler, maybe taking hours, just to build a little piece of software that absolutely requires it.
Other than that, even if Mozilla now uses one compiler across platforms, I hope they won’t start “ruling out” compilation with other compilers, or rejecting the few lines of code needed to keep or establish compatibility. At least from the article it sounds like that would be the case.
It makes me really appreciate the projects that require only a C89/C99-compliant compiler, like SQLite and Lua. Admittedly their dependencies are also minimal; they only require the C standard library, iirc. But it sure is nice.
I know where the author is heading, but some browsers building with one compiler doesn’t strike me as a monoculture.
So, even for a “toy” project, we used to build against:
And we would’ve built on an Alpha if we had one lying around–helps reveal the really thorny issues.
The thing is, not using multiple compilers (and architectures!) helps hide bugs.
Completely agree, but it’s still not unusual for projects to use one compiler for their official releases.
It’s just a guess, but I also think that it will not suddenly become a huge undertaking to try to compile Firefox with another compiler.
Suddenly? No, but I’m afraid that sooner or later having both Clang and Rust will become required for any platform that wants to ship Firefox. Which is a shame, since Mozilla’s mission is:
Our mission is to ensure the Internet is a global public resource, open and accessible to all.
Suddenly? No, but I’m afraid that sooner or later having both Clang and Rust will become required for any platform that wants to ship Firefox. Which is a shame, since Mozilla’s mission is:
That’s already the case. Stylo needs clang to build. You can build some parts with GCC though.
I agree. Sadly the browser already has very big differences in platform support, even without this. For example WebRTC (the multimedia part), sandboxing capabilities, etc. But then of course supporting all that on many platforms isn’t easy. It would be great, of course, if that mission led to a focus on supporting more than just Windows, Linux and macOS.
Maybe someone has more insight, but something that makes me wonder a lot about how things work internally at Mozilla is that there are quite a few bug reports with ready-to-integrate patches remaining unanswered for often years, yet there are often changes that completely surprise users, some of them very far away from Mozilla’s stated mission.
While I get that not all the people working for Mozilla work in all areas, it seems a bit like there is a problem on the “accepting and integrating contributions” side of things. For a foundation asking for monetary contributions, it’s often a bad sign when contributions in the form of work go unaddressed. I hope Mozilla can fix this, so contributors don’t get too frustrated.
So it’s more like things became a lot less of a monoculture, probably thanks mostly to the efforts of BSD and macOS users and developers making sure that software doesn’t only work with GCC.
Not to belittle the work of the BSD people, but a lot of Clang portability work was done by Debian before the BSDs decided on Clang. https://clang.debian.net/ goes back to Clang 2.9.
FreeBSD initially imported Clang (at LLVM revision r72732) into the tree on June 2nd, 2009:
https://svnweb.freebsd.org/base?view=revision&revision=193323 https://llvm.org/viewvc/llvm-project/?pathrev=72732
This was long before FreeBSD 9.0-RELEASE (January 2012).
The public documentation of the effort starts back in February of 2009:
https://wiki.freebsd.org/action/recall/BuildingFreeBSDWithClang?action=recall&rev=2
As of June 2009, Clang was at version 2.5. Version 2.6 didn’t happen until October 2009.
http://lists.llvm.org/pipermail/llvm-announce/2009-March/000031.html http://lists.llvm.org/pipermail/llvm-announce/2009-October/000033.html
So this means the devs were working with the devel/llvm-devel FreeBSD port, which would have been based on HEAD or slightly newer than Clang 2.4.
So I’m not sure that I believe the story that Debian was that invested in LLVM/Clang before FreeBSD was. There was no reason to; the Linux kernel had so many GCC-isms to overcome, what would be the gain? (other than some faster compiling of packages but poorer performing binaries)
edit: FreeBSD was trying to build all of the ports collection with Clang around May 2010. This still predates Debian by over a year
https://wiki.freebsd.org/action/recall/PortsAndClang?action=recall&rev=1
Okay, wrong perspective then. From my angle I saw how tons of projects got pull requests, patches, etc. so they’d work with clang.
Do you have any background on why the Debian Clang community even popped up early? I’d have considered them to be philosophically closer to sticking with GCC (other than where it’s necessary).
I also saw that Wikipedia actually has a nice timeline. However, it doesn’t mention where Debian’s effort starts, only where it “finishes”: https://en.wikipedia.org/wiki/Clang#Status_history
Do you have any background on why the Debian clang community even popped up early?
Debian is so large that it has a lot of (pardon me) crazy people. As evidence, I submit the existence of Debian GNU/kFreeBSD.
The title is a little misleading. The author is not against adblocking in the abstract, but is against Adblock Plus, a specific adblocker.
I think that was done on purpose, because the title wouldn’t have made sense otherwise. For me personally it is click-baity but definitely more tolerable and enjoyable than the standard clickbait titles one sees on the internet.
The title capitalizes Adblock, which makes it pretty clear that it’s talking about a specific product.
It wasn’t clear to me. All the other words in the title are capitalized, and “adblock” without qualification usually refers to all extensions which block ads.
As far as I know, yes. British, French, Spanish and Portuguese-language sites don’t capitalize everything and it’s such smooth sailing.
I could be misunderstanding something, but wouldn’t any implementation of a two-way link system (where each link must be aware of another) require either (A) a centralized authority or (B) majority decentralized consensus, necessarily opening the door to censorship?
Could Xanadu not be accomplished with some kind of file spec/protocol and rich transclusion rules (i.e. PDF but with a standard way to “refer” to other entities by hash)? Why are two-way links even necessary for the functioning of Xanadu as described?
Why are two-way links even necessary for the functioning of Xanadu as described?
Because Nelson thinks that the links should never get invalidated by a single end point, and because he thinks that nothing in the system should ever be copied, but always linked to or transcluded.
It’s an interesting thought. Think how the digital world would be if you could always find the original source of everything. What if the original creator always got rewarded (even in a very small way) for being transcluded? It’s not necessarily a world I’d think is ideal, but it’s interesting nevertheless.
Because Nelson thinks that the links should never get invalidated by a single end point, and because he thinks that nothing in the system should ever be copied, but always linked to or transcluded.
When I was an undergrad, I relied on AFS and FrameMaker for a lot of assignments, and that was precisely how I collaborated with other students.
It worked for some things. But it’s part of why AFS didn’t become the Web.
Content-addressed networks like IPFS make it straightforward without centralized coordination. We don’t care about linking to the “original source” in the sense of “original host” – just that we have truly-immutable associations between address and content.
(I should note that, last I heard, IPFS wasn’t being used internally in Xanadu projects except experimentally. I pushed for it hard when I was involved, and so did Brewster Kahle. Mostly, we relied on conventional HTTP caches of documents originally fetched by HTTP, either storing them ourselves or relying on the Internet Archive to ensure they didn’t change under us. But, everybody involved is aware of IPFS so I expect IPFS support to eventually appear – probably once we can figure out how to coordinate a guaranteed minimum number of pinned copies of any given document.)
Two-way link systems don’t require either centralized authority or majority decentralized consensus. The sides don’t need to be “aware of each other” because they are together – the documents involved don’t need to know about the links they are associated by. (Consider something like RapGenius. The documents being annotated don’t know about the annotations. Nobody would see any annotations unless they checked for them. The same is true with hypertext.)
Two-way links are literally easier than one-way links. After all, you can create them without permission from any of the parties involved in the original documents, keep them secret and distribute them to only your friends, distribute “OJAS’s 100 BEST LINKS OF 2018” packs to show off how clever you are at connecting Proust to the Plan9 documentation, etc.
Meanwhile, since links usually refer to underlying source text in a transclusion (rather than the context in which the link was created), even the tail ends of links you yourself created can be meaningfully surprising & interesting when you read transcluded content in a different context.
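The external-link idea can be sketched in a few lines. This is a hypothetical toy model, not Xanadu’s actual data model: documents are named by their content hash, and links live in a store of their own, so the documents never have to know they’ve been linked.

```python
import hashlib

def address(content: bytes) -> str:
    """Content address: a document is named by the hash of its own bytes."""
    return hashlib.sha256(content).hexdigest()

class LinkStore:
    """Links kept entirely outside the documents they connect.

    You can link two documents you don't own, keep the store secret,
    or share it with friends; the documents themselves are never touched.
    """
    def __init__(self):
        self._links = []  # list of (source_addr, target_addr, note)

    def link(self, src: bytes, dst: bytes, note: str = ""):
        self._links.append((address(src), address(dst), note))

    def links_touching(self, doc: bytes):
        """Two-way lookup: everything pointing at or out of this document."""
        a = address(doc)
        return [l for l in self._links if a in (l[0], l[1])]
```

No central authority or consensus is involved: each reader consults whatever link stores they choose to trust, and either end of a link can be found from the other by the same lookup.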
The documents being annotated don’t know about the annotations. Nobody would see any annotations unless they checked for them. The same is true with hypertext.
Thanks, this made it click.
No problem. It’s a really common misunderstanding, probably due to how TBL did it in the web. (I think Enquire used a central database, which is only marginally better.)
It’s really easy to write a simple HTTP server. I’ve done it a couple of times for weird languages like Self. You can do it in an afternoon, and you end up with something fast enough and usable enough for simple websites with low traffic; all you need is to be able to ask your OS to open a socket, then read, write and close it.
There is no way that I’m going to write an HTTPS server. Which is a shame, because it means that small language projects like Self will need to rely on a bunch of third-party C code and third-party infrastructure (letsencrypt, caddy, openssl etc) to serve a simple website.
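For a sense of how little is involved, here is a minimal sketch in Python rather than Self (my code, not anything from the parent): one request answered with nothing but OS-level socket calls.

```python
import socket

def make_server(host="127.0.0.1", port=0):
    """Open a listening TCP socket; port=0 lets the OS pick a free port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    return srv

def serve_once(srv):
    """Answer a single request with a fixed page: accept, read, write, close."""
    conn, _ = srv.accept()
    conn.recv(4096)  # one read is good enough for a small GET request
    body = b"<h1>hello</h1>"
    conn.sendall(
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: text/html\r\n"
        b"Content-Length: %d\r\n"
        b"Connection: close\r\n"
        b"\r\n" % len(body) + body
    )
    conn.close()
    srv.close()
```

A real server would loop over connections and parse the request line, but the shape stays the same: open a socket, read bytes, write bytes, close.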
Any HTTP server can easily be converted into an HTTPS server by piping it through an SSL proxy. It’s the same protocol going over the pipe; after encryption there is no difference.
Sure, that’s what letsencrypt, caddy, openssl, etc. provide: a way to turn your simple HTTP server into a public-facing HTTPS server. But the cost is that a small protocol which could be written completely in-house for fun now needs a whole bunch of complicated C/Go/etc. code and systems written and hosted by someone else…
At the risk of being overly reductive you’re already depending on a bunch of code someone else wrote unless your HTTP implementation also included the TCP and IP layers. Adding in TLS can be thought of as just inserting one more layer to the several that already exist beneath you.
(A complicated one you might need to configure and that isn’t provided by the OS, I grant you)
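To illustrate the “one more layer” view: in a language whose standard library binds a TLS implementation, the layer really is a single wrap over the same listening socket. A sketch assuming Python’s ssl module; the certificate paths are placeholders for whatever your CA (e.g. Let’s Encrypt) issued.

```python
import socket
import ssl

def make_tls_listener(certfile=None, keyfile=None, host="127.0.0.1", port=0):
    """A plain TCP listener with a TLS layer wrapped around it.

    certfile/keyfile are whatever certificate and key your CA issued;
    they are optional here only so the plumbing can be exercised without
    real certificates (handshakes would then fail, as they should).
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if certfile:
        ctx.load_cert_chain(certfile, keyfile)
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    # Everything above this line is the same as for plain HTTP. The HTTP
    # code on top still just reads and writes bytes; only the transport
    # underneath changed.
    return ctx.wrap_socket(srv, server_side=True)
```

The catch the parent comments point at is real, though: the ssl module is itself a binding to OpenSSL, so the dependency hasn’t gone away, it has just moved below you.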
I was reading your post and wondering if the size difference between a standard kernel-space TCP implementation and openssl was negligible or not.
find linux/net/ipv*/tcp_* -name '*.c' -o -name '*.h' | xargs cat | wc -l
25119
find openssl/ssl openssl/crypto -name '*.c' -o -name '*.h' | xargs cat | wc -l
285101
Turns out it’s an 11.3:1 ratio; it is not negligible at all.
I was actually not expecting a difference that big!
[edit]: I just re-read my comment; please don’t interpret that as an attack, it is not :) You just piqued my curiosity here!
On top of that, if we trade performance for code size by dropping optional parts of the spec, we can get a minimal functional TCP stack in an amazingly small amount of code (cf. uIP, lwIP, and the fabulous VPRI parser-based approach in Appendix E of http://www.vpri.org/pdf/tr2007008_steps.pdf)
Come on now, there’s a big difference between depending on, say, BSD sockets and depending on an SSL proxy like nginx or something.
I’m more familiar with using languages/frameworks with built-in support. If you’re implementing it by a proxy that’s obviously a whole other component to look after.
A separate daemon adds more complexity (and therefore fragility) to the system. BSD sockets are well understood and aren’t something the sysadmin has to manually set up and care for.
I’ve heard a great deal of buzz and praise for this editor. I’ve got a couple decades’ experience with my current editor – is it good enough to warrant considering a switch?
What do you love about your current editor?
What do you dislike about it?
What are the things your editor needs to provide that you aren’t willing to compromise on?
It probably isn’t, but it’s maybe worth playing around with, just to see how it compares. It’s definitely the best-behaved Electron app I’ve ever seen. It doesn’t compete with the Emacs-as-operating-system configurations, but it does compete with things like TextMate, Sublime, and the other smaller code editors. It has vi bindings (via a plugin) that are actually pretty good (and can use Neovim under the hood!). I still don’t understand Microsoft’s motivation for writing this thing, but it’s nice that they dedicate a talented team to it.
It’s very much still a work in progress, but it’s definitely usable.
Here’s the story of how it was created[1]. It’s a nice, technical interview. However, the most important thing about this editor is that it marked an interesting shift in Microsoft’s culture. It appears to be the single most widely used open-source product originating from MS.
It’s worth a try. It’s pretty good. I went from vim to vscode mostly due to windows support issues. I often switch between operating systems, so having a portable editor matters.
It’s a pretty decent editor to try out. I’ve personally given up because it’s just too slow :| The only scenario in which I tolerate slowness is a heavyweight IDE (e.g., the IntelliJ family). For simple editing I’d rather check out Sublime (it’s not gratis, but it’s pretty fast).
It doesn’t have to be a hard switch. I, for example, switch between vim and VS Code depending on the language and task. And if there is some Java or Kotlin to code, then I will use IntelliJ IDEA, simply because it feels like the best tool for the job. I see the text editors I use more like tools in my toolbelt: you wouldn’t drive in a screw with a hammer, would you?
I do a similar thing. I’ve found Emacs unbearable for Java (the best solution I’ve seen is eclim, which literally runs Eclipse in the background), so I use IntelliJ for that.
For Python, Emacs isn’t quite as bad as it is with Java, but I’ve found PyCharm to be much better.
Emacs really wins out with pretty much anything else, especially C/C++ and Lisps.
VS Code has a very nice python module (i.e. good autocomplete and debugger), the author of which has been hired by MS to work on it full time. Not quite PyCharm-level yet but worth checking out if you’re using Code for other stuff.
I was already a little familiar with him, but just read the Wikipedia page again. Just a few sentences to show what a life he lived:
In 1969, Barlow graduated with high honors in comparative religion from Wesleyan University in Middletown, Connecticut. He served as the University’s student body president until the administration “tossed him into a sanitarium”. Although he was admitted to Harvard Law School and contracted to write a novel by Farrar, Straus and Giroux, Barlow decided to spend the next two years traveling around the world, including a nine-month sojourn in India and a screenwriting foray in Los Angeles; the novel was ultimately finished but remains unpublished.
Interesting idea. Maybe it would be helpful to have a method of subscribing only to specific topics?
yeah, this is something I would explore later on. Probably after the addition of multiple languages
I don’t use Firefox because the performance with multiple tabs is still underwhelming. Unfortunately, Chrome still does a better job at that. Besides that, they discontinued Tab Groups, which was my favourite thing about it, and it’s still something I can’t find in other browsers.
There’s a Mozilla-endorsed replacement.
I have mixed feelings about Mozilla turning into an EFF-like organization, especially since their browser is one of the only things between us and further homogenization of web browsers/rendering engines (WebKit and Trident). Obviously more of this political activism would be good, but I’m afraid that the focus on maintaining Firefox and competing with Chrome would be weakened as more resources are put into activism.
Mozilla plans to ship WebRender (written in Rust for Servo) in Firefox in 2017, which will be amazing. Google created an entire site of articles and developer tools dedicated to fighting jank, but none of them will be necessary for the new Firefox. (They will still be necessary for Chrome.) I am not worried about the tech side of Firefox.
On the other hand, Firefox’s market share is continuing to decline, and I worry about the marketing side. The initial success of Firefox was a marketing success as much as a design success (and not a tech success). But then I really don’t have any idea how to market Firefox. Would just letting people know about the great tech behind Firefox be enough?
Actually Firefox market share has increased by around 4% since August: http://www.trymodern.com/article/1249/browser-market-share-november-2016
Since the start of 2016 we’ve been heavily re-investing in Firefox, and I think the fruits of that labour are starting to show (helped, of course, by IE’s plummet).
It is still declining on StatCounter: http://gs.statcounter.com/
NetMarketShare measures active daily users while Statcounter measures total web traffic. Both data points are valuable for different reasons. The statcounter decline (looks more like a flatline the past 3 months) could indicate Firefox is attracting more “regular” users as opposed to power users.
Also, there was a fair bit of controversy a couple of years back about StatCounter’s methodologies… not sure if they’ve addressed them since then.
We need more folks than EFF carrying the banner, this is true. But, that shouldn’t fall to Mozilla.
This may be a slight to the folks at Mozilla, but I am increasingly concerned that they are losing their way. I haven’t worked there, I don’t know their internal structure or funding, but they’ve made some public decisions that make me uncomfortable:
Mozilla deciding to sell out their users to monied publishers. What else would you call their final cave to support DRM? What else would that be? What about the introduction of paid ads based on your history in the new tab page (which you can remove, but how many do that one wonders)? Whatever the case may be, perhaps worse is that this seems to be working. Do you really trust an organization with almost half a billion in revenue to not sell you out?
The entire Firefox OS and phone debacle. It is baffling to me that Mozilla would waste resources and engineering time making an operating system–one of the great tarpits of software engineering–especially when there were not one, not two, but three other companies with gigantic warchests trying to saturate the market. There was never any realistic hope that that project was going anywhere, and even its value as research seems questionable when it seemed to be “let’s just bolt a browser we already have onto a linux we already have”. What sort of innovation is that, really?
The failure to make Firefox as good a browser as it could be. How long has it taken Firefox to get proper per-tab process sandboxing? How long has Chrome had it? What about HTML5 feature compliance? What about SVG rendering (compared to, of all things, IE)? I know there are a lot of good engineers there, but I kinda wonder if they’re getting brought into other non-Firefox projects or if they’re just stymied because the new Servo stuff is landing any time now so bugfixes and performance enhancements aren’t seen as fruitful to work on.
Dropping official development support for Thunderbird. There’s the copout that the “community” is the one doing development now, but for key pieces of infrastructure and dependable basic tools that’s usually a good way to lose a nice thing. Worse, it’s a bellwether in my eyes about what’s going on over there: you see, Thunderbird isn’t a sexy piece of software, and it’s pretty gnarly between the inherent madness of everything involving email and the cruft of the windowing toolkit used to make it. It’s not easy to work on, probably, and it’s not nearly as fun as playing CADT with new programming languages or clever UI/UX or silly new HTML APIs. And so the fact that Mozilla isn’t able to field enough engineers who have either the competency or the interest in supporting it suggests that their priorities lie elsewhere.
Lastly, the big elephant. The absolutely yuuuuge fuckup was the entire railroading of Eich, by both employees and the company leadership itself. For me–and I make no claims I’m correct or fair here, just that this is my opinion on the matter–that showed that Mozilla had finally put politics over technical aptitude or competence. That showed to me that they had either brought in too many folks who were comfortable backstabbing their own, or that they had brought in folks too willing to take up a witchhunt instead of shipping good code. And for all that, what is Eich doing now? Hint: it’s not making the Web less commercial. Good going on that, Mozillans.
~
I have incomplete and doubtless at least slightly inaccurate information, but I’d much rather see Mozilla acting like stewards or doing novel R&D instead of chasing political objectives and funding, especially when they’ve already shown themselves to be willing to compromise on their mission.
We’ve definitely made a lot of mistakes in the past couple years. You have some valid points, while others I would not agree with at all. Some of your points I’ve written rebuttals for more times than I care to admit and have grown a bit tired of them, so I apologize if my replies seem terse.
Mozilla deciding to sell out their users to monied publishers. What else would you call their final cave to support DRM? What else would that be? What about the introduction of paid ads based on your history in the new tab page (which you can remove, but how many do that one wonders)? Whatever the case may be, perhaps worse is that this seems to be working. Do you really trust an organization with almost half a billion in revenue to not sell you out?
The point of directory tiles (ads) was to see if we could build a profitable ad network that didn’t rely on tracking anyone. It was meant to be an experiment, and the experiment failed (and even backfired). But fear not, directory tiles are no more.
I think supporting DRM is a necessary evil. I’ll just link to a previous thread where I discussed this before: https://lobste.rs/s/jqxdc0/firefox_v46_security_hardening_some/comments/3ueanv#c_3ueanv
I also want to specifically call out that one of Mozilla’s tenets is:
Commercial involvement in the development of the Internet brings many benefits; a balance between commercial profit and public benefit is critical.
From the Mozilla Manifesto.
The entire Firefox OS and phone debacle. It is baffling to me that Mozilla would waste resources and engineering time making an operating system–one of the great tarpits of software engineering–especially when there were not one, not two, but three other companies with gigantic warchests trying to saturate the market. There was never any realistic hope that that project was going anywhere, and even its value as research seems questionable when it seemed to be “let’s just bolt a browser we already have onto a linux we already have”. What sort of innovation is that, really?
The failure to make Firefox as good a browser as it could be. How long has it taken Firefox to get proper per-tab process sandboxing? How long has Chrome had it? What about HTML5 feature compliance? What about SVG rendering (compared to, of all things, IE)? I know there are a lot of good engineers there, but I kinda wonder if they’re getting brought into other non-Firefox projects or if they’re just stymied because the new Servo stuff is landing any time now so bugfixes and performance enhancements aren’t seen as fruitful to work on.
I think these two points are related and I agree they are valid. In hindsight Firefox OS was a mistake, and instead we should have focused more on Firefox. But hindsight is 20/20. The good news is that Mozilla has admitted that, and now we are focusing on Firefox. I’ll direct you to a previous post I made on this topic: https://lobste.rs/s/t9kvj2/choose_firefox_now_later_you_wont_get/comments/c4ky8p#c_c4ky8p
Dropping official development support for Thunderbird. There’s the copout that the “community” is the one doing development now, but for key pieces of infrastructure and dependable basic tools that’s usually a good way to lose a nice thing. Worse, it’s a bellwether in my eyes about what’s going on over there: you see, Thunderbird isn’t a sexy piece of software, and it’s pretty gnarly between the inherent madness of everything involving email and the cruft of the windowing toolkit used to make it. It’s not easy to work on, probably, and it’s not nearly as fun as playing CADT with new programming languages or clever UI/UX or silly new HTML APIs. And so the fact that Mozilla isn’t able to field enough engineers who have either the competency or the interest in supporting it suggests that their priorities lie elsewhere.
I agree with the decision to drop Thunderbird, and yes our priorities absolutely lie elsewhere. It does not provide much benefit to Mozilla’s stated mission. I think any innovation in the space of desktop mail clients is not something Mozilla has any business being involved in. Fwiw, I still use Thunderbird as my daily mail client and have no complaints.
Lastly, the big elephant. The absolutely yuuuuge fuckup was the entire railroading of Eich, by both employees and the company leadership itself. For me–and I make no claims I’m correct or fair here, just that this is my opinion on the matter–that showed that Mozilla had finally put politics over technical aptitude or competence. That showed to me that they had either brought in too many folks who were comfortable backstabbing their own, or that they had brought in folks too willing to take up a witchhunt instead of shipping good code. And for all that, what is Eich doing now? Hint: it’s not making the Web less commercial. Good going on that, Mozillans.
While I was not present at any board meetings, I certainly got no sense that Mozilla leadership forced him out. While there were a handful of Mozilla employees that spoke out against him, what makes Mozilla such an awesome place to work is the ability to do that. I’d be much more concerned if employees weren’t allowed to speak their minds. At the end of the day neither of us knows exactly what happened, and we can choose to believe what we will. All I can say is that, based on my view from the inside, I have no reason to disbelieve that Eich left Mozilla of his own volition.
Thank you for your reply, and especially for the other things you’d linked.
Part of the issue, I think, is that from where I’m sitting it’s kinda hard to see beyond the Mozilla marketing and propaganda and the coverage of what you all do, not least because the “Mozilla is the EFF of the Web! Libre software and freedom and rights and ponies and magic dust woohoo!” message that people seem to be receiving from you folks (or, more likely, from your cheerleaders outside the org) directly conflicts with things like having near half a billion in revenue, or saying that you need to compromise the public benefit for commercial profit. We get the wrong impression about what you all do, and then when you don’t measure up to that impression folks grouse.
Anyway, would you mind talking about what working at Mozilla is like? How large is it, where is it, how are projects decided on and moved around, that sort of thing. It’d be interesting to hear from somebody who actually works there.
(for what it’s worth, I’d rather see Mozilla be more like the OpenBSD foundation than Canonical, but we live in an imperfect world)
Firefox OS actually made some sense to me. Google made the initial bet: the way the web is growing into an all-encompassing platform where people increasingly run most things as webapps (e.g. Gmail, not Outlook), why not just go all the way, and turn Chrome into Chrome OS? If Mozilla thought this had a chance of success, responding with Firefox OS makes sense to me. It’d not do that much good for Firefox to be an alternative to Chrome on Windows, macOS, or Linux, if the future of Chrome was ChromeOS, where Firefox had no alternative. So they looked into building one.
In retrospect ChromeOS hasn’t taken off that much, but I’m not sure that was obvious, and I could easily imagine an alternate world where Mozilla ignored ChromeOS and then was caught flat-footed with no similar product (much like Microsoft missed the smartphone boat).
where Mozilla ignored ChromeOS and then was caught flat-footed with no similar product
That’s the thing, though…other than perceived opportunities to dick-measure with other engineering companies, there’s no market pressure on Mozilla in the same sense that there is with a more standard company releasing a product. They don’t have to have their fingers in every pie that might come up, and should focus more. They don’t need to try and outgoogle Google.
For what it’s worth, I think Microsoft “missing the smartphone boat” is similarly off-base thinking.
They don’t have to compete in a proper sense, right, but if somehow BrowserOS really did become the wave of the future, then imo it’d be important to have an alternative to the Chrome monoculture there too, just like it is on the desktop. Mostly I’m finding it not that hard to imagine them getting exactly the opposite criticism if a few things had turned out differently. If this had happened, I think people would be attacking Mozilla with the benefit of hindsight as being old-fashioned and backwards thinking: here they are still shipping only a desktop browser like some kind of last-decade chumps while everyone has moved to ChromeOS, and Mozilla can’t provide them an open-web alternative because the org was too old-fashioned and conservative to understand that they needed to make a FirefoxOS.
Worth noting Trident got turned into EdgeHTML, which will follow what WebKit does.
[…] Microsoft Edge matches ‘WebKit’ behaviors, not IE11 behaviors (any Edge-WebKit differences are bugs that we’re interested in fixing).
[Comment removed by author]
It’s on “us” to make sure that doesn’t happen, though. Make sure your code works in both Firefox and WebKit (i.e. Edge), and refuse to compromise when making estimates and delivering. That’s the least we can do as developers, and at least it leaves the door open for Firefox – if sites start rendering incorrectly in the browser, it won’t stand a chance at (re)gaining any of the market share.
I still can’t get Signal to work (which I believe is because it relies on certain software in the Google Play Services app which is blocked by the OS).
At this point I wonder why people still care about Signal. It has the same stupid policy as Whatsapp (requiring a phone number, leaking contact list, no custom apps on the official server) with no apparent will to change things. Even Whatsapp itself doesn’t require Google Play!
For that matter, I’d rather use Telegram with its memecrypto (unproven and full of snake-oil crap that Moxie himself debunked on his blog) than Signal. (Right now I don’t get a choice anyway.)
with no apparent will to change things
Moxie has been asking for contributions from people who care about some of these things for years. He’s made lists of what would be required for them to distribute outside the Play store, and to not use GCM on devices which don’t have Google services, but no one has written the code, just blog posts. They were doing blinded contact discovery but had to stop as it didn’t scale as they gained more users, they asked for ideas for alternatives and so far no one has come up with a solution that works.
wait, what? I was using an actively developed FOSS Android Signal client and they banned it from the server side. Moxie then posted in the associated GitHub thread saying that they would never support alternative clients. The third-party dev in question was perhaps being a bit entitled, but the response was very anti-FOSS. The official app is clearly open source purely for auditing and advertising purposes. Someone out there wants Signal very tightly tied to your phone number and passing everything through GCM.
https://github.com/LibreSignal/LibreSignal/issues/37#issuecomment-217211165
I never said anything about third-party clients. I don’t know how they could support them without them breaking every time a new feature gets added or the server changes. This already happened several times with Cyanogen’s WhisperPush client, and LibreSignal also had issues. This wouldn’t be a problem if it only affected the people using these clients, but it also affects people using the official client who try to message these people.
The way LibreSignal did websocket support wasn’t good enough to be included in Signal. Like I said, Moxie has posted the requirements for what a patch to make Signal work without GCM would need to do (it’s even in a comment in the discussion you linked to), but no one has written it. IMO “if you want us to support your niche use case, patches welcome” is the epitome of FOSS.
Also, nothing gets passed through GCM, it only sends a wakeup event to the phone to tell it to poll the server.
I actually like the phone number ID system, since it makes it much easier for me to get non-tech-savvy people on it. My parents have never signed up for an instant messaging system account in their lives, but they do send text messages, so the fact that Signal works “like texting, but over data instead of SMS, and encrypted” made it easy to get them to switch to it. All they had to do was install it and start using it. If they would’ve needed to create an account with a username and password, and then add people by username to a friends list, etc., they would never have done so, but the fact that they can keep “texting” my same number as before, just in a different app, made onboarding easy.
I understand that it makes it incredibly easy to get into the service from an app perspective, but it’s awful for privacy AND it cuts people like me off from ever being able to use it at all (not owning an Android or iOS device).
So use something else and quit complaining about people who do use it. You’re clearly a minority for Signal’s use case.
Signal is really nice because it manages to be both OSS/publicly auditable (unlike whatsapp) and very easy to setup for laymen (unlike the majority of open-source projects). Some things, like the phone number-tagging you mentioned, are necessary to fulfill the latter purpose.
Although I think that disabling javascript completely in Tor would be overkill, there is probably a middle ground that can be struck between full (Firefox-like) javascript features and completely turning it off – maybe a restricted mode of some sort?
This would allow Tor to have some protection against JavaScript 0-days while still keeping Tor useful, since a large majority of the web relies on JS.
Tor Browser has a “Security slider” feature which defaults to “Low”, which blocks some things (like canvas) to provide fingerprinting defenses; at Medium it disables the JS JIT, blocks web fonts, makes HTML5 video click-to-load, and blocks javascript on non-HTTPS sites; and at High it disables javascript everywhere.
Although the fact that people doing something very illegal, which usually results in long jail sentences, aren’t setting it to High makes me wonder how useful this is when people clearly don’t understand the threats.
And it could probably block more things at Low; do SVG animations really need to be on by default? But then that same question could be applied to Firefox itself.
[Comment removed by author]
JavaScript, Haskell, and even Rust also have a bunch of these ‘features’ that need to be learnt. It’s just the nature of the beast, nothing specific to C.
Using Haskell for even moderately complex systems usually requires you to use (and learn) several language extensions that are GHC-specific and can add complexity to the language. It’s not uncommon to see a file with 6-10 language extensions.
This isn’t necessarily a bad thing. The core language has had a conservative evolution and most of the extensions that you’ll actually use are safe and well-understood. It gives the programmer the ability to customize the language, which is neat. It’s not beginner-friendly, though. This isn’t a major problem for intermediate or advanced Haskell programmers, but it puts people off to the language, especially if no one tells them that they can use :set -XLanguageExtension in ghci to bring the language extension in and explore its effects.
Rust, like C++, is going to seem impossibly baroque if you’re learning it because you have to (i.e. because you were put on a project that uses it) and don’t understand the reasons why certain decisions were made. It makes explicit a lot of the rules that are implicit in sound C++, and those just take time to learn. If you get into it because you heard that it was like Haskell, you’re going to be disappointed, because it’s designed to be much more low level.
Yep. C’s just from a different era, where there was much less of a gap between the designer and the user. Stuff like this was par for the course – software and computers in general were more arcane and it was just sort of an accepted fact of life.
I dug into C’s history in detail. The design specifics of C were done the way they were mostly because (a) the authors liked BCPL, which forced the programmer to micro-manage everything; and (b) they didn’t think their weak hardware could do anything better, and it occasionally dictated things. BCPL was actually made due to (b), too. It wasn’t about design or arcana so much as (a) and (b). It then got too popular and fast-moving to redo everything, as people wanted to add stuff instead of rewriting and fixing stuff.
Revisionist history followed to make nicer justifications for continuing to use that approach, which was really chosen for personal and economic reasons on ancient hardware. That simple.
I would argue, however, that if C had not been such a strong fit for a certain (rather low, compared to what most of us do) level of abstraction, it wouldn’t have been successful. If C had been less micromanage-y, then the lingua franca for low/mid-level system programming would be some other language from the thousands that we’ve never heard of. Maybe it would be better than C, and maybe not; it’s hard to say.
Modula-2 and Edison were both safer and were done on the same class of machine. Easier to parse and easy enough to compile. Just two examples from people who cared about safety in language design.
http://modula-2.info/m2r10/pmwiki.php/Spec/DesignPrinciples
Modula-2 was designed for their custom Lilith machine:
https://en.wikipedia.org/wiki/Lilith_(computer)
These developments led to the Oberon language family and operating system:
https://en.wikipedia.org/wiki/Oberon_(programming_language)
Also note that LISP 1.5 and Smalltalk-80 were ultra-powerful languages operating on weak hardware. I’m not saying they had to go with them so much as the design space for safety vs efficiency vs power tradeoffs was huge with everyone finding something better than BCPL except the pair that preferred it. ;)
EDIT: Check your Lobsters messages as I put something better in the inbox.
C was less designed than organically grown over the past 40 years. Even if it was removed, you’re going to need to learn it to be able to read C.
Once you learn this, it’s not that big of a deal.
I think that not having a better macro syntax built into the language is just a byproduct of the fact that C fills a niche between usability and control. Speaking generally, if one were to standardize too many of these ‘shortcuts’, C may become more usable but also might become more bloated and infringe upon access to low level control. I think people want access to some low level features without being forced to use assembly.
I’m not necessarily saying that this applies to do {...} while (0) (because IMO C should offer a better way to do this), but I think there’s a need to recognize a slippery slope of making higher level/black box things part of a language geared towards granular control.
I think that not having a better macro syntax built into the language is just a byproduct of the fact that C fills a niche between usability and control.
The designers had a PDP-11 with a tiny memory/CPU, optimized for space/performance, preferred tedious BCPL, and didn’t believe in high-level languages or features like code as data. That combination, plus maybe backward compatibility, led to the preprocessor hack. It was really that simple. What you’re posting is revisionist, although probably unintentionally.
[Comment removed by author]
I noticed all the people doing the more secure stuff intentionally went with a PDP-11/45. Difference must have been significant. In any case, they could’ve still done basic type-checking and such on the other one. My main counterpoint was that they could do Modula-2-style safety by default with checks turned off where necessary on per module, function, or app basis. All sorts of competing language designers did this. Hard to tell what would’ve been obvious in the past but it seems to me they could’ve seen it and just didn’t care. Personal preference.
[Comment removed by author]
Thanks for the details! I think you’re right about using the earlier model to boost credibility. Due to broken memory, I can’t remember specifically, but I know I read something along those lines in one of the historical papers.
Not really; it was more bolted on from some things that were floating around Bell Labs at the time. The original language designers had little to do with it.
To quote dmr:
| Many other changes occurred around 1972-3, but the most important was the introduction of the preprocessor, partly at the urging of Alan Snyder [Snyder 74], but also in recognition of the utility of the file-inclusion mechanisms available in BCPL and PL/I. Its original version was exceedingly simple, and provided only included files and simple string replacements: #include and #define of parameterless macros. Soon thereafter, it was extended, mostly by Mike Lesk and then by John Reiser, to incorporate macros with arguments and conditional compilation. The preprocessor was originally considered an optional adjunct to the language itself. Indeed, for some years, it was not even invoked unless the source program contained a special signal at its beginning. This attitude persisted, and explains both the incomplete integration of the syntax of the preprocessor with the rest of the language and the imprecision of its description in early reference manuals.
There is no principled endpoint to this – if you followed this sort of reasoning, then everything would be protected under free speech, including all physical objects. That being said, I am not optimistic for the government’s prospects in this case. At least two appeals courts, the 9th circuit (Bernstein v. US), and the 6th circuit (Junger v. Daley) have already explicitly held that source code is free speech. It is only a matter of time before CAD files (imo, incorrectly) will be interpreted as closer to source code than physical objects.
Source code, technical data, poems, and songs are all intangible; you could conceivably speak the information aloud. Where is the slippery slope to physical objects?
If you agree with the following three statements:
then, I think, you are logically forced into accepting that it will be impossible to regulate any 3D printed objects, since it is (both legally and practically) infeasible to restrict what people do on their own 3D printers in their own homes.
It’s not legally impossible to restrict use of 3D printers any more than it is to restrict use of CNC machines. “Shop guns” are a thing and, if unregistered, are manifestly illegal in California.
I don’t think that logically follows, free speech doesn’t mean you can say anything you want.
Absent imminent lawless action, I think it does. What do you think it means?
Imminent lawless action is one of a few categories of unprotected speech. Others include libel and false advertising.
This is in the context of safety/security-based restrictions on free speech (i.e. guns). Should have made that clearer.
Oh, I see what you mean. That doesn’t lead to “everything” being protected by free speech; 3D printed objects are a subset of all physical objects. You can’t 3D-print uranium, for instance.
It would be impossible to regulate plastic shapes… but so what?