I saw an e-ink bike speedometer thing at the store a couple weeks ago and thought “oh that’s cool, it might be easier to read in the sun”… but then realized I’ve had the same non-lit LCD bike speedometer for twenty years now and it works perfectly fine in the sun, just like the old non-lit LCD wristwatch I’ve had for ages. Made me think: could many of the same advantages we ascribe to e-ink be achieved by regular old non-backlit LCDs at much lower cost? I legit don’t know.
Mostly, yes. The biggest difference is that while traditional LCDs consume power to hold their state, eInk displays consume power only to move between states. This is why trying to put them in places that need high refresh rates isn’t great: even if they can do it, they will consume a lot of power. They are great for situations where you update no more than every few seconds (hmm, now I want to build an eInk sundial). The newer ones may have better contrast than normal LCDs; I’m not sure you can make an LCD with a white background.
There are larger reflective LCD panels out there at least. As cool as eink is, reflective LCD is probably the right choice for a monitor. Memory LCD is another decent in-between option; Pebble used them on smartwatches.
I wish transflective panels actually kept their promise that they’d be usable in the sun without any backlight. Every one I saw was unusable in daylight at 0% backlight. And I wish Pixel Qi was used in devices I’d want to use (i.e. not an OLPC)….
Yes, one example is https://banglejs.com/
I’m certainly looking forward to reading it, as an assembly and ISA geek… but the way the author chose to display a PDF on the web is really egregious. ;)
Agreed. If anyone finds a link to the pdf, and doesn’t feel it’s too ethically transgressive to post it, I and my e-reader would be most grateful.

Here’s another web pdf viewer link for it, but this one includes an option to download it (in the double angle bracket menu on the top right).

I thought I found it but it was only a copy of that weird format.

I can’t find any seller either, so that leaves only shadow libraries :-/

Anna’s Archive and Z-Library both have it.
But I’d suggest “Computer Organization and Design RISC-V Edition: The Hardware Software Interface” (2021 version, aka 6th edition, aka 2nd RISC-V edition) over it.
“RISC-V Reader” is also a good read: a turbo introduction for those who already know other assembly languages.
And, of course, the RISC-V unprivileged and privileged specs themselves.
Any freely available PDF?
Very misleading article. There has been an avalanche of these recently.
It constantly implies that this law is going to prevent people from writing AI systems, while as far as I can see the law is (simplifying a bit) about selling them. edit: and using them in sensitive applications such as law enforcement
I don’t see how any telemetry transmitted via the internet that is opt-out is not a direct violation of the GDPR. The IP address that is transmitted with it (in the IP packets) is protected information that you don’t have consent to collect - you failed at step 0 and broke the law before you even received the bits you actually care about.
Of course, the GDPR seems to be going routinely unenforced except against the largest and most blatant violations, but I really don’t see why a company like Google would risk it, or why other large companies are actively risking it.
My understanding of the GDPR was that IP addresses are not automatically PII. Even in situations where they are, simply receiving a connection from an IP address does not incur any responsibilities, because you require the IP for technical reasons to maintain the connection. It’s only when you record the IP address that you may hit issues. You can generally use some fairly simple differential-privacy techniques to manage this (e.g. drop one of the bytes from your log).
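(To illustrate the “drop one of the bytes” idea: a minimal sketch that masks the last octet of IPv4 addresses before lines reach long-term storage. The paths, log format, and function name are hypothetical.)

    # Zero the last octet of IPv4 addresses, so stored logs no longer
    # identify a specific host:
    anonymize_ips() {
      sed -E 's/([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})\.[0-9]{1,3}/\1.0/g'
    }
    tail -F /var/log/app/access.log | anonymize_ips >> /var/log/app/access.anon.log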
The EU has ruled that IP addresses are GDPR::PII, sadly.

There’s nothing sad about it. I bet that you think that your home address, ICBM coordinates, etc. are PII too.

Do you have a link to that ruling? I’d be very interested in reading it.
(30) Natural persons may be associated with online identifiers provided by their devices, applications, tools and protocols, such as internet protocol addresses, cookie identifiers or other identifiers such as radio frequency identification tags. This may leave traces which, in particular when combined with unique identifiers and other information received by the servers, may be used to create profiles of the natural persons and identify them.
(emphasis mine. via the GDPR text, Regulation (EU) 2016/679)

fwiw, “PII” is a US-centric term that isn’t used within the GDPR, which instead regulates “processing personal data”.
This doesn’t actually say that collecting IP addresses is not allowed. It only states that when the natural person is known, online identifiers could be used to create profiles.
Furthermore, this is only relevant if those online identifiers are actually processed and stored. According to the Google proposal they are not; they only keep a record of the anonymous counters, which is 100% fine under the GDPR.
(IANAL) I’d seen analytics software like Fathom and GoatCounter rely on (as you mention) anonymised counters to avoid creating profiles on natural persons, but also we’ve seen a court frown upon automatic usage of Google Fonts due to automatic transmission of IP addresses to servers in the US.
It’s a shame the go compiler isn’t well positioned UX-wise to ask users for opt-in consent at installation (as an IDE might) since that’d likely solve privacy concerns while reaching folk that don’t know about an opt-in config flag.
[admittedly, Google already receives IP addresses of Go users through https://proxy.golang.org/ anyway (which does log IP addresses, but “for [no] more than 30 days”) ¯\_(ツ)_/¯]
Yes, IP addresses are not automatically PII, but if you can’t ensure they are not, you must assume they are. The telemetry data itself is probably not PII, because it’s anonymized.
The GDPR prohibits processing[0] of (private) data, but contains some exceptions. The most commonly used one is fulfillment of a contract (this doesn’t need to be a written contract with payment). So assume you have an online shop: a user orders, say, a printer, and you need their address to ship it to them. But when the user orders an ebook, you don’t need the address, because there’s nothing to ship. In the case of Go, the service would be compiling Go code; I don’t see a technical requirement to send Google your IP address.

The next common exception is a requirement by other law (e.g. tax law or anti-money-laundering law). I think there is none here.

The next one is user consent: you know those annoying cookie banners. Consent must be explicit and can’t be assumed (and dark patterns are prohibited). So this requires an opt-in.

The next one would be legitimate interest. This is more or less the log-file exception. Here you might argue that the Go team needs this data to improve their compiler. I don’t think this would stand, because other compilers work pretty well without telemetry.

So altogether I[1] would say the only legal way to collect the telemetry data is some sort of user consent.

[0] Yes, processing, not only storing, so having a web server answering HTTP requests might also fall under the GDPR.

[1] I’m not a lawyer
You are wrong. The GDPR is not some magic checkbox that says “do not ever send telemetry”. The GDPR cares about PII, and your IP address and a bunch of anonymous counters are simply not PII. There is nothing to enforce in this case.
If something is permitted by the law, it doesn’t automatically mean it’s also good.

It’s a good thing that nobody’s arguing that, then.

Hah, you’re right, I must have mixed up two comments. Glad we all agree then :)
Nix solves the problem of unstable tarballs by unpacking them then re-packing them as NAR, its own stable archive format (Figure 5.2). It works quite well, and it’s generally useful for hashing trees of files, for example when projects don’t publish tarballs at all or when you need a hash of a generic result of a network operation (like cargo vendor or go mod download).
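(For illustration, hashing by NAR serialization rather than by tarball bytes; the URL and directory here are placeholders:)

    # Hash of the unpacked tree (its NAR serialization), stable even if the
    # tarball is later re-compressed differently:
    nix-prefetch-url --unpack https://example.com/foo-1.0.tar.gz

    # The same style of hash for a local directory:
    nix-hash --type sha256 --base32 ./foo-1.0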
Ironically most of the spam I receive comes from the big providers, google most of the time. Their outbound ham/spam ratio is abysmal, yet blocking them sounds ridiculous.
Ironically most of the spam I receive comes from the big providers, google most of the time.

This is my experience as well. The vast majority of the spam that is not flagged as such by my rspamd comes from Google. I’ve seen it both from @gmail.com addresses and from email domains they host. I probably get a dozen of these in my inbox every day.
The project has posted a response.

Further response by Drew DeVault
I understand why people write these articles, but the argumentation here is just “I don’t like this”. Can’t we talk about design with better arguments? Maybe even data? (Not that I have any at hand.)
but the argumentation here is just “I don’t like this”
That’s just the title. The article goes a bit more in depth on why the author considers the new design to be worse, though you’re right about the data, the article only presents anecdotes:
I feel like the designers of this new theme have never sat down with anyone who’s not a “techie” to explain to them how to use a computer. While a lot of people now instinctively hunt for labels that have hover effects, for a lot of people who are just not represented in online computer communities because they’re just using the computer as a tool, this is completely weird. I have had to explain to people tons of times that a random word in the UI somewhere in an application is actually a button they can press to invoke an action.
That’s fair, I missed that paragraph. I agree that it’s not data though.
“Data” doesn’t make arguments automatically better. Quantitative analysis isn’t appropriate for everything, and even when it may be useful, you still need a qualitative analysis to even know what data to look at and how to interpret it.
This is the kind of reply that’s easy to agree with. 🙂
I was prepared to groan but there are some superb sentiments in here, well articulated. I’d be interested to know why it was written now and if it’s meant to signal any changes in direction for Firefox.
These two lines stood out to me as things that mean a lot to me but honestly wouldn’t have expected to be said by Mozilla.
Our strategy is to categorize [web] development techniques into increasing tiers of complexity, and then work to eliminate the usability gaps that push people up the ladder towards more complex approaches.
…people have a user agent — the browser — which acts on their behalf. A user agent need not merely display the content the site provides, but can also shape the way it is displayed to better represent the user’s interests.
I’d be interested to know why it was written now and if it’s meant to signal any changes in direction for Firefox
It’s more of a “justification” than a new direction:
Background is that other browsers develop and ship APIs that are sometimes hard or even impossible to bring to the web platform without conflicting with the core values of Mozilla. Pushing back on individual standards can be time consuming and repetitive. (See https://mozilla.github.io/standards-positions/)
Among other things, this document serves as a long form explanation of these core values.
Background is that other browsers develop and ship APIs that are sometimes hard or even impossible to bring to the web platform without conflicting with the core values of Mozilla. Pushing back on individual standards can be time consuming and repetitive. (See https://mozilla.github.io/standards-positions/)
Thank you for that link! I took a look at the “harmful” section and I was shocked by the number of bad ideas. And they keep coming! It’s great that there’s at least someone opposing this madness.
I will never get over the ultimate in bad ideas: the SVG working group trying to give SVG raw sockets access.

You’re not serious?
Yuuuup, there’s quite a story, but basically it boils down to this: mobile phone software manufacturers in Japan (I think?) were required to implement things in terms of standards (or something like that). Nobody thought a full browser was possible at the time, and the HTML5 spec was still in its infancy, so it hadn’t split out into sub-specifications yet.

That meant that to get (for example) XHR they’d need to implement a full browser. Obviously such a thing was impossible on a phone :D
The solution was to give the SVG spec everything that they needed, including raw sockets, an ECMAScript subset that only had integers, etc
Suffice to say that when we implemented SVG in mobile safari we said raw sockets were not a thing that would happen.
The SVG WG of the era was not the most functional.
Oh my gosh this sounds horrifying. Thanks for the explanation above!
I used to give the same advice, but I completely changed my opinion over the past 10 years or so. I eventually put in the time and learned shell scripting. These days my recommendation is:
Learn to use the shell. It’s a capable language that can take you very far.
Use ShellCheck to automatically take care of most of the issues outlined in the article.
I really don’t want to figure out every project’s nodejs/python/ruby/make/procfile abomination of a runner script anymore. Just like wielding regular expressions, knowing shell scripting is a fundamental skill that keeps paying dividends over my entire career.
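(For instance, a made-up line of the sort ShellCheck flags immediately:)

    # ShellCheck catches the classic unquoted-expansion bug:
    rm -rf $build_dir/output    # SC2086: Double quote to prevent globbing and word splitting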
Bingo. My advice is:

Always use #!/usr/bin/env bash at the beginning of your scripts (change if you need something else, don’t rely on a particular path to bash though).
Use set -euo pipefail after that.
Use ShellCheck.
Use shfmt.
Always pay attention to what version of bash you need to support, and don’t go crazy with “new” features unless you can get teammates to upgrade (this is particularly annoying because Apple ships an older version of bash without things like associative arrays).
Always use the local storage qualifier when declaring variables in a function.
As much as possible, declare things in functions and then at the end of your script kick them all off.
Don’t use bash for heavy-duty hierarchical data munging…at that point consider switching languages.
Don’t assume that a bashism is more broadly acceptable. If you need to support vanilla sh, then do the work.
While some people like the author will cry and piss and moan about how hard bash is to write, it’s really not that bad if you take those steps (which to be fair I wish were more common knowledge).
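(A minimal skeleton following the advice above; illustrative only, not from the original comment:)

    #!/usr/bin/env bash
    set -euo pipefail

    greet() {
      local who=$1   # 'local' keeps the variable scoped to this function
      printf 'Hello, %s\n' "$who"
    }

    main() {
      local name=${1:-world}
      greet "$name"
    }

    # Everything lives in functions; the only top-level statement kicks it off.
    main "$@"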
To the point some folks here have already raised, I’d be okay giving up shell scripting. Unfortunately, in order to do so, a replacement would:
Have to have relatively reasonable syntax
Be easily available across all *nix-likes
Be guaranteed to run without additional bullshit (installing deps, configuring stuff, phoning home)
Be usable with only a single file
Be optimized for the use case of bodging together other programs and system commands with conditional logic and first-class support for command-line arguments, file descriptors, signals, exit codes, and other *nixisms.
Be free
Not have long compile times
There are basically no programming languages that meet those criteria other than the existing shell languages.
Shell scripting is not the best tool for any given job, but across every job it’ll let you make progress.
(Also, it’s kinda rich having a Python developer tell us to abandon usage of a tool that has been steadily providing the same, albeit imperfect, level of service for decades. The 2 to 3 switch is still a garbage fire in some places, and Python is probably the best single justification for docker that exists.)
While some people like the author will cry and piss and moan about how hard bash is to write, it’s really not that bad if you take those steps (which to be fair I wish were more common knowledge).
I think “nine steps”, including “always use two third-party tools” and “don’t use any QoL features like associative arrays”, does, in fact, make bash hard to write. Maybe Itamar isn’t just here to “cry and piss and moan”, but actually has experience with bash and still thinks it has problems?
To use any language effectively there are some bits of tribal knowledge…babel/jest/webpack in JS, tokio or whatever in Rust, black and virtualenv in Python, credo and dialyzer in Elixir, and so on and so forth.
Bash has many well-known issues, but maybe clickbait articles by prolific self-promoters that don’t offer a path forward also have problems?
If your problem with the article is that it’s clickbait by a self-promoter, say that in your post. Don’t use it as a “gotcha!” to me.
I think there’s merit here in exploring the criticism, though room for tone softening. Every language has some form of “required” tooling that’s communicated through community consensus. What makes Bash worse than other languages that also require lots of tools?
There are a number of factors at play here, and I can see where @friendlysock’s frustration comes from. Languages exist on a spectrum between lots of tooling and little tooling. I think something like SML is on the “little tooling” end, where compilation alone is enough to add high assurance to the codebase. Languages like C are on the low-assurance part of this spectrum, where copious use of noisy compiler warnings, analyzers, and sanitizers is needed to guide development. Most languages live somewhere in between. What makes Bash’s particular compromises deleterious or not deleterious?
Something to keep in mind is that (in my experience) the Lobsters userbase seems to strongly prefer low-tooling languages like Rust over high-tooling languages like Go, so that may be biasing the discussion and reactions thereof. I think it’s a good path to explore though because I suspect that enumerating the tradeoffs of high-tooling or low-tooling approaches can illuminate problem domains where one fits better than the other.
I felt that I sufficiently commented about the article’s thesis on its own merits, and that bringing up the author’s posting history was inside baseball not terribly relevant. When you brought up motive, it became relevant. Happy to continue in DMs if you want.
You’re really quite hostile. This is all over scripting languages? Or are you passive aggressively bringing up old beef?
Integrating shellcheck and shfmt into my dev process enabled my shell programs to grow probably larger than they should be. One codebase, in particular, is nearing probably 3,000 SLOC of Bash 5, and I’m only now thinking about how v2.0 should probably be written in something more testable that reuses some existing libraries instead of reimplementing things myself (e.g., this one basically has a half-complete shell+curl implementation of the Apache Knox API). The chief maintenance problem is that so few people know shell well, so when I write “good” shell like I’ve learned over the years (and shellcheck --enable=all has taught me A TON), I have trouble finding coworkers to help out or to take it over. The rewrite will have to happen before I leave, whenever that may be.
I’d be interested in what happens when you run your 3000 lines of Bash 5 under https://www.oilshell.org/ . Oil is the most bash-compatible shell – by a mile – and has run thousands of lines of unmodified shell scripts for over 4 years now (e.g. http://www.oilshell.org/blog/2018/01/15.html)
I’ve also made tons of changes in response to use cases just like yours, e.g. https://github.com/oilshell/oil/wiki/The-Biggest-Shell-Programs-in-the-World
Right now your use case is the most compelling one for Oil, although there will be wider appeal in the future. The big caveat now is that it needs to be faster, so I’m actively working on the C++ translation (oil-native passed 156 new tests yesterday).
I would imagine your 3000 lines of bash would be at least 10K lines of Python, and take 6-18 months to rewrite, depending on how much fidelity you need.
(FWIW I actually wrote 10K-15K lines of shell as 30K-40K lines of Python early in my career – it took nearly 3 years LOL.)
So if you don’t have 1 year to burn on a rewrite, Oil should be a compelling option. It’s designed as a “gradual upgrade” from bash. Just running osh myscript.sh will work, or you can change the shebang line, run tests if you have them, etc.
There is an #oil-help channel on Zulip, linked from the home page.
Thanks for this nudge. I’ve been following the development of Oil for years but never really had a strong push to try it out. I’ll give it a shot. I’m happy to see that there are oil packages in Alpine testing: we’re deploying the app inside Alpine containers.
Turns out that I was very wrong about the size of the app. It’s only about 600 SLOC of shell :-/ feels a lot larger when you’re working on it!
One thing in my initial quick pass: we’re reliant on bats for testing. bats seemingly only uses bash. Have you found a way to make bats use Oil instead?
I wouldn’t expect this to be a pain-free experience; however, I would say it should definitely be less effort than rewriting your whole program in another language!
I have known about bats for a long time, and I think I ran into an obstacle but don’t remember what it was. It’s possible that the obstacle has been removed (e.g. maybe it was extended globs, which we now support)
In any case, if you have time, I would appreciate running your test suite with OSH and letting me know what happens (on Github or Zulip).
One tricky issue is that shebang lines are often #!/bin/bash, which you can change to be #!/usr/bin/env osh. However one shortcut I added was OSH_HIJACK_SHEBANG=osh
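(Presumably used along these lines; this is a guess from the description above, not verified against the Oil docs:)

    # Assumption based on the comment above: exporting this makes OSH handle
    # child scripts itself even when their shebang line says #!/bin/bash.
    export OSH_HIJACK_SHEBANG=osh
    osh ./myscript.sh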
Moving away from Python? Now it has my interest… in the past I skipped it, knowing it’d probably take perf hits and have some complicated setup that isn’t a static binary.
Yes that has always been the plan, mentioned in the very first post on the blog. But it took awhile to figure out the best approach, and that approach still takes time.
Python is an issue for speed, but it’s not an issue for setup.
You can just run ./configure && make && make install and it will work without Python.
Oil does NOT depend on Python; it just reuses some of its code. That has been true for nearly 5 years now – actually since the very first Oil 0.0.0 release. Somehow people still have the idea that it’s going to be hard to install, when that’s never been the case. It’s also available on several distros like Nix.
What is the status of Oil on Windows (apologies if it’s in the docs somewhere, I couldn’t find any mention of this)? A shell that’s written in pure C++ and has Windows as a first-class citizen could be appealing (e.g. for cross-platform build recipes).
It only works on WSL at the moment … I hope it will be like bash, and somebody will contribute the native Windows port :-) The code is much more modular than bash and all the Unix syscalls are confined to a file or two.
I don’t even know how to use the Windows syscalls – they are quite different from Unix! I’m not sure how you even do fork() on Windows. (I think Cygwin has emulation, but there is no way to do it without Cygwin.)
To the point some folks here have already raised, I’d be okay giving up shell scripting. Unfortunately, in order to do so, a replacement would: […]
There are basically no programming languages that meet those criteria other than the existing shell languages.
I believe Tcl fits those requirements. It’s what I usually use for medium-sized scripts. Being based on text, it interfaces well with system commands, but does not have most of bash’s quirks (argument expansion is a big one), and can handle structured data with ease.
Always use #!/usr/bin/env bash at the beginning of your scripts (change if you need something else, don’t rely on a particular path to bash though).
I don’t do this. Because all my scripts are POSIX shell (or at least as POSIX complaint as I can make them). My shebang is always #!/bin/sh - is it reasonable to assume this path?
you will miss out on very useful things like set -o pipefail, and in general you can suffer from plenty of subtle differences between shells and shell versions. sticking to bash is also my preference for this reason.
note that the /usr/bin/env is important to run bash from wherever it is installed, e.g. the homebrew version on osx instead of the ancient one in /bin (which doesn’t support arrays iirc and acts weirdly when it comes across shell scripts using them)
My shebang is always #!/bin/sh - is it reasonable to assume this path?
Reasonable is very arbitrary at this point. That path is explicitly not mandated by POSIX, so if you want to be portable to any POSIX-compliant system you can’t just assume that it will exist. Instead POSIX says that you can’t rely on any path, and that scripts should instead be modified according to the system standard paths at installation time.
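(For example, an install step might patch the shebang to whatever the target system provides; a sketch, not something POSIX itself specifies:)

    # Replace the shebang with the sh found on this system's PATH at install time:
    sed -i.bak "1s|^#!.*|#!$(command -v sh)|" ./myscript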
I’d argue that these days POSIX sh isn’t any more portable than bash in any statistically significant sense though.
Alpine doesn’t have Bash, just a busybox shell. The annoying thing is if the shebang line fails because there is no bash, the error message is terribly inscrutable. I wasted too much time on it.
https://mkws.sh/pp.html hardcodes #!/bin/sh. POSIX definitely doesn’t say anything about sh’s location, but I really doubt you won’t find a sh at /bin/sh on any UNIX system. Can anybody name one?
That’s because the other ones are options and not errors. Yes, typically they are good hygiene but set -e, for example, is not an unalloyed good, and at least some experts argue against using it.
There are tons of pedants holding us back IMO. Yes, “set -e” and other options aren’t perfect, but if you even know what those situations are, you aren’t the target audience of the default settings.
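(A classic example of the kind of surprise the experts have in mind; under set -e, this innocent-looking line can abort the whole script:)

    set -e
    # grep exits 1 when it finds no matches; the assignment inherits that
    # exit status, and 'set -e' then terminates the whole script here:
    count=$(grep -c error /var/log/app.log)
    echo "error lines: $count"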
Yup, that’s how you do it. It’s a good idea to put in the time to understand shell scripting; most of the common misconceptions come out of misunderstanding. The shell is neither fragile (it’s been in use for decades, so it’s very stable) nor ugly (coming from JavaScript, shell script seemed ugly indeed at first; now I find it very elegant). Keeping things small and simple is the way to do it. When things get complex, create another script. That’s the UNIX way.
It’s the best tool for automating OS tasks. That’s what it was made for.
+1 to using ShellCheck, I usually run it locally as shellcheck -s sh for POSIX compliance.
I even went as far as generating my static sites with it https://mkws.sh/. You’re using the shell daily for displaying data in the terminal, it’s a great tool for that, why not use the same tool for displaying data publicly.
I went the opposite direction - I was a shell evangelist during the time that I was learning it, but once I started pushing its limits (e.g. CSV parsing), and seeing how easy it was for other members of my team to write bugs, we immediately switched to Python for writing dev tooling.
There was a small learning curve at first, in terms of teaching idiomatic Python to the rest of the team, but after that we had far fewer bugs (of the type mentioned in the article), much more informative failures, and much more confidence that the scripts were doing things correctly.
I didn’t want to have to deal with package management, so we had a policy of only using the Python stdlib. The only place that caused us minor pain was when we had to interact with AWS services, and the solution we ended up using was just to execute the aws CLI as a subprocess and ask for JSON output. Fine!
I tend to take what is, perhaps, a middle road. I write Python or Go for anything that needs to do “real” work, e.g. process data in some well-known format. But then I tie things together with shell scripts. So, for example, if I need to run a program, run another program, and then combine the outputs of the two somehow, there’s a Python script that does the combining, and a shell script that runs all three and feeds them their inputs.
I also use shell scripts to automate common dev tasks, but most of these are literally one-ish line, so I don’t think that counts.
(Although I will agree that it’s annoying that shell has impoverished flag parsing … So I actually write all the flag parsers in Python, and use the “task file” pattern in shell.)
(many code examples from others in that post, also almost every shell script in https://github.com/oilshell/oil is essentially that pattern)
There are a lot of names for it, but many people seem to have converged on the same idea.
I don’t have a link handy not but Github had a standard like this in the early days. All their repos would have a uniform shell interface so that you could get started hacking on it quickly.
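(A sketch of that “task file” pattern, with hypothetical task names:)

    #!/bin/sh
    # Each task is a shell function; the last line dispatches the first CLI
    # argument to the function of the same name, e.g.: ./run.sh build
    build()  { echo "building..."; }
    deploy() { build; echo "deploying..."; }

    "$@"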
We have plenty of good templating languages: Jinja2, Liquid, Askama, Phoenix.Template, etc. They generally don’t need arcane quoting rules, are happy to ingest arbitrary data from outside programming languages, don’t have insane scoping for declarations, etc. I personally am happy to leave m4 in the mausoleum where it belongs.
M4 IS THE TRUE PATH TO NIRVANA! M4 HAS BEEN THE CHOICE OF EDUCATED AND IGNORANT ALIKE FOR CENTURIES! M4 WILL NOT CORRUPT YOUR PRECIOUS BODILY FLUIDS!! M4 IS THE STANDARD TEXT PREPROCESSOR! M4 MAKES THE SUN SHINE AND THE BIRDS SING AND THE GRASS GREEN!!
There are situations when you don’t want to commit to a programming language runtime but you still need a templating language. That’s the use case I’ve found for m4. (Not that I disagree with any of the criticism of it being arcane and hard to debug; I just haven’t found a language-agnostic replacement for it yet. Pandoc is a bit too large for these situations; container configs come to mind.)
The snarky argument is that m4 is also bound to a programming language runtime, it just happens to be C. :-P That’s not actually correct though.
This is actually an interesting distinction, and also possibly one of the things that makes m4 such a bear to do complicated things with while making it nice to do simple things with. All the other tools I named are designed to be controlled from a programming language, not a standalone exe. That means that they have more or less that programming language’s data model, not just plain text or their own slightly-half-assed data model, and also that you can pull some of the more complicated operations out of them or inject new functions into them, so that’s the source of a lot of their power, and you don’t need to do as much of your logic in macros.
On the flip side, there’s no real reason you couldn’t write a standalone Python/whatever program to run Jinja2 on arbitrary files and take its data from whatever source you want; command line args, inline declarations, JSON/TOML/whatever file input, etc. But that doesn’t seem to have become popular for some reason.
On the flip side, there’s no real reason you couldn’t write a standalone Python/whatever program to run Jinja2 on arbitrary files and take its data from whatever source you want; command line args, inline declarations, JSON/TOML/whatever file input, etc. But that doesn’t seem to have become popular for some reason.
The snarky argument is that m4 is also bound to a programming language runtime, it just happens to be C. :-P That’s not actually correct though.
Unfortunately I’ve already accepted the C runtime as dependency for most of my code 😛
I’ll be clear that given a choice I will pick a programming language’s templating environment over M4 any day. The only use for M4 is when you don’t want to do that. For me that’s usually config files used around boot. I would really like to see something with M4’s footprint that is less wonky than M4.
There are a few compiled Jinja (or Jinja-like) implementations in the wild. For example, https://tera.netlify.app/docs is just 3 or so lines of code away from being a (basic) standalone, static tool which doesn’t require a runtime.
Note how the letter says Barinsta “may” violate such and such law. Since it’s not an outright accusation, the lawyer who wrote this doesn’t risk anything. They can always say they wrote it “just in case” or something.
About the violation of terms of service… are they even bound to those terms of service? Maybe Barinsta users are, but the developers of the software itself?
I would be very interested in a discussion around this. The author explicitly didn’t make any statements about differences between password managers. The claim is that they all have equally big attack surfaces if they use web extensions.
Counter-example: Bitwarden does not inject any elements (it adds properties to input fields though). The extension’s drop-down interface has to be used. Does that make it safer? Or am I missing something?
Isn’t there a standardised API for password manager inputs in browsers? If not, why not?
Seems like it would stop all these password managers from reinventing the wheel every time; reduce some attack surface by having it built into the browser itself rather than injected elements.
yeah it feels like having OS’s or browsers offer a standard hook for credential storage and having the tools use it would resolve a lot of this stuff. I think the iOS stuff works very well, though there’s a lot of uncertainty about what domain you’re on inside app stuff sometimes, but it usually “fails” in the right direction (not filling in credentials vs filling in incorrect credentials)
Really? I use KeePassDX on Android, with Firefox, and it seems to work fairly smoothly through the autofill framework. It also provides a fake/specialized keyboard implementation for places where autofill doesn’t work.
There’s just a fundamental risk when mixing things inside an untrusted sandbox (the web page) and out where your secrets are. It’s much easier to do that well if you’re building a browser than if you’re building an extension - and even then there’s a long history of bugs with how browsers have done it.
I haven’t looked too closely at the script, but it looks like it does two things: pull out the structure of the page, and then fill it. I have no clue whether either of these are exploitable, but it’s definitely not vulnerable to this sort of redress attack. In particular, the only way to trigger password fill is to click the extension icon or use the right-click menu, both of which are not vulnerable to the same sort of redress/IPC attacks that Tavis mentions. (Well, I guess you could write some JS to fake the right-click menu. But I’m not sure what the way around this is.)
It does inject elements (I think) if you fill in a page to show a ‘would you like to remember this password’-type dialog box, but that’s not really much of an attack surface.
Bitwarden can also be self-hosted, though this doesn’t protect you from the browser extension being malicious.
As someone who uses trackers on a daily basis, I have to say that’s a very solid write-up. Perhaps Jeskola Buzz could have been discussed a bit more, since it was pretty big in the late 90s/early 2000s. There’s also Buzztrax, which continues the legacy on Linux. Other than that, the article is pretty comprehensive. There were some tracker-like editors before Ultimate Soundtracker, but that’s obviously out of scope for this one.
There are a number of reasons why I prefer trackers over any other means of composing computer music.
The keyboard-driven workflow is just so much faster than clicking around with the mouse.
The minimalistic UI helps me focus.
Trackers are ubiquitous on 80s home computers and consoles. That’s great for a chiptune musician like me, because once you know one tracker, you can easily get started on another one. So I can turn most of my 8-bit junk into a musical instrument with very little learning effort.
What’s interesting to me is that trackers actually make the music writing process more akin to programming. This becomes especially apparent in Chiptune, where you’re basically writing a list of instructions to be interpreted by the music player.
The keyboard-driven workflow is just so much faster than clicking around with the mouse.
What about trackers versus playing a MIDI keyboard? In your opinion, does entering notes in a tracker have advantages over playing them on a MIDI keyboard?
Caveat, I don’t have so much experience with MIDI. For many years I was moving around a lot (aka “my home is where my laptop is”), so I never bothered carrying around a MIDI controller. Nowadays you get these really small controllers, but back in the day these things were clunky. So anyway, I’m not super qualified to answer this.
Generally, I think the use-case is different. MIDI shines in 3 situations. a) when you have a fixed setup, eg. one DAW + a standard set of plugins that you always use. If you’re exploring many different tools and platforms, then the overhead from setting up MIDI controls is usually not worth it. b) for adding a live/human feel to an existing basic structure, ie. when you want to be less than 100% consistent/timing accurate. If you actually want to be precise, then you need to be able to play very well (which I’m not), otherwise you’re going to fiddle with correcting quantization errors a lot (unless you use your MIDI controller to enter notes/triggers on a per-step basis, in which case you might just as well use your computer’s keyboard). c) for automating sound parameters. Definitely a huge plus for “normal” computer music, for my use-case (Chiptune) it’s less relevant though.
There’s also Radium. Haven’t used it yet, but it looks very promising in terms of bringing trackers to the next level. Back in the day I actually used Neil Sequencer, another Buzz-inspired project. Unfortunately that one is completely dead, I can’t even build it on a current system nowadays.
Last but not least, there’s also Schism Tracker, which is to Impulse Tracker what Milky is to Fasttracker/Protracker. The .it format is more compact than .xm, so it’s often the preferred choice for making size-restricted modules.
I know it’s called the “essential guide”, but it’s a bit weird to write a history of trackers without mentioning Impulse Tracker and its descendants (ModPlug, CheeseTracker, Schism, etc.).
Ow. I hope development will continue in some other form.
I really liked the interface, and I accumulated a pretty long list of rules over the years. The rule syntax does seem similar to the uBlock one though, so hopefully migrating them is trivial.
I have a prohibitively slow internet connection and haven’t bothered to build a NAS yet, so I just do incremental backups to an external HDD once in a while.
Not having daily backups isn’t really a problem since
Important data is on fairly reliable mediums with pre-failure alerts
I usually don’t really have much newly created important data except
projects: I just push regularly to a git remote
newly-taken photos: I do not delete them from memory cards until the next backup is complete
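(The once-in-a-while backup itself can be a one-liner; a sketch with placeholder paths:)

    # Mirror the important directories onto the external drive; rsync copies
    # only what changed since the last run, making each backup incremental:
    rsync -a --delete ~/photos ~/projects /mnt/external-hdd/backup/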
That’s true, and I was on the edge about posting it too, but I felt the article really puts in perspective the time scales of a law that affects software development a lot.
After using Firefox on Android for as long as I can remember, I have changed browsers.
Every time I start the new version, my screen flashes. I perceive no performance improvements or “experience” benefits. On the contrary, my favorite extensions no longer work.
My question is, why should I use/return to this new version?
Same here: even on my latest Google Pixel, Firefox performance was awful and the browser experience was not good. But now I’m very happy with the latest version; I can see really good improvements, the browser experience is great, and it’s not as resource-hungry as the old version. I would like to congratulate the Mozilla team on a great job!
I… hated it. In particular, I feel like there wasn’t enough testing with the bar configured on top. I wrote a rant with the issues I have, which will probably read as too angry for a lobste.rs comment but allowed me to vent my frustration.
For now I set the bar on the bottom, which I don’t really like, but it solves two issues (buggy sites, and the new-tab button being too far away).
Still thank you for your work. I couldn’t get anything done without firefox in my pocket.
Another issue not listed: I will sometimes come back to Firefox to find an old tab is now completely blank. Reloading will not help; I have to close the tab and open it again. I’ve had this happen with both a lobsters tab and a completely unrelated site… I will have to try and find a reproducible way to trigger it, which could be hard.
Lots of users hate the new tab drawer (vs. the original tab page in earlier Firefox Preview builds). I don’t think it matters whether it’s a drawer or a full screen page, but the fact that scrolling to the top of the list continues into closing the drawer is extremely annoying. I do not ever want to close the drawer by moving my finger down on the list of tabs!! Please make an option to only have the header draggable for closing.
No idea about Klar, but Fennec is similar to IceCat: Firefox with the proprietary blobs removed. I think F-Droid doesn’t like vanilla Firefox for the reason that it contains blobs.
My recollection is that F-Droid’s Fennec build is just Firefox with the trademarks removed, not proprietary blobs. The new Firefox for Android, Fenix, doesn’t get packaged because its standard build system involves downloading pre-compiled copies of other Mozilla components, like the GeckoView widget, rather than building absolutely everything from source. F-Droid does allow apps that download pre-compiled copies of things, but only if they’re obtained from a blessed list of Maven repositories, and Mozilla’s CI system is not on the list.
Also, there may be something about requiring the Play Store to support notifications, but I don’t think it’s the only or even the biggest blocker.
One thing I would absolutely love is SOCKS5 proxy support. Any plans for that? Also, I use ^L and ^K a freakton in the desktop browser. I’d love to see support for that when using Firefox for Android on ChromeOS.
In general I’m pretty happy with the new version of Firefox. The one big mistake Mozilla made however was to pull important features out.
For example I miss “custom search keywords”. I have a carefully crafted list of custom search keywords, and I use Firefox on top of iOS too because of it (otherwise I’ve got no reason to not switch to Safari). And it seems that this particular feature is not coming back on Android, due to some unification with the search engines, which don’t even synchronize. And this made me a little sad.
Also, the new engine has issues with some animations on some websites: when scrolling such pages I sometimes get lag. I also hope that you’ll improve Android’s UI for tablets, as some of the UI elements are a little small on my Galaxy Tab S7.
Otherwise I’m happy to see Firefox improve, and the few add-ons I relied on still working. For me Android is not usable without Firefox ❤️
Great work! It sounds like there’s been a lot of work going on under the hood for this release, and there’s mention of it now being easier to build new features into the product. Are there any blog posts, or could you talk a bit, about what changes have been made that unlock this extra velocity?
Any way to display your bookmarks on startup or something like this? I’m used to switching through my bookmarks; now I’ve got to add them all to this “collection” (the German word is “Sammlung”), and that is collapsed every time I create a new tab. “Add to start screen” doesn’t do anything.
This is the version that finally made me rate Firefox in Play store: to 1 star! Why did you (plural) make it this bad?
Things that broke:
setting DuckDuckGo up as the default search engine was simple in the past, as I remember; it was auto-discovered, I think (I installed Firefox quite a while ago). Now I had to manually edit a search string.

The text selection menu is totally useless. I used to have “copy to clipboard” and “search <default search provider>” there. Now I have to push “…” and scroll a tiny list of useless items populated by some incomprehensible logic, containing apps installed on my phone, e.g. a “pdf reader”, “encrypt”, “private search”, “Firefox search”, “new task”. Lots of useless crap instead of a single simple workflow. The “Firefox Search” option is the functional equivalent of the old operation, but it is at the bottom of the list, so it is a pain to use.
icons on the start page are smaller, and the workflows for manipulating them are not intuitive.

tab selection is terrible. The tabs opened in the background are at the top of the tab stack, but the current tab is at the top of the screen, and there are no visual cues that there may be other tabs above; you need to scroll both ways to find what you are looking for…
The whole UX suggests that the developers don’t use Firefox for daily browsing. The features are there, but the UX is terrible; it is a regression in every possible aspect.

The single good thing is the address bar at the bottom. I’d actually prefer to downgrade to an older version, as the previously advertised speed benefits are not noticeable.
The PR page states:
User experience is key, in product and product development
Maybe I’m not the target audience?
I know this is not your (singular) fault, more likely a project management issue, but I think the direction is not the right one.
Hi Stefan, please take a look at Brave on mobile. I was eagerly waiting for Brave’s UX in Firefox and Chrome. Fantastic news for Firefox!
One suggestion -
After clicking on the tab number in the bottom-right corner to open a new tab, is it possible to switch between normal and incognito windows by sliding on the screen rather than clicking on each icon? This would be especially helpful for mobiles or tablets with big screens.

Again, big thanks for making such a huge change possible.
I saw an e-ink bike speedometer thing at the store a couple weeks ago and thought “oh that’s cool, it might be easier to read in the sun”…. but then realized I’ve had the same non-lit LCD bike speedometer for twenty years now and it works perfectly fine in the sun, just like my old non-lit lcd wristwatch ive had for ages. Made me think: could many of the same advantages we ascribe to eink be achieved by regular old non-backlit lcds at much lower cost? i legit don’t know
Mostly, yes. The biggest difference between eInk and traditions LCDs consume power to remain in position, eInk displays consume power only to move between states. This is why trying to put them in places that need high refresh rates isn’t great: even if they can do it, they will consume a lot of power. They are great for situations where you update no more than every few seconds (hmm, now I want to build an eInk sundial). The newer ones may have better contrast than normal LCDs, not sure if you can make an LCD with a white background.
There are larger reflective LCD panels out there at least. As cool as eink is, reflective LCD is probably the right choice for a monitor. Memory LCD is another decent in-between option; Pebble used them on smartwatches.
I wish transflective panels actually kept their promise that they’d be usable in the sun without any backlight. Every one I saw was unusable in daylight at 0% backlight. And I wish Pixel Qi was used in devices I’d want to use (i.e. not an OLPC)….
Yes, one example is https://banglejs.com/
I’m certainly looking forward to read it, as an assembly and ISA geek… but that way to display a PDF on the web that the author chose is really egregious. ;)
Agreed. If anyone finds a link to the pdf, and doesn’t feel it’s too ethically transgressive to post it, I and my e-reader would be most grateful.
Here’s another web pdf viewer link for it, but this one includes an option to download it (in the double angle bracket menu on the top right).
I thought I found it but it was only a copy of that weird format.
I can’t find any seller either, so that leaves only shadow libraries :-/
Anna’s archive and z-library both have it.
But I’d suggest “Computer Organization and Design RISC-V Edition The Hardware Software Interface” (2021 version aka 6th edition aka 2nd risc-v edition) over it.
“RISC-V Reader” is also a good read, for a turbo introduction for those who already know other assembly languages.
And, of course, the RISC-V unprivileged and privileged specs themselves.
Any freely available PDF?
Very misleading article. There has been an avalanche of these recently.
It constantly implies that this law is going to prevent people from writing AI systems, while as far as I can see the law is (simplifying a bit) about selling them. edit: and using them in sensitive applications such as law enforcement
I don’t see how any telemetry transmitted via the internet that is opt-out is not a direct violation of the GDPR. The IP address that is transmitted with it (in the IP packets) is protected information that you don’t have consent to collect - you failed at step 0 and broke the law before you even received the bits you actually care about.
Of course, the GDPR seems to be going routinely unenforced except against the largest and most blatant violations, but I really don’t see why a company like google would risk it. Why other large companies are actively risking it.
My understanding of the GDPR was that IP addresses are not automatically PII. Even in situations where they are, simply receiving a connection from an IP address does not incur any responsibilities because you require the IP for technical reasons to maintain the connection. It’s only when you record the IP address that it may hit issues. You can generally use some fairly simple differential privacy features to manage this (e.g. drop one of the bytes from your log).
The EU has ruled that IP addresses are GDPR::PII, sadly.
There’s nothing sad about it. I bet that you think that your home address, ICBM coordinates, etc. are PII too.
Do you have a link to that ruling, I’d be very interested in reading it.
(emphasis mine. via the GDPR text, Regulation (EU) 2016/679)
fwiw- “PII” is a US-centric term that isn’t used within GDPR, which instead regulates “processing personal data”.
This doesn’t actually say that collecting IP addresses is not allowed. It only states that when the natural person is known, online identifiers could be used to create profiles.
Furthermore this is only relevant if those online identifiers are actually processed and stored. According to the Google proposal they are not. They only keep record of the anonymous counters. Which is 100% fine with GDPR.
(IANAL) I’d seen analytics software like Fathom and GoatCounter rely on (as you mention) anonymised counters to avoid creating profiles on natural persons, but also we’ve seen a court frown upon automatic usage of Google Fonts due to automatic transmission of IP addresses to servers in the US.
It’s a shame the go compiler isn’t well positioned UX-wise to ask users for opt-in consent at installation (as an IDE might) since that’d likely solve privacy concerns while reaching folk that don’t know about an opt-in config flag.
[admittedly, Google already receives IP addresses of Go users through https://proxy.golang.org/ anyway (which does log IP addresses, but “for [no] more than 30 days”) ¯\_(ツ)_/¯]
Yes IP addresses are not automatically PII, but if you can’t enforce they are not you must assume they are. The telemetry data itself is probably not PII, because it’s anonymized.
GDPR prohibits processing[0] of (private) data, but contains some exceptions. The most common used one is to full fill a contract (this doesn’t need to be a written down contract with payment). So assume you have an online shop. A user orders i.e. a printer you need his address to send the printer to him. But when the user orders a ebook you don’t need the address because you don’t need to ship the ebook. In the case of go the service would be compiling go code. I don’t see a technical requirement to send google your IP-Address.
Next common exception is some requirement by other law (i.e. tax-law or money laundering protection law). I think there is none.
Next one is user consents: You know these annoying cookie banner. Consents must be explicit and can’t be assumed (and dark pattern are prohibit). So this requires an opt-in.
Next one would be legitimate interest. This is more or less the log file exception. Here you might argue that the go team needs this data to improve there compiler. I don’t think this would stand, because other compiler work pretty well without telemetry.
So all together I[1] would say the only legal way to collect the telemetry data is some sort of user consent.
[0] Yes processing not only storing, so having a web server answering http requests might also falls under GDPR.
[1] I’m not a lawyer
You are wrong. The GDPR is not some magic checkbox that says “do not ever send telemetry”. The GDPR cares about PII and your IP address and a bunch of anonymous counters are simply not PII. There is nothing to enforce in this case.
If something is permitted by the law, it doesn’t automatically mean it’s also good
It’s a good thing that nobody’s arguing that, then.
Hah, you’re right, I must have mixed up two comments. Glad we all agree then :)
Nix solves the problem of unstable tarballs by unpacking them then re-packing them as NAR, its own stable archive format (Figure 5.2). It works quite well, and it’s generally useful for hashing trees of files, for example when projects don’t publish tarballs at all or when you need a hash of a generic result of a network operation (like
cargo vendor
orgo mod download
).Ironically most of the spam I receive comes from the big providers, google most of the time. Their outbound ham/spam ratio is abysmal, yet blocking them sounds ridiculous.
This is my experience as well. The vast majority of the spam that is not flagged as such by my rspamd comes from Google. I’ve seen it both from @gmail.com and for email domains they host. I probably get a dozen of these in my inbox every day.
The project has posted a response.
Further response by Drew DeVault
I understand why people write these articles, but the argumentation here is just “I don’t like this”. Can’t we talk about design with better arguments? Maybe even data? (Not that I have any at hand.)
That’s just the title. The article goes a bit more in depth on why the author considers the new design to be worse, though you’re right about the data, the article only presents anecdotes:
That’s fair, I missed that paragraph. I agree that it’s not data though.
“Data” doesn’t make arguments automatically better. Quantitative analysis isn’t appropriate for everything, and even when it may be useful, you still need a qualitative analysis to even know what data to look at and how to interpret it.
This is the kind of reply that’s easy to agree with. 🙂
I was prepared to groan but there are some superb sentiments in here, well articulated. I’d be interested to know why it was written now and if it’s meant to signal any changes in direction for Firefox.
These two lines stood out to me as things that mean a lot to me but honestly wouldn’t have expected to be said by Mozilla.
It’s more of a “justification” than a new direction: Background is that other browsers develop and ship APIs that are sometimes hard or even impossible to bring to the web platform without conflicting with the core values of Mozilla. Pushing back on individual standards can be time consuming and repetitive. (See https://mozilla.github.io/standards-positions/)
Among other things, this document serves as a long form explanation of these core values.
Thank you for that link! I took a look at the “harmful” section and I was shocked by the amount of bad ideas. And they keep coming! It’s great that there’s at least someone opposing this madness.
I will never get over the ultimate in bad ideas: the SVG working group trying to give SVG raw sockets access
You’re not serious?
Yuuuup, there’s quite a story, but basically it boils down to mobile phone software manufacturers in Japan (I think?) were required to implement things in terms of standards (or something like that). None thought a full browser was possible at the time, and the html5 spec was still in its infancy so hadn’t split out into sub-specifications yet.
That meant that to get (for example) XHR they’d need to implement a full browser. Obviously such I thing was impossible on a phone :D
The solution was to give the SVG spec everything that they needed, including raw sockets, an ECMAScript subset that only had integers, etc
Suffice to say that when we implemented SVG in mobile safari we said raw sockets were not a thing that would happen.
The SVG WG of the era was not the most functional.
Oh my gosh this sounds horrifying. Thanks for the explanation below!
I used to give the same advice, but I completely changed my opinion over the past 10 years or so. I eventually put in the time and learned shell scripting. These days my recommendation is:
I really don’t want to figure out every project’s nodejs/python/ruby/make/procfile abomination of a runner script anymore. Just like wielding regular expressions, knowing shell scripting is a fundamental skill that keeps paying dividends over my entire career.
Bingo.
My advice is:
#!/usr/bin/env bash
at the beginning of your scripts (change if you need something else, don’t rely on a particular path to bash though).set -eou pipefail
after that.local
storage qualifier when declaring variables in a function.sh
, then do the work.While some people like the author will cry and piss and moan about how hard bash is to write, it’s really not that bad if you take those steps (which to be fair I wish were more common knowledge).
To the point some folks here have already raised, I’d be okay giving up shell scripting. Unfortunately, in order to do so, a replacement would:
There are basically no programming languages that meet those criteria other than the existing shell languages.
Shell scripting is not the best tool for any given job, but across every job it’ll let you make progress.
(Also, it’s kinda rich having a Python developer tell us to abandon usage of a tool that has been steadily providing the same, albeit imperfect, level of service for decades. The 2 to 3 switch is still a garbage fire in some places, and Python is probably the best single justification for docker that exists.)
I think “nine steps” including “always use two third-party tools” and “don’t use any QoL features like associative arrays” does, in fact, make bash hard to write. Maybe Itamar isn’t just “cry and piss and moan”, but actually has experience with bash and still think it has problems?
To use any language effectively there are some bits of tribal knowledge…babel/jest/webpack in JS, tokio or whatever in Rust, black and virtualenv in Python, credo and dialyzer in Elixir, and so on and so forth.
Bash has many well-known issues, but maybe clickbait articles by prolific self-pronoters hat don’t offer a path forward also have problems?
If your problem with the article is that it’s clickbait by a self-promoter, say that in your post. Don’t use it as a “gotcha!” to me.
I think there’s merit here in exploring the criticism, though room for tone softening. Every language has some form of “required” tooling that’s communicated through community consensus. What makes Bash worse than other languages that also require lots of tools?
There’s a number of factors that are at play here and I can see where @friendlysock’s frustration comes from. Languages exist on a spectrum between lots of tooling and little tooling. I think something like SML is on the “little tooling” where just compilation is enough to add high assurance to the codebase. Languages like C are on the low assurance part of this spectrum, where copious use of noisy compiler warnings, analyzers, and sanitizers are used to guide development. Most languages live somewhere on this spectrum. What makes Bash’s particular compromises deleterious or not deleterious?
Something to keep in mind is that (in my experience) the Lobsters userbase seems to strongly prefer low-tooling languages like Rust over high-tooling languages like Go, so that may be biasing the discussion and reactions thereof. I think it’s a good path to explore though because I suspect that enumerating the tradeoffs of high-tooling or low-tooling approaches can illuminate problem domains where one fits better than the other.
I felt that I sufficiently commented on the article’s thesis on its own merits, and that bringing up the author’s posting history was inside baseball, not terribly relevant. When you brought up motive, it became relevant. Happy to continue in DMs if you want.
You’re really quite hostile. This is all over scripting languages? Or are you passive aggressively bringing up old beef?
Integrating shellcheck and shfmt into my dev process enabled my shell programs to grow probably larger than they should be. One codebase, in particular, is nearing probably 3,000 SLOC of Bash 5, and I’m only now thinking about how v2.0 should probably be written in something more testable, reusing some existing libraries instead of reimplementing things myself (e.g., this one basically has a half-complete shell+curl implementation of the Apache Knox API). The chief maintenance problem is that so few people know shell well, so when I write “good” shell like I’ve learned over the years (and shellcheck --enable=all has taught me A TON), I have trouble finding coworkers to help out or to take it over. The rewrite will have to happen before I leave, whenever that may be.

I’d be interested in what happens when you run your 3,000 lines of Bash 5 under https://www.oilshell.org/ . Oil is the most bash-compatible shell – by a mile – and has run thousands of lines of unmodified shell scripts for over 4 years now (e.g. http://www.oilshell.org/blog/2018/01/15.html).
I’ve also made tons of changes in response to use cases just like yours, e.g. https://github.com/oilshell/oil/wiki/The-Biggest-Shell-Programs-in-the-World
Right now your use case is the most compelling one for Oil, although there will be wider appeal in the future. The big caveat now is that it needs to be faster, so I’m actively working on the C++ translation (oil-native passed 156 new tests yesterday).

I would imagine your 3000 lines of bash would be at least 10K lines of Python, and take 6-18 months to rewrite, depending on how much fidelity you need.
(FWIW I actually wrote 10K-15K lines of shell as 30K-40K lines of Python early in my career – it took nearly 3 years LOL.)
So if you don’t have 1 year to burn on a rewrite, Oil should be a compelling option. It’s designed as a “gradual upgrade” from bash. Just running osh myscript.sh will work, or you can change the shebang line, run tests if you have them, etc. There is an #oil-help channel on Zulip, linked from the home page.

Thanks for this nudge. I’ve been following the development of Oil for years but never really had a strong push to try it out. I’ll give it a shot. I’m happy to see that there are oil packages in Alpine testing: we’re deploying the app inside Alpine containers.
Turns out that I was very wrong about the size of the app. It’s only about 600 SLOC of shell :-/ feels a lot larger when you’re working on it!
One thing in my initial quick pass: we’re reliant on bats for testing. bats seemingly only uses bash. Have you found a way to make bats use Oil instead?
OK great looks like Alpine does have the latest version: https://repology.org/project/oil-shell/versions
I wouldn’t expect this to be a pain-free experience; however, I would say it should definitely be less effort than rewriting your whole program in another language!
I have known about bats for a long time, and I think I ran into an obstacle but don’t remember what it was. It’s possible that the obstacle has been removed (e.g. maybe it was extended globs, which we now support)
https://github.com/oilshell/oil/issues/297
In any case, if you have time, I would appreciate running your test suite with OSH and letting me know what happens (on Github or Zulip).
One tricky issue is that shebang lines are often #!/bin/bash, which you can change to #!/usr/bin/env osh. However, one shortcut I added was OSH_HIJACK_SHEBANG=osh.

https://github.com/oilshell/oil/wiki/How-To-Test-OSH
Moving away from Python? Now it has my interest… in the past I skipped it, figuring it’d probably take perf hits and have some complicated setup that isn’t a static binary.
Yes, that has always been the plan, mentioned in the very first post on the blog. But it took a while to figure out the best approach, and that approach still takes time.
Some FAQs on the status here: http://www.oilshell.org/blog/2021/12/backlog-project.html
Python is an issue for speed, but it’s not an issue for setup.
You can just run ./configure && make && make install and it will work without Python.

Oil does NOT depend on Python; it just reuses some of its code. That has been true for nearly 5 years now – actually since the very first Oil 0.0.0 release. Somehow people still have this idea it’s going to be hard to install, when that’s never been the case. It’s also available on several distros like Nix.
What is the status of Oil on Windows (apologies if it’s in the docs somewhere, couldn’t find any mentioning of this). A shell that’s written in pure C++ and has Windows as a first class citizen could be appealing (e.g. for cross-platform build recipes).
It only works on WSL at the moment … I hope it will be like bash, and somebody will contribute the native Windows port :-) The code is much more modular than bash and all the Unix syscalls are confined to a file or two.
I don’t even know how to use the Windows syscalls – they are quite different from Unix! I’m not sure how you even do fork() on Windows. (I think Cygwin has emulation, but there is a way to do it without Cygwin.)
https://github.com/oilshell/oil/wiki/Oil-Deployments
I believe Tcl fits those requirements. It’s what I usually use for medium-sized scripts. Being based on text, it interfaces well with system commands, but it doesn’t have most of bash’s quirks (argument expansion is a big one), and it can handle structured data with ease.
I don’t do this, because all my scripts are POSIX shell (or at least as POSIX-compliant as I can make them). My shebang is always #!/bin/sh - is it reasonable to assume this path?

You will miss out on very useful things like set -o pipefail, and in general you can suffer from plenty of subtle differences between shells and shell versions. Sticking to bash is also my preference for this reason.
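To illustrate what pipefail buys you (a minimal snippet, not from the thread):

    #!/usr/bin/env bash
    set -o pipefail
    false | true          # the rightmost command succeeds, but the pipeline...
    echo "status: $?"     # ...reports 1; without pipefail this would print 0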
Note that the /usr/bin/env is important to run bash from wherever it is installed, e.g. the homebrew version on osx instead of the ancient one in /bin (which doesn’t support arrays iirc and acts weirdly when it comes across shell scripts using them).

Reasonable is very arbitrary at this point. That path is explicitly not mandated by POSIX, so if you want to be portable to any POSIX-compliant system, you can’t just assume that it will exist. Instead POSIX says that you can’t rely on any path, and that scripts should instead be modified according to the system standard paths at installation time.
I’d argue that these days POSIX sh isn’t any more portable than bash in any statistically significant sense though.
Alpine doesn’t have Bash, just a busybox shell. The annoying thing is if the shebang line fails because there is no bash, the error message is terribly inscrutable. I wasted too much time on it.
nixos has /bin/sh and /usr/bin/env, but not /usr/bin/bash. In fact, those are the only two files in those folders.
https://mkws.sh/pp.html hardcodes #!/bin/sh. POSIX definitely doesn’t say anything about sh’s location, but I really doubt you won’t find a sh at /bin/sh on any UNIX system. Can anybody name one?

I would add: prefer POSIX over bash.
I checked, and shellcheck (at least the version on my computer) only catches issue #5 of the 5 I list.

That’s because the other ones are options and not errors. Yes, typically they are good hygiene, but set -e, for example, is not an unalloyed good, and at least some experts argue against using it (one classic gotcha is sketched below).

Not for lack of trying: https://github.com/koalaman/shellcheck/search?q=set+-e&type=issues
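Here’s one of the situations those experts point at (an illustrative sketch): set -e is suspended while a command runs as a condition, so failures there don’t stop the script:

    #!/usr/bin/env bash
    set -e

    step() {
      false                  # does NOT abort here...
      echo "this still runs"
    }

    if step; then            # ...because step runs as an 'if' condition,
      echo "ok"              # where set -e is disabled
    fi
    echo "script keeps going"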
There are tons of pedants holding us back IMO. Yes, “set -e” and other options aren’t perfect, but if you even know what those situations are, you aren’t the target audience of the default settings.
Yup, that’s how you do it. It’s a good idea to put in the time to understand shell scripting. Most of the common misconceptions come out of misunderstanding. The shell is neither fragile (it’s been in use for decades, so it’s very stable) nor ugly (I came from JavaScript to learning shell script, and it seemed ugly indeed at first; now I find it very elegant). Keeping things small and simple is the way to do it. When things get complex, create another script; that’s the UNIX way.
It’s the best tool for automating OS tasks. That’s what it was made for.
+1 to using ShellCheck; I usually run it locally, checking for POSIX compliance. I even went as far as generating my static sites with it: https://mkws.sh/. You’re using the shell daily for displaying data in the terminal; it’s a great tool for that, so why not use the same tool for displaying data publicly?
No, it really is ugly. But I’m not sure why that matters
I believe arguing if beauty is subjective or not is off topic. 😛
I went the opposite direction - I was a shell evangelist during the time that I was learning it, but once I started pushing its limits (e.g. CSV parsing), and seeing how easy it was for other members of my team to write bugs, we immediately switched to Python for writing dev tooling.
There was a small learning curve at first, in terms of teaching idiomatic Python to the rest of the team, but after that we had far fewer bugs (of the type mentioned in the article), much more informative failures, and much more confidence that the scripts were doing things correctly.
I didn’t want to have to deal with package management, so we had a policy of only using the Python stdlib. The only place that caused us minor pain was when we had to interact with AWS services, and the solution we ended up using was just to execute the aws CLI as a subprocess and ask for JSON output. Fine!

I tend to take what is, perhaps, a middle road. I write Python or Go for anything that needs to do “real” work, e.g. process data in some well-known format. But then I tie things together with shell scripts. So, for example, if I need to run one program, run another program, and then combine their outputs somehow, there’s a Python script that does the combining, and a shell script that runs all three programs and feeds them their inputs.
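The glue layer can stay tiny that way (a hypothetical sketch; the program names are placeholders):

    #!/bin/sh
    # Run the two producers, then let the Python script do the combining.
    prog_a input.txt > a.out
    prog_b input.txt > b.out
    python3 combine.py a.out b.out > combined.json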
I also use shell scripts to automate common dev tasks, but most of these are literally one-ish line, so I don’t think that counts.
This makes sense to me
FWIW when shell runs out of steam for me, I call Python scripts from shell. I would say MOST of my shell scripts call a Python script I wrote.
I don’t understand the “switching” mentality – Shell is designed to be extended with other languages. “Unix philosophy” and all that.
I guess I need to do a blog post about this? (Ah, I remember – I have a draft and came up with a title: The Worst Amount of Shell Is 0% or 100% — https://oilshell.zulipchat.com/#narrow/stream/266575-blog-ideas/topic/The.20Worst.20Amount.20of.20Shell.20is.200.25.20or.20100.25 — requires login.)
(Although I will agree that it’s annoying that shell has impoverished flag parsing … So I actually write all the flag parsers in Python, and use the “task file” pattern in shell.)
What is the “task file” pattern?
It’s basically a shell script (or set of scripts) you put in your repo to automate common things like building, testing, deployment, metrics, etc.
Each shell function corresponds to a task.
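A minimal sketch of the idea (the task names are hypothetical; the trailing "$@" dispatches to whichever function you name on the command line):

    #!/usr/bin/env bash
    # tasks.sh -- each function is one task
    set -euo pipefail

    build() {
      echo "building..."     # e.g. call make / cargo / go build here
    }

    deploy() {
      build                  # tasks can call each other
      echo "deploying..."
    }

    "$@"                     # usage: ./tasks.sh build, ./tasks.sh deploy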
I sketched it in this post, calling it “semi-automation”:
http://www.oilshell.org/blog/2020/02/good-parts-sketch.html
and just added a link to:
https://lobste.rs/s/lob0rw/replacing_make_with_shell_script_for
(many code examples from others in that post, also almost every shell script in https://github.com/oilshell/oil is essentially that pattern)
There are a lot of names for it, but many people seem to have converged on the same idea.
I don’t have a link handy now, but GitHub had a standard like this in the early days. All their repos would have a uniform shell interface so that you could get started hacking on them quickly.
You should investigate just for task running. It’s simple like make, but with none of its pitfalls for task running.

We have plenty of good templating languages: Jinja2, Liquid, Askama, Phoenix.Template, etc. They generally don’t need arcane quoting rules, are happy to ingest arbitrary data from outside programming languages, don’t have insane scoping for declarations, etc. I personally am happy to leave m4 in the mausoleum where it belongs.
No! We Must Use The Tools Of Our Ancestors, Else We Have Strayed From The True Path! Yea, Even Unto The Lowliest Preprocessor.
“M4 is the standard text editor preprocessor.”

M4, the greatest preprocessor of all.
M4 IS THE TRUE PATH TO NIRVANA! M4 HAS BEEN THE CHOICE OF EDUCATED AND IGNORANT ALIKE FOR CENTURIES! M4 WILL NOT CORRUPT YOUR PRECIOUS BODILY FLUIDS!! M4 IS THE STANDARD TEXT PREPROCESSOR! M4 MAKES THE SUN SHINE AND THE BIRDS SING AND THE GRASS GREEN!!
M4 is a true Tool of The Minimalist Path.
M4 will not corrupt your precious bodily fluids!
I don’t mind the templates…. but I do deny them my essence.
TIL it was designed by K&R. Always thought it was a GNU thing because I associated it with autotools.
There are situations when you don’t want to commit to a programming language runtime but you still need a templating language. That’s been the use case I’ve found for m4. (Not that I disagree with any of the criticism of it being arcane and hard to debug; I just haven’t found a language-agnostic replacement for it yet. Pandoc is large, and for these situations a bit too large – container configs come to mind.)
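For that kind of job the whole toolchain is a single binary (an illustrative transcript; the file and macro names are made up):

    $ cat template.m4
    define(`PORT', `8080')dnl
    listen PORT;
    $ m4 template.m4 > server.conf
    $ cat server.conf
    listen 8080;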
The snarky argument is that m4 is also bound to a programming language runtime, it just happens to be C. :-P That’s not actually correct though.
This is actually an interesting distinction, and also possibly one of the things that makes m4 such a bear to do complicated things with while making it nice to do simple things with. All the other tools I named are designed to be controlled from a programming language, not a standalone exe. That means that they have more or less that programming language’s data model, not just plain text or their own slightly-half-assed data model, and also that you can pull some of the more complicated operations out of them or inject new functions into them, so that’s the source of a lot of their power, and you don’t need to do as much of your logic in macros.
On the flip side, there’s no real reason you couldn’t write a standalone Python/whatever program to run Jinja2 on arbitrary files and take its data from whatever source you want; command line args, inline declarations, JSON/TOML/whatever file input, etc. But that doesn’t seem to have become popular for some reason.
There’s a python program/library that does exactly that with jinja2: https://staticjinja.readthedocs.io
Unfortunately I’ve already accepted the C runtime as dependency for most of my code 😛
I’ll be clear that given a choice I will pick a programming language’s templating environment over M4 any day. The only use for M4 is when you don’t want to do that. For me that’s usually for config files used around boot. I would really like to see something like M4 with its footprint that is less wonky than M4.
There are a few compiled Jinja (or jinja-like) implementations in the wild. For example https://tera.netlify.app/docs is just 3 or so lines of code from a (basic) standalone, static tool which doesn’t require a runtime.
I’m sure there are people using it that way.
Note how the letter says Barinsta “may” violate such and such law. Since it’s not an outright accusation, the lawyer who wrote this doesn’t risk anything. They can always say they wrote it “just in case” or something.
About the violation of terms of service… are they even bound to those terms of service? Maybe Barinsta users are, but the developers of the software itself?
It looks like the developer was a user too, and according to that letter they got banned for life from all Facebook services.
I would be very interested in a discussion around this. The author explicitly didn’t make any statements about differences in password managers. The claim is, they all have equally big attack surfaces if they use web extensions.
Counterexample: Bitwarden does not inject any elements (it adds properties to input fields, though); the extension’s drop-down interface has to be used. Does that make it safer? Or am I missing something?
Isn’t there a standardised API for password manager inputs in browsers? If not, why not?
Seems like it would stop all these password managers from reinventing the wheel every time; reduce some attack surface by having it built into the browser itself rather than injected elements.
Both iOS[1] and Android[2] have standardised APIs. There is none for desktop browsers.
[1] Password AutoFill [2] Autofill framework
Yeah, it feels like having OSes or browsers offer a standard hook for credential storage, and having the tools use it, would resolve a lot of this stuff. I think the iOS one works very well. There’s sometimes uncertainty about what domain you’re on inside apps, but it usually “fails” in the right direction (not filling in credentials vs filling in incorrect credentials).
It also does not work smoothly on Android inside browsers other than Chrome.
Really? I use KeePassDX on Android, with Firefox, and it seems to work fairly smoothly through the autofill framework. It also provides a fake/specialized keyboard implementation for places where autofill doesn’t work.
Fascinating. I should try that.
Chromium is backed by your OS password manager. If your password manager syncs with your native password store then it should interop smoothly. https://chromium.googlesource.com/chromium/src.git/+/refs/heads/main/docs/linux/password_storage.md
As another example, keepassxc
As long as those checks are performed by the program or by the extension and not by the injected script, I don’t see the problem.
Well, that is really hard to tell, though. So I agree that people should recommend specific products.
There’s just a fundamental risk when mixing things inside an untrusted sandbox (the web page) and out where your secrets are. It’s much easier to do that well if you’re building a browser than if you’re building an extension - and even then there’s a long history of bugs with how browsers have done it.
I don’t know much about bitwarden, but a bit of poking around showed it injecting a few content scripts including one taken from 1password: https://github.com/bitwarden/browser/blob/master/src/content/autofill.js
I haven’t looked too closely at the script, but it looks like it does two things: pull out the structure of the page, and then fill it. I have no clue whether either of these are exploitable, but it’s definitely not vulnerable to this sort of redress attack. In particular, the only way to trigger password fill is to click the extension icon or use the right-click menu, both of which are not vulnerable to the same sort of redress/IPC attacks that Tavis mentions. (Well, I guess you could write some JS to fake the right-click menu. But I’m not sure what the way around this is.)
It does inject elements (I think) if you fill in a page to show a ‘would you like to remember this password’-type dialog box, but that’s not really much of an attack surface.
Bitwarden can also be self-hosted, though this doesn’t protect you from the browser extension being malicious.
As someone who uses trackers on a daily basis, I have to say that’s a very solid write-up. Perhaps Jeskola Buzz could have been discussed a bit more, since it was pretty big in the late 90s/early 2000s. There’s also Buzztrax, which continues the legacy on Linux. Other than that, the article is pretty comprehensive though. There were some tracker-like editors before Ultimate Soundtracker, but that’s obviously out of scope for this one.
There are a number of reasons why I prefer trackers over any other means of composing computer music.
What’s interesting to me is that trackers actually make the music writing process more akin to programming. This becomes especially apparent in Chiptune, where you’re basically writing a list of instructions to be interpreted by the music player.
What about trackers versus playing a MIDI keyboard? In your opinion, does entering notes in a tracker have advantages over playing them on a MIDI keyboard?
Caveat, I don’t have so much experience with MIDI. For many years I was moving around a lot (aka “my home is where my laptop is”), so I never bothered carrying around a MIDI controller. Nowadays you get these really small controllers, but back in the day these things were clunky. So anyway, I’m not super qualified to answer this.
Generally, I think the use-case is different. MIDI shines in 3 situations:

a) when you have a fixed setup, eg. one DAW + a standard set of plugins that you always use. If you’re exploring many different tools and platforms, then the overhead from setting up MIDI controls is usually not worth it.

b) for adding a live/human feel to an existing basic structure, ie. when you want to be less than 100% consistent/timing accurate. If you actually want to be precise, then you need to be able to play very well (which I’m not), otherwise you’re going to fiddle with correcting quantization errors a lot (unless you use your MIDI controller to enter notes/triggers on a per-step basis, in which case you might just as well use your computer’s keyboard).

c) for automating sound parameters. Definitely a huge plus for “normal” computer music; for my use-case (Chiptune) it’s less relevant though.
I’m confused. MIDI keyboards can feed MIDI into trackers.
Some of them do bundle a synth, but these are usually called electronic pianos, and can accept MIDI from a tracker.
Thanks for letting me learn about this LGPL’d tracker. Up until now, I only knew MilkyTracker.
There’s also Radium. Haven’t used it yet, but it looks very promising in terms of bringing trackers to the next level. Back in the day I actually used Neil Sequencer, another Buzz-inspired project. Unfortunately that one is completely dead, I can’t even build it on a current system nowadays.
Last but not least, there’s also Schism Tracker, which is to Impulse Tracker what Milky is to Fasttracker/Protracker. The .it format is more compact than .xm, so it’s often the preferred choice for making size-restricted modules.
Which I also didn’t know, looks promising and is Open Source. Thank you!!!
Seems to be GPL, but I can’t find the sources. Probably a temporary issue.
Schism I was aware of.
I have a big list of FOSS trackers that I’m trying to package for NixOS here if you’re interested: https://github.com/NixOS/nixpkgs/issues/81815
Absolutely! Thank you.
I know it’s called an “essential guide”, but it’s a bit weird to write a history of trackers without mentioning Impulse Tracker and its descendants (Modplug, Cheesetracker, Schism, etc.).
Agreed, that’s quite an oversight. At least Modplug’s modern incarnation OpenMPT is mentioned in passing.
Will be fixed shortly! The fix already landed in unstable, but wasn’t backported to 20.09 yet.
Thanks! Then, I’ll have to figure out how to convert an application to a package as the former cannot be used in a buildEnv.
Ow. I hope development will continue in some other form. I really liked the interface, and I accumulated a pretty long list of rules over the years. The rule syntax does seem similar to the uBlock one, though, so hopefully moving them over is trivial.
I have a prohibitively slow internet connection and didn’t bother to build a nas yet so I just do incremental backups to an external hdd once in a while.
Not having daily backups isn’t really a problem since
This doesn’t mention software or computers so I flagged this as off-topic. I probably wouldn’t have if it at least mentioned software licenses.
That’s true, and I was on the edge about posting it too, but I felt the article really puts in perspective the time scales of a law that affects software development a lot.
This is what has kept me busy the past 18 months. Ask me anything :-)
I’ve been using the preview for a while, and I like it a lot. Thanks to you and everyone at Mozilla.
Moving the address / tool bar to the bottom of the screen is, imho, a very clever decision that made my huge phablet phone a bit less painful to use.
After using Firefox on Android for as long as I can remember, I have changed browsers.
Every time I start the new version my screen flashes. I perceive no performance improvements or “experience” benefits. On the contrary my favorite extensions no longer work.
My question is, why should I use/return to this new version?
Same here: even on my latest Google Pixel, Firefox’s performance was awful and the browsing experience was not good. But now I’m very happy with the latest version – I can see really good improvements, the browsing experience is great, and it’s not as resource-hungry as the old version. I would like to congratulate the Mozilla team for the great job!
I… hated it. In particular, I feel like there wasn’t enough testing with the bar configured on top. I wrote a rant with the issues I have, which will probably read as too angry for a lobste.rs comment but allowed me to vent my frustration.
For now I set the bar on the bottom, which I don’t really like but solves 2 issues (buggy sites, and the new tab button being too far).
Still thank you for your work. I couldn’t get anything done without firefox in my pocket.
Another issue not listed: I will sometimes come back to firefox to find an old tab is now completely blank. Reloading will not help: I have to close the tab and open it again. I’ve had this happen with both a lobsters tab and a completely unrelated site… I will have to try and find a reproducible way to trigger it, could be hard.
I’ve had that issue on desktop firefox. If the site is bookmarked, I click it (helpful especially if it was a container tab)
Lots of users hate the new tab drawer (vs. the original tab page in earlier Firefox Preview builds). I don’t think it matters whether it’s a drawer or a full screen page, but the fact that scrolling to the top of the list continues into closing the drawer is extremely annoying. I do not ever want to close the drawer by moving my finger down on the list of tabs!! Please make an option to only have the header draggable for closing.
Any plans for completing the bookmark feature?
Will it be made available on F-Droid? Soon? Ever?
How does this release relate to these: the Fennec and Firefox Klar builds?
Getting Firefox via F-Droid has always confused me, so I’ve stayed away, but I’m always on the lookout for a good browser for Android.
No idea about Klar, but Fennec is similar to IceCat: Firefox with the proprietary blobs removed. I think F-Droid doesn’t like vanilla Firefox for the reason that it contains blobs.
My recollection is that F-Droid’s Fennec build is just Firefox with the trademarks removed, not proprietary blobs. The new Firefox for Android, Fenix, doesn’t get packaged because its standard build system involves downloading pre-compiled copies of other Mozilla components, like the GeckoView widget, rather than building absolutely everything from source. F-Droid does allow apps that download pre-compiled copies of things, but only if they’re obtained from a blessed list of Maven repositories, and Mozilla’s CI system is not on the list.
Also, there may be something about requiring the Play Store to support notifications, but I don’t think it’s the only or even the biggest blocker.
Ah, sounds like you know a more about this than me - I stand corrected. Thanks for the information!
Why block about:config? Why not allow arbitrary extensions at your own risk? I would love a split-screen or dual-window feature.
One thing I would absolutely love is socks5 proxy support. Any plans for that? Also, I use ^L and ^K a freakton in the desktop browser. I’d love to see support for that when using Firefox for Android on ChromeOS.
How can I downgrade without losing my settings and open tabs?
Hi @st3fan,
In general I’m pretty happy with the new version of Firefox. The one big mistake Mozilla made however was to pull important features out.
For example I miss “custom search keywords”. I have a carefully crafted list of custom search keywords, and I use Firefox on top of iOS too because of it (otherwise I’ve got no reason to not switch to Safari). And it seems that this particular feature is not coming back on Android, due to some unification with the search engines, which don’t even synchronize. And this made me a little sad.
Also, the new engine has some issues with animations on some websites: when scrolling such pages I sometimes get lag. I also hope that you’ll improve Android’s UI for tablets, as some of the UI elements are a little small on my Galaxy Tab S7.
Otherwise I’m happy to see Firefox improve, and the few add-ons I relied on still working. For me Android is not usable without Firefox ❤️
Keep up the good work.
Great work! It sounds like there’s been a lot of work going on under the hood for this release, and there’s mention of it now being easier to build new features into the product. Are there any blog posts – or could you talk a bit about what changes have been made that unlock this extra velocity?
I use Android with a keyboard.
Do you know of any keyboard-driven browsing solutions like Vimium on Android at this time?
Any way to display your bookmarks on startup or something like this? I’m used to switching through my bookmarks; now I’ve got to add them all to this “collection” (the German UI calls it “Sammlung”), and that is collapsed every time I create a new tab. “Add to start screen” doesn’t do anything.
Finally found the option to add it as part of the start screen. The new Bookmarks view is hard for me to grasp; everything looks the same.
This is the version that finally made me rate Firefox in the Play Store: 1 star! Why did you (plural) make it this bad?
Several things broke for me.
The whole UX suggests that the developers don’t use Firefox for daily browsing. The features are there, but the UX is terrible – a regression in every possible aspect.
The single good thing is the address bar at the bottom. I’d actually prefer to downgrade to an older version, as the previously advertised speed benefits are not noticeable.
The PR page states:
Maybe I’m not the target audience?
I know this is not your (singular) fault, more likely a project management issue, but I think the direction is not the right one.
Hi Stefan, please take a look at Brave on mobile. I was eagerly waiting for Brave’s UX in Firefox and Chrome, so this is fantastic news for Firefox.
One suggestion: after tapping the tab counter in the bottom-right corner to open a new tab, would it be possible to switch between normal and incognito windows by sliding on the screen rather than tapping each icon? This would be especially helpful on mobiles or tablets with big screens.
Again, big thanks for making such a huge change possible.
Just got the update. Really liking the bar on the bottom.
Speaking of lisp and chinese cartoons, SICP has been a meme on /g/ for years. GOOG has many more.
There are days I wonder how many people got their start in programming through that.
https://github.com/laynH/Anime-Girls-Holding-Programming-Books
I got my pdf of The C Programming Language from one of those images.
That meme gave birth to textboard.org, an anonymous bulletin board in MIT/GNU Scheme.
Don’t let your memes be dreams, gentooman.
In my defence, I haven’t been on /g/ (nor any other part of that website) since 2012 and I’ve been a Scheme hacker since only 2017.
Ironically, textboard is hosted on Gentoo Linux.
(I am just making this up)