Welp, we all saw this coming. Microsoft already has two editors (VS and VS Code), keeping a third around was unsustainable.
(Though I am curious what the reasons are for continuing to use Atom instead of VS Code. By all accounts, as well as in my own experience, it’s far slower and clunkier than its counterparts.)
EDIT: the founder of Atom, nathansobo, commented on the related HN and r/programming discussions that they are working on a new editor focused on speed and real-time collaboration, Zed.
(Though I am curious what the reasons are for continuing to use Atom instead of VS Code. By all accounts, as well as in my own experience, it’s far slower and clunkier than its counterparts.)
It’s slower, but depending on what features you heavily rely on, I’d honestly call VSCode the clunkier of the two. For something I’m in and out of dozens of times a day, the Project-wide Find (and Find-and-Replace) experience in VSCode is terrible compared to Atom’s, and hasn’t seen significant improvement in years. There are a couple of different “make VSCode’s search work like Atom/Sublime” plugins out there, but they’re mostly broken in my experience.
Tons of other things are similar – the settings UI is a lot more pleasant in Atom than the weird way it works in VSCode. Atom just generally traded off speed in favor of a much more polished experience.
All that said, I saw the writing on the wall and went back to Sublime last year. It’s not as polished as Atom, but less of a jankfest than VSCode, and its search interface was what Atom’s was based on anyway. Bonus points for not setting my battery on fire by running an entire web browser just to draw text on a screen.
Interesting to read this point of view. But a bit baffling, I should say. I used atom a little bit back in the day, but found it too hip and it was very sluggish, compared to gedit, grant, scribes, sublime, etc.
When Visual Studio Code came along, it felt snappier, launched quicker, and personally I found it more pleasant to use. The UI was more focused on being functional than visually slick. Git integration was done just the way I like it, as well as other minor but important details such as MRU file switching, a built-in terminal, and split screen.
This is very much an opinion, but I find VSCode to be a well-designed and well-executed product, with pragmatism taking the central role. Kind of Microsoft showing that it is still capable of releasing useful software.
How Microsoft manages the feature requests and bugs on the public vscode issue tracker never fails to impress me. There is a ton of professionalism shown towards a free and open source product. Product owners clearly communicate status and engage with feedback.
When Visual Studio Code came along, it felt snappier, launched quicker, and personally I found it more pleasant to use. The UI was more focused on being functional than visually slick.
Very much different tastes. I booted it up after posting this, and still find it incredibly grating to use. They’ve screwed up the page “weight” or scrolling speed (on the Mac, at least), so that a sweep of my fingers on the trackpad when I’m scrolling a file is much, much faster than it is in any other app on the system (whereas scrolling in Sublime feels like scrolling in Safari feels like scrolling in the Terminal feels like, etc). It gives the whole thing a floaty, jittery feeling and screws up my years of scrolling muscle memory (cynically, I half wonder if they use a faster-than-system scroll speed to perpetuate the whole “VSCode is fast” marketing, because the rest of it, from the app startup to doing a search, is noticeably laggy).
Like the preference pane’s sidebar spastically flying open and closed by itself as you scroll down the unbelievably long single preference page (which the floatiness of the scrolling in general exacerbates), it just behaves like nothing else on the system, and that makes the whole experience grating and weird compared to everything else. It’s like nails on a chalkboard, just a constant low-level stream of annoyances, to me.
But if it’s your cup of tea, all power to you. Editor monocultures are the last thing we need.
The thing I dislike the most about VSCode vs other editors is that it uses the Microsoft method of text selection. So if you hover over a character and click + drag the cursor, it won’t highlight the character that you are hovering over. Whereas in a macOS text field/editor it does.
So in VSCode, when I am forced to use it, I tend to mis-select a bunch of text all the time, making it an extremely frustrating experience.
The same issue exists in other interfaces that try to emulate some sort of text selection; the AWS console, for example, has a way to use AWS SSM to connect to a remote system and get a terminal in your browser. It has the same selection behavior.
When applications don’t match the behavior of the OS they are running on it becomes an incredibly jarring experience.
So if you hover over a character and click + drag the cursor, it won’t highlight the character that you are hovering over. Whereas in a macOS text field/editor it does.
I agree that inconsistency is frustrating, but I cannot reproduce the inconsistency you describe in macOS 10.14 Mojave. Whether I’m using VS Code, TextEdit, or Finder, when I hover over a character, then click and drag, the selection starts from whichever side of the character the mouse cursor was closest to. I never see, for example, the selection start at the right side of an ‘O’ if I position my cursor on the left side of the ‘O’ before dragging.
But if it’s your cup of tea, all power to you. Editor monocultures are the last thing we need.
I agree 100%. I’m a long time vim user, but have switched to using both vscode and emacs. I interchange between the two, and am interested in newer projects too such as helix. All monocultures are bad and have awful side effects. That said, I think editors are personal enough that vscode will never fully dominate.
I keep coming back to vscode for multiple reasons including: multiple selection, remote ssh sessions, and relatively simple configuration of everything including plugins. I’ve never noticed significant performance issues in vscode, while I have spent countless hours learning vim and emacs even to do basic configuration. That has taken away from development time. I don’t regret that time, but it is a trade off decision everyone has to make.
I went back to Sublime when it seemed the writing was on the wall for Atom, and I’m currently pretty happy with it – much of the same basic features, and at least my battery life is a lot better. I’ve flirted with Emacs for years, and I’m pretty proficient with elisp, but I always hit a point where the constant mental friction of switching between “one-set-of-platform-wide-conventions-and-keystrokes” and “special-set-of-conventions-and-keystrokes-just-for-emacs” wears me out.
That’s exactly my experience. There’s a very clear lag even while typing. I’m also encountering so many rendering bugs. For example, code warnings show up inline in a very long floating horizontal bar and it’s extremely difficult to scroll through it to read the entire message. The embedded terminals are full of display bugs and characters suddenly start flying all over the place while I type for no obvious reason.
What I found really impressive is the ability to quickly launch a dev Docker container and do all of the work inside it. That’s the only reason I’m using it. It’s sad though that the core editing experience is so subpar compared to Sublime, IntelliJ, Xcode and even TextEdit.
Now that you mention it, you’re right, editors in VS Code do seem to scroll faster than other scrollable areas. I looked into this and found an existing bug report (“After 1.66 update scroll speed is faster”, submitted 2022-03-31) and a workaround: add this to your settings.json:
// Adjust scroll sensitivity to match macOS-native scroll surfaces, by my estimation
// These settings might become unnecessary after this scrolling speed bug is fixed: https://github.com/microsoft/vscode/issues/146403
"editor.mouseWheelScrollSensitivity": 0.5,
"workbench.list.mouseWheelScrollSensitivity": 0.5,
I used atom a little bit back in the day, but found it too hip and it was very sluggish, compared to gedit, grant, scribes, sublime, etc.
I think back in the day, it was very sluggish. But because of those early fails, I still don’t want to touch VS Code, even though I know it’s everywhere. At $DAY_JOB we use IntelliJ so I use that after hours as well (yeah, yeah, I know, talk about sluggishness - but the important parts are fast for me), and for quick and dirty stuff, I’d rather pick Sublime. But yeah, I don’t know where it’ll go in the future; I know I’ll keep my vim scripts up to date in any case.
A reason to keep using Atom instead of VS Code could be the use of telemetry in the latter. Although I’m not sure if Atom lacks this ‘feature’.
I’m not a VS Code fan, but just FYI, Atom does (or did) have telemetry. If that’s something that bothers you, also note there are builds of VS Code that do not (see VSCodium).
At the scale of source code you mention, I’d reassess what risks and threats you have in mind.
First of all, I don’t think it’s possible to manually audit at the scale you mention. Static Analysis might be interesting but you also said it’s minified, which will probably defeat most if not all automatic source code analysis. If the code wasn’t minified I’d start with a regex search just to get a feel for code quality and common (anti)patterns. I’m also working on an eslint plugin to detect and prevent typical XSS sinks (innerHTML and friends) called eslint-plugin-no-unsanitized.
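Concretely, that first regex pass could be as simple as something like the following rough Node sketch (the sink list is only a starting point and the directory walking is deliberately naive):
// sink-scan.js -- rough sketch of a regex sweep for classic DOM XSS sinks.
// Assumes un-minified sources; patterns and paths here are just examples.
const fs = require("fs");
const path = require("path");
const SINKS = [
  /\.innerHTML\s*=/,
  /\.outerHTML\s*=/,
  /document\.write\s*\(/,
  /\binsertAdjacentHTML\s*\(/,
  /\beval\s*\(/,
];
function scan(dir) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) { scan(full); continue; }
    if (!entry.name.endsWith(".js")) continue;
    fs.readFileSync(full, "utf8").split("\n").forEach((line, i) => {
      if (SINKS.some((re) => re.test(line))) {
        console.log(`${full}:${i + 1}: ${line.trim()}`); // flag for manual review
      }
    });
  }
}
scan(process.argv[2] || ".");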
If this is all frontend code (WordPress is PHP so the JS in a plugin is not part of the backend. Correct?), you might want to look at mitigation strategies instead: The only (the main?) risk in frontend JS code is XSS. Maybe it’s worthwhile to experiment with CSP. I’ll admit it’s really hard to do for existing websites and can be a really long process of trial and error.
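As a starting point, a report-only policy lets you see what would break before enforcing anything; something like the header below (the report endpoint is just a placeholder):
Content-Security-Policy-Report-Only: default-src 'self'; script-src 'self'; object-src 'none'; report-uri /csp-report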
WordPress nowadays uses a REST API to talk with the “new” editor called Gutenberg, which is built upon React. npm packages are therefore probably used more in wp-admin (the backend) than in the front end, actually.
For now at least. I’d expect the usage of npm packages in WordPress plugins to continue to grow and perhaps even take on a larger responsibility in theme (frontend) rendering as well.
I agree that the main risk probably comes from unauthenticated users or visitors of a particular website. However, there are also a few sites where we don’t know all the authenticated users, so there is also an internal risk (although I’d assume it to be low) of “adversarial” authenticated users.
I’ve had the same problem, so I’ve been working on a side project to help with this: xray.computer. It lets you view the published code of any npm package, and supports auto-formatting with Prettier for minified code. It’s not a panacea (sometimes, the published code is still unreadable, even when formatted) but it helps.
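If you’d rather do the same thing locally, here is a rough sketch of that formatting step with Prettier’s own API (assuming you’ve already fetched the published file, e.g. from the npm tarball):
// unminify.js -- rough local equivalent of the auto-formatting step, using Prettier.
const fs = require("fs");
const prettier = require("prettier");
async function main() {
  const minified = fs.readFileSync(process.argv[2], "utf8");
  // "babel" handles plain JS; await works with both Prettier 2 (sync) and 3 (async).
  const formatted = await prettier.format(minified, { parser: "babel" });
  fs.writeFileSync(process.argv[2] + ".pretty.js", formatted);
}
main();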
I also like to look up packages on Snyk Advisor to ensure that they get timely updates when vulnerabilities are found.
(Hopefully it’s okay to promote your own side projects here; please let me know if not.)
Thanks for sharing your side-project! It is helpful to see the differences between versions. I can use it to see if a security issue was indeed solved between versions of a package. I’m also using Snyk to get an idea of certain packages.
However, I think my problem is a bit more complex: a WordPress plugin may contain hundreds of interdependent npm packages all neatly bundled and minified. Without access to a package.json or package-lock.json it is quite hard to find out which individual packages have been used. Quite often there is also no public repo available of the development files.
To give an example of my process thus far:
Someone in my team wants to see if we can use plugin X. I download the plugin to have a look at the code. Luckily this plugin includes a non-minified version of the JS file, so I can derive the use of npm packages from it. Using Snyk, I have a look at the first package mentioned: it’s axios. The included version is vulnerable (high & medium severity) and has been for almost a year (note: the latest version of the plugin is 3 months old and does not exclude this vulnerable version in its package.json, which I found in a GitHub repo later on).
Since I have no package.json nor package-lock.json (all I have is the distributed build), I can’t easily update the npm package. I have no clue how this package relates to the other packages and how their versions might depend on each other. Even if I updated the package, all other users of this plugin would still be vulnerable. I contacted the plugin author; he tells me he will update the plugin as soon as possible. As of today, the plugin has still not been updated and no new version has been released. In the meantime two new versions of the axios package have been released.
Every user of plugin X is still vulnerable to the issues mentioned on Snyk, but is this a real problem in this specific WordPress plugin context? I’m not sure how to interpret the high & medium severity in the context of this plugin. How exploitable are these issues & what is the impact of the exploits in the context of this plugin? Do I need to be a logged-in user? Is this something which can be triggered by any visitor? What am I able to do when I can exploit these vulnerabilities? I can only try to find answers to these questions if I’m willing to invest a lot more time into this, which more or less defeats the purpose of using a ‘ready-made’ WordPress plugin. And this is just one package of multiple npm packages used in this plugin. Packages which also have their own dependencies as well….
At this moment I’m wondering if any WordPress plugin using npm packages can be trusted at all.
ps: The way the npm ecosystem is structured is, in my view at least, problematic. Often packages are not like libraries as I’d expect, but look more like a function call or method call. I’d prefer to write these short pieces of code myself instead of depending on external code which also includes extra risks. The very rapid release schedules make it even harder to trust external software (like a WordPress plugin) using npm packages as it seems they cannot keep up with it.
I’m sorry if this seems like an npm rant, but I’m seriously looking for methods on how to deal with these issues so we can use external software (like WordPress plugins) built with npm packages.
I think it’s perfectly reasonable to say no to this plugin.
A WordPress plugin may contain hundreds of interdependent npm packages all neatly bundled and minified. Without access to a package.json or package-lock.json it is quite hard to find out which individual packages have been used. Quite often there is also no public repo available of the development files… At this moment I’m wondering if any WordPress plugin using npm packages can be trusted at all.
I’d be pretty skeptical of any proprietary software that has npm dependencies but doesn’t include its package files. Just taking woocommerce (one of the more popular plugins) for example, they do include their package files in their source code. That ought to be the norm. When you have access to the package files, you can run the following to automatically install patches to vulnerable dependencies and subdependencies.
npm audit fix
Without the package files, the plugin developer left you up a creek.
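One partial workaround when the package files are missing, if you can identify a bundled package and its version by hand (as you did with axios): recreate just that dependency in a throwaway project and let npm audit report on it. Roughly (the version is a placeholder for whatever you found in the bundle):
npm init -y
npm install axios@<version-found-in-the-bundle>
npm audit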
The included version [of axios] is vulnerable (high & medium severity) and has been for almost a year… I’m not sure how to interpret the high & medium severity in the context of this plugin. How exploitable are these issues & what is the impact of the exploits in the context of this plugin?
It really depends on the context. Some subdependencies are only used in the development or build of the dependency. If the vulnerability assumes the subdependency is running in a public node server, it’s probably not relevant to you. The hard part is knowing the difference. Take this vulnerability in glob-parent for example. I see it a lot in single page apps that have a webpack dev server. It’s hard to imagine how a DoS vulnerability is relevant when the only place the code is going to be run is your local development environment. In your case, it’s worth asking, is the axios vulnerability in the HTTP client or the server? If the vulnerability is just in the server and you’re not running a node server, you might be able to ignore it.
The way the npm ecosystem is structured is, in my view at least, problematic. Often packages are not like libraries as I’d expect, but look more like a function call or method call. I’d prefer to write these short pieces of code myself instead of depending on external code which also includes extra risks. The very rapid release schedules make it even harder to trust external software (like a WordPress plugin) using npm packages as it seems they cannot keep up with it.
I agree that npm is problematic, but I have a slightly different take on it. There was a time when it was popular to mock projects like Bower for keeping front-end dependencies separate from node.js dependencies. The thinking was, why use two package managers when there’s one perfectly decent one? As it turns out, security vulnerabilities are impossible to assess without knowing in what environment the code is going to be run. I’m not suggesting we all go back to using Bower, but it would be nice if npm and its auditing tools would better distinguish between back-end, front-end, and build dependencies. As it is now, most npm users are being blasted with vulnerability warnings that probably aren’t relevant to what they’re doing.
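The closest approximation available today, as far as I know, is keeping build tooling in devDependencies and scoping the audit to what actually ships; depending on your npm version that’s something like:
npm audit --omit=dev     # npm 8 and newer
npm audit --production   # older npm versions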
I’d be pretty skeptical of any proprietary software that has npm dependencies but doesn’t include its package files.
I agree. I think this will be one of the main ‘rules’ for deciding if we want to use a particular WordPress plugin. Without the package.json or package-lock.json and npm audit, it is nearly impossible to audit or update (fork) a plugin.
It really depends on the context. Some subdependencies are only used in the development or build of the dependency.
You’ve hit the nail right on the head! And that’s what makes it hard to determine the impact: context.
As far as I can tell, determining context can only be done with (extensive) knowledge on how the npm package has been implemented in the WordPress plugin. It requires knowledge of the WordPress plugin, WordPress, the specific npm package, its dependencies and how all these relate to each other. Just like you said with the axios package.
Creating this context thus requires quite some time & skill, which would be fine if we only had to deal with a few npm packages. However, due to npm’s nature we usually have to deal with many more interdependent packages, which makes manually creating context nearly impossible. So even though there’s npm audit, we’re still left with determining the context of the vulnerabilities it finds. And for this I have yet to find a solution.
PS: npm audit fix is, in my view, not the solution here. It just hides the problems a bit better ;)
Looking at the list, it feels like the motivation for many of these APIs is to help close the gap between Chromebooks and other platforms. I can’t understand it otherwise.
Web MIDI - really? Is there ever going to be a world where music professionals are going to want to work in the web browser instead of in for-purpose software that is designed for high-fidelity low-latency audio?
For Web MIDI there are some nice uses. For example, Sightreading Training heavily benefits from being able to use a connected MIDI keyboard as a controller, rather than having the user use their regular computer keyboard as a piano, which is pretty impractical (and limited).
Another website which uses the Web MIDI API is op1.fun - it uses MIDI to let you try out the sample packs right on the website, without downloading them.
So no, it’s probably never going to be used for music production, but it’s nice for trying things out.
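For anyone wondering what sites like the ones above actually do with the API, the core is pretty small; a rough sketch of listening for key presses from a connected keyboard (no SysEx requested):
// Rough sketch: read note-on messages from any connected MIDI keyboard.
navigator.requestMIDIAccess({ sysex: false }).then((access) => {
  for (const input of access.inputs.values()) {
    input.onmidimessage = (event) => {
      const [status, note, velocity] = event.data;
      // 0x90-0x9F = note-on; velocity 0 is conventionally treated as note-off.
      if ((status & 0xf0) === 0x90 && velocity > 0) {
        console.log(`key down: MIDI note ${note}, velocity ${velocity}`);
      }
    };
  }
});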
Which is why these APIs should be behind a permission prompt (like the notification or camera APIs). Don’t want it? It stays off. If you want it, you can let only the sites that will actually use it for something good have access.
Yeah, +1 on this. So many of these things could be behind a permission. It feels really weird to hear a lot of these arguments when we have webcam integration in the browser and it’s behind a permission. One of the most invasive things is already in there and the security model seems to work exceptionally well!
The browser is one of the most successful sandboxes ever, so having it be an application platform overall is a Good Thing(TM). There’s still a balance, but “request permission to access your MIDI/USB devices” seems to be something that falls into existing interaction patterns just like webcams.
For stuff like Battery Status I feel you would need to offer a stronger justification, but even stuff like Bluetooth LE would be helpful for things like conference attendees not needing to install an app to do some stuff.
I don’t fully understand why webUSB, webMIDI etc permissions are designed the way they are (“grant access to a class of device” rather than “user selects a device when granting access”).
I want some sites to be able to use my regular webcam. I don’t want any to have access to my HMD cam/mic because those are for playing VR games and I don’t do that in Firefox.
However, Firefox will only offer sites “here’s a list of devices in no particular order; have fun guessing which the user wants, and implement your own UI for choosing between them”.
Don’t forget Android Go - Google’s strategy of using PWAs to replace traditional Android apps.
Is there ever going to be a world where music professionals are going to want to work in the web browser instead of in for-purpose software that is designed for high-fidelity low-latency audio?
Was there ever going to be a world where programmers want to edit code in a browser rather than in a for-purpose editor? Turns out, yes there is, and anyway, what programmers want doesn’t really matter that much.
Yes, web MIDI is useful. Novation has online patch editors and librarians for some of their synths, like the Circuit. I’ve seen other unofficial editors and sequencers. And there are some interesting JS-based synths and sample players that work best with a MIDI keyboard.
It’s been annoying me for years that Safari doesn’t support MIDI, but I never knew why.
MIDI doesn’t carry audio, anyway, just notes and control signals. You’d have the audio from the synth routed to your DAW app, or recording hardware, or just to a mixer if you’re playing live.
Apple’s concern seems to be that WebMIDI is a fringe use case that would be most popular for fingerprinting (e.g. figuring out which configuration your OS and sound chipset drivers offer, as an additional bit of information about the system and/or user configuration).
I’d love to see such features present but locked away behind user opt-in, but then it’s still effort to implement, and that’s where this devolves into a resource allocation problem given all the other things Apple can use Safari developers for.
It’s not just fingerprinting that’s the concern.
The WebMIDI standard allows sites to send control signals (SysEx commands to load new patches and firmware updates) to MIDI devices. The concern is that malicious sites may be able to exploit this as an attack vector: take advantage of the fact that MIDI firmware isn’t written in a security-conscious way to overwrite some new executable code via a malicious SysEx, and then turn around and use the MIDI device’s direct USB connection to attack the host.
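To make the shape of that concrete: note input comes through the default permission, but SysEx has to be requested explicitly, and once granted a page can push arbitrary vendor-defined bytes at a device. A sketch of the API surface involved (illustrative only; the payload here is a harmless test-ID frame):
// SysEx access is a separate, explicit request in the Web MIDI API.
navigator.requestMIDIAccess({ sysex: true }).then((access) => {
  for (const output of access.outputs.values()) {
    // A SysEx frame is 0xF0 ... 0xF7 with vendor-defined bytes in between;
    // this is the channel a patch/firmware update (or an exploit) would use.
    output.send([0xf0, 0x7d, 0x00, 0xf7]); // 0x7D = non-commercial/test manufacturer ID
  }
});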
Could this be prevented by making the use of Web MIDI limited to localhost only?
At that point you’re requiring a locally run application (if nothing else, a server), in which case you might as well just have a platform app - if nothing else you can use a web engine with additional injected APIs (which things like Electron do).
@kev considering your background: what do you think of the privacy and security impact of having your details visible and easily scrape-able? I also wonder about webmentions and spam (back in the day with WordPress’ linkbacks and pings): is this an issue, or has it been solved?
Great question! I think as long as you’re sensible, the privacy and InfoSec risks are low. For example, I would never put my location out there as my address, it just says “North West England” which I think is vague enough to not be a privacy concern.
The social profile links are all public anyway, so no issues there; and the email address that I publish is different than the personal email address that my friends, family, tax office, government etc have.
WRT WordPress and spamming, Webmentions come through as WordPress comments, so you can plug them into Akismet to filter out spam. The Webmention plugin also has a mechanism by which you can whitelist certain domains, so you can automatically allow Webmentions from certain sites and edit this list as you go. There’s a little work at first, but once you have it set up and tuned, there’s little to do in terms of managing spam.
Thank you. Good to hear you can allow certain sites and prevent others from sending Webmentions. I’ll have a closer look at Webmentions and see if I can add them to a new project I’m planning.
Work: hopefully finishing up a project which has been going on for quite some time. Ready to start working on something new, but first I need to dot some i’s and cross some t’s…
Personal: Trying to work out a bit more instead of being glued to screens, which is more or less my natural habitat ;)
I just lost 2.5TB of important data and I’ll be recovering/rebuilding that information all week or maybe all month. And I also live in Iran so I can’t pay for stuff (I explained/cried here: https://social.tchncs.de/@arh/104105168565243534), so I’ll be searching for someone/some foundation that can pay the costs of a year of WordPress hosting.
How much traffic/storage are you expecting for your WordPress host? Is it connected to those 2.5TB of data? I am asking because your personal host seems to already run on https://www.autistici.org/, and as you probably know, they run a Wordpress farm at https://noblogs.org/ . I guess that’s insufficient for your needs?
Completely unrelated to that 2.5 TB. My whole posts take about 200 KiB right now (yes, that small) and I probably won’t use more than 100 MiB of bandwidth monthly. Yes, my website is currently hosted on Autistici/Inventati, but they don’t support blogs (PHP/database) with a personal domain, and I always gave my domain as my main address and want to use my personal domain for my blog. Noblogs doesn’t support custom domains either (however I have a blog there and I update it whenever I update my blog on my domain). If I can’t find hosting (that is libre and privacy-focused), I’ll continue using a static site.
I read your mastodon post. You may be able to use Jekyll on your phone. This is a post by Aral Balkan on how he’s using Hugo on his phone to power his website. See https://ar.al/2018/07/30/web-development-on-a-phone-with-hugo-and-termux/ Perhaps his experience can help you set something similar up on your phone?
That won’t work for me. I’d need to build my website, which uses some gems, and then upload it to Autistici. Local gems won’t work on a phone. Even if it worked, I need something that works everywhere the same and takes less time. But thanks for your message. I really appreciate it.
Creating a memorial site for my deceased cats. This was prompted by the death of my almost two-year-old cat Max by a careless driver. Also staying in isolation as much as possible & washing my hands.
Personal: Continue working on our project VC4ALL to enable people to stay in touch with video calls using Jitsi in the Netherlands. We have set up one Jitsi server ourselves (thanks to an anonymous sponsor!) and four organizations/people donated their own instance. Also washing hands and staying at home, isolating ourselves as much as we can.
Work: The usual but instead of just me working from home, everyone is here. It’s a challenge, but we’ll manage ;)
I’ll be looking forward to power consumption figures (namely for idle, but also comparing joules per job against other models). One of my use cases is battery-backed raspis.
Hey Bjorn. Vaccine fridge temperature monitoring. Battery-backed more than battery-powered.
In my locale there are certain rules about how many hours above certain temperature thresholds vaccines can be held before you have to phone it in to NSW health; and potentially dump all of the vaccines (especially if you don’t have accurate logged data of the temp profile for the event). For obvious reasons they err on the side of caution, however losing vaccines is also a pretty big thing (getting replacements is slow and $$$$). When something bad happens you want detailed and accurate data.
Current solutions involve off-the-shelf fridge monitors. Beautiful little dedicated boxes that run off CR2032 cells and use a thermocouple. Unfortunately the ones I’ve used suck in a number of ways:
Recording buffer full? Ignore the circular write option (if available) and instead stop recording data immediately.
Need to get the data off the logger? Unplug it from fridge, plug into PC.
Logger unplugged from fridge? Collected data now has spurious 200degC+ peaks and noise.
Proprietary software that can’t zoom the x and y axes separately? Why, did you want to be able to ignore those 200deg peaks or something?
A much bigger problem with these units is that they are passive, not reactive. If something goes wrong they don’t tell you. Instead you have to regularly go through the hoops to extract, archive and analyse the data yourself. Doctors and nurses are busy; the less they have to do, the better they can help patients.
Also note that most of these fridges have built-in temp monitoring and min-max reporting in the form of a few buttons and an LCD display (or similar). Staff tend to be more fluent with these interfaces (to a certain degree – accidental button presses have also been known to lead to full thaws :P). Unfortunately these don’t record time periods or full profiles, so they are good indicators to regularly check but useless for further analysis of events.
The current plans involve a raspi2 + a bank of D-cell alkaline batteries + a string of temp sensors going into the fridge. Basic concept (a rough code sketch follows the list):
Automatically records temps and emails a report of them out every week (solving the menial work and archiving problems).
Sends alerts live if temp-time thresholds are breached (eg fridge door left open).
Externally monitored so alerts are also sent if site power (and hence internet) are cut, or if the unit is otherwise abused.
Able to continue running on D cells in the meantime for somewhere between 1-2 days (by my calcs).
Simple dumb lights on the front of box to say “all is good” or “there’s a problem”.
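A minimal sketch of the read-and-check loop, assuming DS18B20-style 1-wire probes exposed via the kernel’s w1_therm sysfs interface (paths, thresholds and the alert hook are all illustrative):
// fridge-probe.js -- minimal sketch of the read-and-check loop.
// Assumes DS18B20-style 1-wire sensors via the w1_therm kernel driver.
const fs = require("fs");
const path = require("path");
const W1_DIR = "/sys/bus/w1/devices";
const MAX_C = 8; // illustrative upper threshold
const MIN_C = 2; // illustrative lower threshold
function readSensors() {
  return fs.readdirSync(W1_DIR)
    .filter((d) => d.startsWith("28-")) // DS18B20 family code
    .map((d) => {
      const raw = fs.readFileSync(path.join(W1_DIR, d, "w1_slave"), "utf8");
      const match = raw.match(/t=(-?\d+)/); // millidegrees C
      return { id: d, celsius: match ? Number(match[1]) / 1000 : null };
    });
}
setInterval(() => {
  for (const { id, celsius } of readSensors()) {
    console.log(new Date().toISOString(), id, celsius); // append to the weekly log/report
    if (celsius !== null && (celsius > MAX_C || celsius < MIN_C)) {
      // TODO: send the live alert (email/SMS) described above
      console.error(`ALERT: ${id} out of range at ${celsius} C`);
    }
  }
}, 60 * 1000); // sample once a minute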
I’ll be throwing this together over the next few weeks, then running it in parallel with the existing temp monitor solution. Depending on how hard the paperwork gets it might continue to exist in parallel, but when something goes wrong I suspect it will provide much more useful data (and alerts) than the current solutions.
(I could ramble further about thermal mass, temp sensors positioning, and how I think existing solutions are somewhat biased, but this is probably already too long :D).
Back to raspis: given the low workload for a fridge monitor, the primary consideration is idle current draw. As other people have mentioned, the Pi 3’s tend to eat a lot more than the Pi 2’s, and there are differently clocked variants of the same models too. This new Pi 4’s SoC is fabbed differently, so it will be interesting to see whether they have put any effort into idle draw.
PO4 chemistry is nice. Unfortunately it’s a single cell design, so I’ll get only a tiny fraction of what 6 alkaline D cells will give me:
LiFePO4 18650: 1.5Ah * 3.2V nom ~= 5Wh per cell
Alkaline D-cell: 15Ah * 1.5V nom ~= 25Wh per cell (and I plan to use 6 in series)
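Rough sanity check on the runtime claim above, assuming somewhere around 3-6W average draw for the Pi 2 plus sensors and conversion losses:
6 * 25Wh ~= 150Wh for the bank
150Wh / 3-6W ~= 25-50h, i.e. roughly the 1-2 days mentioned above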
I originally considered rechargeables, but I think it’s not as great of an idea for this usage scenario. Blackouts happen seldom (eg once or twice a year) and the cost of new D cells is minimal to the business.
Most of all: as much as I expect to be maintaining these directly myself, if I can’t, then I’d much prefer to be giving instructions over the phone on how to replace D cells versus how to fix a puffy LiPo or a failed Li* cylindrical cell.
Hi Hales, thanks for your detailed reply! Sounds like a good use for the Pi. Have you considered the Pi Zero (W), since you mentioned idle current draw? My plan is building a mostly solar-powered weather station which includes UV and CO2 measurements. As far as I can tell this combination is either hard to find or non-existent.
Pi zero: I think it came down to an issue of stock and availability when I did the parts order. Otherwise yes, the only thing I’d need to confirm is whether or not the Zero has a native 1-wire interface.
UV: which UV? What sort of sensor do you have in mind?
CO2: currently liaising with a company trying to use these. Be wary of any chip that says it measures “eCO2” or is otherwise a solid-state TVOC chip. If you read far enough in, you discover that the CO2 levels are invented based off a lookup table describing a “common indoor” relationship between TVOC and CO2; i.e. complete lies with a long list of assumptions. It looks like you need devices bigger than a single chip (normally little plastic chambery things a few cm big on a PCB module, IR reflection internally?) to actually measure CO2.
I’ve tried making a laptop-type device on the Pi, and the power consumption of the 3B was just brutal; I had to fall back to the 2 to have any hope of making it run. Even then I only got under an hour of battery life on a battery that would last a PocketCHIP a couple of hours.
Edit: the pocketCHIP also has onboard circuitry for pass-thru charging of a lipo, which is a lot harder than you’d think to handle yourself when building something like that. It seems like they’re going full-speed-ahead on making the Pi a better desktop replacement and not really concerned much about the portable use case.
I gave this as a talk at CircleCityCon, and actively use the tablet to investigate and attack my own radio assets. Obviously, it could be built and used on non-self assets. But that would be bad, so don’t do that.
Mine was an XBMC system that could handle being suddenly powered down without warning. Power failures are common where my parents live in India, and there is a blip in power when battery backup from the inverter kicks in, so it only needed to ride through a few seconds until the household battery backup could take over.
In the end, I ran out of time on my last visit and just cloned a bunch of SD cards so they just sub in a new one when the old one breaks on power failure.
Currently going back and forth between learning some Golang, playing with ESP8266 MCUs, and studying for my LPI certification to close any gaps in my self-taught Linux knowledge. Specialization is for insects; besides, having lots of interest in all kinds of (technical) topics helps with being self-employed :)
Welp, we all saw this coming. Microsoft already has two editors (VS and VS Code), keeping a third around was unsustainable.
(Though I am curious what are the reasons for continuing to use Atom instead of VS Code. By all accounts, as well as in my own experience, it’s far slower and clunkier than its counterparts.)
EDIT: the founder of Atom, nathansobo, commented on the related HN and r/programming discussions that they are working on a new editor focused on speed and real-time collaboration, Zed.
It’s slower, but depending on what features you heavily rely on, I’d honestly call VSCode the clunkier of the 2. For something I’m in and out of dozens of times a day, the Project-wide Find (and Find-and-Replace) experience in VSCode is terrible compared to Atom’s, and hasn’t seen significant improvement in years. There’s a couple of different “make VSCode’s search work like Atom/Sublime” plugins out there, but they’re mostly broken in my experience.
Tons of other things are similar – the settings UI is a lot more pleasant in Atom than the weird way it works in VSCode. Atom just generally traded off a focus on speed in favor of a much more polished experience, generally.
All that said, I saw the writing on the wall and went back to Sublime last year. It’s not as polished as Atom, but less of a jankfest than VSCode, and its search interface was what Atom’s was based on anyways. Bonus points for not setting my battery on fire by running an entire webbrowser just to draw text on a screen.
Interesting to read this point of view. But a bit baffling, I should say. I used atom a little bit back in the day, but found it too hip and it was very sluggish, compared to gedit, grant, scribes, sublime, etc.
When visual studio code came, it felt snappier, launched quicker, and personally I found it to be more pleasant to use. The UI was more focus on being functional than visually slick. Git integration was done just the way I like it. As well as other rminor but I portant details such as mru file switching. Built in terminal. Split screen.
This is very much an opinion. But I find VSCode to be just a well designed and executed product. With pragmatism taking the central role. Kind of Microsoft showing that it is still capable of releasing useful software.
How Microsoft manages the feature requests and bugs on the public vscode issue tracker never fails to impress me. There is a ton of professionalism shown towards a free and open source product. Product owners clearly communicate status and engage feedback.
Very much different tastes. I booted it up after posting this, and still find it incredibly grating to use. They’ve screwed up the page “weight” or scrolling speed (on the Mac, at least), so that a sweep of my fingers on the trackpad when I’m scrolling a file is much, much faster than it is in any other app on the system (whereas scrolling in Sublime feels like scrolling in Safari feels like scrolling in the Terminal feels like, etc). It gives the whole thing a floaty, jittery feeling and screws up my years of scrolling muscle memory (cynically, I half wonder if they use a faster-than-system-scroll-speed to perpetuate the whole “VSCode is fast” marketing, because the rest of it, from the app startup to doing a search, is noticeably laggy)
Like the preference pane’s spastic sidebar flying open and closed by itself as you scroll down the unbelievably long single preference page (which the floatyness of the scrolling in general exacerbates) it just behaves like nothing else on the system, and that makes the whole experience grating and weird compared to everything else. It’s like nails on a chalkboard, just a constant low-level stream of annoyances, to me.
But if it’s your cup of tea, all power to you. Editor monocultures are the last thing we need.
The thing I dislike the most about VSCode vs other editors is that it uses the Microsoft method of text selection. So if you hover over a character and click + drag the cursor, it won’t highlight the character that you are hovering over. Whereas in a macOS text field/editor it does.
So in VSCode when I am forced to use it, I tend to miss-select a bunch of text all the time making it an extremely frustrating experience.
The same issue exists in other interfaces that try to emulate some sort of text selection, the AWS console for example has a way to use AWS SSM to connect to a remote system and get a terminal in your browser. It has the same selection behavior.
When applications don’t match the behavior of the OS they are running on it becomes an incredibly jarring experience.
I agree that inconsistency is frustrating, but I cannot reproduce the inconsistency you describe in macOS 10.14 Mojave. Whether I’m using VS Code, TextEdit, or Finder, when I hover over a character, then click and drag, the selection starts from whichever side of the character the mouse cursor was closest to. I never see, for example, the selection start at the right side of an ‘O’ if I position my cursor on the left side of the ‘O’ before dragging.
I agree 100%. I’m a long time vim user, but have switched to using both vscode and emacs. I interchange between the two, and am interested in newer projects too such as helix. All monocultures are bad and have awful side effects. That said, I think editors are personal enough that vscode will never fully dominate.
I keep coming back to vscode for multiple reasons including: multiple selection, remote ssh sessions, and relatively simple configuration of everything including plugins. I’ve never noticed significant performance issues in vscode, while I have spent countless hours learning vim and emacs even to do basic configuration. That has taken away from development time. I don’t regret that time, but it is a trade off decision everyone has to make.
If I may ask, what do you use?
I went back to Sublime when it seemed the writing was on the wall for Atom, and I’m currently pretty happy with it – much of the same basic features, and at least my battery life is a lot better. I’ve flirted with Emacs for years, and I’m pretty proficient with elisp, but I always hit a point where the constant mental friction of switching between “one-set-of-platform-wide-conventions-and-keystrokes” and “special-set-of-conventions-and-keystrokes-just-for-emacs” wears me out.
That’s exactly my experience. There’s a very clear lag even while typing. I’m also encountering so many rendering bugs. For example, code warnings show up inline in a very long floating horizontal bar and it’s extremely difficult to scroll through it to read the entire message. The embedded terminals are full of display bugs and characters suddenly start flying all over the place while I type for no obvious reason.
What I found really impressive is the the ability to quickly launch a dev Docker container and do all of the work inside it. That’s the only reason I’m using it. It’s sad though that the core editing experience is so subpar compared to sublime, IntelliJ, Xcode and even TextEdit.
Atom was just as bad though in my experience.
Now that you mention it, you’re right, editors in VS Code do seem to scroll faster than other scrollable areas. I looked into this and found an existing bug report and a workaround.
Bug report: After 1.66 update scroll speed is faster, submitted 2022-03-31
WorkaroundAdd this to your
settings.json
:I think back in the day, it was very slugish. But because of those early fails, I still don’t want to touch VS Code, even though I know it’s everywhere. At $DAY_JOB we use IntelliJ so I use that after hours as well (yeah, yeah, I know, talk about slugishness - but the important parts are fast for me), and for quick and dirty stuff, I rather pick Sublime. But yeah, I don’t know where it’ll go in the future, I know I’ll keep my vim scripts up to date in any case.
Muscle memory? I mean, if it still works for you, why incur switching costs?
A reason to keep using Atom instead of VS Code could be the use of telemetry in the latter. Although I’m not sure if Atom lacks this ‘feature’.
I’m not a VS Code fan, but just FYI Atom does have/had telemetry. If that’s something that bothers you also note there are builds of VS Code that do not (see vscodium).
At the scale of source code you mention, I’d reassess what risks and threats you have in mind.
First of all, I don’t think it’s possible to manually audit at the scale you mention. Static Analysis might be interesting but you also said it’s minified, which will probably defeat most if not all automatic source code analysis. If the code wasn’t minified I’d start with a regex search just to get a feel for code quality and common (anti)patterns. I’m also working on an eslint plugin to detect and prevent typical XSS sinks (innerHTML and friends) called eslint-plugin-no-unsanitized.
If this is all frontend code (WordPress is PHP so the JS in a plugin is not part of the backend. Correct?), you might want to look at mitigation strategies instead: The only (the main?) risk in frontend JS code is XSS. Maybe it’s worthwhile to experiment with CSP. I’ll admit it’s really hard to do for existing websites and can be a really long process of trial and error.
WordPress nowadays uses a REST API to talk with the “new” editor called Gutenberg which is built upon React. The use of npm packages therefor might be more used in the wp-admin (backend) than in the front-end actually. For now at least. I’d expect the usage of npm packages in WordPress plugins continue to grow and perhaps even take a larger responsibility in theme (frontend) rendering as well.
I agree that the main risk probably is unauthenticated users or visitors of a particular website. However there’s also a few where we don’t know all the authenticated users so there is also an internal risk (although I’d assume it to be low) of “adversarial” authenticated users.
Ah well. That’ll include csrf and role-specific issues which might make stuff even more challenging. Wish I had more ideas at this point.
I’ve had the same problem, so I’ve been working on a side project to help with this: xray.computer. It lets you view the published code of any npm package, and supports auto-formatting with Prettier for minified code. It’s not a panacea (sometimes, the published code is still unreadable, even when formatted) but it helps.
I also like to look up packages on Snyk Advisor to ensure that they get timely updates when vulnerabilities are found.
(Hopefully it’s okay to promote your own side projects here; please let me know if not.)
It is absolutely acceptable to mention your own side projects here, so long as it’s not spammy. We have the “show” tag for a reason. :)
Thanks for sharing your side-project! It is helpful to see the differences between versions. I can use it to see if a security issue was indeed solved between versions of a package. I’m also using Snyk to get an idea of certain packages.
However, I think my problem is a bit more complex: a WordPress plugin may contain hundreds of interdependent npm packages all neatly bundled and minified. Without access to a package.json or package-lock.json it is quite hard to find out which individual packages have been used. Quite often there is also no public repo available of the development files.
To give an example of my process thus far:
Someone in my team wants to see if we can use plugin X. I’m downloading the plugin to have a look at the code. Luckily this plugin has included a non-minified version of the js file. I can derive the use of npm packages from this file. Using Snyk I have a look at the first package mentioned. It’s axios. The included version is vulnerable (high & medium severity) and has been for almost a year (Note: the last version of the plugin is 3 months old and does not exclude this vulnerable version in it’s package.json which I found in a Github repo later on).
Since I have no package.json nor package-lock.json (all I have is the distributed build) I can’t easily update the npm package. I have no clue as to how this package relates to the other packages and how their version might depend on each other. Even if I would update the package, all other users of this plugin are still vulnerable. I contacted the plugin author. He tells me he will update the plugin as soon as possible. The plugin is (as of today) still not updated & has not released a new version. In the meantime there have been two new versions of the axios package released.
Every user of plugin X is still vulnerable to the issues mentioned on Snyk, but is this a real problem in this specific WordPress plugin context? I’m not sure how to interpret the high & medium severity in the context of this plugin. How exploitable are these issues & what is the impact of the exploits in the context of this plugin? Do I need to be a logged in user? Is this something which can be triggered by any visitor? What am I able to do when I can exploit these vulnerabilities? I can only try to find answers to these questions if I’m willing to invest a lot more time into this, which more or less beats the purpose of using a ‘ready-made’ WordPress plugin. And this is just one package of multiple npm packages used in this plugin. Packages which also have their own dependencies as well….
At this moment I’m wondering if any WordPress plugin using npm packages can be trusted at all.
ps: The way the npm ecosystem is structured is, in my view at least, problematic. Often packages are not like libraries as I’d expect, but look more like a function call or method call. I’d prefer to write these short pieces of code myself instead of depending on external code which also includes extra risks. The very rapid release schedules makes it even harder to trust external software (like a WordPress plugin) using npm packages as it seems they cannot keep up with it.
I’m sorry if this seems like a npm rant, but I’m seriously looking for methods on how to deal with these issues so we can use external software (like WordPress plugins) built with npm packages.
I think it’s perfectly reasonable to say no to this plugin.
I’d be pretty skeptical of any proprietary software that has npm dependencies but doesn’t include its package files. Just taking woocommerce (one of the more popular plugins) for example, they do include their package files in their source code. That ought to be the norm. When you have access to the package files, you can run the following to automatically install patches to vulnerable dependencies and subdependencies.
Without the package files, the plugin developer left you up a creek.
It really depends on the context. Some subdependencies are only used in the development or build of the dependency. If the vulnerability assumes the subdependency is running in a public node server, it’s probably not relevant to you. The hard part is knowing the difference. Take this vulnerability in glob-parent for example. I see it a lot in single page apps that have a webpack dev server. It’s hard to imagine how a DoS vulnerability is relevant when the only place the code is going to be run is your local development environment. In your case, it’s worth asking, is the axios vulnerability in the HTTP client or the server? If the vulnerability is just in the server and you’re not running a node server, you might be able to ignore it.
I agree that npm is problematic, but I have a slightly different take on it. There was a time when it was popular to mock projects like Bower for keeping front-end dependencies separate from node.js dependencies. The thinking was, why use two package managers when there’s one perfectly decent one? As it turns out, security vulnerabilities are impossible to assess without knowing in what environment the code is going to be run. I’m not suggesting we all go back to using Bower, but it would be nice if npm and its auditing tools would better distinguish between back-end, front-end, and build dependencies. As it is now, most npm users are being blasted with vulnerability warnings that probably aren’t relevant to what they’re doing.
I agree. I think this will be one of the main ‘rules’ for deciding if we want to use a particular WordPress plugin. Without the package.json or package-lock.json and npm audit it is near impossible to audit or update (fork) a plugin
You’ve hit the nail right on the head! And that’s what makes it hard to determine the impact: context.
As far as I can tell, determining context can only be done with (extensive) knowledge on how the npm package has been implemented in the WordPress plugin. It requires knowledge of the WordPress plugin, WordPress, the specific npm package, its dependencies and how all these relate to each other. Just like you said with the axios package.
Creating this context thus requires quite some time & skills, which would be fine if we would deal with just a few npm packages. However due to npm’s nature we usually have to deal with way more interdependent packages, which makes manually creating context near impossible. So even though there’s
npm audit
we’re still left with how to determine the context of the vulnerabilities found bynpm audit
. And for this I have yet to find a solution.PS:
npm audit fix
is in my view not the solution to solve this. It just hides the problems a bit better ;)Looking at the list, it feels like the motivation for many of these APIs is to help close the gap between Chromebooks and other platforms. I can’t understand it otherwise.
Web MIDI - really? Is there ever going to be a world where music professionals are going to want to work in the web browser instead of in for-purpose software that is designed for high-fidelity low-latency audio?
For Web MIDI there are some nice uses, for example, Sightreading Training - this site heavily benefits from being able to use a connected MIDI keyboard as a controller, rather than having the user use their regular keyboards as a piano, which is pretty impractica (and limited).
Another website which uses the Web MIDI API is op1.fun - it uses MIDI to let you try out the sample packs right on the website, without downloading it.
So no, it’s probably never going to be used for music production, but it’s nice for trying things out.
Not everything that’s “nice” should be shipped to millions of people, “just in case”.
Which is why these APIs should be behind a permission prompt (like the notification or camera APIs). Don’t want it, it stays off, if you want it, you can let only the sites that will actually use it for something good have access.
yeah +1 on this. So many of these things could be behind a permission. It feels really weird to hear a lot of these arguments when we have webcam integration in the browser and its behind a permission. Like one of the most invasive things are already in there and the security model seems to work exceptionally well!
The browser is one of the most successful sandboxes ever, so having it be an application platform overall is a Good Thing(TM). There’s still a balance, but “request permission to your MIDI/USB devices” seems to be something that falls into existing interaction patterns just like webcams.
Stuff like Battery Status I feel like you would need to offer a stronger justification, but even stuff like Bluetooth LE would be helpful for things like conference attendees not needing to install an app to do some stuff.
I don’t fully understand why webUSB, webMIDI etc permissions are designed the way they are (“grant access to a class of device” rather than “user selects a device when granting access”).
I want some sites to be able to use my regular webcam. I don’t want any to have access to my HMD cam/mic because those are for playing VR games and I don’t do that in Firefox.
However, Firefox will only offer sites “here’s a list of devices in no particular order; have fun guessing which the user wants, and implement your own UI for choosing between them”.
Don’t forget Android Go - Google’s strategy of using PWAs to replace traditional Android apps.
Was there ever going to be a world where programmers want to edit code in a browser rather than in a for-purpose editor? Turns out, yes there is, and anyway, what programmers want doesn’t really matter that much.
Yes, web MIDI is useful. Novation has online patch editors and librarians for some of their synths, like the Circuit. I’ve seen other unofficial editors and sequencers. And there are some interesting JS-based synths and sample players that work best with a MIDI keyboard.
It’s been annoying me for years that Safari doesn’t support MIDI, but I never knew why.
MIDI doesn’t carry audio, anyway, just notes and control signals. You’d have the audio from the synth routed to your DAW app, or recording hardware, or just to a mixer if you’re playing live.
Apple’s concern seems to be that WebMIDI is a fringe use-case that would be most popular for finger printing (e.g. figuring out which configuration your OS and sound chipset drivers offer, as an additional bit of information about the system and/or user configuration).
I’d love to see such features present but locked away behind user opt-in but then it’s still effort to implement and that where this devolves into a resource allocation problem given all the other things Apple can use Safari developers for.
It’s not just fingerprinting that’s the concern.
The WebMIDI standard allows sites to send control signals (SysEx commands to load new patches and firmware updates) to MIDI devices. The concern is that malicious sites may be able to exploit this as an attack vector: take advantage of the fact that MIDI firmware isn’t written in a security conscious way to overwrite some new executable code via a malicious SysEx, and then turn around and use the MIDI device’s direct USB connection to attack the host.
Could this be prevented by making the use of Web MIDI limited to localhost only?
At that point you’re requiring a locally run application (if nothing else, a server), in which case you might as well just have a platform app - if nothing else you can use a web engine with additional injected APIs (which things like Electron do).
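As a rough illustration of the “web engine with injected APIs” approach: in an Electron-style app the preload script exposes a hand-picked native surface to the page instead of a blanket browser API. The `midi:list`/`midi:sysex` handlers here are hypothetical, just to show the shape.

```typescript
// preload.ts -- sketch of "injected APIs": the renderer stays a normal web
// page, but only sees whatever the preload deliberately exposes.
// "midi:list" and "midi:sysex" are hypothetical main-process handlers,
// not real Electron APIs.
import { contextBridge, ipcRenderer } from "electron";

contextBridge.exposeInMainWorld("nativeMidi", {
  listDevices: (): Promise<string[]> => ipcRenderer.invoke("midi:list"),
  sendSysex: (deviceId: string, bytes: number[]): Promise<void> =>
    ipcRenderer.invoke("midi:sysex", deviceId, bytes),
});
```

The page then calls `window.nativeMidi.listDevices()` like any other API, but the app author decides exactly which devices and commands are reachable.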
Ok, thanks for your reply.
It may be worth pointing out that Apple also has some incentive to protect its ecosystem of native music apps.
I personally wish there was incentive to make the Music app remember where I was and in which playlist I was in… :-/
@kev considering your background: what do you think of the privacy and security impact of having your details visible and easily scrapable? I also wonder about Webmentions and spam (back in the day with WordPress’ linkbacks and pings): is this an issue, or has it been solved?
Great question! I think as long as you’re sensible, the privacy and InfoSec risks are low. For example, I would never put my location out there as my address, it just says “North West England” which I think is vague enough to not be a privacy concern.
The social profile links are all public anyway, so no issues there; and the email address that I publish is different than the personal email address that my friends, family, tax office, government etc have.
WRT WordPress and spamming, Webmentions come through as WordPress comments, so you can plug them into Akismet to filter out spam. The Webmention plugin also has a mechanism by which you can whitelist certain domains, so you can automatically allow Webmentions from certain sites and edit the list as you go. There’s a little work at first, but once you have it set up and tuned, there’s little to do in terms of managing spam.
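(For anyone wanting Webmentions outside WordPress: the protocol side is small. You discover the target’s advertised endpoint, then POST `source` and `target` to it as form data. A rough sketch, which only checks the HTTP Link header and ignores endpoints advertised in the HTML:)

```typescript
// Rough sketch of sending a Webmention per the W3C spec: find the target's
// advertised endpoint, then POST `source` and `target` as form data.
// Endpoint discovery via an HTML <link rel="webmention"> tag is omitted.
async function sendWebmention(source: string, target: string): Promise<void> {
  const res = await fetch(target, { method: "HEAD" });
  const link = res.headers.get("link") ?? "";
  const match = link.match(/<([^>]+)>;\s*rel="?webmention"?/);
  if (!match) throw new Error("no webmention endpoint advertised");

  // Resolve a possibly relative endpoint against the target URL.
  const endpoint = new URL(match[1], target).toString();
  await fetch(endpoint, {
    method: "POST",
    headers: { "content-type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ source, target }),
  });
}
```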
Thank you. Good to hear you can allow certain sites and prevent others from sending Webmentions. I’ll have a closer look at Webmentions and see if I can add them to a new project I’m planning.
Work: hopefully finishing up a project which has been going on for quite some time. Ready to start working on something new, but first I need to dot some i’s and cross some t’s…
Personal: Trying to work out a bit more instead of being glued to screens, which is more or less my natural habitat ;)
I just lost 2.5 TB of important data and I’ll be recovering/rebuilding that information all week, or maybe all month. I also live in Iran so I can’t pay for stuff (I explained/cried here: https://social.tchncs.de/@arh/104105168565243534), so I’ll be searching for someone or some foundation that can cover the costs of a year of WordPress hosting.
How much traffic/storage are you expecting for your WordPress host? Is it connected to those 2.5TB of data? I am asking because your personal host seems to already run on https://www.autistici.org/, and as you probably know, they run a Wordpress farm at https://noblogs.org/. I guess that’s insufficient for your needs?
Completely unrelated to that 2.5 TB. My posts take up about 200 KiB right now (yes, that small) and I probably won’t use more than 100 MiB of bandwidth monthly. Yes, my website is currently hosted on Autistici/Inventati, but they don’t support blogs (PHP/database) with a personal domain, and I’ve always given my domain as my main address, so I want to use my personal domain for my blog. Noblogs doesn’t support custom domains either (however I have a blog there and I update it whenever I update the blog on my domain). If I can’t find hosting that is libre and privacy-focused, I’ll continue using a static site.
I read your mastodon post. You may be able to use Jekyll on your phone. This is a post by Aral Balkan on how he’s using Hugo on his phone to power his website. See https://ar.al/2018/07/30/web-development-on-a-phone-with-hugo-and-termux/ Perhaps his experience can help you set something similar up on your phone?
That won’t work for me. I need to build my website, which uses some gems, and then upload it to Autistici; local gems won’t work on a phone. Even if they did, I need something that works the same everywhere and takes less time. But thanks for your message, I really appreciate it.
Creating a memorial site for my deceased cats. This was prompted by the death of my almost-two-year-old cat Max, hit by a careless driver. Also staying in isolation as much as possible & washing my hands.
Personal: Continue working on our project VC4ALL to enable people to stay in touch with video calls using Jitsi in the Netherlands. We have set up one Jitsi server ourselves (thanks to an anonymous sponsor!) and four organizations/people have donated their own instances. Also washing hands and staying at home, isolating ourselves as much as we can.
Work: The usual but instead of just me working from home, everyone is here. It’s a challenge, but we’ll manage ;)
I’ll be looking forward to power consumption figures (mainly for idle, but also comparing joules per job against other models). One of my use cases is battery-backed raspis.
Hopefully Larabel will provide some on Phoronix in the next week (?) or so.
Hales, I’m curious. What are your use-cases for battery-powered Raspberry Pi’s?
Hey Bjorn. Vaccine fridge temperature monitoring. Battery-backed more than battery-powered.
In my locale there are certain rules about how many hours above certain temperature thresholds vaccines can be held before you have to phone it in to NSW health; and potentially dump all of the vaccines (especially if you don’t have accurate logged data of the temp profile for the event). For obvious reasons they err on the side of caution, however losing vaccines is also a pretty big thing (getting replacements is slow and $$$$). When something bad happens you want detailed and accurate data.
Current solutions involve off-the-shelf fridge monitors. Beautiful little dedicated boxes that run off CR2032 cells and use a thermocouple. Unfortunately the ones I’ve used suck in a number of ways:
A much bigger problem with these units is that they are passive, not reactive. If something goes wrong they don’t tell you. Instead you have to regularly go through the hoops to extract, archive and analyse the data yourself. Doctors and nurses are busy; the less they have to do, the better they can help patients.
Also note that most of these fridges have built-in temp monitoring and min-max reporting in the form of a few buttons and an LCD display (or similar). Staff tend to be more fluent with these interfaces (to a certain degree – accidental button presses have also been known to lead to full thaws :P). Unfortunately these don’t record time periods or full profiles, so they’re good indicators to check regularly but useless for further analysis of events.
The current plans involve a raspi2 + a bank of D-cell alkaline batteries + a string of temp sensors going into the fridge. Basic concept:
I’ll be throwing this together over the next few weeks, then running it in parallel with the existing temp monitor solution. Depending on how hard the paperwork gets it might continue to exist in parallel, but when something goes wrong I suspect it will provide much more useful data (and alerts) than the current solutions.
(I could ramble further about thermal mass, temp sensors positioning, and how I think existing solutions are somewhat biased, but this is probably already too long :D).
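(If it helps picture the software side, here’s a rough sketch of what a monitoring loop like this can look like, assuming DS18B20-style 1-wire sensors, which Linux’s w1-therm driver exposes as sysfs files. The 8 °C threshold and the alert hook are illustrative placeholders, not the actual design described above.)

```typescript
// Rough sketch of a fridge-monitoring loop on a Pi, assuming DS18B20-style
// 1-wire sensors exposed by the kernel's w1-therm driver under sysfs.
// The threshold and the alert callback are illustrative placeholders.
import { readFileSync, readdirSync } from "node:fs";

const W1_DIR = "/sys/bus/w1/devices";
const MAX_TEMP_C = 8; // example upper bound; actual rules differ by jurisdiction

function readSensorC(id: string): number {
  // w1_slave ends with a line like "... t=23125" (millidegrees Celsius).
  const raw = readFileSync(`${W1_DIR}/${id}/w1_slave`, "utf8");
  const match = raw.match(/t=(-?\d+)/);
  if (!match) throw new Error(`bad reading from ${id}`);
  return Number(match[1]) / 1000;
}

function checkFridge(alert: (msg: string) => void): void {
  // DS18B20 devices show up with the family code prefix "28-".
  const sensors = readdirSync(W1_DIR).filter((d) => d.startsWith("28-"));
  for (const id of sensors) {
    const temp = readSensorC(id);
    console.log(new Date().toISOString(), id, temp.toFixed(3));
    if (temp > MAX_TEMP_C) alert(`${id} at ${temp.toFixed(1)} degC`);
  }
}

// e.g. sample every minute; the alert hook could send an SMS or email instead.
setInterval(() => checkFridge((msg) => console.error("ALERT:", msg)), 60_000);
```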
Back to raspis: given the low workload for a fridge monitor, the primary consideration is idle current draw. As other people have mentioned, the pi 3’s tend to eat a lot more than the pi 2’s, and there are differently clocked variants of the same models too. The new pi 4’s SoC is fabbed differently, so it will be interesting to see whether or not they have put any effort into idle draw.
You may be interested in https://lifepo4wered.com/lifepo4wered-pi+.html, which I’ve found makes a pretty good UPS right out of the box.
Thanks minimax.
LiFePO4 chemistry is nice. Unfortunately it’s a single-cell design, so I’ll get only a tiny fraction of what 6 alkaline D cells will give me:
I originally considered rechargeables, but I think it’s not as great of an idea for this usage scenario. Blackouts happen seldom (eg once or twice a year) and the cost of new D cells is minimal to the business.
Most of all: as much as I expect to be maintaining these directly myself, if I can’t then I’d much prefer to be giving instructions over the phone on how to replace D cells than on how to fix a puffy LiPo or a failed Li* cylindrical cell.
Hi Hales, thanks for your detailed reply! Sounds like a good use for the pi. Have you considered the pi zero (W), since you mentioned idle current draw? My plan is building a mostly solar-powered weather station which includes UV and CO2 measurements. As far as I can tell this combination is either hard to find or non-existent.
Pi zero: I think it came down to an issue of stock and availability when I did the parts order. Otherwise yes, the only thing I’d need to confirm is whether or not the Zero has a native 1-wire interface.
UV: which UV? What sort of sensor do you have in mind?
CO2: currently liaising with a company trying to use these. Be wary of any chip that says it measures “eCO2” or is otherwise a solid-state TVOC chip. If you read far enough you discover that the CO2 levels are invented based off a lookup table describing a “common indoor” relationship between TVOC and CO2; i.e. complete lies with a long list of assumptions. It looks like you need devices bigger than a single chip (normally little plastic chambery things a few cm big on a PCB module, IR reflection internally?) to actually measure CO2.
I’ve tried making a laptop-type device on the Pi, and the power consumption of the 3B was just brutal; I had to fall back to the 2 to have any hope of making it run. Even then I got under an hour of battery life on a battery that would last a PocketCHIP a couple of hours.
Edit: the PocketCHIP also has onboard circuitry for pass-through charging of a LiPo, which is a lot harder than you’d think to handle yourself when building something like that. It seems like they’re going full speed ahead on making the Pi a better desktop replacement and aren’t really concerned about the portable use case.
This is my purpose: a SigInt tablet capable of 50 MHz-1200 MHz SDR, Mousejack, and 802.11abgn.
https://hackaday.com/2019/06/05/mobile-sigint-hacking-on-a-civilians-budget/
I gave this as a talk at CircleCityCon, and actively use the tablet to investigate and attack my own radio assets. Obviously, it could be built and used on non-self assets. But that would be bad, so don’t do that.
Mine was an XBMC system that could handle being suddenly powered down without warning. Power failures are common where my parents live in India, and there is a blip in power when battery backup from the inverter kicks in, so I only needed a few seconds of hold-up until the household battery backup could take over.
In the end, I ran out of time on my last visit and just cloned a bunch of SD cards so they just sub in a new one when the old one breaks on power failure.
If you’re interested in this, you might find Blogmesh - a PoC on using RSS feeds for this - interesting as well. Have a look at https://blogmesh.org/
Currently going back and forth between learning some Golang, playing with ESP8266 MCUs, and studying for my LPI certification to close any gaps in my self-taught Linux knowledge. Specialization is for insects; besides, having lots of interest in all kinds of (technical) topics helps with being self-employed :)