You might miss the point of this article if you didn’t read to the end - essentially, all modern web browsers have so much messy C with so many bugs with security implications that they don’t have time to file CVEs for all of them. He argues that, though imperfect, Rust at least has a chance of fully replacing C and dramatically increasing safety in the software we use every day.
The .NET stack never existed as a viable platform for the popular/consumer startups. That said, there have been many “startups” in industries aligned with the enterprise that have been very successful with .NET. You just don’t hear about them because their value proposition was very different, and they weren’t a startup in the same way. Azure and modern .NET are trying to change things, but I think the former makes the biggest difference.
Yes, those types of “startups” are usually started by insiders in some vertical industry and exist in the funding, sales, and recruiting ecosystem for that industry. They never really touch the Silicon Valley / tech startup funding and recruiting ecosystem.
I’ve been happy-enough with LastPass - I can’t point to any reason beyond inertia, so really what I’m curious about in this thread: are there any significant differentiators that could sway a person to switch?
To my knowledge, at least by staying mainstream there’s a team of individuals working on the product. I’ve used LastPass for years, and while there have been issues in the past … there is a large userbase and community scrutinizing it.
Going the self-hosted route negates a lot of that large community, and the trial by fire already accrued by legacy solutions like LastPass.
They also provide an export mechanism …
I’ve stuck with LastPass for a while. AFAIK, no security issues that I’ve judged to be significant. I appreciate that, compared to the other solutions that I know of, it seems to be widely compatible and simple to use on all platforms.
Only minor beef that I have is that the browser plugins, or at least the Chrome one, seems to have gotten slower and a little bit buggier over time instead of better and faster.
I use LastPass, but am not happy with it, as in the past, it had some pretty serious security issues:
I would switch to 1Password, but it does not have Linux support (edit: it has a browser extension for Linux, which is suboptimal, but probably better than LastPass). I’ve almost talked myself into switching to KeePass, but I’ll have to find out how trustworthy the iOS version is.
*checks GitHub* Nope, no PRs for any of these as far as I can tell.
Although, I suppose our server load is already pretty low anyways. Still, it never hurts to cut response times where we can.
They opened two issues, but neither was usable (linked in my other comment). We do have a couple hotspots, but in general serving a couple hundred records to a few thousand people per day is hard to get wrong.
I’m happy with WSL on Windows 10 for all of my unix-y needs on Windows. I’d say I actually prefer it to running a bare-metal Linux install on my own hardware. I run the bash terminal through the Cmder command-line interface, and it works fine with tmux, Vim, and every programming environment I’ve tried so far, including Ruby, Python, Go, Rust, and C, along with related utilities like ssh, git, and other command-line tools. The only issues I have are that switching directories seems oddly slow, and Control-Space doesn’t seem to make it into the terminal.
For graphical applications, I stick with ordinary Windows applications: web browsers, picture viewers, music/video players, etc.
One other minor issue: Linux applications can access the Windows filesystem, but Windows applications can’t access the Linux filesystem. So if you ever want to use any GUI applications on your code, the project has to live in a Windows directory. But then you can’t really use the Linux CLI Git client anymore, because it doesn’t like what NTFS does with file permissions. So you have to do all of your Git work with a Windows client. Not too big of a deal IMO, but some may find it annoying.
Because knock packets use a timestamp to limit knock-reuse attacks, servers and clients must have synchronized clocks. Clock skew greater than 60 seconds is likely to cause the knock process to fail, resulting in a “connection refused” error when establishing the TCP connection.
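The time-window scheme described above can be sketched generically: derive the knock token from a shared secret and the current time window, and accept one adjacent window on each side to tolerate small skew. This is a hypothetical illustration of the idea, not Oxy’s actual algorithm:

```python
import hashlib
import hmac

WINDOW = 60  # seconds per time step, matching the ~60s skew tolerance

def knock_token(secret: bytes, now: int) -> str:
    """Derive a short token from the shared secret and the current time window."""
    step = now // WINDOW
    return hmac.new(secret, str(step).encode(), hashlib.sha256).hexdigest()[:16]

def verify(secret: bytes, token: str, now: int) -> bool:
    """Accept tokens from the current window or one window on either side."""
    step = now // WINDOW
    return any(
        hmac.compare_digest(
            token,
            hmac.new(secret, str(s).encode(), hashlib.sha256).hexdigest()[:16],
        )
        for s in (step - 1, step, step + 1)
    )
```

The ±1 window acceptance is exactly why skew beyond about a minute breaks the handshake: the client’s token lands outside every window the server will check.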
Given this, if my server’s clock does fall out of sync, how do I connect to it to fix it if Oxy is my only access method?
Software crashes and networks disconnect. Even if you start NTP you can still end up with an inaccessible server.
So if you have no network connectivity for ntp to update, how are you going to log in with any remote access tool?
A software crash of ntpd is unrealistic.
NTP has an exponential back off mechanism for retrying connection, it’s possible that it could get out of sync and not have ticked over when the network is back up, causing time skew but still having the machine allow network connections. Depending on the configuration, it could also plain shut down when it cannot get a connection and not restart when the connection is back.
Saying that software crashes are unrealistic is in itself an unrealistic view in my opinion. Software WILL crash.
Allowing clock skew to prevent connections to the machine definitely adds more moving parts to the remote access process and adds risk to that process failing.
No idea how I’ve never heard of this before. They have a pretty nice method for configuring the options for a CLI app, and nice implementations in a ton of languages. I had been looking for something better for building CLI apps in Python and Rust, and this does the job great.
For Python I use click. Manually writing a help page, which then gets parsed and translated into a command-line parser, seems a bit odd to me. In click, you simply annotate methods instead.
Funny you should say that, one of the reasons I was happy to find this is that I don’t much like the way that Python click works, and there seemed to be few alternatives. It seems severely awkward to me that, in order to build a command suite application with click, I have to:
1. decorate a do-nothing method with @click.group()
2. decorate each subcommand method with @click.command()
3. call the add_command method on the do-nothing method, passing the subcommand methods

It all seems very messy and unintuitive. I could never guess how to set that up or add options to commands etc. without reading the docs. On the other hand, writing the help page in a standard format, calling one command, and getting back a big dict you can use normally seems easy to remember and to lead to clean code. Maybe it doesn’t feel as fancy to if-else through all of the possible commands, but I’ll take not-so-fancy and easy to understand over something that makes no sense without the docs.
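To make the contrast concrete, here’s a toy, stdlib-only sketch of the docopt idea (this is not the real docopt library and is far less capable): the usage text is the interface spec, and parsing argv against it returns a plain dict you can if-else through:

```python
# Toy sketch of the docopt idea: the help text *is* the spec,
# and parsing returns a plain dict. NOT the real docopt library.
USAGE = """Usage:
  greet hello <name>
  greet goodbye <name> [--shout]
"""

def parse(argv):
    """Match argv against each usage pattern; return a dict for the first match."""
    patterns = [line.split()[1:] for line in USAGE.splitlines()[1:] if line.strip()]
    for pattern in patterns:
        args, rest, ok = {}, list(argv), True
        for token in pattern:
            if token.startswith("[--"):       # optional flag
                flag = token.strip("[]")
                args[flag] = flag in rest
                if args[flag]:
                    rest.remove(flag)
            elif token.startswith("<"):       # positional argument
                if not rest or rest[0].startswith("--"):
                    ok = False
                    break
                args[token] = rest.pop(0)
            else:                             # literal command word
                if not rest or rest[0] != token:
                    ok = False
                    break
                args[token] = True
                rest.pop(0)
        if ok and not rest:
            return args
    raise SystemExit(USAGE)
```

For example, `parse(["goodbye", "Alice", "--shout"])` yields a dict with `"goodbye"`, `"<name>"`, and `"--shout"` keys, which is the “big dict” style of dispatch described above.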
Yes, click is not always exactly straightforward or intuitive. It also gets quite ugly when you have many options. Docopt seems nice, but there are many extra niceties in click that I like: I can get a prompt for a missing required argument; I can get proper password input; it can check that an argument is a proper and accessible file; progress bars, input validators, etc. … many nice little things that make it good and worth those little inconveniences (at least for me).
I see what you mean better now. I write mostly small helper scripts and utilities for myself for CLI, and haven’t felt the need for any of those features. I may think again about click if I feel the need for any of that kind of stuff.
Docopt is great and has its domain of use; when things get more demanding I move to click. My rule of thumb is single-file programs get docopt while multi-file ones get click.
I don’t really understand this. Sure, it’s cool to optimize something so well, but I don’t see the point of going to so much effort to reduce memory allocations. The time taken to run this, what it seems like you would actually care about, is all over the place and doesn’t get reduced that much. Why do we care about the number of allocations and GC cycles? If you care that much about not “stressing the GC”, whatever that means, then better to switch to a non-GC language than jump through hoops to get a GC language to not do its thing.
On the contrary, I found this article a refreshing change from the usual Medium fare. Specifically, this article is actually technical, has few (if any) memes, and shows each step of optimization alongside data. More content like this, please!
More to your point, I imagine there was some sort of constraint necessitating it. The fact that the allocation size dropped so drastically fell out of using a pooled allocator.
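The “pooled allocator” idea is language-agnostic; here’s a minimal sketch (in Python purely for illustration — the article’s code is C#) of why reuse collapses total allocations:

```python
class BufferPool:
    """Hand out reusable buffers; only allocate when the free list is empty."""

    def __init__(self, buf_size: int):
        self._free = []
        self._buf_size = buf_size
        self.allocations = 0  # count of real allocations performed

    def acquire(self) -> bytearray:
        if self._free:
            return self._free.pop()
        self.allocations += 1
        return bytearray(self._buf_size)

    def release(self, buf: bytearray) -> None:
        self._free.append(buf)
```

Processing a million records with acquire/release pairs costs one allocation instead of a million, which is the kind of drop the article’s tables show.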
Right at the beginning of the article, it says:
This data is then used to power our real-time calculations. Currently this import process has to take place outside of business hours because of the impact it has on memory usage.
So: They’re doing bulk imports of data, and the extra allocation produces so much overhead that they need to schedule around it (“outside of business hours”). Using 7.5GB may be fine for processing a single input batch on their server, but it’s likely they want to process several data sets in parallel, or do other work.
Sure, they could blast the data through a DFA in C and probably do it with no runtime allocation at all (their final code is already approaching a hand-written lexer), but completely changing languages/platforms over issues like this has a lot of other implications. It’s worth knowing if it’s manageable on their current platform.
They’re doing bulk imports of data, and the extra allocation produces so much overhead that they need to schedule around it
That’s what they claim, but it sounds really weird to me. I’ve worked with plenty of large data imports in GCed languages, and have never had to worry about overhead, allocation, GC details, etc. I’m not saying they don’t have these problems, but it would be even more interesting to hear why these things are a problem for them.
Also of note - their program never actually used 7.5GB of memory. That’s the total allocations over the course of the program, virtually all of which was surely GC’ed almost immediately. Check out the table at the end of the article - peak working set, the highest amount of memory actually used, never budged from 16kb until the last iteration, where it dropped to 12kb. Extra allocations and GC collections are what dropped. Going by the execution time listing, the volume of allocations and collections doesn’t seem to have much noticeable effect on anything. I’d very much like to know exactly what business goals they accomplished by all of that effort to reduce allocations and collections.
You’re right – it’s total allocations along the way rather than the allocation high water mark. It seems unlikely they’d go out of their way to do processing in off hours without running into some sort of problem first (so I’m inclined to take that assertion at face value), though I’m not seeing a clear reason in the post.
Still, I’ve seen several cases where bulk data processing like this has become vastly more efficient (from hours to minutes) by using a trie and interning common repeated substrings, re-using the same stack/statically allocated buffers, or otherwise eliminating a ton of redundant work. If anything, their timings seem suspicious to me (I’d expect the cumulative time to drop significantly), but I’m not familiar enough with the C# ecosystem to try to reproduce their results.
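As a concrete illustration of the interning trick (a generic sketch, not code from the article): map each field value through a cache so identical strings share one object instead of costing one allocation per row:

```python
def intern_rows(rows):
    """Replace repeated field values with a single shared string object.

    cache.setdefault returns the first-seen object for equal strings,
    so duplicates across millions of rows collapse to one instance each.
    """
    cache = {}
    return [tuple(cache.setdefault(field, field) for field in row) for row in rows]
```

On bulk imports where a few hundred distinct values repeat across millions of rows, this kind of deduplication is where the hours-to-minutes wins tend to come from.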
From what I understood, the 7.5GB of memory is total allocations, not the amount of memory held resident, that was around 15 megs. I’m not sure why the memory usage requires running outside business hours.
EDIT: Whoops, I see you responded to a similar comment that showed up below when I was reading this.
The article doesn’t explain why they care, but many garbage collectors make it hard to hit a latency target consistently (i.e. while the GC is running its longest critical section). Also, garbage collection is somewhat expensive (though usually better optimized for short-lived allocations than malloc), and re-using memory makes caches happier.
Of course, there’s a limit to how much optimization one needs for a CSV-like file in the hundreds of MBs…
As shown in the table, they don’t use anywhere close to 8GB of memory at a time. This seems like a case that .NET is already very good at, even at a baseline level.
I have to agree with this. My personal Mac is a 2013 model Macbook, and between how well it still runs and the high price and design compromises in newer Macbooks, I don’t feel much interest in updating it. I am starting to consider replacing it with a Pixelbook, since the price came down to well below $1,000. I already have a cheaper chromebook, but oh those HiDPI screens are so nice.
I have the same model. It’s a really nice machine, but I agree I just don’t see the reason to update. There’s so much more to offer in other ecosystems (especially considering price), and the idea that the answer to long-form document creation in the Apple ecosystem seems to be “iPad Pro with a 3rd party wireless keyboard/mouse” is just…weird. But maybe I’m excessively old-school.
Just to make this not all snark at poor security, here’s an idea I came up with in a few minutes for how to make a smart lock that’s actually secure:
Lock will Bluetooth pair with anything that asks, but requires a long random key to open. The key is in a QR code and printed as a number on a couple of sturdy slips of paper that are in the package the lock comes with. You download the app, scan the QR code, and you can open the lock. Anyone else can get the app and pair with the lock, but can’t open it because they have no way to get the code.
I’m not a security expert, and I only spent a few minutes on this, so it may have some holes. But it’s definitely better than what this smart lock is actually doing.
That sounds good, but there are so many other issues that you’d need to address too, like preventing replay attacks, customers who switch phones and have lost the QR code, how you would manage temporary keys, etc.
It requires more work than this, although most consumer hardware manufacturers are so clueless, they’re not aware of the problem at all, much less how difficult it is. I work for a company that sells security solutions for IoT (Afero) and I could tell you stories that would make your toenails fall off.
Replay attacks.
Like, what happens if I sniff your traffic and play it back once I pair?
A system where both I and the lock talk to another service that generates one-time tokens would probably help.
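Even without a third-party token service, a nonce challenge-response gets you the same one-time property. Here’s a hypothetical sketch (not any real lock’s protocol): the lock issues a fresh random nonce per attempt, the phone answers with an HMAC over it using the key from the QR code, and a sniffed response is useless against any later challenge:

```python
import hashlib
import hmac
import secrets

class Lock:
    def __init__(self, key: bytes):
        self._key = key
        self._nonce = None

    def challenge(self) -> bytes:
        """Issue a fresh random nonce; an old sniffed response won't match it."""
        self._nonce = secrets.token_bytes(16)
        return self._nonce

    def open(self, response: bytes) -> bool:
        """Open only if the response is the HMAC of the current nonce."""
        if self._nonce is None:
            return False
        expected = hmac.new(self._key, self._nonce, hashlib.sha256).digest()
        self._nonce = None  # each nonce is single-use
        return hmac.compare_digest(response, expected)

def phone_respond(key: bytes, nonce: bytes) -> bytes:
    """What the phone app computes from the key it scanned out of the QR code."""
    return hmac.new(key, nonce, hashlib.sha256).digest()
```

The key never goes over the air, and replayed responses fail because every challenge is fresh.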
Yes. The only thing we need to unlock the lock is to know the BLE MAC address. The BLE MAC address that is broadcast by the lock.
Wow, that’s awful! I wonder if anyone has some good lock recommendations that have passed testing with good marks?
You should see the mechanical lock they have that flings extra keys at anyone who rings the doorbell.
Well, compared to this, you could always buy basically anything else, including the cheapest normal lock they have at the corner drugstore. It might not be too hard to cut, but at least it has an actual key and won’t open right up for any cellphone ever made.
Also, according to the author the Tapplock was easier to cut than a normal hardware store padlock: https://twitter.com/cybergibbons/status/1007144017149063168
the cheapest normal lock they have at the corner drugstore
I don’t really understand the huge variety of languages in these things. Chromium is mostly C/C++, okay. And I suppose they need some JS around for internal UI stuff. I suppose it makes sense to have some Python in there too for build automation or something. But why do they need Python and Ruby and Perl and PHP? And Lisp and Go and Scheme and R and Powershell and Sed and so on? I have to wonder if there are good reasons for all that, or if these projects need some language synchronization.
But why do they need Python and Ruby and Perl and PHP?
An attempt at an explanation:
Python: Most integration test running infrastructure inherited from WebKit was written in, and continues to be written in, Python. You can see this lineage by comparing Chromium’s and WebKit’s source trees:
All Python files: https://cs.chromium.org/search/?q=lang:%5Epython$&p=2&sq=package:chromium&type=cs
Ruby: One thing we used Ruby for was a tiny utility for formatting patch files. We just replaced it (CL). There are some other random files.
All Ruby files: https://cs.chromium.org/search/?q=lang:%5Eruby$&sq=package:chromium&type=cs
Perl: Chromium actually vendors in a copy of the Perl language.
All Perl files: https://cs.chromium.org/search/?q=lang:%5Eperl$&sq=package:chromium&type=cs
PHP: Many manual tests are written in PHP since (for better or worse) it’s easy.
All PHP files: https://cs.chromium.org/search/?q=lang:%5EPHP$&sq=package:chromium&type=cs
Other languages:

or if these projects need some language synchronization
Contributions welcome! :)
(but seriously, if you are interested, I’m at jeffcarp@chromium.org for any questions)
To try to answer the question more directly: code gets written in many languages and it takes SWE hours to rewrite it in a different language. If you’re choosing between investigating a P1 bug and rewriting something that already works in a different language, time usually gets spent on the P1 bug.
(source: I work on the Chrome infrastructure team)
Oo good catch, thx - updated my reply. I can find some Emacs Lisp in the codebase but I can’t find any Common Lisp 🤔.
On Scheme code in V8: V8 implements a fast floating-point formatting algorithm which is relatively recent (2010, IIRC), hence likely to be faster than the system printf. As I understand it, the Scheme code is taken directly from the paper.
Interesting, thanks for finding all of that! Looks like Chromium has a lot more third-party libs and testing infrastructure than I thought.
I may just take a look at some of the open-source infrastructure there, though I doubt I’ll have the time or energy to try and make contributions.
Wow, what a rant. I’m very sympathetic to “people should be able to control their devices”, but this rant is missing a number of key factors:
In other words, there are clear and obvious reasons (security and basic functionality) why a small management microcontroller like this needs to exist in a laptop (without requiring an NSA conspiracy to insert it.)
At the same time, I totally agree that it would have been great & less problematic if Google had provided a way for advanced users (who understand the associated risks and loss of security) to disable the TPM-like functionality of this chip (ie Android bootloader unlock or older ChromeOS style). Or even better to provision their own signing key. It’s a shame they didn’t do this[*], although not too surprising given the market demand.
[*] It’s worth noting that even if they had done this, the OP wouldn’t be happy because they still can’t audit the rest of the H1 chip’s firmware, build their own, etc. This is a fair enough concern, but it’s hard to see how Google can mitigate that without either finding a TPM-like chip with a fully open source SDK (…), or provisioning two microcontrollers so it’s possible to physically disable the TPM chip entirely but still have a chip to monitor the battery voltage, make the power button work, etc.
The main issue brought up is that this device allows firmware updates without user authorization or clearing user data.
Honestly, part of me would like to see more open-sourcing of these types of security/management chips and ways for knowledgeable users to disable these things. However, it seems that for every user who is genuinely qualified to do these things and decides to do them, there are from 10 to 100 users who can be convinced to go through the unlock process to see some dancing bunnies or something. For every user who is mad that someone else can unlock their system somehow through some Corporate-controlled process, there are 100 users who will forget all of their passwords and get mad that their hardware is now a brick because nobody can help them unlock it. Possibly including the original user mad at corporate backdoors.
One more piece of paranoia still annoying me:
master the I2C bus, on which, among other things, are to be found the sound card’s microphone
Streaming data via I2C (especially on a shared bus with other devices) would still be a massively inefficient way to do this. I’d be surprised if there’s a digital microphone manufacturer who has chosen this over I2S.
This is one of those things that makes me feel sad about the future of the web. It seems like our only choices are between publishers who want to load up the web with autoplaying videos, ADD-inspiring redirects to more articles, and endless piles of ads and tracking scripts, and tech behemoths that want to put the whole web under their proprietary protocols and in-house services. There don’t seem to be any forces pushing for quality content on readable pages with reasonable, trustworthy ads or some other sort of monetization.
There don’t seem to be any forces pushing for quality content on readable pages with reasonable, trustworthy ads or some other sort of monetization.
I guess the EU is kinda doing that. But I agree, there needs to be a grassroots effort to push back.
I really hate browser notifications. I never click yes, ever. It feels like preventing browsers from going down this hole is just yet another hack. The spammers and the CAPTCHAers are fighting a continuous war, all because of the 2% of people who actually click on spam.
My firefox has that in the settings somewhere:
[X] Block new requests asking to allow notifications
This will prevent any websites not listed above from requesting permission to send notifications. Blocking notifications may break some website features.
help links here: https://support.mozilla.org/en-US/kb/push-notifications-firefox?as=u&utm_source=inproduct
Did anyone find the about:config setting for this, to put in one’s user.js? I am aware of dom.webnotifications.enabled, but I don’t want to disable it completely because there are 3 websites whose notifications I want.
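If memory serves, the checkbox in the settings UI maps to the `permissions.default.desktop-notification` pref (0 = ask, 1 = allow, 2 = block), which blocks new requests by default while still honoring per-site “Allow” exceptions — worth double-checking against your Firefox version before relying on it:

```js
// user.js - block new notification permission requests by default;
// existing per-site Allow exceptions should still apply.
user_pref("permissions.default.desktop-notification", 2);
```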
There always has been in Chrome and Safari, and since very recently there’s also one in Firefox. It’s the first thing I turn off whenever I configure a new browser. I can’t possibly think of anybody actually actively wanting notifications to be delivered to them.
Sure, there are some web apps like Gmail, but even there, I’d rather use a native app for this.
I can’t possibly think of anybody actually actively wanting notifications to be delivered to them.
Users of web-based chat software. I primarily use native apps for that, but occasionally I need to use a chat system that I don’t want to bother installing locally. And it’s nice to have a web backup for when the native app breaks. (I’m looking at you, HipChat for Windows.)
There is a default-deny option in Chrome; it takes a little digging to find, though. But I agree that it’s crazy how widespread sites trying to use notifications are. There are like 1 or 2 sites that I actually want them from, but it seems like every single news site and random blog wants to be able to send notifications. And they usually ask immediately upon loading the page, before you’ve even read the article, much less clicked something about wanting to be notified of future posts.
The only time I have clicked “yes” for notifications is for forums (Discourse only at this point) that offer notifications of replies and DMs. I don’t see a need for any other websites to need to notify me.
Have you tried BGAN? Satellite internet from a panel the size of a laptop. I field tested some of them for a project a few years ago, they worked pretty well and were easy to set up.
We didn’t end up going with them for that project, IIRC because it was for deployment in less remote locations in the US, and they didn’t support the US at the time, and we had good alternatives - cellular data in most places, and we had the transport capability to bring full-size satellite dishes.
We actually have BGANs in some places. From what I understand they’re terribly expensive and we only have a limited number of them, so in a wider-scale deployment it gets very expensive very fast to be distributing multiple BGANs and data plans all over the place.
To give you an idea in a non-emergency context, we may enter a country and start with 20, 30, 49 sentinel reporting sites. But if we go into outbreak mode, we’re talking potentially 100s of locations needing to sync data…
Coverage is actually pretty good with them in Africa as well, but maybe what we could do is use BGANs as a data access point and create local mesh networks or something for denser clusters of reporting sites…
Ah, I see what you mean. I don’t suppose you’ve turned any procurement resources your org has towards seeing if you can get any kind of deal on them for being a humanitarian org? I didn’t really see the prices when I worked with them, but I thought it might not be too bad, since it sounds like your data requirements are pretty low.
Might be possible to do something with using them or some other satellite system as the backhaul for a local mesh network. Though I’d be worried that getting the point-to-point range above a few km and not using massive power would get you into an area where everything is either very technically complex or expensive, or both.
Distaste of Electron in general aside, I think the author does have a point in that you would think we can come up with a way for Electron apps to run more directly in ChromeOS. Do we really need to run basically Chrome on top of a stripped-down Linux, then run another full Linux container inside of that, and run another instance of Chrome inside of that?
Although looking at his apps list: Slack is already an Android app and a web app, and can run fine on ChromeOS in multiple ways. Hyper appears to be some sort of Electron-based terminal. I have no idea why you would want that, but there are several types of terminals on ChromeOS already. I don’t know much about SimpleNote either, but it also looks like it’s already a web app and an Android app. VS Code is the only thing on the list that doesn’t really run in some form on ChromeOS already.
I think the author does have a point in that you would think we can come up with a way for Electron apps to run more directly in ChromeOS.
I think it’s called a web browser.
Web browsers have a lot of UI messiness around them that a “contained app” doesn’t, and they sandbox things enough that you cannot always make a sane experience (you can’t really do VS Code in a web browser - no access to local files).
Funny, I just read this article on how most PGP mail clients have some serious security flaws right after I read this post. I was already inclined to think that really secure email for like 99% of possible uses is hopeless. The most important issue IMO is that whoever you are communicating with is likely to be less paranoid/concerned about this than you are.
The other thing - the NSA and any other potential adversary combined just don’t have the analysis bandwidth or storage to do anything useful with the amount of data they would get from compromising everyone in this way. If they have such valuable vulnerabilities, they aren’t going to spread them all around the world and risk them getting discovered and patched at any time in exchange for data volume that they can’t store or analyze. They save them for very limited application against high-value targets.
I’m not saying they don’t gather massive reams of data where they can do so easily. I do think they don’t want to put their most valuable vulnerabilities at risk of discovery without getting something they really want out of it. Look at how many vulnerabilities made it out into the wider internet and were discovered and patched when that Flame malware spread farther than its controllers intended on the open internet.
Huh, what would a federated message board look like? I guess I could see a reddit-like one where each sub could be on a different server, but you’d have a shared account around them all. Still one server per forum, so you can have consistent ordering of stories and comments. I’m not really sure what the benefit is to anyone of having a shared account among a ton of federated board servers, though. It just preserves reddit weirdness like sharing massively different karma amounts between joke boards and deep research boards.
Lobsters is meant to have one main page though. How would you do consistent ordering of the front page if stories were federated?
what would a federated message board look like?
Usenet, I think. Threaded messages (with different people getting a different, but eventually consistent view of the thread). Each lobste.rs post would be a new top-level thread.
You’d lose voting and ranking on a straight Usenet model, but that would be a small extension (Usenet already supports control messages - you’d just have upvotes/downvotes propagated as a type of control message, with your ‘top level’ view respecting the votes and an aging algorithm, etc.).
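A sketch of how that could work (hypothetical, in Python for illustration): votes propagate as (thread, ±1) control messages, and because the tally is just a commutative sum, peers that eventually see the same set of messages converge on the same scores regardless of delivery order. A front page can then rank with an HN-style aging formula:

```python
def apply_votes(tally, messages):
    """Fold vote control messages into a per-thread tally.

    Addition is order-independent, so peers receiving the same messages
    in any order converge on the same tally (eventual consistency).
    """
    for thread_id, delta in messages:
        tally[thread_id] = tally.get(thread_id, 0) + delta
    return tally

def rank(net_votes: int, age_hours: float, gravity: float = 1.8) -> float:
    """HN-style aging: newer threads need fewer votes to rank highly."""
    return net_votes / (age_hours + 2) ** gravity
```

Consistent *ordering* across peers is harder than consistent tallies, since each peer’s clock and view of “age” can differ - which is the crux of the front-page question above.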
I’m not at all convinced that you need that consistent view. Twitter doesn’t have one - everyone sees their own slice of things that they’re paying attention to.
Having some consistency is a prerequisite for a place to be a community, though, so it would certainly be a very different form of interaction.
I’ve worked at several different places with widely varying processes.
Current place, we do code changes in GitHub PRs with CI and deploy to multiple levels of testing environments for our testing department and our customers’ testing departments to try out. At various points, we designate a release version, do testing and documentation on that, and release it. We have an in-house-built system for deploying onto all of our environments, so by the time something gets released to production, it’s been deployed using the same systems many times. We have a variety of monitoring systems, including a custom application that regularly hits a test route on our services and reports failures, logging services that watch for keywords in our logs, and AWS CloudWatch alarms that send emails.
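The “custom application that regularly hits a test route” pattern is simple to sketch (this is a generic illustration, not our actual tool): request a health endpoint and treat any non-200 response or error as a failure to report:

```python
import urllib.request

def check_health(url: str, timeout: float = 5.0) -> bool:
    """Return True only if the test route answers 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        # DNS failure, refused connection, timeout, HTTP error status, etc.
        return False
```

A cron job or small loop can then call this for each service and page or e-mail on False, which is essentially all such a monitor needs to do.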
My personal projects, most of them have few to no users other than myself. I like to deploy with git pushes and run a script on the server to update anything that needs updating and restart the server process.
One of the more interesting ones was at one of my previous jobs. We tried having a testing environment for our application, but we were never able to find bugs effectively in it. The application’s purpose was to do complex calculations on how chemical processes run, and setting up those calculations was very time-consuming for the Chemical Engineers doing it. Nobody had much enthusiasm about setting up tests in non-production environments that were thorough enough to actually expose bugs in the calculations. After several episodes of catching bugs in production instead of testing, we decided to switch to deploying straight to production for one of the smaller projects, and update other, bigger projects to newer versions after they had been used in the first project for a few weeks or so.