That could have been really bad. I hear that one way to get around these things is to add a resistor between one of the terminals and the thermal sensing terminal, such that the unit thinks there’s a battery and it is working fine.
The solution to “batteries catch fire” is probably not “start fucking around with the circuits to trick the controllers”
This is about tricking a device, which is usually battery dependent, into thinking it is using a battery when it is in fact running from an external regulated power source. The author could have an alternative battery chemistry like lead acid or lithium iron phosphate, which carries far less risk in unsupervised settings, regulated down to the needs of the device.
Trying out a post-processing pass that turns a virtual DOM into a JSON-friendly intermediate representation. On Memorial Day I tried something similar where it was transformed into whitespace-aware plaintext.
Even with this valid criticism, GraphQL is years ahead of OpenAPI in everything having to do with type safety (I challenge you to write a complex OpenAPI Nest.js project with a valid schema). GraphQL comes with great tooling for consuming types, schema governance, and documentation, while offering unique consumer-side safety and ergonomics with fragments. Although it’s far from perfect, I think it’s the best we’ve got so far, and future protocols should start by learning from and adopting GraphQL’s philosophy.
Swagger typedefs are a nightmare, if only for the way they intermix details of an object property with details of that property’s type. It’s way too complicated in some areas and not descriptive enough in others; the official spec simply leaves out about half of the most important details as “basically like JSON Schema”, which is another, even more overcomplicated system.
I haven’t used OpenAPI that much, so I can’t really say that with confidence, but this has been my feeling as well. I agree with you that, type-wise, this is much easier to work with, which I try to convey at the end. I don’t want us to stop at GraphQL though.
Definitely agree with you on OpenAPI. I’ve observed my team and an adjacent team rework their implementations to fit schemas that could be expressed. The result was clunky, confusing, and consistently delayed tasks. One of the key issues was, if I recall, sum types.
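For what it’s worth, OpenAPI 3 can express a sum type with `oneOf` plus a `discriminator`, but codegen support is spotty, and that mismatch is often where the clunkiness shows up. A minimal sketch of the mechanism (the schema names and payloads here are invented for illustration):

```python
# Hypothetical OpenAPI 3 sum type: oneOf + discriminator, as a plain dict,
# with a naive dispatcher that routes a payload to its variant schema.
payment_schema = {
    "oneOf": [
        {"$ref": "#/components/schemas/CardPayment"},
        {"$ref": "#/components/schemas/BankPayment"},
    ],
    "discriminator": {
        "propertyName": "kind",
        "mapping": {
            "card": "#/components/schemas/CardPayment",
            "bank": "#/components/schemas/BankPayment",
        },
    },
}

def resolve_variant(schema: dict, payload: dict) -> str:
    """Return the $ref of the variant selected by the discriminator property."""
    disc = schema["discriminator"]
    return disc["mapping"][payload[disc["propertyName"]]]

print(resolve_variant(payment_schema, {"kind": "card", "number": "4242"}))
# -> #/components/schemas/CardPayment
```

A generator that doesn’t understand `discriminator` tends to flatten this into an untyped union, which is roughly the reworking pain described above.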
I also feel that protobuf ought to get a mention too, especially for gRPC APIs. Protobuf felt more rigid and trustworthy (to review and rely upon over time) as an IDL, while GraphQL was more fun and flexible. GraphQL’s interactive tools (hardly any useful tools exist for gRPC) are so ergonomic that I would not hesitate to recommend it for client-to-server APIs.
The OpenAPI specification allows using JSON Schema to describe quite sophisticated types. It’s really up to the codegen implementation to translate that into something appropriate for the type system of the target language. If anything, it’s probably a bit too powerful for most implementations to cover all the edge cases that can be specified.
In terms of schema governance and documentation it’s not that difficult to upload the schema and its documentation page (kinda trivial with Swagger/Rapidoc/etc.) alongside the API itself.
I am studying and practicing WebGL, there’s an idea I had back in January that I would like to materialize. WebGPU seems interesting, though Safari does not support it yet and I would like what I make to be accessible to a large range of platforms.
I also just listened to The Emperor’s Soul, a short novel. It is a really fun one that celebrates artists and those who care about their craft.
Professional problem solvers is accurate. My org tried to use some product that wrapped around Google Sheets and used webhooks to integrate with our primary application. It just became a constant drain on an engineer to debug and diagnose issues. Later the same engineer built an in-house microservice to handle it and we’ve had few issues since. The editor of the no-code/low-code product could not solve the problem and could not diagnose the problems they introduced. At the same time, that editor could do things the engineer could not: directing customer support.
The pipe dream behind low code is to provide independence to everyone while engineering resources are constrained. Buyers of these services find out later that there is no shortcut to understanding the problem to be solved.
I am confused by this comment. If you are referring to Amazon, then it seems they didn’t implement much, if any, cryptography and used an off-the-shelf component (wpa_supplicant). If you are referring to the article’s author, then I sort of get it, because the article has several issues that tell me they shouldn’t be near any real cryptography system without a lot more training and experience. The author suggests hashing the wifi password, which doesn’t make sense since the password needs to be usable for the WPA protocol. You can hash and pre-compute the key, but that is still usable to connect to the wifi. They conflate that with password storage in a system that is a password validator, not the client. Then they added a note suggesting perhaps encrypting it in a proprietary format (which screams Kerckhoffs’s principle violation) and lamented that it would still be decryptable. Ultimately this device has to be able to boot unattended and connect to wifi, so their options for defense are limited.
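To make the hashing point concrete: WPA2-PSK already derives the pairwise master key from the passphrase with PBKDF2, so a device could store only the derived PSK instead of the plaintext passphrase, but that derived value is itself the credential needed to join the network, which is exactly why hashing buys nothing here. A quick sketch of the standard derivation:

```python
import hashlib

def wpa2_psk(passphrase: str, ssid: str) -> bytes:
    # WPA2-PSK key derivation per IEEE 802.11i: PBKDF2-HMAC-SHA1,
    # SSID as salt, 4096 iterations, 256-bit output.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

# Widely published IEEE 802.11i test vector: passphrase "password", SSID "IEEE".
print(wpa2_psk("password", "IEEE").hex())
```

An attacker who pulls this 32-byte PSK off the flash can join the network just as well as with the passphrase itself.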
To me, this article seems to lack a discussion of the threat model. This is an embedded device with limited resources. I have no doubt the wifi password is recoverable if you crack it open.
I had similar takeaways on my reading, but keep in mind the author is 14 years old.
Honestly the bit about exfiltrating the Spotify API key by shorting out a capacitor during boot is pretty impressive on its own.
Certainly impressive at that age. It does reiterate my point about needing more training and experience to work with cryptographic systems.
They roll their own crypto for fun and to learn, but don’t deploy it in production. They show it to experienced cryptographers to learn what they did wrong (notes experienced cryptographers don’t ever stop doing this). They go through a lot of rounds of peer review for their work until it is accepted as being probably not wrong. They do this huge amount of work so that people like me can avoid rolling our own crypto, because the effort involved in doing it right is way more then is worthwhile for any single project.
I think you misunderstand “Don’t roll your own crypto”. It doesn’t say don’t build your own crypto, but you have to keep in mind that it’s probably completely insecure. So if you have built your own crypto, don’t use it. If you are lucky, someone will look at your crypto and explain its problems to you.
To get practical experience with crypto you can also look at some known bugs and try to exploit them. I think there is an online course for this.
This issue isn’t about rolling their own crypto. Good secrets management is a hard problem that sometimes pulls in applied cryptographers.
I am developing empathy for those in the commercial printing industry, both the technicians and the illustrators. One of my volunteer projects is to unblock a convention’s registration system. Color and alignment are just so hard on these things. They come out looking like some blurry CRT.
Also Pillow (python) saves images at 75% JPEG quality in PDFs and there’s no way to change it without patching Pillow.
This was engaging to read, thank you for sharing.
An app that takes a day to compile… sounds like a nightmare.
Honestly this is a great way to wrap up an evening. Something might feel small, but all that information was in pieces, and real effort and experience went into combining those details. It would be great if this were more celebrated, even for the small things.
A cute and cheeky response! I love the hacky mechanism and never thought to try that.
Thank you for sharing!
I am looking at making webauthn (server and client side) from scratch without big libraries that do everything. Understanding CBOR is my next goal and I might compare it with ASN.1 DER in the future in writing.
Do I understand correctly that this sort of attack would not be possible with TOTP? I have TOTP as the 2FA mechanism on my AWS account and there is no push notification involved. I don’t see how somebody could spam me into accepting them as a login device for AWS.
Is push-based 2FA more secure in normal operation, or why would I want to use it?
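For context on the difference: a TOTP code is computed entirely on the client as an HMAC over a shared secret and the current time step, so there is no server-initiated push channel for an attacker to spam. A minimal RFC 6238 sketch:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: HOTP (RFC 4226) applied to the moving factor T = time // step.
    counter = struct.pack(">Q", unix_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: secret "12345678901234567890", T=59, SHA-1.
print(totp(b"12345678901234567890", 59, digits=8))   # -> 94287082
```

Both sides run this same function and compare results; the only network traffic is the user typing the code in, which is why the spam attack in the article doesn’t translate.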
I have a lot of thoughts about this, but see here for them because I don’t want to type them all out again.
TOTP can be brute forced without the account owner knowing.
Push notifications from a vendor like Duo were part of the incident Uber experienced.
For those wishing to keep push notifications, look into “number matching”: Active Directory + Microsoft Authenticator and Duo both offer it, where the account owner enters digits shown on screen into their device. While this solution makes the attack harder, I think it is still capable of being phished.
I collaborated with cadey on the post they linked next to this comment, consider reading push 2FA considered harmful.
I don’t think the brute-force argument against TOTP is that convincing. If you aren’t counting failed logins and locking accounts after X failures, then it’s a problem with your implementation, not the concept as a whole.
I still agree with your conclusions though. TOTP fails because it sucks to use, and webauthn looks to be a better alternative.
If you aren’t counting failed logins and locking accounts after X failures, then it’s a problem with your implementation
I completely agree with this. But whether through ignorance, misconfiguration, or the absence of any such feature, this check may not be performed on some SSO deployments or independent services, and that is where brute force becomes viable.
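To put rough numbers on why a missing lockout matters, here is a back-of-envelope calculation. It assumes six-digit codes and a server that accepts a ±1 time-step window (three valid codes per guess), which is common but not universal:

```python
# Odds of guessing a 6-digit TOTP with no lockout, under the assumptions above.
VALID_CODES = 3          # accepted window of +/- 1 time step
SPACE = 10 ** 6          # six decimal digits
P_PER_TRY = VALID_CODES / SPACE

def p_success(attempts: int) -> float:
    """Probability at least one guess lands within `attempts` tries."""
    return 1 - (1 - P_PER_TRY) ** attempts

print(f"10k attempts:  {p_success(10_000):.1%}")
print(f"100k attempts: {p_success(100_000):.1%}")
```

With no rate limit, a patient attacker who already has the password gets to roughly a one-in-four chance after 100k tries, which is exactly what failure counting is supposed to prevent.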
Seems like the easy solution to this sort of attack would be to implement an exponential backoff for 2FA tokens, or give them a “lock until I turn it back on” option. That said, I’ve never considered needing such options before, so…
Exponential backoff might reduce the noise sent to specific individuals, but in practice the attacker will just source more targets. Rate limiting on specific IPs is also somewhat ineffective. Given that this stage happens after a password is entered and found to be correct, I would recommend that a password reset be issued after some number of attempts. Somewhere between 5 and 10 MFA failures seems reasonable to me.
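A sketch of that policy might look like the following; the threshold and the reset mechanism are my assumptions, not any particular vendor’s behavior:

```python
# Hypothetical sketch: after a small number of MFA failures following a
# correct password, invalidate the password instead of merely backing off.
class MfaGate:
    def __init__(self, max_failures: int = 5):
        self.max_failures = max_failures
        self.failures: dict[str, int] = {}   # user -> consecutive MFA failures
        self.reset_required: set[str] = set()

    def record_failure(self, user: str) -> None:
        self.failures[user] = self.failures.get(user, 0) + 1
        if self.failures[user] >= self.max_failures:
            self.reset_required.add(user)    # force a password reset

    def record_success(self, user: str) -> None:
        self.failures.pop(user, None)        # a success clears the counter

    def must_reset(self, user: str) -> bool:
        return user in self.reset_required
```

The point of resetting the password rather than locking the account is that the password is already known to be compromised by the time MFA guessing starts.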
I suppose. I’m certainly not an expert but it seems like sourcing more targets would get expensive quickly, since you need a known password. But yeah, forcing a password reset after X failed attempts is probably the way to go. I was thinking that locking the account for 24 hours or something might be a decent alternative, since my brain hates things that fail in a disabled state forever, but after thinking about it more that’s exactly what you want here.
This theoretical case may imply the IT team’s credentials are broadly compromised if they have to reach for break-glass credentials instead of resolving it for one another. I would not trust their hardware to safely interact with it at that stage; it smells like a keylogger.
An MFA failure is surely a highly unusual event. Why would you ever reject one unless you requested one?
I’d give it 2 attempts max! Especially if it isn’t TOTP, so there are no typos.
Yubikey failures happen quite frequently for me actually — the first time I authenticate with a yubikey usually fails via NFC or on windows, and it’s always only the second or third time that works.
Purely because the connection fails or Windows’ auth stuff breaks.
On Linux it’s more reliable, but it’s still an issue.
For reference, I’ve usually got several yubikeys enrolled and/or connected, which some systems apparently don’t like all that much?
Or just an off button in the client (MS Authenticator has one already). You should only ever get a notification in response to something that you’ve done, so if you get them at other times then you’re either under attack or there’s a bug. Either way, you can turn them off until you try to do a thing that should trigger one (and notify your support folks in the interim).
It’s something I’d like to write about; Reed-Solomon codes are incredible. Most technical presentations on them pursue rigorous definitions of bounds and theory, and that isn’t too accessible.
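As a taste of how accessible the core idea can be, here is a toy erasure-only Reed-Solomon sketch over the prime field GF(257), sidestepping the usual GF(2^8) machinery: the message becomes the coefficients of a polynomial, shares are its evaluations, and any k surviving shares recover it by Lagrange interpolation. This is illustrative only, not how production codecs are built:

```python
P = 257  # a small prime field; real codecs use GF(2^8)

def poly_mul(a: list[int], b: list[int]) -> list[int]:
    # Multiply polynomials (coefficients low-order first) mod P.
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def encode(message: list[int], n: int) -> list[tuple[int, int]]:
    # Treat the message as polynomial coefficients, evaluate at x = 1..n.
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(message)) % P)
            for x in range(1, n + 1)]

def decode(points: list[tuple[int, int]], k: int) -> list[int]:
    # Lagrange interpolation: any k points determine the k coefficients.
    points = points[:k]
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                basis = poly_mul(basis, [(-xj) % P, 1])   # times (x - xj)
                denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P             # Fermat inverse
        for d, b in enumerate(basis):
            coeffs[d] = (coeffs[d] + scale * b) % P
    return coeffs

shares = encode([10, 20, 30], 6)   # 3 data symbols spread across 6 shares
print(decode(shares[3:], 3))       # any 3 shares suffice -> [10, 20, 30]
```

The rigorous treatments are about doing this efficiently and correcting errors (not just erasures), but the “polynomial through k points” picture is the whole intuition.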
I am at DEF CON this weekend and the panels are good. Turns out TOTP isn’t great in practice with a patient adversary. Looking forward to seeing more.
Plenty of normal dayjob stuff, otherwise post processing an Abstract Syntax Tree / Intermediate Representation with typescript. Unit tests too. It was quite mundane to make a switch tree for 40 symbols.
It’s looking like I will share some code in CI and on the edge for processing this JSON friendly structure.
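For the curious, one common way to tame a large switch tree is a dispatch table keyed on node kind. The node shapes below are hypothetical stand-ins for the real symbol set, and this is a Python sketch of the pattern rather than the TypeScript actually in use:

```python
# Hypothetical sketch: post-processing an AST into a JSON-friendly dict via a
# dispatch table keyed on node kind, instead of one large switch tree.
def visit_text(node: dict) -> dict:
    return {"kind": "text", "value": node["value"]}

def visit_list(node: dict) -> dict:
    return {"kind": "list", "items": [transform(c) for c in node["children"]]}

HANDLERS = {"text": visit_text, "list": visit_list}

def transform(node: dict) -> dict:
    handler = HANDLERS.get(node["kind"])
    if handler is None:
        raise ValueError(f"no handler for node kind {node['kind']!r}")
    return handler(node)
```

With 40 symbols this stays one flat table plus 40 small functions, which is easier to unit test per symbol than a monolithic switch.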
This sounds relevant to my interests, is there any place you’re sharing more details about your TS work I can go read? Or could you go into a little more detail about it here if it’s not private work stuff?
I’ll likely write about it after I get it done!
My blog source is something between LaTeX and sort-of lispy markdown. My goal is to escape its proprietary and quirky markup format. As a feasibility test, I transformed its virtual DOM-like structure into a whitespace-aware, RFC-like .txt file. It was a success! It would be less work and more beneficial in the long run to switch to a new platform with extraction and transformation than to rewrite all the content word for word.
After this is done, I could theoretically parse markdown, put it into my structure, and transform it into another format with ease, much like how pandoc can go from markdown to LaTeX.
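As a hedged sketch of what such a transform can look like, here’s a tiny recursive renderer from an assumed virtual-DOM-like node schema (not my blog’s actual format) to whitespace-aware plain text:

```python
import textwrap

# Minimal sketch: render a virtual-DOM-like tree to whitespace-aware plain
# text in the spirit of an RFC .txt file. The node schema is an assumption.
def render(node, width: int = 72) -> str:
    if isinstance(node, str):
        return node
    children = "".join(render(c, width) for c in node.get("children", []))
    if node["kind"] == "heading":
        return children.upper() + "\n\n"
    if node["kind"] == "para":
        return textwrap.fill(children, width) + "\n\n"
    return children  # container/inline nodes pass through

doc = {"kind": "doc", "children": [
    {"kind": "heading", "children": ["Abstract"]},
    {"kind": "para", "children": ["This memo sketches a tree-to-text pass."]},
]}
print(render(doc))
```

Swapping the per-kind branches for a markdown or LaTeX emitter is what makes the pandoc-style “one tree, many backends” approach work.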
This technique is vital for static application security testing and for source-to-source transformation to fix vulnerabilities and perform updates at scale. Consider how much engineering effort and risk is avoided by automating OpenSSL minor and patch upgrades at Microsoft / Amazon / Google.