1. 2

    I also disagree with ‘CVE’ by itself being brought on-topic for lobste.rs - and I say this as someone who works in security. The recent Rust CVE is a great example of posts I don’t want to see more of here. It was just an announcement: no context, no write-up, no lessons learned. I can get feeds of CVEs elsewhere. I think we can still have write-ups that cover CVEs submitted under existing tags.

    1. 7

      Well, that was a lot more productive than when I lost that day to making paper clips…

      1. 4

        Does anyone have a sense of the level of effort required to port something like this to run on Firefox?

        1. 5

          Porting this would make for a great starter project if you’re just getting into Firefox extension development!

          Depending on your level of experience it would take a competent developer anywhere from 1 to 4 days to complete the port.

          1. 2

            Indeed, should be quite easy.

        1. 4

          The subtext of this article is hilarious: that GPG is so hard to use that someone decided that it might be less painful to just rewrite the damn thing.

          1. 6

            It’s no secret that GPG shows its age, but the part that the author has rewritten is still only a small fraction of gnupg’s features.

            1. 2

              agreed. another reading - the part of gpg worth using is not worth using gpg for, given the bulk that comes with it.

            1. 15

              Surprisingly, ‘completely’ is not even an exaggeration — the front-end, too, is written in Rust, using the Yew library (lobste.rs thread), which compiles to wasm to run in the browser and render the DOM. Nice!

              I also like the fact that Yew is Elm+React-inspired. The Elm architecture is fantastic to work with. Neat detail, though: where Elm attacks shared mutable state by making the state objects shared but immutable (state + events -> new state), in Yew the state is mutable but unshared (Rust prevents writing code that would cause simultaneous updates.)

              All of the above written without having used Yew myself, only some Elm and hardly any Rust, so please add grains of salt as required.
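
              To make the Elm half of that contrast concrete, here’s a throwaway Python sketch (names made up, grains of salt as above) of state + events -> new state:

              ```python
              from dataclasses import dataclass, replace

              # Elm-style: the model is shared but immutable; every update
              # returns a brand-new state rather than mutating the old one.
              @dataclass(frozen=True)
              class Model:
                  count: int = 0

              def update(model: Model, msg: str) -> Model:
                  if msg == "increment":
                      return replace(model, count=model.count + 1)
                  return model  # unknown events leave the state untouched

              state = Model()
              state = update(state, "increment")
              assert state.count == 1
              ```

              Yew flips this around: a component mutates its own state in place, and the borrow checker is what guarantees nobody else holds a reference to it while that happens.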

              1. 4

                That was also my number one takeaway from the article. I knew about compiling to WASM targets but had no idea the front end story included full frameworks like Yew.

              1. 3

                Two days of team all-hands today and tomorrow - great to see everyone but we all seem to think two days is overkill (except our director).

                Heading to toorcamp via an early ferry on Wednesday, though! I’m excited for a weekend of hacking in the woods with friends.

                1. 6

                  Yes! I’ve learned easily as much from reviewing CRs from Sr. Engineers on my team as I have from their feedback on mine! The best ones I worked with even appreciated the feedback I could offer, and reinforced important lessons about giving honest feedback.

                  There was one instance in particular where I had a question about part of the implementation, but left it unasked, figuring they had already thought through that case and that their current implementation would address it. When a bug came up later I offered the Senior the solution and mentioned I had wondered about it during my review. They were able to stress to me that although experienced they weren’t above mistakes, that they saw value in my reviews, and that I should be pointing things like that out. It was a very valuable lesson for a dev with 1-2 years of experience, and it has stuck with me now that I’m at mid-level, working with engineers both my junior and my senior.

                  1. 3

                    it’s perfect. never change.

                    1. 2

                      Great treatment of the hidden complexity in making a good market in fares. I’d also be interested to learn more about how market participants (airlines) design fares and fare rules to bid into markets.

                      1. 7

                        A couple weeks ago @wezm posted a great article on his process of leaving Mac behind. Part of his post mentioned a pain point around a budgeting app, and the requirements document that he had written to replace it. I had been thinking of taking on a self-hostable YNAB style budgeting system for a while and his post (plus the groundwork he shared) was the appropriate kick to get things started.

                        I’m off work between jobs this week so will be working on envelope_budget. I’m building against @wezm’s spec in Rust with a goal of a minimal-functional CLI release by the end of this week. I roughed out some of the data structures when I got the spec and will be picking up next with serialization, then moving to the CLI binary.

                        1. 4

                          Have you looked at ledger, hledger, and/or beancount?

                          1. 2

                            Yes, and this came up in the thread on Wes’ post here. My first goal is to hit the functional requirements for envelope budgeting, but as I work on serialization I may implement a ledger compatible backend for interoperability with that ecosystem.

                            The value of this project to me is not just the end result, but the process of building it to keep my Rust use current. There doesn’t seem to be a ledger library for Rust so that could end up being a useful contribution to the community if I go that way for serialization.

                        1. 14

                          The Cambridge Analytica scandal has prompted me to delete Facebook and be much more aware of my privacy. I know that deleting Facebook is a “cool” thing to do now, but it’s been a difficult decision. I still had many friends there that I have no other means of contacting. Ads have gotten much scarier recently, perfectly retargeted among services, so I was getting mentally ready for this. But stealing data for political purposes is where I draw the line.

                          I’ve also replaced google with DuckDuckGo, and am planning on changing my email provider too. But I don’t know if it’s going to be futile. I still shop on amazon and use many other irreplaceable services like google maps.

                          Again, I’m not a privacy freak. I try to find a middle ground between convenience and privacy, so these changes are hard for me.

                          Any recommendations for a balanced solution?

                          1. 6

                            Whereas I’m about to have to get back on Facebook after being off quite a long time. I’ve simply missed too many opportunities and too much news among local friends and family, since they just refuse to get off it. Once it has them in certain numbers, they find it most convenient to post there. That’s on top of the psychological manipulations Facebook uses to keep them there. I’ll still use alternatives, stay signed out, block JS, etc. for everything I can, but I will have to use it for some things for best effect.

                            The most interesting thing about leaving, though, was that their schemes got more obvious. They tried to get me back in with fake notifications that had nothing to do with me. They’d look like those that pop up when someone responds to you, except you’re not in the thread at all. They started with an attractive Hispanic woman from across the country whom I’d never seen but some friend knew. They gradually expanded to more attractive women on my Facebook whom I haven’t talked to in years or rarely Like (not in my feed much). The next wave involved more friends and family I do talk to a lot. Eventually, the notifications were a mix of the exact people I’d be looking at and folks I’ve at least Liked a lot. I originally got nearly 100 notifications in (a week?) or something. Memory straining. Last time I signed in, there were something like 200-300 of them that took forever to skim, with only a handful even being real messages, given folks knew I was avoiding Facebook.

                            So, that whole process was creepy as hell. Especially watching it go from strangers it thought I’d like to talk to or date, to people I’m cool with, to close friends. A lure much like the Sirens’ song. Fortunately, it didn’t work. Instead, the service’s grip on my family and social opportunities locally is what might make me get back on. The older forms of leverage, just in a new medium. (sighs)

                            1. 3

                              It kind of depends on what you are trying to prevent. There are some easy wins, though:

                              1. As of March 2017, US ISPs automatically opt you in to sharing Customer Proprietary Network Information (CPNI), which they can sell to 3rd parties. You can still opt out of this.
                                Look for a CPNI opt-out with your ISP.
                                https://duckduckgo.com/?q=cpni+opt+out&t=ffab&ia=web

                              2. uBlock Origin / uMatrix are great for blocking tracking systems.
                                These do affect sites that make their money from ads, however.

                              3. Opt out of personalized advertising when possible.
                                Reddit, Twitter, even Google give you an option for this.

                              4. Revoke Unneeded Accesses
                                https://myaccount.google.com/u/1/permissions
                                https://myaccount.google.com/u/1/device-activity
                                https://myaccount.google.com/u/1/privacycheckup
                                https://twitter.com/settings/applications

                              5. Make your browser difficult to fingerprint.
                                EFF has a tool called panopticlick that can show you how common your browser’s fingerprint is. I locked down what I could (there should be instructions on panopticlick’s site) and added an extension that cycles through various common user-agents. It might sound like overkill, but it’s not onerous to do.

                              6. Don’t store long-term cookies.
                                I actually mostly disabled this. I still block 3rd-party cookies, but first-party cookies are allowed now. Using a hardware key or password vault makes signing in easy, but ironically the part that killed this for me was more sites supporting 2FA. I use Cookie AutoDelete for Firefox.

                              7. Change your DNS provider.
                                I don’t have a good suggestion for this one. I use Quad9, but I don’t really know enough to say whether or not I trust them.

                              1. 2

                                Unlike an email or web server, setting up a resolving only DNS server is quite painless. I do this at home and rarely have issues. And if I do, I can reset it at whim instead of trying to fight with tech support.
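
                                For a sense of scale, assuming unbound as the resolver, a resolving-only config can be as small as this sketch (option names from unbound.conf; the addresses are made up):

                                ```
                                server:
                                    # answer queries only on the LAN interface
                                    interface: 192.168.1.1
                                    # accept queries from the local network, refuse everyone else
                                    access-control: 192.168.1.0/24 allow
                                    access-control: 0.0.0.0/0 refuse
                                ```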

                              2. 1

                                I pay $40/year for Protonmail. It is fantastic.

                                As for Facebook, why delete? It is actually a benefit to have an online presence for your identity, but you need to be careful with what about yourself you share. If you don’t take your online identity, someone else will. This is exactly why I’ve registered my name as a domain and kept it for years now. It is just another “string of evidence” that I am who I say I am on the internet.

                                My FB is just a profile picture now and nothing else. I have set my privacy settings to basically super locked down.

                                When it comes to socializing, there is little you can do to not be tracked. The only thing you can do is “poison the well” with fake information and keep important communication on secure channels (i.e. encrypted email, encrypted chat applications).

                                1. 1

                                  I removed Facebook about 6 years ago and recently switched to Firefox beta and DDG. Gmail has had serious sticking power for me, though. I’ve had several fits and starts of switching email over the years but my Gmail is so intertwined with my identity nothing else has ever stuck.

                                  It is possible to switch, I’m sure, but in my case, I have never committed quite enough to pull it off.

                                  1. 3

                                    When I got off gmail, it took about two years before I wasn’t getting anything useful forwarded to my new identity.

                                    Setting up forwarding was quite painless and everything went smoothly otherwise. The sooner you start…

                                    1. 2

                                      When I looked into it, everyone was suggesting FastMail if the new service needs longevity and speed. It’s in a Five Eyes country, but it’s usually safest to assume they get your stuff anyway if you’re not using high-security software. The E2E services are nice but might not stick around. I’ve found availability and message integrity to be more important for me than confidentiality.

                                      People can always GPG-encrypt a file with a message if they’re worried about confidentiality. Alternatively, ask me to set up another secure medium. Some do.

                                  1. 9

                                    Tech has a short memory lately, and I would like future implementors to learn not only the lessons of the web but the lessons of pre-web hypertext systems (which often solved problems that the web has yet to address).

                                    I do wish the author would have followed this with lessons from history. I found the requirements list interesting but I would likely give them more weight if they were tied to the specific lessons they were informed by.

                                    1. 5

                                      I may write a follow-up with historical information included. Unfortunately, most of these guidelines could be turned into complex polemics on their own (and I have written some of them)!

                                      The guidelines are heavily influenced by my work on Xanadu, and many are a distillation of ideas that are threaded through a lot of sort of unfocused rants by Ted, some of which are not even public. When I have a chance, I’ll do some archaeology and find proper references where possible. (Alternately, I may just write the polemics I am inclined to write on particular topics, with citations.) My “lessons” are pretty controversial (and some are controversial even within the ex-Xanadu crowd – such as the emphasis on peer-to-peer systems).

                                      Items #2-6 and #9-13 are things that were part of the Xanadu design since at least the early 80s (and in some cases, going back to the 60s), and are well-documented either in Ted’s criticisms of the web or in available design documents from Xanadu projects.

                                      Items #7 and #8 are controversial in Xanadu – they apply to Udanax Green (and probably Gold, though I’m not totally sure), but not to implementations done since 2006. My group maintained that forcing links to always apply to the original source text, as opposed to positions in the document, was a confusion of form and content of the same type as XML (which treats conceptual groupings that function mostly as formatting guidelines as part of content even as those groupings are mostly entangled with form) – particularly in the context of formatting links. (After all, a page break may be appropriate in the context of, say, a book, while a paper quoting that section of the book would not want to add a page break at that point.) Ted said that supporting the document-as-assembled as a first order object in transclusion and linking complicated the specification and complicated the implementation, and wasn’t strictly necessary anyhow (since you could just force that document to un-apply the offending formatting link).

                                      Item #14 is half-controversial. Because I support assembled documents as first-order object for the sake of transclusion, I also think that it’s justified to cache assembled documents in some cases. Very often, a derivative work becomes more popular than the origins of its component parts – and such a work might become very fragmented, if it is itself composed from similarly fragmented sources. The important thing is that an assembled document, when cached, can nevertheless be everted into the cached parts of its source documents, so that we still gain the benefits of caching when we go to load the source. (This is particularly useful when a single source has many popular derivatives with minimal overlap – say, Poor Richard’s Almanac, whose epigrams are quoted all over the place in extremely various forms, or in Marx’s work, different and mostly-non-overlapping pieces of which are very important to economists, sociologists, and bolsheviks.)

                                      #16 is true of both Xanadu and IPFS, probably independently. (It’s controversial elsewhere. Access control is a hard problem, particularly when you’re trying to get a variety of implementations in a peer to peer system with potentially-untrusted peers, and so I would rather depend upon crypto, which will break eventually in individual cases and expose stale data, than access control, which will be attacked directly and probably break even easier. However, I don’t plan to optimize this for secret data. I’d like to use it to encourage openness, and discourage people from hiding things on it at all.)

                                      #17 is from IPFS. Xanadu implementations have not largely been peer to peer – the business model has always been to charge for storage (though sometimes as a flagship node in a federated system). I feel like making these facilities available to people is more important than making a buck off them, so I prefer peer to peer (which after all requires less money up-front to set up).

                                      #18 is my own formulation of a rule that theoretically underlies both Xanadu and the w3m’s URL rules (and seems to also be an assumption underlying the design of HTTP, particularly with regard to the design of response codes). I’ve written about it in various places, as well. TL;DR version: server-side content variability breaks all of the important parts of hypertext, while client-side content generation is a poor and wasteful simulation of regular native app development. A hypertext system should not double as an application sandbox or code delivery system.

                                      #19 is part of post-2006 Xanadu design, which uses a single append-only file and calls it the “permascroll”. I feel like it should be integrated into the cache system (which is absolutely necessary but, when I took over the 2006 project in 2011, was not implemented or really planned for).

                                      #20 is a side effect of #17, and is derived from Usenet, although similar ideas exist in popular forum software, as browser extensions, and in Mastodon.

                                      1. 1

                                        Thank you for coming back and addressing this - I learn a lot from histories.

                                        1. 1

                                          No problem! I wish I could more easily link to records & stuff. The historical basis is a little weaker than I thought when I first began writing that essay – mostly coming down to “this is how Xanadu did it before TBL invented the web”.

                                    1. 5

                                      Is this token local or public? local: shared-key authenticated encryption, public: public-key digital signatures.

                                      If someone doesn’t know whether to pick between encrypting or signing or tagging [1] a token, it seems that asking whether the token is local or public could only confuse them. SWEs implementing their own encryption might be foolish, but not understanding the primitive cryptographic operations you can utilize seems to err on the other end of the possible delineations between researchers and practitioners.

                                      1. since this one is slightly less commonly known — https://en.wikipedia.org/wiki/Message_authentication_code
                                      1. 1

                                        If it’s local, you get authenticated encryption. No other choices.

                                        If it’s public (i.e. the token is signed by one party and verified by another), you get digital signatures.

                                        That’s the only choice that needs to be made.

                                        1. 4

                                          That misses quite a few use-cases, no? Most importantly, tagging, where I don’t need asymmetric signatures and I don’t need encryption, but I want to give you a token you can read but not modify before you pass it back to me.

                                          Also, why does “local” mean “authenticated encryption”? And “public” mean “digital signatures”? I might be getting dense towards the end of a long week, but the linguistic intuition seems non-obvious.

                                          1. 3

                                            Most importantly, tagging, where I don’t need asymmetric signatures and I don’t need encryption, but I want to give you a token you can read but not modify before you pass it back to me.

                                            If you want unencrypted-but-authenticated tokens, stick the raw data in the unencrypted footer. Strictly speaking, your options are AEAD or Ed25519.

                                            Also, why does “local” mean “authenticated encryption”? And “public” mean “digital signatures”?

                                            Local means local to a system. The issuer is the verifier.

                                            Public means it’s not local to a system, it’s going to be transmitted over the public Internet. The issuer is a different entity than the verifier. (It doesn’t make sense to use public-key cryptography for a purely-local use case.)

                                            1. 2

                                              If you want unencrypted-but-authenticated tokens, stick the raw data in the unencrypted footer.

                                              So instead of buttons and levers, there’s more than one place I can stick my data?

                                              The issuer is the verifier.

                                              The word for that use case is “symmetric”.

                                              Public means it’s not local to a system, it’s going to be transmitted over the public Internet.

                                              And if the data is public, but the token verification is local (i.e. symmetric), then you stick it in the unencrypted footer. Got it.

                                              Hope you don’t take it personally if I stick with { data, tag: SHA(secret + data) } and call it a day ;)

                                              1. 7

                                                { data, tag: SHA(secret + data) }

                                                I hope you don’t stick with that, since I can add my own data and produce a new, but valid SHA, via a length extension attack, no?

                                                1. 4

                                                  Just to really drive this home @anfedorov - the tldr from @apg’s link:

                                                  HMAC is the real solution. HMAC is designed for securely hashing data with a secret key.
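
                                                  To make that concrete, a quick standard-library Python sketch (the key and payload are made up) showing the two constructions side by side:

                                                  ```python
                                                  import hashlib
                                                  import hmac

                                                  secret = b"server-secret"  # hypothetical shared key
                                                  data = b"user=alice"

                                                  # Naive tag from upthread: SHA(secret + data). With
                                                  # Merkle-Damgard hashes (SHA-1, SHA-256), an attacker who
                                                  # sees this tag can append data and compute a valid tag
                                                  # for the extended message without knowing the secret.
                                                  naive_tag = hashlib.sha256(secret + data).hexdigest()

                                                  # HMAC nests the hash twice with keyed pads, which blocks
                                                  # length extension regardless of the hash's construction.
                                                  safe_tag = hmac.new(secret, data, hashlib.sha256).hexdigest()

                                                  # Verify in constant time to avoid timing side channels.
                                                  expected = hmac.new(secret, data, hashlib.sha256).hexdigest()
                                                  assert hmac.compare_digest(safe_tag, expected)
                                                  ```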

                                                  1. 1

                                                    False! HMAC was designed for securely tagging data with poorly constructed hash functions. Sorry not sorry for being pedantic, but apg should really know better than trying to nitpick me ;)

                                                  2. 1
                                                    1. 2

                                                      You didn’t specify SHA3, and are replying months later….

                                        1. 3

                                          Maybe this will sound ridiculous, but is there a good reason not to allow wild-west tagging? I get that this community has a curated bent to it, but what would the drawbacks be of opening that up?

                                          1. 10

                                            I think the idea is that if it doesn’t fit an existing tag, the post is off-topic. Tag creation references posts that, I guess, we agreed as a community were on-topic but had to be shoehorned into existing tags.

                                            On at least one or two occasions I’ve run into a story that I couldn’t really fit in a tag, so I skipped submitting. Overall I like the approach.

                                            1. 2

                                              Ah yeah, that’s right. Welp, it was a crackpot idea after all :) Thanks for reminding me of the signal provided by tags, it’s surely useful.

                                          1. 4

                                            Seeing a Hodinkee link on Lobsters is an unexpected crossover of my normal morning reading.

                                            1. 3

                                              You are not alone :)

                                            1. 2

                                              Is there a good resource for understanding Epochs as overlays to the release/versioning system in Rust (if that’s even the right way to think about them)? I use and follow Rust development but haven’t really wrapped my head around the significance of Epochs.

                                              1. 7

                                                Epochs are a way for the Rust team to make breaking changes to the syntax of Rust. For example, reserving new keywords, turning warn-by-default checks into error-by-default, that kind of thing.

                                                Importantly, crates designed for different epochs can be used together, so a Rust epoch won’t fracture the ecosystem like the infamous Python 2/3 split. Even if you want to call a function in an old library whose name is a reserved word in the new epoch, Rust provides a “raw identifier” syntax so you can use anything (within reason) as an identifier.

                                                More pragmatically, epochs are an excuse for more publicity. With a release every six weeks, you rarely get more than one or two new features at a time, which isn’t worth making a fuss about, so people outside the Rust community rarely hear about all the stuff that’s going on. An epoch is an excuse to list all the features of the past year or two and show people just how much progress has been made.

                                                1. 1

                                                  Understanding this as presented, then, it is a breaking change that exceeds the threshold for even major version changes.

                                                  Does that mean an Epoch change will always occur simultaneously with a major version change? And that an earlier Epoch is always incompatible with a later major version?

                                                  I think this has been the confusing part to me.

                                                  1. 10

                                                    No, because you have to explicitly opt in to the new epoch.

                                                    Code on newer epochs using your code does not break.

                                                     People compiling your code with rustc or cargo do not break (rustc and cargo continue to use the same epoch to compile your code).

                                                    The only time your code breaks is when you explicitly do rustc yourcode.rs --epoch=2018 or cargo build after modifying the epoch key in Cargo.toml.

                                                    Which you’ll only do when upgrading to the epoch.

                                                    Rust does not deprecate programming within the old epoch, so it continues to work. Five epochs from now the “2015 epoch” and “2018 epoch” should still work with the latest compiler.

                                                    Also, Rust does not automatically choose the latest epoch for you: it defaults to the 2015 epoch when unspecified. The only time it uses the latest epoch is when you create a new project with cargo.
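
                                                     For illustration, opting in might look like this (the epoch key is the one mentioned above; the exact manifest syntax was still being settled in the RFC at the time, so treat this as a sketch):

                                                     ```toml
                                                     [package]
                                                     name = "mycrate"   # hypothetical crate
                                                     version = "0.1.0"
                                                     epoch = "2018"     # omitted = the default 2015 epoch
                                                     ```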

                                                    Does this make sense?

                                                    1. 5

                                                      it is a breaking change that exceeds the threshold for even major version changes.

                                                      Not really. The breaking changes we’re talking about are “turn some warnings into errors” and “add new keywords”. Fundamentally, the changes possible are very limited. It’s a core constraint of epochs. Also, as was said elsewhere, your code shouldn’t break without an explicit action on your part. And the whole thing is interoperable.

                                                      The closest analogy is the C++ and Java systems. Each new release does technically break some things, but they’re very minor, and so to most people they’re languages that “never break things.”

                                                      1. 3

                                                        Major version changes would include things like removing language features, or adding a mandatory garbage collecting runtime, or anything else that would prevent older code and newer code from working together.

                                                        An epoch explicitly cannot introduce such incompatible changes; it must always be possible for old-epoch crates and new-epoch crates to be used together. It might not be possible to copy/paste code between them, i.e. they may not be source compatible, but they should compile down to the same ABI, effectively.

                                                        This is not exactly the same thing as ABI stability; a crate compiled by an old-epoch compiler and a crate compiled by a new-epoch compiler probably won’t be compatible. However, the Rust project has promised that every 1.x Rust compiler will always support every previous epoch, so you can take your new-epoch compiler, compile one crate in old-epoch mode and another crate in new-epoch mode, and then they’ll be compatible.

                                                    2. 5

                                                      The canonical text is https://github.com/rust-lang/rfcs/blob/master/text/2052-epochs.md . It should explain it reasonably well, if you’re the kind of person who follows Rust development.

                                                      That said, we haven’t actually done one yet, so it’s kinda hard to point to an example, it’s all theory at the moment. They also may not even be called “epochs” when we actually ship one. The name is… okay. I like it for its bad qualities, which probably means we should change it :p

                                                    1. 3

                                                      The most interesting section for me was “Better filtering”, which breaks down his process for actually populating the app with links. As much as I’ve understood the concept of apps like instapaper, and even tried them, I still found myself victim to the infinite scroll and rarely returned to the app to take up saved links.

                                                      The idea presented there of delayed gratification via waiting periods, only ever reading things via a round trip to the app, would be a key workflow for me to be successful if I were to try one again.

                                                      1. 4

                                                        This is very relevant as I’ve been working with a therapist this month to fix my posture and ergonomics after developing tendonitis in my wrist and shoulder. I’d like to stop it now before it becomes carpal tunnel, which, as I understand it, can be difficult to treat.

                                                        Would love to hear what others are doing to make their computer centric work and life more body-friendly.

                                                        1. 4

                                                          Find a friendly gym. I use the one at my local tech uni. It’s boring, annoying, painful at first, and takes time. But your body will tell you that it is worth it. You don’t need to grow muscles there, just go regularly to move and get some blood flowing in parts of your body which are usually neglected. Especially when I start feeling pain in wrists and fingers, I can go there and the pain gets fixed. Sometimes I use the time there to reflect on things I’m working on, and have even found bugs in my code that way. You should get an intro from either staff or someone experienced. If they ask you what you want, and you don’t really know, just ask for a set of exercises that will keep your back in good shape. Worked well for me.

                                                          1. 3

                                                            Switching to a split keyboard (kinesis freestyle 2) and 70-degree-rotated mouse (like the Microsoft sidewinder) helped a fair bit, as did getting a good chair. All told I’ve spent about $1200 on ergonomic equipment and it all feels well spent.

                                                            1. 2

                                                              I’ve been wearing wrist braces for about a month now. They’ve helped immensely. Also I’ve been trying to understand posture and ergonomics, like you… it’s hard; I never really paid much attention before.

                                                            1. 3

                                                              Thank you for posting this. I work around a lot of RF equipment, but I only know DSP at the highest and most basic level. I’m hoping this will help me gain a working knowledge of it.

                                                              1. 1

                                                                I feel the same way - which is probably why I also found this interesting.

                                                                Another resource I really liked is Practical Signal Processing. It is also a practitioner focused treatment of the material, with enough theory to make you dangerous. It’s been a big help for me in understanding the DSP components of GNU Radio flow-graphs. It doesn’t necessarily cover details on the internal implementations of different processing stages but is great to come up to speed on discrete steps in a processing pipeline.

                                                              1. 4

                                                                I’m probably way too optimistic but I see the natural outcome of this as open source hardware/software tractors :)

                                                                1. 2

                                                                  Some such initiatives seem to exist: http://opensourceecology.org/