1. 4

    It was great talking at the Decentralised Web Summit and the IPFS Lab Day last week. I’ve just finished moving the Peergos PKI into IPFS itself, and this week I’m hoping to get that PKI mirrored on every node (for privacy of key lookup queries), and to think more about improving the scalability of the social side of Peergos.

    We also want to implement a new API call in IPFS, which I discussed with them last week - essentially a p2p HTTP proxy (independent of DNS/IP).

    1. 10

      Neat surprise, I maintain this (and similar) pages. Originally started because I wanted to show de-facto support for modern crypto but now I refer back to the pages as a starting point in choosing software that does one particular task or another. I think modern crypto has basically “won” even if there are still gains to be made. One thing I’ve noticed is that the de-facto support goes way beyond de-jure support to a degree that looks suspicious for standards bodies. It’s been the main factor in convincing me that standards bodies need to be taken down a notch – let’s have more public competition and less “design by committee.” I think the community is moving more in that direction and it’s good to see.

      There are over 1700 unique outbound links on these pages. I scan their HTTP status codes with automated scripts several times a week and make other efforts to prevent link rot and outdated info. It takes a surprising amount of time. For instance, some people say they support a cryptographic primitive, but you look at the code and it’s something else, so you have to check, and that takes time. Good example of the importance of useful, correct documentation.

      Minor thing I noticed: a handful of GitHub pages were deleted after the Microsoft purchase, but not as many as one might have expected based on public discussion. Now that things have settled I’ll check each one manually to look for “page moved to GitLab” type messages where the GitHub repos remain with HTTP 200 status codes, but emptied out. It takes a lot of time to maintain these pages but it’s helped some people so it feels good.

      1. 3

        Yeah, it was quite an awesome link, and seeing i2p in it warmed my heart <3 (disclaimer: I’m an i2p dev)

        1. 2

          Was pleasantly surprised to see Yggdrasil, a project I work on, listed there too!

        2. 2

          Not sure if you’re aware, but the font size on that site is absolutely huge on Chrome on Linux. I have to dial it down to 67% of the original size to make it readable. Here’s a screenshot to illustrate, with lobsters for comparison: http://i.imgur.com/4I8eegV.png

          1. 1

            Thank you for the feedback! Yeah, font size is an issue these days and it’s something I’ve wrestled with. A while back there was an article about how font sizes haven’t kept pace with monitor / screen resolution increases, which I think is hard to disagree with. The situation is compounded by the large variety of “monitors” from phones to what amount to widescreen TVs. If you have a simple suggestion for HTML/CSS that doesn’t use any JS and makes everyone happy I’d be very interested in hearing it.

          2. 1

            Thank you for all your work! I’m pleased to have two projects in the list. I wonder if there will be an equivalent for post-quantum crypto once the algorithm advice stabilises.

            1. 1

              Yes, there already is a pqcrypto list but it’s kinda shabby IMO (check the links under the homepage). The pqcrypto situation is very fluid at the moment, even chaotic. As one example of many, the front-runner library is libpqcrypto which contains 77 cryptographic systems (50 signature systems and 27 encryption systems). There are more post-quantum algorithms than apps using those algorithms. Also libpqcrypto doesn’t even compile on OpenBSD, a bummer for me personally. IMO for certain things like VPNs, combining an ephemeral X25519 key exchange with a pre-shared key, like WireGuard can optionally do, is a sensible thing to do in 2018 until we get real pqcrypto off the ground.

          1. 6

            I’m working on moving the central PKI in Peergos from sqlite to ipfs itself. We’re using a champ (compressed hash array mapped trie) - the same data structure which we’ve already migrated all user data to. It has a lot of nice properties, like insertion order independence and fast lookups.

            This will make mirroring the PKI trivial, and allow private public-key lookups on the mirrors.

            1. 15

              “Operating system design has been somewhat stagnant since, well, ever. Sure, once in a while, you hear of a cool os that some company worked on ten years ago or an interesting prototype that has recently crawled its way out of a professor’s underground lab”

              There are probably several a year at a minimum. Almost all are made by CompSci but companies do stuff too (e.g. Fuchsia). I stopped tracking them since there were too many with a lot of duplicated capabilities (esp cloud stuff). If you liked Singularity, you might find SPIN, J-Kernel, JX, Verve, and ExpressOS interesting. They all use a type-safe language with simpler architecture. High-assurance security mostly went with microkernels and separation kernels, with the Nizza paper explaining the concept nicely. GenodeOS takes that approach. Finally, there were also high-assurance browser architectures and OS’s like IBOS that shared a few goals such as portability with Javascript support.

              Hope you enjoy some of this stuff as you think about OS design. And welcome to Lobsters! :)

              1. 3

                G’day Nick. Have you looked at Redox at all? Also in a similar vein, but in Java and no longer developed I think, is JNode.

                1. 2

                  Both Redox and the Muen separation kernel could be on the list. JNode was neat, but I didn’t evaluate its security. JX had a neat architecture.

                  1. 2

                    JNode was trying to run legacy x86 software with my very own JPC.

                    1. 4

                      If you like those, check out sanos. It’s an older one with a Windows focus few seem to know about.

                1. 1

                  Yeah, but mostly for their core stuff.

                  1. 1

                    I missed that constant-time performance comment - it’s very helpful, but I wish it were called out a little more explicitly. I also wish the memory/runtime complexity were called out as explicitly as the load factor that is liberally sprinkled throughout.

                  1. 21

                    This is a beautiful technology. It is very sad that many people will ignore this because of Oracle v. Google lawsuit. What a tragedy.

                    1. 15

                      To the observer, this looks suspiciously like an “embrace, extend, and extinguish” play by Oracle.

                      1. 16

                        Anything from them potentially is one just due to their legal team. I’m avoiding it specifically for that. Java, too, just in case.

                        1. -1

                          It’s all open source.

                          1. 4

                            That’s copyright law, mostly, with patent provisions in some licenses for the specific work as-is. That leaves patents for how it’s used or combined with other software. Oracle, Microsoft, and IBM in particular like to file lots of those. I don’t know if any are on GraalVM, because just looking triples the damages. I never look.

                            1. 2

                              I’m not a lawyer, but OpenJDK, which now includes Graal, is GPL-licensed, which includes patent protection.

                              1. 1

                                This is incorrect; the patent grant applies to the official OpenJDK builds but not forks of OpenJDK.

                                1. 1

                                  Do you have a source for that? That wouldn’t be GPL then under my understanding.

                                  The GPL seems relatively clear on this: http://openjdk.java.net/legal/gplv2+ce.html

                                  1. 3

                                    If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License.

                                    Seems to me that the patent clause is opt-in rather than required. OpenJDK uses the GPL v2, which lacks the clear patent grants of v3.

                                    See also https://www.skife.org/java/jcp/2010/12/07/the-tck-trap.html and the lawsuits Oracle fired at Google. (The patent claims ended up getting thrown out because Google has excellent lawyers in that case.)

                                    Edit: more details about the weak language used around patents in v2: https://www.infoq.com/articles/java-dotnet-patents

                                    In 2004 Dan Ravicher, senior counsel for the Free Software Foundation, warned about the weak patent guarantees for BSD and GPL and recommended attaching patent grants.

                                    1. 2

                                      With further digging, it seems that Oracle joined the OIN, and the OIN explicitly covers OpenJDK for patents:

                                      http://www.openinventionnetwork.com/community-of-licensees/

                                      https://www.zdnet.com/article/linux-patent-defense-group-expands-open-source-protection/

                                      1. 2

                                        Thank you. That is very interesting reading. Sounds like the waters are muddy and we need an actual lawyer to chime in.

                                        I partially side with Oracle in the Google case (excluding copyrighting APIs). Google had plenty of opportunity to license Java from Sun, or indeed to buy Sun entirely, but they chose to incompatibly re-implement it, and developers have been paying the price for that choice ever since. But it looks like Google finally did the right thing in the end: http://www.fosspatents.com/2015/12/google-switches-to-open-source-license.html

                              2. 1

                                Not all of it. The GraalVM Downloads page makes it clear that there are two versions of GraalVM: Community Edition (CE) and Enterprise Edition (EE). GraalVM EE is closed-source, and it’s the only version with support for macOS and with “additional performance, security, and scalability”.

                                1. 1

                                  The developers have stated the only reason for the macOS absence is that they haven’t gotten around to it yet, and also that there is currently no difference in performance.

                          1. 3

                            This should prove to be a very attractive alternative to Electron when coupled with JavaFX.

                            1. 2

                              I don’t see how. The whole point of Electron is that you can reuse your web frontend knowledge. JavaFX does not allow that. (I guess JavaFX does allow styling with CSS, albeit with a hideous fx prefix.)

                              1. 2

                                I have done exactly that. JavaFX also has a WebView component. Under the hood I believe it currently uses V8, but it sounds like they can switch that out for Graal.

                                1. 1

                                  That’s part of the point, but a big part of it is also just targeting all 3 platforms with one codebase.

                              1. 2

                                The e2e encrypted chat has a subscription model. I wish this were the private communication platform it claims to be. The closest we have now is Keybase, which is not optimal either.

                                1. 1

                                  A subscription model itself is not bad. But if they are selling themselves as a private communication network, it’s strange that the only privacy friendly piece is the one that costs extra.

                                  1. 1

                                    I’m interested in your opinion on the deficiencies in keybase. What features would you change or add?

                                    1. 2

                                      If I were to introduce Keybase as a communication platform for my less tech-interested friends and family, it would have to look more like what MeWe looks like: a chat/social network/shared photos.

                                  1. 8

                                    The only shot these decentralized networks/social things have of accomplishing their dream is to ditch federation and deploy true P2P via desktop/mobile apps. Think old Skype.

                                    Any server requirement more than a non-proxying NAT hole puncher is a death sentence for decentralized services targeting mainstream users (unless you get big investments into a handful of quasi-centralized servers). The network needs to run on a handful of techie users fronting $10/month in server resources, or it just won’t scale.

                                    People can install apps, they can’t manage servers.

                                    1. 7

                                      People should just implement these things using email as the communication substrate. The servers are already out there and federated.

                                      1. 1

                                        This is the conclusion I came to as well. The “liked by whom” is metadata that would have to be stored somewhere though.

                                        1. 1

                                          That can be sent around via email as well. Likes can be implemented as a CRDT, so people may not have a consistent view but it can become consistent over time.
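                                            A minimal sketch of that idea in Python (names like `Likes` are illustrative, not from any real project): model likes as a grow-only set of liker IDs and merge by set union, which is commutative, associative and idempotent, so replicas converge no matter what order the emails arrive in.

```python
# Likes as a state-based CRDT: a grow-only set of liker IDs.
# Merge is set union, so replicas converge regardless of the
# order in which updates (e.g. emails) are delivered.
class Likes:
    def __init__(self, likers=None):
        self.likers = set(likers or [])

    def like(self, user_id):
        self.likers.add(user_id)

    def merge(self, other):
        # Union is commutative, associative and idempotent -
        # all a state-based CRDT needs to converge.
        return Likes(self.likers | other.likers)

# Two replicas see likes in different orders...
a, b = Likes(), Likes()
a.like("alice"); a.like("bob")
b.like("bob"); b.like("carol")
# ...but merging in either direction yields the same set.
assert a.merge(b).likers == b.merge(a).likers == {"alice", "bob", "carol"}
```

                                            A grow-only set can’t express un-liking; for that you’d reach for something like an OR-Set, which tags each addition so removals merge cleanly too.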

                                        2. 1

                                          If you’re emailing your friends then you’ve immediately exposed your social graph.

                                          1. 1

                                            You don’t have to use their personal email addresses, they can create free ones on gmail or wherever. I’m just saying to use email as the underlying protocol.

                                            1. 1

                                              That doesn’t help. Even if it was a randomly chosen email, the sender and receiver are in the clear for the network to see and construct the social graph. Even if you rotate emails it’s probably still reconstructible.

                                              1. 1

                                                Sure. If that’s something you want to hide then maybe that’s a problem for you.

                                        3. 4

                                          Not all federation has expensive costs. Pleroma can run on a Raspberry Pi. The Pleroma/Mastodon/GNU Social network is around a million users right now, so I’m not really sure that argument holds. That being said, I would also love to see “old Skype”-style apps. Ring.cx is a good example of this working well. Decentralization and federation don’t have to be mutually exclusive, and we should stop thinking about this space as an either/or.

                                          1. 4

                                            This is the approach I took with Firestr: just download the app and run it. The only server-side thing is a non-proxying NAT hole puncher. I took this approach because that’s exactly what I thought - no user is going to run a server, so it has to be an all-in-one experience where you just run the app and go.

                                            1. 2

                                              Isn’t Skype a bad example for “true decentralisation” (I am assuming you mean “distributed” by this), since a central server managed usernames, statuses and IP address communication (if I am not wrong)? Attempts to create truly P2P networks - let’s take the standard example of IM/video chat, like Tox - suffer from cryptic user names (i.e. DHT codes), the need for both parties to be simultaneously online for messages to be sent and received, and most of the time a “hacky” feel to the whole setup. The last issue could be avoided by good cooperation between a design/UX/UI and developer team, but I don’t see any way around the first two without setting some absolute standards (e.g. reference servers).

                                              It works for certain use cases - for example Firechat for physical crowds, or Tox for absolutely anonymous chat - but this doesn’t do what most people want, which has sadly always been what centralized systems are intrinsically good at: letting users defer responsibility to some other, usually legally bindable, instance that validates identities, transmits information and guarantees/promises operation.

                                              1. 2

                                                I 100% agree. This is the approach we’ve taken with Peergos. You can create your account by running the desktop version, or you can sign up on our central server (or anyone else’s), but your identity, social graph, etc. has nothing to do with that choice of server. All that decides is where, initially, all your data is stored. Through the magic of IPFS it’s accessible from anywhere - we only need at least one server to store each user’s files to guarantee no loss.

                                                Moving an account you created on our server to your own (desktop or cloud instance) is trivial and doesn’t lose any data, metadata or social connections. This gives both a nice on-boarding experience, and also allows us to satisfy a wide range of threat models. The average user can just log in to a server via a web browser exactly like Facebook. More discerning users can run their own server, in the cloud or at home.

                                                1. 2

                                                  True P2P generally isn’t too friendly with battery life and data caps.

                                                1. 1

                                                  It would be interesting to get a version working for v3 tor addresses which are much bigger.

                                                  1. 5

                                                    I’ve been working on a new core data structure for Peergos - changing from a merkle-btree to a merkle-hamt (hash array mapped trie).

                                                    I’m using the CHAMP (compressed hash-array mapped prefix trie) variety of HAMT. This has some super nice properties, including insertion order independence, never any rebalancing or splitting operations (which makes IPFS pins much faster), and tunable storage overhead and data churn. In the process I found and fixed a bug in the same structure in ipfs/filecoin.

                                                    The other interesting thing is that this data structure is amazingly young as far as data structures go. I think the original hamt was introduced in 2000, and the champ variation in 2015.
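                                                    For anyone curious what the hash-bits-to-child-slots mechanics look like, here’s a toy HAMT in Python. This is my own illustration, not Peergos’s CHAMP - a real CHAMP additionally compresses nodes and canonicalises on deletion - but it shows the core trick: a bitmap plus a packed array, so empty slots cost nothing.

```python
# Minimal hash array mapped trie sketch: each node consumes 5 bits of
# the key's hash to pick one of 32 child slots, tracked by a bitmap
# plus a packed array. Toy code for illustration only.
import hashlib

BITS = 5
WIDTH = 1 << BITS  # 32 children per node

def h(key: bytes) -> int:
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big")

class Node:
    def __init__(self):
        self.bitmap = 0   # which of the 32 slots are occupied
        self.slots = []   # packed array: only occupied slots stored

    def _index(self, bit):
        # Popcount of the bits below `bit` gives the packed-array position.
        return bin(self.bitmap & (bit - 1)).count("1")

def insert(node, key, value, shift=0):
    frag = (h(key) >> shift) & (WIDTH - 1)
    bit = 1 << frag
    idx = node._index(bit)
    if not (node.bitmap & bit):
        node.bitmap |= bit
        node.slots.insert(idx, (key, value))
        return
    slot = node.slots[idx]
    if isinstance(slot, Node):
        insert(slot, key, value, shift + BITS)
    elif slot[0] == key:
        node.slots[idx] = (key, value)  # overwrite existing key
    else:
        # Two keys share this hash fragment: push both one level down.
        child = Node()
        insert(child, slot[0], slot[1], shift + BITS)
        insert(child, key, value, shift + BITS)
        node.slots[idx] = child

def lookup(node, key, shift=0):
    frag = (h(key) >> shift) & (WIDTH - 1)
    bit = 1 << frag
    if not (node.bitmap & bit):
        return None
    slot = node.slots[node._index(bit)]
    if isinstance(slot, Node):
        return lookup(slot, key, shift + BITS)
    return slot[1] if slot[0] == key else None
```

                                                    Insertion order independence falls out because a key’s position depends only on its hash, never on when it arrived - which is exactly what makes content-addressed storage of the nodes deterministic.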

                                                    1. 4

                                                      Does this mean it’s possible to just watch the DHT on IPFS and pull data people are inserting? It’s not encrypted in any way?

                                                      1. 8

                                                        That’s exactly what this is :)

                                                         You’re free to publish encrypted content on IPFS, but you aren’t obligated to.

                                                        1. 6

                                                          And I wouldn’t, since encrypted content on IPFS would be exposed to everyone and brute-forced eventually if anyone cared (once the cipher is broken in the future, etc)

                                                          1. 3

                                                            This is kind of my worry with IPFS. I wanted to have a “private” thing where I could also share with my family in a mostly-secure way (essentially, least chance of leaking everything to the whole world while still being able to access my legitimately-acquired music collection without having to ssh home). Turns out that’s not simple to set up.

                                                            1. 6

                                                              We ([0][1]) are trying to add encryption and other security enhancements, including safe sharing, on top of IPFS. Still pre-alpha though.

                                                              [0] - https://github.com/Peergos/Peergos

                                                              [1] - https://peergos.github.io/book

                                                              1. 5

                                                                 You just have to add encryption before transmission. IPFS is kind of a low-level thing (like how you won’t find any encryption in TCP, because that comes at a higher layer). It really needs good apps built on top to be useful.
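                                                                 As a toy illustration of “encrypt before you add”: derive a keystream and XOR it over the plaintext, then hand only the ciphertext to `ipfs add`. This is a teaching sketch, NOT a real cipher - a production system would use an authenticated cipher such as XChaCha20-Poly1305 (e.g. via libsodium).

```python
# Toy "encrypt locally, publish ciphertext" sketch. The keystream is
# SHA-256 in counter mode; do NOT use this for real secrets.
import hashlib, secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)           # fresh nonce per message
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

key = secrets.token_bytes(32)
ciphertext = encrypt(key, b"family photos")   # this is what goes to IPFS
assert decrypt(key, ciphertext) == b"family photos"
```

                                                                 The point is only the layering: IPFS sees (and content-addresses) the ciphertext, and key distribution to family members is a separate problem for the app above.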

                                                                1. 2

                                                                  IPFS is a better bittorrent, which is designed to work very well as a replacement for the public web. Private sharing has different requirements – I use syncthing for a similar semantic in private.

                                                                  1. 1

                                                                     Do you guys know about Upspin? What do you think of it? One of its stated goals is security. But it seems to be at quite an early stage for now.

                                                                  2. 2

                                                                    Interesting. I bet a lot of inserters aren’t aware. Sounds like a great opportunity for bots that:

                                                                     • look for copyrighted/illegal content and the IP addresses of the nodes seeding it, automating contacting the ISP
                                                                     • scan for cryptocoin wallets/private keys
                                                                     • scan for unencrypted KeePass backups, etc.

                                                                    More relevant to the article though, I like the Rust code. Very readable!

                                                                    1. 5

                                                                      IPFS is basically just a big torrent swarm. Doing that “copyrighted content scan” thing on the bittorrent DHT is already possible (and I’m pretty sure that’s how they send those notices already)

                                                                1. 2

                                                                  We’re not there yet, but we’re working on something along those lines - a decentralised social network where you are in control of your data - Peergos. Privacy and security are our primary goals.

                                                                  • We try to hide the metadata including the social graph, as well as your actual data.
                                                                   • Multi-device login
                                                                  • You can log in to any server you trust (not just the one you signed up with) including running it locally yourself
                                                                  • Social layer is currently limited to following and sharing files, but we eventually want to add more of a social feed + messaging

                                                                  You can read more in our (WIP - no diagrams yet) docbook: https://peergos.github.io/book

                                                                  1. 3

                                                                     It says it verifies memory safety, functional correctness, and secret independence. Could someone expand on exactly what is meant by the latter two?

                                                                    Does “secret independence” mean constant time as a function of the secret and input size?

                                                                    1. 4

                                                                      Functional correctness means you’ve precisely specified what it’s supposed to do and proven it does it. Secret independence is basically about avoiding timing attacks (i.e. side channels). You might find the slides helpful.

                                                                      1. 2

                                                                        Those slides are fantastic! Thanks, Nick!

                                                                         Do any of these proofs cover things like Spectre? They must include some model of the CPU to be able to prove anything, right? Is it just as I said, and they prove that the time is not a function of the secret (for the given CPU model)? E.g. I can imagine a CPU which takes longer to multiply by 5 than by any other number, and I don’t see how any normal proof would cover this.

                                                                        1. 3

                                                                          Answering my own question, from https://eprint.iacr.org/2017/536.pdf :

                                                                           Yes, they do have a CPU model, but it is very simple. They have a few operations which are assumed to be constant-time on the CPU. They even explicitly call out non-constant-time integer multiplication on some CPUs (but they ignore this). They use a cool technique of defining a new type (a secret int) to represent secrets, which doesn’t have a compare operation. Very nice.
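                                                                           Here’s a rough Python rendering of that trick (`SecretInt` and `declassify_equal` are my names, not the paper’s): give secrets a type whose comparison operator is removed, so any attempt to branch on a secret fails loudly instead of silently leaking timing.

```python
# Secrets as an opaque type: arithmetic is allowed, comparison is not.
import hmac

class SecretInt:
    def __init__(self, value: int):
        self._value = value

    # Arithmetic on secrets yields secrets.
    def __add__(self, other):
        return SecretInt(self._value + other._value)

    def __xor__(self, other):
        return SecretInt(self._value ^ other._value)

    # Comparing secrets is forbidden: branching on the result would
    # make running time a function of the secret.
    def __eq__(self, other):
        raise TypeError("cannot branch on a secret; declassify explicitly")

    __hash__ = None

def declassify_equal(a: "SecretInt", b: "SecretInt") -> bool:
    # The one sanctioned exit: a constant-time comparison.
    return hmac.compare_digest(
        a._value.to_bytes(8, "big"), b._value.to_bytes(8, "big"))

x, y = SecretInt(5), SecretInt(5)
try:
    x == y                # refuses to run: would leak via timing
except TypeError:
    pass
assert declassify_equal(x, y)
```

                                                                           In F* the check happens at compile time rather than at runtime, but the discipline is the same: the type system tracks which values are secret and forbids observable branches on them.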

                                                                    1. 1

                                                                      Merry Christmas everyone! Enjoy the sunshine or snow!

                                                                      1. 5

                                                                        This week I’m hoping to finish the final major missing feature before launching our Peergos [1] alpha - the ability to limit a single user’s storage space on our server (whilst still using native IPFS calls for writes). Then things start to get exciting. It’s gotten much faster in recent weeks by optimising an IPFS call, fixing our UI code, and now moving the crypto to a web worker so it doesn’t fight with UI events for CPU.

                                                                        Last week I found some interesting DoS potential in the CBOR parser - kinda similar to a zip bomb, it’s very easy to encode something in a very small object (which isn’t valid CBOR) which will explode the decoder by trying to allocate loads of memory. There’s a relatively easy way to remove this possibility: check the declared size of things against a maximum before allocating.

                                                                        [1] https://github.com/Peergos/Peergos
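                                                                        The defence can be sketched like this (hand-rolled header parsing for CBOR arrays only, illustrative rather than a real decoder): a 9-byte header can claim a 2^60-element array, so compare the declared length against the bytes actually available (and a policy cap) before allocating anything.

```python
# "CBOR bomb" guard: refuse to pre-allocate for a declared collection
# length that the input could not possibly back.
import struct

MAX_ALLOC = 1 << 20  # policy limit, tune per application

def declared_array_length(buf: bytes) -> int:
    # CBOR array header: major type 4; additional info < 24 is an
    # immediate length, 27 means an 8-byte length follows.
    first = buf[0]
    if first >> 5 != 4:
        raise ValueError("not an array header")
    info = first & 0x1F
    if info < 24:
        return info
    if info == 27:
        return struct.unpack(">Q", buf[1:9])[0]
    raise ValueError("unhandled header form")

def safe_reserve(buf: bytes) -> list:
    n = declared_array_length(buf)
    # Each element needs at least one byte of input, so a declared
    # length exceeding the remaining bytes (or our cap) is a bomb.
    if n > len(buf) - 1 or n > MAX_ALLOC:
        raise ValueError("declared %d elements, refusing to allocate" % n)
    return [None] * n

bomb = bytes([0x9B]) + (1 << 60).to_bytes(8, "big")  # 9 bytes, huge claim
try:
    safe_reserve(bomb)
except ValueError:
    pass  # rejected before any allocation happens
```

                                                                        The same bound applies recursively to maps, strings and nested arrays; the key invariant is that nothing is allocated on the say-so of a length field alone.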

                                                                        1. 3

                                                                          This week I’m trying to put in the last few things we need to start really talking about Peergos [1], our E2E encrypted social network that doesn’t expose the social graph - basically server-side limits on the max number of accounts and the data stored per account, so we don’t get DoS’ed.

                                                                          Last week I managed to speed up writes for us by optimising a call in ipfs itself by 1000x (also my first code merged to go-ipfs), and, in an independent fix, improved our read speed by 3-4x, now the web ui is decently usable speedwise.

                                                                          [1] https://github.com/Peergos/Peergos

                                                                          1. 3

                                                                            This week I’m going to merge a big change to Peergos which fixes a potential data loss bug under concurrent writes by the same user to the same directory/file from different machines. More excitingly, I’m hoping to get streaming end-to-end encrypted video working in Peergos.

                                                                            1. 1

                                                                              New systems should ideally use Multihash
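                                                                              For reference, a multihash is just varint(algorithm code) ++ varint(digest length) ++ digest, so identifiers carry their own hash function and a system can migrate algorithms later without breaking old references. A minimal Python sketch for sha2-256 (whose multicodec code is 0x12):

```python
# Encode a sha2-256 digest in multihash wire format:
# varint(code) ++ varint(length) ++ digest.
import hashlib

def varint(n: int) -> bytes:
    # Unsigned LEB128: 7 bits per byte, high bit set on continuation.
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def multihash_sha256(data: bytes) -> bytes:
    digest = hashlib.sha256(data).digest()
    return varint(0x12) + varint(len(digest)) + digest

mh = multihash_sha256(b"hello")
assert mh[0] == 0x12 and mh[1] == 32 and len(mh) == 34
```

                                                                              A decoder reads the two varints first, so it always knows which function produced the digest and how many bytes to expect.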

                                                                              1. 2

                                                                                I finished the move to hashes of public keys in Peergos last week, which smooths the way for switching to post-quantum crypto. Hopefully that’s the last breaking change for a long time.

                                                                                This week I’ll deploy that and flesh out some more parts of our website.