Threads for nbarbey

  1. 7

    Well, some of us are in this category (as the article points out):

    If you’re building API services that need to support server-to-server or client-to-server (like a mobile app or single page app (SPA)) communication, using JWTs as your API tokens is a very smart idea. In this scenario:

    • You will have an authentication API which clients authenticate against, and get back a JWT
    • Clients then use this JWT to send authenticated requests to other API services
    • These other API services use the client’s JWT to validate that the client is trusted and can perform some action without needing to perform a network validation

    so JWT is not that bad. Plus, it is refreshing to visit a website that says ‘there are no cookies here’… in their privacy policy.

    1. 17

      Plus, it is refreshing to visit a website that says ‘there are no cookies here’… in their privacy policy.

      The EU “Cookie Law” applies to all methods of identification — cookies, local storage, JWT, parameters in the URL, even canvas fingerprinting. So it shouldn’t have any effect on the privacy policy whatsoever.

      1. 9

        You can still use sessions with cookies, especially with an SPA. Unless the JWT is stateless and short-lived, you should not use it. JWT isn’t the best design either, as it gives too much flexibility and too many possibilities for misuse. PASETO tries to resolve these problems by versioning the protocol and reducing the number of possible hash/encryption methods.

        1. 1

          Why shouldn’t you use long lived JWTs with a single page application?

          1. 4

            Because you cannot invalidate that token.

            1. 6

              Putting my pedant hat on: technically you can, using blacklists or swapping signing keys; but that then negates the benefit of encapsulating a user “auth key” in a token, because the server will have to do a database lookup anyway, and by that point it might as well be a traditional cookie-backed session.

              JWTs are useful when short-lived for “serverless”/lambda APIs, so they can authenticate the request and move along quickly, but for more traditional things they can present more challenges than solutions.
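              To make the trade-off concrete, here’s a minimal sketch of the blacklist variant (assuming Node with the jsonwebtoken package; the revocation store is hypothetical). The revoked-set lookup is exactly the per-request store hit a cookie-backed session would need:

              ```typescript
              import jwt, { JwtPayload } from "jsonwebtoken";

              // Hypothetical revocation store; in production this would be a
              // database or cache, i.e. the lookup JWTs were supposed to avoid.
              const revoked = new Set<string>();

              function verifyWithBlacklist(token: string, secret: string): JwtPayload | null {
                try {
                  const claims = jwt.verify(token, secret) as JwtPayload;
                  // jti is the standard unique-token-id claim.
                  if (claims.jti && revoked.has(claims.jti)) return null;
                  return claims;
                } catch {
                  return null; // bad signature or expired
                }
              }
              ```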

              1. 7

                Putting my pedant hat on: technically you can, using blacklists or swapping signing keys; but that then negates the benefit of encapsulating a user “auth key” in a token, because the server will have to do a database lookup anyway, and by that point it might as well be a traditional cookie-backed session.

                Yes, that was my point. It was just a mental shortcut: if you do that, then there is no difference between “good ol’” sessions and using JWT.

                Simple flow chart.

                1. 1

                  Except it is not exactly the same, since losing a blacklist database is not the same as losing a token database, for instance. The former will not invalidate all sessions, but it will re-enable old tokens. Which may not be that bad if the tokens are sufficiently short-lived.

                  1. 1

                    Except “reissuing” old tokens has much less impact (at most your clients will be a little annoyed) than allowing leaked tokens to become valid again. If I were a client, I would much prefer the former to the latter.

        2. 5

          One of my major concerns with JWTs is that retraction is a problem.

          Suppose I have the requirement that old authenticated sessions must be remotely retractable. How on earth would I make a given JWT invalid without having to consult the database for “expired sessions”?

          The JWT to be invalidated could still reside on the devices of certain users after it has been invalidated remotely.

          The only way I can think of is making them so short-lived that they expire almost instantaneously, in a few minutes at most, which means that user sessions will be terminated annoyingly fast as well.

          If I can get nearly infinite sessions and instant retractions, I will gladly pay the price of hitting the database on each request.

          1. 8

            JWT retraction can be handled in the same way a traditional API token’s would be: you add it to a blacklist, or, in the case of a JWT, the “secret” it is signed against can be changed. However, both solutions negate the advertised benefit of JWTs, or rather the benefit I have seen JWTs advertised for: namely, removing the need for a session lookup in the database.

            I have used short-lived JWTs for communicating with various stateless (serverless/lambda) APIs, and for that purpose they work quite well: each endpoint has a certificate it can check the JWT’s validity with, and having the user’s profile and permissions encapsulated means not needing a database connection to know what the user is allowed to do. A 60-second validity period gives the request enough time to authenticate before the token expires, while removing the need for retraction.
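            Roughly like this, as a sketch (assuming Node’s jsonwebtoken with an RSA key pair; the file names and claims are made up):

            ```typescript
            import { readFileSync } from "fs";
            import jwt from "jsonwebtoken";

            // The issuer signs with its private key; each stateless endpoint
            // only needs the matching public key, so verification requires no
            // shared secret and no database connection.
            const privateKey = readFileSync("issuer-private.pem");
            const publicKey = readFileSync("issuer-public.pem");

            // Profile and permissions travel inside the token itself.
            const token = jwt.sign(
              { sub: "user-123", permissions: ["reports:read"] },
              privateKey,
              { algorithm: "RS256", expiresIn: "60s" } // short-lived: no retraction needed
            );

            // At the endpoint: signature and expiry check, nothing else.
            const claims = jwt.verify(token, publicKey, { algorithms: ["RS256"] });
            ```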

            I think the problem with JWTs is that many people have attempted to use them as a solution for a problem already better solved by other things that have been around and battle tested for much longer.

            1. 7

              However, both solutions negate the advertised benefit of JWTs, or rather the benefit I have seen JWTs advertised for: namely, removing the need for a session lookup in the database.

              I think the problem with JWTs is that many people have attempted to use them as a solution for a problem already better solved by other things that have been around and battle tested for much longer.

              This is exactly my main concern and also the single reason I haven’t used JWTs anywhere yet. I can imagine services where JWTs would be useful, but I have yet to see or build one where some form of retraction wasn’t a requirement.

              My usual go-to solution is to generate some 50-100 character string of gibberish, store it in a cookie on the user’s machine, and keep a database table of <user_uuid, token_string, expiration_timestamp> triples which is then joined with the table containing the user data. Such queries are usually blazing fast, and retraction is then a simple DELETE query. Also, scaling usually isn’t that big of a concern, as most DBMSs have the required features built in already.

              Usually I also set up a scheduled event in the DBMS which deletes all expired tokens from that table periodically, typically once per day at night or when the number of active users is low. It makes for a nice fallback just in case some programming bug inadvertently creeps in.
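              Sketched out, the whole scheme fits in a few lines (assuming Node with the pg client; everything but the table layout described above is made up):

              ```typescript
              import { randomBytes } from "crypto";
              import { Pool } from "pg";

              const db = new Pool(); // connection details via the usual PG* env vars

              // Assumed table, mirroring the triple described above:
              //   CREATE TABLE sessions (user_uuid uuid, token_string text,
              //                          expiration_timestamp timestamptz);

              // Issue: ~64 characters of gibberish, stored in the table and in a cookie.
              async function issueToken(userUuid: string): Promise<string> {
                const token = randomBytes(48).toString("base64url");
                await db.query(
                  "INSERT INTO sessions VALUES ($1, $2, now() + interval '30 days')",
                  [userUuid, token]
                );
                return token;
              }

              // Retraction is a plain DELETE.
              async function retract(token: string): Promise<void> {
                await db.query("DELETE FROM sessions WHERE token_string = $1", [token]);
              }
              ```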

              But I guess this was the original author’s point as well.

            2. 1

              I’ve never done any work with JWTs so this might be a dumb question - but can’t you just put an expiration time into the JWT data itself, along with the session and/or user information? The user can’t alter the expiration time because presumably that would invalidate the signature, so as long as the timestamp is less than $(current_time) you’d be good to go? I’m sure I’m missing something obvious.

              1. 5

                If someone steals the JWT, they have free rein until it expires. With a session, you can remotely revoke it.

                1. 1

                  That’s not true. You just put a black mark next to it and every request after that will be denied - and it won’t be refreshed. Then you delete it once it expires.

                  1. 7

                    That’s not true. You just put a black mark next to it and every request after that will be denied - and it won’t be refreshed. Then you delete it once it expires.

                    The problem with the black mark is that you have to hit some sort of database to check for that black mark. By doing so, you negate the usefulness of JWTs. That is one of the OP’s main points.

                    1. 2

                      Well, not necessarily. If you’re making requests often (e.g., every couple of seconds) and you can live with a short delay between logging out and the session being invalidated, you can set the timeout on the JWT to ~30 seconds or so and only check the blacklist if the JWT is expired (and, if the session isn’t blacklisted, issue a new JWT). This can save a significant number of database requests for a chatty API (like you might find in a chat protocol).
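                      Something along these lines, as a sketch (jsonwebtoken again; the sid claim and the blacklist check are hypothetical):

                      ```typescript
                      import jwt, { JwtPayload, TokenExpiredError } from "jsonwebtoken";

                      const SECRET = process.env.JWT_SECRET!;

                      // Returns a valid token (the old one or a fresh one), or null if
                      // the session is dead. The database is only consulted on expiry.
                      async function authenticate(token: string): Promise<string | null> {
                        try {
                          jwt.verify(token, SECRET);
                          return token; // still inside the ~30 s window: no lookup at all
                        } catch (err) {
                          if (!(err instanceof TokenExpiredError)) return null; // tampered
                          // Expired: check the signature only, then hit the blacklist once.
                          const claims = jwt.verify(token, SECRET, { ignoreExpiration: true }) as JwtPayload;
                          if (await sessionBlacklisted(claims.sid)) return null; // logged out
                          return jwt.sign({ sid: claims.sid }, SECRET, { expiresIn: "30s" });
                        }
                      }

                      // Hypothetical database check: the only remaining lookup.
                      declare function sessionBlacklisted(sid: string): Promise<boolean>;
                      ```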

                      1. 1

                        Or refresh a local cache of the blacklist periodically on each server, so it’s a purely in-memory lookup.

                        1. 4

                          But in that case you’d be defeating their use as session tokens, because you are limited to very short sessions. You are just one network hiccup away from failure, which also defeats their purpose (which was another of the OP’s points).

                          I see how they can be useful in situations where you are making a lot of requests, but the point is that 99.9% of websites don’t do that.

              2. 1

                For mobile apps that have safe storage for passwords, the retraction problem is solved by issuing refresh tokens (which live longer, like passwords in the password store of a mobile phone). The refresh tokens are then used to issue new authorization tokens periodically, transparently to the user. You can reissue the authorization token using the refresh token every 15 minutes, for example.

                For web browsers, using refresh tokens may or may not be a good idea. Refresh tokens are, from the security perspective, the same as passwords (although temporary). So their storage within the web browser should follow the same policy one would have for passwords.

                So if using refresh tokens for your single-page app is not an option, then invalidation would have to happen during access-control validation on the backend. (The backend is still responsible for access control anyway, because it cannot be done securely on web clients.)

                It is more expensive, and requires a form of distributed cache if you have a distributed backend that allows stateless, non-IP-bound distribution of requests…
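                The two-token flow is roughly this (a sketch with jsonwebtoken; the storage helpers are hypothetical):

                ```typescript
                import { randomBytes } from "crypto";
                import jwt from "jsonwebtoken";

                const SECRET = process.env.JWT_SECRET!;

                // Short-lived authorization token, reissued every ~15 minutes.
                function mintAuthToken(userId: string): string {
                  return jwt.sign({ sub: userId }, SECRET, { expiresIn: "15m" });
                }

                // Login: the only time the password travels. The long-lived refresh
                // token is opaque and stored server-side, so it can be retracted.
                async function login(userId: string, password: string) {
                  await checkPassword(userId, password);
                  const refreshToken = randomBytes(32).toString("base64url");
                  await storeRefreshToken(userId, refreshToken); // revocable at any time
                  return { refreshToken, authToken: mintAuthToken(userId) };
                }

                // Transparent renewal; revoking the refresh token ends the session.
                async function refresh(userId: string, refreshToken: string): Promise<string> {
                  if (!(await refreshTokenValid(userId, refreshToken))) throw new Error("retracted");
                  return mintAuthToken(userId);
                }

                // Hypothetical persistence helpers.
                declare function checkPassword(userId: string, password: string): Promise<void>;
                declare function storeRefreshToken(userId: string, token: string): Promise<void>;
                declare function refreshTokenValid(userId: string, token: string): Promise<boolean>;
                ```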

                1. 1

                  For mobile apps that have safe storage for passwords, the retraction problem is solved by issuing refresh tokens (which live longer, like passwords in the password store of a mobile phone).

                  But then why use two tokens instead of a single one? It makes everything more complicated for the sake of the perceived simplification of not doing one DB request on each connection. Meh. And you can even use a cookie, as in your web UI, so in the end it will make everything simpler, as you do not need to use two separate auth systems in your app.

                  1. 1

                    It makes everything more complicated for the sake of the perceived simplification of not doing one DB request on each connection.

                    This is not really why two tokens are used (an authentication token and a refresh token). Two tokens are used to a) allow fast expiration of an authentication request, and b) prevent passing the actual user password through to the backend (it only needs to be passed when creating a refresh token).

                    This is a fairly standard practice though, not something I invented (it requires an API-accessible, secure password store on the user’s device, which is why it is prevalent in mobile apps).

                    I also cannot see how a) and b) can be achieved with a single token.

            1. 23

              I think people rely on JavaScript too much. With sourcehut I’m trying to set a good example, proving that it’s possible (and not that hard!) to build a useful and competitive web application without JavaScript and with minimal bloat. The average sr.ht page is less than 10 KiB with a cold cache. I’ve been writing a little about why this is important, and in the future I plan to start writing about how it’s done.

              In the long term, I hope to move more things out of the web entirely, and I hope that by the time I breathe my last, the web will be obsolete. But it’s going to take a lot of work to get there, and I don’t have the whole plan laid out yet. We’ll just have to see.

              I’ve been thinking about this a lot lately. I really don’t like the web from a technological perspective, both as a user and as a developer. It’s completely outgrown its intended use-case, and with that has brought a ton of compounding issues. The trouble is that the web is usually the lowest-common-denominator platform because it works on many different systems and devices.

              A good website (in the original sense of the word) is a really nice experience, right out of the box. It’s easy for the author to create (especially with a good static site generator), easy for nearly anyone to consume, doesn’t require a lot of resources, and can be made easily compatible with user-provided stylesheets and reader views. The back button works! Scrolling works!

              Where that breaks down is with web applications. Are server-rendered pages better than client-rendered pages? That’s a question that’s asked pretty frequently. You get a lot of nice functionality for free with server-side rendering, like a functioning back button. However, the web was intended to be a completely stateless protocol, and web apps (with things like session cookies) are kind of just a hack on top of that. The experience of using a good web app without JavaScript can be a bit of a pain with many different use cases (for example, upvoting on sites like this: you don’t want to force a page refresh, potentially losing the user’s place on the page). Security is difficult to get right when the server manages state.

              I’ll argue, if we’re trying to avoid the web, that client-side rendering (single-page apps) can be better. They’re more like native programs in that the client manages the state. The backend is simpler (and can be the backend for a mobile app without changing any code). The frontend is way more complex, but it functions similarly to a native app. I’ll concede a poorly-built SPA is usually a more painful experience than a poorly-built SSR app, but I think SPAs are the only way to bring the web even close to the standard set by real native programs.

              Of course, the JavaScript ecosystem can be a mess, and it’s often a breath of fresh air to use a site like Sourcehut instead of ten megs of JS. The jury’s still out as to which approach is better for all parties.

              1. 11

                (for example, upvoting on sites like this: you don’t want to force a page refresh, potentially losing the user’s place on the page)

                Some of the UI benefits of SPA are really nice tbh. Reddit for example will have a notification icon that doesn’t update unless you refresh the page, which can be annoying. It’s nice when websites can display the current state of things without having to refresh.

                I can’t find the video, but the desire for eliminating stale UI (like outdated notifications) in Facebook was one of the reasons React was created in the first place. There just doesn’t seem to be a way to do things like that with static, js-free pages.

                The backend is simpler (and can be the backend for a mobile app without changing any code).

                I never thought about that before, but to me that’s a really appealing point to having a full-featured frontend design. I’ve noticed some projects with the server-client model where the client-side was using Vue/React, and they were able to easily make an Android app by just porting the server.

                The jury’s still out as to which approach is better for all parties.

                I think, as always, it depends. In my mind there are some obvious choices for obvious use cases. Blogs work great as just static HTML files with some styling. Anything that really benefits from being dynamic (“reactive” I think is the term webdevs use) confers nice UI/UX benefits to the user with more client-side rendering.

                I think the average user probably doesn’t care about the stack and the “bloat”, so it’s probably the case that client-side rendering will remain popular anytime it improves the UI/UX, even if it may not be necessary (plus cargo-culting lol). One could take it to an extreme and say that you can have something like Facebook without any JavaScript, but would people enjoy that? I don’t think so.

                1. 17

                  But you don’t need to have an SPA to have notifications without a refresh. You just need a small dynamic part of the page, which will degrade gracefully when JavaScript is disabled.
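                  For instance, something as small as this (a sketch; the /api/notifications endpoint and the element id are made up) keeps the rest of the page server-rendered, and with JS off the badge simply shows the count from page load:

                  ```typescript
                  // Poll a hypothetical JSON endpoint and update a badge the
                  // server already rendered.
                  async function pollNotifications(): Promise<void> {
                    const res = await fetch("/api/notifications", { credentials: "same-origin" });
                    if (!res.ok) return; // network hiccup: keep the stale badge
                    const { unread } = await res.json();
                    const badge = document.getElementById("notification-badge");
                    if (badge) badge.textContent = String(unread);
                  }

                  setInterval(pollNotifications, 30_000); // every 30 s
                  ```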

                  Claim: Most sites are mostly static content. For example, AirBNB or Grubhub. Those sites could be way faster than they are now if they were architected differently. Only when you check out do you need anything resembling an “app”. The browsing and searching is better done with a “document” model IMO.

                  Ditto for YouTube… I think it used to be more a document model, but now it’s more like an app. And it’s gotten a lot slower, which I don’t think is a coincidence. Netflix is a more obvious example – it’s crazy slow.

                  To address the OP: for Sourcehut/Github, I would say everything except the PR review system could use the document model. Navigating code and adding comments is arguably an app.

                  On the other hand, there are things that are and should be apps: Google Maps, Docs, Sheets.


                  edit: Yeah now that I check, YouTube does the infinite scroll thing, which is slow and annoying IMO (e.g. breaks bookmarking). Ditto for AirBNB.

                  1. 3

                    I’m glad to see some interesting ideas in the comments about achieving the dynamism without the bloat. A bit of Cunningham’s law in effect ;). It’s probably not easy to get such suggestions elsewhere since all I hear about is the hype of all the fancy frontend frameworks and what they can achieve.

                    1. 8

                      Yeah SPA is a pretty new thing that seems to be taking up a lot of space in the conversation. Here’s another way to think about it.

                      There are three ways to manage state in a web app:

                      1. On the server only (what we did in the 90’s)
                      2. On the server and on the client (sometimes called “progressive enhancement”, jQuery)
                      3. On the client only (SPA, React, Elm)

                      As you point out, #1 isn’t viable anymore because users need more features, so we’re left with a choice between #2 and #3.

                      We used to do #2 for a long time, but #3 became popular in the last few years.

                      I get why! #2 is legitimately harder – you have to decide where to manage your state, and managing state in two places is asking for bugs. It was never clear if those apps should work offline, etc.

                      But somehow #3 doesn’t seem to have worked out in practice. Surprisingly, hitting the network can be faster than rendering in the browser, especially when there’s a tower of abstractions on top of the browser. Unfortunately I don’t have references at the moment (help appreciated from other readers :) )

                      I wonder if we can make a hybrid web framework for #2. I have seen a few efforts in that direction but they don’t seem to be popular.


                      edit: here are some links, not sure if they are the best references:

                      https://news.ycombinator.com/item?id=13315444

                      https://adamsilver.io/articles/the-disadvantages-of-single-page-applications/

                      Oh yeah, I think this is what I was thinking of. Especially on mobile phones, an SPA can be slower than hitting the network! The code to render a page is often bigger than the page itself! And it may or may not be amortized, depending on the app’s usage pattern.

                      https://medium.com/@addyosmani/the-cost-of-javascript-in-2018-7d8950fbb5d4

                      https://news.ycombinator.com/item?id=17682378

                      https://v8.dev/blog/cost-of-javascript-2019

                      https://news.ycombinator.com/item?id=20317736

                      1. 3

                        A good example of #2 is Ur/Web. Pages are rendered server-side using templates which look very similar to JSX (but without the custom uppercase components part) and similarly desugar to simple function calls. Then at any point in the page you can add a dyn tag, which takes a function returning a fragment of HTML (using the same language as the server-side part, and in some cases even the same functions!) that will be run every time one of the “signals” it subscribes to is triggered. A signal could be triggered from inside an onclick handler, or even from an event happening on the server. This list of demos does a pretty good job of showing what you can do with it.

                        So most of the page is rendered on the server and will display even with JS off, and only the parts that need to be dynamic will be handled by JS, with almost no plumbing required to pass around the state: you just need to subscribe to a signal inside your dyn tag, and every time the value inside changes it will be re-rendered automatically.

                        1. 2

                          Thanks a lot for all the info, really helpful stuff.

                      2. 5

                        Reddit for example will have a notification icon that doesn’t update unless you refresh the page, which can be annoying. It’s nice when websites can display the current state of things without having to refresh.

                        On the other hand, it can be annoying when things update without a refresh, distracting you from what you were reading. Different strokes for different folks. Luckily it’s possible to fulfill both preferences, by degrading gracefully when JS is disabled.

                        I think the average user probably doesn’t care about the stack and the “bloat”, so it’s probably the case that client-side rendering will remain popular anytime it improves the UI/UX, even if it may not be necessary (plus cargo-culting lol).

                          The average user does care that browsing the web drains their battery, or that they have to upgrade their computer every few years in order to avoid lag on common websites. I agree that we will continue to see the expansion of heavy client-side rendering, even in cases where it does not benefit the user, because it benefits the companies that control the web.

                        1. 1

                          Some of the UI benefits of SPA are really nice tbh. Reddit for example will have a notification icon that doesn’t update unless you refresh the page, which can be annoying. It’s nice when websites can display the current state of things without having to refresh.

                            Is this old reddit or new reddit? The new one is sort of an SPA, and I recall it updating without a refresh.

                          1. 3

                            Old reddit definitely has the issue I described, not sure about the newer design. If the new reddit doesn’t have that issue, that aligns with my experience of it being bloated and slow to load.

                        2. 12

                          example, upvoting on sites like this: you don’t want to force a page refresh, potentially losing the user’s place on the page

                          There are lots of ways to do this. Here are two:

                          1. You can use an iframe for the upvote link, and have the state change just reload the frame.
                          2. If you don’t need feedback, you can also use a button with a target= to a hidden iframe.

                          Security is difficult to get right when the server manages state.

                          I would’ve thought the exact opposite. Can you explain?

                          1. 7

                            In the case where you have lots of buttons like that, isn’t loading multiple completely separate DOMs and then reloading one or more of them somewhat worse than just using a tiny bit of JS? I try to use as little as possible, but I think that kind of dynamic interaction is the use case JS was originally made for.

                            1. 7

                              Worse? Well, iframes are faster (marginally), but yes I’d probably use JavaScript too.

                              I think most NoScript users will download tarballs and run ./configure && make -j6 without checking anything, so I’m not sure why anyone wants to turn off JavaScript anyway, except maybe because adblockers aren’t perfect.

                              That being said, I use NoScript…

                            2. 4

                              I’m not sure if this would work, but an interesting idea would be to use checkboxes that restyle when checked, and by loading a background image with a query or fragment part, the server is notified of which story is upvoted.

                              1. 2

                                That’d require using GET, which might make it harder to prevent accidental upvotes. Could possibly devise something, though.

                            3. 4

                              One thing I really miss with SPAs (when used as apps), aside from performance, is the slightly more consistent UI/UX/HI that you generally get with desktop apps. Most major OS vendors, and most OSS desktop toolkits, at least have some level of uniformity of expectation. Things like: there is a general style for most buttons and menus, there are some common effects (fade, transparency), and scrolling behavior is more uniform.

                              With SPAs… well, good luck! Not only is it often browser-dependent, but that is matrixed with a myriad of JS frameworks, conventions, and render/load performance on top. I guess the web is certainly exciting, if nothing else!

                              1. 3

                                I consider the “intended use-case” argument a bit weak, since for the last 20 years web developers, browser architects, and our tech overlords have been working on making the web work for applications (and data collection), and to be honest it does so most of the time. They can easily blame the annoyances like pop-ups and cookie banners on regulations and people who use ad blockers, but from a non-technical perspective, it’s a functional system. Of course, when you take a look underneath, it’s a mess, and we’re inclined to say that these aren’t real websites, when it’s the incompetence of our operating systems that has created the need to off-load these applications to a higher level of abstraction – something had to do it – and the web was just flexible enough to take on that job.

                                1. 4

                                  You’re implying it’s Unix’s fault that the web is a mess but no other OS solved the problem either? Perhaps you would say that Plan 9 attempted to solve part of it, but that would only show that the web being what it is today isn’t solely down to lack of OS features.

                                  I’d argue that rather than being a mess due to the incompetence of the OS, it’s a mess due to the incremental adoption of different technologies for pragmatic reasons. It sadly seems to be this way: even if Plan 9 was a better Unix from a purely technological standpoint, Unix was already so widespread that it wasn’t worth putting in the effort to switch to something marginally better.

                                  1. 7

                                    No, I don’t think Plan 9 would have fixed things. It’s still fundamentally focused on text processing, rather than hypertext and universal linkability between objects and systems – i.e. the fundamental abstractions of an OS rather than just its features. Looking at what the web developed tells us what needs were unformulated and ultimately ignored by OS development initiatives, or rather set aside for their own in-group goals (Unix was a research OS, after all). It’s most improbable that anyone could have foreseen what developments would take place, and even more so that anyone will be able to fix them now.

                                2. 2

                                  From reading the interviewer’s question, I get the feeling that it’s easy for non-technical users to create a website using WordPress. Adding many plugins most likely leads to a lot of bloated JavaScript and CSS.

                                  I would argue that it’s a good thing that non-technical users can easily create websites, but the tooling to create them isn’t ideal. For many users a WYSIWYG editor which generates a static HTML page would be fine, but such a tool does not seem to exist, or isn’t well known.

                                  So I really see this as a tooling problem, which isn’t for users to solve; it’s for developers to create an excellent WordPress alternative.

                                  1. 2

                                    I am not affiliated with this in any way, but I know of https://forestry.io/, which looks like what you describe. I find their approach quite interesting.

                                  2. 0

                                    for example, upvoting on sites like this: you don’t want to force a page refresh, potentially losing the user’s place on the page)

                                    If a user clicks a particular upvote button, you should know where on that page it is located, and can use a page anchor in your response to send them back to it.
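                                     As a sketch of that idea (an Express handler; the route and the comment-<id> anchors are made up):

                                     ```typescript
                                     import express from "express";

                                     const app = express();

                                     // After recording the vote, redirect back with a fragment pointing
                                     // at the upvoted element, so the browser scrolls the user back to
                                     // it. 303 turns the POST into a GET on the way back.
                                     app.post("/upvote/:id", async (req, res) => {
                                       // ... record the vote for req.params.id here ...
                                       const back = req.get("Referer") ?? "/";
                                       res.redirect(303, `${back}#comment-${req.params.id}`);
                                     });
                                     ```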

                                    1. 1

                                       It’s not perfectly seamless, sadly, and it’s possible to set up your reverse proxy incorrectly enough to break applications relying on various HTTP headers to get exactly the right page back.

                                  1. 38

                                    It seems to me that suggesting a command-line-only (unless I’m mistaken?) tool like Hugo is a complete non-starter for, I don’t know, at least 80% of the people who are posting on Medium. I appreciate your effort—and I’m also becoming more irritated by Medium every day—but I think that learning how to use the terminal is just too high of a hurdle for most people to bother with. If your intention was only to convince the kind of people who read Lobsters and know what it means that something is “written in Go,” then it’s fine, but I don’t think this site presents a viable solution for the rest of the users.

                                    The fundamental problem, I think, is that in order for someone to own their digital identity in any meaningful way, they have to have (at a minimum) their own domain name, and even that is a significant technical hurdle—never mind the fact that it costs money. Maybe the most viable “indie” solution we have at this moment is to (1) guide people through the process of registering a domain and then (2) offer an easy-to-use, web-based blogging engine that people can point their DNS records to in order to get started with their own sites. The latter thing could be made cheap enough to host that some benevolent geek could just subsidize it. Even this, though, seems like so much more effort than Medium for the non-technical user.

                                    1. 14

                                      The IndieWeb community is very interested in breaking down the barriers to doing these things, like purchasing a domain name.

                                      1. 10

                                        Or, just point people to one of the many 1-click setup Wordpress hosting services. I know people like to hate PHP and Wordpress but it’s still better than Medium.

                                        1. 9

                                          Suggesting non-technical people manage their own Wordpress site is like suggesting a baby go carve your roast turkey. (It’s not going to end well).

                                           Wordpress is the Internet Explorer 6 of CMSes, and its plugins are the toolbars.

                                          Yes there are better things than Medium. No, Wordpress isn’t it.

                                          1. 2

                                             Totally agree. I know everyone would rail against this idea because it’s somebody else’s platform, but this is why I host my blog on wordpress.com: they handle the security, and I just get the ease of use, the platform with the widest client support of any blogging platform anywhere, and a really nice mobile client.

                                            1. 1

                                               Do you think there is an opportunity for a modern database-backed CMS beyond Ghost?

                                              1. 2

                                                 Being database-backed isn’t what makes Wordpress terrible.

                                                 However, for a lot of sites I think an SSG would be a better solution, even if that means they run a DB-backed CMS which then publishes content to a static location. The key thing with an SSG is that the rendered pages are static HTML. The source format is incidental: static files (e.g. Markdown) are a common pattern, but it could just as easily be a regular web app with a DB.

                                          2. 7

                                            It seems to me that suggesting a command-line-only (unless I’m mistaken?) tool like Hugo is a complete non-starter for, I don’t know, at least 80% of the people who are posting on Medium. The fundamental problem, I think, is that in order for someone to own their digital identity in any meaningful way, they have to have (at a minimum) their own domain name, and even that is a significant technical hurdle—never mind the fact that it costs money.

                                            Glad to see these remarks already posted!

                                            There’s still room IMO for blogging systems that live closer to WordPress on the Static-Site Gen <-> WYSIWYG CMS spectrum that are — crucially — easy to deploy on a basic LAMP stack. Make it as easy to post as on social media (Twitter / FB), with the admin part much more closely intertwined with the front-end, and you have a winner. (Would also love to know if there’s one already that fits the bill).

                                            1. 2

                                               Do you know https://forestry.io? It seems to me that what they are doing is pretty close to what you describe. (I am not affiliated in any way, by the way.)

                                            2. 4

                                              Couldn’t agree more!

                                              Generally speaking I think the first generation of web property developers created a monster with the whole idea of “free but not really” websites. Medium is just one example.

                                               Maybe there’s some kind of future where ubiquitous Raspberry-Pi-like server infrastructure would enable wide-scale publishing and data sharing, but we have a LONG, LONG way to go before we can get there.

                                               I suspect that in the nearer term, something like having pods of friends collaborate at some small cost to them to make their offerings available could work, but expecting everyone to use a command line is certainly a non-starter.

                                              We techies need to keep reminding ourselves that the rest of the world is not us. They don’t care that Medium is slow, or that the paywall violates our tender sensibilities. They want to accomplish something and want the shortest path to getting there. Full stop.

                                              1. -1

                                                 Definitely agree here.

                                              1. 4

                                                Does this mean it’s possible to just watch the DHT on IPFS and pull data people are inserting? It’s not encrypted in any way?

                                                1. 8

                                                  That’s exactly what this is :)

                                                  You’re free to publish encrypted content on the IPFS, but you aren’t obligated to.

                                                  1. 6

                                                     And I wouldn’t, since encrypted content on IPFS would be exposed to everyone and brute-forced eventually if anyone cared (once the cipher is broken in the future, etc.).

                                                    1. 3

                                                      This is kind of my worry with IPFS. I wanted to have a “private” thing where I could also share with my family in a mostly-secure way (essentially, least chance of leaking everything to the whole world while still being able to access my legitimately-acquired music collection without having to ssh home). Turns out that’s not simple to set up.

                                                      1. 6

                                                        We ([0][1]) are trying to add encryption and other security enhancements, including safe sharing, on top of IPFS. Still pre-alpha though.

                                                        [0] - https://github.com/Peergos/Peergos

                                                        [1] - https://peergos.github.io/book

                                                        1. 5

                                                           You just have to add encryption before transmission. IPFS is kind of a low-level thing (like how you won’t find any encryption in TCP, because that comes later); it really needs good apps built on top to be useful.

                                                          1. 2

                                                             IPFS is a better BitTorrent, designed to work very well as a replacement for the public web. Private sharing has different requirements – I use Syncthing for similar semantics in private.

                                                            1. 1

                                                               Do you know about Upspin? What do you think of it? One of its stated goals is security, but it seems to be at quite an early stage for now.

                                                            2. 2

                                                              Interesting. I bet a lot of inserters aren’t aware. Sounds like a great opportunity for bots that:

                                                               • Look for copyrighted/illegal content and the IP addresses of the nodes seeding it, automating contacting the ISP
                                                               • Scan for cryptocoin wallets/private keys
                                                               • Scan for unencrypted KeePass backups, etc.

                                                              More relevant to the article though, I like the Rust code. Very readable!

                                                              1. 5

                                                                 IPFS is basically just a big torrent swarm. Doing that “copyrighted content scan” thing on the BitTorrent DHT is already possible (and I’m pretty sure that’s how they send those notices already).

                                                          1. 3

                                                             I have a friend who runs a French instance: https://infos.mytux.fr/