1. 10

    While I agree that the writer follows a certain logic and it makes sense in their world view, I think there’s one important thing missing: The copy-left and free software part of open source always was more than just “this needs to work well in the established system”, it had a more revolutionary part that made it so interesting. It tried to build an alternative to the capitalist way of working and paying for work, because software and digital content (being copyable) is fundamentally different from physical goods and services. The free software people tried to apply new ways of thinking to those new goods. This is why it was frowned upon in the early days by entrepreneurs, and it was completely unthinkable that something like Linux would ever work. This part can’t be grasped by the logic the author uses.

    1. 11

      To add to that a little: the kind of friction that the author mentions became more or less inevitable once FOSS expanded past the community of dogfooding enthusiasts, and it comes from both sides.

      I.e. for a long time, it wasn’t uncommon for people to show up not just with bug reports, but also with a patch. Some communities still see this as natural (FRIGN mentions it in another comment here re. suckless, but they aren’t the only ones), and facilitate this in every possible way (I haven’t contributed anything to suckless projects but reading their mailing lists it’s clear that they’re really cool about it).

      But it was inevitable that, as FOSS adoption grew, it would eventually grow to include people who lack the expertise to modify the tools they use, people who are simply forced to use them at work and so on. I mean, 20 years ago, lots of people who used Linux in enterprise settings were trying to get the higher-ups to approve using Linux on white boxes instead of Sun’s expensive stuff, and they had a lot of stakes in the game. Lots of people using it today are junior devops who inherited the Linux shop set up 20 years ago. They have no stakes in it, and there aren’t any Sun salesmen to yell at, either, so of course they get yelled at and then show up demanding bugfixes from the same people who showed up at FOSDEM encouraging others to adopt their tools. I’m not saying they’re right, just that I understand why they do it.

      There’s some friction at the other end, too. E.g. sometimes you unwittingly end up working around a bug that you have a patch for, but the fix will never get in because the project – while FOSS in terms of license – is effectively a corporate playground, and maintainers will (understandably) prioritize their colleagues’ fixes over yours, assuming you can even get them to look at yours, that is.

      1. 5

        RMS always stressed that he is a capitalist, not a communist. I guess the revolutionary part is that he wants to eradicate proprietary software, which would mean changing the business models of large parts of the software industry, but you can still pay people to work on free software.

        1. 1

          Then he’s not a capitalist. Capitalism is about earning dividends from property (i.e. software you’ve written or bought using capital), not from work.

          1. 2

            He believes in a capitalist system of production, which existed before software and could continue without proprietary software (in principle if not in practice).

        2. 4

          The copy-left and free software part of open source always was more than just “this needs to work well in the established system”, it had a more revolutionary part that made it so interesting. It tried to build an alternative to the capitalist way of working and paying for work

          While many choose to approach free software in this way, this mischaracterises the philosophy of the free software movement at least as RMS and the FSF established it. They have never been shy about authors charging money for free software - whether it’s your own or somebody else’s!

          Actually, we encourage people who redistribute free software to charge as much as they wish or can. If a license does not permit users to make copies and sell them, it is a nonfree license. (https://www.gnu.org/philosophy/selling.en.html)

          To me, the author’s approach seems very consistent with what FSF advocates.

          1. 1

            From the GNU Manifesto:

            We must distinguish between support in the form of real programming work and mere handholding. The former is something one cannot rely on from a software vendor. If your problem is not shared by enough people, the vendor will tell you to get lost.

          1. 5

            Note that this is a blog post by a company selling multi-master replication for the competing MySQL database.

            1. 1

              Percona XtraDB Cluster is open source. I don’t think they make any proprietary software, actually.

              1. 1

                Looks like they sell support contracts.

                1. 1

                  Don’t get me wrong, I think Galera is a good product; I use it myself. I just thought it worth noting that they’re commenting on the state of PostgreSQL from a competitor’s perspective.

              1. 7

                Because of the brackets? I never quite understood how this can be found aesthetic or intuitive to be honest:

                 (defun factorial (n)
                   (if (= n 0) 1
                       (* n (factorial (- n 1)))))
                

                Python is not my favorite language either, but this looks just so much better:

                def factorial(n):
                    if n == 0:
                        return 1
                    else:
                        return n * factorial(n-1)
                
                1. 14

                  My suspicion is that you haven’t spent enough time with it.

                  Over the years I’ve used languages with wildly different syntax (Lisp, Haskell, Ada, Python, Erlang) , and after a few months with any of them, my eyes and brain filter out the syntax and all I really think about is the semantic structure and what the code’s doing.

                  I think it’s part of the reason people are so persnickety about code formatting - consistent formatting makes common idioms and syntax elements easier to read at a glance.

                  1. 4

                    I actually agree that it looks better, but I think that’s beside the point. For me, the benefits of the S-expression syntax outweigh what I feel is a minor cost in aesthetics. Those benefits include: simplicity through uniformity, no ambiguity, and interchangeability of code and data (meta-programming).

                    1. 3

                      This is a matter of subjectivity for anyone who’s experienced with programming. For instance, I find the Lisp version much more beautiful. The indentation “flows” in a way that is much more curvy and less blocky than the majority of other languages - less so in the example that you gave, but in the general case.

                      Why do I think that it’s more beautiful? Because, even though I wrote Python and C for 4 years, I discovered and wrote Common Lisp for 5 years after that, and my tastes have changed. I think that it’s more interesting to ask beginners which syntax they prefer.

                      (not relevant to my point, but you might be interested to know that after you’ve written Lisp for a few months, the parentheses fade from your perceptual awareness. Lisp programmers read code mostly using indentation - we don’t count parentheses)

                      1. 1

                        Your Python snippet is longer than the Lisp one while not conveying any extra complexity. Put another way: it’s less elegant.

                        1. 1

                          Oh? Lisp: 29 tokens, Python: 24 tokens. If I count Python indentation as a token, then Python goes up to 28 or 30 tokens (more if 2 indentations in a row equal 2 tokens), which seems like an off-by-one error rather than a clear win for either :)
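
                          A crude way to make the comparison concrete (the lexer below is a hypothetical sketch, and the result depends entirely on what you decide counts as a token):

                          ```python
                          import re

                          lisp = "(defun factorial (n) (if (= n 0) 1 (* n (factorial (- n 1)))))"
                          py = (
                              "def factorial(n):\n"
                              "    if n == 0:\n"
                              "        return 1\n"
                              "    else:\n"
                              "        return n * factorial(n-1)"
                          )

                          def count_tokens(src):
                              # parens/brackets/colons, identifiers, numbers, operators;
                              # whitespace and indentation are ignored entirely
                              return len(re.findall(r"[()\[\]:]|[A-Za-z_][A-Za-z_0-9]*|\d+|[=*\-+/<>!]+", src))

                          print(count_tokens(lisp), count_tokens(py))  # counted this way: 28 24
                          ```

                          Counted this way the two snippets land within a handful of tokens of each other, which supports the “no clear win for either” reading.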

                          1. 1

                            I don’t think that calling Lisp parens “tokens” is very accurate, because (1) they’re pervasive (more so than newlines by far), which leads to (2) tooling and experience causing them to perceptually vanish.

                            In other languages, you notice every parenthesis. In Lisp, you notice few (but you do notice some). I perceive that code fragment as:

                            defun factorial (n)
                              if (= n 0) 1
                                * n (factorial (- n 1
                            

                            I agree that there is no clear winner, though, even if you fix the Lisp formatting:

                            defun factorial (n)
                              if = n 0
                                1
                                * n (factorial (- n 1
                            

                            (note the unmatched open parens) I think that much more interesting examples involve larger and more complex code, which allows you to (reasonably) compare Lisp macros against the abstraction features of other languages.

                        2. 1

                          Clojure:

                          (defn factorial [n]
                            (if (zero? n)
                              1
                              (* n (factorial (dec n)))))
                          
                          1. 1

                            Tbh, when you use it enough all you see is this anyway:

                            defun factorial (n):
                               if (= n 0):
                                  1;
                                   * n (factorial (- n 1));
                            

                            Which amounts to about the same as python.

                          1. 1

                            So, they drop their only really interesting/innovative product, cheap bare-metal ARM servers?

                            1. 1

                              Interesting isn’t the same as profitable.

                              1. 1

                                Maybe you are right. It’s a bit strange though, as the prices at the lower end are all the same for virtualized instances, so bare metal is a good unique selling point, imho.

                                1. 1

                                  How many situations are there where you specifically want bare metal that someone else manages? While interesting, I genuinely struggle to think of a use case other than perhaps sr.ht, which offers virtualized CI runners (hardly a huge market).

                                  1. 1

                                    Security. CPU side channels have been a thing since Spectre, often with no (proper) mitigation available.

                            1. 7

                              I kind of get the impression the authors don’t know how asymmetric encryption works?

                              1. 1

                                Yeah, if you’re going to talk to a server every time anyway, you may just as well generate throwaway ssh keys, no need to involve the whole X509-circus.

                                Use a well-placed AuthorizedKeysCommand and AuthorizedKeysFile and you don’t have the problems laid out in the article.
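
                                For example, a sketch of that setup in sshd_config (the helper script name and path are hypothetical; it would print one allowed public key per line for the requesting user):

                                ```
                                # /etc/ssh/sshd_config (illustrative fragment)
                                # The command prints the allowed public keys for user %u,
                                # e.g. fetched from your SSO backend; the script itself
                                # is a placeholder here.
                                AuthorizedKeysCommand /usr/local/bin/sso-authorized-keys %u
                                AuthorizedKeysCommandUser nobody
                                # Optionally ignore on-disk authorized_keys files entirely
                                AuthorizedKeysFile none
                                ```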

                                1. 2

                                  I might be misunderstanding the article, but isn’t it their purpose to effectively manage authorized_keys automatically so you can use your SSO credentials instead of doing ssh directly? Kind of like what BLESS does with AWS IAM roles.

                              1. 2

                                The claim “without a server” left me baffled, I must say. I was intrigued as to how that was going to work, and found that it meant “managed proxy/SaaS”. Personally, I would rather consider a service if I felt it was honest about its claims.

                                I agree about the SSL problem: I suppose having letsencrypt on the API is the best way to deal with this, but if you manage to provide an even easier solution (like an easy module I can plug into my app that then allows some sort of zero-config, secure communication with your proxy and my API), maybe that would help some devs who would otherwise run the service unsecured.

                                1. 1

                                  Sorry to disappoint! :D You’re correct: server here refers to the developer having to deploy and manage something on the server side, or rather the absence thereof.

                                  But at no point did I try to be dishonest. I don’t actually see a benefit for Warpist in making this “revolutionary” claim, as it’s not trying to compete on the neatness/bleeding-edginess of the proxying tech so much as on simplifying the experience of deploying and managing it.

                                  Re: SSL, your suggestion is actually very close to a note I have in the roadmap, however I de-prioed this approach as I thought it could be more valuable to solve the problem for people who don’t control the API, and most likely don’t want to run a server just to reverse proxy it. The assumption (to be validated) being that if you have your own API, the benefit you would get from a managed reverse proxy will be relatively small, as you could deploy say nginx (from scratch or with a template), or some docker container that will handle reverse proxying for you, for a relatively minimal cost.

                                  In your opinion, why would you choose Warpist over nginx or other proven solutions, when you have control over the server of the API to proxy?

                                  1. 1

                                    No worries, I’m not trying to say you’re dishonest, just that finding out what’s behind the claim baffled me in a negative way. But we’re living in a time where “serverless” doesn’t mean “no-one actually runs a server”, so yeah.

                                    Ah, I see your point - and therefore the CORS claim that you made prominent. I think it might solve an issue, although it feels more like a workaround for setting up CORS correctly. Shouldn’t services that are actually supposed to be used from other sites, where CORS would be an issue, have those rules set up?

                                    As nginx and reverse proxying are actually part of my everyday work, I personally would just install nginx for sure - it’s minimal effort, and I get full control and eliminate an additional party that would have access to the traffic. Developers who focus more on the backend/frontend side of things instead of the underlying infrastructure might think differently, though.

                                    1. 1

                                      No worries, I’m not trying to say you’re dishonest, just that finding out what’s behind the claim baffled me in a negative way. But we’re living in a time where “serverless” doesn’t mean “no-one actually runs a server”, so yeah.

                                      All good, but that’s indeed a legit source of confusion, and I’m trying to think of a good way to rephrase it.

                                      it feels more like a workaround for setting up CORS correctly

                                      That is partially true. The current incarnation of Warpist would be a transitional solution until all APIs have adopted CORS or a new standard appears. However, beyond the implementation itself, it seems not everyone wants to implement CORS.

                                      Many API providers have a legitimate concern that enabling CORS would make it harder to manage security, for example API secrets could be stolen from client-side only apps, and to my knowledge Google’s the only large provider who implements a proper way to deal with this (origin validation + only a client ID, so no secret to leak).

                                      To address this, Warpist gives you a way to set up an allowed-origin whitelist, as well as a way to manage API secrets so that they’re never exposed to the browser. So it’s a little more convenient for this use case than a vanilla reverse proxy setup (nginx-based, for example).
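
                                      For comparison, a rough sketch of that behaviour in vanilla nginx (the origin, upstream, and secret are all placeholders, and a real setup would also need to answer OPTIONS preflight requests):

                                      ```nginx
                                      # Illustrative only: allow one whitelisted origin and
                                      # inject a server-side secret the browser never sees
                                      location /api/ {
                                          if ($http_origin != "https://app.example.com") { return 403; }
                                          add_header Access-Control-Allow-Origin $http_origin always;
                                          proxy_set_header Authorization "Bearer PLACEHOLDER_SECRET";
                                          proxy_pass https://third-party-api.example.com/;
                                      }
                                      ```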

                                      it’s minimal effort, and I get full control and eliminate an additional party that would have access to the traffic.

                                      Exactly my reasoning for targeting APIs the developer does not control. Those would mainly be public APIs, where the effort to deploy a CORS proxy might mess with the original project plan, but secondarily it could also be software you deployed yourself that doesn’t have CORS support built in (Wordpress comes to mind).

                                  1. 1

                                    They saw it coming, and yet the readability of the code still obscured the bug from all.

                                  1. 4

                                    I’m Chris and I also have a technical blog where I share my experiences, open source projects, and sometimes opinions about “DevOps”, automation, open source, Rust, Golang, security, and the like.

                                    Link: https://chr4.org

                                    1. 1

                                      If I understand the post correctly, this seems like too big and obvious a failure. I kind of can’t believe Debian and Ubuntu never thought about that.

                                      Did someone try injecting a manipulated package? I’d assume that the signed manifest contains not only URLs and package versions but at least some kind of checksum?

                                      1. 2

                                        Looks like that’s exactly what apt is doing; it verifies the checksum served in the signed manifest: https://wiki.debian.org/SecureApt#How_to_manually_check_for_package.27s_integrity
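
                                        The shape of that chain can be sketched in a few lines of Python (a toy model of the mechanism only - the real Release/Packages file formats differ):

                                        ```python
                                        import hashlib

                                        # A "package" and an index listing its checksum, as apt's Packages file does
                                        package = b"fake .deb contents"
                                        packages_index = hashlib.sha256(package).hexdigest() + "  hello.deb\n"
                                        # In a real repo, the Release file (listing the index's checksum) is
                                        # GPG-signed, so one signature transitively covers every package
                                        release = hashlib.sha256(packages_index.encode()).hexdigest() + "  Packages\n"

                                        # Client side: check the index against the signed Release,
                                        # then the downloaded package against the index
                                        assert release.split()[0] == hashlib.sha256(packages_index.encode()).hexdigest()
                                        assert packages_index.split()[0] == hashlib.sha256(package).hexdigest()
                                        print("package verified")
                                        ```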

                                        The document mentions it uses MD5 though, so maybe there’s a vector for collisions here, but it’s not as trivial as the post indicates, I’d say.

                                        Maybe there’s marketing behind it? Packagecloud offers repositories with TLS transport…

                                        1. 2

                                          Modern apt repos contain SHA256 sums of all the metadata files, signed by the Debian gpg key & each individual package metadata contains that package’s SHA256 sum.

                                          That said, they’re not wrong that serving apt repos over anything but https is inexcusable in the modern world.

                                          1. 2

                                            You must live on a planet where there are no users who live behind bad firewalls and MITM proxies that break HTTPS, because that’s why FreeBSD still doesn’t use HTTPS for … anything? I guess we have it for the website and SVN, but not for packages or portsnap.

                                            1. 1

                                              There’s nothing wrong with being able to use http if you have to: https should be the default however.

                                              1. 1

                                                https is very inconvenient to do on community-run mirrors

                                                See also: clamav antivirus

                                                1. 1

                                                  In the modern world, with letsencrypt, it’s nowhere near as bad as it used to be though.

                                                  1. 1

                                                    I don’t think I would trust third parties to be able to issue certificates under my domain.

                                                    It is even more complicated for clamav where servers may be responding to many different domain names based on which pools they are in. You would need multiple wildcards.

                                            2. 1

                                              each individual package metadata contains that package’s SHA256 sum

                                              Is the shasum of every individual package not included in the (verified) manifest? That would be a major issue then, as it could be forged alongside the package.

                                              But if it is, then forging packages would require SHA256 collisions, so it should be safe and package integrity verified.

                                              Obviously, serving via TLS won’t hurt security, but it would (given that letsencrypt is fairly young) depend on a centralized CA structure and add additional costs - though it would arguably add a little more privacy about which packages you install.

                                              1. 3

                                                A few days ago I was searching about this same topic, after seeing the apt update log, and found this site with some ideas about it, including the point about privacy: https://whydoesaptnotusehttps.com.
                                                I think the point about intermediate cache proxies and bandwidth use for the distribution servers probably adds more than the cost of a TLS certificate (many distributions offer alternative torrent files for the live CD to offload this cost).

                                                Also, the packagecloud article implies that serving over TLS removes the risk of MitM, but it just makes it harder - and without certificate pinning, only a little harder. I’d mostly chalk this article up to marketing; there are calls-to-action sprinkled through the text.

                                                1. 1

                                                  https://whydoesaptnotusehttps.com

                                                  Good resource, sums it up pretty well!

                                                  Edit: It doesn’t answer the question about whether SHA256 sums for each individual package are included in the manifest. But if they weren’t, all of this would make no sense, so I assume and hope so.

                                                  1. 2

                                                    Hi. I’m the author of the post – I strongly encourage everyone to use TLS.

                                                    SHA256 sums of the packages are included in the metadata, but this does nothing to prevent downgrade attacks, replay attacks, or freeze attacks.

                                                    I’ve submitted a pull request to the source of “whydoesaptnotusehttps” to correct the content of the website, as it implies several incorrect things about the APT security model.

                                                    Please re-read my article and the linked academic paper. The solution to the bugs presented is to simply use TLS, always. There is no excuse not to.

                                                    1. 2

                                                      TLS is a good idea, but it’s not sufficient (I work on TUF). TUF is the consequence of this research, you can find other papers about repository security (as well as current integrations of TUF) on the website.

                                                      1. 1

                                                        Yep, TUF is great - I’ve read quite a bit about it. Is there an APT TUF transport? If not, it seems like the best APT users can do is use TLS and hope someone will write apt-transport-tuf for now :)

                                                      2. 1

                                                        Thanks for the post and the research!

                                                        It’s not that easy to switch to https: a lot of repositories (including the official Ubuntu ones) do not support https. Furthermore, most cloud providers provide their own mirrors and caches. There’s no way to verify whether the whole “apt chain” of package uploads, mirrors, and caches is using https. Even if you enforce HTTPS, the described vectors (if I understood correctly) remain an issue in the mirror/cache scenario.

                                                        You may be right that current mitigations for the said vectors are not sufficient, but I feel like a security model in package management that relies on TLS is simply not sufficient either, and the mitigation for the attack vectors you’ve found needs to be something else - e.g. signing and verifying the packages upon installation.

                                                  2. 2

                                                    Is the shasum of every individual package not included in the (verified) manifest? That would be a major issue then, as it could be forged alongside the package.

                                                    Yes, there’s a chain of trust: the checksum of each package is contained within the repo manifest file, which is ultimately signed by the Debian archive key. It’s a bit like a git repository - a chain of SHA256 sums of which only the final one needs to be signed to trust the whole.

                                                    There are issues with http downloads - e.g. they reveal which packages you download, so by inspecting the data flow an attacker could find out which packages you’ve installed and know which attacks would be likely to succeed - but package replacement on the wire isn’t one of them.

                                            1. 7

                                              Neat idea! One question though: How do you handle renewals? In my experience, postgresql (9.x at least) can only re-read the certificate upon a server restart, not upon mere reloads. Therefore, all connections are interrupted when the certificate is changed. With letsencrypt, this will happen more frequently - did you find a way around this?

                                              1. 5

                                                If you put nginx in front as a reverse TCP proxy, Postgres won’t need to know about TLS at all and nginx already has fancy reload capability.

                                                1. 3

                                                  I was thinking about that too - and it made me also wonder whether using OpenResty along with a judicious combination of stream-lua-nginx-module and lua-resty-letsencrypt might let you do the whole thing in nginx, including automatic AOT cert updates as well as fancy reloads, without postgres needing to know anything about it at all (even if some tweaking of resty-letsencrypt might be needed).

                                                  1. 1

                                                    That’s funny - I was just talking to someone who was having problems with “reload” not picking up certificates in nginx. Can you confirm nginx doesn’t require a restart?

                                                    1. 1

                                                      Hmm, I wonder if they’re not sending the SIGHUP to the right process. It does work when configured correctly.

                                                  2. 2

                                                    I’ve run into this issue as well with PostgreSQL deployments using an internal CA that did short lived certs.

                                                    Does anyone know if the upstream PostgreSQL devs are aware of the issue?

                                                    1. 20

                                                      This is fixed in PG 10. “This allows SSL to be reconfigured without a server restart, by using pg_ctl reload, SELECT pg_reload_conf(), or sending a SIGHUP signal. However, reloading the SSL configuration does not work if the server’s SSL key requires a passphrase, as there is no way to re-prompt for the passphrase. The original configuration will apply for the life of the postmaster in that case.” from https://www.postgresql.org/docs/current/static/release-10.html

                                                  1. 2

                                                    If you are a fellow developer, you might find developermail.io a good fit. It’s configurable via Git, and it’s supposed to be something like a “Heroku for email”.

                                                    Disclaimer: I’m one of the creators.

                                                    1. 2

                                                      Seems like an interesting project.

                                                      I would probably reach for wget (which can read a file of URLs to fetch) or a combination of curl and xargs (or GNU parallel) before trying a bespoke tool like this, though. That said, the X-Cache-Status statistics are neat, if you need that.

                                                      1. 2

                                                        That’s what I thought. When looping through a file with a few hundred thousand entries with bash/curl, I had a throughput of ~16 requests/second, while cache_warmer easily was able to do >500 req/s.

                                                        Thanks for the hint, I should probably add that to the post.

                                                        1. 1

                                                          Indeed, looping via bash would be slow due to not reusing the connection. With a carefully crafted xargs, you should be able to get multiple URLs on the same line (e.g. curl url1 url2 url3...); curl /should/ reuse a connection in that case. If curl had a ‘read URLs from a file’ parameter, it would be quite a bit easier to script, but alas it currently does not.
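
                                                          The batching itself is easy to demonstrate; in this sketch echo stands in for curl, -n sets how many URLs each (hypothetical) curl invocation would get to reuse its connection across, and -P the parallelism:

                                                          ```shell
                                                          printf 'http://example.com/%s\n' a b c d e > urls.txt
                                                          # Five URLs, two per invocation: three "curl" runs, up to four in parallel
                                                          xargs -n 2 -P 4 echo curl -sO < urls.txt
                                                          ```

                                                          Swapping echo out for the real curl gives you batched, parallel fetching from a URL file without a bespoke tool.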

                                                      1. 5

                                                        I’ve found some more information (not from original sources though, so treat with care):

                                                        • HN thread of someone stepping down from the TSC (with a lot of occurrences of “SJW” in the comments)
                                                        • Someone in a German forum did some research and claims the actual reasons are “thoughtless use of pronouns” and “assumptions of gender”.

                                                        Personally, I’m not sure whether it’s a good idea to give this too much attention. It feels like either side can only lose from this…

                                                        1. 6

                                                          The fact this is plausibly useful is a sad comment on the state of software engineering.

                                                          1. 2

                                                            How would you describe the alternative desired state? That insecure protocols don’t exist? That engineers would have deeper knowledge of cryptography?

                                                            1. 8

                                                              Distributions of major server software would come with good configurations out of the box, alleviating every developer from being responsible for configuring things.

                                                              https://caddyserver.com/ is a great example of this; you configure it to do what your app needs, all the TLS defaults are well curated.

                                                              1. 4

                                                                While I agree that a “reasonably secure default” should be standard, mostly you have to find a trade-off between security and compatibility. If you need support for IE8, there’s no way around SHA-1. If you want to support Windows XP or Android 2, there’s no hope at all. If you make it more secure (as of today), you fence out most Androids (except 4.x), Javas, IEs, mobile phones, and non-up-to-date browsers. Unfortunately, there is no one-size-fits-all.
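                                                                As a sketch of what those trade-offs look like in nginx terms (cipher lists abbreviated and illustrative only; check a current reference such as the Mozilla server-side TLS guidelines before copying anything):

                                                                ```nginx
                                                                # "Modern": TLS 1.2 with ECDHE/AEAD suites only.
                                                                # Locks out Windows XP, IE8 and Android below 4.x.
                                                                ssl_protocols TLSv1.2;
                                                                ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;

                                                                # "Compat": additionally allows TLS 1.0/1.1 and CBC suites,
                                                                # reaching older clients at the cost of weaker crypto.
                                                                # ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
                                                                # ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:AES128-SHA;
                                                                ```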

                                                                1. 3

                                                                  On the other hand, compatibility with older software is very easy to figure out (people see an error message), whereas insecure configuration appears to work perfectly fine. I also believe developers are more likely to know that they need to support some obsolete software (modern web development doesn’t “just work” on IE8 or Android 2) than about the newest TLS configuration options.

                                                                  1. 2

                                                                    I think if you want that, we ought to have APIs that express things in terms of goals, instead of implementation details: ssl_ciphers +modern +ie8 maybe. Then it’s clear what needs to be changed to drop a platform, instead of it being a guessing game.

                                                                    1. 2

                                                                      This would be great. This is exactly what I’m trying to provide the user with the snippets in nginx.vim: choose between the ciphers-paranoid, ciphers-modern, ciphers-compat, and ciphers-low cipher suites.

                                                            1. 4

                                                              First thought: didn’t I see this in the new nginx vim-syntax posted here a while ago?

                                                              Ah.. it’s by the same author.

                                                              Nice work, ch4r. It helped me get rid of a couple of issues I had in my 2014 nginx SSL config. Hat tip.

                                                              1. 2

                                                                Good catch :)

                                                                sslsecure.vim is actually trying to get some of the security features from nginx.vim to work with other configuration files and source code as well.

                                                                Good to hear it actually helped you re-secure your config!

                                                              1. 2

                                                                FYI: I’ve added support for embedded syntax highlighting for ERB/Jinja templates and Lua.

                                                                1. 3

                                                                  I’m currently working on https://developermail.io - an email SaaS that is configured with git - it might be referred to as the “Heroku for email”.

                                                                  I’d appreciate it if you’d consider adding it to your list!

                                                                  1. 3

                                                                    Oh cool! I love stuff like this - will bear in mind in case the need arises.

                                                                    Other than exposing the sieve rules over git, how do you compare to Fastmail, which offers more or less the same feature set at a similar price point?

                                                                    Your security page claims you use SHA-512 for password storage.

                                                                    My understanding of crypto is… not deep, but that looks like poor marketing, given that http://security.stackexchange.com/questions/52041/is-using-sha-512-for-storing-passwords-tolerable is one of the first things I came across while trying to figure out if it was OK.

                                                                    1. 2

                                                                      The unique selling point would be the git configuration, including all the advantages that git brings: you know who changed what and when, you can have review processes using branches/pull-requests, roll back changes, and do bulk changes more easily. Plus you can comment your configuration. Plus: it feels more natural and leet for developers :)

                                                                      Additionally, as you pointed out, the sieve control is way more powerful.

                                                                      1. 2

                                                                        Regarding your observation about SHA-512:

                                                                        Obviously, one would want to have scrypt or bcrypt, maybe even Argon2, instead of SHA-512. In my opinion, however, there’s another factor when choosing the right algorithm, and that’s the implementation.

                                                                        Let’s take dovecot as an example. According to the documentation, SHA-512 is the strongest scheme implemented on all platforms (it mentions bcrypt, but with the annotation that it’s not available on most Linux distributions). Furthermore, an Argon2/scrypt plugin is mentioned, but it’s third party. Of course I’ve considered using one of the mentioned algorithms, but I don’t feel competent enough to review a 3rd-party plugin on my own regarding its implementation, especially since dovecot itself was recently audited and received an extremely positive rating. A bad implementation of a secure algorithm may introduce other attack vectors or security issues. In case I missed something, I’d appreciate feedback. And of course I’ll follow the ongoing development and new security features closely to improve security wherever possible.

                                                                        I’m also wondering how other email providers handle the issue. Most of them are pretty silent on what algorithms they’re using, from what I’ve observed. Has anyone some insights to share?

                                                                        TL;DR: I’d love to use scrypt, but I’m not sure whether to trust the unofficial plugins implementing it.

                                                                        Edit: developermail.io uses rounds=100000. While one would still prefer scrypt, this should increase the computational requirements a lot. I’m going to add this to the website.
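                                                                        For reference, generating and declaring such a hash on the dovecot side looks roughly like this (flags per the doveadm pw man page; treat the exact configuration as an assumption to verify against your dovecot version):

                                                                        ```shell
                                                                        # Generate a SHA512-CRYPT hash with an increased round count
                                                                        doveadm pw -s SHA512-CRYPT -r 100000

                                                                        # In the passdb configuration, the scheme can be set as the default:
                                                                        # default_pass_scheme = SHA512-CRYPT
                                                                        ```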

                                                                    1. 8

                                                                      For anyone considering this, there’s also a more elaborate fuzzy searcher called fzf that comes with a shell history search shortcut.

                                                                      1. 1

                                                                        I’m using fzf and I was wondering whether there’s any advantage to using hstr (performance, packaging, etc.), as fzf offers the same feature (and a lot more).
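                                                                        For anyone wanting to try the fzf variant mentioned above, the history shortcut comes from fzf’s key-bindings script (the path below is an assumption; install locations differ per distro and install method):

                                                                        ```shell
                                                                        # In ~/.bashrc: enable fzf's Ctrl-R fuzzy history search.
                                                                        # Path varies: check where your distro's fzf package puts it.
                                                                        source /usr/share/fzf/key-bindings.bash
                                                                        ```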

                                                                      1. 5

                                                                        The certificate for https://download.servo.org/ seems to have expired. Unfortunately, I can’t find checksums for the downloaded files elsewhere to verify the download :(

                                                                        Super excited to try out Servo, though!

                                                                        1. 1

                                                                          There’s already an issue on GitHub.

                                                                        1. 1
                                                                          • Safety and security. I’m looking forward to Firefox with more and more Rust components.
                                                                          • C compatibility. It’s possible to exchange more and more of your critical system components with Rust code. You can even write kernel modules in Rust! This would eradicate so many attack vectors.
                                                                          • Plus: the community is incredibly friendly and helpful.
                                                                          • And finally: it’s a Mozilla project. Mozilla is one of the few defenders and advocates of internet users left out there.
                                                                          1. 8

                                                                            Okay, that’s great and all.

                                                                            What have you, chr4, personally used Rust for?

                                                                            You didn’t answer any of the above questions. Concrete examples please.

                                                                            1. 6

                                                                              Well, I answered the question about the Rust features I appreciate. But you’re right, I left out the examples:

                                                                              I’m currently rewriting some of my C projects in Rust (with more or less success), as well as tinkering with rewriting a Go JSON API. The latter is probably not very useful, as I think Go is the better fit for the job, but it helps me improve my Rust skills. Furthermore, I’m currently working on An Interpreter In Rust.

                                                                              None of these projects are commercial so far.