1. 12

    It’s really no more clear. I much prefer putting the constants to the right, as a general rule.

    I expected it to be about using a single comparator in sort algorithms, etc.

    1. 14

      Yeah, I think this is definitely in the realm of one person’s aesthetic sense. I think that, e.g.,

      if (x < MIN_VALUE || x > MAX_VALUE) {
              return (EINVAL);
      }
      

      … makes a lot of sense, especially when you consider a verbal version: less than the minimum or greater than the maximum allowed value.

      1.  

        It does. Except that you should get rid of those parens around EINVAL. There is no need for those. :-)

        1.  

          While that’s technically true in this example, I’ve read and written so much SunOS cstyle code that it looks weird without the parens at this stage.

    1. 3

      I don’t completely disagree with the main thought, but there are a lot of completely incorrect assumptions in this article. For example, that an OO design means you take a performance hit. First of all, performance is relative. But I have also never really been in a situation where an object reference or an indirect call through several objects was actually the bottleneck of a program. OO code can be extremely performant.

      Also, a lot of the pain points the author talks about are not per se about OOP but about bad style, or bad implementations. I don’t know if OOP encourages those, but I have seen similar bad patterns and style in pretty much any kind of program, whether functional, procedural or OO.

      1. 2

        Apologies to those who may not want to see product announcements - I’m aware this seems to split people, but this seemed relevant and ties in with some of the ‘how do I keep using Java?’ posts we’ve had over the past few months. Tagged ‘release’, but technically it is still in preview.

        Oracle will no doubt gain some new licensees but overall seem to have done an impressive job of driving people away from their product here. I honestly can’t work out whether that will do them any good in the long term. However, I’m always happy to see open source distributions getting even more adoption.

        1. 3

          No need to apologize. This is pretty big news imo.

        1. 3

          For all I know, they sell those cups in the gift shop.

          They do .. I have one too :-)

          1. 20

            The “lacks” of Go in the article are highly opinionated and given without any context of what you’re intending to solve with the language.

            Garbage collection is something bad? Can’t disagree harder.

            The article ends with a bunch of extreme opinions like “Rust will be better than Go in every possible task”.

            There’re use cases for Go, use cases for Rust, for both, and for none of them. Just pick the right tool for your job and stop bragging about yours.

            You love Rust, we get it.

            1. 2

              Yes, I would argue GC is something that’s inherently bad in this context. Actually, I’d go as far as to say that a GC is bad for any statically typed language. And Go is, essentially, statically typed.

              It’s inherently bad since GC dictates the lack of destruction mechanisms that can be reliably used when no references to the resource are left. In other words, you can’t have basic features like the C++ file streams that “close themselves” at the end of the scope, when they are destroyed.

              That’s why Go has the “defer” statement; it’s there because of the GC. Otherwise, destructors could be used to run cleanup tasks at the end of a scope.
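
              For illustration, here is a minimal Rust sketch of that destructor-based alternative (the type and names are made up for the example):

                  struct TempFile {
                      path: String,
                  }

                  impl Drop for TempFile {
                      // Runs automatically when the value goes out of scope,
                      // playing the role `defer` plays in Go.
                      fn drop(&mut self) {
                          println!("cleaning up {}", self.path);
                      }
                  }

                  fn main() {
                      let _f = TempFile { path: "/tmp/scratch".to_string() };
                      println!("doing work");
                  } // `drop` runs here, deterministically, no `defer` needed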

              So that’s what makes a GC inherently bad.

              A GC, however, is also bad because it “implies” the language doesn’t have good resource management mechanisms.

              There was an article posted here, about how Rust essentially has a “static GC”, since manual deallocation is almost never needed. Same goes with well written C++, it behaves just like a garbage collected language, no manual deallocation required, all of it is figured out at compile time based on your code.

              So, essentially, a GC does what languages like C++ and Rust do at compile time… but it does it at runtime. Isn’t this inherently bad? Doing at runtime something that can be done at compile time? It’s bad from a performance perspective and also bad from a code validation perspective. And it has essentially no upsides, as far as I’ve been able to tell.

              As far as I can tell the main “support” for GC is that they’ve always been used. But that doesn’t automatically make them good. GCs seem to be closer to a hack for a language to be easier to implement rather than a feature for a user of the language.

              Feel free to convince me otherwise.

              1. 11

                It’s inherently bad since GC dictates the lack of destruction mechanisms that can be reliably used when no references to the resource are left.

                Why do you think this would be the case? A language with GC can also have linear or affine types for enforcing that resources are always freed and not used after they’re freed. Most languages don’t go this route because they prefer to spend their complexity budgets elsewhere and defer/try-with-resources work well in practice, but it’s certainly possible. See ATS for an example. You can also use rank-N types to a similar effect, although you are limited to a stack discipline which is not the case with linear/affine types.
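
                To make the resource-type idea concrete: Rust’s move semantics already behave like affine types (each value is used at most once), so a close that consumes the handle turns use-after-close into a compile error. A hypothetical sketch:

                    struct Connection {
                        addr: String,
                    }

                    impl Connection {
                        fn open(addr: &str) -> Connection {
                            Connection { addr: addr.to_string() }
                        }

                        fn send(&mut self, msg: &str) {
                            println!("{} <- {}", self.addr, msg);
                        }

                        // Consumes the handle (affine style): after `close`,
                        // the connection cannot be touched again.
                        fn close(self) -> Result<(), String> {
                            println!("closing {}", self.addr);
                            Ok(())
                        }
                    }

                    fn main() {
                        let mut c = Connection::open("10.0.0.1:5432");
                        c.send("hello");
                        c.close().unwrap();
                        // c.send("bye"); // compile error: use of moved value `c`
                    }

                (A truly linear type would additionally force close to be called; Rust alone doesn’t give you that.)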

                So, essentially, a GC does what languages like C++ and Rust do at compile time… but it does it at runtime. Isn’t this inherently bad?

                No, not necessarily. Garbage collectors can move and compact data for better cache locality and elimination of fragmentation concerns. They also allow for much faster allocation than in a language where you’re calling the equivalent of malloc under the hood for anything that doesn’t follow a clean stack discipline. Reclamation of short-lived data is also essentially free with a generational collector. There are also garbage collectors with hard bounds on pause times which is not the case in C++ where a chain of frees can take an arbitrary amount of time.
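
                For intuition on the allocation-speed point: the nursery of a generational collector is essentially a bump allocator, so an allocation is little more than a pointer increment. A toy sketch (illustrative only, not a real collector):

                    // Allocation here is an align, an add, a compare, and a store;
                    // a general-purpose malloc does far more work per call.
                    struct Bump {
                        buf: Vec<u8>,
                        next: usize,
                    }

                    impl Bump {
                        fn new(capacity: usize) -> Bump {
                            Bump { buf: vec![0; capacity], next: 0 }
                        }

                        // `align` must be a power of two.
                        fn alloc(&mut self, size: usize, align: usize) -> Option<usize> {
                            let start = (self.next + align - 1) & !(align - 1);
                            let end = start.checked_add(size)?;
                            if end > self.buf.len() {
                                return None; // a real GC would trigger a minor collection here
                            }
                            self.next = end;
                            Some(start) // offset of the new object in the nursery
                        }
                    }

                    fn main() {
                        let mut nursery = Bump::new(1024);
                        let a = nursery.alloc(16, 8).unwrap();
                        let b = nursery.alloc(32, 8).unwrap();
                        println!("a at {}, b at {}", a, b);
                    }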

                Beyond all of this, garbage collection allows for a language that is both simpler and more expressive. Certain idioms that can be awkward to express in Rust are quite easy in a language with garbage collection precisely because you do not need to explain to the compiler how memory will be managed. Pervasive use of persistent data structures also becomes a viable option when you have a GC that allows for effortless and efficient sharing.

                In short, garbage collection is more flexible than Rust-style memory management, can have great performance (especially for functional languages that perform a lot of small allocations), and does not preclude use of linear or affine types for managing resources. GC is hardly a hack, and its popularity is the result of a number of advantages over the alternatives for common use cases.

                1. 1

                  What idioms are unavailable in Rust or modern C++ because of their lack of GC, but available in a statically typed GC language?

                  I perfectly agree that GC allows for more flexibility and more concise code as far as dynamic languages go, but that’s neither here nor there.

                  As for the theoretical performance benefits and real-time capabilities of a GCed language… I think the word theoretical is where I’d focus my counterargument, because they don’t actually materialize. The GC overhead is too big, in practice, for those benefits to outshine languages without runtime memory management logic.

                  1. 9

                    I’m not sure about C++, but there are functions you can write in OCaml and Haskell (both statically typed) that cannot be written in Rust because they abstract over what is captured by the closure, and Rust makes these things explicit.

                    The idea that all memory should be explicitly tracked and accounted for in the semantics of the language is perhaps important for a systems language, but to say that it should be true for all statically typed languages is preposterous. Languages should have the semantics that make sense for the language. Saying a priori that all languages must account for some particular feature just seems like a failure of the imagination. If it makes sense for the semantics to include explicit control over memory, then include it. If it makes sense for this not to be part of the semantics (and for a GC to be used so that the implementation of the language does not consume infinite memory), this is also a perfectly sensible decision.

                    1. 2

                      there are functions you can write in OCaml and Haskell (both statically typed) that cannot be written in Rust because they abstract over what is captured by the closure

                      Could you give me an example of this?

                      1. 8

                        As far as I understand and have been told by people who understand Rust quite a bit better than me, it’s not possible to re-implement this code in Rust (if it is, I would be curious to see the implementation!)

                        https://gist.github.com/dbp/0c92ca0b4a235cae2f7e26abc14e29fe

                        Note that the polymorphic variables (a, b, c) get instantiated with different closures in different ways, depending on what the format string is, so giving a type to them is problematic because Rust is explicit about typing closures (they have to talk about lifetimes, etc).

                        1. 2

                          My God, that is some of the most opaque code I’ve ever seen. If it’s true Rust can’t express the same thing, then maybe it’s for the best.

                          1. 2

                            If you want to understand it (not sure if you do!), the approach is described in this paper: http://www.brics.dk/RS/98/12/BRICS-RS-98-12.pdf

                            And probably the reason why it seems so complex is because CPS (continuation-passing style) is, in general, quite hard to wrap your head around.

                            I do think that the restrictions present in this example will show up in simpler examples (anywhere you are trying to quantify over different functions with sufficiently different memory usage, but the same type in a GC’d functional language); this is just a particular thing that I have on hand because I thought it would work in Rust but doesn’t seem to.

                            1. 2

                              FWIW, I spent ~10 minutes trying to convert your example to Rust. I ultimately failed, but I’m not sure if it’s an actual language limitation or not. In particular, you can write closure types in Rust with 'static bounds which will ensure that the closure’s environment never borrows anything that has a lifetime shorter than the lifetime of the program. For example, Box<FnOnce(String) + 'static> is one such type.

                              So what I mean to say is that I failed, but I’m not sure if it’s because I couldn’t wrap my head around your code in a few minutes or if there is some limitation of Rust that prevents it. I don’t think I buy your explanation, because you should technically be able to work around that by simply forbidding borrows in your closure’s environment. The actual thing I got really hung up on was the automatic currying that Haskell has. In theory, that shouldn’t be a blocker because you can just introduce new closures, but I couldn’t make everything line up.

                              N.B. I attempted to get any Rust program working. There is probably the separate question of whether it’s a roughly equivalent program in terms of performance characteristics. It’s been a long time since I wrote Haskell in anger, so it’s hard for me to predict what kind of copying and/or heap allocations are present in the Haskell program. The Rust program I started to write did require heap allocating some of the closures.

                2. 5

                  It’s inherently bad since GC dictates the lack of destruction mechanisms that can be reliably used when no references to the resource are left. In other words, you can’t have basic features like the C++ file streams that “close themselves” at the end of the scope, when they are destroyed.

                  Deterministic freeing of resources is not mutually exclusive with all forms of garbage collection. In fact, this is shown by Rust, where reference counting (Rc) does not exclude Drop. Of course, Drop may never be called when you create cycles.

                  (Unless you do not count reference counting as a form of garbage collection.)
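
                  As a small illustration of both halves of that point (illustrative Rust, hypothetical types):

                      use std::cell::RefCell;
                      use std::rc::Rc;

                      struct Node {
                          name: &'static str,
                          next: RefCell<Option<Rc<Node>>>,
                      }

                      impl Drop for Node {
                          fn drop(&mut self) {
                              println!("dropping {}", self.name);
                          }
                      }

                      fn main() {
                          // Acyclic: counts reach zero, both destructors run.
                          {
                              let b = Rc::new(Node { name: "b", next: RefCell::new(None) });
                              let _a = Rc::new(Node { name: "a", next: RefCell::new(Some(b)) });
                          } // prints "dropping a" then "dropping b"

                          // Cyclic: counts never reach zero, Drop never runs, the nodes leak.
                          let x = Rc::new(Node { name: "x", next: RefCell::new(None) });
                          let y = Rc::new(Node { name: "y", next: RefCell::new(Some(x.clone())) });
                          *x.next.borrow_mut() = Some(y.clone());
                          // neither "dropping x" nor "dropping y" is ever printed
                      }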

                  1. 2

                    Well… I don’t count shared pointers (or RC pointers or whatever you wish to call them) as garbage collected.

                    If, in your vocabulary, that is garbage collection then I guess my argument would be against the “JVM style” GC where the moment of destruction can’t be determined at compile time.

                    1. 8

                      If, in your vocabulary, that is garbage collection

                      Reference counting is generally agreed to be a form of garbage collection.

                      I guess my argument would be against the “JVM style” GC where the moment of destruction can’t be determined at compile time.

                      In Rc or shared_ptr, the moment of the object’s destruction can also not be determined at compile time. Only the destruction of the Rc itself (put differently, the reference count decrement) can be determined at compile time.

                      I think your argument is against tracing garbage collectors. I agree that the lack of deterministic destruction is a large shortcoming of languages with tracing GCs. It effectively brings back a parallel to manual memory management through the backdoor — it requires manual resource management. You don’t have to convince me :). I once wrote a binding to Tensorflow for Go. Since Tensorflow wants memory aligned on 32-byte boundaries on amd64 and Go allocates (IIRC) on 16-byte boundaries, you have to allocate memory in C-land. However, since finalizers are not guaranteed to run, you end up managing memory objects with Close() functions. This was one of the reasons I rewrote some fairly large Tensorflow projects in Rust.

                      1. 2

                        However, since finalizers are not guaranteed to run, you end up managing memory objects with Close() functions.

                        Hmm. This seems a bit odd to me. As I understand it, Go code that binds to C libraries tends to use finalizers to free memory allocated by C. Despite the lack of a guarantee around finalizers, I think this has worked well enough in practice. What caused it to not work well in the Tensorflow environment?

                        1. 3

                          When doing prediction, you typically allocate large tensors relatively rapidly in succession. Since the wrapping Go objects are very small, the garbage collector kicks in relatively infrequently, while you are filling memory in C-land. There are definitely workarounds to put bounds on memory use, e.g. by using an object pool. But I realized that what I really want is just deterministic destruction ;). But that may be my C++ background.

                          I rewrote all that code probably around the Go 1.6–1.7 time frame, so maybe things have improved. Ideally, you’d be able to hint the Go GC about the actual object sizes, including C-allocated objects. Some runtimes provide support for tracking C objects; e.g., SICStus Prolog has its own malloc that counts allocations in C-land towards the SICStus heap (SICStus Prolog can raise a recoverable exception when you use up your heap).

                          1. 1

                            Interesting! Thanks for elaborating on that.

                      2. 3

                        So Python, Swift, Nim, and others all have RC memory management … according to you these are not GC languages?

                    2. 5

                      One benefit of GC is that the language can be way simpler than a language with manual memory management (either explicitly like in C/C++ or implicitly like in Rust).

                      This simplicity then can either be preserved, keeping the language simple, or spent on other worthwhile things that require complexity.

                      I agree that Go is bad, Rust is good, but let’s be honest, Rust is approaching a C++-level of complexity very rapidly as it keeps adding features with almost every release.

                      1. 1

                        you can’t have basic features like the C++ file streams that “close themselves” at the end of the scope, then they are destroyed.

                        That is a terrible point. The result of closing the file stream should always be checked and reported, or you will have buggy code that can’t handle edge cases.
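
                        Rust makes the same trade-off visible: File’s destructor closes the handle but silently ignores any error, so code that cares has to surface failures explicitly. A minimal sketch:

                            use std::fs::File;
                            use std::io::{self, Write};

                            fn write_settings(data: &[u8]) -> io::Result<()> {
                                let mut f = File::create("settings.tmp")?;
                                f.write_all(data)?;
                                // An implicit close in a destructor has nowhere to report
                                // failure; syncing explicitly returns write-back errors.
                                f.sync_all()?;
                                Ok(())
                            } // the destructor still closes `f`, but any close error is dropped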

                        1. 0

                          You can turn off garbage collection in Go and manage memory manually, if you want.

                          It’s impractical, but possible.

                          1. 2

                            Is this actually used in any production code? To my knowledge it was meant more as a feature for debugging and for the language developers, rather than a true GC-less option like the one a language such as D provides.

                            1. 1

                              Here is a shocking fact: For those of us who write programs in Go, the garbage collector is actually a wanted feature.

                              If you work on something where having a GC is a real problem, use another language.

                      1. 2

                        Yeah computers are stupid

                        1. 2

                          Can someone ELI5 why Firefox is not to be trusted anymore?

                          1. 4

                            They’ve done some questionable things. They did this weird tie-in with Mr. Robot, the TV show, where they auto-installed a plugin for basically everyone as part of an update. It wasn’t enabled by default, if I remember right, but it got installed everywhere.

                            Their income stream, according to Wikipedia, is donations and “search royalties”. But really their entire revenue stream comes directly from Google. Also, in 2012 they failed an IRS audit and had to pay 1.5 million dollars. Hopefully they learned their lesson; time will tell.

                            They bought Pocket and said it would be open sourced, but it’s been over a year now, and so far only the FF plugin is OSS.

                            1. 4

                              Some of this isn’t true.

                              1. Mr. Robot was like a promotion, but not a paid thing like an ad. Someone thought this was a good idea and managed to bypass code review. This won’t happen again.
                              2. Money comes from a variety of search providers, depending on locale. Money goes directly into the people, the engineers, the product. There are no stakeholders we need to make happy and no corporations we have to answer to. Search providers come to us to get our users.
                              3. Pocket. Still not everything, but much more than the add-on: https://github.com/Pocket?tab=repositories
                              1. 3
                                1. OK, fair enough, but I never used the word “ad”. Glad it won’t happen again.

                                 2. When something like 80 or 90% of their funding comes directly from Google, it at the very least raises questions. So I wouldn’t say untrue; perhaps I over-simplified, and fair enough.

                                3. YAY! Good to know. I hadn’t checked in a while, happy to be wrong here. Hopefully this will continue.

                                But overall thank you for elaborating. I was trying to keep it simple, but I don’t disagree with anything you said here. Also, I still use FF as my default browser. It’s the best of the options.

                              2. 4

                                But really their entire revenue stream comes directly from Google.

                                 To put this part another way: the majority of their income comes from auctioning off being the default search bar target. That happens to be worth somewhere in the 100s of millions of dollars to Google, but Microsoft also bid (as did other search engines in other parts of the world; IIRC the choice is localised) - Google just bid higher. There’s a meta-level criticism where Mozilla can’t afford to challenge /all/ the possible corporate bidders for that search placement, but they aren’t directly beholden to Google in the way the previous poster suggests.

                                1. 1

                                   Agreed, except it’s well over half of their income; I think somewhere in the 80–90% range of their funding comes from Google.

                                  1. 2

                                    And if they diversify and, say, sell out tiles on the new tab screen? Or integrate read-it-later services? That also doesn’t fly as recent history has shown.

                                     People ask Mozilla not to sell ads, not to take money for search engine integration, not to partner with media properties, and still to keep up their investment in development of the platform.

                                     People don’t offer any explanation of how Mozilla can do that while also rejecting all of its means of making money.

                                    1. 2

                                       Agreed. I assume this wasn’t an attack on me personally, just a comment on the sad state of FF’s diversification woes. They definitely need diversification. I don’t have any awesome suggestions here, except that I think they need to diversify. Having all your income controlled by one source is almost always a terrible idea long-term.

                                       I don’t have problems, personally, with their selling of search integration; I have problems with Google essentially being their only income stream. I think it’s great they are trying to diversify, and I like that they do search integration by region/area, so at least it’s not 100% Google. I hope they continue testing the waters and finding new ways to diversify. I’m sure some will be mistakes, but hopefully with time they can get Google (or anyone else) down around the 40–50% range.

                                    2. 1

                                      That’s what “majority of their income” means. Or at least that’s what I intended it to mean!

                                2. 2

                                   There is also the fact that they are based in the USA, which means following American laws. Regarding personal data, those laws are not very protective, and even less so if you are not an American citizen.

                                   Moreover, they are testing in Nightly using Cloudflare DNS as the resolver even if the operating system configures another one. A DNS resolver sees every domain name resolution you make, which means it knows which websites you visit. You should be able to disable it in about:config, but putting it there rather than in the Firefox preferences menu is a clear indication that it is not meant to be easily done.

                                   You can also add the fact that it is not easy to self-host the data stored by your browser. And can I be sure that data is not sold, when their primary financial support is Google, which bases its revenue on data?

                                  1. 3

                                    Mozilla does not have your personal data. Whatever they have for sync is encrypted in such a way that it cannot be tied to an account or decrypted.

                                    1. 1

                                       They have my sync data; sync data is personal data, so they have my personal data. How do they encrypt it? Do you have any link about how they manage it? In which country is it stored? What is the law about it?

                                      1. 4

                                        Mozilla has your encrypted sync data. They do not have the key to decrypt that data. Your key never leaves your computer. All data is encrypted and decrypted locally in Firefox with a key that only you have.

                                        Your data is encrypted with very strong crypto and the encryption key is derived from your password with a very strong key derivation algorithm. All locally.

                                        The encrypted data is copied to and from Mozilla’s servers. The servers are dumb and do not actually know or do crypto. They just store blobs. The servers are in the USA and on AWS.

                                        The worst that can happen is that Mozilla has to hand over data to some three letter organization, which can then run their supercomputer for a 1000 years to brute force the decryption of your data. Firefox Sync is designed with this scenario in mind.

                                        This of course assuming that your password is not ‘hunter2’.
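
                                         To sketch the shape of the client side (assuming the Rust pbkdf2, sha2 and aes-gcm crates; Sync’s real onepw/HKDF protocol differs in the details):

                                             use aes_gcm::aead::{Aead, KeyInit};
                                             use aes_gcm::{Aes256Gcm, Key, Nonce};
                                             use pbkdf2::pbkdf2_hmac;
                                             use sha2::Sha256;

                                             fn encrypt_for_upload(password: &[u8], salt: &[u8], bookmarks: &[u8]) -> Vec<u8> {
                                                 // Key derivation happens on the client; the server never
                                                 // sees the password or the derived key.
                                                 let mut key = [0u8; 32];
                                                 pbkdf2_hmac::<Sha256>(password, salt, 100_000, &mut key);

                                                 let cipher = Aes256Gcm::new(Key::<Aes256Gcm>::from_slice(&key));
                                                 // A real client must use a fresh random nonce per message.
                                                 let nonce = Nonce::from_slice(b"unique nonce");
                                                 // The server stores this blob without being able to read it.
                                                 cipher.encrypt(nonce, bookmarks).expect("encryption failure")
                                             }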

                                        It is starting to sound like you went through this effort because you don’t trust Mozilla with your data. That is totally fair, but I think that if you had understood the architecture a bit better, you may actually not have decided to self host. This is all put together really well, and with privacy and data breaches in mind. IMO there is very little reason to self host.

                                        1. 1

                                          “The worst that can happen is that Mozilla has to hand over data to some three letter organization, which can then run their supercomputer for a 1000 years to brute force the decryption of your data. Firefox Sync is designed with this scenario in mind.”

                                           That’s not the worst by far. The Core Secrets leak indicated they were compelling suppliers, via the FBI, to put in backdoors. So, they’d either pay/force a developer to insert a weakness that looks accidental, push malware in during an update, or (most likely) just use a browser sploit on the target.

                                          1. 1

                                            In all of those cases, it’s game over for your browser data regardless of whether you use Firefox Sync, Mozilla-hosted or otherwise.

                                            1. 1

                                              That’s true! Unless they rewrite it all in Rust with overflow checking on. And in a form that an info-flow analyzer can check for leaks. ;)

                                          2. 1

                                            As you said, it’s totally fair to not trust Mozilla with data. As part of that, it should always be possible/supported to “self-host”, as a means to keep that as an option. Enough said to that point.

                                            As to “understanding the architecture”, it also comes with appreciating the business practices, ethics, and means to work to the privacy laws of a given jurisdiction. This isn’t being conveyed well by any of the major players, so with the minor ones having to cater to those “big guys”, it’s no surprise that mistrust would be present here.

                                          3. 2

                                            How do they encrypt it?

                                            On the client, of course. (Even Chrome does it the same way.) Firefox is open source; you can find out for yourself how exactly everything is done. I found this keys module; if you really care, you can find where the encrypt operation is invoked, what data is there, etc.

                                            1. 2

                                              You don’t have to give it to them. Firefox sync is totally optional, I for one don’t use it.

                                              Country: almost certainly the USA. Encryption: looks like this is what they use: https://wiki.mozilla.org/Labs/Weave/Developer/Crypto

                                          4. 2

                                           The move to Cloudflare as DNS over HTTPS is annoying enough to make me consider other browsers.

                                           You can also add the fact that it is not easy to self-host the data stored by your browser. And can I be sure that data is not sold, when their primary financial support is Google, which bases its revenue on data?

                                            Please, no FUD. :)

                                            1. 3

                                             move to Cloudflare

                                             It’s an experiment, not a permanent “move”. Right now you can manually set your own resolver and enable or disable DoH in about:config.

                                        1. 6

                                           This happens with a lot of bigger “open source” projects. Basically code dumps, without any consideration for making them easy to deploy for someone who’s not a developer intimately familiar with the code base. I guess that’s as good a definition of “devops” as any.

                                           The easiest way to improve this is to get the software properly packaged for the most popular Linux distributions. This is a great QA tool as well: if your software (and its dependencies) is easy to package, it means you have your dependencies under control and can build the software from source at any point.

                                           Unfortunately, nowadays you can be happy to get any of those projects to even work in a provided Docker container. Running them as a production service yourself is a completely different story, and practically impossible.

                                          1. 21

                                            To be fair, I don’t think every piece of open source is really meant to be run as-is by users so much as “here’s what we use to run this, you can build off it if you want.” It’s perfectly fair to release your internal tools for the purposes of knowledge sharing and reference without intending to support public consumption of them by non-developers.

                                             Further, it looks like the author made minimal effort to actually use what is distributed as it was meant to be used:

                                            • The suggested route of using docker images wasn’t used. Wanting to understand something and be able to run it without a pre-built image is fine, but totally skipping the intended usage and trying to roll your own from the beginning is only likely to make the system harder to understand.
                                            • The projects appear to provide release packages, yet he pulled the source directly from git, and at whatever state master happened to be in at the time, rather than a branch or tag. At least one of them looks to be failing CI in its current state, so it’s not even clear that what he had was a correctly functioning version to start with.
                                            • He’s ignored the npm-shrinkwrap provided and automatically upgraded dependencies without any indication or testing to confirm that they will work. While it would be great to think that this wouldn’t be an issue, the state of npm is not such that this is a realistic expectation.
                                            1. 3

                                               What is the purpose of knowledge sharing when you do not make things understandable? Knowledge sharing is not just “take my stuff and understand it”. You have to make it understandable and be sure the other person understands it well. That’s why you have wikis and documentation: to facilitate understanding.

                                               Where do you find that the suggested route is Docker? When you try to deploy something, you have to begin somewhere. I read the install section of one of the repositories, Firefox Accounts Server, as I do for a lot of the applications I install, and I followed the process. The written process is git clone and npm install. After some research, I discovered that a single repository was not enough; there are several linked together. Where is that written? How am I supposed to know?

                                               You can’t say I made minimal effort when I spent so much time on it. I am used to deploying and configuring applications. I configured each microservice on my own in order to make it work. The problem was that after three days I still had to guess things on my own, understand them, configure them properly and fix the issues I hit. I am sorry, but that is too much. It is not my job to make the application easily understandable and easy to deploy. That is the maintainers’ job, and it is what I said in my blog post.

                                               When I compare it to other applications I have deployed, some of them bigger than this, FXA has a lot of work to do. The master branch is actually a development branch, there is no stable branch, and the documentation tells you to pull from master to deploy in production. :o

                                               Here we have just a big thing that says: this is our stuff, deal with it and make it workable if you want. I gave it a try and failed. It is not supposed to be deployed by someone who is not working on this project full time. That’s all, and that is questionable coming from the Mozilla Foundation, which publicly says that privacy matters.

                                              1. 5

                                                 Knowledge sharing is not just “take my stuff and understand it”. You have to make it understandable and be sure the other person understands it well.

                                                Your opinion is not universal.

                                                Different cultures handle this problem differently. Some culture/language pairings are listener-responsible and some are writer-responsible.

                                                1. 2

                                                   Clearly. I think the best way is to have both writer and listener responsible, but that is not the debate here, I guess.

                                                  1. 0

                                                     I’m in agreement. I can’t understand how people can defend bad practice of the art. Why would anyone who cares about good work defend anything like this? It’s like they are working against themselves, karmically setting themselves up for a later fall through someone else’s failing… for no sensible gain.

                                                  2. 1

                                                    Shouldn’t they both be “responsible”? And please tell me which cultures? Are we talking professional vs unprofessional, or what? I’ve worked in hundreds of different cultures worldwide over many decades, and I’ve never seen a claim like this.

                                                    1. 2

                                                      Come to Asia, never cease being frustrated.

                                                      1. 1

                                                         Been there. Once I understood “hectoring”, I learned pretty quickly how to generate a larger, louder response.

                                              2. 4

                                                Here is some more concrete documentation that I found:

                                                Run your own Firefox Accounts Server https://mozilla-services.readthedocs.io/en/latest/howtos/run-fxa.html

                                                Self Hosting via Docker https://github.com/michielbdejong/fxa-self-hosting/blob/master/README.md

                                                Installing Firefox Accounts https://webplatform.github.io/docs/WPD/Infrastructure/procedures/Installing_and_theme_Firefox_accounts/

                                                The ‘code dump’ argument is a bit odd .. these projects are all super accessible and being developed in the open on Github. No project is perfect. If you think something specific, like self-hosting documentation, is missing, file a bug or make the investment and work on it together with the devs. Open source is not one way.

                                              1. 7

                                                I am happy to talk to the fxa team to find out what they recommend and if there is more or better documentation available.

                                                1. 5

                                                  These projects are way easier to deploy when you use Docker. It will hide 90% of the stupid stuff that is automated for you in the Dockerfile.

                                                  1. 4

                                                    Is this a bug or a feature? One could fail to explain the “stupid stuff”, and be tripped up by a part of it that really mattered.

                                                    It’s not enough to be “open source”, it needs to be transparent and credible too, so that one can reasonably maintain it. Hiding things in a Dockerfile doesn’t pass this test.

                                                    1. 8

                                                      I think it’s probably “enough” for a project to be whatever the maintainers want it to be. A Dockerfile is just an abstraction like a Makefile or even a shell script; the built artefact is effectively a tar file containing the assembled software, ready for deployment. I’m not a fan of the ergonomics of the docker CLI, but the idea that you’re “hiding” anything with Docker any more than you are with any other packaging and deployment infrastructure seems specious at best.

                                                      1. 0

                                                        Instead of focusing on a single word, try considering the other, opposing two - being credible and transparent. Clearly this isn’t.

                                                        For one thing, the reason you don’t do this is that it’s easy for someone to take advantage and place exploitative code in a big pile of things. For another, it’s bad form not to communicate your work well, because maintainers struggling to deal with an issue can create more (and possibly even worse) versions they might claim “fix” something, and in the fog of code it might not be easy to tell which end is up.

                                                        I’m surprised you’d defend bad practice, since nearly everyone has had one of these waste a few hundred hours of their time. Your defense sounds even more specious than focusing on the wrong word and missing the point of the comment.

                                                        1. 2

                                                          I highlighted the word enough because your comment seems to have come from a place of entitlement and I was trying to call that out. The project doesn’t owe you anything.

                                                          Indeed, most of my comment was attempting to address your apparent suggestion that using a Dockerfile instead of some other packaging or deployment mechanism is somehow not transparent (or, I guess, credible?). I’m not really defending the use of Docker in any way – indeed, I don’t have any use for it myself – merely addressing what I think is specious criticism.

                                                          Regardless of what point you were trying to make, your comment comes across as an effectively baseless criticism of the project for not delivering their software in a way that meets your personal sense of aesthetics. Things are only hidden in a Dockerfile to the extent that they are conveniently arranged for consumption by consumers that do not necessarily need to understand exactly how they work. This isn’t any different to any other process of software assembly that abstracts some amount of the internal complexity of its operation from the consumer; e.g., I am not in the habit of reviewing the Linux kernel source that executes on my desktop.

                                                          If you want to know how the Dockerfile works, you could always look inside! It seems no more or less transparent or credible than a shell script or a markdown document that executes or describes a similar set of steps.

                                                          1. -1

                                                            I build them so I know what’s inside. You’re looking for something to be outraged at, and you find it in my words.

                                                            Perhaps you can defend those who write programs with meaningless variable names, and stale comments that no longer reflect the code they were next to.

                                                            Point your anger somewhere else. Meanwhile, who speaks up for something unintentionally vague or misleading? Or are you also going to defend syntax errors and typos next?

                                                            1. 1

                                                              I’m not angry – merely flabbergasted! How is a Dockerfile “vague or misleading”? By definition, it contains a concrete set of executable steps that set up the software in the built image.

                                                    2. 1

                                                    I hate the Docker setups that are just one piece of the setup, where you are expected to spend a few days writing a docker-compose file to piece together the whole thing.

                                                      1. 1

                                                      Which problems do you encounter when writing docker-compose files? I’ve mostly had the experience that the upstream Dockerfile is horrible (for example, Seafile is trying to pack everything into a single image, which causes feature creep in the setup scripts) - but writing docker-compose.yaml always felt rather straightforward (besides Volumes; I’m still confused by Volume management on occasion).

                                                    1. 2

                                                      Friendly reminder that XScreenSaver (on X11) has had the „Sad Mac” screensaver module for ages, and if you’re on (even a pretty old) desktop Linux, you can enable it right now.

                                                      1. 1

                                                        I’ve never seen it. What does it look like?

                                                        1. 1

                                                          It’s just an option in the „BSOD” screensaver, where you can also enable other „sad screens” like the BSOD itself, Guru Meditation, various panic()s and other unusual ones.

                                                      1. 5

                                                        My wife accepted a job in The Netherlands, so we are moving back to NL after five years in Germany. We are both really looking forward to moving back and to the new apartment. Our 4 year old daughter is also pretty excited about the move (moving closer to grandparents) and starts to practice Dutch more.

                                                        We have been packing for the last few days and will continue throughout the weekend (we won’t have much time during the last 2.5 weeks at work).

                                                        We are not moving the furniture; it turns out that it is cheaper to repurchase all the furniture than to use a moving company.

                                                        1. 3

                                                          🇳🇱 🧀 ❤️

                                                        1. 4

                                                          More Elixir learning / hacking! Anyone else doing this?

                                                          I am thinking about a project that could map well to it. But I am very far away from even putting together a simple application. Learning Elixir, Phoenix and Ecto.

                                                          Also, family time :-)

                                                          1. 1

                                                            I also occasionally spend time learning Elixir! Would love to collaborate on some hack!

                                                          1. 2

                                                             Could one store the IP address of the initial request that caused the JWT to be generated in the token itself? Then you can validate that the current request comes from the same IP. If they’re different, force the user to log in again from their current IP.

                                                             The user would need to re-login if they turn on a VPN or change locations, but that’s a small price to pay if it reduces the possibility of certain types of attacks. I’m definitely not a security expert, but I’m working on a fairly sensitive app where a breach would be bad for a user. The fact that I haven’t seen this suggested next to more complex safeguards makes me think there’s a fundamental flaw in it that I’m just not thinking of.
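
                                                             A sketch of what the issuance/validation logic might look like (field names are hypothetical, not from any particular JWT library):

                                                                 // Claims carried inside the (signed) token.
                                                                 struct Claims {
                                                                     sub: String, // user id
                                                                     ip: String,  // IP the token was issued to
                                                                     exp: u64,    // expiry, seconds since epoch
                                                                 }

                                                                 // Reject tokens presented from a different address than
                                                                 // they were issued to, forcing a fresh login as suggested.
                                                                 fn validate(claims: &Claims, request_ip: &str, now: u64) -> Result<(), &'static str> {
                                                                     if now >= claims.exp {
                                                                         return Err("token expired: log in again");
                                                                     }
                                                                     if claims.ip != request_ip {
                                                                         return Err("IP changed since issuance: log in again");
                                                                     }
                                                                     Ok(())
                                                                 }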

                                                            1. 5

                                                               IPs aren’t a great factor to base stuff like this on, although that’s a good idea.

                                                              I think what’s better is something like token binding (https://datatracker.ietf.org/wg/tokbind/documents/) which is a way to pin a certain token to a specific TLS session. This way you have some basic guarantees. But in the real world things are sorta messy =p

                                                              1. 2

                                                                 Most home users would have to log back in every day. Services that tie my login to an IP address piss me off so much because they are constantly logging me out.

                                                                1. 2

                                                                  The fact that I haven’t seen this suggested next to more complex safeguards makes me think there’s a fundamental flaw in it that I’m just not thinking of.

                                                                   It’s not a safe presumption that a user’s requests will always come from the same IP - even from request to request. Their internet access could be load balanced or otherwise change due to factors like roaming.

                                                                  1. 1

                                                                    Yeah that is also a common technique for cookies. If the remote IP changes you can invalidate the cookie.

                                                                  1. 6

                                                                    TLDR: The laptop was not tampered with.

                                                                    Still a good read though :-)

                                                                    1. 16

                                                                      That he knows of.

                                                                      1. 5

                                                                        It’s impossible to prove… :)

                                                                        1. 5

                                                                          For sure haha. One can do better than he did, though.

                                                                           For one, he could block evil-maid-style attacks very cheaply. I’ve done plenty of tamper-evident schemes for that stuff. You can at least know if they opened the case. From there, one can use analog/RF profiling of the devices to detect chip substitutions. That requires specialist, time-consuming skills, or the occasional help of a specialist who gives you a black-box method plus steps to follow for a device they’ve already profiled.

                                                                          The typical recommendation I gave, though, was to buy a new laptop in-country and clear/sell it before you leave. This avoids risks at border crossings where they can legally search or might sabotage devices. Your actual data is retrievable over a VPN after you put Linux/BSD on that sucker. Alternatively, you use it as a thin client for a real system but latencies could be too much for that.

                                                                          So, there’s a few ideas for folks looking into solving this problem.

                                                                          1. 3

                                                                             This (and the original article) is a techno solution to a techno problem that doesn’t really exist.

                                                                            If you’re a journo doing this, they will look at your visa and say, you claim to be a journalist, but you have no laptop, we don’t believe you, entry denied.

                                                                             I’m pretty sure even a very open country like NZ will do this to you. (If you claim not to be a journalist and start behaving as one, you are again violating your visa conditions (i.e. working, not visiting), and out you go.)

                                                                             As to spying on what you have on an encrypted drive… rubber-hose code breaking sorts that out pretty quickly.

                                                                            I grew up in the Very Bad Old days and tend to have a very dim view of the technical abilities, patience and human kindness of the average spook.

                                                                            1. 2

                                                                               I got the idea from people doing it. They weren’t journalists, though. The other thing people did, which might address that problem, is take boring laptops with them. They have either nothing interesting or some misinformation on them. Nothing secret happens on the laptop during the trip. They might even use it for non-critical stuff like YouTube, just so it’s different when they scan it on return.

                                                                      2. 5

                                                                        TLDR: The laptop was not tampered with in a way he’s foreseen.

                                                                        To just say the laptop was not tampered with is missing his point completely.

                                                                      1. 4

                                                                        It only supports GET requests and it is hardcoded to forward incoming DNS requests to 127.0.0.1:53. This means you need to have a DNS server running on the machine where you run this service.

                                                                         Hey, for a moment I thought this made no sense. But of course this is to be run on a server, not all on localhost 🤦

                                                                        Neat side project!

                                                                        1. 3

                                                                          I fixed that! Current version does POST too now.

                                                                        1. 4

                                                                          I was surprised to hear they were advertising there in the first place…

                                                                          Would be nice if they would take it one step further and block all things FB by default unless you explicitly browse to facebook.com yourself.

                                                                          1. 2

                                                                            Pretty sure that Tracking Protection already kills Like buttons.

                                                                          1. 10

                                                                            All good reasons, IMO. But it fails to mention any of the well-known problems with C, which would have prevented many vulnerabilities in SQLite. So it reads like they’re just trying to justify their choice, rather than an honest assessment of C. I don’t know what the intention or purpose of this page is, though. And to be fair, I would probably have made the same choice in 2000.

                                                                            1. 40

                                                                              I don’t know what the intention or purpose of this page is

                                                                              Probably to stop people asking why it’s not written in Rust.

                                                                              1. 14

                                                                                Since it mentions Java but not Go or Rust, I suspect it’s an older page.

                                                                                1. 25

                                                                                  That’s the beauty of C, it refutes all future languages without having to be recompiled.

                                                                                  1. 1

                                                                                    It mentions Swift, too.

                                                                                      1. 1

                                                                                         Yeah, looking at the parent page, it appears it showed up sometime in 2017. I was misled by the mention of Java as an alternative, because I think it’s rather obviously unsuited for this job.

                                                                                  2. 4

                                                                                     I tried finding a list of vulnerabilities in SQLite and only this page gave current info. Now, I’m unfamiliar with CVE stats, so I don’t know if 15 CVEs in 8 years is more than average for a project with the codebase size and usage of SQLite.

                                                                                    1. 7

                                                                                       […] I don’t know if 15 CVEs in 8 years is more than average for a project with the codebase size and usage of SQLite.

                                                                                      I don’t know either! I looked at the same page before writing my comment, and found plenty of things that don’t happen in memory-safe languages. There were fewer entries than I expected, but also some of them have descriptions like “Multiple buffer overflows […],” so the number of severe bugs seems to be higher than the number of CVEs.

                                                                                      1. 7

                                                                                        The 4 in 2009 appear to have been in some web app that used SQLite, not SQLite itself.

                                                                                        1. 4

                                                                                          The security community generally considers CVE counts a bad mechanism for arguing about the security of a project, for the following reasons:

                                                                                          • Security research (and thus vulnerability discovery) is driven by incentives like popularity, impact and monetary gain. This makes some software more attractive to attack, which increases the number of bugs discovered, regardless of the security properties of the codebase.
                                                                                          • It’s also hard to find another project to compare with.

                                                                                          (But if I were to join this game, I’d say 15 in 8 years is not a lot ;))

                                                                                        2. 1

                                                                                          15 vulnerabilities of various levels in the past 10 years.

                                                                                          https://www.cvedetails.com/vendor/9237/Sqlite.html

                                                                                          How does that compare to other products, or even similarly complicated libraries?

                                                                                        1. 3

                                                                                          This looks great. Is this library being written to be part of some large application?

                                                                                          1. 8

                                                                                            I’ve written ZeroMQ and nanomsg in the past. This is part of my years-long project to make writing such applications tractable. And by that I mean being able to write them without falling into either callback hell or state machine hell.
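
                                                                                            To make that concrete, here’s a minimal sketch of the style libdill enables: concurrency written as ordinary sequential C, with no callbacks and no explicit state machines. This is my illustration using libdill’s documented go/chmake/chsend/chrecv calls, not code from the project:

                                                                                            #include <libdill.h>
                                                                                            #include <stdio.h>
                                                                                            
                                                                                            /* A coroutine: plain sequential code, no callbacks.
                                                                                             * Error handling omitted for brevity. */
                                                                                            coroutine void worker(int ch) {
                                                                                                    int val = 42;
                                                                                                    /* Blocks only this coroutine; -1 means no deadline. */
                                                                                                    chsend(ch, &val, sizeof(val), -1);
                                                                                            }
                                                                                            
                                                                                            int main(void) {
                                                                                                    int ch[2];
                                                                                                    chmake(ch);                /* ch[0] and ch[1] are the channel's two ends */
                                                                                                    int h = go(worker(ch[0])); /* launch the coroutine */
                                                                                                    int val;
                                                                                                    chrecv(ch[1], &val, sizeof(val), -1);
                                                                                                    printf("received %d\n", val);
                                                                                                    hclose(h);
                                                                                                    hclose(ch[0]);
                                                                                                    hclose(ch[1]);
                                                                                                    return 0;
                                                                                            }

                                                                                            Building should be something like cc example.c -ldill, assuming libdill is installed.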

                                                                                            1. 2

                                                                                              On that topic, what is the status of nanomsg? Is libdill your main focus, or do you grow these projects in parallel? I’ve watched these projects without using them in practice, but I really like the approach of trying to find the right abstractions and patterns for expressive and efficient network programming.

                                                                                              1. 1

                                                                                                Banal question: libdill? Why not just use Go?

                                                                                            1. 1

                                                                                              As a developer who moved from Linux to the macOS platform, I read this and started thinking about how many non-native apps I use as replacements for the Apple versions. The obvious ones I’m thinking of:

                                                                                              • Alfred instead of Spotlight
                                                                                              • iTerm2 instead of Terminal
                                                                                              • Dropbox instead of iCloud
                                                                                              • Chrome instead of Safari
                                                                                              • Gmail instead of Mail
                                                                                              • Google Maps instead of Maps
                                                                                              • VLC instead of iMovie
                                                                                              • Spotify instead of iTunes
                                                                                              • Signal instead of Messages

                                                                                              &c. This surely isn’t a good trend for Apple to let continue.

                                                                                              1. 13

                                                                                                That’s not what’s meant by “native” in this case. Alfred, iTerm, Dropbox, Chrome, and VLC are native. Spotify is Electron, and I’m not sure about Signal. I’m guessing it’s probably a native app that does most of its UI in a WebView.

                                                                                                1. 5

                                                                                                  Signal for Desktop is Electron.

                                                                                                  1. 2

                                                                                                    As it might be useful to describe what is meant by native: it’s something on a spectrum between “uses the platform-supplied libraries and UI widgets” (i.e. Cocoa) and merely “isn’t a wrapped browser or Electron app”. So it’s not clear whether an application using the Qt framework would be considered “native”. It could be delivered through the App Store and subjected to the sandbox restrictions, so it fits the bill for a “native” app in the original post, but it would also not be using the native platform features which are presumably seen as Apple’s competitive advantage for the purposes of the same post.

                                                                                                    1. 2

                                                                                                      I’d call Qt native. It doesn’t use the native widgets, but then neither do most applications that are available on multiple platforms.

                                                                                                      1. 2

                                                                                                        It may be native, but it’s not Mac-native in the sense Gruber was talking about. You will find that all three uses of “native” in his article appear as “native Cocoa apps” or “native Mac apps”. He is talking about a quite specific sense of native: apps that integrate seamlessly with all of the macOS UI conventions (services, system-wide text substitutions, the native emoji picker, drag & drop behaviours, proxy icons, and a myriad more). Qt apps do not.

                                                                                                  2. 5

                                                                                                    Why is it not a good trend? You are still using a Mac… they sold you the hardware. Should they care about what apps you run?

                                                                                                    1. 3

                                                                                                      Apps with good experiences that aren’t available on other platforms keep users around. Third-party iOS apps do a better job of moving iPhones than anything else Apple does, because people who already have a pile of iOS apps they use generally buy new iPhones.

                                                                                                      Electron is just the latest in a long series of cross-platform app toolkits, and it has the same problems every other one has had: non-native look & feel, perceived inefficiency, and, for the OS vendor, no moat.

                                                                                                      1. 1

                                                                                                        Counterpoint: their apps have always been limited, really aimed at people who weren’t willing to learn and use more robust tooling. I mean, how many professionals use iMovie?

                                                                                                        1. 1

                                                                                                          iMovie is a good example. I’m guessing a lot of us prefer VLC.

                                                                                                      2. 1

                                                                                                        It’s good for the end user but not a good trend for their business model, part of which is to have best-in-class apps. Don’t get me wrong, I like having choice and I think they shouldn’t force you into their own app ecosystem.