1.  

    It’s a real shame that Nim is dismissed because of its bus factor and low popularity. Please help us get out of this catch-22 scenario. The fun and performance that the language offers really outweigh this.

    One problem is that if the creator lost interest the project would grind to a total halt, which may be a reason for the lack of adoption.

    While this is true, the likelihood of it happening is low. The project has been going for a while now, and the creator has never wavered in his passion for it.

    I’ve got a question for you as well: how did you evaluate the popularity of the languages? I’ve noticed that Nim has a score of 1 here, but Zig has a score of 2. I find that strange.

    1.  

      I’ve noticed that Nim has a score of 1 here, but Zig has a score of 2. I find that strange.

      That seems like an oversight on my part - you are right, I will update the page with a note.

      I actually had the most fun writing Nim of any language recently - for other projects I would definitely use it.

    1. 5

      Interesting. I’ll argue the security score for C is unfair given that, if subsets and tooling are used, it’s easier to write low-defect software in it than in most other things. For instance, Astree Analyzer can prove code is free of the kinds of defects C programmers worry about the most. There are quite a few open tools for C, like KLEE and AFL, that should knock out a lot of them in combination. Then, it’s one of only two languages (the other is SML) that have a certifying compiler to prevent the compiler itself from adding vulnerabilities. There will be lots of false positives and refactoring when using the static analyzers. If you don’t have them or don’t want false positives, then C’s safety drops way down to 0-1 given the high impact of failures.

      For Java, its security is nowhere near 5. First, its Trusted Computing Base is a combo of a VM and libraries full of unsafe code, written in a language you just rated at 2. The JVM’s vulnerability track record was what one would expect: I kept Java off machines unless absolutely necessary, since it was one of the top threats. Even if memory safe, you can still get concurrency errors. There are tons of tools for catching those in Java, though. Even if concurrency safe, a non-low-level language leaves one problem: covert channels (aka secret leaks). It could leave secrets in memory that gets accessed by malicious or just leaky code later. Alternatively, it obscures the timing of your routines on secrets in ways that leak them. Altogether, Java is a terrible language for security, even though it reduces vulnerabilities in the average case for apps written on top of it. It’s why safety-critical Java was usually a subset running on special, deterministic VMs (example).

      Regarding C++ and OO, I just used structured programming with C++ during the brief time I used it. Couldn’t you do that? I would understand if you just don’t like it. I ditched it quickly.

      Regarding Haskell, it’s popular in high-assurance systems, especially tooling for them. Hardware people use it, too. It has the same weakness as Java: the gap between high-level code and low-level representation can cause issues. It’s better than Java on TCB quality, concurrency safety, and getting more out of the type system. I’d recommend against it since you’re doing something crypto-related that you want high uptake and contributions for. It might have been nice for your compiler project, though. You still might find this paper interesting; it assessed Haskell’s properties for secure programming, with a crypto algorithm used as a case study.

      1.  

        I definitely missed a productivity category. The safer C is, the slower work becomes - for some tools (perhaps even this tool) it is worth it to be extremely careful.

        1.  

          That’s true.

      1.  

        I would have loved it if Zig could just compile to more or less idiomatic C, giving projects an escape hatch if things aren’t going well for Zig development after all.

        I’ll never hedge my bets in this way. Maybe C should start compiling to idiomatic Zig and give projects an escape hatch for the impending doom of C :-)

        Also, regarding simplicity: just yesterday in this stream I made the argument that Zig is actually a simpler, smaller language than C, because it removes two (anti-)features from C, for example the preprocessor. C is actually two languages that are unaware of each other, whereas Zig is only one.
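
        The “two unaware languages” point can be sketched in plain C (this is just an illustration with a made-up macro, not anything from the stream): the preprocessor rewrites text with no knowledge of C’s grammar or precedence.

        ```c
        #include <stdio.h>

        /* Textual substitution, not a function: the preprocessor knows
         * nothing about C expressions or operator precedence. */
        #define SQUARE(x) x * x

        int main(void) {
            int n = 3;
            /* Expands to n + 1 * n + 1, i.e. 3 + 3 + 1 = 7, not 16. */
            printf("%d\n", SQUARE(n + 1));
            return 0;
        }
        ```

        The usual fix is to parenthesize the macro, but the point stands: the language itself cannot catch this class of mistake.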

        1.  

          I watched the whole stream :)

        1.  

          What we need […]

          What about “soft features” like ecosystem, compatibility, maintainability, testability…? Those are very important for some of the use cases that require the program to work reliably, and they should definitely be considered when choosing a language for a project.

          Edit: especially for a project that aims to be “highly reliable” on its homepage.

          1.  

            Popularity and ecosystem seem highly correlated.

            Not quite sure what you mean by compatibility, but if you mean operating systems, it is quite correlated with popularity.

            I didn’t quite rank maintainability; I think it’s possible to make a mess of pretty much any language, but it is a good question whether some languages produce more maintainable code than others. Rust advertising fearless concurrency is essentially a maintainability issue.

            For testability I have some plans that I might write about later, but it is good to not choose something that paints you into a corner for sure.

            Reliability to me means thorough testing of error conditions and minimizing code in the fault kernel of a program. For this tool in particular, this means the write path that data takes should be small and tested (across all error paths). The choice of error model for a language affects this quite a lot: http://joeduffyblog.com/2016/02/07/the-error-model/
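
            A sketch of what a small, testable write path can look like (a hypothetical C helper, not this tool’s actual code): every failure mode funnels through one tiny function that tests can target.

            ```c
            #include <errno.h>
            #include <stdio.h>
            #include <string.h>

            /* Hypothetical write path: one small function, every failure
             * returns through the same branch so tests can exercise it. */
            static int write_all(FILE *f, const void *buf, size_t len) {
                if (fwrite(buf, 1, len, f) != len)
                    return -1;            /* short write or stream error */
                if (fflush(f) != 0)
                    return -1;            /* flush failures surface here too */
                return 0;
            }

            int main(void) {
                const char msg[] = "hello";
                if (write_all(stdout, msg, strlen(msg)) != 0) {
                    fprintf(stderr, "write failed: %s\n", strerror(errno));
                    return 1;
                }
                return 0;
            }
            ```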

            If I am totally honest, many languages don’t meet the quality standard (yet) either, which I sort of counted under ‘stability’ even if I didn’t mention it explicitly.

            1.  

              I didn’t quite rank maintainability, I think it’s possible to make a mess of pretty much any language…

              I think this is in large part because languages that focus on correctness often do so at the expense of difficult refactoring and daunting cognitive overhead. This manifests as a maintainability issue in actual projects, but the self-selecting nature of the maintainers means the issue is unconsciously swept under the rug.

              At the same time, few languages offer features specifically geared toward keeping code refactorable—let alone aim to be both correct and manageable (both on the screen and in the mind). One rare exception, an example of the latter, is Jon Blow’s Jai prototype.

          1. 9

            I’m surprised to see that Go rated only 3/5 for simplicity. It’s far simpler than modern Java! Just consider how few concepts it has and that they’re all orthogonal to each other so you need not learn about those you don’t use.

            1. 9

              And the bus factor for Go should be 5/5. It’s used a bit everywhere at Google and by all kinds of large projects outside, so it’s there to stay for many, many years.

              1. 8

                A corporate Google mandate could also kill the whole project at any time if they invented something much better. It may not die immediately, but I’m pretty sure they could kill it far more easily than C could be killed. One problem is that a scale of 1-5 doesn’t have good resolution :)

                1. 5

                  I don’t see how a corporate Google mandate would kill off Go. I could see it removing Google’s contributions, in the worst case, but the code is open source, there are a lot of outside companies that use Go, and several of them have implemented other languages in it (at least 2 separate Lua implementations and 1 Lisp come to mind). And Go gets a decent amount of outside developers working on it.

                  At this point, Go’s use outside of Google is enough to keep it going should Google suddenly lose interest in it. But given that Go is used a lot inside Google, to the point that there exists a cross compiler from Python to Go, a sudden loss of interest from Google would be a very unlikely event. Dart, a language that is far less popular, is still quite alive and kicking at Google, on the basis of their Ads team using it, even though it has had a much harder time getting adopted outside of Google.

                  I don’t think Go is going anywhere soon. I’d personally give it a 4.5/5, if C is the 5/5.

                  1.  

                    I very much agree. I am making plans for how to leave the Google ecosystem for email. Last week they killed Google Inbox (announced plans to shut it down very soon), and I loved that product very much. I wouldn’t be surprised if Gmail gets killed too. And the same could happen to the Go language.

                    Inbox wasn’t an unpopular product by any means, and it still got killed.

                2.  

                  I may be accidentally judging based on the fact that my old job used an outdated version of Java to support the old code. Modern Java must be a different story for some things.

                1. 8

                  Since I write Python for a living, I can testify that everything you said about it is spot on.

                  With that said, I have since discovered D and I’m quickly falling in love with it. It seems to come naturally to me as a Python programmer, but it’s much quicker and feels more powerful to me. Should you ever get the time, I’d like to hear your thoughts on it as well. It will obviously score pretty low in the popularity contest.

                  1.  

                    D is one of those languages I just never poked at. I have no idea why.

                    1. 7

                      You really should before committing to Rust. You might like it. It’s a C++ alternative with lots of features that still compiles really fast. They also have the BetterC mode as a C alternative. Aside from the borrow checker, Rust’s main advantage over it will be the community, which will crank out lots of libraries you might use or that might use your code. D’s community probably won’t achieve parity any time soon.

                  1. 12

                    You might find this interesting, it’s an attempt to predict bugs by language features. Unsuccessful, but still interesting enough for me to finish.

                    http://deliberate-software.com/safety-rank-part-2/

                    1. 5

                      Edit: hey, that is actually really cool and interesting (the point about Clojure is interesting too). It is also a pretty smart way to gather data in a situation where it is normally extremely hard to do so.

                      Something I just read today too - less about bugs, but more about robustness

                      http://joeduffyblog.com/2016/02/07/the-error-model/

                      1.  

                        Thanks! Good link too.

                        Speaking of which, I highly recommend learning Haskell. It’s a lot of work, but it has really changed how I think about programming. I would absolutely go back and do it again. It really makes the easy things hard (tiny scripts) but the hard things easy. Very much worth learning in my mind.

                      2.  

                        While Tail Call Optimization would certainly be nice to have in Go to improve performance, in practice it’s not a cause of defects because people just use iteration instead of recursion to accomplish the same thing. It doesn’t look as “nice” but you don’t get stack overflows.
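
                        The trade-off can be sketched in C for concreteness (an illustrative sum, not anyone’s real code): the recursive version consumes a stack frame per call, since C compilers don’t guarantee tail-call elimination, while the loop does the same work in constant stack.

                        ```c
                        #include <stdio.h>

                        /* Recursive sum: one stack frame per call; with no
                         * guaranteed TCO, a large n can overflow the stack. */
                        static unsigned long long sum_rec(unsigned long long n) {
                            return n == 0 ? 0 : n + sum_rec(n - 1);
                        }

                        /* The iterative rewrite: same result, constant stack. */
                        static unsigned long long sum_iter(unsigned long long n) {
                            unsigned long long acc = 0;
                            for (unsigned long long i = 1; i <= n; i++)
                                acc += i;
                            return acc;
                        }

                        int main(void) {
                            printf("%llu %llu\n", sum_rec(1000), sum_iter(1000));
                            return 0;
                        }
                        ```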

                        1.  

                          Arguably that could be said of all the things on that list. Every programming language community has idioms to best use the available feature set.

                          Specifically for recursion, I was assuming that the mental shuffle to convert something from recursion (often more elegant and simple) to iteration would cause issues. Since the whole model doesn’t work very well, I clearly was wrong in multiple places, and this very well could be one.

                          1. 5

                            Specifically for recursion, I was assuming that the mental shuffle to convert something from recursion (often more elegant and simple) to iteration would cause issues.

                            I could be wrong, but I suspect most developers find iterative algorithms more straightforward to write iteratively, not recursively, and consider writing them recursively a mental shuffle.

                            It wouldn’t surprise me if comfort with recursive algorithms is a predictor of developer proficiency, though.

                            1.  

                              You are probably right, but I’d guess now that is more because most developers work in languages that don’t support recursion. Originally, I was also going for the idea that it offers a way for the developer to make a mistake without realizing it. In this case, they don’t realize that recursion isn’t tail optimized, since the language allows it without warning. But since I have yet to see anyone use recursion unless they are used to languages with immutability (and even then they probably just use fold), it probably doesn’t come up much.

                              As such, it probably makes sense to remove that item, which doesn’t change much, just slightly raises the “c-style” languages and lowers the “lisp-style”.

                              1.  

                                but I’d guess now that is more because most developers work in languages that don’t support recursion.

                                Most people think about problem solving in an iterative way. They’ll do this, then this, maybe this conditionally, and so on. Imperative. Iterative. Few people think of their problems in a recursive way without being taught to do so. That’s probably why most prefer iterative algorithms in programming languages.

                                1.  

                                  To fully shave the yak, I’d argue this is entirely a product of how programmers are taught. Human thinking doesn’t map perfectly to either format. Recursion is just saying, “now do it again but with these new values”, and iteration requires mutation and “storing state”. Neither is intuitive; both need to be learned. No one starts off thinking in loops mutating state.

                                  Considering most programmers learn in languages without safe recursion, most programmers have written way more iterative loops and so are most skilled with them. That’s all, and this isn’t a bad thing.

                                  1.  

                                    Neither might be very intuitive. Yet educational experience shows most students pick up iteration quickly but have a hard time with recursion, and that’s among people learning to program for the first time. That indicates the imperative, iterative style is closer to people’s normal way of thinking, or at least more intuitive on average.

                                    Glad we had this tangent, though, because I found an experimental study that took the analysis further than usual. I’ll submit it Saturday.

                                  2.  

                                    I agree. And I think there’s a lot that just isn’t possible with that mindset.

                              2.  

                                Specifically for recursion, I was assuming that the mental shuffle to convert something from recursion (often more elegant and simple) to iteration would cause issues.

                                I think it really depends on the algorithm. To my understanding, mapping and filtering are a lot easier recursively, but reducing and maintaining invariants tend to be easier iteratively.

                              3.  

                                I think I remember reading Russ Cox doesn’t like tail recursion because you lose the debug information in the stack traces.

                                1.  

                                  This is a big pet peeve of mine: because many languages keep only pointers in stack traces, you can’t see what the values were at the time. I think storing the values instead of just the pointers would be expensive, but it sure would be useful.

                                  1.  

                                    What information would you lose?

                                    1.  

                                      I think that in this example, you’d think that initial directly called final:

                                      def initial():
                                          intermediate()
                                      
                                      def intermediate():
                                          final()  # tail call; with TCO this frame vanishes from the trace
                                      
                                      def final():
                                          raise Exception("boom")
                                      

                                      This could make it extremely hard to debug if intermediate happened to modify state and that was the reason why final was failing.

                                      1.  

                                        I think the call stack may be convenient for this purpose, but not necessary. I’m sure there are other (potentially better & more flexible) ways to trace program execution.

                              1. 7

                                In my experience Go has been pretty performant (of course not on C’s level), certainly more than 3.5/5. And I’d be curious to know why you gave it a 3/5 for simplicity. Taking all of C, and not only its official standards, into account, it’s far easier to write Go than (safe, proper) C.

                                One huge pet peeve of mine is a language with concurrency but no notion of immutability.

                                AFAIK they are working on this for Go 2, but in the meantime, I’ve found that using channels instead of “classical” concurrency mechanisms is a good workaround – and quite easy to use, actually.

                                1.  

                                  Last I tested, Python was 100x slower than C, Go was about 2x, and Java was 1.5x. Not a detailed look really, and maybe things have changed since then.

                                  I marked Go down on simplicity because sometimes things that should be simple are not. One example is forking (due to runtime threads); another is dropping user privileges (due to goroutines not mapping to OS threads). It depends on what you are trying to do, usually.

                                  1.  

                                    The Java/Go performance disparities really seem to come down to the kind of task one is doing, as seen here – then again “real world” performance is another question…

                                    One example is forking (due to runtime threads); another is dropping user privileges (due to goroutines not mapping to OS threads). It depends on what you are trying to do, usually.

                                    OK, I understand your point – that Go isn’t the best sysprog language is pretty uncontroversial. I was thinking of the number of concepts and (especially arbitrary) rules a language specifies.

                                    1.  

                                      For many web/API use cases Go is faster than Java but somewhat slower than C++.

                                      1.  

                                        Keep in mind that there are at least two Go compilers: go and gccgo. They have different performance profiles. Gccgo is a GCC frontend. GCC has had built-in support for Go since version 4.6.

                                    1. 12

                                      This is a bit of a wild card; I am not experienced at all with Haskell, but my impression is it may not be well suited to imperative operations like reading buffers from a stream and writing buffers to a stateful database. I could be totally wrong or ignorant and would like to one day address my ignorance.

                                      These are things that I use Haskell for!! As Tackling the Awkward Squad says:

                                      In short, Haskell is the world’s finest imperative programming language.

                                      1.  

                                        exactly the type of thing I love to read, thanks :)

                                        1.  

                                          Can you mark your PDF? Thanks.

                                        1. 27

                                          I think classifying C as 5/5 for simplicity is a bit odd, because it is superficially easy, but actually writing correct code is hard because you need to know the subtleties of the language. No wonder there is a book called “Deep C Secrets”. In this regard Java is much, much simpler: for the most part you don’t need to concern yourself with what the code will compile to, and it will be sort of correct. There are a couple of weird and surprising gotchas, but my impression is that it is overall way more manageable.

                                          Also the amount of tooling required to make C correct is anything but simple.

                                          Of course, all of these points are even more valid in C++.

                                          Also, I think you’re praising Go with points that are just as valid for Python and Ruby, which you’re pretty much damning (“quite a large number of projects invested into it already, has been stable for years and probably isn’t going anywhere”).

                                          1.  

                                            That’s a great point about C, you changed my mind on that one, even more damning for C++.

                                            With regard to Python and Ruby, you are very right; my brain sort of just registered it as a given, but I should have been clearer. I would bet much, much more stuff runs on Python or Ruby than on Go.

                                            1. 7

                                              I think classifying C as simplicity 5 / 5 is a bit odd, because it is superficially easy but then actually writing correct code is hard because you need to know the subtleties of it

                                              Simplicity and ease of use are two different things. You could envision an assembly language for a tiny RISC with load & store and a handful of other instructions; that would get you about as simple a Turing-complete language as one can imagine, but it won’t be easy to write complex and correct software in it.

                                              Now, as for the subtleties of C, I feel like they’re a little overstated. They’re there, but you learn the stuff that matters in a weekend. The rest of Deep C Secrets is just an exposition of funny tricks you can do, if you’re into the IOCCC and party tricks.

                                              Can’t really compare it to Java since I don’t know Java. But going by the spec and features, Java does not appear to be a simpler language.

                                              I definitely agree that C isn’t 5/5, but it’s fairly simple for a mainstream language.

                                              1. 6

                                                Now, as for the subtleties of C, I feel like they’re a little overstated. They’re there, but you learn the stuff that matters in a weekend.

                                                The ceaseless litany of security problems in C codebases is evidence that this is pretty demonstrably untrue.

                                                1.  

                                                  I think it’s evidence that a lot of people write code very poorly and that there’s a lot of old C code. It is possible - easy, really - to write very secure C code. It requires you to use a ‘subset’ of the language, I guess, but it’s not difficult. Certainly, if you argue that C++ is more secure than C, then you’re admitting that the security of a language is based not on all of its features but on the features people use, given that all of C’s security issues exist in C++.
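
                                                  As an illustration of such a ‘subset’ discipline (a hypothetical helper, not from any particular codebase): never perform raw signed arithmetic that could overflow; route it through a checked operation instead.

                                                  ```c
                                                  #include <limits.h>
                                                  #include <stdbool.h>
                                                  #include <stdio.h>

                                                  /* Signed overflow is undefined behavior in C, so a
                                                   * common "safe subset" rule is to check before adding
                                                   * rather than adding and hoping. */
                                                  static bool checked_add(int a, int b, int *out) {
                                                      if ((b > 0 && a > INT_MAX - b) ||
                                                          (b < 0 && a < INT_MIN - b))
                                                          return false;   /* would overflow */
                                                      *out = a + b;
                                                      return true;
                                                  }

                                                  int main(void) {
                                                      int r;
                                                      printf("%d\n", checked_add(2, 3, &r) ? r : -1);
                                                      printf("%d\n", checked_add(INT_MAX, 1, &r) ? r : -1);
                                                      return 0;
                                                  }
                                                  ```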

                                                  1.  

                                                    To be fair, security was on a different 0-5 scale so I would presume explicitly excluded from this one. Also, it scored a 0 on security, so I think the author agrees with you.

                                                  2.  

                                                    Plus, there are tools that can check for funky stuff. KCC, the executable semantics for C, is said to crash when it detects undefined behavior during a compile. So one idea I had was making sure all the safer modules compile clean with it.

                                                    1.  

                                                      Simplicity and ease of use are two different things.

                                                      I agree with you here; unfortunately the article does not draw a distinction between simplicity and ease of use. When it comes to simplicity, assembly is pretty simple, as are stack-based languages like Forth, with C being somewhere in the middle ground. In this regard Java is not simple, agreed.

                                                      When it comes to ease of use it is different, but one can also evaluate it in different contexts. Writing a C program that compiles is quite easy; writing correct C programs at scale is a very different matter. Maybe that’s another possible point of comparison: how well a language scales from simple to complex tasks.

                                                  1. -10

                                                      We don’t need more kilograms of heavy UIs to make people “click OK and get on with things”; we need people to understand how Git actually works.

                                                      Remember that Git is not a new iPhone and it doesn’t need to be usable by idiots, morons and people with zero knowledge - it actually raises the bar a bit. But if you sit down and focus on how Git really works, you’ll be able to answer yourself the question “What do I need to do to achieve that…?” in less than an hour.

                                                      Instead of copying and pasting magic spells from Stuck Overblown.

                                                    1. 22

                                                      Please could you not be this acerbic here in future? Thanks.

                                                      1. 5

                                                          I think both sides are right: people could become more expert at easier tools. The nice thing with this UI is it doesn’t seem to hide the underlying git commands; you can hover over the buttons to see what they do.

                                                        1.  

                                                          What @0x2ba22e11 said.

                                                          1.  

                                                            Despite your tone, you’re not entirely wrong when it comes to coders.

                                                            Admittedly, Git could do with some overhaul of its command-line arguments to make them more logical, but it’s a power tool, and it’s possible to wrap your head around it. Not everyone does, but I think they should, like with their editor/IDE of choice.

                                                            If someone’s more productive with a UI like this and there’s no real detriment, let the markets decide.

                                                            Git can be used to track non-code content, though. The Apollo 17 project comes to mind, as it’s one of my favorite things online.

                                                            I contributed to it using CLI Git, but for less technical people to get involved in things like that, there’s nothing wrong with a GUI.

                                                          1. 7

                                                            If someone visits an old post of yours through an outdated snapshot (like this very link), how do you ensure they get to see newer posts? If I click “blog” at the top, it just shows me the archive of posts that existed up to that point. The same is true for the “subscribe” RSS link, which means I’d never get updates, if I understand this correctly.

                                                            1. 2

                                                              This link is ipns not ipfs. ipns can be updated. ipns is like git refs, ipfs are like object hashes.

                                                              1. 1

                                                                Ah, so even though there’s a huge hash in the URL, that doesn’t mean it is referring to a single point in time!

                                                                1. 3

                                                                  Yep! An IPFS hash refers to a specific piece of content and can never change. An IPNS name points to a hash, but can be moved around. Once IPFS/IPNS are more widely integrated, DNS and/or NameCoin can be used to provide human-readable names.

                                                                  1. 2

                                                                      I think an ipns URL is more like the hash of a public key, so the updater can sign the changes with the private key.

                                                              1. 3

                                                                 I think compiling to ‘readable’ C would be a good choice for a lot of projects. It would make using a less mainstream language waaaay less risky.

                                                                1. 4

                                                                  I think interoperability with C is much more important than generating readable C. You’re generally not going to be interacting with the generated code, but it should be easy to link against it from C and vice versa. You get the same risk mitigation either way.

                                                                  I found generating “readable” C to be tricky, since it’s such a simple language and has no namespacing.
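
                                                                    The usual workaround for the missing namespaces is prefix-mangling; a sketch (the module and symbol names below are made up, not real compiler output):

                                                                    ```c
                                                                    #include <stdio.h>

                                                                    /* Hypothetical generated C: a source-level `math.add`
                                                                     * can't be plain `add` without risking collisions, so
                                                                     * the generator mangles the module path into the name. */
                                                                    static int mylang_math_add(int a, int b) { return a + b; }

                                                                    int main(void) {
                                                                        printf("%d\n", mylang_math_add(2, 3));
                                                                        return 0;
                                                                    }
                                                                    ```

                                                                    It links cleanly from hand-written C, but it is a stretch to call the result “readable”.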

                                                                  1. 3

                                                                    For adoption, my default recommendation now is using C’s types, its calling conventions, automatically recognizing its libraries in FFI, and compiling to readable C. Better to be using the C ecosystem than competing with it entirely.

                                                                    1. 4

                                                                      For adoption, my default recommendation now is using C’s types, its calling conventions, automatically recognizing its libraries in FFI, and compiling to readable C.

                                                                      I’ve focused on that in my last two compilers, it’s pretty fun to ship code to clients who don’t even know you’re writing in something else.

                                                                      1. 2

                                                                        Yeah. I used to do it with an enhanced BASIC. They were talking about me using a “real” language. Lulz.

                                                                  1. 6

                                                                    You don’t need to specify the compile line if it’s a C or C++ file:

                                                                    foo : $(OBJS)
                                                                    

                                                                    is enough. Here’s the Makefile (GNU Make, excluding dependencies) for a 150,000+ line project I have:

                                                                    %.a :
                                                                    	$(AR) $(ARFLAGS) $@ $?
                                                                    
                                                                    all: viola/viola
                                                                    
                                                                    libIMG/libIMG.a     : $(patsubst %.c,%.o,$(wildcard libIMG/*.c))
                                                                    libXPA/src/libxpa.a : $(patsubst %.c,%.o,$(wildcard libXPA/src/*.c))
                                                                    libStyle/libStyle.a : $(patsubst %.c,%.o,$(wildcard libStyle/*.c))
                                                                    libWWW/libWWW.a     : $(patsubst %.c,%.o,$(wildcard libWWW/*.c))
                                                                    viola/viola         : $(patsubst %.c,%.o,$(wildcard viola/*.c))	\
                                                                    		libIMG/libIMG.a		\
                                                                    		libXPA/src/libxpa.a	\
                                                                    		libStyle/libStyle.a	\
                                                                    		libWWW/libWWW.a
                                                                    

                                                                    I have a rule to automatically make the dependencies.

                                                                    1. 3

                                                                      foo: $(OBJS) with no command is not enough for non-GNU make.
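                                                                      A portable variant needs the link command spelled out explicitly — a sketch, reusing the `$(OBJS)` variable and conventional flag macros from the comment above:

                                                                      ```make
                                                                      # POSIX make has no built-in rule for linking objects into an
                                                                      # executable, so the recipe must be given by hand (note the
                                                                      # literal tab before the command).
                                                                      foo: $(OBJS)
                                                                      	$(CC) $(CFLAGS) $(LDFLAGS) -o foo $(OBJS) $(LDLIBS)
                                                                      ```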

                                                                      1. 9

                                                                        Ah. I haven’t used a non-GNUMake in … 20 years?

                                                                        1. 2

                                                                          Indeed! It is way better to use a portable make than to write portable makefiles :)

                                                                          1. 3

                                                                            In that case I’ll use NetBSD make ;)

                                                                      2. 1

                                                                        For < 10k-line projects I tend to just do ‘cc *.c’ :P Usually it is fast enough.

                                                                        1. 1

                                                                          Meanwhile, I have a 2.5kloc project where a full recompile takes 30 seconds, so incremental compilation is kind of necessary :p C++ is slooow.

                                                                        2. 1

                                                                          Is this a revived ViolaWWW?

                                                                          1. 1

                                                                            Somewhat. It’s one of those “I want to do something but I don’t know what” type projects where I clean up the code to get a clean compile (no warnings—I still have a ways to go). It works on a 32-bit system, but crashes horribly on 64-bit systems because of the systemic belief that sizeof(int)==sizeof(long)==sizeof(void *).
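                                                                            The 32-bit assumption described above can be shown with a few sizeof checks — a sketch; the sizes printed are what you’d see on a typical LP64 64-bit Unix, not something the C standard guarantees:

                                                                            ```c
                                                                            #include <stdio.h>

                                                                            int main(void) {
                                                                                /* On ILP32 systems all three are 4 bytes, so code that stuffs a
                                                                                   pointer into an int "works". On LP64 (most 64-bit Unix), int
                                                                                   stays 4 bytes while long and pointers grow to 8. */
                                                                                printf("int: %zu, long: %zu, void *: %zu\n",
                                                                                       sizeof(int), sizeof(long), sizeof(void *));

                                                                                /* Storing a pointer in an int truncates it here -- exactly the
                                                                                   kind of code that runs fine on 32-bit and crashes on 64-bit. */
                                                                                if (sizeof(int) < sizeof(void *))
                                                                                    puts("an int cannot hold a pointer on this platform");
                                                                                return 0;
                                                                            }
                                                                            ```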

                                                                        1. 1

                                                                          So Cloudflare wants to help decentralisation by… making it more centralised?

                                                                          1. 2

                                                                            If they are adding IPFS nodes, who cares? You can always run your own gateway if you don’t like it.

                                                                            1. 2

                                                                              Do more nodes help IPFS, or are nodes that pin content what’s actually useful? With the pull model, where data sits at the origin unless pinned, it seems the originators of the data would always have the problem of how to distribute it so as not to face the “thundering herd” of initial requesters.

                                                                              1. 2

                                                                                As far as I know the protocol doesn’t actively push blocks around; the various nodes have to request and pull them, and that only happens when a client requests the block from a node. The protocol also doesn’t provide an incentive for nodes to keep the data around. I don’t know what the default retention logic on the actual nodes is.

                                                                                In practice what this translates to is that there isn’t much guarantee the data will stay around. Typically a client will want to get the data over HTTP and only talk to a single gateway (like the Cloudflare one), so the only copies would be on the serving node (your laptop / server) and that single gateway. If your laptop disappears, Cloudflare will probably flush its cache at some point and your page will have disappeared entirely.

                                                                                Hopefully in the future the browsers will be talking IPFS directly and store local copies. That way the network would become much more resilient to data loss.

                                                                                1. 1

                                                                                  A nice system might be a provider where you pay the lifetime price of hosting something upfront, with storage costs extrapolated into the future.

                                                                            2. 2

                                                                              I guess it does make sense. If the Internet does shift to become more decentralized, Cloudflare would want a piece of that pie as well. Worst case nobody uses this service and they close it eventually, and best case they just future-proofed their business.

                                                                              I also think that having them run an IPFS node is pretty helpful. If there’s a way to translate a website address proxied by Cloudflare into a file hash, it would help decentralization.

                                                                            1. 2

                                                                              A mini project for your enjoyment -

                                                                              A self-hosted, annotation-supporting pastebin alternative using IPFS and Genius: https://cloudflare-ipfs.com/ipfs/QmPaZhmd6tni2Tm2saPwXzdtrgb3HLyGrZic3AgHRvKYup

                                                                              You may need to disable adblock to see my annotations. This may be a security problem; I have no idea how IPFS gateways and JavaScript interact.

                                                                              1. 1

                                                                                Btw, anyone with a good knowledge of internet security, how does the fact that the CDN is on a single domain interact with javascript?

                                                                                1. 1

                                                                                  You’re right that a lot of web security features go out the window. Top of the list is any CSRF protection.

                                                                                  I wouldn’t log into a service or do anything trust-sensitive over an IPFS gateway… Gateways are more a neat hack than the meat of IPFS.

                                                                                  1. 1

                                                                                    What about the custom domain thing? Is it a redirect or does the browser actually know that it’s a separate domain? (Basically, has anyone tried this yet?)

                                                                                    1. 2

                                                                                      Yes, that would put some of the security back for some visitors depending on how they visit the site.

                                                                                      If you want to get familiar with IPFS and related internet technologies, your time would be much better spent grabbing the client and jumping in that way. IPFS is a fundamentally different way of approaching the internet that focuses on what you’re getting over who you’re getting it from.

                                                                                      1. 1

                                                                                        I’ve always shied away from IPFS for that very reason. Figuring out where to get something based on a file hash sounds computationally hard and overall just slow. I can’t imagine anything like that running on phones, laptops, etc. without destroying all hope of good battery life and reasonable performance.

                                                                                        Maybe it does work, though. I just downloaded it on my laptop and it got warm very fast, but files I tried to get loaded reasonably quickly.

                                                                              1. 3

                                                                                Wrote some Nim code, really enjoyed it, and the performance matched C code. Just need to finish it off, then get back to more err != nil Go code.

                                                                                1. 3

                                                                                  Myrddin is such a fun language; I encourage everyone to give it a try.

                                                                                  1. 1

                                                                                    Have you written anything major in it? I like that it supports most of the platforms I use, but I haven’t seen much written in it…

                                                                                    1. 2

                                                                                      I wrote the C compiler mentioned in that post and a few command line utilities like https://github.com/andrewchambers/ddmin . The compiler code was probably the biggest thing I wrote.

                                                                                      1. 1

                                                                                        oh that’s beautiful, thank you! Are there any pain points you’ve experienced with using it?

                                                                                        1. 3

                                                                                          Not much pain, really. It’s just a small project, so you can’t expect too many libraries; be patient with the docs and help fix them if you can.

                                                                                  1. 3

                                                                                    Would Zig’s compile-time evaluation be powerful enough for something like a string → PEG parser as a library?

                                                                                    1. 1

                                                                                      The only potential roadblocks I foresee for this use case are:

                                                                                      • Zig compile-time code execution is much slower than it should be. It should be able to roughly match CPython’s performance, but it’s much slower and doesn’t free memory. Ironically, we actually need to add a garbage collector to compile-time code execution.
                                                                                      • The Zig compiler doesn’t yet have sophisticated caching that would make it practical to have a really complicated compile-time implementation, so you’d wait for your thing to run on every build.

                                                                                      Both are planned to be fixed; it’s just a matter of time.

                                                                                      1. 1

                                                                                        That’s interesting, so you have a full Zig interpreter that runs at compile-time?

                                                                                        But won’t collecting garbage make it slower? Are the compile-time programs allocating so much that they need to free memory?

                                                                                        I’m curious if any other languages have run into this problem.

                                                                                        1. 2

                                                                                          so you have a full Zig interpreter that runs at compile-time?

                                                                                          It’s a little more complicated than that. Zig AST compiles into Zig IR. Each instruction’s value is either compile-time known or not. Most instructions which have all compile-time known operands produce a compile-time known result. There are some exceptions - for example, external function calls always produce a runtime result.

                                                                                          For if statements and switch statements whose condition/target value is compile-time known, the branch is chosen at compile time. This means that Zig has an “implicit static if”. E.g. if you write if (false) foo(); then foo() is not even analyzed, let alone included in code generation.

                                                                                          In addition, there is the comptime expression: https://ziglang.org/documentation/master/#Compile-Time-Expressions This causes all the branches and function calls - including loops - to be compile-time evaluated.

                                                                                          But, importantly, you can mix compile-time and run-time code. Variables can be marked comptime which means that loads and stores are always done at compile time.

                                                                                          For loops and while loops can be marked inline which unrolls the loops and makes the iteration variables known at compile-time. You can see this in action for the printf implementation: https://ziglang.org/documentation/master/#Case-Study-printf-in-Zig

                                                                                          But won’t collecting garbage make it slower?

                                                                                          I can’t answer this in a clear way yet as I haven’t tried to solve it. The basic problem is the same as in e.g. Python where you could potentially have 2 compile-time values with references to each other, but not referenced from any root that is actually going to go into the executable, so they should not be in the binary.

                                                                                          In Debug builds, Zig has a goal of compiling fast and is willing to create a more bloated binary with worse runtime performance. In ReleaseFast builds, Zig can take a few orders of magnitude longer to compile, but the performance should be optimal and bloat should be minimal. So it might be a thing where Zig does not garbage collect comptime values for Debug builds unless they start to use too much memory, but it would certainly take the time to do this for ReleaseFast builds.

                                                                                          Are the compile-time programs allocating so much that they need to free memory?

                                                                                          I don’t personally have any use cases where that is true, but in general I could create a program that allocates an arbitrarily large amount of memory at compile time in order to do a computation, where that value is not ultimately used in the binary, yet the allocated objects hold references to each other, and so they would fool the reference counter.

                                                                                        2. 1

                                                                                          Ironically we actually need to add a garbage collector in compile time code execution.

                                                                                          Why? It seems like if you allocate and free as you would in normal Zig, this wouldn’t be a requirement.

                                                                                          1. 1

                                                                                            That’s really cool. Things like regexes or SQL statements could be pre-prepared at compile time with features like this.

                                                                                        1. 2

                                                                                          First thing that came to my mind was Bindy McBindface.

                                                                                          C must be the programming language with the most compilers in existence. I’ve seen a dozen or more independent implementations already.

                                                                                          1. 1

                                                                                            It’s also one of the only languages I trust to still work 10 years from now, because there are so many implementations.