1. 5

    If you want something like this for your own language, it may be worthwhile to check out my cffi-gen. It generates json files from C headers.

    Also, I have a C parser implemented in Lever. I dunno if anyone would use it, especially since I’m about ready to stop its development and I didn’t get to finish it properly, but it may help if you are planning to do something like this yourself: c.lc, cffigen.lc. This project ultimately suffered from a kind of inverse second-system syndrome. I still ended up using the original cffi-gen to generate most of the stuff, because it still works.

    1. 2

      It generates json files from C headers.

      That’s a great idea! Makes it more reusable across projects.

    1. 3

      I smelled that the author is pushing his own thing, so I went to see what’s going on.

      This is actually post-rationalization: although it gives a good rationale, that is not really what’s going on. What is actually going on is that the modulus and the division are connected.

      The way they are connected can be described as: a = (a / b) * b + a % b.

      Division gives a different result depending on the rounding. The C99 spec says that the rounding goes toward zero, but we have also had floor-division implementations and systems where you can decide on the rounding mode.

      If you have floor division, 19 / -12 gives you -2. That is consistent when the modulo operator gives you -5. If you do round-toward-zero division, 19 mod -12 must give you 7.

      On positive numbers, the rounding to zero and floor rounding give the same results.

      I also checked the x86 spec. It’s super confusing about this. If the online debugger I tried was correct, then the x86 idiv instruction does floor division.

      1. 1

        Forgive my extreme mathematical naivety, but a = (a / b) * b + a % b doesn’t make much sense to me. Given (a / b) * b will always equal a, doesn’t this imply that a % b is always 0?

        1. 4

          / in this context is integer division, not rational division, so e.g. 7 / 3 = 2.

          1. 2

            The division operator in this case is not division in the algebraic sense, and it does not cancel with the multiplication, i.e. (a / b) * b = a (for b != 0) does not hold. Otherwise your reasoning would be correct.

            To still make this super clear, let’s look at 19 / -12. The real-number division would give you -1.58... But what we actually have is either division rounded toward negative infinity (floor division) or division rounded toward zero, and it’s not necessarily clear which one it is. Floor division returns -2 and division rounding toward zero returns -1.

            The modulus is connected by the rule that I gave earlier. Therefore 19 = q*-12 + (19 % -12). If you plug in -2 here, you’ll get -5 = 19 % -12, but if you plug in -1 then you get 7 = 19 % -12.

            Whatever intuition there was gets lost in the constraint of sticking to integers or approximate numbers. It is therefore preferable to always treat the modulus as if it were connected with floor division, because the floor-division modulus carries more information than the remainder. But this does not hold on every system, because hardware and language designers are fallible just like everybody else.
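
            To see the two conventions side by side, here is a minimal Python sketch (trunc_divmod is a made-up helper, not anything from the thread, emulating C99’s round-toward-zero rule; Python’s built-in // and % use floor division):

            import math

            def trunc_divmod(a, b):
                # Division rounded toward zero (C99-style); remainder chosen so a == q*b + r
                q = math.trunc(a / b)
                return q, a - q * b

            print(19 // -12, 19 % -12)    # -2 -5   (floor-division pair)
            print(trunc_divmod(19, -12))  # (-1, 7) (round-toward-zero pair)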

        1. 5

          Computer science clock-steps at the rate of algorithms and discoveries. Languages are always going to come and go, unless a language springs from a good theory.

          If you want to understand why this would be true, just look at the history of mathematics. Read about algebraic representations, which kinds of abacuses have been used, slide rules, mechanical calculators. You will find that what we have today is a small fragment of what used to be, and the stuff that still exists was lugged into the present because there aren’t many obviously better ways to do the same thing.

          On this basis, I’d propose that the current “top 20” by Redmonk cannot form any kind of long-running status quo. It’s a large list of programming languages rooted in the same theory (JavaScript, Java, Python, PHP, C#, C++, Ruby, C, Objective-C, Swift, Scala, Go, TypeScript, Perl, Lua).

          There is going to be only one in 30 years, and I think it’ll fall along the C or JavaScript axis. They are syntactically close, and a lot of software was, and still gets, written in these languages. Although there is even more written in C++, it’s way too contrived to survive without being reduced back to something like C.

          CSS may have some chance of surviving, but it’s quite different from the rest. About Haskell I’m not sure. I think typed lambda calculus will appear, or reappear, in a better form elsewhere. That language will be similar to Haskell though, and may bear the same name.

          Unix shell and its commands will probably survive, while PowerShell and DOS will wither. Windows seems to have its days numbered already. Sadly it was not because of the open source movement; Microsoft just botched things up again.

          R seems like a write-and-forget language, but it is rooted in Iverson’s notation. Perhaps the notation itself will stick around, but not the current instances of it.

          I think that hardware getting more concurrent and diverging from the linear execution model will cause a permanent shakeup of this list in the short term. The plethora of programming languages that prescribe a rigid evaluation strategy will simply not survive. Though I have a bit of a bias toward thinking this way, so I may not be a reliable source for predicting the future.

          But I think this is a better basis than looking at programming language rankings.

          1. 8

            I think, most importantly, we haven’t even seen anything like the one language to rule them all. I expect that language to be in the direction of Conal Elliott’s work on compiling to categories.

            A language that is built around category theory from the start: you have many different syntactic constructs, and the ones you use in a given expression determine the properties of the category that the expression lives in. Such a language could locally have the properties of all the current languages and could provide optimal interoperation.

            BTW, I think we won’t be calling the ultimate language a “programming language” because it’ll be as good for describing electrical circuits, mechanical designs and biological systems as for describing programs. So I guess it’ll be called something like a specification language.

            1. 4

              “we haven’t even seen anything like the one language to rule them all.”

              That’s exactly what the LISPers always said they had. Their language could be extended to do anything. New paradigms and styles were regularly backported to it as libraries. It’s also used for hardware development and verification (ACL2).

              1. 3

                Well, it’s hard to say anything about LISPs in general since the span is so vast and academic, and especially for me, since my contact with any LISP is quite limited. But from my understanding of the common usage of LISP, it doesn’t qualify.

                First of all, I think dropping static analysis is cheating, but I don’t intend to wade into an eternal flame war here. What I mean by “the properties of the current languages” is no implicit allocations, borrow checking and inline assembly like in Rust, purity and parametricity like in Haskell, capability security like in Pony, etc., and not only the semantics of these, but also compilers taking advantage of these semantics to provide static assistance and optimizations (like using the stack instead of the heap, laziness and strictness analysis, etc.).

                And I’m also not just talking about being able to embed these into a given language; you should also be able to write code such that, if it’s simple enough, it is usable in many of them. For instance, it’d be hard to come up with language semantics in which the identity function cannot be defined, so the definition id x = x should be usable under any local semantics (after all, every category needs identity morphisms). You should also be able to write code that interfaces between these local semantics without leaving the language and the static analysis.

                I know you can embed these things in LISP, expose enough structure from your LISP code to perform static analysis, get LISP to emit x86 assembly etc. etc. But, IMHO, this doesn’t make LISP the language I’m talking about. It makes it a substrate to build that language on.

            2. 2

              I think one major difference between math and computer science, and why we’re not going to see a lot of consolidation for a while (not even in 30 years, I don’t think), is that code that’s on the internet has a way of sticking around, since it’s doing more than just sitting in research papers, or providing a tool for a single person.

              I doubt we’ll see 100% consolidation any time soon, if for no other reason than that it’s too easy to create a new programming language.

              Hardware changes might shake up this list, though I think it’ll take 30 years for that to be realized, and a lot of programming languages will fall out of that.

              We’re definitely still going to have COBOL in 30 years, and Java, and C. The rest, I’m unsure of, but I’ll bet that we’ll be able to recognize the lineage of a lot of the top 30 when we look in 30 years.

              1. 1

                R seems like a write-and-forget language, but it is rooted in Iverson’s notation.

                Did you mean to write J or APL? I understand R as the statistics language.

              1. 2

                I’m disappointed to read the negative comments on TFA complaining that the author has “merely” identified a problem and called on us to fix it, without also implementing the solution. That is an established pattern; a revolution has four classes of actor:

                • theoreticians
                • propagandists
                • agitators
                • organisers

                Identifying a problem is a necessary prerequisite to popularising the solution, but all four steps do not need to be undertaken by the same person. RMS wrote the GNU Manifesto, but did not write all of GNU. Martin Luther wrote the Ninety-five Theses, but did not undertake all of the Protestant Reformation. Karl Marx and Friedrich Engels wrote the Communist Manifesto but did not lead a revolution in Russia or China. The Agile Manifesto signatories wrote the manifesto for agile software development but did not all personally tell your team lead to transform your development processes.

                Understandably, there are people who do not react well to theory, and who need the other three activities to be completed before they can see their role in the change. My disappointment is that the noise caused by so many people saying “you have not told me what to do” drowns out the few asking themselves “what is to be done?”

                1. 2

                  I read through it and concluded that the author says nothing. I cannot exactly identify the problem he raises. And judging by the first lines, I think he’s an idiot.

                  That computing is so complex is not the fault of nerdy 50-year-old men. If nerdy 50-year-old men had designed this stuff, we’d be using Plan 9 with Prolog and not have as many problems as we do now.

                  The current computing platforms were created by multiple-body companies and committees with commercial interests. They’ve provided all the great and nice specs such as COBOL, ALGOL, HDMI, USB, UEFI, XML and AHCI, just a few to start the list with. All of the bullshit is the handwriting of the ignorant, not of those playing Dungeons & Dragons or solving Rubik’s cubes.

                1. 1

                  Not surprised. With Oracle’s Java copyright lawsuit going the way it is, Google is going to be extorted on licensing prices if they stick to Java, so they pretty much need to kill Android if the lawsuit succeeds. Building their own platform also gives them the copyrights, so the ruling will be helpful in maintaining a tight grip on the new platform.

                  1. 8

                    As I understand it (and as noted by alva above), Fuchsia competes with Linux, not Java. It’s a microkernel, not a language VM or a language. The article was technically confused; the Oracle line was just a throwaway and not terribly accurate.

                    Java is going to be in Android forever, simply because there are hundreds of thousands of apps written in Java. Kotlin was a logical move because it’s very compatible with Java.

                    1. 2

                      I assume the thing everyone is kind of getting at is Flutter, which is the preferred IDE for Fuchsia and it’s not Java-encumbered.

                      1. 2

                        flutter’s not an ide is it?

                        1. 3

                          no, Flutter is a UI framework written in Dart. I think one of the main selling points is that it’s fast enough to avoid C++, and friendly enough (as seen by JS/web developers) to make it easy to build a nice UI with (without resorting to something like Electron).

                  1. 2

                    A programmer’s mental model changes as they learn, and it is very flexible. Statically typed languages cannot match this.

                    Many statically typed languages still let you make really bad errors and botch things in numerous ways despite their type system. The classic example of this is the whole C language, but it is not the only statically typed language riddled with pitfalls. For example, you may use a variable before you’ve set its contents, in some corner case the language designer did not manage to solve. Then you get a surprise null, despite careful use of Maybe or a nullable property.

                    Another example of this kind of failure is the introduction of corruption bugs. Too many popular statically typed languages do not protect you from data corruption when handling mutable data, and do not provide tools to protect your mutable data from such bugs.

                    I think dynamically typed languages are easier to use because they genuinely let you decide later on some of the design problems that you face. They are polymorphic without extra work, and programmers who use them naturally produce more abstract code. I also think that you can prove dynamically typed programs correct, and you don’t need full type annotations for that, which means the program can still be dynamically typed afterwards.

                    They are simply just better programming languages.

                    1. 4

                      Most of these arguments are unrelated to static vs dynamic typing. It sounds like you’re arguing that dynamic languages are easier to quickly prototype ideas, however, which I agree with.

                      1. 2

                        In such situations, I like to bring up Strongtalk and Shen w/ Sequent Calculus & Prolog. Both add typing to high-productivity, dynamic languages/environments.

                    1. 3

                      A message veiled in a personal learning story to make it more palatable. I would not care much, but it scratches an itch.

                      We’ve got a slight tests-versus-type-checking debate going on between the lines. These subjects should not be lumped together, because you can also do Python without any tools and get along just fine.

                      My opinion about types and type checking has changed as well, but it has grown in a very different direction than the poster’s. I have also had some sort of enlightenment along the way. I am a die-hard dynamic-typing proponent. I did not need a static type system, and I did not need tests either. The author had to work with something other than Python to form his opinion. I had to go in the other direction: deeper into Python, and finally into my own improved variation of Python.

                      If type checking is sound and decidable, then it must be incomplete. I’ve realized this is really important, because the set of correct and useful programs that do not pass a type checker is large. Worse, these are often the most important sort of programs, the ones that spare a person from stupid or menial work.

                      “If it compiles, it works” and “a type system makes unit tests unnecessary” are hogwash. It doesn’t really matter how much you repeat them or whether you clothe them in a learning story. There was a recent post pointing out how difficult it is to construct a proof that some small program is actually correct. This means you cannot expect that a program works or is correct just because it type checks, in any language.

                      There is an aspect that is required for making computation and logical reasoning possible in the first place: recognizing the variant and invariant parts of the program. I’ve realized that spamming variants is the real issue in modern dynamically typed languages. That cannot be solved by adding type annotations, because you still have tons of variables in your code that could theoretically change, and you really have to check whether they do; otherwise you have not verified that your program is correct.

                      Statically typed languages commonly do better at keeping the variant parts small, but they are also stupid in the sense that they introduce additional false invariants that you are required to satisfy in order to make the type checking succeed. And you cannot evaluate the program before the type checker is satisfied. This is an arbitrary limitation, and I think people defending it for any reason are just dumb. A type checker shouldn’t be a straitjacket for your language. It should be a tool, and only required when you’re going to formally verify or optimize something.

                      While working on software I’ve realized the best workflow is to make the software work first, then validate and optimize later. Languages like Python are good for the first step, some ML variants are good for the middle, and C and its kin are good for the optimization. So our programming languages have been designed orthogonally to the workflow that makes the most sense, cutting across it.

                      1. 20

                        the set of correct and useful programs that do not pass a type checker is large

                        If it’s large then you should be able to give a few convincing examples.

                        1. 5

                          I haven’t had the problem the quote implies. The basic type systems were about enforcing specific properties throughout the codebase and/or catching specific kinds of errors. They seem to do that fine in any well-designed language. When designers slip up, users notice, and it becomes a best practice to avoid whatever causes the protection scheme to fail.

                          Essentially, the type system blocks some of the most damaging kinds of errors so I can focus on other verification conditions, or on errors it cannot prevent. It reduces my mental burden by letting me track less stuff. One can design incredibly difficult type systems that try to do everything under the sun, and those can add as many problems as they subtract. That’s a different conversation, though.

                          1. 1

                            This set includes programs that could be made to pass a type checker, given that you put extra work into them or use a specific type checker for them. Otherwise that set is empty: for every program you can construct a variation where the parts that do not type check are lifted outside the realm of the type checker. For example, stringly typed code.

                            The recipe for constructing a program that does not pass a type checker is to vary things that have to be invariant for the type checker. For example, if you have a function that loads another function, we cannot determine the type of the function that is produced. If the loaded function behaves like an ordinary function, you end up in a dilemma that you can resolve either by giving it some weird different type that encodes the idea that you do not know the call signature, or by not type checking the program.
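
                            As a minimal Python sketch of that dilemma (load_function is a hypothetical helper, not something from the post): a static checker can at best give the loaded function a catch-all type like Callable[..., Any], which says nothing about its arguments or result.

                            import importlib
                            from typing import Any, Callable

                            def load_function(module_name: str, func_name: str) -> Callable[..., Any]:
                                # The real signature is only known at runtime, after the module is loaded.
                                module = importlib.import_module(module_name)
                                return getattr(module, func_name)

                            sqrt = load_function("math", "sqrt")
                            print(sqrt(2.0))       # fine at runtime
                            # print(sqrt("two"))   # also satisfies the checker, but fails at runtime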

                            Analogous to the function example: if you define the creation of an abstract datatype as a program, then you also have a situation where the abstract datatype may exist, but you cannot type the program that creates it, and you will know the type information for the datatype only after the program has finished.

                            And also consider this: when you write software, you are yourself making an effort to verify that it does what you want. People are not very formal, though, and you will likely find ways to convince yourself that the program works, but those ways do not necessarily align with how the type system thinks about your program. And you are likely to vary the ways you use to conclude that the thing works, because you are not restricted to just one way of thinking about code. This is also visible in type systems, which can themselves be wildly different from each other, such that the same form of the same program does not type check in another type system.

                            For the future, I think I’ll try to collect examples of these kinds of tricky situations. I am going to encounter them, because in my newest language I’ll have type inference and checking integrated into the language, even though the language is very dynamic by nature.

                            There is some work involved in giving you proper examples, and I think people will have already moved on to reading something else by the time I finish, but we’ll eventually return to this subject anyway.

                            1. 6

                              Looking forward to seeing your examples, but until then we don’t have any way to evaluate your claims.

                              About your function loading example, that may or may not be typeable, depending on the deserialisation mechanism. Again, can’t really say without seeing an example.

                              1. 6

                                When you write software, you are yourself doing effort to verify that it does what you want.

                                That’s exactly why I find type systems so useful. I’m doing the effort when writing the code either way; types give me a way to write down why it works. If I don’t write it down, I have to figure out why it works all over again every time I come back to the code.

                            2. 6

                              A message veiled into a personal learning story to make it more palatable.

                              Why do you think this is veiled message instead of an actual personal story?

                              1. 5

                                If type checking is sound, and if it is decidable, then it must be incomplete.

                                Only if you assume that some large set of programs must be valuable. In my experience useful programs are constructive, based on human-friendly constructions, and so we can use a much more restricted language than something Turing-complete.

                                1. 1

                                  If type checking is sound, and if it is decidable, then it must be incomplete.

                                  That’s not a bug. That’s a feature.

                                  If you can implement a particular code feature in a language subset that is restricted from Turing completeness, then you should. It makes the code less likely to have a security vulnerability or bug. (See LangSec)

                                1. 1

                                  You forgot to point out that Python provides a traceback:

                                  Traceback (most recent call last):
                                    File "smalldemo.py", line 17, in <module>
                                      main()
                                    File "smalldemo.py", line 8, in main
                                      bar(target)
                                    File "smalldemo.py", line 11, in bar
                                      foo(target)
                                    File "smalldemo.py", line 14, in foo
                                      value = target.a['b']['c']
                                  KeyError: 'b'
                                  

                                  And the varying error messages it gives are enough to pinpoint which one of these operations failed.

                                  It looks like a good exercise project, though. Someone took the time to work out the documentation, and it doesn’t entirely fail at explaining what this thing is doing. It’s a good exercise, especially for the purpose of explaining what that thing is for. That can get tricky.

                                  1. 2

                                    EDIT I just realized that you were referring to the author’s statement about which dict get is causing the error. My statement below is not relevant to that.

                                    The traceback is good, but if you are getting lots of deeply nested json documents some fields might be present on one document and not on another within the same collection. So you end up in this loop where you process a piece of the collection, hit an exception, stop and fix it. Repeat this a while until you think the code is stable. Then at some point in the future you end up with another piece of a new collection that blows up. C’est la vie.

                                    1. 2

                                      Trust me, no forgetfulness occurred here. If 'b' and 'c' were variables, which they commonly are, you wouldn’t know which one had the value that caused the KeyError. And furthermore, the example was more about the TypeErrors, such as the one raised when a dictionary is replaced with a list.

                                      The traceback sheds no light on that. The only way to make the traceback work is to split the operation up into multiple lines, and that’s why that code ends up verbose and un-Pythonic.
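
                                      To make the first point concrete, here is a minimal sketch with made-up data: both lookups sit on one line and both keys hold the value 'b', so neither the traceback nor the KeyError message can tell you which subscript raised.

                                      target = {"a": {"x": {}}}   # no 'b' at either nesting level
                                      k1, k2 = "b", "b"

                                      try:
                                          value = target["a"][k1][k2]   # the traceback points at this single line
                                      except KeyError as e:
                                          print("KeyError:", e)         # KeyError: 'b' -- the k1 or the k2 lookup?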

                                    1. 3

                                      It would be better to talk about tagged records in place of sum types because you would then immediately understand what the subject is about.

                                      I’m commenting because I think it’s interesting to point out. I plan to write a type system for my language that relies on conversions between types, and on tagged records. It won’t have typecases, though, because I worked out that the definition of a type ended up being very unstable. Also, the type annotations don’t really have a reason to feed back into the value space.
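
                                      As a minimal sketch of what “sum type as tagged record” looks like in practice (a hypothetical Shape example, not from the article): the tag field says which variant a record is, and you dispatch on it instead of on a language-level typecase.

                                      from dataclasses import dataclass

                                      @dataclass
                                      class Circle:
                                          radius: float
                                          tag: str = "circle"

                                      @dataclass
                                      class Rect:
                                          width: float
                                          height: float
                                          tag: str = "rect"

                                      def area(shape) -> float:
                                          # dispatch on the tag instead of a typecase
                                          if shape.tag == "circle":
                                              return 3.14159 * shape.radius ** 2
                                          if shape.tag == "rect":
                                              return shape.width * shape.height
                                          raise ValueError(f"unknown tag: {shape.tag}")

                                      print(area(Circle(2.0)), area(Rect(3.0, 4.0)))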

                                      1. 2

                                        This thing feels genuine, but I can’t stop the feeling that something is missing.

                                        1. 1

                                          This thing feels genuine, but I can’t stop the feeling that something is missing.

                                          The BBS interface. ;)

                                        1. 15

                                          What happened? Did Oracle find a judge who does not know programming?

                                          1. 29

                                            In 2010, Oracle sued Google. In 2012, District Court ruled API uncopyrightable. In 2014, Appeals Court ruled API copyrightable. Google petitioned Supreme Court, which denied the petition. In 2016, District Court, operating under the assumption that API is copyrightable, ruled Google’s use was fair use. In 2018, Appeals Court ruled Google’s use was not fair use. Now the case is back in District Court to determine the damage, operating under the assumption that API is copyrightable and Google’s use was not fair use.

                                            1. 3

                                              Most people do not understand the significance of this decision, so it’s enough for Oracle to re-roll the dice until they get the answer they want.

                                              Besides, I think the crowds inflate the significance of this. It’s almost as if somebody here unconditionally respected copyrights.

                                            1. 1

                                              Reorder items not appearing in the LCS, but present in the structures.

                                              What’s an LCS?

                                              1. 1

                                                LCS is an acronym for the longest common subsequence problem. You may know it if you’ve studied how diff works, because it’s one way, though not the only way, to calculate a diff between two text files. The point of using it here is to keep the number of moves of indeterminates small while shuffling them into the same order. The LCS reveals the longest sequence of indeterminates that are already in the same order.

                                                I added this into the post as well.
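
                                                For illustration, here is a minimal dynamic-programming LCS sketch in Python (not the post’s code): the elements in the LCS are already in the same relative order in both sequences, so only the remaining elements need to be moved.

                                                def lcs(a, b):
                                                    # table[i][j] = length of the LCS of a[:i] and b[:j]
                                                    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
                                                    for i, x in enumerate(a, 1):
                                                        for j, y in enumerate(b, 1):
                                                            table[i][j] = table[i-1][j-1] + 1 if x == y else max(table[i-1][j], table[i][j-1])
                                                    # walk back through the table to recover one LCS
                                                    out, i, j = [], len(a), len(b)
                                                    while i and j:
                                                        if a[i-1] == b[j-1]:
                                                            out.append(a[i-1]); i -= 1; j -= 1
                                                        elif table[i-1][j] >= table[i][j-1]:
                                                            i -= 1
                                                        else:
                                                            j -= 1
                                                    return out[::-1]

                                                print(lcs("xzy", "xyz"))   # ['x', 'z']: these stay in place, only 'y' has to move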

                                              1. 1

                                                Most of the points raised in the post are quite awful.

                                                1. If you don’t write code that can be read, then you should learn. Don’t use comments as a crutch as that will fail.
                                                2. You can have ‘sections’ titled with comments, but your code should be structured enough to not need them.
                                                3. Author, date, license, copyright: these things do not belong in the code. Author info belongs in the AUTHORS file and in git. The date comes from ctime and git. The license belongs in the LICENSE file. Copyright is implied.
                                                4. All rules fly out of the window with esolangs.
                                                5. Appeal to authority. Very convincing.
                                                6. Generating documentation out of your comments is an ugly and terrible way to do documentation.
                                                7. TODO is indeed a great way to use comments. Though, hilariously often, you notice that the TODO can just be removed, because there was a better way to do the same thing or the thing in the TODO didn’t matter after all.

                                                The things that the code doesn’t tell you, or that are hard to gather from the code, should go into the comments.

                                                When I think something will be even slightly surprising later on, I write a comment to describe why it’s there and what it’s needed for. That has turned out to be useful, because it allows you to get back into the code much faster than otherwise.
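
                                                A tiny illustration of that habit (a hypothetical example): the comment records why the surprising line exists, not what it does.

                                                import time

                                                class Device:
                                                    # Stand-in for a real device handle, just to keep the sketch runnable.
                                                    def poll(self):
                                                        print("polled")

                                                def poll_devices(devices):
                                                    for device in devices:
                                                        # Why-comment: the delay looks pointless on its own, so record the
                                                        # (hypothetical) reason: the firmware drops requests that arrive
                                                        # within ~50 ms of the previous one.
                                                        time.sleep(0.05)
                                                        device.poll()

                                                poll_devices([Device(), Device()])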

                                                1. 2

                                                  To me this reads like recommendations for internal corporate code shops.

                                                  1. 2

                                                    If you don’t write code that can be read, then you should learn. Don’t use comments as a crutch as that will fail.

                                                    A nice ideal, but why insist that code take on the job of conveying the full semantic intent of the algorithm it’s implementing? Including natural language comments can separate the concern of semantic communication from the concern of accurate and clear implementation.

                                                    1. 1

                                                      I do not think I insisted anything like that.

                                                      Readability is about the ability to understand the written code. There is no requirement to tell anything more than what the code itself inherently tells.

                                                  1. 6

                                                    This is a problem with any score-keeping or “gamification” mechanism online.

                                                    • You often forget the fun part and take the scoreboard as a measure of who’s the better person. Until you realize it does not matter. It never mattered anywhere. You see, life is full of scoreboards like that, starting from school. A lot of people want something from you, and they dress it up as scoreboards.
                                                    • The score-keeping is always disconnected from what is actually desired (good participation in discussion, good articles, great questions/answers).
                                                    • There are people who love playing, and they take the optimal route to gain score. This often means that they “drop the payload” that the whole score-keeping system was supposed to support.

                                                    Though I honestly wonder where people go to talk once reddit/lobsters has been explored..

                                                    1. 5

                                                      Though I honestly wonder where people go to talk once reddit/lobsters has been explored..

                                                      IRC?

                                                      1. 2

                                                        When I joined reddit years ago I cared about my karma count so much that I would delete comments when they started to get downvoted, so I wouldn’t lose karma. But after hitting about 10k I stopped caring, and now I can say what I want and stand behind it, even if it’s not a popular opinion.

                                                        Reminds me of the first episode of Black Mirror season 3. Only when you ignore scoring systems can you truly be free.

                                                      1. 1

                                                        I had a friend visit to learn programming a few days ago. That made me finally realize what’s wrong with Quorum.

                                                        I was teaching the basics of Java: variables, types, conditionals, constants, methods. I showed the friend how to think about programming by writing a naive Fibonacci function from its definition, discussed properties of the produced code and such, and after a few minutes prompted him to do the same exercise with the factorial function.

                                                        I told him not to care about Java’s syntax and to write down the ideas first. The code he ended up writing was something like:

                                                        static public void factorial() {
                                                            if n = 0 then factorial = 1
                                                            if n = 1 then factorial = 1
                                                            else if n=2 then factorial * factorial - 1
                                                        }
                                                        

                                                        Well… You can see some obvious BASIC influence there, along with influences from other languages commonly used for teaching people.

                                                        I then helped him by rewriting these concepts in actual Java and left him the recursive part of the code to figure out on his own, again discussing what is going on there.

                                                        The problem is that there is practically no human being on Earth who has not learned some programming before. If you ask for their opinion with A/B tests, they tend to pick the patterns that are familiar to them.

                                                        What’s familiar to today’s people is not going to be familiar to tomorrow’s.

                                                        1. 2

                                                          It sounds like this page misuses the term “straw man”.

                                                          To “attack a straw man” is to refute an argument that was not made by anyone.

                                                          There are people who have used these arguments, and they’re only straw men in the sense that they have been dissociated from the people who made them.

                                                          1. 5

                                                            To “attack a straw man” is to refute an argument that was not made by anyone.

                                                            While this might be one definition, it’s most certainly not the only one people think of when talking about “straw man” or “straw person” arguments. Another usage I have heard people use, and what I understand this page to imply, is to simplify or dumb down an opponent’s position and then attack that easily attackable version, thereby avoiding the actual issue. I believe that is what is being done here: they take points raised against Lisp and Lisp-like languages, and show that these either miss the point or don’t really make sense anyway.

                                                            But regardless of whether it’s a “misuse” of the term or not, I believe everyone on this site is bright enough to understand what is intended, regardless of this or that proposed formal definition, as well as being able to enjoy an article on its own merits, instead of on an abstract notion of total semantic correctness with regard to each term, especially when that is not crucial to the intent of the piece.

                                                          1. 2

                                                            An update on how the performance characteristics of this thing changed after the JIT would have been interesting. The paper’s 11 years old now.

                                                            1. 3

                                                              There’s a follow-up paper (2010) with the JIT implemented and reporting some benchmarks. I don’t think the project continued after that, although the lead author has continued research in related areas.

                                                              1. 2

                                                                Oh darn, I didn’t realize he was also an author of a paper I was saving for next week when more readers are on. He’s doing some clever stuff. Interestingly, he also ported Squeak to Python while Squeak people made it easier to debug Python using Squeak’s tooling. Some odd, interesting connections happening there.

                                                            1. 0

                                                              I hate this project. It’s had big names, big fanfare, big talks, big money grants; they were hiring people.

                                                              But very little that mattered was done in the end. We barely remember this project by now, and nobody will even care about this thing in 15 years. It created absolutely nothing of value, and all the ideas presented were already done better in the 1980s. It was kind of Bret Victorian in that sense.

                                                              And now it’s going to die. If they had just pushed on, something might have come out of this. Now it is guaranteed that all of it will be forgotten in a very short time.

                                                              If you’re going to shoot for the moon, could you at least try to aim upwards and put enough fuel in the tank?

                                                              1. 2

                                                                I love all the saltiness. I wish GPU prices would rise a little bit more still, so we’d see more bickering.

                                                                I think the prices will keep rising, though. Crypto prices are rising, and it’ll be profitable to buy those cards right off the shelves.

                                                                It’s hard to see how this could be bad, except for a few PC enthusiasts who end up paying a bit more for their hardware in the short term. GPU hardware also faces demand for improvement for general-purpose parallel workloads, and that cannot happen solely on the terms of cryptocurrency mining, because the demand mining creates can collapse without warning.

                                                                Good times ahead.

                                                                1. 1

                                                                  There are plenty of tutorials like this out there. Some are cross-platform; this one is Linux-only and omits the W^X restriction that’s expected these days. Most of them write hex bytes into a buffer and then call it, roughly like the sketch below.
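
                                                                  A minimal Linux/x86-64 Python sketch of that pattern (my own illustration, not the tutorial’s code, and with no W^X handling, which is exactly the omission mentioned above):

                                                                  import ctypes, mmap

                                                                  # x86-64 machine code for: mov eax, 42 ; ret
                                                                  code = bytes([0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3])

                                                                  buf = mmap.mmap(-1, len(code), prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
                                                                  buf.write(code)

                                                                  # Wrap the buffer's address as a C function taking no arguments and returning int
                                                                  addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
                                                                  func = ctypes.CFUNCTYPE(ctypes.c_int)(addr)
                                                                  print(func())   # 42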

                                                                  I’m a bit tired of seeing clones of the same story. It’d be nice if people writing new tutorials would sometimes continue from someone else’s tutorial.