1.  

      We should get rid of votes altogether. After watching this site recently, I see a poor relationship between upvoted comments and how much effort/information a commenter puts in. Downvotes have always been used to indicate disagreement or displeasure (merely marking an unpopular opinion), despite the guidelines saying they are to be reserved for disruptive content.

      I find myself skimming all the comments because I know many good comments will be down below, past the anodyne, popular ones.

      1. 5

        I’d kind of like to get rid of the bimonthly discussions about voting.

        1.  

          It’s the only standup we have.

          1.  

            Come on: some of the political commentary should count as standup. If I say nothing more, everyone will imagine something different. Haha.

        2.  

          I’d upvote this comment to express my complete agreement… but I can’t because of the little ~. Oh well.

          The de-facto cultural meaning of up and down votes has been largely defined by Reddit, and will not be easily redefined by site-specific guidelines off on a wiki somewhere. People don’t work like that.

          Simple chronological ordering of comments would work just fine for Lobste.rs, and discourage groupthink. We don’t need to rank users by popularity, or hand out little micro-rewards to those who post popular things. Or, do we?

          1.  

            Simple chronological ordering of comments

            That would be interesting. The moderators do want the agitated or lower-quality stuff toward the bottom and collapsed for a reason, though. Even I compromised in the metas on censorship that I’d take “Collapsed, not Deleted/Banned” as a default if we couldn’t agree on anything better. It lets casual readers get a lower-stress, lower-noise experience with little effort, while people with more patience to explore riskier threads can still hit plus to see them. I always do that just to see what’s going on in the community, if nothing else. There are also often some good comments in there somewhere if it’s a debate.

        1.  

          For DOS LM 1.0, the redirector took up 64K of RAM

          For LAN Manager 2.1, we finally managed to reduce the below 640K footprint of the DOS redirector to 128 bytes.

          There are numbers here I don’t follow. I think I understand what took 64K. What took 640K? And either way, reducing even 64K to 128 bytes seems amazing. I do miss a bit of detail here; I would appreciate some details on the “truely (sic) clever programming”.

          1.  

            The 64K is the size of the program (a daemon, really). 640K is the famous memory limit of x86 running in real mode. But there is expanded memory beyond that. They wrote a 128-byte program which lives below 640K and jumps to expanded memory as necessary, which is where the 64K program now lives.

            1.  

              Cool, thanks! @tedu That paragraph is better than the entire article :D

          1. 1

            As a genuine question from someone who hasn’t used procedural programming productively before, what would be the benefits of a procedural language to justify its choice?

            1. 3

              I would say less conceptual/cognitive overhead, but I don’t know if that’s something that can be said of this language as a whole, as I have no experience with it.

              By that I mean something like: I have a rough idea of what code I want from the compiler, how much mental gymnastics is required to arrive at the source-level code that I need to write?

              I would imagine that’s an important consideration in a language designed for game development.

              1. 4

                Yeah, it makes perfect sense.

                To dumb down Kit’s value prop, it’s a “Better C, for people who need C (characteristics)”.

              2. 2

                On top of alva’s comment, they compile fast and are easy to optimize, too.

                1. 1

                  I looked this up for some other article on lobste.rs. I found Wikipedia to have a nice summary:

                  https://en.wikipedia.org/wiki/Procedural_programming

                  Imperative programming

                  Procedural programming languages are also imperative languages, because they make explicit references to the state of the execution environment. This could be anything from variables (which may correspond to processor registers) to something like the position of the “turtle” in the Logo programming language.

                  Often, the terms “procedural programming” and “imperative programming” are used synonymously. However, procedural programming relies heavily on blocks and scope, whereas imperative programming as a whole may or may not have such features. As such, procedural languages generally use reserved words that act on blocks, such as if, while, and for, to implement control flow, whereas non-structured imperative languages use goto statements and branch tables for the same purpose.

                  My understanding is that if you use, say, C, you are basically using procedural language paradigms.

                  1. 2

                    Interesting. So basically what was registering in my mind as imperative programming is actually procedural.

                    Good to know. Thanks for looking it up!

                    1. 2

                      I take “imperative” to mean based on instructions/statements, e.g. “do this, then do that, …”. An “instruction” is something which changes the state of the world, i.e. there is a concept of “before” and “after”. Lots of paradigms can sit under this umbrella, e.g. machine code (which are lists of machine instructions), procedural programming like C (where a “procedure”/subroutine is a high-level instruction, made from other instructions), OOP (where method calls/message sends are the instructions).

                      Examples of non-imperative languages include functional programming (where programs consist of definitions, which (unlike assignments) don’t impose a notion of “before” and “after”) and logic programming (similar to functional programming, but definitions are more flexible and can rely on non-deterministic search to satisfy, rather than explicit substitution).
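
                      To make the contrast concrete, here is a small Python sketch of my own (an illustration, not from any particular language above): the first sum is imperative, a sequence of instructions with a “before” and “after”; the second is a definition in terms of other definitions.

                      # Imperative: instructions that change state over time.
                      def total_imperative(xs):
                          total = 0              # state "before"
                          for x in xs:
                              total += x         # each instruction updates the state
                          return total           # state "after"

                      # Functional: the sum of a list *is* its head plus the sum of its tail,
                      # with 0 for the empty list. No mutation, no timeline.
                      def total_functional(xs):
                          return 0 if not xs else xs[0] + total_functional(xs[1:])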

                      1. 1

                        If functional programs don’t have a notion of before and after, how do you code an algorithm? Explain Newton’s method as a definition.

                          1. 1

                            both recursion and iteration say “do this, then do that, then do … “. And “let” appears to be assignment or naming so that AFTER the let operation a symbol has a meaning it did not have before.

                            // open some namespaces
                            open System
                            open Drawing
                            open Windows.Forms
                            open Math
                            open FlyingFrog
                            

                            changes program state so that certain operations become visible AFTER those lines are executed, etc.

                            1. 3

                              It is common for computation to not actually take place until the result is immediately needed. Your code may describe a complicated series of maps and filters and manipulations and only ever execute enough to get one result. Your code looks like it describes a strict order the code executes in, but the execution of it may take a drastically different path.

                              A pure functional programming language wouldn’t be changing program state, but passing new state along probably recursively.
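
                              A rough Python sketch of that idea (generators make the laziness visible: only as much of the pipeline runs as the single demanded result requires):

                              def square(x):
                                  print("computing", x)   # a visible side effect, so we can see what runs
                                  return x * x

                              squares = map(square, range(10_000_000))          # nothing computed yet
                              evens   = filter(lambda n: n % 2 == 0, squares)   # still nothing

                              print(next(evens))   # prints "computing 0" then 0: one element's work, not ten million's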

                              1. 1

                                but you don’t really have a contrast with “imperative” languages - you still specify an algorithm. In fact, algorithms are all over traditional pure mathematics too. Generally the “state” being changed is on a piece of paper or in the head of the reader, but …

                              2. 1

                                so that AFTER the let operation

                                If we assume that let is an operation, then there is certainly a before and an after.

                                That’s not the only way to think about let, though. We might, for example, treat it as a form of linguistic shorthand, treating:

                                let x = somethingVeryLongWindedInvolving y in x * x
                                

                                as a shorthand for:

                                (somethingVeryLongWindedInvolving y) * (somethingVeryLongWindedInvolving y)
                                

                                There is no inherent notion of before/after in such an interpretation. Even if our language implements let by literally expanding/elaborating the first form into the second, that can take place at compile time, alongside a whole host of other transformations/optimisations; hence even if we treat the expansion as a change of state, it wouldn’t actually occur at run time, and thus does not affect the execution of any algorithm by our program.

                                Note that we might, naively, think that the parentheses are imposing a notion of time: that the above tells us to calculate somethingVeryLongWindedInvolving y first, and then do the multiplication on the results. Call-by-name evaluation shows that this doesn’t have to be the case! It’s perfectly alright to do the multiplication first, and only evaluate the arguments if/when they’re needed; this is actually preferable in some cases (like the K combinator).
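
                                Here’s a small Python sketch of that call-by-name behaviour, simulated with thunks (zero-argument functions); the names are mine, purely for illustration:

                                # The K combinator: uses its first argument, ignores its second.
                                def k(x, y):
                                    return x()   # only the first thunk is ever forced

                                def expensive():
                                    raise RuntimeError("never evaluated, because k never forces it")

                                print(k(lambda: 42, expensive))   # prints 42; 'expensive' never runs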

                            2. 2

                              If functional programs don’t have a notion of before and after, how do you code an algorithm?

                              Roughly speaking, we define each “step” of an algorithm as a function, and the algorithm itself is defined as the result of (some appropriate combination of) those functions.

                              As a really simple example, let’s say our algorithm is to reverse a singly-linked-list, represented as nested pairs [x0, [x1, [x2, ...]]] with an empty list [] representing the “end”. Our algorithm will start by creating a new empty list, then unwrap the outer pair of the input list, wrap that element on to its new list, and repeat until the input list is empty. Here’s an implementation in Javascript, where reverseAlgo is the algorithm I just described, and reverse just passes it the new empty list:

                              var reverse = (function() {
                                function reverseAlgo(result, input) {
                                  // The empty list must be detected via length: 'input === []' compares
                                  // references and is never true in Javascript.
                                  return (input.length === 0) ? result : reverseAlgo([input[0], result], input[1]);
                                };
                                return function(input) { return reverseAlgo([], input); };
                              })();
                              

                              Whilst Javascript is an imperative language, the above is actually pure functional programming (I could have written the same thing in e.g. Haskell, but JS tends to be more familiar). In particular, we’re only ever defining things, in terms of other things. We never update/replace/overwrite/store/retrieve/etc. This style is known as single assignment.

                              For your Newton-Raphson example, I decided to do it in Haskell. Since it uses Float for lots of different things (inputs, outputs, epsilon, etc.) I also defined a bunch of datatypes to avoid getting them mixed up:

                              module Newton where
                              
                              newtype Function   = F (Float -> Float)
                              newtype Derivative = D (Float -> Float)
                              newtype Epsilon    = E Float
                              newtype Initial    = I Float
                              newtype Root       = R (Float, Function, Epsilon)
                              
                              newtonRaphson :: Function -> Derivative -> Epsilon -> Initial -> Root
                              newtonRaphson (F f) (D f') (E e) (I x) = if abs y < e
                                                                          then R (x, F f, E e)
                                                                          else recurse (I x')
                              
                                where y  = f x
                              
                                      x' = x - (y / f' x)
                              
                                      recurse = newtonRaphson (F f) (D f') (E e)
                              

                              Again, this is just defining things in terms of other things. OK, that’s the definition. So how do we explain it as a definition? Here’s my attempt:

                              Newton’s method for a function f + guess g + epsilon e is defined as the “refinement” r of g, such that |f(r)| < e. The “refinement” of some number x depends on whether x satisfies our epsilon inequality: if so, its refinement is just x itself; otherwise it’s the refinement of x - (f(x) / f'(x)).

                              This definition is “timeless”, since it doesn’t talk about doing one thing followed by another. There are causal relationships between the parts (e.g. we don’t know which way to “refine” a number until we’ve checked the inequality), but those are data dependencies; we don’t need to invoke any notion of time in our semantics or understanding.

                              1. 2

                                Our algorithm will start by creating a new empty list, then unwrap the outer pair of the input list, wrap that element on to its new list, and repeat until the input list is empty.

                                Algorithms are essentially stateful. A strongly declarative programming language like Prolog can avoid or minimize explicit invocation of algorithms because it is based on a kind of universal algorithm that is applied to solve the constraints that are specified in a program. A “functional” language relies on a smaller set of control mechanisms to reduce, in theory, the complexity of algorithm specification, but “recursion” specifies what to do when just as much as a “goto” does. Single assignment may have nice properties, but it’s still assignment.

                                To me, you are making a strenuous effort to obfuscate the obvious.

                                1. 3

                                  Algorithms are essentially stateful.

                                  I generally agree. However, I would say programming languages don’t have to be.

                                  When we implement a stateful algorithm in a stateless programming language, we need to represent that state somehow, and we get to choose how we want to do that. We could use successive “versions” of a datastructure (like the accumulating parameter in my ‘reverse’ example), or we could use a call stack (very common if we’re not making tail calls), or we could even represent successive states as elements of a list (lazy lists in Haskell are good for this).
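
                                  As a rough sketch of that last option (in Python for familiarity, with a generator standing in for Haskell’s lazy list; Haskell’s own idiom would be iterate):

                                  import itertools

                                  # Each successive state of Newton's method becomes one element of a
                                  # (lazy) sequence; nothing runs until elements are demanded.
                                  def newton_states(f, f_prime, x0):
                                      x = x0
                                      while True:
                                          yield x
                                          x = x - f(x) / f_prime(x)

                                  approx = newton_states(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
                                  print(list(itertools.islice(approx, 5)))   # first five approximations of sqrt(2)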

                                  A strongly declarative programming language like Prolog can avoid or minimize explicit invocation of algorithms because it is based on a kind of universal algorithm that is applied to solve the constraints that are specified in a program.

                                  I don’t follow. I think it’s perfectly reasonable to say that Prolog code encodes algorithms. How does Prolog’s use of a “universal algorithm” (depth-first search) imply that Prolog code isn’t algorithmic? Every programming language is based on “a kind of universal algorithm”: Python uses a bytecode interpreter, Haskell uses beta-reduction, even machine code uses the stepping of the CPU. Heck, that’s the whole point of a Universal Turing Machine!

                                  “recursion” specifies what to do when just as much as a “goto” does.

                                  I agree that recursion can be seen as specifying what to do when; this is a different perspective of the same thing. It’s essentially the contrast between operational semantics and denotational semantics.

                                  I would also say that “goto” can be seen as a purely definitional construct. However, I don’t think it’s particularly useful to think of “goto” in this way, since it generally makes our reasoning harder.

                                  To me, you are making a strenuous effort to obfuscate the obvious.

                                  There isn’t “one true way” to view these things. I don’t find it “strenuous” to frame things in this ‘timeless’ way; indeed I personally find it easier to think in this way when I’m programming, since I don’t have to think about ‘time’ at all, just relationships between data.

                                  Different people think differently about these things, and it’s absolutely fine (and encouraged!) to come at things from different (even multiple) perspectives. That’s often the best way to increase understanding: by finding connections between seemingly unrelated things.

                                  Single assignment may have nice properties, but it’s still assignment.

                                  In name only; its semantics, linguistic role, formal properties, etc. are very different from those of memory-cell-replacement. Hence why I use the term “definition” instead.

                                  The key property of single assignment is that it’s unobservable by the program. “After” the assignment, everything that looks will always see the same value; but crucially, “before” the assignment nothing is able to look (since looking creates a data dependency, which will cause that code to be run “after”).

                                  Hence the behaviour of a program that uses single assignment is independent of when that assignment takes place. There’s no particular reason to assume that it will take place at one time or another. We might kid ourselves, for the sake of convenience, that such programs have a state that changes over time, maybe going so far as to pretend that these hypothetical state changes depend in some way on the way our definitions are arranged in a text file. Yet this is just a (sometimes useful) metaphor, which may be utterly disconnected from what’s actually going on when the program runs (or, perhaps, when a logically-equivalent one, spat out of several stages of compilation and optimisation, does!).

                                  Note that the same is true of the ‘opposite’ behaviour: garbage collection. A program’s behaviour can’t depend on whether or not something has been garbage collected, since any reference held by such code will prevent it from being collected! Garbage collection is an implementation detail that’s up to the interpreter/runtime-system; we can count on it happening “eventually”, and in some languages we may even request it, but adding it to our semantic model (e.g. as specific state transitions) is usually an overcomplication that hinders our understanding.

                                  1. 1

                                    A lot of what you see as distinctive in functional languages is common to many non-functional languages. And look up Prolog - it is a very interesting alternative model.

                                    1. 1

                                      A lot of what you see as distinctive in functional languages is common to many non-functional languages.

                                      You’re assuming “what I see”, and your assumption is wrong. I don’t know where you got this idea from, but it’s not from me.

                                      I actually think of “functional programming” as a collection of styles/practices which have certain themes in common (e.g. immutability). I think of “functional programming languages” as simply those which make programming in a functional style easier (e.g. eliminating tail calls, having first-class functions, etc.) and “non-functional programming languages” as those which make those styles harder. Most functional programming practices are possible in most languages.

                                      In other words, I agree that “A lot of [features of] functional languages is common to many non-functional languages”, but I have no idea why you would claim I didn’t.

                                      Note that in this thread I’ve not tried to claim that, e.g. “functional programming languages are better”, or anything of that nature. I was simply stating the criteria I use for whether to call a style/language “imperative” or not; namely, if its semantics are most usefully understood as executing instructions to change the state of the (internal or external) world.

                                      And look up Prolog - it is a very interesting alternative model.

                                      I’m well aware of Prolog. The research group I was in for my PhD did some fascinating work on formalising and improving logic programming in co-inductive settings, although I wasn’t directly involved in that. For what it’s worth, I’m currently writing a project in Mercury (a descendant of Prolog, with static types among other things).

                        1. 1

                          So procedural languages are similar to imperative languages, but with somewhat more abstraction?

                      1. 2

                        I don’t know, I actually liked the letter labeled as HR compliant. The author says the downside is that the person might come back and try to explain their idea. This is a good thing. This is how a person like Linus could have a chance to understand new use cases and new techniques.

                        1. 2

                          Because Linus has enough time to do that with everybody. We still live in real life, where one of the biggest constraints we all have, if not the biggest, is time. Let’s not swear, but let’s also not encourage people to needlessly waste others’ time.

                        1. 6

                          five-year-olds are not very smart

                          Has this person been anywhere near children? Where does this statement come from? My kid learnt to punch in the iPad unlock code and select videos on YouTube when she was a bit under 4. This is a fairly ordinary feat for kids. This is primarily because she didn’t have to learn to read words - she could look at icons and live previews and touch things. The touch interface is a HUGE plus for empowering large swathes of the population to operate computers, which are merely tools - a means to an end.

                          On the other end of the spectrum, her daddy makes use of only about 0.1% of vi’s features because a) he’s not that much smarter than a 5 YO and b) he’s got better things to do than memorize weird keystroke sequences.

                          1. 6

                            Note that the author actually has no problem with touch interfaces - just interfaces which preempt the user’s inputs because the developers think they know better than the user what the user wants.

                            1. 5

                              So, I wrote this a long time ago. The basic statement is that we should only expose basic functionality up front but have advanced panels where we expose the full complexity of the software’s features and options. In my opinion, vi as an example of good design is a peculiar choice.

                          1. 1

                            I did stop reading when I reached the common mis-attribution of who did the bulk of the work on the Apollo code.

                            1. 4

                              What’s the misattribution in question?

                              1. 3

                                Author here: would love to hear more on this. Fact-checking is always appreciated. If you’re referring to Margaret Hamilton’s contribution, the article states her position as leading the team, which does not directly mean that she wrote the code in question. Fair enough; I’ll make a note to add the actual author of the critical pieces as well.

                                1. 1

                                  I have to get a good book on the Apollo code (I bet someone here has a recommendation), but I recall that Hamilton was elevated to a leadership position at a more mature stage of the project. Once you put a name to a project, you are making it about a personality (rather than the code), and the question of fair attribution comes up.

                                  1. 3

                                    I think “mis-attribution” is an overstatement. But it’s true that Hamilton did not singlehandedly write the AGC code (nor did anyone; it was a team effort). The single person who most deserves credit for the fault-tolerance feature is probably MIT’s Hal Laning: he designed the executive and waitlist systems. Hamilton deserves as much credit as anyone, but I wish it were expressed that she was a member of a team rather than a one-person effort.

                                    1. 1

                                      @craigstuntz thanks for that reference! Is there a good book on the Apollo automation engineering efforts? I’m interested in both the hardware and software and not just the AGC but the whole of it.

                                      1. 2

                                        I can tell you about some books I’ve actually read; these might not be the best for your specific interests. “How Apollo Flew to the Moon,” by David Woods, is good, but it doesn’t go into the AGC in minute detail. “The Saturn V F-1 Engine: Powering Apollo into History,” by Anthony Young, is also good, but doesn’t cover controls at all. There are a couple of books specifically about the AGC which I haven’t read.

                                2. 3

                                  You could just give him the specifics. Then he can update the article. I’m curious what yours are since I’ve read several different versions of the story.

                                  1. 2

                                    This story and the story of Rosalind Franklin are both curious to me. We could start a project - ideally from primary sources - to do some archaeology. For DNA, all I can think of is papers. For the Apollo code it has to be early documentation - all US Gov docs have lists of contributors and responsible people.

                                    1. 5

                                      I was asking because I did it before, to address the possibility that a bunch of people did the work and she just put her name on it. She seems to have done that for her team. So, I started saying “Hamilton’s team”. The problem occurs enough that I started doing that in general.

                                      Now, as to Apollo, I did have some kind of paper that was specific. I faintly recall it saying they were responsible for one or two key modules, but they did all the verification. They were the best at making robust software. So, if I’m remembering right, the paper talked like they were handed the code other people wrote, to find and fix the errors in, on top of their own modules. That’s impressive. So are the stacks of spec and code in assembly in the picture. The “Lessons from Apollo” article here has a description of the environment and mindset that led to that as well. Later, she and another woman spent their whole careers developing CASE tools (HOS and the 001 Toolkit) for designing systems with no interface errors that generated code for you. It was like binary trees, Prolog, and Occam combined, which is why usability and performance sucked despite it succeeding on robustness and generality.

                                      So, that’s what I learned when I dug into it. If I get the papers again, I’ll send you the one with the attributions. No promises that I’m going to do another deep dive into that soon, though.

                                      1. 3

                                        That would be a very interesting project indeed! Part of the beauty, and definitely one of the main reasons the Apollo program was so successful IMHO, is specifically the way the work and communication were organized. To this day the project stands as a testament to how such a large-scale project should be carried out effectively. While I’m not privy to the inner workings of NASA, there seems to be evidence that some of the organizational systems were lost later and this led to severe inefficiencies. It’s a pity, but luckily it offers us a wealth of learning opportunities.

                                        1. 2

                                          On Hamilton’s side, it seemed like they mostly just let people do their work their own way. The work was also highly-valued and interesting. So, they came up with innovative solutions to their problems. This doesn’t usually happen in process-heavy or micro-managing environments.

                                1. 5

                                  People who can speak but have trouble with hands (motor disorders):

                                  • People with Parkinson’s
                                  • People who are losing fine motor skills
                                  • Amputees

                                  I think this is great.

                                  1. 3

                                    I can’t read that page. I see it as a purple background with a faint red texture on it. Does chrome on Android have a high contrast feature?

                                    1. 4

                                      Ugh, with JavaScript disabled it’s even worse. The text isn’t even readable.

                                      1. 4

                                        Luckily Firefox has Reader View (and similar for other browsers.)

                                        I’m not against making things look pretty, but having no default way to read simple plain text is just unforgivable.

                                        1. 3

                                          Thanks for the feedback, I’ll go through it with our web team to improve things in the future! It’s a shame for our authors if their content cannot be read.

                                      2. 3

                                        There’s some sort of scroll-monitoring script that turns the background white; do you have JavaScript enabled?

                                        1. 1

                                          Thanks for replying! The second time I tried the site, it suddenly turned white and I could read it.

                                      1. 0

                                        Incredible! So, uh, why did it catch on fire? Maybe I missed it in the article. I know why the servers would run hot. Now we gotta get from hot servers to fire.

                                        1. 4

                                          I lean towards hyperbole. Also, the photo seemed to be a stock photo of a fire fighting exercise: the thing burning looked like a mock airplane.

                                          1. 1

                                            Makes sense. I was hoping for it to be literal so I have another case study on the THERAC list of disasters. Oh well.

                                        1. 2

                                          I’m a big fan of C++ and I use Python for work, but in contrast to the suggestion in this article, I would say: if you are a Python fan and 90% of your work is doable in Python, but you have 10% of code that is a performance bottleneck, then look into Cython. This is especially true since the author (lightheartedly, I think) says

                                          … I guess if you don’t know any C++ it may take a bit longer, but not that much. Just start out with programming like you would in Python, but declare variables with types, put semicolons at the end of lines, put loop and branching conditions in parentheses, put curly braces around indented blocks and forget about the colons that start Python indented blocks… that should get you about 80% of the way there. Oh and avoid pointers for the time being. Oh and use references whenever possible. They’re kinda like pointers but… Well maybe avoid those as well for now.

                                          but omits to mention that, while that is a questionable introduction to C++, parts of it are a pretty good introduction to Cython.

                                          1. 2

                                            In my first draft I had some remarks about Julia and Cython, but I decided to leave them out, since I know far too little about either of them. To summarize: my main reason for not using those languages is that while I’m sure I can get maximum performance using C++, I cannot be sure that will always be the case in Cython or Julia. In fact, the internet is riddled with examples of where C++ will beat all alternatives. I admit, though, that I may be biased, since I’m also just a big C++ fan. Another reason is that C++ is a big industry standard, whereas Julia and Cython are still relatively niche languages. This is the same reason I haven’t tried Rust or Haskell yet :)

                                            1. 2

                                              @egpbos, I’d say for this use case Cython would be a good fit.

                                              With proper type annotations Cython compiles down to “bare C”.

                                              A few years ago I wrote up some of my experiences: https://kaushikghose.wordpress.com/2014/12/08/get-more-out-of-cython/ but rereading it I see I didn’t make it very detailed. (Edit: this post is possibly more helpful: https://kaushikghose.wordpress.com/2014/07/28/cythonize/ )

                                              Cython allows you to generate annotated html files that allow you to map your Cython code to C and see how your annotations affect that and so on. Cython integrates fairly nicely with Python distribution mechanisms, though the last time I fought with it, there was a bootstrapping issue when the user did not have Cython already installed.

                                              The downside is that Cython can look ugly - a franken-language - with poorer IDE support.
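
                                              For a flavour, here is a minimal sketch in Cython’s “pure Python” annotation style (written from memory, so treat the details as approximate): compiled with cythonize, the loop below lowers to a plain C loop, yet the file remains valid Python.

                                              import cython

                                              def mean(xs: cython.double[:]) -> cython.double:
                                                  # With these annotations Cython emits a bare C loop over the buffer;
                                                  # uncompiled, the function still runs as ordinary Python.
                                                  total: cython.double = 0.0
                                                  i: cython.Py_ssize_t
                                                  for i in range(xs.shape[0]):
                                                      total += xs[i]
                                                  return total / xs.shape[0]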

                                          1. 2

                                            This is a good place to ask this: has anyone had experience using Julia to replace Pandas? Say, for something very like this use case? Thanks!

                                            1. 3

                                              Obviously the author is having a ton of fun, so it’s all good, but I was trying to think what was needed at the core of it all.

                                              Since the package was about simulating analog and digital circuits, what should be needed in principle is a simulation of the Arduino board itself, rather than a C++ interpreter.

                                              I wonder how much effort it would have been to write an Atmel AVR microcontroller emulator + Arduino circuit board simulator. A proper emulator would then simply ingest the output of avr-gcc and execute it, feeding into the Arduino board simulation, causing pins to change voltage, etc.

                                              This approach would have the advantage that over time the author could then add hardware effects (like parasitic capacitance, etc.) that are more in keeping with the actual goals of an analog/digital circuit emulator.

                                              Heck, he could then allow users to crack open the AVR itself and measure pin voltages of individual components, and this would be awesome, because it would be like peeking into a working microcontroller in real time.
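
                                              To sketch the shape of what I mean (entirely hypothetical; every class and method name here is made up):

                                              # A CPU core stepping real avr-gcc output, with pin changes handed
                                              # to a separate electrical model of the board.
                                              class AVRCore:
                                                  def __init__(self, flash):
                                                      self.flash = flash            # machine code produced by avr-gcc
                                                      self.pc = 0
                                                      self.sram = bytearray(2048)
                                                      self.pins = {}                # pin name -> logic level/voltage

                                                  def step(self):
                                                      opcode = self.flash[self.pc]  # fetch
                                                      self.pc += 1
                                                      self.execute(opcode)          # decode + execute

                                                  def execute(self, opcode):
                                                      raise NotImplementedError("one handler per AVR instruction")

                                              class BoardModel:
                                                  def observe(self, pins):
                                                      # parasitic capacitance, rise/fall times, etc. would live here
                                                      pass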

                                              1. 2

                                                Is there enough specification that you could make an AVR microcontroller emulator, and would it fall foul of Atmel’s user agreements? If so, that probably would have been a project of similar (maybe even less) complexity, though I suspect it would have been pretty tricky to convince Apple to allow for running avr-gcc on iDevices. JITs are a no-fly zone for iPad/iPhone applications, except for the JS VM used by WebKit, which is why an interpreter is used here. (I think C# gets through because it can be pre-compiled, but I honestly don’t know all the details)

                                                1. 1

                                                  A quick search brings up a bunch, including one written in LaTeX, so I’m assuming that there is enough documentation to make a digital emulator, and there must be spec sheets for the hardware with capacitances, rise and fall times, high and low voltages and other electrical characteristics that would help someone make a physical simulation of the interface pins.

                                              1. 4

                                                (I think this is a cool development - lobste.rs here is not linking to an article, but serving as a forum where people write articles itself)

                                                @JohnCarter, could you elaborate a bit please: what other things in a constructor do you consider making it too big?

                                                Are there global side effects? Like passing in references/pointers to other objects and the constructor is mutating them? That indeed sounds like spaghetti code.

                                                Are you talking about just the size of the code? I can think of cases where a large class, composed of many smaller classes, would end up doing a bunch of initialization, and so you would have a large constructor.

                                                In the little work that I’ve done, I’ve never needed particularly verbose constructors, especially since I try to rely on default constructors of objects that compose the enclosing class.

                                                To speak to your note about exceptions, I’ve been taught that if your object can fail on construction, then you should be supplying a construct function that raises the exception or returns an invalid object (e.g. through std::optional).
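
                                                In Python terms (a sketch only, since the advice above is about C++), such a construct function might look like:

                                                from typing import Optional

                                                class Port:
                                                    def __init__(self, number: int):
                                                        self.number = number   # invariant: 0 <= number < 65536

                                                    @classmethod
                                                    def create(cls, number: int) -> Optional["Port"]:
                                                        # Validation lives in the factory; failure yields None,
                                                        # the rough analogue of an empty std::optional.
                                                        return cls(number) if 0 <= number < 65536 else None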

                                                1. 4

                                                  I think the two most important things to get about Object Oriented Design are things I will admit I only got to years too late…

                                                  Classes are all about preserving the invariant. https://www.artima.com/intv/goldilocks3.html

                                                  And a constructor is all about gluing a bunch of subobjects (or POD’s) together so that you are guaranteed that whenever you have an instance of that type, you can henceforth entirely rely on the invariant being true.

                                                  The invariant expression is implicitly part of the precondition whenever you have an input of that type, and implicitly part of the postcondition when you output one.

                                                  The other thing I learnt way, way too late (no, I’m not a particularly slow learner; textbooks and tutorials are slower, and they don’t update, nor do they update their readers):

                                                  http://www.drdobbs.com/cpp/how-non-member-functions-improve-encapsu/184401197

                                                  A bit mind blowing that one.

                                                  How do you decide whether code belongs in an instance method or not?

                                                  Answer: if and only if it needs to operate on the guts of the object sufficiently much that the invariant will, for a moment in time, no longer hold, AND this operation cannot be achieved by using other, simpler public methods.

                                                  Otherwise make it non-friend, non-member.

                                                  i.e. object instance methods are not the place where you do the work.

                                                  They are thin shims that allow you to do Design by Contract with a powerful set of pre and post conditions enforced with the help of the compilers type system.

                                                  So what do I regard these days as belonging in a constructor? Well in C++ my constructors usually have a member initializer list and an empty body.

                                                  In Ruby, I often just initialize the instance variables, and then at the end I invoke freeze.

                                                  If, as I carry on developing the class, I really feel the need to mutate the instance, I go back and remove it. Usually I find I don’t need to.

                                                  If I’m maintaining legacy code, the first thing I do is put a freeze in at the end of every constructor. And then run the unit test suite.

                                                  Very informative. Curiously enough, I never find I need to remove the freezes I put in.
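
                                                  The nearest Python analogue of that Ruby freeze habit is probably a frozen dataclass; a sketch of the same discipline:

                                                  from dataclasses import dataclass

                                                  @dataclass(frozen=True)
                                                  class Fraction:
                                                      num: int
                                                      den: int

                                                      def __post_init__(self):
                                                          # Establish the invariant once, at construction.
                                                          if self.den == 0:
                                                              raise ValueError("denominator must be nonzero")

                                                  f = Fraction(1, 2)
                                                  # f.num = 3   # raises FrozenInstanceError: the instance stays frozen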

                                                  1. 1

                                                    Thanks for those links! I will note a tension between those two pieces of writing: what Scott Meyers writes is in opposition to what Stroustrup advocates. Basically, Meyers wants increased complexity at the outset as a hedge against future requirements changes; Stroustrup wants us not to over-engineer at the start.

                                                    1. 1

                                                      Personally I think they are in perfect accord.

                                                      The one and only thing that matters is the class invariant.

                                                      If the code is not about preserving the class invariant, it doesn’t belong in the class.

                                                      If the data isn’t involved in the invariant it doesn’t belong in a class.

                                                1. 3

                                                  Nice work, OP. I have always wanted to try some hardware hacking on the stuff I have, but it’s super hard. On another note, this is a really sad trend that every product in every industry is picking up. Why just sell a product when you can sell a product that also locks you into a subscription?

                                                  1. 2

                                                    I agree. I was really disappointed in the company when it became apparent that a helpful feature (notifying when the cartridge was empty) was really DRM. Otherwise it’s a pretty cool device.

                                                    1. 1

                                                      I really wish companies would create a better-quality robot cat litter box/printer/coffee maker, charge more for it, and stop trying to make money off supplies, leaving that to third parties. Some bean counter came up with this model, and now they are drowning, with every mom and pop in China having a clever (albeit messy) workaround for refilling your inkjet cartridge and dodging the DRM chip.

                                                      1. 3

                                                        The initial price seems to put people off. It’s the same story for mobile games. No one wants to pay $1 for an app up front, but they are OK paying for microtransactions and lootboxes.

                                                    1. 1

                                                      Cytoolz’ curry has always served me well
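
                                                      For anyone who hasn’t tried it (curry is a real cytoolz export; the add function here is just an example):

                                                      from cytoolz import curry

                                                      @curry
                                                      def add(x, y):
                                                          return x + y

                                                      increment = add(1)    # partially applied until all arguments arrive
                                                      print(increment(41))  # 42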

                                                      1. 6

                                                        I’ve been looking for a visual language for a while. There have been several attempts, but like most languages, they are not mainstream. What are very effective and widely used are Simulink and LabVIEW, both of which have core applications in signal processing/control systems problems.

                                                        What drew me to Luna was the promise of going smoothly (and reversibly) between textual and visual representations. So, I went to give Luna a spin.

                                                        • The precompiled package for mac asks for your email in order to track usage. There is no opt out. My reluctance to give out my email was overcome by my curiosity, so I continued. I’m sure if you compile from sources you won’t have this.
                                                        • On first startup a blank window titled “Atom” appears and looks like a text editor but is quickly filled by a tutorial and new project wizard
                                                        • I like how changing the code updates the visual representation
                                                        • I went to create a new file, started on a new function, and immediately the studio threw up an exception and linked me to GitHub
                                                        • I played around with creating projects but I couldn’t save the files properly or figure out the basic project organization

                                                        In summary, from my 30-minute experiment with it, I see this as an alpha release with the interesting concept that you can go back and forth between visual and text-based coding, but the language itself is kind of underwhelming.

                                                        I would have been very excited if this kind of effort had been put into a similar visual paradigm for Python.

                                                        1. 1

                                                          As much as your overall impression of the alpha quality of the whole “development environment” is correct, I wouldn’t be so quick to assume this must mean that the language itself is underwhelming. And I don’t see any concrete arguments/critique of Luna as a language in your post!

                                                          Personally, I see Luna currently as an “early adopters”-stage technology, a.k.a. “bleeding edge”. The presence of the “bleeding” adjective in this phrase is telling. I totally admit it’s not “production ready”, but I strongly believe it’s noteworthy and has a future, and I’m very excited to already have access to it at this early phase of development. (More specifically, I personally believe it will be revolutionary.)

                                                          1. 1

                                                            Hi, that’s because nothing in the language’s concepts stood out. Could you note what the distinguishing characteristics are? Thanks

                                                            1. 1

                                                              Ok, I think I better understand your message now.

                                                              As far as I understand, the duality is the main “distinguishing characteristic”. Other than that, I believe the language (in its textual part) is indeed not claimed to be innovative; but I don’t really know a lot about its design, and haven’t seen any articles, so I’m not sure, and I’m not a PL scientist/theorist. I only know that it’s statically typed, and AFAIU the authors aim for readability.

                                                              However, as for “underwhelming”, were you expecting something special that you then missed?

                                                              1. 1

                                                                Hi,

                                                                I generally don’t see value in developing new languages for their own sake. This is of course separate from the pleasure it gives the developer.

                                                                From a glance at the language, I wondered if it would have been more impactful to take, say, Haskell or a constrained subset of Racket or Python, and build a tool around it that allowed reversible visual/textual programming.

                                                                1. 2

                                                                  My understanding is that the authors believed that the two representations must be developed “in concert”, so that each of them would make sense when viewed separately, and for the “mirror editing” (graphical vs. textual) to be feasible. Also, from what I heard from them, I believe they’re big fans of Haskell personally (that’s what they used as implementation language), but wanted the Luna textual language to be more approachable — aiming for something resembling Python on surface and in ergonomics, but much more Haskell-y (or at least FP) in spirit and semantics. They seemed especially fond of the semantics and appearance of the “dot operator”.

                                                                  So, I believe in their view creating a new language (on the textual front) was not “for its own sake”, but rather the only feasible way, when the goal was to have the feature of “duality”. So for me, looking at the (textual) language purely separately, ignoring the feature of duality, doesn’t make much sense. Though OTOH, now I’m starting to understand such a view is at all possible, especially if someone feels not interested in the visual part.

                                                                  1. 1

                                                                    Hi, thank you for the detailed responses. I would have thought Haskell’s purity and strong type system would be especially suited to the visual paradigm being pursued here.

                                                        1. 3

                                                          ResearchGate and Academia.edu have similar aims and are closed source, I think. Both have had trouble keeping the lights on.

                                                          1. 2

                                                            Academia.edu and ResearchGate are more like Facebook for academics, where you post your papers instead of cute pictures of your cat. I don’t think there is much discussion about the papers there.

                                                            Source: I have an account on both websites.

                                                            1. 2

                                                              I would agree with this. I’ve been using these services for a long time and I don’t remember any case on RG where I had a discussion about a paper. In my field, it’s always experimental questions, not the papers.

                                                          1. 36

                                                            I am a maths researcher at the University of Cologne and addressed this in a thesis I wrote in 2016. See chapter 3, especially the first part of section 3.1.

                                                            Dividing by zero is totally well defined for the projectively extended real numbers (only one unsigned infinity, inf), but the argument for the usual extended real numbers (+-inf) not working is not based on field theory; it is of an infinitesimal nature, given you can approach a zero-division both from below and from above and get either +inf or -inf equally likely.

                                                            Defining 1/0=0 not only breaks this infinitesimal form, it’s also radically counterintuïtive given how the values behave when you approach the division from small numbers, e.g. 1/10, 1/1, 1/0.1, 1/0.001…

                                                            lim x->0 1/x = 0 makes no sense and is wrong in terms of limits.

                                                            See the thesis where I proved a/0=inf to be well-defined for a!=0.

                                                            tl;dr: There’s more to this than satisfying the field conditions. If you redefine division, this has consequences on higher levels, in this case most prominently in infinitesimal analysis.
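
                                                            In symbols (restating the above, not quoting the thesis), the objection to the usual two-infinity extension is:

                                                            \lim_{x \to 0^{+}} \frac{1}{x} = +\infty, \qquad \lim_{x \to 0^{-}} \frac{1}{x} = -\infty

                                                            so in \overline{\mathbb{R}} = \mathbb{R} \cup \{-\infty, +\infty\} the two-sided limit does not exist, while in the projectively extended reals \widehat{\mathbb{R}} = \mathbb{R} \cup \{\infty\} both one-sided limits land on the single unsigned \infty, which is what makes a/0 = \infty for a \neq 0 well-defined there.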

                                                            1. 8

                                                              I used to be a maths researcher, and would just like to point out that some of the people who define division by zero to mean infinity do it because they’re more interested in the geometric properties of the spaces that functions are defined on than the functions themselves. This is the reason for the Riemann sphere in complex analysis, where geometers really like compact spaces more than noncompact ones, so they’re fine with throwing away the field property of the complex numbers. The moment any of them need to compute things, however, they pick local coordinates where division by zero doesn’t happen and use the normal tools of analysis.
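
                                                              As a concrete instance of “picking local coordinates” (standard complex analysis, stated from memory): on the Riemann sphere \hat{\mathbb{C}} = \mathbb{C} \cup \{\infty\}, the chart around \infty is w = 1/z, so reading f(z) = 1/z through that chart on the codomain gives

                                                              w = \frac{1}{f(z)} = z

                                                              i.e. the identity map: near z = 0 there is no division by zero left to perform.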

                                                              1. 2

                                                                Thanks for laying this out and pointing out the issue with +/- Inf

                                                                Could you summarize here why +inf is a good choice? As a practical man, I approach this from the limit standpoint - usually when I end up in a situation like this, it’s because the correct answer is +/-inf, and it depends on the context which one it should be. Here, context means which side of zero the history of my denominator was on.

                                                                The issue is that the function 1/x has a discontinuity at 0. I was taught that this means 1/0 is “undefined”. IMO in code this means throw an exception.

                                                                In practical terms, I end up adding a tiny number to the denominator (e.g. 1e-10) and continuing, but that implicitly means I’m biased toward the positive side of the line.

                                                                I think Pony’s approach is flat out wrong.

                                                                1. 6

                                                                  It is not +inf, but inf. For the projectively extended real numbers, we only extend the set with one infinite element, which has no sign. Take a look at page 18 of the thesis, which includes an illustration of this. Rather than having a number line, we have a number circle.

                                                                  Dividing by zero, the direction from which we approach the denominator does not matter, even if we oscillate around zero, given it all ends up in the one single point of infinity. We really don’t limit ourselves here with that, as we can express a limit to +inf or -inf in the traditional real-number extension by the direction from which we approach inf in the projectively extended real numbers (see remark 3.5 on page 19).

                                                                  1/x is discontinuous at 0, this is true, but we can always look at limits. :) I am also a practical man and hope the relatively formal way I described it did not distract from the relatively simple idea behind it.

                                                                  Pony’s approach is reasonable within field theory, but it’s not really useful when almost the entire analytical building on top of it collapses on your head. NaN was invented for a reason and given the IEEE floating-point numbers use the traditional +-inf extension, they should just return the indeterminate form on division by zero in Pony.

                                                                  1. 6

                                                                    NaN only exists for floating point, not integers. If you want to use NaN or something like it for integers, you will need to box all integer numbers and take a large performance hit.

                                                                2. 1

                                                                  Just curious, but why isn’t 1/0=1? Would 1/0=Inf not require that infinity exists between 0 and 1?

                                                                  1. 2

                                                                    I’m not sure I understand your question. Does 1/2=x require x to be between 1 and 2?