Note that the “idempotence” definition that people use to describe side effectful actions is not the mathematical definition.
In math a function is idempotent if:
f(f(x)) = f(x)
This becomes very relevant in functional programming, since in FP the definition of idempotence is the mathematical one. I mention this to avoid confusion.
As always, developers are overloading math terms in a way that confuses everyone :-)
I don’t find the resemblance very intuitive. I was familiar with the meaning in TFA and got very confused when acquainted with the math definition.
Also in FP and in computer science in general the more correct term for executing effectful actions only once and then caching the result for subsequent calls is “memoization”.
correct term for executing effectful actions only once and then caching the result for subsequent calls is “memoization”
Memoizing actions which have observable side effects seems more like the sort of situation where you shouldn’t be using memoization. As I understand it the term usually implies an optimization that doesn’t change a program’s semantics, and that’s only the case when the function being memoized is referentially transparent.
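A tiny Python sketch of the distinction (the `truncate`/`expensive` names are just illustrative):

```python
# Mathematical idempotence: applying f twice equals applying it once.
def truncate(x):
    return int(x)  # int() is idempotent: truncate(truncate(x)) == truncate(x)

assert truncate(truncate(3.7)) == truncate(3.7)

# Memoization: run the (possibly effectful) computation once, cache the result.
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def expensive(n):
    global calls
    calls += 1          # observable side effect: counts real executions
    return n * n

expensive(4)
expensive(4)
assert calls == 1       # the body ran only once; the second call hit the cache
```

Note that `expensive` is not idempotent in the mathematical sense at all; memoization just hides how many times the body actually ran.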
I think you should also host something like Fermat’s Library and reupload every PDF link that gets posted there. Then the community can annotate the PDF itself instead of just putting comments below it.
To expand a little bit on this, I would prefer a way to collectively annotate a paper rather than providing comments on it.
The former feels closer to the way that I read and annotate papers (that is, making comments and highlights on specific parts of the paper). The latter feels like reviewing a paper with the goal to provide overarching commentary about it.
The commenting features should depend on the kind of community you want to build (one around conversations about specific parts of the paper or one around public peer review of papers).
I wonder if “thematic” comments (either by tagging them or by breaking them down into sections) would allow you to implement the discussion part within the Lobsters codebase.
This is pretty nice. I just finished it.
Up to the ALU, you could sort of see how the components you are building would be useful. But afterwards, I didn’t always follow the reasoning through. I wish it let you design the instructions, or at least think about them. That seems like the more interesting/difficult part.
That and maybe a “hard mode” where you have access to all your previous components.
Would something like this work at a decent speed on real hardware (if you don’t mind the size) or is there normally too much optimization going on?
It is kind of like the minimal, simplest possible CPU. A lot of extra features can be added to speed it up, like a pipeline.
Right, but how many times slower do we expect if this is used unmodified?
The follow up question would be: what optimization to do if you could only pick 1 or 2?
wrap comes naturally from hardware implementation of numbers.
but certainly a programming language can throw an exception when the carry bit comes on.
wrap comes naturally from hardware implementation of numbers.
Correct for most platforms, but different platforms may have different integer implementations (for example some DSPs have saturating integers which do not wrap at all). As I understand (and OP can verify) the C spec leaves signed integer overflow undefined to allow for the whole range of signed integer wrap/saturation behaviours.
It’s true that on the popular architectures signed integers do indeed wrap. So why doesn’t the C spec make that the norm for x86_64 and ARM? Because then your C programs wouldn’t be portable.
It’s also interesting that unsigned integer overflow is defined to wrap in C, yet unsigned integers can saturate on some platforms! Madness.
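A sketch of the two behaviours, emulating 8-bit signed arithmetic in Python (the helper names are mine, not from any spec):

```python
BITS = 8
LO, HI = -(1 << (BITS - 1)), (1 << (BITS - 1)) - 1  # -128 .. 127

def add_wrap(a, b):
    # Two's-complement wraparound, what most CPUs do in hardware.
    s = (a + b) & ((1 << BITS) - 1)
    return s - (1 << BITS) if s > HI else s

def add_saturate(a, b):
    # Saturating add, what some DSPs do: clamp instead of wrapping.
    return max(LO, min(HI, a + b))

print(add_wrap(120, 10))      # -126: wrapped past 127
print(add_saturate(120, 10))  # 127: clamped at the maximum
```

The same inputs give very different answers under the two conventions, which is exactly why the spec can’t just pick the hardware behaviour of one architecture.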
but certainly a programming language can throw an exception when the carry bit comes on.
Well, OP is talking about C…
I’ve done a bunch of fixes and improvements to my PEG parser library today, https://github.com/rain-1/racket-peg/ I’m very pleased with it and value all the contributors who pushed me to make it even better. I think the next step may be to write some tutorials and guides on how to use it best.
Very cool. Currently on Condition, is this meant to be possible with just or(1), and(1), inv(1), is_neg(16) and is_zero(16)? I can’t see how it could be, shouldn’t there be an add(16) and negate(16) for this?
EDIT: I ended up skipping Condition. The rest were all very enjoyable.
It is certainly possible. No need for arithmetic computation, only different ways of comparing against Zero. Maybe you misunderstood something about the specification?
Yeah I think I read it the first time as comparing two numbers instead of just comparing one number to zero. Opened it up and did it quite quickly today.
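If I read the level right, the trick is that every comparison against zero falls out of the two flags plus boolean logic — a Python sketch of the idea (not the game’s actual components):

```python
# All six comparisons of x against zero can be derived from just two flags
# (is_zero and is_neg) plus and/or/not -- no adder needed.
def condition(x):
    zero = (x == 0)   # is_zero flag
    neg = (x < 0)     # is_neg flag
    return {
        "eq": zero,
        "ne": not zero,
        "lt": neg,
        "le": neg or zero,
        "gt": not (neg or zero),
        "ge": not neg,
    }

assert condition(-3)["lt"] and not condition(-3)["ge"]
assert condition(0)["le"] and condition(0)["ge"] and not condition(0)["ne"]
```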
I think Condition is impossible! I did every other level though.
I emailed the author thanking him for the great game & pointing out this level was not solvable.
Edit: OK I finally did it! It wasn’t impossible, just very hard!
great to see someone teaching how to properly implement tail calls!
I use a similar approach in my C interpreter https://github.com/rain-1/scheme_interpreter/blob/master/src/sch3.c#L902 where recursion is done with a goto instead of a C recursive call (which would be non tail recursive).
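The same stack-flattening idea can be sketched in Python with a trampoline — a loop standing in for the goto (illustrative only, not the interpreter’s actual mechanism):

```python
# A trampoline: tail calls return a thunk instead of recursing, and a flat
# loop keeps invoking thunks -- so the call stack never grows.
def trampoline(f, *args):
    result = f(*args)
    while callable(result):
        result = result()
    return result

def countdown(n):
    if n == 0:
        return "done"
    return lambda: countdown(n - 1)  # tail call expressed as a thunk

assert trampoline(countdown, 100000) == "done"  # no RecursionError
```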
We asked repeatedly, but Intel provided no advance notice. We did not even receive replies to our requests for dialogue.
bad intel
Well, I have just finished a release of my build tool, which I’m very pleased with. I have been trying to “prove” its utility by building existing software with it. I was able to write scripts with it to build jq and bash, so it’s pretty effective.
the script is here https://gist.github.com/rain-1/f3434f4b12147d5ef62369e511a184de#file-jq-makesfile I left in the make bits as comments for comparison’s sake
which generates this input for the ‘makes’ tool: https://gist.github.com/rain-1/f3434f4b12147d5ef62369e511a184de#file-jq-example
and then makes processes those build rules to execute them in parallel respecting dependency order.
the bash scripts themselves that generate files are pretty ugly, my tool so far is just the engine part. A nice declarative UI to produce the TSV files can be made separately.
they can be generated by a DSL either something similar to the make language, or embedded as a python library.
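To illustrate the engine idea, here’s a minimal Python sketch of dependency-ordered scheduling using the stdlib’s graphlib; the rule layout here is hypothetical, not the actual ‘makes’ TSV format:

```python
# Minimal sketch: rules name a target, its dependencies, and a command;
# execute in dependency order. Independent nodes could run in parallel.
from graphlib import TopologicalSorter

rules = {
    # target: (dependencies, command)
    "jq": (["main.o", "lexer.o"], "cc -o jq main.o lexer.o"),
    "main.o": (["main.c"], "cc -c main.c"),
    "lexer.o": (["lexer.c"], "cc -c lexer.c"),
    "main.c": ([], None),
    "lexer.c": ([], None),
}

graph = {target: deps for target, (deps, _) in rules.items()}
order = list(TopologicalSorter(graph).static_order())
print(order)  # sources come first, 'jq' comes last
```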
Any reason your scripts target /bin/sh when they’re actually Bash scripts?
They won’t run as-is on Debian, for example.
This is one of the most powerful tools around for making extremely precise compiler analysis passes without much effort.
Are you saying you like the ideas in the paper or have previously seen/used the technique? I found it through my semi-blind searches. I don’t know if it had uptake.
Well, I think this idea has been developed further into the monadic AAM approach. It hasn’t been applied in practice yet to my knowledge, except maybe for a Java analysis.
One of their applications is bootstrapping:
Implementing language X in a much harder, lower level language Y may be difficult. But if X is self extensible then the implementation is done with a bit less Y code and a bit more X code.
That reminds me of Bootstrapping Site additions I needed to make. I added Bash Infinity since it contains things that might be useful for bootstrapping compilers via bash. I also added a previous submission near Wirth’s. Tanenbaum, mostly known for microkernels, used a general-purpose macro language to bootstrap up a language they wrote OSes with back in the 1970’s. The language itself, Sal, is interesting since it operates directly on memory forms like BCPL did. Going back to that style might be advantageous for assembly-up bootstrapping. Some prior bootstrappers did something similar with LISP’y languages designed just above assembly.
This is ill-advised.
You cannot define 1/0 and still have a field. There’s no value that works. Even when you do things like the extended real numbers where x/0 = infinity, you’re really just doing a kind of shorthand and you acknowledge that the result isn’t a field.
You can of course define any other algebraic structure you want and then say that operating on the expression 1/0 is all invalid because you didn’t define anything else and no other theorem applies, but this is not very helpful. You can make bad definitions that don’t generalise, sure, definitions that aren’t fields. But to paraphrase a famous mathematician, the difficulty lies not in the proofs but in knowing what to prove. The statement “1/0 = 0 and nothing else can be deduced from this” isn’t very interesting.
I want to make an attempt to clarify the discussion here because I think there is some substance I found interesting. I don’t have a strong opinion about this.
The article actually defines an algebraic structure with three operators: (S, +, *, /) with some axioms. It happens that these axioms make it so (S, +, *) is a field (just like how the definition of a field makes (S, +) a group).
The article is right in saying that these axioms do not lead to a contradiction. And there are many non-trivial such structures.
However, the (potential) issue is that we don’t know nearly as much about these structures as we do about fields, because any theorem about fields applies only to (S, +, *) instead of (S, +, *, /). So all the work would need to be redone. It could be said that the purpose of choosing a field in the first place is to benefit from existing knowledge and familiar expectations (which are no longer guaranteed).
I guess formally adding an operator means you should call it something else? (Just like how we don’t call a field a group, even though it could be seen as a group with an added * operator.)
This has no bearing on the 1/0 = 0 question however, which still works from what’s discussed in the article.
As I understand it, you’ve only defined the expression 1/0 but you are saying that /0 isn’t shorthand for the multiplicative inverse of 0 as is normally the case for /x being x^-1, by definition. Instead, /0 is some other kind of magical non-invertible operation that maps 1 into 0 (and who knows what /0 maps everything else into). Kind of curious what it has to do with 0 at all.
So I guess you can do this, but then you haven’t defined division by zero at all, you’ve just added some notation that looks like division by zero but instead just defined some arbitrary function for some elements of your field.
If you do mean that /0 is division by zero, then 1/0 has to be, by definition, shorthand for 1*0^-1 and the arguments that you’ve already dismissed apply.
The definition of a field makes no statements about the multiplicative inverse of the additive identity (https://en.wikipedia.org/wiki/Field_(math)#Classic_definition). Defining it in a sound way does not invalidate any of the axioms required by the field, and, in fact, does define division by zero (tautologically). You end up with a field and some other stuff, which is still a field, in the same way that adding a multiplication operator on a group with the appropriate properties leaves you with a group and some other stuff.
The definition of the notation “a / b => a * b^-1” assumes that b is not zero. Thus, you may define the case when b is 0 to mean whatever you want.
That people want to hold on to some algebraic “identities” like multiplying by the denominator cancels it doesn’t change this. For that to work, you need the assumption that the denominator is not zero to begin with.
In what way, whatever it is you defined /0 to be, considered to be a “division”? What is division? Kindly define it.
And when b is zero, what is division? That’s the whole point of this argument. What properties does an operation need to have in order to be worthy of being called a division?
Indeed, it is the whole point. For a field, it doesn’t have to say anything about when you divide by zero. It is undefined. That doesn’t mean that you can’t work with and define a different, but still consistent, structure where it is defined. In fact, you can add the definition such that you still have the same field, and more.
edit: Note that this doesn’t mean that you’re defining a multiplicative inverse of zero. That can’t exist and still be a field.
In what way is it consistent? Consistent with what? As I understand it, you’re still saying that the expression 1/0 is an exception to every other theorem. What use is that? You still have to write a bunch of preconditions, even in Coq, saying how the denominator isn’t zero. What’s the point of such a definition?
It seems to me that all of this nonsense is about not wanting to get an exception when you encounter division by zero, but you’re just delaying the problem by having to get an exception whenever you try to reason with the expression 1/0.
I mean that the resulting structure is consistent with the field axioms. The conditions on dividing by zero never go away, correct. And yes, this is all about avoiding exceptions in the stack unwinding, programming language sense. The article is a response to the statements that defining division by zero in this way causes the structure to not be a field, or that it makes no mathematical sense. I am also just trying to respond to your statements that you can’t define it and maintain a field.
It really doesn’t make mathematical sense. You’re just giving the /0 expression some arbitrary value so that your computer doesn’t raise an exception, but what you’re defining there isn’t division except notationally. It doesn’t behave like a division at all. Make your computer do whatever you want, but it’s not division.
Mathematical sense depends on the set of axioms you choose. If a set of axioms is consistent, then it makes mathematical sense. You can disagree with the choices as much as you would like, but that has no bearing on the meaning. Do you have a proof that the resulting system is inconsistent, or even weaker, not a field?
I don’t even know what the resulting system is. Is it, shall we say, the field axioms? In short, a set on which two abelian operations are defined, with two distinct identities for each abelian operation, such that one operation distributes over the other? And you define an additional operation on the distributing operation that to each element maps its inverse, except for the identity which instead is mapped to the identity of the distributed-over operation?
It’s a field where the definition of division is augmented to include a definition when the divisor is zero. It adds no new elements, and all of the same theorems apply.
You are correct. The field axioms are all still true, even if we extend / to be defined on 0.
The reason for this is that the axioms never “look at” any of the values x/0. They never speak of them. So they all hold regardless of what x/0 is.
That said, even though you can define x/0 without violating axioms it doesn’t mean you should. In fact it seems like a very bad idea to me.
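This can even be checked by brute force on a small field — a Python sketch over GF(5), with x/0 defined as 0 (my own construction, just to illustrate the point, not anything from the article or Pony):

```python
# Brute-force check on GF(5): define x / 0 = 0 and verify the field axioms
# still hold -- they never inspect division by zero, so nothing breaks.
P = 5
F = range(P)
add = lambda a, b: (a + b) % P
mul = lambda a, b: (a * b) % P
inv = {a: pow(a, P - 2, P) for a in F if a != 0}  # Fermat's little theorem
div = lambda a, b: mul(a, inv[b]) if b != 0 else 0  # total: x / 0 = 0

for a in F:
    for b in F:
        assert add(a, b) == add(b, a) and mul(a, b) == mul(b, a)  # commutativity
        for c in F:
            assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))  # distributivity
        if b != 0:
            assert mul(div(a, b), b) == a  # (a/b)*b == a only claimed for b != 0
print("all field axioms hold with x/0 = 0")
```

The assertions pass because every axiom that mentions inverses or division carries a b != 0 precondition, so the extended definition is never consulted.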
That doesn’t make it not a field; you don’t have to have a division operator at all to be a field, let alone a division operator that is defined to be multiplication by the multiplicative inverse.
zeebo gave the same answer I would give: a / b is a multiplied by the multiplicative inverse of b when b is not zero. This article is all about how a / 0 is not defined and so, from an engineering perspective, you can define it to be whatever you want without losing the property that your number representation forms a field. You claimed that defining a / 0 = 1 means that your numbers aren’t a field, and all I’m saying is that the definition of the division operator is 100% completely orthogonal to whether or not your numbers form a field, because the definition of a field has nothing to say about division.
What is an engineering perspective?
Also, this whole “a field definition doesn’t talk about division” is a bit of a misunderstanding of mathematical idioms. The field definition does talk about division, since “division” is just shorthand for “multiplicative inverse”. The reason the definition is written the way it is (excluding 0 from having a multiplicative inverse) is that giving zero a multiplicative inverse results in contradictions. When you say “ha! I won’t let that stop me! I’m going to define it anyway!” well, okay, but then either (1) you’re not defining a multiplicative inverse, i.e. you’re not defining division, or (2) you are defining a multiplicative inverse and you’re creating a contradiction.
(I had a whole comment here, but zeebo is expressing themselves better than I am, and there’s no point in litigating this twice, especially when I feel like I’m just quoting TFA)
Bad idea, it should error or give NaN.
1/0 = 0 is mathematically sound
It’s not mathematically sound.
a/b = c should be equivalent to a = c*b
this fails with 1/0 = 0 because 1 is not equal to 0*0.
Edit: I was wrong, it is mathematically sound. You can define x/0 = f(x) any function of x at all. All the field axioms still hold because they all have preconditions that ensure you never look at the result of division by zero.
There is a subtlety because some people say (X) and others say (Y)
(X) a/b = c should be equivalent to a = c*b when the LHS is well defined
(Y) a/b = c should be equivalent to a = c*b when b is nonzero
If you have (X) definition in mind it becomes unsound, if you are more formal and use definition (Y) then it stays sound.
It seems like a very bad idea to make division well defined but the expected algebra rules not apply to it. This is the whole reason we leave it undefined or make it an error. There isn’t any value you can give it that makes algebra work with it.
It will not help programmers to have their programs continue on unaware of a mistake, working on with corrupt values.
I really appreciate your follow-up about you being wrong. It is rare to see, and I commend you for it. Thank you.
This is explicitly addressed in the post. Do you have any objections to the definition given in the post?
It will not help programmers to have their programs continue on unaware of a mistake, working on with corrupt values
That was my initial reaction too. But I don’t think Pony’s intended use case is numerical analysis; it’s for highly parallel low-latency systems, where there are other (bigger?) concerns to address. They wanted to have no runtime exceptions, so this is part of that design tradeoff. Anyway, nothing prevents the programmer from checking for zero denominators and handling them as needed. If you squint a little, it’s perhaps not that different from the various conventions on truthy/falsey values that exist in most languages, and we’ve managed to accommodate to those.
Those truthy/falsey values are often a source of errors.
I may be biased in my dislike of this “feature”, because I cannot recall when 1/0 = 0 would be useful in my work, but have no difficulty whatsoever thinking of cases where truthy/falsey caused problems.
It will not help programmers to have their programs continue on unaware of a mistake, working on with corrupt values.
I wonder if someone making a linear math library for Pony already faced this. There are many operations that might divide by zero, and you will want to let the user know if they divided by zero.
It’s easy for a Pony user to create their own integer division operation that will be partial. Additionally, a “partial division for integers” operator has been in the works for a while and will land soon. It’s part of a set of operators that will also error if you have integer overflow or underflow. Those will be +?, /?, *?, -?.
https://playground.ponylang.org/?gist=834f46a58244e981473c0677643c52ff
Raymond Chen has a history of posting interesting arcana about architectures Windows did and did not quite support.
Any plans for features you want to include? or is it to recreate something as a learning experience?
My aim is to replace make. I have the core algorithm done, it takes a very plain tab separated format as input. I think there needs to be a good UI to describe builds. I’m proving it by building existing projects with it.
Within a few weeks of writing Make, I already had a dozen friends who were using it.
So even though I knew that “tab in column 1” was a bad idea, I didn’t want to disrupt my user base.
So instead I wrought havoc on tens of millions.
(from https://beebo.org/haycorn/2015-04-20_tabs-and-makefiles.html)
The side-note is even better:
Side note: I was awarded the ACM Software Systems Award for Make a decade ago. In my one minute talk on stage, I began “I would like to apologize”. The audience then split in two - half started laughing, the other half looked at the laughers. A perfect bipartite graph of programmers and non-programmers.
VIA is an independent manufacturer of computers that sold low-power, crypto-accelerated x86 chips designed by the third x86 vendor that’s still around: Centaur. Here’s a video about them. They worked on processor verification with ACL2. Jared Davis, who contributed to that work, later did a “self-verifying” prover called Milawa that bootstrappers should find inspiring.
So, interesting company and people. Being low-watt with x86 compatibility got them used in a lot of embedded applications. The VIA Artigos were also one of the only boxes you could get for $300 with a tiny form factor and a crypto accelerator (incl. TRNG). VIA stayed a struggling also-ran in x86, but with many users.
Because it forces you to use a phone number, the author works for Facebook and recommends against GPG, they did not want people to use the F-Droid free app store, and they are against federating with people hosting their own servers.
This is the best advice. I crave more minimal sites like this. Have you noticed that Lobsters is the minimal link aggregation site?
What an incredible public good you are doing by carefully critically analyzing the crypto pieces of these specs. It’s really hard to appreciate how much background learning and then careful effort reading every detail of the spec goes into that. Thank you so much!