At first glance, this looks similar to https://cycle.js.org, which has a similar concept of drivers. I’ve written one app with Cycle, and I really enjoyed the experience. The biggest issue I found was that some DOM APIs really don’t fit the functional reactive model. What is extremely nice about this model is that all data flow is explicit and “static”: your code becomes a description of all the ways that data flows around the app, which is really cool.
What I don’t get is why this library needs to define its own view templating language, as it seems only superficially different from say JSX? Haven’t looked very deep so I don’t know if it’s essential. This is something that Cycle is very unopinionated about, which is nice.
Cycle.js looks super interesting. But it looks like it’s at least 10x the size of this. So even if it’s just a more compact implementation of cycle.js, that sounds promising.
It’s similar to Cycle, but it isn’t based on the concept of streams. Instead, an application is just a function of driver inputs and returns driver outputs.
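A rough sketch of that shape in JavaScript (hypothetical names and drivers, not Moon’s actual API): the application is a plain function from driver inputs to driver outputs.

```javascript
// Hypothetical sketch: an application is just a function that takes driver
// inputs and returns driver outputs. No streams involved.
const app = ({ data }) => ({
  // Output for a hypothetical "view" driver: a description of the UI.
  view: `<p>Count: ${data.count}</p>`,
  // Output for a hypothetical "data" driver: the next state.
  data: { count: data.count + 1 }
});

const out = app({ data: { count: 0 } });
console.log(out.view); // "<p>Count: 0</p>"
```

The runtime (not shown) would be responsible for feeding each output back into the corresponding driver and calling the function again.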
This seems like a super interesting article and I’m currently digging into it. At the outset, it isn’t clear to me why there is a space between the name of the function and its arguments.
The article says “Functions are called with a space between the name and parenthesis. This is done to simulate calling functions by juxtaposition, and it’s helpful for calling curried functions.” but that doesn’t actually help me understand anything. What does “calling functions by juxtaposition” mean, and why is the space helpful for calling curried functions?
Hey, glad you find it interesting. The space isn’t required, and you could call it without a space there, but in languages like Haskell, where currying is more common than in JavaScript, functions are called by “juxtaposition”, where arguments are separated by a space. In JavaScript, calling a function requires parentheses, and calling curried functions would look like f(x)(y)(z). I added that note just to clarify the notation, because it’s not common to call functions this way, but I think f (x) (y) (z) looks nicer.
I’ll edit it to make it clear that it’s not required and purely there for aesthetic purposes, thank you!
It’s not much. In some math texts it is common to write “f x” instead of “f(x)” - that is juxtaposition, which just means “put things next to each other”. That notation is nice: less typing and easy to read, except when there are multiple arguments, function-valued expressions, or other forms of ambiguity. Does f x g y mean f(x, g, y), f(x)(g(y)), or some other variant? “Currying” just means packing multiple arguments into one as a form of notational simplicity, e.g. f(x, y, z) is f(q) where we’ve done something to stuff x, y, z into q or into f itself, so add(a, b, c) could be add’(c) where add’(x) = add(a, b, x) or something. Seems pointless in this case.
If you can’t do
int x;
x = f(0);
x = g(x);
because your programming language does not have variable assignment, you need to do either
g(f(0))
which can become too awkward with complex formulas
or have a fancy ;
so f(0) ;; g(_)
means “I will avert my eyes while the compiler does an assignment to a temp variable and pretend it didn’t happen”
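A minimal JavaScript sketch of that “fancy semicolon” idea, with a hypothetical pipe helper that does the temp-variable bookkeeping for you:

```javascript
// Hypothetical helper: thread a value through a list of functions
// without ever naming an intermediate variable.
const pipe = (x, ...fns) => fns.reduce((acc, fn) => fn(acc), x);

const f = (n) => n + 1;
const g = (n) => n * 2;

console.log(pipe(0, f, g)); // 2, equivalent to g(f(0))
```

The compiler (or here, `reduce`) still passes the intermediate result around; the notation just hides it.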
“Currying” just means packing multiple arguments into one as a form of notational simplicity
I don’t think that is accurate. Currying is about transforming functions that get multiple parameters into a sequence of functions, each of which takes one argument.
That is still a somewhat imprecise description, but I think an example would be more clarifying than going deeper into the theoretical details. If we take sum as a non-curried function, it takes two integers to produce another one; the type is something like (Int x Int) -> Int, and in practice you call it like sum(1, 2), which evaluates to 3.
The curried version would be one function that, given an integer a, returns a function that, given an integer b, returns a + b. That is, sum is transformed into the type Int -> (Int -> Int). Now, to calculate 1 + 2, we first apply sum to get a function that adds one, like sum1 = sum(1), where sum1 is a function with type Int -> Int; and then apply this new sum1 function to 2, as in sum1(2), which returns 3. Or in short, dropping the temporary variable sum1, we can apply sum(1) directly, as in (sum(1))(2), and get our shiny 3 as the result.
If your language uses space to denote function application, then you can write that last application as (sum 1) 2. Finally, if application is left-associative, you can drop the parentheses and get to sum 1 2, which is arguably pleasant syntax.
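The same curried sum can be sketched directly in JavaScript:

```javascript
// Curried sum: Int -> (Int -> Int). Applying it to one argument
// returns a new function waiting for the second.
const sum = (a) => (b) => a + b;

const sum1 = sum(1);    // partially applied: Int -> Int
console.log(sum1(2));   // 3
console.log(sum(1)(2)); // 3, dropping the temporary variable
```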
because your programming language does not have variable assignment
Just use a different variable as you should in the first case as well and it works just fine.
let x = f 0
    y = g x
in ?somethingWithY
Also that’s a bad explanation of currying.
I will avert my eyes while the compiler does an assignment to a temp variable and pretend it didn’t happen
You have a negative reaction to FP for some reason which leads you to write these cheap jabs that are misinformative.
I have a negative attitude to mystification. I don’t like it when reasonably simple programming techniques are claimed to be deep mysterious mathematical operations. Your example is, of course, an example of why these “fancy semicolons” are needed when it is scaled up. Imagine how hard it would be to track a state variable, say an IO file descriptor, around a program if we had to do fd_1, fd_2, fd_n every time there was an I/O operation - keeping these in order would be painful. The “combinator” automates the bookkeeping.
The explanation of Currying is perfectly correct, I think, but I’d like to hear what you think I got wrong. There’s not much to it.
In much of mathematics, all of this is just a notational convention, not an endofunctor on a category of sets. The author, who is at least trying to make things clear, could have simply written:
The notation is simpler if we write “f x” to indicate function application instead of “f(x)”; otherwise it gets cluttered with parentheses. To avoid ambiguity with multiple arguments, only single-argument functions are allowed, and we take care of multiple arguments by using functions that produce functions as values. So, instead of f(x,y,z), the value of f1 x is a map f2, f2 y produces a value f3, and f3 z is then equal to f(x,y,z). Equivalently, f(x,y,z) = ((f1 x) y) z.
I think FP is an interesting approach. I wish it could be treated as a programming style instead of as Category and Lambda Calculus Notational Tricks To Impress your Colleagues.
Functions take only one argument. So it’s really taking an (x * y * z), or an ‘x’ and a ‘y’ and a ‘z’. Currying is taking a function of type (x * y * z) -> w and transforming it to (x -> (y -> (z -> w))), aka (x -> y -> z -> w). This is useful because it allows us to create little function generators by simply failing to provide all the arguments. This isn’t simply a notational difference, though, because these are completely different types, and this can matter quite a bit in practice. While yes, there is a one-to-one correspondence between the two, that’s not the same as a “notational difference”. Tuples are fundamentally different objects than functions that return functions, and this difference matters on a practical level when you go to implement and use them. You can say that it’s simply a notational difference, but it’s untrue, and implicit conversions from one to the other do not mean they are “the same thing”.
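The transformation described here can be sketched in JavaScript (curry3 is a hypothetical helper for fixed arity 3, not a library function):

```javascript
// Transform a function of three arguments into a chain of
// single-argument functions.
const curry3 = (f) => (x) => (y) => (z) => f(x, y, z);

const add3 = (x, y, z) => x + y + z;
const curriedAdd3 = curry3(add3);

console.log(add3(1, 2, 3));        // 6
console.log(curriedAdd3(1)(2)(3)); // 6, same result, different type
```

Note that `curriedAdd3(1)` is a perfectly good value on its own, which `add3` has no analogue for.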
In Haskell functions may take only one argument. In mathematics and in programming languages, it depends.
https://math.stackexchange.com/questions/2394559/do-functions-really-always-take-only-one-argument
In essence, it doesn’t depend. The notation depends, but the notation represents one thing: f(x,y,z,w) is an implicit tuple of x,y,z,w. Without this there’s no concept of the domain of a function. This is all philosophical waxing but once you talk about currying it starts to matter because it affects what things are possible because not all arguments are provided at once. You could argue that objects and mutability sidestep this but I’d also argue that mutation within a function begins to deviate from a well defined “mathematical” function. That may be fine for you, and that’s okay. However, since this conversation is primarily about definitions, and we talked specifically about the mathematical way of modeling programming with functions, you know, it matters.
For example, in JavaScript, f(x)(y)(z) and f(x,y,z) are literally different computations, and while there are times you can convert between the two freely, there are things that f(x)(y)(z) can do that f(x,y,z) cannot. For example, I can use f(x)(y) to create a callback to defer computation, because f(x)(y) returns a function which takes a “z”. That’s genuinely useful. Now you can meaningfully argue that with objects you can do the same thing, and that’s great, but this is about functions and function passing. So it is actually meaningful to describe what you can and cannot do with these things.
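A small JavaScript sketch of that deferral pattern (all names here are hypothetical):

```javascript
// Curried logger: supply the source and level now, the message later.
const logEvent = (source) => (level) => (message) =>
  `[${source}] ${level}: ${message}`;

// Provide the first two arguments up front...
const warnFromUI = logEvent("ui")("warn");

// ...and hand the remaining one-argument function off as a callback later.
const messages = ["slow render", "missing prop"].map(warnFromUI);
console.log(messages); // ["[ui] warn: slow render", "[ui] warn: missing prop"]
```

With an uncurried `logEvent(source, level, message)` you would need an explicit wrapper function at the call site instead.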
This is all philosophical waxing but once you talk about currying it starts to matter because it affects what things are possible because not all arguments are provided at once. You could argue that objects and mutability sidestep this but I’d also argue that mutation within a function begins to deviate from a well defined “mathematical” function.
Very little in programming is a mathematical function. Even something like (==) : a -> a -> bool isn’t a function, as its domain would be the universal set.
Also, I don’t think all mathematical functions are curryable? Like consider the function f[x: R^n] = n, which returns the number of arguments passed in. I don’t think there’s a way to curry that.
A year or two ago I wrote an Idris function that took an arbitrary number of (curried) arguments and put them in a list (and gave you the argument count). From what I remember it used a type class with one instance for (->) (the accumulator) and one for the result type, and a lot of type inference. I’ll dig it out later.
Edit: That was an already-curried function. You could potentially automatically curry f[x: R^n] using the same technique since, in Idris, tuples are constructed like lists: (1, 2, 3) is the same as (1, (2, (3, ()))), so you could deconstruct a cartesian product while simultaneously producing a chain of (->) function constructors.
Both of these points while interesting, and I think important questions to ask, don’t affect the position that currying is not simply a notational difference.
In essence, of course it depends, but it’s trivia. The course notes for the multivariate calc course at MIT begin “Goal of multivariable calculus: tools to handle problems with several parameters – functions of several variables.” I’m amazed that some Haskellian or Curry fan has not intervened to correct the poor fool teaching that course all wrong: “Excuse me, but those are not multiple variables, they are vectors! Please write it down.” If you did say that, and the Professor was in a good mood, she’d probably say: “Yes, that’s another way to look at the same thing.” And if you then insisted that “a well defined mathematical function” has a single variable, well … As usual, the problem here is that some trivial convention of the Haskell approach is being marketed as the one true “mathematics”.
I have no idea what a “mutation within a function” would mean in mathematics. Programs don’t have mathematical functions; they have subroutines that may implement mathematical functions. There is no such thing as a callback in the usual mathematical definition of a function. Obviously, within a programming language, there will be differences between partial computation or specialization of a function and functions on lists. You seem to be mixing up what programming operations are possible on subroutine definitions with the mathematical definition of a function - unless I’ve totally missed your point. Obviously, for programs there is a big difference between a vector and a specialized function. But so what?
You have confused operads with categories, I think.
You can be upset about it, but currying isn’t simply a notational difference from multivariate functions.
Hey, I’m Kabir and I write about design, the web, systems, programming languages, AI, and cryptography. Mostly things I’m working on at the moment — I have plenty of posts planned but seldom get the time to write them.
For school, we have to think about entrepreneurial projects related to blockchain. Every single time we find a nice idea, either it has already been done, or blockchain technology is irrelevant to it (the idea can be done without it). Our group hasn’t advanced for two months now.
A hammer looking for a nail: a lesson I have heard before is to look for a problem with real value that hasn’t been solved, and this doesn’t seem to be taking that approach.
Success in most things is about managing the expectations of others. Say no more often. Ask the manager to prioritize.
Unfortunately I am not a great expert, still gathering wisdom myself. Though I hope to hear what others can suggest.
I wrote an article on finding ideas. Essentially, it is important to find problems and treat them as opportunities, rather than finding solutions first and the problems they solve later.
Maybe start from problems: look for markets for lemons, adverse selection, agency costs. As a rough rule of thumb, any market in which someone can earn a commission. And focus really tightly - rather than land titles on the blockchain, attack mineral or oil rights. Look up what people are suing each other over and you know what corner cases to handle.
Things that might be useful:
What’s wrong with doing something that’s already been done? Unless you’re doing research, there’s usually room for more than one interpretation on how to solve a problem.
That’s actually a good point. YC often says don’t worry if someone has thought of your idea already. Just beat them in execution. Tech history is littered with better ideas or bad implementations of similar ones that lost to better executed and/or marketed ideas.
Although I warn it might be unpopular, you might want to try something similar in concept but not quite blockchain: the benefits of the blockchain without necessarily being one. Here are a few I’ve heard or was pushing that may or may not be implemented by a startup by now:
Transactions are done with traditional databases that use a distributed ledger to tally up final results. This is similar to what banks already do where most transactions are hidden in their databases with some big chunks of money moved between banks. It works.
Instead of just a coin in the ether, Clive Robinson on Schneier’s blog suggested creating a financial instrument that is tied to a number of commodities or other currencies in such a way that it remains really stable. As in, not a speculator’s tool like Bitcoin. I found one company that did this with several currencies plus carbon credits; I just can’t remember the name.
Instead of miners, one might again use a low-cost technology for transactions, but people need an account with the service to participate, costing about a dollar or so a month (or yearly). Kind of like with texts, they buy blocks of transactions. The providers are non-profits with chartered protections, with the provisioning or exchange being where the new tech comes in to provide accountability.
I’d do a combination of these if I entered the market. I’m not planning to right now. So, sharing the ideas with others in case someone wants to have a try at it while money is raining from the sky on those that utter the words “blockchain” or “decentralized.” ;)
This could be a good fit for something like SimpleNote. Looking at the data structure, since it’s not persisted I am a little skeptical about memory usage; one good solution could be to abstract out the interface so adapters can be written. That would allow storage backends like IndexedDB, or Node.js with LevelDB, etc. It would be really nice if you showed some memory usage benchmarks.
The memory usage can definitely go higher as the number of documents increases. It is pretty good at compressing prefixes, but suffixes don’t get the same compression. The actual API allows you to store and load the index as needed, but I might look into database adapters.
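The prefix compression mentioned here can be pictured with a tiny trie sketch (hypothetical code, not the library’s actual data structure): words sharing a prefix share nodes, while each distinct suffix needs its own chain of nodes.

```javascript
// Insert a word into a trie built from plain objects: one node per
// character, with an `end` flag marking complete words.
const insert = (root, word) => {
  let node = root;
  for (const ch of word) {
    node[ch] = node[ch] || {}; // reuse the node if the prefix already exists
    node = node[ch];
  }
  node.end = true;
  return root;
};

const index = {};
["search", "searches", "searching"].forEach((w) => insert(index, w));
// The shared prefix "search" is stored once; only the suffixes
// "es" and "ing" add extra nodes, which is why suffixes don't compress.
```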
I tried MoonJS just for kicks in one of my recent projects (http://invatar.ga, shameless plug). To set context, I have heavily used React.js, Vue, Angular 2+, and Dio.js, and have even written one myself (never published it, though). Among other issues, the one thing that disturbed/frustrated me most and stood out was the way it forced me to invoke methods on this in a component: this.callMethod('bar', ...params). I mean, avoiding getters/setters, ya sure, makes sense; the claim is that you can’t detect changes otherwise (which I don’t agree with; you can use getters/setters) or something. But why would you ever abstract out methods like that? Put them in a namespace: this.methods.bar(...params). In today’s modern JS age people are more unforgiving and they have more choices; with these kinds of caveats it’s not only hard for me to use Moon, it’s even harder to convince my team members to use it.
Hey! I appreciate the feedback. I agree with the points about this.callMethod, and have addressed them in v1. You will be able to do something like this.methods.bar(); rather than callMethod. The main reason it even existed in the first place was that it had to change the context of the method so that this would point to the instance.
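The context problem being described can be sketched like this (hypothetical code, not Moon’s real internals): methods defined in an options object need `this` rebound to the component instance before they are handed out.

```javascript
// Bind every method from the options object to the component instance,
// so `this` inside each method refers to the instance when called later.
function Component(options) {
  this.methods = {};
  for (const name of Object.keys(options.methods)) {
    this.methods[name] = options.methods[name].bind(this);
  }
}

const comp = new Component({
  methods: {
    whoAmI() {
      return this instanceof Component ? "bound to instance" : "unbound";
    }
  }
});

console.log(comp.methods.whoAmI()); // "bound to instance"
```

Without the `bind`, calling `comp.methods.whoAmI()` would set `this` to the `methods` object rather than the component.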
Interesting read, but it makes me wonder: why does Moon exist rather than simply improving Vue? Are there enough fundamental differences to merit an either/or? This is coming from someone who has read about Moon, but uses Vue quite a lot in production.
I get this question a lot. There are a lot of fundamental differences between Moon and Vue that would require a lot of breaking changes to Vue. The Vue compiler works in a completely different way from the Moon compiler (the differences are shown in the article).
Also, Moon uses .set rather than getters/setters, which would also require a breaking change in how observers work in Vue. If you compare the source of Moon and Vue, you’ll see that they are completely different implementations.
They are so different that it would essentially be rewriting Vue itself, and would have little chance of actually being merged into Vue. Check out my article Introducing Moon for more on why I made it.
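The .set vs getters/setters distinction can be sketched roughly like this (hypothetical code illustrating the two styles, not the actual Moon or Vue source):

```javascript
// Explicit updates (Moon-style): changes go through a set() method.
const explicitStore = {
  state: { count: 0 },
  watchers: [],
  set(key, value) {
    this.state[key] = value;
    this.watchers.forEach((w) => w(key, value)); // notify on every set()
  }
};

// Implicit updates (Vue 2-style): a getter/setter pair intercepts assignment.
const implicitStore = { watchers: [] };
let _count = 0;
Object.defineProperty(implicitStore, "count", {
  get() { return _count; },
  set(value) {
    _count = value;
    this.watchers.forEach((w) => w("count", value));
  }
});

const log = [];
explicitStore.watchers.push((k, v) => log.push(`set ${k}=${v}`));
implicitStore.watchers.push((k, v) => log.push(`assign ${k}=${v}`));

explicitStore.set("count", 1); // observer runs via the method call
implicitStore.count = 1;       // observer runs via plain assignment
console.log(log); // ["set count=1", "assign count=1"]
```

Switching a codebase from one style to the other changes every observed property access, which is why it would be a breaking change for Vue.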
This looks pretty interesting. I see you have a router plugin built already as well. The only suggestion I’d make for that is to allow non-hash routing. I hate having # in my URLs.
I don’t have it open source, but if you take the TodoMVC code and run it alongside other frameworks’ TodoMVC implementations, you can compare how long each operation took.
I submitted a PR to js-framework-benchmark, and Moon should get included in the next round of benchmarks.
Reading the homepage, I cannot really distill the benefits of Moon over the rendering model of React. It would be beneficial for our UI libraries to be smaller in size, but we shouldn’t underestimate the complexity of state-of-the-art virtual DOM rendering. I wonder to what extent Moon can compete with React on performance-oriented features (e.g. React Fiber or async rendering).
At its core, Moon handles every single side effect with a driver. Moon uses a pretty fast virtual DOM diffing algorithm that performs better than React in benchmarks, but a topic I haven’t covered much is that you can use React with Moon. I haven’t tried it much, but it is pretty simple to make a driver that outputs to React directly, giving you features like Fiber or async rendering.
Can you provide some more details about the benchmarks?
The benchmarks were run locally on my computer, but benchmarks for previous versions of Moon are available in js-framework-benchmark, and there is a PR for the latest version. I’ll update the graphs when it gets merged.
It was merged and the benchmarks were updated! Check out the current results here. Moon is only non-keyed at the moment because you manage local DOM state yourself instead of Moon preserving it by reordering nodes.