To my endless frustration, I can’t talk about my day job. But on the side, I’ve been having a great time working through Coursera’s probabilistic graphical models course, exploring the Rust programming language, and doing lots of modernist cooking.
Maybe I’m missing something here, but this argument doesn’t seem coherent. Monads and actors solve different problems. Isn’t this a false choice?
You are correct. The Reddit thread linked by @raganwald is actually *gasp* pretty damn good.
I found this article very interesting. I’ve never spent time in a language which is lazily evaluated by default, so I’ve never run into this problem; I’ll occasionally opt into laziness, but usually I don’t, so this is an interesting concern to me. For people who have spent time in Haskell, Clojure, or something else lazily evaluated by default: do you find it harder to reason about thunk leaks than about regular, eagerly-evaluated memory leaks?
Yes, but I suspect that’s largely a (barely existent) tools problem.
Let’s consider a more complex touch interaction.
Take that adjustment list on the right. Say you wanted to be able to pan it to see more, tap an item to adjust it, or press and hold to reorder or delete or something. If this were a more complex application, you might be able to double-tap an item in such a list, too. Aren’t you going to end up reinventing UIGestureRecognizer to deal with the interactions here?
Cross-posted from the comments section there:
[The] advice about the button is good here, but I’ve got one small correction and some bonus explanation for interested readers.
I’d like to clarify a few points about offscreen drawing as described in this post. While your list of cases which might elicit offscreen drawing is accurate, the elements of that list trigger two grossly different mechanisms, each with very different performance characteristics, and it’s possible that a single view will require both.
In particular, a few of them (implementing drawRect: and doing any CoreGraphics drawing, or drawing with CoreText [which just uses CoreGraphics underneath]) are indeed “offscreen drawing,” but they’re not what we usually mean by that phrase, and they’re very different from the rest of the list. When you implement drawRect: or draw with CoreGraphics, you’re drawing on the CPU, and that drawing happens synchronously within your application. You’re basically just calling some function which writes bits into a bitmap buffer.
The other forms of offscreen drawing happen on your behalf in the render server (a separate process) and are performed on the GPU (not on the CPU, as in the previous paragraph). When the OpenGL renderer goes to draw each layer, it may have to stop for some subhierarchies and composite them into a single buffer. You’d think the GPU would always be faster than the CPU at this sort of thing, but there are some tricky considerations here. It’s expensive for the GPU to switch contexts from on-screen to off-screen drawing (it must flush its pipelines and barrier), so for simple drawing operations, that setup cost may exceed what it would have cost to do the drawing on the CPU via e.g. CoreGraphics in the first place. So if you’re trying to deal with a complex hierarchy and are deciding whether it’s better to use -[CALayer setShouldRasterize:] or to draw the hierarchy’s contents via CG, the only way to know is to test and measure.
You could certainly end up doing two offscreen passes if you draw via CG within your app and then display that image in a layer which requires offscreen rendering: for instance, if you take a screenshot via -[CALayer renderInContext:] and then put that screenshot in a layer with a shadow.
Also: the considerations for shouldRasterize are very different from those for masking, shadows, edge antialiasing, and group opacity. If any of the latter are triggered, there’s no caching, and offscreen drawing will happen on every frame. Rasterization does indeed require an offscreen drawing pass, but so long as the rasterized layer’s sublayers aren’t changing, that rasterization will be cached and reused on each frame. And of course, if you’re using drawRect: or drawing yourself via CG, you’re probably caching locally. More on this in “Polishing Your Rotation Animations,” WWDC 2012.
Speaking of caching: if you’re doing a lot of this kind of drawing all over your application, you may need to implement cache-purging behavior for all these (probably large) images you’re going to have sitting around on your application’s heap. If you get a low memory warning and some of these images are not actively being used, it may be best to get rid of those stretchable images you drew (and lazily regenerate them when needed). But that may end up just making things worse, so testing is required there, too.
One of the things I really like about learning Erlang (I’ve so far worked through about half of http://learnyousomeerlang.com/) is that, because of the common pattern of looping and listening for incoming messages, it is quite easy to see how you would use this functional language with actual changing state. Because Erlang programs commonly loop by calling themselves after handling each message, they can pass modified state into the next iteration, which lets the program’s behavior change over time.
In contrast, I haven’t really gotten far enough with Haskell to start dealing with monads, so it’s not clear to me how to start thinking about solving these kinds of problems in that language.
With respect to this particular issue, a State monad would abstract away the idea of “passing the modified state from the last iteration into the next iteration”: in effect, an extra input and output threaded through each step, with operators to describe the transformations between them.
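To make that concrete, here’s a minimal sketch in Haskell. The State monad below is hand-rolled only to keep the example self-contained (the real one lives in Control.Monad.State from the mtl package), and the message-handling names (handle, runLoop) are invented for illustration:

```haskell
-- A minimal State monad: a computation that takes a state and
-- returns a result plus an updated state.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in (h a, s'')

instance Monad (State s) where
  State g >>= f = State $ \s -> let (a, s') = g s in runState (f a) s'

modify :: (s -> s) -> State s ()
modify f = State $ \s -> ((), f s)

execState :: State s a -> s -> s
execState m s = snd (runState m s)

-- An Erlang-style "server loop": each message updates the running
-- state, but the threading of old state into new -- which an Erlang
-- process does explicitly by recursing with new arguments -- is
-- hidden inside the monad.
handle :: Int -> State Int ()
handle n = modify (+ n)

runLoop :: [Int] -> Int
runLoop msgs = execState (mapM_ handle msgs) 0
```

Here runLoop [1,2,3] plays the role of an Erlang process receiving three messages in turn: each handle step sees the state left by the previous one, without any explicit state argument in sight.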
What makes this a better solution than IcedCoffeeScript? It looks like Elm is another language that compiles down to JavaScript, but it’s unclear what makes it different from the others.
Also, how is FRP different from regular asynchronous programming, other than that you don’t have to use this nested callbacks thing?
I think the difference is mostly in how a problem is conceptualized. In FRP you think of “signal functions” acting on “signals.” Some input is thought of as generating a stream of outgoing events; you then attach signal functions to that event stream to modify it, and finally connect the modified stream to an output of some sort.
From the functional point of view this is nice because the signal functions are pure: they take a signal and return another signal, with no side effects.
Right. In more traditional asynchronous procedural programming, you’d “perform some actions” in response to something happening. In this system, there is some “ground truth” of data (i.e. “what the user has typed”), and everything else flows from it as first-class transformations of that stream of data. Those transformations can be reasoned about, analyzed, visualized, etc., because they’re regularized.
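A toy model of this framing, in Haskell (this is not a real FRP library; representing a Signal as a plain list, and the names keystrokes/shout/display, are simplifications invented here to show the shape of the idea):

```haskell
import Data.Char (toUpper)

-- Pretend a Signal is just a stream of values over time.
type Signal a = [a]

-- The "ground truth": what the user has typed.
keystrokes :: Signal Char
keystrokes = "hello"

-- Signal functions are pure stream-to-stream transformations.
shout :: Signal Char -> Signal Char
shout = map toUpper

-- How many characters have arrived at each point in time.
lengthSoFar :: Signal Char -> Signal Int
lengthSoFar = zipWith const [1..]

-- Everything downstream is a first-class transformation of the
-- ground-truth stream, so it can be inspected and reasoned about.
display :: Signal Char
display = shout keystrokes
```

The contrast with callback-style code is that display isn’t a side effect fired when a key arrives; it’s a value defined once in terms of keystrokes, and the runtime’s job is to keep it up to date.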
I was a bit disappointed by this one. The buildup seemed great, like they were going to solve the same use cases for bookmarks through something radically different (and better). The only actual result we got to see (“dropzilla”) seemed like a minor and incremental change to bookmarks.
Yeah, bizarrely, the author seemed really excited to talk about every other stage of the design process in a complete way, but seemed almost embarrassed by the result of the design process. It felt like dropzilla was an afterthought, even though it was theoretically the deliverable from the process.
True, but all the other stages of the process have built up complex structures and understanding in the minds of all involved, which I expect will yield returns in future related features.
I agree. The tiling support is not particularly novel, and aggregating saved links from services doesn’t seem to solve the problem they stated, either.
I’ve been wondering what a good way to sort and save things would be. del.icio.us had a good idea (tagging content for fast retrieval), but it was hampered by the difficulty of cataloguing (I don’t want to have to think about which tags fit my content). I currently use pinboard.in for a similar service, but it’d be nice to have bookmarks tagged automatically (I think tags can be inferred from page content, hostname, and other metrics) and then to do a general grep over them, similar to the search found in modern mail clients.
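Even just the hostname signal gets you surprisingly far. A purely illustrative sketch (the rule table and tag names here are invented; a real system would also weigh page content and other metrics):

```haskell
import Data.List (isInfixOf)

-- Infer tags for a bookmark from substrings of its URL.
-- The rules are a toy stand-in for a learned or curated mapping.
inferTags :: String -> [String]
inferTags url = [tag | (needle, tag) <- rules, needle `isInfixOf` url]
  where
    rules = [ ("github.com",  "programming")
            , ("arxiv.org",   "paper")
            , ("youtube.com", "video") ]
```

With tags attached automatically like this, the “general grep” step is just filtering the bookmark list on tag and text, the way a mail client searches messages.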
Some insightful analysis (particularly about the experimental method): http://www.neverworkintheory.org/?p=432
I suspect that the effects of the different ideologies would become more pronounced with a larger project, or (as the authors suggest) with team projects. There’s that tricky old saying: “any project small enough to be completed within a software engineering course does not actually require software engineering.”
This is really fascinating work, Jon. This has introduced me to a number of new features I didn’t know existed.
Are you concerned that such “duck typing” on records w.r.t. their field names might lead to loss of the very safety Haskell works so hard to guarantee? One can imagine two record types from separate frameworks, one of which has the fields “name” and “size” (maybe a key for a cache table of encoded data), and the other of which has those fields plus “color” (referring to, say, a shape entity in a drawing application). The former is technically a subtype of the latter, as you’ve defined subtyping, but those fields are semantically distinct, and the type intersection is actually pretty meaningless.
Thanks, Andy, that’s a really good question. This seems to be one of the primary weaknesses of structural typing… One thing that could be done to get around it would be to generalize Field to be polymorphic over the kind of its key; that way, when you want guarantees that spurious relations do not arise between your records and those of other modules, you could use your own universe of keys, like so:
data (:::) :: k -> * -> * where
  Field :: s ::: t
data LocallyAvailableFields = Name | Age
name = Field :: Name ::: String
age = Field :: Age ::: Int
man = (name, "jon") :& (age, 20) :& RNil
Another thing that might be done would be to abstract out even the field representation itself; this would make the implementation a bit more complicated, but it would allow easy declaration of local “field universes”:
data MyFields :: * -> * where
  Name :: MyFields String
  Age  :: MyFields Int
man = (Name, "jon") :& (Age, 20) :& RNil
I sort of like this approach, but I haven’t yet thought through its implications for the rest of the system; currently a lot depends on the structure of the field representation, but it may be possible to abstract that away and allow something like what I have proposed above.
I agree with you. I’d rather read those things here than at HN because here I can filter out the topics I’m not interested in.
One thing that concerns me about the approach of adding a type system to an existing language is that someone must provide (and maintain!) a package which vends annotations for all core and standard library functions. Otherwise, every value originating from those packages is untyped, which either defeats the purpose of the overlaid type system completely, or else requires developers to annotate every value they get across a library boundary, which is tedious and error-prone.