If you really want to do vastly multi-tenant web sites on one box, then having unprivileged users be able to bind low ports like 80 and 443 isn’t enough by itself; they all need to either:
In the latter case, if the HTTP protocol came with a guarantee that clients would never send requests belonging to two different origins via the same TCP connection then you could hypothetically do something clever like have the reverse proxy actually hand over the file descriptor for the TCP connection so it doesn’t have to keep round-tripping bytes to another userland process belonging to the site’s owner. Alas HTTP doesn’t make that guarantee, so you can’t.
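For what it's worth, the descriptor-handover mechanism itself is straightforward on Unix. Here is a minimal Ruby sketch (my own illustration, using a pipe to stand in for the accepted TCP connection) of a "proxy" passing a file descriptor to a "worker" over a Unix domain socket with UNIXSocket#send_io:

```ruby
require "socket"

# A socketpair standing in for the channel between the reverse proxy
# and a per-site worker process.
proxy_side, worker_side = UNIXSocket.pair

# A pipe standing in for the accepted client TCP connection.
client_r, client_w = IO.pipe

# "Proxy": hand the client descriptor over and drop out of the path.
proxy_side.send_io(client_r)
client_r.close # the proxy no longer needs its copy

# "Worker": receive the descriptor and serve the connection directly,
# with no more byte round-tripping through the proxy.
handed_over = worker_side.recv_io
client_w.write("GET / HTTP/1.1\r\n")
client_w.close
data = handed_over.read
```

The catch described above remains: this only works per-connection, so HTTP's willingness to mix origins on one connection rules it out.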
Maybe it’s also a shame that the convention for DNS is to look up an address via an A or AAAA record and then connect to a fixed port number like 22, 80 or 443 on the target; if we had standardised on something more like SRV records with both an address and a port number to connect to being sent back then it’d have been easier to multiplex up to thousands of HTTP serving daemons on one host.
Resolving symbolic names as (host, port) pairs would recognize that services, not just hosts, have locations and can migrate.
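To make the SRV idea concrete, here is a toy Ruby sketch (hypothetical names and records, no real DNS involved) of resolving a service to a (host, port) pair, so thousands of sites can share one address while each daemon listens on its own port:

```ruby
# Stand-in for SRV records: each service name maps to a target host
# AND a port, instead of just an address.
SRV_RECORDS = {
  "_http._tcp.alice.example" => { target: "shared-box.example", port: 8081 },
  "_http._tcp.bob.example"   => { target: "shared-box.example", port: 8082 },
}

def resolve_service(service, proto, domain)
  rec = SRV_RECORDS.fetch("_#{service}._#{proto}.#{domain}")
  [rec[:target], rec[:port]]
end

host, port = resolve_service("http", "tcp", "alice.example")
# The client then connects to (host, port) rather than assuming port 80.
```

Real SRV records (which Ruby's stdlib Resolv can query) also carry priority and weight fields for failover and load spreading; they're omitted here for brevity.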
Great concept! Wish you could do more than just walk around though. Some ideas:
I do wish there was a way to see the non-8-bitized pics (though I like them as default). I’m finding myself googling the actual article a lot just to see them.
> focus on
You start to focus on your surroundings.
I’d like to see the same benchmarks repeated for the Odroid C2.
julia> [1,2,3] + [4,5,6]
Modular arithmetic is not a “metaphor” for groups, it is an example, a special case that explains some, but not all, of what groups are for.
To explain groups, you keep piling on examples. Real-life things like mirror symmetries, shuffles of card decks, and Rubik’s cubes. Then move on to math topics that usually come before groups: number systems, matrices, permutations, plane geometry translations, rotations, reflections.
When you see that all of these things can be studied in the same way using the group axioms, you have an answer to “what’s a group?”
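The axioms are concrete enough to check by brute force. Here is a small Ruby sketch verifying them for the modular-arithmetic example above (Z mod 5 under addition):

```ruby
n        = 5
elements = (0...n).to_a
op       = ->(a, b) { (a + b) % n }  # addition mod n

# Closure: combining any two elements stays inside the set.
closure  = elements.product(elements).all? { |a, b| elements.include?(op.(a, b)) }
# Associativity: (a + b) + c == a + (b + c).
assoc    = elements.product(elements, elements).all? { |a, b, c| op.(op.(a, b), c) == op.(a, op.(b, c)) }
# Identity: 0 leaves every element unchanged.
identity = elements.all? { |a| op.(a, 0) == a && op.(0, a) == a }
# Inverses: every element has a partner that sums to the identity.
inverses = elements.all? { |a| elements.any? { |b| op.(a, b) == 0 } }
```

Swap in a different set and operation (card shuffles under composition, say) and the same four checks apply unchanged, which is the whole point of the abstraction.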
The different processing is necessary because ! can be a method:
>> class A; def !; true; end; end
>> a = A.new
>> if !a then "then" else "else" end
=> "then"
>> if a then "then" else "else" end
=> "then"
>> unless a then "then" else "else" end
=> "else"
Don’t do this.
IMHO, go for meta/finance/culture for your tags on this post.
also the show tag, right?
An older article, but interesting for its historical perspective and comparisons: http://arstechnica.com/science/2014/05/scientific-computings-future-can-any-coding-language-top-a-1950s-behemoth/
If you all can forgive the plug, here is another difficulty facing self-taught recreational mathematicians:
In short, math as a field (a) is not a (one) field, (b) is extremely non-linear, and (c) derives most of its power from its incestuousness. If you learn math in school, my understanding is that you’re just expected to slog through most of the major fields with a bottle, or many bottles, of whisky and helpful tutors until you get basic competency. If you’re doing it on your own you, essentially, still have to do this and will suffer a bit of “missing the inside jokes” until you get your head around a sufficient basis set of skills.
In a real sense programming is nice because you spend some time learning different applications and languages and ideas and it all feels like it’s growing this core skill of programming, which is highly transferable. Math is kind of the same, except that it’s presented much more like taking an intensive course in graphics programming and then another one in real-time robotics control, then stepping back and saying that the core skill you’re growing is just basic programming sanity. This is right, and the saner you get the more you start to realize that both of these might as well just be applications of linear algebra, but it can take a bit more patience than meets the eye.
Math is big. Really big. You just won’t believe how vastly, hugely, mind-bogglingly big it is.
Fortunately, the part of it we’ve discovered so far is quite small; you could learn it all in probably less than 1024 lifetimes.
I had a bizarre idea a while ago. When I try to teach myself mathematical subjects using Wikipedia and other online resources, it’s a serious problem that the page about the concept I’m interested in has dozens of transitive dependencies on other mathematical concepts. Many of the other concepts are already familiar to me, and many are not, and it’s overwhelming to sort through things. Of course, textbooks and planned curricula work around this, but that’s really not an ideal learning style for me…
If someone feels like making this work, go for it. I think it’s a relatively silly idea, but personally I’d probably use it.
Scrape the relevant portion of the Wikipedia link graph to figure out conceptual dependencies among areas of mathematics. It probably makes sense to ignore links outside the first couple paragraphs, and to have a manual cleanup process to resolve cycles.
It definitely makes more sense to use the Mathematics Subject Classification as the backbone for what nodes to include, using the Wiki data only for the edges, but that does involve building a mapping between the two.
With the data collected, build a UI that will let you pick what you want to learn about and present either the concept DAG, or a topologically sorted list based on it. For bonus points, let the reader click to assert they already know something, to simplify their view.
As a freebie, make each item in the to-learn list go to a Google search identifying reading material about it.
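The topological-sort step is the easy part: Ruby's stdlib ships TSort. A sketch with a toy, hand-made dependency graph (hypothetical edges, not real Wikipedia or MSC data):

```ruby
require "tsort"

# Wraps a { concept => [prerequisite concepts] } hash for TSort.
class ConceptGraph
  include TSort

  def initialize(deps)
    @deps = deps
  end

  def tsort_each_node(&block)
    @deps.each_key(&block)
  end

  def tsort_each_child(node, &block)
    @deps.fetch(node, []).each(&block)
  end
end

deps = {
  "group theory"       => ["set theory", "modular arithmetic"],
  "modular arithmetic" => ["arithmetic"],
  "set theory"         => [],
  "arithmetic"         => [],
}

# "Click to assert you already know something" just removes nodes
# from the resulting to-learn list.
known    = ["arithmetic"]
to_learn = ConceptGraph.new(deps).tsort - known
```

TSort also raises TSort::Cyclic when the graph has a cycle, which is where the manual cleanup pass mentioned above would come in.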
Could you (or anyone else) point me to resources to getting toddlers started with math concepts? I’ve been trying youtube, but have not found anything satisfactory.
I’d recommend a Montessori school. Done properly, it has manipulatives (ie physical toys) that have proven to be very good at that.
As an open-source side project, outside of work, I contribute to the Ruby lmdb gem. This weekend, I pushed out a new release with fixes for some concurrency problems introduced by the Ruby wrappers around the LMDB C library. Short story: it was all about managing the dance among the Ruby VM lock, the LMDB write mutex, and interrupts, and making this work in Ruby 1.9 as well as 2.0.
If you need a key-value store, take a look at LMDB, it has some nice concurrency properties (concurrent reads and writes, transactions, snapshots/cursors) and impressive benchmarks.
The same metaprogramming tricks apply in Ruby:
This reduces the benchmark time to 3.5s on my system (compared, cautiously, to the author’s reported 118.62s).
See also http://venturebeat.com/2014/06/25/google-cloud-dataflow.
This week I’ll begin porting my current MVP to our new stack based on Erlang (Elixir) and the JVM (Scala)!
The first thing we will be building is a new Erlang RethinkDB driver based on their new JSON API, on which we will base much of our platform. We will also be building services for ID generation, service registry, and more, all of which will be open sourced in coming months.
Curious why you are using both erlang and jvm in a new stack. What tradeoffs went into the decision to use a mixed architecture like that?
The original prototype was built in Node.js, which worked well for a prototype, but not much else.
We chose Erlang and Scala for several reasons:
The kind of application we are developing (a realtime social application) performs much better on Erlang’s process model, especially when it comes to concurrency and error handling, among many other benefits. The original plan was to build the entire stack on Erlang, but after some investigation with our test data, it was quickly shown that the JVM well outperformed Erlang when it came to jobs like map reduce and machine learning. If we were building the service as a monolithic app, we would have stuck with Erlang for everything, but as the service is being built as a service-based architecture, we decided to build our processing nodes in Scala (chosen over Java mainly due to syntax and actor support).
Just released an executable model of the Calvin distributed database (for my PWL talk tomorrow):
Since Calvin’s implementation is not very accessible (only recently released and still incomplete), this is the best way I’ve found to understand Calvin, and to use as a foundation for explanations. All concurrency and distribution are modeled in a single-threaded process with in-memory data tables, which makes it easier to see what is going on.
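That modeling style, with no threads and just explicit state plus a message queue drained in one loop, is easy to picture. A toy Ruby sketch (my own illustration, not code from the actual model):

```ruby
# Each "node" is plain in-memory data; "the network" is an array of
# pending messages; "concurrency" is one loop draining the queue.
nodes = { a: { store: {} }, b: { store: {} } }
queue = [
  [:a, [:put, :x, 1]],  # a replicated write, delivered to both nodes
  [:b, [:put, :x, 1]],
  [:a, [:get, :x]],
]

reads = []
until queue.empty?
  node_id, (op, key, val) = queue.shift
  store = nodes[node_id][:store]
  case op
  when :put then store[key] = val
  when :get then reads << store[key]
  end
end
```

Because every interleaving is an explicit, reorderable queue, you can step through exactly what each node sees and when, which is hard to do with real threads and sockets.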
This looks like a good place to start reading: http://swift-lang.org/papers/index.php, particularly section 6 (comparisons to other work) of the 2011 paper: http://swift-lang.org/papers/SwiftLanguageForDistributedParallelScripting.pdf.
So far, swift seems like a reinvention of dataflow programming, on top of a mesos-like resource manager.
[EDIT] Oh, wait, this is the other swift programming language. Damn, we’re running out of names.
Not sure why you are being downvoted. Because the OTHER swift looks pretty damn cool and licensed under Apache license! Thanks for introducing me to the OTHER swift…
This article strikes me as a contrary indicator, like the hype about duck typing a few years ago.
> Implementing a precise type signature proves that the software does what it says on the tin.
No. Typing constrains, but does not determine, semantics. And typing is orthogonal to implementation, which is all about keeping the promises on the tin.
> Names are totally useless for reasoning about software.
That may be persuasive, if you are the sort who rejects all code documentation, but it assumes that you’ve already banned dynamic binding. So the argument circles back on itself.
Programmers fall victim to reductionism, usually in cycles. This should not distract us from the real, but incremental, gains of languages like Scala and Haskell.
The design docs briefly mention Calvin (“This is another great paper.”), but without a close comparison. I’d be interested in hearing the Cockroach team’s views on the tradeoffs they make differently etc. Are the replication and consistency guarantees the same? Throughput and latency?
 Since I’m speaking on Calvin next week: http://www.meetup.com/papers-we-love-too/events/171291972.
You are welcome to ask any clarifying questions on the list – email@example.com
To be honest we’re at such an early stage that we may not have answers for you, but there’s no harm in asking.
That sounds like the data store a lot of people would want! I’m curious to know who is behind this project, because it cannot be just a hobby project.
FWIW one of my coworkers identified one of those as a former googler.
I’m under the impression this comes from Poptip, but it’s just my impression :)
andybons from Poptip, here. You may have seen me in the contributor list as the top committer, but if you look closely at the stats, Spencer has written the most code and wrote the original design document linked to in the README. He is the mad scientist behind Cockroach.
Poptip has been supportive, but it was not born within our walls and I do not work on it full-time.
From the list of authors, everyone works at Square, Inc. now and we are all ex-Google (except for one person). It is more than a side-project, but I, personally, don’t have the domain expertise that the other authors have within this space, so it has largely been a fun learning experience for me.
I hope this clarifies things a bit. I don’t want to speak for the other authors regarding some of the questions raised, but I can say they are brilliant people who I am lucky to work with.
Thanks Andrew: yes, it clarifies things a lot! This is an ambitious project: kudos for tackling this. Now that I know most authors work at Square, I understand better the emphasis on the ACID properties which, I guess, are quite useful to them.
Dup, actually: https://lobste.rs/s/ad4bzy/a_little_riak_book. (Not that it matters, some links are worth reposting.)
Wonder why the dup detector missed that?
For some reason, I thought it was 3 or 6 months, but it’s actually a 30-day window.
Inside of 30 days it will not allow the URL to be reposted, but after 30 days it still shows a warning message to the submitter with a link to the old story. It’s up to the submitter to decide whether it’s worth reposting again.
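In other words the policy is two-tiered. A sketch of that logic (hypothetical, not the actual Lobsters code):

```ruby
require "date"

# Hypothetical sketch of the described dup policy: hard-block reposts
# of a URL within 30 days, only warn the submitter afterwards.
def dup_action(last_posted_on, today = Date.today)
  return :allow if last_posted_on.nil?  # URL never posted before
  (today - last_posted_on) <= 30 ? :block : :warn
end
```

A 3- or 6-month memory, as guessed above, would just be a larger window constant.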
I saw the warning and reposted it to ride on the coat tails of “The Little Redis Book.”
Trying to figure out how to combine a discussion forum with wiki functionality: temporal and accretive knowledge recording, along with discoverability for newcomers.
It would be cool to cluster stories by their tags. So when I am on this page, the sidebar would have a list of stories with the highest number of overlapping tags, ordered by votes. t/databases+distributed in the sidebar, maybe dimmed. I have never hacked on a Rails app, so not sure where to start.