Man! People are scum.
Because they found security vulnerabilities?
People aren’t scum if they break a barrier. They are curious and mistakenly devious.
Towns have rolled out their own (really awesome) infrastructures and generally can provide homes with better service for a low utility fee. They get met with such harsh opposition from the monopolies. I think having towns invest in their own infrastructure is amazing… need more of that.
This is why I prefer to run a browser built by a neutral, not-for-profit organization, rather than one built by employees of an advertising company.
Ditto. I very much trust the Mozilla organization over any other company in this space. Google’s hesitancy on privacy and their real-name policies really really irk me. This is not only due to the obvious privacy issues, but also due to the related issues regarding transgender identities and the lessened ability to hide one’s gender online. This is a safety issue. Google just doesn’t get it.
I share similar feelings, but every time I switch to Firefox, I find myself switching back to Chrome a few months later. Last time it was due to missing APIs for extensions.
Steve, what are your thoughts on Rust so far? You obviously like it if you’re working on a book… Where do you see it fitting into your array of tools?
I’ve been using Ruby for the past couple years, but prior to that I used primarily C, C++, and Mono/C#, and in some ways I kind of preferred those languages. I’ve been interested in spending some more time investigating Go and Rust out of personal curiosity, so I’m interested to hear your thoughts on Rust.
I’ll tell you why Rust is important: static analysis. Rust is the convergence of language design ideas that let you specify behavior more strictly. For instance, you can specify that certain fields of a struct, or values passed to a function, don’t change. This, on top of an actually strong type system (which C and C++ don’t even have), gives you guarantees you can assume to be true about your code. It should allow for greater reuse, because you can trust black-box implementations a bit more: they are beholden to those guarantees. You’ll write fewer silly tests. Etc.
The killer application for this these days is concurrency. Once you start making guarantees about state, you can use those guarantees to ensure that the invariants required for concurrent execution hold. For instance, if fields in a struct are immutable, then they can be shared. Rust also tracks ownership of data, so you can mark a piece of mutable data as having exactly one owner. So, it can enforce correct concurrent access to data at the language level.
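A tiny sketch of what those guarantees look like in practice (the names are mine, not from the thread): bindings are immutable unless marked `mut`, and moving a value transfers its single ownership.

```rust
struct Point { x: i32, y: i32 }

// A shared borrow (&Point) promises the callee that nobody mutates
// the value while it holds the reference.
fn norm_sq(p: &Point) -> i32 {
    p.x * p.x + p.y * p.y
}

fn main() {
    let point = Point { x: 3, y: 4 }; // immutable: no `mut`
    assert_eq!(norm_sq(&point), 25);

    // Single ownership: moving `point` transfers it; using the old
    // name afterwards is a compile-time error, not a runtime bug.
    let owned = point;
    // println!("{}", point.x); // error[E0382]: use of moved value
    assert_eq!(owned.x, 3);
}
```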
All of this means I can see Rust being the language used to build the next modular webserver/db. Probably something we tie our code directly into. Process workers (à la delayed_job, resque) could then be very closely integrated with this webserver/db, possibly written directly in Rust as well. Why? Because you want to ensure that these [ideally] simple cloned processes, which require heavy computation, can be executed concurrently. Also, your background processes generally make decisions that require a high level of reliability (validating message signatures), so Rust can give you a better baseline assertion of code correctness as well.
Hey, sorry this took so long to get back to you. I missed this thread. :'(
I actually explicitly wrote about this as the first chapter of the book: http://www.rustforrubyists.com/book/chapter-01.html
I love the simplicity. Add a few git-commands and you can easily sync the address book across computers through Github.
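Something like this is all it takes (a sketch; here a local bare repo stands in for the GitHub remote, so swap in a real remote URL to sync across machines):

```shell
set -e
tmp=$(mktemp -d)
# Stand-in for the hosted remote (e.g. a GitHub repo).
git init --bare "$tmp/addressbook.git"

# Version the contacts file and wire it up to the remote.
mkdir "$tmp/book" && cd "$tmp/book"
git init -q
echo "Jane Doe <jane@example.com>" > contacts
git add contacts
git -c user.name=me -c user.email=me@example.com commit -qm "initial address book"
git remote add origin "$tmp/addressbook.git"

# Sync: push your changes; on other machines, clone once and then
# pull/push in the same way.
git push -q origin HEAD
```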
Please don’t put my contact information on GitHub…
Hmm. This is an interesting article about encrypting your external git repos and using Dropbox: http://syncom.appspot.com/papers/git_encryption.txt. That would work.
Glad you like it!
I’ve updated the post to include a nice mailto function for even simpler usage.
Or….. John could just always sit down.
You know, I used to capitalize all of my html—err sorry, HTML tags back in the day. It was what you were supposed to do or something. And then I realized I didn’t have to do that and could devote more capital letters TO COMMENTS ON THE INTERNET. And yet, here we now are, with this website making me feel bad… capitalizing all of my tags in my css for me. :)
You know, it’s interesting. I think all three comments were in bad taste, Steve’s being the least critical due to (and I know him personally, so he won’t deny it) his inability, just like mine, to say things with any amount of tact. And Corey’s was the absolute worst, most misguided thing you could ever say to anybody in this circumstance, especially considering he does, in fact, know how to speak with a reasonable amount of finesse. And they apologized (except zeeg, who must now honestly be considered an asshole).
Reinvention of things we might otherwise take for granted is very useful. I can never remember how to properly use sed, even in the most common of use cases. Do some of you use something called zsh? Well, you know, bash exists, right? csh before that? ksh!!?!
Ok. It’s very proper to rework how we interact with our tools. This is what is important: how we interact with them. Right now, I’d say it is difficult to write new human interfaces to existing code. We generally have this big monoculture that promotes in-house solutions to every problem. Where is that code-reuse our foreparents spoke of?
We really need to separate the functionality of our programs by behavior and make the interface something completely external. sed should have some core code that performs the operations, and that code should be easily usable from every language in the known world, past, present, and future. Putting code together should be language-agnostic. Rethinking how we interact with machines should be the most important thing. That should be easy. That should also be agnostic to language. The solutions should be easy to use, install, and manage. Yet we only really do this reasonably within a single language. Too many walls around our cultures… exhibited by comments against Node such as these.
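For instance (a hypothetical sketch, all names invented): if the engine of a tool like sed were a library exposed over the C ABI, any language with a C FFI (Python, Ruby, Perl, …) could reuse it directly instead of shelling out.

```rust
// A toy "engine" operation exported with the C calling convention.
// Any C-FFI-capable language could load this from a shared library.
#[no_mangle]
pub extern "C" fn subst_first(buf: *mut u8, len: usize, from: u8, to: u8) -> bool {
    // Replace the first occurrence of `from` with `to` in the buffer;
    // report whether a substitution happened.
    let slice = unsafe { std::slice::from_raw_parts_mut(buf, len) };
    if let Some(b) = slice.iter_mut().find(|b| **b == from) {
        *b = to;
        true
    } else {
        false
    }
}

fn main() {
    // Calling it from Rust here, but the point is the C ABI surface.
    let mut data = *b"hello";
    assert!(subst_first(data.as_mut_ptr(), data.len(), b'l', b'L'));
    assert_eq!(&data, b"heLlo");
}
```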
And this agnosticism of culture is the most important thing we could ever accomplish in this technological era, and we simply don’t do it that well.
Argh. Let me just point out the thing I dislike about design-by-committee languages. This is what happens: you see a library that everybody uses, and you think, “I guess that library is REALLY what our language is about.” What do you do? You make that library standard. Seems simple. We had pthreads, OpenMP, etc.… and now those are going to be simply C. (As an aside, the name C11 is really funny.)
In one way, this is good… more people will do the right thing maybe? (if it wasn’t so difficult in the first place.) In another, it makes adopting C on a new platform really hard and thus building new platforms really hard.
Basically… we are getting away from C’s simple values. These things should be done in a library. That library may be in a C-environment language (Python, Ruby, Perl… all C-environment-derived languages). It’s just too hard to use this stuff as it is without new forms of expression. Nobody is going to get this right, and we have no good means of debugging it. In the end it comes down to this: why would you add concurrency and thread-safety foo to a language that isn’t even type-safe?! :)
What we deserve is a replacement for C altogether. Something with a veeeery small syntax, static analysis, and proper type-checking. After thinking about it for months, I’d say Rust without any of its libraries or assumed memory models (and thus without managed boxes), where data is assumed immutable and side effects are discouraged.
Oh, and your kernel should not have any locks. It’s 2013, c'mon.
As a former owner of an ISP that provided broadband: it is a very tough market unless you are in a very unique area where the telcos and cable companies don’t want to invest. Even then, it’s hard to compete unless you can achieve sufficient scale. I don’t like where the market is going, with more and more centralized control. There wasn’t much point in the anti-monopoly telephone company breakup now that pretty much everything is back to AT&T and Verizon.
I hate this so much. Yeah, sure… the government is totally regulating this market well. It totally promotes and cultivates innovation. TOTALLY.
My first startup was 3D printing in the cloud, but this was a few years back. Good times…
That was an exciting time to be a part of that facet of technology. It was the future back then, and it is the future now… if that makes any sense at all. :P
This is awesome! Is there a convention to follow yet where we could tell it to print from a git repo? optionally, print a particular revision? ;)
*ponders about whether or not if the university tells me to teach java that this would be an acceptable loophole*
And there will be much rebinding of Caps Lock to Ctrl.
Use vim. That way you can bind caps lock to escape. This is the One True Way: where one can use a single key to both switch modes and cancel production in Starcraft.
What we need are tools to allow us to simultaneously use bitbucket, github, and gitorious (and optionally self-host) at the same time. You know… since git is distributed, why put up with centralized systems, etc etc. Just need to solve the whole federation of issues thing. As an rstat.us founder, I’ve thought about how to build this a bit.
I am up for figuring out this problem; it’s something I’ve thought about (and “faked” with multiple remotes). What would it take to build this kind of tool?
Instead of devising an autonomous system immediately, like I describe doing, a stepping-stone solution would be some sort of interface and server that periodically uses the sites’ APIs to look at what has changed and duplicates it on the other sites. That requires some hardcoding and is potentially fragile with respect to API changes. You can easily just force-push the repos up to the other sites to duplicate them (yay for git already doing that work), but issues require some conflict management, and that’s the hard part: to let users comment on whichever site they want and still keep a consistent ordering, you essentially need synchronized clocks, which distributed systems can’t give you.
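The easy half (the repos themselves) really is just git. A sketch, with local bare repos standing in for the hosted remotes; in real use these would be the github/bitbucket/gitorious URLs:

```shell
set -e
tmp=$(mktemp -d)
# Stand-ins for two hosted mirrors.
git init --bare "$tmp/github.git"
git init --bare "$tmp/bitbucket.git"

mkdir "$tmp/work" && cd "$tmp/work"
git init -q
git -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m "first"
git remote add github "$tmp/github.git"
git remote add bitbucket "$tmp/bitbucket.git"

# Periodically force-push every branch to every mirror; git already
# does the hard part. Issues/comments are what this can't cover.
for r in github bitbucket; do
  git push -q --force --all "$r"
done
```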
Fossil thinks the solution is to bake the issues into the SCM directly as blessed metadata. I disagree: I believe this is a service issue, given the modularity that git stresses. Build the system that mitigates the consistency issues and allows for flexibility, and it will already be useful… then you can bake it into web services later (namely GitLab) and it becomes federated and self-hostable.
The initial goal is clear: no need to worry about GitHub going down… the system will repair itself after it comes back up. No reason it can’t also work for Mercurial, etc.
It is becoming clearer to me personally that Stallman should step down and be replaced. But by who? Who could make GNU relevant in this era?
I would certainly discover that I need friends to give me IP addresses. More likely: that I need IP addresses to communicate with friends.
tl;dr : The author mistakes cache coherency for a thing that scales.
The concept of multiple-readers-single-writer has been known for ages, with many solutions. Generally you need a lock because there are race conditions when a write is not atomic. The point of the single writer is that readers can read simultaneously and share their lock as long as nothing is writing (and, obviously, only one writer can ever exist at a time).
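As an illustration of the pattern itself (standard-library Rust, not the article’s scheme): a `RwLock` admits many simultaneous readers or exactly one writer, so readers never observe a torn write.

```rust
use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    // Many readers may hold the lock at once; a writer waits until
    // it can be the sole holder.
    let counter = Arc::new(RwLock::new(0u32));

    let mut handles = Vec::new();
    for _ in 0..4 {
        let c = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            *c.write().unwrap() += 1; // exclusive: single writer
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.read().unwrap(), 4); // shared: many readers
}
```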
Trivially, you can avoid the lock only if the write is atomic. It has been known for a long time that message-passing systems avoid contention by copying contended data; a write effectively becomes atomic if the copy is only made visible after the update is done. The article states that CPU cache coherency is used to copy the data. Somehow. But the cache will certainly see a partial update, right? If the CPU cannot update memory atomically, then it cannot update cache atomically either. This cannot work as described.
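The copy-then-publish idea can be sketched like this (a toy, names invented; real code needs safe reclamation of the old copy, à la RCU or hazard pointers): the writer mutates a private copy, then publishes it with a single atomic pointer swap, so readers see either the old value or the new one, never a partial update.

```rust
use std::sync::atomic::{AtomicPtr, Ordering};

struct Config { threshold: u32 }

// The currently published copy; null until the first publish.
static CURRENT: AtomicPtr<Config> = AtomicPtr::new(std::ptr::null_mut());

fn publish(c: Config) {
    // Build the new copy privately, then make it visible in one
    // atomic swap. The swap is the only thing that must be atomic.
    let new = Box::into_raw(Box::new(c));
    let old = CURRENT.swap(new, Ordering::AcqRel);
    // Real code must reclaim `old` only after all readers are done
    // (RCU, hazard pointers, epochs); we leak it to keep this short.
    let _ = old;
}

fn read() -> Option<u32> {
    let p = CURRENT.load(Ordering::Acquire);
    if p.is_null() { None } else { Some(unsafe { (*p).threshold }) }
}

fn main() {
    publish(Config { threshold: 1 });
    assert_eq!(read(), Some(1));
    publish(Config { threshold: 2 });
    assert_eq!(read(), Some(2));
}
```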
Also, this article assumes that the last-level cache contains a value consistent with the writer. You can’t rely on cache coherency! It doesn’t scale! Coherency == global consistency == non-scale-town. On a highly scaled system (implying distributed), the nodes may exist on other machines, and even locally the cache may be split or non-uniform. Why? Because those algorithms aren’t magical. It scales so poorly that CPUs with more than a certain number of cores don’t even include it… it costs too much.
So you’ll have to send messages, basically implementing your own multicasted coherency model. Otherwise, you are limiting yourself greatly to having your writers and readers all share the same cache, and thus limit the number of processes. Not to mention the rather frightening prospect of this running on a hypervisor.
So, this system as described to “work at all levels of scale” is really just a single writer, with all readers holding a local copy and organized in some form of tree for multicast (there are lots of interesting, fascinating papers on organizing such a tree). Which means it is, by my naive estimate, roughly O(log n + k) to distribute (to message-pass the new value) in a balanced tree, where k is the number of times a server drops a packet and postpones its subtree from propagating. Now you also have to add provisions to ensure ordering and handle failure (what if your writer dies!?). A nightmare. But it’s a distributed system, so your entire life is a nightmare. :P
It’s decentralized reading, centralized writing 101. And it. Does. Not. Scale. (unless you don’t write. just stop writing things)
Dynamic libraries strike again!
With the increased amount of RAM machines have, I always wonder if it’s not just best to statically link everything all the time nowadays.
You risk me agreeing with you, and having the prophecies of the end of the world come true, with comments such as these. The era of space over time is long done. There are just so many benefits to static linking, such as enabling whole-program analysis across library code, not just, as you rightly imply, reducing attack surface.
I am a huge proponent of this, even if I get a lot of grief for it. A good read on the subject is the cat-v dynamic linking page.
Agreed. I would at least reach for static linking over dynamic linking as a first solution, and then only think about supporting dynamic linking if it was proved worthwhile.
But then you lose out on PIE/ASLR, and the next time there’s an OpenSSL or zlib bug, you’ll have to recompile everything on your system that uses those.
Though these days with advanced package management, I’m not sure the latter is such a big deal. The package manager (on OpenBSD at least) knows what binaries are linked to which system and package libraries and which versions of them, so getting those select packages rebuilt and reinstalled wouldn’t be a big deal.