I’m wondering whether they really had to implement everything in order to learn those lessons. The “lessons” are pretty much the widely known basic definitions of the well-defined terms contained within them. It’s not like we’re talking about a chaotic emergent phenomenon that arises from the complex interactions of the pieces of a complex system.
I’m with enobayram – as a hobbyist game programmer in 1999 it was painfully obvious even then that TCP really wouldn’t cut it for any kind of twitch gaming over the internet.
Basic texts on networking also taught that TCP was for reliability and UDP for speed. Troubleshooting and warning guides talked about how TCP performance could drop off due to congestion control algorithms. So I was writing reliable protocols on top of UDP. Until I discovered UDT. :)
Just sounds like the author never discovered this the easy way when learning about network programming, and so had to discover it the hard way.
To be fair, I think part of the issue was also expectations of LAN vs. WAN. A “how bad could it be?” where the answer turned out to be horrific.
For the record the author has learned his lesson in many ways and is doing fine (I work with him).
[Comment removed by author]
I imagine you would have a layer on top of UDP that ensured delivery for a few different classes of packets, and let the ones where losing a few wouldn’t matter as much just drop. Also, you’d probably want a synchronization packet to sync the world state every now and again, to keep the client from drifting too far from the server’s world view. So the light switch object would be guaranteed to be fully synchronized after n intermediate packets.
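A minimal sketch of that scheme (all names and framing here are invented for illustration, not any real game protocol): reliable messages carry sequence numbers and are retransmitted until acknowledged, unreliable ones are fire-and-forget, and a periodic snapshot re-syncs the whole world state so the client can’t drift for long.

```python
import json

RELIABLE, UNRELIABLE, SNAPSHOT = "rel", "unrel", "snap"

class Endpoint:
    """Toy reliability layer over a datagram transport (illustrative only)."""

    def __init__(self, send_datagram):
        self.send_datagram = send_datagram  # takes raw bytes; the link may drop them
        self.next_seq = 0
        self.unacked = {}  # seq -> raw bytes, retransmitted until acknowledged
        self.world = {}    # replicated game state, e.g. {"light_switch": "on"}

    def send(self, kind, payload):
        msg = {"kind": kind, "payload": payload}
        if kind == RELIABLE:
            msg["seq"] = self.next_seq
            self.next_seq += 1
        raw = json.dumps(msg).encode()
        if kind == RELIABLE:
            self.unacked[msg["seq"]] = raw  # keep until acked
        self.send_datagram(raw)

    def resend_unacked(self):
        # run on a timer: anything still unacknowledged goes out again
        for raw in list(self.unacked.values()):
            self.send_datagram(raw)

    def on_datagram(self, raw, reply=None):
        msg = json.loads(raw.decode())
        if msg["kind"] == "ack":
            self.unacked.pop(msg["seq"], None)
        elif msg["kind"] == RELIABLE:
            reply(json.dumps({"kind": "ack", "seq": msg["seq"]}).encode())
            self.world.update(msg["payload"])
        elif msg["kind"] == UNRELIABLE:
            self.world.update(msg["payload"])  # lost copies simply never show up
        elif msg["kind"] == SNAPSHOT:
            self.world = dict(msg["payload"])  # full re-sync bounds client drift

# Demo: the link "loses" the first copy of a reliable packet.
inbox = []
server = Endpoint(send_datagram=inbox.append)
client = Endpoint(send_datagram=lambda raw: None)  # client sends no state here

server.world = {"light_switch": "on"}
server.send(RELIABLE, {"light_switch": "on"})
inbox.clear()              # packet lost in transit
server.resend_unacked()    # timer fires; the retransmission gets through
client.on_datagram(inbox.pop(), reply=lambda ack: server.on_datagram(ack))

server.world["door"] = "open"        # a change the client somehow missed
server.send(SNAPSHOT, server.world)  # periodic full-state sync catches it up
client.on_datagram(inbox.pop())
```

A real implementation would also dedupe by sequence number on receipt, since a retransmission whose ack was lost otherwise gets applied twice.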
With the company shutting down, we also wanted to find a new home for our team … We’re excited that the members of our engineering team will be joining Stripe
If they actually went out and found work for the whole engineering team, that’s extraordinary. Bravo.
I wonder if they went through a route like this.
Offhand, I’d guess not. They both had seed rounds from A16Z, and A16Z is very good at relocating talent within its portfolio (source: I’ve worked for like 5 of the portfolio companies).
Meetings meetings meetings :)
Have to interview around a half dozen folks. The most interesting thing is that we’re quantitatively measuring our TAMs (technical account managers) on how well they demo and, separately, on how they handle some field support situations. While it’s nice to hire smart folks, it’s good to be able to do something like this to objectively confirm that their skills are staying sharp.
On the down side, I have to do a fair amount of glue code / integration work with Salesforce this week.
Finally, I get to start putting some ETL pulls together to start building our customer success measurement analytics (time permitting).
I really wish we had the full dumps from both the Friend Finder and OPM hacks. It’d be very interesting to correlate the data between Ashley Madison, Friend Finder, and OPM. Clearance-holding cheating spouses with kinky fetishes and STDs = no no.
Not true, actually. I have a multitude of friends with SCI who are into kink, etc. It’s generally not considered a big deal. Drug and alcohol issues are considered much worse by DISA.
Would you say the same thing if this was a hack of various people’s gmail accounts? This is private data that happens to be owned by a service many people disagree with existing at all. I don’t think we should even joke about going through it just to play judge.
Everything old is new again I suppose. Back in the late 90s when this was the popular approach, the downside was you were effectively limited performance-wise to vertically scaling your database as opposed to being able to horizontally scale your application layer.
Also… with the Hickey quote… it could be argued that you’re keeping things simpler by making the primary function of the database storing data, and that placing one’s business logic / transforms within the database increases the complexity.
Anyways, like anything else, there’s no one clear answer. It’s always good to revisit assumptions, best practices, etc. as times change to see if there’s anything that is ripe for change / can be done better.
Exactly. It is (typically) much easier to scale your application layer than your DB layer. By putting all of this logic in your DB server, you’re causing yourself extra woe when it comes time to replicate.
Also, the examples here are relatively simple, but once you start trying to do more complex queries purely through stored procedures, you’re again just eating up memory in your precious DB layer. So let’s say you start to split things up into smaller functions which are called in sequence from your… application layer. It all quickly breaks down from there.
There is a reason this is something we used to do.
I’ve been a pretty big fan of jq for a number of things:

* Lots of my logs these days are in JSON (so they’re machine-parseable and human-readable), and jq makes filtering them child’s play.
* Some of our ETL pipelines have a decent amount of JSON in them, and jq makes short work of the pre/post processing.
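For the curious, the kind of filter jq does in a one-liner over JSON-lines logs (e.g. `jq 'select(.level == "error")'`) looks roughly like this in plain Python (the `level` / `msg` fields are made-up examples):

```python
import json

def filter_records(lines, **wanted):
    """Yield parsed JSON log records whose fields equal `wanted` (a toy jq-style select)."""
    for line in lines:
        record = json.loads(line)
        if all(record.get(key) == value for key, value in wanted.items()):
            yield record

log_lines = [
    '{"level": "error", "msg": "disk full"}',
    '{"level": "info", "msg": "started"}',
]
errors = list(filter_records(log_lines, level="error"))
# errors == [{"level": "error", "msg": "disk full"}]
```

Which is exactly why the jq one-liner is so appealing by comparison.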
I appreciate the conversation and that assumptions about Scrum are being challenged. And I agree that Scrum is not a silver bullet.
But I’ve seen planning poker work. It does not always go like the author’s anecdote. If some dev just barks “20 points? REALLY?” when the team is trying to come to an estimate, that dev is an asshole and you’ve got larger problems.
And I’ve seen morning standups work too. Someone has to be tasked with keeping the team on task. The conversation needs to be limited to what happened yesterday, what happens today, and what blockers the PM can tackle for the team.
I’ve seen this work in large organizations. I’m not talking about an 8 person start-up. Just because it’s not universally applicable or successful doesn’t mean it needs to die in a fire.
(Nice troll style points for the title and the image of “Visual Studio TFS” branded planning poker cards though.)
Agreed. I also think the article misses massively on the “Why are we supposed to think developers are not business people?” question. It’s more the case that developers are not necessarily subject matter experts on the business subjects. Is your US-based developer going to understand international finance issues better than the international accounting folks? Please tell me more about all the magical unicorns you’ve employed who hold better subject matter expertise than… well… those who work in those subjects.
I’ve noticed that developers themselves are very prone to the misconception that being good at writing software makes them good at everything their software deals with. Particularly annoying is when they have some reductive argument they’re convinced is correct because everyone else is clearly just overcomplicating things.
Also, planning poker isn’t scrum in the same way that syrup isn’t pancakes. Some people use them together, sure. But it’s a pretty weak argument.
On the other hand, there’s something to be said about how common it is to do “scrum plus” or “scrum but.” (And, indeed, much has been written about this, and a fair bit more coherently as well.)
It’s both a criticism and a mundane fact that scrum doesn’t reliably fix every organizational misstep within a group and the groups with which it must interact. It’s not a very opinionated framework, and so it tends to attract opinions, both in favor of planning poker and the like, and against.
Apologies if this formats poorly:
1 -> Let the team member determine timing. You don’t want a compulsory meeting that is seen as a time sink.
I disagree with this sentiment. 1-on-1s are really important, and a lot of people either won’t own up to wanting them or won’t realize how important they are. Making it a regular thing (ours are every 2 weeks) means someone doesn’t have to feel weird or out of place asking for a meeting with their manager. I think a lot of people don’t naturally feel comfortable doing that, and a regular, compulsory meeting makes it much easier.
Most people won’t tell their manager what is on their mind naturally, you have to force it out of them.
While I agree that it’s easy to underestimate the value of 1-on-1s as an engineer being managed, it is valuable to give the engineer some input into what frequency is ideal. It helps to make clear that 1-on-1s don’t need to be super-structured if there is not much to talk about; just grabbing a coffee some weeks and talking about the family can be just as nice as airing complaints about peers or blockers in other weeks.
As a manager, there is a lot you can glean from casual conversation with your direct reports—about their happiness, productivity, ambitions.
I’m embarrassed to say! I’m implementing some really simple automated trading code. And it’s taking me fucking forever because I don’t know Java or automated trading systems.
I love the JVM, but you should be careful about using anything with stop the world pauses for super-low-latency systems.
There are ways around it. In particular, check out Peter Lawrey’s blog: http://vanillajava.blogspot.com/ . He’s the same guy who wrote OpenHFT and Chronicle.
Agreed (except that I admire the JVM without actually liking it), but we aren’t doing anything super-low-latency.
I don’t get it. Tracky posts all sorts of stuff that is available in the HTTP headers anyway. There are still multiple invisible pixels (at least on the home page).
If I recall, they did that (duplicating the HTTP header info) so they had essentially denormalized records (everything in one flat record) in the JSON logs… it tends to make certain kinds of analysis quicker / easier when you get to the analytics systems.
“One question begged of Big Data has been – is anybody actually handling data big enough to merit a change to NoSQL architectures?”
I think part of the issue is that the volume (aka size) is only one of the 4 Vs. I would think that the velocity will end up having more of an impact on the architectures because of how some of the consensus algorithms end up working (well…velocity in combination with distribution (think transatlantic / transpacific) in combination with volume (larger clusters)).
work:
– getting pmacct <> rabbitmq <> influxdb working, then putting a nice frontend on top of it
– auditing a Palo Alto install the MSP royally boned on the migration. I know BGP is somewhat obtuse on PANOS, but… no excuse. In pre-sales you, unprompted, mentioned having one of four experts qualified to configure whatever is after the top-of-the-line 5000 series. C'mon!
– quickly utilizing the last six days of my Azure $200/30 day credit to boot OpenBSD, get IPsec tunnels with BFD running, and do some iperf tests between regions for a PoC
– set up graylog to ingest wireless controller and firewall logs and make nice dashboards for front-line support network troubleshooting
fun work:
– continue building class outline and course work for a “python for network engineers” (a working title as it’s already in heavy use by Kirk Byers)
– lots of unikernel stuff. Kafka as a unikernel, pmacct as a unikernel. getting rumpkernels to boot with vmm on OpenBSD. getting ExaBGP into a unikernel, then doing ‘stress’ testing against OpenBGPd
– osm + packet clearing house IXP list + peeringDB + d3js = transform spreadsheet currently sitting at http://peering.exposed/ (after a particularly whiskey-infused discussion @ RIPE73)
– play with a couple of network verification tools I’ve been reading about: Propane and NetKAT
Is there some particular reason you’re going to rabbitmq first instead of tossing to influxdb via statsd or some such first? You just want to persist bits in flight?
(just curious)
Mostly because pmacct speaks AMQP natively, and slightly because I do not wish to run node.js in this instance.
What are you using for ingesting logs from rabbitmq to InfluxDB?
I’m looking forward to Paolo releasing Redis support.