1. 5

    This is cool! Through a bizarre coincidence I’ve been working on something with quite similar mechanics. “Robots” battle on a grid; you write an algorithm that helps your robot beat the others.

    The big difference is it’s designed to be played live, between two players, like a game of chess. You get a limited time to write a function that allows the robot to make a decision on where to move based on the content of the squares around it. Your function is then called four times, interleaved with the opposing player’s.

    It’s meant to evoke that manic hacking-to-a-deadline feeling, where you have to balance careful thought with a scramble to finish your implementation before the time runs out. You don’t have a very sophisticated API available, and your robot is almost blind, so you have to do the high level thought yourself.
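A player's decision function might look something like this rough Python sketch. The square values, direction names, and calling convention here are all invented for illustration; the real codegame API may differ:

```python
# Hypothetical sketch of the kind of function a player might write mid-game.
# The square markers and direction names are invented, not the real API.

EMPTY, WALL, ENEMY = ".", "#", "E"

def decide(surroundings):
    """Pick a move given a dict of direction -> square contents.

    Called four times per round, interleaved with the opponent's moves.
    """
    # Chase an adjacent enemy if we can see one...
    for direction, square in surroundings.items():
        if square == ENEMY:
            return direction
    # ...otherwise take the first open square.
    for direction, square in surroundings.items():
        if square == EMPTY:
            return direction
    return None  # boxed in: stay put

print(decide({"north": "#", "east": ".", "south": "E", "west": "."}))  # south
```

Even something this naive takes a surprising amount of the clock when you're writing it under pressure, which seems to be the point.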

    There’s a very crappy, single-client-only proof of concept here; hopefully I’ll finish it one day:

    https://github.com/dansitu/codegame

    1. 1

      Wow, that’s great! I had a cool idea similar to this a while back:

      You know games like StarCraft, where one player directs a bunch of units? So the game would have a “general” who would control all the units. The units would each be an individual player who coded its AI beforehand. During the game the player/programmer would also be able to enter commands in real-time to influence their unit. At the same time they would listen to the general for strategic instructions.

      You know those movies/anime where you have one guy giving orders, and everyone else hacking away? It’d be like that. :-D

    1. 1

      I love this concept. Is anyone else using it right now? Are there any unexpected points of friction?

      1. 7

        I really sympathize with the author on this one. I used to fear publicly releasing code because it might not be ‘good enough’. It was the attitude displayed in these tweets that caused this fear.

        1. 6

          I was a bit surprised to see Steve being the one bashing on this, especially as someone who teaches programming. She even asked Steve for feedback and just got this snarky reply.

          1. 5

            I’ve specifically been staying out of ALL of these threads, but what I will say is that when I wrote that tweet, I did not know she was the author. They’re different usernames.

            1. 3

              I see nothing snarky about that reply. At all. One person’s snark is another person’s matter-of-fact directness. But it helps make my point.

              I feel bad for Steve. He made a mistake. But now, for at least some, he’s just an asshole, regardless of everything else he has done so far. This is what I find so amazing.

              There’s a thread about this over on HN (of course) and the level of vitriol is staggering. Suddenly he’s completely defined by a mistake. He’s pigeonholed with invective worse than what he said about someone’s code and coding skills.

              This is “someone’s wrong on the Internet”, with venom. If you choose the wrong words to criticize something (and I guarantee that no matter how you phrase it, someone will think you’ve been too harsh), you become a target for the self-righteous to dump on in ways far worse than whatever it is they think you did.

              1. 7

                I’d like more professionals to accept that often, their lauded “matter-of-fact directness” is as effective a communication tool as “no-nonsense single-character variable names”.

                On the whole, humans prefer criticism to be couched in sympathetic language, and there is not one thing wrong with that.

                1. 3

                  “He’s pigeonholed with invective worse than what he said about someone’s code and coding skills.”

                  It’s worse than that – by my reading Steve didn’t say anything about her code or coding skills.

                  The individual tweets linked here read differently alone than they did in context, but the context is impossible to link to.

                  Twitter is a bad place to discuss things.

            1. 1

              People have been making the same complaints about every branch of human endeavour since we went from wooden arrowheads to flint. There’s always compromise, and it always hurts to see your once-darling “ruined” by mass adoption.

              Always remember, though: a lot of people are now benefitting from something that didn’t previously exist.

              1. 2

                Outside of my day job at Green Dot, I’m taking the first steps into food entrepreneurship (and regulatory wrangling) – working with some friends to develop snack foods made with edible insects!

                It sounds crazy, but entomophagy (the human consumption of insects) is going to be a big deal this century. It’s exciting to be here at the start.

                1. 1

                  One approach could be to randomly generate thousands of possible scenarios and crowdsource the ranking of their outcomes in terms of moral preference.

                  When the self-driving car sees the impending schoolbus collision, it would attempt to find the closest and most morally acceptable match and replicate it.
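A toy version of that matching step, with entirely invented features, actions, and harm scores, might look like:

```python
# Toy sketch of the "closest acceptable scenario" idea. Scenarios are feature
# vectors (all values invented); each carries a crowdsourced harm score,
# lower meaning "judged less bad". The car picks the action attached to the
# nearest known scenario, breaking distance ties toward the lower harm score.

import math

# (speed_kmh, pedestrians_at_risk, occupants) -> (action, crowd_harm_score)
RANKED_SCENARIOS = {
    (50, 0, 1): ("brake_straight", 0.1),
    (50, 3, 1): ("swerve_right", 0.4),
    (80, 3, 4): ("brake_straight", 0.7),
}

def choose_action(situation):
    def key(item):
        features, (_, harm) = item
        return (math.dist(features, situation), harm)
    _, (action, _) = min(RANKED_SCENARIOS.items(), key=key)
    return action

print(choose_action((55, 2, 1)))  # nearest scenario is (50, 3, 1): swerve_right
```

A real system would need far richer features and some way to handle situations distant from every known example, but the lookup itself is that simple.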

                  1. 1

                    You cannot be serious?

                    This madness needs to be stopped. It would be better to just nuke the whole planet than continue in the direction everything is unfortunately heading.

                    Some call it progress. People are stupid and blind.

                    I can’t wait for the day when I’ll be able to go off the grid to live and die in peace.

                    1. 3

                      Although opinions vary, it’s generally accepted that morality is sourced from some combination of intrinsic and societal values. In any given situation, a person’s decision is gated by their moral reasoning, itself subject to the perceived and expressed values of the person’s peers – based on a corpus of data we have gathered during our development and life experience.

                      The reason we are talking about this in terms of robotics is that self-driving cars, with their lightning-quick responses, are capable of making moral decisions during events that humans have never had to consider. Given a couple of milliseconds during an unavoidable collision, human reaction times don’t grant us the luxury of making any choice at all. Our behaviour is essentially random, or a continuation of that which we were exhibiting before the incident.

                      The self-driving car gives us the ability to make a moral decision by proxy. In some people’s eyes, this perhaps goes beyond acceptable morality. It remains necessary to discuss the mechanism by which such a ‘proxy’ decision would work, since many people also consider the alternative (preventing the vehicle from reacting to a situation faster than a human could) unethical.

                      If we do decide to allow such automated decisions, we need a way to provide the car with a similar corpus of moral data to our own. In the tightly bound world of a self-driving car, I suggested a way to load the vehicle with a set of example situations and ‘least worst’ outcomes. By ‘crowdsourcing’ the judgement on each situation from multiple people, you can avoid some of the dilemmas that come from a single decision-maker imposing their views. This is similar to the theory behind trial by jury, and similar to the way we arrive upon our own ideas of right and wrong.
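The jury-like aggregation could be as simple as a majority vote over each generated scenario. A tiny illustration, with made-up outcome labels:

```python
# Tiny illustration of the jury-like aggregation: several judges each pick
# the least-bad outcome for one generated scenario, and the car stores the
# outcome the most judges preferred. All data invented.

from collections import Counter

def least_worst(judgements):
    """judgements: each judge's pick for the least-bad outcome of a scenario."""
    winner, _ = Counter(judgements).most_common(1)[0]
    return winner

votes = ["swerve_right", "brake_straight", "swerve_right", "swerve_right"]
print(least_worst(votes))  # swerve_right
```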

                      The alternatives are to allow a single government or organization to impose their own morality (imagine a world where Ford’s moral preference in driving style differed from Honda’s, and you bought a car based on its ‘personality’), to allow the owner of the car to make the decision in advance, or to abdicate moral responsibility entirely by programming the car with the same reaction times as a human (and losing the massive increase in safety brought by self-driving cars).

                  1. 2

                    It’s called a ‘web browser.’

                    1. 1

                      Web browsers are awesome, but they’re designed for the exploration of content by individuals. The browser is an artifact of a world where humans are the primary direct consumers of content on the web. This is rapidly becoming untrue.

                      A client able to communicate between any APIs on the internet can do very different things at a much larger scale. While I can use my browser to individually check online clothing retailers for cool jeans in my size, I’d much rather have my virtual agent do it for me, sending me a Facebook message when something new shows up. I’d definitely prefer if it could automatically incorporate new retailers without my intervention. We currently depend on third parties to do this sort of aggregation – Hipmunk, Google Shopping, etc – but this is basically an inefficiency.
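As a rough sketch of such an agent (the endpoint URLs and response fields are all invented, and the notification side is left out):

```python
# Sketch of the "jeans agent" idea. The retailer endpoints and JSON fields
# below are stand-ins, not real APIs.

import json
from urllib.request import urlopen

RETAILERS = [
    "https://example-store-a.test/api/jeans",
    "https://example-store-b.test/api/jeans",
]

def filter_new(items, seen_ids, size="32x32"):
    """Keep items in my size that we haven't already flagged."""
    fresh = []
    for item in items:
        if item.get("size") == size and item["id"] not in seen_ids:
            seen_ids.add(item["id"])
            fresh.append(item)
    return fresh

def poll(seen_ids):
    """One polling pass; a new retailer only needs adding to the list."""
    found = []
    for url in RETAILERS:
        try:
            with urlopen(url, timeout=10) as response:
                found += filter_new(json.load(response), seen_ids)
        except OSError:
            continue  # retailer down; try again next pass
    return found  # hand these to whatever notification channel you like
```

In a hypermedia world, the `RETAILERS` list itself would be discovered by following links rather than hardcoded.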

                      This is merely scratching the surface, too: imagine a client that could query for heuristics indicating a global disease outbreak, alert hospitals, and automatically place pre-orders for antibiotics based on probable spread. Imagine if that client took 10 minutes for a non-developer health official to define. Try doing that with just a web browser.

                      1. 3

                        But he’s right. What you want is a hypermedia, so what you need is a medium (hypertext) and a protocol to transfer it (http) across a datagram network (ip+tcp) and therefore need a media representation and a client to represent it (html+browser). The combination is the world-wide-web.

                        What you are critiquing is the notion that people don’t do it right. Which is true. It would be easier if people defined things a bit better. That’s just a question of evangelizing the now 50 year old idea that hypermedia is the goal, not the means.

                        1. 2

                          So is the browser in this model intended to provide intercommunication between unrelated web services without user input? The browsers available today don’t do anything like that.

                          1. 2

                            Yep. It is intended to do this. It reads html over http, which means it is the universal intercommunication thing you want. :)

                            1. 1

                              Since a modern web browser is about as close to this ideal form as a Cessna is to a spaceship, we probably have some work to do…

                              1. 3

                                I can take a web-browser and write some html and css and display it in some rectangular region of the screen. I write a script that uses the parser of webkit (a web browser, or more generally a hypermedia client, I guess I could use a javascript parser :P) to pull the html off of my twitter/rstat.us/github or whatever and produce new html (that I prefer) of my twitter timeline. I can inject avatars from gravatar and links to github profiles using nifty little <img>, <object>, and <a> tags when I discover, through a <link> tag on their rstat.us profile or something, that they have an associated github profile. (Which people don’t necessarily do, or do through something indirect like an xrd, but that’s kinda what you need to promote…)

                                Hey, look at that: local hypermedia that links to the global document view (world-wide-web), packaged in a way that eliminates user input, and all you need is a universal client (web browser). :D
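A stdlib stand-in for the <link>-discovery step in that description (the page contents and rel values here are made up):

```python
# Small stand-in for the <link>-discovery step described above, using the
# stdlib parser instead of webkit. Given a profile page's HTML, collect the
# rel -> href pairs so a client can hop to the associated resources.

from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = {}

    def handle_starttag(self, tag, attrs):
        if tag in ("link", "a"):
            attrs = dict(attrs)
            if "rel" in attrs and "href" in attrs:
                self.links[attrs["rel"]] = attrs["href"]

profile_html = """
<html><head>
  <link rel="alternate" href="https://rstat.us/users/someone.atom">
  <link rel="me" href="https://github.com/someone">
</head></html>
"""

collector = LinkCollector()
collector.feed(profile_html)
print(collector.links["me"])  # https://github.com/someone
```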

                                Isn’t the past AWESOME? :P

                                1. 2

                                  Try doing all that as a non-developer. Job security is sweet, but I’d rather live in the future!

                                  1. 3

                                    I suppose IFTTTTTTTTx1000 is your thing then. You know what would make iftttttt actually good, and useful, and retains the crux of my argument? Fucking respect of hyperlinks. Like, it doesn’t look for rel=alternate on link tags to find out where things actually are; you still have to do the effort of finding the actual resource url for certain things, which I’ve complained about at length to their support team. That’s what /you/ want. You want people to use rels and links. And I agree wholeheartedly. And loudly.

                                    1. 2

                                      Yup. IFTTT shouldn’t need to exist :)

                    1. 4

                      I hadn’t heard the term Hypermedia before – but after a little reading, it seems that this is a very big deal.

                      APIs that describe themselves are the first step on the road to programs that can rewrite themselves as new capabilities come online. If your client knows where to start, it can ‘walk’ through the galaxy of available APIs, grabbing whatever functionality it needs on the way.

                      Instead of a user browsing a flight search site, their computer should be able to list flight entities from disparate sources without the developer ever specifying which. If you decide you want to book a car at the airport, just let your machine find out how.
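A hedged sketch of that link-walking, assuming a HAL-style "_links" shape; the API documents and relation names are invented, and the HTTP fetch is stubbed with a dict:

```python
# Hedged sketch of "walking" a hypermedia API. The HAL-style "_links" shape
# and the relation names are assumptions, not a real flight API.

API = {
    "https://api.test/": {
        "_links": {"flights": {"href": "https://api.test/flights"}},
    },
    "https://api.test/flights": {
        "_links": {"self": {"href": "https://api.test/flights"}},
        "flights": [{"from": "LHR", "to": "SFO"}],
    },
}

def fetch(url):
    # Stand-in for an HTTP GET returning parsed JSON.
    return API[url]

def follow(start_url, *rels):
    """From an entry point, hop rel by rel: no hardcoded URLs after the first."""
    doc = fetch(start_url)
    for rel in rels:
        doc = fetch(doc["_links"][rel]["href"])
    return doc

print(follow("https://api.test/", "flights")["flights"][0]["to"])  # SFO
```

The only thing the client is hardcoded with is the entry URL and the relation names; everything else is discovered at runtime, which is what lets new capabilities come online without redeploying the client.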

                      Google want to index the world’s information. That’s the old-fashioned approach. In the next phase of the future, data will index itself.

                      1. 4

                        Now you’re thinking like Ted Nelson!

                        1. 2

                          Yep! That’s a little more semantic-web than a simple definition of ‘hypermedia API,’ but it’s certainly within the same kind of realm.

                        1. 1

                          …and it runs on my phone. Remarkable.

                          1. 1

                            I think you found a good philosophy article, Steve.

                            1. 2

                              Thank you! I really need to keep posting more…

                            1. 2

                              The great irony here is that if the ‘soul’ doesn’t exist, thus making simulated consciousness possible, then it is highly probable that we currently reside in a simulation – thus proving the existence of some form of ‘God’.

                              Eep.

                              It would be even more amusing if our posthuman hosts were to keep us around as a form of ancestor worship. In simulated Earth, God worships YOU!

                              1. 1

                                “It’s about trying to come up with a working solution in a problem domain that you don’t fully understand and don’t have time to understand.”

                                To me, that’s the fun part! Who else gets to dive into a previously unknown domain, immerse themselves in it just long enough to figure something out and mentally construct a solution around whatever hurdles they find? It’s almost unknown, unless your name is Mulder or Scully.

                                1. 2

                                  Is this type of malware (which encrypts your personal files, presumably holding them ransom until you pay somebody) particularly common? It’s an idea that wouldn’t be out of place in a Gibson novel!

                                  1. 3

                                    It’s called ransomware, and while it’s not extremely prevalent, it’s also not uncommon. Here’s a Sophos article about a new technique from the other day. Also, check out the wikipedia page on ransomware.

                                  1. 6

                                    I really wish we had a ‘-1, terrible, terrible troll’ voting button.

                                    I found the rhetoric in this article incredibly annoying. I understand the point the author was trying to make, but seriously, I could barely get through it. The twisting of words to come up with some sort of inflammatory statement really grinds my gears.

                                    1. 3

                                      I came away thinking the same. There’s really no reason to inject such vitriol into an article about programming. It’s a good example of what makes parts of programmer culture unpalatable to many, to the detriment of all.

                                      1. 3

                                        This is an argument as old (and about as useful) as vim vs. emacs, where the people who engage most aggressively typically understand the actual pros and cons of the other side very poorly.

                                        1. 1

                                          These days I just read past the troll comments in articles; I was more hoping to raise some interesting conversation. My bad.

                                          1. 2

                                            It happens! I myself had a submission here voted to -1. We’re all figuring out what the hell this place is, still. :)

                                          2. 1

                                            Bluster aside, I think the point about typed vs classified is worth some thought. It’s obvious to anyone who’s written a ruby or python C extension that every object is represented as a single type, the PyObj or ruby_thing or whatever it’s called.

                                            Of course, the author is an ML junkie who thinks type inference is the solution to all the “I don’t want to type so much” objections to static typing. I found it frustrating because until you learn to pattern your program into something the type inferencer expects, it will just keep beating you with the “can’t do that” stick.

                                            For the record, I feel static typing is much safer and leads to better programs, but still prefer Lua just because it’s easy. Like vegetables, I know it’s good for me, but I’m not going to eat it unless I have to.