I hope everyone enjoyed their three-day weekend as much as I did (visits home are always nice). Now that the work week has officially started, what are you working on? Links and explanations are encouraged.
I just started full-time at Amazon on the Route53 (DNS as a Service) team. I’m probably going to be busy this week getting ramped up at work and making sure I have everything taken care of for my apartment. If I have time, I’ll keep working on my FPGA FIR filter.
Are you using much python internally on that team?
No, Amazon generally doesn’t use Python for internal backend systems.
I’m putting the polish on Wordy Weather. I got tired of trying to guess what the weather would be from a small icon and a number in my weather app, so I built it to grab full NWS forecasts and present them in a clean way by zip code rather than by NOAA zone codes. It looks great on mobile too (try adding it to your home screen!). Send me a message if you’ve broken something (or I left something broken).
EDIT: Just broke the prod build D: could take a while to get back up…
EDIT (2): Back up now.
I used to run a site called goingtorain.com which just showed a big “NO”, “YES”, or “MAYBE” answer on the screen as soon as you loaded it. It did this by geolocating your IP, then looking up that zipcode with Google’s weather API (which they’ve since shut down), and then checking for words like “rain”, “sleet”, “snow”, etc. Your site might be improved by doing geolocation on the server side with one of the various free databases to get a zipcode, and then just showing that weather to the user by default. One less step for them to do.
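The word-matching idea is easy to sketch. Here’s a hedged Python version of the same flow (the word lists and the YES/MAYBE/NO rule are illustrative, not the original goingtorain.com code):

```python
# Illustrative sketch of the goingtorain.com approach: scan a forecast
# string for precipitation keywords and map the result to a big
# YES / MAYBE / NO answer. Word lists here are made up, not the original.

PRECIP_WORDS = {"rain", "sleet", "snow", "showers", "drizzle", "hail"}
HEDGE_WORDS = {"chance", "possible", "slight", "isolated"}

def will_it_rain(forecast: str) -> str:
    words = set(forecast.lower().replace(",", " ").split())
    if words & PRECIP_WORDS:
        # A hedging word alongside a precipitation word means "MAYBE".
        return "MAYBE" if words & HEDGE_WORDS else "YES"
    return "NO"

print(will_it_rain("Heavy rain and sleet expected this afternoon"))  # YES
print(will_it_rain("Slight chance of showers late tonight"))         # MAYBE
print(will_it_rain("Sunny, with a high near 75"))                    # NO
```

The fragile part in practice is the keyword list: professional forecasts hedge a lot (“slight chance”, “isolated showers”), which is exactly why the real site needed a “MAYBE”.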
The wunderground API, albeit poorly documented, can be passed an IP address, and they’ll do the geoip lookup on their end. Even fewer steps!
If you feel like doing it yourself, you could use HttpGeoipModule (assuming nginx) and pass the result to an API that takes a variety of inputs (city/country, lat/long, postal code, whatever the geoip module is able to provide) and then parse those results.
(No, I am not affiliated with wunderground. Their API has caused many headaches and wtf moments, but it is still my weather API of choice.)
Yeah, I’ve used the wunderground API on some projects and it was pretty nice. I used NWS forecasts for this project, however, because they include things like built-in uncertainty and more precipitation detail that you can only get from a professional natural-language forecast. Also, for fetching the zone, I’ve been using data from this weather CLI that includes NOAA zone coordinates as well, which would go nicely with that nginx module by the looks of it. Thanks!
Thanks for the advice, I’ll look into that. The database I’m pulling from right now includes coordinates too, but I’m not sure how I would change the flow of the site for that.
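Since the database already includes coordinates, one way to wire it up is a nearest-neighbor lookup: geolocate the visitor to a lat/long, then pick the zone whose centroid is closest. A minimal sketch (the zone IDs and coordinates below are made up for illustration):

```python
# Sketch: map a geolocated lat/long to the nearest NOAA zone, assuming
# you have zone centroid coordinates. The zones below are hypothetical.
import math

ZONES = {
    "WAZ558": (47.61, -122.33),
    "ORZ006": (45.52, -122.68),
    "CAZ006": (37.77, -122.42),
}

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

def nearest_zone(lat, lon):
    # Linear scan is fine for a few thousand zones; a spatial index
    # would only matter at much larger scale.
    return min(ZONES, key=lambda z: haversine(lat, lon, *ZONES[z]))

print(nearest_zone(47.6, -122.3))  # WAZ558
```

The site flow wouldn’t need to change much: the geolocated zone just becomes the default view, with the zip-code form still there as the override.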
This week is crunch time for StackMachine as Global Game Jam is coming up this weekend. I’m sponsoring the SF event, so I want to finish as many features as possible before Friday. Features currently in the pipeline are greater customization options and an embeddable iframe to accept payments for your game.
I’ll also be working with @kb on refactoring Battle of Bits. We’re hoping to deploy the first version of the server in a week or two.
I also worked on an open-source scraper for Magic: The Gathering. It pulls all the card information into a local JSON file. The project was a great excuse to learn how Go’s html package works. Now that I have a better understanding of the package, I think I’ll be writing all future scraping scripts in Go instead of Python.
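The overall flow (parse card markup, collect fields, dump JSON) is the same in any language; here’s the shape of it using Python’s stdlib parser, since the thread skews Python. The card HTML structure and class names below are hypothetical, not the real Gatherer markup:

```python
# Sketch of the scrape-to-JSON flow with the stdlib html.parser:
# pull out elements whose class looks like "card-<field>" and emit JSON.
# The markup and class names here are made up for illustration.
import json
from html.parser import HTMLParser

class CardParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.cards = []
        self._field = None

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if cls.startswith("card-"):
            self._field = cls[len("card-"):]

    def handle_data(self, data):
        if self._field:
            if self._field == "name":
                self.cards.append({})  # a name starts a new card record
            self.cards[-1][self._field] = data.strip()
            self._field = None

html_doc = ('<div><span class="card-name">Lightning Bolt</span>'
            '<span class="card-cost">R</span></div>')
parser = CardParser()
parser.feed(html_doc)
print(json.dumps(parser.cards))  # [{"name": "Lightning Bolt", "cost": "R"}]
```

Go’s html package gives you a full parse tree instead of this event-driven style, which is part of what makes it pleasant for scraping.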
I’m getting ready to launch DrDebber, which provides a Debian package repository as a service. I started developing it when we kind of needed it at work and there weren’t any alternatives as far as I could tell.
Likely going to pivot on the game I was working on last week. Finally put some time in on figuring out the game design and mechanics, and will be changing up a few things.
Also, I’d like to add more features to the unofficial lobste.rs chrome extension (https://bitbucket.org/sirpengi/lobx) I whipped up in response to https://lobste.rs/s/ulyq3l/feature_request_ability_to_hide_posts
Well, there is a “request” to hide users: https://lobste.rs/s/87phed/proposal_a_filter_for_users
Continuing work on a Storm topology that I started last week. The initial “ideal” design was practically impossible for us to understand (and we were building it), so we’ve sacrificed a lot of potential parallelism; the vastly simplified data flows should be much easier for someone else to come in and understand.
As part of that, I’m actively exploring the best ways to work with JSON from Scala and doing a moderately deep dive into using Play JSON as a transformation tool. So far, it looks like it might compare favorably to the ease of use we get from Cheshire with Clojure. There’s more initial setup than with Cheshire, but Play JSON gives us a level of built-in validation that Cheshire doesn’t.
The search for good Clojure data transformation/validation tools (schema, herbert, strucjure, seqex, and more) is an ongoing project that continues this week.
This week, I’m working on figuring out why ffmpeg slows down when we send it 4 live streams to mux into a certain configuration.
Background is that we’re creating a web application that’s the next step up from YouTube videos for amateur and professional performers. Viewers can vote on the live streaming video and (if a threshold is crossed) kick performers off in near-real-time. We also want to have celebrity judges give feedback to performers. There’s a lot more, but I’ll link the site when we get to beta.
Point is, we have possibly 4 live streams coming into Wowza (the video server) and we want 1 stream out, which is a combination of the 4 input ones. Wowza doesn’t do it natively, so we shell out to ffmpeg to do the muxing. ffmpeg works OK for 3 streams, but lags terribly for 4. I get to solve it.
Update: Turns out, most of the lag was because we were scaling the streams. Remove the scaling code, remove the lag (most of it).
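For anyone curious about the shape of this, the standard scale-free mosaic recipe is a `nullsrc` canvas plus a chain of `overlay` filters. Here’s a sketch that builds such an ffmpeg command (the stream URLs, the 320x240 ingest size, and the output target are hypothetical, not our actual setup):

```python
# Build an ffmpeg command for a 2x2 mosaic of four live streams using
# overlay filters and NO scale filters, since the ingest size is fixed.
# URLs, the 320x240 size, and the output target are all hypothetical.
inputs = [f"rtmp://wowza.example.com/live/stream{i}" for i in range(1, 5)]
w, h = 320, 240  # fixed ingest resolution, so no per-stream scaling needed

filter_complex = (
    f"nullsrc=size={w*2}x{h*2}[base];"   # blank canvas, double size
    f"[base][0:v]overlay=0:0[a];"        # top-left
    f"[a][1:v]overlay={w}:0[b];"         # top-right
    f"[b][2:v]overlay=0:{h}[c];"         # bottom-left
    f"[c][3:v]overlay={w}:{h}[out]"      # bottom-right
)

cmd = ["ffmpeg"]
for url in inputs:
    cmd += ["-i", url]
cmd += ["-filter_complex", filter_complex, "-map", "[out]",
        "rtmp://wowza.example.com/live/mosaic"]
print(" ".join(cmd))
```

Each `scale` filter in the graph costs a full per-frame resample on every input, so dropping them (by fixing the ingest size) removes a big chunk of CPU, which matches the lag going away.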
Congrats. I assume the scaling has moved to the output end, then? Makes sense as long as you have the memory for it, and it probably results in a better picture.
We’re controlling how the video is ingested, so we can actually fix the size and resolution to what we want. This removes the scaling code and still keeps the configuration and picture looking good.
You should be able to do the scaling in another process by calling yuvscaler or graphicsmagick before piping it into ffmpeg. How many cores are your transcoding boxes?
We don’t have a transcoding setup yet, but we will be running on c3.xlarge or c3.2xlarge AWS boxes.
Yuvscaler or graphicsmagick sounds worth looking into.
I’ve got many of the pieces together (the last piece being the release of csr) to start building a small, open-source certificate authority. I’m also contemplating a few blog posts.
This week is about wrapping up some work on emscripten, largely involving edge cases in C++ exception handling.
It is also about optimizing the HTTP server written in Dylan (http://opendylan.org/). We’ve already gotten a few bottlenecks worked out, with more improvements coming.
This week I’m working on polishing erd, which generates entity-relationship diagrams from a plain-text description (it’s written in Haskell).
I also work with protein structures from the PDB, and I’m currently migrating to their new PDBx/mmCIF format. I had to start by writing a Crystallographic Information File (CIF) parser, and then provide some convenient types for PDB structures. (Written in Go.)
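The core of CIF is pleasantly line-oriented. A minimal sketch of parsing simple data items, in Python for brevity (the real parser is in Go and handles much more, e.g. `loop_` tables, multi-line values, and proper quoting; this covers only one-line `_category.item value` pairs):

```python
# Minimal sketch: parse simple one-line CIF/mmCIF data items into a dict.
# Real CIF also has loop_ tables, semicolon text fields, etc. -- omitted.
def parse_cif_items(text: str) -> dict:
    items = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("_"):
            tag, _, value = line.partition(" ")
            items[tag] = value.strip().strip("'\"")  # drop simple quoting
    return items

cif = """\
data_1ABC
_cell.length_a 58.39
_cell.length_b 86.70
_symmetry.space_group_name_H-M 'P 21 21 21'
"""
print(parse_cif_items(cif)["_cell.length_a"])  # 58.39
```

The `loop_` construct (a header of tags followed by whitespace-separated rows) is where mmCIF parsers earn their keep; the flat items above are the easy 20%.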
Drowning in callback hell trying to use IndexedDB.
Finishing up the Android port of my game Tidy Bubble. I’m doing the port in Haxe/OpenFL. It’s been an amazing platform to develop on compared to the original, which was in C++ using SFML. Haxe has been so fantastic that I can’t wait to start a new project using it.
More packaging and lots more code review for tokumx, and redesigning a benchmarking framework (https://github.com/leifwalsh/cortisol)
I’m trying to get prettier charts for briefmetrics.com (just posted a Show Lobsters yesterday) by moving away from the deprecated Google Image Charts and into a PhantomJS + D3 + NVD3 thing. Having some trouble with PhantomJS though, where it’ll render the chart on the first request (while running a webserver), but not on subsequent requests. :( Might have to settle for a one-chart-per-process-instance model instead, though that’s not ideal. Once I get this done, I’ll be able to make the custom charts I wanted for the monthly reports and get that out the door.
Woohoo, my serverside d3/nvd3 rendering prototype is functional. If anyone is interested: https://github.com/shazow/phantomd3
This week I’m working on the parser for my semantic web search application, brodlist, as well as starting to shoot the Kickstarter video.
I’m working on new plugins for RabbitMQ. Trying to get the sharding plugin right.
Started work on a C++ game engine. Just recently got it running inside an editor written in C#. Next step is to get the communication between the C# editor and the DLL working.
A while ago I shipped an open-source theme for Jenkins called Doony. Skinning Jenkins was kind of a nightmare due to the lack of relevant IDs and classes.
As of 1.538, Jenkins has lots of useful IDs for skinning things, so I think I’m gonna update the code to use them.
I’ve also been working with @shazow on adding retry support to urllib3; the semantics have been a little tricky: a single total number of retries vs. more granular control (retry on connection failures, but not on read errors, etc.). I wanted to merge this into Requests, but it’s unlikely this is a feature they’ll want.
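To make the total-vs-granular distinction concrete, here’s an illustrative sketch of the semantics being discussed (this is not urllib3’s actual implementation, just the idea: an overall budget plus optional per-error-class budgets, decremented immutably per failed attempt):

```python
# Illustrative sketch of total-vs-granular retry budgets (NOT urllib3's
# real code): an overall retry count plus optional per-class counts.
class RetryBudget:
    def __init__(self, total=3, connect=None, read=None):
        # None means "no separate limit for this error class".
        self.total, self.connect, self.read = total, connect, read

    def is_exhausted(self):
        # Any budget that has gone negative means we should stop retrying.
        return any(n is not None and n < 0
                   for n in (self.total, self.connect, self.read))

    def increment(self, error_kind):
        """Return a new budget after a failed attempt of the given kind."""
        dec = lambda n: n if n is None else n - 1
        return RetryBudget(
            total=dec(self.total),
            connect=dec(self.connect) if error_kind == "connect" else self.connect,
            read=dec(self.read) if error_kind == "read" else self.read,
        )

# "Retry connection failures, but never read errors":
budget = RetryBudget(total=3, read=0)
budget = budget.increment("read")
print(budget.is_exhausted())  # True: the read budget went negative
```

The subtlety is exactly what makes the API design tricky: `read=0` should mean “a single read error is fatal” even though the total budget isn’t spent, while connection failures can keep drawing from the total.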
I think it’ll make it in eventually, in one form or another. Or at the very least, it’s just a matter of adding Requests docs for how to use urllib3’s internal Retry configuration object from Requests, and I’m sure people would use it. :) It’s a solid feature; I’m looking forward to it.