This is the weekly thread to discuss what you’ve done recently and are working on this week.
Please be descriptive and don’t hesitate to ask for help, advice or other guidance.
I’m keeping it short this week. I’ve been working on the usual stuff with Open Dylan.
I finally got to land an upgrade to the emscripten compiler used at my client, moving them from the old compiler to fastcomp. This is pretty exciting and opens up a lot of new doors.
I had mentioned some multi-user programming and security stuff … so I’ve also been spending time getting that stuff into shape to make it available again as it had largely disappeared from the net. I’m looking forward to writing about that in an update next week. :)
Finally, this Friday, my family and I are headed up to Bangkok for the weekend and then to Jakarta on Sunday for 5 days. Looking forward to some good satay and other food in Jakarta. Not looking forward to the worst traffic in the world.
I’m working on several things for Factor, including a binding for libsodium and a binding for libcouchbase, and will be getting around to checking whether Factor runs on OpenBSD and fixing any issues I hit along the way. I’ll also hopefully continue submitting more documentation fixes.
Adding an index to the book so it’s easier to keep track of when we’ve introduced and elaborated on things.
Still more testing of the book’s material with a new programmer (never coded before).
Still recovering from the removal of four of my wisdom teeth last Friday. The pain is obnoxious and hinders my ability to work.
Kicking around some ideas for a web spider written in Haskell that’ll be able to handle JS rendered content.
We’ve signed with a publisher for the book, but I won’t be announcing much about it until there’s at least a page on their site for it.
Congrats on getting a publisher, and I hope the pain subsides quickly.
As my more-me-than-I-usually-am Twitter feed demonstrates, I am on good drugs.
I’ve been putting a lot of work into polishing Card Minion lately. A lot of the “big ideas” are finally in place but the devil’s in the details. Particularly vexing is address normalization - I’m finding that there’s a TON of different ways addresses could be sliced (“Oh, anyone without a city is from my hometown”, “I usually just ask my mom for their street number and never remember to put it into Google Contacts”, etc, etc).
I’m also finding I’m pretty bad at UX. Whatever I come up with inevitably works (thanks, Bootstrap!) but it usually takes forever and needs lots of tweaking. Are there any good books/blogs/etc on web UX I could read up on?
I’m trying to play catch-up on an AI planning course I forgot I signed up for. I’ve been using Common Lisp to solve problems for the course (all toy problems so far, as I have a lot of catching up to do), which has been a joy. The course has a lot of reading and there are a lot of ancillary subjects to brush up on, so that’s been taking up most of my !work time.
Work so far has been mostly working on our SSL
Trying to digest promises.
It looks like a, errr, “Promising” generalization of “run that function in that thread context”.
Since I’m working on a product written in C on the eCos RTOS with a message-passing architecture… I’m trying to see if I can make the promises paradigm fit into the statically typed, multi-threaded real-time, garbage-collector-free world of C.
Wish me luck and deep insight.
I’m going to need it.
I tend toward futures when it’s OK to block a thread waiting for a result. In most backend situations, if you’re on a non-UI thread, this is OK (presuming said operation completes/fails within a reasonable timespan).
To really understand futures, you should implement them. Think of them as a threadsafe queue that accepts one put, and blocks until the put completes.
The particular use case here is a real time system where no thread may block longer than the latency requirement on that thread. (Each thread has a different real time / absolute priority, single core, highest priority runnable thread runs.)
Looking for an internship! If you know any companies interested in hiring a summer intern with experience with security, functional programming, and math or one who’s just really interested in solving interesting problems, I would love a PM/reply.
I’m also going back to that timing attack resistant type systems project I was working on a lot last fall after getting accepted to speak about it at THOTCON. It turns out explaining timing attacks, the Curry-Howard isomorphism, and basic category theory is a bit tough to fit inside 25 minutes, but I think it can definitely be done.
Apart from that, I just finished up with the ISUSEC CTF I was helping write and run, and should be publishing code and writeups around Wednesday. This has been a good week!
Writing a proposal for a simple way to add generics to Go (I know, I know! But I think I’m on to something).
Late to the party this week, but feel I have something interesting to share from work.
Our API had a cache abstraction that allowed us to use either an in-memory LRU cache or memcached. The LRU cache was faster, but the memcached version could be shared, and “unlimited” in size. It was added on the theory that as we scaled out, the memcached version would win, because with a higher number of instances each instance’s local cache would see fewer hits. We had a few problems with it and we had moved away from using it. Additionally, it was a lot of code (about 10% of this particular project’s code base), so Wednesday I removed the abstraction layer and the memcached version, leaving only the LRU version.
Just for curiosity I did a load test before and after, on otherwise identical setup. I expected a small improvement in performance, but that’s not what I got. Instead I saw a 25% increase in throughput, sustained over 50 minutes, while replaying access logs from a few weeks back.
This API serves product data for a retail site. The baseline was the previous commit, but from Monday. It occurred to me Thursday morning that because our sale ended on Tuesday, between the baseline and my second test, a lot of requests would return 0-length responses, which would be served faster. This explained everything! Except that the environment where I ran the load test has a static data dump, so its data hasn’t changed. Damn!
So today I redid the test, for both the baseline and my change, but this time replaying access logs from Wednesday (when the sale had ended) to be more representative. This time we saw a 50% increase in throughput. I am now questioning everything, including my sanity.
Best part: 99th percentile improvements in latency from 1.3s to 540ms!
Worst part: not understanding how.
It’s going into production next week, so we’ll get to see then whether it has a practical impact on latency…