As per the usual …
Feel free to share what you’ve been working on here. Also mention if you need advice, help, or a second pair of eyes.
I’m unexpectedly posting this from a beach in Krabi in southern Thailand. My wife’s father decided we should accompany him on a trip last Thursday (leaving late Friday night), so here we are. Unfortunately, it started raining on Saturday and hasn’t really stopped for long since then (it is Monday night now).
I’ve managed to do some interesting stuff though.
In Dylan land, you’ll probably see a status update again from someone who has been working on Docker and Heroku stuff with Dylan. I’m really excited about that. There have also been some good improvements to our HTTP code, where I’ve been helping out with some code reviews. Finally, housel has been making further gains on the LLVM backend. It is getting much closer to booting an application (but with a bunch of things to go back and fix rather than just working around issues).
In my contract work, I finally got some new UI into a much better state for the memory / heap profiler for emscripten. Some of the interface is significantly faster, more flexible, etc now. However, I can really see why people like the idea of Web Components or Google’s Polymer.
Speaking of web components, for my new project, I wrote a couple of blog posts in the last week about some stuff that I’m working on with it (even though I’m not yet ready to discuss publicly what the project is):
I’m now working on a third post. I have a need for a really rich command shell environment, much like what the Lisp Machines had or what the CLIM Listener provided. This will all make a lot more sense in context one day when I can give real examples. As a result of this, I’ve been re-reading and re-learning some of the details of presentation-based GUI systems. An early paper on the concept is Presentation-Based User Interfaces from 1984 by Eugene Ciccarelli IV at MIT. An introduction to how some of these systems work can be found in User Interface Management Systems: The CLIM Perspective. Since this work was heavily used in Dynamic Windows on the Symbolics Lisp Machine and then evolved into CLIM, I have the advantage of personally knowing one of the people who was heavily involved in this area and I plan to pick his brains soon. :)
I know that this model of user interface has, in recent years, been pushed heavily by tools like IPython, but having used some CLIM-like stuff in the past, I think there’s a lot of room for moving forward in a slightly different or independent direction.
The future is looking bright. But a lot to do on every front …
I need to get back to Krabi – I arrived an hour after the tsunami in 2004, left shortly after due to the chaos and the inability to help, and have been wanting to go back and actually see more than the town of Krabi. So, I’m jealous!
Started reading Text Algorithms, and I’m planning to implement some of the algorithms in Common Lisp. The topic is interesting, I enjoy the writing style, and the algorithm descriptions are pretty straightforward, so I’m hoping for a good read.
At work I’m focusing on creating a nice way to get data out of our MongoDB logging/recording server and present it in a useful way.
If you’re interested in text algorithms (and natural language processing), check out the cl-nlp project. Its GitHub page also links to some blog posts that may help you.
Last week, I was pleased to make some progress on implementing exception handling in Hython. I added support for correctly handling break/continue/return in the midst of a try block. The result, as usual, leans heavily on continuations to ensure correctness. Though I had this spoiled for me by Matt Might’s excellent guide, I held off on implementing it until I fully grokked it; the result was a small code change that took quite a while to come around to. Now, I have one more hurdle to clear before I can move on to the more mechanical aspects of proper EH (selecting the right handler, checking that the exception is a subclass of BaseException, etc.): ensuring that exceptions raised in exception handlers still run the corresponding finally block (if any).
I’d like to handle the exceptions-raising-exceptions-should-still-run-finally-blocks case this week, as well as start on the less exciting parts of exception handling. While I don’t believe that the final implementation will be perfect, I am satisfied that it will be good enough to lean on for other language features in the not-too-distant future. This is probably one of the few things I have really sat down with and tried to do well; previously I did more along the lines of parlor tricks to build enough support to move on to something else. That aspect of quality is intrinsically satisfying, and I think I’m starting to get closer to a point where I can focus on specific details and do them well.
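Hython’s actual implementation is Haskell, but the continuation-wrapping idea behind that last hurdle can be sketched in JavaScript: represent each statement as a function of a normal continuation and a raise continuation, and have try/except/finally wrap both, so the finally body runs on every exit path – including when the handler itself raises. All names here are invented for illustration.

```javascript
// CPS sketch: a "statement" is a function (onDone, onRaise) => void.
// tryCatchFinally wraps BOTH continuations, so the finalizer runs no
// matter how the protected region exits: normal completion, a handled
// exception, or an exception raised by the handler itself.
function tryCatchFinally(body, handler, finalizer) {
  return (onDone, onRaise) => {
    // Run the finalizer, then resume whatever exit was in flight.
    // (An exception raised *in* the finalizer replaces the pending one.)
    const runFinally = (resume) => finalizer(resume, onRaise);
    body(
      () => runFinally(onDone),                    // body finished normally
      (exc) => handler(exc)(                       // body raised: run handler
        () => runFinally(onDone),                  // handler completed
        (exc2) => runFinally(() => onRaise(exc2))  // handler re-raised
      )
    );
  };
}

// The tricky case: the handler raises, and the finally block still runs.
const log = [];
const stmt = tryCatchFinally(
  (done, raise) => { log.push('body'); raise('boom'); },
  (exc) => (done, raise) => { log.push('handler:' + exc); raise('again'); },
  (done, raise) => { log.push('finally'); done(); }
);
stmt(() => log.push('done'), (e) => log.push('uncaught:' + e));
// log is now: ['body', 'handler:boom', 'finally', 'uncaught:again']
```

The key move is that the handler’s raise continuation is itself routed through `runFinally`, which is exactly the “exceptions raised in handlers still run finally” property.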
I’ve started looking at Elm for a game my friend and I have been working on to teach computer security to kids by having them hack the game itself, where for example a level would read data from cookies, and tampering with those cookies might change the layout of the level. At the moment I’m just trying to get basic movement and physics working and learn the basics of FRP, but I should hopefully have a working prototype in a couple of weeks.
I’m still poking at my idea for mitigating timing attacks with types, specifically trying to implement the Montgomery ladder for constant-time exponentiation, but I haven’t had a ton of luck with anything beyond the types. I’m thinking that I might want to explicitly encode the number of operations in the type itself instead of just in the result, but I haven’t done much with that idea yet. If anyone good with dependent types or crypto wants to point me in the right direction here, I’d appreciate it a lot.
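For readers unfamiliar with the ladder itself (separate from the typing question): it performs one multiply and one square per exponent bit regardless of the bit’s value, so the operation count doesn’t leak the exponent. A plain JavaScript/BigInt sketch – not the dependently-typed version being attempted:

```javascript
// Montgomery ladder: one multiply and one square per exponent bit,
// whichever way the bit falls, so the *count* of operations does not
// depend on the secret exponent. (BigInt arithmetic itself is not
// constant-time, so this only illustrates the control-flow property.)
function bitLength(n) {
  let bits = 0;
  while (n > 0n) { n >>= 1n; bits++; }
  return bits;
}

function montLadderPow(base, exp, mod) {
  let r0 = 1n;                 // invariant: r1 === r0 * base (mod mod)
  let r1 = base % mod;
  for (let i = bitLength(exp) - 1; i >= 0; i--) {
    if ((exp >> BigInt(i)) & 1n) {
      r0 = (r0 * r1) % mod;    // one multiply...
      r1 = (r1 * r1) % mod;    // ...and one square, on both branches
    } else {
      r1 = (r0 * r1) % mod;
      r0 = (r0 * r0) % mod;
    }
  }
  return r0;
}

// montLadderPow(3n, 13n, 1000n) === 323n  (3^13 = 1594323)
```

Encoding “both branches do exactly one multiply and one square” is presumably what the types would need to capture.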
I’m also applying to colleges, which is a lot less fun than my other projects, but also probably way more important. I’m still looking for schools as well, and if any of you have suggestions for places to study things like category theory, type theory, cryptography, and programming language theory, that are ideally not crazy selective, I would love to hear them.
At work, I’ll be starting a prototype project that ports part of our embedded Linux appliance to a commodity x86 Linux server, to determine if it is feasible to make some functionality available in the cloud. (Sorry for the vagueness, I’m not allowed to mention specifics!)
In my spare time, I will finish OpenDylan debug package generation for Debian, and I’m trying to finish a videogame by the end of this month for Ludum Dare’s October Challenge. My game concept is way too ambitious to be done by the end of this month, but maybe I can work out one mildly enjoyable aspect on its own.
At work, I’m starting the tedious process of moving an enormous Rails 3.2 app to 4.x (probably 4.2, as it’s about to be released).
In my spare time, I’ve had lots of trouble with Delayed Job and Sidekiq recently, so I’ve decided to create my own background worker, ideally with pluggable storage adapters. Hope to have lots of fun and learn something!
Working on IPFS, I’m building an object to make modifications to files represented as DAG trees. It’s very strange working on a ‘filesystem’ with no set block size. We are getting really close to an alpha release, just cleaning up a few remaining tasks!
Last week, I made very little progress on anything due to catching up on sleep. This week, I hope to continue working on tin, a very lightweight Lisp derivative (surprise!), hopefully/eventually, optimized for embedding. This is mostly for fun, but I do hope that it actually turns into something usable so I can use it for real tasks – we’ll see!
At work, I’ve been working on a project that replaces the way we collect system metrics (- collectd, + shh), which stalled while I figured out enough Ruby / Rails to make modifications to some core infrastructure to support it.
Outside of my job, my plan for this week is to knock out as many issues with rustdoc, the documentation tool for the Rust programming language, as I can. Over the weekend I finished work on three issues, including two fairly annoying ones related to missing bounds on type parameters in generated API docs.
There’s nothing worse when using an unfamiliar language than finding wrong or less-than-helpful reference docs, and I hope my work can help make that experience a rare one!
This week I am going to make some UI improvements for Fire★ after doing some more user testing. The little things matter.
I am considering going crazy and re-doing the whole UI using QtQuick instead of QWidgets so that doing ports to phones would be easier.
In order to do that, I need to better separate the GUI code from the Lua API code, which is labor-intensive but may be worth it if I want specialized UIs for different platforms.
Doing the separation would make running the app code in another thread than the UI code possible. This would improve UI responsiveness and also help isolate misbehaved apps and give me the ability to let the user stop such apps if needed.
What do you think, is this crazy? Should I go forth and work? Or am I setting myself up for a dream land rabbit hole type experience?
Believe it or not, I got this idea visiting an insane place over the weekend called House on the Rock. If they can build something that amazing over 50 years, I can slowly build Fire★ into something amazing and whimsical. Maybe?
Sounds very reasonable. I think recent Windows APIs take this approach of not letting you block the main thread easily. It is appalling that people still manage to block the UI thread these days; preventing them from doing that is a worthwhile step.
Sorry to appall you! Just joking. Not blocking the UI thread is even more important here because the apps that run are user created.
It was initially faster to prototype putting the app vm and ui together. But I regret it now.
This is a lesson for you young people! Do not block the UI thread! Design UI to be deeply asynchronous.
I am presenting about lowRISC at the OpenRISC Conference this weekend, so am spending some time preparing my talk and working on a related whitepaper. We were hoping to release it to coincide with the talk, but it may have to come a bit after.
Chibrary is an archive for mailing list messages that presents a discussion thread on a single page. Last week I reconnected the frontend to the backend. This week I’m chasing down a regression in threading conversations back into trees (the process was hugely redesigned; fingers crossed this bug is as easy as my first guess). Then there are some integration tests to write and, hopefully, deployment.
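At its simplest, the tree-building step is just a parent lookup per message; a toy sketch (the real Chibrary pipeline is much more involved, and these field names are made up):

```javascript
// Thread messages into trees by looking up each message's parent via
// its In-Reply-To reference. Messages whose parent is missing from the
// archive become thread roots.
function buildThreads(messages) {
  const byId = new Map(messages.map((m) => [m.id, { ...m, children: [] }]));
  const roots = [];
  for (const node of byId.values()) {
    const parent = node.inReplyTo && byId.get(node.inReplyTo);
    if (parent) parent.children.push(node);
    else roots.push(node);         // no known parent: start a new thread
  }
  return roots;
}

const roots = buildThreads([
  { id: 'a', inReplyTo: null },      // thread root
  { id: 'b', inReplyTo: 'a' },       // reply to a
  { id: 'c', inReplyTo: 'a' },       // another reply to a
  { id: 'd', inReplyTo: 'gone' },    // orphan: becomes its own root
]);
// roots has 2 entries; the first ('a') has 2 children
```

Real archives also have to cope with missing ancestors arriving later and with References chains, which is where the redesigned process (and its bugs) presumably lives.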
I’m also setting up a mailing list for organizing the Chicago study group for Erik Meijer’s Haskell MOOC starting on Oct 15. Contact me if you’d like to join.
So, this last week has seen more width and less depth …
The Docker images for OpenDylan are on track, but there’s some documentation work left. Right now it’s possible to create images for compiling and running an application using either the last stable release (2013.2) or the latest from GitHub. Everything is a little scattered at the moment, but I hope to polish and finish it in the next couple of days.
I also started hacking away on deft, a Dylan environment for tools. I started with a module that creates a graph of the dependencies between modules, and will soon begin another module to generate Procfiles and Dockerfiles.
All this while also trying to make the wiki application work and fixing some tiny issues along the way.
I’ve also been daydreaming about making a tiny ring-0 OS that boots right into a Common Lisp REPL; I’m planning on maybe cross-compiling mkcl with a spartan-ish libc, and I’ve also been doing some research - meaning I’ve been opening a load of tabs from http://wiki.osdev.org, not reading them, and then closing them when my laptop complains about lack of memory. However, I have short bursts of interest in particular projects (from 2 weeks to 1 month), so I’ve been delaying its start; right now there’s already a lot of interesting stuff to tackle in OpenDylan.
Evaluating InfluxDB for persisting time series. If I find anything seriously distasteful, Cassandra is on deck.
We’re using InfluxDB (the 0.7.x series) in production, and have been since August without a single issue. We’re waiting for the 0.9 series to come out, which should have much improved clustering support (we’re not using clustering right now, despite the scale we’re operating at).
Anyway, the experience has been very good, and they’re both responsive and helpful, which is always nice.
$work: Building some tooling around managing our… unique database setup. Trying to make life a little easier for everyone, and make our CI run a little smoother.
!$work: Messing around building something to organize my recent 40k hobby-acquisition addiction. I’ve got a boatload of in-progress pictures and notes, and my notebook is getting pretty crowded. It’s nice sometimes to do something really stupid-simple like make an Ember app or w/e, esp. when you’ve been working on nothing but complicated stuff for so long.
I also beat Shadow of Mordor the other day; great game, well worth the $50.
That game is like playing the CIA guerilla handbook. In a … good way?
Mordor or 40k? They both kind of fit (the former because it’s very much a manual-driven game, the latter because stabby stabby orc scum).
Mordor definitely, if it took place in any sort of human context it would feel really weird to play.
Trying to reproduce some of lucasvb’s animations here [http://en.wikipedia.org/wiki/User:LucasVB/Gallery] using Racket and metapict [http://soegaard.github.io/docs/metapict/metapict.html], and contributing features to metapict when I feel something is unnecessarily hard to do.
I am working on a web service that will convert short video files into animated gifs using:
I’m not sure if there’s a better way to orchestrate child processes using Node, here’s what I’m currently doing. Would appreciate any feedback from Node developers here.
Working on a website in Middleman (Ruby, running on Puma), and playing around a lot with JS timeout functions and the CSS transition property to make the site very animated. I’m basing everything off of Google’s material design spec.
Beyond that, I’m working with some friends on an Android picture app, and I’ve been tasked with building the web server. I’m building the authentication with StrongLoop LoopBack, a Node.js framework. Though their API docs are down for some reason, and it’s totally killing my productivity. If anyone is familiar with StrongLoop LoopBack, ping me; I’d be very, very grateful.
Mostly remembering I can (and should) say no to things, rather than ending up in a situation where I have ALL THE THINGS to do.
My home server install is going well, replaced a REALLY noisy (old) hard drive with a new one and the thing is so quiet I have to look at the LEDs to check it’s still on half the time. (Which is good, as it’s sat in the living room currently.) Managed to get SmartOS installed and figure out how to mount the zfs media shares into a smartos zone, which is happily sharing them to the network via smb. This week I plan to get crashplan installed on it to back our personal laptops up to, and get some resource stats collection going on with it. Also need to get tarsnap setup on my servers, so this week is about backing up all the things.
Work hours are being eaten up by caching algos. Building a robust test suite was remarkably time-consuming, since there are so many different variations that can be tested (various key distributions, dynamic range of distribution, value weights, cache sizes, temporal effects, etc).
The shootout currently consists of: Guava LRU, CAR, 2Q and my own invention called Percentile Weighted Cache. CAR has been tricky, since the paper suggests using linked lists…but this leads to awful lookup speed due to requiring a linear scan across four lists.
I re-implemented it using two ConcurrentHashMaps and two Guava LRUs, which gives much better performance. But I had to fudge the implementation a bit. CAR dynamically resizes the caches based on LRU vs LFU, but Guava LRU doesn’t allow you to pop() the least recently used, so the implementation is a bit restricted. This leads to poorer memory utilization and ultimately a lower hit rate.
2Q and PWCache are both looking pretty good, as well as being simpler and faster. 2Q is just implemented using three Guava LRUs (two hold data, one acts as a memory list). PWCache contains two 2Q caches and caches to one or the other depending on the weight (decided by a Frugal Stream on the 50th percentile).
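The original shootout is Java/Guava, but the three-LRU shape of 2Q described above can be sketched with insertion-ordered Maps (sizes and names here are arbitrary; a1in is kept FIFO and the ghost list holds keys only):

```javascript
// Rough 2Q sketch: two queues hold data (a1in for first-touch entries,
// main for entries seen again), and a ghost list remembers recently
// evicted keys so a repeat access promotes the entry to main.
class TwoQueueCache {
  constructor(inSize, mainSize, ghostSize) {
    this.a1in = new Map();   // recent entries, seen once (FIFO)
    this.main = new Map();   // promoted entries (LRU)
    this.ghost = new Map();  // keys recently evicted from a1in, no values
    this.sizes = { in: inSize, main: mainSize, ghost: ghostSize };
  }
  get(key) {
    if (this.main.has(key)) {          // refresh LRU position in main
      const v = this.main.get(key);
      this.main.delete(key); this.main.set(key, v);
      return v;
    }
    return this.a1in.get(key);         // a1in is FIFO; no refresh
  }
  put(key, value) {
    if (this.ghost.has(key)) {         // seen before: promote to main
      this.ghost.delete(key);
      this.main.set(key, value);
      this.trim(this.main, this.sizes.main);
    } else if (this.main.has(key)) {
      this.main.set(key, value);       // update in place
    } else {
      this.a1in.set(key, value);
      if (this.a1in.size > this.sizes.in) {
        const oldest = this.a1in.keys().next().value;
        this.a1in.delete(oldest);
        this.ghost.set(oldest, true);  // remember the key only
        this.trim(this.ghost, this.sizes.ghost);
      }
    }
  }
  trim(map, max) {                     // evict oldest until within bounds
    while (map.size > max) map.delete(map.keys().next().value);
  }
}
```

Maps iterate in insertion order, which is what makes the delete-then-set trick a workable stand-in for an LRU here; it also sidesteps the “can’t pop() the least recently used” restriction mentioned above for Guava.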
Off hours, I’m packing to move cross country, so that’s boring :)
Jumped down a refactoring rathole without fully understanding the business and engineering pressures that informed the original design, which turns out to be … pretty sensible. I need to de-cowboy a bit and take it slower, but at times I feel like I’m not contributing enough, so I grab these sorts of things prematurely. Old dog needs new tricks.
I am working on building progressively larger neocortical models with NuPIC and doing research into deep learning.
Two things I have learnt this week from research and experimentation:
- With deep networks you don’t necessarily have to feed all of the data into the bottom of the network; in fact, it’s better if you don’t. I noticed this when I was reading papers about dissection of the human neocortex: layers 4 and 5 connect almost entirely vertically, whereas layers 1, 2, and 3 connect more horizontally. This makes sense because you can present the neocortex with different spatial patterns for higher / lower SDR resolution. Don’t mix your data types in the same SDR; encode them differently and feed them into the higher parts of the network. Example: one might encode an SDR for an image using black / white pixels and then set another SDR to “dog” and “not dog” for classification purposes. Feed the classification SDR to the top of the network and the raw data to the bottom. This gave me stability much faster because the classification was not mixed up with the data.