This is the weekly thread to discuss what you’ve done recently and are working on this week.
Please be descriptive and don’t hesitate to ask for help, advice or other guidance.
Last week, I turned 40. I used to have trouble imagining that I would
survive past 35, but I did. I definitely never would have predicted the
life that I have now, though, even 6 years ago. My daughter turns 4 soon.
In my spare time in the last week, I’ve been doing a variety of things.
(This is in addition to work, family, errands and a bunch of other things.)
I’ve been working on some “window” management code to contribute to
libtickit. This is a
conversion of some code from the Perl Tickit project into C. I’ve
gotten a lot of the basics working (over the last couple of weeks),
but in the last week, I finally started porting the tests over from
Perl. Hopefully, I’ll be able to start getting this into the upstream
libtickit distribution soon.
Last week, I mentioned that I was working on some changes to the
testworks-specs library in Open Dylan.
I’ve made further progress on that. I’ve updated the documentation
for testworks to document testworks-specs for the first time.
I also did some trial builds of various test suites based on
testworks-specs with my updated version and have been fixing
various issues that I found. Not all of the issues were in my new
code though! My changes to make some errors visible at compile
time resulted in some trivial errors surfacing for the first
time, and I’ve pushed a variety of fixes to the various test suites.
I’ve got a draft of a blog post underway that describes the changes
in more detail and demonstrates some of the differences between the
old and new behaviors.
We got some new interest in Dylan from some people in the last week,
and that may well prove to be exciting. I’m hoping they stick
around and help us take things to a new level.
I also started on a Dylan binding for nanovg.
This can be found within my calvino
repository where I keep various graphics-related bindings libraries.
This has been pretty fun. In the process, I found a bug in the
Open Dylan compiler’s support for structs-by-value and submitted
a pull request
that fixed it.
I also used this opportunity to dig into some other issues.
Our bindings generator, melange,
allows you to specify an interface file
which is processed by melange to generate the actual C-FFI bindings.
I wanted to mark my C-FFI bindings for nanovg as being inlineable
so that a lot of overhead from boxing and other things could be
optimized away. Until now, this wasn’t supported by melange, but I just
landed a commit that adds it.
Another thing I ran into: when inline-only is specified
for a C-FFI function, a confluence of factors can result in it not
getting inlined, and then multiple out-of-line copies of the function
being generated. I haven’t solved this yet, but I’ve been using it as
an excuse to dive into the compiler and learn about how the inliner and
related parts of the optimizer work.
I am producing a nanovg binding because it seemed fun and useful,
and a good excuse to continue diving into parts of the compiler and tools
that could always use some love. But I am also going to take a shot
at writing an OpenGL / nanovg backend for our DUIM user interface library
and see how it goes. I’ve mentioned DUIM before, but as a reminder,
it was written by one of the people who worked on CLIM in Common Lisp
and, prior to that, Dynamic Windows at Symbolics for the Genera Lisp
operating system.
One thing that I could use some advice on: When I write my
widget rendering code with nanovg, it seems like it would make
sense to make it possible to control some of that via something
like a stylesheet. Qt (4.x and later) does this pretty well from what
I remember. But many people also complain about CSS syntax and
suggest that there could or should be something better. What
other examples of controlling rendering in a UI library are
out there? Are there popular alternatives to providing something like CSS?
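One non-CSS approach that comes up is treating styles as plain data plus a small resolver, rather than inventing a stylesheet syntax at all. A minimal sketch in Python (all rule and property names here are hypothetical, just to show the shape of the idea):

```python
# Style rules as plain data: (selector, properties) pairs.
# More specific selectors (id > class > type) win, CSS-style,
# but there is no syntax to parse -- rules are ordinary values.
RULES = [
    ({"type": "button"}, {"fg": "black", "bg": "grey"}),
    ({"class": "primary"}, {"bg": "blue"}),
    ({"id": "ok-button"}, {"fg": "white"}),
]

SPECIFICITY = {"type": 0, "class": 1, "id": 2}

def resolve(widget):
    """Merge every matching rule's properties, applying more
    specific rules last so they override broader ones."""
    style = {}
    ordered = sorted(RULES, key=lambda r: max(SPECIFICITY[k] for k in r[0]))
    for selector, props in ordered:
        if all(widget.get(k) == v for k, v in selector.items()):
            style.update(props)
    return style
```

The nice part of rules-as-data is that they can be generated, merged, or themed programmatically, which sidesteps most of the usual complaints about CSS syntax.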
Have you looked into CSS preprocessors like Sass and Less? They make working with CSS less painful.
Personally, I use Sass, but Less compiles quicker.
Last year, I bought a Xeon Phi on an impulse. For those who are unfamiliar with the Xeon Phi, it is a PCIe card that contains 57 1GHz Pentium (x86) cores with SIMD extensions and 8GB of memory, running an embedded Linux on board. It’s great for highly parallel general-purpose code that GPUs aren’t suited for (lots of conditionals, etc.).
This weekend, I finally received the final parts of the system to house the Phi, and assembled it. I haven’t set up the software yet, and I still need to verify that the cooling is sufficient (these cards get ridiculously hot!).
The card was only $200, but after a lot of research, it turns out these babies require some pretty high-cost infrastructure to get running (motherboard with 64-bit PCIe addressing, which is rare and rarely advertised). The total setup has cost me around €800. I got a lot of help from Don Kinghorn from Puget Systems, a company that builds and sells specialized workstations and servers (Xeon Phi, Nvidia Tesla, …). They were very friendly and replied very quickly, despite the fact that I wasn’t a customer.
If anyone has a cool application for the Phi, or software that they want to port to it, or something they want to test on a 57-core setup, feel free to let me know! I bought this card to experiment, but I can probably let someone else use it over SSH from time to time.
This week I’ll mostly be preparing to leave for Japan!
Very interesting. Do you happen to know why they decided to put 57 cores on the machine, as opposed to something rounder?
There are models with slightly more cores (60 or 61), but I think this is a yield issue. My speculation is that they are building them as 64 cores per die, but they can’t get acceptable yields, so they release them as 57-, 60- or 61-core versions depending on the quality of the specimen. These cards normally cost thousands of dollars apiece, so I’m guessing they’re pushing the limit of what they can physically fit onto a die/board.
I think this is also just where the thermals, performance and power consumption ended up being best for this kind of package (dual slot PCIe). Adding more cores likely would have meant dropping to lower clocks, and I think they didn’t want to settle for anything less than the symbolic 1GHz. As I mentioned, these cards get REALLY hot, even when idle.
That said, 57 cores with 4-way HyperThreading and 512-bit SIMD is plenty of parallelism to play with.
I’m writing what more or less amounts to an SDK in PureScript. The extensible effects are really nice, and I’m still not sure how best to wrap a certain style of JS library. I’m experimenting with the boundaries and calling the SDK layer from JS.
Getting the 1.0 version of Snap for Beginners done has been a real slog this past year, but I’ve been making some progress in the new year with a new approach to the content.
I’m building out an API using PostgREST and Sqitch. Sqitch is really nice and is pushing my Postgres knowledge further. I’m planning to use the API as a microservice-type deal behind a Snap application. I haven’t really broached access control for the API yet. It would be really nice to use GHCJS for the GUI layer.
I can take a breather now that Rust 1.0 alpha has shipped!
Today, I’m laying out my plans for the alpha period: what I’m gonna work on in each of the six weeks. So I won’t really know this week’s plan until later today. I had to take the weekend off for once; burning the candle at both ends to hit that Friday deadline was killer. Seemed to go okay though!
I’m implementing the medcouple, a robust measure of skewness, for Python (probably for statsmodels).
I may write a blog post or Wikipedia article once I’m done. I’m having fun with this problem. The end goal is to have adjusted boxplots, for better outlier detection on skewed distributions.
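For anyone curious what the medcouple actually computes, the naive O(n²) definition is short enough to sketch; a real statsmodels contribution would want the O(n log n) algorithm instead:

```python
from statistics import median

def medcouple(xs):
    """Naive O(n^2) medcouple: the median of the kernel h over all
    pairs x_i <= m <= x_j, where m is the sample median.
    Returns 0 for symmetric data, > 0 for right skew, < 0 for left."""
    xs = sorted(xs)
    m = median(xs)
    lower = [x for x in xs if x <= m]
    upper = [x for x in xs if x >= m]
    k = xs.count(m)  # points tied with the median need a special kernel

    h = []
    for i, xi in enumerate(lower):
        for j, xj in enumerate(upper):
            if xi == m == xj:
                # Both points tied with the median: compare their ranks
                # among the ties instead of dividing 0/0.
                p = i - (len(lower) - k) + 1  # 1-based rank of xi among ties
                q = j + 1                     # 1-based rank of xj among ties
                h.append(float((p + q - 1 > k) - (p + q - 1 < k)))
            else:
                h.append(((xj - m) - (m - xi)) / (xj - xi))
    return median(h)
```

Because it compares how far the tails extend on each side of the median rather than cubing deviations, a few extreme outliers can’t dominate it the way they dominate classical skewness, which is what makes it suitable for the adjusted boxplot.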
$work: Same old, same old. Trying to barnraise this docker/gitlab-ci/openstack system; it’s a real bear just trying to keep track of it all. Add to that the confusing networking setup at $work and it’s just a recipe for a headache.
!$work: I’ve somehow managed to pick up two new languages in the last week or so, both relatively unintentionally. I’ve been fiddling with Sage (the CAS) as a replacement for Mathematica, and I’ve been porting some Mathematica code to Python, with great success. Mathematica (IMO) severely lacks a pleasant model for doing much of anything. The pattern-matching features are pretty neat, but it’s not particularly convenient to build rich data structures. Python, of course, is much better: a rich class system and a wide array of third-party libraries make it way easier to structure things. It’s also pretty nice being able to do a lot of the grunt work in pure Python, then just ‘sub in’ abstract values and have it magically spit out equations in those abstract terms.
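That “sub in abstract values” workflow is easy to show with SymPy: write an ordinary Python function, then call it with symbols instead of numbers (the function and names here are just for illustration):

```python
import sympy as sp

# An ordinary numeric helper -- nothing symbolic about its definition.
def kinetic_energy(m, v):
    return m * v**2 / 2

# "Sub in" abstract values: pass symbols instead of numbers and the
# same code hands back an equation in those abstract terms.
m, v = sp.symbols("m v", positive=True)
expr = kinetic_energy(m, v)   # m*v**2/2 as a symbolic expression
d = sp.diff(expr, v)          # differentiate with respect to v
```

The grunt work (loops, data wrangling, class structure) stays in plain Python, and only the final pass through the functions needs to be symbolic.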
I also fell into working on some Lua stuff. Suffice it to say that my experience with Lua so far has been far less positive. Its anemic-by-design standard library is pretty annoying, and the lack of a strong direction in terms of paradigm is equally frustrating. I’ve come to think the notion of multi-paradigm languages is simply bad. I’m not saying that any one paradigm is better than another, but these sorts of ‘jack of all trades’ languages tend to be more frustrating than the supposed ‘benefit’ of being able to write in multiple different paradigms is worth. The tradeoff for this ‘feature’ is that nothing in the language can make assumptions about how your code will work, it can’t provide tools that support one paradigm more than any other, and ultimately you’re left having to roll your own everything. I much prefer languages with a strong notion of paradigm. Ruby, Python, Haskell, even Java all know what they are, and so I (as a programmer) can simply join the party. With Lua, I’m stuck figuring out how I’m going to implement my object model today, and how I’m going to do simple things like split a string on spaces. It’s really very frustrating.
I will say that the ‘everything is a table’-ism is pretty nice, and were it not for the fact that this Lua code is running in a pretty locked-down environment, there are a lot of third-party libraries I could use to get some of those features back. But the selling point of Lua is that it’s embeddable anywhere, and those libraries won’t work in this particular embedding. I’m just frustrated by the doldrum-like nature of it all: no wind to help push the sails of thought along.
Have a look at SymPy (www.sympy.org, www.github.com/sympy/sympy).
Yah, this looks a lot like what Sage offers; is there anything striking that differentiates it? (I’m poking through the feature list atm and not seeing it. I suppose it is nice that it’s just a library, by the looks of it, rather than a sort of pseudo-interpreter thing.)
See also Mathics, which implements Mathematica-compatible syntax on top of SymPy. It might make migration from Mathematica easier. You can try out Mathics online.
I am continuing with the same tasks I mentioned last week. Unfortunately, I continued to fight the flu, and on top of it I had to have some emergency dental care.
The flu knocked me out for a solid two weeks, and I had a business trip the week after. Looking at code was the last thing I wanted to do for a while.
I did, however, get some work done on Hython! I added support for the global keyword, the nonlocal keyword, and lambda expressions. What’s noteworthy is that all of the rewriting I had done earlier made implementing them straightforward! I’m getting close to having all the pieces I need to finish the core of the Python language. Which is wild, considering that when I started in July I didn’t know what I was doing.
I also spent some time extracting out a new type: Class. As you’d expect, it encompasses all the data related to a class. I also implemented the C3 linearization algorithm used by new-style classes since Python 2.3, and by Python 3. The point of the algorithm is to create a deterministic ordering of classes in the presence of multiple inheritance, for use in method lookups.
The type signature for the merge portion of the algorithm is:
c3Merge :: [[Class]] -> [Class]
Look familiar? It should! I converted the algorithm and tested it using Char in place of Class, making it easy to test against the examples on Wikipedia. That ended up helping a lot: I could specify inheritance hierarchies in the form of “CBO”, where each letter represents a distinct class, rather than building out individual Class instances.
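Since Python’s own MRO is C3, a sketch of the merge in Python can be checked directly against the interpreter (the hierarchy below is just the classic multiple-inheritance example):

```python
def c3_merge(seqs):
    """Repeatedly take the first head that appears in no tail; if every
    head is stuck in some tail, the hierarchy is inconsistent."""
    result = []
    seqs = [list(s) for s in seqs if s]
    while seqs:
        for seq in seqs:
            head = seq[0]
            if not any(head in s[1:] for s in seqs):
                break
        else:
            raise TypeError("inconsistent hierarchy")
        result.append(head)
        # Remove the chosen head everywhere and drop emptied sequences.
        seqs = [[x for x in s if x != head] for s in seqs]
        seqs = [s for s in seqs if s]
    return result

def linearize(c, bases):
    """L(C) = C + merge(L(B1), ..., L(Bn), [B1, ..., Bn])."""
    return [c] + c3_merge([linearize(b, bases) for b in bases[c]]
                          + [list(bases[c])])

# Diamond-heavy example hierarchy, as a name -> list-of-bases table:
HIERARCHY = {
    "O": [], "A": ["O"], "B": ["O"], "C": ["O"], "D": ["O"], "E": ["O"],
    "K1": ["A", "B", "C"], "K2": ["D", "B", "E"], "K3": ["D", "A"],
    "Z": ["K1", "K2", "K3"],
}
```

Testing against `SomeClass.__mro__` gives a free oracle: any hierarchy the sketch linearizes should come out in exactly the order CPython reports.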
I’m adding features to a Scala work project, along with some interesting refactoring to support them. This should allow downstream users to query our API more efficiently; we estimate it will save about $170k a year in AWS hosting fees.
Also continuing to play with my Clojure + Datomic toy project. Starting to get a bit more familiar with Emacs + Cider as a workflow.
I’m still working on the decision tree I posted last week. I found a bug in how I calculate the information gain across all attributes. Hence, I need to modify my probability DSL to take multiple variables as givens, as well as an unspecified RV, in order to calculate the info gain.
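For reference, conditioning on multiple givens just means grouping on their joint value; a small hand-rolled sketch (not the poster’s DSL) of information gain over one or more given attributes:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, target, givens):
    """IG(target; givens) = H(target) - H(target | givens).

    rows is a list of dicts; target and givens are key names.
    Conditioning on several givens means grouping rows by the
    joint value of all of them at once.
    """
    groups = {}
    for r in rows:
        key = tuple(r[g] for g in givens)    # joint value of the givens
        groups.setdefault(key, []).append(r[target])
    cond = sum(len(g) / len(rows) * entropy(g) for g in groups.values())
    return entropy([r[target] for r in rows]) - cond
```

The multi-given case matters for exactly the XOR-style situations where no single attribute is informative on its own but a pair of them determines the target completely.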
For fun, over the weekend I started playing with Android’s Activity Recognition API, so you can write an app that knows when you’ve started walking, driving, biking, or sitting still. It’s a fairly hairy API, but I got it working and reporting its status to a little dummy web service, which was fun. https://github.com/ludflu/WhereYaGoing
Once again, taking a break from Fire★ this week and working on my todo list project called “Muda”. Last week I managed to create a login page and user accounts. This week I want to get the user settings page and list export working (because I don’t want people locked in my software).
Then hopefully next week I can get payments working.
I am thinking that you can either pay a small amount per year for me to host your list, or you can just download the code and run it yourself. I will publish the code with a GPL license. Do you guys/gals think this is a good model?
Ok, I lied. I got bored and just committed Lua 5.3.0 support in Fire★. Lua 5.3.0 was just released. Will post binaries next week.
Still working on the Haskell book & Haskell contract.
Trying to prep early material in the Haskell book for somebody to beta-test.
Publishers are AWOL.
Among lots of other stuff, I’m working on a tool to get Docker instances running and linked using a JSON config file (see here). I started it today because I was using a Makefile to do it before and it got a bit out of hand.