This week I’ve got a few things to do. For “work” I’m reading a lot of literature around type theory and related areas, including plenty of interesting work on using certified code to prove constraints about low-level systems (e.g. assembly).
For blogging I’m working on the rest of my “Intro to Dependent Types Series”. After this is done I’m planning to write about the equivalence between SSA and CPS.
Finally I’m working on a tiny imperative language with linear types that compiles to LLVM.
How much time do you find yourself having for implementing these “tiny languages” that compile to LLVM? And which language do you use (Haskell, I presume)?
[Edit] grammar
Not a huge amount, since I’m also starting my first semester at CMU; currently I’m still at the “doodle inference rules to understand the type theory” stage. Yeah, I’ll probably implement it in Haskell. I thought about Rust, since I’ve actually been properly learning it lately, but decided I wanted some libraries that are currently Haskell-only.
I’m a bit late in responding here, but here goes….
For my work, I’ve been doing a bunch of work involving D3 still and working with data from the memory / heap profiler.
For my new project, I’ve set up a new blog and will start writing posts there this week. I’ve also set up a site with some experiments in user interface to get a feel for the different trade offs involved as I attempt to achieve my goals. This involves alternative presentation of news content so there is a lot to experiment with. I’ll have more to say about this in a week or two.
As for Dylan, I’ve been asked to hold off on removing the HARP compiler back-end for a short while, so I haven’t submitted that pull request yet. I spent some time doing code review on the pending changes to the LLVM compiler back-end from the person working on that and it is looking great.
I also updated my branch of LLDB to the current HEAD so that I can get back to adding support for Dylan directly to LLDB rather than via some Python scripts. Many language-specific features in LLDB currently require some C++ code rather than being entirely implementable via the Python interface. There’s still a lot of work to do in LLDB to make it cleanly extensible for a new language as there are many things that assume your language is one of C / C++ / Objective C and places where they use internal structures from clang.
There have been other areas where people have been working lately that needed code review as well. Francesco has been working on improving some of the code exploration / inspection features in our new build tool, Deft. Carl has been working on fixing some issues in our HTTP libraries. It is great to see some good solid work being done!
I’ve had a few things where I needed to generate a document in multiple formats lately while writing code in Dylan. We currently do some of this via other tools that call out to the docutils tool chain from Python, but that isn’t always as flexible as I’d like. So, for a while, I’ve been looking at creating a document-model library that uses a structure internally similar to that of docutils or Pandoc and can then render that out to various output formats. I’ll probably start some work on that this week. For now, I don’t need a surface syntax as I can just generate the document AST, but in the future, the primary surface syntax will probably be ReStructuredText as that’s what we use for all of our documentation in Dylan-land.
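To sketch the rough shape I have in mind (not Dylan, and purely illustrative; every name below is hypothetical, not part of any existing library): a tree of typed nodes plus one walker per output format, in the spirit of docutils or Pandoc.

```c
#include <stdio.h>

/* A hypothetical, minimal document model: a tree of typed nodes that
 * different back-ends walk to produce different output formats. */
typedef enum { NODE_DOCUMENT, NODE_PARAGRAPH, NODE_TEXT, NODE_EMPHASIS } node_type_t;

typedef struct node {
    node_type_t type;
    const char *text;            /* only used by NODE_TEXT */
    struct node *children[8];    /* fixed fan-out keeps the sketch short */
    int n_children;
} node_t;

/* One renderer per output format; both walk the same tree. */
static void render_html(const node_t *n)
{
    switch (n->type) {
    case NODE_TEXT:      printf("%s", n->text); return;
    case NODE_EMPHASIS:  printf("<em>"); break;
    case NODE_PARAGRAPH: printf("<p>"); break;
    case NODE_DOCUMENT:  break;
    }
    for (int i = 0; i < n->n_children; i++)
        render_html(n->children[i]);
    if (n->type == NODE_EMPHASIS)  printf("</em>");
    if (n->type == NODE_PARAGRAPH) printf("</p>\n");
}

static void render_text(const node_t *n)
{
    if (n->type == NODE_TEXT) { printf("%s", n->text); return; }
    for (int i = 0; i < n->n_children; i++)
        render_text(n->children[i]);
    if (n->type == NODE_PARAGRAPH) printf("\n");
}

int main(void)
{
    node_t hello = { NODE_TEXT, "Hello, ", {0}, 0 };
    node_t name  = { NODE_TEXT, "Dylan",   {0}, 0 };
    node_t em    = { NODE_EMPHASIS, NULL,  { &name }, 1 };
    node_t para  = { NODE_PARAGRAPH, NULL, { &hello, &em }, 2 };
    node_t doc   = { NODE_DOCUMENT, NULL,  { &para }, 1 };

    render_html(&doc);   /* prints: <p>Hello, <em>Dylan</em></p> */
    render_text(&doc);   /* prints: Hello, Dylan */
    return 0;
}
```

The point is just that once the AST exists, each output format is a small, independent walk over it; the surface syntax can come later.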
Help wanted: If someone is interested, we’d like to write an event logger for the runtime in a combination of C and Dylan. This would be similar to what one of our garbage collectors provides for event logging, but also similar to what GHC has, although we’d start out being real-time. Some of the design work is done for this, but some remains. It would be a fun small project and something that would be highly useful.
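As a purely hypothetical sketch of the kind of thing I mean (this is not the actual design; the record layout and names are invented for illustration): append fixed-header binary records that an offline tool can decode later.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical on-disk record: a fixed header followed by an
 * event-specific payload. A real format would pin down endianness
 * and packing; this sketch just uses the host representation. */
typedef struct {
    uint64_t timestamp_ns;   /* when the event happened */
    uint16_t event_id;       /* what kind of event it is */
    uint16_t payload_len;    /* how many payload bytes follow */
} event_header_t;

static int log_event(FILE *out, uint64_t timestamp_ns, uint16_t event_id,
                     const void *payload, uint16_t payload_len)
{
    event_header_t header = { timestamp_ns, event_id, payload_len };
    if (fwrite(&header, sizeof header, 1, out) != 1)
        return -1;
    if (payload_len > 0 && fwrite(payload, payload_len, 1, out) != 1)
        return -1;
    return 0;
}

int main(void)
{
    FILE *out = fopen("events.log", "wb");
    if (!out)
        return 1;
    const char msg[] = "gc-start";
    log_event(out, 123456789u, 1, msg, sizeof msg);
    fclose(out);
    return 0;
}
```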
Still porting the es shell to Rust
Experimenting with an ultra-minimal sign-up flow for, of all things, a webring. Thinking about it as a proof-of-concept for owning one’s own infrastructure as a basis for connecting services without major vendors or central sites.
Poking at an Earley parser in JavaScript, thinking of porting code from Marpa.
$work: parallelizing a moderately sized but necessarily slow test suite across multiple Jenkins workers. Chasing out bugs as I go.
!$work: Learning about C build systems, mostly Auto*, by slowly building up bits myself. Frustrated by how convoluted it all is: I don’t understand how anyone makes this stuff work for them, or how any software actually gets built by this arcane pile of dreck. CMake and SCons aren’t much better. Is it truly so difficult to support a potentially deep src/ tree? Or to track dependencies in one way that works with reasonable reliability across sane platforms (I could give a shit about Windows, but the differences between Linux, BSD, and OS X are driving me mad)? And if none of those, why is it so difficult to slot in a custom preprocessing script to strip away the boilerplate you have to incant to get virtually every unit testing library I’ve come across to work? It’s unbelievable how complicated it all is.

I feel your pain regarding build systems. There’s no great one, but IMO Autotools is the worst of a set of bad alternatives.

I’ve used Autotools, SCons, CMake and qmake, and out of those I think I prefer CMake. But it’s not like I enjoy using it.
I have no doubt it’s the worst, but it’s also pretty ubiquitous, and it comes in parts. I recall reading an article about the origins of the Autotools ecosystem, something like:
First they said: “Let’s automate the configure script w/ some stupid macro language.”
Then they said: “Hmm, now I spend all my time writing boilerplate makefiles, let’s automate that with more stupid scripting.”
Then it was: “Jeez, it’s a pain to write the scripts that the macros consume, let’s write more macros to automate that.”
Then it was: “Didn’t account for this corner case with libraries, better automate the generation of an automation script to automate automating the automatic configuration of automatic configuration configuration IA IA CTHULU F’THAGN.”
Right now I’m just using Autoconf to do some of the dependency checks, and I have a set of mostly generic makefiles to do the actual compilation for the library, the executable, and the test executable. The problem I’m running into at the moment is that my version of Autotools doesn’t behave consistently on OS X and Linux. I develop on both platforms routinely, the former for traveling around with (because the MBP is a lot lighter than my Linux laptop), and the latter for normal development. It’s also the case that CI (Travis) runs Linux (but a different distro). For whatever reason, OS X automatically puts the uuid library in the compilation invocation (i.e., you don’t have to specify -luuid), but can’t figure out that check (a unit testing tool) is a library via any incantation I can come up with, even though pkg-config seems to know about it. Not even the PKG_CHECK_MODULES macro works. Linux happily finds these things but requires the -luuid line, and even though it includes -lcheck, it doesn’t link things appropriately, and thus the tests won’t run because of missing symbols.

I’m beginning to see why C programmers are so grumpy all the time.
EDIT: (Apologies for the drive-by venting. It’s been a long day generating more heat than light).
That sounds a lot like this PHK post ;):
https://www.varnish-cache.org/docs/2.1/phk/autocrap.html
Why are build tools so uniformly terrible? It’s a real head-scratcher.
Well, autotools was born in a vastly different time. I don’t know how old you are, but the non-autotools alternatives on Unix were often far, far, far worse. Hopefully you don’t remember imake. :)

Many things used to just have a Makefile per platform, or a home-grown set of per-platform Makefile fragments that got included into the main one(s). That was always a joy, especially when what you needed to worry about was an API-level thing that changed between versions of a platform (like the introduction of pthreads into HP-UX). IIRC, the original open source release of Mozilla, NSPR and so on had a pretty arcane build system until Chris Seawood came along and did a huge amount of work to get an autotools build off the ground for it (way back in the summer of 1998).

That ignores the pain that C++ compilers were in the 1990s as well. AIX had xlc, HP-UX had aCC, Solaris had Sun Pro, and so on. Today is practically a golden age, given that we mainly have to worry about gcc and clang, and they have largely compatible command-line options for many things. :)

By the time that started to go away and we were ending up where we are now with mostly-standardized platforms (but curse you, OS X, for lacking clock_gettime and other things), people started to realize they could write their own build systems. But then they had to compete with autotools, which was available everywhere, available by default, and used by just about everything.

Unfortunately, everyone realizing this meant that everyone decided to write their own thing or push for their favorite choice to be adopted. Now, years and years later, we have CMake, SCons, premake, various Jam incarnations, and no clear winner. CMake managed to get a decent amount of momentum, though, so we appear to be stuck with it or autotools for the most part.
Now, we’re replacing that with the JS world where every kid and his mother has decided they need a new build system … I’d love to hear why that is. :)
Yeah, all that rings pretty true. Portability used to be much harder, and there’s lots of older stuff that has been rightfully forgotten. Say what you will about autotools, but it usually can do what you need.
I’m not that old, but just unlucky enough to have dealt with lots of projects with homegrown “portable” build systems. In HPC, code lives forever and gets ported to every platform, compiler and BLAS library. Build systems grow organically in many directions, often not toward the light.
Oh, I remember imake. I remember having to build across e.g. IRIX, SunOS, Solaris, HP-UX et al. What I don’t understand is why it doesn’t seem to be getting better.
I look forward to experimenting with Guile / make integration. Non-recursive make is not that slow, and Scheme would provide a nice DSL for target definitions and platform-specific tweaks.
Have you seen tup? http://gittup.org/tup/index.html
I have, and while it’s neat, it fails in the ‘potentially deeply nested src directory’ area. It does so for reasons I do not fully understand, but the author seems to be against the notion on grounds of complexity of some sort.

Here’s what I (ultimately) want:

src is a directory with a potentially deep (but finite) tree of subdirectories, containing source files that end in .c or .h (or potentially some other source-code extension). They are assumed to always be correctly annotated.

test is similar to src, with the caveat that all files in test end in _test.<source extension> and consist of test files.

obj is a top-level directory which contains all the object files, ideally laid out as a copy of the directory structure of src. This is not a hard requirement, but I’d prefer that object files aren’t left in src; it’s merely aesthetic, but it does make cleaning a bit easier.

I should be able to easily define how to preprocess or generate source files. In particular, I use Check as a unit testing framework, and that ends up with a bunch of boilerplate which is easy to template away with a bit of Ruby: I generate a series of suite files, one for each _test.c file, and from those I generate a runner.c file which executes each suite. (There’s a sketch of that boilerplate right after this list.)

No/minimal metascripting, i.e., as few scripts that generate scripts to build stuff as possible. I’m okay with a bit of source-code preprocessing, but the build tool should stand on its own.

No IDEs. Despite the fact that Xcode actually ‘does what I want’ (even if it’s lying when it does so), and that other IDEs do similar, I’d much prefer to stay on the command line, and to use a build system that’s (at least in theory) portable across operating systems.

Needs to be able to generate a working build on multiple operating systems: at minimum OS X, Arch Linux, and Ubuntu, but ideally OS X and any Linux. I don’t care about Windows or obscure unices, or even some of the ‘big’ unices (AIX). BSD would be nice, but I’m not committed.
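To make the Check point concrete, here’s roughly the per-suite / runner boilerplate every test file ends up needing (the names here are invented for illustration, not lifted from my actual project); this is the stuff I template away with Ruby:

```c
#include <check.h>
#include <stdlib.h>

/* One test case; a real _test.c file has many of these. */
START_TEST(test_addition)
{
    ck_assert_int_eq(1 + 1, 2);
}
END_TEST

/* Per-suite boilerplate: build a Suite out of TCases. */
static Suite *math_suite(void)
{
    Suite *s = suite_create("math");
    TCase *tc = tcase_create("core");
    tcase_add_test(tc, test_addition);
    suite_add_tcase(s, tc);
    return s;
}

/* Runner boilerplate (the generated runner.c): run every suite,
 * report, and turn the failure count into an exit status. */
int main(void)
{
    SRunner *sr = srunner_create(math_suite());
    srunner_run_all(sr, CK_NORMAL);
    int failed = srunner_ntests_failed(sr);
    srunner_free(sr);
    return failed == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}
```

Multiply that by every suite (plus the -lcheck linking dance above) and you can see why I’d rather generate it.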
Some things I liked about tup: the idea of using fsevents to detect what was connected to what was brilliant, a much better approach than naming conventions. I also rather liked the simple task DSL, though it did take some getting used to. What I didn’t like was the seemingly arbitrary limitations on what it was allowed to do. Ultimately it looked as though I’d end up having to write a script to generate the Tupfile to support my deep hierarchy, or artificially limit how deep that hierarchy could go, both of which make my teeth itch.

I’ve tried CMake, too, and SCons, and premake, and a few others. I think at some point I’m probably just going to ditch all of the autoconf stuff entirely and break down to writing shell scripts that run uname -s (like that PHK post ultimately suggested) and trigger a different configure subscript accordingly. It’s a bear, because it means duplicating configuration logic across platforms, but I can count on bash running more or less the same across systems; I evidently can’t count on Autoconf doing that.

I’m on sick leave and decided to finish one of my pet projects: reimplementing Kestrel/Darner (https://github.com/twitter/kestrel, https://github.com/wavii/darner) in Rust. It was left dangling for a while and it’s a big yak shave (LevelDB, you say? Write your own binding), so all parts are unfinished. But it talks to the network and it persists data properly; only timeouts are missing to have all the “important” pieces of machinery in place.
https://github.com/skade/drossel https://github.com/skade/leveldb https://github.com/skade/strand https://github.com/skade/drossel-server
I am sharpening my Haskell skills by writing a compiler for the Swift programming language using Alex/Happy for lexer and parser and LLVM for code generation. It is in very early stages but feedback from fellow Haskellers and compiler hackers is welcome!
I just had a quick poke through, but one big piece of advice I can give you is to always write type signatures for top-level definitions. It’s a lesson I learned the hard way: just because the type can be inferred doesn’t mean you want it to be.
For work, I hope to wrap up a Haskell web service I have been working on. I need to find a way to deal with a rather large XML schema, so I’ll be doing a bit of research into available options there.
For !work, I am finally writing the last planned chapter of my PureScript book, which will be about a general collection of techniques for writing domain-specific languages in PureScript, such as free monads, finally-tagless encodings, and smart constructors.
After that, I get to relax at Strange Loop for a few days, then I plan to get back to PureScript compiler development.
I got my chromeless gist mirroring service up and running. I need to submit it to Embed.ly as a provider and hopefully in a week or so I’ll be able to use it to embed D3.js visualizations into Medium blog posts. After that, Project Yak Shave will be complete. :)
What I’m most pleased with from the past week is how I balanced mental energy and productivity. You see, I tend to either work too hard on something (and make myself tired) or just neglect it entirely. This week I had a nice split of relaxation and progress, despite a busy week of work.
On hython, I focused a lot on finishing out the parsing stage by parsing for/with/lambda/try/raise statements, along with a few more built-in functions. I didn’t implement support for any of those, because they tend to require features I don’t have implemented yet. This week, I’d like to write some more blog articles to try to catch up to where I am, especially since there’s demand.
For Real Life, I’m trying to switch to going to the gym a lot more often (~5x/wk). Of all my projects, that’s actually my current focus.
Outside of work, I’m still doing a lot of random small projects in Common Lisp. The geo-coordinate library I mentioned last week turned into the UTM library, which I put on GitHub and submitted a request to have included in Quicklisp. I also finished up the script that gave me the idea for the library, and I can now compute speeds and distances from GPX files, but this isn’t on GitHub yet.
Then I wrote a quick SDL/OpenGL/Lisp game of life.
I’m also setting up a coding blog using GitHub pages and OctoPress, but haven’t posted anything substantial yet.
At work I’m finishing up some odds and ends before a release.
Got my first iOS project up on the App Store, woo hoo! Mosaics - Make beautiful mosaics by hand.
Low sales numbers so far, as expected. I’m proud of the app and people really like it when they play with it, but it’s kind of a niche that’s hard to market and get people interested in unless they’re handed a device running it, in which case they totally get it. I’m working now on doing some marketing/PR, but that isn’t my forte. Ah, well, the primary goal was to learn iOS development and it’s succeeded very well at that. So now I need to find a job, hopefully one using these new-found skills!
I’m working on a method for DHT peer bootstrapping for ipfs, and building a daemon that the user commands communicate with. Bootstrapping is a harder problem than I initially thought it would be; there are many solutions, but finding the right one is tricky.
I’m working on converting the R7RS specification to HTML, so that people can reference it using a web browser.
I’m working on a front-end development link-aggregation site built with Telescope. Basically, it’s a community of passionate front-end developers and web designers focused on helping each other, and on discussing and sharing interesting things in our industry.
You can check it out here: Frontends.org
Meteor? Neat!
Reading up on Prolog web applications, since I’m trying to learn web frameworks.
Finally moving my MVP to our new stack! I can’t say too much, but it’s a combination of Elixir (OTP) and the JVM with a lot of influence from the Raft consensus algorithm.
In terms of Hex, I’ve been working on email verification for new accounts, email password reset, a new registry format (ets table + log table) and rate limiting for the API. The next step, after these are merged, is to focus on multiple registry support!
For anyone interested, the source for hex is hosted here and the source for hex_web is hosted here
For $FUN, I’m writing up a test suite for my Clojure library that should help me find where it breaks from being identical to the Python version.
For $SCHOOL, I’m starting to hold office hours as part of being a “course producer” (TA with a cooler title) for the operating systems course. I have a ton of reading, but thankfully all of that reading is really interesting.
I’m working on a JIT-compiled version of ipfw using netmap and LLVM on FreeBSD. I finished killing the bugs in the infrastructure (JITed) code, and I’m starting to test it on my laptop. So far… kernel panic! I want to check out what’s happening there.
Once I’m done making it run reliably, I’ll start on the code generation (the bit corresponding to the actual actions executed for each rule, which is very simple and easy to implement). When I have a little subset working, I’ll start with some benchmarking. Hope to get to that this week.
Last week I cleaned up the GUI and redesigned the website for firestr.
This week I am out of town for work, so in the evenings I will work on my secret game, TCFODS. Adding more weapons and enemies!
I’m working on VirtKick, a simple orchestrator. Last week I finished power and storage management. This week I’m making it possible to create a new machine (like on this page in the prototype), and then all the settings from here except for user management.
Last week, I finished work on a flo to verilog converter.
This week, I am reading up on FPGA place-and-route algorithms.
Recently got my first Arduino so I’ve been going through the tutorials and starter projects for that.
I’m getting ready to go away on holiday on Wednesday (which should actually be a holiday, i.e. minimal email checking) so it’s a short work week.
In the lowRISC world, we’re working on a whitepaper describing some of the key features the initial revision will have. For me, that involves something of a literature survey on hardware security features. The whitepaper should be published in the middle of next month, in time for my talk on lowRISC at http://orconf.org
I’m putting an updated Raspbian SD card image featuring the new hardware-accelerated Epiphany browser through its paces before it goes live. Plus I’ve been taking on some tech reviewing lately, and have a few book chapters to review before I head off tomorrow.
Right now I’ve got an LSTM cloud-of-neurons RNN that’s just about ready to go, along with a generic DNA setup for genetic mixing of whatever. My testing program will be to attempt to evolve (via crossbreeding!) many populations of these networks to play Go. Neural networks traditionally do pretty badly at Go, so I’m curious to see how far my particular design can get. (C++11)
Downside: I don’t yet have the ability to actually score the Go board, which is pretty important for population ranking. I’ve looked up some algorithms, and the general consensus is “it’s hard”. At least I can get all eyes, liberties, and groups, and I used the “draw a Go board” part of the program to learn the basics of wxWidgets.
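For the curious, the group/liberty part really is just a flood fill. A minimal sketch (written in plain C rather than my actual C++11 classes, with a hypothetical 0 = empty / 1 = black / 2 = white board encoding) of counting one group’s liberties:

```c
#include <stdio.h>

#define SIZE 9

/* Flood-fill from (row, col): mark every stone in that group as seen,
 * and count each adjacent empty point exactly once as a liberty. */
static int count_liberties(int board[SIZE][SIZE], int row, int col,
                           int seen[SIZE][SIZE], int liberty[SIZE][SIZE])
{
    int color = board[row][col];
    if (color == 0 || seen[row][col])
        return 0;

    seen[row][col] = 1;
    int liberties = 0;
    const int dr[4] = { -1, 1, 0, 0 };
    const int dc[4] = { 0, 0, -1, 1 };

    for (int i = 0; i < 4; i++) {
        int r = row + dr[i], c = col + dc[i];
        if (r < 0 || r >= SIZE || c < 0 || c >= SIZE)
            continue;
        if (board[r][c] == 0) {
            if (!liberty[r][c]) {      /* count each empty point once */
                liberty[r][c] = 1;
                liberties++;
            }
        } else if (board[r][c] == color) {
            liberties += count_liberties(board, r, c, seen, liberty);
        }
    }
    return liberties;
}

int main(void)
{
    int board[SIZE][SIZE] = { 0 };
    int seen[SIZE][SIZE] = { 0 };
    int liberty[SIZE][SIZE] = { 0 };
    board[4][4] = 1;               /* a lone black stone in the middle */
    printf("liberties: %d\n", count_liberties(board, 4, 4, seen, liberty));
    return 0;                      /* prints: liberties: 4 */
}
```

Scoring proper (territory, dead groups, seki) is where the “it’s hard” consensus kicks in; this is just the bookkeeping underneath it.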
For work I am putting together a proposal for a tech mentoring programme. I’ve found myself getting less grumpy as I get older, and wanting to help people here develop their careers. Moreover, I want to make it simpler for the other senior people here to get started mentoring. (Me, for example.)
At home I’m continuing to learn about Clojure / ClojureScript. It suddenly clicked for me earlier in the week what an incredibly slick workflow you can get with Emacs and CIDER. You don’t even need to switch to the REPL buffer to run something: just hit “C-c C-c” to evaluate the current defn or def, then hit “C-c ,” to re-run the tests. Marvellous! (I should perhaps blog about this.)
More SAP OData stuff for my big client, wrapping up the very last little issues for another client, and starting to think about preparing talks for OSDC 2014. And I got to visit a mushroom farm on Monday, which demonstrated that even things which sound simple get very, very complicated when you scale them up.
Working on a Kafka deployment and getting it ready for production. Life on the bleeding edge of open source can be painful and risky. Last week, I discovered that the primary Ruby gem, Poseidon (0.0.4), is not compatible with the latest release of Kafka (0.8.1.1), and the latest on master is broken as well. So now I’m evaluating either rolling back to Kafka 0.8.1, which has problems staying up, or forking and moving forward to get back on track.
On the bright side, it’s been a great education, having recently switched from .NET to Ruby. After forking and patching, I can read and understand nearly all parts of a gem now as well as crank out new ones.
For work, I’m working on a payment system (an alternative to Stripe, Braintree, etc.). We’re really getting close to the end, and I’m very proud of the results so far. Apple Pay got me excited too.

I’m also developing a startup of my own that I call Quickjobbs, a freelancing community platform. It will meet the need of getting things done as fast as possible on one side and making money as fast as possible on the other.

The main difference from other websites is the time limit on jobs: the longest job can take 1 week and the shortest is around 1 to 6 hours. It will be an invite-only platform. I would love to hear your ideas on this.