I guess since nobody has posted this yet, I'll take up the torch!
I didn’t post last week, Open Dylan-related or not.
I didn’t think I was going to post this week either. I’ve been going through a change in medications that has left me with a resting heart rate of about 120bpm and occasional spikes above that. It is a pretty terrible experience.
I’m also really much less than thrilled with what is going on in the world (Ferguson among many other things), which probably doesn’t help my heart rate. At any rate, at the end of the day, I haven’t felt like doing much outside of my day to day work. I’ve been told that my Twitter timeline reads like a list of war crimes and health complaints, and that’s probably fair.
For work, I’m still working on stuff related to the memory / heap profiler that will be contributed to Emscripten. (There’s actually an open pull request for it.) This has been pretty enjoyable and interesting work. It seems to be working out pretty well for its intended purpose for my client so far as well.
I’ve had a couple of comments from people saying that I should do a version that is not limited to / targeted at Emscripten and that would work on multiple platforms. This is a pretty interesting idea to me and something I’m actively considering. There’s a lot of room for interesting integrations and extensions as well, like getting a dump of the events inside a GC (be it Boehm or Memory Pool System or something custom). I’m not sure how this would work out as a commercial product though. It seems like a lot of people just don’t care about memory usage to this extent outside of the games industry and some mobile applications. I’m not sure that I’d be willing to invest the effort into something open source along these lines without some sort of funding or compensation.
As for Open Dylan … I’ve been writing some stuff about the type system for future blog posts and I did some quick experiments with the idea of allowing users to create their own kinds of types. I really wish that we had some people interested in type systems helping out, like jozefg, but somehow, that’s never worked out with anyone, which is too bad … as there’s a lot of novel and interesting work that could be done, especially by someone who was more versed in the theory than I am.
I work in the embedded world, and tracking and managing memory consumption is a big challenge. I think the market for software development tooling is also larger in embedded than in games or mobile.
I’m currently working on memory consumption reduction at $WORK, and at my previous job I briefly inherited memory budgeting. This was done entirely by hand in a spreadsheet, which had been expanded from the microcontroller days up to today’s embedded Linux stack with multiple processors and a shared memory architecture. Prior to my arrival, memory consumption wasn’t measured, and it was really becoming a problem. By the time I left that company, I had only managed to add per-process memory consumption to our automated post-mortem crash reports, leaving memory usage to be guessed from there.
At my current job, this is done mostly using static analysis tools, which also give limited information. Given that in both cases memory consumption was an issue (old job just dimensioned memory to be “just enough”, current job has devices that are in the field and receive software updates for decades), you might have some success selling to embedded software firms.
Basic memory measurements (ps, /proc/*/maps, …) all work at process-level granularity. Many embedded projects are still “one process to rule them all” (really, my current project has a binary that breaks the 1GB limit when you keep debug symbols; we have workarounds for GNU ld bugs that only occur with unrealistically large binaries). So knowing that process X has a heap of 504MB is really not very useful.
I’m not saying that the companies I work(ed) for would buy these tools, but at least there’s some use for them. Getting companies to buy these things is another challenge :) I know we spend a lot of time and manpower on getting Valgrind to work on our various platforms, and the performance impact is so large that you can’t use it on a running system, only on stand-alone tests. Since Valgrind is mostly used to find memory leaks, a lower-overhead tool that just dispatches information to a developer’s machine would be very useful. The developer’s workstation could then highlight suspicious or growing allocations, allocations that are never released, etc.
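A minimal sketch of the low-overhead idea, written in Go for brevity (an embedded C tool would look different): instead of instrumenting every allocation the way Valgrind does, an agent can periodically sample the runtime’s own counters and ship snapshots off-box. The `Snapshot` type and field choice here are my own assumptions, not any existing tool’s format.

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// Snapshot is a hypothetical record a low-overhead agent might ship to a
// developer's workstation instead of tracing every single allocation.
type Snapshot struct {
	When      time.Time
	HeapAlloc uint64 // bytes currently allocated on the heap
	NumGC     uint32 // completed GC cycles
}

// sample reads the runtime's own counters; this is far cheaper than
// valgrind-style instrumentation, at the cost of per-allocation detail.
func sample() Snapshot {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return Snapshot{When: time.Now(), HeapAlloc: m.HeapAlloc, NumGC: m.NumGC}
}

func main() {
	s := sample()
	// A real tool would serialize this and send it over the network;
	// here we just print it.
	fmt.Printf("heap=%d bytes, gc=%d cycles\n", s.HeapAlloc, s.NumGC)
}
```

The workstation side would then diff successive snapshots to highlight growth over time.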
I have so many things to say in reply to this that it would end up being as long as your comment or longer … I’ll save it for a blog post perhaps!
Your honesty and openness in recording what you’re up to week to week is a good benchmark to me of what is possible for someone like me who aspires to practice this craft called programming at a higher level :-) Your posts are appreciated. What you describe you’re up to in your free time is more than I would usually get done in a week, work and free time inclusive. Among the feelings of drudgery at certain jobs I’ve had (PHP-land ugh), it really is an indicator to me that the promised land is out there, and people can work on cool things :-) I hope you feel better soon.
I always appreciate your weekly posts, thank you for posting this week!
This week I’ll be working on chapter 10 of my PureScript Book which will deal with the foreign function interface and dealing with untyped data. I also plan to work on a wrapper for the yargs CLI library.
At work, I’ll be implementing a small web service in Haskell using Scotty.
I’ve been reading papers on various DHTs and implementing one in Go for the IPFS project that I’ve been working on for a little while. I’m also trying to build a better testbed for stress-testing my code at the moment.
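For readers unfamiliar with DHTs: Kademlia-style designs (the family IPFS draws on) order peers by the XOR of their node IDs with a target key, which behaves like a distance metric. A tiny sketch of that core idea, assuming fixed-length IDs:

```go
package main

import (
	"bytes"
	"fmt"
)

// xorDistance computes the Kademlia-style XOR metric between two
// equal-length node IDs; closer peers share a longer common prefix.
func xorDistance(a, b []byte) []byte {
	d := make([]byte, len(a))
	for i := range a {
		d[i] = a[i] ^ b[i]
	}
	return d
}

// closer reports whether id x is closer to target than id y under the
// XOR metric (big-endian comparison of the two distances).
func closer(target, x, y []byte) bool {
	return bytes.Compare(xorDistance(target, x), xorDistance(target, y)) < 0
}

func main() {
	target := []byte{0x0f}
	// 0x0e differs from 0x0f only in the last bit, so it is much closer
	// than 0xf0, which differs in the high bits.
	fmt.Println(closer(target, []byte{0x0e}, []byte{0xf0}))
}
```

Routing tables, bucket refresh, and the iterative lookup are where the real work (and the papers) come in; the metric itself is this small.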
I just finished up a W3C-compliant CSS3 parser & lexer in pure Go. There are some docs in there right now but I’m going to work on improving them and adding usage examples. Then I’m going to start on a pure Go LESS port. Ultimately, I want to build a zero-dependency asset pipeline tool so people don’t have to download and install node.js just to use tools like LESS.
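As a flavor of what the core loop of such a lexer looks like, here is a heavily simplified sketch (the real CSS Syntax Module defines many more token types: numbers, strings, at-keywords, comments, and so on; this toy version is not the linked project’s code):

```go
package main

import (
	"fmt"
	"unicode"
)

// Token is a toy token for a drastically simplified CSS lexer.
type Token struct {
	Kind  string // "ident" or "delim"
	Value string
}

// lex splits src into identifiers and single-rune delimiters,
// skipping whitespace; a real CSS tokenizer handles far more cases.
func lex(src string) []Token {
	var toks []Token
	runes := []rune(src)
	for i := 0; i < len(runes); {
		r := runes[i]
		switch {
		case unicode.IsSpace(r):
			i++
		case unicode.IsLetter(r) || r == '-':
			j := i
			for j < len(runes) && (unicode.IsLetter(runes[j]) || unicode.IsDigit(runes[j]) || runes[j] == '-') {
				j++
			}
			toks = append(toks, Token{"ident", string(runes[i:j])})
			i = j
		default:
			toks = append(toks, Token{"delim", string(r)})
			i++
		}
	}
	return toks
}

func main() {
	fmt.Println(lex("body { color: red; }"))
}
```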
I’m giving a Papers We Love talk in NYC tomorrow night: https://www.meetup.com/papers-we-love/events/184704082/
I just finished a hopefully-correct implementation of the algorithms described in the paper, here: https://github.com/leifwalsh/rmq
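For context on range-minimum queries (I don’t know which variants the linked repo implements, so this is a generic illustration): the classic sparse-table structure gives O(1) queries after O(n log n) preprocessing by precomputing minimums over power-of-two-length blocks.

```go
package main

import "fmt"

// SparseTable answers range-minimum queries in O(1) after O(n log n)
// preprocessing: table[k][i] holds the min of a[i : i+2^k].
type SparseTable struct {
	table [][]int
}

func NewSparseTable(a []int) *SparseTable {
	n := len(a)
	levels := 1
	for 1<<levels <= n {
		levels++
	}
	t := make([][]int, levels)
	t[0] = append([]int(nil), a...)
	for k := 1; k < levels; k++ {
		width := n - (1 << k) + 1
		t[k] = make([]int, width)
		for i := 0; i < width; i++ {
			t[k][i] = minInt(t[k-1][i], t[k-1][i+(1<<(k-1))])
		}
	}
	return &SparseTable{t}
}

// Min returns the minimum of a[l:r] (half-open, r > l) by overlapping
// two power-of-two blocks that together cover the range.
func (s *SparseTable) Min(l, r int) int {
	k := 0
	for 1<<(k+1) <= r-l {
		k++
	}
	return minInt(s.table[k][l], s.table[k][r-(1<<k)])
}

func minInt(x, y int) int {
	if x < y {
		return x
	}
	return y
}

func main() {
	st := NewSparseTable([]int{5, 2, 4, 7, 1, 3})
	fmt.Println(st.Min(0, 4), st.Min(3, 6)) // mins of [5 2 4 7] and [7 1 3]
}
```

The more advanced papers get the preprocessing down to O(n) with block decomposition; the sparse table is the baseline they improve on.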
Looking forward to your talk, Leif! Everyone in NYC should come!
At $WORK I have mostly finished a bridge that translates between Wincor-Nixdorf Serial Protocol the card terminals talk and ZeroMQ + JSON our webapp developers are willing to use. With luck, some thousand students or so (at the start of new semester) will be able to pay their library fees with contact-less cards. :-)
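The shape of such a bridge, sketched in Go: I don’t know the actual Wincor-Nixdorf frame format, so the STX/ETX framing with a field separator below is a hypothetical stand-in (common in serial card-terminal links), and the "card"/"amount" field names are invented for illustration.

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"strings"
)

const (
	stx = '\x02' // start of text
	etx = '\x03' // end of text
	fs  = '\x1c' // field separator
)

// decodeFrame parses a hypothetical STX <field FS field> ETX frame.
// The real terminal protocol will differ; this only shows the shape
// of the serial-to-JSON translation step.
func decodeFrame(raw []byte) (map[string]string, error) {
	if len(raw) < 2 || raw[0] != stx || raw[len(raw)-1] != etx {
		return nil, errors.New("malformed frame")
	}
	fields := strings.Split(string(raw[1:len(raw)-1]), string(fs))
	if len(fields) != 2 {
		return nil, errors.New("unexpected field count")
	}
	return map[string]string{"card": fields[0], "amount": fields[1]}, nil
}

func main() {
	msg, err := decodeFrame([]byte("\x024711\x1c12.50\x03"))
	if err != nil {
		panic(err)
	}
	j, _ := json.Marshal(msg)
	// In the bridge, this JSON would be published on the ZeroMQ socket.
	fmt.Println(string(j))
}
```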
Last week was pretty productive, and I hope to continue that into this week as well.
This week I’ll be working on shh at work, and ensuring that the outputter to Librato is robust enough for our purposes.
At home, I hope to knock out some backlog items for grapt, and in addition, start hacking again on some programming language ideas.
I mentioned last week that I was blogging about explaining dependent types for Haskellers. That post has now grown to cover enough material that it’s really 4 posts, so I’m starting to split it apart and flesh out the different sections. I’m also trying to keep the notation/syntax either explained in the post or common knowledge to any Haskeller. I’m now starting to realize exactly how many weird symbols I’ve jammed into my head these last couple of years.
On the compilers side hasquito is running and doing cool things. The language is fully functional but currently recomputes every value rather than “forcing and caching” as one would like. That should be fixed this week.
Very much looking forward to the dependent types series!
For $work I just started working on the meaty part of a re-implementation of an airline flight schedule combiner in Go. This thing has to handle messages in ancient and ill-specified protocols, so…fun. For Open Dylan I’m on my second round of trying to add multi-line strings to the compiler, after refactoring the lexer a bit to enable some testing. I normally don’t hack the compiler so this is a bit different for me. Planning to use Python-like triple-double-quote syntax.
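The core of lexing a triple-quoted literal is pleasantly small. A sketch in Go (this is not the Open Dylan lexer, just an illustration of the scan; a real implementation also tracks line numbers and reports errors at EOF):

```go
package main

import (
	"fmt"
	"strings"
)

// lexTripleString extracts the body of a Python-style """...""" literal
// at the start of src, returning the body, the unconsumed remainder,
// and whether a complete literal was found.
func lexTripleString(src string) (body string, rest string, ok bool) {
	const delim = `"""`
	if !strings.HasPrefix(src, delim) {
		return "", src, false
	}
	end := strings.Index(src[len(delim):], delim)
	if end < 0 {
		return "", src, false // unterminated literal
	}
	body = src[len(delim) : len(delim)+end]
	rest = src[len(delim)+end+len(delim):]
	return body, rest, true
}

func main() {
	body, rest, ok := lexTripleString("\"\"\"line one\nline two\"\"\" + x")
	fmt.Printf("%q %q %v\n", body, rest, ok)
}
```

Note the body may contain bare single and double quotes and newlines, which is the whole appeal of the syntax.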
What would be an alternative to the triple-double-quote? It looks good, but I’ve heard it’s a nightmare to syntax highlight and provide auto-complete for. Admittedly I haven’t looked into it myself, but anecdotally IntelliJ’s handling of Scala’s """ hasn’t been great.
Good point about editor support. It seems like there should be some general support for it out there due to Python though.
Other alternatives I thought about are #s and #r, since Dylan has #t, #f, and #"symbol" syntax already. That is, #s"…" and #s'…' for strings with interpreted escape characters like \n, and #r"…" and #r'…' for "raw" strings with no interpretation. The latter is particularly useful for regular expressions.
The bottom line, I think, is that if you want to put a big blob of text in a constant there’s a good chance it has both a double-quote and a single-quote in it somewhere, and that’s why """ is nice.
There’s some discussion here…
Storm 0.9.2 + Kafka 0.7 + Cassandra 2.1.0-rc5 + Elasticsearch 1.3 cluster is now up-and-running in production, running against around 3,000 web traffic requests per second. Time to test it in more detail and make it fast!
are these 3000 real requests per second? or is that just in benchmarks?
real requests per second
why deploy kafka 0.7 instead of 0.8.1?
we want to upgrade to 0.8.1, but we currently use a Python driver we wrote for 0.7 and are in the midst of merging its functionality with an open source driver for 0.8.1
I’ve been watering a mini-lobsters for my office. With much smaller people and a pre-existing community, a little bit of pruning and grafting is needed:
Smaller people? Your office employs child labour? :-)
Nothing exciting; a front-end refactor, some schema tidying, and in my free time … dotfiles. There are times when my job is SO GLAMOROUS.
Why does SQL make it such a pain in the ass to deal with composite foreign keys? Oh, because SQL HATE ME. Sigh.
don’t worry, SQL hates everyone equally.
How I wish I had some better way to store and query very large volumes of structured data.
EDIT to make sense.
This week, I finished the first draft of an idea for a proposal deliberation system that is unlike Loomio and many others in that it includes randomized moderator selection, testable hypotheses to ascertain achievement (i.e., Agile and Lean mechanics), and resource allocation.
The documentation was written using ConTeXt. The project is hosted on BitBucket. The web pages in the document were developed using XML, browser-based client-side XSLT, and SVG elements.
Would greatly appreciate your comments.
I’m working on a Ruby app server that’s 2x-6x faster than Puma, Unicorn, and Phusion Passenger. Lots of C++: profiling, optimizing malloc calls, optimizing system call usage, zero-copy techniques, CPU cache and pipeline optimizations, etc.
This week is a Rust Workweek at Mozilla, so tons of meetings. The one I’m in right now is about our versioning strategy going forward.
I’m also going to be going to a conference, Madison Ruby, and keeping working on Rust docs. Nothing super special this week.
I’ve been working on Hex yet again, putting the finishing touches on pull requests to make it more secure, as well as more stable and usable in self-run environments.
I’m glad to hear about progress! I’m not familiar with Erlang, but how do you handle versioning?
A few things that currently occupy me:
Looking forward to seeing more blog posts!
I’m in the final stretch before submitting my first independent iOS app. It lets you make mosaics by hand and uses OpenGL lighting effects to make the pieces shimmer as your device tilts, giving a pretty cool, immersive effect (if I do say so myself).
For a long time I’ve badly wanted to do more “human” apps (a few months ago I left my steady, boring paycheck) and there’s no way to learn a new development environment like building a big project. So, this project has been a win-win: I’ve really wanted to make this app and I’ve learned a ton while doing it. Just gotta get it to 1.0 now…
In my spare time I’ve been working through Seven Concurrency Models in Seven Weeks. I’m looking forward to the Elixir chapter this week.
$WORK: Porting an iOS app to Android for a startup I joined in July.
I’ve gotten my basic Raft implementation working in test scenarios. Now I’m working on a TCP transport for it so I can do simple cross-process tests.
I feel bad always talking about the same thing, but that might be my impostor syndrome. I’ve been dealing with some significant grief, so I don’t have much to report.
Last week, I finished converting hython (https://github.com/mattgreen/hython) to use Alex and Happy. What a pain! Though it bought me enormous expressivity at the parser level, I can’t decide if I prefer sweating Parsec’s proper ordering or just using a more traditional approach. I nearly burnt out on it, so I have been reading up on ways to simplify the interpreter using CPS (and bugging tel on SO). I think ContT will do everything I want here, so I’m going to look at moving to that. I’d like to implement composite data types soon (tuples, lists, dictionaries).
I also converted the README into a giant to-do list. I can’t decide if that’s motivating or demotivating. Depends on the day. :)
I’m starting grad school at UC Berkeley.
I’ll be working in the Aspire Lab, makers of Chisel and RISC-V.
That’s quite exciting! RISC-V seems like a neat project. I am especially excited for the lowRISC implementation to appear.
Good luck & have fun. I’m sure they give you boatloads of advice during orientation, but in case no one mentions the book “Getting what you came for”, I found it useful. (Although, definitely ignore its technology recommendations).
And for anyone not at Berkeley, see David Patterson’s classic talk “How to Have a Bad Career in Research/Academia” – I assume zhemao has already seen it :)
I had not seen this talk before. Thanks for sharing.
On Monday, I added Vagrant support for building Fire★ (github). I also updated all README files to Markdown.
You can now run “vagrant up” and get a whole desktop with all the dependencies needed to build and test. It takes a while to provision, so I may decide to bake a box myself.
I’m trying to get Sublime Text’s Markdown syntax highlighter to work with Markdown + LaTeX (and thus support even more of Pandoc-flavored Markdown).
Here is the current code: https://gist.github.com/Mgccl/195ce33124f384a2f4e4
As usual I’m writing nix pills. In addition, I’m setting up a nix pastebin similar to sprunge.