I’d rank mine somewhere between Excellent and Exceptional.
I work 40 hours a week but I’ve got freedom to decide where and at what times
the work happens.
If my phone rings at 7pm it’s totally fine if I decline and call back at 10am the next day.
I’m responsible for organising my own schedule.
To give a bit of context, I live in Germany and work at a software consultancy providing
high-level consulting, development, and training services.
It’s a medium-sized company, soon to be 20 years old.
I’m a relative newcomer; I’ve been there for slightly over a year now.
If tuition cost is the most important factor, consider studying in Europe. An American friend of mine earned both her BSc and MSc in CS in Germany. Graduating debt-free from a respected university, she couldn’t have been happier.
Wow! Thanks for this! I’d have to get my partner on board, but this seems totally feasible. Do you mind sharing which college in Germany your friend attended? How much German did she need to know?
One source of information is the German Academic Exchange Service’s list of English-language degree courses in Germany. Teaching bachelor’s degrees in English is a relatively recent thing in Germany, and it’s most common in areas like International Business, where there’s an argument that even German students might benefit from doing the degree in English. But it looks like there are a few universities offering CS bachelor’s degrees with English-language instruction. (English-language courses then get much more common at the master’s level, with almost 10x as many offerings.)
I’m putting the finishing touches on my talk about mutation testing for an upcoming conference. I’m still trying to wrap my head around the problem of non-terminating mutants. I can’t tell up front whether execution of a given piece of code will terminate. At the same time I can’t just let it run and kill it if I suspect it’s non-terminating, because I’m on the JVM.
From what I can see, there appears to be no reliable way to stop a thread on the JVM. Thread.stop is deprecated, and Thread.interrupt, if I understand correctly, can be ignored. If I were able to fork, I could run a potentially non-terminating computation in a child process and kill it once time runs out, but that’s not possible. I’m looking for a solution to the problem, trying not to think about bytecode manipulation yet. Ideas? Please let me know!
Gödel’s theorem (more directly, the halting problem) shows it’s impossible to do that in general. Formal methods people manage it on semi-complex, constrained programs with a lot of work. The high-assurance field cheated around it in embedded systems with two methods: watchdog timers, and preemptible, self-contained threads with deterministic scheduling.
The first is simple: set a timer for the maximum time the task should ever take under any circumstances, register an error handler for it, and let the timer count down as the function works. If the function hangs, the timer interrupts it and the error code deals with it. This is built into a lot of MCUs.
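Since the thread is about the JVM, here’s a minimal sketch of the watchdog idea in Java. The class and method names are my own invention; the key caveat (as noted elsewhere in the thread) is that `Future.cancel(true)` only delivers an interrupt, which the task must cooperate with.

```java
import java.util.concurrent.*;

// Sketch: give a task a fixed time budget; on timeout, fall back to a
// default and attempt cancellation. On the JVM we can't forcibly kill
// the thread -- cancel(true) only sends an interrupt.
public class Watchdog {
    public static <T> T runWithTimeout(Callable<T> task, long millis, T fallback) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<T> future = pool.submit(task);
        try {
            return future.get(millis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // best effort: delivers an interrupt
            return fallback;
        } catch (Exception e) {
            return fallback;
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // A task that would spin forever, but honours the interrupt flag.
        Callable<String> hung = () -> {
            while (!Thread.currentThread().isInterrupted()) { }
            return "finished";
        };
        System.out.println(runWithTimeout(hung, 200, "killed by watchdog"));
    }
}
```

If the hung task ignored the interrupt flag entirely, the watchdog would still return the fallback, but the runaway thread would keep burning CPU, which is exactly the JVM limitation being complained about here.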
The second, in a VM, would involve a monitoring thread, counters, and worker threads. The monitor either starts the threads or is informed when they start. This by itself can be useful to log. The counters are internal to functions or objects and only ever go up. The monitor keeps a copy that it compares at certain intervals to see whether progress is being made. Likewise, it might look at the start and run times of threads to see if they’ve run too long, like a watchdog. It kills whatever it needs to. Should be easy to implement, even if it’s non-standard.
The third route was one I used for fake real time, and one that sklogic on Hacker News independently came up with. The best Gödel cheat ever: a while loop that runs the code a few steps at a time with a counter always counting down. If it hits zero, the loop terminates with a notification and/or state in the logs. I have on my backlog the idea of a compiler pass or DSL that automatically generates code like this for a specific piece of existing code. Not doing it for now, but it might make a nice project.
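The counting-down loop above can be sketched in a few lines of Java. This is an illustrative toy, not the poster’s implementation: the computation is modelled as a step function iterated toward a target state, with a “fuel” budget that only counts down.

```java
import java.util.function.IntUnaryOperator;

// Sketch of the "fuel" cheat: drive a computation one step at a time
// with a budget that only decreases; when it hits zero, stop and
// report, regardless of whether the computation would have terminated.
public class Fuel {
    // Returns the final state, or null if the fuel ran out first.
    public static Integer runWithFuel(IntUnaryOperator step, int state,
                                      int done, int fuel) {
        while (state != done) {
            if (fuel-- == 0) {
                System.out.println("fuel exhausted; assuming non-termination");
                return null;
            }
            state = step.applyAsInt(state);
        }
        return state;
    }

    public static void main(String[] args) {
        // Terminates well within the budget.
        System.out.println(runWithFuel(n -> n + 1, 0, 10, 1000)); // prints 10
        // Never reaches the target; the fuel counter bails us out.
        System.out.println(runWithFuel(n -> n, 0, 10, 1000));
    }
}
```

The compiler-pass idea mentioned above amounts to generating this wrapper automatically around existing code, threading the fuel counter through its loops.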
Copilot is similar to what I’m describing. Such techniques might be modified for Java.
Can we all at least admit that it is kinda silly to have a VM where you can create threads but can’t do something as simple as “Hey, next time this thread is scheduled, just…um…don’t?”.
There are a lot of great academic reasons why that’s a bad thing: it makes concurrency hard and is prone to instability. But in spite of all that, it’s still a useful feature to have.
I’m 100% with you. I don’t do Java but assumed they had a kill function. That poster said they deprecated it or something. I can’t imagine whatever they’re preventing is as hard to analyze as integrating the workarounds I just posted. Even RTOSes, the easiest to analyze for reliability, let you kill things or give them no CPU. Truly WTF.
The potentially non-terminating code can’t periodically check in with a semaphore on the outer loop to know to halt?
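For what the question is suggesting, a minimal sketch of that cooperative check-in might look like this in Java (names are hypothetical; an `AtomicBoolean` stands in for the semaphore):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: instead of killing the thread, the outer loop checks a
// shared stop flag on every iteration -- cooperative cancellation,
// which the JVM does support reliably.
public class StopFlag {
    static final AtomicBoolean stop = new AtomicBoolean(false);

    static long spin() {
        long iterations = 0;
        while (!stop.get()) { // the periodic check-in
            iterations++;     // stand-in for a chunk of real work
        }
        return iterations;
    }

    public static void main(String[] args) throws Exception {
        Thread worker = new Thread(StopFlag::spin);
        worker.start();
        Thread.sleep(100);  // let it "hang" for a while
        stop.set(true);     // ask it to halt
        worker.join(1000);
        System.out.println(worker.isAlive() ? "still running" : "stopped");
    }
}
```

The catch for mutation testing is that an arbitrary mutant won’t contain such a check unless the tool injects one, which leads straight back to the source or bytecode instrumentation the original poster was hoping to avoid.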
Ask Lobste.rs: Any videos you particularly liked? If so, why?
I really liked “End-to-end encryption: Behind the scenes” – it was quite a well-coordinated performance. :-)
“Rusty Runtimes: Building Languages In Rust” by Aditya Siram.
Aditya implements a klambda-to-Rust compiler in Rust! Very cool. Also exciting because klambda is what powers the Shen lisp language.
Here’s one list of recommendations.
I felt too entrenched in my professional comfort zone, so I pushed myself
out of it.
After spending over three great years with my current employer, I filed my notice.
Last week I announced my availability and from this week on I’m starting
my search for the next challenge.
Fun times ahead! Best of luck!
Catching up with my reading list after spending way too much time with EU IV and
Factorio. Currently reading a book about Szymborska.
When it’s nice outside I grab my running shoes and head to a park nearby.
There I coach two friends of mine who decided to finish a half marathon this year.
On rainy days we give up being sporty and play go instead.
I never got into Europa Universalis, but I really like Victoria 2. It’s a very rewarding game, and it lets you develop inwards instead of conquering.
A week ago I asked my Twitter bubble why there is no mutation testing
library for Clojure. My tweet resulted in a storm of two direct reactions,
one being a like and the other a tongue-in-cheek reply. I realised that
there might be a niche.
I’ve experimented over the past couple of afternoons and today I’ve pushed
Mutant to GitHub. It’s wildly experimental and has a lot of rough
edges but it appears to work. I was able to find a couple of interesting
results when mutation-testing some popular Clojure libraries.
It’s amazing how simple it was to get the basic functionality up and
running. Building atop a homoiconic language with dynamic code reloading
already feels like cheating. Libraries such as rewrite-clj and
tools.namespace came in really handy, too.
Interestingly enough, with some extra bootstrapping Mutant can mutation-test itself.
Back to fighting false positives. Let’s see how far I’ll be able to get it.
chartd looks like a nice replacement for now-deprecated Image Charts by Google. Thanks for sharing!
How does NPM allow somebody to mass-unpublish their own libraries?
I think it’s great that they do. That’s how it should be. It’s the author’s library, not NPM’s.
Thanks for linking to the guy’s blog. Beautiful stuff, and love the “you broke my code but more power to you for standing up for your principles” comments.
Is that really how it should be? Authors of packages released to NPM, or for the sake of argument to any other centralised repository of libraries, publish their code under the terms of free or open source licenses. The act of releasing an artifact (note the key word: releasing) entitles the maintainers of a particular repository to redistribute the source code freely as long as they obey the terms of the particular license.
In this case, left-pad was published under the terms of the extremely permissive WTFPL. It uses somewhat coarse language to grant licensees virtually all rights, among them the right to redistribute freely.
The right of redistribution matters. Once a particular library becomes an important node in the dependency graph of a serious and growing ecosystem, it’s just too risky to let it disappear. This case demonstrates that aptly. I would argue that a healthy and responsible community should actively pursue the goal of keeping critical components of the ecosystem available. Otherwise stories like this one hit the front pages of all the tech news sites and the entire community suffers.
Mistakes? Sure, they do happen. But there are means to address them. Consider Clojars, the central repository of the Clojure ecosystem. You uploaded something accidentally? No worries, just open an issue, explain the situation, and tag it as delete-request. Maintainers will review your request and will act accordingly. The process is transparent and prevents situations like the calamity we’ve just witnessed.
Named after Gottlob Frege, a German philosopher, logician, and mathematician.
I’m happy that I’ve finally published a belated blog post announcing that
we’ve open sourced Rib, our Erlang-based demultiplexer for JSON HTTP APIs. Now it’s time to finish the talk for :clojureD and Clojure
Remote, where I’m going to share what we’ve learned when building Rib. Can’t wait!
Looks very nice, but instead of faking it I’d rather record my terminal with asciinema and embed the recording in my slides.
The nice thing about faking it is that you can control in real time the precise speed at which everything happens, including the responding commands. For example, if your commands open a text file with less or view, you can then pace yourself along that file as you see fit.
I’ve been thinking about this sort of thing in Ruby: the language makes a lot of tradeoffs for dynamic, polymorphic everything that is little used in practice.
Three years ago I attempted to introduce such an optimisation in MRI, the canonical Ruby VM. I looked for monomorphic call sites and had them compiled to custom opcodes invoking statically determined methods. I hoped to introduce some savings by removing expensive method lookups dominating MRI’s profiling graphs.
It all looked promising in theory, but in practice I didn’t achieve a lot, as documented in my MSc thesis. Some synthetic benchmarks showed promise, but more real world scenarios—e.g. simple Sinatra or Rails apps under load—weren’t affected by my changes. MRI’s built-in inline method caches were already doing a very good job. Moreover, I’m happy to see that people are still working on making them more efficient.
Great research! I would never have expected that practical result, that’s really interesting.
I can’t seem to find a PDF of this. I’m super interested, as my day job is working on JIT technology, currently being proven on Ruby, and we’re always on the lookout for reading material.
My opinion is that many people default to dynamic typing and move to static types over time, but we should probably have languages that make it easy to default to static types and provide a dynamic escape hatch. While I have not used it, I think C# offers this with its dynamic type.
This made me finally learn about two Haskell features that I’d heard about. Haskell has (at least) two different mechanisms for doing dynamic sorts of things. The first use case is when you have incorrect code in your program, but you’d like to compile it anyway. You know that you aren’t actually going to use that part of the code and so it shouldn’t matter. In that case, use the compiler flag -fdefer-type-errors. This gets you pretty much what you have in a dynamic language. Blatant type errors are only a problem if you actually try to run that code:
aString :: String
aString = 42
main :: IO ()
main = putStrLn "Hello world!"
This will compile with just a warning. The second case is when you want explicit dynamic types. Here you use the Dynamic type from the standard library:
import Data.Char (toUpper)
import Data.Dynamic

vals :: [Dynamic]
vals = [ toDyn (1 :: Int)
       , toDyn "cat"
       , toDyn (3.14159 :: Double)
       ]

useString :: [Dynamic] -> String
useString list =
  let str :: String
      str = fromDyn (list !! 1) ""
  in map toUpper str
And this behaves like this:
> useString vals
"CAT"
All in all, since the bulk of code is (monomorphically) typed, defaulting to static types with dynamic escape hatches like the above sounds nice!
It’s always interesting to me that Haskell has such a nice dynamic type which sees almost no use.
I wonder if you’re thinking of Simple and Effective Type Check Removal through Lazy Basic Block Versioning – Maxime Chevalier-Boisvert, Marc Feeley, ECOOP 2015.
Even if you weren’t, it’s a good read; Interesting technique for exploiting this observation in a novel JIT compilation strategy.
Edit: Updated link, which didn’t have PDF.