This reminds me of Depenguinator
I’ve put together some code for building a FreeBSD disk image which will boot into memory, configure the network, set a root password, and enable SSH. This can be used to “depenguinate” a Linux box, without requiring any access beyond a network connection.
Hmm, I can’t seem to find the link (I think it was on Twitter somewhere), but the only person I’ve seen who has actually attempted to correlate them found no correlation between Trump mentioning a company positively or negatively in a tweet and short-term stock-price movements in either direction. Unless there’s better evidence of a relationship, running this bot is probably about as good as trading randomly…
The bot was featured in a Mashable article, “Google guy builds bot that earns money from Trump tweets”, which in turn referred to the bot author’s original Medium post, “This Machine Turns Trump Tweets into Planned Parenthood Donations”. Long story short:
But does it actually work? Let’s look at the numbers.
Check out the benchmark report. It’s essentially a test run that shows you how the algorithm performs on past tweets and market data. You’ll see that it sometimes misses a company or gets a sentiment wrong, but it also gets it right a lot. The trading strategy sometimes leaves you up and sometimes down.
Overall, the algorithm seems to succeed more often than not: The simulated fund has an annualized return of about 59% since inception. There are limits to the simulation and the underlying data, so take it all with a grain of salt.
Well you’d make money about 50% of the time, then, and that’s a better return than most investment strategies. ;)
I read an article yesterday that said there was an impact (for example, Lockheed Martin’s price dropped after Trump tweeted that the fighter jet’s price was too high), but the impact was relatively short-lived and all the companies recovered.
The impact of each tweet is expected to lessen over the next few months as investors work out that he is a scatter-brained moron who doesn’t actually know what he is talking about and won’t actually implement any of his claims. I.e., I wouldn’t bother buying up shares in concrete or wall-building companies.
Yeah, Lockheed did seem to go down in the short term, but when he tweeted negatively about Nordstrom, it went up. Two examples make for only a slightly better “data set” than one, but I’m not convinced there’s a real pattern there that I’d bet money on.
I’ve been a fan of baobab for a few years, but others I forget the name of going back to the late 90s. As @peter says, this is well-explored design space.
I used to use http://grandperspectiv.sourceforge.net/, and, well, du -h . | sort -h .
There is a chapter on Io in the first Seven Languages in Seven Weeks, and it is available online as an excerpt PDF.
I can very much recommend it; playing with Io was a lot of fun as an introduction to OOP concepts taken to the extreme.
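For those who haven’t tried Io: it’s prototype-based, so objects are made by cloning (delegating to) other objects rather than instantiating classes. As a rough analogy (this is JavaScript, not Io syntax), the same flavor looks like:

```javascript
// Prototype-based OOP, Io-style: no classes, objects are made by
// cloning (delegating to) other objects.
// (JavaScript analogy for Io's `clone` -- this is not Io syntax.)
const animal = {
  describe() { return "a " + this.sound + "-making thing"; }
};

const dog = Object.create(animal); // dog delegates to animal
dog.sound = "woof";

const puppy = Object.create(dog);  // puppy delegates to dog, then animal
console.log(puppy.describe());     // prints "a woof-making thing"
```

Everything, even control flow, is messages to prototypes in Io; the sketch above only captures the delegation part.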
Obligatory wat talk.
I don’t think that chart is accurate. According to the chart, [] should work as NaN does for “reflective”, but if you try it both in the game and in the JS console, it doesn’t. So I think it may be safe to say: don’t trust that table…
I think the chart is pretty accurate: according to the chart, [] != [], and indeed it is (the left and right sides being different Array instances). Only NaN works for “reflective” because it is the only value that differs from itself; i.e., if a = [], then a == a holds, while if a = NaN, it does not.
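This is easy to check in a JS console; a minimal sketch of the distinction:

```javascript
// Two distinct Array instances are never == (different references),
// so [] is not "reflective" in the chart's sense:
const a = [];
const b = [];
console.log(a == b); // false: two different objects
console.log(a == a); // true: same object compared with itself

// NaN is the only value that is not equal to itself:
const n = NaN;
console.log(n == n); // false
```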
Interesting take on sonification; still, I would find it quite difficult to just listen to this “sonification” and get a sense of the data being sonified.
I think a great example, where the sonification enriches the visuals but you can close your eyes and follow along the “narrative” of the data, is the Sonification of Income Inequality on the NYC Subway’s 2 Train.
I’ve been doing a fair amount of AVR work in Ada lately, as well; at some point, I’ll probably switch to using Spark. It’s nice to have strict typing and formal semantics on even these devices.
One of the (many) items on my (ever growing) todo list is to experiment with Ada, SPIN and model-checking embedded-systems software.
@angersock Spin is a model checker, i.e., you provide a formal specification of a program (usually a concurrent one) and it can prove certain properties (e.g., absence of deadlocks); it also features tools to help you extract [formal] models from C programs. I was speculating that Ada should be friendlier with respect to formal model extraction, and hence to use with Spin.
Whoa, it’s quite a trip :) It kind of reminded me of from-nand-to-tetris (to which it would be a nice follow-up).
Previous articles/threads on similar topics: It’s Time To Get Over That Stored Procedure Aversion You Have and Actually Using the Database. It’s worth noting that the database mentioned is always Postgres, which I guess is no accident: it is indeed related to its expressiveness (rich native data types) and extensibility.
Yeah. Well, MySQL explicitly doesn’t place a priority on this sort of capability. Oracle certainly does, but essentially nobody can afford Oracle.
I’d like to decide that for myself someday, but it doesn’t look likely I’ll ever be able to justify the expense. :)
Sigh. Completely unreadable on iPhone. Locked viewport with half the content off screen. Mobile site design never tested on a mobile device, considered harmful.
It wasn’t even wide enough for my Galaxy S4. I lost a few words off the side. I tried different pages and it got worse.
And you don’t even have to have a mobile device. Just use Chrome’s mobile device mode or Firefox Developer Edition.
As someone in charge of UI on a website, I am paranoid about these things and check on my devices as well as in Chrome.
I should probably digest this a bit further before posting, but… this article seems to take web tech as the canonical UI app-development context, and thus assumes that anything bringing that to other platforms is axiomatically good. Has it really come to this? Excuse me, I’m just going to go hide in a cave until something else happens, because I’m so tired of seeing the same square wheel over and over and over.
To me it looks like the OP is contrasting React with the traditional/vanilla way of developing web apps, as he writes:
“The web is fundamentally weird to build apps on: the mess of HTML and CSS get in the way of frameworks instead of helping them”
The question that he poses is how well the React way (i.e. declarative rendering of the UI based on some state) and tooling translate in building native apps, and the answer is “so far so good”.
This may well be the much rumored merger of web and native apps that’s been prophesied for quite some time. The two are having sex, and the result doesn’t appear too bad (to my eyes at least). They’re doing something Java tried but failed to do, and that’s an accomplishment that I’m happy to see.
kudos @shazow, it’s pretty awesome (and neat).
If someone happens to use a dark Solarized theme in the terminal (as I do), /theme mono helps with (somewhat hidden text in) system messages.
The data center operating system would not need to replace Linux or any other host operating systems we use in our data centers today. The data center operating system would provide a software stack on top of the host operating system. Continuing to use the host operating system to provide standard execution environments is critical to immediately supporting existing applications.
While this makes sense, I think it’s a very frightening future. Existing host OSes are so complicated, and distributed systems increase the complexity significantly. Building on top of that leaning tower is going to be fragile. It will work, after a lot of effort and probably some really nasty warts that people just accept.
As a comparison, James Hamilton’s re:Invent talk this year mentioned that Amazon rewrote its networking stack from the ground up, and it had better availability than the off-the-shelf alternatives. The reason: it only did what they needed it to do, so millions of lines of code could be tossed out. Millions of lines of code that just add bugs.
It’s frightening indeed. I was recently (re)reading the discussion of unit testing in Coders at Work, and in particular Bloch’s anecdote about the bug in the assembly implementation of lock/try-lock; I couldn’t help but think how deep the rabbit hole of our current software “stacks” can go, and it’s no wonder everything is broken (all the time).
The hope is that after “immediately supporting existing applications” we move to shaving off cruft; Amazon’s reimplementation of the AWS networking stack has shown that a somewhat evolutionary approach (towards less cruft) is, after all, possible (for organisations with the right resources and motivations).
[Slightly related] this reminded me that Darcs, written in Haskell, was one of the first free/open-source DVCSes, and it is also built around a rather compact kernel of concepts (its “patch theory”), although experimental rather than proven and battle-tested (as in git’s case).
Part of me says “Yes! This is the way I program!”
Part of me says, “Errr, Be Honest John, you are actively looking for an exit strategy from this anti-pattern…”
You see, the problem with Semantic Compression is the update problem.
Suppose you need to change something… too often you have to uncompress everything, update, and then recompress, often with a subtly different schema / dictionary.
Yes, trivial updates are great; trivial updates are easy to do if your code is semantically compressed as tight as an xz bundle. Non-trivial ones, the ones that first force you to decompress… Urrggh!
So what is the answer? I’m not sure, as I said I’m looking for the exit sign, the “This Way Out to a Better Paradigm” door.
A strong hint is relational algebra, the stuff C.J. Date writes about. Normalisation decomposes the data first, making it easier to deduplicate the important stuff: the facts.
The key, (a small pun there), is to know what your data model is. What are your primary keys, what are your foreign keys, and keep the data model of your whole program, not merely your RDBMS, in as high a normal form as your brain can understand, so integrity is built in, part of the basic schema of the program.
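As a toy sketch of what that looks like inside a program (hypothetical shop data, not from the post): each fact lives in exactly one place, and everything else reaches it through a key rather than holding a duplicate:

```javascript
// Hypothetical normalized in-memory model: each fact is stored once,
// and rows reference each other by key instead of duplicating data.
const customers = new Map([
  [1, { name: "Ada" }],
]);
const orders = new Map([
  [100, { customerId: 1, total: 42 }], // foreign key, not a copy of the name
]);

// Look the fact up through the key instead of reading a duplicate,
// so renaming the customer is a one-place change.
function customerNameForOrder(orderId) {
  const order = orders.get(orderId);
  return customers.get(order.customerId).name;
}
```

If the name were copied into every order, a rename would require hunting down every copy; with the key, integrity is part of the schema of the program.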
Most Object Oriented Programming designs are very 1970s-hierarchical-database in style; in fact, the higher the semantic compression as described in the post, the more duplication is pulled out, and pulled out, and pulled out, into a digraph of abstractions.
The problem is that this makes it stiff to change: when we suddenly find that some core “essential thing” we thought we understood isn’t what we thought… we have to run around, inspect each use, and understand what we need to change in each case.
In a static world of perfect understanding, Semantic Compression is the ideal… In a shifting world of imperfect understanding…. it’s fragile.
Regarding the update problem, I think the author of the post makes a great case for avoiding premature (semantic) compression and compressing only after the necessity arises from the domain; also, in the following post he deals with the challenges posed by adding new functionality to previously compressed code: when to compress further, when not to, and how not to lose “granularity”.
I find the concept of continuous granularity useful for avoiding the trap of over-compressing, with the risk, down the road, of having to “uncompress” to accommodate new changes. Synthesising (and simplifying; the post is more explicit): if by compressing you’re introducing some discontinuity in the usage of your API/code (things that were possible to express before but that you won’t be able to express after), then your compression is not quite right.
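A toy sketch of the idea (hypothetical JavaScript, not code from the post): write it plainly first, then compress only the part that actually repeats, and keep the uncompressed building block available so later changes don’t force a full “decompress”:

```javascript
// Hypothetical example of semantic compression done late.
// First write it plainly -- some duplication is fine at this stage:
function totalWithTax(prices) {
  let sum = 0;
  for (const p of prices) sum += p;
  return sum * 1.25;
}
function totalWithDiscount(prices) {
  let sum = 0;
  for (const p of prices) sum += p;
  return sum * 0.75;
}

// ...then compress only the part that demonstrably repeats, keeping
// the plain building block around for callers with new needs:
function total(prices) {
  let sum = 0;
  for (const p of prices) sum += p;
  return sum;
}
const withTax = (prices) => total(prices) * 1.25;
const withDiscount = (prices) => total(prices) * 0.75;
```

The compressed form still expresses everything the uncompressed form could (callers can always drop down to `total`), which is the continuity being argued for.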
To paraphrase K. Beck: Make it work. Make it right. Make it beautiful. Make it fast.
As for most OOP designs: as you pointed out, much of the incidental complexity lies in the frozen, “hierarchical nature” of the designs, where on one side you keep introducing specialisations and on the other you keep abstracting, until you end up with an AbstractFactoryServiceFactoryFactory.
There really seems to be a resurgence of ML; I found the first (ML-centric) part of the proglang course on Coursera [https://www.coursera.org/course/proglang] quite enjoyable.
From a quick skim, the approach taken in extending ML’s powerful static type system to a dynamic/open programming context is quite elegant [see Alice ML Quick Tour > packages and the following sections]: it builds the notion of packing/serialisation/distribution (of both code and data) on top of the ML module system.
Some context from https://angel.co/tlon :
“Tlon is the corporate vehicle of the open-source Urbit software stack (urbit.org). Urbit is a clean-slate reimplementation of the whole system software stack. On the bottom it’s a replacement of the lambda calculus, in the middle it’s a new functional programming language, on top it’s a purely functional network operating system in which address space is property. Tlon owns approximately half of the entire address-space on the Urbit network. The goal is to create a new layer over the Internet the way the Internet layered over the PSTN. This layer can also earn adoption by providing Internet services. On the Internet, your Urbit ship is a general-purpose personal cloud computer which replaces the 47 special-purpose cloud silos you’re currently using.”
Configuration files suck, but at least they have simple syntax and declarations. I don’t have to learn to evaluate a new programming language in my head to understand each program’s config.
(Also, @crocket, please post your opinion on the story in the comments rather than in the story itself.)
It cannot be overstated how nice it is to be able to, over the phone, tell somebody “Hey, go to this line, change this value to say this, restart and let me know what happens”.
Also, it’s nice to be able to consume configuration files from other languages. Having to embed a language interpreter to parse a config “file” years later is annoying at best.
If they remain that simple… Great.
But they don’t.
They gather features until https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule holds.
And then you have a Turing tarpit that is as complicated as a programming language, less formally specified, really poorly documented, and with no debugger support.
Linker scripts are a truly horrible example.
I don’t believe this is the universal rule you make it out to be. Sometimes they do stay simple. Personally I’ve never run into a single config that morphed into something that would have benefited from being written in a programming language. All the config files I’ve ever used were simple. Whereas it seems that in your experience you’ve only encountered configs that eventually become complex.
If you’d like a specific example, one that I have run into: Apache rewrite rules making use of the “skip next N rules” feature. If you have more than one of these it’s easier to write explicitly procedural code with if statements and blocks like you get in Varnish VCL or any general-purpose programming language.
(Varnish’s VCL is a somewhat nice middle point; it’s explicitly procedural but it doesn’t have any facility for loops except for one feature which, off the top of my head, I think restarts the current request from scratch and you’re somewhat discouraged from using.)
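As a sketch (with hypothetical rules, nothing to do with real Apache directives): the “skip next N rules” pattern is just a clumsy encoding of branching that a procedural language expresses directly:

```javascript
// Hypothetical request-routing logic. As chained RewriteRules this
// would need "skip the next N rules" flags; written procedurally,
// the control flow is visible at a glance.
function route(path, isMobile) {
  if (isMobile) {
    return "/m" + path;          // the branch a "skip" flag would encode
  }
  if (path.startsWith("/old/")) {
    return "/new/" + path.slice("/old/".length);
  }
  return path;                   // fall through: no rule matched
}
```

With two or more interacting rules this shape stays readable, whereas skip-flags force you to count rules by hand every time one is added or removed.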
I have met a couple of proprietary gnarlies.
The thin edge of the wedge is when “this config item + that config item must == the other”
After screwing that up a time or ten, someone adds an addition operator.
And maybe a loop.
More usually it’s some sort of implicit goto or include.
If somebody told me pam.conf was turing complete I wouldn’t be surprised. Appalled, but not surprised.
The only reason BPF isn’t is that that was a conscious design goal… but if you told me you had worked out an insanely cunning way to do it… I wouldn’t be (too) surprised.
Taking a trawl through /etc/ I find a fair number of config files are indeed scripts. Shell scripts.
CSS is a classic example that is now very, very close to Turing-complete. (Depending on how kindly you view things, it already is.)
I find that simple configuration files follow the Rule of least power, a good and sound [software] engineering principle.
That said, “everything should be made as simple as possible, but not simpler”, and the OP makes a fair point that configurations (and configuration languages) get more and more complex over time; but at that point your problem (and your domain) has moved from configuration to scripting, and you have new requirements to account for and new trade-offs to make.
In all likelihood, they would use one of a fairly small number of languages.
You could also still do what people do now, which is search for a solution on StackOverflow.
I use the Awesome window manager, and its config file is written in Lua. A couple of months ago an update I installed included a breaking change and, while I looked at StackOverflow, most of the resolution involved learning how Lua deals with null values, coerces (or doesn’t) strings into ints, and other language minutiae. None of it was “Hey, paste this in to replace what you have”; it was just programming time.
I feel like “whether it embeds Lua or a JSON parser” is somewhat orthogonal to “handles breaking changes gracefully over time”.