I have to agree with Helge Bahmann’s comment on the blog. inotify isn’t designed to solve the problem the author is trying to solve, and there are good reasons why it can’t behave the way he wants it to.
If the author wants to see what a process is doing with the file system, he should use something like strace and monitor the fs-related calls. inotify doesn’t even tell you which process triggered the fs change, so it doesn’t solve his problem.
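For what it’s worth, the event inotify hands back contains no process information at all. A minimal Python sketch of the Linux inotify_event wire format makes that concrete (the sample buffer below is hand-crafted for illustration, not read from a real kernel):

```python
import struct

# Layout of struct inotify_event from <sys/inotify.h>:
#   int wd; uint32_t mask; uint32_t cookie; uint32_t len; char name[];
# Note there is no PID field anywhere: the kernel does not report
# which process caused the event.
EVENT_HEADER = "iIII"  # wd, mask, cookie, len
HEADER_SIZE = struct.calcsize(EVENT_HEADER)  # 16 bytes

IN_CREATE = 0x00000100  # mask bit: file created in a watched directory

def parse_event(buf, offset=0):
    """Parse one inotify_event out of a raw read() buffer."""
    wd, mask, cookie, length = struct.unpack_from(EVENT_HEADER, buf, offset)
    name = buf[offset + HEADER_SIZE : offset + HEADER_SIZE + length]
    return {
        "wd": wd,          # watch descriptor
        "mask": mask,      # event bits (IN_CREATE, IN_MODIFY, ...)
        "cookie": cookie,  # links rename pairs together
        "name": name.rstrip(b"\0").decode(),  # filename, NUL-padded
    }

# A hand-crafted example buffer, shaped as the kernel would deliver it:
sample = struct.pack(EVENT_HEADER, 1, IN_CREATE, 0, 16) + b"newfile.txt".ljust(16, b"\0")
event = parse_event(sample)
```

The fields are watch descriptor, mask, cookie, and name; nothing identifies the process that touched the file, which is why strace (or the audit subsystem) is the right tool for that question.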
A much better example, IMO, are these Markov-generated Tumblr posts, trained on Puppet documentation and a collection of H.P. Lovecraft stories.
Poetic.
And this one looks like one of those quotes that becomes historical, even though almost no one who uses it knows what it means:
“Any reasonable number of resources can be specified in a way I can never hope to depict.”
I like King James Programming. Example: Exercise 3.63 addresses why we want a local variable rather than a simple map as in the days of Herod the king
I recently finished Karel Čapek’s “War With the Newts”, which I highly recommend. Fantastic sci-fi satire.
Now I’m in reading limbo. I’ve read a few chapters into several books that seem interesting (on fluid simulation, computer vision, knot theory, computational geometry), but haven’t had one really capture my interest yet. At some point I’ll just have to pick one.
If you’re into early 20th century political sci-fi satire, may I recommend The Clockwork Man by E. V. Odle? I found it surprisingly modern in its prose style & pretty amusing. (If you don’t feel like buying it & don’t mind reading online, hilobrow.com serialized it about a decade ago as a prelude to printing their new edition.)
Thanks! I’m going to start reading it tonight.
I couldn’t find it from their front page, but the hilobrow.com version is still online http://hilobrow.com/2013/03/20/the-clockwork-man-1/.
Saturday I need to tune up my bikes, and then I’ll probably go to the coffee shop and work on some Lisp projects or go through the huge backlog of photos that I need to touch up, categorize, and upload.
Sunday I’m biking with a friend in the morning, and then I’ll probably be back at the photos or Lisp.
I’d like to start contributing to some larger Lisp projects (SBCL and StumpWM, or maybe porting common-qt to support Qt5), so I’ve been familiarizing myself with those codebases and looking through their issue trackers for problems I can fix. So far I’ve tinkered with them, but haven’t contributed anything back.
I am mildly dismayed by the amount of active participation it takes to enhance browser privacy. I’m not suggesting that all these extensions should be baked into Firefox; it’s more a commentary on how out of control privacy has gotten on the web today.
Unfortunately, all of the browser developers (Google, Mozilla, Apple, and Microsoft) have a vested interest in not improving privacy by default.
There’s just too much money to be made off the people who don’t know any better.
I’m not sure what to think of this.
On one hand, it’s an interesting problem and a neat idea, and I can think of some cool ways to enhance the output for specific languages (for example using plists in Common Lisp to store hints for the natural language generation, or the same using annotations in Python).
But on the other hand, I’m a little concerned about the lack of examples, and I don’t understand the criteria they use to evaluate the quality of the generated text. The two examples in the paper are nearly useless, IMO. One step removed from comments like “Add 5 to x”.
To be useful, the natural language explanations need to be at a higher level of abstraction, and without “general AI” there’s no way they can do that using only the syntax tree. So the problem circles back to the developer needing to add annotations and/or comments, which is right back where we are now.
Fluid Simulation for Computer Graphics does a much better job covering fluid simulation, without the lame, “I don’t know what this is doing” crap.
The book doesn’t include source code, but it explains the math behind fluid sim really well.
A funny thing I realized a while back is that, for me, there’s very little difference between Linux and the BSDs, and to some extent even OSX.
Most of the software I use on a daily basis (Emacs, StumpWM, SBCL, Chromium, rxvt, zsh, etc.) is virtually identical between systems. There are some nuances, like GNU vs BSD userland tools, but for the most part it doesn’t affect me much.
OSX has a different UI, but even there most of my time is spent in Emacs, Chromium, or the terminal (with zsh), so it ends up being nearly the same, too.
I feel that’s only true for the time spent coding? As soon as you have to deploy or manage a service/system, things get interesting. Though I guess it’s less of a problem for any kind of code that only runs locally?
Lobsters has me all hyped for OpenBSD, and there’s openbsd.amsterdam now offering VMs, which is really interesting. But at the same time, I don’t want to spend a lot of time on the maintenance of private little side-projects. I currently take a single VM from a generic provider, run Debian on it, and set it to update and reboot automatically. If I can get to that point with OpenBSD, I’d be even more interested in trying. (But I’ve only spent a little time researching so far.)
I feel that’s only true for the time spent coding? As soon as you have to deploy or manage a service/system, things get interesting. Though I guess it’s less of a problem for any kind of code that only runs locally?
Yeah, I suppose that’s true, but almost everything I write lately is just loaded into a Lisp image and launched from the REPL, so it’s largely the same everywhere.
I think administration is definitely where there are the biggest user noticeable differences between all the systems.
Unix is Unix. I no longer really draw distinctions, because they are largely meaningless for the level at which I interact with systems.
I’d never heard of “snap” before, and I’m not sure I understand what it’s trying to do. Is it just a lazy way to publish closed source software on Linux? I really don’t see why I’d choose a snap over using my regular package manager.
Yes and no. Snaps are the latest of many attempts to have a single, uniform package format across a lot of different Linux distributions. They can be installed on *buntu, Debian, Arch, Fedora, openSUSE, …. They also offer a built-in sandbox.
You are correct that a lot of proprietary software uses it to distribute packages because it saves some time, but that’s not the main goal of the format.
Snaps are the latest of many attempts to have a single, uniform package format across a lot of different Linux distributions.
Unfortunately, there are at least three competing standards (Snap, Flatpak, AppImage). However, Flatpak seems to be supported in more distributions than Snap:
https://kamikazow.wordpress.com/2018/06/08/adoption-of-flatpak-vs-snap-2018-edition/
Regardless of what one thinks of such formats, they have already led to interesting phenomena. Flatpak is, for instance, quite popular among pirates who use it to pack Windows games with a custom Wine configuration:
Then there is the Winepak project, which packages redistributable Windows software:
There are some strange things about snap: https://medium.com/@acam/im-afraid-for-the-future-of-ubuntu-2f41796073b2
This is really a non-issue as far as I’m concerned.
Browsers (either standalone or with plugins) let users turn off images, turn off Javascript, override or ignore stylesheets, block web fonts, block video/flash, and block advertisements and tracking. Users can opt-out of almost any part of the web if it bothers them.
On top of that, nobody’s twisting anybody’s arm to visit “heavy” sites like CNN. If CNN loads too much crap, visit a lighter site. They probably won’t be as biased as CNN, either.
Nobody pays attention to these rants because at the end of the day they’re just some random people stating their arbitrary opinions. Rewind 10 or 15 or 20 years and Flash was killing the web, or Javascript, or CSS, or the img tag, or table based layouts, or whatever.
Rewind 10 or 15 or 20 years and Flash was killing the web, or Javascript, or CSS, or the img tag, or table based layouts, or whatever
Flash and table based layouts really were, and to the extent that you still see them, are either hostile or opaque to people who require something like a screen reader to use a website. Abuse of javascript or images excludes people with low-end hardware. Sure, you can disable these things, but it’s all too common that there is no functional fallback (apparently I can’t even vote or reply here without javascript being on).
Are these things “killing the web” in the sense that the web is going to stop existing as a result? Of course not, but the fact that they don’t render the web totally unusable is not a valid defense of abuses of these practices.
I wouldn’t call any of those things “abuses”, though.
Maybe it all boils down to where the line is drawn between supported hardware and hardware too old to use on the modern web, and everybody will have different opinions. Should I still be able to browse the web on my old 100 MHz Pentium with 8 MB of RAM? I could in 1996…
Should I still be able to browse the web on my old 100 MHz Pentium with 8 MB of RAM?
To view similar information? Absolutely. If what I learn after viewing a web page hasn’t changed, then neither should the requirements to view it. If a 3D visualization helps me learn fluid dynamics, ok, bring it on, but if it’s page of Cicero quotes, let’s stick with the text, shall we?
I wouldn’t call any of those things “abuses”, though.
I think table based layouts are really pretty uncontroversially an abuse. The spec explicitly forbids it.
The rest are tradeoffs; they’re not wrong 100% of the time. If you wanted to make YouTube in 2005, presumably you had to use Flash, and people didn’t criticize that; it was the corporate website that required Flash for no apparent reason that drew fire. The question that needs to be asked is whether the cost is worth the benefit. The reason people like to call out news sites is that they haven’t really seen meaningfully new features in two decades (they’re still primarily textual content, presented with pretty similar style, maybe with images and hyperlinks: all things that 90s hardware could handle just fine), but somehow the basic experience requires 10? 20? 100 times the resources? What did we buy with all that bandwidth and CPU time? Nothing except user-hostile advertising, as far as I can tell.
If you wanted to make YouTube in 2005, presumably you had to use Flash, and people didn’t criticize that
At the time (ok, 2007, same era) I had a browser extension that let people view YouTube without flash by swapping the flash embed for a direct video embed. Was faster and cleaner than the flash-based UI.
Maybe you would like this one https://github.com/thisdotvoid/youtube-classic-extension
I’d say text-as-images and text-as-Flash from the pre-webfont era are abuses too.
On top of that, nobody’s twisting anybody’s arm to visit “heavy” sites like CNN. If CNN loads too much crap, visit a lighter site.
Or just use http://lite.cnn.io
nobody’s twisting anybody’s arm to visit “heavy” sites like CNN
Exactly. It’s not a “web developers are making the web bloated” problem, it’s a “news organizations are desperate to make money and are convinced that personalized advertising and tons of statistics (Big Data!!) will help them” problem.
Lobsters is light, HN, MetaFilter, Reddit, GitHub, GitLab, personal sites/blogs, various wikis, forums, issue trackers, control panels… Most of the stuff I use is really not bloated.
If you’re reading general world news all day… stop :)
I laughed when I saw this as historical. I was working on a Z Series security project last year. They’re everywhere.
Well, MVS z/OS M-O-U-S-E does have an unbroken lineage (and backwards compatibility) to the 1960s, so it’s living history. It’s also cloistered, so it doesn’t influence outside design and outside design doesn’t influence it. Classic IBM, in other words.
For some definitions of “everywhere”. They’re not in the cloud-startup-macbook-webdev-mobile world, which is where I’d expect most lobsters/hn/reddit/etc. users to be.
Off topic, but has there ever been a poll about this? I’m not a web or mobile developer, but they do seem to be the majority on most tech discussion sites. Maybe not so much on Lobsters but definitely on HN.
I just got back yesterday from 8 days of bike touring, so I’m relaxing and getting used to being at home.
I’ve fallen dangerously behind in the Prolog class I’m enrolled in, so most of the weekend will be spent catching up on that.
I’m also hoping to finish reading Karel Čapek’s “The War With the Newts”.
Outside of eBikes, which I don’t know much about, the rest of the bicycle world scores pretty well on repairability, in my experience.
I built up my current bike from components, and I’ve done all of my own maintenance for years now. Although there are a lot of standards (some official, some not), most companies are really good about saying which ones they’re using and what they’re compatible with. And for pre-built assemblies, like derailleurs, hubs, and freewheels, most of the better manufacturers will have PDFs available on their website showing how to disassemble, clean, and reassemble them.
SRAM and Shimano are “okay” on this, but the mid-to-high end companies, like Paul, White Industries, Industry Nine, etc. are usually really good.
I feel like APL took terseness too far. Every code snippet looks like somebody was playing “code golf”. It may be great for demoing the language, but won’t it be a nightmare for real code?
And I could technically achieve (nearly) the same thing in Lisp by giving my functions and variables names like “ῴ”, but it’s easier to understand when it’s spelled out like “solution-matrix”. Just because everything can be abbreviated to a single symbol doesn’t mean it should be.
Also, as neat as these purely algorithmic problems are, what does real life code look like in APL? What’s an HTTP request look like? How would I parse a JSON blob?
won’t it be a nightmare for real code?
No. Not generally.
In fact usually the opposite.
Iverson won a Turing Award on this very subject, and I recommend you read his “Notation as a Tool of Thought” paper for more.
I programmed in Common Lisp for about a decade, but these days I do a fair amount of programming in q/k (an APL-ish language that uses ASCII characters), and having good array support is a massive improvement in my code size and in how quickly I can deliver solutions. One of the applications I work on has a dozen or so developers on it at the moment.
What’s an HTTP request look like? How would I parse a JSON blob?
Pretty similar to other languages: we just use libraries or built-ins like everyone else.
To do an HTTP GET in q I write:
.Q.hg`:https://domain/url
And to parse JSON I say:
.j.k text
If you want to see what a parser looks like, I can point you at an example, but you will find it unsatisfying as a beginner since you will lack the ability to read it at this point.
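For comparison, a rough Python sketch of the same two operations; the HTTP helper below is illustrative and isn’t called here, to avoid a network dependency:

```python
import json
from urllib.request import urlopen  # stdlib HTTP client

def http_get(url):
    """Rough analogue of q's .Q.hg: fetch a URL and return the body as text."""
    with urlopen(url) as resp:  # e.g. http_get("https://domain/url")
        return resp.read().decode()

# Rough analogue of q's .j.k: one call parses a JSON blob into native data.
text = '{"symbol": "ABC", "prices": [1.5, 2.25, 3.0]}'
data = json.loads(text)
```

In both languages the answer is the same: HTTP and JSON are a library call each, not something the core notation needs to be good at.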
From the outside, the controversy over PEP 572 looks like out-of-control bikeshedding. I’m honestly surprised anybody on either side felt strongly enough for it to get to this point. AFAICT, it’s just syntactic sugar to streamline a few common idioms and avoid a common typo.
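For reference, here is a minimal sketch of the loop-and-read idiom that PEP 572’s assignment expressions streamline, using an in-memory stream as a stand-in for a real file (requires Python 3.8+):

```python
import io

stream = io.BytesIO(b"x" * 2500)  # stand-in for an open file

# The pre-572 idiom: assign, test, then re-assign at the bottom of the loop.
chunks_old = []
chunk = stream.read(1024)
while chunk:
    chunks_old.append(chunk)
    chunk = stream.read(1024)

# With an assignment expression, the duplicated read() collapses into one:
stream.seek(0)
chunks_new = []
while (chunk := stream.read(1024)):
    chunks_new.append(chunk)
```

Both loops produce the same chunks; the walrus version just removes the repeated `read()` call that people tend to get out of sync.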
Syntax is at the heart of Python’s value proposition, and assignment syntax is pretty core. In most languages I’d agree with you, but this is a language that prides itself on being “executable pseudocode” and is very commonly recommended as a teaching/first language for just that reason.
I realize that, but I just don’t see this new syntax as a big deal at all. It’s not mandatory. Existing code still works exactly like it always has. And the new extension isn’t difficult to understand or explain. There’s value in keeping the language small and simple, but that ship sailed a long time ago with Python.
But, at the same time, it’s just syntactic sugar and really isn’t necessary at all.
I honestly don’t care either way, I’m just surprised it blew up so much.
I only skimmed after the first few paragraphs, but I think each of his complaints applies equally to non-distributed version control.
I’m also skeptical of his assumptions. Building from source is a developer activity. A clueless (for lack of a better word) end user, like “Joe”, will be in over their head regardless of the VCS used. And generally speaking, for any non-trivial software, cloning from the VCS is likely to be the easiest part of the process.
Brutalism as an architectural style is disgusting and oppressive as shit (intentionally). I spent quite a bit of time in a brutalist building, and I felt like shit. Like, how did intentional hostility ever become a trend?
While the term certainly originates from concrete, the author is not trying to advocate making websites out of concrete (figuratively). I think the main point can be seen in the paragraph mentioning Truth to Materials. That is, don’t try to hide what the structure is made out of - and in the case of a website it is a hypertext document.
This website could be seen in that light. It is very minimally styled and operates exactly how the elements of the interface should (be expected to). The points of interaction are very clear.
The styling doesn’t even have to be minimal, but there is certainly a minimalism implied.
I respect your opinion, but I personally really enjoy brutalist architecture. I like the minimalism and utilitarian simplicity of the concrete exteriors, and I like how the style emphasizes the structure of the buildings.
I think if you added a splash of color it would make the environment much more enjoyable while still embracing the pragmatism and the seriousness.
It isn’t intentionally oppressive or hostile. It represents pragmatism, modernity, and moral seriousness. However, it doesn’t take a large logical jump to realize that pragmatism, modernity, and moral seriousness could feel oppressive. In the same way, to the architects who designed brutalist buildings, the indulgent designs of the 1930s and 1940s might have felt like a spit in the face to someone struggling to make ends meet. Neither was trying to hurt anyone, yet here we are.
I consider the 1930s designs (as seen in shows such as Poirot) to be rather elegant styling. But I also see the pragmatism that was prompted by the war shortages.
I am not a great fan of giant concrete structures that have no accommodation for natural lighting, but I also dislike the “glass monstrosities” that have been built after brutalist designs.
I find myself respecting the exterior of some of the brick buildings of the 19th century and possibly early 20th. Western University in London, Canada has many buildings in that style.
Some of the updates done to the Renaissance Center in Detroit have mitigated some of the problems with Brutalism, ironically with a lot of glass.
This might be true of Brutalism specifically, but (at least some) modern (“Modern”, “Post-modern”, etc.) architecture is deliberately hostile.
I found this article on that very topic pretty interesting.
In my home town, the public library and civic center (pool, gymnasium) are brutalist. It was really quite lovely; the library especially was extremely cozy on the inside, with big open spaces with tables and little nooks with comfortable chairs.
My pet theory is that brutalism is a style that looks good in black-and-white photographs at the expense of looking good in real life. So it was successful in a time period when architects were judged mainly on black-and-white photographs of their buildings.
Last week’s bike trip was even more awesome than I had hoped. We had a few mechanical issues, but we were prepared and everything worked out. We circled the Holy Cross Wilderness area, and the scenery was absolutely beautiful. Now I need to go through the roughly 800 pictures I took during the trip, and I’m hoping to get them posted to SmugMug later this week.
Besides the pictures, I need to catch up on the SWI Prolog course I’m taking. I read a little bit during the trip, but I’ve fallen far behind on the course work :-/
At my job I’m finishing up a couple small features and a couple bug fixes before our next release.
I don’t understand what I’m supposed to take away from this.
Don’t Emacs and LaTeX work on *BSD? And even AbiWord and OpenOffice? Is it a joke? Is it making a point about using simple tools?
I’m confused. This is a sarcastic man page, right? Like, people don’t actually consider it the standard text editor - do they?
I personally prefer a reasonable coding assignment over a normal interview. IMO it’s reasonable and makes sense to have people write code as part of the vetting process for a job writing code.
Is this really a thing? I’ve never seen a hiring process set up as a direct competition before.
The article talks specifically about coding challenges as the first step in the hiring process. It doesn’t argue against coding assignments after the hiring company has invested some resources itself.