Here’s my issue with this article. The author posits that most podcatchers will remove the ability to subscribe via a URL.
This makes no sense to me at all. There are many cases where people might want to listen to podcasts not offered through GOOG or APPL.
Every podcatcher I have access to still supports and explicitly provides options for this.
Op here,
I’m saying “I won’t be surprised if these apps gradually and silently remove this feature”. Of course, I can’t know this, but this is what I’m afraid of. And I don’t think it’s that crazy to imagine.
It’s a valid concern. I guess I feel like as long as there’s any kind of application ecosystem on a given device, there will always be a podcatcher that allows subscriptions via bog standard RSS URL.
I subscribe to a lot of RSS feeds, including podcasts, and there’s been a worrying trend over the last year or two where new podcasts don’t even provide a direct RSS/Atom feed.
You have to visit their site to download the mp3 manually like some kind of animal. Or worse still, they make some stupid JavaScript widget, expect you to use a 3rd-party app, or proudly say it’s on iTunes, which doesn’t expose the RSS feed; I had to write a scraper to pull the RSS feed out of the iTunes page myself.
Same with blogs too. So many blogs now don’t have a feed. You’re expected to go to the site to check for new content.
The slow demise of RSS/Atom is a really worrying situation for me, and very few people seem to care.
That is disappointing, and surprising given that there are companies like Feedly and Flipboard among others whose sole business relies on consuming RSS-ish feeds.
How was the author saying that? It sounded to me like they were saying it the other way around: if a podcaster publishes only over RSS on their own site, then users who are only on Apple won’t see it there by default.
I think you’re conflating two things.
There are two problems here:
Oh you said podcatcher. I read that as podcaster because I never heard of it called a podcatcher but that makes sense now.
What a delightful API.
No questions, but I’d really like to see a link to the source code along with the docs though.
Sounds like the C version of English As She Is Spoke
This type of naive translation is really common. A personal favourite
Nice to know about – I’ve got a few python scripts that’ll help clean up a bit.
(Note also the <<- here-doc variant, which is similarly convenient when writing shell scripts.)
Here docs/strings are awesome.
Another use case I like (beyond ascii art) is embedding test text file contents in a string along with the test itself. When you come back to it later, instead of the indirection of looking up the contents of an external file and cluttering up the file system, you have it right there with the test.
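For instance, a quick Python sketch of that pattern (the parser and CSV format here are just made up for illustration): the fixture text sits right next to the assertions that use it, with textwrap.dedent keeping the heredoc-style indentation readable.

```python
import csv
import io
import textwrap

def load_rows(fileobj):
    """Hypothetical code under test: read a small CSV into dicts."""
    return list(csv.DictReader(fileobj))

# The test input lives in the test file itself - no external fixture
# file to hunt down, and dedent() strips the cosmetic indentation.
SAMPLE = textwrap.dedent("""\
    name,score
    ada,95
    grace,97
    """)

def test_load_rows():
    rows = load_rows(io.StringIO(SAMPLE))
    assert rows == [
        {"name": "ada", "score": "95"},
        {"name": "grace", "score": "97"},
    ]

test_load_rows()
```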
Perl has __DATA__, which is placed at the end of the code in the file; everything that comes after it can be read through the DATA filehandle (I don’t know where Larry stole this idea from).
So a file looks like:
#!/usr/bin/env perl
print "here be dragons!\n";
print while <DATA>;    # prints everything below __DATA__
__DATA__
{
  "id": 42
}
and you can read that last bit by passing a filehandle along. Pretty nice to embed simple stuff in test files, for example.
Note that the <<- heredoc form only strips indentation made with tab characters; other kinds of whitespace are left in place.
I think I’ll recognise SICP code forever.
This just makes me think of when Facebook bought Oculus and everyone was like, “Well fuck, I really wanted on, but I guess not now.”
It’s interesting how, a decade past the days of Bill Gates as the Borg, even as our community has matured and we don’t hate on MS anywhere near as much as we used to, we still see this as something we don’t really want.
I agree, Microsoft is really not the company to be running Github. I wonder if it will stay strong or end up going the way of SourceForge.
Well, for me, it’s not the ancient past so much as the present. They patent troll the crap out of companies; that’s anti-innovation, and now they’ll control an innovation hotbed. The Windows 8 UI debacle and the ads they put on paid services like Live make me wary of UI-facing changes they might make. Then there’s the surveillance they build into their products, mostly for advertisers but maybe governments too. They do this in paid products, which aren’t the ones you’d expect to sell your info.
So, the company’s current actions show they suck in a lot of ways which include screwing over their customers and suing innovators. Bad fit for Github.
Not even the ancient past:
That’s just off the top of my head for Windows 10 as of now.
Spying on your activities through telemetry
Telemetry seems to be getting built into everything now as well: Visual Studio and Code, SQL Server, the OS (backported into Win 7 and 8 too). Not sure about Office (offline), but it can’t be far behind if it’s not already in there.
Installing stuff onto your computer without your consent like Candy Crush
Is the crapware issue really on Microsoft, or OEMs like Dell and HP?
No, it’s really on Microsoft. It was in an update - https://community.spiceworks.com/topic/1369179-how-can-i-prevent-windows-10-from-automatically-installing-sponsored-apps
It doesn’t matter to me if it’s Microsoft or not. If Microsoft hadn’t acquired Github, then some other megacorporation probably would have. It just so happens that Microsoft is trying to mind its manners after getting pimp-slapped by Google, Apple, and Facebook, but I’m not going to trust them just because they’re currently the underdog.
The problem isn’t Microsoft. The problem is the way we allow corporations to operate in the US. Every time one corporation acquires another, the acquiring corporation becomes bigger and more powerful.
This might seem quaint, but I don’t think that corporations as large as Microsoft, Facebook, Apple, AT&T, Alphabet, Comcast, Samsung, Disney, etc. should be permitted to exist. I think they’re inherently inimical to free markets and to democracy. I think that when a corporation’s market capitalization exceeds a certain threshold, it should either be regulated as a public utility, broken up, or dissolved.
Of course, plenty more great material from the root of the website
If you try to build it yourself, their build script also appears to download and inject additional code into the built artifact from marketplace.visualstudio.com during the build. I opened one two three issues.
My guess is that these practices that disrespect or completely ignore users’ privacy, like requiring an internet connection (whether to build or just to use a piece of software, even the OS itself), are so deeply baked into Microsoft’s culture now that there’s no going back. It’s just a given that they assume they’re entitled to grab and record whatever information they want from your machine in order to let you use their software.
It’s not just Microsoft, many companies seem to be jumping on the same or similar bandwagon.
The same thing happens when you try compiling coreclr (called from here), there’s no way to properly bootstrap it. (And of course, there’s some “telemetry” in there as well, enabled by default during the build, before you have a chance to turn it off.)
The same thing happens when you try compiling coreclr
Wow, now I wonder whether this pattern happens in other Microsoft “open source” projects.
And you gotta love this comment:
# curl has HTTPS CA trust-issues less often than wget, so lets try that first.
Amazing. Well written, well researched and concise.
Parsing is such an interesting problem - it arises naturally almost anywhere you find text.
I’ve read the phrase “parsing is a solved problem” so often in discussions about it but this article puts that in its place. Thoroughly.
Parsing is a solved problem in the sense that if you spend enough time working on a grammar it is often possible to produce something that Bison can handle. I once spent several months trying to get the grammar from the 1991(?) SQL standard into a form that Bison could handle without too many complaints.
Bison’s GLR option helps a lot, but at the cost of sometimes getting runtime ambiguity errors, something that non-GLR guarantees will not happen.
So the ‘newer’ parsing techniques don’t require so much manual intervention to get them into a form that the algorithm can handle.
I think that usually means for most languages and problems people are applying it to. If so, then it probably is a solved problem, given that we have parsers for all of them, plus generators for quite a lot of those kinds of parsers. Just look at what Semantic Designs does with GLR. There are FOSS tools that try to be similarly thorough on parsing, term rewriting, metaprogramming, etc. At some point it seemed like we should be done with parsing-algorithm research for most scenarios and instead polish and optimize implementations of tools like that, or put time into areas that aren’t parsing at all, much of which is nowhere near as solved.
I still say it’s a solved problem for most practical uses if it’s something machine-readable like a data format or programming language.
Note: It is as awesome an article as you all say, though. I loved it.
A solved problem, but for a subset of constrained grammars and languages. Still people say it without any qualification as if there’s nothing more to be said or understood.
I take your points though and there are definitely other areas that need work.
I think my point is just that I’m glad people are still putting effort and research into “solved” problems like this. There’s no telling where breakthroughs in one area can lead, and we’re still limited in how we construct and represent language grammars.
A solved problem, but for a subset of constrained grammars and languages. Still people say it without any qualification as if there’s nothing more to be said or understood.
Yeah, they should definitely be clear on what’s solved and isn’t. It’s inaccurate if they don’t.
I don’t think I’ve ever seen a talk by Guy Steele that wasn’t compelling and this is a true classic. He’s a great thinker and a really talented presenter to boot.
*sigh* continuations are one of those things I have trouble with.
The conceptual idea is easy enough to grasp, but I have trouble reasoning about them in code - especially about when they would be a useful choice.
If anyone has links to some good material about continuations with real world practical use cases - not just the simple arithmetic expressions, I’d really appreciate them.
I think that they are useful as a language-building tool, but not a sensible choice for your everyday programming. You (or preferably the base library authors) should hide them in more opinionated abstractions, much like we hide goto behind various flow-control mechanisms.
It’s simple to implement cooperative threading with a proper IO scheduler using continuations; you can also implement generators or an exception-handling system.
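As a rough illustration of the “hide them in abstractions” point, here’s a tiny Python sketch that uses generators as a stand-in for one-shot continuations: each yield captures “the rest of the task”, and a round-robin scheduler resumes those suspended continuations. All names are invented for the example; a real scheduler would also multiplex IO.

```python
from collections import deque

def scheduler(tasks):
    """Round-robin over generator-based tasks. Each yield suspends the
    task, capturing the rest of its computation much like a one-shot
    continuation; the scheduler later resumes it with next()."""
    ready = deque(tasks)
    log = []
    while ready:
        task = ready.popleft()
        try:
            log.append(next(task))   # run the task until its next yield
            ready.append(task)       # reschedule its continuation
        except StopIteration:
            pass                     # task finished, drop it
    return log

def worker(name, steps):
    """A toy task that does `steps` units of work, yielding between them."""
    for i in range(steps):
        yield f"{name}:{i}"

print(scheduler([worker("a", 2), worker("b", 3)]))
# → ['a:0', 'b:0', 'a:1', 'b:1', 'b:2']
```

The point is that callers only ever see plain function calls and yields; the continuation machinery stays inside the scheduler, just as the parent comment suggests.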
I had trouble with them as well and ended up building them into my statement-oriented language to understand them better. Perhaps that works better for you, like it did for me?
I agree, but that’s the way to get the job done at the moment. I’m not sure what security measures they have in store…
Additionally, a lot of tutorials I found access Jupyter Notebooks via an HTTP connection, so I’m thinking of writing a blog post about securely accessing Jupyter Notebooks…
Very closely related are the book Programming and Programming Languages and Brown’s Programming Languages course CS173 - course archives
Deliberate Git by Stephen Ball, recorded at Steel City Ruby 2013. To this day, I have my team sit down together and rewatch it every time we onboard someone new. It’s a fantastic level-set of commit message etiquette and purpose plus an overview of history tools.
I got more from Steve Smith’s talk - Knowledge is Power: Getting out of trouble by understanding Git - than any other git video I’ve ever seen.
Admirable effort, but punts badly on collision detection (in part 2 if one follows the link). One really needs at least some basic physics engine in even the simplest platformer. Hopefully that’ll be in a future post (box2d?).
Author of the article here. The future post won’t use a physics engine. Physics engines are bad for 2d platformers which aren’t necessarily physics based. I mean if the goal is to make a physics based game (think Angry Birds) then sure, but if the goal is snappy controls like Super Mario, then you’ll fight the engine more than it helps imho. Sliding platforms, elevators and similar things are quite the pain for 2d physics engines. Can’t speak for 3d though.
The next post will be using box colliders with raycasts. Once you have a 2d raycast (might even be enough to just have horizontal/vertical raycasts) you can do even stuff like slopes fairly easily, but the 3rd part most likely won’t get into that. I’ll probably cut it off when gravity/jumping works with static platforms.
Any tips are welcome though.
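For anyone trying to picture the raycast approach, here’s a rough Python sketch of a single downward raycast against axis-aligned platforms. The platform representation and numbers are invented for illustration (this isn’t the article’s code), and a real platformer would also need horizontal casts.

```python
def cast_down(x, y, max_dist, platforms):
    """Cast a ray straight down from (x, y), with y growing downward.
    Each platform is (left, right, top). Return the distance to the
    nearest platform top hit within max_dist, or None on a miss."""
    best = None
    for left, right, top in platforms:
        # Hit if the ray's x lies within the platform horizontally and
        # the platform's top edge is below the start point, in range.
        if left <= x <= right and y <= top <= y + max_dist:
            d = top - y
            if best is None or d < best:
                best = d
    return best

platforms = [(0, 10, 5), (4, 8, 3)]
print(cast_down(5, 0, 10, platforms))   # nearest top is 3 units down
print(cast_down(20, 0, 10, platforms))  # no platform under x=20: None
```

With one of these per bottom corner of a box collider, gravity becomes roughly “fall by min(gravity_step, cast_down(...))”, which is what keeps controls snappy compared to a general physics engine.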
edit: Just to react to the comment :P
but punts badly on collision detection (in part 2 if one follows the link)
You’re right, I’m not really happy with the state it’s at. Initially I thought I’d make it one big article, but keeping all the code in sync ended up being a nightmare, which is why I decided to split it up, cut it off at a point where something works, and do the next part in a more concise manner.
Regarding keeping the code organised, for a multi-part article, why not create a git or mercurial repository - on github, bitbucket or gitlab for example. The code for each article can be in a separate branch which you can link to directly. You’d still need to manage changes made to an earlier stage but that’s pretty straightforward.
That’s not a bad idea, I thought about having a Gist of the finished code in each article. But the issue I was hinting at is with code snippets within a single article, not spanning multiple ones.
My approach to these articles is to have incremental samples with JSFiddle along the way, but all of those are separate snippets, and if I decide to change one thing I have to rewrite a lot of the article. I mean, the solution to this is easy: don’t have 10 copies of the code in each post. But I feel like that just makes it more difficult to follow along. I guess I should figure out the whole code before I start writing, to minimize the changes.
The next post will be using box colliders with raycasts. Once you have a 2d raycast (might even be enough to just have horizontal/vertical raycasts) you can do even stuff like slopes fairly easily, but the 3rd part most likely won’t get into that. I’ll probably cut it off when gravity/jumping works with static platforms.
This sounds great, I’d love to read that. It would be a valuable addition to the material already out there.
I think the method is describing what’s now called deliberate practice.
The better books will have relevant exercises, different from the material in the book, that test your understanding and stretch it a little beyond your current ability (otherwise known as deliberate practice).
I mean people should really do that for themselves, but often you need some domain expertise and guidance to steer you in the right direction. The best teachers I’ve known do this, sometimes naturally, sometimes through experiences and sometimes through deliberate practice.
The Benjamin Franklin method is a subset of deliberate practice. K. Anders Ericsson, the author mentioned in the post, is actually the father of deliberate practice.
One of the components of training is rapid feedback. The Benjamin Franklin method specifically solves the problem of how to get feedback in the absence of a teacher.
Thanks. I wasn’t aware of K. Anders Ericsson. I learned about the ideas and techniques from Daniel Coyle’s books, The Talent Code and The Little Book of Talent. The wonders of myelin!
Interesting point about rapid feedback - it’s often said that unit tests should run fast so that you can try changes quickly, but I think there’s a large psychological component to that.
William Byrd’s The Most Beautiful Program Ever Written
It starts off slow, but engaging, and when it pays off it pays off hard.
Thanks for posting this talk.
I’ve noticed lots of extremely smart programming language researchers get excited about Kanren or mini-Kanren and I’m not really smart enough to see why. It’s always been presented somewhat opaquely. This talk has blown my mind. One of those rare, true satori moments. Gonna spend some serious time re-reading The Reasoned Schemer to kick off 2018.
Aye. miniKanren didn’t click with me until I saw this talk, and this is some amazing stuff.
I was particularly excited about the Barliman demo and the possibility of future optimizations.
I actually used infinite lists in my Python solutions this year. I can’t remember which one, but it was essentially “find the smallest number with some property”, which you can do like
next(i for i in itertools.count(1) if has_property(i))
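Fleshed out into a runnable sketch, since I can’t remember the actual puzzle: the property below is an arbitrary stand-in (smallest positive integer whose square ends in 444), not the real AoC condition.

```python
import itertools

def has_property(i):
    """Stand-in predicate: does i squared end in 444?"""
    return i * i % 1000 == 444

# Lazily walk 1, 2, 3, ... and stop at the first match; because count()
# is an infinite iterator, next() only does as much work as needed.
smallest = next(i for i in itertools.count(1) if has_property(i))
print(smallest)  # → 38 (38² = 1444)
```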
I once gave a whole talk that was (in essence) about using Haskell-style infinite lists in Python:
https://docs.google.com/presentation/d/1eI60SL3UxtWfr9ktrv48-pcIkk4S7JiDmeXGCyyGhCs/edit?usp=sharing
Not to be all self-promoting, but it’s a fun talk.
–
Anyway, good on you for doing it all in Haskell, I always aspire to try that, and then I always end up just doing it in Python.
I think the trick is do it with something you know well to get a feel for solving the problem, then try it in X.
Generators are great though - I used Racket’s generator support to implement the infinite spiral of day 3. I should look into doing it using the lazy language to just use regular list operations but lazily. I tried using streams but the performance was much worse - the garbage collector got overwhelmed which probably means I’m not using them correctly.
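For comparison, here’s a Python generator for that kind of infinite spiral; this is my own reconstruction of the day 3 walk (right 1, up 1, left 2, down 2, right 3, …), not a translation of the Racket code.

```python
import itertools

def spiral():
    """Yield (x, y) positions walking an outward square spiral:
    right 1, up 1, left 2, down 2, right 3, up 3, and so on."""
    x = y = 0
    yield (x, y)
    step = 1
    while True:
        for _ in range(step):   # right
            x += 1
            yield (x, y)
        for _ in range(step):   # up
            y += 1
            yield (x, y)
        step += 1
        for _ in range(step):   # left
            x -= 1
            yield (x, y)
        for _ in range(step):   # down
            y -= 1
            yield (x, y)
        step += 1

# Infinite, so slice off only what you need.
print(list(itertools.islice(spiral(), 10)))
# starts (0, 0), (1, 0), (1, 1), (0, 1), (-1, 1), ...
```

Because it’s a plain generator there’s no intermediate list to collect, which sidesteps the garbage-collector pressure you hit with streams.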
I suggested a leaderboard and it looks as if there’s some interest.
I have a leaderboard with the code 37542-10cbb3de. Feel free to join. And good luck whether you choose to or not!
Edit: thanks @kghose for reminding me to post some instructions…
For noobs like me: you need to go to http://adventofcode.com/2017/leaderboard/private (after logging in) and use the code there.
Thanks - I’ve joined. I don’t think I’ve ever got in the top 100 for any of the challenges so a private leaderboard feels much more relaxed. I’m really not good at speed coding which isn’t a good combination with being hyper competitive :(
Welcome!
There’s an ongoing meta-discussion about the leaderboard, coupled with the release time of midnight EST. As the creator has mentioned, the envisioned scale was in the order of hundreds of competitors, now it’s up to 100K! The downsides of success.
In the end, I find it more enjoyable competing against yourself. There will always be someone more dedicated and faster, so the idea is to make the best solution you can make. My overarching goal is to solve every problem myself without resorting to hints from other users.
I think it’s worth linking to the whole course too.
It’s a shame the slide show doesn’t respond to mouse clicks below the main text. The arrow keys work, but I spent a minute or so clicking like an idiot before the next slide showed up.
Yeah, the lecture notes on the GHC implementation seem a bit easier to digest than the slides.